arXiv:1711.00549v4 [cs.CL] 2 Mar 2018
Just ASK:
Building an Architecture for Extensible Self-Service
Spoken Language Understanding
Anjishnu Kumar
Amazon.com
[email protected]
Arpit Gupta
Amazon.com
[email protected]
Julian Chan
Amazon.com
[email protected]
Sam Tucker
Amazon.com
[email protected]
Bjorn Hoffmeister
Amazon.com
[email protected]
Markus Dreyer
Amazon.com
[email protected]
Stanislav Peshterliev
Amazon.com
[email protected]
Ariya Rastrow
Amazon.com
[email protected]
Ankur Gandhe
Amazon.com
[email protected]
Christian Monson
Amazon.com
[email protected]
Denis Filimonov
Amazon.com
[email protected]
Agnika Kumar
Amazon.com
[email protected]
Abstract
This paper presents the design of the machine learning architecture that underlies
the Alexa Skills Kit (ASK), a large-scale Spoken Language Understanding (SLU)
Software Development Kit (SDK) that enables developers to extend the capabilities of Amazon’s virtual assistant, Alexa. At Amazon, the infrastructure powers
over 25,000 skills deployed through the ASK, as well as AWS’s Amazon Lex SLU
Service. The ASK emphasizes flexibility, predictability and a rapid iteration cycle for third party developers. It imposes inductive biases that allow it to learn
robust SLU models from extremely small and sparse datasets and, in doing so, removes significant barriers to entry for software developers and dialogue systems
researchers.
1 Introduction
Amazon’s Alexa is a popular digital assistant that was not designed around a smart phone, but rather
as a service for ambient computing [1] in the home. Due to the intuitiveness of the Voice User
Interface (VUI), the demand from software engineering teams for voice controlled features far outstripped the pace at which the language experts designing Alexa’s SLU system could accommodate
them. Removing this dependency posed several challenges; the first was the need to build a self-service
SLU architecture that could transform natural language queries into API calls, while ensuring that the architecture played well with Alexa's existing user experience. This paper describes the
design and creation of this service.
The Alexa Skills Kit (ASK) was initially designed to empower internal Amazon developers to prototype new features independently of Alexa’s core Automatic Speech Recognition (ASR) and Natural
Language Understanding (NLU) systems, and then extended to give third-party developers the same
capabilities. In order to make Alexa a popular service for ambient computing, it was important
to enable external developers to build sophisticated experiences for Alexa similar to the common
operating systems of the smartphone era, Android and iOS.
An Alexa skill is an SLU subsystem that has two customized components corresponding to the SLU
stack - ASR and NLU. When the user speaks to a particular skill, the Alexa service handles the
conversion of speech into text, performs intent classification, and slot-filling according to a schema
defined by the skill developer. Besides writing the skill definition, the developer is also responsible
for creating a web service, which interacts with JSON requests sent by Alexa. Given the structured
request from Alexa, the developer's web service can return text to be synthesized by Alexa's text-to-speech engine, an audio stream to be played back, with an optional graphical representation to be
displayed in case the Alexa endpoint supports the visual modality. Figure 1 illustrates the interaction
flow of a skill. A common practice for Alexa skill developers is to use a serverless endpoint such as
AWS’s Lambda product [2].
Figure 1: Interaction flow of an ASK skill
As of November 2017, there are over 25,000 Alexa Skills that have been built and deployed to
customers via ASK. The SLU architecture described in this paper is also at the foundation of the
Amazon Lex SLU service [3] and applications such as Amazon Connect, a hybrid human/AI customer service product which uses Amazon Lex to power virtual agents. In the year following the
launch of the ASK, competing SDKs following similar design principles have also been launched
by Google and Microsoft, namely Actions on Google and the Cortana Skills Kit.
ASK allows researchers to experiment on conversation and dialogue systems without the additional
overhead of maintaining their own ASR and NLU systems [4, 5, 6].
2 Related Work
Prior toolkits allowed users to build ASR and NLU models individually. For example, Kaldi [7]
and HTK [8] are popular toolkits for speech recognition. Stanford’s Core NLP [9] offers a suite
of NLP libraries. These toolkits allow a lot of flexibility to the developer. For example, CoreNLP
gives complete independence in selecting the libraries to use in the language understanding task.
Similarly, Kaldi offers pre-built recipes which can be used as is or modified according to the needs of the
developer. However, this flexibility also poses challenges for a developer who is not well versed
in the speech and NLP literature and ML methodology. These toolkits did not provide speech and
language understanding capabilities in an open and self-service manner. At the other end of the
spectrum, standards such as VoiceXML have existed since the early 2000s. They support clearly
defined interfaces for software developers, but have supported only rigid command structures for
end users. The need to develop a set of portable tools to address domain specificity with small
datasets has long been considered a bottleneck for the large scale deployment of spoken language
technology [10].
In parallel to our work, SpeakToIt (now DialogFlow) launched Api.ai and Microsoft launched
LUIS.ai [11], both of which provided self-service SDKs to third party developers to voice enable
their applications in isolation. In our work, we attempt to close the gap by offering ASR and NLU
models that work together out of the box with limited training samples and do not require expertise in either field, as well as the capability to rapidly deploy these systems directly to a large and
growing audience by extending a widely available virtual assistant.
3 Design Considerations
As an SDK for a voice assistant, ASK's key value addition over existing frameworks is Alexa's speech
recognition and language understanding technology. Since such a framework
had never been built for a large user base before, we had to look beyond the software design tenets
that were used to build largely monolithic high performance SLU systems in the past. Successful
modern software development frameworks such as Amazon Web Services, the Python programming
language, and Linux, have large developer communities and could provide relevant design tenets.
We describe some of these tenets below.
The system needs to offer modular and flexible building blocks. To enable maximum flexibility, we
chose to allow developers to specify commands they would want to support rather than limiting them
to a set of commands or intents designed by Amazon. We chose to implement decoupled, modular
subsystems that could be updated independently. We believe that decoupled systems are commonly
studied in software engineering, but remain an underexplored area in existing machine learning
research. In recent advances in deep learning research, there is a trend towards training complex
end-to-end (E2E) models directly from data [12, 13, 14, 15]. These models offer improved performance and are sometimes easier to maintain by reducing the number of components as compared to
decomposed or cascaded architectures. However, joint modeling and end-to-end modeling can also
introduce dependencies between constituent systems, and make it harder to deploy improvements
independently [16].
A third party must not be allowed to degrade the first party user experience. The danger of customer
experience degradation was mitigated by sandboxing skills and allowing them to be used only once
enabled, either explicitly or implicitly. This design choice resulted in rapid adoption by developers
since they were no longer in direct competition with each other or with the first party system, but
made skill access more difficult for a customer by impairing the naturalness of the interaction.
The discoverable surface area for voice is limited, so the framework must prevent cybersquatting.
To prevent early developers from taking up valuable voice real-estate, a decision was made to allow
overlapping skill names, letting developers choose any name they wish as long as it does not
reflect a brand protected by trademark or copyright that they do not own.
Namespace disambiguation would be performed by the user at enablement time. Other frameworks
chose to elect winners, by having a more rigorous vetting process and by awarding valuable real
estate to chosen vendors, which results in a more consistent user experience early on, but may limit
innovation in the long run.
The framework should allow fast iteration to support a rapid VUI development cycle. Since a skill
developer did not have any usage data while developing a skill, enabling a rapid iteration cycle
was important to enable them to quickly address issues. This requirement meant that SLU models
needed to be trained in minutes, not hours, and created a tradeoff with the demand for highly accurate
models.
The framework must remain relevant in the face of rapid advances in machine learning. Since the
system used machine learning extensively, the state of the art models implementing its underlying
ASR and NLU technologies were changing rapidly. The API contract thus had to be independent of
our initial model choice. We chose a lightweight shallow semantic parsing formalism composed
of just intents and slots. This formalism is analogous to an API call, with the intent representing the
function and slots representing parameters to that function. The choice of this simple interface for
communicating between the language understanding system and software developers meant that the
underlying systems could be updated to use arbitrarily complex models and representations, which
could then be compiled back into the API call representation. Since model architectures become
outdated quickly, it was necessary to build up a repertoire of techniques that treat model choices
as black boxes, in order to enable the rapid deployment of new model architectures without having
cascading effects on other functionality.
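To make the intent-and-slots contract concrete, the sketch below is an illustration only (the handler registry and function names are hypothetical, not part of ASK): it shows how a parsed semantic frame maps directly onto an ordinary function call, which is what allows the underlying parser to change without affecting developer-facing code.

from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class SemanticFrame:
    intent: str                                           # analogous to a function name
    slots: Dict[str, str] = field(default_factory=dict)   # analogous to keyword arguments

HANDLERS: Dict[str, Callable[..., str]] = {}              # hypothetical handler registry

def handler(intent_name):
    def register(fn):
        HANDLERS[intent_name] = fn
        return fn
    return register

@handler("GetHoroscope")
def get_horoscope(Sign: str = None, Date: str = "today") -> str:
    return f"Horoscope for {Sign} on {Date}"

def dispatch(frame: SemanticFrame) -> str:
    # The interface stays fixed even if the parser behind it is swapped out entirely.
    return HANDLERS[frame.intent](**frame.slots)

print(dispatch(SemanticFrame("GetHoroscope", {"Sign": "leo"})))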
Since a developer could only provide a limited number of samples, any model trained on these
utterances was unlikely to be of high quality. It thus became important to build components to
leverage transfer learning and low resource learning techniques while remaining primarily model
agnostic.
This can be done in several ways. Firstly, data-based transfer, that is, the curation of data resources that can be used generically. Secondly, representation-based transfer, the creation of a label
space that encapsulates prior knowledge about how different entities relate to each other. Thirdly,
feature-based transfer such as in [17, 18] or aided by unsupervised representation learning techniques [19, 20]. Investments in these transfer learning strategies are likely to remain relevant even
when the state of the art model architecture changes. Once gains from generic techniques start to
stagnate, it is imperative to invest in model-based transfer learning strategies that exploit the specific
characteristics of the machine-learning models being used [21, 22, 23, 24]. In order to develop and
deploy these strategies in a fast-changing research landscape, it was critical to develop an infrastructure designed for the rapid deployment of research; we discuss these tradeoffs in subsequent
sections.
4 Customer Experience
Customers interact with skills primarily in three different ways. Modal interactions are explicit
invocations where the customer first invokes a skill or service, e.g. "Open Twitter", then issues a
command, "Search for trending tweets". One-shot invocation targets a skill or service and issues a
command simultaneously, for example, "What is trending on Twitter".
Both the modal and one-shot invocation modalities are supported by a combination of deterministic
systems and statistical shallow parsing models. The one-shot modality is only available for the set
of skills a customer has enabled previously, in order to prevent unintended collisions. The explicit
launch functionality, for example “Open the Twitter skill” attempts to disambiguate skills and do an
implicit enablement if there is no ambiguity.
A recently launched third modality is skill suggestion. This allows for customers to be suggested
a skill and routed to it when they issue a command that cannot be serviced by Alexa’s first party
systems but can likely be handled by a skill. This can be done by performing statistical matching
between user utterances and relevant skills by using techniques derived from information retrieval
or using semantic matching performed by deep neural networks [25].
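As a hedged illustration of the retrieval-style matching mentioned above (the embedding source and threshold are stand-ins, not the production system), skill suggestion can be sketched as nearest-neighbour search over vector representations of utterances and skills.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def suggest_skill(utterance_vec, skill_vecs, threshold=0.5):
    """Return the best-matching skill id, or None if nothing clears the threshold."""
    scored = [(cosine(utterance_vec, v), skill_id) for skill_id, v in skill_vecs.items()]
    score, skill_id = max(scored)
    return skill_id if score >= threshold else None

# Toy usage: random vectors stand in for learned utterance and skill embeddings.
rng = np.random.default_rng(0)
skills = {"twitter_skill": rng.normal(size=32), "horoscope_skill": rng.normal(size=32)}
print(suggest_skill(rng.normal(size=32), skills))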
Once a skill is invoked, a skill context is established, effectively sandboxing the interaction and
preventing a customer from accessing any of Alexa’s native capabilities until the skill exits, either
gracefully after task completion or because the customer ceases to interact with the device.
5 BlueFlow: A Flexible Model Building Infrastructure
Our modeling infrastructure is developed using BlueFlow, a Python framework intended to accelerate the pace at which ML projects could be brought from research to production by using a shared
codebase. We developed an internal design paradigm called CLI-to-Server. The paradigm ensures
that every API call defined by BlueFlow is also executable locally via a programmatically generated
Command Line Interface (CLI), so that it is possible to reproduce the production stack
during experimentation. Using autogenerated CLIs for the APIs adds an extra layer of abstraction
to the codebase but helps mitigate a common machine learning anti-pattern in which production
services and research tooling are implemented separately and go out of sync after a period of time.
BlueFlow is designed to enforce a clean separation between operational concerns and system logic
using the constructs of artifacts, components, activities and recipes. In a manner similar to some
open source deep learning libraries [26], BlueFlow uses python code constructs as a declarative
language to define a symbolic computational graph for data flow management. This computational
graph is a directed acyclic graph (DAG) and can be serialized, then optimized and executed by a
compatible executor locally or in a distributed manner. Refer to Appendix A for details on the
BlueFlow architecture and its syntax.
BlueFlow runs model building in production, but can also run it ad-hoc on a research cluster for
conducting experiments. Having both research and production services use the same system allows
us to quickly deploy new modeling techniques to production without spending significant time on
productizing throwaway research code. The use of Python allows for rapid experimentation and
a concise codebase, with an option to optimize bottlenecks by using C++ bindings. However,
Python’s type system increases reliance on unit test coverage. Along with the BlueFlow task execution framework, we extensively leverage technologies developed by AWS. All static artifacts needed
for a model build are stored in Amazon’s Simple Storage Service (S3) [27] and all artifacts that are
loaded at runtime are stored in Amazon’s highly performant key value store DynamoDB [28].
6 Skill Definition and Query Representation
To create a new skill for Alexa, a developer begins by defining an interaction model, which includes an
intent schema, slot types, sample utterances corresponding to a simple grammar, and an invocation phrase.
A wide variety of utilities exist to help developers define the interaction model for an Alexa Skill, including but not limited to the Skill Builder, a rich web interface with hints, completions, and a testing
console; AWS's Lex UI, which has an 'export-as-skill' option; and programmatic skill management
via the command line using the Alexa Skill Management API, which also enables third party developers to start building skill development tooling.
ASK gives developers full freedom when defining a voice experience; however, a skill developer
cannot realistically be expected to provide large and representative data samples that reflect real
world usage. While this can be addressed in part by using transfer learning techniques, we also
directly expose concepts from Alexa’s internal domains in the form of builtin intents and builtin slot
types for the developer to pick and choose from.
Figure 2: An utterance represented in AlexaMRL
6.1 The Alexa Meaning Representation Language
In this section we briefly describe the Alexa Meaning Representation Language (AlexaMRL) [29],
which is necessary to understand builtin intents. The AlexaMRL is a decomposable semantic parsing
formalism for defining spoken language commands, which allows for concepts to be efficiently
reused across different SLU domains.
AlexaMRL is composed of Actions, Entities, and Properties. Actions are a higher order abstraction
than the intents that are typically used in natural language tasks, and can be viewed as a templating
system for intents. An Entity from the Alexa Ontology is analogous to a slot from an ASK developer’s perspective. Properties are completely transparent to a skill developer, but under the hood,
they tie Actions and Entities together by making certain Entity Types compatible with certain
Actions. Entities possess Properties similar to object attributes in Object Oriented Programming. For
example, LocalBusiness is an Entity Type. It has Properties such as business hours, address, phone
number, etc. Actions require Properties, i.e. they require Entities to possess certain Properties in
order to be compatible; a CallAction cannot be completed unless it is operating on an Entity with a
Callable Property.
An intent represents an action the user wants to take. This could be searching for information, or
playing a media object. For example, a FindPlaceIntent intent would internally route to an API call
for a location search engine. The reason an Action is more abstract than an NLU intent is that an
intent needs to be aware of the surface forms (string representations) of the Entities it operates on,
but an Action does not. Instead, an Action operates on abstract interfaces that fulfill certain criteria.
For example, an AddAction requires an object Entity which can be added and a target Entity which
can be added to. This is specified using the required Properties of the Action. AddAction has a
required property targetCollection which identifies the type of list to add to and an object Property
that marks the type of the object to be added.
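A minimal sketch of the compatibility check implied by required Properties is given below; the class and property names are illustrative only, since AlexaMRL itself is internal and not a public API.

from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class EntityType:
    name: str
    properties: FrozenSet[str]

@dataclass(frozen=True)
class Action:
    name: str
    required_properties: FrozenSet[str]

    def compatible_with(self, entity: EntityType) -> bool:
        # An Action can operate on an Entity only if the Entity exposes
        # every Property the Action requires.
        return self.required_properties <= entity.properties

local_business = EntityType("LocalBusiness",
                            frozenset({"businessHours", "address", "telephone", "callable"}))
call_action = Action("CallAction", frozenset({"callable"}))
add_action = Action("AddAction", frozenset({"targetCollection", "object"}))

print(call_action.compatible_with(local_business))  # True: LocalBusiness has a Callable Property
print(add_action.compatible_with(local_business))   # False: no list-like Properties to add to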
6.2 Exposing Query Representation to Skills
Thus developers have an option to compile builtin intents by filling an Action template with compatible Entity Types, which allows us to reuse data from Alexa’s ontology via AlexaMRL. The skills that
use compiled intents use shared deterministic models and efficiently reuse internal data resources
for their stochastic models. We will discuss statistical modeling in greater detail in Section 7.
Developers that choose to design their own intents with Alexa’s builtin slot types, instead of Action
templates, do not benefit from AlexaMRL’s ability to automatically derive semantic roles, and cannot
fully reuse shared deterministic models. In this case, only the shared data for an Entity Type can
be reused, and the semantic roles of the Entities in an utterance must be derived from developer
provided samples.
7 Statistical Modelling
In order for Alexa to turn user requests into structured representations, we need to build models
based on the developer’s definition for both the ASR and NLU systems. The first step to this process
is to efficiently represent the grammars provided by the developer.
7.1 Weighted Finite State Transducers
Weighted Finite-State Transducers (wFST) provide an easy way to represent data under a weighted
grammar. A path through the FST encodes an input string (or sequence) into an output string. We
generate FSTs for both ASR and NLU. ASR uses a skill specific FST to decode utterances into
sentences defined by the developer, whereas NLU uses an FST to recognize intent and slot values.
Both can be generated from the developer-provided sample utterances, intents, and slots. Most of
the FST-generation code is shared between the ASR and NLU. Fig. 3 shows an FST that recognizes
the GetHoroscope intent along with its Date slot from an utterance.
This representation is powerful because we can impose arbitrary distributional priors on the grammar. We infer a distribution over intents and slots from the sample utterances. As the data in sample
utterances is often imbalanced, we follow the principle of maximum entropy to impose uniform priors over intents and then slots on the grammar. This is a configurable switch that can be turned off
when the wFST is generated from the usage pattern of a skill.
The weighted FST representation can be used directly by common sequence models designed to
work with text such as CRFs [30] and LSTMs [31] by sampling utterances according to the distributional priors and feeding them into the respective models as training data via a data recombination
technique similar to [32]. One can also train directly on the lattice itself [33].
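A minimal sketch of this sampling step is shown below, under simplifying assumptions: a plain Python dictionary stands in for the wFST, and slot values are single tokens. The real system samples paths from the transducer itself.

import random

samples = {
    "GetHoroscope": [
        "what is the horoscope for {Sign}",
        "what will the horoscope for {Sign} be on {Date}",
        "get me my horoscope",
    ],
}
slot_values = {"Sign": ["aries", "leo", "pisces"], "Date": ["today", "tomorrow"]}

def sample_utterance(rng=random):
    intent = rng.choice(list(samples))        # uniform prior over intents
    template = rng.choice(samples[intent])    # then uniform over the intent's templates
    tokens, labels = [], []
    for tok in template.split():
        if tok.startswith("{") and tok.endswith("}"):
            slot = tok[1:-1]
            tokens.append(rng.choice(slot_values[slot]))  # uniform over slot values
            labels.append("B-" + slot)
        else:
            tokens.append(tok)
            labels.append("O")
    return intent, tokens, labels

print(sample_utterance())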
Figure 3: wFST states are represented by circles and marked with their unique number. The input
label i, the output label o, and the weight w of a transition are marked on the arcs.
7.2 Automatic Speech Recognition
The ASR system uses weighted finite-state transducers (wFST) to represent the Hidden Markov
Model (HMM) transducer (H), the phone context-dependency transducer (C), the pronunciation lexicon
(L), and the word-level grammar (G). These FSTs are composed to form an end-to-end recognition
transducer. We refer the reader to Mohri et al. [34] for details on these transducers and their role in
ASR. The goal of the ASR recipe is to generate a word-level grammar (G) that guides the system
to recognize utterances directed to the skill [35]. The (G) decoder for ASK is a hybrid decoder that
uses a skill-specific grammar as well as a main n-gram based Statistical Language Model (SLM)
that shares data with other skills.
Continuous improvements are key to any machine learning product. In addition to regularly ingesting human-transcribed skill-directed utterances into the training corpus of the main SLM, we also
ingest ASR recognition results directly as a form of semi-supervised training [36]. In order to reduce the risk of reinforcing recognition error, we employ weighted language model training [37]
where each semi-supervised sentence is weighted according to its confidence value given by the
ASR system. These semi-supervised learning techniques ameliorate ASR errors due to distributional misallocations during the initial build phase.
7.3 Natural Language Understanding
In this section we describe the Language Understanding component of the Alexa Skills Kit. Given
an utterance, our NLU system aims to recognize its intent and relevant parameters as slots, a task
known as shallow semantic parsing. The NLU system is divided into deterministic and stochastic
subsystems.
The deterministic NLU subsystem uses the FST data representation to compile sample utterances provided by the developer into a recognizer model. This recognizer guarantees coverage of all utterances
specified by a skill developer while designing her grammar, allowing for a predictable experience for
both the developer and the customer.
The stochastic system uses BlueFlow to lend flexibility to the choice of model. We have built individual algorithmic components which implement a linear-chain CRF [38], a maximum entropy
classifier [39], an updated pipeline using LSTMs for each task [40], and joint neural models for entity
and intent prediction [41, 42]. This allowed us to experiment with different configurations, as long
as each conforms to the API of receiving a sentence and returning a semantic frame. In Fig. 4 we show
a traditional NLU system which performs entity recognition followed by intent classification and
finally slot resolution.
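One plausible arbitration between the two subsystems is sketched below; the exact-match dictionary stands in for the compiled FST recognizer, the stub stands in for the statistical parser, and the arbitration rule is an illustration rather than the production logic. The point is only that both sides honour the same sentence-in, frame-out contract.

def deterministic_parse(sentence, exact_grammar):
    # exact_grammar: dict mapping a literal sample utterance to its semantic frame
    return exact_grammar.get(sentence)

def statistical_parse(sentence):
    # stand-in for the CRF/LSTM/joint neural pipeline; always produces some frame
    return {"intent": "GetHoroscope", "slots": {}}

def parse(sentence, exact_grammar):
    frame = deterministic_parse(sentence, exact_grammar)
    return frame if frame is not None else statistical_parse(sentence)

grammar = {"get me my horoscope": {"intent": "GetHoroscope", "slots": {}}}
print(parse("get me my horoscope", grammar))
print(parse("tell me my horoscope please", grammar))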
Figure 4: Overview of the hybrid NLU pipeline.
7.3.1 Knowledge Injection
The performance of statistical models can be greatly improved by using features derived from
Knowledge-Base (KB) lookups, but it is infeasible to perform expensive queries at runtime. By
using efficient data structures to encode sketches of relevant portions of a knowledge base, we can
make them available to statistical models as feature extractors. This is done by encoding ontologically derived word-clusters as Bloom filters during training time [43, 44], including those from
custom types defined by developers. These features can be extremely useful for training models in
a low data regime since they encourage feature generalization to classes or categories, with a small
probability of false positives. This can massively increase the effective vocabulary of a small model
without blowing up its feature-space, and has the added benefit of enabling class coverage improvements
to be deployed asynchronously without needing to retrain statistical models.
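A minimal sketch of such gazetteer features is given below, assuming a simple double-hashing Bloom filter (a production system would use a tuned implementation): a word cluster is encoded once at training time and then queried as a cheap boolean feature at feature-extraction time. Membership tests are approximate, with a small chance of false positives and no false negatives, matching the trade-off described above.

import hashlib

class BloomFilter:
    def __init__(self, num_bits=1 << 16, num_hashes=4):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.num_bits for i in range(self.num_hashes)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

# Encode an ontology-derived word cluster (e.g., zodiac signs) once at training time...
zodiac = BloomFilter()
for word in ["aries", "taurus", "gemini", "leo", "pisces"]:
    zodiac.add(word)

# ...then expose a cheap boolean feature to the sequence model at extraction time.
def gazetteer_features(token):
    return {"in_zodiac_gazetteer": token.lower() in zodiac}

print(gazetteer_features("Leo"), gazetteer_features("tomorrow"))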
Training with KB features can lead to language feature under-training: the models learn to rely too
heavily on KB features to distinguish entities. This can be addressed by introducing a knowledge
dropout regularization parameter to prevent the model from overfitting [45].
7.3.2 Model Optimization for Deployment
Skill models are stored in DynamoDB [28] and loaded at runtime, and network calls account for the
bulk of added latency. We use standard techniques like feature hashing [46], weight quantization
[47], and sparsity constraints via modified elastic net regularization [48] to ensure that our models
are small enough to be stored cheaply and can be transferred over network calls quickly with no
statistically significant loss in accuracy.
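The hashing trick can be sketched as follows (the bucket count and sign hash are illustrative choices, not the production configuration); it bounds model size regardless of how many distinct features a skill's data generates.

import numpy as np

def hashed_feature_vector(features, num_buckets=2 ** 18):
    """features: iterable of (feature_name, value) pairs."""
    vec = np.zeros(num_buckets, dtype=np.float32)
    for name, value in features:
        bucket = hash(name) % num_buckets
        sign = 1.0 if hash(name + "#sign") % 2 == 0 else -1.0  # sign hash reduces collision bias
        vec[bucket] += sign * value
    return vec

x = hashed_feature_vector([("word=horoscope", 1.0), ("in_zodiac_gazetteer", 1.0)])
print(x.shape, int(np.count_nonzero(x)))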
8 Dialogue Subroutines
ASK supports the specification of dialogue subroutines for common tasks: a unified dialogue model
automates simple procedural dialogue capabilities, such as slot elicitation (e.g., User: Get me a cab
– Alexa: Where do you want to go?), confirmation questions (e.g., User: Portland – Alexa: You
want to go to Portland, Oregon, right?), and other easily specified mechanisms. These procedural
subroutines can either be invoked by developers to issue a dialogue act directive during the course of
an interaction, or defined as part of the interaction model. Although this system fulfills many use cases,
we recognize the limitations of having a simple dialogue system.
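Schematically, a skill invokes such a subroutine by returning a dialogue directive in its response. The shape below follows the public ASK dialogue directives at a high level but is simplified; the exact JSON schema should be taken from the ASK documentation rather than from this sketch.

def elicit_slot_response(slot_to_elicit, prompt):
    # Simplified response envelope; field names follow the public ASK dialogue
    # directives at a high level but are not guaranteed to be exhaustive or exact.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": prompt},
            "directives": [{"type": "Dialog.ElicitSlot", "slotToElicit": slot_to_elicit}],
            "shouldEndSession": False,
        },
    }

print(elicit_slot_response("DestinationCity", "Where do you want to go?"))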
9 Conclusions and Future Work
We described the ASK SLU service which allows third party developers to expand the capability of
the Alexa Voice Service. Our system abstracts away the intricacies of ASR and NLU, and provides
developers with an interface based on structured request data. The ASK service now hosts over
25,000 consumer facing SLU subsystems that improve over time, with hundreds of new Skills being
added every week. This implies that we have made significant progress towards our goal of building
a fully extensible SLU architecture. However, challenges still remain in building a seamless user
experience and in enabling the creation of useful and engaging agents. In order to seed research in
this area, Amazon has introduced the Alexa Prize, a 2.5 million dollar university competition to
advance conversational AI through voice. Alexa Prize bots were launched to customers as skills
using ASK, and early results have shown promise [6].
References
[1] Fariba Sadri, “Ambient intelligence: A survey,” ACM Computing Surveys (CSUR), vol. 43, no.
4, pp. 36, 2011.
[2] Amazon Web Services, “AWS Serverless Multi-Tier Architectures,” AWS Whitepapers, 2017.
[3] Amazon Web Services, “Amazon Lex Developer Guide,” AWS Whitepapers, 2017.
[4] Gabriel Lyons, Vinh Tran, Carsten Binnig, Ugur Cetintemel, and Tim Kraska, “Making the
case for query-by-voice with echoquery,” in Proceedings of the 2016 International Conference
on Management of Data. ACM, 2016, pp. 2129–2132.
[5] Prasetya Utama, Nathaniel Weir, Carsten Binnig, and Ugur Çetintemel, “Voice-based data
exploration: Chatting with your database,” Proceedings of the 2017 workshop on SearchOriented Conversational AI, 2017.
[6] Iulian V Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng Zhang, Zhouhan Lin,
Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, et al.,
“A deep reinforcement learning chatbot,” arXiv preprint arXiv:1709.02349, 2017.
[7] Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra
Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al., “The kaldi speech
recognition toolkit,” in IEEE 2011 workshop on automatic speech recognition and understanding. IEEE Signal Processing Society, 2011, number EPFL-CONF-192584.
[8] Steve J Young, The HTK hidden Markov model toolkit: Design and philosophy, University of
Cambridge, Department of Engineering, 1993.
[9] Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and
David McClosky, “The stanford corenlp natural language processing toolkit.,” 2014.
[10] Victor W Zue and James R Glass, “Conversational interfaces: Advances and challenges,”
Proceedings of the IEEE, vol. 88, no. 8, pp. 1166–1180, 2000.
[11] Jason D Williams and Eslam Kamal, “Fast and easy language understanding for dialog systems
with microsoft language understanding intelligent service (LUIS),” 2015.
[12] Jason D. Williams, Kavosh Asadi, and Geoffrey Zweig, “Hybrid code networks: practical and
efficient end-to-end dialog control with supervised and reinforcement learning,” in ACL, 2017.
[13] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, “Neural machine translation by
jointly learning to align and translate,” arXiv preprint arXiv:1409.0473, 2014.
[14] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang
Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah,
Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo,
Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason
Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey
Dean, “Google’s neural machine translation system: Bridging the gap between human and
machine translation,” CoRR, vol. abs/1609.08144, 2016.
[15] Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau,
“Building end-to-end dialogue systems using generative hierarchical neural network models,”
in Thirtieth AAAI Conference on Artificial Intelligence, 2016.
[16] D. Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay
Chaudhary, and Michael Young, “Machine learning: The high interest credit card of technical
debt,” in SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop), 2014.
[17] Hal Daumé III, “Frustratingly easy domain adaptation,” ACL 2007, p. 256, 2007.
[18] Young-Bum Kim, Karl Stratos, Ruhi Sarikaya, and Minwoo Jeong, “New transfer learning
techniques for disparate label sets,” in ACL, 2015.
[19] Jeffrey Pennington, Richard Socher, and Christopher D Manning, “Glove: Global vectors for
word representation,” in EMNLP, 2014.
[20] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean, “Distributed representations of words and phrases and their compositionality,” in Advances in neural information
processing systems, 2013, pp. 3111–3119.
[21] Ankur Bapna, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck, “Towards zero shot frame
semantic parsing for domain scaling,” in Interspeech 2017, 2017.
[22] Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya, “Frustratingly easy neural domain adaptation,” in COLING, 2016.
[23] Jan Trmal, Jan Zelinka, and Luděk Müller, “Adaptation of a feedforward artificial neural
network using a linear transform,” in International Conference on Text, Speech and Dialogue.
Springer, 2010, pp. 423–430.
[24] Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick,
Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell, “Progressive neural networks,” arXiv
preprint arXiv:1606.04671, 2016.
[25] Anjishnu Kumar, Pavankumar Reddy Muddireddy, Markus Dreyer, and Bjorn Hoffmeister,
“Zero-shot learning across heterogeneous overlapping domains,” in Proc. Interspeech 2017,
2017, pp. 2914–2918.
[26] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu,
Chiyuan Zhang, and Zheng Zhang, “Mxnet: A flexible and efficient machine learning library
for heterogeneous distributed systems,” arXiv preprint arXiv:1512.01274, 2015.
[27] Amazon Web Services, “AWS Storage Services Overview,” AWS Whitepapers, 2015.
[28] Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall, and Werner Vogels, “Dynamo: Amazon’s highly available key-value store,” in ACM SIGOPS Operating Systems Review. 2007, vol. 41, pp. 205–220, ACM.
[29] Xing Fan, Emilio Monti, Lambert Mathias, and Markus Dreyer, “Transfer learning for neural
semantic parsing,” in Proceedings of the 2nd Workshop on Representation Learning for NLP,
Vancouver, Canada, August 2017, pp. 48–56, Association for Computational Linguistics.
[30] John Lafferty, Andrew McCallum, and Fernando CN Pereira, “Conditional random fields:
Probabilistic models for segmenting and labeling sequence data,” in Proceedings of the 18th
International Conference on Machine Learning, 2001, vol. 951, pp. 282–289.
[31] Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris
Dyer, “Neural architectures for named entity recognition,” in Proceedings of NAACL-HLT,
2016, pp. 260–270.
[32] Robin Jia and Percy Liang, “Data recombination for neural semantic parsing,” Proceedings of
ACL, 2016.
[33] Faisal Ladhak, Ankur Gandhe, Markus Dreyer, Lambert Mathias, Ariya Rastrow, and Björn
Hoffmeister, “LatticeRnn: Recurrent neural networks over lattices,” Interspeech 2016, pp.
695–699, 2016.
[34] Mehryar Mohri, Fernando Pereira, and Michael Riley, “Weighted finite-state transducers in
speech recognition,” Computer Speech & Language, vol. 16, no. 1, pp. 69–88, 2002.
[35] Petar Aleksic, Mohammadreza Ghodsi, Assaf Michaely, Cyril Allauzen, Keith Hall, Brian
Roark, David Rybach, and Pedro Moreno, “Bringing contextual information to google speech
recognition,” in Sixteenth Annual Conference of the International Speech Communication
Association, 2015.
[36] Thomas Drugman, Janne Pylkkönen, and Reinhard Kneser, “Active and semi-supervised learning in asr: Benefits on the acoustic and language models.,” in INTERSPEECH, 2016, pp.
2318–2322.
[37] Hui Zhang and David Chiang, “Kneser-ney smoothing on expected counts,” in ACL, 2014.
[38] J. Lafferty, A. McCallum, and F. Pereira, “Conditional random fields: Probabilistic models
for segmenting and labeling sequence data,” in Proc. 18th International Conf. on Machine
Learning, 2001, pp. 282–289.
[39] A. L Berger, V. J.D Pietra, and S. A.D Pietra, “A maximum entropy approach to natural
language processing,” Computational Linguistics, vol. 22, no. 1, pp. 71, 1996.
[40] Kaisheng Yao, Baolin Peng, Yu Zhang, Dong Yu, Geoffrey Zweig, and Yangyang Shi, “Spoken
language understanding using long short-term memory neural networks,” in Spoken Language
Technology Workshop (SLT), 2014 IEEE. IEEE, 2014, pp. 189–194.
[41] Bing Liu and Ian Lane, “Attention-based recurrent neural network models for joint intent
detection and slot filling,” arXiv preprint arXiv:1609.01454, 2016.
[42] Dilek Hakkani-Tür, Gökhan Tür, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng,
and Ye-Yi Wang, “Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM,”
in INTERSPEECH, 2016, pp. 715–719.
[43] Xiaohu Liu, Ruhi Sarikaya, Liang Zhao, Yong Ni, and Yi-Cheng Pan, “Personalized natural
language understanding.,” 2016.
[44] Jan A. Botha, Emily Pitler, Ji Ma, Anton Bakalov, Alex Salcianu, David Weiss, Ryan
McDonald, and Slav Petrov, “Natural language processing with small feed-forward networks,” in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 2017, pp. 2869–2875, Supplementary material:
http://aclweb.org/anthology/attachments/D/D17/D17-1308.Attachment.zip.
[45] Eunsuk Yang, Young-Bum Kim, Ruhi Sarikaya, and Yu-Seop Kim, “Drop-out conditional
random fields for twitter with huge mined gazetteer,” in Proceedings of NAACL-HLT, 2016,
pp. 282–288.
[46] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg, “Feature hashing for large scale multitask learning,” in Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009, pp. 1113–1120.
[47] Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov, “Bag of tricks for
efficient text classification,” EACL 2017, p. 427, 2017.
[48] Hui Zou and Trevor Hastie, “Regularization and variable selection via the elastic net,” Journal
of the Royal Statistical Society: Series B (Statistical Methodology), vol. 67, no. 2, pp. 301–320,
2005.
Appendix A: BlueFlow Architecture
Architecture. The fundamental units of BlueFlow are components. Components are wrapped in
activities. Activities are chained together into recipes. Recipes are executed by an executor.
Components. A component is a function that solves a defined task. This can be simple (convert
from format A to format B) or complex (train a classifier). In either case, a component is the smallest
unit that other scientists would expect to reuse. Components are normal Python code and can be used
independently of BlueFlow.
Activities. Activities are lightweight wrappers around components, responsible for fetching data
from artifacts and then running a component. Artifacts are lazy file-like objects that support a
uniform interface for reading and writing data from heterogeneous data sources such as local files,
DynamoDB [28] and S3 [27]. Activities are expressed as regular Python, but they specify input and
output via an annotation syntax.
Below we show a simple activity to train a Maximum Entropy Classifier.
@Activity(inputs=('features_artifact'),
          outputs=('model_artifact'))
def train_classifier(features_artifact, model_artifact):
    model_file = components.train_maxent(features_artifact.fetch())
    model_artifact.put(model_file)
Recipes. A recipe is a chained-together set of activities. Recipe code is no longer normal Python, but
rather uses Python language constructs to define a symbolic graph of data flowing through Activities
as Artifacts, and can be serialized as a Directed Acyclic Graph (DAG). Then, we can execute the
independent paths of the DAG in parallel either on the same machine or on a cluster.
Below we show a recipe for training a Maximum Entropy Classifier.
@Recipe
def build_ic_model(data_file, executor):
    data_artifact = executor.new_artifact(data_file)
    features_artifact = executor.new_artifact()
    model_artifact = DynamoDBArtifact('models/classifier')
    extract_features(data_artifact, features_artifact)
    train_classifier(features_artifact, model_artifact)
Executors. An executor abstracts the details of a recipe execution. It is responsible for vending appropriate artifacts and for taking the appropriate actions when executing specific activities. Thus, a
local executor vends local files as intermediate artifacts and runs a recipe's activities on the same machine, whereas a remote executor vends S3 files as intermediate artifacts and runs a recipe's activities
on a cluster.
The executor is also responsible for performing recipe optimizations, via improved scheduling, IO
or smart object caching. The local executor can be used for low latency model builds such as the
ASK model builds. For offline experiments with higher throughput requirements we use the remote
executor to run jobs on a cluster. Switching from a naive local executor to a multithreaded local
executor results in model build speed increases of about 50 percent.
Other than recipe execution, executors add important features for production code such as logging,
metrics, and retries, that would otherwise result in significant boilerplate. Every Recipe is automatically converted into an equivalent command line tool for local execution and reproduction.
Appendix B: Intent Schema and Custom Slot Types
The intent schema specifies the intents supported by a skill and the expected slots for an intent.
An intent is the intended action for an utterance. Developers may define their own intents or use the
Amazon provided built-in intents. An intent may accept a slot as an argument to represent an entity
along with it’s semantic role in the sentence. The data type of the slot is captured by the slot type,
which corresponds to entity types from the ontology.
Developers may use the Amazon provided built-in slot types or define their own custom slot types.
Each custom slot type requires a list of representative values.
Listing 1: Sample Intent Schema for a Horoscope skill from the ASK Documentation
{
  "intents": [
    {
      "intent": "GetHoroscope",
      "slots": [
        {
          "name": "Sign",
          "type": "ZODIAC_SIGNS"
        },
        {
          "name": "Date",
          "type": "AMAZON.DATE"
        }
      ]
    }
  ]
}
Sample utterances form the training data used for building spoken language understanding models.
Each sample is labelled with an intent followed by an utterance corresponding to the intent.
Listing 2: Sample Utterances for a Horoscope skill
GetHoroscope what is the horoscope for {Sign}
GetHoroscope what will the horoscope for {Sign} be on {Date}
GetHoroscope get me my horoscope
GetHoroscope {Sign}
GENERATING THE IDEALS DEFINING UNIONS OF SCHUBERT VARIETIES
arXiv:1405.2945v1 [math.AG] 12 May 2014
ANNA BERTIGER
ABSTRACT. This note computes a Gröbner basis for the ideal defining a union of Schubert varieties.
More precisely, it computes a Gröbner basis for unions of schemes given by northwest rank conditions on the space of all matrices of a fixed size. Schemes given by northwest rank conditions include
classical determinantal varieties and matrix Schubert varieties, closures of Schubert varieties lifted
from the flag manifold to the space of matrices.
1. INTRODUCTION
We compute a Gröbner basis, and hence ideal generating set, for the ideal defining a union of
schemes each given by northwest rank conditions with respect to an “antidiagonal term order.”
A scheme defined by northwest rank conditions is any scheme whose defining equations are of
the form “all k × k minors in the northwest i × j sub-matrix of a matrix of variables,” where i, j,
and k can take varying values. These schemes represent a generalization of classical determinantal
varieties, those varieties whose defining equations are all (r + 1) × (r + 1) minors of a matrix of variables.
One geometrically important collection of schemes defined by northwest rank conditions is the set
of matrix Schubert varieties. Matrix Schubert varieties are closures of the lift of Schubert varieties
from the complete flag manifold to matrix space [Ful92]. In general, a matrix Schubert variety
for a partial permutation π is the subvariety of matrix space given by the rank conditions that
the northwest i × j sub-matrix must have rank at most the number of 1s in the northwest i × j
sub-matrix of the partial permutation matrix for π. Notice that the set of matrix Schubert varieties
contains the set of classical determinantal varieties, which are the zero locus of all minors of a
fixed size on the space of all matrices of fixed size.
Matrix Schubert varieties associated to honest, that is non-partial, permutations are the closures
of the lifts of the corresponding Schubert varieties in the flag manifold, B− \GLn . If Xπ is the matrix
Schubert variety for an honest permutation π, the projection
{full rank matrices} −→ B− \GLn C = FℓCn
sends Xπ ∩ GLn C onto the Schubert variety Xπ ⊆ FℓCn . Schubert varieties, orbits of B+ , stratify
FℓCn and give a basis for H∗ (FℓCn ). It is this application that led to the introduction of matrix
Schubert varieties in [Ful92]. Knutson and Miller showed that matrix Schubert varieties have a
rich algebro-geometric structure corresponding to beautiful combinatorics [KM05]. Fulton’s generators are a Gröbner basis with respect to any antidiagonal term order and their initial ideal is
the Stanley-Reisner ideal of the “pipe dream complex.” Further, Knutson and Miller show that
the pipe dream complex is shellable, hence the original ideal is Cohen-Macaulay. Pipe dreams,
the elements of the pipe dream complex, were originally called RC graphs and were developed
by Bergeron and Billey [BB93] to describe the monomials in polynomial representatives for the
classes corresponding to Schubert varieties in H∗ (FℓCn ).
The importance of Schubert varieties, and hence matrix Schubert varieties, to other areas of
geometry has become increasingly evident. For example, Zelevinsky [Zel85] showed that certain
quiver varieties, sequences of vector space maps with fixed rank conditions, are isomorphic to
Date: May 14, 2014.
Schubert varieties. Knutson, Miller and Shimozono [KMS06] produce combinatorial formulae for
quiver varieties using many combinatorial tools reminiscent of those for Schubert varieties.
1.1. Notation and Background. Much of the background surveyed here can be found in [MS05].
Let B− (respectively B+ ) denote the group of invertible lower triangular (respectively upper triangular) n × n matrices. Let M = (mi,j ) be a matrix of variables. In what follows π will be a possibly
partial permutation, written in one-line notation π(1) . . . π(n), with undefined entries π(i)
written ?. We shall write permutation even when we mean partial permutation in cases where
there is no confusion. A matrix Schubert variety Xπ is the closure of B− πB+ in the affine space of all
matrices, where π is a permutation matrix and B− and B+ act by downward row and rightward
column operations respectively. Notice that for π an honest permutation Xπ is the closure of the
lift of Xπ = B− \B− πB+ ⊆ B− \GLn C to the space of n × n matrices.
The Rothe diagram of a permutation is found by looking at the permutation matrix and crossing
out all of the cells weakly below, and the cells weakly to the right of, each cell containing a 1. The
remaining empty boxes form the Rothe diagram. The essential boxes [Ful92] of a permutation are
those boxes in the Rothe diagram that do not have any boxes of the diagram immediately south
or east of them. The Rothe diagrams for 2143 and 15432 are given in Figure 1.1. In both cases the
essential boxes are marked with the letter e.
FIGURE 1.1. The Rothe diagrams and essential sets of 2143 (left) and 15432 (right).
The rank matrix of a permutation π, denoted r(π), gives in each cell r(π)ij the rank of the i × j
northwest-justified sub-matrix of the permutation matrix for π. For example, the rank matrix of
15432 is
1 1 1 1 1
1 1 1 1 2
1 1 1 2 3
1 1 2 3 4
1 2 3 4 5 .
Theorem 1.1 ( [Ful92]). Matrix Schubert varieties have radical ideal I(Xπ ) = Iπ given by determinants
representing conditions given in the rank matrix r(π), that is, the (r(π)ij + 1) × (r(π)ij + 1) determinants
of the northwest i × j sub-matrix of a matrix of variables. In fact, it is sufficient to impose only those rank
conditions r(π)ij such that (i, j) is an essential box for π.
Hereafter we call the determinants corresponding to the essential rank conditions, or the analogous determinants for any ideal generated by northwest rank conditions, the Fulton generators.
One special form of ideal generating set is a Gröbner basis. To define a Gröbner basis we set
a total ordering on the monomials in a polynomial ring such that 1 ≤ m and m < n implies
mp < np for all monomials m, n and p. Let init f denote the largest monomial that appears in the
polynomial f. A Gröbner basis for the ideal I is a set {f1 , . . . , fr } ⊆ I such that init I := ⟨init f : f ∈
I⟩ = ⟨init f1 , . . . , init fr ⟩. Notice that a Gröbner basis for I is necessarily a generating set for I.
The antidiagonal of a matrix is the diagonal series of cells in the matrix running from the most
northeast to the most southwest cell. The antidiagonal term (or antidiagonal) of a determinant
is the product of the entries in the antidiagonal. For example, the antidiagonal of $\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)$ is the
cells occupied by b and c, and correspondingly, in the determinant ad − bc the antidiagonal term
is bc. Term orders that select antidiagonal terms from a determinant, called antidiagonal term
orders have proven especially useful in understanding ideals of matrix Schubert varieties. There
are several possible implementations of an antidiagonal term order on an n×n matrix of variables,
any of which would suit the purposes of this paper. One example is weighting the top right entry
highest and decreasing along the top row, then starting to decrease again at the right of the next
row; monomials are then ordered by their total weight.
Theorem 1.2 ( [KM05]). The Fulton generators for Iπ form a Gröbner basis under any antidiagonal term
order.
Typically we will denote the cells of a matrix that form antidiagonals by A or B. In what follows if A is the antidiagonal of a sub-matrix of M we will use the notation det(A) to denote the
determinant of this sub-matrix. We shall be fairly liberal in exchanging antidiagonal cells and the
corresponding antidiagonal terms, thus, for any antidiagonal term order, A = init det(A).
1.2. Statement of Result. Let I1 , . . . Ir be ideals defined by northwest rank conditions. We will
produce a Gröbner basis, and hence ideal generating set, for I1 ∩ · · · ∩ Ir . For each list of antidiagonals A1 , . . . , Ar , where Ai is the antidiagonal of a Fulton generator of Ii , we will produce a
Gröbner basis element gA1 ,...,Ar for ∩Ii . The generators gA1 ,...,Ar will be products of determinants,
though not simply the product of the r determinants corresponding to the Ai . For a fixed list of
antidiagonals A1 , . . . , Ar , build the generator gA1 ,...,Ar by:
(1) Begin with gA1 ,...,Ar = 1
(2) Draw a diagram with a dot of color i in each box of Ai and connect the consecutive dots of
color i with a line segment of color i.
(3) Break the diagram into connected components. Two dots are connected if they are either
connected by lines or are connected by lines to dots that occupy the same box.
(4) For each connected component, remove the longest series of boxes B such that there is
exactly one box in each row and column and the boxes are all in the same connected component. If there is a tie use the most northwest of the longest series of boxes. Note that B
need not be any of A1 , . . . , Ar . Multiply gA1 ,...,Ar by det(B). Remove this antidiagonal from
the diagram of the connected component, break the remaining diagram into components
and repeat.
Theorem 1.3. {gA1 ...Ar : Ai is an antidiagonal of a Fulton generator of Ii , 1 ≤ i ≤ r} form a Gröbner
basis, and hence a generating set, for $\bigcap_{i=1}^{r} I_i$.
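The construction of gA1 ,...,Ar can also be phrased computationally. The sketch below is an illustration, not part of the paper: antidiagonals are represented as sets of (row, column) cells, the output is the list of antidiagonals B whose determinants det(B) are multiplied together, and the "most northwest" tie-break is only approximated by taking the lexicographically smallest chain.

from itertools import combinations

def components(cells, antidiagonals):
    # Two surviving cells are connected if they are consecutive dots of some A_i
    # (both still present); dots occupying the same box are literally the same cell.
    adj = {c: set() for c in cells}
    for A in antidiagonals:
        path = sorted(A)
        for u, v in zip(path, path[1:]):
            if u in cells and v in cells:
                adj[u].add(v)
                adj[v].add(u)
    comps, seen = [], set()
    for start in cells:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            c = stack.pop()
            if c not in comp:
                comp.add(c)
                stack.extend(adj[c])
        seen |= comp
        comps.append(comp)
    return comps

def longest_antidiagonal(comp):
    # Longest subset with strictly increasing rows and strictly decreasing columns;
    # brute force is adequate for the small diagrams appearing in the examples.
    cells = sorted(comp)
    for k in range(len(cells), 0, -1):
        chains = [ch for ch in combinations(cells, k)
                  if all(r2 > r1 and c2 < c1
                         for (r1, c1), (r2, c2) in zip(ch, ch[1:]))]
        if chains:
            return set(min(chains))  # crude stand-in for "most northwest"
    return set()

def generator_antidiagonals(antidiagonals):
    antidiagonals = [set(A) for A in antidiagonals]
    factors = []
    queue = components(set().union(*antidiagonals), antidiagonals)
    while queue:
        comp = queue.pop()
        B = longest_antidiagonal(comp)
        factors.append(sorted(B))
        queue.extend(components(comp - B, antidiagonals))
    return factors

# First example of Section 2: the one-cell antidiagonals {(1,1)} and {(1,2)} are
# disjoint, so the generator is the product m_{1,1} m_{1,2}.
print(generator_antidiagonals([{(1, 1)}, {(1, 2)}]))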
1.3. Acknowledgements. This work constitutes a portion of my PhD thesis completed at Cornell
University under the direction of Allen Knutson. I wish to thank Allen for his help, advice and
encouragement in completing this project. Thanks also go to Jenna Rajchgot for helpful discussions in the early stages of this work. I’d also like to thank the authors of computer algebra system
Macaulay2 [GS], which powered the computational experiments necessary to do this work. I'm especially grateful to Mike Stillman who patiently answered many of my Macaulay2 questions over
the course of this work. Kevin Purbhoo gave very helpful comments on drafts of this manuscript
for which I cannot thank him enough.
2. EXAMPLES
We delay the proof of Theorem 1.3 to Section 3 and first give some examples of the generators
produced for given sets of antidiagonals. These examples are given by pictures of the antidiagonals on the left and corresponding determinantal equations on the right. Note that we only give
particular generators, rather than entire generating sets, which might be quite large. We then give
entire ideal generating sets for two smaller intersections.
If r = 1 then for each Fulton generator with antidiagonal A the algorithm produces the generator gA = det(A). Therefore, if we intersect only one ideal the algorithm returns the original set of
Fulton generators. The generator for the antidiagonal shown is exactly the determinant of the one
antidiagonal pictured:
m1,1 m1,2 m1,4
m3,1 m3,2 m3,4
m4,1 m4,2 m4,4
.
The generator for two disjoint antidiagonals is the product of the determinants corresponding
to the two disjoint antidiagonals:
m1,1 m1,2
m2,1 m3,2
m1,1 m1,2 m1,4
m3,1 m3,2 m3,4
m4,1 m4,2 m4,4
.
In general, if A1 , . . . , Ar are disjoint antidiagonals then the algorithm looks at each Ai separately as they are part of separate components and the result is that gA1 ,...,Ar = det(A1 ) · · · det(Ar ).
If A1 , . . . Ar overlap to form one antidiagonal X then the last step of the algorithm will occur
only once and will produce gA1 ,...Ar = det(X). For example,
m1,1 m1,2 m1,3 m1,4
m2,1 m2,2 m2,3 m2,4
m3,1 m3,2 m3,3 m3,4
m4,1 m4,2 m4,3 m4,4
.
In this example, there are two longest possible antidiagonals, the three cells occupied by the
green dots and the three cells occupied by the red dots. The ones occupied by the green dots are
more northwest, hence the generator for the three antidiagonals shown below is
m1,1 m1,2
m2,1 m2,2
m1,2 m1,3 m1,3
m2,2 m2,3 m2,3
m3,2 m3,3 m3,4
m4,2 m4,3
m5,2 m2,2
.
In the picture below, the longest possible antidiagonal uses all of the cells in the green
antidiagonal but only some of the cells in the red antidiagonal; however, there is only one possible
longest antidiagonal. Thus the generator is
m1,1 m1,2
m2,1 m2,2
m1,1 m1,2 m1,4 m1,5
m2,1 m2,2 m2,4 m2,5
m3,1 m3,2 m3,4 m3,5
m4,1 m4,2 m4,4 m4,5
m5,1
.
We now give two examples where the complete ideals are comparatively small. Firstly, we
calculate I(X231 ∪ X312 ) = I(X231 ) ∩ I(X312 ). I(X231 ) = ⟨m1,1 , m2,1 ⟩ and I(X312 ) = ⟨m1,1 , m1,2 ⟩. The
antidiagonals and corresponding generators are shown below with antidiagonals from generators
of I(X231 ) shown in red and antidiagonals of generators of I(X312 ) shown in blue. Note that the
antidiagonals are only one cell each in this case.
m1,1
m1,1 m1,2
m1,1 m2,1
m1,2 m2,1
Theorem 1.3 results in
I(X231 ∪ X312 ) = I(X231 ) ∩ I(X312 ) = ⟨m1,1 , m1,1 m1,2 , m1,1 m2,1 , m1,2 m2,1 ⟩.
As a slightly larger example, consider I(X1423 ∪ X1342 ) = I(X1423 ) ∩ I(X1342 ). These generators
are given below in the order that the antidiagonals are displayed reading left to right and top to
bottom. The antidiagonals for I(X1423 ) are shown in red while the antidiagonals for I(X1342 ) are shown
in blue. Note that the full 4 × 4 grid is not displayed, but only the northwest 3 × 3 portion
where antidiagonals for these two ideals may lie.
Here Theorem 1.3 produces
m1,1
m1,1 m1,2
,
m2,1
m2,1 m2,2
m1,1 m1,2
m1,1
+
m3,1 m2,2
m1,1 m1,2
m1,2
+
m2,1 m3,2
m2,2
m1,2
m2,2
m1,1
m1,1 m1,3
,
m2,1 m2,3
m1,3
m2,2
m2,1 m2,2
m3,1 m2,2
m3,1 m1,2
,
m2,1 m2,2
m1,1 m1,3
m3,1 m3,2
m1,1 m1,2
,
m3,1 m2,3
m1,1 m1,3
,
m3,1 m2,3
m1,2 m1,3
m2,2 m2,2
m2,1 m2,2
m3,1 m2,3
m1,1 m1,2 m1,3
, m2,1 m2,2 m2,3
m3,1 m3,2 m3,4
3. PROOF OF THEOREM 1.3
We now prove the main result of this paper, Theorem 1.3, which states that the gA1 ,...,Ar generate
I1 ∩ · · · ∩ Ir .
We begin with a few fairly general statements:
Theorem 3.1 ( [Knu]). If {Ii : i ∈ S} are ideals generated by northwest rank conditions then init(∩i∈S Ii ) =
∩i∈S (init Ii ).
Lemma 3.2 ( [KM05]). If J ⊆ K are homogeneous ideals in a polynomial ring such that init J = init K
then J = K.
Lemma 3.3. Let IA and IB be ideals that define schemes of northwest rank conditions and let det(A) ∈ IA
and det(B) ∈ IB be determinants with antidiagonals A and B respectively such that A ∪ B = X and
A ∩ B ≠ ∅. Then det(X) is in IA ∩ IB .
Proof. Let VX = V(det(X)), VA = V(IA ) and VB = V(IB ) be the varieties corresponding to the ideals
hdet(X)i, IA and IB . It is enough to show that VA ⊆ VX and VB ⊆ VX .
We will show that given a matrix with antidiagonal X with a sub-matrix with antidiagonal
A ⊆ X where the sub-matrix northwest of the cells occupied by A has rank at most length(A) − 1
then the full matrix has rank at most length(X) − 1. The corresponding statement for sub-matrix
with antidiagonal B can be proven by replacing A with B everywhere.
The basic idea of this proof is that we know the rank conditions on the rows and columns northwest of those occupied by A. The rank conditions given by A then imply other rank conditions as
adding either a row or a column to a sub-matrix can increase its rank by at most one.
FIGURE 3.1. The proof of Lemma 3.3. The antidiagonal cells in A are marked in black and the antidiagonal cells in X − A ⊆ B are marked in white.
Let k be the number of rows, also the number of columns in the antidiagonal X. Let the length
of A be l + 1, so the rank condition on all rows and columns northwest of those occupied by A
is at most l. Assume that the rightmost column of A is c and the leftmost column of A is t + 1.
Notice that this implies that the bottom row occupied by A is k − t, as the antidiagonal element in
column t + 1 is in row k − t. Thus, the northwest (k − t) × c sub-matrix of matrices in VA has rank at most l. Notice c ≥ (t+1)+(l+1), with equality if A occupies a continuous set of columns, so matrices in VA have rank at most l in the northwest (k−t)×(t+l+2) sub-matrix as well. Adding k−c ≤ k−(t+l+2) columns to the northwest (k−t)×c sub-matrix gives a (k−t)×k sub-matrix with rank at most l+k−c ≤ l+(k−t−l−2) = k−t−2.
Further, by the same principle, moving down t rows, the northwest k × k, i.e. the whole matrix
with antidiagonal X, has rank at most k − t − 2 + t = k − 2, hence has rank at most k − 1 and so is
in VX .
For a visual explanation of the proof of Lemma 3.3 see Figure 3.1.
Lemma 3.4. gA1 ,...,Ar ∈ Ii for 1 ≤ i ≤ r and hence
⟨gA1 ,...,Ar : Ai ranges over all antidiagonals for Fulton generators of Ii ⟩ ⊆ I1 ∩ · · · ∩ Ir .
Proof. Fix i. Let S be the first antidiagonal added to gA1 ,...,Ar that contains a box occupied by Ai . We shall show that det(S) is in Ii and hence gA1 ,...,Ar ∈ Ii as it is a multiple of det(S). If Ai ⊆ S then det(S) ∈ Ii , either because S = Ai or because Ai ⊊ S, in which case we apply Lemma
3.3. Otherwise, |S| ≥ |Ai | and S is weakly to the northwest of Ai . Therefore, there is a subset B
of S such that |B| = |Ai |, and B is weakly northwest of Ai . Hence, B is an antidiagonal for some
determinant in Ii , and again by Lemma 3.3 det(S) ∈ Ii .
Lemma 3.5. init gA1 ,...,Ar = A1 ∪ · · · ∪ Ar under any antidiagonal term order.
Proof. init gA1 ,...,Ar is a product of determinants, with collective antidiagonals A1 ∪ · · · ∪ Ar .
When we combine Lemma 3.5 and Theorem 3.1 we see that init⟨gA1 ,...,Ar ⟩ = init(∩i Ii ). Then,
Lemmas 3.2 and 3.4 combine to complete the proof of Theorem 1.3.
Note that Theorem 1.3 may produce an oversupply of generators. For example, if I1 = I2 , then
inputting the same set of p Fulton generators twice results in a Gröbner basis of p² polynomials
for I1 ∩ I2 = I1 = I2 .
REFERENCES
[BB93] Nantel Bergeron and Sara Billey, RC-graphs and Schubert polynomials, Experiment. Math. 2 (1993), no. 4, 257–269. MR 1281474 (95g:05107)
[Ful92] William Fulton, Flags, Schubert polynomials, degeneracy loci, and determinantal formulas, Duke Math. J. 65 (1992), no. 3, 381–420. MR 1154177 (93e:14007)
[GS] Daniel R. Grayson and Michael E. Stillman, Macaulay2, a software system for research in algebraic geometry, available at http://www.math.uiuc.edu/Macaulay2/.
[KM05] Allen Knutson and Ezra Miller, Gröbner geometry of Schubert polynomials, Ann. of Math. (2) 161 (2005), no. 3, 1245–1318. MR 2180402 (2006i:05177)
[KMS06] Allen Knutson, Ezra Miller, and Mark Shimozono, Four positive formulae for type A quiver polynomials, Invent. Math. 166 (2006), no. 2, 229–325. MR 2249801 (2007k:14098)
[Knu] Allen Knutson, Frobenius splitting, point-counting and degeneration, preprint, arXiv:0911.4941v1.
[MS05] Ezra Miller and Bernd Sturmfels, Combinatorial commutative algebra, Graduate Texts in Mathematics, vol. 227, Springer-Verlag, New York, 2005. MR 2110098 (2006d:13001)
[Zel85] A. V. Zelevinskiĭ, Two remarks on graded nilpotent classes, Uspekhi Mat. Nauk 40 (1985), no. 1(241), 199–200. MR 783619 (86e:14027)
| 0 |
Improving Scalability of Inductive Logic Programming
via Pruning and Best-Effort Optimisation
Mishal Kazmi1 , Peter Schüller2,C , and Yücel Saygın1
arXiv:1706.05171v1 [cs.AI] 16 Jun 2017
1
Faculty of Engineering and Natural Science,
Sabanci University,
Istanbul, Turkey
{mishalkazmi,ysaygin}@sabanciuniv.edu
2
Faculty of Engineering,
Marmara University,
Istanbul, Turkey
[email protected] / [email protected]
Technical Report: manuscript accepted for publication at Expert Systems With Applications (Elsevier).
© 2017. This manuscript version is made available under the CC-BY-NC-ND 4.0 license.
http://creativecommons.org/licenses/by-nc-nd/4.0/ .
Abstract
Inductive Logic Programming (ILP) combines rule-based and statistical artificial intelligence
methods, by learning a hypothesis comprising a set of rules given background knowledge and constraints for the search space. We focus on extending the XHAIL algorithm for ILP which is based
on Answer Set Programming and we evaluate our extensions using the Natural Language Processing
application of sentence chunking. With respect to processing natural language, ILP can cater for the
constant change in how we use language on a daily basis. At the same time, ILP does not require huge amounts of training examples as other statistical methods do, and it produces interpretable results, that is, a set of rules which can be analysed and tweaked if necessary. As contributions we
extend XHAIL with (i) a pruning mechanism within the hypothesis generalisation algorithm which
enables learning from larger datasets, (ii) a better usage of modern solver technology using recently
developed optimisation methods, and (iii) a time budget that permits the usage of suboptimal results.
We evaluate these improvements on the task of sentence chunking using three datasets from a recent
SemEval competition. Results show that our improvements allow for learning on bigger datasets
with results that are of similar quality to state-of-the-art systems on the same task. Moreover, we
compare the hypotheses obtained on datasets to gain insights on the structure of each dataset.
1 Introduction
Inductive Logic Programming (ILP) (Muggleton and De Raedt, 1994) is a formalism where a set of logical rules is learned from a set of examples and a background knowledge theory. By combining rule-based
and statistical artificial intelligence, ILP overcomes the brittleness of pure logic-based approaches and the
lack of interpretability of models of most statistical methods such as neural networks or support vector
machines. We here focus on ILP that is based on Answer Set Programming (ASP) as our underlying
logic programming language because we aim to apply ILP to Natural Language Processing (NLP) applications such as Machine Translation, Summarization, Coreference Resolution, or Parsing that require
nonmonotonic reasoning with exceptions and complex background theories.
In our work, we apply ILP to the NLP task of sentence chunking. Chunking, also known as ‘shallow parsing’, is the identification of short phrases such as noun phrases which mainly rely on Part of
Speech (POS) tags. In our experiments on sentence chunking (Tjong Kim Sang and Buchholz, 2000) we
encountered several problems with state-of-the-art ASP-based ILP systems XHAIL (Ray, 2009), ILED
(Katzouris et al., 2015), and ILASP2 (Law et al., 2015). XHAIL and ILASP2 showed scalability issues
already with 100 sentences as training data. ILED is designed to be highly scalable but failed in the
presence of simple inconsistencies in examples. We decided to investigate the issue in the XHAIL system,
which is open-source and documented well, and we made the following observations:
(i) XHAIL only terminates if it finds a provably optimal hypothesis,
(ii) the hypothesis search is done over all potentially beneficial rules that are supported by at least one
example, and
(iii) XHAIL contains redundancies in hypothesis search and uses outdated ASP technology.
In larger datasets, observation (i) is unrealistic, because finding a near-optimal solution is much easier
than proving optimality of the best solution, moreover in classical machine learning suboptimal solutions
obtained via non-exact methods routinely provide state-of-the-art results. Similarly, observation (ii)
makes it harder to find a hypothesis, and it generates overfitting hypotheses which contain rules that
are only required for a single example. Observation (iii) points out an engineering problem that can be
remedied with little theoretical effort.
To overcome the above issues, we modified the XHAIL algorithm and software, and we performed
experiments on a simple NLP chunking task to evaluate our modifications.
In detail, we make the following contributions.
• We extend XHAIL with best-effort optimisation using the newest ASP optimisation technology of unsat-core optimisation (Andres et al., 2012) with stratification (Alviano et al., 2015b;
Ansótegui et al., 2013) and core shrinking (Alviano and Dodaro, 2016) using the WASP2 (Alviano et al.,
2013, 2015a) solver and the Gringo (Gebser et al., 2011) grounder. We also extend XHAIL to provide information about the optimality of the hypothesis.
• We extend the XHAIL algorithm with a parameter Pr for pruning, such that XHAIL searches for
hypotheses without considering rules that are supported by fewer than Pr examples.
• We eliminate several redundancies in XHAIL by changing its internal data structures.
• We describe a framework for chunking with ILP, based on preprocessing with Stanford Core NLP
(Manning et al., 2014) tools.
• We experimentally analyse the relationship between the pruning parameter, number of training
examples, and prediction score on the sentence chunking (Tjong Kim Sang and Buchholz, 2000)
subtask of iSTS at SemEval 2016 (Agirre et al., 2016).
• We discuss the best hypothesis found for each of the three datasets in the SemEval task, and we
discuss what can be learned about the dataset from these hypotheses.
Only if we use all the above modifications together, XHAIL becomes applicable in this chunking task.
By learning a hypothesis from 500 examples, we can achieve results competitive with state-of-the-art
systems used in the SemEval 2016 competition.
Our extensions and modifications of the XHAIL software are available in a public fork of the official
XHAIL Git repository (Bragaglia and Schüller, 2016).
In Section 2 we provide an overview of logic programming and ILP. Section 3 gives an account of
related work and available ILP tools. In Section 4 we describe the XHAIL system and our extensions of
pruning, best-effort optimisation, and further improvements. Section 5 gives details of our representation
of the chunking task. In Section 6 we discuss empirical experiments and results. We conclude in Section
7 with a brief outlook on future work.
2 Background

We next introduce logic programming and, based on that, inductive logic programming.

2.1 Logic Programming
A logic program theory normally comprises an alphabet (variables, constants, quantifiers, etc.), a vocabulary, logical symbols, a set of axioms, and inference rules (Lloyd, 2012). A logic programming system
consists of two portions: the logic and control. Logic describes what kind of problem needs to be solved
and control is how that problem can be solved. An ideal of logic programming is for it to be purely
declarative. The popular Prolog (Clocksin and Mellish, 2003) system evaluates rules using resolution,
which makes the result of a Prolog program depend on the order of its rules and on the order of
the bodies of its rules. Answer Set Programming (ASP) (Brewka et al., 2011; Gebser et al., 2012a) is a
more recent logic programming formalism, featuring more declarativity than Prolog by defining semantics based on Herbrand models (Gelfond and Lifschitz, 1988). Hence the order of rules and the order
of the body of the rules does not matter in ASP. Most ASP programs follow the Generate-Define-Test
structure (Lifschitz, 2002) to (i) generate a space of potential solutions, (ii) define auxiliary concepts,
and (iii) test to invalidate solutions using constraints or incurring a cost on non-preferred solutions.
An ASP program consists of rules of the following structure:
a ← b1 , . . . , bm , not bm+1 , . . . , not bn
where a, bi are atoms from a first-order language, a is the head and b1 , . . . , not bn is the body of the rule,
and not is negation as failure. Variables start with capital letters, facts (rules without body condition)
are written as ‘a.’ instead of ‘a ← ’. Intuitively a is true if all positive body atoms are true and no
negative body atom is true.
The formalism can be understood more clearly by considering the following sentence as a simple
example:
Computers are normally fast machines unless they are old.
This would be represented as a logical rule as follows:
fastmachine(X) ← computer(X), not old(X).
where X is a variable, fastmachine, computer , and old are predicates, and old (X ) is a negated atom.
Adding more knowledge can change a previous understanding; this is common in human reasoning. Classical First Order Logic does not allow such non-monotonic reasoning; ASP, however, was
designed as a commonsense reasoning formalism: a program has zero or more answer sets as solutions,
adding knowledge to the program can remove answer sets as well as produce new ones. Note that ASP
semantics rule out self-founded truths in answer sets. We use the ASP formalism due to its flexibility
and declarativity. For formal details and a complete description of syntax and semantics see the ASP-Core-2 standard (Calimeri et al., 2012). ASP has been applied to several problems related to Natural
Language Processing, see for example (Mitra and Baral, 2016; Schüller, 2013, 2014, 2016; Schwitter,
2012; Sharma et al., 2015). An overview of applications of ASP in general can be found in (Erdem et al.,
2016).
2.2 Inductive Logic Programming
Processing natural language based on hand-crafted rules is impractical because human language is constantly evolving, partially due to the human creativity of language use. An example of this was recently
noticed on UK highways where they advised drivers, ‘Don’t Pokémon Go and drive’. Pokémon Go is being
informally used here as a verb even though it was only introduced as a game a few weeks before the sign
was put up. To produce robust systems, it is necessary to use statistical models of language. These models
are often pure Machine Learning (ML) estimators without any rule components (Manning and Schütze,
1999). ML methods work very well in practice, however, they usually do not provide a way for explaining
why a certain prediction was made, because they represent the learned knowledge in big matrices of real
numbers. Some popular classifiers used for processing natural language include Naive Bayes, Decision
Trees, Neural Networks, and Support Vector Machines (SVMs) (Dumais et al., 1998).
In this work, we focus on an approach that combines rule-based methods and statistics and provides interpretable learned models: Inductive Logic Programming (ILP). ILP is differentiated from ML
techniques by its use of an expressive representation language and its ability to make use of logically
encoded background knowledge (Muggleton and De Raedt, 1994). An important advantage of ILP over
ML techniques such as neural networks is that a hypothesis can be made readable by translating it into a piece of English text. Furthermore, if annotated corpora of sufficient size are not available or too
expensive to produce, deep learning or other data intense techniques are not applicable. However, we
can still learn successfully with ILP.
Formally, ILP takes as input a set of examples E, a set B of background knowledge rules, and a set
of mode declarations M , also called mode bias. As output, ILP aims to produce a set of rules H called
hypothesis which entails E with respect to B. The search for H with respect to E and B is restricted
by M , which defines a language that limits the shape of rules in the hypothesis candidates and therefore
the complexity of potential hypotheses.
Example 1. Consider the following example ILP instance (M, E, B) (Ray, 2009).
M = { #modeh flies(+bird).
      #modeb penguin(+bird).
      #modeb not penguin(+bird). }      (1)

E = { #example flies(a).
      #example flies(b).
      #example flies(c).
      #example not flies(d). }      (2)

B = { bird(X) :- penguin(X).
      bird(a).
      bird(b).
      bird(c).
      penguin(d). }      (3)
Based on this, an ILP system would ideally find the following hypothesis.

H = { flies(X) :- bird(X), not penguin(X). }      (4)

3 Related Work
Inductive Logic Programming (ILP) is a rather multidisciplinary field which extends to domains such
as computer science, artificial intelligence, and bioinformatics. Research done in ILP has been greatly
impacted by Machine Learning (ML), Artificial Intelligence (AI) and relational databases. Quite a few
surveys (Gulwani et al., 2015; Kitzelmann, 2009; Muggleton et al., 2012) discuss the systems and
applications of ILP in interdisciplinary areas. We next give related work of ILP in general and then focus
on ILP applied in the field of Natural Language Processing (NLP).
The foundations of ILP can be found in research by Plotkin (Plotkin, 1970, 1971), Shapiro (Shapiro,
1983) and Sammut and Banerji (Sammut and Banerji, 1986). The founding paper of Muggleton (Muggleton,
1991) led to the launch of the first international workshop on ILP. The strength of ILP lay in its ability to
draw on and extend the existing successful paradigms of ML and Logic Programming. At the beginning,
ILP was associated with the introduction of foundational theoretical concepts which included Inverse
Resolution (Muggleton, 1995; Muggleton and Buntine, 1992) and Predicate Invention (Muggleton, 1991;
Muggleton and Buntine, 1992). A number of ILP systems were developed along with learning about
the theoretical concepts of ILP such as FOIL (Quinlan, 1990) and Golem (Muggleton et al., 1990). The
widely-used ILP system Progol (Muggleton, 1995) introduced a new logically-based approach to refinement graph search of the hypothesis space based on inverting the entailment relation. Meanwhile,
the TILDE system (De Raedt, 1997) demonstrated the efficiency which could be gained by upgrading
decision-tree learning algorithms to first-order logic, this was soon extended towards other ML problems.
Some limitations of Prolog-based ILP include requiring extensional background and negative examples,
lack of predicate invention, search limitations and inability to handle cuts. Integrating bottom-up and
top-down searches, incorporating predicate invention, eliminating the need for explicit negative examples
and allowing restricted use of cuts helps in solving these issues (Mooney, 1996).
Probabilistic ILP (PILP) also gained popularity (Cussens, 2001a; De Raedt and Kersting, 2008;
Muggleton et al., 1996), its Prolog-based systems such as PRISM (Sato et al., 2005) and FAM (Cussens,
2001b) separate the actual learning of the logic program from the probabilistic parameters estimation
of the individual clauses. However in practice, learning the structure and parameters of probabilistic
logic representation simultaneously has proven to be a challenge (Muggleton, 2002). PILP is mainly a
unification of the probabilistic reasoning of Machine Learning with the relational logical representations
offered by ILP.
Meta-interpretive learning (MIL) (Muggleton et al., 2014) is a recent ILP method which learns recursive definitions using Prolog and ASP-based declarative representations. MIL is an extension of the
Prolog meta-interpreter; it derives a proof by repeatedly fetching the first-order Prolog clauses and additionally fetching higher-order meta-rules whose heads unify with a given goal, and saves the resulting
meta-substitutions to form a program.
Most ILP research has been aimed at Horn programs which exclude Negation as Failure (NAF).
Negation is a key feature of logic programming and provides a means for non-monotonic commonsense
reasoning under incomplete information. This fails to exploit the full potential of normal programs that
allow NAF.
We next give an overview of ILP systems based on ASP that are designed to operate in the presence
of negation. Then we give an overview of ILP literature related to NLP.
3.1 ASP-based ILP Systems
The eXtended Hybrid Abductive Inductive Learning system (XHAIL) is an ILP approach based on ASP
that generalises techniques of language and search bias from Horn clauses to normal logic programs with
full usage of NAF (Ray, 2009). Like its predecessor system Hybrid Abductive Inductive Learning (HAIL)
which operated on Horn clauses, XHAIL is based on Abductive Logic Programming (ALP) (Kakas et al.,
1992), we give more details on XHAIL in Section 4.
The Incremental Learning of Event Definitions (ILED) algorithm (Katzouris et al., 2015) relies on
Abductive-Inductive learning and comprises of a scalable clause refinement methodology based on a
compressive summarization of clause coverage in a stream of examples. Previous ILP learners were batch
learners and required all training data to be in place prior to the initiation of the learning process. ILED
learns incrementally by processing training instances when they become available and altering previous
inferred knowledge to fit new observation, this is also known as theory revision. It exploits previous
computations to speed-up the learning since revising the hypothesis is considered more efficient than
learning from scratch. ILED attempts to cover a maximum of examples by re-iterating over previously
seen examples when the hypothesis has been refined. While XHAIL can ensure optimal example coverage
easily by processing all examples at once, ILED does not preserve this property due to a non-global view
on examples.
When considering ASP-based ILP, negation in the body of rules is not the only interesting addition
to the overall concept of ILP. An ASP program can have several independent solutions, called answer
sets, of the program. Even the background knowledge B can admit several answer sets without any
addition of facts from examples. Therefore, a hypothesis H can cover some examples in one answer set,
while others are covered by another answer set. XHAIL and ILED approaches are based on finding a
hypothesis that is covering all examples in a single answer set.
The Inductive Learning of Answer Set Programs approach (ILASP) is an extension of the notion
of learning from answer sets (Law et al., 2014). Importantly, it covers positive examples bravely (i.e.,
in at least one answer set) and ensures that the negation of negative examples is cautiously entailed
(i.e., no negative example is covered in any answer set). Negative examples are needed to learn Answer
Set Programs with non-determinism otherwise there is no concept of what should not be in an Answer
Set. ILASP conducts a search in multiple stages for brave and cautious entailment and processes all
examples at once. ILASP performs a less informed hypothesis search than XHAIL or ILED, that means
large hypothesis spaces are infeasible for ILASP while they are not problematic for XHAIL and ILED,
on the other hand, ILASP supports aggregates and constraints while the older systems do not support
these. ILASP2 (Law et al., 2015) extends the hypothesis space of ILASP with choice rules and weak
constraints. This permits searching for hypotheses that encode preference relations.
3.2 ILP and NLP
From an NLP point of view, the hope of ILP is to be able to steer a mid-course between the two alternatives
of large-scale but shallow levels of analysis and small scale but deep and precise analysis. ILP should
produce a better ratio between breadth of coverage and depth of analysis (Muggleton, 1999). ILP has
been applied to the field of NLP successfully; it has not only been shown to have higher accuracies
than various other ML approaches in learning the past tense of English but also shown to be capable of
learning accurate grammars which translate sentences into deductive database queries (Law et al., 2014).
Except for one early application (Wirth, 1989) no application of ILP methods surfaced until the system
CHILL (Mooney, 1996) was developed which learned a shift-reduce parser in Prolog from a training
corpus of sentences paired with the desired parses by learning control rules and uses ILP to learn control
strategies within this framework. This work also raised several issues regarding the capabilities and
testing of ILP systems. CHILL was also used for parsing database queries to automate the construction
of a natural language interface (Zelle and Mooney, 1996) and helped in demonstrating its ability to learn
semantic mappings as well.
An extension of CHILL, CHILLIN (Zelle et al., 1994) was used along with an extension of FOIL,
mFOIL (Tang and Mooney, 2001), for semantic parsing. CHILLIN combines top-down and bottom-up induction methods, while mFOIL is a top-down ILP algorithm designed with imperfect data in mind, which uses a pre-pruning algorithm to judge whether a clause refinement is significant for the overall performance. This emphasised how the combination of multiple clause constructors helps improve the overall learning, which is a rather similar concept to Ensemble Methods in standard ML. Note that CHILLIN pruning is based on probability estimates and has the purpose of dealing with inconsistency in the data. In contrast, XHAIL already supports learning from inconsistent
data, and the pruning we discuss in Section 4.1 aims to increase scalability.
Earlier ILP systems such as TILDE and Aleph (Srinivasan, 2001) have been applied to preference learning, which addresses learning ratings such as good, poor and bad. ASP expresses preferences through weak constraints or optimisation statements, which impose an ordering on the answer sets (Law et al., 2015).
The system of Mitra and Baral (Mitra and Baral, 2016) uses ASP as primary knowledge representation and reasoning language to address the task of Question Answering. They use a rule layer that is
partially learned with XHAIL to connect results from an Abstract Meaning Representation parser and
an Event Calculus theory as background knowledge.
4 Extending XHAIL algorithm and system
Initially, we intended to use the latest ILP systems (ILASP2 or ILED) in our work. However, preliminary
experiments with ILASP2 showed a lack in scalability (memory usage) even for only 100 sentences due to
the unguided hypothesis search space. Moreover, experiments with ILED uncovered several problematic
corner cases in the ILED algorithm that led to empty hypotheses when processing examples that were
mutually inconsistent (which cannot be avoided in real-life NLP data). While trying to fix these problems
in the algorithm, further issues in the ILED implementation came up. After consulting the authors of
(Mitra and Baral, 2016) we learned that they had the same issues and used XHAIL, therefore we also
opted to base our research on XHAIL due to it being the most robust tool for our task in comparison to
the others.
Although XHAIL is applicable, we discovered several drawbacks and improved the approach and the
XHAIL system. We provide an overview of the parts we changed and then present our modifications.
Figure 1 shows in the middle the original XHAIL components and on the right our extension.
XHAIL finds a hypothesis using several steps. Initially the examples E plus background knowledge
B are transformed into a theory of Abductive Logic Programming (Kakas et al., 1992). The Abduction
part of XHAIL explains observations with respect to a prior theory, which yields the Kernel Set, ∆. ∆
is a set of potential heads of rules given by M such that a maximum of examples E is satisfied together
with B.
Example 2 (continued). Given (M, E, B) from Example 1, XHAIL uses B, E, and the head part of M ,
to generate the Kernel Set ∆ by abduction.

Figure 1: XHAIL architecture. The dotted line shows the replaced module with our version represented by the thick solid line. Examples E, Background Knowledge B, and the head part of the Mode Bias M feed into Abduction, which produces the Kernel Set ∆; Deduction with the body part of M yields the ground program K; Generalisation (with support counting in our version) yields the non-ground program K′; Pruning selects a subset of K′, from which Induction produces the Hypothesis.
∆ = { flies(a), flies(b), flies(c) }
The Deduction part uses ∆ and the body part of the mode bias M to generate a ground program K.
K contains rules which define atoms in ∆ as true based on B and E.
The Generalisation part replaces constant terms in K with variables according to the mode bias M ,
which yields a non-ground program K ′ .
Example 3 (continued). From the above ∆ and M from (1), deduction and generalisation yield the following K and K′.

K = { flies(a) :- bird(a), not penguin(a).
      flies(b) :- bird(b), not penguin(b).
      flies(c) :- bird(c), not penguin(c). }

K′ = { flies(X) :- bird(X), not penguin(X).
       flies(Y) :- bird(Y), not penguin(Y).
       flies(Z) :- bird(Z), not penguin(Z). }
The Induction part searches for the smallest part of K ′ that entails as many examples of E as possible
given B. This part of K ′ which can contain a subset of the rules of K ′ and for each rule a subset of body
atoms is called a hypothesis H.
Example 4 (continued). The smallest hypothesis that covers all examples E in (2) is (4).
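As a rough illustration of this lifting step (our sketch, not XHAIL's actual implementation, which operates on parsed rule objects rather than strings), the transformation of a ground rule of K into a non-ground rule of K′ can be pictured as a consistent replacement of constants by variables:

    import re

    def generalise(ground_rule, constants_to_lift):
        # Replace every occurrence of each constant that fills a '+'-typed
        # argument position by a fresh variable, consistently within the rule.
        lifted = ground_rule
        for i, const in enumerate(constants_to_lift):
            lifted = re.sub(r'\b%s\b' % re.escape(const), 'V%d' % i, lifted)
        return lifted

    # generalise("flies(a) :- bird(a), not penguin(a)", ["a"])
    # returns "flies(V0) :- bird(V0), not penguin(V0)".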
We next describe our modifications of XHAIL.
4.1 Kernel Pruning according to Support
The computationally most expensive part of the search in XHAIL is Induction. Each non-ground rule
in K ′ is rewritten into a combination of several guesses, one guess for the rule and one additional guess
for each body atom in the rule.
We moreover observed that some non-ground rules in K ′ are generalisations of many different ground
rules in K, while some non-ground rules correspond with only a single instance in K. In the following,
we say that the support of r in K is the number of ground rules in K that are transformed into r ∈ K ′
in the Generalisation module of XHAIL (see Figure 1).
Intuitively, the higher the support, the more examples can be covered with that rule, and the more
likely that rule or a part of it will be included in the optimal hypothesis.
Therefore we modified the XHAIL algorithm as follows.
• During Generalisation, we keep track of the support of each rule r ∈ K ′ by counting how often a
generalisation yields the same rule r.
• We add an integer pruning parameter P r to the algorithm and use only those rules from K ′ in the
Induction component that have a support higher than P r.
This modification is depicted as bold components which replace the dotted Generalisation module in
Figure 1.
Pruning has several consequences. From a theoretical point of view, the algorithm becomes incomplete
for P r > 0, because Induction searches in a subset of the relevant hypotheses. Hence Induction might not
be able to find a hypothesis that covers all examples, although such a hypothesis might exist with P r = 0.
From a practical point of view, pruning realises something akin to regularisation in classical ML; only
strong patterns in the data will find their way into Induction and have the possibility to be represented
in the hypothesis. A bit of pruning will therefore automatically prevent overfitting and generate more
general hypotheses. As we will show in the experiments in Section 6, pruning allows us to configure a trade-off between considering low-support rules and omitting them entirely, as well as between finding a more optimal hypothesis and accepting a highly suboptimal one.
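The counting itself is straightforward. The following Python sketch (ours; generalise() merely stands in for XHAIL's Generalisation of a ground rule, as illustrated in Example 3) shows how the support of each non-ground rule is accumulated and how rules at or below the pruning threshold Pr are discarded before Induction:

    from collections import Counter

    def prune_kernel(ground_rules, generalise, pr):
        # Count how many ground rules of K generalise to the same rule of K'.
        support = Counter(generalise(rule) for rule in ground_rules)
        # Keep only rules whose support exceeds the pruning parameter Pr.
        pruned_kernel = [rule for rule, count in support.items() if count > pr]
        return pruned_kernel, support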
4.2 Unsat-core based and Best-effort Optimisation
We observed that ASP search in XHAIL Abduction and Induction components progresses very slowly
from a suboptimal to an optimal solution. XHAIL integrates version 3 of Gringo (Gebser et al., 2011)
and Clasp (Gebser et al., 2012b) which are both quite outdated. In particular Clasp in this version does
not support three important improvements that have been found for ASP optimisation: (i) unsat-core
optimisation (Andres et al., 2012), (ii) stratification for obtaining suboptimal answer sets (Alviano et al.,
2015b; Ansótegui et al., 2013), and (iii) unsat-core shrinking (Alviano and Dodaro, 2016).
Method (i) inverts the classical branch-and-bound search methodology which progresses from worst
to better solutions. Unsat-core optimisation assumes all costs can be avoided and finds unsatisfiable cores
of the problem until the assumption is true and a feasible solution is found. This has the disadvantage of
providing only the final optimal solution, and to circumvent this disadvantage, stratification in method
(ii) was developed which allows for combining branch-and-bound with method (i) to approach the optimal
value both from cost 0 and from infinite cost. Furthermore, unsat-core shrinking in method (iii), also
called ‘anytime ASP optimisation’, has the purpose of providing suboptimal solutions and aims to find
smaller cores which can speed up the search significantly by cutting more of the search space (at the cost
of searching for a smaller core). In experiments with the inductive encoding of XHAIL we found that all
three methods have a beneficial effect.
Currently, only the WASP solver (Alviano et al., 2013, 2015a) supports all of (i), (ii), and (iii), therefore we integrated WASP into XHAIL, which has a different output than Clasp. We also upgraded XHAIL
to use Gringo version 4 which uses the new ASP-Core-2 standard and has some further (performance)
advantages over older versions.
Unsat-core optimisation often finds solutions with a reasonable cost, near the optimal value, and
then takes a long time to find the true optimum or prove optimality of the found solution. Therefore,
we extended XHAIL as follows:
• a time budget for search can be specified on the command line,
• after the time budget has elapsed, the best-known solution at that point is used and the algorithm
continues, furthermore
• the distance from the optimal value is provided as output.
This affects the Induction step in Figure 1 and introduces a best-effort strategy; along with the obtained
hypothesis we also get the distance from the optimal hypothesis, which is zero for optimal solutions.
Using a suboptimal hypothesis means that either fewer examples are covered by the hypothesis than possible, or that the hypothesis is bigger than necessary. In practice, receiving a result is better than receiving no result at all, and our experiments show that XHAIL becomes applicable to reasonably-sized datasets using these extensions.

Figure 2: General overview of our framework: Preprocessing (Stanford CoreNLP tools), Learning (ILP tool XHAIL), and Testing (chunking with ASP).
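The essence of the best-effort strategy can be sketched in a few lines of Python (our illustration; the iterator abstracts over the stream of improving models that the solver reports under unsat-core optimisation with stratification, and the helper names are not part of the XHAIL code base):

    import time

    def best_effort(improving_models, budget_seconds):
        # improving_models yields (cost, model) pairs with decreasing cost.
        deadline = time.monotonic() + budget_seconds
        best_cost, best_model = None, None
        for cost, model in improving_models:
            if best_cost is None or cost < best_cost:
                best_cost, best_model = cost, model
            if time.monotonic() >= deadline:
                break  # budget elapsed: continue with the best-known solution
        return best_cost, best_model

    def distance_from_optimum(upper_bound, lower_bound):
        # Reported alongside the hypothesis; zero means provable optimality
        # (this is the So measure used in Section 6.3).
        return (upper_bound - lower_bound) / lower_bound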
4.3 Other Improvements
We made two minor engineering contributions to XHAIL. A practically effective improvement of XHAIL
concerns K ′ . As seen in Example 3, three rules that are equivalent modulo variable renaming are
contained in K ′ . XHAIL contains canonicalization algorithms for avoiding such situations, based on
hashing body elements of rules. However, we found that for cases with more than one variable and for
cases with more than one body atom, these algorithms are not effective because XHAIL (i) uses a set data
structure that maintains an order over elements, (ii) the set data structure is sensitive to insertion order,
and (iii) hashing the set relies on the order to be canonical. We made this canonicalization algorithm
applicable to a far wider range of cases by changing the data type of rule bodies in XHAIL to a set that
maintains an order depending on the value of set elements. This comes at a very low additional cost for
set insertion and often reduces the size of K ′ (and therefore the computational effort of the Induction step) without
adversely changing the result of induction.
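A minimal Python sketch of the underlying idea (the actual change is to XHAIL's Java data structures, and the sketch assumes literals have already been normalised with respect to variable renaming, which XHAIL's canonicalization performs before hashing): if rule bodies are stored in an order determined by the value of their elements, two bodies containing the same literals always yield the same canonical form, regardless of insertion order.

    def canonical_body(body_literals):
        # Order literals by value rather than by insertion order.
        return tuple(sorted(body_literals))

    def deduplicate_rules(rules):
        # rules: iterable of (head, body_literals) pairs from K'.
        unique = {}
        for head, body in rules:
            unique.setdefault((head, canonical_body(body)), (head, body))
        return list(unique.values())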
Another improvement concerns debugging the ASP solver. XHAIL starts the external ASP solver
and waits for the result. During ASP solving, no output is visible, however, ASP solvers provide output
that is important for tracking the distance from optimality during a search. We extended XHAIL so
that the output of the ASP solver can be made visible during the run using a command line option.
5 Chunking with ILP
We evaluate the improvements of the previous section using the NLP task of chunking. Chunking
(Tjong Kim Sang and Buchholz, 2000) or shallow parsing is the identification of short phrases such as
noun phrases or prepositional phrases, usually based heavily on Part of Speech (POS) tags. POS provides
only information about the token type, i.e., whether words are nouns, verbs, adjectives, etc., and chunking
derives from that a shallow phrase structure, in our case a single level of chunks.
Our framework for chunking has three main parts as shown in Figure 2. Preprocessing is done using
the Stanford CoreNLP tool from which we obtain the facts that are added to the background knowledge
of XHAIL or used with a hypothesis to predict the chunks of an input. Using XHAIL as our ILP solver we
learn a hypothesis (an ASP program) from the background knowledge, mode bias, and from examples
which are generated using the gold-standard data. We predict chunks using our learned hypothesis
and facts from preprocessing, using the Clingo (Gebser et al., 2008) ASP solver. We test by scoring
predictions against gold chunk annotations.
Example 5. An example sentence in the SemEval iSTS dataset (Agirre et al., 2016) is as follows.

Former Nazi death camp guard Demjanjuk dead at 91      (5)

The chunking present in the SemEval gold standard is as follows.

[ Former Nazi death camp guard Demjanjuk ] [ dead ] [ at 91 ]      (6)

5.1 Preprocessing
Stanford CoreNLP tools (Manning et al., 2014) are used for tokenisation and POS-tagging of the input.
Using a shallow parser (Bohnet et al., 2013) we obtain the dependency relations for the sentences. Our
ASP representation contains atoms of the following form:
pos(c_NNP,1). head(2,1). form(1,"Former"). rel(c_NAME,1).
pos(c_NNP,2). head(5,2). form(2,"Nazi"). rel(c_NMOD,2).
pos(c_NN,3). head(4,3). form(3,"death"). rel(c_NMOD,3).
pos(c_NN,4). head(5,4). form(4,"camp"). rel(c_NMOD,4).
pos(c_NN,5). head(7,5). form(5,"guard"). rel(c_SBJ,5).
pos(c_NNP,6). head(5,6). form(6,"Demjanjuk"). rel(c_APPO,6).
pos(c_VBD,7). head(root,7). form(7,"dead"). rel(c_ROOT,7).
pos(c_IN,8). head(7,8). form(8,"at"). rel(c_ADV,8).
pos(c_CD,9). head(8,9). form(9,"91"). rel(c_PMOD,9).
(a) Preprocessing Output

postype(X) :- pos(X,_).
token(X) :- pos(_,X).
nextpos(P,X) :- pos(P,X+1).
(b) Background Knowledge

#modeh split(+token).
#modeb pos($postype,+token).
#modeb nextpos($postype,+token).
(c) Mode Restrictions

goodchunk(1) :- not split(1), not split(2), not split(3), not split(4), not split(5), split(6).
goodchunk(7) :- split(6), split(7).
goodchunk(8) :- split(7), not split(8).
#example goodchunk(1).
#example goodchunk(7).
#example goodchunk(8).
(d) Examples

Figure 3: XHAIL input for the sentence 'Former Nazi death camp guard Demjanjuk dead at 91' from the Headlines Dataset
• pos(P , T ) which represents that token T has POS tag P ,
• form(T , Text ) which represents that token T has surface form Text ,
• head (T1 , T2 ) and rel (R, T ) which represent that token T2 depends on token T1 with dependency
relation R.
Example 6 (continued). Figure 3a shows the result of preprocessing performed on sentence (5), which
is a set of ASP facts.
We use Penn Treebank POS-tags as they are provided by Stanford CoreNLP. To form valid ASP
constant terms from POS-tags, we prefix them with ‘c_’, replace special characters with lowercase
letters (e.g., ‘PRP$’ becomes ‘c_PRPd’). In addition, we create specific POS-tags for punctuation (see
Section 6.4).
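The following Python sketch (ours; the helper names are illustrative and not part of the released pipeline) shows this normalisation of POS-tags into ASP constants together with the emission of the pos/2, form/2, head/2 and rel/2 facts in the format of Figure 3a:

    def asp_constant(tag):
        # Prefix with 'c_' and replace special characters by lowercase letters,
        # e.g. 'PRP$' becomes 'c_PRPd' (assumed mapping '$' -> 'd').
        return "c_" + "".join("d" if ch == "$" else ch for ch in tag)

    def facts_for_token(index, word, pos_tag, head_index, dep_rel):
        return [
            "pos(%s,%d)." % (asp_constant(pos_tag), index),
            'form(%d,"%s").' % (index, word),
            "head(%s,%d)." % (str(head_index), index),
            "rel(%s,%d)." % (asp_constant(dep_rel), index),
        ]

    # facts_for_token(1, "Former", "NNP", 2, "NAME") yields
    # ['pos(c_NNP,1).', 'form(1,"Former").', 'head(2,1).', 'rel(c_NAME,1).']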
5.2 Background Knowledge and Mode Bias
The Background Knowledge we use is shown in Figure 3b. We define which POS-tags can exist in predicate postype/1 and which tokens exist in predicate token/1. Moreover, we provide for each token the POS-tag of its successor token in predicate nextpos/2.
Mode bias conditions are shown in Figure 3c; these limit the search space for hypothesis generation. Hypothesis rules contain in the head atoms of the form

split(T)

which indicates that a chunk ends at token T and a new chunk starts at token T + 1. The argument of predicate split/1 in the head is of type token.
The body of hypothesis rules can contain pos/2 and nextpos/2 predicates, where the first argument
is a constant of type postype (which is defined in Figure 3b) and the second argument is a variable of
type token. Hence this mode bias searches for rules defining chunk splits based on POS-tag of the token
and the next token.
We deliberately use a very simple mode bias that does not make use of all atoms in the facts obtained
from preprocessing. This is discussed in Section 6.5.
5.3 Learning with ILP
Learning with ILP is based on examples that guide the search. Figure 3d shows rules that recognise gold
standard chunks and #example instructions that define for XHAIL which atoms must be true to entail
an example. These rules with goodchunk /1 in the head define what a good (i.e., gold standard) chunk is
in each example based on where a split in a chunk occurs in the training data to help in the learning of
a hypothesis for chunking.
Note that negation is present only in these rules, although we could use it anywhere else in the
background knowledge. Using the background knowledge, mode bias, and examples, XHAIL is then able
to learn a hypothesis.
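The rules and #example statements in Figure 3d follow a fixed pattern and can be generated mechanically from the gold chunk boundaries; the following Python sketch (our illustration, not the exact script used in the experiments) reproduces that pattern:

    def goodchunk_rules(chunks, n_tokens):
        # chunks: list of (first, last) token indices of the gold chunks.
        rules, examples = [], []
        for first, last in chunks:
            body = []
            if first > 1:
                body.append("split(%d)" % (first - 1))
            body += ["not split(%d)" % t for t in range(first, last)]
            if last < n_tokens:
                body.append("split(%d)" % last)
            rules.append("goodchunk(%d) :- %s." % (first, ", ".join(body)))
            examples.append("#example goodchunk(%d)." % first)
        return rules, examples

    # goodchunk_rules([(1, 6), (7, 7), (8, 9)], 9) reproduces Figure 3d.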
5.4 Chunking with ASP using Learned Hypothesis
The hypothesis generated by XHAIL can then be used together with the background knowledge specified
in Figure 3b, and with the preprocessed input of a new sentence. Evaluating all these rules yields a set
of split points in the sentence, which corresponds to a predicted chunking of the input sentence.
Example 7 (continued). Given sentence (5) with token indices 1, . . . , 9, an answer set that contains the
atoms {split (6), split (7)} and no other atoms for predicate split /1 yields the chunking shown in (6).
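The corresponding post-processing step is equally simple; the sketch below (ours) groups the token indices of a sentence into chunks, closing a chunk after every token t for which split(t) holds in the answer set:

    def chunks_from_splits(split_tokens, n_tokens):
        chunks, current = [], []
        for t in range(1, n_tokens + 1):
            current.append(t)
            if t in split_tokens:
                chunks.append(current)
                current = []
        if current:
            chunks.append(current)
        return chunks

    # chunks_from_splits({6, 7}, 9) gives [[1, 2, 3, 4, 5, 6], [7], [8, 9]],
    # i.e. the chunking shown in (6).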
6 Evaluation and Discussion
6.1 Datasets
We are using the datasets from the SemEval 2016 iSTS Task 2 (Agirre et al., 2016), which included two
separate files containing sentence pairs. Three different datasets were provided: Headlines, Images, and
Answers-Students. The Headlines dataset was mined from various news sources by the European Media Monitor. The Images dataset was a collection of captions obtained from the Flickr dataset (Rashtchian et al., 2010). The Answers-Students corpus consists of the interactions between students and the BEETLE II tutorial dialogue system, an intelligent tutoring engine that teaches students basic electricity and electronics. In the following, we denote by S1 and S2 the first and second sentence, respectively, of the sentence pairs in these datasets. Regarding the size of the SemEval Training dataset, the Headlines and Images datasets are larger and contained 756 and 750 sentence pairs, respectively. However, the Answers-Students dataset was smaller and contained only 330 sentence pairs. In addition, all datasets contain a
Test portion of sentence pairs.
We use k-fold cross-validation to evaluate chunking with ILP, which yields k learned hypotheses and
k evaluation scores for each parameter setting. We test each of these hypotheses also on the Test portion
of the respective dataset. From the scores obtained this way we compute mean and standard deviation,
and perform statistical tests to find out whether observed score differences between parameter settings
are statistically significant.
Table 1 shows which portions of the SemEval Training dataset we used for 11-fold cross-validation.
In the following, we call these datasets Cross-Validation Sets. We chose the first 110 and 550 examples
to use for 11-fold cross-validation which results in training set sizes 100 and 500, respectively. As the
Answers-Students dataset was smaller, we merged its sentence pairs in order to obtain a Cross-Validation
Set size of 110 sentences, using the first 55 sentences from S1 and S2; and for 550 sentences, using the
first 275 sentences from S1 and S2 each. As Test portions we only use the original SemEval Test datasets
and we always test S1 and S2 separately.
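For concreteness, the folds can be built as in the following sketch (ours): the Cross-Validation Set of 110 or 550 sentences is cut into 11 equal parts, and each part serves once as held-out data while the remaining 100 or 500 sentences are used to learn a hypothesis.

    def eleven_folds(sentences):
        k = 11
        fold_size = len(sentences) // k
        for i in range(k):
            held_out = sentences[i * fold_size:(i + 1) * fold_size]
            training = sentences[:i * fold_size] + sentences[(i + 1) * fold_size:]
            yield training, held_out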
Dataset   Size   Cross-Validation Set Examples    Test Set S1   Test Set S2
H/I       100    S1 first 110                     all           *
H/I       500    S1 first 550                     all           *
H/I       100    S2 first 110                     *             all
H/I       500    S2 first 550                     *             all
A-S       100    S1 first 55 + S2 first 55        all           all
A-S       500    S1 first 275 + S2 first 275      all           all

Table 1: Dataset partitioning for 11-fold cross-validation experiments. Size indicates the training set size in cross-validation. Fields marked with * are not applicable, because we do not evaluate hypotheses learned from the S1 portion of the Headlines (H) and Images (I) datasets on the (independent) S2 portion of these datasets and vice versa. For the Answers-Students (A-S) dataset we need to merge S1 and S2 to obtain a training set size of 500 from the (small) SemEval Training dataset.
6.2 Scoring
We use difflib.SequenceMatcher in Python to match the sentence chunks obtained from learning in ILP
against the gold-standard sentence chunks. From the matchings obtained this way, we compute precision,
recall, and F1-score as follows.
Precision = (No. of Matched Sequences) / (No. of ILP-learned Chunks)

Recall = (No. of Matched Sequences) / (No. of Gold Chunks)

Score = 2 × (Precision × Recall) / (Precision + Recall)
To investigate the effectiveness of our mode bias for learning a hypothesis that can correctly classify
the dataset, we perform cross-validation (see above) and measure correctness of all hypotheses obtained
in cross-validation also on the Test set.
Because of differences in S1/S2 portions of datasets, we report results separately for S1 and S2. We
also evaluate classification separately for S1 and S2 for the Answers-Students dataset, although we train
on a combination of S1 and S2.
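A Python sketch of this scoring procedure (our reconstruction of the description above; the exact matching criteria of the evaluation script may differ in detail):

    import difflib

    def chunk_scores(predicted_chunks, gold_chunks):
        # Chunks are compared as sequences, e.g. lists of token-index tuples.
        matcher = difflib.SequenceMatcher(None, predicted_chunks, gold_chunks)
        matched = sum(block.size for block in matcher.get_matching_blocks())
        precision = matched / len(predicted_chunks) if predicted_chunks else 0.0
        recall = matched / len(gold_chunks) if gold_chunks else 0.0
        if precision + recall == 0:
            return precision, recall, 0.0
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1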
6.3 Experimental Methodology
We use Gringo version 4.5 (Gebser et al., 2011) and we use WASP version 2 (Git hash a44a95) (Alviano et al.,
2015a) configured to use unsat-core optimisation with disjunctive core partitioning, core trimming, a budget of 30 seconds for computing the first answer set and for shrinking unsatisfiable cores with progressive
shrinking strategy. These parameters were found most effective in preliminary experiments. We configure
our modified XHAIL solver to allocate a budget of 1800 seconds for the Induction part which optimises
the hypothesis (see Section 4.2). Memory usage never exceeded 5 GB.
Tables 4–6 contain the experimental results for each dataset, where columns Size, Pr, and So show the number of sentences used to learn the hypothesis, the pruning parameter for generalising the learned hypothesis (see Section 4.1), and the rate of how close the learned hypothesis is to the optimal result, respectively. So is computed according to the formula So = (Upperbound − Lowerbound) / Lowerbound, which is based on upper and lower bounds on the cost of the answer set. An So value of zero means
optimality, and values above zero mean suboptimality; so the higher the value, the further away from
optimality. Our results comprise the mean and standard deviation of the F1-scores obtained from our
11-fold cross-validation test set of S1 and S2 individually (column CV). Due to lack of space, we opted
to leave out the scores of precision and recall, but these values show similar trends as in the Test set.
For the Test sets of both S1 and S2, we include the mean and standard deviation of the Precision, Recall
and F1-scores (column group T).
When testing machine-learning based systems, comparing results obtained on a single test set is often
not sufficient, therefore we performed cross-validation to obtain mean and standard deviation about our
benchmark metrics. To obtain even more solid evidence about the significance of the measured results,
we additionally performed a one-tailed paired t-test to check if a measured F1 score is significantly
higher in one setting than in another one. We consider a result significant if p < 0.05, i.e., if there is
a probability of less than 5 % that the result is due to chance. Our test is one-tailed because we check
whether one result is higher than another one, and it is a paired test because we test different parameters
on the same set of 11 training/test splits in cross-validation. There are even more powerful methods for
proving significance of results such as bootstrap sampling (Efron and Tibshirani, 1986), however these
methods require markedly higher computational effort in experiments and our experiments already show
significance with the t-test.
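The significance check can be reproduced with scipy (a sketch under the assumption that scipy is available; ttest_rel returns a two-sided p-value, which is halved for the one-tailed test):

    from scipy import stats

    def one_tailed_paired_ttest(f1_scores_a, f1_scores_b):
        # F1 scores of two parameter settings on the same 11 cross-validation folds.
        t_stat, p_two_sided = stats.ttest_rel(f1_scores_a, f1_scores_b)
        p_one_tailed = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
        # Setting A is significantly better than setting B if p_one_tailed < 0.05.
        return t_stat, p_one_tailed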
Rows of Tables 4–6 contain results for learning from 100 resp. 500 example sentences, and for different
pruning parameters. For both learning set sizes, we increased pruning stepwise starting from value 0
until we found an optimal hypothesis (So = 0) or until we saw a clear peak in classification score in
cross-validation (in that case, increasing the pruning is pointless because it would increase optimality of
the hypothesis but decrease the prediction scores).
Note that datasets have been tokenised very differently, and that also state-of-the-art systems in
SemEval used separate preprocessing methods for each dataset. We follow this strategy to allow a fair
comparison. One example for such a difference is the Images dataset, where the ‘.’ is considered as
a separate token and is later defined as a separate chunk; however, in the Answers-Students dataset it is integrated into neighboring tokens.
6.4 Results
We first discuss the results of experiments with varying training set size and varying pruning parameter,
then compare our approach with the state-of-the-art systems, and finally inspect the optimal hypotheses.
Training Set Size and Pruning Parameter Tables 4–6 show results of experiments, where T
denotes the Test portion of the respective dataset.
We observe that by increasing the size of the training set to learn the hypothesis, our scores improved
considerably. Due to more information being provided, the learned hypothesis can predict with higher
F1 score. We also observed that for the smaller training set size (100 sentences), lower pruning numbers
(in rare cases even P r=0) resulted in achieving the optimal solution. For a bigger training set size (500
sentences), without pruning the ILP procedure does not find solutions close to the optimal solution.
However, by using pruning values up to P r=10 we can reduce the size of the search space and find
hypotheses closer to the optimum, which predict chunks with a higher F1 score. Our statistical test
shows that, in many cases, several increments of the P r parameter yield significantly better results, up
to a point where prediction accuracy degrades because too many examples are pruned away. To select
the best hypothesis, we increase the pruning parameter Pr until we reach the peak in the F1 score in
cross-validation.
Finding optimal hypotheses in the Inductive search of XHAIL (where So=0) is easily attained when
learning from 100 sentences. For learning from 500 sentences, very high pruning values result in a trivial
optimal hypothesis (i.e., every token is a chunk) which has no predictive power, hence we do not increase
P r beyond a value of 10.
Note that we never encountered timeouts in the Abduction component of XHAIL, only in the Induction part. The original XHAIL tool without our improvements yields only timeouts for learning from 500
examples, and few hypotheses for learning from 100 examples. Therefore we do not show these results
in tables.
State-of-the-art comparison Table 2 shows a comparison of our results with the baseline and the
three best systems from the chunking subtask of Task 2 of SemEval 2016 (Agirre et al., 2016):
DTSim (Banjade et al., 2016), FBK-HLT-NLP (Magnolini et al., 2016) and runs 1 and 2 of IISCNLP
(Tekumalla and Jat, 2016). We also compare with results of our own system ‘Inspire-Manual’ (Kazmi and Schüller,
2016).
S1
P
R
F1
Headlines
Baseline
DTSim
FBK-HLT-NLP
IISCNLP - Run1
IISCNLP - Run2
Inspire - Manual
Inspire - Learned
60.5
72.5
63.6
61.9
67.6
64.5
68.1±2.5
36.6
74.3
51.3
68.5
68.5
70.4
70.6±2.5
Images
S2
Baseline
DTSim
FBK-HLT-NLP
IISCNLP - Run1
IISCNLP - Run2
Inspire - Manual
Inspire - Learned
19.0
77.8
41.0
61.6
65.8
74.5
66.4±15.5
Answers-Students
Data System
Baseline
DTSim
FBK-HLT-NLP
IISCNLP - Run1
IISCNLP - Run2
Inspire - Manual
Inspire - Learned
62.1
78.5
70.3
67.9
63.0
66.8
66.8±2.8
Rank
P
R
F1
Rank
37.6
71.3
*
51.5
61.4
64.5
***
62.4
65.4±2.6 **
63.6
72.1
57.1
61.1
71.4
64.3
67.2±1.3
42.5
74.3
51.1
65.7
71.9
68.4
68.2±2.4
42.8
70.5
*
48.3
60.1
68.9
**
62.2
64.0±1.8 ***
15.7
77.4
39.2
60.9
65.6
74.2
74.3±0.7
16.4
77.5
*
38.8
60.7
65.4
74.2
**
73.7±0.7 ***
13.6
79.5
40.5
66.1
67.7
73.8
71.1±0.8
17.5
79.1
43.1
66.2
67.2
73.6
71.1±0.8
13.5
79.2
*
40.8
65.9
67.3
73.6
**
70.9±0.8 ***
30.9
73.6
52.5
63.9
59.8
64.4
70.5±2.5
34.6
72.5
*
52.8
60.7
***
56.9
59.7
63.5±2.4 **
59.2
83.3
72.4
65.7
66.2
71.2
89.3±3.0
33.4
79.2
59.1
55.0
52.5
62.5
80.1±0.7
36.6
77.8
**
59.3
54.0
52.8
62.1
***
80.3±1.7 *
Table 2: Comparison with systems from SemEval 2016 Task 2. The number of stars shows the rank of
the system.
• The baseline makes use of the automatic probabilistic chunker from the IXA-pipeline which provides
Perceptron models (Collins, 2002) for chunking and is trained on CONLL2000 corpora and corrected
manually,
• DTSim uses a Conditional Random Field (CRF) based chunking tool using only POS-tags as
features,
• FBK-HLT-NLP obtains chunks using a Python implementation of MBSP chunker which uses a
Memory-based part-of-speech tagger generator (Daelemans et al., 1996),
• Run 1 of IISCNLP uses OpenNLP chunker which divides the sentence into syntactically correlated
parts of words, but does not specify their internal structure, nor their role in the main sentence.
Run 2 uses Stanford NLP Parser to create parse trees and then uses a perl script to create chunks
based on the parse trees, and
• Inspire-Manual (our previous system) makes use of manually set chunking rules (Abney, 1991)
using ASP (Kazmi and Schüller, 2016).
Using the gold-standard chunks provided by the organisers we were able to compute the precision,
recall, and F1-scores for analysis on the Headlines, Images and Answers-Students datasets.
For the scores of our system ‘Inspire-Learned’, we used the mean and standard deviation of the best configuration of our system as obtained in cross-validation experiments on the Test set and compared against the other systems’ Test set results. Our system’s performance is quite robust: it always scores within the top three systems.
Inspection of Hypotheses Table 3 shows the rules that are obtained from the hypothesis generated
by XHAIL from Sentence 1 files of all the datasets. We have also tabulated the common rules present
between the datasets and the extra rules which differentiate the datasets from each other.
Rules (marked in the original table, in columns H, I, and A-S, according to whether each rule occurs in the Headlines, Images, and Answers-Students hypothesis):
split(V) :- token(V), pos(c_VBD,V).
split(V) :- token(V), nextpos(c_IN,V).
split(V) :- token(V), nextpos(c_VBZ,V).
split(V) :- token(V), pos(c_VB,V).
split(V) :- token(V), nextpos(c_TO,V).
split(V) :- token(V), nextpos(c_VBD,V).
split(V) :- token(V), nextpos(c_VBP,V).
split(V) :- token(V), pos(c_VBZ,V), nextpos(c_DT,V).
split(V) :- token(V), pos(c_NN,V), nextpos(c_RB,V).
split(V) :- token(V), pos(c_NNS,V).
split(V) :- token(V), pos(c_VBP,V).
split(V) :- token(V), pos(c_VBZ,V).
split(V) :- token(V), pos(c_c,V).
split(V) :- token(V), nextpos(c_POS,V).
split(V) :- token(V), nextpos(c_VBN,V).
split(V) :- token(V), nextpos(c_c,V).
split(V) :- token(V), pos(c_PRP,V).
split(V) :- token(V), pos(c_RP,V).
split(V) :- token(V), pos(c_p,V).
split(V) :- token(V), nextpos(c_p,V).
split(V) :- token(V), pos(c_CC,V), nextpos(c_VBG,V).
split(V) :- token(V), pos(c_NN,V), nextpos(c_VBD,V).
split(V) :- token(V), pos(c_NN,V), nextpos(c_VBG,V).
split(V) :- token(V), pos(c_NN,V), nextpos(c_VBN,V).
split(V) :- token(V), pos(c_NNS,V), nextpos(c_VBG,V).
split(V) :- token(V), pos(c_RB,V), nextpos(c_IN,V).
split(V) :- token(V), pos(c_VBG,V), nextpos(c_DT,V).
split(V) :- token(V), pos(c_VBG,V), nextpos(c_JJ,V).
split(V) :- token(V), pos(c_VBG,V), nextpos(c_PRPd,V).
split(V) :- token(V), pos(c_VBG,V), nextpos(c_RB,V).
split(V) :- token(V), pos(c_VBZ,V), nextpos(c_IN,V).
split(V) :- token(V), pos(c_EX,V).
split(V) :- token(V), pos(c_RB,V).
split(V) :- token(V), pos(c_VBG,V).
split(V) :- token(V), pos(c_WDT,V).
split(V) :- token(V), pos(c_WRB,V).
split(V) :- token(V), nextpos(c_EX,V).
split(V) :- token(V), nextpos(c_MD,V).
split(V) :- token(V), nextpos(c_VBG,V).
split(V) :- token(V), nextpos(c_RB,V).
split(V) :- token(V), pos(c_IN,V), nextpos(c_NNP,V).
split(V) :- token(V), pos(c_NN,V), nextpos(c_WDT,V).
split(V) :- token(V), pos(c_NN,V), nextpos(c_IN,V).
split(V) :- token(V), pos(c_NNS,V), nextpos(c_IN,V).
split(V) :- token(V), pos(c_NNS,V), nextpos(c_VBP,V).
split(V) :- token(V), pos(c_RB,V), nextpos(c_DT,V).

Table 3: Rules in the best hypotheses obtained from training on 500 sentences (S1), where X marks the presence of the rule in a given dataset (the per-rule X marks of the original table are not reproduced here).
punctuation are ‘c_p’ for sentence-final punctuation (‘.’, ‘ ?’, and ‘ !’) and ‘c_c’ for sentence-separating
punctuation (‘,’, ‘;’, and ‘:’).
Rules which occur in all learned hypotheses can be interpreted as follows (recall the meaning of
split(X) from Section 5.2): (i) chunks end at past tense verbs (VBD, e.g., ‘walked’), (ii) chunks begin at
subordinating conjunctions and prepositions (IN, e.g., ‘in’), and (iii) chunks begin at 3rd person singular
present tense verbs (VBZ, e.g., ‘walks’). Rules that are common to H and AS datasets are as follows:
(i) chunks end at base forms of verbs (VB, e.g., ‘[to] walk’), (ii) chunks begin at ‘to’ prepositions (TO),
and (iii) chunks begin at past tense verbs (VBD). The absence of (i) in hypotheses for the Images dataset
can be explained by the rareness of such verbs in captions of images. Note that (iii) together with the
common rule (i) means that all VBD verbs become separate chunks in H and AS datasets. Rules that
are common to I and AS datasets are as follows: (i) chunks begin at non-3rd person verbs in present
tense (VBP, e.g., ‘[we] walk’), (ii) chunk boundaries are between a determiner (DT, e.g., ‘both’) and a
3rd person singular present tense verb (VBZ), and (iii) chunk boundaries are between adverbs (RB, e.g.,
‘usually’) and common, singular, or mass nouns (NN, e.g., ‘humor’). Interestingly, there are no rules
common to H and I datasets except for the three rules mutual to all three datasets.
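To make these shared rules concrete, here is a small illustrative sketch in Python that applies them to an invented POS-tagged sentence. The example sentence, its tags, and the convention that split(V) places a chunk boundary immediately after token V are our own assumptions for illustration; they are not taken from the datasets.

def chunk(tokens, tags):
    """Apply the three rules shared by all datasets:
    (i) a chunk ends at a past tense verb (VBD),
    (ii) a chunk begins at a subordinating conjunction/preposition (IN),
    (iii) a chunk begins at a 3rd person singular present tense verb (VBZ)."""
    chunks, current = [], []
    for i, (tok, tag) in enumerate(zip(tokens, tags)):
        current.append(tok)
        next_tag = tags[i + 1] if i + 1 < len(tags) else None
        ends_here = tag == "VBD"                  # rule (i): boundary after a VBD token
        next_begins = next_tag in ("IN", "VBZ")   # rules (ii) and (iii): boundary before IN/VBZ
        if ends_here or next_begins:
            chunks.append(current)
            current = []
    if current:
        chunks.append(current)
    return chunks

tokens = ["The", "dog", "walked", "in", "the", "park"]
tags = ["DT", "NN", "VBD", "IN", "DT", "NN"]
print(chunk(tokens, tags))  # [['The', 'dog', 'walked'], ['in', 'the', 'park']]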
For rules occurring only in single datasets, we only discuss a few interesting cases in the following. Rules that are unique to the Headlines dataset include rules which indicate that the sentence separators ‘,’, ‘;’, and ‘:’ become single chunks; moreover, chunks start at genitive markers (POS, ‘’s’). Neither is the case for the other two datasets. Rules unique to the Images dataset include that sentence-final punctuation (‘.’, ‘?’, and ‘!’) becomes a separate chunk, rules for chunk boundaries between verb (VB_) and noun (NN_) tokens, and chunk boundaries between possessive pronouns (PRP$, encoded as ‘c_PRPd’, e.g., ‘their’) and participles/gerunds (VBG, e.g., ‘falling’). Rules unique to the Answers-Students dataset include chunks containing ‘existential there’ (EX), adverb tokens (RB), gerunds (VBG), and several rules for splits related to WH-determiners (WDT, e.g., ‘which’), WH-adverbs (WRB, e.g., ‘how’), and prepositions (IN).
We see that learned hypotheses are interpretable, which is not the case in classical machine learning
techniques such as Neural Networks (NN), Conditional Random Fields (CRF), and Support Vector
Machines (SVM).
6.5 Discussion
We next discuss the potential impact of our approach in NLP and in other applications, outline the
strengths and weaknesses, and discuss reasons for several design choices we made.
Impact and Applicability ILP is applicable to many problems of traditional machine learning, but is usually only applicable to small datasets. Our addition of pruning enables learning from larger datasets
at the cost of obtaining a more coarse-grained hypothesis and potentially suboptimal solutions.
The main advantage of ILP is interpretability and that it can achieve good results already with small
datasets. Interpretability of the learned rule-based hypothesis makes the learned hypothesis transparent
as opposed to black-box models of other approaches in the field such as Conditional Random Fields,
Neural Networks, or Support Vector Machines. These approaches are often purely statistical, operate on
big matrices of real numbers instead of logical rules, and are not interpretable. The disadvantage of ILP
is that it often does not achieve the predictive performance of purely statistical approaches because the
complexity of ILP learning limits the number of distinct features that can be used simultaneously.
Our approach allows finding suboptimal hypotheses which yield a higher prediction accuracy than
an optimal hypothesis trained on a smaller training set. Learning a better model from a larger dataset
is exactly what we would expect in machine learning. Before our improvement of XHAIL, obtaining any
hypothesis from larger datasets was impossible: the original XHAIL tool does not return any hypothesis
within several hours when learning from 500 examples.
Our chunking approach learns from a small portion of the full SemEval Training dataset, based
on only POS-tags, but it still achieves results close to the state-of-the-art. Additionally it provides an
interpretable model that allowed us to pinpoint non-uniform annotation practices in the three datasets of
the SemEval 2016 iSTS competition. These observations give direct evidence for differences in annotation practice across the three datasets with respect to punctuation and genitives, as well as differences in the content of the datasets.
Strengths and weaknesses Our additions of pruning and the usage of suboptimal answer sets make ILP more robust because they permit learning from larger datasets and obtaining (potentially suboptimal) solutions faster.
Our addition of a time budget and the usage of suboptimal answer sets is purely beneficial to the XHAIL approach. If we disregard the additional benefits of pruning, i.e., if we disable pruning by setting Pr = 0, then within the same time budget the same optimal solutions are found as with
the original XHAIL approach. In addition, before finding the optimal solution, suboptimal hypotheses
are provided in an online manner, together with information about their distance from the optimal
solution.
The strength of pruning before the Induction phase is that it permits learning from a bigger set of examples, while still considering all examples in the dataset. A weakness of pruning is that a hypothesis which fits perfectly to the data might not be found anymore, even if the mode bias could permit such a perfect fit. In NLP applications this is not a big disadvantage, because noise usually prevents a perfect fit anyway, and overfitting is indeed often a problem. However, in other application domains such
as learning to interpret input data from user examples (Gulwani et al., 2015), a perfect fit to the input
data might be desired and required. Note that pruning examples to learn from inconsistent data as done
by Tang and Mooney (Tang and Mooney, 2001) is not necessary for our approach. Instead, non-covered
examples incur a cost that is optimised to be as small as possible.
Design decisions In our study, we use a simple mode bias containing only the current and next POS
tags, which is a deliberate choice to make results easier to compare. We performed experiments with
additional body atoms head/2 and rel/2 in the body mode bias, moreover with negation in the body
mode bias. However, these experiments yielded significantly larger hypotheses with only small increases
in accuracy. Therefore we here limit the analysis to the simple case and consider more complex mode
biases as future work. Note that the best state-of-the-art system (DTSim) is a CRF model solely based
on POS-tags, just as our hypothesis is only making use of POS-tags. By considering more than the
current and immediately succeeding POS tag, DTSim can achieve better results than we do.
The representation of examples is an important part of our chunking case as described in Section 5.
We define predicate goodchunk with rules that consider presence and absence of splits for each chunk. We
make use of the power of NAF in these rules. We also experimented with an example representation that
just gave all desired splits as #example split(X) and all undesired splits as #example not split(Y).
This representation contains an imbalance between the split and not split classes; moreover, chunks are not
represented as a concept that can be optimised in the inductive search for the best hypothesis. Hence,
it is not surprising that this simpler representation of examples gave drastically worse scores, and we do
not report any of these results in detail.
7 Conclusion and Future Work
Inductive Logic Programming combines logic programming and machine learning, and it provides interpretable models, i.e., logical hypotheses, which are learned from data. ILP has been applied to a variety
of NLP and other problems such as parsing (Tang and Mooney, 2001; Zelle and Mooney, 1996), automatic construction of biological knowledge bases from scientific abstracts (Craven and Kumlien, 1999),
automatic scientific discovery (King et al., 2004), and in Microsoft Excel (Gulwani et al., 2015), where users can specify data extraction rules using examples. Therefore, ILP research has the potential for
being used in a wide range of applications.
In this work, we explored the usage of ILP for the NLP task of chunking and extended the XHAIL ILP
solver to increase its scalability and applicability for this task. Results indicate that ILP is competitive to
state-of-the-art ML techniques for this task and that we successfully extended XHAIL to allow learning
from larger datasets than previously possible. Learning a hypothesis using ILP has the advantage of an
interpretable representation of the learned knowledge, such that we know exactly which rule has been
learned by the program and how it affects our NLP task. In this study, we also gain insights about the
differences and common points of datasets that we learned a hypothesis from. Moreover, ILP permits
learning from small training sets where techniques such as Neural Networks fail to provide good results.
As a first contribution to the ILP tool XHAIL we have upgraded the software so that it uses the newest solver technology, and so that this technology is used in a best-effort manner that can utilise suboptimal search results. This is effective in practice, because finding the optimal solution can be disproportionately more difficult than finding a solution close to the optimum. Moreover, the ASP technique we use here provides clear information about the degree of suboptimality. During our experiments, a new version of
Clingo was published which contains most techniques in WASP (except for core shrinking). We decided
to continue using WASP for this study because we saw that core shrinking is also beneficial to search.
Extending XHAIL to use Clingo in a best-effort manner is quite straightforward.
As a second contribution to XHAIL we have added a pruning parameter to the algorithm that
allows fine-tuning the search space for hypotheses by filtering out rule candidates that are supported
by fewer examples than other rules. This addition is a novel contribution to the algorithm, which leads
to significant improvements in efficiency, and increases the number of hypotheses that are found in a
given time budget. While pruning makes the method incomplete, it does not reduce expressivity. The
hypotheses and background knowledge may still contain unrestricted Negation as Failure. Pruning in
our work is similar to the concept of regularisation in ML and is there to prevent overfitting in the hypothesis generation. Pruning enables the learning of logical hypotheses with dataset sizes that were not feasible before. We experimentally observed a trade-off between, on the one hand, finding an optimal hypothesis that considers all potential rules and, on the other, finding a (potentially suboptimal) hypothesis that ignores rules supported by only few examples. Therefore the pruning parameter has to be adjusted on an application-by-application basis.
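As a rough illustration of the pruning idea, the Python sketch below counts how many examples support each candidate rule and discards candidates at or below a threshold Pr before the expensive induction search. The data structures, the toy candidates, and the exact support comparison are our own simplifications, not XHAIL’s actual implementation.

from collections import Counter

def prune_candidates(candidate_rules_per_example, pr):
    """Keep only rule candidates supported by more than `pr` examples.
    candidate_rules_per_example: list of sets, one per training example,
    containing the (generalised) rule candidates derived from that example.
    pr = 0 keeps every candidate, mimicking the unpruned behaviour."""
    support = Counter()
    for rules in candidate_rules_per_example:
        support.update(rules)  # each example supports each of its own candidates once
    return {rule for rule, count in support.items() if count > pr}

# Toy usage: three examples, one widely supported candidate, two rare ones.
examples = [
    {"split(V) :- token(V), pos(c_VBD,V)", "split(V) :- token(V), nextpos(c_IN,V)"},
    {"split(V) :- token(V), pos(c_VBD,V)"},
    {"split(V) :- token(V), pos(c_VBD,V)", "split(V) :- token(V), pos(c_WRB,V)"},
]
print(prune_candidates(examples, pr=1))
# {'split(V) :- token(V), pos(c_VBD,V)'}  (supported by 3 > 1 examples)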
Our work has focused on providing results comparable to ML techniques, and we have not utilised the full power of ILP with NAF in rule bodies and predicate invention. As future work, we plan to extend the predicates usable in hypotheses to provide a more detailed representation of the NLP task; moreover, we plan to enrich the background knowledge to aid ILP in learning a better hypothesis with a deeper
structure representing the boundaries of chunks.
We provide the modified XHAIL system in a public repository fork (Bragaglia and Schüller, 2016).
Acknowledgments
This research has been supported by the Scientific and Technological Research Council of Turkey
(TUBITAK) [grant number 114E777] and by the Higher Education Commission of Pakistan (HEC).
We are grateful to Carmine Dodaro for providing us with support regarding the WASP solver.
References
Abney, S. P. (1991). Parsing by chunks. In Principle-based parsing, pages 257–278.
Agirre, E., Gonzalez-Agirre, A., Lopez-Gazpio, I., Maritxalar, M., Rigau, G., and Uria, L. (2016).
SemEval-2016 task 2: Interpretable Semantic Textual Similarity. In Proceedings of SemEval, pages
512–524.
Alviano, M. and Dodaro, C. (2016). Anytime answer set optimization via unsatisfiable core shrinking.
Theory and Practice of Logic Programming, 16(5-6):533–551.
Alviano, M., Dodaro, C., Faber, W., Leone, N., and Ricca, F. (2013). WASP: A native ASP solver based
on constraint learning. In Logic Programming and Nonmonotonic Reasoning, pages 54–66.
Alviano, M., Dodaro, C., Leone, N., and Ricca, F. (2015a). Advances in WASP. In Logic Programming
and Nonmonotonic Reasoning, pages 40–54.
Alviano, M., Dodaro, C., Marques-Silva, J., and Ricca, F. (2015b). Optimum stable model search:
algorithms and implementation. Journal of Logic and Computation, page exv061.
Andres, B., Kaufmann, B., Matheis, O., and Schaub, T. (2012). Unsatisfiability-based optimization in
clasp. In International Conference on Logic Programming, Technical Communications, pages 212–221.
Ansótegui, C., Bonet, M. L., and Levy, J. (2013). SAT-based MaxSAT algorithms. Artificial Intelligence,
196:77–105.
Banjade, R., Maharjan, N., Niraula, N. B., and Rus, V. (2016). DTSim at SemEval-2016 task 2:
Interpreting Similarity of Texts Based on Automated Chunking, Chunk Alignment and Semantic
Relation Prediction. In Proceedings of SemEval, pages 809–813.
Bohnet, B., Nivre, J., Boguslavsky, I., Farkas, R., Ginter, F., and Hajič, J. (2013). Joint morphological
and syntactic analysis for richly inflected languages. Transactions of the Association for Computational
Linguistics, 1:415–428.
Bragaglia, S. and Schüller, P. (2016). XHAIL (Version 4c5e0b8) [System for eXtended Hybrid Abductive
Inductive Learning]. Retrieved from https://github.com/knowlp/XHAIL.
Brewka, G., Eiter, T., and Truszczyński, M. (2011). Answer set programming at a glance. Communications of the ACM, 54(12):92–103.
Calimeri, F., Faber, W., Gebser, M., Ianni, G., Kaminski, R., Krennwallner, T., Leone, N., Ricca, F.,
and Schaub, T. (2012). ASP-Core-2 Input language format. Technical report, ASP Standardization
Working Group.
Clocksin, W. and Mellish, C. S. (2003). Programming in PROLOG. Springer Science & Business Media.
Collins, M. (2002). Discriminative training methods for hidden markov models: Theory and experiments
with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural
language processing-Volume 10, pages 1–8. Association for Computational Linguistics.
Craven, M. and Kumlien, J. (1999). Constructing biological knowledge bases by extracting information
from text sources. In International Conference on Intelligent Systems for Molecular Biology (ISMB),
pages 77–86.
Cussens, J. (2001a). Integrating probabilistic and logical reasoning. In Foundations of Bayesianism,
pages 241–260.
Cussens, J. (2001b). Parameter estimation in stochastic logic programs. Machine Learning, 44(3):245–
271.
Daelemans, W., Zavrel, J., Berck, P., and Gillis, S. (1996). Mbt: A memory-based part of speech
tagger-generator. arXiv preprint cmp-lg/9607012.
De Raedt, L. (1997). Logical settings for concept-learning. Artificial Intelligence, 95(1):187–201.
De Raedt, L. and Kersting, K. (2008). Probabilistic inductive logic programming. In Probabilistic
Inductive Logic Programming, pages 1–27.
Dumais, S., Platt, J., Heckerman, D., and Sahami, M. (1998). Inductive learning algorithms and representations for text categorization. In Proceedings of the Seventh International Conference on Information
and Knowledge Management, pages 148–155.
Efron, B. and Tibshirani, R. (1986). Bootstrap Methods for Standard Errors, Confidence Intervals, and
Other Measures of Statistical Accuracy. Statistical Science, 1(1):54–75.
Erdem, E., Gelfond, M., and Leone, N. (2016). Applications of Answer Set Programming. AI Magazine,
37(3):53–68.
Gebser, M., Kaminski, R., Kaufmann, B., Ostrowski, M., Schaub, T., and Thiele, S. (2008). A user’s
guide to gringo, clasp, clingo, and iclingo. Technical report, University of Potsdam.
Gebser, M., Kaminski, R., Kaufmann, B., and Schaub, T. (2012a). Answer set solving in practice.
Synthesis Lectures on Artificial Intelligence and Machine Learning, 6(3):1–238.
Gebser, M., Kaminski, R., König, A., and Schaub, T. (2011). Advances in gringo series 3. In International
Conference on Logic Programming and Non-monotonic Reasoning, pages 345–351.
Gebser, M., Kaufmann, B., and Schaub, T. (2012b). Conflict-driven answer set solving: From theory to
practice. Artificial Intelligence, 187:52–89.
Gelfond, M. and Lifschitz, V. (1988). The Stable Model Semantics for Logic Programming. In International Conference and Symposium on Logic Programming, pages 1070–1080.
Gulwani, S., Hernandez-Orallo, J., Kitzelmann, E., Muggleton, S. H., Schmid, U., and Zorn, B. (2015).
Inductive programming meets the real world. Communications of the ACM, 58(11):90–99.
Kakas, A. C., Kowalski, R. A., and Toni, F. (1992). Abductive Logic Programming. Journal of Logic
and Computation, 2(6):719–770.
Katzouris, N., Artikis, A., and Paliouras, G. (2015). Incremental learning of event definitions with
inductive logic programming. Machine Learning, 100(2-3):555–585.
Kazmi, M. and Schüller, P. (2016). Inspire at SemEval-2016 task 2: Interpretable semantic textual
similarity alignment based on answer set programming. In Proceedings of SemEval, pages 1109–1115.
King, R. D., Whelan, K. E., Jones, F. M., Reiser, P. G. K., Bryant, C. H., Muggleton, S. H., Kell, D. B.,
and Oliver, S. G. (2004). Functional genomic hypothesis generation and experimentation by a robot
scientist. Nature, 427(6971):247–252.
Kitzelmann, E. (2009). Inductive programming: A survey of program synthesis techniques. In International Workshop on Approaches and Applications of Inductive Programming, pages 50–73.
Law, M., Russo, A., and Broda, K. (2014). Inductive learning of answer set programs. In European
Workshop on Logics in Artificial Intelligence, pages 311–325.
Law, M., Russo, A., and Broda, K. (2015). Learning weak constraints in answer set programming.
Theory and Practice of Logic Programming, 15(4-5):511–525.
Lifschitz, V. (2002). Answer set programming and plan generation. Artificial Intelligence, 138(1-2):39–54.
Lloyd, J. W. (2012). Foundations of logic programming. Springer Science & Business Media.
Magnolini, S., Feltracco, A., and Magnini, B. (2016). FBK-HLT-NLP at SemEval-2016 task 2: A
multitask, deep learning approach for interpretable semantic textual similarity. In Proceedings of
SemEval, pages 783–789.
Manning, C. D. and Schütze, H. (1999). Foundations of statistical natural language processing (Vol.
999). Cambridge:MIT Press.
Manning, C. D., Surdeanu, M., Bauer, J., Finkel, J. R., Bethard, S., and McClosky, D. (2014). The
Stanford CoreNLP natural language processing toolkit. In ACL System Demonstrations, pages 55–60.
Mitra, A. and Baral, C. (2016). Addressing a question answering challenge by combining statistical
methods with inductive rule learning and reasoning. In Association for the Advancement of Artificial
Intelligence, pages 2779–2785.
Mooney, R. J. (1996). Inductive logic programming for natural language processing. In Inductive Logic
Programming, pages 1–22.
Muggleton, S. (1991). Inductive logic programming. New generation computing, 8(4):295–318.
Muggleton, S. (1995). Inverse entailment and Progol. New generation computing, 13(3-4):245–286.
Muggleton, S. (1999). Inductive logic programming: issues, results and the challenge of learning language
in logic. Artificial Intelligence, 114(1-2):283–296.
Muggleton, S. (2002). Learning structure and parameters of stochastic logic programs. In International
Conference on Inductive Logic Programming, pages 198–206.
Muggleton, S. and Buntine, W. (1992). Machine invention of first-order predicates by inverting resolution.
In Proceedings of the Fifth International Conference on Machine Learning, pages 339–352.
Muggleton, S. and De Raedt, L. (1994). Inductive logic programming: Theory and methods. The Journal
of Logic Programming, 19:629–679.
Muggleton, S., De Raedt, L., Poole, D., Bratko, I., Flach, P., Inoue, K., and Srinivasan, A. (2012). ILP
turns 20. Machine Learning, 86(1):3–23.
Muggleton, S. et al. (1996). Stochastic logic programs. Advances in Inductive Logic Programming,
32:254–264.
Muggleton, S., Feng, C., et al. (1990). Efficient induction of logic programs. The Turing Institute.
Muggleton, S. H., Lin, D., Pahlavi, N., and Tamaddoni-Nezhad, A. (2014). Meta-interpretive learning:
application to grammatical inference. Machine Learning, 94(1):25–49.
Plotkin, G. D. (1970). A note on inductive generalization. Machine intelligence, 5(1):153–163.
Plotkin, G. D. (1971). A further note on inductive generalization. Machine intelligence, 6:101–124.
Quinlan, J. R. (1990). Learning logical definitions from relations. Machine Learning, 5(3):239–266.
Rashtchian, C., Young, P., Hodosh, M., and Hockenmaier, J. (2010). Collecting image annotations using
Amazon’s Mechanical Turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech
and Language Data with Amazon’s Mechanical Turk, pages 139–147.
Ray, O. (2009). Nonmonotonic abductive inductive learning. Journal of Applied Logic, 7(3):329–340.
Sammut, C. and Banerji, R. B. (1986). Learning concepts by asking questions. Machine Learning: An
artificial intelligence approach, 2:167–192.
Sato, T., Kameya, Y., and Zhou, N.-F. (2005). Generative modeling with failure in PRISM. In International Joint Conference on Artificial Intelligence, pages 847–852.
Schüller, P. (2013). Flexible Combinatory Categorial Grammar Parsing using the CYK Algorithm and
Answer Set Programming. In International Conference on Logic Programming and Non-monotonic
Reasoning, pages 499–511.
Schüller, P. (2014). Tackling Winograd Schemas by Formalizing Relevance Theory in Knowledge Graphs.
In International Conference on Principles of Knowledge Representation and Reasoning (KR), pages
358–367. AAAI Press.
Schüller, P. (2016). Modeling Variations of First-Order Horn Abduction in Answer Set Programming.
Fundamenta Informaticae, 149:159–207.
Schwitter, R. (2012). Answer Set Programming via Controlled Natural Language Processing. In Controlled Natural Language, pages 26–43.
Shapiro, E. Y. (1983). Algorithmic program debugging. MIT press.
Sharma, A., Vo, N. H., Aditya, S., and Baral, C. (2015). Towards addressing the winograd schema
challenge - Building and using a semantic parser and a knowledge hunting module. In International
Joint Conference on Artificial Intelligence (IJCAI), pages 1319–1325.
Srinivasan, A. (2001). The Aleph manual. Retrieved from http://www.cs.ox.ac.uk/activities/machinelearning/Aleph/aleph.html.
Tang, L. R. and Mooney, R. J. (2001). Using multiple clause constructors in inductive logic programming
for semantic parsing. In European Conference on Machine Learning, pages 466–477.
Tekumalla, L. and Jat, S. (2016). IISCNLP at SemEval-2016 task 2: Interpretable STS with ILP based
Multiple Chunk Aligner. In Proceedings of SemEval, pages 790–795.
Tjong Kim Sang, E. F. and Buchholz, S. (2000). Introduction to the CoNLL-2000 shared task: Chunking.
In Workshop on Learning Language in Logic and Conference on Computational Natural Language
Learning, pages 127–132.
Wirth, R. (1989). Completing logic programs by inverse resolution. In European Working Session on
Learning, pages 239–250.
Zelle, J. M. and Mooney, R. J. (1996). Learning to parse database queries using inductive logic programming. In Proceedings of the National Conference on Artificial Intelligence, pages 1050–1055.
Zelle, J. M., Mooney, R. J., and Konvisser, J. B. (1994). Combining top-down and bottom-up techniques
in inductive logic programming. In Proceedings of the Eleventh International Conference on Machine
Learning, pages 343–351.
Table 4: Experimental Results for Headlines Dataset, where * indicates statistical significance (p < 0.05). Additionally, for Size = 500, the F1 scores for all pruning values Pr > 1 are significantly better than Pr = 0 (p < 0.05).
Table 5: Experimental Results for Images Dataset, where * indicates statistical significance (p < 0.05). Additionally, for Size = 500, the F1 scores for all pruning values Pr > 0 are significantly better than Pr = 0 (p < 0.05).
Table 6: Experimental Results for Answers-Students Dataset, where * indicates statistical significance (p < 0.05). Additionally, for Size = 500, the F1 scores for all pruning values Pr > 0 are significantly better than Pr = 0 (p < 0.05).
arXiv:1606.03860v2 [stat.ML] 23 Sep 2017
Robust Probabilistic Modeling with Bayesian Data Reweighting
Yixin Wang, Alp Kucukelbir, David M. Blei
Columbia University, New York City, USA. Correspondence to: Yixin Wang <[email protected]>.
Abstract
Probabilistic models analyze data by relying on
a set of assumptions. Data that exhibit deviations from these assumptions can undermine inference and prediction quality. Robust models offer
protection against mismatch between a model’s
assumptions and reality. We propose a way to
systematically detect and mitigate mismatch of a
large class of probabilistic models. The idea is
to raise the likelihood of each observation to a
weight and then to infer both the latent variables
and the weights from data. Inferring the weights
allows a model to identify observations that match
its assumptions and down-weight others. This enables robust inference and improves predictive
accuracy. We study four different forms of mismatch with reality, ranging from missing latent
groups to structure misspecification. A Poisson
factorization analysis of the Movielens 1M dataset
shows the benefits of this approach in a practical
scenario.
1. Introduction
Probabilistic modeling is a powerful approach to discovering hidden patterns in data. We begin by expressing assumptions about the class of patterns we expect to discover;
this is how we design a probability model. We follow by
inferring the posterior of the model; this is how we discover
the specific patterns manifest in an observed data set. Advances in automated inference (Hoffman & Gelman, 2014;
Mansinghka et al., 2014; Kucukelbir et al., 2017) enable
easy development of new models for machine learning and
artificial intelligence (Ghahramani, 2015).
In this paper, we present a recipe to robustify probabilistic
models. What do we mean by “robustify”? Departure from
a model’s assumptions can undermine its inference and
prediction performance. This can arise due to corrupted
observations, or in general, measurements that do not belong
to the process we are modeling. Robust models should
perform well in spite of such mismatch with reality.
Consider a movie recommendation system. We gather data
of people watching movies via the account they use to log in.
Imagine a situation where a few observations are corrupted.
For example, a child logs in to her account and regularly
watches popular animated films. One day, her parents use
the same account to watch a horror movie. Recommendation models, like Poisson factorization (PF), struggle with
this kind of corrupted data (see Section 4): they begin to
recommend horror movies.
What can be done to detect and mitigate this effect? One
strategy is to design new models that are less sensitive to
corrupted data, such as by replacing a Gaussian likelihood
with a heavier-tailed t distribution (Huber, 2011; Insua &
Ruggeri, 2012). Most probabilistic models we use have
more sophisticated structures; these template solutions for
specific distributions are not readily applicable. Other classical robust techniques act mostly on distances between
observations (Huber, 1973); these approaches struggle with
high-dimensional data. How can we still make use of our favorite probabilistic models while making them less sensitive
to the messy nature of reality?
Main idea. We propose reweighted probabilistic models
(RPM). The idea is simple. First, posit a probabilistic model.
Then adjust the contribution of each observation by raising
each likelihood term to its own (latent) weight. Finally, infer
these weights along with the latent variables of the original
probability model. The posterior of this adjusted model
identifies observations that match its assumptions; it downweights observations that disagree with its assumptions.
Figure 1. Fitting a unimodal distribution to a dataset with corrupted
measurements. The RPM downweights the corrupted observations.
Figure 1 depicts this tradeoff. The dataset includes corrupted measurements that undermine the original model;
Bayesian data reweighting automatically trades off the low
likelihood of the corrupted data near 1.5 to focus on the
uncorrupted data near zero. The RPM (green curve) detects
this mismatch and mitigates its effect compared to the poor
fit of the original model (red curve).
Formally, consider a dataset of $N$ independent observations $y = (y_1, \ldots, y_N)$. The likelihood factorizes as a product $\prod_{n=1}^{N} \ell(y_n \mid \beta)$, where $\beta$ is a set of latent variables. Posit a prior distribution $p_\beta(\beta)$.
Bayesian data reweighting follows three steps:
1. Define a probabilistic model $p_\beta(\beta) \prod_{n=1}^{N} \ell(y_n \mid \beta)$.
2. Raise each likelihood to a positive latent weight $w_n$. Then choose a prior on the weights $p_w(w)$, where $w = (w_1, \ldots, w_N)$. This gives a reweighted probabilistic model (RPM)
$$p(y, \beta, w) = \frac{1}{Z}\, p_\beta(\beta)\, p_w(w) \prod_{n=1}^{N} \ell(y_n \mid \beta)^{w_n},$$
where $Z$ is the normalizing factor.
3. Infer the posterior of both the latent variables $\beta$ and the weights $w$: $p(\beta, w \mid y)$.
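As a minimal sketch of step 2, the following Python snippet writes out the (unnormalized) reweighted log joint for a toy model: a Normal likelihood with unknown mean, a Normal prior on the mean, and independent Beta priors on the weights. The specific distributions and hyperparameters are our own illustrative choices, not prescribed by the method.

import numpy as np
from scipy import stats

def reweighted_log_joint(beta, w, y, a=0.1, b=0.01):
    """Unnormalized log p(y, beta, w) for a toy RPM:
    y_n ~ Normal(beta, 1) raised to weight w_n, beta ~ Normal(0, 10),
    w_n ~ Beta(a, b).  This is Equation (1) up to the constant -log Z."""
    log_prior_beta = stats.norm.logpdf(beta, loc=0.0, scale=10.0)
    log_prior_w = stats.beta.logpdf(w, a, b).sum()
    log_lik = (w * stats.norm.logpdf(y, loc=beta, scale=1.0)).sum()
    return log_prior_beta + log_prior_w + log_lik

y = np.array([0.1, -0.2, 0.3, 5.0])             # last point plays the role of a corrupted one
w_equal = np.full(4, 0.999)
w_down = np.array([0.999, 0.999, 0.999, 0.01])  # downweight the corrupted point
print(reweighted_log_joint(0.0, w_equal, y))
print(reweighted_log_joint(0.0, w_down, y))     # larger: shrinking the last weight pays off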
The latent weights w allow an RPM to automatically explore
which observations match its assumptions and which do not.
Writing out the logarithm of the RPM gives some intuition; it is equal (up to an additive constant) to
$$\log p_\beta(\beta) + \log p_w(w) + \sum_{n} w_n \log \ell(y_n \mid \beta). \qquad (1)$$
Posterior inference, loosely speaking, seeks to maximize
the above with respect to β and w. The prior on the weights
pw (w) plays a critical role: it trades off extremely low
likelihood terms, caused by corrupted measurements, while
encouraging the weights to be close to one. We study three
options for this prior in Section 2.
How does Bayesian data reweighting induce robustness? First, consider how the weights $w$ affect Equation (1). The logarithm of our priors is dominated by the $\log w_n$ term: this is the price of moving $w_n$ from one towards zero. By shrinking $w_n$, we gain an increase in $w_n \log \ell(y_n \mid \beta)$ while paying a price in $\log w_n$. The gain outweighs the price we pay if $\log \ell(y_n \mid \beta)$ is very negative. Our priors are set to prefer $w_n$ to stay close to one; an RPM only shrinks $w_n$ for very unlikely (e.g., corrupted) measurements.
Now consider how the latent variables β affect Equation (1).
As the weights of unlikely measurements shrink, the likelihood term can afford to assign low mass to those corrupted measurements and focus on the rest of the dataset.
Jointly, the weights and latent variables work together to
automatically identify unlikely measurements and focus on
observations that match the original model’s assumptions.
Section 2 presents these intuitions in full detail, along with
theoretical corroboration. In Section 3, we study four models under various forms of mismatch with reality, including
missing modeling assumptions, misspecified nonlinearities,
and skewed data. RPMs provide better parameter inference
and improved predictive accuracy across these models. Section 4 presents a recommendation system example, where
we improve on predictive performance and identify atypical
film enthusiasts in the Movielens 1M dataset.
Related work. Jerzy Neyman elegantly motivates the main
idea behind robust probabilistic modeling, a field that has
attracted much research attention in the past century.
Every attempt to use mathematics to study
some real phenomena must begin with building a
mathematical model of these phenomena. Of necessity, the model simplifies matters to a greater
or lesser extent and a number of details are ignored. [...] The solution of the mathematical
problem may be correct and yet it may be in violent conflict with realities simply because the
original assumptions of the mathematical model
diverge essentially from the conditions of the practical problem considered. (Neyman, 1949, p.22).
Our work draws on three themes around robust modeling.
The first is a body of work on robust statistics and machine
learning (Provost & Fawcett, 2001; Song et al., 2002; Yu
et al., 2012; McWilliams et al., 2014; Feng et al., 2014;
Shafieezadeh-Abadeh et al., 2015). These developments
focus on making specific models more robust to imprecise
measurements.
One strategy is popular: localization. To localize a probabilistic model, allow each likelihood to depend on its own
“copy” of the latent variable βn . This transforms the model
into
$$p(y, \beta, \alpha) = p_\alpha(\alpha) \prod_{n=1}^{N} \ell(y_n \mid \beta_n)\, p_\beta(\beta_n \mid \alpha), \qquad (2)$$
where a top-level latent variable α ties together all the βn
variables (de Finetti, 1961; Wang & Blei, 2015). 1 Localization decreases the effect of imprecise measurements. RPMs
present a broader approach to mitigating mismatch, with
improved performance over localization (Sections 3 and 4).
The second theme is robust Bayesian analysis, which studies sensitivity with respect to the prior (Berger et al., 1994).
1
Localization also relates to James-Stein shrinkage; Efron
(2010) connects these dots.
Recent advances directly focus on sensitivity of the posterior (Minsker et al., 2014; Miller & Dunson, 2015) or the
posterior predictive distribution (Kucukelbir & Blei, 2015).
We draw connections to these ideas throughout this paper.
The third theme is data reweighting. This involves designing individual reweighting schemes for specific tasks and
models. Consider robust methods that toss away “outliers.”
This strategy involves manually assigning binary weights
to datapoints (Huber, 2011). Another example is covariate shift adaptation/importance sampling where reweighting transforms data to match another target distribution
(Veach & Guibas, 1995; Sugiyama et al., 2007; Shimodaira,
2000; Wen et al., 2014). A final example is maximum Lq-likelihood estimation (Ferrari et al., 2010; Qin & Priebe,
2013; 2016). Its solutions can be interpreted as a solution to
a weighted likelihood, whose weights are proportional to a
power transformation of the density. In contrast, RPMs treat
weights as latent variables. The weights are automatically
inferred; no custom design is required. RPMs also connect
to ideas around ensemble learning and boosting (Schapire
& Freund, 2012). Boosting procedures reweight datapoints
to build an ensemble of predictors for supervised learning,
whereas RPMs apply to Bayesian models in general.
2. Reweighted Probabilistic Models
Reweighted probabilistic models (RPM) offer a new approach to robust modeling. The idea is to automatically
identify observations that match the assumptions of the
model and to base posterior inference on these observations.
2.1. Definitions
An RPM scaffolds over a probabilistic model, $p_\beta(\beta) \prod_{n=1}^{N} \ell(y_n \mid \beta)$. Raise each likelihood to a latent weight and posit a prior on the weights. This gives the reweighted joint density
$$p(y, \beta, w) = \frac{1}{Z}\, p_\beta(\beta)\, p_w(w) \prod_{n=1}^{N} \ell(y_n \mid \beta)^{w_n}, \qquad (3)$$
where $Z = \int p_\beta(\beta)\, p_w(w) \prod_{n=1}^{N} \ell(y_n \mid \beta)^{w_n} \, dy \, d\beta \, dw$ is the normalizing factor.
The reweighted density integrates to one when the normalizing factor $Z$ is finite. This is always true when the likelihood $\ell(\cdot \mid \beta)$ is an exponential family distribution with Lebesgue base measure (Bernardo & Smith, 2009); this is the class of models we study in this paper.²
RPMs apply to likelihoods that factorize over the observations. (We discuss non-exchangeable models in Section 5.) Figure 2 depicts an RPM as a graphical model. Specific models may have additional structure, such as a separation of local and global latent variables (Hoffman et al., 2013), or fixed parameters; we omit these in this figure.
² Heavy-tailed likelihoods and Bayesian nonparametric priors may violate this condition; we leave these for future analysis.
Figure 2. RPMs begin with a probabilistic model (a) and introduce a set of weights w as latent variables. This gives a model (b) that explores which data observations match its assumptions. Localization (c), instead, builds a hierarchical model. (Appendix A shows when a localized model is also an RPM.)
The reweighted model introduces a set of weights; these
are latent variables, each with support $w_n \in \mathbb{R}_{>0}$. To gain intuition, consider how these weights affect the posterior, which is proportional to the product of the likelihood of every measurement. A weight $w_n$ that is close to zero flattens out its corresponding likelihood $\ell(y_n \mid \beta)^{w_n}$; a weight that
is larger than one makes its likelihood more peaked. This, in
turn, enables the posterior to focus on some measurements
more than others. The prior pw (w) ensures that not too
many likelihood terms get flattened; in this sense, it plays
an important regularization role.
We study three options for this prior on weights: a bank
of Beta distributions, a scaled Dirichlet distribution, and a
bank of Gamma distributions.
Bank of Beta priors. This option constrains each weight as
wn ∈ (0, 1). We posit an independent prior for each weight
$$p_w(w) = \prod_{n=1}^{N} \mathrm{Beta}(w_n; a, b) \qquad (4)$$
and use the same parameters a and b for all weights. This
is the most conservative option for the RPM; it ensures that
none of the likelihoods ever becomes more peaked than it
was in the original model.
The parameters a, b offer an expressive language to describe
different attitudes towards the weights. For example, setting
both parameters less than one makes the Beta act like a “two
spikes and a slab” prior, encouraging weights to be close to
zero or one, but not in between. As another example, setting
a greater than b encourages weights to lean towards one.
Scaled Dirichlet prior. This option ensures the sum of the
weights equals N . We posit a symmetric Dirichlet prior on
all the weights
$$w = N v, \qquad p_v(v) = \mathrm{Dirichlet}(a\mathbf{1}), \qquad (5)$$
where $a$ is a scalar parameter and $\mathbf{1}$ is an $(N \times 1)$ vector of ones. In the original model, where all the weights are one, the sum of the weights is $N$. The Dirichlet option maintains this balance; while certain likelihoods may become
more peaked, others will flatten to compensate.
The concentration parameter a gives an intuitive way to
configure the Dirichlet. Small values for a allow the model
to easily up- or down-weight many data observations; larger
values for a prefer a smoother distribution of weights. The
Dirichlet option connects to the bootstrap approaches in
Rubin et al. (1981); Kucukelbir & Blei (2015), which also
preserves the sum of weights as N .
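For concreteness, drawing weights from the scaled Dirichlet prior can be sketched as follows in Python; the seed and the concentration value are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
N, a = 100, 1.0                    # a: Dirichlet concentration (illustrative value)
v = rng.dirichlet(np.full(N, a))   # v lives on the simplex and sums to 1
w = N * v                          # scaled weights, as in Equation (5)
print(w.sum())                     # approximately 100.0: the total weight is preserved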
Bank of Gamma priors. Here we posit an independent
Gamma prior for each weight
$$p_w(w) = \prod_{n=1}^{N} \mathrm{Gamma}(w_n; a, b) \qquad (6)$$
and use the same parameters a and b for all weights. We
do not recommend this option, because observations can
be arbitrarily up- or down-weighted. In this paper, we only
consider Equation (6) for our theoretical analysis in Section 2.2.
The bank of Beta and Dirichlet options perform similarly.
We prefer the Beta option as it is more conservative, yet
find the Dirichlet to be less sensitive to its parameters. We
explore these options in the empirical study (Section 3).
2.2. Theory and intuition
How can theory justify Bayesian data reweighting? Here we
investigate its robustness properties. These analyses intend
to confirm our intuition from Section 1. Appendices B and C
present proofs in full technical detail.
Intuition. Recall the logarithm of the RPM joint density from Equation (1). Now compute the maximum a posteriori (MAP) estimate of the weights $w$. The partial derivative is
$$\frac{\partial \log p(y, \beta, w)}{\partial w_n} = \frac{d \log p_w(w_n)}{d w_n} + \log \ell(y_n \mid \beta) \qquad (7)$$
for all $n = 1, \ldots, N$. Plug the Gamma prior from Equation (6) into the partial derivative in Equation (7) and set it equal to zero. This gives the MAP estimate of $w_n$,
$$\hat{w}_n = \frac{a - 1}{b - \log \ell(y_n \mid \beta)}. \qquad (8)$$
The MAP estimate $\hat{w}_n$ is an increasing function of the log likelihood of $y_n$ when $a > 1$. This reveals that $\hat{w}_n$ shrinks the contribution of observations that are unlikely under the log likelihood; in turn, this encourages the MAP estimate $\hat{\beta}$ to describe the majority of the observations. This is how an RPM makes a probabilistic model more robust.
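Equation (8) is easy to inspect numerically. The short Python snippet below evaluates the MAP weight as a function of the log likelihood under an illustrative Gamma(a = 2, b = 1) prior on the weights (hyperparameters chosen by us, purely for the sketch); the weight decays smoothly as an observation becomes less likely.

import numpy as np

def map_weight(log_lik, a=2.0, b=1.0):
    """MAP weight from Equation (8) under a Gamma(a, b) prior with a > 1."""
    return (a - 1.0) / (b - log_lik)

for ll in [-0.5, -2.0, -10.0, -100.0]:
    print(f"log-likelihood {ll:7.1f}  ->  weight {map_weight(ll):.3f}")
# weights: 0.667, 0.333, 0.091, 0.010 -- unlikely observations are shrunk towards zero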
A similar argument holds for other exponential family priors on $w$ with $\log w_n$ as a sufficient statistic. We formalize this intuition and generalize it in the following theorem, which establishes sufficient conditions under which an RPM improves the inference of its latent variables $\beta$.
Theorem 1 Denote the true value of $\beta$ as $\beta^*$. Let the posterior mean of $\beta$ under the weighted and unweighted model be $\bar\beta_w$ and $\bar\beta_u$, respectively. Assume mild conditions on $p_w$, $\ell$, and the corruption level, and that $|\ell(y_n \mid \bar\beta_w) - \ell(y_n \mid \beta^*)| < \epsilon$ holds for all $n$ with high probability. Then there exists an $N^*$ such that for $N > N^*$ we have $|\bar\beta_u - \beta^*| \succeq_2 |\bar\beta_w - \beta^*|$, where $\succeq_2$ denotes second-order stochastic dominance. (Details in Appendix B.)
The likelihood bounding assumption is common in robust statistics theory; it is satisfied for both likely and unlikely (corrupted) measurements. How much of an improvement does it give? We can quantify this through the influence function (IF) of $\bar\beta_w$.
Consider a distribution $G$ and a statistic $T(G)$ to be a function of data that comes iid from $G$. Take a fixed distribution, e.g., the population distribution, $F$. Then $\mathrm{IF}(z; T, F)$ measures how much an additional observation at $z$ affects the statistic $T(F)$. Define
$$\mathrm{IF}(z; T, F) = \lim_{t \to 0^+} \frac{T(t\delta_z + (1-t)F) - T(F)}{t}$$
for $z$ where this limit exists. Roughly, the IF measures the asymptotic bias on $T(F)$ caused by a specific observation $z$ that does not come from $F$. We consider a statistic $T$ to be robust if its IF is a bounded function of $z$, i.e., if outliers can only exert a limited influence (Huber, 2011).
Here, we study the IF of the posterior mean $T = \bar\beta_w$ under the true data generating distribution $F = \ell(\cdot \mid \beta^*)$. Say a value $z$ has likelihood $\ell(z \mid \beta^*)$ that is nearly zero; we think of this $z$ as corrupted. Now consider the weight function induced by the prior $p_w(w)$. Rewrite it as a function of the log likelihood, $w(\log \ell(\cdot \mid \beta^*))$, as in Equation (8).
Theorem 2 If $\lim_{a \to -\infty} w(a) = 0$ and $\lim_{a \to -\infty} a \cdot w(a) < \infty$, then $\mathrm{IF}(z; \bar\beta_w, \ell(\cdot \mid \beta^*)) \to 0$ as $\ell(z \mid \beta^*) \to 0$.
This result shows that an RPM is robust in that its IF goes to zero for unlikely measurements. This is true for all three priors. (Details in Appendix C.)
2.3. Inference and computation
We now turn to inferring the posterior of an RPM,
p(β, w | y). The posterior lacks an analytic closed-form
expression for all but the simplest of models; even if the
original model admits such a posterior for β, the reweighted
posterior may take a different form.
To approximate the posterior, we appeal to probabilistic programming. A probabilistic programming system enables a
user to write a probability model as a computer program and
then compile that program into an inference executable. Automated inference is the backbone of such systems: it takes
in a probability model, expressed as a program, and outputs
an efficient algorithm for inference. We use automated inference in Stan, a probabilistic programming system (Carpenter
et al., 2015).
In the empirical study that follows, we highlight how RPMs
detect and mitigate various forms of model mismatch. As
a common metric, we compare the predictive accuracy on
held-out data for the original, localized, and reweighted models.
The posterior predictive likelihood of a new datapoint $y_\dagger$ is $p_{\mathrm{original}}(y_\dagger \mid y) = \int \ell(y_\dagger \mid \beta)\, p(\beta \mid y)\, d\beta$. Localization couples each observation with its own copy of the latent variable; this gives $p_{\mathrm{localized}}(y_\dagger \mid y) = \iint \ell(y_\dagger \mid \beta_\dagger)\, p(\beta_\dagger \mid \alpha)\, p(\alpha \mid y)\, d\alpha\, d\beta_\dagger$, where $\beta_\dagger$ is the localized latent variable for the new datapoint. The prior $p(\beta_\dagger \mid \alpha)$ has the same form as $p_\beta$ in Equation (2).
Bayesian data reweighting gives the following posterior predictive likelihood
$$p_{\mathrm{RPM}}(y_\dagger \mid y) = \iint p(y_\dagger \mid \beta, w_\dagger)\, p_{\mathrm{RPM}}(\beta \mid y)\, p(w_\dagger)\, dw_\dagger\, d\beta,$$
where $p_{\mathrm{RPM}}(\beta \mid y)$ is the marginal posterior, integrating out the inferred weights of the training dataset, and the prior $p(w_\dagger)$ has the same form as $p_w$ in Equation (3).
3. Empirical Study
We study RPMs under four types of mismatch with reality.
This section involves simulations of realistic scenarios; the
next section presents a recommendation system example
using real data. We default to the No-U-Turn sampler (NUTS)
(Hoffman & Gelman, 2014) for inference in all experiments,
except for Sections 3.5 and 4 where we leverage variational
inference (Kucukelbir et al., 2017). The additional computational cost of inferring the weights is unnoticeable relative
to inference in the original model.
3.1. Outliers: a network wait-time example
A router receives packets over a network and measures the time it waits for each packet. Suppose we typically observe wait-times that follow a Poisson distribution with rate $\beta = 5$. We model each measurement using a Poisson likelihood $\ell(y_n \mid \beta) = \mathrm{Poisson}(\beta)$ and posit a Gamma prior on the rate, $p_\beta(\beta) = \mathrm{Gam}(a = 2, b = 0.5)$.
Imagine that F% of the time, the network fails. During these failures, the wait-times come from a Poisson with a much higher rate, $\beta = 50$. Thus, the data actually contains a mixture of two Poisson distributions; yet, our model only assumes one. (Details in Appendix D.1.)
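A minimal Python sketch of this data-generating process (the seed and the value of F are arbitrary choices of ours): most wait-times come from Poisson(5), and a fraction F of them from Poisson(50).

import numpy as np

rng = np.random.default_rng(0)
N, F = 100, 0.25                     # F: fraction of measurements taken during failures
failed = rng.random(N) < F
y = np.where(failed, rng.poisson(50, N), rng.poisson(5, N))

print(y.mean())        # pulled far above 5 by the corrupted measurements
print((y > 20).sum())  # roughly F * N wait-times betray the failures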
Figure 3. Outliers simulation study. We compare Beta(0.1, 0.01)
and Dir(1) as priors for the reweighted probabilistic model. (a)
Posterior distributions on β show a marked difference in detecting
the correct wait-time rate of β = 5. (b) Posterior 95% confidence
intervals across failure rates F show consistent behavior for both
Beta and Dirichlet priors. (N = 100 with 50 replications.)
How do we expect an RPM to behave in this situation?
Suppose the network failed 25% of the time. Figure 3a
shows the posterior distribution on the rate β. The original
posterior is centered at 18; this is troubling, not only because
the rate is wrong but also because of how confident the
posterior fit is. Localization introduces greater uncertainty,
yet still estimates a rate around 15. The RPM correctly
identifies that the majority of the observations come from
β = 5. Observations from when the network failed are
down-weighted. It gives a confident posterior centered at
five.
Figure 3b shows posterior 95% credible intervals of β under
failure rates up to F = 45%. The RPM is robust to corrupted
measurements; instead it focuses on data that it can explain
within its assumptions. When there is no corruption, the
RPM performs just as well as the original model.
Visualizing the weights elucidates this point. Figure 4 shows the posterior mean estimates of w for F = 25%. The weights are sorted into two groups, for ease of viewing. The weights of the corrupted observations are essentially zero; this downweighting is what allows the RPM to shift its posterior on β towards five.
Figure 4. Posterior means of the weights w under the Dirichlet prior. For visualization purposes, we sorted the data into two groups: the first 75 contain observations from the normal network; the remaining 25 are the observations when the network fails.
Despite this downweighting, the RPM posteriors on β are
not overdispersed, as in the localized case. This is due to the
interplay we described in the introduction. Downweighting
observations should lead to a smaller effective sample size,
which would increase posterior uncertainty. But the downweighted datapoints are corrupted observations; including
them also increases posterior uncertainty.
The RPM is insensitive to the prior on the weights; both
Beta and Dirichlet options perform similarly. From here on,
we focus on the Beta option. We let the shape parameter
a scale with the data size N such that N/a ≈ 10³; this
encodes a mild attitude towards unit weights. We now move
on to other forms of mismatch with reality.
3.2. Missing latent groups: predicting color blindness
Color blindness is unevenly hereditary: it is much higher for men than for women (Boron & Boulpaep, 2012). Suppose we are not aware of this fact. We have a dataset of both genders with each individual's color blindness status and his/her relevant family history. No gender information is available. Consider analyzing this data using logistic regression. It can only capture one hereditary group. Thus, logistic regression misrepresents both groups, even though men exhibit strong heredity. In contrast, an RPM can detect and mitigate the missing group effect by focusing on the dominant hereditary trait. Here we consider men as the dominant group.
We simulate this scenario by drawing binary indicators of color blindness $y_n \sim \mathrm{Bernoulli}\big(1/(1 + \exp(-p_n))\big)$, where the $p_n$'s come from two latent groups: men exhibit a stronger dependency on family history ($p_n = 0.5 x_n$) than women ($p_n = 0.01 x_n$). We simulate family history as $x_n \sim \mathrm{Unif}(-10, 10)$. Consider a Bayesian logistic regression model without intercept. Posit a prior on the slope as $p_\beta(\beta) = \mathcal{N}(0, 10)$ and assume a Beta(0.1, 0.01) prior on the weights. (Details in Appendix D.2.)
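A Python sketch of this simulation (the seed and the fraction of women are arbitrary choices of ours):

import numpy as np

rng = np.random.default_rng(0)
N, frac_female = 100, 0.25
female = rng.random(N) < frac_female
x = rng.uniform(-10, 10, N)              # family history covariate
slope = np.where(female, 0.01, 0.5)      # two latent groups, unknown to the model
p = 1.0 / (1.0 + np.exp(-slope * x))
y = rng.binomial(1, p)                   # observed color blindness indicators
# The logistic regression model sees only (x, y); the RPM must discover the dominant group.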
Figure 5. Missing latent groups study. Posterior 95% credible
intervals for the RPM always include the dominant βmen = 0.5,
as we vary the percentage of females in the data. Dataset size
N = 100 with 50 replications.
Figure 5 shows the posterior 95% credible intervals of β
as we vary the percentage of females from F = 0% to
40%. A horizontal line indicates the correct slope for the
dominant group, βmen = 0.5. As the size of the missing
latent group (women) increases, the original model quickly
shifts its credible interval away from 0.5. The reweighted
and localized posteriors both contain βmen = 0.5 for all
percentages, but the localized model exhibits much higher
variance in its estimates.
This analysis shows how RPMs can mitigate the effect of
missing latent groups. While the original logistic regression
model would perform equally poorly on both groups, an
RPM is able to automatically focus on the dominant group.
Figure 6. Kernel density estimate of the distribution of weights
across all measurements in the missing latent groups study. The
percentage of females is denoted by F . A hypothetical clean
dataset receives weights that concentrate around one; the actual
corrupted dataset exhibits a two-hump distribution of weights.
An RPM also functions as a diagnostic tool to detect mismatch with reality. The distribution of the inferred weights indicates the presence of datapoints that defy the assumptions of the original model.
Table 1. RPMs improve absolute deviations of posterior mean β₁ estimates, reported as mean(std) over 50 replications.

    True structure                         Model structure        Original        RPM            Localization
    β₀ + β₁x₁ + β₂x₂ + β₃x₁x₂              β₀ + β₁x₁ + β₂x₂       3.16 (1.37)     2.20 (1.25)    2.63 (1.85)
    β₀ + β₁x₁ + β₂x₂ + β₃x₂²               β₀ + β₁x₁ + β₂x₂       30.79 (2.60)    16.32 (1.96)   21.08 (5.20)
    β₀ + β₁x₁ + β₂x₂                       β₀ + β₁x₁              0.58 (0.38)     0.60 (0.40)    0.98 (0.54)
Figure 6 shows a kernel density estimate of the inferred posterior weights. A hypothetical dataset with no corrupted measurements receives weights close to one. In contrast, the actual dataset with measurements from a missing latent group exhibits a bimodal distribution of weights. Testing for bimodality of the inferred weights is one way in which an RPM can be used to diagnose mismatch with reality.
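To make this diagnostic concrete, here is a small sketch (ours, not code from the paper) that smooths a vector of inferred posterior-mean weights with a Gaussian kernel density estimate and counts its local maxima. The toy weight vectors, the default bandwidth, and the mode-counting heuristic are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def count_weight_modes(weights, grid_size=512):
    """Count local maxima of a KDE of inferred weights E[w_n | y] in (0, 1]."""
    weights = np.asarray(weights, dtype=float)
    kde = gaussian_kde(weights)                 # default (Scott) bandwidth
    grid = np.linspace(0.0, 1.0, grid_size)
    density = kde(grid)
    # A grid point is a mode if it is strictly larger than both neighbours.
    is_mode = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
    return int(np.sum(is_mode))

# Toy check: clean weights near one versus a two-group configuration.
rng = np.random.default_rng(0)
clean = np.clip(rng.normal(0.95, 0.03, size=100), 0.0, 1.0)
corrupted = np.concatenate([clean[:75],
                            np.clip(rng.normal(0.05, 0.03, size=25), 0.0, 1.0)])
# The corrupted configuration typically shows an extra mode near zero.
print(count_weight_modes(clean), count_weight_modes(corrupted))
```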
3.3. Covariate dependence misspecification: a lung
cancer risk study
Consider a study of lung cancer risk. While tobacco usage
exhibits a clear connection, other factors may also contribute.
For instance, obesity and tobacco usage appear to interact,
with evidence towards a quadratic dependence on obesity
(Odegaard et al., 2010).
Denote tobacco usage as $x_1$ and obesity as $x_2$. We study three models of lung cancer risk dependency on these covariates. We are primarily interested in understanding the effect of tobacco usage; thus we focus on $\beta_1$, the regression coefficient for tobacco. In each model, some form of covariate dependence misspecification distinguishes the true structure from the assumed structure.
For each model, we simulate a dataset of size $N = 100$ with random covariates $x_1 \sim N(10, 5^2)$ and $x_2 \sim N(0, 10^2)$ and regression coefficients $\beta_{0,1,2,3} \sim \mathrm{Unif}(-10, 10)$. Consider a Bayesian linear regression model with prior $p_\beta(\beta) = N(0, 10)$. (Details in Appendix D.3.)
Table 1 summarizes the misspecification and shows absolute differences on the estimated $\beta_1$ regression coefficient. The RPM yields better estimates of $\beta_1$ in the first two models. These highlight how the RPM leverages datapoints useful for estimating $\beta_1$. The third model is particularly challenging because obesity is ignored in the misspecified model. Here, the RPM gives similar results to the original model; this highlights that RPMs can only use available information. Since the original model lacks dependence on $x_2$, the RPM cannot compensate for this.

3.4. Predictive likelihood results
Table 2 shows how RPMs also improve predictive accuracy. In all the above examples, we simulate test data with and without their respective types of corruption. RPMs improve prediction for both clean and corrupted data, as they focus on data that match the assumptions of the original model.

3.5. Skewed data: cluster selection in a mixture model
Finally, we show how RPMs handle skewed data. The Dirichlet process mixture model (DPMM) is a versatile model for density estimation and clustering (Bishop, 2006; Murphy, 2012). While real data may indeed come from a finite mixture of clusters, there is no reason to assume each cluster is distributed as a Gaussian. Inspired by the experiments in Miller & Dunson (2015), we show how a reweighted DPMM reliably recovers the correct number of components in a mixture-of-skew-normals dataset.

Figure 7. A finite-approximation DPMM fit to skew-normal distributed data from three groups: (a) original model, (b) RPM. The shade of each cluster indicates the inferred mixture proportions (N = 2000).

A standard Gaussian mixture model (GMM) with large K and a sparse Dirichlet prior on the mixture proportions is an approximation to a DPMM (Ishwaran & James, 2012). We simulate three clusters from two-dimensional skew-normal distributions and fit a GMM with maximum K = 30. Here we use automatic differentiation variational inference (ADVI), as NUTS struggles with inference of mixture models (Kucukelbir et al., 2017). (Details in Appendix D.4.) Figure 7 shows posterior mean estimates from the original GMM; it incorrectly finds six clusters. In contrast, the RPM identifies the correct three clusters. Datapoints in the tails of each cluster get down-weighted; these are datapoints that do not match the Gaussianity assumption of the model.
Table 2. Posterior predictive likelihoods of clean and corrupted test data. Outliers and missing latent groups have F = 25%. The misspecified structure is missing the interaction term. Results are similar for other levels and types of mismatch with reality.

                          Outliers                 Missing latent groups     Misspecified structure
                          Clean       Corrupted    Clean       Corrupted     Clean       Corrupted
    Original model        −744.2      −1244.5      −108.6      −103.9        −136.3      −161.7
    Localized model       −730.8      −1258.4      −53.6       −112.7        −192.5      −193.1
    RPM                   −328.5      −1146.9      −43.9       −90.5         −124.1      −144.1
4. Case Study: Poisson factorization for
recommendation
We now turn to a study of real data: a recommendation
system. Consider a video streaming service; data comes as a
binary matrix of users and the movies they choose to watch.
How can we identify patterns from such data? Poisson
factorization (PF) offers a flexible solution (Cemgil, 2009;
Gopalan et al., 2015). The idea is to infer a K-dimensional
latent space of user preferences θ and movie attributes β.
The inner product θ⊤β determines the rate of a Poisson
likelihood for each binary measurement; Gamma priors
on θ and β promote sparse patterns. As a result, PF finds
interpretable groupings of movies, often clustered according
to popularity or genre. (Full model in Appendix E.)
How does classical PF compare to its reweighted counterpart? As input, we use the MovieLens 1M dataset, which contains one million movie ratings from 6,000 users on 4,000 movies. We place iid Gamma(1, 0.001) priors on the preferences and attributes. Here, we have the option of reweighting users or items. We focus on users and place a Beta(100, 1) prior on their weights. For this model, we use MAP estimation. (Localization is computationally challenging for PF; it requires a separate "copy" of θ for each movie, along with a separate β for each user. This dramatically increases computational cost.)

We begin by analyzing the original (clean) dataset. Reweighting improves the average held-out log likelihood from −1.68 for the original model to −1.53 for the corresponding RPM. The boxplot in Figure 8(a) shows the inferred weights. The majority of users receive weight one, but a few users are down-weighted. These are film enthusiasts who appear to indiscriminately watch many movies from many genres. (Appendix F shows an example.) These users do not contribute towards identifying movies that go together; this explains why the RPM down-weights them.

Figure 8. Inferred weights for clean and corrupted data: (a) the original dataset, (b) corrupted users by ratio of corruption R ∈ {0.1, 0.5, 1}. (a) Most users receive weights very close to one. (b) Corrupted users receive weights much smaller than one; larger ratios of corruption R imply lower weights.

Recall the example from our introduction. A child typically watches popular animated films, but her parents occasionally use her account to watch horror films. We simulate this by corrupting a small percentage of users. We replace a ratio R = (0.1, 0.5, 1) of these users' movies with randomly selected movies. The boxplot in Figure 8(b) shows the weights we infer for these corrupted users, based on how many of their movies we randomly replace. The weights decrease as we corrupt more movies. Table 3 shows how this leads to higher held-out predictive accuracy; down-weighting these corrupted users leads to better prediction.

Table 3. Held-out predictive accuracy (average log likelihood) under varying amounts of corruption. Held-out users chosen randomly (20% of total users).

                        Corrupted users:    0%        1%        2%
    Original model                        −1.68     −1.73     −1.74
    RPM                                   −1.53     −1.53     −1.52

5. Discussion
Reweighted probabilistic models (RPM) offer a systematic
approach to mitigating various forms of mismatch with
reality. The idea is to raise each data likelihood to a weight
and to infer the weights along with the hidden patterns.
We demonstrate how this strategy introduces robustness and
improves prediction accuracy across four types of mismatch.
RPMs also offer a way to detect mismatch with reality. The distribution of the inferred weights sheds light on datapoints that fail to match the original model's assumptions. RPMs can thus lead to new model development and deeper insights about our data.
RPMs can also work with non-exchangeable data, such as time series. Some time series models admit exchangeable likelihood approximations (Guinness & Stein, 2013). For other models, a non-overlapping windowing approach would also work. The idea of reweighting could also extend to structured likelihoods, such as Hawkes process models.
Acknowledgements
We thank Adji Dieng, Yuanjun Gao, Inchi Hu, Christian
Naesseth, Rajesh Ranganath, Francisco Ruiz, Dustin Tran,
and Joshua Vogelstein for their insightful pointers and comments. This work is supported by NSF IIS-1247664, ONR
N00014-11-1-0651, DARPA PPAML FA8750-14-2-0009,
DARPA SIMPLEX N66001-15-C-4032, and the Alfred P.
Sloan Foundation.
References
Berger, James O, Moreno, Elías, Pericchi, Luis Raul, Bayarri, M Jesús, Bernardo, José M, Cano, Juan A, De la
Horra, Julián, Martín, Jacinto, Ríos-Insúa, David, Betrò,
Bruno, et al. An overview of robust Bayesian analysis.
Test, 3(1):5–124, 1994.
Bernardo, José M and Smith, Adrian FM. Bayesian Theory.
John Wiley & Sons, 2009.
Bishop, Christopher M. Pattern Recognition and Machine
Learning. Springer New York, 2006.
Boron, Walter F and Boulpaep, Emile L. Medical Physiology. Elsevier, 2012.
Carpenter, Bob, Gelman, Andrew, Hoffman, Matt, Lee,
Daniel, Goodrich, Ben, Betancourt, Michael, Brubaker,
Marcus A, Guo, Jiqiang, Li, Peter, and Riddell, Allen.
Stan: a probabilistic programming language. Journal of
Statistical Software, 2015.
Cemgil, Ali Taylan. Bayesian inference for nonnegative
matrix factorisation models. Computational Intelligence
and Neuroscience, 2009.
de Finetti, Bruno. The Bayesian approach to the rejection
of outliers. In Proceedings of the Fourth Berkeley Symposium on Probability and Statistics, 1961.
Efron, Bradley. Large-Scale Inference. Cambridge University Press, 2010.
Feng, Jiashi, Xu, Huan, Mannor, Shie, and Yan, Shuicheng.
Robust logistic regression and classification. In NIPS.
2014.
Ferrari, Davide, Yang, Yuhong, et al. Maximum Lq-likelihood estimation. The Annals of Statistics, 38(2):
753–783, 2010.
Ghahramani, Zoubin. Probabilistic machine learning and
artificial intelligence. Nature, 521(7553):452–459, 2015.
Gopalan, Prem, Hofman, Jake M, and Blei, David M. Scalable recommendation with hierarchical Poisson factorization. UAI, 2015.
Guinness, Joseph and Stein, Michael L. Transformation to
approximate independence for locally stationary Gaussian processes. Journal of Time Series Analysis, 34(5):
574–590, 2013.
Hoffman, Matthew D and Gelman, Andrew. The No-U-Turn
sampler. Journal of Machine Learning Research, 15(1):
1593–1623, 2014.
Hoffman, Matthew D, Blei, David M, Wang, Chong, and
Paisley, John. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347,
2013.
Huber, Peter J. Robust regression: asymptotics, conjectures
and Monte Carlo. The Annals of Statistics, pp. 799–821,
1973.
Huber, Peter J. Robust Statistics. Springer, 2011.
Insua, David Ríos and Ruggeri, Fabrizio. Robust Bayesian
Analysis. Springer Science & Business Media, 2012.
Ishwaran, Hemant and James, Lancelot F. Approximate
Dirichlet process computing in finite normal mixtures.
Journal of Computational and Graphical Statistics, 2012.
Kucukelbir, Alp and Blei, David M. Population empirical
Bayes. In UAI, 2015.
Kucukelbir, Alp, Tran, Dustin, Ranganath, Rajesh, Gelman,
Andrew, and Blei, David M. Automatic differentiation
variational inference. Journal of Machine Learning Research, 18(14):1–45, 2017.
Mansinghka, Vikash, Selsam, Daniel, and Perov, Yura. Venture: a higher-order probabilistic programming platform
with programmable inference. arXiv:1404.0099, 2014.
McWilliams, Brian, Krummenacher, Gabriel, Lucic, Mario,
and Buhmann, Joachim M. Fast and robust least squares
estimation in corrupted linear models. In NIPS. 2014.
Miller, Jeffrey W and Dunson, David B. Robust Bayesian inference via coarsening. arXiv preprint arXiv:1506.06101,
2015.
Minsker, Stanislav, Srivastava, Sanvesh, Lin, Lizhen, and
Dunson, David B. Robust and scalable Bayes via a
median of subset posterior measures. arXiv preprint
arXiv:1403.2660, 2014.
Murphy, Kevin P. Machine Learning: a Probabilistic Perspective. MIT Press, 2012.
Neyman, Jerzy. On the problem of estimating the number of
schools of fish. In J. Neyman, M. Loeve and Yerushalmy,
J. (eds.), University of California Publications in Statistics, volume 1, chapter 3, pp. 21–36. University of California Press, 1949.
Odegaard, Andrew O, Pereira, Mark A, Koh, Woon-Puay,
Gross, Myron D, Duval, Sue, Mimi, C Yu, and Yuan,
Jian-Min. BMI, all-cause and cause-specific mortality in
Chinese Singaporean men and women. PLoS One, 5(11),
2010.
Provost, Foster and Fawcett, Tom. Robust classification
for imprecise environments. Machine Learning, 42(3):
203–231, 2001.
Qin, Yichen and Priebe, Carey E. Maximum l q-likelihood
estimation via the expectation-maximization algorithm:
A robust estimation of mixture models. Journal of
the American Statistical Association, 108(503):914–928,
2013.
Qin, Yichen and Priebe, Carey E. Robust hypothesis testing
via lq-likelihood. 2016.
Rubin, Donald B et al. The Bayesian bootstrap. The annals
of statistics, 9(1):130–134, 1981.
Schapire, R.E. and Freund, Y. Boosting: Foundations and
Algorithms. MIT Press, 2012.
Shafieezadeh-Abadeh, Soroosh, Esfahani, Peyman Mohajerin, and Kuhn, Daniel. Distributionally robust logistic
regression. In NIPS. 2015.
Shimodaira, Hidetoshi. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):
227–244, 2000.
Song, Qing, Hu, Wenjie, and Xie, Wenfang. Robust support vector machine with bullet hole image classification.
Systems, Man, and Cybernetics, Part C: Applications and
Reviews, IEEE Transactions on, 32(4):440–448, 2002.
Sugiyama, Masashi, Krauledat, Matthias, and Müller,
Klaus-Robert. Covariate shift adaptation by importance
weighted cross validation. Journal of Machine Learning
Research, 8:985–1005, 2007.
Veach, Eric and Guibas, Leonidas J. Optimally combining sampling techniques for Monte Carlo rendering. In
Proceedings of the 22nd annual conference on Computer
graphics and interactive techniques, pp. 419–428. ACM,
1995.
Wang, Chong and Blei, David M. A general method
for robust Bayesian modeling.
arXiv preprint
arXiv:1510.05078, 2015.
Wen, Junfeng, Yu, Chun-nam, and Greiner, Russell. Robust learning under uncertain test distributions: Relating
covariate shift to model misspecification. In ICML, 2014.
Yu, Yaoliang, Aslan, Özlem, and Schuurmans, Dale. A
polynomial-time form of robust regression. In NIPS.
2012.
A. Localized generalized linear model as an RPM
Localization in generalized linear models (GLMs) is equivalent to reweighting, with constraints on the weight function w(·)
induced by $p_w$. We preface the theorem with a simple illustration in linear regression.
Consider $N$ iid observations $\{(x_n, y_n)\}_{n=1}^N$. We regress $y$ against $x$:
\[
y_n = \beta_1 (x_n - \bar x) + \beta_0 + \epsilon_n, \qquad \epsilon_n \overset{\text{iid}}{\sim} N(0, \sigma^2),
\]
where $\bar x = N^{-1}\sum_{n=1}^N x_n$. The maximum likelihood estimate of $(\beta_0, \beta_1)$ is
\[
(\hat\beta_0, \hat\beta_1) = \operatorname*{argmin}_{\beta_0, \beta_1}\ \sum_{n=1}^N \bigl(y_n - \beta_1(x_n - \bar x) - \beta_0\bigr)^2.
\]
The localized model is
\[
y_n = \beta_{1n}(x_n - \bar x) + \beta_0 + \epsilon_n, \qquad \beta_{1n} \overset{\text{iid}}{\sim} N(\beta_1, \lambda^2), \qquad \epsilon_n \overset{\text{iid}}{\sim} N(0, \sigma^2),
\]
where $\{\beta_{1n}\}_{n=1}^N \perp\!\!\!\perp \{\epsilon_n\}_{n=1}^N$. Marginalizing out the $\beta_{1n}$'s gives
\[
y_n = \beta_1 (x_n - \bar x) + \beta_0 + \gamma_n, \qquad \gamma_n \sim N\bigl(0,\ (x_n - \bar x)^2\lambda^2 + \sigma^2\bigr).
\]
The maximum likelihood estimate of $(\beta_0, \beta_1)$ in the localized model thus becomes
\[
(\hat\beta_0, \hat\beta_1) = \operatorname*{argmin}_{\beta_0, \beta_1}\ \sum_{n=1}^N \frac{\bigl(y_n - \beta_1(x_n - \bar x) - \beta_0\bigr)^2}{(x_n - \bar x)^2\lambda^2 + \sigma^2}.
\]
This is equivalent to the reweighting approach with
\[
w_n = \frac{1}{(x_n - \bar x)^2\lambda^2 + \sigma^2}.
\]
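As a quick numerical illustration of this equivalence (our sketch, not the authors' code), the weights above reduce the localized linear model to an ordinary weighted least-squares fit; the sample size and parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, beta0, beta1, sigma, lam = 200, 1.0, 2.0, 0.5, 0.3

x = rng.uniform(-3.0, 3.0, size=N)
xc = x - x.mean()
# Localized model: each observation gets its own slope beta1 + lam * noise.
y = beta0 + (beta1 + lam * rng.normal(size=N)) * xc + sigma * rng.normal(size=N)

# Reweighting view: w_n = 1 / ((x_n - xbar)^2 * lam^2 + sigma^2).
w = 1.0 / (xc**2 * lam**2 + sigma**2)

# Weighted least squares for (beta0, beta1).
X = np.column_stack([np.ones(N), xc])
WX = X * w[:, None]
beta_hat = np.linalg.solve(X.T @ WX, WX.T @ y)
print("weighted LS estimate:", beta_hat)   # close to (beta0, beta1)
```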
We now generalize this argument to generalized linear models.
Theorem 3 Localization in a GLM with identity link infers $\beta_1$ from
\[
y_n \mid x_n, \beta_{1n}, \beta_0 \sim \exp\Bigl\{\frac{y_n\,\eta_n - b_1(\eta_n)}{a_1(\phi)} + c_1(y_n, \phi)\Bigr\},\qquad \eta_n = \beta_0 + \beta_{1n}\,(x_n - \bar x),
\]
\[
\beta_{1n} \mid \beta_1 \sim \exp\Bigl\{\frac{\beta_{1n}\,\beta_1 - b_2(\beta_1)}{a_2(\nu)} + c_2(\beta_{1n}, \nu)\Bigr\},
\]
where $a_1(\cdot), a_2(\cdot)$ denote dispersion constants, $b_1(\cdot), b_2(\cdot)$ denote normalizing constants, and $c_1(\cdot), c_2(\cdot)$ denote carrier densities of exponential family distributions.
Inferring $\beta_1$ from this localized GLM is equivalent to inferring $\beta_1$ from the reweighted model with weights
\[
w_n = \mathbb E_{p(\beta_{1n}\mid\beta_1)}\Bigl[\exp\Bigl(\frac{\bigl(y_n - \mathbb E(y_n \mid \beta_0 + \tilde\beta_{1n}(x_n - \bar x))\bigr)(\beta_{1n} - \beta_1)(x_n - \bar x)}{a_1(\phi)}\Bigr)\Bigr]
\]
for some $\{\tilde\beta_{1n}\}_{n=1}^N$.
Proof A classical GLM with an identity link is
\[
y_n \sim \exp\Bigl\{\frac{y_n\,\eta_n - b_1(\eta_n)}{a_1(\phi)} + c_1(y_n, \phi)\Bigr\}, \qquad \eta_n = \beta_0 + \beta_1\,(x_n - \bar x),
\]
whose maximum likelihood estimate calculates
\[
(\hat\beta_0, \hat\beta_1) = \operatorname*{argmax}_{\beta_0,\beta_1}\ \prod_{n=1}^N L_{c,n},\quad\text{where}\quad
L_{c,n} = \exp\Bigl\{\frac{y_n(\beta_0 + \beta_1(x_n - \bar x)) - b_1(\beta_0 + \beta_1(x_n - \bar x))}{a_1(\phi)} + c_1(y_n, \phi)\Bigr\}.
\]
On the other hand, the maximum likelihood estimate of the localized model calculates
\[
(\hat\beta_0, \hat\beta_1) = \operatorname*{argmax}_{\beta_0,\beta_1}\ \prod_{n=1}^N L_{l,n},
\]
where
\[
L_{l,n} = \int \exp\Bigl\{\frac{y_n(\beta_0 + \beta_{1n}(x_n - \bar x)) - b_1(\beta_0 + \beta_{1n}(x_n - \bar x))}{a_1(\phi)} + c_1(y_n, \phi) + \frac{\beta_{1n}\beta_1 - b_2(\beta_1)}{a_2(\nu)} + c_2(\beta_{1n}, \nu)\Bigr\}\, d\beta_{1n}.
\]
A localized GLM is thus reweighting the likelihood term of each observation by
\begin{align*}
\frac{L_{l,n}}{L_{c,n}} &= \int \exp\Bigl\{\frac{y_n(\beta_{1n} - \beta_1)(x_n - \bar x) - b_1(\beta_0 + \beta_{1n}(x_n - \bar x)) + b_1(\beta_0 + \beta_1(x_n - \bar x))}{a_1(\phi)} + \frac{\beta_{1n}\beta_1 - b_2(\beta_1)}{a_2(\nu)} + c_2(\beta_{1n}, \nu)\Bigr\}\, d\beta_{1n}\\
&= \int \exp\Bigl\{\frac{y_n(\beta_{1n} - \beta_1)(x_n - \bar x) - b_1'(\beta_0 + \tilde\beta_{1n}(x_n - \bar x))(\beta_{1n} - \beta_1)(x_n - \bar x)}{a_1(\phi)} + \frac{\beta_{1n}\beta_1 - b_2(\beta_1)}{a_2(\nu)} + c_2(\beta_{1n}, \nu)\Bigr\}\, d\beta_{1n}\\
&= \int \exp\Bigl\{\frac{\bigl(y_n - b_1'(\beta_0 + \tilde\beta_{1n}(x_n - \bar x))\bigr)(\beta_{1n} - \beta_1)(x_n - \bar x)}{a_1(\phi)} + \frac{\beta_{1n}\beta_1 - b_2(\beta_1)}{a_2(\nu)} + c_2(\beta_{1n}, \nu)\Bigr\}\, d\beta_{1n}\\
&= \mathbb E_{p(\beta_{1n}\mid\beta_1)}\Bigl[\exp\Bigl(\frac{\bigl(y_n - \mathbb E(y_n \mid \beta_0 + \tilde\beta_{1n}(x_n - \bar x))\bigr)(\beta_{1n} - \beta_1)(x_n - \bar x)}{a_1(\phi)}\Bigr)\Bigr],
\end{align*}
where $\tilde\beta_{1n}$ is some value between $\beta_1$ and $\beta_{1n}$ and the second equality is due to the mean value theorem. The last equality is due to $y_n$ residing in the exponential family.
B. Proof sketch of theorem 1
Denote as $\ell(y \mid \beta)$, $\beta \in \Theta$, the statistical model we fit to the data set $y_1, \dots, y_N \overset{\text{iid}}{\sim} \bar P_N$. Here $\ell(\cdot\mid\beta)$ is a density function with respect to some carrier measure $\nu(dy)$, and $\Theta$ is the parameter space of $\beta$.
Denote the desired true value of $\beta$ as $\beta_0$. Let $p_0(d\beta)$ be the prior measure, absolutely continuous in a neighborhood of $\beta_0$ with a continuous density at $\beta_0$. Let $p_w(dw)$ be the prior measure on the weights $(w_n)_{n=1}^N$. Finally, let the posterior means of $\beta$ under the weighted and unweighted models be $\bar\beta_w$ and $\bar\beta_u$, and the corresponding maximum likelihood estimates (MLEs) $\hat\beta_w$ and $\hat\beta_u$, respectively.
Let us start with some assumptions.
Assumption 1 $\ell(\cdot\mid\beta)$ is twice-differentiable and log-concave.
Assumption 2 There exists an increasing function $w(\cdot): \mathbb R \to \mathbb R^+$ such that $w_n = w(\log \ell(y_n\mid\beta))$ solves
\[
\frac{\partial}{\partial w_n}\, \log p_w\bigl((w_n)_{n=1}^N\bigr) + \log \ell(y_n\mid\beta) = 0, \qquad n = 1, \dots, N.
\]
We can immediately see that the bank of $\mathrm{Beta}(\alpha, \beta)$ priors with $\alpha > 1$ and the bank of $\mathrm{Gamma}(k, \theta)$ priors with $k > 1$ satisfy this condition.
Assumption 3 $P\bigl(|\log \ell(y_n \mid \hat\beta_w) - \log \ell(y_n \mid \beta_0)| < \epsilon\bigr) > 1 - \delta_1$ holds for all $n$, for some $\epsilon, \delta_1 > 0$.
This assumption includes the following two cases: (1) $\hat\beta_w$ is close to the true parameter $\beta_0$, i.e. the corruption is not at all influential in parameter estimation, and (2) deviant points in $y_1, \dots, y_N$ are far enough from typical observations coming from $\ell(y \mid \beta_0)$ that $\log \ell(y_n \mid \hat\beta_w)$ and $\log \ell(y_n \mid \beta_0)$ almost coincide. This assumption precisely explains why the RPM performs well in Section 3.
Assumption 4 $|\hat\beta_u - \beta_0| \ge M$ for some $M$.
Assumption 5 There exists a permutation $\pi: \{1, \dots, N\} \to \{1, \dots, N\}$ such that
\[
P\left(\ \frac{\sum_{n=1}^{k}\log \ell(y_{\pi(n)}\mid\beta_0)'}{\sum_{n=1}^{N}\log \ell(y_{\pi(n)}\mid\beta_0)'}\ \le\ \Bigl(1 - \frac4M\Bigr)\, \frac{\sum_{n=1}^{k}\log \ell(y_{\pi(n)}\mid\tilde\beta_n)''}{\sum_{n=1}^{N}\log \ell(y_{\pi(n)}\mid\check\beta_n)''},\ \ k = 1, \dots, N-1\right) \ge 1 - \delta_2,
\]
for $\tilde\beta_n$ and $\check\beta_n$ between $\hat\beta_u$ and $\beta_0$ and for some $\delta_2 > 0$.
By noticing that
\[
\frac{\sum_{n=1}^{N}\log \ell(y_n\mid\beta_0)'}{\sum_{n=1}^{N}\log \ell(y_n\mid\beta_0)'} = 1, \qquad
\frac{\sum_{n=1}^{N}\log \ell(y_n\mid\tilde\beta_n)''}{\sum_{n=1}^{N}\log \ell(y_n\mid\check\beta_n)''}\Bigl(1 - \frac4M\Bigr) \approx 1,
\]
and $\operatorname{Var}\bigl(\log \ell(y_n\mid\beta)'\bigr) \gg \operatorname{Var}\bigl(\log \ell(y_n\mid\beta)''\bigr)$ in general, this assumption is not particularly restrictive. For instance, a normal likelihood has $\operatorname{Var}\bigl(\log \ell(y_n\mid\beta)''\bigr) = 0$.
Theorem Assume Assumptions 1-5. There exists an $N^*$ such that for $N > N^*$, we have $|\bar\beta_u - \beta_0| \succeq_2 |\bar\beta_w - \beta_0|$, where $\succeq_2$ denotes second-order stochastic dominance.
Proof Sketch. We resort to MAP estimates of $\{w_n\}_{n=1}^N$ and $\delta_1 = \delta_2 = 0$ for simplicity of the sketch.
By the Bernstein-von Mises theorem, there exists $N^*$ s.t. $N > N^*$ implies the posterior means $\bar\beta_w$ and $\bar\beta_u$ are close to their corresponding MLEs $\hat\beta_w$ and $\hat\beta_u$. Thus it is sufficient to show instead that $|\hat\beta_u - \beta_0|\bigl(1 - \tfrac4M\bigr) \succeq_2 |\hat\beta_w - \beta_0|$.
By the mean value theorem, we have
\[
|\hat\beta_w - \beta_0| = \frac{-\sum_{n=1}^N w(\log \ell(y_n\mid\beta_0))\,\log \ell(y_n\mid\beta_0)'}{\sum_{n=1}^N w(\log \ell(y_n\mid\beta_0))\,\log \ell(y_n\mid\tilde\beta_n)''}
\qquad\text{and}\qquad
|\hat\beta_u - \beta_0| = \frac{-\sum_{n=1}^N \log \ell(y_n\mid\beta_0)'}{\sum_{n=1}^N \log \ell(y_n\mid\check\beta_n)''},
\]
where $\tilde\beta_n$ and $\check\beta_n$ are between $\hat\beta_u$ and $\beta_0$.
It is thus sufficient to show
\[
\Biggl|\ \sum_{n=1}^N w(\log \ell(y_n\mid\beta_0))\,\frac{\log \ell(y_n\mid\beta_0)'}{\sum_{n=1}^N \log \ell(y_n\mid\check\beta_n)''}\,\Bigl(1 - \frac4M\Bigr)\Biggr|\ \succeq_2\ \Biggl|\ \sum_{n=1}^N w(\log \ell(y_n\mid\beta_0))\,\frac{\log \ell(y_n\mid\tilde\beta_n)''}{\sum_{n=1}^N \log \ell(y_n\mid\beta_0)'}\Biggr|.
\]
This is true by Assumption 5 and a version of the stochastic majorization inequality (e.g., Theorem 7 of Egozcue & Wong, 2010).
The whole proof of Theorem 1 formalizes the intuitive argument that if we downweight an observation whenever it deviates from the true $\beta_0$, our posterior estimate will be closer to $\beta_0$ than without downweighting, given the presence of these disruptive observations.
C. Proof sketch of theorem 2
We again resort to MAP estimates of the weights for simplicity. Denote a probability distribution with a $t$-mass at $z$ as $P_t = t\delta_z + (1 - t)P_{\beta_0}$. By differentiating the estimating equation
\[
\int \bigl\{w(\log \ell(z \mid \beta))\,\log \ell'(z \mid \beta)\bigr\}\, P_t(z)\, dz = 0
\]
with respect to $t$, we obtain that
\[
IF(z; \hat\beta_w, \ell(\cdot\mid\beta_0)) = J_w(\beta_0)^{-1}\bigl\{w(\log \ell(z \mid \beta_0))\,\log \ell'(z\mid\beta_0)\bigr\},
\]
where
\[
J_w(\beta_0) = \mathbb E_{\ell(z\mid\beta_0)}\bigl[w(\log \ell(z \mid \beta_0))\,\log \ell'(z\mid\beta_0)\,\log \ell'(z\mid\beta_0)^\top\bigr].
\]
It is natural to consider $z$ with $\log \ell(z \mid \beta_0)$ negatively large as an outlier. By investigating the behavior of $w(a)$ as $a$ goes to $-\infty$, we can easily see that
\[
IF(z; \hat\beta_w, \ell(\cdot \mid \beta_0)) \to 0 \quad \text{as } \ell(z \mid \beta_0) \to 0,
\]
if
\[
\lim_{a\to-\infty} w(a) = 0 \quad \text{and} \quad \lim_{a\to-\infty} a\cdot w(a) < \infty.
\]
D. Empirical study details
We present details of the four models in Section 3.
D.1. Corrupted observations
We generate a data set $\{y_n\}_{n=1}^N$ of size $N = 100$, with $(1 - F)\cdot N$ observations from $\mathrm{Poisson}(5)$ and $F\cdot N$ observations from $\mathrm{Poisson}(50)$. The corruption rate $F$ takes values in $\{0, 0.05, 0.10, \dots, 0.45\}$.
The localized Poisson model is
\[
\{y_n\}_{n=1}^N \mid \{\theta_n\}_{n=1}^N \sim \prod_{n=1}^N \mathrm{Poisson}(y_n \mid \theta_n), \qquad \theta_n \mid \theta \overset{\text{iid}}{\sim} N(\theta, \sigma^2),
\]
with priors
\[
\theta \sim \mathrm{Gamma}(\gamma_a, \gamma_b), \qquad \sigma^2 \sim \mathrm{lognormal}(0, \nu^2).
\]
The RPM is
\[
p\bigl(\{y_n\}_{n=1}^N, \theta, \{w_n\}_{n=1}^N\bigr) = \Bigl[\prod_{n=1}^N \mathrm{Poisson}(y_n; \theta)^{w_n}\Bigr]\, \mathrm{Gamma}(\theta \mid 2, 0.5)\, \Bigl[\prod_{n=1}^N \mathrm{Beta}(w_n; 0.1, 0.01)\Bigr].
\]
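A minimal sketch of MAP inference in a model of this form (ours, not the authors' implementation). To keep the updates in closed form it swaps in a Beta(a, 1) prior on the weights, a MAP-friendly stand-in for the Beta(0.1, 0.01) prior above, which is intended for full posterior inference; the Gamma(2, 0.5) prior is read as shape 2 and rate 0.5.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
F, N, a = 0.25, 100, 3.0          # corruption rate, sample size, Beta(a, 1) shape
y = np.concatenate([rng.poisson(5, int((1 - F) * N)),
                    rng.poisson(50, int(F * N))])

theta, w = y.mean(), np.ones(N)
for _ in range(200):
    # MAP update for theta under the Gamma(2, 0.5) prior (shape 2, rate 0.5).
    theta = (w @ y + 1.0) / (w.sum() + 0.5)
    # Closed-form MAP update for each weight under the Beta(a, 1) prior:
    # maximize w * log p(y_n | theta) + (a - 1) * log w over (0, 1].
    loglik = poisson.logpmf(y, theta)
    w = np.minimum(1.0, (a - 1.0) / np.maximum(-loglik, 1e-12))

print(theta)                                # pulled towards 5, not the contaminated mean
print(w[: int((1 - F) * N)].mean(), w[int((1 - F) * N):].mean())
```

Each step maximizes the joint log density in one block of variables, so the alternation is a simple coordinate-ascent scheme for the MAP objective.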
D.2. Missing latent groups
We generate a data set $\{(y_n, x_n)\}_{n=1}^N$ of size $N = 100$; $x_n \sim \mathrm{Unif}(-10, 10)$; $y_n \sim \mathrm{Bernoulli}\bigl(1/(1 + \exp(-p_n))\bigr)$, where $(1 - F)\cdot N$ of the observations have $p_n = 0.5\,x_n$ and $F\cdot N$ have $p_n = 0.01\,x_n$. The missing latent group size $F$ takes values in $\{0, 0.05, 0.10, \dots, 0.45\}$.
The localized model is
\[
y \mid x \sim \prod_{n=1}^N \mathrm{Bernoulli}\bigl(y_n \mid \mathrm{logit}^{-1}(\beta_{1n} x_n)\bigr), \qquad \beta_{1n} \sim N(\beta_1, \sigma^2),
\]
with priors
\[
\beta_1 \sim N(0, \tau^2), \qquad \sigma^2 \sim \mathrm{Gamma}(\gamma_a, \gamma_b).
\]
The RPM is
\[
p\bigl(\{y_n\}_{n=1}^N, \beta, \{w_n\}_{n=1}^N \mid \{x_n\}_{n=1}^N\bigr) = \Bigl[\prod_{n=1}^N \mathrm{Bernoulli}\bigl(y_n;\ 1/(1 + \exp(-\beta x_n))\bigr)^{w_n}\Bigr]\, N(\beta; 0, 10) \times \Bigl[\prod_{n=1}^N \mathrm{Beta}(w_n; 0.1, 0.01)\Bigr].
\]
D.3. Covariate dependence misspecification
We generate a data set $\{(y_n, x_{1n}, x_{2n})\}_{n=1}^N$ of size $N = 100$; $x_{1n} \overset{\text{iid}}{\sim} N(10, 5^2)$, $x_{2n} \overset{\text{iid}}{\sim} N(0, 10^2)$, $\beta_{0,1,2,3} \overset{\text{iid}}{\sim} \mathrm{Unif}(-10, 10)$, and $\epsilon_n \overset{\text{iid}}{\sim} N(0, 1)$.
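A small simulation sketch (ours) that generates one dataset for each of the three settings below, under the covariate and coefficient distributions just stated; the helper name and the case flag are illustrative.

```python
import numpy as np

def simulate_d3(case, N=100, seed=0):
    """Generate one dataset for the covariate-misspecification study (sketch).

    case 1: true model includes an x1*x2 interaction; case 2: an x2**2 term;
    case 3: the true model is linear in x1 and x2 (the fitted model drops x2).
    """
    rng = np.random.default_rng(seed)
    x1 = rng.normal(10.0, 5.0, size=N)
    x2 = rng.normal(0.0, 10.0, size=N)
    b0, b1, b2, b3 = rng.uniform(-10.0, 10.0, size=4)
    eps = rng.normal(0.0, 1.0, size=N)
    if case == 1:
        y = b0 + b1 * x1 + b2 * x2 + b3 * x1 * x2 + eps
    elif case == 2:
        y = b0 + b1 * x1 + b2 * x2 + b3 * x2**2 + eps
    else:
        y = b0 + b1 * x1 + b2 * x2 + eps
    return y, x1, x2, (b0, b1, b2, b3)

y, x1, x2, coefs = simulate_d3(case=1)
print(y.shape, np.round(coefs, 2))
```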
1. Missing an interaction term
Data generated from $y_n = \beta_0 + \beta_1 x_{1n} + \beta_2 x_{2n} + \beta_3 x_{1n} x_{2n} + \epsilon_n$.
The localized model is
\[
y \mid (x_1, x_2) \sim \prod_{n=1}^N N\bigl(y_n \mid \beta_{0n} + \beta_{1n} x_{1n} + \beta_{2n} x_{2n},\ \sigma^2\bigr), \qquad \beta_{jn} \mid \beta_j \overset{\text{iid}}{\sim} N(\beta_j, \sigma_j^2),
\]
with priors
\[
\beta_j \overset{\text{iid}}{\sim} N(0, \tau^2),\quad \sigma_j^2 \overset{\text{iid}}{\sim} \mathrm{lognormal}(0, \nu^2),\quad j = 0, 1, 2, \qquad \sigma^2 \sim \mathrm{Gamma}(\gamma_a, \gamma_b).
\]
The RPM is
\[
p\bigl(\{y_n\}_{n=1}^N, \beta_{0,1,2}, \{w_n\}_{n=1}^N \mid \{x_{1n}, x_{2n}\}_{n=1}^N\bigr) = \Bigl[\prod_{n=1}^N N\bigl(y_n;\ \beta_0 + \beta_1 x_{1n} + \beta_2 x_{2n},\ \sigma^2\bigr)^{w_n}\Bigr] \times \mathrm{Gamma}(\sigma^2; 1, 1) \times \Bigl[\prod_{j=0}^{2} N(\beta_j; 0, 10)\Bigr]\Bigl[\prod_{n=1}^N \mathrm{Beta}(w_n; 0.1, 0.01)\Bigr].
\]
2. Missing a quadratic term
Data generated from $y_n = \beta_0 + \beta_1 x_{1n} + \beta_2 x_{2n} + \beta_3 x_{2n}^2 + \epsilon_n$.
The localized model is
\[
y \mid (x_1, x_2) \sim \prod_{n=1}^N N\bigl(y_n \mid \beta_{0n} + \beta_{1n} x_{1n} + \beta_{2n} x_{2n},\ \sigma^2\bigr), \qquad \beta_{jn} \mid \beta_j \overset{\text{iid}}{\sim} N(\beta_j, \sigma_j^2),
\]
with priors
\[
\beta_j \overset{\text{iid}}{\sim} N(0, \tau^2),\quad \sigma_j^2 \overset{\text{iid}}{\sim} \mathrm{lognormal}(0, \nu^2),\quad j = 0, 1, 2, \qquad \sigma^2 \sim \mathrm{Gamma}(\gamma_a, \gamma_b).
\]
The RPM is
\[
p\bigl(\{y_n\}_{n=1}^N, \beta_{0,1,2}, \{w_n\}_{n=1}^N \mid \{x_{1n}, x_{2n}\}_{n=1}^N\bigr) = \Bigl[\prod_{n=1}^N N\bigl(y_n;\ \beta_0 + \beta_1 x_{1n} + \beta_2 x_{2n},\ \sigma^2\bigr)^{w_n}\Bigr] \times \mathrm{Gamma}(\sigma^2; 1, 1) \times \Bigl[\prod_{j=0}^{2} N(\beta_j; 0, 10)\Bigr]\Bigl[\prod_{n=1}^N \mathrm{Beta}(w_n; 0.1, 0.01)\Bigr].
\]
3. Missing a covariate
Data generated from $y_n = \beta_0 + \beta_1 x_{1n} + \beta_2 x_{2n} + \epsilon_n$.
The localized model is
\[
y \mid x_1 \sim \prod_{n=1}^N N\bigl(y_n \mid \beta_{0n} + \beta_{1n} x_{1n},\ \sigma^2\bigr), \qquad \beta_{jn} \mid \beta_j \overset{\text{iid}}{\sim} N(\beta_j, \sigma_j^2),
\]
with priors
\[
\beta_j \overset{\text{iid}}{\sim} N(0, \tau^2),\quad \sigma_j^2 \overset{\text{iid}}{\sim} \mathrm{lognormal}(0, \nu^2),\quad j = 0, 1, \qquad \sigma^2 \sim \mathrm{Gamma}(\gamma_a, \gamma_b).
\]
The RPM is
\[
p\bigl(\{y_n\}_{n=1}^N, \beta_{0,1}, \{w_n\}_{n=1}^N \mid \{x_{1n}\}_{n=1}^N\bigr) = \Bigl[\prod_{n=1}^N N\bigl(y_n;\ \beta_0 + \beta_1 x_{1n},\ \sigma^2\bigr)^{w_n}\Bigr] \times \mathrm{Gamma}(\sigma^2; 1, 1) \times \Bigl[\prod_{j=0}^{1} N(\beta_j; 0, 10)\Bigr]\Bigl[\prod_{n=1}^N \mathrm{Beta}(w_n; 0.1, 0.01)\Bigr].
\]
D.4. Skewed distributions
We generate a data set $\{(x_{1n}, x_{2n})\}_{n=1}^N$ of size $N = 2000$ from a mixture of three skew-normal distributions, with location parameters $(-2, -2)$, $(3, 0)$, $(-5, 7)$, scale parameters $(2, 2)$, $(2, 4)$, $(4, 2)$, shape parameters $-5, 10, 15$, and mixture proportions $0.3, 0.3, 0.4$. So the true number of components in this data set is 3.
The RPM is
\[
p\bigl(\{(x_{1n}, x_{2n})\}_{1}^{N}, \{\mu_k\}_{1}^{30}, \{\Sigma_k\}_{1}^{30}, \{\pi_k\}_{1}^{30}, \{w_n\}_{1}^{N}\bigr)
= \Bigl[\prod_{n=1}^{N}\Bigl(\sum_{k=1}^{30}\pi_k\, N\bigl((x_{1n}, x_{2n});\ \mu_k, \Sigma_k\bigr)\Bigr)^{w_n}\Bigr]
\Bigl[\prod_{k=1}^{30} N(\mu_{k,1}; 0, 10)\, N(\mu_{k,2}; 0, 10)\Bigr]
\times \Bigl[\prod_{k=1}^{30}\mathrm{lognormal}(\sigma_{k,1}; 0, 10)\,\mathrm{lognormal}(\sigma_{k,2}; 0, 10)\Bigr]
\times \mathrm{Dirichlet}\bigl((\pi_k)_{1}^{30}; 1\bigr)\Bigl[\prod_{n=1}^{N}\mathrm{Beta}(w_n; 1, 0.05)\Bigr],
\]
where $\mu_k = (\mu_{k,1}, \mu_{k,2})$ and $\Sigma_k = \operatorname{diag}(\sigma_{k,1}^2, \sigma_{k,2}^2)$.
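For concreteness, the following sketch (ours) draws such a dataset with scipy's skew-normal distribution, sampling each component coordinate-wise and applying the scalar shape parameter to both coordinates; that per-coordinate construction is an assumption on our part, not a detail stated above.

```python
import numpy as np
from scipy.stats import skewnorm

def simulate_skewnormal_mixture(n=2000, seed=0):
    """Draw a three-component skew-normal mixture with the parameters above (sketch)."""
    rng = np.random.default_rng(seed)
    locs = [(-2, -2), (3, 0), (-5, 7)]
    scales = [(2, 2), (2, 4), (4, 2)]
    shapes = [-5, 10, 15]
    props = [0.3, 0.3, 0.4]
    labels = rng.choice(3, size=n, p=props)
    x = np.empty((n, 2))
    for k in range(3):
        idx = np.where(labels == k)[0]
        for d in range(2):
            x[idx, d] = skewnorm.rvs(shapes[k], loc=locs[k][d],
                                     scale=scales[k][d], size=idx.size,
                                     random_state=rng)
    return x, labels

x, labels = simulate_skewnormal_mixture()
print(x.shape, np.bincount(labels) / len(labels))
```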
E. Poisson factorization model
Poisson factorization models a matrix of count data as a low-dimensional inner product (Cemgil, 2009; Gopalan et al., 2015). Consider a data matrix of size $U \times I$ with non-negative integer elements $x_{ui}$. In the recommendation example, we have $U$ users and $I$ items, with each entry $x_{ui}$ being the rating of user $u$ on item $i$.
The user-reweighted RPM is
\[
p\bigl(\{x_{ui}\}_{U\times I}, \{\theta_u\}_{1}^{U}, \{\beta_i\}_{1}^{I}\bigr)
= \Bigl[\prod_{u=1}^U\prod_{i=1}^I \mathrm{Poisson}(x_{ui};\ \theta_u^\top \beta_i)^{w_u}\Bigr]
\times \Bigl[\prod_{u=1}^U\prod_{k=1}^K \mathrm{Gamma}(\theta_{u,k}; 1, 0.001)\Bigr]\Bigl[\prod_{i=1}^I\prod_{k=1}^K \mathrm{Gamma}(\beta_{i,k}; 1, 0.001)\Bigr]
\times \prod_{u=1}^U \mathrm{Beta}(w_u; 100, 1),
\]
where K is the number of latent dimensions.
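The following sketch (ours, not the authors' code) shows one way to carry out MAP estimation in a model of this form: multiplicative EM-style updates for the factors, and the closed-form weight update that the Beta(100, 1) prior admits. The synthetic matrix, hyperparameters, and helper name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def reweighted_pf_map(X, K=10, a_w=100.0, prior_rate=1e-3, n_iter=200, seed=0):
    """MAP estimation for user-reweighted Poisson factorization (sketch).

    X: binary (or count) user-by-item matrix.  Gamma(1, prior_rate) priors on
    the factors, Beta(a_w, 1) prior on each user's weight w_u.
    """
    rng = np.random.default_rng(seed)
    U, I = X.shape
    theta = rng.gamma(1.0, 1.0, size=(U, K))
    beta = rng.gamma(1.0, 1.0, size=(I, K))
    w = np.ones(U)
    eps = 1e-10
    for _ in range(n_iter):
        rate = theta @ beta.T + eps                      # U x I Poisson rates
        ratio = X / rate
        # Multiplicative (EM-style) MAP updates for the factors.
        theta *= (w[:, None] * (ratio @ beta)) / (w[:, None] * beta.sum(0) + prior_rate)
        rate = theta @ beta.T + eps
        ratio = X / rate
        beta *= ((w[:, None] * ratio).T @ theta) / ((w @ theta)[None, :] + prior_rate)
        # Closed-form weight update under the Beta(a_w, 1) prior:
        # maximize w_u * loglik_u + (a_w - 1) * log w_u over (0, 1].
        rate = theta @ beta.T + eps
        loglik_u = np.sum(X * np.log(rate) - rate, axis=1)  # log(x!) = 0 for binary X
        w = np.minimum(1.0, (a_w - 1.0) / np.maximum(-loglik_u, eps))
    return theta, beta, w

# Tiny synthetic demo: a few indiscriminate users among many selective ones.
rng = np.random.default_rng(1)
X = (rng.random((100, 400)) < 0.05).astype(float)
X[:5] = (rng.random((5, 400)) < 0.6).astype(float)   # heavy, genre-agnostic viewers
theta, beta, w = reweighted_pf_map(X, K=5)
# The indiscriminate users typically receive the smallest weights.
print(np.round(w[:5], 2), np.round(w[5:].mean(), 2))
```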
Dataset. We use the MovieLens 1M data set: user-movie ratings collected from a movie recommendation service (http://grouplens.org/datasets/movielens/).
F. Profile of a downweighted user
Here we show a downweighted user in the RPM analysis of the MovieLens 1M dataset. This user watched 325 movies; we rank her movies according to their popularity in the dataset.
Title
Genres
Usual Suspects, The (1995)
2001: A Space Odyssey (1968)
Ghost (1990)
Lion King, The (1994)
Leaving Las Vegas (1995)
Star Trek: Generations (1994)
African Queen, The (1951)
GoldenEye (1995)
Birdcage, The (1996)
Much Ado About Nothing (1993)
Hudsucker Proxy, The (1994)
My Fair Lady (1964)
Philadelphia Story, The (1940)
James and the Giant Peach (1996)
Crumb (1994)
Remains of the Day, The (1993)
Adventures of Priscilla, Queen of the Desert, The (1994)
Reality Bites (1994)
Notorious (1946)
Brady Bunch Movie, The (1995)
Roman Holiday (1953)
Apartment, The (1960)
Rising Sun (1993)
Bringing Up Baby (1938)
Bridges of Madison County, The (1995)
Pocahontas (1995)
Hunchback of Notre Dame, The (1996)
Mr. Smith Goes to Washington (1939)
His Girl Friday (1940)
Tank Girl (1995)
Adventures of Robin Hood, The (1938)
Eat Drink Man Woman (1994)
American in Paris, An (1951)
Secret Garden, The (1993)
Short Cuts (1993)
Six Degrees of Separation (1993)
First Wives Club, The (1996)
Age of Innocence, The (1993)
Father of the Bride (1950)
My Favorite Year (1982)
Shadowlands (1993)
Some Folks Call It a Sling Blade (1993)
Little Women (1994)
Kids in the Hall: Brain Candy (1996)
Cat on a Hot Tin Roof (1958)
Corrina, Corrina (1994)
Muppet Treasure Island (1996)
39 Steps, The (1935)
Farewell My Concubine (1993)
Renaissance Man (1994)
With Honors (1994)
Virtuosity (1995)
Cold Comfort Farm (1995)
Crime|Thriller
Drama|Mystery|Sci-Fi|Thriller
Comedy|Romance|Thriller
Animation|Children’s|Musical
Drama|Romance
Action|Adventure|Sci-Fi
Action|Adventure|Romance|War
Action|Adventure|Thriller
Comedy
Comedy|Romance
Comedy|Romance
Musical|Romance
Comedy|Romance
Animation|Children’s|Musical
Documentary
Drama
Comedy|Drama
Comedy|Drama
Film-Noir|Romance|Thriller
Comedy
Comedy|Romance
Comedy|Drama
Action|Drama|Mystery
Comedy
Drama|Romance
Animation|Children’s|Musical
Animation|Children’s|Musical
Drama
Comedy
Action|Comedy|Musical|Sci-Fi
Action|Adventure
Comedy|Drama
Musical|Romance
Children’s|Drama
Drama
Drama
Comedy
Drama
Comedy
Comedy
Drama|Romance
Drama|Thriller
Drama
Comedy
Drama
Comedy|Drama|Romance
Adventure|Comedy|Musical
Thriller
Drama|Romance
Comedy|Drama|War
Comedy|Drama
Sci-Fi|Thriller
Comedy
%
45.0489
41.6259
32.0293
30.7457
27.3533
27.0171
26.1614
25.1222
19.7433
18.6125
17.1760
17.1760
15.5562
13.8142
13.1724
12.9279
12.8362
12.4389
12.0416
11.9499
11.8888
11.6748
11.1858
11.1553
10.9413
10.8802
10.8191
10.6663
10.5134
10.4218
10.0856
9.9939
9.7188
9.3215
9.0465
8.8325
8.6797
8.3435
8.2213
8.1601
8.1601
8.0990
8.0379
7.9768
7.7017
7.3961
7.3655
7.2127
7.2127
7.1210
6.7543
6.7543
6.4792
Man Without a Face, The (1993)
East of Eden (1955)
Three Colors: White (1994)
Shadow, The (1994)
Boomerang (1992)
Hellraiser: Bloodline (1996)
Basketball Diaries, The (1995)
My Man Godfrey (1936)
Very Brady Sequel, A (1996)
Screamers (1995)
Richie Rich (1994)
Beautiful Girls (1996)
Meet Me in St. Louis (1944)
Ghost and Mrs. Muir, The (1947)
Waiting to Exhale (1995)
Boxing Helena (1993)
Belle de jour (1967)
Goofy Movie, A (1995)
Spitfire Grill, The (1996)
Village of the Damned (1995)
Dracula: Dead and Loving It (1995)
Twelfth Night (1996)
Dead Man (1995)
Miracle on 34th Street (1994)
Halloween: The Curse of Michael Myers (1995)
Once Were Warriors (1994)
Kid in King Arthur’s Court, A (1995)
Road to Wellville, The (1994)
Restoration (1995)
Oliver & Company (1988)
Basquiat (1996)
Pagemaster, The (1994)
Giant (1956)
Surviving the Game (1994)
City Hall (1996)
Herbie Rides Again (1974)
Backbeat (1993)
Umbrellas of Cherbourg, The (1964)
Ruby in Paradise (1993)
Mrs. Winterbourne (1996)
Bed of Roses (1996)
Chungking Express (1994)
Free Willy 2: The Adventure Home (1995)
Party Girl (1995)
Solo (1996)
Stealing Beauty (1996)
Burnt By the Sun (Utomlyonnye solntsem) (1994)
Naked (1993)
Kicking and Screaming (1995)
Jeffrey (1995)
Made in America (1993)
Lawnmower Man 2: Beyond Cyberspace (1996)
Davy Crockett, King of the Wild Frontier (1955)
Vampire in Brooklyn (1995)
NeverEnding Story III, The (1994)
Candyman: Farewell to the Flesh (1995)
Air Up There, The (1994)
Drama
Drama
Drama
Action
Comedy|Romance
Action|Horror|Sci-Fi
Drama
Comedy
Comedy
Sci-Fi|Thriller
Children’s|Comedy
Drama
Musical
Drama|Romance
Comedy|Drama
Mystery|Romance|Thriller
Drama
Animation|Children’s|Comedy
Drama
Horror|Sci-Fi
Comedy|Horror
Comedy|Drama|Romance
Western
Drama
Horror|Thriller
Crime|Drama
Adventure|Comedy|Fantasy
Comedy
Drama
Animation|Children’s
Drama
Adventure|Animation|Fantasy
Drama
Action|Adventure|Thriller
Drama|Thriller
Adventure|Children’s|Comedy
Drama|Musical
Drama|Musical
Drama
Comedy|Romance
Drama|Romance
Drama|Mystery|Romance
Adventure|Children’s|Drama
Comedy
Action|Sci-Fi|Thriller
Drama
Drama
Drama
Comedy|Drama
Comedy
Comedy
Sci-Fi|Thriller
Western
Comedy|Romance
Adventure|Children’s|Fantasy
Horror
Comedy
6.4181
6.2958
5.9597
5.9291
5.6846
5.6540
5.5318
5.3790
5.3484
5.2567
5.1956
5.1650
5.1650
4.9817
4.9817
4.7983
4.7983
4.6760
4.6760
4.6149
4.5232
4.5232
4.4927
4.4621
4.4315
4.3704
4.3399
4.3399
4.2176
4.0648
3.9731
3.8814
3.8509
3.8509
3.8509
3.7897
3.6675
3.5758
3.5452
3.4841
3.4841
3.3619
3.3313
3.2702
3.1785
3.1479
3.1479
2.9034
2.9034
2.8729
2.8423
2.8117
2.7812
2.7506
2.6895
2.6284
2.6284
High School High (1996)
Young Poisoner’s Handbook, The (1995)
Jane Eyre (1996)
Jury Duty (1995)
Girl 6 (1996)
Farinelli: il castrato (1994)
Chamber, The (1996)
Blue in the Face (1995)
Little Buddha (1993)
King of the Hill (1993)
Shanghai Triad (Yao a yao yao dao waipo qiao) (1995)
Scarlet Letter, The (1995)
Blue Chips (1994)
House of the Spirits, The (1993)
Tom and Huck (1995)
Life with Mikey (1993)
For Love or Money (1993)
Princess Caraboo (1994)
Addiction, The (1995)
Mrs. Parker and the Vicious Circle (1994)
Cops and Robbersons (1994)
Wonderful, Horrible Life of Leni Riefenstahl, The (1993)
Strawberry and Chocolate (Fresa y chocolate) (1993)
Bread and Chocolate (Pane e cioccolata) (1973)
Of Human Bondage (1934)
To Live (Huozhe) (1994)
Now and Then (1995)
Flipper (1996)
Mr. Wrong (1996)
Before and After (1996)
Maya Lin: A Strong Clear Vision (1994)
Horseman on the Roof, The (Hussard sur le toit, Le) (1995)
Moonlight and Valentino (1995)
Andre (1994)
House Arrest (1996)
Celtic Pride (1996)
Amateur (1994)
White Man’s Burden (1995)
Heidi Fleiss: Hollywood Madam (1995)
Adventures of Pinocchio, The (1996)
National Lampoon’s Senior Trip (1995)
Angel and the Badman (1947)
Poison Ivy II (1995)
Bitter Moon (1992)
Perez Family, The (1995)
Georgia (1995)
Love in the Afternoon (1957)
Inkwell, The (1994)
Bloodsport 2 (1995)
Bad Company (1995)
Underneath, The (1995)
Widows’ Peak (1994)
Alaska (1996)
Jefferson in Paris (1995)
Penny Serenade (1941)
Big Green, The (1995)
What Happened Was... (1994)
Comedy
Crime
Drama|Romance
Comedy
Comedy
Drama|Musical
Drama
Comedy
Drama
Drama
Drama
Drama
Drama
Drama|Romance
Adventure|Children’s
Comedy
Comedy
Drama
Horror
Drama
Comedy
Documentary
Drama
Drama
Drama
Drama
Drama
Adventure|Children’s
Comedy
Drama|Mystery
Documentary
Drama
Drama|Romance
Adventure|Children’s
Comedy
Comedy
Crime|Drama|Thriller
Drama
Documentary
Adventure|Children’s
Comedy
Western
Thriller
Drama
Comedy|Romance
Drama
Comedy|Romance
Comedy|Drama
Action
Action
Mystery|Thriller
Drama
Adventure|Children’s
Drama
Drama|Romance
Children’s|Comedy
Comedy|Drama|Romance
2.5978
2.5367
2.5367
2.4756
2.4450
2.3227
2.2616
2.2005
2.2005
2.1699
2.1699
2.1699
2.1394
2.1394
2.0477
2.0477
2.0171
1.9560
1.9560
1.9254
1.9254
1.8949
1.8949
1.8643
1.8643
1.8337
1.8337
1.8032
1.8032
1.7115
1.6504
1.6504
1.6504
1.6504
1.6198
1.6198
1.6198
1.5892
1.5892
1.5892
1.5587
1.5587
1.5281
1.4976
1.4670
1.4364
1.4059
1.4059
1.4059
1.3753
1.3753
1.3447
1.2836
1.2531
1.2531
1.2531
1.2531
Great Day in Harlem, A (1994)
Underground (1995)
House Party 3 (1994)
Roommates (1995)
Getting Even with Dad (1994)
Cry, the Beloved Country (1995)
Stalingrad (1993)
Endless Summer 2, The (1994)
Browning Version, The (1994)
Fluke (1995)
Scarlet Letter, The (1926)
Pyromaniac’s Love Story, A (1995)
Castle Freak (1995)
Double Happiness (1994)
Month by the Lake, A (1995)
Once Upon a Time... When We Were Colored (1995)
Favor, The (1994)
Manny & Lo (1996)
Visitors, The (Les Visiteurs) (1993)
Carpool (1996)
Total Eclipse (1995)
Panther (1995)
Lassie (1994)
It’s My Party (1995)
Kaspar Hauser (1993)
It Takes Two (1995)
Purple Noon (1960)
Nadja (1994)
Haunted World of Edward D. Wood Jr., The (1995)
Dear Diary (Caro Diario) (1994)
Faces (1968)
Love & Human Remains (1993)
Man of the House (1995)
Curdled (1996)
Jack and Sarah (1995)
Denise Calls Up (1995)
Aparajito (1956)
Hunted, The (1995)
Colonel Chabert, Le (1994)
Thin Line Between Love and Hate, A (1996)
Nina Takes a Lover (1994)
Ciao, Professore! (Io speriamo che me la cavo ) (1993)
In the Bleak Midwinter (1995)
Naked in New York (1994)
Maybe, Maybe Not (Bewegte Mann, Der) (1994)
Police Story 4: Project S (Chao ji ji hua) (1993)
Algiers (1938)
Tom & Viv (1994)
Cold Fever (A koldum klaka) (1994)
Amazing Panda Adventure, The (1995)
Marlene Dietrich: Shadow and Light (1996)
Jupiter’s Wife (1994)
Stars Fell on Henrietta, The (1995)
Careful (1992)
Kika (1993)
Loaded (1994)
Killer (Bulletproof Heart) (1994)
Documentary
War
Comedy
Comedy|Drama
Comedy
Drama
War
Documentary
Drama
Children’s|Drama
Drama
Comedy|Romance
Horror
Drama
Comedy|Drama
Drama
Comedy|Romance
Drama
Comedy|Sci-Fi
Comedy|Crime
Drama|Romance
Drama
Adventure|Children’s
Drama
Drama
Comedy
Crime|Thriller
Drama
Documentary
Comedy|Drama
Drama
Comedy
Comedy
Crime
Romance
Comedy
Drama
Action
Drama|Romance|War
Comedy
Comedy|Romance
Drama
Comedy
Comedy|Romance
Comedy
Action
Drama|Romance
Drama
Comedy|Drama
Adventure|Children’s
Documentary
Documentary
Drama
Comedy
Drama
Drama|Thriller
Thriller
1.1919
1.1919
1.1614
1.1614
1.1308
1.1308
1.1308
1.1308
1.1308
1.1002
1.1002
1.0697
1.0697
1.0697
1.0391
1.0391
1.0086
1.0086
1.0086
0.9780
0.9780
0.9474
0.9474
0.9169
0.9169
0.9169
0.8863
0.8557
0.8557
0.8252
0.8252
0.7946
0.7946
0.7641
0.7641
0.7641
0.7641
0.7641
0.7335
0.7335
0.7335
0.7029
0.7029
0.7029
0.6724
0.6418
0.6418
0.6418
0.6112
0.6112
0.6112
0.6112
0.6112
0.5807
0.5807
0.5501
0.5501
Clean Slate (Coup de Torchon) (1981)
Killer: A Journal of Murder (1995)
301, 302 (1995)
New Jersey Drive (1995)
Gold Diggers: The Secret of Bear Mountain (1995)
Spirits of the Dead (Tre Passi nel Delirio) (1968)
Fear, The (1995)
From the Journals of Jean Seberg (1995)
Celestial Clockwork (1994)
They Made Me a Criminal (1939)
Man of the Year (1995)
New Age, The (1994)
Reluctant Debutante, The (1958)
Savage Nights (Nuits fauves, Les) (1992)
Faithful (1996)
Land and Freedom (Tierra y libertad) (1995)
Boys (1996)
Big Squeeze, The (1996)
Gumby: The Movie (1995)
All Things Fair (1996)
Kim (1950)
Infinity (1996)
Peanuts - Die Bank zahlt alles (1996)
Ed’s Next Move (1996)
Hour of the Pig, The (1993)
Walk in the Sun, A (1945)
Death in the Garden (Mort en ce jardin, La) (1956)
Collectionneuse, La (1967)
They Bite (1996)
Original Gangstas (1996)
Gordy (1995)
Last Klezmer, The (1995)
Butterfly Kiss (1995)
Talk of Angels (1998)
In the Line of Duty 2 (1987)
Tarantella (1995)
Under the Domin Tree (Etz Hadomim Tafus) (1994)
Dingo (1992)
Billy’s Holiday (1995)
Venice/Venice (1992)
Low Life, The (1994)
Phat Beach (1996)
Catwalk (1995)
Fall Time (1995)
Scream of Stone (Schrei aus Stein) (1991)
Frank and Ollie (1995)
Bye-Bye (1995)
Tigrero: A Film That Was Never Made (1994)
Wend Kuuni (God’s Gift) (1982)
Sonic Outlaws (1995)
Getting Away With Murder (1996)
Fausto (1993)
Brothers in Trouble (1995)
Foreign Student (1994)
Tough and Deadly (1995)
Moonlight Murder (1936)
Schlafes Bruder (Brother of Sleep) (1995)
Crime
Crime|Drama
Mystery
Crime|Drama
Adventure|Children’s
Horror
Horror
Documentary
Comedy
Crime|Drama
Documentary
Drama
Comedy|Drama
Drama
Comedy
War
Drama
Comedy|Drama
Animation|Children’s
Drama
Children’s|Drama
Drama
Comedy
Comedy
Drama|Mystery
Drama
Drama
Drama
Drama
Crime
Comedy
Documentary
Thriller
Drama
Action
Drama
Drama
Drama
Drama
Drama
Drama
Comedy
Documentary
Drama
Drama
Documentary
Drama
Documentary|Drama
Drama
Documentary
Comedy
Comedy
Drama
Drama
Action|Drama|Thriller
Mystery
Drama
0.5501
0.5501
0.5196
0.5196
0.4890
0.4890
0.4890
0.4890
0.4584
0.4584
0.4584
0.4279
0.4279
0.4279
0.4279
0.4279
0.3973
0.3973
0.3973
0.3973
0.3667
0.3667
0.3667
0.3667
0.3667
0.3667
0.3362
0.3362
0.3362
0.3362
0.3362
0.3056
0.3056
0.3056
0.3056
0.3056
0.2751
0.2751
0.2751
0.2751
0.2751
0.2751
0.2751
0.2445
0.2445
0.2445
0.2445
0.2445
0.2445
0.2445
0.2445
0.2445
0.2445
0.2445
0.2445
0.2445
0.2139
Metisse (Cafe au Lait) (1993)
Promise, The (Versprechen, Das) (1994)
Und keiner weint mir nach (1996)
Hungarian Fairy Tale, A (1987)
Liebelei (1933)
Paris, France (1993)
Girl in the Cadillac (1995)
Hostile Intentions (1994)
Two Bits (1995)
Rent-a-Kid (1995)
Beyond Bedlam (1993)
Touki Bouki (Journey of the Hyena) (1973)
Convent, The (Convento, O) (1995)
Open Season (1996)
Lotto Land (1995)
Frisk (1995)
Shadow of Angels (Schatten der Engel) (1976)
Yankee Zulu (1994)
Last of the High Kings, The (1996)
Sunset Park (1996)
Happy Weekend (1996)
Criminals (1996)
Happiness Is in the Field (1995)
Associate, The (L’Associe)(1982)
Target (1995)
Relative Fear (1994)
Honigmond (1996)
Eye of Vichy, The (Oeil de Vichy, L’) (1993)
Sweet Nothing (1995)
Harlem (1993)
Condition Red (1995)
Homage (1995)
Superweib, Das (1996)
Halfmoon (Paul Bowles - Halbmond) (1995)
Silence of the Palace, The (Saimt el Qusur) (1994)
Headless Body in Topless Bar (1995)
Rude (1995)
Garcu, Le (1995)
Guardian Angel (1994)
Roula (1995)
Jar, The (Khomreh) (1992)
Small Faces (1995)
New York Cop (1996)
Century (1993)
Comedy
Romance
Drama|Romance
Fantasy
Romance
Comedy
Drama
Action|Drama|Thriller
Drama
Comedy
Drama|Horror
Drama
Drama
Comedy
Drama
Drama
Drama
Comedy|Drama
Drama
Drama
Comedy
Documentary
Comedy
Comedy
Action|Drama
Horror|Thriller
Comedy
Documentary
Drama
Drama
Action|Drama|Thriller
Drama
Comedy
Drama
Drama
Comedy
Drama
Drama
Action|Drama|Thriller
Drama
Drama
Drama
Action|Crime
Drama
0.2139
0.2139
0.2139
0.2139
0.2139
0.2139
0.2139
0.2139
0.2139
0.2139
0.2139
0.2139
0.2139
0.2139
0.1834
0.1834
0.1834
0.1834
0.1834
0.1834
0.1834
0.1834
0.1528
0.1528
0.1528
0.1528
0.1528
0.1528
0.1528
0.1528
0.1528
0.1528
0.1222
0.1222
0.1222
0.1222
0.1222
0.1222
0.1222
0.0917
0.0917
0.0917
0.0917
0.0917
References
Cemgil, Ali Taylan. Bayesian inference for nonnegative matrix factorisation models. Computational Intelligence and
Neuroscience, 2009.
Egozcue, Martin and Wong, Wing-Keung. Gains from diversification on convex combinations: A majorization and stochastic
dominance approach. European Journal of Operational Research, 200(3):893–900, 2010.
Gopalan, Prem, Hofman, Jake M, and Blei, David M. Scalable recommendation with hierarchical Poisson factorization.
UAI, 2015.
LOCALLY ADAPTIVE CONFIDENCE BANDS∗
By Tim Patschkowski and Angelika Rohde
arXiv:1610.08929v2 [math.ST] 23 Nov 2016
Ruhr-Universität Bochum and Albert-Ludwigs-Universität Freiburg
We develop honest and locally adaptive confidence bands for probability densities. They provide substantially improved confidence statements in
case of inhomogeneous smoothness, and are easily implemented and visualized. The article contributes conceptual work on locally adaptive inference
as a straightforward modification of the global setting imposes severe obstacles for statistical purposes. Among others, we introduce a statistical notion
of local Hölder regularity and prove a correspondingly strong version of local adaptivity. We substantially relax the straightforward localization of
the self-similarity condition in order not to rule out prototypical densities.
The set of densities permanently excluded from the consideration is shown
to be pathological in a mathematically rigorous sense. On a technical level,
the crucial component for the verification of honesty is the identification
of an asymptotically least favorable stationary case by means of Slepian’s
comparison inequality.
1. Introduction. Let X1 , . . . , Xn be independent real-valued random variables which are identically distributed according to some unknown probability measure Pp with Lebesgue density p. Assume that p belongs to a nonparametric function
class P. For any interval [a, b] and any significance level α ∈ (0, 1), a confidence
band for p, described by a family of random intervals Cn (t, α), t ∈ [a, b], is said to
be (asymptotically) honest with respect to P if the coverage inequality
\[
\liminf_{n\to\infty}\ \inf_{p\in\mathcal P}\ \mathbb P_p^{\otimes n}\bigl(p(t)\in C_n(t,\alpha)\ \text{for all } t\in[a,b]\bigr)\ \ge\ 1-\alpha
\]
is satisfied. The aim of this article is to develop honest confidence bands Cn (t, α), t ∈
[a, b], with smallest possible width |Cn (t, α)| for every t ∈ [a, b]. Adaptive confidence sets maintain specific coverage probabilities over a large union of models
while shrinking at the fastest possible nonparametric rate simultaneously over all
submodels. If P is some class of densities within a union of Hölder balls H(β, L)
with fixed radius L > 0, the confidence band is called globally adaptive, cf. Cai and
Low (2004), if for every β > 0 and for every ε > 0 there exists some constant c > 0,
such that
\[
\limsup_{n\to\infty}\ \sup_{p\in H(\beta,L)\cap\mathcal P}\ \mathbb P_p^{\otimes n}\Bigl(\sup_{t\in[a,b]}|C_n(t,\alpha)|\ \ge\ c\cdot r_n(\beta)\Bigr)\ <\ \varepsilon.
\]
∗
Supported by the DFG Collaborative Research Center 823, Subproject C1, and DFG
Research Grant RO 3766/4-1.
Keywords and phrases: Local regularity and local adaptivity, honesty, confidence bands
in density estimation.
Here, rn (β) denotes the minimax-optimal rate of convergence for estimation under
supremum norm loss over H(β, L) ∩ P, possibly inflated by additional logarithmic
factors. However, if P equals the set of all densities contained in
\[
\bigcup_{0<\beta\le\beta^*} H(\beta,L),
\]
honest and adaptive confidence bands provably do not exist although adaptive
estimation is possible. Indeed, Low (1997) shows that honest random-length intervals for a probability density at a fixed point cannot have smaller expected width
than fixed-length confidence intervals with the size corresponding to the lowest
regularity under consideration. Consequently, it is not even possible to construct
a family of random intervals Cn (t, α), t ∈ [a, b], whose expected length shrinks at
the fastest possible rate simultaneously over two distinct nested Hölder balls with
fixed radius, and which is at the same time asymptotically honest for the union P
of these Hölder balls. Numerous attempts have been made to tackle this adaptation problem in alternative formulations. Whereas Genovese and Wasserman (2008)
relax the coverage property and do not require the confidence band to cover the
function itself but a simpler surrogate function capturing the original function’s
significant features, most of the approaches are based on a restriction of the parameter space. Under qualitative shape constraints, Hengartner and Stark (1995),
Dümbgen (1998, 2003), and Davies, Kovac and Meise (2009) achieve adaptive inference. Within the models of nonparametric regression and Gaussian white noise,
Picard and Tribouley (2000) succeeded to construct pointwise adaptive confidence
intervals under a self-similarity condition on the parameter space, see also Kueh
(2012) for thresholded needlet estimators. Under a similar condition, Giné and
Nickl (2010) even develop asymptotically honest confidence bands for probability
densities whose width is adaptive to the global Hölder exponent. Bull (2012) proved
that a slightly weakened version of the self-similarity condition is necessary and sufficient. Kerkyacharian, Nickl and Picard (2012) develop corresponding results in the
context of needlet density estimators on compact homogeneous manifolds. Under
the same type of self-similarity condition, adaptive confidence bands are developed
under a considerably generalized Smirnov-Bickel-Rosenblatt assumption based on
Gaussian multiplier bootstrap, see Chernozhukov, Chetverikov and Kato (2014a).
Hoffmann and Nickl (2011) introduce a nonparametric distinguishability condition,
under which adaptive confidence bands exist for finitely many models under consideration. Their condition is shown to be necessary and sufficient.
Similar important conclusions concerning adaptivity in terms of confidence statements are obtained under Hilbert space geometry with corresponding L2 -loss, see
Juditsky and Lambert-Lacroix (2003), Baraud (2004), Genovese and Wasserman
(2005), Cai and Low (2006), Robins and van der Vaart (2006), Bull and Nickl
(2013), and Nickl and Szabó (2016). Concerning Lp -loss, we also draw attention to
Carpentier (2013).
In this article, we develop locally adaptive confidence bands. They provide substantially improved confidence statements in case of inhomogeneous smoothness.
Conceptual work on locally adaptive inference is contributed as a straightforward
modification of the global setting imposes severe obstacles for statistical purposes.
It is already delicate to specify what a ”locally adaptive confidence band” should
be. Disregarding any measurability issues, one possibility is to require a confidence
band Cn,α = (Cn,α (t))t∈[0,1] to satisfy for every interval U ⊂ [a, b] and for every β
(possibly restricted to a prescribed range)
\[
\limsup_{n\to\infty}\ \sup_{\substack{p\in\mathcal P:\ p|_{U_\delta}\in H_{U_\delta}(\beta,L^*)}}\ \mathbb P_p^{\otimes n}\bigl(|C_{n,\alpha}(t)|\ \ge\ \eta\, r_n(\beta)\ \text{for some } t\in U\bigr)\ \to\ 0
\]
as η → ∞, where Uδ is the δ-enlargement of U . However, this definition reflects a
weaker notion of local adaptivity than the statistician may have in mind. On the
other hand, we prove that, uniformly over the function class P under consideration,
adaptation to the local or pointwise regularity in the sense of Daoudi, Lévy Véhel
and Meyer (1998), Seuret and Lévy Véhel (2002) or Jaffard (1995, 2006) is impossible. Indeed, not even adaptive estimation with respect to pointwise regularity at
a fixed point is achievable. Along the way, we introduce a statistically suitable notion
of local regularity βn,p (t), t ∈ [a, b], depending in particular on the sample size n.
We prove a corresponding strong version of local adaptivity, while we substantially
relax the straightforward localization of the global self-similarity condition in order
not to rule out prototypical densities. The set of functions which is excluded from
our parameter space diminishes for growing sample size and the set of permanently
excluded functions is shown to be pathological in a mathematically rigorous sense.
Our new confidence band appealingly relies on a discretized evaluation of a modified
Lepski-type density estimator, including an additional supremum in the empirical
bias term in the bandwidth selection criterion. A suitable discretization and a locally constant approximation allow to piece the pointwise constructions together in
order to obtain a continuum of confidence statements. The complex construction
makes the asymptotic calibration of the confidence band to the level α non-trivial.
Whereas the analysis of the related globally adaptive procedure of Giné and Nickl
(2010) reduces to the limiting distribution of the supremum of a stationary Gaussian
process, our locally adaptive approach leads to a highly non-stationary situation. A
crucial component is therefore the identification of a stationary process as a least favorable case by means of Slepian’s comparison inequality, subsequent to a Gaussian
reduction using recent non-asymptotic techniques of Chernozhukov, Chetverikov
and Kato (2014b). Due to the discretization, the band is computable and feasible
from a practical point of view without losing optimality between the mesh points.
Our results are exemplarily formulated in the density estimation framework but can
be mimicked in other nonparametric models. To keep the representation concise we
restrict the theory to locally adaptive kernel density estimators. The ideas can be
transferred to wavelet estimators to a large extent as has been done for globally
adaptive confidence bands in Giné and Nickl (2010).
The article is organized as follows. Basic notations are introduced in Section 2.
Section 3 presents the main contributions, that is a substantially relaxed localized
self-similarity condition in Subsection 3.1, the construction and in particular the
asymptotic calibration of the confidence band in Subsection 3.2 as well as its strong
local adaptivity properties in Subsection 3.3. Important supplementary results are
postponed to Section 4, whereas Section 5 presents the proofs of the main results.
Appendix A contains technical tools for the proofs of the main results.
2. Preliminaries and notation. Let X1 , . . . , Xn , n ≥ 4, be independent
random variables identically distributed according to some unknown probability
measure Pp on R with continuous Lebesgue density p. Subsequently, we consider
kernel density estimators
\[
\hat p_n(\cdot,h)\ =\ \frac1n\sum_{i=1}^n K_h(X_i-\cdot)
\]
with bandwidth h > 0 and rescaled kernel Kh (·) = h−1 K(·/h), where K is a
measurable and symmetric kernel with support contained in [−1, 1], integrating to
one, and of bounded variation. Furthermore, K is said to be of order l ∈ N if
\[
\int x^{j}K(x)\,dx = 0\ \text{ for } 1\le j\le l, \qquad \int x^{l+1}K(x)\,dx = c\ \text{ with } c\neq 0.
\]
For bandwidths of the form h = c · 2−j , j ∈ N, we abbreviate the notation writing
p̂n (·, h) = p̂n (·, j) and Kh = Kj . The open Euclidean ball of radius r around
some point t ∈ R is referred to as B(t, r). Subsequently, the sample is split into
two subsamples. For simplicity, we divide the sample into two parts of equal size
ñ = bn/2c, leaving possibly out the last observation. Let
\[
\chi_1=\{X_1,\dots,X_{\tilde n}\},\qquad \chi_2=\{X_{\tilde n+1},\dots,X_{2\tilde n}\}
\]
be the distinct subsamples, and denote by $\hat p_n^{(1)}(\cdot,h)$ and $\hat p_n^{(2)}(\cdot,h)$ the kernel density estimators with bandwidth $h$ based on $\chi_1$ and $\chi_2$, respectively. $\mathbb E_p^{\chi_1}$ and $\mathbb E_p^{\chi_2}$ denote the expectations with respect to the product measures
\[
\mathbb P_p^{\chi_1}=\text{joint distribution of } X_1,\dots,X_{\tilde n},\qquad \mathbb P_p^{\chi_2}=\text{joint distribution of } X_{\tilde n+1},\dots,X_{2\tilde n},
\]
respectively.
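A minimal numerical sketch of these estimators (ours, not part of the paper): a kernel density estimator with a compactly supported symmetric kernel, evaluated at dyadic bandwidths $h = c\cdot 2^{-j}$ on each half of a split sample. The Epanechnikov kernel, the constant $c = 1$, and the Beta(2, 5) sample are arbitrary choices consistent with the assumptions above.

```python
import numpy as np

def epanechnikov(u):
    """Symmetric kernel supported on [-1, 1], integrating to one."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def kde(sample, t, j, c=1.0, kernel=epanechnikov):
    """Kernel density estimator p_hat(t, j) with dyadic bandwidth h = c * 2**(-j)."""
    h = c * 2.0 ** (-j)
    u = (sample[None, :] - np.atleast_1d(t)[:, None]) / h
    return kernel(u).mean(axis=1) / h

rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=2000)                    # some density supported on [0, 1]
n_half = len(x) // 2
chi1, chi2 = x[:n_half], x[n_half:2 * n_half]    # the two subsamples
grid = np.linspace(0.1, 0.9, 5)
for j in (3, 5, 7):
    print(j, np.round(kde(chi1, grid, j), 3), np.round(kde(chi2, grid, j), 3))
```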
For some measure $Q$, we denote by $\|\cdot\|_{L_p(Q)}$ the $L_p$-norm with respect to $Q$. If $Q$ is the Lebesgue measure, we simply write $\|\cdot\|_p$, whereas $\|\cdot\|_{\sup}$ denotes the uniform norm. For any metric space $(M, d)$ and subset $K \subset M$, we define the covering
number N (K, d, ε) as the minimum number of closed balls with radius at most ε
(with respect to d) needed to cover K. If the metric d is induced by a norm k · k,
we write also N (K, k · k, ε) for N (K, d, ε). As has been shown by Nolan and Pollard
(1987) (Section 4 and Lemma 22), the class
\[
\mathcal K=\Bigl\{K\Bigl(\frac{\cdot-t}{h}\Bigr):\ t\in\mathbb R,\ h>0\Bigr\}
\]
with constant envelope $\|K\|_{\sup}$ satisfies
\[
N\bigl(\mathcal K,\ \|\cdot\|_{L_p(Q)},\ \varepsilon\|K\|_{\sup}\bigr)\ \le\ \Bigl(\frac A\varepsilon\Bigr)^{\nu},\qquad 0<\varepsilon\le1,\quad p=1,2, \tag{2.1}
\]
for all probability measures Q and for some finite and positive constants A and ν.
For $k\in\mathbb N$ we denote the $k$-th order Taylor polynomial of the function $p$ at point $y$ by $P^{p}_{y,k}$. Denoting furthermore by $\lfloor\beta\rfloor=\max\{n\in\mathbb N\cup\{0\}: n<\beta\}$, the Hölder class $H_U(\beta)$ to the parameter $\beta>0$ on the open interval $U\subset\mathbb R$ is defined as the set of functions $p:U\to\mathbb R$ admitting derivatives up to the order $\lfloor\beta\rfloor$ and having finite Hölder norm
\[
\|p\|_{\beta,U}\ =\ \sum_{k=0}^{\lfloor\beta\rfloor}\|p^{(k)}\|_{U}\ +\ \sup_{\substack{x,y\in U\\ x\neq y}}\frac{|p^{(\lfloor\beta\rfloor)}(x)-p^{(\lfloor\beta\rfloor)}(y)|}{|x-y|^{\beta-\lfloor\beta\rfloor}}\ <\ \infty.
\]
The corresponding Hölder ball with radius L > 0 is denoted by HU (β, L). With
this definition of k · kβ,U , the Hölder balls are nested, that is
HU (β2 , L) ⊂ HU (β1 , L)
for $0<\beta_1\le\beta_2<\infty$ and $|U|<1$. Finally, $H_U(\infty,L)=\bigcap_{\beta>0}H_U(\beta,L)$ and $H_U(\infty)=\bigcap_{\beta>0}H_U(\beta)$. Subsequently, for any real function $f(\beta)$, the expression
f (∞) is to be read as limβ→∞ f (β), provided that this limit exists. Additionally,
the class of probability densities p, such that p|U is contained in the Hölder class
HU (β, L) is denoted by PU (β, L). The indication of U is omitted when U = R.
3. Main results. In this section we pursue the new approach of locally adaptive confidence bands and present the main contribution of this article. A notion of
local Hölder regularity tailored to statistical purposes, a corresponding condition of
admissibility of a class of functions over which both asymptotic honesty and adaptivity (in a sense to be specified) can be achieved, as well as the construction of the
new confidence band are presented. As compared to globally adaptive confidence
bands, our confidence bands provide improved confidence statements for functions
with inhomogeneous smoothness.
Fig 1. Comparison of locally and globally adaptive confidence bands
Figure 1 illustrates the kind of adaptivity that the construction should reveal. The
shaded area sketches the intended locally adaptive confidence band as compared
to the globally adaptive band (dashed line) for the triangular density and for fixed
sample size n. This density is not smoother than Lipschitz at its maximal point
but infinitely smooth at both sides. The region where globally and locally adaptive
confidence bands coincide up to logarithmic factors (light gray regime in Figure 1)
should shrink as the sample size increases, resulting in a substantial benefit of
the locally adaptive confidence band outside of a shrinking neighborhood of the
maximal point (dark grey regime together with a shrinking middle grey transition
regime in Figure 1).
3.1. Admissible functions. As already pointed out in the introduction, no
confidence band exists which is simultaneously honest and adaptive. It is necessary
to impose a condition which guarantees the possibility of recovering the unknown
smoothness parameter from the data. The subsequently introduced notion of admissibility aligns with the self-similarity condition as used in Picard and Tribouley
(2000) and Giné and Nickl (2010), among others. Their self-similarity condition
ensures that the data contains enough information to infer on the function's regularity. As also emphasized in Nickl (2015), self-similarity conditions turn out to
be compatible with commonly used adaptive procedures and have been shown to
be sufficient and necessary for adaptation to a continuum of smoothing parameters
in Bull (2012) when measuring the performance by the $L^\infty$-loss. Giné and Nickl
(2010) consider globally adaptive confidence bands over the set
\[
(3.1) \qquad \bigcup_{\beta_* \le \beta \le \beta^*} \Bigl\{ p \in \mathcal P(\beta, L) : p \ge \delta \text{ on } [-\varepsilon, 1+\varepsilon],\
\frac{c}{2}\, 2^{-j\beta} \le \|K_j \ast p - p\|_{\sup} \text{ for all } j \ge j_0 \Bigr\}
\]
for some constant c > 0 and 0 < ε < 1, where β ∗ = l + 1 with l the order of the
kernel. They work on the scale of Hölder-Zygmund rather than Hölder classes. For
this reason they include the corresponding bias upper bound condition which is not
automatically satisfied for β = β ∗ in that case.
Remark 1. As mentioned in Giné and Nickl (2010), if $K(\cdot) = \frac{1}{2} 1\{\cdot \in [-1, 1]\}$
is the rectangular kernel, the set of all twice differentiable densities $p \in \mathcal P(2, L)$ that
are supported in a fixed compact interval $[a, b]$ satisfies (3.1) with a constant $c > 0$.
The reason is that, due to the constraint of being a probability density, $\|p''\|_{\sup}$ is
bounded away from zero uniformly over this class; in particular, $p''$ cannot vanish
everywhere.

A localized version of the self-similarity condition characterizing the above class
reads as follows. For any nondegenerate interval $(u, v) \subset [0, 1]$, there exists some
$\beta \in [\beta_*, \beta^*]$ with $p|_{(u,v)} \in \mathcal P_{(u,v)}(\beta, L^*)$ and
\[
(3.2) \qquad c \cdot 2^{-j\beta} \le \sup_{s \in (u + 2^{-j},\, v - 2^{-j})} |(K_j \ast p)(s) - p(s)|
\]
for all $j \ge j_0 \vee \log_2(1/(v - u))$.
Remark 2. Inequality (3.2) can be satisfied only for
\[
\tilde\beta = \tilde\beta_p(U) = \sup\bigl\{\beta \in (0, \infty] : p|_U \in \mathcal H_U(\beta)\bigr\}.
\]
The converse is not true, however.

(i) There exist functions $p : U \to \mathbb R$, $U \subset \mathbb R$ some interval, which are not Hölder
continuous to their exponent $\tilde\beta$. The Weierstraß function $W_1 : U \to \mathbb R$ with
\[
W_1(\cdot) = \sum_{n=0}^{\infty} 2^{-n} \cos(2^n \pi\, \cdot)
\]
is such an example. Indeed, Hardy (1916) proves that
\[
W_1(x + h) - W_1(x) = O\Bigl(|h| \log \frac{1}{|h|}\Bigr),
\]
which implies the Hölder continuity to any parameter $\beta < 1$, hence $\tilde\beta \ge 1$. Moreover,
he shows in the same reference that $W_1$ is nowhere differentiable, meaning that it
cannot be Lipschitz continuous, that is, $\tilde\beta = 1$ but $W_1 \notin \mathcal H_U(\tilde\beta)$.

(ii) It can also happen that $p|_U \in \mathcal H_U(\tilde\beta)$ but
\[
(3.3) \qquad \limsup_{\delta \to 0}\ \sup_{\substack{|x - y| \le \delta \\ x, y \in U}}
\frac{|p^{(\lfloor\tilde\beta\rfloor)}(x) - p^{(\lfloor\tilde\beta\rfloor)}(y)|}{|x - y|^{\tilde\beta - \lfloor\tilde\beta\rfloor}} = 0,
\]
meaning that the left-hand side of (3.2) is violated. In the analysis literature, the
subset of functions in $\mathcal H_U(\tilde\beta)$ satisfying (3.3) is called little Lipschitz (or little
Hölder) space. As a complement of an open and dense set, it forms a nowhere
dense subset of $\mathcal H_U(\tilde\beta)$.
Due to the localization, a condition like (3.2) rules out examples which seem
typical to statisticians. Assume that $K$ is a kernel of order $l$ with $l \ge 1$, and
recall $\beta^* = l + 1$. Then (3.2) excludes, for instance, the triangular density in Figure 1,
because both sides are linear; in particular, the second derivative exists and vanishes
when restricted to an interval $U$ which does not contain the maximal point. In
contrast to the observation in Remark 1, $\|p''\|_U$ may vanish for subintervals $U \subset [a, b]$.
For the same reason, densities with a constant piece are excluded. In general, if $p$
restricted to the $2^{-j_0}$-enlargement of $U$ is a polynomial of order at most $l$, then
(3.2) is violated: the supremum on its right-hand side vanishes, while its left-hand
side is strictly positive. In view of these deficiencies, a condition like (3.2) is
insufficient for statistical purposes.
To circumvent this deficit, we introduce $\|\cdot\|_{\beta, \beta^*, U}$ by
\[
(3.4) \qquad \|p\|_{\beta, \beta^*, U} = \sum_{k=0}^{\lfloor \beta \wedge \beta^* \rfloor} \|p^{(k)}\|_U
+ \sup_{\substack{x, y \in U \\ x \neq y}} \frac{|p^{(\lfloor \beta \wedge \beta^* \rfloor)}(x) - p^{(\lfloor \beta \wedge \beta^* \rfloor)}(y)|}{|x - y|^{\beta - \lfloor \beta \wedge \beta^* \rfloor}}
\]
for $\beta > 0$ and for some bounded open subinterval $U \subset \mathbb R$. As verified in Lemma A.4,
$\|p\|_{\beta_1, \beta^*, U} \le \|p\|_{\beta_2, \beta^*, U}$ for $0 < \beta_1 \le \beta_2 < \infty$ and $|U| \le 1$. With the help of
$\|\cdot\|_{\beta, \beta^*, U}$, we formulate a localized self-similarity type condition in the subsequent
Assumption 3.1, which does not exclude the prototypical densities mentioned above.
For any bounded open interval $U \subset \mathbb R$, let $\mathcal H_{\beta^*, U}(\beta, L)$ be the set of functions
$p : U \to \mathbb R$ admitting derivatives up to the order $\lfloor \beta \wedge \beta^* \rfloor$ with $\|p\|_{\beta, \beta^*, U} \le L$.
Moreover, $\mathcal H_{\beta^*, U}(\beta)$ is the set of functions $p : U \to \mathbb R$ such that $\|p\|_{\beta, \beta^*, U}$ is
well-defined and finite. Correspondingly, $\mathcal H_{\beta^*, U}(\infty, L) = \bigcap_{\beta > 0} \mathcal H_{\beta^*, U}(\beta, L)$ and
$\mathcal H_{\beta^*, U}(\infty) = \bigcap_{\beta > 0} \mathcal H_{\beta^*, U}(\beta)$. Define furthermore
\[
(3.5) \qquad \beta_p(U) = \sup\bigl\{\beta \in (0, \infty] : p|_U \in \mathcal H_{\beta^*, U}(\beta, L^*)\bigr\}.
\]
Remark 3. If for some open interval $U \subset [0, 1]$ the derivative $p|_U^{(\beta^*)}$ exists and
$p|_U^{(\beta^*)} \equiv 0$, then $\|p\|_{\beta, \beta^*, U}$ is finite uniformly over all $\beta > 0$. If $p|_U^{(\beta^*)} \not\equiv 0$,
then $\|p\|_{\beta, \beta^*, U}$ is finite if and only if $\beta \le \beta^*$, as a consequence of the mean value
theorem. That is, $\beta_p(U) \in (0, \beta^*] \cup \{\infty\}$.
Assumption 3.1. For sample size $n \in \mathbb N$, some $0 < \varepsilon < 1$, $0 < \beta_* < 1$, and
$L^* > 0$, a density $p$ is said to be admissible if $p \in \mathcal P_{(-\varepsilon, 1+\varepsilon)}(\beta_*, L^*)$ and the following
holds true: for any $t \in [0, 1]$ and for any $h \in \mathcal G_\infty$ with
\[
\mathcal G_\infty = \bigl\{2^{-j} : j \in \mathbb N,\ j \ge j_{\min} = \lceil 2 \vee \log_2(2/\varepsilon) \rceil\bigr\},
\]
there exists some $\beta \in [\beta_*, \beta^*] \cup \{\infty\}$ such that the following conditions are satisfied
for $u = h$ or $u = 2h$:
\[
(3.6) \qquad p|_{B(t,u)} \in \mathcal H_{\beta^*, B(t,u)}(\beta, L^*)
\]
and
\[
(3.7) \qquad \sup_{s \in B(t, u - g)} |(K_g \ast p)(s) - p(s)| \ge \frac{g^\beta}{\log n}
\qquad \text{for all } g \in \mathcal G_\infty \text{ with } g \le u/8.
\]
The set of admissible densities is denoted by $\mathcal P_n^{\mathrm{adm}} = \mathcal P_n^{\mathrm{adm}}(K, \beta_*, L^*, \varepsilon)$.

Lemma 3.2. Any admissible density $p \in \mathcal P_n^{\mathrm{adm}}(K, \beta_*, L^*, \varepsilon)$ can satisfy (3.6)
and (3.7) for $\beta = \beta_p(B(t, u))$ only.
By construction, the collection of admissible densities is increasing with the number
of observations, that is, $\mathcal P_n^{\mathrm{adm}} \subset \mathcal P_{n+1}^{\mathrm{adm}}$, $n \in \mathbb N$. The logarithmic denominator even
weakens the assumption for growing sample size, permitting smaller and smaller
Lipschitz constants.
Remark 4. Assumption 3.1 does not require an admissible function to be totally
"unsmooth" everywhere. For instance, if $K$ is the rectangular kernel and $L^*$ is
sufficiently large, the triangular density as depicted in Figure 1 is (eventually, for
sufficiently large $n$) admissible. It is globally not smoother than Lipschitz, and the
bias lower bound condition (3.7) is (eventually) satisfied for $\beta = 1$ and pairs $(t, h)$
with $|t - 1/2| \le (7/8)h$. Although the bias lower bound condition to the exponent
$\beta^* = 2$ is not satisfied for any $(t, h)$ with $t \in [0, 1] \setminus (1/2 - h, 1/2 + h)$, these tuples
$(t, h)$ fulfill (3.6) and (3.7) for $\beta = \infty$, which is not excluded anymore by the new
Assumption 3.1. Finally, if the conditions (3.6) and (3.7) are not simultaneously
satisfied for some pair $(t, h)$ with
\[
\frac{7}{8}\, h < \Bigl| t - \frac{1}{2} \Bigr| < h,
\]
then they are fulfilled for the pair $(t, 2h)$ and $\beta = 1$, because $|t - 1/2| < (7/8) \cdot 2h$.
In view of this remark, it is crucial not to require (3.6) and (3.7) to hold for every
pair $(t, h)$. We now denote by
\[
\mathcal P_n = \mathcal P_n(K, \beta_*, L^*, \varepsilon, M) = \Bigl\{ p \in \mathcal P_n^{\mathrm{adm}}(K, \beta_*, L^*, \varepsilon) : \inf_{x \in [-\varepsilon, 1+\varepsilon]} p(x) \ge M \Bigr\}
\]
the set of admissible densities being bounded below by $M > 0$ on $[-\varepsilon, 1 + \varepsilon]$. We
restrict our considerations to combinations of parameters for which the class $\mathcal P_n$ is
non-empty.
The remaining results of this subsection are about the massiveness of the function
classes Pn . They are stated for the particular case of the rectangular kernel. Other
kernels may be treated with the same idea; verification of (3.7) however appears
to require a case-by-case analysis for different kernels. The following proposition
demonstrates that the pointwise minimax rate of convergence remains unchanged
when passing from the class H(β, L∗ ) to Pn ∩ H(β, L∗ ).
Proposition 3.3 (Lower pointwise risk bound). For the rectangular kernel $K_R$
there exists some constant $M > 0$, such that for any $t \in [0, 1]$, for any $\beta \in [\beta_*, 1]$,
for any $0 < \varepsilon < 1$, and for any $k \ge k_0(\beta_*)$ there exist some $x > 0$ and some
$L(\beta) > 0$ with
\[
\inf_{T_n}\ \sup_{\substack{p \in \mathcal P_k :\\ p|_{(-\varepsilon,1+\varepsilon)} \in \mathcal H_{(-\varepsilon,1+\varepsilon)}(\beta, L)}}
\mathbb P_p^{\otimes n}\Bigl( n^{\frac{\beta}{2\beta+1}} |T_n(t) - p(t)| \ge x \Bigr) > 0
\]
for all $L \ge L(\beta)$, for the class $\mathcal P_k = \mathcal P_k(K_R, \beta_*, L^*, \varepsilon, M)$, where the infimum is
running over all estimators $T_n$ based on $X_1, \dots, X_n$.
Note that the classical construction for the sequence of hypotheses in order to prove
minimax lower bounds consists of a smooth density distorted by small β-smooth
perturbations, properly scaled with the sample size n. However, there does not exist
a fixed constant c > 0, such that all of its members are contained in the class (3.1).
Thus, the constructed hypotheses in our proof are substantially more complex, for
which reason we restrict attention to β ≤ 1.
Although Assumption 3.1 becomes weaker for growing sample size, some densities
are permanently excluded from consideration. The following proposition states that
the exceptional set of permanently excluded densities is pathological.

Proposition 3.4. For the rectangular kernel $K_R(\cdot) = \frac{1}{2} 1\{\cdot \in [-1, 1]\}$, let
\[
\mathcal R = \bigcup_{n \in \mathbb N} \mathcal P_n^{\mathrm{adm}}(K_R, \beta_*, L^*, \varepsilon).
\]
Then, for any $t \in [0, 1]$, for any $h \in \mathcal G_\infty$ and for any $\beta \in [\beta_*, 1)$, the set
\[
\mathcal P_{B(t,h)}(\beta, L^*) \setminus \mathcal R|_{B(t,h)}
\]
is nowhere dense in $\mathcal P_{B(t,h)}(\beta, L^*)$ with respect to $\|\cdot\|_{\beta, B(t,h)}$.
Among more involved approximation steps, the proof reveals the existence of functions with the same regularity, in the sense of Assumption 3.1, on every interval for
$\beta \in (0, 1)$. This property is closely related to, but does not coincide with, the concept of mono-Hölder continuity from the analysis literature, see for instance Barral
et al. (2013). Hardy (1916) shows that the Weierstraß function is mono-Hölder continuous for $\beta \in (0, 1)$. For any $\beta \in (0, 1]$, the next lemma shows that Weierstraß'
construction
\[
(3.8) \qquad W_\beta(\cdot) = \sum_{n=0}^{\infty} 2^{-n\beta} \cos(2^n \pi\, \cdot)
\]
satisfies the bias condition (3.7) for the rectangular kernel to the exponent $\beta$ on
any subinterval $B(t, h)$, $t \in [0, 1]$, $h \in \mathcal G_\infty$.

Lemma 3.5. For all $\beta \in (0, 1)$, the Weierstraß function $W_\beta$ as defined in (3.8)
satisfies $W_\beta|_U \in \mathcal H_U(\beta, L_W)$ with some Lipschitz constant $L_W = L_W(\beta)$ for every
open interval $U$. For the rectangular kernel $K_R$ and $\beta \in (0, 1]$, the Weierstraß
function fulfills the bias lower bound condition
\[
\sup_{s \in B(t, h - g)} |(K_{R,g} \ast W_\beta)(s) - W_\beta(s)| > \Bigl(\frac{4}{\pi} - 1\Bigr) g^\beta
\]
for any $t \in \mathbb R$ and for any $g, h \in \mathcal G_\infty$ with $g \le h/2$.
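A quick numerical illustration of (3.8) and of the bias lower bound in Lemma 3.5, as a sketch in Python; the truncation level, grid sizes, and parameter choices are illustrative, and quadrature error makes the check only indicative.

import numpy as np

def weierstrass(x, beta, n_terms=30):
    # truncated W_beta(x) = sum_{n >= 0} 2^{-n*beta} cos(2^n * pi * x), cf. (3.8)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    n = np.arange(n_terms)[:, None]
    return np.sum(2.0 ** (-n * beta) * np.cos(2.0 ** n * np.pi * x[None, :]), axis=0)

def rectangular_smoothing_bias(t, h, g, beta, n_sup=801, n_quad=4001):
    # sup_{s in B(t, h-g)} |(K_{R,g} * W_beta)(s) - W_beta(s)| for the rectangular kernel,
    # using (K_{R,g} * W)(s) = (1/2) * int_{-1}^{1} W(s - g*u) du, approximated by a grid average
    s = np.linspace(t - (h - g), t + (h - g), n_sup)
    u = np.linspace(-1.0, 1.0, n_quad)
    smoothed = np.array([np.mean(weierstrass(si - g * u, beta)) for si in s])
    return np.max(np.abs(smoothed - weierstrass(s, beta)))

beta, t, h, g = 0.5, 0.3, 2.0 ** -3, 2.0 ** -5
print(rectangular_smoothing_bias(t, h, g, beta))   # observed smoothing bias on B(t, h-g)
print((4.0 / np.pi - 1.0) * g ** beta)             # lower bound (4/pi - 1) * g^beta from Lemma 3.5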
The whole scale of parameters $\beta \in [\beta_*, 1]$ in Proposition 3.4 can be covered by
passing over from Hölder classes to Hölder–Zygmund classes in the definition of $\mathcal P_n$.
Although the Weierstraß function $W_1$ in (3.8) is not Lipschitz, a classical result, see
Heurteaux (2005) or Mauldin and Williams (1986) and references therein, states
that $W_1$ is indeed contained in the Zygmund class $\Lambda_1$. That is, it satisfies
\[
|W_1(x + h) + W_1(x - h) - 2W_1(x)| \le C|h|
\]
for some $C > 0$, for all $x \in \mathbb R$ and for all $h > 0$. Due to the symmetry of the
rectangular kernel $K_R$, it therefore fulfills the bias upper bound
\[
\|K_{R,g} \ast W_1 - W_1\|_{\sup} \le C' g^{\beta}
\]
for all $g \in (0, 1]$. The local adaptivity theory can likewise be developed on the scale
of Hölder–Zygmund rather than Hölder classes; here, we restrict attention to Hölder
classes because they are commonly considered in the theory of kernel density estimation.
3.2. Construction of the confidence band. The new confidence band is based
on a kernel density estimator with variable bandwidth, incorporating a localized but
not fully pointwise Lepski (1990) bandwidth selection procedure. A suitable discretization and a locally constant approximation allow us to piece the pointwise constructions together in order to obtain a continuum of confidence statements. The
complex construction makes the asymptotic calibration of the confidence band to
the level α non-trivial. Whereas the analysis of the related globally adaptive procedure of Giné and Nickl (2010) reduces to the limiting distribution of the supremum
of a stationary Gaussian process, our locally adaptive approach leads to a highly
non-stationary situation. An essential component is therefore the identification of
a stationary process as a least favorable case by means of Slepian’s comparison
inequality.
We now describe the procedure. The interval $[0, 1]$ is discretized into equally spaced
grid points, which serve as evaluation points for the locally adaptive estimator. We
discretize by a mesh of width
\[
\delta_n = \Bigl\lceil 2^{\frac{j_{\min}}{\beta_*}} \Bigl(\frac{\log \tilde n}{\tilde n}\Bigr)^{-\kappa_1} (\log \tilde n)^{\frac{2}{\beta_*}} \Bigr\rceil^{-1}
\]
with $\kappa_1 \ge 1/(2\beta_*)$, and set $H_n = \{k\delta_n : k \in \mathbb Z\}$. Fix now constants
\[
(3.9) \qquad c_1 > \frac{2}{\beta_* \log 2} \qquad \text{and} \qquad \kappa_2 > c_1 \log 2 + 4.
\]
Consider the set of bandwidth exponents
\[
\mathcal J_n = \Bigl\{ j \in \mathbb N : j_{\min} \le j \le j_{\max} = \log_2 \frac{\tilde n}{(\log \tilde n)^{\kappa_2}} \Bigr\}.
\]
The bound $j_{\min}$ ensures that $2^{-j} \le \varepsilon \wedge (1/4)$ for all $j \in \mathcal J_n$; since the interval under
consideration is $[0, 1]$, this prevents the degenerate situation in which infinite smoothness
in (3.14), and hence the corresponding local parametric rate, is attainable only in trivial
cases. The bound $j_{\max}$ is standard and in particular guarantees consistency of the kernel
density estimator with minimal bandwidth within the dyadic grid of bandwidths
\[
\mathcal G_n = \bigl\{ 2^{-j} : j \in \mathcal J_n \bigr\}.
\]
We define the set of admissible bandwidths for $t \in [0, 1]$ as
\[
(3.10) \qquad \mathcal A_n(t) = \Bigl\{ j \in \mathcal J_n :
\max_{s \in B(t, \frac{7}{8} \cdot 2^{-j}) \cap H_n} \bigl| \hat p_n^{(2)}(s, m) - \hat p_n^{(2)}(s, m') \bigr|
\le c_2 \sqrt{\frac{\log \tilde n}{\tilde n\, 2^{-m}}}
\ \text{ for all } m, m' \in \mathcal J_n \text{ with } m > m' > j + 2 \Bigr\}
\]
with constant $c_2 = c_2(A, \nu, \beta_*, L^*, K, \varepsilon)$ specified in the proof of Proposition 4.1.
Furthermore, let
\[
(3.11) \qquad \hat j_n(t) = \min \mathcal A_n(t), \qquad t \in [0, 1],
\]
and $\hat h_n(t) = 2^{-\hat j_n(t)}$. Note that a slight difference to the classical Lepski procedure
is the additional maximum in (3.10), which reflects the idea of adapting in a localized
but not completely pointwise manner for fixed sample size $n$. The bandwidth (3.11) is
determined for all mesh points $k\delta_n$, $k \in T_n = \{1, \dots, \delta_n^{-1}\}$, in $[0, 1]$, and set piecewise
constant in between. Accordingly, with
\[
\hat h^{\mathrm{loc}}_{n,1}(k) = 2^{-\hat j_n((k-1)\delta_n) - u_n}, \qquad \hat h^{\mathrm{loc}}_{n,2}(k) = 2^{-\hat j_n(k\delta_n) - u_n},
\]
where $u_n = c_1 \log\log \tilde n$ is some sequence implementing the undersmoothing, the
estimators are defined as
\[
(3.12) \qquad \hat h^{\mathrm{loc}}_n(t) = \hat h^{\mathrm{loc}}_{n,k} = \min\bigl\{ \hat h^{\mathrm{loc}}_{n,1}(k), \hat h^{\mathrm{loc}}_{n,2}(k) \bigr\}
\qquad \text{and} \qquad
\hat p^{\mathrm{loc}}_n(t, h) = \hat p_n^{(1)}(k\delta_n, h)
\]
for $t \in I_k = [(k-1)\delta_n, k\delta_n)$, $k \in T_n \setminus \{\delta_n^{-1}\}$, $I_{\delta_n^{-1}} = [1 - \delta_n, 1]$. The following theorem
lays the foundation for the construction of honest and locally adaptive confidence
bands.
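Before turning to the theorem, the selection rule (3.10)-(3.11) can be sketched in a few lines of Python; the constant c2, the grids, and the fallback used when A_n(t) is empty are illustrative choices and not the calibrated quantities of Proposition 4.1.

import numpy as np

def kde_rect(s_points, sample, j):
    # p_hat^{(2)}(s, .) with the rectangular kernel and bandwidth 2^{-j}
    h = 2.0 ** -j
    u = (np.atleast_1d(s_points)[:, None] - sample[None, :]) / h
    return (np.abs(u) <= 1.0).sum(axis=1) / (2.0 * h * len(sample))

def admissible_exponents(t, chi_2, J_n, H_n, c2, n_tilde):
    # A_n(t) of (3.10): j is admissible if all coarser pairs (m, m') agree on B(t, (7/8)*2^{-j}) intersected with H_n
    A = []
    for j in J_n:
        s = H_n[np.abs(H_n - t) <= (7.0 / 8.0) * 2.0 ** -j]
        if s.size == 0:
            s = np.array([t])          # safeguard for coarse illustrative grids
        ok = True
        for m in J_n:
            for m_prime in J_n:
                if m > m_prime > j + 2:
                    gap = np.max(np.abs(kde_rect(s, chi_2, m) - kde_rect(s, chi_2, m_prime)))
                    if gap > c2 * np.sqrt(np.log(n_tilde) / (n_tilde * 2.0 ** -m)):
                        ok = False
        if ok:
            A.append(j)
    return A

def local_exponent(t, chi_2, J_n, H_n, c2, n_tilde):
    # hat j_n(t) = min A_n(t) as in (3.11); the fallback to max(J_n) is only a safeguard for the sketch
    A = admissible_exponents(t, chi_2, J_n, H_n, c2, n_tilde)
    return min(A) if A else max(J_n)

The undersmoothed bandwidths of (3.12) are then obtained by subtracting nothing from the estimate itself but adding $u_n = c_1 \log\log\tilde n$ to the selected exponents at the two neighboring mesh points and taking the smaller of the two resulting bandwidths.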
Theorem 3.6 (Least favorable case). For the estimators defined in (3.12) and
normalizing sequences
\[
a_n = c_3 (-2\log\delta_n)^{1/2}, \qquad
b_n = \frac{(-2\log\delta_n)^{1/2}}{c_3} - \frac{\log(-\log\delta_n) + \log\frac{4\pi}{3}}{2(-2\log\delta_n)^{1/2}},
\]
with $c_3 = \sqrt{2}/TV(K)$, it holds that
\[
\liminf_{n \to \infty}\ \inf_{p \in \mathcal P_n} \mathbb P_p^{\otimes n}\Biggl( a_n \Bigl( \sup_{t \in [0,1]} \sqrt{\tilde n\, \hat h^{\mathrm{loc}}_n(t)}\, \bigl| \hat p^{\mathrm{loc}}_n(t, \hat h^{\mathrm{loc}}_n(t)) - p(t) \bigr| - b_n \Bigr) \le x \Biggr)
\ge 2\, \mathbb P\bigl( \sqrt{L^*}\, G \le x \bigr) - 1
\]
for some standard Gumbel distributed random variable $G$.
The proof of Theorem 3.6 is based on several completely non-asymptotic approximation techniques. The asymptotic Komlós–Major–Tusnády approximation technique
used in Giné and Nickl (2010) is avoided by means of non-asymptotic Gaussian
approximation results recently developed in Chernozhukov, Chetverikov and Kato
(2014b). The essential component of the proof of Theorem 3.6 is the application
of Slepian’s comparison inequality to reduce considerations from a non-stationary
Gaussian process to the least favorable case of a maximum of δn−1 independent and
identical standard normal random variables.
With $q_{1-\alpha/2}$ denoting the $(1 - \alpha/2)$-quantile of the standard Gumbel distribution,
we define the confidence band as the family of piecewise constant random intervals
$\mathcal C_{n,\alpha} = (C_{n,\alpha}(t))_{t \in [0,1]}$ with
\[
(3.13) \qquad C_{n,\alpha}(t) = \Biggl[ \hat p^{\mathrm{loc}}_n(t, \hat h^{\mathrm{loc}}_n(t)) - \frac{q_n(\alpha)}{\sqrt{\tilde n\, \hat h^{\mathrm{loc}}_n(t)}},\
\hat p^{\mathrm{loc}}_n(t, \hat h^{\mathrm{loc}}_n(t)) + \frac{q_n(\alpha)}{\sqrt{\tilde n\, \hat h^{\mathrm{loc}}_n(t)}} \Biggr]
\]
and
\[
q_n(\alpha) = \frac{\sqrt{L^*} \cdot q_{1-\alpha/2}}{a_n} + b_n.
\]
For fixed $\alpha > 0$, $q_n(\alpha) = O(\sqrt{\log n})$ as $n$ goes to infinity.
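Given the locally selected bandwidth, the interval (3.13) at a mesh point is a one-liner; the sketch below takes the normalizing sequences a_n and b_n as inputs (as displayed in Theorem 3.6) and uses the explicit Gumbel quantile, with all remaining names illustrative.

import numpy as np

def gumbel_quantile(q):
    # quantile function of the standard Gumbel distribution: F^{-1}(q) = -log(-log q)
    return -np.log(-np.log(q))

def band_at(p_hat_loc, h_loc, n_tilde, a_n, b_n, alpha, L_star):
    # C_{n,alpha}(t) of (3.13) at one mesh point, with q_n(alpha) = sqrt(L*) * q_{1-alpha/2} / a_n + b_n
    q_n = np.sqrt(L_star) * gumbel_quantile(1.0 - alpha / 2.0) / a_n + b_n
    half_width = q_n / np.sqrt(n_tilde * h_loc)
    return p_hat_loc - half_width, p_hat_loc + half_width

Since $\hat h^{\mathrm{loc}}_n$ is piecewise constant on the mesh intervals $I_k$, evaluating band_at at the mesh points already determines the whole band.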
Corollary 3.7 (Honesty). The confidence band as defined in (3.13) satisfies
\[
\liminf_{n \to \infty}\ \inf_{p \in \mathcal P_n} \mathbb P_p^{\otimes n}\bigl( p(t) \in C_{n,\alpha}(t) \text{ for every } t \in [0, 1] \bigr) \ge 1 - \alpha.
\]
3.3. Local Hölder regularity and local adaptivity. In the style of global
adaptivity in connection with confidence sets, one may call a confidence band
$\mathcal C_{n,\alpha} = (C_{n,\alpha}(t))_{t \in [0,1]}$ locally adaptive if for every interval $U \subset [0, 1]$,
\[
\limsup_{n \to \infty}\ \sup_{p \in \mathcal P_n :\, p|_{U_\delta} \in \mathcal H_{\beta^*, U_\delta}(\beta, L^*)}
\mathbb P_p^{\chi_2}\bigl( |C_{n,\alpha}(t)| \ge \eta \cdot r_n(\beta) \text{ for some } t \in U \bigr) \to 0
\]
as $\eta \to \infty$, for every $\beta \in [\beta_*, \beta^*]$, where $U_\delta$ is the open $\delta$-enlargement of $U$. As
a consequence of the subsequently formulated Theorem 3.12, our confidence band
satisfies this notion of local adaptivity up to a logarithmic factor. However, in view
of the picture in Figure 1, the statistician aims at a stronger notion of adaptivity,
where the asymptotic statement is not formulated for an arbitrary but fixed interval
$U$ only. Precisely, the goal would be to adapt even to some pointwise or local Hölder
regularity, two well established notions from analysis.
Definition 3.8 (Pointwise Hölder exponent, Seuret and Lévy Véhel (2002)).
Let $p : \mathbb R \to \mathbb R$ be a function, $\beta > 0$, $\beta \notin \mathbb N$, and $t \in \mathbb R$. Then $p \in \mathcal H_t(\beta)$ if and
only if there exist a real $R > 0$, a polynomial $P$ with degree less than $\lfloor \beta \rfloor$, and a
constant $c$ such that
\[
|p(x) - P(x - t)| \le c|x - t|^\beta
\]
for all $x \in B(t, R)$. The pointwise Hölder exponent is denoted by
\[
\beta_p(t) = \sup\{\beta : p \in \mathcal H_t(\beta)\}.
\]

Definition 3.9 (Local Hölder exponent, Seuret and Lévy Véhel (2002)).
Let $p : \Omega \to \mathbb R$ be a function and $\Omega \subset \mathbb R$ an open set. One classically says that
$p \in \mathcal H_{\mathrm{loc}}(\beta, \Omega)$, where $0 < \beta < 1$, if there exists a constant $c$ such that
\[
|p(x) - p(y)| \le c|x - y|^\beta
\]
for all $x, y \in \Omega$. If $m < \beta < m + 1$ for some $m \in \mathbb N$, then $p \in \mathcal H_{\mathrm{loc}}(\beta, \Omega)$ means that
there exists a constant $c$ such that
\[
|\partial^m p(x) - \partial^m p(y)| \le c|x - y|^{\beta - m}
\]
for all $x, y \in \Omega$. Set now
\[
\beta_p(\Omega) = \sup\{\beta : p \in \mathcal H_{\mathrm{loc}}(\beta, \Omega)\}.
\]
Finally, the local Hölder exponent in $t$ is defined as
\[
\beta_p^{\mathrm{loc}}(t) = \sup\{\beta_p(O_i) : i \in I\},
\]
where $(O_i)_{i \in I}$ is a decreasing family of open sets with $\bigcap_{i \in I} O_i = \{t\}$. [By Lemma 2.1
in Seuret and Lévy Véhel (2002), this notion is well defined, that is, it does not
depend on the particular choice of the decreasing sequence of open sets.]
The next proposition however shows that attaining the minimax rates of convergence corresponding to the pointwise or local Hölder exponent (possibly inflated by
some logarithmic factor) uniformly over $\mathcal P_n$ is an unachievable goal.

Proposition 3.10. For the rectangular kernel $K_R$ there exists some constant
$M > 0$, such that for any $t \in [0, 1]$, for any $\beta \in [\beta_*, 1]$, for any $0 < \varepsilon < 1$, and
for any $k \ge k_0(\beta_*)$ there exist some $x > 0$ and constants $L = L(\beta) > 0$ and
$c_4 = c_4(\beta) > 0$ with
\[
\inf_{T_n}\ \sup_{p \in \mathcal S_k(\beta)} \mathbb P_p^{\otimes n}\Bigl( n^{\frac{\beta}{2\beta+1}} |T_n(t) - p(t)| \ge x \Bigr) > 0
\qquad \text{for all } k \ge k_0(\beta_*)
\]
with
\[
\mathcal S_k(\beta) = \mathcal S_k(L, \beta, \beta_*, M, K_R, \varepsilon)
= \Bigl\{ p \in \mathcal P_k(K_R, \beta_*, L, \varepsilon, M) : \exists\, r \ge c_4 n^{-\frac{1}{2\beta+1}}
\text{ such that } p|_{B(t,r)} \in \mathcal H_{B(t,r)}(\infty, L) \cap \mathcal H_{(-\varepsilon,1+\varepsilon)}(\beta, L) \Bigr\},
\]
where the infimum is running over all estimators $T_n$ based on $X_1, \dots, X_n$.
The proposition furthermore reveals that if a density $p \in \mathcal P_k$ is Hölder smooth
to some exponent $\eta > \beta$ on a ball around $t$ with radius at least of the order
$n^{-1/(2\beta+1)}$, then no estimator for $p(t)$ can achieve a better rate than $n^{-\beta/(2\beta+1)}$.
We therefore introduce an $n$-dependent statistical notion of local regularity for any
point $t$. Roughly speaking, we intend it to be the maximal $\beta$ such that the density
attains this Hölder exponent within $B(t, h_{\beta,n})$, where $h_{\beta,n}$ is of the optimal adaptive
bandwidth order $(\log n/n)^{1/(2\beta+1)}$. We realize this idea with $\|\cdot\|_{\beta, \beta^*, U}$ as defined
in (3.4) and used in Assumption 3.1.

Definition 3.11 ($n$-dependent local Hölder exponent). With the classical optimal
bandwidth within the class $\mathcal H(\beta)$,
\[
h_{\beta,n} = 2^{-j_{\min}} \cdot \Bigl(\frac{\log \tilde n}{\tilde n}\Bigr)^{\frac{1}{2\beta+1}},
\]
define the class $\mathcal H_{\beta^*, n, t}(\beta, L)$ as the set of functions $p : B(t, h_{\beta,n}) \to \mathbb R$ such that $p$
admits derivatives up to the order $\lfloor \beta \wedge \beta^* \rfloor$ and $\|p\|_{\beta, \beta^*, B(t, h_{\beta,n})} \le L$, and $\mathcal H_{\beta^*, n, t}(\beta)$
the class of functions $p : B(t, h_{\beta,n}) \to \mathbb R$ for which $\|p\|_{\beta, \beta^*, B(t, h_{\beta,n})}$ is well-defined
and finite. The $n$-dependent local Hölder exponent for the function $p$ at point $t$ is
defined as
\[
(3.14) \qquad \beta_{n,p}(t) = \sup\bigl\{ \beta > 0 : p|_{B(t, h_{\beta,n})} \in \mathcal H_{\beta^*, n, t}(\beta, L^*) \bigr\}.
\]
If the supremum is running over the empty set, we set $\beta_{n,p}(t) = 0$.
Finally, the next theorem shows that the confidence band adapts to the $n$-dependent
local Hölder exponent.

Theorem 3.12 (Strong local adaptivity). There exists some $\gamma = \gamma(c_1)$, such
that
\[
\limsup_{n \to \infty}\ \sup_{p \in \mathcal P_n} \mathbb P_p^{\chi_2}\Biggl( \sup_{t \in [0,1]} |C_{n,\alpha}(t)| \cdot \Bigl(\frac{\log \tilde n}{\tilde n}\Bigr)^{-\frac{\beta_{n,p}(t)}{2\beta_{n,p}(t)+1}} \ge (\log \tilde n)^\gamma \Biggr) = 0.
\]
Note that the case $\beta_{n,p}(t) = \infty$ is not excluded in the formulation of Theorem 3.12.
That is, if $p|_U$ can be represented as a polynomial of degree strictly less than $\beta^*$,
the confidence band attains even adaptively the parametric width $n^{-1/2}$, up to
logarithmic factors. In particular, the band can be tighter than $n^{-\beta^*/(2\beta^*+1)}$. In
general, as long as $\delta \le \varepsilon$ and $B(t, h_{\beta^*,n}) \subset U_\delta$,
\[
\beta_{n,p}(t) \ge \beta_p(U_\delta) \qquad \text{for all } t \in U.
\]
Corollary 3.13 (Weak local adaptivity). For every interval $U \subset [0, 1]$,
\[
\limsup_{n \to \infty}\ \sup_{p \in \mathcal P_n :\, p|_{U_\delta} \in \mathcal H_{\beta^*, U_\delta}(\beta, L^*)}
\mathbb P_p^{\chi_2}\Biggl( \sup_{t \in U} |C_{n,\alpha}(t)| \ge \Bigl(\frac{\log \tilde n}{\tilde n}\Bigr)^{\frac{\beta}{2\beta+1}} (\log \tilde n)^\gamma \Biggr)
\]
is equal to zero for every $\beta \in [\beta_*, \beta^*]$, where $U_\delta$ is the open $\delta$-enlargement of $U$ and
$\gamma$ is as in Theorem 3.12.
4. Supplementary notation and results. The following auxiliary results are crucial ingredients in the proofs of Theorem 3.6 and Theorem 3.12.
Recalling the quantity $h_{\beta,n}$ in Definition 3.11, Proposition 4.1 shows that $2^{-\hat j_n(\cdot)}$
lies in a band around
\[
(4.1) \qquad \bar h_n(\cdot) = h_{\beta_{n,p}(\cdot),\, n}
\]
uniformly over all admissible densities $p \in \mathcal P_n$. Proposition 4.1 furthermore reveals
the necessity to undersmooth, which has already been discovered by Bickel and
Rosenblatt (1973), leading to a bandwidth deflated by some logarithmic factor. Set
now
\[
\bar j_n(\cdot) = \Bigl\lfloor \log_2 \frac{1}{\bar h_n(\cdot)} \Bigr\rfloor + 1,
\]
such that the bandwidth $2^{-\bar j_n(\cdot)}$ is an approximation of $\bar h_n(\cdot)$ by the next smaller
bandwidth on the grid $\mathcal G_n$ with
\[
\frac{1}{2}\, \bar h_n(\cdot) \le 2^{-\bar j_n(\cdot)} \le \bar h_n(\cdot).
\]
The next proposition states that the procedure chooses a bandwidth which, simultaneously in the location $t$, is neither too large nor too small.

Proposition 4.1. The bandwidth $\hat j_n(\cdot)$ defined in (3.11) satisfies
\[
\lim_{n \to \infty}\ \sup_{p \in \mathcal P_n} \Bigl\{ 1 - \mathbb P_p^{\chi_2}\Bigl( \hat j_n(k\delta_n) \in \bigl[ k_n(k\delta_n), \bar j_n(k\delta_n) + 1 \bigr] \text{ for all } k \in T_n \Bigr) \Bigr\} = 0,
\]
where $k_n(\cdot) = \bar j_n(\cdot) - m_n$, and $m_n = \frac{1}{2} c_1 \log\log \tilde n$.
Lemma 4.2. Let $s, t \in [0, 1]$ be two points with $s < t$, and let $z \in (s, t)$. If
\[
(4.2) \qquad |s - t| \le \frac{1}{8}\, h_{\beta_*, n},
\]
then
\[
\frac{1}{3}\, \bar h_n(z) \le \min\bigl\{ \bar h_n(s), \bar h_n(t) \bigr\} \le 3\, \bar h_n(z).
\]
Lemma 4.3. There exist positive and finite constants $c_5 = c_5(A, \nu, K)$ and
$c_6 = c_6(A, \nu, L^*, K)$, and some $\eta_0 = \eta_0(A, \nu, L^*, K) > 0$, such that
\[
\sup_{p \in \mathcal P_n} \mathbb P_p^{\chi_i}\Biggl( \sup_{s \in H_n} \max_{h \in \mathcal G_n} \sqrt{\frac{\tilde n h}{\log \tilde n}}\,
\bigl| \hat p_n^{(i)}(s, h) - \mathbb E_p^{\chi_i} \hat p_n^{(i)}(s, h) \bigr| > \eta \Biggr) \le c_5\, \tilde n^{-c_6 \eta},
\qquad i = 1, 2,
\]
for sufficiently large $n \ge n_0(A, \nu, L^*, K)$ and for all $\eta \ge \eta_0$.
The next lemma extends the classical upper bound on the bias to the modified Hölder
classes $\mathcal H_{\beta^*, U}(\beta, L)$.

Lemma 4.4. Let $t \in \mathbb R$ and $g, h > 0$. Any density $p : \mathbb R \to \mathbb R$ with $p|_{B(t, g+h)} \in
\mathcal H_{\beta^*, B(t, g+h)}(\beta, L)$ for some $0 < \beta \le \infty$ and some $L > 0$ satisfies
\[
(4.3) \qquad \sup_{s \in B(t, g)} |(K_h \ast p)(s) - p(s)| \le b_2 h^\beta
\]
for some positive and finite constant $b_2 = b_2(L, K)$.

Lemma 4.5. For symmetric kernels $K$ and $\beta = 1$, the bias bound (4.3) continues
to hold if the Lipschitz balls are replaced by the corresponding Zygmund balls.
5. Proofs. We first prove the results of Section 3 in Subsection 5.1 and afterwards proceed with the proofs of the results of Section 4 in Subsection 5.2.
For the subsequent proofs we recall the following notion from the theory of empirical
processes.

Definition 5.1. A class of measurable functions $\mathcal H$ on a measure space $(S, \mathscr S)$
is a Vapnik–Červonenkis class (VC class) of functions with respect to the envelope $H$ if there exists a measurable function $H$ which is everywhere finite with
$\sup_{h \in \mathcal H} |h| \le H$ and finite numbers $A$ and $v$, such that
\[
\sup_Q N\bigl( \mathcal H, \|\cdot\|_{L^2(Q)}, \varepsilon \|H\|_{L^2(Q)} \bigr) \le \Bigl(\frac{A}{\varepsilon}\Bigr)^{v}
\]
for all $0 < \varepsilon < 1$, where the supremum is running over all probability measures $Q$
on $(S, \mathscr S)$ for which $\|H\|_{L^2(Q)} < \infty$.

Nolan and Pollard (1987) call a class Euclidean with respect to the envelope $H$ and
with characteristics $A$ and $\nu$ if the same holds true with $L^1(Q)$ instead of $L^2(Q)$.
The following auxiliary lemma is a direct consequence of the results in the same
reference.

Lemma 5.2. If a class of measurable functions $\mathcal H$ is Euclidean with respect to
a constant envelope $H$ and with characteristics $A$ and $\nu$, then the class
\[
\widetilde{\mathcal H} = \{ h - \mathbb E_{\mathbb P} h : h \in \mathcal H \}
\]
is a VC class with envelope $2H$ and characteristics $A' = 4\sqrt{A} \vee 2A$ and $\nu' = 3\nu$ for
any probability measure $\mathbb P$.
Proof. For any probability measure $\mathbb P$ and for any functions $\tilde h_1 = h_1 - \mathbb E_{\mathbb P} h_1$,
$\tilde h_2 = h_2 - \mathbb E_{\mathbb P} h_2 \in \widetilde{\mathcal H}$ with $h_1, h_2 \in \mathcal H$, we have
\[
\|\tilde h_1 - \tilde h_2\|_{L^2(Q)} \le \|h_1 - h_2\|_{L^2(Q)} + \|h_1 - h_2\|_{L^1(\mathbb P)}.
\]
For any $0 < \varepsilon \le 1$, we obtain as a direct consequence of Lemma 14 in Nolan and
Pollard (1987)
\[
(5.1) \qquad N\bigl( \widetilde{\mathcal H}, L^2(Q), 2\varepsilon \|H\|_{L^2(Q)} \bigr)
\le N\Bigl( \mathcal H, L^2(Q), \frac{\varepsilon \|H\|_{L^2(Q)}}{2} \Bigr) \cdot N\Bigl( \mathcal H, L^1(\mathbb P), \frac{\varepsilon \|H\|_{L^1(\mathbb P)}}{2} \Bigr).
\]
Nolan and Pollard (1987), page 789, furthermore state that the Euclidean class $\mathcal H$
is also a VC class with respect to the envelope $H$ and with
\[
N\Bigl( \mathcal H, L^2(Q), \frac{\varepsilon \|H\|_{L^2(Q)}}{2} \Bigr) \le \Bigl( \frac{4\sqrt{A}}{\varepsilon} \Bigr)^{2\nu},
\]
whereas
\[
N\Bigl( \mathcal H, L^1(\mathbb P), \frac{\varepsilon \|H\|_{L^1(\mathbb P)}}{2} \Bigr) \le \Bigl( \frac{2A}{\varepsilon} \Bigr)^{\nu}.
\]
Inequality (5.1) thus implies
\[
N\bigl( \widetilde{\mathcal H}, L^2(Q), 2\varepsilon \|H\|_{L^2(Q)} \bigr) \le \Biggl( \frac{4\sqrt{A} \vee 2A}{\varepsilon} \Biggr)^{3\nu}.
\]
5.1. Proofs of the results in Section 3.

Proof of Lemma 3.2. Let $p \in \mathcal P_n^{\mathrm{adm}}(K, \beta_*, L^*, \varepsilon)$ be an admissible density.
That is, for any $t \in [0, 1]$ and for any $h \in \mathcal G_\infty$ there exists some $\beta \in [\beta_*, \beta^*] \cup \{\infty\}$,
such that for $u = h$ or $u = 2h$ both
\[
p|_{B(t,u)} \in \mathcal H_{\beta^*, B(t,u)}(\beta, L^*)
\]
and
\[
\sup_{s \in B(t, u - g)} |(K_g \ast p)(s) - p(s)| \ge \frac{g^\beta}{\log n}
\qquad \text{for all } g \in \mathcal G_\infty \text{ with } g \le u/8
\]
hold. By definition of $\beta_p(B(t, u))$ in (3.5), we obtain $\beta_p(B(t, u)) \ge \beta$. We now prove
by contradiction that also $\beta_p(B(t, u)) \le \beta$. If $\beta = \infty$, the proof is finished. Assume
now that $\beta < \infty$ and that $\beta_p(B(t, u)) > \beta$. Then, by Lemma A.4, there exists some
$\beta < \beta' < \beta_p(B(t, u))$ with $p|_{B(t,u)} \in \mathcal H_{\beta^*, B(t,u)}(\beta', L^*)$. By Lemma 4.4, there exists
some constant $b_2 = b_2(L^*, K)$ with
\[
b_2\, g^{\beta'} \ge \sup_{s \in B(t, u - g)} |(K_g \ast p)(s) - p(s)| \ge \frac{g^\beta}{\log n}
\]
for all $g \in \mathcal G_\infty$ with $g \le u/8$, which is a contradiction.
Proof of Proposition 3.3. The proof is based on a reduction of the supremum over the class to a maximum over two distinct hypotheses.
Part 1. For $\beta \in [\beta_*, 1)$, the construction of the hypotheses is based on the Weierstraß function as defined in (3.8) and is depicted in Figure 2. Consider the function
$p_0 : \mathbb R \to \mathbb R$ with
\[
p_0(x) =
\begin{cases}
0, & \text{if } |x - t| \ge \frac{10}{3},\\[2pt]
\frac{1}{4} + \frac{3}{16}(x - t + 2), & \text{if } -\frac{10}{3} < x - t < -2,\\[2pt]
\frac{1}{6} + \frac{1 - 2^{-\beta}}{12}\, W_\beta(x - t), & \text{if } |x - t| \le 2,\\[2pt]
\frac{1}{4} - \frac{3}{16}(x - t - 2), & \text{if } 2 < x - t < \frac{10}{3},
\end{cases}
\]
and the function $p_{1,n} : \mathbb R \to \mathbb R$ with
\[
p_{1,n}(x) = p_0(x) + q_{t + \frac{9}{4}, n}(x) - q_{t,n}(x), \qquad x \in \mathbb R,
\]
where
\[
q_{a,n}(x) =
\begin{cases}
0, & \text{if } |x - a| > g_{\beta,n},\\[2pt]
\frac{1 - 2^{-\beta}}{12}\, \bigl( W_\beta(x - a) - W_\beta(g_{\beta,n}) \bigr), & \text{if } |x - a| \le g_{\beta,n}
\end{cases}
\]
for $g_{\beta,n} = \frac{1}{4} n^{-1/(2\beta+1)}$ and $a \in \mathbb R$. Note that $p_{1,n}|_{B(t, g_{\beta,n})}$ is constant with value
\[
p_{1,n}(x) = \frac{1}{6} + \frac{1 - 2^{-\beta}}{12}\, W_\beta(g_{\beta,n})
\]
for all $x \in B(t, g_{\beta,n})$.
Fig 2. Functions p0 and p1,n for t = 0.5, β = 0.5 and n = 100
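To reproduce a picture like Figure 2, the two hypotheses can be evaluated with a few lines of Python, reusing the weierstrass helper from the sketch after Lemma 3.5; the choices of t, beta and n are illustrative.

import numpy as np

def q_bump(x, a, beta, g):
    # perturbation q_{a,n}: a Weierstrass-shaped bump of half-width g centered at a, zero elsewhere
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    mask = np.abs(x - a) <= g
    out[mask] = (1.0 - 2.0 ** -beta) / 12.0 * (weierstrass(x[mask] - a, beta) - weierstrass([g], beta))
    return out

def p0(x, t, beta):
    # hypothesis p_0: linear flanks glued continuously to a Weierstrass core on |x - t| <= 2
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    left = (-10.0 / 3.0 < x - t) & (x - t < -2.0)
    core = np.abs(x - t) <= 2.0
    right = (2.0 < x - t) & (x - t < 10.0 / 3.0)
    out[left] = 0.25 + 3.0 / 16.0 * (x[left] - t + 2.0)
    out[core] = 1.0 / 6.0 + (1.0 - 2.0 ** -beta) / 12.0 * weierstrass(x[core] - t, beta)
    out[right] = 0.25 - 3.0 / 16.0 * (x[right] - t - 2.0)
    return out

def p1(x, t, beta, n):
    # hypothesis p_{1,n} = p_0 + q_{t+9/4,n} - q_{t,n}; constant on B(t, g_{beta,n})
    g = 0.25 * n ** (-1.0 / (2.0 * beta + 1.0))
    return p0(x, t, beta) + q_bump(x, t + 9.0 / 4.0, beta, g) - q_bump(x, t, beta, g)

x_grid = np.linspace(-3.0, 4.0, 2001)
vals0, vals1 = p0(x_grid, 0.5, 0.5), p1(x_grid, 0.5, 0.5, 100)   # t = 0.5, beta = 0.5, n = 100 as in Figure 2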
We now show that both p0 and p1,n are contained in the class Pk for sufficiently
large k ≥ k0 (β∗ ) with
p0|(−ε,1+ε) , p1,n|(−ε,1+ε) ∈ H(−ε,1+ε) (β, L∗ ).
(i) We first verify that p0 integrates to one. Then, it follows directly that also p1,n
integrates to one. We have
Z
Z t−2
1
3
p0 (x) dx =
+ (x − t + 2) dx
4 16
t− 10
3
Z t+2
1 1 − 2−β
+
+
Wβ (x − t) dx
6
12
t−2
Z t+ 10
3
1
3
+
− (x − t − 2) dx
4 16
t+2
1 2 1 − 2−β
= + +
6 3
12
Z
2
Wβ (x) dx +
−2
1
6
= 1,
where the last equality is due to
Z 2
Z
∞
X
−kβ
Wβ (x) dx =
2
−2
k=0
2
cos(2k πx) dx = 0.
−2
(ii) Next, we check the non-negativity of p0 and p1,n to show that they are probability density functions. We prove non-negativity for p0 , whereas non-negativity of
p1,n is an easy implication. Since p0 (−10/3) = 0 and p0 is linear on (t − 10/3, t − 2)
with positive derivative, p0 is non-negative on (t − 10/3, t − 2). Analogously, p0 is
non-negative on (t + 2, t + 10/3). Note furthermore that
|Wβ (x)| ≤ Wβ (0) =
(5.2)
∞
X
2−kβ =
k=0
1
1 − 2−β
for all x ∈ R. Thus, for any x ∈ R with |x − t| ≤ 2, we have
p0 (x) =
1 1 − 2−β
1
1
1
+
Wβ (x − t) ≥ −
=
> 0.
6
12
6 12
12
(iii) As p0 and also p1,n are bounded from below by M = 1/12 on B(t, 2), we
furthermore conclude that they are bounded from below by M on (−1, 2) ⊂ B(t, 2),
and therefore on any interval [−ε, 1 + ε] with 0 < ε < 1.
(iv) We now verify that p0|(−ε,1+ε) , p1,n|(−ε,1+ε) ∈ H(−ε,1+ε) (β, L(β)) for some
positive constant L(β). Note again that for any 0 < ε < 1 and any t ∈ [0, 1], the
inclusion (−ε, 1 + ε) ⊂ B(t, 2) holds. Thus,
|Wβ (x − t) − Wβ (y − t)|
|p0 (x) − p0 (y)|
1 − 2−β
·
sup
=
,
β
|x − y|
12
|(x − t) − (y − t)|β
x,y∈(−ε,1+ε)
x,y∈(−ε,1+ε)
sup
x6=y
x6=y
which is bounded by some constant c(β) according to Lemma 3.5. Together with
(5.2) and with the triangle inequality, we obtain that
p0|(−ε,1+ε) ∈ H(−ε,1+ε) (β, L)
for some Lipschitz constant L = L(β). The Hölder continuity of p1,n is now a
simple consequence. The function p1,n is constant on B(t, gβ,n ) and coincides with
p0 on (−ε, 1+ε)\B(t, gβ,n ). Hence, it remains to investigate combinations of points
x ∈ (−ε, 1 + ε) \ B(t, gβ,n ) and y ∈ B(t, gβ,n ). Without loss of generality assume
that x ≤ t − gβ,n . Then,
|p1,n (x) − p1,n (y)|
|p1,n (x) − p1,n (t − gβ,n )|
|p1,n (x) − p1,n (t − gβ,n )|
=
≤
≤ L,
β
β
|x − y|
|x − y|
|x − (t − gβ,n )|β
which proves that also
p1,n|(−ε,1+ε) ∈ H(−ε,1+ε) (β, L).
(v) Finally, we address the verification of Assumption 3.1 for the hypotheses p0 and
p1,n . Again, for any t0 ∈ [0, 1] and any h ∈ G∞ the inclusion B(t0 , 2h) ⊂ B(t, 2)
holds, such that in particular
p0|B(t0 ,h) ∈ Hβ ∗ ,B(t0 ,h) (β, LW (β))
for any t0 ∈ [0, 1] and for any h ∈ G∞ by Lemma 3.5. Simultaneously, Lemma 3.5
implies
gβ
1 − 2−β∗ 4
− 1 gβ ≥
sup
|(KR,g ∗ p0 )(s) − p0 (s)| >
12
π
log k
s∈B(t0 ,h−g)
for all g ≤ h/2 and for sufficiently large k ≥ k0 (β∗ ). That is, for any t0 ∈ [0, 1], both
(3.6) and (3.7) are satisfied for $p_0$ with $u = h$ for any $h \in \mathcal G_\infty$.
Concerning p1,n we distinguish between several combinations of pairs (t0 , h) with
t0 ∈ [0, 1] and h ∈ G∞ .
(v.1) If B(t0 , h) ∩ B(t, gβ,n ) = ∅, the function p1,n coincides with p0 on B(t0 , h), for
which Assumption 3.1 has been already verified.
(v.2) If B(t0 , h) ⊂ B(t, gβ,n ), the function p1,n is constant on B(t0 , h), such that
(3.6) and (3.7) trivially hold for u = h and β = ∞.
(v.3) If B(t0 , h) ∩ B(t, gβ,n ) 6= ∅ and B(t0 , h) 6⊂ B(t, gβ,n ), we have that t0 + h >
t + gβ,n or t0 − h < t − gβ,n . As p1,n|B(t,2) is symmetric around t we assume t0 + h >
t + gβ,n without loss of generality. In this case,
h
(t0 + 2h − g) − (t + gβ,n ) > 2
−g ,
2
such that
3 h
B t0 + h, − g ⊂ B(t0 , 2h − g) \ B(t, gβ,n ).
2 2
Consequently, we obtain
sup
|(KR,g ∗ p1,n )(s) − p1,n (s)| ≥
s∈B(t0 ,2h−g)
sup
|(KR,g ∗ p1,n )(s) − p1,n (s)| .
s∈B (t0 + 23 h, h
2 −g )
If 2h ≥ 8g, we conclude that h/2 ≥ 2g, so that Lemma 3.5 finally proves Assumption
3.1 for u = 2h to the exponent β for sufficiently large k ≥ k0 (β∗ ).
Combining (i) − (v), we conclude that p0 and p1,n are contained in the class Pk
with p0|(−ε,1+ε) , p1,n|(−ε,1+ε) ∈ H(−ε,1+ε) (β, L∗ ) for sufficiently large k ≥ k0 (β∗ ).
The absolute distance of the two hypotheses in t is at least
|p0 (t) − p1,n (t)| =
1 − 2−β
(Wβ (0) − Wβ (gβ,n ))
12
∞
1 − 2−β X −kβ
1 − cos(2k πgβ,n )
2
12
k=0
1 − 2−β∗ −k̃β
1 − cos(2k̃ πgβ,n )
2
≥
12
β
≥ 2c7 gβ,n
=
(5.3)
where k̃ ∈ N is chosen such that 2−(k̃+1) < gβ,n ≤ 2−k̃ and
c7 = c7 (β∗ ) =
1 − 2−β∗
.
24
It remains to bound the distance between the associated product probability measures P⊗n
and P⊗n
0
1,n . For this purpose, we analyze the Kullback-Leibler divergence
between these probability measures, which can be bounded from above by
⊗n
K(P⊗n
1,n , P0 ) = n K(P1,n , P0 )
Z
p1,n (x)
1 {p0 (x) > 0} dx
= n p1,n (x) log
p0 (x)
!
Z
qt+ 94 ,n (x) − qt,n (x)
= n p1,n (x) log 1 +
1 {p0 (x) > 0} dx
p0 (x)
2
Z
qt+ 94 ,n (x) − qt,n (x)
1 {p0 (x) > 0} dx
≤ n qt+ 49 ,n (x) − qt,n (x) +
p0 (x)
2
Z q 9 (x) − qt,n (x)
t+ 4 ,n
=n
1 {p0 (x) > 0} dx
p0 (x)
Z
2
≤ 12n
qt+ 94 ,n (x) − qt,n (x) dx
Z
= 24n q0,n (x)2 dx
2 Z gβ,n
1 − 2−β
2
= 24n
(Wβ (x) − Wβ (gβ,n )) dx
12
−gβ,n
2 Z gβ,n
1 − 2−β
2β
2
(gβ,n − x) dx
≤ 24L(β) n
12
−gβ,n
2β+1
≤ c8 ngβ,n
≤ c8
using the inequality log(1 + x) ≤ x, x > −1, Lemma 3.5, and
p0 (t + 5/2) =
5
1
>M =
,
32
12
where
c8 = c8 (β) = 48L(β)2 4−(2β+1) 22β
1 − 2−β
12
2
.
Using now Theorem 2.2 in Tsybakov (2009),
β
2β+1 |T (t) − p(t)| ≥ c
sup
inf
P⊗n
n
n
7
p
Tn
p∈Pk :
p|(−ε,1+ε) ∈H(−ε,1+ε) (β,L∗ )
(
≥ max
)
p
1 − c8 /2
1
exp(−c8 ),
> 0.
4
2
Part 2. For β = 1, consider the function p0 : R → R with
p0 (x) =
(
0,
1
4
−
1
16 |x
if |x − t| > 4
if |x − t| ≤ 4
− t|,
and the function p1,n : R → R with
p1,n (x) = p0 (x) + qt+ 94 ,n (x) − qt,n (x),
x ∈ R,
where
qa,n (x) =
(
0,
1
16 (g1,n
− |x − a|),
if |x − a| > g1,n
if |x − a| ≤ g1,n
for g1,n = n−1/3 and a ∈ R. The construction is depicted in Figure 3 below.
Fig 3. Functions p0 and p1,n for t = 0.5, β = 0.5 and n = 10
(i) − (iii) Easy calculations show that both p0 and p1,n are probability densities,
which are bounded from below by M = 1/8 on B(t, 2).
(iv) We now verify that p0|(−ε,1+ε) , p1,n|(−ε,1+ε) ∈ H(−ε,1+ε) (1, L) for some Lipschitz constant L > 0. Note again that for any 0 < ε < 1 and any t ∈ [0, 1], the
inclusion (−ε, 1 + ε) ⊂ B(t, 2) holds. Thus,
1
||x − t| − |y − t||
1
|p0 (x) − p0 (y)|
=
·
sup
≤
.
|x
−
y|
16
|x
−
y|
16
x,y∈(−ε,1+ε)
x,y∈(−ε,1+ε)
sup
x6=y
x6=y
Since p0 has maximal value 1/4, we obtain that
5
p0|(−ε,1+ε) ∈ H(−ε,1+ε) 1,
.
16
For the same reasons as before, we also obtain
5
p1,n|(−ε,1+ε) ∈ H(−ε,1+ε) 1,
.
16
(v) Finally, we address the verification of Assumption 3.1 for the hypotheses p0 and
p1,n . Again, for any t0 ∈ [0, 1] and any h ∈ G∞ the inclusion B(t0 , 2h) ⊂ B(t, 2)
holds, and we distinguish between several combinations of pairs (t0 , h) with t0 ∈ [0, 1]
and h ∈ G∞ . We start with p0 .
(v.1) If t ∈
/ B(t0 , h), it holds that kpkβ,B(t0 ,h) ≤ 5/16 for all β > 0, such that (3.6)
and (3.7) trivially hold for u = h and β = ∞.
(v.2) In case t ∈ B(t0 , h), the function p0|B(t0 ,2h) is not differentiable and
kp0 k1,B(t0 ,2h) ≤ 5/16.
Furthermore, t ∈ B(t0 , 2h − g) for any g ∈ G∞ with g < 2h/16 and thus
sup
|(KR,g ∗ p)(s) − p(s)| ≥ |(KR,g ∗ p)(t) − p(t)| =
s∈B(t0 ,2h−g)
1
g.
32
That is, (3.6) and (3.7) are satisfied for u = 2h and β = 1 for sufficiently large
n ≥ n0 .
The density p1,n can be treated in a similar way. It is constant on the interval
B(t, gβ,n ). If B(t0 , h) does not intersect with {t − gβ,n , t + gβ,n }, Assumption 3.1
is satisfied for u = h and β = ∞. If the two sets intersect, t − gβ,n or t + gβ,n is
contained in B(t0 , 2h−g) for any g ∈ G∞ with g < 2h/16, and we proceed as before.
Again, combining (i) − (v), it follows that p0 and p1,n are contained in the class
Pk with p0|(−ε,1+ε) , p1,n|(−ε,1+ε) ∈ H(−ε,1+ε) (1, L) for sufficiently large k ≥ k0 and
some universal constant L > 0. The absolute distance of the two hypotheses in t
equals
|p0 (t) − p1,n (t)| =
1
g1,n .
16
To bound the Kullback-Leibler divergence between the associated product probability measures P⊗n
and P⊗n
0
1,n , we derive as before
⊗n
K(P⊗n
1,n , P0 ) ≤ n
Z
≤ 16n
2
qt+ 94 ,n (x) − qt,n (x)
p0 (x)
Z
1 {p0 (x) > 0} dx
2
qt+ 49 ,n (x) − qt,n (x) dx
Z
= 32n
=
q0,n (x)2 dx
1
,
12
using p0 (t + 5/2) > 1/16. Using Theorem 2.2 in Tsybakov (2009) again,
1
1
⊗n
inf
sup
Pp
n 3 |Tn (t) − p(t)| ≥
Tn
32
p∈Pk :
p|(−ε,1+ε) ∈H(−ε,1+ε) (1,L∗ )
(
≥ max
)
p
1 − 1/24
1
exp(−1/12),
> 0.
4
2
Proof of Proposition 3.4. Define
R̃ =
[
R̃n
n∈N
with
(
R̃n = p ∈ H(−ε,1+ε) (β∗ ) : ∀ t ∈ [0, 1] ∀ h ∈ G∞ ∃ β ∈ [β∗ , β ∗ ] with
p|B(t,h) ∈ HB(t,h) (β)and k(KR,g ∗ p) − pkB(t,h−g) ≥
gβ
log n
)
for all g ∈ G∞ with g ≤ h/8 .
Furthermore, let
(
En (β) = p ∈ H(−ε,1+ε) (β) : k(KR,g ∗ p) − pkB(t,h−g) ≥
2 β
g for all t ∈ [0, 1],
log n
)
for all h ∈ G∞ , and for all g ∈ G∞ with g ≤ h/8 .
Note that Lemma 3.5 shows that En (β) is non-empty as soon as
2
4
≤1− .
log n
π
Note additionally that En (β) ⊂ R̃n for any β ∈ [β∗ , β ∗ ], and
[
En (β) ⊂ R̃.
n∈N
With
An (β) =
kKR k−1
1
f˜ ∈ H(−1,2) (β) : kf˜ − f kβ,(−ε,1+ε) <
for some f ∈ En (β) ,
log n
we get for any f˜ ∈ An (β) and a corresponding f ∈ En (β) with
kfˇkβ,(−ε,1+ε) < kKR k−1
1
1
log n
and fˇ = f˜ − f , the lower bound
(KR,g ∗ f˜) − f˜
B(t,h−g)
≥ (KR,g ∗ f ) − f
B(t,h−g)
− fˇ − (KR,g ∗ fˇ)
B(t,h−g)
Z
2 β
=
KR (x) fˇ(s + gx) − fˇ(s) dx
g −
sup
log n
s∈B(t,h−g)
Z
fˇ(s0 ) − fˇ(s)
2 β
β
≥
g − g · |KR (x)|
sup
sup
dx
log n
|s − s0 |β
s∈B(t,h−g) s0 ∈B(s,g)
s0 6=s
2 β
g − g β · kKR k1 · kfˇkβ,(−ε,1+ε)
log n
1 β
≥
g
log n
≥
for all g, h ∈ G∞ with g ≤ h/8 and for all t ∈ [0, 1], and therefore
[
A=
An (β) ⊂ R̃.
n∈N
Clearly, An (β) is open in H(−ε,1+ε) (β). Hence, the same holds true for A. Next, we
verify that A is dense in H(−ε,1+ε) (β). Let p ∈ H(−ε,1+ε) (β) and let δ > 0. We now
show that there exists some function p̃δ ∈ A with kp − p̃δ kβ,(−ε,1+ε) ≤ δ. For the
construction of the function p̃δ , set the grid points
tj,1 (k) = (4j + 1)2−k ,
tj,2 (k) = (4j + 3)2−k
for j ∈ {−2k−2 , −2k−2 + 1, . . . , 2k−1 − 1} and k ≥ 2. The function p̃δ shall be
defined as the limit of a recursively constructed sequence. The idea is to recursively
add appropriately rescaled sine waves at those locations where the bias condition
is violated. Let p1,δ = p, and denote
Jk = j ∈ {−2k−2 , . . . , 2k−1 − 1} : max (KR,2−k ∗ pk−1,δ )(tj,i (k)) − pk−1,δ (tj,i (k))
i=1,2
2
1
2−kβ
< c9 δ 1 −
2
π
for k ≥ 2, where
c9 = c9 (β) =
3π
7
1
+
·
β−1
2 1−2
1 − 2−β
−1
.
For any k ≥ 2 set
X
pk,δ (x) = pk−1,δ (x) + c9 δ
Sk,β,j (x)
j∈Jk
with functions
Sk,β,j (x) = 2−kβ sin 2k−1 πx 1 |(4j + 2)2−k − x| ≤ 2−k+1
exemplified in Figure 4. That is,
pk,δ (x) = p(x) + c9 δ
k X
X
Sl,β,j (x),
l=2 j∈Jl
and we define p̃δ as the limit
p̃δ (x) = p(x) + c9 δ
∞ X
X
l=2 j∈Jl
∞
X
= pk,δ (x) + c9 δ
Sl,β,j (x)
X
Sl,β,j (x).
l=k+1 j∈Jl
The function p̃δ is well-defined as the series on the right-hand side converges: for
fixed l ∈ N, the indicator functions
1 |(4j + 2)2−k − x| ≤ 2−k+1 , j ∈ {−2l−2 , −2l−2 + 1, . . . , 2l−1 − 1}
have disjoint supports, such that
X
j∈Jl
≤ 2−lβ .
Sl,β,j
(−ε,1+ε)
Fig 4. Functions Sk,β,0 for k = 2, . . . , 5 and β = 0.5
S
It remains to verify that p̃δ ∈ n∈N En (β) ⊂ A and also kp − p̃δ kβ,(−ε,1+ε) ≤ δ. As
concerns the inequality kp − p̃δ kβ,(−ε,1+ε) ≤ δ, it remains to show that
∞ X
X
≤
Sl,β,j
l=2 j∈Jl
1
.
c9
β,(−ε,1+ε)
For s, t ∈ (−ε, 1 + ε) with |s − t| ≤ 1, we obtain
∞ X
X
Sl,β,j (s) −
∞ X
X
l=2 j∈Jl
≤
(5.4)
Sl,β,j (t)
l=2 j∈Jl
∞
X
X
2−lβ sin(2l−1 πs)
1{|(4j + 2)2−l − s| ≤ 2−l+1 }
j∈Jl
l=2
− sin(2l−1 πt)
X
1{|(4j + 2)2−l − t| ≤ 2−l+1 } .
j∈Jl
Choose now k 0 ∈ N maximal, such that both
0
0
0
0
0
0
0
0
(4j + 2)2−k − 2−k +1 ≤ s ≤ (4j + 2)2−k + 2−k +1
and
(4j + 2)2−k − 2−k +1 ≤ t ≤ (4j + 2)2−k + 2−k +1
0
0
for some j ∈ {−2k −2 , . . . , 2k −1 − 1}. For 2 ≤ l ≤ k 0 , we have
sin(2l−1 πs)
X
1{|(4j + 2)2−l − s| ≤ 2−l+1 }
j∈Jl
− sin(2l−1 πt)
X
1{|(4j + 2)2−l − t| ≤ 2−l+1 }
j∈Jl
≤ sin(2l−1 πs) − sin(2l−1 πt)
(5.5)
≤ min 2l−1 π|s − t|, 2
by the mean value theorem. For l ≥ k 0 + 1,
sin(2l−1 πs)
X
1{|(4j + 2)2−l − s| ≤ 2−l+1 }
j∈Jl
− sin(2l−1 πt)
X
j∈Jl
(
≤ max
)
sin(2l−1 πs) , sin(2l−1 πt)
.
1{|(4j + 2)2−l − t| ≤ 2−l+1 }
Furthermore, due to the choice of k 0 , there exists some z ∈ [s, t] with
sin(2l−1 πz) = 0
for all l ≥ k 0 + 1. Thus, for any l ≥ k 0 + 1, by the mean value theorem,
sin(2l−1 πs) = sin(2l−1 πs) − sin(2l−1 πz)
≤ min 2l−1 π|s − z|, 1
≤ min 2l−1 π|s − t|, 1 .
Analogously, we obtain
sin(2l−1 πt) ≤ min 2l−1 π|s − t|, 1 .
Consequently, together with inequality (5.4) and (5.5),
∞ X
X
Sl,β,j (s) −
l=2 j∈Jl
∞ X
X
Sl,β,j (t) ≤
l=2 j∈Jl
∞
X
2−lβ min 2l−1 π|s − t|, 2 .
l=2
Choose now k ∈ N ∪ {0}, such that 2−(k+1) < |s − t| ≤ 2−k . If k ≤ 1,
∞
X
l=2
2−2β
2
2−lβ min 2l−1 π|s − t|, 2 ≤ 2
≤
|s − t|β .
−β
1−2
1 − 2−β
If k ≥ 2, we decompose
∞
X
l=2
∞
k
X
X
π
2−lβ
2−lβ min 2l−1 π|s − t|, 2 ≤ |s − t|
2l(1−β) + 2
2
l=k+1
l=0
π
2
−2
2−(k+1)β
|s − t|
+
2
·
2
1 − 2β−1
1 − 2−β
π
1
2
≤ |s − t|β ·
·
+
.
2 1 − 2β−1
1 − 2−β
k(1−β)
β−1
=
Since furthermore
∞ X
X
≤
Sl,β,j
l=2 j∈Jl
1
,
1 − 2−β
sup
we have
∞ X
X
l=2 j∈Jl
≤3
Sl,β,j
β,(−ε,1+ε)
1
π
2
·
+
β−1
2 1−2
1 − 2−β
+
1
1
=
−β
1−2
c9
and finally kp − p̃ε kβ,(−ε,1+ε) ≤ δ. In particular p̃δ ∈ H(−ε,1+ε) (β).
S
We now show that the function p̃δ is contained in n∈N En (β) ⊂ A. For any
bandwidths g, h ∈ G∞ with g ≤ h/8, it holds that h − g ≥ 4g. Thus, for any
g = 2−k with k ≥ 2 and for any t ∈ (−ε, 1 + ε), there exists some j = j(t, h, g) ∈
{−2k−2 , . . . , 2k−1 −1} such that both tj,1 (k) and tj,2 (k) are contained in B(t, h−g),
which implies
(5.6)
|(KR,g ∗ p̃δ )(s) − p̃δ (s)| ≥ max |(KR,g ∗ p̃δ )(tj,i (k)) − p̃δ (tj,i (k))| .
sup
i=1,2
s∈B(t,h−g)
By linearity of the convolution and the theorem of dominated convergence,
(KR,g ∗ p̃δ )(tj,i (k)) − p̃δ (tj,i (k))
= (KR,g ∗ pk,δ )(tj,i (k)) − pk,δ (tj,i (k))
∞ X
X
+ c9 δ
(KR,g ∗ Sl,β,j )(tj,i (k)) − Sl,β,j (tj,i (k)) .
(5.7)
l=k+1 j∈Jl
We analyze the convolution KR,g ∗ Sl,β,j for l ≥ k + 1. Here,
sin 2l−1 π tj,1 (k) = sin 2l−k−1 π (4j + 1) = 0
and
sin 2l−1 π tj,2 (k) = sin 2l−k−1 π (4j + 3) = 0.
Hence,
X
Sl,β,j (tj,i (k)) = 0,
i = 1, 2
j∈Jl
for any l ≥ k + 1. Furthermore,
1
2g
Z
1
=
2g
Z
(KR,g ∗ Sl,β,j )(tj,i (k)) =
g
Sl,β,j (tj,i (k) − x) dx
−g
tj,i (k)+g
Sl,β,j (x) dx,
i = 1, 2.
tj,i (k)−g
Due to the identities
(4j + 2)2−k − 2−k+1 = tj,1 (k) − g
(4j + 2)2−k + 2−k+1 = tj,2 (k) + g,
we have either
(4j + 2)2−l − 2−l+1 , (4j + 2)2−l + 2−l+1 ⊂ [tj,1 (k) − g, tj,2 (k) + g]
or
(4j + 2)2−l − 2−l+1 , (4j + 2)2−l + 2−l+1 ∩ [tj,1 (k) − g, tj,2 (k) + g] = ∅
for any l ≥ k + 1. Therefore, for i = 1, 2,
X
(KR,g ∗ Sl,β,j )(tj,i (k))
j∈Jl
=
X 1 Z tj,i (k)+g
2−lβ sin 2l−1 πx 1 |(4j + 2)2−l − x| ≤ 2−l+1 dx
2g tj,i (k)−g
j∈Jl
=0
such that equation (5.7) then simplifies to
(KR,g ∗ p̃δ )(tj,i (k)) − p̃δ (tj,i (k)) = (KR,g ∗ pk,δ )(tj,i (k)) − pk,δ (tj,i (k)),
i = 1, 2.
Together with (5.6), we obtain
|(KR,g ∗ p̃δ )(s) − p̃δ (s)| ≥ max |(KR,g ∗ pk,δ )(tj,i (k)) − pk,δ (tj,i (k))|
sup
i=1,2
s∈B(t,h−g)
for some j ∈ {−2k−2 , −2k−2 + 1, . . . , 2k−2 − 1}. If j ∈
/ Jk , then
max |(KR,g ∗ pk,δ )(tj,i (k)) − pk,δ (tj,i (k))|
i=1,2
= max |(KR,g ∗ pk−1,δ )(tj,i (k)) − pk−1,δ (tj,i (k))|
i=1,2
1
2
≥ c9 δ 1 −
gβ .
2
π
If j ∈ Jk , then
max |(KR,g ∗ pk,δ )(tj,i (k)) − pk,δ (tj,i (k))|
i=1,2
≥ c9 δ max |(KR,g ∗ Sk,β,j )(tj,i (k)) − Sk,β,j (tj,i (k))|
i=1,2
− max |(KR,g ∗ pk−1,δ )(tj,i (k)) − pk−1,δ (tj,i (k))|
i=1,2
1
2
≥ c9 δ max |(KR,g ∗ Sk,β,j )(tj,i (k)) − Sk,β,j (tj,i (k))| − c9 δ 1 −
gβ .
i=1,2
2
π
Similar as above we obtain
(KR,g ∗ Sk,β,j )(tj,1 (k)) − Sk,β,j (tj,1 (k))
Z tj,1 (k)+g
1
=
2−kβ sin 2k−1 πx dx − 2−kβ
2g tj,1 (k)−g
Z −k+1
1 −kβ 2
2
sin 2k−1 πx dx − 2−kβ
=
2g
0
2
β
=g
−1
π
as well as
2
(KR,g ∗ Sk,β,j )(tj,2 (k)) − Sk,β,j (tj,2 (k)) = g β 1 −
,
π
such that
max |(KR,g ∗ pk,δ )(tj,i (k)) − pk,δ (tj,i (k))| ≥
i=1,2
1
2
c9 δ 1 −
gβ .
2
π
Combining the two cases finally gives
1
2
sup
|(KR,g ∗ p̃δ )(s) − p̃δ (s)| ≥ c9 δ 1 −
gβ .
2
π
s∈B(t,h−g)
In particular, p̃δ ∈ En (β) for sufficiently large n ≥ n0 (β, δ), and thus p̃δ ∈ A.
Since A is open and dense in the class H(−ε,1+ε) (β) and A ⊂ R̃, the complement
H(−ε,1+ε) (β) \ R̃ is nowhere dense in H(−ε,1+ε) (β). Thus, because of
H(−ε,1+ε) (β)|B(t,h) = HB(t,h) (β),
and the fact that for any x ∈ H(−ε,1+ε) (β) and any z 0 ∈ HB(t,h) (β) with
kx|B(t,h) − z 0 kβ,B(t,h) < δ
there exists an extension z ∈ H(−ε,1+ε) (β) of z 0 with
kx − zkβ,(−ε,1+ε) < δ,
the set HB(t,h) (β) \ R̃|B(t,h) is nowhere dense in HB(t,h) (β). Since the property
”nowhere dense” is stable when passing over to intersections and the corresponding
relative topology, we conclude that
PB(t,h) (β, L∗ ) \ R|B(t,h)
is nowhere dense in PB(t,h) (β, L∗ ) with respect to k · kβ,B(t,h) .
Proof of Lemma 3.5. As it has been proven in Hardy (1916) the Weierstraß
function Wβ is β-Hölder continuous everywhere. For the sake of completeness, we
state the proof here. Because the Weierstraß function is 2-periodic, it suffices to
consider points s, t ∈ R with |s − t| ≤ 1. Note first that
|Wβ (s) − Wβ (t)| ≤ 2
≤2
∞
X
n=0
∞
X
n=0
1 n
1 n
2 π(s + t) · sin
2 π(s − t)
2
2
1 n
2 π(s − t) .
sin
2
2−nβ sin
2−nβ
Choose k ∈ N ∪ {0} such that 2−(k+1) < |s − t| ≤ 2−k . For all summands with index
n ≤ k, use the inequality | sin(x)| ≤ |x| and for all summands with index n > k use
| sin(x)| ≤ 1, such that
|Wβ (s) − Wβ (t)| ≤ 2
k
X
∞
X
1 n
2 π(s − t) + 2
2−nβ
2
2−nβ
n=0
n=k+1
= π |s − t|
k
X
2n(1−β) + 2
n=0
∞
X
2−nβ .
n=k+1
Note that,
k
X
2n(1−β) =
n=0
−β
and, as 2
2k(1−β)
2k(1−β) − 2β−1
2(k+1)(1−β) − 1
≤
,
=
1−β
β−1
2
−1
1−2
1 − 2β−1
< 1,
∞
X
2−nβ =
n=k+1
2−(k+1)β
.
1 − 2−β
Consequently, we have
2k(1−β)
2−(k+1)β
|Wβ (s) − Wβ (t)| ≤ π |s − t|
+
2
1 − 2β−1
1 − 2−β
π
2
β
≤ |s − t|
+
.
1 − 2β−1
1 − 2−β
Furthermore
kWβ ksup ≤
∞
X
2−nβ =
n=0
1
,
1 − 2−β
so that for any interval U ⊂ R,
kWβ kβ,U ≤
π
3
+
.
β−1
1−2
1 − 2−β
We now turn to the proof of bias lower bound condition. For any 0 < β ≤ 1, for any
h ∈ G∞ , for any g = 2−k ∈ G∞ with g ≤ h/2,
and for any t ∈ R, there exists some
s0 ∈ [t − (h − g), t + (h − g)] with cos 2k πs0 = 1, since the function x 7→ cos(2k πx)
is 21−k -periodic. Note that in this case also
(5.8)
cos (2n πs0 ) = 1
for all n ≥ k.
The following supremum is now lower bounded by
Z
sup
KR,g (x − s)Wβ (x) dx − Wβ (s)
s∈B(t,h−g)
Z
1
≥
KR (x)Wβ (s0 + gx) dx − Wβ (s0 ) .
−1
As furthermore
sup KR (x)2−nβ cos (2n π(s0 + gx)) ≤ kKR ksup · 2−nβ
x∈R
and
∞
X
kKR ksup · 2−nβ =
n=0
kKR ksup
< ∞,
1 − 2−β
the dominated convergence theorem implies
Z 1
∞
X
2−nβ In (s0 , g)
KR (x)Wβ (s0 + gx) dx − Wβ (s0 ) =
−1
n=0
with
Z
1
In (s0 , g) =
KR (x) cos (2n π(s0 + gx)) dx − cos (2n πs0 ) .
−1
Recalling (5.8), it holds for any index n ≥ k
1 sin(2n π(s0 + g)) − sin(2n π(s0 − g))
·
−1
2
2n πg
sin(2n πg)
=
−1
2n πg
= −1.
In (s0 , g) =
(5.9)
Furthermore, for any index 0 ≤ n ≤ k − 1 holds
1 sin(2n π(s0 + g)) − sin(2n π(s0 − g))
·
− cos (2n πs0 )
2
2n πg
sin(2n πg)
−
1
.
= cos(2n πs0 )
2n πg
In (s0 , g) =
(5.10)
Using this representation, the inequality sin(x) ≤ x for x ≥ 0, and Lemma A.3, we
obtain
sin(2n πg)
2−nβ In (s0 , g) ≤ 2−nβ 1 −
2n πg
(2n πg)2
≤ 2−nβ ·
6
−nβ+2(n−k)+1
≤2
.
Since k − n − 1 ≥ 0 and β ≤ 1, this is in turn bounded by
2−nβ In (s0 , g) ≤ 2−(2k−n−2)β · 22(n−k)+1+2(k−n−1)β
≤ 2−(2k−n−2)β · 22(n−k)+1+2(k−n−1)
(5.11)
≤ 2−(2k−n−2)β .
Taking together (5.9) and (5.11), we arrive at
k−3
X
2k−2
X
2−nβ In (s0 , g) +
n=0
2−nβ In (s0 , g) ≤
k−3
X
2−(2k−n−2)β −
n=0
n=k+1
2k−2
X
2−nβ = 0.
n=k+1
Since by (5.9) also
∞
X
2−nβ In (s0 , g) = −
n=2k−1
∞
X
2−nβ < 0,
n=2k−1
it remains to investigate
k
X
2−nβ In (s0 , g).
n=k−2
For this purpose, we distinguish between the three cases
cos(2k−1 πs0 ) = cos(2k−2 πs0 ) = 1
(i)
(ii)
cos(2k−1 πs0 ) = −1, cos(2k−2 πs0 ) = 0
(iii)
cos(2k−1 πs0 ) = 1, cos(2k−2 πs0 ) = −1
and subsequently use the representation in (5.10). In case (i), obviously
k
X
2−nβ In (s0 , g) ≤ −2−kβ < 0.
n=k−2
using sin(x) ≤ x for x ≥ 0 again. In case (ii), we obtain for β ≤ 1
k
X
n=k−2
4
sin(π/2)
− 2−kβ ≤ 2−kβ 1 −
< 0.
2−nβ In (s0 , g) = 2−kβ 2β 1 −
π/2
π
Finally, in case (iii), for β ≤ 1,
k
X
2−nβ In (s0 , g)
n=k−2
sin(π/2)
sin(π/4)
−(k−2)β
=2
−1 −2
− 1 − 2−kβ
π/2
π/4
2
sin(π/4)
−(k−1)β
β
=2
−1 +2 1−
− 2−kβ
π
π/4
2
sin(π/4)
−(k−1)β
<2
+1−8
− 2−kβ
π
π
−(k−1)β
< −2−kβ
< 0.
That is,
Z
sup
KR,g (x − s)Wβ (x) dx − Wβ (s)
s∈B(t,h−g)
Z
1
KR (x)Wβ (s0 + gx) dx − Wβ (s0 )
≥
=−
−1
∞
X
2−nβ In (s0 , g)
n=0
≥−
k
X
2−nβ In (s0 , g)
n=k−2
>
4
− 1 gβ .
π
Proof of Theorem 3.6. The proof is structured as follows. First, we show
that the bias term is negligible. Then, we conduct several reduction steps to nonstationary Gaussian processes. We pass over to the supremum over a stationary
Gaussian process by means of Slepian’s comparison inequality, and finally, we employ extreme value theory for its asymptotic distribution.
Step 1 (Negligibility of the bias). Due to the discretization of the interval [0, 1]
in the construction of the confidence band and due to the local variability of the
confidence band’s width, the negligibility of the bias is not immediate. For any
t ∈ [0, 1], there exists some kt ∈ Tn with t ∈ Ikt . Hence,
q
χ1 loc
loc
ñĥloc
n (t) Ep p̂n (t, ĥn (t)) − p(t)
q
loc
χ1 (1)
= ñĥloc
n,kt Ep p̂n (kt δn , ĥn,kt ) − p(t)
q
q
loc
χ1 (1)
p̂
(k
δ
,
ĥ
)
−
p(k
δ
)
+
E
ñĥloc
≤ ñĥloc
t n
t n
p n
n,kt
n,kt
n,kt p(kt δn ) − p(t) .
Assume ĵn (kδn ) ≥ kn (kδn ) = j̄n (kδn ) − mn for all k ∈ Tn , where mn is given in
Proposition 4.1. Since δn ≤ 18 hβ∗ ,n for sufficiently large n ≥ n0 (β∗ , ε),
n
o
mn −un
ĥloc
· min 2−ĵn ((kt −1)δn )−mn , 2−ĵn (kt δn )−mn
n,kt = 2
≤ 2mn −un · min h̄n ((kt − 1)δn ), h̄n (kt δn )
≤ 3 · 2mn −un · h̄n (t)
−(j̄n (t)+1)
by Lemma 4.2. In particular, δn + ĥloc
holds for sufficiently large
n,kt ≤ 2
n ≥ n0 (c1 , β∗ ), so that Assumption 3.1, Lemma 3.2, and Lemma 4.4 yield
q
χ1 (1)
loc
sup
ñĥloc
n,kt Ep p̂n (kt δn , ĥn,kt ) − p(kt δn )
p∈Pn
≤ sup
p∈Pn
q
ñĥloc
n,kt
≤ sup b2
p∈Pn
sup
s∈B(t,δn )
loc
Eχp 1 p̂(1)
n (s, ĥn,kt ) − p(s)
q
βp (B(t,2−j̄n (t) ))
loc
ñĥloc
ĥ
n,kt
n,kt
q
βp (B(t,h̄n (t)))
loc
≤ sup b2 ñĥloc
n,kt ĥn,kt
p∈Pn
mn −un
≤ sup b2 3 · 2
2β∗2+1 q
ñh̄n (t)h̄n (t)βn,p (t)
p∈Pn
1
≤ c10 · (log ñ)− 4 c1 (2β∗ +1) log 2
(5.12)
for some constant c10 = b2 · 3(2β∗ +1)/2 , on the event
n
o
ĵn (kδn ) ≥ kn (kδn ) for all k ∈ Tn .
Now, we analyze the expression
q
ñĥloc
n,kt p(kt δn ) − p(t) .
For t ∈ Ik and for n ≥ n0 ,
δnβ∗
−jmin
≤2
log ñ
ñ
κ1 β∗
≤2
−jmin
log ñ
ñ
12
≤2
−jmin
log ñ
ñ
(t)
2ββn,p(t)+1
n,p
,
such that on the same event
q
√ ∗ 1 (m −u ) q
ñĥloc
|p(k
δ
)
−
p(t)|
≤
sup
3L · 2 2 n n ñh̄n (t) · δnβ∗
sup
t n
n,kt
p∈Pn
p∈Pn
1
≤ c11 · (log ñ)− 4 c1 log 2
(5.13)
for some constant c11 = c11 (β∗ , L∗ ). Taking (5.12) and (5.13) together,
q
n
o
χ1 loc
loc
sup sup an ñĥloc
n (t) Ep p̂n (t, ĥn (t)) − p(t) 1 ĵn (kδn ) ≥ kn (kδn )∀ k ∈ Tn
p∈Pn t∈[0,1]
≤ ε1,n ,
with
1
1
ε1,n = c10 · an (log n)− 4 c1 (2β∗ +1) log 2 + c11 · an (log n)− 4 c1 log 2 .
According to the definition of c1 in (3.9), ε1,n converges to zero. Observe furthermore that
q
loc
loc
(5.14)
sup
ñĥloc
n (t) p̂n (t, ĥn (t)) − p(t)
t∈[0,1]
can be written as
q
loc
loc
max sup ñĥloc
n (t) p̂n (t, ĥn (t)) − p(t)
k∈Tn t∈Ik
q
(1)
loc
(1)
loc
loc
= max ñĥn,k max p̂n (kδn , ĥn,k ) − inf p(t), sup p(t) − p̂n (kδn , ĥn,k )
t∈Ik
k∈Tn
t∈Ik
with the definitions in (3.12). That is, the supremum in (5.14) is measurable. Then,
by means of Proposition 4.1, with x1,n = x − ε1,n ,
)
!
(
q
inf P⊗n
p
an
p∈Pn
loc
loc
ñĥloc
n (t) p̂n (t, ĥn (t)) − p(t) − bn
sup
(
≥ inf
p∈Pn
≤x
t∈[0,1]
P⊗n
p
an
q
sup
)
ñĥloc
n (t)
loc
p̂loc
n (t, ĥn (t))
− p(t) − bn
≤ x,
t∈[0,1]
ĵn (kδn ) ≥ kn (kδn ) for all k ∈ Tn
(
≥ inf
p∈Pn
P⊗n
p
an
q
sup
)
ñĥloc
n (t)
loc
p̂loc
n (t, ĥn (t))
−
loc
Eχp 1 p̂loc
n (t, ĥn (t))
− bn
≤ x1,n ,
t∈[0,1]
ĵn (kδn ) ≥ kn (kδn ) for all k ∈ Tn
(
≥ inf P⊗n
p
an
p∈Pn
q
sup
)
loc
loc
loc
χ1 loc
ñĥloc
n (t) p̂n (t, ĥn (t)) − Ep p̂n (t, ĥn (t)) − bn
!
≤ x1,n
t∈[0,1]
− sup Pχp 2 ĵn (kδn ) < kn (kδn ) for some k ∈ Tn
p∈Pn
(5.15)
"
= inf
p∈Pn
E⊗n
p
P⊗n
p
(
an
max
k∈Tn
q
(1)
loc
χ1 (1)
loc
ñĥloc
n,k p̂n (kδn , ĥn,k ) − Ep p̂n (kδn , ĥn,k )
)
− bn
for n → ∞.
!#
≤ x1,n χ2
+ o(1)
Step 2 (Reduction to the supremum over a non-stationary Gaussian process).
In order to bound (5.15) from below note first that
)
!
(
q
(1)
loc
χ1 (1)
loc
⊗n
loc
Pp
an max ñĥn,k p̂n (kδn , ĥn,k ) − Ep p̂n (kδn , ĥn,k ) − bn ≤ x1,n χ2
k∈Tn
s
(
≥
P⊗n
p
an
max
ñĥloc
n,k
p(kδn )
k∈Tn
)
loc
p̂(1)
n (kδn , ĥn,k )
−
loc
Eχp 1 p̂(1)
n (kδn , ĥn,k )
Using the identity |x| = max{x, −x}, we arrive at
(
q
P⊗n
p
an
max
k∈Tn
ñĥloc
n,k
loc
p̂(1)
n (kδn , ĥn,k )
−
loc
Eχp 1 p̂(1)
n (kδn , ĥn,k )
− bn
!
x1,n
≤√
χ2 .
L∗
)
!
− bn
≤ x1,n χ2
≥ 1 − P1,p − P2,p
with
s
(
P1,p = P⊗n
an
p
max
ñĥloc
n,k
p(kδn )
k∈Tn
loc
χ1 (1)
loc
p̂(1)
(kδ
,
ĥ
)
−
E
p̂
(kδ
,
ĥ
)
n
n
n
n,k
p n
n,k
)
− bn
s
(
P2,p = P⊗n
an
p
max
k∈Tn
ñĥloc
n,k
p(kδn )
x1,n
>√
χ2
L∗
!
loc
(1)
loc
Eχp 1 p̂(1)
n (kδn , ĥn,k ) − p̂n (kδn , ĥn,k )
)
− bn
!
x1,n
>√
χ2 .
L∗
In order to approximate the maxima in P1,p and P2,p by a supremum over a Gaussian process, we verify the conditions in Corollary 2.2 developed recently in Chernozhukov, Chetverikov and Kato (2014b). For this purpose, consider the empirical
process
ñ
1 X
Gpn f = √
f (Xi ) − Ep f (Xi ) ,
ñ i=1
f ∈ Fn
indexed by
Fnp = {fn,k : k ∈ Tn }
with
fn,k : R → R
− 21
x 7→ ñĥloc
p(kδ
)
K
n
n,k
kδn − x
ĥloc
n,k
!
.
Note that Chernozhukov, Chetverikov and Kato (2014b) require the class of functions to be centered. We subsequently show that the class Fnp is Euclidean, which
implies by Lemma 5.2 that the corresponding centered class is VC. It therefore
suffices to consider the uncentered class Fnp . Note furthermore that fn,k are random functions but depend on the second sample χ2 only. Conditionally on χ2 , any
function fn,k ∈ Fnp is measurable as K is continuous. Due to the choice of κ2 and
due to
−un
·
ĥloc
n,k ≥ 2
(log ñ)κ2 −c1 log 2
(log ñ)κ2
≥
ñ
ñ
the factor
(5.16)
− 12
1
1
ñĥloc
≤ √ (log ñ) 2 (c1 log 2−κ2 )
n,k p(kδn )
M
tends to zero logarithmically. We now show that Fnp is Euclidean with envelope
Fn =
1
kKksup
√
(log ñ) 2 (c1 log 2−κ2 ) .
M
Note first that
Fnp ⊂ F =
1
1
fu,h,t : t ∈ R, 0 < u ≤ √ (log ñ) 2 (c1 log 2−κ2 ) , 0 < h ≤ 1
M
with
fu,h,t (·) = u · K
t−·
h
.
Hence,
N
Fnp , k
· kL1 (Q) , εFn ≤ N
k · kL1 (Q)
F,
,ε
Fn
for all probability measures Q and it therefore suffices to show that F is Euclidean.
To this aim, note that for any fu,h,t , fv,g,s ∈ F and for any probability measure Q,
kfu,h,t − fv,g,s kL1 (Q)
Fn
kfu,h,t − fv,h,t kL1 (Q)
kfv,h,t − fv,g,s kL1 (Q)
≤
+
Fn
F
n
kKksup
1
t−·
s−·
≤ |u − v| ·
+
K
−K
Fn
kKksup
h
g
.
L1 (Q)
Thus, using the estimate of the covering numbers in (2.1) and Lemma 14 in Nolan
and Pollard (1987), there exist constants A0 = A0 (A, K) and ν 0 = ν + 1 with
sup N
Q
0 ν 0
k · kL1 (Q)
A
F,
,ε ≤
Fn
ε
for all 0 < ε ≤ 1. That is, F is Euclidean with the constant function Fn as envelope,
and in particular
0 ν 0
A
p
(5.17)
.
lim sup sup N Fn , k · kL1 (Q) , εFn ≤
ε
n→∞
Q
Hence, by Lemma 5.2, the Pp -centered class Fnp,0 corresponding to Fnp is VC with
envelope 2Fn and
00 ν 00
A
p,0
N Fn , k · kL2 (Q) , 2εFn ≤
ε
and VC characteristics A00 = A00 (A, K) and ν 00 = ν 00 (ν). Next, we verify the Bernstein condition
Z
|f (y)|l p(y) dy ≤ σn2 Bnl−2
sup sup
p,0
p∈Pn f ∈Fn
0
for some Bn ≥ σn > 0 and Bn ≥ 2Fn and l = 2, 3, 4. First, for fn,k
∈ Fnp,0 ,
Z
0
max |fn,k
(y)|2 p(y) dy
k∈Tn
−1 Z
= max ñ p(kδn )
k∈Tn
1
ĥloc
n,k
K(x) −
Z
K(y)p kδn +
ĥloc
n,k y
2
dy p kδn + ĥloc
n,k x dx
−1
≤ σn2
with
σn2 =
2L∗ (kKksup + L∗ kKk1 )2
.
M ñ
Also, using (5.16),
Z
max |fn,k (y)|3 p(y) dy
k∈Tn
= max
k∈Tn
ñĥloc
n,k
p(kδn )
−3/2
ĥloc
n,k
Z
1
K(x) −
ĥloc
n,k
Z
K(x)p kδn +
−1
−1/2
≤ σn2 (kKksup + L∗ kKk1 ) · max ñĥloc
n,k p(kδn )
k∈Tn
≤
σn2
· Bn
with
kKk1
, 2 Fn .
Bn = max 1 + L
kKksup
∗
The condition
Z
sup
sup
p,0
p∈Pn f ∈Fn
|f (y)|4 p(y) dy ≤ σn2 Bn2
ĥloc
n,k y
3
dy p kδn + ĥloc
n,k x dx
follows analogously. Furthermore, it holds that k2Fn ksup = Bn . According to Corollary 2.2 in Chernozhukov, Chetverikov and Kato (2014b), for sufficiently large
n ≥ n0 (c1 , κ2 , L∗ , K) such that Bn ≥ σn , there exists a random variable
D
0
GPp f,
Zn,p
= max
p,0
f ∈Fn
and universal constants c12 and c13 , such that for η = 14 (κ2 − c1 log 2 − 4) > 0
√
log ñ
−η
0
Gn f − Zn,p > ε2,n χ2 ≤ c12 (log ñ) +
sup P an ñ max
,
p,0
ñ
f ∈Fn
p∈Pn
where
ε2,n = an
!
√
3/4
ñ1/4 Bn σn Kn
ñ1/3 (Bn σn2 Kn2 )1/3
Bn Kn
+
+
(log ñ)−η/2
(log ñ)−η/2
(log ñ)−η/3
with Kn = c13 ν 00 (log ñ ∨ log(A00 Bn /σn )), and GPp is a version of the Pp -Brownian
motion. That is, it is centered and has the covariance structure
Eχp 1 f (X1 )g(X1 )
for all f, g ∈ Fnp,0 . As can be seen from an application of the Itô isometry, it
possesses in particular the distributional representation
Z
p
D
(5.18)
(GPp f )f ∈Fnp,0 =
f (x) p(x) dW (x)
,
p,0
f ∈Fn
where W is a standard Brownian motion independent of χ2 . An easy calculation
furthermore shows that ε2,n tends to zero for n → ∞ logarithmically due to the
choice of η. Finally,
sup P1,p
p∈Pn
√
√
x1,n
p
0
p
√
a
≤ sup P⊗n
max
G
f
−
Z
≤
ε
χ
ñ
max
G
f
−
b
,
a
ñ
>
n
2,n 2
n
n
p
n
n,p
n
p
p,0
f ∈Fn
L∗
f ∈Fn
p∈Pn
√
0
+ sup P⊗n
an ñ max
Gpn f − Zn,p
> ε2,n χ2
p
p,0
p∈Pn
≤ sup
p∈Pn
P⊗n
p
an
√
f ∈Fn
0
ñ Zn,p
− bn > x2,n χ2 + o(1)
for n → ∞, with
x1,n
x − ε1,n
x
x2,n = √ − ε2,n = √
− ε2,n = √ + o(1).
∗
∗
L
L
L∗
The probability P2,p is bounded in the same way, leading to
(
)
!
q
⊗n
(1)
loc
χ1 (1)
loc
loc
inf Pp
an max ñĥn,k p̂n (kδn , ĥn,k ) − Ep p̂n (kδn , ĥn,k ) − bn ≤ x1,n χ2
p∈Pn
k∈Tn
√
0
≥ 2 inf P⊗n
an
ñ Zn,p
− bn ≤ x2,n χ2 − 1 + o(1).
p
p∈Pn
Next, we show that there exists some sequence (ε3,n ) converging to zero, such that
√
(5.19)
χ
= o(1).
sup P an ñ max
>
ε
G
f
−
max
G
f
3,n 2
Pp
Pp
p
p,0
f ∈Fn
f ∈Fn
p∈Pn
For this purpose, note first that
√
sup P an ñ max
GPp f − maxp GPp f > ε3,n χ2
p,0
f ∈Fn
f ∈Fn
p∈Pn
√
≤ sup P |Y |an ñ maxp |Pp f | > ε3,n χ2
f ∈Fn
p∈Pn
with Y ∼ N (0, 1). Due to the choice of c1
√
L∗ kKk1 −un /2
an ñ maxp |Pp f | ≤ an √
2
= o(1),
f ∈Fn
M
which proves (5.19). Following the same steps as before
√
0
inf P⊗n
a
ñ
Z
−
b
≤
x
χ
n
n
2,n 2
p
n,p
p∈Pn
√
≥ inf P an
ñ maxp GPp f − bn ≤ x3,n χ2 + o(1)
p∈Pn
f ∈Fn
with x3,n = x2,n − ε3,n .
Finally we conduct a further approximation
!
Z
kδn − x
1
K
dW (x)
(Yn,p (k))k∈Tn = q
loc
ĥloc
ĥn,k
n,k
to the process
p
√ Z
ñ fn,k (x) p(x) dW (x)
D
=
√
ñ GPp fn,k
k∈Tn
k∈Tn
k∈Tn
in order to obtain to a suitable intermediate process for Step 3. With
√
√
Vn,p (k) = ñ W (fn,k p) − Yn,p (k)
p
p
√ Z
= ñ fn,k (x)
p(x) − p(kδn ) dW (x),
it remains to show that
(5.20)
lim
sup P
n→∞ p∈P
n
W
an max |Vn,p (k)| > ε4,n
k∈Tn
=0
for some sequence (ε4,n )n∈N converging to zero. Note first that
2
max EW Vn,k
Z
p
2
p
p(x) − p(kδn ) dx
= max ñ fn,k (x)2
k∈Tn
k∈Tn
q
2
Z
p
1
2
loc
K(x)
= max
p(kδn + ĥn,k x) − p(kδn ) dx
k∈Tn p(kδn )
Z
1
≤ max
K(x)2 p(kδn + ĥloc
n,k x) − p(kδn ) dx
k∈Tn p(kδn )
L∗ kKk22 loc β∗
≤ max
ĥn,k
k∈Tn p(kδn )
L∗ kKk22
−c β log 2
≤
(log ñ) 1 ∗
.
M
Denoting by k · kψ2 the Orlicz norm corresponding to ψ2 (x) = exp(x2 ) − 1, we
deduce for sufficiently large n ≥ n0 (c1 , β∗ , L∗ , K, M )
sup
an · max |Vn,p (k)|
k∈Tn
p∈Pn
≤ sup an ·
p∈Pn
c(ψ2 ) ψ2−1
ψ2
δn−1
max kVn,p (k)kψ2
k∈Tn
12
q
L∗ kKk22
−c1 β∗ log 2
−1
(log ñ)
≤ an · c(ψ2 ) log 1 + δn
kY kψ2
M
p
− 1 c β log 2
≤ an · c log ñ (log ñ) 2 1 ∗
.
The latter expression converges to zero due to the choice of c1 in (3.9). Thus, (5.20)
is established. Following the same steps as before, we obtain
√
⊗n
an
ñ max GPp fn,k − bn ≤ x3,n χ2
inf Pp
k∈Tn
p∈Pn
W
≥ inf P
an max Yn,p (k) − bn ≤ x4,n + o(1)
p∈Pn
k∈Tn
for n → ∞, with x4,n = x3,n − ε4,n .
Step 3 (Reduction to the supremum over a stationary Gaussian process).
We now use Slepian’s comparison inequality in order to pass over to the least
favorable case. Since K is symmetric and of bounded variation, it possesses a representation
Z x
K(x) =
g dP
−1
for all but at most countably many x ∈ [−1, 1], where P is some symmetric probability measure on [−1, 1] and g is some measurable odd function with |g| ≤ T V (K).
Using this representation, and denoting by
s
o
1 n
Wk,l (z) =
W (kδn + ĥloc
) − W (kδn + z ĥloc
)
n,k
n,k
ĥloc
n,k
s
o
1 n
−
W (lδn + ĥloc
) − W (lδn + z ĥloc
)
n,l
n,l
ĥloc
n,l
s
n
o
1
loc
W (kδn − z ĥloc
)
−
W
(kδ
+
z
ĥ
)
W̃k,l (z) =
n
n,k
n,k
ĥloc
n,k
(5.21)
s
o
1 n
W (lδn + z ĥloc
) − W (lδn − z ĥloc
) ,
+
n,l
n,l
ĥloc
n,l
Fubini’s theorem with one stochastic integration and the Cauchy-Schwarz inequality
yield for any k, l ∈ Tn
2
EW Yn,p (k) − Yn,p (l)
s
Z Z x−kδ
n
n
o
ĥloc
1
n,k
= EW
g(z) dP (z)1 |x − kδn | ≤ ĥloc
dW (x)
n,k
ĥloc
−1
n,k
!2
s
Z Z x−lδ
n
n
o
ĥloc
1
n,l
loc
g(z) dP (z)1 |x − lδn | ≤ ĥn,l dW (x)
−
ĥloc
−1
n,l
(s
Z 1
Z n
o
1
loc
= EW
1
kδn + z ĥloc
≤
x
≤
kδ
+
ĥ
dW (x)
g(z)
n
n,k
n,k
ĥloc
−1
n,k
)
!2
s
Z n
o
1
loc
loc
−
1 lδn + z ĥn,l ≤ x ≤ lδn + ĥn,l dW (x) dP (z)
ĥloc
n,l
Z 1
2
= EW
g(z)Wk,l (z) dP (z)
−1
Z
= EW
Z
1
2
g(z) Wk,l (z) − Wk,l (−z) dP (z)
0
1Z 1
= EW
Z
0
1Z 1
≤
0
g(z)g(z 0 )W̃k,l (z)W̃k,l (z 0 ) dP (z) dP (z 0 )
0
n
o1/2
|g(z)g(z 0 )| EW W̃k,l (z)2 EW W̃k,l (z 0 )2
dP (z) dP (z 0 ).
0
We verify in Lemma A.1 that
EW W̃k,l (z)2 ≤ 4
for z ∈ [0, 1], so that
2
Z
EW (Yn,p (k) − Yn,p (l)) ≤ 4
2
1
|g(z)| dP (z)
0
≤ T V (K)2
for all k, l ∈ Tn . Consider now the Gaussian process
Z
c15
kδn − x
Yn,min (k) = √
K
dW (x),
δn /2
δn
with
c15 =
k ∈ Tn ,
T V (K)
.
kKk2
Furthermore,
2
EW (Yn,min (k) − Yn,min (l)) = EW Yn,min (k)2 + EW Yn,min (l)2 = T V (K)2
for all k, l ∈ Tn with k 6= l, so that
(5.22)
EW (Yn,p (k) − Yn,p (l))2 ≤ EW (Yn,min (k) − Yn,min (l))
2
for all k, l ∈ Tn . In order to apply Slepian’s comparison inequality we however
need coinciding second moments. For this aim, we analyze the modified Gaussian
processes
Ȳn,p (k) = Yn,p (k) + c16 Z
Ȳn,min (k) = Yn,min (k) + c17 Z
with
c16 = c16 (K) =
T V (K)
√
,
2
c17 = c17 (K) = kKk2 ,
and for some standard normally distributed random variable Z independent of
(Yn,p (k))k∈Tn and (Yn,min (k))k∈Tn . Note that these processes have the same increments as the processes before. In particular
2
2
EW Ȳn,p (k) − Ȳn,p (l) = EW (Yn,p (k) − Yn,p (l))
2
≤ EW (Yn,min (k) − Yn,min (l))
2
= EW Ȳn,min (k) − Ȳn,min (l)
for all k, l ∈ Tn by inequality (5.22). With this specific choice of c16 and c17 , they
furthermore have coinciding second moments
EW Ȳn,p (k)2 = EW Ȳn,min (k)2 =
T V (K)2
+ kKk22
2
for all k ∈ Tn . Then,
W
inf P
an max Yn,p (k) − bn ≤ x3,n
k∈Tn
p∈Pn
W
= inf P
an max Ȳn,p (k) − c16 Z − bn ≤ x4,n
p∈Pn
k∈Tn
W
≥ inf P
p∈Pn
≥ inf PW
p∈Pn
≥ inf PW
p∈Pn
1
an max Ȳn,p (k) − c16 Z − bn ≤ x4,n , −Z ≤
bn
k∈Tn
3c16
2
1
an max Ȳn,p (k) − bn ≤ x4,n − P −Z >
bn
k∈Tn
3
3c16
2
an max Ȳn,p (k) − bn ≤ x4,n + o(1)
k∈Tn
3
for n → ∞. Slepian’s inequality in the form of Corollary 3.12 in Ledoux and Talagrand (1991) yields
2
inf PW an max Ȳn,p (k) − bn ≤ x4,n
k∈Tn
p∈Pn
3
2
≥ PW an max Ȳn,min (k) − bn ≤ x4,n .
k∈Tn
3
Step 4 (Limiting distribution theory). Finally, we pass over to an iid sequence
and apply extreme value theory. Together with
2
W
P
an max Ȳn,min (k) − bn ≤ x4,n
k∈Tn
3
1
2
W
bn
≥P
an max Yn,min (k) + c17 Z − bn ≤ x4,n , Z ≤
k∈Tn
3
3c17
1
1
W
≥P
bn
an max Yn,min (k) − bn ≤ x4,n − P Z >
k∈Tn
3
3c17
1
= PW an max Yn,min (k) − bn ≤ x4,n + o(1)
k∈Tn
3
as n → ∞, we finally obtain
inf
p∈Pn
P⊗n
p
q
an
sup
!
ñĥloc
n (t)
loc
p̂loc
n (t, ĥn (t))
− p(t) − bn
!
≤x
t∈[0,1]
1
≥ 2 P an max Yn,min (k) − bn ≤ x4,n − 1 + o(1).
k∈Tn
3
Theorem 1.5.3 in Leadbetter, Lindgren and Rootzén (1983) yields now
(5.23)
Fn (x) = P
W
an
1
max Yn,min (k) − bn
k∈Tn
3
≤x
−→ F (x) = exp(− exp(−x))
for any x ∈ R. It remains to show, that Fn (xn ) → F (x) for some sequence xn → x as
n → ∞. Because F is continuous in x, there exists for any ε > 0 some δ = δ(ε) > 0
such that |y − x| ≤ δ implies |F (x) − F (y)| ≤ ε/2. In particular, for y = x ± δ,
(5.24)
|F (x) − F (x + δ)| ≤
ε
2
and |F (x) − F (x − δ)| ≤
ε
.
2
As xn → x, there exists some N1 = N1 (ε), such that |xn − x| ≤ δ for all n ≥ N1 .
Therefore, employing the monotonicity of Fn ,
|Fn (xn ) − F (x)| ≤ |Fn (x + δ) − F (x)| ∨ |Fn (x − δ) − F (x)|
for n ≥ N1 , where
|Fn (x ± δ) − F (x)| ≤ |Fn (x ± δ) − F (x ± δ)| + |F (x ± δ) − F (x)| ≤ ε
for n ≥ N2 = N2 (ε) due to (5.23) and (5.24). Consequently,
lim
inf
n→∞ p∈Pn
P⊗n
p
!
!
q
loc
loc
≤x
ñĥloc
sup
n (t) p̂n (t, ĥn (t)) − p(t) − bn
an
t∈[0,1]
1
≥ 2 lim P an max Yn,min (k) − bn ≤ x4,n − 1 + o(1)
n→∞
k∈Tn
3
√
= 2P
L∗ G ≤ x − 1 + o(1), n → ∞,
for some standard Gumbel distributed random variable G.
Proof of Proposition 3.10. The proof is based on a reduction of the supremum over the class to a maximum over two distinct hypotheses.
Part 1. For β ∈ [β∗ , 1), the construction of the hypotheses is based on the Weierstraß function as defined in (3.8). As in the proof of Proposition 3.3 consider the
function p0 : R → R with
0,
if |x − t| ≥ 10
3
1 + 3 (x − t + 2),
10
if
−
<
x
− t < −2
3
p0 (x) = 41 16
1−2−β
+ 12 Wβ (x − t),
if |x − t| ≤ 2
16
3
−
(x
−
t
−
2),
if 2 < x − t < 10
4
16
3
and the functions p1,n , p2,n : R → R with
p1,n (x) = p0 (x) + qt+ 49 ,n (x; gβ,n ) − qt,n (x; gβ,n ),
x∈R
p2,n (x) = p0 (x) + qt+ 94 ,n (x; c18 · gβ,n ) − qt,n (x; c18 · gβ,n ),
x∈R
for gβ,n = 41 n−1/(2β+1) and c18 = c18 (β) = (2LW (β))−1/β , where
(
qa,n (x; g) =
for a ∈ R and g > 0.
if |x − a| > g
0,
1−2−β
12
Wβ (x − a) − Wβ (g) ,
if |x − a| ≤ g
Fig 5. Functions p1,n and p2,n for t = 0.5, β = 0.5 and n = 50
Following the lines of the proof of Proposition 3.3, both p1,n and p2,n are contained
in the class Pk (L, β∗ , M, KR , ε) for sufficiently large k ≥ k0 (β∗ ). Moreover, both
p1,n and p2,n are constant on B(t, c18 · gβ,n ), so that
p1,n|B(t,c18 ·gβ,n ) , p2,n|B(t,c18 ·gβ,n ) ∈ HB(t,c18 ·gβ,n ) (∞, L)
for some constant L = L(β). Using Lemma 3.5 and (5.3), the absolute distance of
the two hypotheses in t is at least
|p1,n (t) − p2,n (t)| = |qt,n (t; gβ,n ) − qt,n (t; c18 · gβ,n )|
1 − 2−β
|Wβ (gβ,n ) − Wβ (c18 · gβ,n )|
12
1 − 2−β∗
≥
|Wβ (gβ,n ) − Wβ (0)| − |Wβ (c18 · gβ,n ) − Wβ (0)|
12
1 − 2−β∗ β
β
≥
gβ,n − LW (β) (c18 · gβ,n )
12
β
≥ 2c19 gβ,n
=
where
c19 = c19 (β∗ ) =
1 − 2−β∗
.
48
Since furthermore
Z
(p2,n (x) − p1,n (x)) dx = 0,
and log(1 + x) ≤ x for x > −1, the Kullback-Leibler divergence between the asso⊗n
ciated product probability measures P⊗n
1,n and P2,n is bounded from above by
Z
(p2,n (x) − p1,n (x))2
⊗n
K(P⊗n
,
P
)
≤
n
dx
2,n
1,n
p1,n (x)
Z
≤ 12 n (p2,n (x) − p1,n (x))2 dx
Z
= 24 n (q0,n (x; gβ,n ) − q0,n (x, c18 · gβ,n ))2 dx
= 24 n
1 − 2−β
12
2
Z
gβ,n
2
c18 ·gβ,n
2
Wβ (x) − Wβ (gβ,n ) dx
Z
c18 ·gβ,n
+
Wβ (c18 · gβ,n ) − Wβ (gβ,n )
2
!
dx
−c18 ·gβ,n
≤ 24 n LW (β)
2
1 − 2−β
12
2
Z
gβ,n
(gβ,n − x)2β dx
2
c18 ·gβ,n
!
+ 2(1 − c18 )
2
2β+1
c18 gβ,n
= c20
with
2 −(2β+1)
c20 = c20 (β) = 48 LW (β) 4
1 − 2−β
12
2
!
(1 − c18 )2β+1
2
+ (1 − c18 ) c18 ,
2β + 1
where we used Lemma 3.5 in the last inequality. Theorem 2.2 in Tsybakov (2009)
then yields
β
2β+1 |T (t) − p(t)| ≥ c
inf sup P⊗n
n
n
19
p
Tn p∈Sk (β)
(
≥ max
)
p
1 − c20 /2
1
exp(−c20 ),
> 0.
4
2
Part 2. For β = 1, consider the function p0 : R → R with
p0 (x) =
(
0,
1
4
−
1
16 |x
− t|,
if |x − t| > 4
if |x − t| ≤ 4
and the functions p1,n , p2,n : R → R with
p1,n (x) = p0 (x) + qt+ 94 ,n (x; g1,n ) − qt,n (x; g1,n )
p2,n (x) = p0 (x) + qt+ 49 ,n (x; g1,n /2) − qt,n (x; g1,n /2)
for g1,n = 41 n−1/3 , where
(
qa,n (x; g) =
0,
1
16 (g
− |x − a|),
if |x − a| > g
if |x − a| ≤ g
for a ∈ R and g > 0. Following the lines of the proof of Proposition 3.3, both p1,n
and p2,n are contained in the class Pk for sufficiently large k ≥ k0 (β∗ ). Moreover,
both p1,n and p2,n are constant on B(t, g1,n /2), so that
p1,n|B(t,g1,n /2) , p2,n|B(t,g1,n /2) ∈ HB(t,g1,n /2) (∞, 1/4).
The absolute distance of p1,n and p2,n in t is given by
|p1,n (t) − p2,n (t)| =
1
g1,n ,
32
whereas the Kullback-Leibler divergence between the associated product probability
⊗n
measures P⊗n
1,n and P2,n is upper bounded by
⊗n
K(P⊗n
2,n , P1,n )
(p2,n (x) − p1,n (x))2
dx
p1,n (x)
Z
≤ 16 n (p2,n (x) − p1,n (x))2 dx
Z
= 32 n (q0,n (x; g1,n ) − q0,n (x, g1,n /2))2 dx
Z
≤n
Z
g1,n
= 32 n 2
g1,n /2
=
1
(g1,n − x)
16
2
Z
g1,n /2
dx +
g
−g1,n /2
1,n
32
2
!
dx
1
2
+ .
2
3 · 32
32
Together with Theorem 2.2 in Tsybakov (2009) the result follows.
Proof of Theorem 3.12. Recall the notation of Subsection 3.2, in particular
the definitions of ĥloc
n (t) in (3.12), of qn (α) in (3.14), of βn,p (t) in (3.15), and of h̄n (t)
in (4.1). Furthermore, set γ̃ = γ̃(c1 ) = 21 (c1 log 2 − 1). To show that the confidence
band is adaptive, note that according to Proposition 4.1 and Lemma 4.2 for any
δ > 0 there exists some n0 (δ), such that
(t)
2β−βn,p(t)+1
√
n,p
jmin
log
ñ
sup Pχp 2 sup |Cn,α (t)| ·
≥ 6 · 21− 2 qn (α)(log ñ)γ̃
ñ
p∈Pn
t∈[0,1]
!
h̄n (t)
χ2
−un
sup
·2
≥6
= sup Pp
p∈Pn
t∈[0,1] ĥloc
n (t)
h̄n (t)
n
o ≥ 6
= sup Pχp 2 max sup
k∈T
−
ĵ
((k−1)δ
n
t∈Ik min 2 n
p∈Pn
n ) , 2−ĵn (kδn )
min
h̄
((k
−
1)δ
),
h̄
(kδ
)
n
n
n
n
n
o ≥ 2
≤ sup Pχp 2 max
k∈Tn
p∈Pn
min 2−ĵn ((k−1)δn ) , 2−ĵn (kδn )
n
o
min 2−j̄n ((k−1)δn ) , 2−j̄n (kδn )
n
o ≥ 1
≤ sup Pχp 2 ∃ k ∈ Tn :
p∈Pn
min 2−ĵn ((k−1)δn ) , 2−ĵn (kδn )
n
o
min 2−j̄n ((k−1)δn ) , 2−j̄n (kδn )
n
o < 1
= sup 1 − Pχp 2 ∀ k ∈ Tn :
p∈Pn
min 2−ĵn ((k−1)δn ) , 2−ĵn (kδn )
n
o
≤ sup 1 − Pχp 2 ĵn (kδn ) < j̄n (kδn ) for all k ∈ Tn
p∈Pn
≤δ
for all n ≥ n0 (δ).
5.2. Proofs of the results in Section 4.
Proof of Proposition 4.1. We prove first that
(5.25)
lim sup Pχp 2 ĵn (kδn ) > j̄n (kδn ) + 1 for some k ∈ Tn = 0.
n→∞ p∈P
n
Note first that if ĵn (kδn ) > j̄n (kδn ) + 1 for some k ∈ Tn , then j̄n (kδn ) + 1 cannot be
an admissible exponent according to the construction of the bandwidth selection
scheme in (3.10), that is, j̄n (kδn ) + 1 ∈
/ An (kδn ). By definition of An (kδn ) there
exist exponents mn,k , m0n,k ∈ Jn with mn,k > m0n,k ≥ j̄n (kδn ) + 4 such that
r
max
s∈B (
kδn , 78 ·2−(j̄n (kδn )+1)
)∩Hn
|p̂(2)
n (s, mn,k )
−
0
p̂(2)
n (s, mn,k )|
> c2
log ñ
.
ñ2−mn,k
Consequently,
Pχp 2 ĵn (kδn ) > j̄n (kδn ) + 1 for some k ∈ Tn
≤ Pχp 2 ∃k ∈ Tn and ∃ mn,k , m0n,k ∈ Jn with mn,k > m0n,k ≥ j̄n (kδn ) + 4 such that
r
max
s∈B (kδn , 78 ·2−(j̄n (kδn )+1) )∩Hn
≤
X
X
|p̂(2)
n (s, mn,k )
−
0
p̂(2)
n (s, mn,k )|
> c2
log ñ
ñ2−mn,k
!
Pχp 2 m > m0 ≥ j̄n (kδn ) + 4 and
m∈Jn m0 ∈Jn
max
s∈B (kδn , 78 ·2−(j̄n (kδn )+1) )∩Hn
(2)
0
|p̂(2)
n (s, m) − p̂n (s, m )|
r
> c2
!
log ñ
for some k ∈ Tn .
ñ2−m
We furthermore use the following decomposition into two stochastic terms and two
bias terms
max
s∈B (kδn , 78 ·2−(j̄n (kδn )+1) )∩Hn
≤
(2)
0
p̂(2)
n (s, m) − p̂n (s, m )
max
s∈B (kδn , 87 ·2−(j̄n (kδn )+1) )∩Hn
+
χ2 (2)
p̂(2)
n (s, m) − Ep p̂n (s, m)
max
s∈B (kδn , 87 ·2−(j̄n (kδn )+1) )∩Hn
+
sup
s∈B (kδn , 87 ·2−(j̄n (kδn )+1) )
0
χ2 (2)
0
p̂(2)
n (s, m ) − Ep p̂n (s, m )
Eχp 2 p̂(2)
n (s, m) − p(s)
+
0
Eχp 2 p̂(2)
n (s, m ) − p(s) .
sup
s∈B (kδn , 78 ·2−(j̄n (kδn )+1) )
In order to bound the two bias terms, note first that for any m > m0 ≥ j̄n (kδn ) + 4
both
7 −(j̄n (kδn )+1)
1
·2
= 2−(j̄n (kδn )+1) − · 2−(j̄n (kδn )+1) ≤ 2−(j̄n (kδn )+1) − 2−m
8
8
and
0
7 −(j̄n (kδn )+1)
1
·2
= 2−(j̄n (kδn )+1) − · 2−(j̄n (kδn )+1) ≤ 2−(j̄n (kδn )+1) − 2−m .
8
8
According to Assumption 3.1 and Lemma 3.2,
p|B (kδn ,2−(j̄n (kδn )+1) ) ∈ Hβ ∗ ,B (kδn ,2−(j̄n (kδn )+1) ) βp B kδn , 2−j̄n (kδn ) , L∗ ,
so that Lemma 4.4 yields,
Eχp 2 p̂(2)
n (s, m) − p(s)
sup
s∈B (kδn , 87 ·2−(j̄n (kδn )+1) )
≤
Eχp 2 p̂(2)
n (s, m) − p(s)
sup
s∈B(kδn
,2−(j̄n (kδn )+1) −2−m )
−j̄n (kδn )
≤ b2 2−mβp (B (kδn ,2
))
≤ b2 2−mβp (B (kδn ,h̄n (kδn )))
≤ b2 2−mβn,p (kδn ) ,
with the bandwidth h̄n (·) as defined in (4.1), and analogously
0
0
−m βn,p (kδn )
Eχp 2 p̂(2)
.
n (s, m ) − p(s) ≤ b2 2
sup
s∈B (kδn , 87 ·2−(j̄n (kδn )+1) )
Thus, the sum of the two bias terms is bounded from above by 2b2 h̄n (kδn )βn,p (kδn ) ,
such that
s
ñ2−m
sup
Eχ2 p̂(2) (s, m) − p(s)
log ñ s∈B (kδn , 7 ·2−(j̄n (kδn )+1) )) p n
8
!
+
s∈B (
s
≤
ñh̄n (kδn )
· 2b2 h̄n (kδn )βn,p (kδn )
log ñ
≤ c21 ,
0
Eχp 2 p̂(2)
n (s, m ) − p(s)
sup
)
kδn , 87 ·2−(j̄n (kδn )+1) )
where c21 = c21 (β∗ , L∗ , ε) = 2b2 · 2−jmin (2β∗ +1)/2 . Thus, it holds
Pχp 2 ĵn (kδn ) > j̄n (kδn ) + 1 for some k ∈ Tn
(
X X
χ2 (2)
Pχp 2 max
max
≤
p̂(2)
n (s, m) − Ep p̂n (s, m)
k∈Tn s∈B (kδn , 7 ·2−(j̄n (kδn )+1) )∩Hn
0
8
m∈Jn m ∈Jn
!
r
c2 − c21
log ñ
>
2
ñ2−m
+ Pχp 2
max
max
k∈Tn s∈B (kδn , 7 ·2−(j̄n (kδn )+1) )∩Hn
8
0
χ2 (2)
0
p̂(2)
n (s, m ) − Ep p̂n (s, m )
c2 − c21
>
2
≤ 2 |Jn |2 · Pχp 2
r
log ñ
ñ2−m0
s
!
c2 − c21
ñh
(2)
χ2 (2)
p̂ (s, h) − Ep p̂n (s, h) >
.
sup max
log ñ n
2
s∈Hn h∈Gn
!)
Choose c2 = c2 (A, ν, β∗ , L∗ , K, ε) sufficiently large such that c2 ≥ c21 + 2η0 , where
η0 is given in Lemma 4.3. Then, Lemma 4.3 and the logarithmic cardinality of Jn
yield (5.25). In addition, we show that
(5.26)
lim sup Pχp 2 ĵn (kδn ) < kn (kδn ) for some k ∈ Tn = 0.
n→∞ p∈P
n
For t ∈ [0, 1], due to the sequential definition of the set of admissible bandwidths
An (t) in (3.10), if ĵn (t) < jmax , then both ĵn (t) and ĵn (t)+1 are contained in An (t).
Note furthermore, that kn (t) < jmax for any t ∈ [0, 1]. Thus, if ĵn (kδn ) < kn (kδn )
for some k ∈ Tn , there exists some index j < kn (kδn ) + 1 with j ∈ An (kδn ) and
satisfying (3.6) and (3.7) for u = 2−j and t = kδn . In particular,
r
log ñ
(2)
(2)
max
p̂n (s, j + 3) − p̂n (s, j̄n (kδn )) ≤ c2
−
ñ2 j̄n (kδn )
s∈B (kδn , 87 ·2−j )∩Hn
for sufficiently large n ≥ n0 (c1 ), using that j̄n (kδn ) ∈ Jn for any k ∈ Tn . Consequently
Pχp 2 ĵn (kδn ) < kn (kδn ) for some k ∈ Tn
(5.27)
X
Pχp 2 ∃ k ∈ Tn : j < kn (kδn ) + 1 and p|B(kδn ,2−j ) ∈ Hβ ∗ ,B(kδn ,2−j ) (β, L∗ )
≤
j∈Jn
and
|(Kg ∗ p)(s) − p(s)| ≥
sup
s∈B(kδn ,2−j −g)
g ≤ 2−(j+3) and
max
s∈B (kδn , 78 ·2−j )∩Hn
gβ
for all g ∈ G∞ with
log n
(2)
p̂(2)
n (s, j + 3) − p̂n (s, j̄n (kδn ))
r
≤ c2
log ñ
!
ñ2−j̄n (kδn )
.
The triangle inequality yields
max
s∈B (kδn , 78 ·2−j )∩Hn
(2)
p̂(2)
n (s, j + 3) − p̂n (s, j̄n (kδn ))
≥
max
s∈B (kδn , 78 ·2−j )∩Hn
−
χ2 (2)
Eχp 2 p̂(2)
n (s, j + 3) − Ep p̂n (s, j̄n (kδn ))
max
s∈B (kδn , 87 ·2−j )∩Hn
−
max
s∈B (kδn , 78 ·2−j )∩Hn
χ2 (2)
p̂(2)
n (s, j + 3) − Ep p̂n (s, j + 3)
χ2 (2)
p̂(2)
n (s, j̄n (kδn )) − Ep p̂n (s, j̄n (kδn )) .
We further decompose
max
s∈B (kδn , 87 ·2−j )∩Hn
χ2 (2)
Eχp 2 p̂(2)
n (s, j + 3) − Ep p̂n (s, j̄n (kδn ))
≥
max
s∈B (kδn , 87 ·2−j )∩Hn
−
Eχp 2 p̂(2)
n (s, j + 3) − p(s)
Eχp 2 p̂(2)
n (s, j̄n (kδn )) − p(s) .
sup
s∈B (
kδn , 78 ·2−j
)
As Assumption 3.1 is satisfied for u = 2−j and t = kδn , together with Lemma 3.2
we both have
(5.28)
p|B(kδn ,2−j ) ∈ Hβ ∗ ,B(kδn ,2−j ) βp B kδn , 2−j , L∗
and
−j
g βp (B (kδn ,2 ))
sup
|(Kg ∗ p)(s) − p(s)| ≥
log n
s∈B(kδn ,2−j −g)
(5.29)
for all g ∈ G∞ with g ≤ 2−(j+3) . In particular, (5.28) together with Lemma 4.4
gives the upper bias bound
sup
s∈B (kδn , 87 ·2−j )
−j̄n (kδn )βp (B (kδn ,2−j ))
Eχp 2 p̂(2)
n (s, j̄n (kδn )) − p(s) ≤ b2 · 2
for sufficiently large n ≥ n0 (c1 ), whereas (5.29) yields the bias lower bound
Eχp 2 p̂(2)
n (s, j + 3) − p(s)
sup
s∈B (
kδn , 78 ·2−j
)
=
sup
s∈B(kδn
,2−j −2−(j+3) )
Eχp 2 p̂(2)
n (s, j + 3) − p(s)
−j
2−(j+3)βp (B (kδn ,2 ))
≥
.
log n
(5.30)
To show that the above lower bound even holds for the maximum over the set
B kδn , 87 · 2−j ∩ Hn , note that for any point kδn − 87 2−j ≤ t̃ ≤ kδn + 87 2−j there
exists some t ∈ Hn with |t − t̃| ≤ δn , and
Eχp 2 p̂(2)
n (t, j + 3) − p(t)
Z
n
o
K(x) p(t + 2−(j+3) x) − p(t) dx
=
Z
n
o
K(x) p t̃ + 2−(j+3) x − p t̃
≥
dx
Z
− |K(x)| · p(t + 2−(j+3) x) − p t̃ + 2−(j+3) x dx
Z
− |K(x)| · |p(t) − p t̃ | dx
(5.31)
Z
≥
n
o
K(x) p t̃ + 2−(j+3) x − p t̃ dx − 2kKk1 L∗ · |t − t̃|β∗ ,
where
|t − t̃|β∗ ≤ δnβ∗
−jmin
≤2
log ñ
ñ
21
(log ñ)−2
≤
h̄n (kδn )βn,p (kδn )
(log ñ)2
≤
2−(j̄n (kδn )−1)βn,p (kδn )
(log ñ)2
≤
2−(j+3)βn,p (kδn )
(log ñ)2
for sufficiently large n ≥ n0 (c1 ). For n ≥ n0 (c1 ) and j ∈ Jn with j < kn (kδn ) + 1,
2−j > 2mn −1 · 2−j̄n (kδn ) > h̄n (kδn ).
Together with (5.28), this implies
βp (B(kδn , 2−j )) ≤ βn,p (kδn )
(5.32)
since otherwise p would be β-Hölder smooth with β > βn,p (kδn ) on a ball B(kδn , r)
with radius r > h̄n (t), which would contradict the definition of βn,p (kδn ) together
with Lemma A.4. This implies
−j
β∗
|t − t̃|
2−(j+3)βp (B(kδn ,2
≤
(log ñ)2
))
.
Together with inequalities (5.30) and (5.31),
max
s∈B (kδn , 87 ·2−j )∩Hn
Eχp 2 p̂(2)
n (s, j + 3) − p(s)
−j
≥
∗
Eχp 2 p̂(2)
n (s, j + 3) − p(s) − 2kKk1 L
sup
s∈B (kδn , 78 ·2−j )
1 2−(j+3)βp (B(kδn ,2
≥ ·
2
log ñ
−j
2−(j+3)βp (B(kδn ,2
(log ñ)2
))
))
for sufficiently large n ≥ n0 (L∗ , K, c1 ). Altogether, we get for j < kn (kδn ) + 1,
s
ñ2−j̄n (kδn )
χ2 (2)
max
Eχp 2 p̂(2)
n (s, j + 3) − Ep p̂n (s, j̄n (kδn ))
log ñ
s∈B (kδn , 78 ·2−j )∩Hn
s
ñ2−j̄n (kδn )
≥
max
Eχp 2 p̂(2)
n (s, j + 3) − p(s)
log ñ
s∈B (kδn , 87 ·2−j )∩Hn
!
−
max
s∈B (kδn , 87 ·2−j )∩Hn
s
≥
s
≥
s
>
ñh̄n (kδn )
2 log ñ
−j
1 2−(j+3)βp (B(kδn ,2
·
2
log ñ
!
))
−j̄n (kδn )βp (B (kδn ,2−j ))
− b2 · 2
−j
1 2(j̄n (kδn )−j−4)βp (B(kδn ,2
·
2
log ñ
ñh̄n (kδn ) −(j̄n (kδn )−1)βp (B(kδn ,2−j ))
2
2 log ñ
ñh̄n (kδn ) −(j̄n (kδn )−1)βp (B(kδn ,2−j ))
2
2 log ñ
Eχp 2 p̂(2)
n (s, j̄n (kδn )) − p(s)
2(mn −5)β∗
− b2 2−β∗
2 log ñ
))
!
−β∗
− b2 2
.
We now show that for j ∈ Jn with j < kn (kδn ) + 1, we have that
(5.33)
βp (B(kδn , 2−j )) ≤ β ∗ .
According to (5.32), it remains to show that βn,p (kδn ) ≤ β ∗ . If βn,p (kδn ) = ∞, then
j̄n (kδn ) = jmin . Since furthermore j ∈ Jn and therefore j ≥ jmin , this immediately
contradicts j < kn (kδn ) + 1. That is, j < kn (kδn ) + 1 implies that βn,p (kδn ) < ∞,
which in turn implies βn,p (kδn ) ≤ β ∗ according to Remark 3. Due to (3.9) and
(5.33), the last expression is again lower bounded by
s
2βp (B(kδn ,2−j ))+1
−j
ñh̄n (kδn )
2
h̄n (kδn )βp (B(kδn ,2 )) 2jmin
3c2
log ñ
for sufficiently large n ≥ n0 (L∗ , K, β∗ , β ∗ , c1 , c2 ). Recalling (5.32), we obtain
s
ñ2−j̄n (kδn )
χ2 (2)
max
Eχp 2 p̂(2)
n (s, j + 3) − Ep p̂n (s, j̄n (kδn ))
7
−j
log ñ
s∈B (kδn , 8 ·2 )∩Hn
s
2βp (B(kδn ,2−j ))+1
ñh̄n (kδn )
2
≥ 3c2
h̄n (kδn )βn,p (kδn ) 2jmin
log ñ
= 3c2 .
Thus, by the above consideration and (5.27),
X
Pχp 2 ĵn (kδn ) < kn (kδn ) for some k ∈ Tn ≤
(Pj,1 + Pj,2 )
j∈Jn
for sufficiently large n ≥ n0 (L∗ , K, β∗ , β ∗ , c1 , c2 ), with
s
ñ2−(j+3)
χ2
Pj,1 = Pp ∃k ∈ Tn : j < kn (kδn ) + 1 and
log ñ
!
·
max
s∈B (kδn , 87 ·2−j )∩Hn
p̂(2)
n (s, j
+ 3) −
s
Pj,2 = Pχp 2 ∃k ∈ Tn : j < kn (kδn ) + 1 and
Eχp 2 p̂(2)
n (s, j
+ 3) ≥ c2
ñ2−j̄n (kδn )
log ñ
!
·
max
s∈B (kδn , 78 ·2−j )∩Hn
p̂(2)
n (s, j̄n (kδn ))
−
Eχp 2 p̂(2)
n (s, j̄n (kδn ))
Both Pj,1 and Pj,2 are bounded by
s
!
ñh
χ2
(2)
χ2 (2)
Pj,i ≤ Pp
sup max
p̂ (s, h) − Ep p̂n (s, h) ≥ c2 ,
log ñ n
s∈Hn h∈Gn
≥ c2 .
i = 1, 2.
For sufficiently large c2 ≥ η0 , Lemma 4.3 and the logarithmic cardinality of Jn
yield (5.26).
Proof of Lemma 4.2. We prove both inequalities separately.
Part (i). First, we show that the density p cannot be substantially unsmoother
at z ∈ (s, t) than at the boundary points s and t. Precisely, we shall prove that
min{h̄n (s), h̄n (t)} ≤ 2h̄n (z). In case
βn,p (s) = βn,p (t) = ∞,
that is h̄n (s) = h̄n (t) = 2−jmin , we immediately obtain h̄n (z) ≥ 12 2−jmin since
1 −jmin
B z, 2
⊂ B(s, h̄n (s)) ∩ B(t, h̄n (t)).
2
Hence, we subsequently assume that
min{βn,p (s), βn,p (t)} < ∞.
Note furthermore that
(5.34)
min h̄n (s), h̄n (t)
= hmin{βn,p (s),βn,p (t)},n .
In a first step, we subsequently conclude that
(5.35)
1
z + hmin{βn,p (s),βn,p (t)},n < s + h̄n (s)
2
or
(5.36)
1
z − hmin{βn,p (s),βn,p (t)},n > t − h̄n (t).
2
Note first that |s − t| < hβ,n for all β ≥ β∗ by condition (4.2). Assume now that
(5.35) does not hold. Then, inequality (5.36) directly follows as
z−
1
1
min{h̄n (s), h̄n (t)} = z + min{h̄n (s), h̄n (t)} − min{h̄n (s), h̄n (t)}
2
2
≥ s + h̄n (s) − min{h̄n (s), h̄n (t)}
≥ t − (t − s)
> t − h̄n (t).
Vice versa, if (5.36) does not hold, then a similar calculation as above shows that
(5.35) is true. Subsequently, we assume without loss of generality that (5.35) holds.
That is,
(5.37)
1
s − h̄n (s) < z − hmin{βn,p (s),βn,p (t)},n
2
1
< z + hmin{βn,p (s),βn,p (t)},n
2
< s + h̄n (s).
There exists some β̃ > 0 with
(5.38)
hβ̃,n =
1
min{h̄n (t), h̄n (s)}.
2
for sufficiently large n ≥ n0 (β∗ ). Equation (5.38) implies that
β̃ < min{βn,p (s), βn,p (t)} ≤ βn,p (s).
(5.39)
Finally, we verify that
βn,p (z) ≥ β̃.
(5.40)
Using Lemma A.4 as well as (5.37), (5.38), and (5.39) we obtain
kpkβ̃,β ∗ ,B(z,h
β̃,n )
∗
bβ̃∧β c
=
X
kp(k) kB (z, 1 min{h̄n (t),h̄n (s)})
2
k=0
+
|p(bβ̃∧β
sup
x,y ∈ B (z, 21 min{h̄n (t),h̄n (s)})
x6=y
∗
c)
(x) − p(bβ̃∧β
|x −
∗
c)
(y)|
y|β̃−bβ̃∧β ∗ c
≤ L∗ .
Consequently, we conclude (5.40). With (5.34) and (5.38), this in turn implies
min h̄n (s), h̄n (t) = 2hβ̃,n ≤ 2hβn,p (z),n = 2h̄n (z).
Part (ii). Now, we show that the density p cannot be substantially smoother
at z ∈ (s, t) than at the boundary points s and t. Without loss of generality, let
βn,p (t) ≤ βn,p (s). We prove the result by contradiction: assume that
8
min h̄n (s), h̄n (t) <
· h̄n (z).
17
(5.41)
Since t − z ≤ hβ,n /8 for all β ≥ β∗ by condition (4.2), so that in particular t − z ≤
h̄n (t)/8, we obtain together with (5.41) that
1
1
17
1
(5.42)
z − t + h̄n (z) >
− h̄n (t) + h̄n (t) = h̄n (t) > 0.
2
2
8
8
Because furthermore 21 (z − t + h̄n (z)) < 1, there exists some β 0 = β 0 (n) > 0 with
hβ 0 ,n =
1
z − t + h̄n (z) .
2
This equation in particular implies that hβ 0 ,n < 12 h̄n (z) and thus β 0 < βn,p (z).
Since furthermore t − z < h̄n (z) by condition (4.2) and therefore also
z − h̄n (z) < t − hβ 0 ,n < t + hβ 0 ,n < z + h̄n (z),
we immediately obtain
kpkβ 0 ,β ∗ ,B(t,hβ0 ,n ) ≤ L∗ ,
so that
βn,p (t) ≥ β 0 .
This contradicts inequality (5.42).
Proof of Lemma 4.3. Without loss of generality, we prove the inequality for
(1)
the estimator p̂n (·, h) based on χ1 . Note first, that
s
ñ
X
ñh
χ1 (1)
(f (Xi ) − Ep f (Xi ))
p̂(1)
sup sup
n (s, h) − Ep p̂n (s, h) = sup
log ñ
s∈Hn h∈Gn
f ∈En i=1
with
En =
1
fn,s,h (·) = (ñh log ñ)− 2 K
·−s
h
: s ∈ Hn , h ∈ Gn .
Observe first that
sup Varp (fn,s,h (X1 )) ≤ sup Ep fn,s,h (X1 )2
p∈Pn
p∈Pn
= sup
p∈Pn
1
ñh log ñ
Z
K
x−s
h
2
p(x) dx
L∗ kKk22
ñ log ñ
=: σn2
≤
uniformly over all fn,s,h ∈ En , and
kKksup
sup max kfn,s,h ksup ≤ max √
h∈Gn
ñh log ñ
s∈Hn h∈Gn
= kKksup (log ñ)−
≤
κ2 +1
2
kKksup
=: Un ,
(log ñ)3/2
where the last inequality holds true because κ2 ≥ 2 by its definition in (3.9). In
particular σn ≤ Un for sufficiently large n ≥ n0 (L∗ , K). Since (ñh log ñ)−1/2 ≤ 1
for all h ∈ Gn and for all n ≥ n0 , the class En satisfies the VC property
00 ν 00
A
lim sup sup N En , k · kL2 (Q) , εkKksup ≤
ε
n→∞
Q
for some VC characteristics A00 = A00 (A, K) and ν 0 = ν + 1, by the same arguments
as in (5.17). According to Proposition 2.2 in Giné and Guillou (2001), there exist
constants c22 = c22 (A00 , ν 00 ) and c5 = c5 (A00 , ν 00 ), such that
s
!
ñh
χ1
(1)
χ1 (1)
Pp
sup max
p̂ (s, h) − Ep p̂n (s, h) > η
log ñ n
s∈Hn h∈Gn
η
≤ c5 exp −
log 1 +
c5 U n
(5.43)
η
≤ c5 exp −
c5 Un
ηUn
2
p
c5
ñσn2 + Un log(A00 Un /σn )
log (1 + c23 ηUn log ñ)
p
uniformly over all p ∈ Pn , for all n ≥ n0 (A00 , K, L∗ ) with c23 = c23 (A00 , ν 00 , L∗ , K),
whenever
s
00
!
p
A00 Un
A Un
2
(5.44)
η ≥ c22 Un log
+ ñσn log
.
σn
σn
Since the right hand side in (5.44) is bounded from above by some positive constant
η0 = η0 (A00 , ν 00 , L∗ , K) for sufficiently large n ≥ n0 (A00 , ν 00 , L∗ , K), inequality (5.43)
holds in particular for all n ≥ n0 (A00 , ν, K, L∗ ) and for all η ≥ η0 . Finally, using the
inequality log(1 + x) ≥ x/2 for 0 ≤ x ≤ 2 (Lemma A.2), we obtain for all η ≥ η0
s
!
ñh
(1)
χ1 (1)
χ1
p̂ (s, h) − Ep p̂n (s, h) > η
Pp
sup max
log ñ n
s∈Hn h∈Gn
η0
3/2
≤ c5 exp −c24 η(log ñ) log 1 + c25 √
log ñ
1
≤ c5 exp − c24 c25 η0 η log ñ
2
uniformly over all p ∈ Pn , for all n ≥ n0 (A00 , ν 00 , K, L∗ ) and positive constants
c24 = c24 (A00 , ν 00 , K) and c25 = c25 (A00 , ν 00 , L∗ , K), which do not depend on n or η.
Proof of Lemma 4.4. Let $t \in \mathbb{R}$, $g, h > 0$, and
$$p_{|B(t,g+h)} \in \mathcal{H}_{\beta^*,\,B(t,g+h)}(\beta, L).$$
The three cases $\beta \le 1$, $1 < \beta < \infty$, and $\beta = \infty$ are analyzed separately. In case $\beta \le 1$, we obtain
$$\sup_{s \in B(t,g)} |(K_h * p)(s) - p(s)| \le \int |K(x)| \sup_{s \in B(t,g)} |p(s + hx) - p(s)|\, dx,$$
where
$$\sup_{s \in B(t,g)} |p(s + hx) - p(s)| \le h^{\beta} \cdot \sup_{\substack{s, s' \in B(t,g+h)\\ s \ne s'}} \frac{|p(s') - p(s)|}{|s' - s|^{\beta}} \le L h^{\beta}.$$
In case 1 < β < ∞, we use the Peano form for the remainder of the Taylor polynomial approximation. Note that β ∗ ≥ 2 because K is symmetric by assumption,
and K is a kernel of order bβ ∗ c = β ∗ − 1 in general, such that
sup |(Kh ∗ p)(s) − p(s)|
s∈B(t,g)
Z
=
n
o
K(x) p(s + hx) − p(s) dx
sup
s∈B(t,g)
(k)
X
p (s)
p
K(x) p(s + hx) − Ps,bβ∧β
· (hx)k dx
∗ c (s + hx) +
k!
Z
=
sup
s∈B(t,g)
k=1
Z
≤
bβ∧β ∗ c
p
p(s + hx) − Ps,bβ∧β
∗ c (s + hx) dx
|K(x)| sup
s∈B(t,g)
Z
≤
|K(x)| sup
sup
p(bβ∧β
∗
c)
s∈B(t,g) s0 ∈B(s,h)
∗
≤
hbβ∧β c hβ−bβ∧β
bβ ∧ β ∗ c!
∗
c
(s0 ) − p(bβ∧β
bβ ∧ β ∗ c!
c)
(s)
(hx)bβ∧β
∗
Z
·
∗
|K(x)| sup
sup
s∈B(t,g) s0 ∈B(s,h)
s0 6=s
∗
c
dx
p(bβ∧β c) (s0 ) − p(bβ∧β
|s − s0 |β−bβ∧β ∗ c
∗
c)
(s)
dx
(5.45)
≤ LkKk1 hβ .
In case β = ∞, the density p satisfies p|B(t,g+h) ∈ Hβ ∗ ,B(t,g+h) (β, L∗ ) for all β > 0.
That is, the upper bound (5.45) on the bias holds for any β > 0, implying that
sup |(Kh ∗ p)(s) − p(s)| = 0.
s∈B(t,g)
This completes the proof.
Proof of Lemma 4.5. Note that by symmetry of K,
$$(K_h * p)(s) - p(s) = \frac{1}{2}\int_{-1}^{1} K(x)\,\big( p(s + hx) + p(s - hx) - 2p(s) \big)\, dx.$$
The upper bound can thus be deduced exactly as in the proof of Lemma 4.4.
APPENDIX A: AUXILIARY RESULTS
Lemma A.1. For $z \in [0, 1]$, the second moments of $\widetilde{W}_{k,l}(z)$, $k, l \in T_n$, as defined in (5.21) are bounded by
$$\mathbb{E}^W \widetilde{W}_{k,l}(z)^2 \le 4.$$
Proof of Lemma A.1. As W̃k,l (·) = −W̃l,k (·), we assume k ≤ l without loss
of generality. For any k, l ∈ Tn
EW W̃k,l (z)2 =
10
X
i=1
with
E1 =
1
ĥloc
n,k
2
EW W (kδn − z ĥloc
n,k )
Ei
=
kδn − z ĥloc
n,k
ĥloc
n,k
2
loc
E2 = −
EW W (kδn − z ĥloc
n,k )W (kδn + z ĥn,k )
ĥloc
n,k
= −2
E3 = q
kδn − z ĥloc
n,k
ĥloc
n,k
2
loc
ĥloc
n,k ĥn,l
loc
EW W (kδn − z ĥloc
n,k )W (lδn + z ĥn,l )
kδn − z ĥloc
n,k
=2 q
loc
ĥloc
n,k ĥn,l
E4 = − q
= −2
E5 =
=
2
loc
ĥloc
n,k ĥn,l
loc
EW W (kδn − z ĥloc
n,k )W (lδn − z ĥn,l )
, lδn − z ĥloc
min{kδn − z ĥloc
n,l }
q n,k
loc
ĥloc
n,k ĥn,l
1
ĥloc
n,k
2
EW W (kδn + z ĥloc
n,k )
kδn + z ĥloc
n,k
ĥloc
n,k
2
loc
E6 = − q
EW W (kδn + z ĥloc
n,k )W (lδn + z ĥn,l )
loc
ĥloc
ĥ
n,k n,l
= −2
E7 = q
=2
E8 =
=
min{kδn + z ĥloc
, lδn + z ĥloc
n,l }
q n,k
loc
ĥloc
n,k ĥn,l
2
loc
ĥloc
n,k ĥn,l
loc
EW W (kδn + z ĥloc
n,k )W (lδn − z ĥn,l )
min{kδn + z ĥloc
, lδn − z ĥloc
n,l }
q n,k
loc
ĥloc
n,k ĥn,l
1
ĥloc
n,l
2
EW W (lδn + z ĥloc
n,l )
lδn + z ĥloc
n,l
ĥloc
n,l
2
loc
E9 = −
EW W (lδn + z ĥloc
n,l )W (lδn − z ĥn,l )
ĥloc
n,l
= −2
E10 =
=
lδn − z ĥloc
n,l
ĥloc
n,l
1
ĥloc
n,l
2
EW W (lδn − z ĥloc
n,l )
lδn − z ĥloc
n,l
ĥloc
n,l
.
Altogether,
o
n
2
loc
loc
EW W̃k,l (z)2 = 4z + q
kδn − z ĥloc
n,k − min kδn − z ĥn,k , lδn − z ĥn,l
loc
ĥloc
n,k ĥn,l
!
n
o
n
o
loc
loc
loc
loc
− min kδn + z ĥn,k , lδn + z ĥn,l + min kδn + z ĥn,k , lδn − z ĥn,l
.
We distinguish between the two cases
loc
(i) kδn − z ĥloc
n,k ≤ lδn − z ĥn,l
loc
(ii) kδn − z ĥloc
n,k > lδn − z ĥn,l .
and
In case (i), we obtain
2
EW W̃k,l (z)2 = 4z + q
loc
ĥloc
n,k ĥn,l
n
o
loc
min kδn + z ĥloc
n,k , lδn − z ĥn,l
n
o
loc
− min kδn + z ĥloc
n,k , lδn + z ĥn,l
!
≤ 4.
In case (ii), we remain with
2
EW W̃k,l (z) = 4z + q
2
kδn −
loc
ĥloc
n,k ĥn,l
z ĥloc
n,k
!
n
o
loc
loc
− min kδn + z ĥn,k , lδn + z ĥn,l
.
loc
If in the latter expression kδn + z ĥloc
n,k ≤ lδn + z ĥn,l , then
4z ĥloc
n,k
≤ 4.
EW W̃k,l (z)2 = 4z − q
loc
ĥloc
n,k ĥn,l
loc
Otherwise, if kδn + z ĥloc
n,k > lδn + z ĥn,l , we arrive at
2
EW W̃k,l (z) = 4z + q
2
loc
ĥloc
n,k ĥn,l
(k − l)δn − z
ĥloc
n,k
+
ĥloc
n,l
!
≤4
because k ≤ l and z ∈ [0, 1]. Summarizing,
EW W̃k,l (z)2 ≤ 4.
Lemma A.2. For any $x \in [0, 1]$, we have
$$e^x - 1 \le 2x.$$
Proof. Equality holds for x = 0, while e − 1 ≤ 2. Hence, the result follows by convexity of the exponential function.
Lemma A.3. For any $x \in \mathbb{R} \setminus \{0\}$, we have
$$1 - \frac{\sin(x)}{x} \le \frac{x^2}{6}.$$
Proof. Since both sides of the inequality are symmetric in zero, we restrict our considerations to x > 0. For positive x, it is equivalent to
$$f(x) = \sin(x) - x + \frac{x^3}{6} \ge 0.$$
As f(0) = 0, it suffices to show that
$$f'(x) = \cos(x) - 1 + \frac{x^2}{2} \ge 0$$
for all x > 0. Since furthermore f'(0) = 0 and
$$f''(x) = -\sin(x) + x \ge 0$$
for all x > 0, the inequality follows.
The next lemma shows that the monotonicity of the Hölder norms $\|\cdot\|_{\beta_1,U} \le \|\cdot\|_{\beta_2,U}$ with $0 < \beta_1 \le \beta_2$ stays valid for the modification $\|\cdot\|_{\beta,\beta^*,U}$.
Lemma A.4. For $0 < \beta_1 \le \beta_2 < \infty$ and $p \in \mathcal{H}_{\beta^*,U}(\beta_2)$,
$$\|p\|_{\beta_1,\beta^*,U} \le \|p\|_{\beta_2,\beta^*,U}$$
for any open interval $U \subset \mathbb{R}$ of length at most 1.
Proof. If $\beta_1 \le \beta_2$, but $\lfloor \beta_1 \wedge \beta^* \rfloor = \lfloor \beta_2 \wedge \beta^* \rfloor$, the statement follows directly with
$$\|p\|_{\beta_1,\beta^*,U} = \sum_{k=0}^{\lfloor \beta_2 \wedge \beta^* \rfloor} \|p^{(k)}\|_U + \sup_{\substack{x, y \in U \\ x \ne y}} \frac{|p^{(\lfloor \beta_2 \wedge \beta^* \rfloor)}(x) - p^{(\lfloor \beta_2 \wedge \beta^* \rfloor)}(y)|}{|x - y|^{\beta_1 - \lfloor \beta_2 \wedge \beta^* \rfloor}} \le \|p\|_{\beta_2,\beta^*,U}.$$
If $\beta_1 < \beta_2$ and also $\lfloor \beta_1 \wedge \beta^* \rfloor < \lfloor \beta_2 \wedge \beta^* \rfloor$, we deduce that $\beta_1 < \beta^*$ and $\lfloor \beta_1 \rfloor + 1 \le \lfloor \beta_2 \wedge \beta^* \rfloor$. Then, the mean value theorem yields
$$\|p\|_{\beta_1,\beta^*,U} = \sum_{k=0}^{\lfloor \beta_1 \rfloor} \|p^{(k)}\|_U + \sup_{\substack{x, y \in U \\ x \ne y}} \frac{|p^{(\lfloor \beta_1 \rfloor)}(x) - p^{(\lfloor \beta_1 \rfloor)}(y)|}{|x - y|^{\beta_1 - \lfloor \beta_1 \rfloor}} \le \sum_{k=0}^{\lfloor \beta_1 \rfloor} \|p^{(k)}\|_U + \|p^{(\lfloor \beta_1 \rfloor + 1)}\|_U \sup_{\substack{x, y \in U \\ x \ne y}} |x - y|^{1 - (\beta_1 - \lfloor \beta_1 \rfloor)} \le \sum_{k=0}^{\lfloor \beta_1 \rfloor + 1} \|p^{(k)}\|_U \le \sum_{k=0}^{\lfloor \beta_2 \wedge \beta^* \rfloor} \|p^{(k)}\|_U \le \|p\|_{\beta_2,\beta^*,U}.$$
References.
Baraud, Y. (2004). Confidence balls in Gaussian regression. Ann. Stat. 32 528-551.
Barral, J., Durand, S., Jaffard, S. and Seuret, S. (2013). Local multifractal analysis.
Fractal Geometry and Dynamical Systems in Pure and Applied Mathematics II: Fractals
in Applied Mathematics, Amer. Math. Soc. 601 31-64.
Bickel, P. J. and Rosenblatt, M. (1973). On some global measures of the deviations
of density function estimates. Ann. Statist. 1 1071-1095.
Bull, A. D. (2012). Honest adaptive confidence bands and self-similar functions. Electron.
J. Stat. 6 1490-1516.
Bull, A. D. and Nickl, R. (2013). Adaptive confidence sets in L2 . Prob. Theory Relat.
Fields 156 889-919.
Cai, T. T. and Low, M. G. (2004). An adaptation theory for nonparametric confidence
intervals. Ann. Statist. 32 1805-1840.
Cai, T. T. and Low, M. G. (2006). Adaptive confidence balls. Ann. Statist. 34 202-228.
Carpentier, A. (2013). Adaptive confidence sets in Lp . Elect. J. Stat. 7 2875-2923.
Chernozhukov, V., Chetverikov, D. and Kato, K. (2014a). Anti-concentration and
honest, adaptive confidence bands. Ann. Statist. 42 1787-1818.
Chernozhukov, V., Chetverikov, D. and Kato, K. (2014b). Gaussian approximation
of suprema of empirical processes. Ann. Statist. 42 1564-1597.
Daoudi, K., Lévy Véhel, J. and Meyer, Y. (1998). Construction of Continuous Functions with Prescribed Local Regularity. Constr. Approx. 14 349-385.
Davies, L., Kovac, A. and Meise, M. (2009). Nonparametric regression, confidence
regions and regularization. Ann. Stat. 37 2597-2625.
Dümbgen, L. (1998). New goodness-of-fit tests and their application to nonparametric
confidence sets. Ann. Statist. 26 288-314.
Dümbgen, L. (2003). Optimal confidence bands for shape-restricted curves. Bernoulli 9
423-449.
Genovese, C. and Wasserman, L. (2005). Confidence sets for nonparametric wavelet
regression. Ann. Statist. 33 698-729.
Genovese, C. and Wasserman, L. (2008). Adaptive confidence bands. Ann. Statist. 36
875-905.
Giné, E. and Guillou, A. (2001). On consistency of kernel density estimators for randomly censored data: rates holding uniformly over adaptive intervals. Ann. Inst. H.
Poincaré Probab. Statist. 37 503-522.
Giné, E. and Nickl, R. (2010). Confidence bands in density estimation. Ann. Statist. 38
1122-1170.
Hardy, G. H. (1916). Weierstrass’s non-differentiable function. Trans. Amer. Math. Soc.
17 301-325.
Hengartner, N. W. and Stark, P. B. (1995). Finite-sample confidence envelopes for
shape-restricted densities. Ann. Statist. 23 525-550.
Heurteaux, Y. (2005). Weierstrass functions in Zygmund’s class. Proc. Amer. Math. Soc.
133 2711-2720.
Hoffmann, M. and Nickl, R. (2011). On adaptive inference and confidence bands. Ann.
Statist. 39 2383-2409.
Jaffard, S. (1995). Functions with prescribed Hölder exponent. Appl. Comput. Harmon.
Anal. 2 400-401.
Jaffard, S. (2006). Wavelet techniques for pointwise regularity. Ann. Fac. Sci. Toulouse
15 3-33.
Juditsky, A. and Lambert-Lacroix, S. (2003). Nonparametric confidence set estimation. Math. Methods Statist. 12 410-428.
Kerkyacharian, G., Nickl, R. and Picard, D. (2012). Concentration inequalities and
confidence bands for needlet density estimators on compact homogeneous manifolds.
Probab. Theory Relat. Fields 153 363-404.
Koltchinskii, V. (2011). Oracle Inequalities in Empirical Risk Minimization and Sparse
Recovery Problems. Lecture Notes in Math. Springer, Heidelberg.
Kueh, A. (2012). Locally adaptive density estimation on the unit sphere using needlets.
Constr. Approx. 36 433-458.
Leadbetter, M., Lindgren, G. and Rootzén, H. (1983). Extremes and Related Properties of Random Sequences and Processes. Springer, New York.
Ledoux, M. and Talagrand, M. (1991). Probability in Banach spaces. Isoperimetry and
processes. Springer, New York.
Lepski, O. V. (1990). One problem of adaptive estimation in Gaussian white noise. Theory
Probab. Appl. 35 459-470.
Low, M. G. (1997). On nonparametric confidence intervals. Ann. Statist. 25 2547-2554.
Mauldin, R. D. and Williams, S. C. (1986). On the Hausdorff dimension of some graphs.
Trans. Amer. Math. Soc. 298 793-804.
Nickl, R. (2015). Discussion of "Frequentist coverage of adaptive nonparametric Bayesian
credible sets". Ann. Statist. 43 1429-1436.
Nickl, R. and Szabó, B. (2016). A sharp adaptive confidence ball for self-similar functions. Stoch. Proc. Appl. to appear.
Nolan, D. and Pollard, D. (1987). U-processes: rates of convergence. Ann. Statist. 15
780-799.
Picard, D. and Tribouley, K. (2000). Adaptive confidence interval for pointwise curve
estimation. Ann. Statist. 28 298-335.
Robins, J. and van der Vaart, A. W. (2006). Adaptive nonparametric confidence sets.
Ann. Statist. 34 229-253.
Seuret, S. and Lévy Véhel, J. (2002). The local Hölder function of a continuous function. Appl. Comput. Harmon. Anal. 13 263-276.
Tsybakov, A. B. (2009). Introduction to Nonparametric Estimation. Springer, New York.
Fakultät für Mathematik
Ruhr-Universität Bochum
44780 Bochum
Germany
E-mail: [email protected]
and
Mathematisches Institut
Albert-Ludwigs-Universität Freiburg
Eckerstraße 1
79104 Freiburg im Breisgau
Germany
E-mail: [email protected]
| 1 |
Multivariate Fine-Grained Complexity
of Longest Common Subsequence∗
arXiv:1803.00938v1 [cs.CC] 2 Mar 2018
Karl Bringmann†
Marvin Künnemann‡
Abstract
We revisit the classic combinatorial pattern matching problem of finding a longest common
subsequence (LCS). For strings x and y of length n, a textbook algorithm solves LCS in time
O(n^2), but although much effort has been spent, no O(n^{2−ε})-time algorithm is known. Recent
work indeed shows that such an algorithm would refute the Strong Exponential Time Hypothesis
(SETH) [Abboud, Backurs, Vassilevska Williams FOCS’15; Bringmann, Künnemann FOCS’15].
Despite the quadratic-time barrier, for over 40 years an enduring scientific interest continued
to produce fast algorithms for LCS and its variations. Particular attention was put into identifying and exploiting input parameters that yield strongly subquadratic time algorithms for
special cases of interest, e.g., differential file comparison. This line of research was successfully
pursued until 1990, at which time significant improvements came to a halt. In this paper, using
the lens of fine-grained complexity, our goal is to (1) justify the lack of further improvements
and (2) determine whether some special cases of LCS admit faster algorithms than currently
known.
To this end, we provide a systematic study of the multivariate complexity of LCS, taking into
account all parameters previously discussed in the literature: the input size n := max{|x|, |y|},
the length of the shorter string m := min{|x|, |y|}, the length L of an LCS of x and y, the numbers
of deletions δ := m − L and ∆ := n − L, the alphabet size, as well as the numbers of matching
pairs M and dominant pairs d. For any class of instances defined by fixing each parameter
individually to a polynomial in terms of the input size, we prove a SETH-based lower bound
matching one of three known algorithms (up to lower order factors of the form n^{o(1)}). Specifically,
we determine the optimal running time for LCS under SETH as (n + min{d, δ∆, δm})^{1±o(1)}.
Polynomial improvements over this running time must necessarily refute SETH or exploit novel
input parameters. We establish the same lower bound for any constant alphabet of size at least 3.
For binary alphabet, we show a SETH-based lower bound of (n + min{d, δ∆, δM/n})^{1−o(1)} and,
motivated by difficulties to improve this lower bound, we design an O(n+δM/n)-time algorithm,
yielding again a matching bound.
We feel that our systematic approach yields a comprehensive perspective on the well-studied
multivariate complexity of LCS, and we hope to inspire similar studies of multivariate complexity
landscapes for further polynomial-time problems.
∗
Part of this work was done while the authors were visiting the Simons Institute for the Theory of Computing at
the University of California, Berkeley.
†
Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, [email protected]
‡
Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, [email protected]
Contents

1 Introduction  1
  1.1 Our Approach and Informal Results  1
  1.2 Related Work on LCS  2
  1.3 (Multivariate) Hardness in P  3
2 Preliminaries  4
  2.1 Parameter Definitions  4
  2.2 Hardness Hypotheses  6
3 Formal Statement of Results  6
4 Hardness Proof Overview  8
  4.1 Classification of Non-trivial Parameter Settings  8
  4.2 Monotonicity of Time Complexity  10
  4.3 Hardness for Large Alphabet  11
  4.4 Small Alphabet  12
5 Organization  12
6 Parameter Relations  13
7 Technical Tools and Constructions  16
  7.1 Generating dominant pairs  16
  7.2 Block elimination and dominant pair reduction  20
8 Paddings  22
  8.1 Matching Pairs  23
  8.2 Dominant Pairs  23
9 Hardness for Large Alphabet  25
  9.1 Small LCS  25
    9.1.1 Hard Core  25
    9.1.2 Constant Alphabet  26
    9.1.3 Superconstant Alphabet  26
  9.2 Large LCS  27
    9.2.1 Hard Core  27
    9.2.2 Constant Alphabet  31
    9.2.3 Superconstant Alphabet  32
10 Hardness for Small Constant Alphabet  34
  10.1 Small LCS  34
  10.2 Large LCS, Alphabet Size at least 3  39
  10.3 Large LCS, Alphabet Size 2  44
    10.3.1 Case α∆ ≤ αm = αL  44
    10.3.2 Case α∆ > αm = αL and αδ ≥ αM − 1  46
    10.3.3 Case α∆ > αm = αL and αδ ≤ αM − 1  50
11 New Algorithm for Binary Alphabet  54
12 Strengthening Hardness via BP-SETH  56

1 Introduction
String comparison is one of the central tasks in combinatorial pattern matching, with various
applications such as spelling correction [68, 84], DNA sequence comparison [8], and differential
file comparison [46, 66]. Perhaps the best-known measure of string similarity is the length of the
longest common subsequence (LCS). A textbook dynamic programming algorithm computes the
LCS of given strings x, y of length n in time O(n^2), and in the worst case only an improvement
by logarithmic factors is known [65]. In fact, recent results show that improvements by polynomial
factors would refute the Strong Exponential Time Hypothesis (SETH) [1, 28] (see Section 2.2 for a
definition).
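For concreteness, the textbook dynamic program referred to above can be sketched as follows. This is a minimal illustrative Python rendering, not code from this paper or any of the cited works; L[i][j] corresponds to |LCS(x[1..i], y[1..j])|.

```python
def lcs_length(x, y):
    """Textbook O(|x| * |y|) dynamic program for the length of an LCS."""
    n, m = len(x), len(y)
    # L[i][j] = |LCS(x[1..i], y[1..j])|; row and column 0 encode empty prefixes.
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m]


# Example: lcs_length("dabccbd", "dcbadc") == 4
# (the strings used for illustration in Figure 1 below).
```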
Despite the quadratic-time barrier, the literature on LCS has been steadily growing, with a
changing focus on different aspects of the problem over time (see Section 1.2 for an overview).
Spurred by an interest in practical applications, a particular focus has been the design of LCS
algorithms for strings that exhibit certain structural properties. This is most prominently witnessed
by the UNIX diff utility, which quickly compares large, similar files by solving an underlying
LCS problem. A practically satisfying solution to this special case was enabled by theoretical
advances exploiting the fact that in such instances the LCS differs from the input strings at only
few positions (e.g., [66, 70]). In fact, since Wagner and Fischer introduced the LCS problem in
1974 [84], identifying and exploiting structural parameters to obtain faster algorithms has been a
decades-long effort [13, 14, 37, 45, 47, 50, 70, 71, 87].
Parameters that are studied in the literature are, besides the input size n := max{|x|, |y|}, the
length m := min{|x|, |y|} of the shorter string, the size of the alphabet Σ that x and y are defined
on, the length L of a longest common subsequence of x and y, the number ∆ = n − L of deleted
symbols in the longer string, the number δ = m − L of deleted symbols in the shorter string, the
number of matching pairs M , and the number of dominant pairs d (see Section 2.1 for definitions).
Among the fastest currently known algorithms are an Õ(n + δL)-algorithm due to Hirschberg [45],
an Õ(n+δ∆)-algorithm due to Wu, Manber, Myers, and Miller [87], and an Õ(n+d)-algorithm due
to Apostolico [13] (with log-factor improvements by Eppstein, Galil, Giancarlo, and Italiano [37]).
In the remainder, we refer to such algorithms, whose running time is stated in more parameters
than just the problem size n, as multivariate algorithms. See Table 1 on page 3 for a non-exhaustive
survey containing the asymptotically fastest multivariate LCS algorithms.
The main question we aim to answer in this work is: Are there significantly faster multivariate
LCS algorithms than currently known? E.g., can ideas underlying the fastest known algorithms be
combined to design an algorithm that is much faster than all of them?
1.1 Our Approach and Informal Results
We systematically study special cases of LCS that arise from polynomial restrictions of any of
the previously studied input parameters. Informally, we define a parameter setting (or polynomial
restriction of the parameters) as the subset of all LCS instances where each input parameter is
individually bound to a polynomial relation with the input size n, i.e., for each parameter p we
fix a constant αp and restrict the instances such that p attains a value Θ(nαp ). An algorithm for
a specific parameter setting of LCS receives as input two strings x, y guaranteed to satisfy the
parameter setting and outputs (the length of) an LCS of x and y. We call a parameter setting
trivial if it is satisfied by at most a finite number of instances; this happens if the restrictions
on different parameters are contradictory. For each non-trivial parameter setting, we construct
a family of hard instances via a reduction from satisfiability, thus obtaining a conditional lower
bound. This greatly extends the construction of hard instances for the n2−o(1) lower bound [1, 28].
Results for large alphabets. Since we only consider exact algorithms, any algorithm for LCS
takes time Ω(n). Beyond this trivial bound, for any non-trivial parameter setting we obtain a
SETH-based lower bound of
(min{d, δ∆, δm})^{1−o(1)}.
Note that this bound is matched by the known algorithms with running times Õ(n+d), Õ(n+δL)1 ,
and Õ(n + δ∆). Thus, our lower bound very well explains the lack of progress since the discovery
of these three algorithms (apart from lower-order factors).
Results for constant alphabet size. For the alphabet size |Σ|, we do not only consider the
case of a polynomial relation with n, but also the important special cases of |Σ| being any fixed
constant. We show that our conditional lower bound for polynomial alphabet size also holds for
any constant |Σ| ≥ 3. For |Σ| = 2, we instead obtain a SETH-based lower bound of
(min{d, δ∆, δM/n})^{1−o(1)}.
This lower bound is weaker than the lower bound for |Σ| ≥ 3 (as the term δM/n is at most δm
by the trivial bound M ≤ mn; see Section 2.1 for the definition of M ). Surprisingly, a stronger
lower bound is impossible (assuming SETH): Motivated by the difficulties to obtain the same lower
bound as for |Σ| ≥ 3, we discovered an algorithm with running time O(n + δM/n) for |Σ| = 2, thus
matching our conditional lower bound. To the best of our knowledge, this algorithm provides the
first polynomial improvement for a special case of LCS since 1990, so while its practical relevance is
unclear, we succeeded in uncovering a tractable special case. Interestingly, our algorithm and lower
bounds show that the multivariate fine-grained complexity of LCS differs polynomially between
|Σ| = 2 and |Σ| ≥ 3. So far, the running time of the fastest known algorithms for varying alphabet
size differed at most by a logarithmic factor in |Σ|.
We find it surprising that the hardness assumption SETH is not only sufficient to prove a
worst-case quadratic lower bound for LCS, but extends to the complete spectrum of multivariate
algorithms using the previously used 7 parameters, thus proving an optimal running time bound
which was implicitly discovered by the computer science community within the first 25 years of
research on LCS (except for the case of Σ = {0, 1}, for which we provide a missing algorithm).
1.2 Related Work on LCS
Table 1 on the next page gives a non-comprehensive overview of progress on multivariate LCS,
including the asymptotically fastest known algorithms. Note that the most recent polynomial
factor improvement for multivariate LCS was found in 1990 [87]. Further progress on multivariate
LCS was confined to log-factor improvements (e.g., [37, 50]). Therefore, the majority of later works
on LCS focused on transferring the early successes and techniques to more complicated problems,
such as longest common increasing subsequence [33, 58, 67, 88], tree LCS [69], and many more
generalizations and variants of LCS, see, e.g., [6, 7, 9, 10, 20, 21, 24, 32, 36, 38, 41, 43, 48, 49, 53–
55, 57, 60–62, 74, 78, 79, 81, 85]. One branch of generalizations considered the LCS of more
1
Note that L ≤ m. At first sight it might seem as if the Õ(n + δL) algorithm could be faster than our lower
bound, however, for L ≥ m/2 we have δL = Θ(δm), which appears in our lower bound, and for L ≤ m/2 we have
δ = m − L = Θ(m) and thus δL = Θ(Lm) which is Ω(d) by Lemma 6.3, and d appears in our lower bound.
2
See [23] for how to extend the Masek-Paterson algorithm to non-constant alphabets.
3
Wu et al. state their running time as O(nδ) in the worst case and O(n + δ∆) in expectation for random strings.
However, Myers' worst-case variation trick [70, Section 4c] applies and yields the claimed time bound O(n log n + δ∆).
The additional O(n log n) comes from building a suffix tree.
Reference | Running Time
Wagner and Fischer [84] | O(mn)
Hunt and Szymanski [47] | O((n + M) log n)
Hirschberg [45] | O(n log n + Ln)
Hirschberg [45] | O(n log n + Lδ log n)
Masek and Paterson [65] | O(n + nm/log^2 n) assuming |Σ| = O(1); O(n + nm · (log log n / log n)^2) in general
Nakatsu, Kambayashi and Yajima [71] | O(nδ)
Apostolico [13] | O(n log n + d log(mn/d))
Myers [70] | O(n log n + ∆^2)
Apostolico and Guerra [14] | O(n log n + Lm min{log m, log(n/m)})
Wu, Manber, Myers and Miller [87] | O(n log n + δ∆)
Eppstein, Galil, Giancarlo and Italiano [37] | O(n log n + d log log min{d, nm/d})
Iliopoulos and Rahman [50] | O(n + M log log n)
Table 1: Short survey of LCS algorithms. See Section 2.1 for definitions of the parameters. When
stating the running times, every factor possibly attaining non-positive values (such as δ, log(n/m),
etc.) is to be read as max{·, 1}. For simplicity, log(Σ)-factors have been bounded from above by
log n (see [72] for details on the case of constant alphabet size).
than two strings (e.g., [1, 25]), with variations such as string consensus (e.g., [11, 12]) and more
(e.g., [9, 19, 35, 41, 42, 61]). Since natural language texts are well compressible, researchers also
considered solving LCS directly on compressed strings, using either run-length encoding (e.g., [15,
31, 34, 57]) or straight-line programs and other Lempel-Ziv-like compression schemes (e.g., [40, 44,
63, 80]). Further research directions include approximation algorithms for LCS and its variants
(e.g., [42, 43, 59]), as well as the LCS length of random strings [18, 64]. For brevity, here we ignore
the equally vast literature on the closely related edit distance. Furthermore, we solely regard the
time complexity of computing the length of an LCS and hence omit all results concerning space
usage or finding an LCS. See, e.g., [22, 72] for these and other aspects of LCS (including empirical
evaluations).
1.3
(Multivariate) Hardness in P
After the early success of 3SUM-hardness in computational geometry [39], recent years have brought
a wealth of novel conditional lower bounds for polynomial time problems, see, e.g., [1–4, 16, 17, 26–
29, 56, 75, 83, 86] and the recent survey [82]. In particular, our work extends the recent successful
line of research proving SETH-based lower bounds for a number problems with efficient dynamic
programming solutions such as Fréchet distance [26, 29], edit distance [16, 28], LCS and dynamic
time warping [1, 28]. Beyond worst-case conditional lower bounds of the form nc−o(1) , recently
also more detailed lower bounds targeting additional input restrictions have gained interest. Such
results come in different flavors, as follows.
Input parameters, polynomial dependence. Consider one or more parameters in addition to the
input size n, where the optimal time complexity of the studied problem depends polynomially on n
and the parameters. This is the situation in this paper as well as several previous studies, e.g., [4,
26, 56]. To the best of our knowledge, our work is the first of this kind to study combinations of
more than two parameters that adhere to a complex set of parameter relations – for previous results,
typically the set of non-trivial parameter settings was obvious and simultaneously controlling all
parameters was less complex.
Input parameters, superpolynomial dependence. Related to the above setting, parameters have
been studied where the time complexity depends polynomially on n and superpolynomially on the
parameters. If the studied problem is NP-hard then this is known as fixed-parameter tractability
(FPT). However, here we focus on problems in P, in which case this situation is known as “FPT in
P”. Hardness results in this area were initiated by Abboud, Vassilevska Williams, and Wang [4].
A finite/discrete number of special cases. Some input restrictions yield a discrete or even finite
set of special cases. For example, Backurs and Indyk [17] and later Bringmann et al. [27] studied
special cases of regular expression pattern matching by restricting the input to certain “types” of
regular expressions. The set of types is discrete and infinite, however, there are only finitely many
tractable types, and finitely many minimal hardness results. Their approach is similarly systematic
to ours, as they classify the complexity of pattern matching for any type of regular expressions. The
major difference is that our parameters are “continuous”, specifically our parameter exponents αp
are continuous, and thus our algorithms and lower bounds trace a continuous tradeoff.
While in all of the above settings the design of fast multivariate algorithms is well established,
tools for proving matching conditional lower bounds have been developed only recently. In particular, the systematic approach to multivariate lower bounds pursued in this paper provides an
effective complement to multivariate algorithmic studies in P, since it establishes (near-)optimality
and may uncover tractable special cases for which improved algorithms can be found.
Beyond SETH. Motivated in part to find barriers even for polylogarithmic improvements on
LCS, a surprising result of Abboud et al. [2] strengthens the conditional quadratic-time hardness
of LCS substantially. More precisely, they show that a strongly subquadratic-time algorithm for
LCS would even refute a natural, weaker variant of SETH on branching programs. In Section 12,
we survey their result and show that the conditional lower bounds we derive in this paper also hold
under this weaker assumption.
2 Preliminaries
We write [n] := {1, . . . , n}. For a string x, we denote its length by |x|, the symbol at its i-th
position by x[i], and the substring from position i to position j by x[i..j]. If string x is defined over
alphabet Σ, we denote the number of occurrences of symbol σ ∈ Σ in x by #σ (x). In running time
bounds we write Σ instead of |Σ| for readability. For two strings x, y, we denote their concatenation
by x ◦ y = xy and define, for any ℓ ≥ 0, the ℓ-fold repetition x^ℓ := ◯_{i=1}^{ℓ} x. For any strings x, y
we let LCS(x, y) be any longest common subsequence of x and y, i.e., a string z = z[1..L] of
maximum length L such that there are i1 < . . . < iL with x[ik ] = z[k] for all 1 ≤ k ≤ L and
there are j1 < . . . < jL with y[jk ] = z[k] for all 1 ≤ k ≤ L. For a string x of length n, let
rev(x) := x[n] x[n − 1] . . . x[1] denote its reverse.
2.1 Parameter Definitions
We survey parameters that have been used in the analysis of the LCS problem (see also [22, 72]).
Let x, y be any strings. By possibly swapping x and y, we can assume that x is the longer of the two
strings, so that n = n(x, y) := |x| is the input size (up to a factor of two). Then m = m(x, y) := |y|
is the length of the shorter of the two strings. Another natural parameter is the solution size, i.e.,
the length of any LCS, L = L(x, y) := |LCS(x, y)|.
Since any symbol not contained in x or in y cannot be contained in a LCS, we can ensure the
following using a (near-)linear-time preprocessing.
n(x, y) := |x|,   m(x, y) := |y|
L(x, y) := |LCS(x, y)|
δ(x, y) := |y| − L(x, y),   ∆(x, y) := |x| − L(x, y)
Σ(x, y) := #({x[i] | 1 ≤ i ≤ |x|} ∩ {y[j] | 1 ≤ j ≤ |y|})
M(x, y) := #{(i, j) | x[i] = y[j]}
d(x, y) := #{(i, j) | L[i, j] > L[i − 1, j] and L[i, j] > L[i, j − 1]}, where L[i, j] := |LCS(x[1..i], y[1..j])|

Figure 1: (a) Illustration of the L-table (shown for x = dabccbd, y = dcbadc), matching pairs and dominant pairs. Entries marked in orange color and bold letters correspond to dominant pairs (which by definition are also matching pairs), while entries marked in blue are matching pairs only. (b) Summary of all input parameters.
Assumption 2.1. Every symbol σ ∈ Σ occurs at least once in x and in y, i.e., #σ (x), #σ (y) ≥ 1.
Consider the alphabet induced by x and y after ensuring Assumption 2.1, namely Σ = {x[i] |
1 ≤ i ≤ |x|} ∩ {y[j] | 1 ≤ j ≤ |y|}. Its size Σ(x, y) := |Σ| is a natural parameter.
Beyond these standard parameters n, m, L, |Σ| (applicable for any optimization problem on
strings), popular structural parameters measure the similarity and sparsity of the strings. These
notions are more specific to LCS and are especially relevant in practical applications such as, e.g.,
the diff file comparison utility, where symbols in x and y correspond to lines in the input files.
Notions of similarity. To obtain an LCS, we have to delete ∆ = ∆(x, y) := n − L symbols from
x or δ = δ(x, y) := m − L symbols from y. Hence for very similar strings, which is the typical
kind of input for file comparisons, we expect δ and ∆ to be small. This is exploited by algorithms
running in time, e.g., Õ(n + δ∆) [87] or Õ(n + δL) [45].
Notions of sparsity. Based on the observation that the dynamic programming table typically
stores a large amount of redundant information (suggested, e.g., by the fact that an LCS itself
can be reconstructed examining only O(n) entries), algorithms have been studied that consider
only the most relevant entries in the table. The simplest measure of such entries is the number of
matching pairs M = M (x, y) := #{(i, j) | x[i] = y[j]}. Especially for inputs with a large alphabet,
this parameter potentially significantly restricts the number of candidate pairs considered by LCS
algorithms, e.g., for files where almost all lines occur only once. Moreover, in the special case where x
and y are permutations of Σ we have M = n = m, and thus algorithms in time Õ(n+M ) [46, 47, 50]
recover the near-linear time solution for LCS of permutations [77].
One can refine this notion to obtain the dominant pairs. A pair (i, j) dominates a pair (i′ , j ′ ) if
we have i ≤ i′ and j ≤ j ′ . A k-dominant pair is a pair (i, j) such that L(x[1..i], y[1..j]) = k and no
other pair (i′ , j ′ ) with L(x[1..i′ ], y[1..j ′ ]) = k dominates (i, j). By defining L[i, j] := L(x[1..i], y[1..j])
and using the well-known recursion L[i, j] = max{L[i − 1, j], L[i, j − 1], L[i − 1, j − 1] + 1_{x[i]=y[j]}}, we
observe that (i, j) is a k-dominant pair if and only if L[i, j] = k and L[i − 1, j] = L[i, j − 1] = k − 1.
Denoting the set of all k-dominant pairs by D_k, the set of dominant pairs of x, y is ⋃_{k≥1} D_k, and
we let d = d(x, y) denote the number of dominant pairs. Algorithms running in time Õ(n + d)
exploit a small number of dominant pairs [13, 37]. Figure 1a illustrates matching and dominant
pairs.
While at first sight the definition of dominant pairs might not seem like the most natural
parameter, it plays an important role in analyzing LCS: First, from the set of dominant pairs
alone one can reconstruct the L-table that underlies the basic dynamic programming algorithm.
Second, the parameter d precisely describes the complexity of one of the fastest known (multivariate)
algorithms for LCS. Finally, LCS with parameter d is one of the first instances of the paradigm of
sparse dynamic programming (see, e.g., [37]).
On practical instances, exploiting similarity notions seems to typically outperform algorithms
based on sparsity measures (see [66] for a classical comparison to an algorithm based on the number
of matching pairs M [46, 47]). To the best of our knowledge, Figure 1b summarizes all parameters
which have been exploited to obtain multivariate algorithms for LCS.
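To make these definitions concrete, the following Python sketch (our own reference code, not an algorithm from this paper) computes all parameters of Figure 1b directly from the dynamic-programming table in time O(nm); the helper name lcs_parameters is hypothetical and is reused in later illustrative snippets.

from collections import Counter

def lcs_parameters(x, y):
    """Compute n, m, L, delta, Delta, |Sigma|, M, d for two sequences of symbols."""
    if len(x) < len(y):                       # ensure |x| >= |y|, as assumed throughout
        x, y = y, x
    n, m = len(x), len(y)
    # L[i][j] = |LCS(x[1..i], y[1..j])|, with row/column 0 for the empty prefixes
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    lcs = L[n][m]
    sigma = set(x) & set(y)                   # induced alphabet (cf. Assumption 2.1)
    cx, cy = Counter(x), Counter(y)
    matching = sum(cx[s] * cy[s] for s in sigma)          # M = #{(i, j) | x[i] = y[j]}
    dominant = sum(1 for i in range(1, n + 1) for j in range(1, m + 1)
                   if L[i][j] > L[i - 1][j] and L[i][j] > L[i][j - 1])
    return {"n": n, "m": m, "L": lcs, "delta": m - lcs, "Delta": n - lcs,
            "Sigma": len(sigma), "M": matching, "d": dominant}

# For the strings of Figure 1a, x = "dabccbd" and y = "dcbadc", this yields L = 4.
print(lcs_parameters("dabccbd", "dcbadc"))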
2.2 Hardness Hypotheses
Strong Exponential Time Hypothesis (SETH): For any ε > 0 there is a k ≥ 3 such that
k-SAT on n variables cannot be solved in time O((2 − ε)n ).
SETH was introduced by Impagliazzo, Paturi, and Zane [51] and essentially asserts that satisfiability has no algorithms that are much faster than exhaustive search. It forms the basis of many
conditional lower bounds for NP-hard as well as polynomial-time problems.
Effectively all known SETH-based lower bounds for polynomial-time problems use reductions
via the Orthogonal Vectors problem (OV): Given sets A, B ⊆ {0, 1}^D of size |A| = n, |B| = m, determine whether there exist a ∈ A, b ∈ B with Σ_{i=1}^{D} a[i] · b[i] = 0 (which we denote by ⟨a, b⟩ = 0).
Simple algorithms solve OV in time O(2^D (n + m)) and O(nmD). The fastest known algorithm for
D = c(n) log n runs in time n^{2−1/O(log c(n))} (when n = m) [5], which is only slightly subquadratic
for D ≫ log n. This has led to the following reasonable hypothesis.
Orthogonal Vectors Hypothesis (OVH): OV restricted to n = |A| = |B| and D = n^{o(1)} requires time n^{2−o(1)}.
A well-known reduction by Williams [86] shows that SETH implies OVH. Thus, OVH is the
weaker assumption and any OVH-based lower bound also implies a SETH-based lower bound. The
results in this paper do not only hold assuming SETH, but even assuming the weaker OVH.
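For illustration only, a brute-force OV solver realizing the O(nmD) baseline mentioned above might look as follows; this is a sketch of ours, not code from the cited works, and the function name has_orthogonal_pair is hypothetical (it is reused in a later snippet).

from itertools import product

def has_orthogonal_pair(A, B):
    """Return True iff some a in A and b in B satisfy <a, b> = 0 (0/1 vectors)."""
    for a, b in product(A, B):
        if all(ai * bi == 0 for ai, bi in zip(a, b)):
            return True
    return False

# Example: (1,0,1) and (0,1,0) are orthogonal, so the answer is True.
assert has_orthogonal_pair([(1, 0, 1)], [(0, 1, 0), (1, 1, 0)])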
3 Formal Statement of Results
Recall that n is the input size and P := {m, L, δ, ∆, |Σ|, M, d} is the set of parameters that were
previously studied in the literature. We let P ∗ := P ∪ {n}. A parameter setting fixes a polynomial
relation between any parameter and n. To formalize this, we call a vector α = (αp )p∈P with
αp ∈ R≥0 a parameter setting, and an LCS instance x, y satisfies the parameter setting α if each
parameter p attains a value p(x, y) = Θ(nαp ). This yields a subproblem of LCS consisting of all
instances that satisfy the parameter setting. We sometimes use the notation αn = 1.
For our running time bounds, for each parameter p ∈ P except for |Σ| we can assume αp > 0,
since otherwise one of the known algorithms runs in time Õ(n) and there is nothing to show.
Similarly, for αd ≤ 1 there is an Õ(n) algorithm and there is nothing to show. For Σ, however, the
case αΣ = 0, i.e., |Σ| = Θ(1), is an important special case. We study this case more closely by also
considering parameter settings that fix |Σ| to any specific constant greater than 1.
Parameter   Restriction
m           0 ≤ αm ≤ 1
L           0 ≤ αL ≤ αm
δ           0 ≤ αδ ≤ αm if αL = αm;  αδ = αm otherwise
∆           αδ ≤ α∆ ≤ 1 if αL = αm = 1;  α∆ = 1 otherwise
|Σ|         0 ≤ αΣ ≤ αm
d           max{αL, αΣ} ≤ αd ≤ min{2αL + αΣ, αL + αm, αL + α∆}
M           max{1, αd, 2αL − αΣ} ≤ αM ≤ αL + 1;
            if |Σ| = 2: αM ≥ max{αL + αm, 1 + αd − αL};
            if |Σ| = 3: αM ≥ αm + αd − αL

Table 2: Complete set of restrictions for non-trivial parameter settings.
Definition 3.1 (Parameter Setting). Fix γ ≥ 1. Let α = (αp )p∈P with αp ∈ R≥0 . We define
LCSγ (α) as the problem of computing the length of an LCS of two given strings x, y satisfying
n^{αp}/γ ≤ p(x, y) ≤ γ · n^{αp} for every parameter p ∈ P, where n = |x|, and |x| ≥ |y|. We call α and
LCSγ (α) parameter settings. In some statements we simply write LCS(α) to abbreviate that there
exists a γ ≥ 1 such that the statement holds for LCSγ (α).
For any fixed alphabet Σ, constant γ ≥ 1, and parameter setting α with αΣ = 0, we also define
the problem LCSγ (α, Σ), where additionally the alphabet of x, y is fixed to be Σ. We again call
(α, Σ) and LCSγ (α, Σ) parameter settings.
We call a parameter setting α or (α, Σ) trivial if for all γ ≥ 1 the problem LCSγ (α) or
LCSγ (α, Σ), respectively, has only finitely many instances.
As our goal is to prove hardness for any non-trivial parameter setting, for each parameter setting
we either need to construct hard instances or verify that it is trivial. That is, in one way or the other
we need a complete classification of parameter settings into trivial and non-trivial ones. To this
end, we need to understand all interactions among our parameters that hold up to constant factors,
which is an interesting question on its own, as it yields insight into the structure of strings from
the perspective of the LCS problem. For our seven parameters, determining all interactions is a
complex task. This is one of the major differences to previous multivariate fine-grained complexity
results, where the number of parameters was one, or in rare cases two, limiting the interaction
among parameters to a simple level.
Theorem 3.2 (Classification of non-trivial parameter settings). A parameter setting α or (α, Σ)
is non-trivial if and only if it satisfies all restrictions in Table 2.
Note that the restrictions in Table 2 consist mostly of linear inequalities, and that for small
alphabet sizes |Σ| ∈ {2, 3} additional parameter relations hold. The proof of this and the following
results will be outlined in Section 4. We are now ready to state our main lower bound.
Theorem 3.3 (Hardness for Large Alphabet). For any non-trivial parameter setting α, there is a constant γ ≥ 1 such that LCSγ(α) requires time min{d, δ∆, δm}^{1−o(1)}, unless OVH fails.
In the case of constant alphabet size, the (conditional) complexity differs between |Σ| = 2 and
|Σ| ≥ 3. Note that |Σ| = 1 makes LCS trivial.
Relation                Restriction      Reference
L ≤ m ≤ n                                trivial
L ≤ d ≤ M                                trivial
∆ ≤ n                                    trivial
δ ≤ m                                    trivial
δ ≤ ∆                                    trivial
δ = m − L                                by definition
∆ = n − L                                by definition
|Σ| ≤ m                                  Assumption 2.1
n ≤ M                                    Assumption 2.1
d ≤ Lm                                   Lemma 6.3
d ≤ L²|Σ|                                Lemma 6.3
d ≤ 2L(∆ + 1)                            Lemma 6.4
|Σ| ≤ d                                  Lemma 6.5
L²/|Σ| ≤ M ≤ 2Ln                         Lemma 6.6
M ≥ Lm/4                if |Σ| = 2       Lemma 6.7
M ≥ nd/(5L)             if |Σ| = 2       Lemma 6.9
M ≥ md/(80L)            if |Σ| = 3       Lemma 6.10

Table 3: Relations between the parameters.
Theorem 3.4 (Hardness for Small Alphabet). For any non-trivial parameter setting (α, Σ), there is a constant γ ≥ 1 such that, unless OVH fails, LCSγ(α, Σ) requires time
• min{d, δ∆, δm}^{1−o(1)} if |Σ| ≥ 3,
• min{d, δ∆, δM/n}^{1−o(1)} if |Σ| = 2.
Finally, we prove the following algorithmic result, handling binary alphabets faster if M and δ
are sufficiently small. This yields matching upper and lower bounds also for |Σ| = 2.
Theorem 3.5 (Section 11). For |Σ| = 2, LCS can be solved in time O(n + δM/n).
4 Hardness Proof Overview
In this section we present an overview of the proofs of our main results. We first focus on the large
alphabet case, i.e., parameter settings α, and discuss small constant alphabets in Section 4.4.
4.1 Classification of Non-trivial Parameter Settings
The only-if-direction of Theorem 3.2 follows from proving inequalities among the parameters that
hold for all strings, and then converting them to inequalities among the αp ’s, as follows.
Lemma 4.1 (Parameter Relations, Section 6). For any strings x, y the parameter values P ∗ satisfy
the relations in Table 3. Thus, any non-trivial parameter setting α or (α, Σ) satisfies Table 2.
Proof Sketch. The full proof is deferred to Section 6. Some parameter relations follow trivially from
the parameter definitions, like L ≤ m ≤ n. Since by Assumption 2.1 every symbol in Σ appears in
x and y, we obtain parameter relations like |Σ| ≤ m. Other parameter relations need a non-trivial
proof, like M ≥ md/(80L) if |Σ| = 3.
From a relation like L ≤ m we infer that if αL > αm then for sufficiently large n no strings
x, y have L(x, y) = Θ(nαL ) and m(x, y) = Θ(nαm ), and thus LCSγ (α) is finite for any γ > 0. This
argument converts Table 3 to Table 2.
For the if-direction of Theorem 3.2, the task is to show that any parameter setting satisfying
Table 2 is non-trivial, i.e., to construct infinitely many strings in the parameter setting. We start
with a construction that sets a single parameter p as specified by α, and all others not too large.
Lemma 4.2 (Paddings, Section 8). Let α be a parameter setting satisfying Table 2. For any
parameter p ∈ P ∗ and any n ≥ 1 we can construct strings xp , yp such that (1) p(xp , yp ) = Θ(nαp ),
and (2) for all q ∈ P ∗ we have q(xp , yp ) = O(nαq ). Moreover, given n we can compute xp , yp , and
L(xp , yp ) in time O(n).
Note that although for Theorem 3.2 the existence of infinitely many strings would suffice, we
even show that they can be computed very efficiently. We will use this additional fact in Section 4.2.
Proof Sketch. We defer the full proof to Section 8 and here only sketch the proof for the parameter |Σ|. Let w := 1 2 . . . t be the concatenation of t := ⌈n^{αΣ}⌉ unique symbols. We argue that the strings w, w or the strings w, rev(w) prove Lemma 4.2 for parameter p = |Σ|, depending on the parameter setting α. Clearly, both pairs of strings realize an alphabet of size t = Θ(n^{αΣ}), showing (1). By Table 2, we have αL = αm or αδ = αm. In the first case, we use L(w, w) = t = O(n^{αΣ}) together with αΣ ≤ αm = αL, as well as δ(w, w) = ∆(w, w) = 0 ≤ n^{αδ} ≤ n^{α∆}, to show (2) for the parameters L, δ, ∆. In the second case, we similarly have L(w, rev(w)) = 1 ≤ n^{αL} and δ(w, rev(w)) = ∆(w, rev(w)) = t − 1 = O(n^{αΣ}) and αΣ ≤ αm = αδ ≤ α∆.
The remaining parameters are straight-forward. Let (x, y) ∈ {(w, w), (w, rev(w))}. We have n(x, y) = m(x, y) = t = O(n^{αΣ}) = O(n^{αm}) = O(n). Moreover, d(x, y) ≤ M(x, y) = t = O(n^{αΣ}) = O(n^{αd}) = O(n^{αM}). Clearly, the strings and their LCS length can be computed in time O(n).
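The two candidate paddings from this proof sketch can be checked numerically; the snippet below is a sketch of ours that reuses the hypothetical lcs_parameters() helper from Section 2.1 and models symbols as integers.

t = 6
w = list(range(1, t + 1))               # w = 1 2 ... t, i.e. t distinct symbols

same = lcs_parameters(w, w)             # used when alpha_L = alpha_m
rev = lcs_parameters(w, w[::-1])        # used when alpha_delta = alpha_m

assert same["Sigma"] == rev["Sigma"] == t
assert same["L"] == t and same["delta"] == same["Delta"] == 0
assert rev["L"] == 1 and rev["delta"] == rev["Delta"] == t - 1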
To combine the paddings for different parameters, we need the useful property that all studied
parameters sum up if we concatenate strings over disjoint alphabets.
Lemma 4.3 (Disjoint Alphabets). Let Σ1 , . . . , Σk be disjoint alphabets and let xi , yi be strings over
alphabet Σ_i with |x_i| ≥ |y_i| for all i. Consider x := x_1 . . . x_k and y := y_1 . . . y_k. Then for any parameter p ∈ P∗, we have p(x, y) = Σ_{i=1}^{k} p(x_i, y_i).
Proof. The statement is trivial for the string lengths n, m, alphabet size |Σ|, and number of matching pairs M. For the LCS length L we observe that any common subsequence z can be decomposed into z_1 . . . z_k with z_i using only symbols from Σ_i, so that |z_i| ≤ L(x_i, y_i) and thus L(x, y) ≤ Σ_{i=1}^{k} L(x_i, y_i). Concatenating longest common subsequences of x_i, y_i, we obtain equality. Using δ = m − L and ∆ = n − L, the claim follows also for δ and ∆.
Since every dominant pair is also a matching pair, every dominant pair of x, y stems from
prefixes x_1 . . . x_j x′ and y_1 . . . y_j y′, with x′ being a prefix of x_{j+1} and y′ being a prefix of y_{j+1} for some j. Since L(x_1 . . . x_j x′, y_1 . . . y_j y′) = Σ_{i=1}^{j} L(x_i, y_i) + L(x′, y′), where the first summand does
not depend on x′ , y ′ , the dominant pairs of x, y of the form x1 . . . xj x′ , y1 . . . yj y ′ are in one-to-one
correspondence with the dominant pairs of xj+1 , yj+1 . This yields the claim for parameter d.
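The additivity stated in the Disjoint Alphabets Lemma is easy to verify empirically; the following sketch (again reusing the hypothetical lcs_parameters() helper from Section 2.1) checks it for two small instances over the disjoint alphabets {1, 2} and {3, 4}.

x1, y1 = [1, 2, 1, 2], [2, 1]
x2, y2 = [3, 4, 4, 3], [3, 3, 4]

p1, p2 = lcs_parameters(x1, y1), lcs_parameters(x2, y2)
p = lcs_parameters(x1 + x2, y1 + y2)    # concatenation over disjoint alphabets

for key in ("n", "m", "L", "delta", "Delta", "Sigma", "M", "d"):
    assert p[key] == p1[key] + p2[key], key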
With these preparations we can finish our classification.
Proof of Theorem 3.2 for large alphabet. One direction follows from Lemma 4.1. For the other
direction, let α be a parameter setting satisfying Table 2. For any n ≥ 1 consider the instances
xp , yp constructed in Lemma 4.2, and let them use disjoint alphabets for different p ∈ P ∗ . Then
the concatenations x := ◯_{p∈P∗} x_p and y := ◯_{p∈P∗} y_p form an instance of LCS(α), since for any parameter p ∈ P∗ we have p(x_p, y_p) = Θ(n^{αp}), and for all other instances x_{p′}, y_{p′} the parameter p is O(n^{αp}), and thus p(x, y) = Θ(n^{αp}) by the Disjoint Alphabets Lemma. Thus, we constructed
instances of LCS(α) of size Θ(n) for any n ≥ 1, so the parameter setting α is non-trivial.
We highlight two major hurdles that had to be overcome to obtain this classification result:
• Some of the parameter relations of Table 3 are scattered through the LCS literature, e.g., the
inequality d ≤ Lm is mentioned in [14]. In fact, proving any single one of these inequalities is
not very hard – the main issue was to find a complete set of parameter relations. The authors
had to perform many iterations of going back and forth between searching for new parameter
relations (i.e., extending Lemma 4.1) and constructing strings satisfying specific parameter
relations (i.e., extending Lemma 4.2), until finally coming up with a complete list.
• The dependency of d on the other parameters is quite complicated. Indeed, eight of the parameter relations of Table 3 involve dominant pairs. Apostolico [13] introduced the parameter
under the initial impression that “it seems that whenever [M ] gets too close to mn, then this
forces d to be linear in m”. While we show that this intuition is somewhat misleading by constructing instances with high values of both M and d, it is a rather complex task to generate
a desired number of dominant pairs while respecting given bounds on all other parameters.
Intuitively, handling dominant pairs is hard since they involve restrictions on each pair of
prefixes of x and y. For Lemma 4.2, we end up using the strings (01)^{R+S}, 0^R(01)^S as well as ((1 ◦ . . . ◦ t) ◦ (t′ ◦ . . . ◦ 1))^R (1 ◦ . . . ◦ t)^{S−R}, (1 ◦ . . . ◦ t)^S for different values of R, S, t, t′.
4.2 Monotonicity of Time Complexity
It might be tempting to assume that the optimal running time for solving LCS is monotone in
the problem size n and the parameters P (say up to constant factors, as long as all considered
parameters settings are non-trivial). However, since the parameters have complex interactions (see
Table 3) it is far from obvious whether this intuition is correct. In fact, the intuition fails for
|Σ| = 2, where the running time O(n + δM/n) of our new algorithm is not monotone, and thus
also the tight time bound (n + min{d, δ∆, δM/n})^{1±o(1)} is not monotone.
Nevertheless, we will prove monotonicity for any parameter setting α, i.e., when the alphabet
size can be assumed to be at least a sufficiently large constant. To formalize monotonicity, we define
problem LCS≤ (α) consisting of all instances of LCS with all parameters at most as in LCS(α).
Definition 4.4 (Downward Closure of Parameter Setting). Fix γ ≥ 1 and let α be a parameter
setting. We define the downward closure LCSγ≤ (α) as follows. An instance of this problem is a
triple (n, x, y), where p(x, y) ≤ γ · nαp for any p ∈ P ∗ , and the task is to compute the length of an
LCS of x, y. In some statements, we simply write LCS≤ (α) to abbreviate that there exists a γ ≥ 1
such that the statement holds for LCSγ≤ (α).
Similarly, for any fixed alphabet Σ we consider the downward closure LCSγ≤ (α, Σ) with instances
(n, x, y), where x, y are strings over alphabet Σ and p(x, y) ≤ γ · nαp for any p ∈ P ∗ .
Lemma 4.5 (Monotonicity). For any non-trivial parameter setting α and β ≥ 1, LCSγ (α) has an
O(nβ )-time algorithm for all γ if and only if LCSγ≤ (α) has an O(nβ )-time algorithm for all γ.
Proof of Lemma 4.5. The if-direction follows from the fact that if (x, y) is an instance of LCSγ (α)
then (|x|, x, y) is an instance of LCSγ≤ (α).
For the other direction, let (n, x, y) be an instance of LCS≤ (α). Since α is non-trivial, it satisfies
Table 2, by Theorem 3.2. Lemma 4.2 thus allows us to construct paddings xp , yp for any p ∈ P ∗
such that (1) p(xp , yp ) = Θ(nαp ) and (2) (n, xp , yp ) is an instance of LCS≤ (α). We construct
these paddings over disjoint alphabets for different parameters and consider the concatenations
x′ := x ◦ (◯_{p∈P} x_p) and y′ := y ◦ (◯_{p∈P} y_p). Then (1), (2), and the Disjoint Alphabets Lemma imply that p(x′, y′) = Θ(n^{αp}) for any p ∈ P∗, so that (x′, y′) is an instance of LCS(α). By assumption, we can thus compute L(x′, y′) in time O(n^β). By the Disjoint Alphabets Lemma, we have L(x, y) = L(x′, y′) − Σ_{p∈P} L(x_p, y_p), and each L(x_p, y_p) can be computed in time O(n) by
Lemma 4.2, which yields L(x, y) and thus solves the given instance (n, x, y). We finish the proof
by observing that the time to construct x′ , y ′ is bounded by O(n).
Note that this proof was surprisingly simple, considering that monotonicity fails for |Σ| = 2.
4.3 Hardness for Large Alphabet
Since we established monotonicity for parameter settings α, it suffices to prove hardness for
LCS≤ (α) instead of LCS(α). This makes the task of constructing hard strings considerably easier, since we only have to satisfy upper bounds on the parameters. Note that our main result
Theorem 3.3 follows from Lemma 4.5 and Theorem 4.6 below.
Theorem 4.6 (Hardness for Large Alphabet, Section 9). For any non-trivial parameter setting α, there exists γ ≥ 1 such that LCSγ≤(α) requires time min{d, δm, δ∆}^{1−o(1)}, unless OVH fails.
Proof Sketch. The full proof is deferred to Section 9. We provide different reductions for the cases
αδ = αm and αL = αm . Intuitively, this case distinction is natural, since after this choice all
remaining restrictions from Table 2 are of an easy form: they are linear inequalities.
In the case αδ = αm the complexity min{d, δ∆, δm}^{1±o(1)} simplifies to d^{1±o(1)} (since αd ≤ αL + αm ≤ 2αm = αδ + αm and similarly αd ≤ αδ + αm = 2αδ ≤ αδ + α∆, see Table 2).
This simplification makes this case much easier. For constant alphabet, instantiating the known
reduction from OV to LCS [28] such that x chooses one of ≈ L vectors and y chooses one of
≈ d/L vectors yields the claim. For larger alphabet, the right-hand side of the parameter relation
d ≤ L2 |Σ| increases and allows for potentially more dominant pairs. In this case, the second set of
vectors would increase to a size of ≈ d/L = ω(L), and the length of an LCS of this construction
becomes too large. We thus adapt the reduction by using the construction for constant alphabet
multiple times over disjoint alphabets and concatenating the results (reversing the order in one).
The case αL = αm is harder, since all three terms of the complexity min{d, δ∆, δm}^{1±o(1)} are
relevant. The known reduction [28] fails fundamentally in this case, roughly speaking since the
resulting δ is always as large as the number of vectors encoded by any of x and y. Hence, we go
back to the “normalized vector gadgets” from the known reduction [28], which encode vectors a, b by strings NVG(a), NVG(b) whose LCS length only depends on whether a, b are orthogonal. We then carefully embed these gadgets into strings that satisfy any given parameter setting. A crucial trick is to pad each gadget to NVG′(a) := 0^α 1^β (01)^γ NVG(a) 1^γ for appropriate lengths α, β, γ. It is easy to see that this construction ensures the following:
(1vs1) The LCS length of NVG′(a), NVG′(b) only depends on whether a, b are orthogonal, and
(2vs1) NVG′ (b) is a subsequence of NVG′ (a) ◦ NVG′ (a′ ) for any vectors a, a′ , b.
In particular, for any vectors a^(1), . . . , a^(2k−1) and b^(1), . . . , b^(k), on the strings x = ◯_{i=1}^{2k−1} NVG′(a^(i)) and y = ◯_{j=1}^{k} NVG′(b^(j)) we show that any LCS consists of k − 1 matchings of type 2vs1 and one
matching of type 1vs1 (between NVG′ (b(j) ) and NVG′ (a(2j−1) ) for some j). Thus, the LCS length
of x and y only depends on whether there exists a j such that a(2j−1) , b(j) are orthogonal. Moreover,
since most of y is matched by type 2vs1 and thus completely contained in x, the parameter δ(x, y)
is extremely small compared to the lengths of x and y – which is not achievable with the known
reduction [28]. Our proof uses an extension of the above construction, which allows us to have
more than one matching of type 1vs1. We think that this 1vs1/2vs1-construction is our main
contribution to specific proof techniques and will find more applications.
4.4 Small Alphabet
Proving our results for small constant alphabets poses additional challenges. For instance, our
proof of Lemma 4.5 fails for parameter settings (α, Σ) if |Σ| is too small, since the padding over
disjoint alphabets produces strings over alphabet size at least |P| = 7. In particular, for |Σ| = 2
we may not use the Disjoint Alphabets Lemma at all, rendering Lemma 4.2 completely useless.
However, the classification Theorem 3.2 still holds for parameter settings (α, Σ). A proof is implicit
in Section 10, as we construct (infinitely many) hard instances for all parameter settings (α, Σ)
satisfying Table 2.
As mentioned above, the Monotonicity Lemma (Lemma 4.5) is wrong for |Σ| = 2, since our
new algorithm has a running time Õ(n + δM/n) which is not monotone. Hence, it is impossible
to use general strings from LCS≤ (α, Σ) as a hard core for LCS(α, Σ). Instead, we use strings
from a different, appropriately chosen parameter setting LCS≤ (α′ , Σ′ ) as a hard core, see, e.g.,
Observation 10.18. Moreover, instead of padding with new strings xp , yp for each parameter, we
need an integrated construction where we control all parameters at once. This is a technically
demanding task to which we devote a large part of this paper (Section 10). Since the cases |Σ| = 2,
|Σ| = 3, and |Σ| ≥ 4 adhere to different relations of Table 2, these three cases have to be treated
separately. Furthermore, as for large alphabet we consider cases αδ = αm and αL = αm . Hence, our
reductions are necessarily rather involved and we need to very carefully fine-tune our constructions.
5 Organization
The remainder of the paper contains the proofs of Theorems 3.2, 3.3, 3.4, and 3.5, following the
outline given in Section 4. Specifically, Section 6 lists and proves our complete set of parameter
relations (proving Lemma 4.1). In Section 7, we prove basic facts and technical tools easing the
constructions and proofs in later sections – this includes a simple greedy prefix matching property
as well as a surprising technique to reduce the number of dominant pairs of two given strings.
In Section 8, we show how to pad each parameter individually (proving Lemma 4.2). Section 9
then constructs hard instances for large alphabet (for the downward closure of any parameter
setting, proving Theorem 4.6 and thus Theorem 3.3). Finally, the much more intricate case of
small constant alphabet sizes such as |Σ| = 2 is handled in Section 10, which takes up a large
fraction of this paper (proving Theorem 3.4). We present our new algorithm in Section 11 (proving
Theorem 3.5). Finally, Section 12 describes the necessary modifications to our hardness proofs to
show the same conditional lower bounds also under the weaker variant of SETH used in [2].
We remark that for some intermediate strings x, y constructed in the proofs, the assumption
|x| ≥ |y| may be violated; in this case we use the definitions given in Figure 1b (and thus we may
have n(x, y) < m(x, y) and ∆(x, y) < δ(x, y)). Since L, M, d, and Σ are symmetric in the sense
L(x, y) = L(y, x), these parameters are independent of the assumption |x| ≥ |y|.
For simplicity, we will always work with the following equivalent variant of OVH.
Unbalanced Orthogonal Vectors Hypothesis (UOVH): For any α, β ∈ (0, 1] and computable functions f(n) = n^{α−o(1)}, g(n) = n^{β−o(1)}, the following problem requires time n^{α+β−o(1)}: Given a number n, solve a given OV instance with D = n^{o(1)}, |A| = f(n), and |B| = g(n).
Lemma 5.1 (Essentially folklore). UOVH is equivalent to OVH.
Proof. Clearly, UOVH implies OVH (using α = β = 1, f (n) = g(n) = n). For the other direction,
assume that UOVH fails and let α, β ∈ (0, 1], f(n) = n^{α−o(1)}, and g(n) = n^{β−o(1)} be such that OV with D = n^{o(1)}, |A| = f(n), and |B| = g(n) can be solved in time O(n^{α+β−ε}) for some constant ε > 0. Consider an arbitrary OV instance A, B ⊆ {0, 1}^D with D = n^{o(1)}. We partition A into s := ⌈n/f(n)⌉ sets A_1, . . . , A_s of size f(n) and B into t := ⌈n/g(n)⌉ sets B_1, . . . , B_t of size g(n) (note that the last set of such a partition might have strictly fewer elements, but can safely be filled up using all-ones vectors). By assumption, we can solve each OV instance A_i, B_j in time O(n^{α+β−ε}). Since there exist a ∈ A, b ∈ B with ⟨a, b⟩ = 0 if and only if there exist a ∈ A_i, b ∈ B_j with ⟨a, b⟩ = 0 for some i ∈ [s], j ∈ [t], we can decide the instance A, B by sequentially deciding the s · t = O(n^{2−(α+β)+o(1)}) OV instances A_i, B_j. This takes total time O(s · t · n^{α+β−ε}) = O(n^{2−ε′}) for any ε′ < ε, which contradicts OVH and thus proves the claim.
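The block decomposition used in this proof can be mimicked in a few lines; the sketch below is our own toy code, reusing the hypothetical has_orthogonal_pair() from Section 2.2. It splits A and B into blocks of sizes f and g and decides the instance block by block, omitting the all-ones padding, which only matters for making all blocks equally large.

def split(vectors, size):
    return [vectors[i:i + size] for i in range(0, len(vectors), size)]

def ov_by_blocks(A, B, f, g):
    # decide OV on A, B by deciding all s*t sub-instances A_i, B_j
    return any(has_orthogonal_pair(Ai, Bj)
               for Ai in split(A, f) for Bj in split(B, g))

A = [(1, 0, 1), (1, 1, 1), (0, 1, 1)]
B = [(1, 1, 0), (0, 1, 0)]
assert ov_by_blocks(A, B, f=2, g=1) == has_orthogonal_pair(A, B)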
6 Parameter Relations
In this section we prove relations among the studied parameters, summarized in Table 3. Some
of these parameter relations can be found at various places in the literature, however, our set of
relations is complete in the sense that any parameter setting α is non-trivial if and only if it satisfies
all our relations, see Theorem 3.2.
Consider a relation like d(x, y) ≤ L(x, y) · m(x, y), given in Lemma 6.3(i) below. Fix exponents
αd , αL , and αm , and consider all instances x, y with d(x, y) = Θ(nαd ), L(x, y) = Θ(nαL ), and
m(x, y) = Θ(nαm ). Note that the relation may be satisfied for infinitely many instances if αd ≤
αL + αm . On the other hand, if αd > αL + αm then the relation is satisfied for only finitely many
instances. This argument translates Table 3 into Table 2 (using αn = 1), thus generating a complete
list of restrictions for non-trivial parameter settings.
Let x, y be any strings. In the remainder of this section, for convenience, we write p = p(x, y) for
any parameter p ∈ P ∗ . Recall that by possibly swapping x and y, we may assume m = |y| ≤ |x| = n.
This assumption is explicit in our definition of parameter settings. For some other strings x, y
considered in this paper, this assumption may be violated. In this case, the parameter relations of
Table 3 still hold after replacing n by max{n, m} and m by min{n, m}, as well as ∆ by max{∆, δ}
and δ by min{∆, δ} (as the other parameters are symmetric).
Note that Assumption 2.1 (i.e., every symbol in Σ appears at least once in x and y) implies
|Σ| ≤ m and ensures that any symbol of x has at least one matching symbol in y, and thus M ≥ n.
We next list trivial relations. The length of the LCS L satisfies L ≤ m. The numbers of deleted
positions satisfy ∆ = n − L ≤ n, δ = m − L ≤ m, and δ ≤ ∆. Since any dominant pair is also
a matching pair, we have d ≤ M . Moreover, d ≥ L since for any 1 ≤ k ≤ L there is at least one
k-dominant pair.
To prepare the proofs of the remaining relations, recall that we defined L[i, j] = L(x[1..i], y[1..j]). Moreover, observe that L(x, y) ≤ Σ_{σ∈Σ} min{#σ(x), #σ(y)}, which we typically exploit without
explicit notice. Furthermore, we will need the following two simple facts.
Observation 6.1. For any σ ∈ Σ, we have #σ (x) ≤ L or #σ (y) ≤ L.
Proof. If some σ ∈ Σ occurs at least L + 1 times in both x and y, then σ^{L+1} is a common subsequence of x and y of length L + 1 > L, which is a contradiction.
Observation 6.2. Fix 1 ≤ k ≤ L and 1 ≤ ī ≤ n. Then there is at most one k-dominant pair (ī, j)
with 1 ≤ j ≤ m, namely the pair (ī, j ∗ ) with j ∗ = min{j | L[ī, j] = k} if it exists. Symmetrically
for every 1 ≤ k ≤ n and 1 ≤ j̄ ≤ m, there is at most one k-dominant pair (i, j̄) with 1 ≤ i ≤ n.
Proof. All pairs (ī, j) with j ≠ j∗ and L[ī, j] = k satisfy j ≥ j∗, so they are dominated by (ī, j∗).
We are set up to prove the more involved relations of Table 3. We remark that while the
inequality d ≤ Lm is well known since the first formal treatment of dominant pairs, the bound
d ≤ L2 |Σ| seems to go unnoticed in the literature.
Lemma 6.3. It holds that (i) d ≤ Lm and (ii) d ≤ L²|Σ|.
Proof. (i) Let 1 ≤ k ≤ L. For any 1 ≤ j̄ ≤ m there is at most one k-dominant pair (i, j̄) by Observation 6.2. This proves |D_k| ≤ m and thus d = Σ_{k=1}^{L} |D_k| ≤ Lm.
(ii) Let σ ∈ Σ. By Observation 6.1, we may assume that #σ (x) ≤ L (the case #σ (y) ≤ L
is symmetric). For any occurrence iσ of σ in x and any 1 ≤ k ≤ L, there can be at most one
k-dominant pair (iσ , j) by Observation 6.2. Hence, σ contributes at most L k-dominant pairs.
Summing over all σ ∈ Σ and k = 1, . . . , L yields the claim.
Lemma 6.4. We have d ≤ 2L(∆ + 1).
Proof. Fix an LCS z of x and y. Since z can be obtained by deleting at most ∆ = n − L positions
from x or by deleting at most δ = m − L positions from y, x[1..i] and y[1..j] contain z[1..i − ∆] and
z[1..j − δ], respectively, as a subsequence. Hence, we have min{i − ∆, j − δ} ≤ L[i, j] ≤ min{i, j}.
Let 1 ≤ k ≤ L. By the previous property, if L[i, j] = k then (i) k ≤ i ≤ k + ∆ or (ii)
k ≤ j ≤ k + δ. Note that for each ī ∈ {k, . . . , k + ∆} we have (by Observation 6.2) at most one
k-dominant pair (ī, j), and similarly, for each j̄ ∈ {k, . . . , k + δ} we have at most one k-dominant
pair (i, j̄). This proves |Dk | ≤ ∆ + δ + 2 ≤ 2(∆ + 1), from which the claim follows.
Lemma 6.5. We have d ≥ |Σ|.
Proof. By Assumption 2.1, every symbol σ ∈ Σ appears in x and y. Let i be minimal with x[i] = σ
and j be minimal with y[j] = σ. We show that (i, j) is a dominant pair of x, y, and thus d ≥ |Σ|.
Let k = L[i, j] = L(x[1..i], y[1..j]). Since x[i] = y[j], we have L[i − 1, j − 1] = k − 1. Moreover,
since the last symbol in x[1..i] does not appear in y[1..j − 1], it cannot be matched, and we obtain
L[i, j − 1] = L[i − 1, j − 1] = k − 1. Similarly, L[i − 1, j] = k − 1. This proves that (i, j) is a
k-dominant pair of x, y, as desired.
Lemma 6.6. We have (i) M ≥ L²/|Σ| and (ii) M ≤ 2Ln.
Proof. (i) Let z be an LCS of x and y. We have M = Σ_{σ∈Σ} #σ(x) · #σ(y) ≥ Σ_{σ∈Σ} #σ(z)². By Σ_{σ∈Σ} #σ(z) = L and the arithmetic-quadratic mean inequality, the result follows.
(ii) Let Σw := {σ ∈ Σ | #σ (w) ≤ L} for w ∈ {x, y}. By Observation 6.1, we have Σx ∪ Σy = Σ.
We can thus bound
M = Σ_{σ∈Σ} #σ(x) · #σ(y) ≤ Σ_{σ∈Σ_x} L · #σ(y) + Σ_{σ∈Σ_y} L · #σ(x) ≤ L(n + m) ≤ 2Ln.
Small alphabets. Not surprisingly, in the case of very small alphabets there are more relations
among the parameters. The following relations are specific to |Σ| = 2 and |Σ| = 3.
Lemma 6.7. Let Σ = {0, 1}. Then M ≥ Lm/4.
Proof. Let z be an LCS of x and y. Without loss of generality we may assume #0 (x) ≥ n/2 (by
possibly exchanging 0 and 1). If #0 (y) ≥ L/2, then M ≥ #0 (x)·#0 (y) ≥ Ln/4 ≥ Lm/4. Otherwise
we have #0 (y) < L/2 ≤ m/2, which implies #1 (y) ≥ m/2. By #0 (y) < L/2, we must have that
#1 (x) ≥ L/2, since otherwise L ≤ min{#0 (x), #0 (y)} + min{#1 (x), #1 (y)} ≤ #0 (y) + #1 (x) < L,
which is a contradiction. Hence M ≥ #1 (x) · #1 (y) ≥ Lm/4, proving the claim.
The following lemma (which is also applicable for large alphabets) asserts that if most positions
in x and y are the same symbol, say 0, then the number of dominant pairs is small.
Lemma 6.8. Let 0 be a symbol in Σ and set λ := Σ_{σ∈Σ\{0}} min{#σ(x), #σ(y)}. Then d ≤ 5λL.
In particular, for Σ = {0, 1}, we have d ≤ 5L · #1 (y).
Proof. Let 1 ≤ k ≤ L and σ ∈ Σ \ {0}. By Observation 6.2, there are at most min{#σ (x), #σ (y)}
k-dominant pairs (i, j) with x[i] = y[j] = σ. Hence in total, there are at most λ · L dominant pairs
(i, j) with x[i] = y[j] 6= 0.
To count the remaining dominant pairs, which are contributed by 0, we use a similar argument
to Lemma 6.4. Let 1 ≤ k ≤ L and consider any pair (i, j) with x[i] = y[j] = 0, say x[i] is the ℓx -th
occurrence of 0 in x and y[j] is the ℓy -th occurrence of 0 in y. If (i, j) is a k-dominant pair then
k − λ ≤ min{ℓ_x, ℓ_y} ≤ k. Indeed, if min{ℓ_x, ℓ_y} < k − λ, then
L[i, j] ≤ Σ_{σ∈Σ} min{#σ(x[1..i]), #σ(y[1..j])} ≤ min{ℓ_x, ℓ_y} + λ < k,
contradicting the definition of a k-dominant pair. Moreover, if min{ℓ_x, ℓ_y} > k, then 0^{min{ℓ_x, ℓ_y}} is a
common subsequence of x[1..i], y[1..j] of length strictly larger than k, which is again a contradiction
to (i, j) being a k-dominant pair.
Hence, we have k − λ ≤ ℓx ≤ k or k − λ ≤ ℓy ≤ k. Since any choice of ℓx uniquely determines i
by Observation 6.2 (and symmetrically ℓy determines j), there are at most 2λ + 2 k-dominant pairs
with x[i] = y[j] = 0. In total, we have at most (3λ + 2)L ≤ 5λL dominant pairs (note that λ ≥ 1
by Assumption 2.1 and |Σ| ≥ 2).
Lemma 6.9. If Σ = {0, 1} then M ≥ nd/(5L).
Proof. Without loss of generality assume that min{#0 (x), #0 (y)} ≥ min{#1 (x), #1 (y)} (by possibly exchanging
0 and 1). Then λ = min{#1(x), #1(y)} satisfies #σ(y) ≥ λ for all σ ∈ Σ. Thus, M = Σ_{σ∈Σ} #σ(x) · #σ(y) ≥ (#0(x) + #1(x)) · λ = λn. By Lemma 6.8, we have λ ≥ d/(5L) and
the claim follows.
For ternary alphabets, the following weaker relation holds.
Lemma 6.10. If Σ = {0, 1, 2} then M ≥ md/(80L).
Proof. Let z be an LCS of x and y. We permute the symbols such that σ = 0 maximizes #σ (z).
Thus, we have #0 (x) ≥ #0 (z) ≥ L/|Σ| = L/3 and symmetrically #0 (y) ≥ L/3.
If #0 (y) ≥ m/2 then we have M ≥ #0 (x) · #0 (y) ≥ Lm/6 ≥ dm/(18L) by Lemma 6.3(ii).
Similarly, if #0 (x) ≥ n/2 then we obtain M ≥ Ln/6 ≥ dn/(18L) ≥ dm/(18L). Hence, it remains
to consider the case #0 (x) ≤ n/2 and #0 (y) ≤ m/2. Let x′ , y ′ be the subsequences obtained by
deleting all 0s from x and y, respectively, and note that |x′|, |y′| ≥ m/2. Since x′, y′ have alphabet size 2, Lemma 6.7 is applicable and yields M(x′, y′) ≥ L(x′, y′) · m(x′, y′)/4. Observe that there is a common subsequence of x′, y′ of length at least λ/2, where λ = Σ_{σ∈Σ\{0}} min{#σ(x), #σ(y)} (consider the longer of the subsequences 1^{min{#1(x), #1(y)}} and 2^{min{#2(x), #2(y)}}). Hence,
M(x, y) ≥ M(x′, y′) ≥ (1/4) · L(x′, y′) · m(x′, y′) ≥ (1/4) · (λ/2) · (m/2) = λm/16.
The claim now follows from λ ≥ d/(5L) as proven in Lemma 6.8.
7 Technical Tools and Constructions
To prepare later constructions and ease their analysis, this section collects several technical results.
We start off with the simple fact that equal prefixes can be greedily matched.
Lemma 7.1 (Greedy Prefix Matching). For any strings w, x, y, we have L(wx, wy) = |w| + L(x, y)
and d(wx, wy) = |w| + d(x, y).
Proof. For the first statement, it suffices to prove the claim when w = 0 is a single symbol (by
induction and renaming of symbols). Consider any common subsequence z of 0x, 0y. If z is of the
form 0z ′ , then z ′ is a common subsequence of x, y, so |z| ≤ 1 + L(x, y). If the first symbol of z is
not 0, then the first symbols of 0x, 0y are not matched, so z is a common subsequence of x, y and
we obtain |z| ≤ L(x, y). In total, L(0x, 0y) ≤ 1 + L(x, y). The converse holds by prepending 0 to
any LCS of x, y.
For the second statement, let x′ = wx and y ′ = wy. For i ∈ [|w|], we have L(x′ [1..i], y ′ [1..i]) = i.
Hence (i, i) is the unique i-dominant pair of x′ , y ′ and no other dominant pairs (i, j) with i ≤ |w| or
j ≤ |w| exist. This yields |w| dominant pairs. Consider now any (i, j) with i = |w|+ ī and j = |w|+ j̄
where ī ∈ [|x|], j̄ ∈ [|y|]. By the first statement, L(x′ [1..i], y ′ [1..j]) = |w| + L(x[1..ī], y[1..j̄]). Thus
(i, j) is a (|w| + k)-dominant pair of x′ and y ′ if and only if (ī, j̄) is a k-dominant pair of x and y.
This yields d(x, y) additional dominant pairs, proving the claim.
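Greedy prefix matching is easy to test on small random inputs; the following sketch (reusing the hypothetical lcs_parameters() helper from Section 2.1) checks both identities of Lemma 7.1.

import random

random.seed(0)
for _ in range(50):
    w = [random.randint(0, 2) for _ in range(random.randint(0, 5))]
    x = [random.randint(0, 2) for _ in range(random.randint(1, 8))]
    y = [random.randint(0, 2) for _ in range(random.randint(1, 8))]
    plain = lcs_parameters(x, y)
    padded = lcs_parameters(w + x, w + y)
    assert padded["L"] == len(w) + plain["L"]       # L(wx, wy) = |w| + L(x, y)
    assert padded["d"] == len(w) + plain["d"]       # d(wx, wy) = |w| + d(x, y)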
For bounding the number of dominant pairs from below we often use the following observation.
Observation 7.2. For any strings a, x, b, y, we have d(ax, by) ≥ d(a, b).
Proof. This trivially follows from the fact that any prefixes a′ , b′ of a, b are also prefixes of ax,by.
7.1 Generating dominant pairs
The dependency of d on the other parameters is quite complicated, and hence it is a rather complex
task to generate a desired number of dominant pairs while respecting given bounds on all other
parameters. We present different constructions for this purpose in the following.
The following lemma establishes the first such construction, illustrated in Figure 2. We remark
that statements (iii) and (iv) are technical tools that we will only use for |Σ| = O(1) in Section 10.
Lemma 7.3 (Generating dominant pairs). Let R, S ≥ 0 and define a := (01)^{R+S} and b := 0^R(01)^S. The following statements hold.
(i) We have L(a, b) = |b| = R + 2S.
(ii) We have R · S ≤ d(a, b) ≤ min{2(R + 1), 5S} · (R + 2S) = O(R · S).
(iii) For any α, β, β′ ≥ 0, we have L(a1^α, 0^β b 0^{β′}) = |b| = R + 2S.
(iv) For any α, β, β′ ≥ 0, we have R · S ≤ d(a1^α, 0^β b 0^{β′}) ≤ 2(max{R + α, β + β′} + 1)(R + 2S).

Figure 2: The L-table for the strings a = (01)^{R+S} and b = 0^R(01)^S with R = 4, S = 5 (where the entry in row j and column i denotes L(a[1..i], b[1..j])). The indicated dominant pairs visualize the results of Lemma 7.3.
Proof. All statements follow from the following fact.
(∗) For any 0 ≤ s ≤ S, s ≤ r ≤ R + s and β ≥ 0, we have L((01)^r, 0^β 0^R (01)^s) = r + s.
To prove (∗), note that by Lemma 7.1 (reversing the strings) we have
L((01)^r, 0^{β+R}(01)^s) = 2s + L((01)^{r−s}, 0^{β+R}) = 2s + min{#0((01)^{r−s}), β + R} = 2s + (r − s) = r + s.
Statement (i) now follows from setting s = S, r = R + S, and β = 0.
To see (iii), note that L(a1^α, 0^β b 0^{β′}) ≥ L(a, b) = |b| by (i). For the upper bound, we compute
L(a1^α, 0^β b 0^{β′}) ≤ min{#0(a1^α), #0(0^β b 0^{β′})} + min{#1(a1^α), #1(0^β b 0^{β′})}
= min{R + S, R + S + β + β′} + min{R + S + α, S} = R + 2S = L(a, b).
To prove (ii), note that d(a, b) ≤ 5 · #1(b) · L(a, b) = 5S(R + 2S) by Lemma 6.8. The bound d(a, b) ≤ 2(R + 1) · (R + 2S) follows from (iv) by setting α = β = β′ = 0, hence it remains to prove (iv).
For the lower bound, we use d(a1^α, 0^β b 0^{β′}) ≥ d(a, 0^β b) (by Observation 7.2) and consider L′[r, s] := L((01)^r, 0^{β+R}(01)^s). We prove that for any 1 ≤ s ≤ S, s < r ≤ R + s, we have at least one (r + s)-dominant pair (i, j) with 2(r − 1) < i ≤ 2r and β + R + 2(s − 1) < j ≤ β + R + 2s. Indeed, L′[r, s] = r + s (by (∗)) implies that an (r + s)-dominant pair (i, j) with i ≤ 2r and j ≤ β + R + 2s exists. If we had i ≤ 2(r − 1), then by monotonicity of L(x[1..i], y[1..j]) in i and j we would have L′[r − 1, s] ≥ r + s, contradicting L′[r − 1, s] = r + s − 1 (by (∗)). Thus, we obtain i > 2(r − 1), and symmetrically we have j > β + R + 2(s − 1). Hence, for every 1 ≤ s ≤ S, s < r ≤ R + s, we have at least one dominant pair which is not counted for any other choice of r and s. Since for any 1 ≤ s ≤ S there are R choices for s < r ≤ R + s, we conclude that d(a, b) ≥ S · R. For the upper bound, note that ∆(a1^α, 0^β b 0^{β′}) = max{R + α, β + β′} by (iii), and hence Lemma 6.4 yields
d(a1^α, 0^β b 0^{β′}) ≤ 2 · L(a1^α, 0^β b 0^{β′}) · (∆(a1^α, 0^β b 0^{β′}) + 1) = 2(R + 2S)(max{R + α, β + β′} + 1).
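The bounds of Lemma 7.3 can be checked on concrete values of R and S; the sketch below (reusing the hypothetical lcs_parameters() helper from Section 2.1) builds a = (01)^{R+S} and b = 0^R(01)^S over the alphabet {0, 1}.

for R, S in [(4, 5), (2, 7), (6, 3)]:
    a = [0, 1] * (R + S)                  # a = (01)^(R+S)
    b = [0] * R + [0, 1] * S              # b = 0^R (01)^S
    p = lcs_parameters(a, b)
    assert p["L"] == R + 2 * S                                        # Lemma 7.3(i)
    assert R * S <= p["d"] <= min(2 * (R + 1), 5 * S) * (R + 2 * S)   # Lemma 7.3(ii)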
The previous construction uses alphabet size |Σ| = 2, enforcing M (a, b) ≥ L(a, b)2 /2. If the
desired number of matching pairs is much smaller than L2 , which can only be the case if the
desired alphabet size is large, we have to use a more complicated construction exploiting the larger
alphabet. To make the analysis more convenient, we first observe a simple way to bound the
number of dominant pairs from below: if we can find k pairwise non-dominating index pairs (i, j)
that have the same LCS value (of the corresponding prefixes) and whose predecessors (i − 1, j − 1)
have a strictly smaller LCS value, then at least k/2 dominant pairs exist.
Lemma 7.4. For any strings x, y set L[i, j] := L(x[1..i], y[1..j]). Suppose that there are index pairs
(i1 , j1 ), . . . , (ik , jk ) with i1 < i2 < · · · < ik and j1 > j2 > · · · > jk such that for some γ and all
1 ≤ ℓ ≤ k we have L[iℓ , jℓ ] = γ and L[iℓ − 1, jℓ − 1] = γ − 1. Then the number of γ-dominant pairs
of x and y is at least k/2.
Proof. For each ℓ, fix any γ-dominant pair p_ℓ = (i_ℓ^∗, j_ℓ^∗) that dominates (i_ℓ, j_ℓ). Note we may have
pℓ = (iℓ , jℓ ) if (iℓ , jℓ ) is itself a dominant pair, and thus pℓ always exists. Set i0 := 1. We argue
that for every ℓ, we have iℓ−1 ≤ i∗ℓ ≤ iℓ .
Note that for all ℓ, we have L[iℓ − 1, jℓ − 1] < γ and hence L[i, j] < γ for all i < iℓ , j < jℓ . Thus,
we have either (1) i∗ℓ = iℓ (and jℓ∗ ≤ jℓ ) or (2) jℓ∗ = jℓ (and i∗ℓ ≤ iℓ ). Case (1) trivially satisfies
iℓ−1 ≤ i∗ℓ ≤ iℓ . Thus, it remains to argue that in case (2) we have iℓ−1 ≤ i∗ℓ . Indeed, otherwise we
have i∗ℓ < iℓ−1 and jℓ∗ = jℓ < jℓ−1 , which implies L[i∗ℓ , jℓ∗ ] < γ and (i∗ℓ , jℓ∗ ) is no γ-dominant pair.
Note that the above property implies p_ℓ ≠ p_{ℓ+2} for all 1 ≤ ℓ ≤ k − 2, since i_ℓ^∗ ≤ i_ℓ < i_{ℓ+1} ≤ i_{ℓ+2}^∗.
Thus, the number of γ-dominant pairs is bounded from below by |{pℓ | 1 ≤ ℓ ≤ k}| ≥ k/2.
Note that the previous lemma would not hold without the condition L[iℓ − 1, jℓ − 1] = γ − 1.
We are set to analyze our next construction, which is illustrated in Figure 3.
Lemma 7.5 (Generating dominant pairs, large alphabet). Let t ≥ 2, 1 ≤ t′ ≤ t, and S ≥ R ≥ 1.
Over alphabet Σ = {1, . . . , t} we define the strings
a := ((1 . . . t) ◦ (t′ . . . 1))^R ◦ (1 . . . t)^{S−R},    b := (1 . . . t)^S.
It holds that
(i) L(a, b) = |b| = St,
(ii) Assume that S ≥ R(t′ + 1). Then (St)(Rt′)/8 ≤ d(a, b) ≤ 4(St)(Rt′),
(iii) tS² ≤ M(a, b) ≤ t(S + R)S.
Proof. Note that (i) trivially follows from the fact that b is a subsequence of a. For (iii), observe that
for all σ ∈ Σ, we have S ≤ #σ (a) ≤ R+S and #σ (b) = S, from which the claim follows immediately.
The upper bound of (ii) follows from d(a, b) ≤ 2 · L(a, b) · (∆(a, b) + 1) = 2(St)(Rt′ + 1) ≤ 4(St)(Rt′ )
(by Lemma 6.4). To prove the remaining lower bound, we establish the following fact.
(∗) Let w := 1 . . . t and w′ := t′ . . . 1. Then L((ww′)^R, w^{R+k}) = Rt + k for all 0 ≤ k ≤ t′R.
Let us postpone the proof and show that (ii) follows from (∗). Define v := w^{S−R}. For 0 ≤ k ≤ K := min{S − R, Rt′} and 0 ≤ ℓ ≤ (S − R − k)t, we let a(k, ℓ) := (ww′)^R v[1..ℓ] and b(k, ℓ) := w^{R+k} v[1..ℓ].
Figure 3: The L-table for strings a, b of Lemma 7.5 with t = t′ = 3, R = 2, S = 7.
Note that a(k, ℓ) and b(k, ℓ) are prefixes of a and b, respectively. By greedy suffix matching (i.e.,
Lemma 7.1 applied to the reversed strings) and (∗), we obtain
L(a(k, ℓ), b(k, ℓ)) = ℓ + L((ww′)^R, w^{R+k}) = Rt + k + ℓ.
Hence, any 0 ≤ k ≤ K, 1 ≤ ℓ ≤ (S − R − k)t give rise to an index pair (i, j) with L(a[1..i], b[1..j]) =
L(a(k, ℓ), b(k, ℓ)) > L(a(k, ℓ − 1), b(k, ℓ − 1)) = L(a[1..i − 1], b[1..j − 1]). Let Iγ denote the set
of all such index pairs (i, j) that additionally satisfy L(a[1..i], b[1..j]) = γ. Then for any γ, no (i, j) ∈ I_γ dominates another (i′, j′) ∈ I_γ. Thus by Lemma 7.4, d(a, b) ≥ Σ_γ |I_γ|/2. By counting all possible choices for k and ℓ, we obtain Σ_γ |I_γ| = Σ_{0≤k≤K} t(S − R − k) ≥ tK(S − R)/2. This
yields d(a, b) ≥ t · min{S − R, Rt′ } · (S − R)/4. For S ≥ R(t′ + 1), we have S − R ≥ S/2 as well as
S − R ≥ Rt′ and the lower bound of (ii) follows.
To prove (∗), let a′ := (ww′)^R and b′ := w^{R+k}. For the lower bound, it is easy to see that we can completely match R of the copies of w in b′ to copies of w in a′, and at the same time match a single symbol in each of the remaining k copies of w in b′ to a single symbol in some copy of w′ in a′ (since k ≤ R|w′| = Rt′). This yields L(a′, b′) ≥ R|w| + k = Rt + k.
For the upper bound, we write b′ = ◯_{j=1}^{R+k} b_j with b_j := w and consider a partitioning a′ = ◯_{j=1}^{R+k} a_j such that L(a′, b′) = Σ_{j=1}^{R+k} L(a_j, b_j). For any a_j, let w(a_j) denote the number of symbols that a_j shares with any occurrence of w (if, e.g., a_j = x w′ y for some prefix x of w and some suffix y of w, then w(a_j) = |x| + |y|). We first show that
L(a_j, b_j) ≤ 1 if w(a_j) = 0, and L(a_j, b_j) ≤ min{w(a_j), |w|} otherwise.    (1)
Note that trivially L(aj , bj ) ≤ |bj | = |w|. Hence for an upper bound, we may assume that w(aj ) <
|w|, and in particular that aj is a subsequence of a′j = xw′ y for some prefix x = σx . . . t of w and
some suffix y = 1 . . . σy of w with σy ≤ σx , where |x| + |y| = w(aj ). Note that any longest common
subsequence z of a′j and bj = w = 1 . . . t is an increasing subsequence of a′j . Hence, if z starts
with a symbol σ ′ ≥ σy , then z is an increasing subsequence in x t′ . . . σ ′ ; easy inspection shows
that in this case |z| ≤ max{|x|, 1}. If z starts with a symbol σ ′ ≤ σx , then z is an increasing
subsequence in σ ′ . . . 1 y; again, one can see that |z| ≤ max{|y|, 1} holds in this case. Thus,
L(aj , bj ) ≤ L(a′j , bj ) = |z| ≤ max{|x|, |y|, 1} ≤ max{|x| + |y|, 1} = max{w(aj ), 1}, concluding the
proof of (1).
Let J = {j | w(aj ) ≥ 1}. We compute
L(a′, b′) = Σ_{j=1}^{R+k} L(a_j, b_j) ≤ Σ_{j∈J} min{w(a_j), |w|} + (R + k − |J|)
≤ min{ Σ_{j=1}^{R+k} w(a_j), |J| · |w| } + (R + k − |J|)
≤ min{R · |w|, |J| · |w|} + R + k − |J| ≤ R|w| + k = Rt + k,
where the last inequality follows from the observation that |J| = R maximizes the expression
min{R · |w|, |J| · |w|} − |J|. This finishes the proof of (∗) and thus the lemma.
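Analogously, the large-alphabet construction of Lemma 7.5 can be checked numerically (a sketch of ours, reusing the hypothetical lcs_parameters() helper from Section 2.1); the parameters below satisfy the assumption S ≥ R(t′ + 1) of statement (ii).

t, tp, R, S = 3, 3, 2, 8                 # tp plays the role of t'; S >= R * (tp + 1)
w = list(range(1, t + 1))                # w  = 1 ... t
wp = list(range(tp, 0, -1))              # w' = t' ... 1
a = (w + wp) * R + w * (S - R)           # a = ((1...t)(t'...1))^R (1...t)^(S-R)
b = w * S                                # b = (1...t)^S
p = lcs_parameters(a, b)
assert p["L"] == S * t                                               # Lemma 7.5(i)
assert (S * t) * (R * tp) / 8 <= p["d"] <= 4 * (S * t) * (R * tp)    # Lemma 7.5(ii)
assert t * S * S <= p["M"] <= t * (S + R) * S                        # Lemma 7.5(iii)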
7.2 Block elimination and dominant pair reduction
We collect some convenient tools for the analysis of later constructions. The first allows us to
“eliminate” 0^ℓ-blocks when computing the LCS of strings of the form x0^ℓ y, 0^ℓ z, provided that ℓ is
sufficiently large.
Lemma 7.6. For any strings x, y, z and ℓ ≥ #0(x) + |z| we have L(x0^ℓ y, 0^ℓ z) = ℓ + L(0^{#0(x)} y, z).
Proof. Let u := x0^ℓ y and v := 0^ℓ z. In case we match no symbol in the 0^ℓ-block of v with a symbol in 0^ℓ y in u, then at most min{#0(x), ℓ} symbols of the 0^ℓ-block of v are matched. The remainder z yields at most |z| matched symbols. Otherwise, in case we match any symbol in the 0^ℓ-block of v with a symbol in 0^ℓ y in u, then no symbol σ ≠ 0 of x can be matched. Thus, in this case we may replace x by 0^{#0(x)}. Together this case distinction yields L(u, v) = max{min{#0(x), ℓ} + |z|, L(0^{#0(x)+ℓ} y, 0^ℓ z)}. Using Lemma 7.1, we obtain L(u, v) = max{#0(x) + |z|, ℓ + L(0^{#0(x)} y, z)}. The assumption ℓ ≥ #0(x) + |z| now yields the claim.
The following lemma bounds the number of dominant pairs of strings of the form x′ = yx,
y ′ = zy by d(x′ , y ′ ) = O(|z| · |y ′ |). If |x′ | ≥ |y ′ |, this provides a bound of O(δ(x′ , y ′ ) · m(x′ , y ′ ))
instead of the general, weaker bound O(∆(x′ , y ′ ) · m(x′ , y ′ )) of Lemma 6.4.
Lemma 7.7. For any strings x, y, z, let x′ = yx, y ′ = zy. Then
d(x′, y′) ≤ |y| · (|z| + 1) + d(x′, z) ≤ |y| · (|z| + 1) + |z|².
Proof. For every prefix ỹ = y′[1..j], we bound the number of dominant pairs (i, j) of x′, y′. Clearly, all prefixes ỹ of z (i.e., j ≤ |z|) contribute d(x′, z) ≤ L(x′, z) · m(x′, z) ≤ |z|² dominant pairs.
It remains to consider ỹ = z y[1..ℓ] (i.e., j = |z| + ℓ) for ℓ ∈ [|y|]. For i < ℓ, the string x̃ = x′[1..i] = y[1..i] is a subsequence of ỹ, i.e., L(x̃, ỹ) = i, but the prefix z y[1..i] of ỹ already satisfies L(x̃, z y[1..i]) = i. Hence, there are no dominant pairs with i < ℓ. Thus, consider i ≥ ℓ and let x̃ = x′[1..i]. Clearly, y[1..ℓ] is a common subsequence of x̃, ỹ. This yields L(x̃, ỹ) ≥ ℓ = |ỹ| − |z|, and hence any such dominant pair (i, j) satisfies j − |z| ≤ L(x′[1..i], y′[1..j]) ≤ j. By Observation 6.2, there are at most |z| + 1 such dominant pairs for fixed j. This yields at most |y| · (|z| + 1) dominant pairs (i, j) with |z| < j ≤ |y′|, concluding the proof.

Figure 4: Illustration of Lemma 7.8. The strings x′ = y 2^ℓ x, y′ = 2^ℓ y are defined using x = (01)^{R+S}, y = 0^R(01)^S, R = 4, S = 5, and ℓ = 2. The number of dominant pairs is significantly reduced compared to Figure 2.
The above lemma gives rise to a surprising technique: Given strings x, y, we can build strings
x′ , y ′ such that L(x′ , y ′ ) lets us recover L(x, y), but the number of dominant pairs may be reduced
significantly, namely to a value d(x′ , y ′ ) = O(δ(x, y) · n(x, y)), independently of d(x, y). The effect
of this technique is illustrated in Figure 4.
Lemma 7.8 (Dominant Pair Reduction). Consider strings x, y and a number ℓ > |y| − L(x, y).
(i) If 2 is a symbol not appearing in x, y, then x′ := y 2^ℓ x and y′ := 2^ℓ y satisfy L(x′, y′) = L(x, y) + ℓ and d(x′, y′) ≤ 3ℓ · |y|.
(ii) For any symbols 0, 1 (that may appear in x, y) set x′′ := 0^k 1^k y 1^ℓ 0^k 1^k x and y′′ := 1^ℓ 0^k 1^k y with k := 2|y| + |x| + 1. Then L(x′′, y′′) = L(x, y) + ℓ + 2k and d(x′′, y′′) ≤ O(ℓ(|x| + |y| + ℓ)).
Proof. (i) We clearly have L(x′ , y ′ ) ≥ L(2ℓ , 2ℓ ) + L(x, y) ≥ ℓ + L(x, y). For the other direction,
let z be a common subsequence of x′ , y ′ . If z contains no 2, then by inspecting y ′ we obtain
|z| ≤ |y| = L(x, y) + (|y| − L(x, y)) < L(x, y) + ℓ, so z is no LCS. Otherwise, if z contains a 2, then
no symbol from the copy of y in x′ can be matched by z, implying |z| ≤ L(2ℓ x, y ′ ) = ℓ + L(x, y).
For the dominant pairs, we apply Lemma 7.7 to obtain d(x′, y′) ≤ |y|(ℓ + 1) + d(x′, 2^ℓ). Note that d(x′, 2^ℓ) = d(2^ℓ, 2^ℓ) = ℓ, since we can delete all characters different from 2 in x′ without affecting the dominant pairs of x′, 2^ℓ and then apply Lemma 7.1. Thus, d(x′, y′) ≤ |y|(ℓ + 1) + ℓ ≤ 3ℓ · |y|.
(ii) The argument is slightly more complicated when the padding symbols may appear in x, y.
Clearly, we have L(x′′ , y ′′ ) ≥ L(1ℓ 0k 1k x, 1ℓ 0k 1k y) ≥ ℓ + 2k + L(x, y). For the other direction, let
z be a common subsequence of x′′ , y ′′ . If z does not match any 0 in the 0k -block of y ′′ with a
symbol in a 0k -block in x′′ , then from the 0k -block of y ′′ we match at most |y| + |x| symbols,
and we obtain |z| ≤ (|y| + |x|) + |1ℓ 1k y| = |x| + 2|y| + ℓ + k < ℓ + 2k, since k > |x| + 2|y|,
so z is no longest common subsequence. If z matches a 0 in the 0k -block of y ′′ with a symbol
in the left 0k -block of x′′ , then no symbol in the 1ℓ -block in y ′′ is matched by z, so we obtain
|z| ≤ |0k 1k y| = 2k + L(x, y) + (|y| − L(x, y)) < 2k + L(x, y) + ℓ, so z is no longest common
subsequence. It remains the case where z matches some 0 in the 0k -block of y ′′ with a symbol in
the right 0k -block of x′′ . Then the part 1k y of y ′′ has to be matched to a subsequence of 0k 1k x in
x′′ . This yields |z| ≤ ℓ + k + L(0k 1k x, 1k y). Since k > |y| we can apply Lemma 7.6 (swapping the
roles of 0 and 1) to obtain L(0k 1k x, 1k y) = k + L(x, y), so as desired we have |z| ≤ ℓ + 2k + L(x, y).
For the dominant pairs, we apply Lemma 7.7 to obtain d(x′′ , y ′′ ) ≤ |0k 1k y| · (ℓ + 1) + ℓ2 =
O(ℓ(|x| + |y| + ℓ)).
8
Paddings
In this section we construct paddings that allow us to augment any strings from LCS≤ (α) to become
strings in LCS(α). Specifically, we prove Lemma 4.2. So let α be a parameter setting satisfying
Table 2, let p ∈ P ∗ = {n, m, L, δ, ∆, |Σ|, M, d} be a parameter, and let n ≥ 1. We say that strings
x, y prove Lemma 4.2 for parameter p if (n, x, y) is an instance of LCS≤ (α) with p(x, y) = Θ(nαp ),
and given n we can compute x = x(n), y = y(n), and L(x, y) in time O(n). Note that for the first
requirement of being an instance of LCS≤ (α), we have to show that p′ (x, y) ≤ O(nαp′ ) for any
parameter p′ ∈ P ∗ . Recall that we write p = nαp for the target value of parameter p.
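Throughout this section it is convenient to check constructions against all parameters at once. The following Python helper (ours, not from the paper) measures the parameters of P ∗ for a concrete string pair, under the usual conventions n ≥ m, δ = m − L and ∆ = n − L; dominant pairs are counted via the standard DP characterization (matches whose table entry strictly exceeds both neighbours).

```python
from collections import Counter

def parameters(x, y):
    # measure n, m, L, delta, Delta, |Sigma|, M and d for a concrete string pair
    if len(x) < len(y):
        x, y = y, x                                   # convention: |x| >= |y|
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x[i - 1] == y[j - 1] \
                else max(dp[i - 1][j], dp[i][j - 1])
    L = dp[len(x)][len(y)]
    cx, cy = Counter(x), Counter(y)
    M = sum(cx[s] * cy[s] for s in cx)                # matching pairs
    d = sum(1 for i in range(1, len(x) + 1) for j in range(1, len(y) + 1)
            if x[i - 1] == y[j - 1]
            and dp[i][j] > dp[i - 1][j] and dp[i][j] > dp[i][j - 1])
    return {"n": len(x), "m": len(y), "L": L, "delta": len(y) - L,
            "Delta": len(x) - L, "Sigma": len(set(x) | set(y)),
            "M": M, "d": d}

print(parameters("0101010101", "0001"))
```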
Lemma 8.1. Let Σ′ be an alphabet of size min{|Σ|, L}. Then the strings x := y := ⊙σ∈Σ′ σ⌊L/|Σ′ |⌋ (the concatenation of σ⌊L/|Σ′ |⌋ over all σ ∈ Σ′ ) prove Lemma 4.2 for parameter L.
Proof. Note that ⌊L/|Σ′ |⌋ = Θ(L/|Σ′ |), since |Σ′ | ≤ L. Thus, indeed L(x, y) = |x| = Θ(L/|Σ′ |) ·
|Σ′ | = Θ(L). Moreover, L(x, y) can be computed in time O(n), as well as x and y. For the number
of matching pairs we note that M (x, y) ≤ |Σ′ | · (L/|Σ′ |)2 , which is max{L, L2 /|Σ|} by choice of |Σ′ |.
This is O(M ), using the parameter relations M ≥ n ≥ m ≥ L and M ≥ L2 /|Σ| (see Table 3).
The remaining parameters are straight-forward. Using m(x, y) = n(x, y) = L(x, y) = Θ(L) and
the parameter relations L ≤ m ≤ n we obtain that m(x, y) ≤ O(m) and n(x, y) ≤ O(n). Moreover,
δ(x, y) = ∆(x, y) = 0 ≤ δ ≤ ∆. The alphabet size |Σ(x, y)| = |Σ′ | is at most |Σ| by choice of |Σ′ |.
By Lemma 7.1 we obtain d(x, y) = |x| = Θ(L) ≤ O(d) using the parameter relation L ≤ d.
Lemma 8.2. The strings x := 1∆+1 and y := 1 prove Lemma 4.2 for parameter ∆. The strings
x := 1 and y := 1δ+1 prove Lemma 4.2 for parameter δ.
Proof. The analysis is straight-forward. Note that indeed ∆(x, y) = ∆, and that L(x, y) = 1 ≤
O(L). We have n(x, y) = ∆ + 1 ≤ O(n) and m(x, y) = 1 ≤ O(m). Clearly, L(x, y), x, and y can
be computed in time O(n). Moreover, δ(x, y) = 0 ≤ δ and the alphabet size is 1 ≤ O(Σ). Finally,
we have M (x, y) = Θ(∆) ≤ O(n) ≤ O(M ) using the parameter relations L ≤ n ≤ M , and using
the relation d ≤ Lm we obtain d(x, y) ≤ 1 ≤ O(d).
The analysis for δ is symmetric; the same proof holds almost verbatim.
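As a quick sanity check, the following sketch (ours; the concrete numbers are arbitrary toy values) instantiates the padding strings of Lemmas 8.1 and 8.2 and verifies the claimed values of L and ∆ directly.

```python
def lcs_length(x, y):
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x[i - 1] == y[j - 1] \
                else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(x)][len(y)]

# Lemma 8.1: x = y = concatenation of sigma^(L/|Sigma'|) over sigma in Sigma'
L_target, sigma_prime = 12, "abc"
block = L_target // len(sigma_prime)
x = y = "".join(s * block for s in sigma_prime)
assert lcs_length(x, y) == len(x) == L_target        # L(x, y) = Theta(L)

# Lemma 8.2: x = 1^(Delta+1), y = 1 realizes Delta; swapping roles realizes delta
Delta_target = 7
x2, y2 = "1" * (Delta_target + 1), "1"
assert lcs_length(x2, y2) == 1
assert len(x2) - lcs_length(x2, y2) == Delta_target  # Delta(x, y) = Delta
```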
Lemma 8.3. The strings constructed in Lemma 8.1 or the strings constructed in Lemma 8.2 prove
Lemma 4.2 for parameters n and m.
Proof. Since n = L + ∆ we have L = Θ(n) or ∆ = Θ(n), i.e., αL = 1 or α∆ = 1. In the first case,
in Lemma 8.1 we construct strings of length Θ(L) = Θ(n), and thus these strings prove Lemma 4.2
not only for parameter L but also for parameter n. In the second case, the same argument holds
for the first pair of strings constructed in Lemma 8.2. The parameter m is symmetric.
Lemma 8.4. Let w := 12 . . . |Σ| be the concatenation of |Σ| unique symbols. The strings w, w or
the strings w, rev(w) prove Lemma 4.2 for parameter |Σ|.
Proof. Clearly, both pairs of strings realize an alphabet of size exactly |Σ|. Since m = L + δ
we have L = Θ(m) or δ = Θ(m). In the first case, we use L(w, w) = |Σ| ≤ m = Θ(L) and
δ(w, w) = ∆(w, w) = 0 ≤ δ ≤ ∆. In the second case, we have L(w, rev(w)) = 1 ≤ O(L) and
δ(w, rev(w)) = ∆(w, rev(w)) = |Σ| − 1 ≤ m = Θ(δ) ≤ O(∆).
The remaining parameters are straight-forward. Let (x, y) ∈ {(w, w), (w, rev(w))}. We have
n(x, y) = m(x, y) = |Σ| ≤ m ≤ n. Moreover, d(x, y) ≤ M (x, y) = |Σ| ≤ d ≤ M using the relations
|Σ| ≤ d ≤ M . Clearly, the strings and their LCS length can be computed in time O(n).
8.1
Matching Pairs
Lemma 8.5. If α∆ = 1 then x := 1⌊M/n⌋+∆ and y := 1⌊M/n⌋ prove Lemma 4.2 for parameter M .
Proof. Note that ⌊M/n⌋ = Θ(M/n) by the parameter relation M ≥ n. We have M (x, y) =
Θ((M/n)2 +∆M/n). By the parameter relations M ≤ 2Ln ≤ 2n2 the first summand is O(n·M/n) =
O(M ). Since α∆ = 1, the second summand is Θ(M ). Thus, we indeed have M (x, y) = Θ(M ).
The remainder is straight-forward. Clearly, x, y, and L(x, y) = ⌊M/n⌋ can be computed in time
O(n). Since M ≤ 2Ln we also obtain L(x, y) = m(x, y) = ⌊M/n⌋ ≤ O(L) ≤ O(m). Moreover,
n(x, y) = ⌊M/n⌋ + ∆ ≤ O(n) by the relations M/n ≤ 2L ≤ 2n and ∆ ≤ n. Note that ∆(x, y) = ∆
and δ(x, y) = 0 ≤ δ. The alphabet size is 1. By Lemma 7.1 we have d(x, y) = ⌊M/n⌋ ≤ 2L ≤ 2d.
Lemma 8.6. Assume α∆ < 1 and let Σ′ be an alphabet of size min{⌈m2 /M ⌉, |Σ|}. Then x := y := ⊙σ∈Σ′ σ⌊m/|Σ′ |⌋ prove Lemma 4.2 for parameter M .
Proof. Observe that α∆ < 1 implies αL = αm = 1, so that n = Θ(L) = Θ(m) (see Table 2).
The number of matching pairs is M (x, y) = |Σ′ | · ⌊m/|Σ′ |⌋2 . By the parameter relation m ≥ |Σ|
and |Σ| ≥ |Σ′ | we have ⌊m/|Σ′ |⌋ = Θ(m/|Σ′ |), and by M ≤ 2Ln = Θ(m2 ) we obtain ⌈m2 /M ⌉ =
Θ(m2 /M ). Thus, M (x, y) = Θ(m2 /|Σ′ |) = Θ(max{M, m2 /|Σ|}) by choice of |Σ′ |. Using m = Θ(L)
and the parameter relation M ≥ L2 /|Σ| we indeed obtain M (x, y) = Θ(M ).
The remainder is straight-forward. Since x = y we have L(x, y) = m(x, y) = n(x, y) = |Σ′ | ·
⌊m/|Σ′ |⌋ ≤ m = Θ(L) = Θ(n). Moreover, δ(x, y) = ∆(x, y) = 0 ≤ δ ≤ ∆. The alphabet size is
|Σ(x, y)| = |Σ′ | ≤ |Σ| by choice of |Σ′ |. By Lemma 7.1 we have d(x, y) = L(x, y) ≤ O(L) ≤ O(d).
Clearly, x, y, and L(x, y) can be computed in time O(n).
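The number of matching pairs of these constructions is easy to evaluate directly; the following sketch (ours, with arbitrary toy parameter values) does so via M (x, y) = Σσ #σ (x) · #σ (y).

```python
import math
from collections import Counter

def matching_pairs(x, y):
    cx, cy = Counter(x), Counter(y)
    return sum(cx[s] * cy[s] for s in cx)

# Lemma 8.5 (alpha_Delta = 1): unary strings of lengths M/n + Delta and M/n
M_t, n_t, Delta = 60, 12, 8
k = M_t // n_t
print(matching_pairs("1" * (k + Delta), "1" * k))    # Theta((M/n)^2 + Delta*M/n)

# Lemma 8.6 (alpha_Delta < 1): x = y over an alphabet of size about m^2/M
m_t = 12
size = max(1, math.ceil(m_t * m_t / M_t))
sigma_prime = [chr(ord("a") + i) for i in range(size)]
block = m_t // size
x = y = "".join(s * block for s in sigma_prime)
print(matching_pairs(x, y))                          # Theta(m^2 / |Sigma'|)
```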
8.2
Dominant Pairs
We start with a simple construction that always works on constant-sized alphabets (αΣ = 0).
Lemma 8.7. Assume αd ≤ 2αL ≤ αM and set x := (01)R+S and y := 0R (01)S (as analyzed in Lemma 7.3), instantiated with R = ⌊min{∆, √d}⌋, S = ⌈d/R⌉. Then x, y prove Lemma 4.2 for parameter d.
Proof. Note that by definition R ≤ √d, and hence S ≥ d/R ≥ R. By Lemma 7.3(ii), we obtain d(x, y) = Θ(R · S) = Θ(R · d/R) = Θ(d). For the other parameters, note that n(x, y) = 2(R + S) = O(d/R) = O(d/∆ + √d). By the relation d ≤ 2L(∆ + 1), we have d/∆ ≤ O(L), and by the assumption αd ≤ 2αL , we have d ≤ O(L2 ) and hence √d ≤ O(L). Thus, n(x, y) ≤ O(L).
The remainder is straight-forward. By L(x, y) ≤ m(x, y) ≤ n(x, y) ≤ O(L) ≤ O(m) ≤ O(n) we
have verified n, m, L. Consequently, also M (x, y) ≤ n(x, y)2 ≤ O(L2 ) ≤ O(M ) by the assumption
2αL ≤ αM . Trivially, |Σ(x, y)| = 2 = O(|Σ|). Observing that δ(x, y) = 0 ≤ O(δ) and ∆(x, y) =
R ≤ O(∆) by Lemma 7.3(i) concludes the parameter verification. Since x, y and L(x, y) = R + 2S
(by Lemma 7.3(i)) can be computed in time O(n), the claim follows.
The construction above creates a long LCS of length L(x, y) = Θ(m(x, y)) which forces d(x, y) =
O(L(x, y)2 ). With super-constant alphabet sizes, one can construct larger numbers of dominant
pairs (compared to L(x, y)) by exploiting the crossing gadgets defined in Definition 9.4.
Lemma 8.8. Assume αd > 2αL and set v := (01)R+S and w := 0R (01)S with R = S = L.
Construct x := crx (v, . . . , v) and y := cry (w, . . . , w) on ⌊d/L2 ⌋ copies of v and w. Then x, y prove
Lemma 4.2 for parameter d.
Proof. Note that ⌊d/L2 ⌋ = Θ(d/L2 ) since we assume αd > 2αL . By the Crossing Alphabets
Lemma (Lemma 9.3), we obtain L(x, y) = L(v, w) = 3L, and in particular L(x, y), x, and y can be
computed in time O(n).
Furthermore, the Crossing Alphabets Lemma also yields d(x, y) = Θ(d/L2 ) · d(v, w) = Θ(d),
where the bound d(v, w) = Θ(L2 ) follows from Lemma 7.3(i). Similarly, we observe that ∆(x, y) ≤
n(x, y) = Θ(d/L2 ) · n(v, w) = Θ(d/L), which is at most O(∆) ≤ O(n) by the parameter relation
d ≤ 2L(∆ + 1). Likewise, m(x, y) ≤ n(x, y) = O(d/L) ≤ O(m) by the parameter relation d ≤ Lm.
Moreover, M (x, y) = O(d/L2 ) · M (v, w) = O(d) ≤ O(M ) by d ≤ M . Finally, the assumption
2αL < αd together with the parameter relation d ≤ Lm, i.e., αd ≤ αL + αm , forces αL < αm .
Hence, αδ = αm , i.e., δ = Θ(m) (see Table 2), and thus δ(x, y) ≤ m(x, y) ≤ O(m) = O(δ).
Since v and w have alphabet size 2 and we use ⌊d/L2 ⌋ copies over disjoint alphabets, we have
|Σ(x, y)| = 2⌊d/L2 ⌋ ≤ O(|Σ|) by the parameter relation d ≤ L2 |Σ|, which concludes the proof.
For super-constant alphabet sizes, the number of matching pairs M (x, y) can attain values much
smaller than L(x, y)2 , which is an orthogonal situation to the lemma above. In this case, we use a
different generalization of the first construction (that we already prepared in Section 7).
Lemma 8.9. Assume αM < 2αL and set x := (1 . . . t t′ . . . 1)R (1 . . . t)S−R and y := (1 . . . t)S (as analyzed in Lemma 7.5), instantiated with
t := ⌊L2 /M ⌋,   t′ := min{r, t},   R := ⌈r/t⌉,   S := 4⌈d/(rt)⌉,
where r := min{∆, ⌊√(d/t)⌋}. Then x, y prove Lemma 4.2 for parameter d.
Proof. We first verify the conditions of Lemma 7.5. Observe that by the assumption αM < 2αL
we indeed have t = Θ(L2 /M ) and t ≥ 2 (for sufficiently large n). From the parameter relation
M ≥ L2 /|Σ| we obtain t ≤ |Σ|, and from the parameter
relation d ≥ |Σ| (and α∆ > 0) this yields
r ≥ 1. Thus, 1 ≤ t′ ≤ t. Moreover, r = Θ(min{∆, √(d/t)}). Observe that r ≤ Rt′ ≤ 2r. Indeed, if r ≤ t then R = 1 and t′ = r, and if r > t then r/t ≤ R ≤ 2r/t and t′ = t. In particular, we have Rt′ = Θ(min{∆, √(d/t)}) and S = Θ(d/(Rt′ · t)). Note that R(t′ + 1) ≤ 2Rt′ ≤ 4r ≤ S, since r ≤ √(d/t). In particular, this yields 1 ≤ R ≤ S, so that
all conditions of Lemma 7.5 are satisfied.
In the remainder we show that x, y satisfy the parameter constraints. We have n(x, y) ≤ (R + S)t = O(St) = O(d/(Rt′ )) = O(d/∆ + √(dt)). Note that d/∆ = O(L) by the parameter relation d ≤ 2L(∆ + 1), and that √(dt) = O(√(dL2 /M )) = O(L) by the parameter relation d ≤ M .
Thus, L(x, y) ≤ m(x, y) ≤ n(x, y) ≤ O(L) ≤ O(m) ≤ O(n), which satisfies the parameters L, m, n.
For d, note that by Lemma 7.5(ii), we have d(x, y) = Θ((Rt′ ) · (St)) = Θ((Rt′ ) · d/(Rt′ )) = Θ(d).
For M , Lemma 7.5(iii) shows that M (x, y) = O(S 2 t) = O((d/(Rt′ ))2 · (1/t)) = O(L2 · (M/L2 )) = O(M ), where we used d/(Rt′ ) = O(L) as shown above. Since L(x, y) = |y| = St, we obtain
δ(x, y) = 0 ≤ O(δ) and ∆(x, y) = Rt′ ≤ O(∆). Finally, |Σ(x, y)| = t = Θ(L2 /M ) ≤ O(|Σ|) follows
from the parameter relation M ≥ L2 /|Σ|. Observing that x, y, and L(x, y) = St can be computed
in time O(n) concludes the proof.
9
Hardness for Large Alphabet
In this section, we consider a parameter setting α satisfying the relations of Table 2, and we prove
a lower bound for LCS≤ (α) assuming OVH, thus proving Theorem 4.6. We split our proof into the
two cases αδ = αm (where L may be small) and αL = αm (where L is large). For readability, but
abusing notation, for the target value ⌈nαp ⌉ of parameter p we typically simply write p.
In this section we can assume that
αL , αm , αδ , α∆ > 0
and αd > 1,
(LB)
since otherwise the known Õ(n + min{d, δm, δ∆}) algorithm runs in (near-)optimal time Õ(n) and
there is nothing to show (here we used the parameter relations d ≤ Lm and L, m, δ, ∆ ≤ n).
9.1
Small LCS
Assume αδ = αm , i.e., δ = Θ(m). In this case, the longest common subsequence might be arbitrarily
small, i.e., any value 0 < αL ≤ αm is admissible.
9.1.1
Hard Core
At the heart of our constructions lies the previous reduction from OV to LCS of [28], which we
restate here for our purposes.
Lemma 9.1. Let two sets A = {a1 , . . . , aA } and B = {b1 , . . . , bB } of vectors in {0, 1}D with A ≥ B
be given. In time O(AD), we can construct strings x1 , . . . , x2A and y1 , . . . , yB over {0, 1} and
γ, γ ′ = Θ(D) such that the strings x and y defined by
x := x1 0γ x2 0γ . . . x2A−1 0γ x2A ,
y := 0Aγ′ y1 0γ′ y2 0γ′ . . . yB−1 0γ′ yB 0Aγ′ ,
satisfy the following properties:
(i) We can compute some ρ in time O(AD) such that L(x, y) ≥ ρ if and only if there is a pair
i, j with ⟨ai , bj ⟩ = 0.
(ii) |x|, |y| = O(AD).
(iii) #1 (y) = O(BD).
(iv) For all β ≥ 0, we have L(x, 0β y) = L(x, y).
Proof. Claims (i) and (ii) are a restatement of the main result in [28], instantiated for LCS. Claim
(iii) follows directly from the construction. It is easy to check that in [28] the value of γ ′ is chosen
large enough to satisfy Aγ ′ ≥ #0 (x). This yields claim (iv), since any common subsequence z of
x, 0β y starts with at most #0 (x) symbols 0, which can all be matched to the 0Aγ′ -block of y, so z
is also a common subsequence of x, y, and we obtain L(x, y) = L(x, 0β y).
9.1.2
Constant Alphabet
First assume αΣ = 0 and thus |Σ| = O(1). Consider any n ≥ 1 and target values p = nαp
for p ∈ P. Let A = {a1 , . . . , aA }, B = {b1 , . . . , bB } ⊆ {0, 1}D be a given OV instance with
D = no(1) and where we set A := ⌊L/D⌋ and B := ⌊d/(LD)⌋. Note that A = nαL −o(1) = nΩ(1) and
B = nαd −αL −o(1) = nΩ(1) by (LB). Also note that UOVH implies that solving such OV instances
takes time (AB)1−o(1) = nαd −o(1) = d1−o(1) .
Construct strings x, y as in Lemma 9.1. Then from the LCS length L(x, y) we can infer whether
A, B has an orthogonal pair of vectors by Lemma 9.1(i). Moreover, this reduction runs in time
O(AD) = O(L) ≤ O(d1−ε ) for sufficiently small ε > 0 (since αL ≤ αm ≤ 1 < αd by Table 2 and
(LB)). We claim that (n, x, y) is an instance of LCS≤ (α). This shows that any algorithm solving
LCS≤ (α) in time O(d1−ε ) implies an algorithm for our OV instances with running time O(d1−ε ),
contradicting UOVH. Hence, in the current case αδ = αm and αΣ = 0, any algorithm for LCS≤ (α)
takes time d1−o(1) , proving part of Theorem 4.6.
It remains to show the claim that (n, x, y) is an instance of LCS≤ (α). Using the parameter
relations L ≤ m ≤ n, Lemma 9.1(ii), and the definition of A, we have L(x, y) ≤ m(x, y) ≤ n(x, y) =
|x| ≤ O(AD) = O(L) ≤ O(m) ≤ O(n), so indeed p(x, y) ≤ O(p) = O(nαp ) for p ∈ {L, m, n}.
Similarly, we obtain δ(x, y) ≤ ∆(x, y) ≤ n(x, y) ≤ O(m) = O(δ) ≤ O(∆), where the equality
holds in the current case αδ = αm . Since x, y use the binary alphabet {0, 1}, we have |Σ(x, y)| =
2 ≤ O(nαΣ ). For the number of matching pairs we have M (x, y) ≤ n(x, y)2 = O((AD)2 ) = O(L2 ).
Since we are in the case αΣ = 0, from the parameter relation M ≥ L2 /|Σ| (Lemma 6.6(i)) we obtain
L2 ≤ O(M ) and thus also M (x, y) is sufficiently small. Finally, we use Lemmas 6.8 and 9.1(iii)
to bound d(x, y) ≤ O(L(x, y) · #1 (y)) ≤ O(AD · BD), which by definition of A, B is O(d). This
proves that (n, x, y) belongs to LCS≤ (α).
We remark that our proof also yields the following lemma, which we will use for small alphabets.
Lemma 9.2. Let α be a parameter setting satisfying Table 2 with αΣ = 0 and αδ = αm . There
is a constant γ ≥ 1 such that any algorithm for LCSγ≤ (α, {0, 1}) takes time d1−o(1) , unless OVH
fails. This holds even restricted to instances (n, x, y) of LCSγ≤ (α, {0, 1}) with |x|, |y| ≤ γ · nαL and
#1 (y) ≤ γ · nαd −αL satisfying L(x, 0β y) = L(x, y) for all β ≥ 0.
9.1.3
Superconstant Alphabet
To tackle the general case αΣ ≥ 0 (while still assuming αδ = αm ), we use the following fact which
is similar to the Disjoint Alphabets Lemma.
Lemma 9.3 (Crossing Alphabets). Let Σ1 , . . . , Σk be disjoint alphabets and let xi , yi be strings
over alphabet Σi . Consider x := x1 . . . xk and y := yk . . . y1 , i.e., the order in y is reversed. For any parameter p ∈ {n, m, |Σ|, M, d} we have p(x, y) = Σ_{i=1}^{k} p(xi , yi ). Moreover, L(x, y) = maxi L(xi , yi ).
Proof. The statement is trivial for the string lengths n, m, alphabet size |Σ|, and number of matching pairs M . For the LCS length L we observe that any common subsequence z that matches a
symbol in Σi cannot match any symbols in other alphabets, which yields L(x, y) ≤ maxi L(xi , yi ).
Since any common subsequence of xi , yi is also a common subsequence of x, y, we obtain equality.
Since every dominant pair is also a matching pair, every dominant pair of x, y stems from
prefixes x1 . . . xj−1 x′ and yk . . . yj+1 y ′ , with x′ a prefix of xj and y ′ a prefix of yj for some j. Since
L(x1 . . . xj−1 x′ , yk . . . yj+1 y ′ ) = L(x′ , y ′ ), we obtain that the dominant pairs of x, y of the form
x1 . . . xj−1 x′ , yk . . . yj+1 y ′ are in one-to-one correspondence with the dominant pairs of xj , yj . Since
these dominant pairs of x, y are incomparable, this yields the claim for parameter d.
We make use of the above lemma by invoking the following construction.
Definition 9.4. Let Σ1 , . . . , Σt be a collection of disjoint two-element alphabets. For any string z
over {0, 1} and Σi , let z ↑ Σi denote the string z lifted to Σi , i.e., we replace the symbols {0, 1} in
z bijectively by Σi . Then for given x1 , . . . , xt , y1 , . . . , yt ∈ {0, 1}∗ we construct
crx (x1 , . . . , xt ) := x1 ↑ Σ1  x2 ↑ Σ2  . . .  xt ↑ Σt ,
cry (y1 , . . . , yt ) := yt ↑ Σt  yt−1 ↑ Σt−1  . . .  y1 ↑ Σ1 .
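A possible concrete realization of the lifting and of crx , cry in code (ours; a symbol of Σi is represented as the pair (i, bit), which keeps the alphabets disjoint) also lets one check the Crossing Alphabets Lemma on small inputs:

```python
def lcs_length(x, y):
    prev = [0] * (len(y) + 1)
    for a in x:
        cur = [0] * (len(y) + 1)
        for j, b in enumerate(y, 1):
            cur[j] = prev[j - 1] + 1 if a == b else max(prev[j], cur[j - 1])
        prev = cur
    return prev[len(y)]

def lift(z, i):
    # z lifted to the two-element alphabet Sigma_i = {(i, '0'), (i, '1')}
    return [(i, c) for c in z]

def cr_x(xs):
    return [s for i, z in enumerate(xs) for s in lift(z, i)]

def cr_y(ys):
    return [s for i, z in reversed(list(enumerate(ys))) for s in lift(z, i)]

xs = ["0101", "0011", "1100"]
ys = ["0110", "0001", "1010"]
x, y = cr_x(xs), cr_y(ys)
# Crossing Alphabets Lemma: an LCS can only use symbols of a single Sigma_i
assert lcs_length(x, y) == max(lcs_length(a, b) for a, b in zip(xs, ys))
print(lcs_length(x, y))
```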
We adapt the construction from Lemma 9.1 using the following trick that realizes an “OR” of
t ≤ O(Σ) instances, without significantly increasing the parameters d and M .
Consider any n ≥ 1 and target values p = nαp for p ∈ P. Let A = {a1 , . . . , aA }, B =
{b1 , . . . , bB } ⊆ {0, 1}D be a given OV instance with D = no(1) and where we set A := ⌊d/(min{L, √d} · D)⌋ and B := ⌊min{L, √d}/D⌋. Note that UOVH implies that solving such OV instances takes time (AB)1−o(1) = nαd −o(1) = d1−o(1) . Since clearly A ≥ B, we can partition A into t := ⌈A/B⌉
groups A1 , . . . , At of size B (filling up the last group with all-ones vectors). Using the relation
d ≤ L2 |Σ| (Lemma 6.3(ii)), we obtain t ≤ O(d/L2 + 1) ≤ O(Σ).
For each i = 1, . . . , t we construct strings xi and yi for the sets Ai and B using Lemma 9.1.
Finally, we set x := crx (x1 , . . . , xt ) and y := cry (y1 , . . . , yt ). By the Crossing Alphabets Lemma
and Lemma 9.1.(i) from L(x, y) we can infer whether A, B has an orthogonal pair of vectors. We
claim that (n, x, y) is an instance of LCS≤ (α). This shows that any algorithm solving LCS≤ (α) in
time O(d1−ε ) implies an algorithm for our OV instances with running time O(d1−ε ), contradicting
UOVH. Hence, in the current case αδ = αm , any algorithm for LCS≤ (α) takes time d1−o(1) , proving
part of Theorem 4.6.
It remains to show the claim that (n, x, y) is an instance of LCS≤ (α). This is similar to the
proof for the case αΣ = 0, additionally using the Crossing Alphabets Lemma. Specifically, we obtain m(x, y) ≤ n(x, y) = |x| = Σ_{i=1}^{t} |xi | ≤ O(t · BD) = O(AD) = O(max{d/L, √d}), which is at most O(m) ≤ O(n) using the parameter relations d ≤ Lm ≤ m2 (Lemma 6.3.(i)). Similarly, we obtain δ(x, y) ≤ ∆(x, y) ≤ n(x, y) ≤ O(m) = O(δ) ≤ O(∆), where the equality holds in the current case αm = αδ . For L we obtain L(x, y) = maxi L(xi , yi ) ≤ |yi | = O(BD) ≤ O(L). Since t ≤ O(Σ) we have |Σ(x, y)| ≤ O(Σ). Using the parameter relation d ≤ M we have d(x, y) ≤ M (x, y) = Σ_{i=1}^{t} M (xi , yi ) ≤ t · O((BD)2 ) = O(AD · BD) = O(d) ≤ O(M ). This proves that (n, x, y) belongs to LCS≤ (α).
9.2
Large LCS
Now assume αL = αm , i.e., L = Θ(m). Then the number of deletions in the shorter string might
be arbitrarily small, i.e., any value 0 < αδ ≤ αm is admissible. In this case, the construction of [28]
is no longer applicable. The new 1vs1/2vs1 gadgets that we design for constructing hard strings
for small δ can be seen as one of our main contributions.
9.2.1
Hard Core
The following lemma (which effectively represents an intermediate step in the proof of [28]) yields
the basic method to embed sets of vectors into strings x and y.
Lemma 9.5. Let two sets A = {a1 , . . . , aA } and B = {b1 , . . . , bB } of vectors in {0, 1}D be given.
In time O((A + B)D) we can construct strings x1 , . . . , xA of length ℓx and y1 , . . . , yB of length ℓy
over alphabet {0, 1}, as well as integers ρ1 < ρ0 , such that for all i ∈ [A], j ∈ [B] we have
(i) ℓy ≤ ℓx ≤ O(D),
(ii) L(xi , yj ) = ρ0 if ⟨ai , bj ⟩ = 0,
(iii) L(xi , yj ) = ρ1 if ⟨ai , bj ⟩ ≠ 0, and
(iv) L(xi , yj ) > ℓy /2.
Proof. We can construct strings x′1 , . . . , x′A of length ℓ′x = O(D) and y1′ , . . . , yB′ of length ℓ′y = O(D) and integers ρ′1 < ρ′0 as in [28, Claim III.6] (using the so-called normalized vector gadget) that satisfy L(x′i , yj′ ) = ρ′0 if ⟨ai , bj ⟩ = 0 and L(x′i , yj′ ) = ρ′1 otherwise. To additionally enforce conditions (i) and (iv), we define xi := 1^{ℓ′y} 0^{ℓ′y +1} x′i and yj := 0^{ℓ′y +1} yj′ . Since L(xi , yj ) = L(x′i , yj′ ) + ℓ′y + 1 by
Lemmas 7.6 and 7.1, we thus obtain conditions (ii) and (iii) for ρ0 := ρ′0 +ℓ′y +1 and ρ1 := ρ′1 +ℓ′y +1.
Since by definition ℓy = 2ℓ′y + 1 holds, the first condition follows directly and the trivial bound
L(xi , yj ) ≥ ℓ′y + 1 > ℓy /2 shows that the last condition is fulfilled.
1vs1/2vs1 gadget. The aim of the following construction is to embed given strings y1 , . . . , yQ
into a string y and strings x1 , . . . , xP into x, where P = Θ(Q), such that in an LCS each yj is
either aligned with a single string xi or with several strings xi , xi+1 , . . . , xi′ . In the first case,
|yj | − L(xi , yj ) characters of yj are not contained in an LCS of x and y, while in the second case
yj can be completely aligned. By choosing P = 2Q − N for an arbitrary 1 ≤ N ≤ Q, it will turn
out that the LCS aligns N strings yj with a single partner xi , and the remaining Q − N strings yj
with two strings xi , xi+1 each. Thus, only N strings yj are not completely aligned.
To formalize this intuition, let P ≥ Q. We call a set Λ = {(i1 , j1 ), . . . , (ik , jk )} with 0 ≤ k ≤ Q
and 1 ≤ i1 < i2 < · · · < ik ≤ P and 1 ≤ j1 ≤ j2 ≤ · · · ≤ jk ≤ Q a (partial) multi-alignment. Let
Λ(j) = {i | (i, j) ∈ Λ}. We say that every j ∈ [Q] with |Λ(j)| = k is k-aligned. We will also refer to
a 1-aligned j ∈ [Q] as being uniquely aligned to i, where Λ(j) = {i}. Every j ∈ [Q] with Λ(j) = ∅
is called unaligned. Note that each i ∈ [P ] occurs in at most one (i, j) ∈ Λ. We denote the set of
multi-alignments as Λmulti
P,Q .
We will also need the following specialization of multi-alignments. We call a multi-alignment
Λ ∈ Λmulti P,Q a (1,2)-alignment, if each j is either 1-aligned or 2-aligned. Let Λ1,2 P,Q denote the set of all (1,2)-alignments.
Given strings x1 , . . . , xP of length ℓx and y1 , . . . , yQ of length ℓy , we define the value v(Λ) of a multi-alignment Λ ∈ Λmulti P,Q as v(Λ) = Σ_{j=1}^{Q} vj , where
vj := 0 if j is unaligned,
vj := L(xi , yj ) if j is uniquely aligned to i,
vj := ℓy if j is k-aligned for k ≥ 2.
Lemma 9.6. Given strings x1 , . . . , xP of length ℓx and y1 , . . . , yQ of length ℓy , construct
x := G(x1 ) G(x2 ) . . . G(xP ),
y := G(y1 ) G(y2 ) . . . G(yQ ),
where G(w) := 0γ1 1γ2 (01)γ3 w 1γ3 with γ3 := ℓx + ℓy , γ2 := 8γ3 and γ1 := 6γ2 . Then we have
max_{Λ∈Λ1,2 P,Q} v(Λ) ≤ L(x, y) − Q(γ1 + γ2 + 3γ3 ) ≤ max_{Λ∈Λmulti P,Q} v(Λ).    (2)
Proof. For the first inequality of (2), let Λ ∈ Λ1,2 P,Q . For every yj , we define zj := ⊙_{i∈Λ(j)} G(xi ), the concatenation of the G(xi ) with i ∈ Λ(j).
Consider a 1-aligned j and let i ∈ [P ] be the index j is uniquely aligned to. We have that
zj = G(xi ) = 0γ1 1γ2 (01)γ3 xi 1γ3 and hence by Lemma 7.1, we obtain L(zj , G(yj )) = γ1 + γ2 + 3γ3 +
L(xi , yj ) = γ1 + γ2 + 3γ3 + vj . Likewise, consider a 2-aligned j and let i, i′ ∈ [P ] be such that
Λ(j) = {i, i′ }. Then zj = G(xi )G(xi′ ). We compute
L(zj , G(yj )) = γ1 + γ2 + 3γ3 + L(xi 1γ3 0γ1 1γ2 (01)γ3 xi′ , yj )
≥ γ1 + γ2 + 3γ3 + L((01)γ3 , yj )
= γ1 + γ2 + 3γ3 + ℓy = γ1 + γ2 + 3γ3 + vj ,
where the first line follows from Lemma 7.1, the second line from monotonicity and the third line
from γ3 ≥ ℓy = |yj |. Observe that z1 z2 . . . zQ is a subsequence of x. We conclude that
L(x, y) ≥ Σ_{j=1}^{Q} L(zj , G(yj )) = Q(γ1 + γ2 + 3γ3 ) + Σ_{j=1}^{Q} vj .
It remains to prove the second inequality of (2). Write x = z1 z2 . . . zQ such that L(x, y) = Σ_{j=1}^{Q} L(zj , G(yj )). We define a multi-alignment Λ by letting (i, j) ∈ Λ if and only if zj contains
strictly more than half of the 0γ1 -block of G(xi ). Note that the thus defined set satisfies the
definition of a multi-alignment, since no two zj ’s can contain more than half of G(xi )’s 0γ1 -block and
if (i, j), (i′ , j ′ ) ∈ Λ, then j < j ′ implies i < i′ . It remains to show that L(zj , G(yj )) ≤ γ1 +γ2 +3γ3 +vj
for all j to prove the claim.
In what follows, we use the shorthand H(w) := 1γ2 (01)γ3 w1γ3 . Note that G(w) = 0γ1 H(w).
Consider an unaligned j ∈ [Q]. By definition, zj is a subsequence of 0γ1 /2 H(xi )0γ1 /2 for some
i ∈ [P ]. We can thus bound (using Lemma 7.1)
L(zj , G(yj )) ≤ L(0γ1 /2 H(xi )0γ1 /2 , 0γ1 H(yj )) =
γ1
+ L(H(xi )0γ1 /2 , 0γ1 /2 H(yj )).
2
By Lemma 7.6 with ℓ := γ1 /2 ≥ 2γ2 + 6γ3 + ℓx + ℓy = |H(xi )| + |H(yj )| ≥ #0 (H(xi )) + |H(yj )|,
L(H(xi )0γ1 /2 , 0γ1 /2 H(yj )) = γ1 /2 + L(0#0 (H(xi )) , H(yj )) ≤ γ1 /2 + #0 (H(yj )) ≤ γ1 /2 + γ3 + ℓy .
Hence, in total we have L(zj , G(yj )) ≤ γ1 + γ3 + ℓy ≤ γ1 + γ2 + 3γ3 = γ1 + γ2 + 3γ3 + vj , as desired.
Consider a j ∈ [Q] that is uniquely aligned (under Λ) to some i. Then zj is a subsequence of
0γ1 /2 H(xi−1 )0γ1 H(xi )0γ1 /2 . Analogously to above we compute
L(zj , G(yj )) ≤ γ1 /2 + L(H(xi−1 )0γ1 H(xi )0γ1 /2 , 0γ1 /2 H(yj ))
= γ1 + L(0#0 (H(xi−1 ))+γ1 H(xi )0γ1 /2 , H(yj ))
= γ1 + L(0#0 (H(xi−1 ))+γ1 1γ2 (01)γ3 xi 1γ3 0γ1 /2 , 1γ2 (01)γ3 yj 1γ3 ).
Using Lemma 7.6 with symbol 0 replaced by 1 yields, since ℓ := γ2 ≥ 3γ3 + ℓy = |(01)γ3 yj 1γ3 | and
#1 (0#0 (H(xi−1 ))+γ1 ) = 0,
L(zj , G(yj )) ≤ γ1 + γ2 + L((01)γ3 xi 1γ3 0γ1 /2 , (01)γ3 yj 1γ3 ) = γ1 + γ2 + 2γ3 + L(xi 1γ3 0γ1 /2 , yj 1γ3 ).
Similarly, using Lemma 7.6 with symbol 0 replaced by 1 on the reversed strings yields, since
ℓ := γ3 ≥ ℓy = |yj | and #1 (0γ1 /2 ) = 0,
L(xi 1γ3 0γ1 /2 , yj 1γ3 ) = γ3 + L(xi , yj ).
Hence, we obtain the desired L(zj , G(yj )) ≤ γ1 + γ2 + 3γ3 + L(xi , yj ) = γ1 + γ2 + 3γ3 + vj .
It remains to consider j ∈ [Q] that is k-aligned for k ≥ 2. In this case, the claim follows from
the trivial bound L(zj , G(yj )) ≤ |G(yj )| = γ1 + γ2 + 3γ3 + vj .
Thus z1 , . . . , zQ defines a multi-alignment Λ ∈ Λmulti P,Q with
L(x, y) = Σ_{j=1}^{Q} L(zj , G(yj )) ≤ Q(γ1 + γ2 + 3γ3 ) + v(Λ),
proving the second inequality of (2).
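To see the normalization Q(γ1 + γ2 + 3γ3 ) in action, the following sketch (ours; the inner strings are arbitrary toy values with ℓx = ℓy = 2) builds x and y from the gadget G and checks the lower bound of (2) against one explicit (1,2)-alignment, namely the one that aligns y1 , . . . , yN uniquely and pairs every later yj with two consecutive xi .

```python
def lcs_length(x, y):
    prev = [0] * (len(y) + 1)
    for a in x:
        cur = [0] * (len(y) + 1)
        for j, b in enumerate(y, 1):
            cur[j] = prev[j - 1] + 1 if a == b else max(prev[j], cur[j - 1])
        prev = cur
    return prev[len(y)]

def G(w, g1, g2, g3):
    return "0" * g1 + "1" * g2 + "01" * g3 + w + "1" * g3

xs = ["01", "10", "11", "00", "01"]            # P = 5 inner strings
ys = ["01", "10", "11"]                        # Q = 3, so N = 2Q - P = 1
lx = ly = 2
g3 = lx + ly; g2 = 8 * g3; g1 = 6 * g2
P, Q, N = len(xs), len(ys), 2 * len(ys) - len(xs)

x = "".join(G(w, g1, g2, g3) for w in xs)
y = "".join(G(w, g1, g2, g3) for w in ys)

# value of the (1,2)-alignment that aligns y_1..y_N uniquely to x_1..x_N
v = sum(lcs_length(xs[j], ys[j]) for j in range(N)) + (Q - N) * ly
lhs = lcs_length(x, y) - Q * (g1 + g2 + 3 * g3)
print(v, lhs)
assert v <= lhs                                # lower bound of inequality (2)
```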
We can now show how to embed an OV instance A = {a1 , . . . , aA }, B = {b1 , . . . , bB } ⊆ {0, 1}D
with A ≤ B into strings x and y of length O(B · D) whose LCS can be obtained by deleting at most
O(A · D) symbols from y. For this we will without loss of generality assume that A divides B by
possibly duplicating some arbitrary element of B up to A − 1 times without affecting the solution
of the instance.
The key idea is that for any Q and P = 2Q − N with N ∈ {0, . . . , Q}, Λ1,2 P,Q is non-empty and each Λ ∈ Λ1,2 P,Q has exactly N uniquely aligned j ∈ [Q] and exactly Q − N 2-aligned j ∈ [Q]. At the same time each Λ ∈ Λmulti P,Q leaves at least N indices j ∈ [Q] either unaligned or uniquely aligned.
Lemma 9.7. Let a1 , . . . , aA , b1 , . . . bB ⊆ {0, 1}D be given with A | B. Construct the corresponding
strings x1 , . . . , xA of length ℓx , y1 , . . . , yB of length ℓy ≤ ℓx ≤ O(D), and integers ρ0 , ρ1 as in
Lemma 9.5 and define
x̃ := (x̃1 , . . . , x̃P ) = (x1 , . . . , xA , x1 , . . . , xA , . . . , x1 , . . . , xA )   (2·(B/A)+3 groups of size A),
ỹ := (ỹ1 , . . . , ỹQ ) = (y1 , . . . , y1 , y1 , . . . , yB , y1 , . . . , y1 )   (A copies of y1 before and after y1 , . . . , yB ),
where P := 2B + 3A and Q := B + 2A. Then the instance x := G(x̃1 ) . . . G(x̃P ), y := G(ỹ1 ) . . . G(ỹQ )
of Lemma 9.6 (with the corresponding choice of γ1 , γ2 and γ3 ) satisfies the following properties:
(i) For every i ∈ [A], j ∈ [B], there is a (1,2)-alignment Λ ∈ Λ1,2
P,Q such that some ℓ ∈ [Q] is
uniquely aligned to some k ∈ [P ] with x̃k = xi and ỹℓ = yj .
(ii) We have L(x, y) ≥ Q(γ1 + γ2 + 3γ3 ) + (A − 1)ρ1 + ρ0 + (Q − A)ℓy if and only if there are
i ∈ [A], j ∈ [B] with ⟨ai , bj ⟩ = 0.
(iii) We have |y| ≤ |x| ≤ O(B · D) and δ(x, y) = O(A · D).
Proof. For (i), we let j ∈ [B] and note that yj = ỹℓ for ℓ := A + j. We will show that for every
λ ∈ {0, . . . , A − 1}, there is a (1,2)-alignment Λ with (k, ℓ) ∈ Λ1,2
P,Q where k := 2(A + j) − 1 − λ. By
the cyclic structure of x̃, (x̃k )0≤λ<A cycles through all values x1 , . . . , xA . Hence, for some choice
of λ the desired x̃k = xi follows, yielding the claim.
To see that for any λ ∈ {0, . . . , A − 1}, some Λ ∈ Λ1,2
P,Q with (k, ℓ) ∈ Λ exists, observe that
there are ℓ − 1 = A + j − 1 predecessors of ỹℓ and k − 1 = 2(A + j − 1) − λ = 2(ℓ − 1) − λ
predecessors of x̃k . Hence there is a (1,2)-alignment Λ1 ∈ Λ1,2
k−1,ℓ−1 (leaving λ indices j ∈ [Q]
uniquely aligned). Similarly, observe that there are Q − ℓ = B + A − j successors of ỹℓ and
P − k = 2B + A − 2j + λ + 1 = 2(Q − ℓ) − (A − λ − 1) successors of x̃k , hence there is a (1,2)-alignment Λ2 ∈ Λ1,2
P −k,Q−ℓ (which leaves A − (λ + 1) indices j uniquely aligned). By canonically
composing Λ1 , (k, ℓ) and Λ2 we can thus obtain Λ ∈ Λ1,2
P,Q with (k, ℓ) ∈ Λ.
For (ii), assume that there are i ∈ [A], j ∈ [B] satisfying ⟨ai , bj ⟩ = 0. By (i), there is some
Λ ∈ Λ1,2
P,Q where some ℓ ∈ [Q] is uniquely aligned to some k ∈ [P ] such that x̃k = xi and ỹℓ = yj .
To apply Lemma 9.6, observe that Λ has Q−A 2-aligned j ∈ [Q], which contribute value ℓy to v(Λ),
and A uniquely aligned j ∈ [Q], in particular, ℓ is uniquely aligned to k. Since any x̃i corresponds to
some xi′ , every ỹj corresponds to some yj ′ and L(xi′ , yj ′ ) ∈ {ρ0 , ρ1 }, we conclude that ℓ contributes
ρ0 to v(Λ) and the other A − 1 uniquely aligned j contribute at least ρ1 . Hence by the lower bound
in Lemma 9.6, we obtain L(x, y) ≥ Q(γ1 +γ2 +3γ3 )+v(Λ), where v(Λ) ≥ (A−1)ρ1 +ρ0 +(Q−A)ℓy .
Assume now that no i ∈ [A], j ∈ [B] satisfy ⟨ai , bj ⟩ = 0, and let Λ ∈ Λmulti P,Q . Then any j ∈ [Q]
uniquely aligned to some i ∈ [P ] contributes L(x̃i , ỹj ) = ρ1 to v(Λ). Let λ be the number of
j ∈ [Q] that are k-aligned for any k ≥ 2, each contributing ℓy to v(Λ). Then there are at most
min{P − 2λ, Q − λ} uniquely aligned j ∈ [Q] (since every k-aligned j blocks at least two i ∈ [P ]
for other alignments), and the remaining j ∈ [Q] are unaligned, with no contribution to v(Λ).
Hence v(Λ) ≤ λℓy + min{P − 2λ, Q − λ} · ρ1 = min{P ρ1 + (ℓy − 2ρ1 )λ, Qρ1 + (ℓy − ρ1 )λ}. Note
that ℓy /2 < ρ1 ≤ ℓy (by Lemma 9.5(iv)), hence this minimum of linear functions with leading
coefficients ℓy − 2ρ1 < 0 and ℓy − ρ1 ≥ 0 is maximized when both have the same value, i.e., when
λ = P − Q = Q − A. Thus, v(Λ) ≤ (Q − A)ℓy + Aρ1 < (Q − A)ℓy + (A − 1)ρ1 + ρ0 . Thus by the
upper bound of Lemma 9.6 we conclude that L(x, y) < Q(γ1 +γ2 +3γ3 )+(Q−A)ℓy +(A−1)ρ1 +ρ0 .
For (iii), since P ≥ Q and ℓx ≥ ℓy we have |x| ≥ |y|, and by P ≤ O(A) and |G(x̃i )| ≤ O(ℓx ) ≤
O(D) we obtain |x| ≤ O(AD). Note that for any (1,2)-alignment Λ ∈ Λ1,2
P,Q , we have
v(Λ) = Q · ℓy − Σ_{j uniquely aligned to i} (ℓy − L(xi , yj )) = Q · ℓy − O(A · D),
since by P = 2Q − A the number of uniquely aligned indices j in Λ equals A, and ℓy = O(D).
Hence by Lemma 9.6, L(x, y) ≥ Q(γ1 + γ2 + 3γ3 ) + Qℓy − O(A · D) = |y| − O(A · D), implying
δ(x, y) = |y| − L(x, y) ≤ O(A · D).
9.2.2
Constant Alphabet
First assume αΣ = 0 and thus |Σ| = O(1). Consider any n ≥ 1 and target values p = nαp for
p ∈ P. We write ⌊x⌋2 for the largest power of 2 less than or equal to x. Let A = {a1 , . . . , aA },
B = {b1 , . . . , bB } ⊆ {0, 1}D be a given OV instance with D = no(1) and where we set
A := ⌊(1/D) · min{δ, d/ min{m, ∆}}⌋2   and   B := ⌊(1/D) · min{m, ∆}⌋2 .
By αm , α∆ ≤ 1 and (LB) we obtain A ≥ nmin{αL ,αd −1}−o(1) = nΩ(1) and B = nmin{αm ,α∆ }−o(1) =
nΩ(1) . Also note that UOVH implies that solving such OV instances takes time (AB)1−o(1) =
min{d, δm, δ∆}1−o(1) , which is the desired bound. We claim that A ≤ B, implying A | B. Indeed,
if δ ≤ d/ min{m, ∆} this follows from the simple parameter relations δ ≤ m and δ ≤ ∆. Otherwise,
if δ > d/ min{m, ∆}, then in particular δ∆ > d, implying d < ∆2 . Together with the parameter
relations d ≤ Lm ≤ m2 we indeed obtain d/ min{m, ∆} ≤ min{m, ∆}.
Thus, we may construct strings x, y as in Lemma 9.7. We finish the construction by invoking
the Dominant Pair Reduction (Lemma 7.8) to obtain strings x′ := 0k 1k y1ℓ 0k 1k x and y ′ := 1ℓ 0k 1k y
with k := 2|y| + |x| + 1 and ℓ := Θ(A · D) with sufficiently large hidden constant, so that ℓ > δ(x, y).
Then from the LCS length L(x′ , y ′ ) we can infer whether A, B has an orthogonal pair of vectors by
L(x′ , y ′ ) = L(x, y)+ℓ+2k and Lemma 9.7(ii). Moreover, this reduction runs in time O(|x′ |+|y ′ |) =
O(|x| + |y|) = O(BD) ≤ O(min{d, δm, δ∆}1−ε ) for sufficiently small ε > 0 (since αδ > 0 and
αd > 1 ≥ αm , α∆ by (LB) and Table 2). We claim that (n, x′ , y ′ ) is an instance of LCS≤ (α). This
shows that any algorithm solving LCS≤ (α) in time O(min{d, δm, δ∆}1−ε ) implies an algorithm for
our OV instances with running time O(min{d, δm, δ∆}1−ε ), contradicting UOVH. Hence, in the
current case αL = αm and αΣ = 0, any algorithm for LCS≤ (α) takes time min{d, δm, δ∆}1−o(1) ,
proving part of Theorem 4.6.
It remains to show the claim that (n, x′ , y ′ ) is an instance of LCS≤ (α). From Lemmas 7.8
and 9.7(iii) we obtain L(x′ , y ′ ) = ℓ + 2k + L(x, y) = ℓ + 2k + |y| − δ(x, y) ≥ |y ′ | − O(AD), and
thus δ(x′ , y ′ ) ≤ O(AD) ≤ O(δ). Using the parameter relations L ≤ m ≤ n, Lemma 9.7(iii), and
the definition of B, we have L(x′ , y ′ ) ≤ m(x′ , y ′ ) ≤ n(x′ , y ′ ) = |x′ | ≤ O(BD) = O(min{m, ∆}),
which together with the relation m ≤ n and the assumption αL = αm shows that p(x′ , y ′ ) ≤ O(p) =
O(nαp ) for p ∈ {L, m, n}. Similarly, we obtain ∆(x′ , y ′ ) ≤ n(x′ , y ′ ) ≤ O(min{m, ∆}) ≤ O(∆). Since
x′ , y ′ use the binary alphabet {0, 1}, we have |Σ(x′ , y ′ )| = 2 ≤ O(nαΣ ). For the number of matching
pairs we have M (x′ , y ′ ) ≤ n(x′ , y ′ )2 = O((BD)2 ) = O(L2 ). Since we are in the case αΣ = 0, from
the parameter relation M ≥ L2 /|Σ| (Lemma 6.6(i)) we obtain L2 ≤ O(M ) and thus also M (x′ , y ′ )
is sufficiently small. Finally, we use Lemma 7.8 to bound d(x′ , y ′ ) ≤ O(ℓ · |y|) ≤ O(AD · BD), which
by definition of A, B is O(d). This proves that (n, x′ , y ′ ) belongs to LCS≤ (α).
We remark that our proof also yields the following, which we will use for small alphabets.
Lemma 9.8. Let α be a parameter setting satisfying Table 2 with αΣ = 0 and αL = αm . There
is a constant γ ≥ 1 such that any algorithm for LCSγ≤ (α, {0, 1}) takes time min{d, δm, δ∆}1−o(1) ,
unless OVH fails. This holds even restricted to instances (n, x, y) of LCSγ≤ (α, {0, 1}) with |y| ≤
|x| ≤ γ · min{nαm , nα∆ }.
9.2.3
Superconstant Alphabet
The crucial step in extending our construction to larger alphabets is to adapt the 1vs1/2vs1 gadget
such that the strings use each symbol in the alphabet Σ roughly evenly, thus reducing the number
of matching pairs by a factor |Σ|.
Recall that given a 2-element alphabet Σ′ and a string z over {0, 1}, we let z ↑ Σ′ denote the
string z lifted to alphabet Σ′ by bijectively replacing {0, 1} with Σ′ .
Lemma 9.9. Let P = 2B + 3A and Q = B + 2A for some A | B. Given strings x1 , . . . , xP of
length ℓx and y1 , . . . , yQ of length ℓy , we define, as in Lemma 9.6, G(w) := 0γ1 1γ2 (01)γ3 w 1γ3
with γ3 := ℓx + ℓy , γ2 := 8γ3 and γ1 := 6γ2 . Let Σ1 , . . . , Σt be disjoint alphabets of size 2 with
Q/t ≥ A/2 + 1. We define
x := H(x1 ) H(x2 ) . . . H(xP ),
y := G(y1 ) ↑ Σf (1) G(y2 ) ↑ Σf (2) . . . G(yQ ) ↑ Σf (Q) ,
where f (j) = ⌈(j/Q) · t⌉ and
H(xi ) := G(xi ) ↑ Σk+1 G(xi ) ↑ Σk   if ⋃_{j=⌈i/2⌉}^{⌊(i+A)/2⌋} {f (j)} = {k, k + 1},
H(xi ) := G(xi ) ↑ Σk   if ⋃_{j=⌈i/2⌉}^{⌊(i+A)/2⌋} {f (j)} = {k}.
Then we have
max_{Λ∈Λ1,2 P,Q} v(Λ) ≤ L(x, y) − Q(γ1 + γ2 + 3γ3 ) ≤ max_{Λ∈Λmulti P,Q} v(Λ).    (3)
Proof. Note that H(·) is well-defined, since f (·) maps {1, . . . , Q} to constant-valued intervals of length at least Q/t − 1 ≥ A/2, as f (j) = k if and only if j ∈ (Qk/t − Q/t, Qk/t], containing at least
Q/t − 1 integers. Hence for every i, the ≤ A/2 values f (⌈i/2⌉), . . . , f (⌊(i + A)/2⌋) can touch at
most 2 different constant-valued intervals.
The proof of (3) is based on the proof of Lemma 9.6 (the analogous lemma for alphabet Σ = {0, 1}). For the first inequality of (3), let Λ ∈ Λ1,2 P,Q and define for every j the substring zj′ := ⊙_{i∈Λ(j)} H(xi ).
since there are at most 2Q − P = A uniquely aligned j. In other words, for any i ∈ Λ(j) we have
j ∈ {⌈i/2⌉, . . . , ⌊(i + A)/2⌋}. Thus, by definition each H(xi ) for i ∈ Λ(j) contains G(xi ) ↑ Σf (j) as
a substring and hence zj′ contains i∈Λ(j) G(xi ) ↑ Σf (j) as a subsequence. This proves
L zj′ , G(yj ) ↑ Σf (j) ≥ L
i∈Λ(j)
G(xi ) ↑ Σf (j) , G(yj ) ↑ Σf (j) = L
i∈Λ(j)
G(xi ), G(yj ) ,
which reduces the proof to the case of Σ = {0, 1} – note that the last term is equal to L(zj , G(yj ))
in the proof of the same inequality of Lemma 9.6 and thus the remainder follows verbatim.
It remains to show the second inequality of (3). Essentially as in the proof of Lemma 9.6, we write x = z1′ z2′ . . . zQ′ with L(x, y) = Σ_{j=1}^{Q} L(zj′ , G(yj ) ↑ Σf (j) ). For every zj′ , we obtain a string
zj by deleting all symbols not contained in Σf (j) and then lifting it to the alphabet {0, 1}. We
conclude that L(zj′ , G(yj ) ↑ Σf (j) ) = L(zj , G(yj )). We claim that z := z1 z2 . . . zQ is a subsequence
of x{0,1} := G(x1 ) . . . G(xP ) (which is equal to the string x that we constructed in the case of
Σ = {0, 1}). Indeed, if H(xi ) is of the form wk+1 wk for some k with wℓ = G(xi ) ↑ Σℓ , then symbols
of at most one of wk and wk+1 are contained in z. To see this, note that if wk is not deleted then
at least one of its symbols is contained in some zj′ with f (j) = k, but then no symbol in wk+1 can
be contained in zj′ ′ with f (j ′ ) = k + 1, since this would mean j ′ > j, so wk+1 is deleted. Thus,
L(x, y) =
Q
X
j=1
L
zj′ , G(yj ) ↑ Σf (j)
=
Q
X
j=1
L zj , G(yj ) ≤ L(x{0,1} , y{0,1} ),
where y{0,1} := G(y1 ) . . . G(yQ ) is the string y that we constructed in the case of Σ = {0, 1}. Hence,
the second inequality of (3) follows from the proof of Lemma 9.6.
By the same choice of vectors as in Lemma 9.7, we can embed orthogonal vectors instances.
Lemma 9.10. Let a1 , . . . , aA , b1 , . . . bB ⊆ {0, 1}D be given with A | B. Construct the corresponding
strings x1 , . . . , xA of length ℓx , y1 , . . . , yB of length ℓy ≤ ℓx ≤ O(D) and integers ρ0 , ρ1 as in
Lemma 9.5 and define
x̃ := (x̃1 , . . . , x̃P ) = (x1 , . . . , xA , x1 , . . . , xA , . . . , x1 , . . . , xA )   (2·(B/A)+3 groups of size A),
ỹ := (ỹ1 , . . . , ỹQ ) = (y1 , . . . , y1 , y1 , . . . , yB , y1 , . . . , y1 )   (A copies of y1 before and after y1 , . . . , yB ),
where P := 2B+3A and Q := B+2A. For disjoint alphabets Σ1 , . . . , Σt of size 2 with Q/t ≥ A/2+1,
we construct the instance x := H(x̃1 ) . . . H(x̃P ), y := G(ỹ1 ) ↑ Σf (1) . . . G(ỹQ ) ↑ Σf (Q) of Lemma 9.9 (with the corresponding
choice of γ1 , γ2 and γ3 ). This satisfies the following properties:
(i) We have that L(x, y) ≥ Q(γ1 + γ2 + 3γ3 ) + (A − 1)ρ1 + ρ0 + (Q − A)ℓy if and only if there are
i ∈ [A], j ∈ [B] with ⟨ai , bj ⟩ = 0.
(ii) We have |y| ≤ |x| ≤ O(B · D) and δ(x, y) = O(A · D).
Proof. The lemma and its proof are a slight adaptation of Lemma 9.7: For (i), since Lemma 9.9
proves (3) which is identical to (2), we can follow the proof of Lemma 9.7(i) and (ii) verbatim
(since we have chosen x̃ and ỹ as in this lemma). For (ii), the bounds |y| ≤ |x| ≤ O(B · D) and
δ(x, y) = O(A · D) follow exactly as in Lemma 9.6 (note that only |x| has increased by at most a
factor of 2, so that |x| ≤ O(B · D) still holds by the trivial bound).
We can now finish the proof of Theorem 4.6 for the case of αL = αm and αΣ > 0. Consider any
n ≥ 1 and target values p = nαp for p ∈ P. Let A = {a1 , . . . , aA }, B = {b1 , . . . , bB } ⊆ {0, 1}D be a
given OV instance with D = no(1) and where we set, as in the case αΣ = 0,
A := ⌊(1/D) · min{δ, d/ min{m, ∆}}⌋2   and   B := ⌊(1/D) · min{m, ∆}⌋2 .
As before, we have A | B, so we may construct strings x, y as in Lemma 9.10, where we set
t := min{⌊Q/(A/2 + 1)⌋, |Σ|} = Θ(min{B/A, |Σ|}). We finish the construction by invoking the
Dominant Pair Reduction (Lemma 7.8) to obtain strings x′ := y2ℓ x and y ′ := 2ℓ y, where 2 is a
symbol not appearing in x, y and we set ℓ := Θ(A · D) with sufficiently large hidden constant, so
that ℓ > δ(x, y).
For the remainder of the proof we can follow the case αΣ = 0 almost verbatim. The only exception is the bound on the number of matching pairs. Note that symbol 2 appears O(AD) times in x′
and y ′ . Since in x and y every symbol appears roughly equally often and the total alphabet size is Θ(t), for any symbol σ ≠ 2 we have #σ (x) ≤ O(|x|/t) and #σ (y) ≤ O(|y|/t), implying #σ (x′ ), #σ (y ′ ) ≤ O(BD/t). Hence, M (x′ , y ′ ) ≤ O((AD)2 + t · (BD/t)2 ). Using t = Θ(min{B/A, |Σ|}) and A ≤ B, we obtain M (x′ , y ′ ) ≤ O(max{AD · BD, (BD)2 /|Σ|}) ≤ O(max{d, m2 /|Σ|}). The assumption αL = αm
and the parameter relations M ≥ L2 /|Σ| and M ≥ d now imply M (x′ , y ′ ) ≤ O(M ). This concludes
the proof of Theorem 4.6.
10
Hardness for Small Constant Alphabet
In this section, we show hardness of the parameter settings LCS(α, Σ) for alphabets of constant
size |Σ| ≥ 2, i.e., we prove Theorem 3.4. The general approach, as outlined in Section 4, is to take
the hard instances x, y of LCS≤ (α, {0, 1}) constructed in Section 9 and pad them to instances x′ , y ′
of LCS(α, Σ). Notably, unlike the black-box method of Lemma 4.5 that effectively considered each
parameter separately, we now cannot make extensive use of the Disjoint Alphabets Lemma, as this
would introduce more symbols than admissible. Instead, for small alphabet size such as |Σ| = 2 we
need to pad all parameters simultaneously in a combined construction, taking care of the interplay
of the parameters manually. Additionally, for |Σ| ∈ {2, 3}, more complex parameter relations hold.
Unfortunately, this general approach fails for Σ = {0, 1}, i.e., we cannot always pad hard strings
x, y of LCS≤ (α, {0, 1}) to LCS(α, {0, 1}). Surprisingly, the reason is that by an O(n + δM/n)-time
algorithm (given in Section 11), some parameter settings LCS(α, {0, 1}) are indeed simpler to solve
than LCS≤ (α, {0, 1}) (conditional on SETH). In these cases, we take hard instances (n, x, y) from
LCS≤ (α′ , {0, 1}) for a suitably defined “simpler” parameter setting α′ and pad x, y to instances of
LCS(α, {0, 1}).
As in Section 9, we distinguish between the two cases αδ = αm (i.e., δ = Θ(m) and any
0 < αL ≤ αm is admissible) and αL = αm (i.e., L = Θ(m) and any 0 < αδ < αm is admissible).
10.1
Small LCS
In this section, we assume αδ = αm . It can be checked that this assumption implies α∆ = 1,
i.e., ∆ = Θ(n). Moreover, if |Σ| = 2 then the assumption and the parameter relation M ≥
nd/(80L) ≥ Ω(nd/m) imply δM/n = Ω(d). Thus, the desired running time bound simplifies to
d1−o(1) . Theorem 3.4 in this regime follows from the following statement (and Lemma 4.1).
Lemma 10.1. Let (α, Σ) be a parameter setting satisfying Table 2 with αδ = αm . There is a
constant γ ≥ 1 such that any algorithm for LCSγ (α, Σ) takes time d1−o(1) unless OVH fails.
We prove the above lemma in the remainder of this section. Note that any parameter setting
(α, Σ) satisfying Table 2 gives rise to a parameter setting α satisfying Table 2 with αΣ = 0 (where
the converse does not hold in general). Recall that for any such α, in Lemma 9.2 we constructed hard
instances (n, x, y) of LCSγ≤ (α, {0, 1}) with an additional threshold ρ such that deciding L(x, y) ≥ ρ
decides the corresponding OV instance, yielding hardness of LCSγ≤ (α, {0, 1}). Furthermore, the
constructed instances have the additional guarantees that |x|, |y| ≤ γ · nαL and #1 (y) ≤ γ · nαd −αL
and for any β ≥ 0 we have L(x, 0β y) = L(x, y).
Hence, to prove the above lemma it suffices to show how to compute, given any such instance
(n, x, y) and threshold ρ, an instance x′ , y ′ of LCSγ′ (α, Σ) (for some γ ′ ≥ 1) and an integer ρ′ in
time O(n) such that L(x′ , y ′ ) ≥ ρ′ if and only if L(x, y) ≥ ρ. More precisely, we show how to
compute an integer τ in time O(n) such that L(x′ , y ′ ) ≥ τ + ρ if and only if L(x, y) ≥ ρ.
We will do this successively for alphabet sizes |Σ| = 2, |Σ| = 3, |Σ| = 4, and |Σ| ≥ 5. To this
end, the following basic building block will be instantiated with different parameters. Recall that
in Lemma 7.3, we defined strings a = (01)R+S and b = 0R (01)S with the properties L(a, b) = |b| =
R + 2S and d(a, b) = Θ(R · S).
Lemma 10.2 (Basic building block). Let x, y be given strings. Given α, β, R, S ≥ 0, we set
ℓ := |x| + |y|, and define
x′ := a 1α 0ℓ x = (01)R+S 1α 0ℓ x,
y ′ := b 0β 0ℓ y = 0R (01)S 0β 0ℓ y.
Then we have L(x′ , y ′ ) = L(a, b) + ℓ + L(x, 0β y) = R + 2S + ℓ + L(x, 0β y). If L(x, 0β y) = L(x, y)
then we even have L(x′ , y ′ ) = R + 2S + ℓ + L(x, y).
Proof. Clearly, L(x′ , y ′ ) ≥ L(a, b) + L(0ℓ , 0ℓ ) + L(x, 0β y) = (R + 2S) + ℓ + L(x, 0β y) since L(a, b) =
|b| = R + 2S by Lemma 7.3. To prove a corresponding upper bound, note that we can partition
y ′ = wz such that L(x′ , y ′ ) = L(a1α , w) + L(0ℓ x, z). Consider first the case that z is a subsequence
of y. Then
L(x′ , y ′ ) = L(a1α , w) + L(0ℓ x, z) ≤ L(a1α , y ′ ) + L(0ℓ x, y),
since w, z are subsequences of y ′ , y, respectively. Using L(u, v) ≤ Σσ∈Σ min{#σ (u), #σ (v)} for any strings u, v, we obtain
L(x′ , y ′ ) ≤ (#0 (a1α ) + #1 (y ′ )) + (#0 (y) + #1 (0ℓ x)) = (R + S) + (S + #1 (y)) + #0 (y) + #1 (x) ≤ (R + 2S) + ℓ + L(x, 0β y),
since ℓ ≥ |x| + |y| ≥ #0 (x) + #0 (y) + #1 (y). It remains to consider the case that z is not a
subsequence of y and hence w is a subsequence of b0β+ℓ . By Lemma 7.3(iii), we can without loss
of generality assume that w is a subsequence of b, since L(a1α , b0β+ℓ ) = L(a, b). We write z = z ′ z ′′
such that z ′′ is a subsequence of 0ℓ+β y and maximal with this property. Hence, wz ′ is a subsequence
of b. Using the facts L(u, v) ≤ |v| and L(u, v ′ v ′′ ) ≤ |v ′ | + L(u, v ′′ ), we bound
L(x′ , y ′ ) = L(a1α , w) + L(0ℓ x, z ′ z ′′ ) ≤ |w| + (|z ′ | + L(0ℓ x, z ′′ )).
Since wz ′ is a subsequence of b and z ′′ is a subsequence of 0ℓ+β y, this yields
L(x′ , y ′ ) ≤ |b| + L(0ℓ x, 0ℓ+β y) = (R + 2S) + ℓ + L(x, 0β y),
where we used greedy prefix matching. This finishes the proof.
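A quick numerical check of the lemma (ours; the inner strings x, y and the constants are arbitrary toy values):

```python
def lcs_length(x, y):
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x[i - 1] == y[j - 1] \
                else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(x)][len(y)]

x_in, y_in = "1101", "0110"                    # arbitrary inner strings
alpha, beta, R, S = 3, 2, 2, 3
l = len(x_in) + len(y_in)

a = "01" * (R + S)                             # a = (01)^(R+S)
b = "0" * R + "01" * S                         # b = 0^R (01)^S
xp = a + "1" * alpha + "0" * l + x_in          # x' := a 1^alpha 0^l x
yp = b + "0" * beta + "0" * l + y_in           # y' := b 0^beta 0^l y

lhs = lcs_length(xp, yp)
rhs = R + 2 * S + l + lcs_length(x_in, "0" * beta + y_in)
print(lhs, rhs)
assert lhs == rhs                              # Lemma 10.2
```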
We now instantiate the basic building block to prove Lemma 10.1 for Σ = {0, 1}. Note that in
the remainder we again simply write p for the target value ⌈nαp ⌉ of parameter p ∈ P, while the
parameter value attained by any strings x, y is denoted by p(x, y), as in Section 9. Note that the
additional guarantees for (n, x, y) are satisfied by Lemma 9.2.
Lemma 10.3. Consider a parameter setting (α, {0, 1}) satisfying Table 2 with αδ = αm . Let
(n, x, y) be an instance of LCSγ≤ (α, {0, 1}) with |x|, |y| ≤ γ · L and #1 (y) ≤ γ · d/L satisfying
L(x, 0β′ y) = L(x, y) for any β ′ ≥ 0. We obtain strings x′ , y ′ from Lemma 10.2 (recall that in this
lemma we set ℓ := |x| + |y|), where we choose
R := L,
S := ⌊d/L⌋,
β := m̃ := max{m, 2|x|},
α := ñ := max{n, m̃ + |y|}.
Then, setting κ := ⌊M/n⌋, the strings defined by
x′′ := 1κ x′ = 1κ a 1ñ 0ℓ x = 1κ (01)R+S 1ñ 0ℓ x,
y ′′ := 1κ y ′ = 1κ b 0ℓ+m̃ y = 1κ 0R (01)S 0ℓ+m̃ y.
are an instance of LCSγ′ (α, {0, 1}) (for some constant γ ′ ≥ γ) and can be computed in time O(n),
together with an integer τ such that L(x′′ , y ′′ ) ≥ τ + ρ if and only if L(x, y) ≥ ρ.
Proof. Note that
L(x′′ , y ′′ ) = κ + L(x′ , y ′ ) = κ + (R + 2S) + ℓ + L(x, y),
(4)
where the first equality follows from greedy prefix matching and the second follows from Lemma 10.2.
Thus by setting τ = κ + (R + 2S) + ℓ, we have that L(x′′ , y ′′ ) ≥ τ + ρ if and only if L(x, y) ≥ ρ.
Clearly, x′′ , y ′′ , and τ can be computed in time O(n), and Σ(x′′ , y ′′ ) = {0, 1}.
We first verify that |x|, |y|, ℓ, R, S, |a|, |b|, κ = O(L). By assumption, |x|, |y| = O(L) and thus
ℓ = |x| + |y| = O(L). By the parameter relation d ≤ |Σ| · L2 = 2L2 , we note that d/L = O(L)
and hence by choice of R, S, we have |a|, |b| = Θ(R + S) = Θ(L + d/L) = Θ(L). Furthermore,
the parameter relation M ≤ 2Ln implies κ ≤ M/n ≤ 2L. Since L(x, y) ≤ |x| = O(L), the bound
L(x′′ , y ′′ ) = κ + R + 2S + ℓ + L(x, y) = R + O(L) = Θ(L) follows directly from (4).
Observe that ñ is chosen such that |x′′ | ≥ |y ′′ |. Also, m̃ = Θ(m) and ñ = Θ(n).
Since
L ≤ m ≤ n, we thus have |x′′ | = κ + |a| + ñ + ℓ + |x| = ñ + O(L) = Θ(n) and |y ′′ | = κ + |b| + m̃ + ℓ + |y| = m̃ + O(L) = Θ(m).
Note that by (4), δ(x′′ , y ′′ ) = (m̃ + |y|) − L(x, y) ≥ m̃ − |x| ≥ m/2.
Hence, δ(x′′ , y ′′ ) =
Θ(m) = Θ(δ) (by the assumption αδ = αm ). Moreover, since δ = Θ(m), for some constant
ε > 0 we have ∆ = δ + (n − m) ≥ εm + n − m = n − (1 − ε)m ≥ n − (1 − ε)n = Ω(n) (where
we used the parameter relation m ≤ n). Since also ∆ ≤ n we have ∆ = Θ(n). By the same
argument, using δ(x′′ , y ′′ ) = Θ(m) = Θ(m(x′′ , y ′′ )) and n(x′′ , y ′′ ) = Θ(n) as shown above, we
obtain ∆(x′′ , y ′′ ) = Θ(n(x′′ , y ′′ )) = Θ(n), and thus ∆(x′′ , y ′′ ) = Θ(∆).
For M , observe that #1 (x′′ ) = κ + #1 (a) + ñ + #1 (x) = ñ + O(L) = Θ(n). Moreover,
#1 (y) = O(d/L) (by assumption) and #1 (b) = S = O(d/L) yield #1 (y ′′ ) = κ + #1 (b) + #1 (y) =
Θ(M/n) + O(d/L) (here κ = Θ(M/n) follows from the parameter relation M ≥ n). This yields
#1 (x′′ ) · #1 (y ′′ ) = Θ(M ) + O(dn/L) = Θ(M ) (using the parameter relation M ≥ nd/(5L)). Also
note that #0 (x′′ ) = #0 (a) + ℓ + #0 (x) = O(L) and #0 (y ′′ ) = #0 (b) + ℓ + m̃ + #0 (y) = m̃ + O(L) =
O(m). This yields #0 (x′′ ) · #0 (y ′′ ) = O(Lm) = O(M ) (using the parameter relation M ≥ Lm/4).
Combining these bounds, we obtain M (x′′ , y ′′ ) = #0 (x′′ ) · #0 (y ′′ ) + #1 (x′′ ) · #1 (y ′′ ) = Θ(M ). Note
that the last two parameter relations used here exploited that we have Σ = {0, 1}.
It remains to determine the number of dominant pairs. Since L(x′ , y ′ ) = Θ(L) (as argued above)
and #1 (y ′ ) = O(d/L), Lemma 6.8 yields d(x′ , y ′ ) ≤ 5L(x′ , y ′ ) · #1 (y ′ ) = O(L · d/L) = O(d). For
a corresponding lower bound, from Observation 7.2 and Lemma 7.3 we obtain d(x′ , y ′ ) ≥ d(a, b) ≥
R · S = Ω(d). By Lemma 7.1, the claim now follows from d(x′′ , y ′′ ) = κ + d(x′ , y ′ ) = O(L) + Θ(d) =
Θ(d), where we use κ = O(L) and the parameter relation d ≥ L.
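The following sketch (ours) assembles x′′ and y ′′ for toy stand-ins of the hard strings and hypothetical target parameters, checks the precondition L(x, 0m̃ y) = L(x, y) that the instances of Lemma 9.2 guarantee, and then verifies equation (4).

```python
def lcs_length(x, y):
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x[i - 1] == y[j - 1] \
                else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(x)][len(y)]

# hypothetical stand-ins for the hard strings and the target parameters
x_in, y_in = "111011", "1101"
L_t, d_t, m_t, n_t, M_t = 4, 8, 12, 20, 40

l = len(x_in) + len(y_in)
R, S = L_t, d_t // L_t
m_tilde = max(m_t, 2 * len(x_in))
n_tilde = max(n_t, m_tilde + len(y_in))
kappa = M_t // n_t

# precondition provided by Lemma 9.2: prepending 0-blocks to y keeps L unchanged
assert lcs_length(x_in, "0" * m_tilde + y_in) == lcs_length(x_in, y_in)

xpp = "1" * kappa + "01" * (R + S) + "1" * n_tilde + "0" * l + x_in
ypp = "1" * kappa + "0" * R + "01" * S + "0" * (l + m_tilde) + y_in

lhs = lcs_length(xpp, ypp)
rhs = kappa + R + 2 * S + l + lcs_length(x_in, y_in)
print(lhs, rhs)
assert lhs == rhs                              # equation (4)
```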
The case Σ = {0, 1, 2} is similar to {0, 1}, except that we use the new symbol 2 to pad the
parameter n, we use symbol 1 to pad m, and we have to swap the constructions for x′′ and y ′′ .
Lemma 10.4. Consider a parameter setting (α, {0, 1, 2}) satisfying Table 2 with αδ = αm . Let
(n, x, y) be an instance of LCSγ≤ (α, {0, 1}) with |x|, |y| ≤ γ · L and #1 (y) ≤ γ · d/L satisfying
L(x, 0β′ y) = L(x, y) for any β ′ ≥ 0. We obtain strings x′ , y ′ from Lemma 10.2 (recall that in this
lemma we set ℓ := |x| + |y|), where we choose
S := ⌊d/L⌋,
R := L,
β := 0,
α := m.
Then, setting κ := ⌊M/n⌋ and ñ := max{n, κ + |a| + m + |x|}, the strings defined by
x′′ := 2ñ y ′ = 2ñ b 0ℓ y = 2ñ 0R (01)S 0ℓ y,
y ′′ := 2κ x′ = 2κ a 1m 0ℓ x = 2κ (01)R+S 1m 0ℓ x.
are an instance of LCSγ′ (α, {0, 1, 2}) (for some constant γ ′ ≥ γ) and can be computed in time
O(n), together with an integer τ such that L(x′′ , y ′′ ) ≥ τ + ρ if and only if L(x, y) ≥ ρ.
Proof. Note that unlike the case {0, 1} the string x now appears in y ′′ and y appears in x′′ , so the
constructions are swapped. This is necessary to realize m and M , using the parameter relation
M ≥ md/(80L) that holds for Σ = {0, 1, 2}. Observe that as usual |x′′ | ≥ |y ′′ |.
We first compute
L(x′′ , y ′′ ) = L(2ñ , 2κ ) + L(y ′ , x′ ) = κ + (R + 2S) + ℓ + L(x, y),
(5)
where the first equality follows from the Disjoint Alphabets Lemma and the second equality from
greedy prefix matching and Lemma 10.2. Thus, by setting τ = κ+(R+2S)+ℓ, we have L(x′′ , y ′′ ) ≥
τ + ρ if and only if L(x, y) ≥ ρ. Clearly, x′′ , y ′′ , and τ can be computed in time O(n), and
Σ(x′′ , y ′′ ) = {0, 1, 2}.
As in the case {0, 1}, we have |x|, |y|, ℓ, R, S, |a|, |b|, κ = O(L). Thus, by (5), we have L(x′′ , y ′′ ) =
R+O(L) = Θ(L). Furthermore, note that ñ = Θ(n). Thus, |y ′′ | = κ+|a|+m+ℓ+|x| = m+O(L) =
Θ(m) and |x′′ | = ñ + |b| + ℓ + |y| = ñ + O(L) = Θ(n). Since L(x, y) ≤ |x| = O(L), the bound
L(x′′ , y ′′ ) = R + O(L) = Θ(L) follows directly from (5).
By (5), we see that δ(x′′ , y ′′ ) = |y ′′ | − L(x′′ , y ′′ ) = R + m + (|x| − L(x, y)) ≥ m. Hence
δ(x′′ , y ′′ ) = Θ(m). Thus, ∆(x′′ , y ′′ ) = δ(x′′ , y ′′ ) + (|x′′ | − |y ′′ |) = Θ(n) follows as in the case {0, 1}.
For M , observe that #1 (y ′′ ) = #1 (a) + m + #1 (x) = m + O(L) = Θ(m). Moreover, #1 (y) =
O(d/L) (by assumption) yields #1 (x′′ ) = #1 (b) + #1 (y) = S + O(d/L) = O(d/L). Also note that
#0 (y ′′ ) = #0 (a) + ℓ + #0 (x) = O(L) and #0 (x′′ ) = #0 (b) + ℓ + #0 (y) = O(L). Since furthermore
#2 (y ′′ ) = Θ(M/n) (by the parameter relation M ≥ n) and #2 (x′′ ) = Θ(n), we conclude that
M (x′′ , y ′′ ) = Σσ∈{0,1,2} #σ (x′′ ) · #σ (y ′′ ) = O(dm/L + L2 ) + Θ(M ). By the parameter relations
M ≥ md/(80L) (using that Σ = {0, 1, 2}) and M ≥ L2 /|Σ| = Ω(L2 ), this yields M (x′′ , y ′′ ) = Θ(M ).
For the remaining parameter d, by the disjoint alphabets lemma and Lemma 7.1 we have
d(x′′ , y ′′ ) = d(2ñ , 2κ ) + d(y ′ , x′ ) = κ + d(x′ , y ′ ) (using symmetry d(x, y) = d(y, x)). The remaining
arguments are the same as in the case {0, 1}.
In the case Σ = {0, 1, 2, 3} we can use the new symbol 3 to pad m (instead of using symbol 1,
as in the previous case). Note that now x appears in x′′ and y in y ′′ , as in the case {0, 1}.
Lemma 10.5. Consider a parameter setting (α, {0, 1, 2, 3}) satisfying Table 2 with αδ = αm . Let
(n, x, y) be an instance of LCSγ≤ (α, {0, 1}) with |x|, |y| ≤ γ · L and #1 (y) ≤ γ · d/L satisfying
L(x, 0β′ y) = L(x, y) for any β ′ ≥ 0. We obtain strings x′ , y ′ from Lemma 10.2 (recall that in this
lemma we set ℓ := |x| + |y|), where we choose
S := ⌊d/L⌋,
R := L,
β := 0,
α := 0.
Then, setting κ := ⌊M/n⌋ and ñ := max{n, m + κ + |y|}, the strings defined by
x′′ := 3 2ñ x′ = 3 2ñ a 0ℓ x = 3 2ñ (01)R+S 0ℓ x,
y ′′ := 3m 2κ y ′ = 3m 2κ b 0ℓ y = 3m 2κ 0R (01)S 0ℓ y,
are an instance of LCSγ′ (α, {0, 1, 2, 3}) (for some constant γ ′ ≥ γ) and can be computed in time
O(n), together with an integer τ such that L(x′′ , y ′′ ) ≥ τ + ρ if and only if L(x, y) ≥ ρ.
Proof. We compute
L(x′′ , y ′′ ) = L(3, 3m ) + L(2ñ , 2κ ) + L(x′ , y ′ ) = 1 + κ + (R + 2S) + ℓ + L(x, y),
(6)
where the first equality follows from the Disjoint Alphabets Lemma and the second follows from
greedy prefix matching and Lemma 10.2. Hence, by setting τ = 1 + κ + R + 2S + ℓ, we have
L(x, y) ≥ ρ if and only if L(x′′ , y ′′ ) ≥ τ + ρ. Clearly, x′′ , y ′′ , and τ can be computed in time O(n),
and Σ(x′′ , y ′′ ) = {0, 1, 2, 3}.
As for the cases {0, 1} and {0, 1, 2}, we have |x|, |y|, ℓ, R, S, |a|, |b|, κ = O(L). Note that by
choice of ñ, we have again |x′′ | ≥ |y ′′ | and ñ = Θ(n). Hence, |x′′ | = 1 + ñ + |a| + ℓ + |x| =
ñ + O(L) = Θ(n) and |y ′′ | = m + κ + |b| + ℓ + |y| = m + O(L) = Θ(m). Since L(x, y) ≤ |x| = O(L),
the bound L(x′′ , y ′′ ) = R + O(L) = Θ(L) follows directly from (6). Note that (6) also implies that
δ(x′′ , y ′′ ) = |y ′′ | − L(x′′ , y ′′ ) = m − 1 + |x| − L(x, y) ≥ m − 1 and hence δ(x′′ , y ′′ ) = Θ(m). Thus,
∆(x′′ , y ′′ ) = δ(x′′ , y ′′ ) + (|x′′ | − |y ′′ |) = Θ(n) follows as for the case {0, 1}.
For M , observe that |a0ℓ x|, |b0ℓ y| = O(L) implies that M (a0ℓ x, b0ℓ y) = O(L2 ). By the Disjoint
Alphabets Lemma, we obtain
M (x′′ , y ′′ ) = M (2ñ , 2κ ) + M (3, 3m ) + M (a0ℓ x, b0ℓ y) = κñ + O(m + L2 ) = Θ(M ),
where we used κñ = Θ((M/n) · n) = Θ(M ) (note that M ≥ n implies κ = Θ(M/n)) and the parameter
relations M ≥ n ≥ m and M ≥ L2 /|Σ| = Ω(L2 ).
For the remaining parameter d, as in the case {0, 1} we show that d(x′ , y ′ ) = Θ(d). Now the
Disjoint Alphabets Lemma and Lemma 7.1 prove that d(x′′ , y ′′ ) = d(3, 3m ) + d(2ñ , 2κ ) + d(x′ , y ′ ) =
1 + κ + d(x′ , y ′ ) = d(x′ , y ′ ) + O(L) = Θ(d) using κ = O(L) and the parameter relation d ≥ L.
Finally, observe that for any parameter setting (α, Σ) with |Σ| ≥ 5 satisfying Table 2, also
the parameter setting (α, {0, 1, 2, 3}) satisfies Table 2. Hence, the following lemma transfers the
hardness of LCS(α, {0, 1, 2, 3}) to LCS(α, Σ).
Lemma 10.6. Let α be a parameter setting satisfying Table 2 with αδ = αm . Let Σ be an alphabet
of size |Σ| ≥ 5. If there is an O(nβ )-time algorithm for LCS(α, Σ), then also LCS(α, {0, 1, 2, 3})
admits an O(nβ )-time algorithm.
Proof. Given an instance (x, y) of LCS(α, {0, 1, 2, 3}) with n := |x|, we show how to compute in
time O(n) an instance (x′ , y ′ ) of LCS(α, Σ) such that L(x′ , y ′ ) = 1+L(x, y). The claim then follows
from applying the O(nβ )-time algorithm on x′ , y ′ (and subtracting 1 from the result).
Without loss of generality, let Σ = {0, . . . , σ} with σ ≥ 4. Define x′ := wx and y ′ = wR y, where
w = 4 . . . σ and wR = σ . . . 4. Then by the Disjoint Alphabets and Crossing Alphabets Lemmas
(Lemmas 4.3 and 9.3), we obtain L(x′ , y ′ ) = L(w, wR ) + L(x, y) = 1 + L(x, y). It remains to show
that (x′, y′) is an instance of LCS(α, Σ). By the Crossing Alphabets Lemma, for all parameters p ∈ {d, M, n, m} we have p(w, wR) = ∑σ′=4,…,σ p(σ′, σ′) = |Σ| − 4, and hence the Disjoint Alphabets
Lemma yields p(x′ , y ′ ) = p(w, wR ) + p(x, y) = |Σ| − 4 + p(x, y) = Θ(p(x, y)), by the parameter
relations n ≥ m ≥ |Σ| and M ≥ d ≥ |Σ|. For L we obtain L(x′ , y ′ ) = L(w, wR ) + L(x, y) =
1 + L(x, y) = Θ(L(x, y)). For p ∈ {δ, ∆} this yields p(x′ , y ′ ) = (|w| − 1) + p(x, y) = Θ(p(x, y)),
since α∆ ≥ αδ = αm ≥ αΣ (by the assumption αδ = αm and the parameter relations ∆ ≥ δ and
m ≥ |Σ|) and thus ∆(x, y) ≥ δ(x, y) ≥ Ω(|w| − 1). Hence, (x′ , y ′ ) has the same parameters as (x, y)
up to constant factors, so all parameter relations satisfied by (x, y) are also satisfied by (x′ , y ′ ).
Since clearly x′ , y ′ use alphabet Σ, indeed (x′ , y ′ ) is an instance of LCS(α, Σ).
Lemmas 10.3, 10.4, 10.5, and 10.6 of this section, together with the construction of hard strings
in LCS≤ (α, {0, 1}) in Lemma 9.2, prove hardness of LCS(α, Σ) for any constant alphabet size in
the case αδ = αm , i.e., Lemma 10.1.
10.2 Large LCS, Alphabet Size at least 3
In this section, we study the case that αL = αm (and αδ , α∆ may be small). Additionally, we assume
that |Σ| ≥ 3. In this regime, Theorem 3.4 follows from the following statement (and Lemma 4.1).
Lemma 10.7. Let (α, Σ) be a parameter setting satisfying Table 2 with αL = αm and |Σ| ≥ 3.
There is a constant γ ≥ 1 such that any algorithm for LCSγ (α, Σ) takes time min{d, δm, δ∆}1−o(1)
unless OVH fails.
By the following lemma, it suffices to prove the result for Σ = {0, 1, 2} (note that for any (α, Σ)
satisfying Table 2 with αL = αm and |Σ| ≥ 4, also (α, {0, 1, 2}) satisfies Table 2, since the only
additional constraint αM ≥ αm + αd − αL for ternary alphabets simplifies, by αL = αm , to the
constraint αM ≥ αd , which is satisfied by α).
Lemma 10.8. Let α be a parameter setting satisfying Table 2 with αL = αm . Let Σ be an alphabet
of size |Σ| ≥ 4. If there is an O(nβ )-time algorithm for LCS(α, Σ), then also LCS(α, {0, 1, 2})
admits an O(nβ )-time algorithm.
Proof. Given an instance (x, y) of LCS(α, {0, 1, 2}) with n := |x|, we show how to compute in time
O(n) an instance (x′ , y ′ ) of LCS(α, Σ) such that L(x′ , y ′ ) = |Σ|−3+L(x, y). The claim then follows
from applying the O(nβ )-time algorithm on x′ , y ′ (and subtracting |Σ| − 3 from the result).
Without loss of generality, let Σ = {0, . . . , σ} with σ ≥ 3. Define x′ := wx and y ′ = wy,
where w = 3 . . . σ. Then by the Disjoint Alphabets Lemma (Lemma 4.3), we obtain L(x′ , y ′ ) =
L(w, w) + L(x, y) = |Σ| − 3 + L(x, y). It remains to show that (x′ , y ′ ) is an instance of LCS(α, Σ).
By the Disjoint Alphabets Lemma, for all parameters p ∈ P = {n, m, L, δ, ∆, |Σ|, M, d} we have p(x′, y′) = p(w, w) + p(x, y) = ∑σ′=3,…,σ p(σ′, σ′) + p(x, y). For p ∈ {n, m, L, M, d} we have p(σ′, σ′) = 1
and thus p(x′ , y ′ ) = |Σ| − 3 + p(x, y) = Θ(p(x, y)) by the assumption αL = αm and the parameter
relations n ≥ m ≥ |Σ| and M ≥ d ≥ |Σ|. For p ∈ {δ, ∆} this yields p(x′ , y ′ ) = 0 + p(x, y) = p(x, y).
Hence, (x′ , y ′ ) has the same parameters as (x, y) up to constant factors, so all parameter relations
satisfied by (x, y) are also satisfied by (x′ , y ′ ). Since clearly x′ , y ′ use alphabet Σ, indeed (x′ , y ′ ) is
an instance of LCS(α, Σ).
To prepare the construction, we adapt the construction of Lemma 10.2 to obtain a desired value
for d (and, in later sections, δ). Recall that Lemma 7.3 defines a = (01)R+S , b = 0R (01)S with
L(a, b) = |b| = R + 2S and d(a, b) = Θ(R · S).
Lemma 10.9 (Basic building block). Given strings x, y and R, S, ℓ, β ≥ 0 with ℓ ≥ R + |x| + |y|
we define
x′ := a 0ℓ x = (01)R+S 0ℓ x,
y′ := 0β b 0ℓ y = 0β 0R (01)S 0ℓ y.
Assume that S ≥ |x| or L(x, 0β y) = L(x, y). Then we have L(x′ , y ′ ) = R + 2S + ℓ + L(x, y).
Proof. Clearly, L(x′ , y ′ ) ≥ L(a, b) + L(0ℓ , 0ℓ ) + L(x, y) = R + 2S + ℓ + L(x, y), since L(a, b) = |b| =
R + 2S. To prove the corresponding upper bound, we partition y ′ = y1 y2 such that L(x′ , y ′ ) =
L(a, y1 ) + L(0ℓ x, y2 ). Consider first the case that y2 is a subsequence of y. Then
L(x′ , y ′ ) ≤ L(a, y1 ) + L(0ℓ x, y) ≤ |a| + |y| = 2(R + S) + |y| ≤ (R + 2S) + ℓ + L(x, y),
since ℓ ≥ R + |y|.
It remains to consider the case that y2 is not a subsequence of y and hence y1 is a subsequence
of 0β b0ℓ . By Lemma 7.3(iii), we can without loss of generality assume that y1 is a subsequence
of 0β b, since L(a, 0β b0ℓ ) = |b| = L(a, 0β b). Hence, we can partition 0β b = y1 z with L(x′ , y ′ ) ≤
L(a, y1 ) + L(0ℓ x, z0ℓ y). We bound
L(a, y1 ) ≤ min{#0 (a), #0 (y1 )} + min{#1 (a), #1 (y1 )} ≤ (R + S) + #1 (y1 ).
Observe that L(0ℓ x, z0ℓ y) ≤ #1 (z) + L(0ℓ x, 0#0 (z)+ℓ y), since each “1” in z can increase the LCS
by at most 1. By greedy prefix matching, we obtain L(0ℓ x, z0ℓ y) ≤ #1 (z) + ℓ + L(x, 0#0 (z) y). With
the assumption L(x, 0β y) = L(x, y) for any β ≥ 0, this yields
L(x′ , y ′ ) ≤ L(a, y1 ) + L(0ℓ x, z0ℓ y) ≤ (R + S + #1 (y1 )) + (#1 (z) + ℓ + L(x, y)) = R + 2S + ℓ + L(x, y),
since 0β b = y1 z and hence #1 (y1 ) + #1 (z) = #1 (0β b) = S.
With the alternative assumption S ≥ |x|, we bound L(0ℓ x, z0ℓ y) ≤ |0ℓ x| ≤ ℓ + S. If #1 (y1 ) = 0
this yields
L(x′ , y ′ ) ≤ L(a, y1 ) + L(0ℓ x, z0ℓ y) ≤ (R + S) + (ℓ + S) ≤ R + 2S + ℓ + L(x, y).
Otherwise, if #1 (y1 ) ≥ 1, by inspecting the structure of 0β b = 0β+R (01)S we see that z is a
subsequence of (01)S−#1 (y1 ) . Consider an LCS of 0ℓ x, z0ℓ y. If no “1” in z is matched by the LCS,
then we obtain
L(0ℓ x, z0ℓ y) ≤ L(0ℓ x, 0#0 (z)+ℓ y) ≤ #0 (z) + L(0ℓ x, 0ℓ y) = #0 (z) + ℓ + L(x, y),
where we used the fact L(u, vw) ≤ |v| + L(u, w) and greedy prefix matching. Otherwise, if a “1” in
z is matched by the LCS to a “1” in 0ℓ x, i.e., to a “1” in x, then the 0ℓ -block of 0ℓ x is matched to
a subsequence of z and hence
L(0ℓ x, z0ℓ y) ≤ L(0ℓ , z) + L(x, z0ℓ y) ≤ #0 (z) + |x| ≤ #0 (z) + ℓ + L(x, y),
where we used ℓ ≥ |x|. Since z is a subsequence of (01)S−#1 (y1 ) and thus #0 (z) ≤ S − #1 (y1 ), in
both cases we obtain
L(x′ , y ′ ) ≤ L(a, y1 ) + L(0ℓ x, z0ℓ y) ≤ (R + S + #1 (y1 )) + (#0 (z) + ℓ + L(x, y)) ≤ R + 2S + ℓ + L(x, y),
which finally proves the claim.
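The identity of Lemma 10.9 is easy to sanity-check computationally. The following is a minimal sketch (not taken from the paper; the strings x, y and the parameters R, S, ℓ are arbitrary toy values chosen to satisfy the lemma's preconditions), which builds x′, y′ and compares against a textbook quadratic-time LCS.

```python
# Minimal sketch: spot-check L(x', y') = R + 2S + ell + L(x, y) from Lemma 10.9.
def lcs(u: str, v: str) -> int:
    # classic O(|u|*|v|) dynamic program
    dp = [[0] * (len(v) + 1) for _ in range(len(u) + 1)]
    for i in range(1, len(u) + 1):
        for j in range(1, len(v) + 1):
            dp[i][j] = dp[i-1][j-1] + 1 if u[i-1] == v[j-1] else max(dp[i-1][j], dp[i][j-1])
    return dp[len(u)][len(v)]

def building_block(x: str, y: str, R: int, S: int, ell: int, beta: int = 0):
    a = "01" * (R + S)                   # a = (01)^{R+S}
    b = "0" * R + "01" * S               # b = 0^R (01)^S
    xp = a + "0" * ell + x               # x' = a 0^ell x
    yp = "0" * beta + b + "0" * ell + y  # y' = 0^beta b 0^ell y
    return xp, yp

if __name__ == "__main__":
    x, y = "0110", "011"
    R, S = 3, max(4, len(x))             # ensure S >= |x|
    ell = R + len(x) + len(y)            # ensure ell >= R + |x| + |y|
    xp, yp = building_block(x, y, R, S, ell)
    assert lcs(xp, yp) == R + 2 * S + ell + lcs(x, y)
    print("Lemma 10.9 identity holds on this toy instance")
```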
Lemma 10.10. Consider x′ , y ′ as in Lemma 10.9 with β = 0, and assume S ≥ |x|. Then
R · S ≤ d(x′ , y ′ ) ≤ (R + 1)(4R + 6S + ℓ) + d(x, y).
Proof. Note that the simple fact L(u′ u′′ , v) ≤ |u′′ | + L(u′ , v) implies that for any strings w, z
and any i we have L(w, z) ≤ |w[(i + 1)..|w|]| + L(w[1..i], z) = |w| − i + L(w[1..i], z), and hence
L(w[1..i], z) ≥ i − (|w| − L(w, z)). Recall that L(a, b) = |b| = R + 2S by Lemma 7.3(iii). This yields
L(a[1..i], b) ≥ i − (|a| − (R + 2S)) = i − R and L(a, b[1..j]) ≥ j − (|b| − |b|) = j.
The lower bound follows from d(x′ , y ′ ) ≥ d(a, b) ≥ R · S by Observation 7.2 and Lemma 7.3(iv).
For the upper bound, we consider all possible prefixes x̃ := x′ [1..i], ỹ := y ′ [1..j] of x′ , y ′ and count
how many of them correspond to dominant pairs. Clearly, for i ≤ |a|, j ≤ |b|, there are d(a, b)
dominant pairs.
By the above observation, for any i ≤ |a|, we have L(a[1..i], b) ≥ i − R. Hence, any dominant
pair of the form (i, j) satisfies i − R ≤ L(a[1..i], b) ≤ |a[1..i]| = i. By Observation 6.2, there are at
most R + 1 such dominant pairs for fixed i. Thus, there are at most |a| · (R + 1) dominant pairs with
i ≤ |a| and j > |b|. Similarly, for j ≤ |b|, we have L(a, b[1..j]) ≥ j. Hence, there are no dominant
pairs with i > |a| and j ≤ |b|, since already the prefix a of x′ [1..i] includes b[1..j] as a subsequence.
In total, there are at most d(a, b) + |a| · (R + 1) dominant pairs with i ≤ |a| or j ≤ |b|.
Let i = |a| + k, j = |b| + k with k ∈ [ℓ]. Then L(x̃, ỹ) = L(a0k , b0k ) = L(a, b) + k = |b| + k
by greedy suffix matching and Lemma 7.3(iii). As any such choice could correspond to a dominant
pair, we count at most ℓ dominant pairs. Analogously to above, for i = |a| + k, there can be at
most i − L(a0k , b0k ) ≤ R dominant pairs with j > |b| + k. Symmetrically, for j = |b| + k, there
are at most j − L(a0k , b0k ) = 0 dominant pairs with i > |a| + k. This yields at most (R + 1) · ℓ
dominant pairs with |a| < i ≤ |a| + ℓ or |b| < j ≤ |b| + ℓ.
It remains to count dominant pairs with i = |a| + ℓ + ĩ and j = |b| + ℓ + j̃, with ĩ ∈ [|x|], j̃ ∈ [|y|].
Here, Lemma 10.9 bounds L(x̃, ỹ) = L(a0ℓ x[1..ĩ], b0ℓ y[1..j̃]) = L(a, b) + ℓ + L(x[1..ĩ], y[1..j̃]). Hence,
the dominant pairs of this form are in one-to-one correspondence to the dominant pairs of x, y.
Summing up all dominant pairs, we obtain
d(x′ , y ′ ) ≤ d(a, b) + (R + 1)(|a| + ℓ) + d(x, y)
≤ (R + 1)(4R + 6S + ℓ) + d(x, y),
since |a| = 2R + 2S and Lemma 7.3(iv) yields d(a, b) ≤ 2(R + 1)(R + 2S).
Finally, to pad δ (and later, in Section 10.3.1, ∆), we need the following technical lemma.
Lemma 10.11. Let x, y be arbitrary and µ, ν such that ν ≥ µ + |y|. We define
x′ := 0µ 1ν 0µ x,
y′ := 1ν 0µ y.
Then L(x′ , y ′ ) = µ + ν + L(x, y) and d(x′ , y ′ ) = 2µ + ν + #1 (y) + d(x, y).
Proof. Note that for any prefix w, z of 0µ x, 0µ y, we have
L(0µ 1ν w, 1ν z) = ν + L(w, z),
(7)
by Lemma 7.6 (swapping the role of “0”s and “1”s). In particular, we obtain L(x′ , y ′ ) = ν +
L(0µ x, 0µ y) = µ + ν + L(x, y) by greedy prefix matching.
For the second statement, we consider all possible prefixes x̃ := x′ [1..i], ỹ := y ′ [1..j] of x′ , y ′ and
count how many of them correspond to dominant pairs. Note that these prefixes have to end in
the same symbol, since any dominant pair is a matching pair. Recall that x̃ := x′ [1..i], ỹ := y ′ [1..j]
gives rise to a dominant pair if and only if L(x̃, ỹ) > L(x′ [1..i − 1], ỹ) and L(x̃, ỹ) > L(x̃, y ′ [1..j − 1]).
• x̃ = 0µ 1ν w, ỹ = 1ℓ (with w non-empty prefix of 0µ x, ℓ ∈ [ν]): These prefixes do not correspond
to a dominant pair, since L(0µ 1ν w, 1ℓ ) = L(0µ 1ℓ , 1ℓ ) = ℓ is obtained already by a shorter prefix
of x̃.
• x̃ = 0µ 1ν w, ỹ = 1ν z (with w non-empty prefix of 0µ x, z non-empty prefix of 0µ y): These
prefixes correspond to a dominant pair if and only if w, z correspond to a dominant pair of
0µ x, 0µ y, since by (7) we have L(x̃, ỹ) = ν + L(w, z). This yields d(0µ x, 0µ y) dominant pairs,
which by Lemma 7.1 evaluates to µ + d(x, y).
• x̃ = 0k, ỹ = 1ν 0ℓ (with k, ℓ ∈ [µ]): Clearly, L(x̃, ỹ) = min{k, ℓ}. It follows that x̃, ỹ corresponds to a dominant pair if and only if k = ℓ. This yields exactly µ dominant pairs.
• x̃ = 0µ 1k , ỹ = 1ℓ (with k, ℓ ∈ [ν]): Analogously to above, L(x̃, ỹ) = min{k, ℓ}, hence this
corresponds to a dominant pair if and only if k = ℓ. This yields exactly ν dominant pairs.
• x̃ = 0k , ỹ = 1ν 0µ z (with k ∈ [µ], z non-empty prefix of y): We have L(x̃, ỹ) = L(0k , 1ν 0k ) = k,
hence these prefixes do not correspond to a dominant pair, since the LCS is already obtained
for a shorter prefix of ỹ.
• x̃ = 0µ 1k , ỹ = 1ν 0µ z (with k ∈ [ν], z non-empty prefix of y): Since we can either match some
“1” of the 1ν -block in ỹ to a “1” in x̃ (necessarily discarding the initial 0µ -block of x̃) or delete
the complete 1ν -block, we obtain
L(0µ 1k , 1ν 0µ z) = max{L(1k , 1ν 0µ z), L(0µ 1k , 0µ z)}
= max{k, µ + L(1k , z)} = max{k, µ + min{k, #1 (z)}}.
Consider the case L(x̃, ỹ) = L(x′ [1..i], y ′ [1..j]) = k. Then also L(x′ [1..i], y ′ [1..(j − 1)]) = k,
and hence (i, j) is no dominant pair. If, however, L(x̃, ỹ) = µ + min{k, #1 (z)}, then this
corresponds to a dominant pair if and only if k = #1 (z) (and z ends in “1”): if k > #1 (z), then
also L(x′ [1..(i−1)], y ′ [1..j]) = µ+#1 (z), if k < #1 (z), then also L(x′ [1..i], y ′ [1..(j−1)]) = µ+k.
Thus, there are exactly min{ν, #1 (y)} = #1 (y) such dominant pairs.
In total, we have counted ν + 2µ + #1 (y) + d(x, y) dominant pairs.
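The dominant-pair counts used throughout this section can likewise be spot-checked by brute force. The sketch below is an illustration only (not part of the construction): it counts dominant pairs via the prefix criterion stated in the proof above and compares against the count 2µ + ν + #1(y) + d(x, y) of Lemma 10.11 on a small instance; the strings and the values of µ, ν are arbitrary choices satisfying ν ≥ µ + |y|.

```python
# Minimal sketch: brute-force dominant-pair counter and a check of Lemma 10.11.
def lcs_table(x: str, y: str):
    # dp[i][j] = L(x[1..i], y[1..j])
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x[i-1] == y[j-1] else max(dp[i-1][j], dp[i][j-1])
    return dp

def dominant_pairs(x: str, y: str) -> int:
    # (i, j) is dominant iff L(x[1..i], y[1..j]) exceeds both L(x[1..i-1], y[1..j])
    # and L(x[1..i], y[1..j-1])
    dp = lcs_table(x, y)
    return sum(1
               for i in range(1, len(x) + 1)
               for j in range(1, len(y) + 1)
               if dp[i][j] > dp[i-1][j] and dp[i][j] > dp[i][j-1])

if __name__ == "__main__":
    x, y = "0101", "001"
    mu, nu = 5, 5 + len(y)               # need nu >= mu + |y|
    xp = "0"*mu + "1"*nu + "0"*mu + x    # x' = 0^mu 1^nu 0^mu x
    yp = "1"*nu + "0"*mu + y             # y' = 1^nu 0^mu y
    assert dominant_pairs(xp, yp) == 2*mu + nu + y.count("1") + dominant_pairs(x, y)
    print("dominant-pair count matches Lemma 10.11 on this toy instance")
```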
We start with the basic building block from Lemma 10.9 and then further pad the strings to
obtain the desired n, m, ∆, δ, M as follows. Note that the guarantee |y| ≤ |x| ≤ O(min{∆, m}) is
satisfied by Lemma 9.8.
Lemma 10.12. Let (α, {0, 1, 2}) be a parameter setting satisfying Table 2 with αL = αm . Let
(n, x, y) be an instance of LCSγ≤ (α, {0, 1}) with |y| ≤ |x| ≤ O(min{∆, m}). We set
S = max{m, |x|},
R = ⌊d/m⌋,
to instantiate the basic building block x′ = a0ℓ x = (01)R+S 0ℓ x and y ′ = b0ℓ y = 0R (01)S 0ℓ y of
Lemma 10.9 with ℓ := R + S + |x| + |y|. Moreover, we define κ := ⌊M/n⌋ and m̃ := max{m, δ +
2R + 2S + ℓ + |x|} to further pad the instance to
x′′ = 2κ 2∆ 1m̃ 0δ a 0ℓ x,
y′′ = 2κ 0δ 1m̃ 0δ b 0ℓ y.
Then x′′, y′′ is an instance of LCSγ′ (α, {0, 1, 2}) for some constant γ′ ≥ 1 and can be computed in
time O(n), together with an integer τ such that L(x′′ , y ′′ ) = τ + L(x, y).
Proof. We first use the Disjoint Alphabets Lemma and greedy prefix matching to obtain
L(x′′ , y ′′ ) = κ + L(1m̃ 0δ x′ , 0δ 1m̃ 0δ y ′ ) = κ + m̃ + δ + L(x′ , y ′ ) = κ + m̃ + δ + R + 2S + ℓ + L(x, y), (8)
where we used Lemma 10.11 for the second equality (with the roles of x, y swapped) and Lemma 10.9
for the last equality. Observe that x′′ , y ′′ and τ = κ + m̃ + δ + R + 2S + ℓ can be computed in time
O(n).
It remains to verify that x′′, y′′ is an instance of LCSγ′ (α, {0, 1, 2}) for some γ′ ≥ 1. We first
observe that S = Θ(m) by |x| = O(m) and R = O(S) by the parameter relation d ≤ Lm ≤ m2 .
Since also |y| = O(m), we conclude R, S, |x|, |y|, ℓ = O(m). Observe that κ = O(M/n) = O(m) by
the relation M ≤ mn and m̃ = Θ(m) (using that R, S, ℓ, |x| = O(m) and the parameter relation
δ ≤ m). Thus, |x′′ | = κ + ∆ + m̃ + δ + 2(R + S) + ℓ + |x| = Θ(m + ∆) + O(m) = Θ(n). Similarly,
|y ′′ | = κ + m̃ + δ + R + 2S + ℓ + |y| = Θ(m + δ) + O(m) = Θ(m).
By (8), we also have L(x′′, y′′) = m̃ + O(m) = Θ(m), as desired.
Note that by ∆ ≥ δ, |a| ≥ |b| and |x| ≥ |y|, we indeed have |x′′ | ≥ |y ′′ |. Furthermore, using
the relation d ≤ 2L(∆ + 1) = O(m∆), we obtain R = O(d/m) = O(∆). By (8), we obtain
∆(x′′ , y ′′ ) = ∆ + R + (|x| − L(x, y)) = ∆ + O(∆) = Θ(∆) since |x| = O(∆). Similarly, (8) yields
δ(x′′ , y ′′ ) = δ + (|y| − L(x, y)) = δ + O(δ) = Θ(δ), since |y| − L(x, y) = δ(x, y) = O(δ).
For the dominant pairs, we apply the disjoint alphabets lemma, Lemma 7.1 and Lemma 10.11
to compute
d(x′′ , y ′′ ) = κ + d(1m̃ 0δ x′ , 0δ 1m̃ 0δ y ′ ) = κ + m̃ + 2δ + d(x′ , y ′ ).
(9)
Lemma 10.10 yields the lower bound d(x′ , y ′ ) ≥ R · S = Ω(d) and the corresponding upper bound
d(x′ , y ′ ) ≤ (R + 1)(4R + 6S + ℓ) + d(x, y) = O(R · S + d(x, y)) = O(d).
Thus, (9) yields d(x′′ , y ′′ ) = Θ(d) + O(m) = Θ(d), where we used that d ≥ L = Ω(m) since
αL = αm .
For M , we count #2 (x′′ ) = ∆ + κ and #2 (y ′′ ) = κ, as well as #0 (y ′′ ), #1 (y ′′ ) ≤ |y ′′ | = O(m),
#0 (x′′ ) ≤ δ + |x′ | = O(m) and #1 (x′′ ) ≤ m̃ + |x′ | = O(m). Thus, M (x′′ , y ′′ ) = (∆ + κ)κ + O(m2 )
and by M ≥ L2/|Σ| = Ω(m2) (since αL = αm), it suffices to prove that (∆ + κ)κ = Θ(M) to
verify M (x′′ , y ′′ ) = Θ(M ). Indeed, we have κ = Θ(M/n), since M ≥ n. If α∆ < 1, we have
αm = αn = 1 and M = Ω(m2 ) together with the relation M ≤ mn = O(m2 ) implies M = Θ(m2 ).
Thus, κ = Θ(m) and hence (κ + ∆)κ = Θ(m2 ) = Θ(M ). If α∆ ≥ αm , then α∆ = 1 and hence
∆ + κ = ∆ + O(m) = Θ(n), which implies (κ + ∆)κ = Θ(n · M/n) = Θ(M ). Finally, note that
indeed Σ(x′′ , y ′′ ) = {0, 1, 2}.
Combining Lemma 10.12 with Lemma 9.8 finally proves Lemma 10.7.
10.3 Large LCS, Alphabet Size 2
In this section, we study the case that αL = αm (and αδ , α∆ may be small) for the case of binary
alphabets, i.e., Σ = {0, 1}. In this regime, Theorem 3.4 follows from the following statement (and
Lemma 4.1).
Lemma 10.13. Let (α, {0, 1}) be a parameter setting satisfying Table 2 with αL = αm . There is
a constant γ ≥ 1 such that any algorithm for LCSγ (α, {0, 1}) takes time min{d, δ∆, δM/n}1−o(1)
unless OVH fails.
We present different constructions for three subcases, that we discuss shortly in the following
paragraphs and in detail in the remainder of this section. We hope that this short discussion
conveys enough intuition about the “complexity” of the task to make it believable that our lengthy
and technical case distinction is indeed necessary.
Case 1: α∆ ≤ αm = αL . Then n = Θ(m) and it follows that any binary strings x, y satisfy
M (x, y) = Θ(m2 ), so M poses no constraints, in particular there are no constraints on the numbers
of “0”s and “1”s in the constructed strings. On the other hand, the potentially small value of ∆
renders some of our gadgets useless (e.g., Lemma 10.25). Since δ may be small, we use the hardness
construction from Section 9.2 (for large LCS).
Otherwise, we have α∆ > αm and thus ∆ = Θ(n) ≫ m. Note that any string x of length n
contains at least n/2 “0”s or “1”s, say x contains many “1”s. Then to obtain Θ(M ) matching pairs,
y must contain at most O(M/n) “1”s. Thus, we need to pay close attention to the number of “1”s
in the constructed string y. We split the case α∆ > αm into two subcases. Case 2: α∆ > αm = αL
and αδ ≥ αM − 1. Here, the constraint on #1 (y) is stronger than the constraint on δ, and we
use the hardness construction from Section 9.1 (for small LCS), since it introduces few “1”s. Case
3: α∆ > αm = αL and αδ < αM − 1. Here, the constraint on δ is stronger than the constraint
on #1 (y), and we use the hardness construction from Section 9.2 (for large LCS), since it keeps δ
small.
10.3.1 Case α∆ ≤ αm = αL
Since n = ∆ + L, the assumptions αL = αm and α∆ ≤ αm imply n = Θ(m). Together with the
parameter relations L2 /|Σ| ≤ M ≤ 2Ln and Σ = {0, 1} we obtain M = Θ(m2 ). In particular, in
this regime the Õ(δ∆) time bound beats Õ(δM/n), and Lemma 10.13 simplifies to the following
result.
Lemma 10.14. Let (α, Σ) be a parameter setting satisfying Table 2 with αL = αm and α∆ ≤ αm .
There is a constant γ ≥ 1 such that any algorithm for LCSγ (α, Σ) takes time min{d, δ∆}1−o(1)
unless OVH fails.
We can now instantiate the parameters of Lemma 10.9 to create a string with the desired number
of dominant pairs. The remaining parameters will be padded in an additional construction. Note
that the preconditions, specifically the additional guarantee, are satisfied by Lemma 9.8.
Lemma 10.15. Let (α, Σ) be a parameter setting satisfying Table 2 with αL = αm and α∆ ≤ αm .
Given any instance (n, x, y) of LCSγ≤ (α, {0, 1}) with the additional guarantee |y| ≤ |x| ≤ γ · m, we can construct an instance (n, x′, y′) of LCSγ′≤ (α, {0, 1}) (for some constant γ′ ≥ γ) and τ in time
O(n) such that
(i) L(x′ , y ′ ) = τ + L(x, y),
(ii) d(x′ , y ′ ) = Θ(d).
(iii) |x′ | ≥ |y ′ |.
Proof. We construct x′ , y ′ as in Lemma 10.9 with S = max{|x|, m}, R = ⌈d/S⌉, β = 0 and
ℓ = (R+S +|x|+|y|). Note that indeed |x′ | ≥ |y ′ | by |x| ≥ |y|, |a| ≥ |b|, and β = 0. The assumption
|x|, |y| = O(m) yields S = Θ(m). By the parameter relation d ≤ 2(∆ + 1) · L = O(∆ · m), we also
have R = O(d/S + 1) = O(d/m + 1) = O(∆). We conclude that |x′ |, |y ′ | = O(R + S + |x| + |y|) =
O(m + ∆) = O(m), since by assumption α∆ ≤ αm . This yields L(x′ , y ′ ) = O(m) = O(L) (by the
assumption αL = αm ) and M (x′ , y ′ ) = O(m2 ) = O(M ) (note that M ≥ L2 /|Σ| = Ω(m2 ) by the
assumption αL = αm ).
By Lemma 10.9, we have L(x′ , y ′ ) = R + 2S + ℓ + L(x, y), satisfying (i). This yields δ(x′ , y ′ ) =
δ(x, y) = O(δ) and ∆(x′ , y ′ ) = R + ∆(x, y) = O(d/m + ∆) = O(∆), by the parameter relation
d ≤ 2L(∆ + 1) = O(m∆). For d, we first observe that ⌈d/S⌉ = Θ(d/m) (by the parameter relation
d ≥ L = Θ(m)) and ℓ = O(R + S + |x| + |y|) = O(m). Lemma 10.10 yields the lower bound
d(x′ , y ′ ) ≥ R · S = Ω(d/m · m) = Ω(d) as well as the corresponding upper bound d(x′ , y ′ ) =
O(R · ℓ + d(x, y)) = O(d/m · m + d) = O(d).
These bounds prove that (n, x′, y′) is an instance of LCSγ′≤ (α, {0, 1}) for some γ′ ≥ 1.
We use Lemma 10.11 to finally pad δ, ∆, and m.
Lemma 10.16. Let x, y, x′, y′, τ be as given in Lemma 10.15. Then, in time O(n) we can construct an instance x′′′, y′′′ of LCSγ′′ (α, {0, 1}) (for some constant γ′′ ≥ 1) and an integer τ′ such that L(x′′′, y′′′) = τ′ + L(x′, y′) = (τ + τ′) + L(x, y).
Proof. As an intermediate step, let m̃ := max{m, |x′ |, |y ′ |, ∆} and construct
x′′ := 0∆ 1m̃+∆ 0∆ x′,
y′′ := 1m̃+∆ 0∆ y′.
We obtain the final instance as
x′′′ := 15m̃+δ 0δ x′′,
y′′′ := 0δ 15m̃+δ 0δ y′′.
Note that by definition of m̃, we have m̃ + ∆ ≥ ∆ + |y ′ | and δ + 5m̃ ≥ δ + (3∆ + m̃ + |x′ |) = δ + |x′′ |,
satisfying the conditions of Lemma 10.11. Hence, this lemma yields
L(x′′′ , y ′′′ ) = 5m̃ + 2δ + L(x′′ , y ′′ ) = 6m̃ + 2δ + 2∆ + L(x′ , y ′ ).
(10)
Clearly, x′ , y ′ , τ and hence also x′′ , y ′′ and τ ′ := 6m̃ + 2δ + 2∆ can be computed in time O(n).
We now verify all parameters. Clearly, Σ(x′′′ , y ′′′ ) = {0, 1}. Since by assumption α∆ ≤ αm = 1,
we have |x′ | = O(n) = O(m), |y ′ | = O(m), and ∆ = O(m). This implies m̃ = Θ(m) and
consequently |x′′′ | = 6m̃ + 2δ + 3∆ + |x′ | = 6m̃ + O(m) = Θ(m) = Θ(n) (using δ ≤ ∆ and
α∆ ≤ αm = 1). Similarly, |y ′′′ | = 6m̃ + 3δ + 2∆ + |y ′ | = 6m̃ + O(m) = Θ(m). By (10), we have
L(x′′′ , y ′′′ ) = 6m̃ + 2δ + 2∆ + L(x′ , y ′ ) = 6m̃ + O(L) = Θ(L) (using the assumption αL = αm ).
Note that |x′′′ | ≥ |y ′′′ | follows from ∆ ≥ δ and |x′ | ≥ |y ′ | (by Lemma 10.15(iii)). Hence, (10)
provides ∆(x′′′ , y ′′′ ) = ∆ + ∆(x′ , y ′ ) = Θ(∆) and δ(x′′′ , y ′′′ ) = δ + δ(x′ , y ′ ) = Θ(δ).
The number of matching pairs satisfies M (x′′′ , y ′′′ ) = #0 (x′′′ )#0 (y ′′′ ) + #1 (x′′′ )#1 (y ′′′ ) =
Θ(m2 ) = Θ(M ), where the last bound follows from M ≥ L2 /|Σ| = Ω(m2 ) and M ≤ 2Ln = O(m2 )
by αL = αm = 1. For the number of dominant pairs, we apply Lemma 10.11 to bound
d(x′′′ , y ′′′ ) = 3δ + 5m̃ + #1 (x′′ ) + d(x′′ , y ′′ )
= 3δ + 5m̃ + (m̃ + ∆ + #1 (x′ )) + (3∆ + m̃ + #1 (y ′ ) + d(x′ , y ′ ))
= 7m̃ + 3(δ + ∆) + #1 (x′ ) + #1 (y ′ ) + d(x′ , y ′ ) = Θ(d) + O(m) = Θ(d),
where the last two bounds follow from m̃, |x′ |, |y ′ |, δ, ∆ = O(m), d(x′ , y ′ ) = Θ(d) by Lemma 10.15
and the parameter relation d ≥ L = Ω(m).
Combining Lemmas 10.15 and 10.16 with Lemma 9.8 finally proves Lemma 10.14.
10.3.2 Case α∆ > αm = αL and αδ ≥ αM − 1
In this section, we consider the case where αL = αm < α∆ and αδ ≥ αM − 1. In this case, we have
∆ = Θ(n) ≫ m and M/n ≤ O(δ). Since M/n ≤ O(m) ≤ O(∆), the fastest known algorithm runs
in time Õ(min{d, δM/n}). Consequently, in this regime Lemma 10.13 simplifies to the following
statement.
Lemma 10.17. Let (α, {0, 1}) be a parameter setting satisfying Table 2 with αL = αm < α∆
and αM − 1 ≤ αδ . There is a constant γ ≥ 1 such that any algorithm for LCSγ (α, Σ) takes time
min{d, δM/n}1−o(1) unless OVH fails.
To obtain this result, we cannot simply pad our hard instances (n, x, y) of LCS≤ (α, {0, 1}) to
LCS(α, {0, 1}), since the desired running time bound min{d, δM/n} is not monotone. In other
words, for LCS≤ (α, {0, 1}) we have a lower bound of min{δ∆, δm, d}1−o(1) (see Lemma 9.8) which
can be higher than the running time O(n + δM/n) of our new algorithm (Theorem 3.5) and thus we
would violate SETH. In fact, we even cannot start from an instance as constructed in Lemma 9.8,
since this would generate too many “1”s. Instead, we use instances of a different parameter setting
LCS≤ (α′ , {0, 1}) with α′δ = α′m , i.e., we invoke Lemma 9.2.
Observation 10.18. Let (α, {0, 1}) be a parameter setting satisfying Table 2 with αL = αm < α∆
and αM − 1 ≤ αδ . Then α′ := α′ (α) defined by
α′d = min{αd , αδ + αM − 1},
α′M = 2α′L ,
α′m = α′δ = α′L ,
α′Σ = 0,
α′d − α′L = min{αM − 1, αd /2},
α′∆ = 1,
yields a parameter setting (α′ , {0, 1}) satisfying Table 2. The definition of α′ implies
α′L = min{αδ , max{αd − αM + 1, αd /2}}.
(11)
Moreover, there is some constant γ ≥ 1 such that no algorithm solves LCSγ≤ (α′, {0, 1}) in time min{d, δM/n}1−o(1) unless OVH fails. This holds even restricted to instances (n, x, y) with |x|, |y| ≤ γ · nα′L = O(min{δ, max{dn/M, √d}}) and #1(y) ≤ γ · nα′d−α′L = O(min{M/n, √d}) satisfying L(x, 0β y) = L(x, y) for any β ≥ 0.
Proof. We first prove (11). Consider the case that αd /2 ≤ αM − 1. Then αd ≤ 2(αM − 1) ≤
αδ + (αM − 1), where we used the assumption αM − 1 ≤ αδ . Thus, α′d = αd and by definition
α′L = α′d − min{αM − 1, αd /2} = αd − αd /2 = αd /2.
From αd /2 ≤ αM − 1, it follows that αd − αM + 1 ≤ αd /2 and αd /2 ≤ αM − 1 ≤ αδ , hence
α′L = αd /2 = min{αδ , max{αd − αM + 1, αd /2}}, as desired.
Consider the remaining case that αd /2 > αM − 1. Then by definition
α′L = α′d − (αM − 1) = min{αd − αM + 1, αδ }.
Since αd /2 > αM − 1 implies αd − αM + 1 ≥ αd /2, this indeed yields
α′L = min{max{αd − αM + 1, αd /2}, αδ },
as desired, concluding the proof of (11).
Checking all constraints from Table 2 is straight-forward, except for the inequalities 0 ≤ α′L ≤ 1
and α′d ≤ 2α′L . From (11), 0 ≤ α′L ≤ 1 follows immediately by the parameter relations αδ , αd ≥ 0
and αδ ≤ αm ≤ 1. For the other inequality, note that α′d ≤ 2α′L is equivalent to min{αM −1, αd /2} =
α′d − α′L ≤ α′L = min{αδ , max{αd − αM + 1, αd /2}}, which directly follows from the assumption
αM − 1 ≤ αδ and the trivial fact that αd /2 ≤ max{αd − αM + 1, αd /2}.
The last statement directly follows from Lemma 9.2.
It remains to pad strings x, y of LCS≤ (α′ , {0, 1}) to LCS(α, {0, 1}). The first step is the following
construction which pads δ and d.
Lemma 10.19. Let (α, {0, 1}) be a parameter setting satisfying Table 2 with αL = αm < α∆
and αM − 1 ≤ αδ, and construct α′ as in Observation 10.18. Let (n, x, y) be an instance of LCS≤ (α′, {0, 1}) with |x|, |y| = O(min{δ, max{dn/M, √d}}) and #1(y) = O(min{M/n, √d}) satisfying L(x, 0β y) = L(x, y) for any β ≥ 0. We set
S = ⌊min{M/n, √d}⌋,  R = ⌊d/S⌋,  ℓ = |x| + |y| + R + S,  β = δ,
and define, as in Lemma 10.9,
x′ := a 0ℓ x = (01)R+S 0ℓ x,
y′ := 0β b 0ℓ y = 0β 0R (01)S 0ℓ y.
Then x′, y′ is an instance of LCSγ′≤ (α, {0, 1}) (for some γ′ ≥ 1) with the additional guarantees that
(i) |x′ |, |y ′ | = O(m),
(ii) #1 (y ′ ) = O(M/n) and #1 (y ′ ) · |b0ℓ y| = O(d).
(iii) d(x′ , y ′ ) = Θ(d),
(iv) |y ′ | − L(x′ , y ′ ) = Θ(δ),
(v) In time O(n) we can compute a number τ such that L(x′ , y ′ ) = τ + L(x, y).
Proof. Note that S ≤ √d implies that S ≤ R. Note also that by assumption, |x|, |y| = O(min{δ, R}) and hence R, S, ℓ, |x|, |y| = O(R). Furthermore, R = Θ(d/S) = Θ(dn/M + √d) = O(m), where the first bound follows from αd ≥ 0 and αM ≥ 1 and the second follows from the parameter relations dn/M ≤ 5L = O(m) and d ≤ Lm ≤ m2. Hence |x′| = O(R) = O(m). Additionally,
δ̃ = Θ(δ) and thus |y ′ | = O(δ + m) = O(m) by δ ≤ m; thus we have proven (i). This also implies
L(x′ , y ′ ) ≤ O(m) = O(L) since αL = αm .
Note that #1(y′) = S + #1(y) = O(S) by definition of S and assumption on #1(y). We
compute d(x′ , y ′ ) ≤ 5L(x′ , y ′ ) · #1 (y ′ ) ≤ 5|x′ | · #1 (y ′ ) = O(R · S) = O(d). Furthermore, we
obtain by Observation 7.2 and Lemma 7.3 that d(x′ , y ′ ) ≥ d(a, 0β b) ≥ R · S = Ω(d). Note that in
particular #1 (y ′ ) = O(S) = O(M/n) and #1 (y ′ )·|b0ℓ y| = O(S ·R) = O(d), proving (ii). The bound
M (x′ , y ′ ) ≤ O(m2 ) = O(M ) trivially follows from |x′ |, |y ′ | = O(m) and M ≥ L2 /|Σ| = Ω(m2 ) since
αL = αm .
By Lemma 10.9 we have L(x′ , y ′ ) = R+2S +ℓ+L(x, y), which immediately yields (v). Moreover,
we obtain δ(x′ , y ′ ) = δ +δ(x, y) = Θ(δ), since δ(x, y) ≤ |x| ≤ O(δ), proving (iv). Finally, ∆(x′ , y ′ ) ≤
|x′ | ≤ O(m) ≤ O(∆) by the assumption α∆ > αm .
To finally pad all remaining parameters, we first prepare the following technical tool.
Lemma 10.20. Let x = 1κ 0µ w and y = 0µ 0ν z with µ > |z|. Then it holds that L(x, y) =
µ + L(w, 0ν z), as well as d(x, y) ≥ d(w, 0ν z) and d(x, y) ≤ min{κ, |z|} + µ + d(1κ 0µ , z) + d(w, 0ν z).
Proof. By Lemma 7.6, we have that L(x, y) = µ + L(w, 0ν z).
This immediately shows that d(x, y) ≥ d(w, 0ν z), since the above statement implies, for any
prefixes w̃, z̃ of w, 0ν z, that L(1κ 0µ w̃, 0µ z̃) = µ + L(w̃, z̃) and hence any k-dominant pair (i, j) of w
and 0ν z gives rise to a (µ + k)-dominant pair (κ + µ + i, µ + j) of x and y.
For the upper bound, we count the number of prefixes x̃, ỹ of x, y corresponding to dominant
pairs. Note that x̃, ỹ have to end in the same symbol to be a dominant pair. Consider first the case
that x̃ = 1k. Hence we must have ỹ = 0µ 0ν z̃ for some prefix z̃ = z[1..ℓ] of z. Clearly, L(x̃, ỹ) =
min{k, #1 (z̃)}. Hence, x̃, ỹ corresponds to a dominant pair if and only if #1 (z̃) = #1 (z[1..ℓ]) = k
and #1 (z[1..ℓ − 1]) < k, i.e., z̃ is determined by the k-th occurrence of “1” in z. Thus, there can
be at most min{κ, #1 (z)} ≤ min{κ, |z|} such dominant pairs.
Consider the case that x̃ = 1κ 0k with k ∈ [µ]. We separately regard the following types of
prefixes of y.
• ỹ = 0ℓ with ℓ ∈ [µ + ν]: By greedy suffix matching, L(x̃, ỹ) = L(1κ 0k , 0ℓ ) = min{k, ℓ}, hence
as above there can be at most µ dominant pairs, since there are only µ choices for k.
• ỹ = 0µ 0ν z̃: We have L(x̃, ỹ) = max{k, L(1κ 0k, z̃)}. To see that this holds, note that the longest common subsequence either matches none of the “1”s of the 1κ-block of x̃, in which case 0k is an LCS of x̃ and ỹ, or otherwise it matches at least one “1” of x̃ to a “1” in ỹ, which means that the LCS deletes all “0”s preceding the first “1” in ỹ, i.e., the whole 0µ+ν-block of ỹ.
If L(x̃, ỹ) = k, then x̃, ỹ cannot correspond to a dominant pair since already the prefix 0k
of ỹ satisfies L(x̃, 0k ) = k = L(x̃, ỹ). Hence x̃, ỹ can only correspond to a dominant pair if
L(x̃, ỹ) = L(1κ 0k , z̃) and hence 1κ 0k , z̃ correspond to a dominant pair of 1κ 0µ , z. This yields
at most d(1κ 0µ , z) dominant pairs.
Finally, consider the case that x̃ = 1κ 0µ w̃ with w̃ a prefix of w. There are no dominant pairs
for ỹ = 0ℓ with ℓ ∈ [µ]: Already for the prefix 1κ 0ℓ of x̃, we have L(1κ 0ℓ , 0ℓ ) = ℓ = L(x̃, ỹ), hence
these prefixes cannot correspond to dominant pairs. It remains to consider ỹ = 0µ z̃ for a prefix z̃
of 0ν z. Again by Lemma 7.6, we have L(x̃, ỹ) = µ + L(w̃, z̃) and hence such dominant pairs are
in one-to-one correspondence with the dominant pairs of w and 0ν z. This yields at most d(w, 0ν z)
further dominant pairs.
By summing up over the three cases, we conclude that there are at most min{κ, |z|} + µ +
d(1κ 0µ , z) + d(w, 0ν z) dominant pairs.
We can finally pad to LCS(α, {0, 1}).
Lemma 10.21. Let x, y, x′, y′, τ be as in Lemma 10.19. We set κ := ⌊M/n⌋, ∆̃ := max{∆, |y′|}, and m̃ := max{m, |y′|} and define
x′′ = 1κ+∆̃ 0m̃ x′,
y′′ = 1κ 0m̃ y′.
Then x′′ , y ′′ is an instance of LCS(α, {0, 1}). Moreover, we can compute a number τ ′ in time O(n)
such that L(x′′ , y ′′ ) = τ ′ + L(x, y).
Proof. Note that κ = O(M/n) = O(m) by the parameter relation M ≤ mn. Lemma 10.19(i) yields |x′|, |y′| = O(m) and hence m̃ = Θ(m) and ∆̃ = Θ(∆) (since αm ≤ α∆ = 1). Thus, |x′′| = κ + ∆̃ + m̃ + |x′| = Θ(∆) + O(m) = Θ(n) since α∆ = 1. Furthermore, |y′′| = κ + m̃ + |y′| = m̃ + O(m) = Θ(m).
Observe that ∆̃ has been defined such that |x′′| ≥ |y′′|. By greedy prefix matching and Lemma 10.20, we obtain
L(x′′, y′′) = κ + L(1∆̃ 0m̃ x′, 0m̃ y′) = κ + m̃ + L(x′, y′).     (12)
Since L(x′, y′) = τ + L(x, y), we satisfy the last claim by setting τ′ := κ + m̃ + τ. Moreover, L(x′′, y′′) = m̃ + O(m) = Θ(m) = Θ(L) since |x′|, |y′| ≤ O(m) and αL = αm. Furthermore, (12) yields ∆(x′′, y′′) = ∆̃ + (|x′| − L(x′, y′)) = Θ(∆) + O(m) = Θ(∆) and δ(x′′, y′′) = |y′| − L(x′, y′) = Θ(δ) by Lemma 10.19(iv).
For the dominant pairs, we apply Lemma 7.1 to bound d(x′′, y′′) = κ + d(1∆̃ 0m̃ x′, 0m̃ y′). To bound the latter term, note that Lemma 10.20 yields d(1∆̃ 0m̃ x′, 0m̃ y′) ≥ d(x′, y′) = Ω(d) by Lemma 10.19(iii). For the upper bound, we first recall that y′ = 0δ̃ b0ℓ y and that #1(y′) · |b0ℓ y| = O(d) by Lemma 10.19(ii). Hence we have d(1∆̃ 0m̃, b0ℓ y) ≤ 5 · L(1∆̃ 0m̃, b0ℓ y) · #1(b0ℓ y) = O(|b0ℓ y| · #1(y′)) = O(d). We can finally compute, using Lemma 10.20,
d(1∆̃ 0m̃ x′, 0m̃ y′) ≤ min{∆̃, |y′|} + m̃ + d(1∆̃ 0m̃, b0ℓ y) + d(x′, y′) ≤ |y′| + m̃ + O(d) + d(x′, y′) = O(d),
where the last bound follows from |y′|, m̃ = O(m) = O(d) by the relation d ≥ L = Ω(m) (since αL = αm) and d(x′, y′) = O(d).
It remains to count the number of matching pairs. We have #0(x′′), #0(y′′) ≤ |x′| + |y′| + m̃ = O(m), as well as #1(x′′) = κ + ∆̃ + #1(x′) = Θ(∆) + O(m) = Θ(n) (since α∆ = 1) and #1(y′′) = κ + #1(y′) = κ + O(M/n) = Θ(M/n) by Lemma 10.19(ii). Thus M(x′′, y′′) = #1(x′′)#1(y′′) + #0(x′′)#0(y′′) = Θ(M) + O(m2) = Θ(M), where the last bound follows from M ≥ L2/|Σ| = Ω(m2) since αL = αm.
Combining Lemmas 10.19 and 10.21 with Observation 10.18 finally proves Lemma 10.17.
10.3.3 Case α∆ > αm = αL and αδ ≤ αM − 1
In this section, we consider the case where αL = αm < α∆ and αδ ≤ αM − 1. In this case, we have
∆ = Θ(n) ≫ m and δ ≤ O(M/n). Since M/n ≤ O(m) ≤ O(∆), the fastest known algorithm runs
in time Õ(min{d, δM/n}). Consequently, in this regime Lemma 10.13 simplifies to the following
statement.
Lemma 10.22. Let (α, {0, 1}) be a parameter setting satisfying Table 2 with αL = αm < α∆
and αδ ≤ αM − 1. There is a constant γ ≥ 1 such that any algorithm for LCSγ (α, Σ) takes time
min{d, δM/n}1−o(1) unless OVH fails.
As in the previous section, to prove this result we cannot simply pad instances of LCS≤ (α, {0, 1})
to LCS(α, {0, 1}), since the desired running time bound min{d, δM/n} is not monotone. Instead,
we start from instances of a suitably chosen different parameter setting LCS≤ (α′ , {0, 1}) with
α′L = α′m , i.e., we invoke Lemma 9.8.
Observation 10.23. Let (α, {0, 1}) be a non-trivial parameter setting satisfying Table 2 with
αL = αm < α∆ and αδ ≤ αM − 1. Define α′ := α′ (α) by
α′M = 1 + α′m ,
α′δ = min{αδ , αd /2},
α′L = α′m = min{αM − 1, αd − α′δ },
α′d = min{αδ + αM − 1, αd },
α′∆ = 1,
α′Σ = 0,
Then the parameter setting (α′, {0, 1}) satisfies Table 2. Furthermore, there is some constant γ ≥ 1 such that no algorithm solves LCSγ≤ (α′, {0, 1}) in time nα′d(1−o(1)) = min{d, δM/n}1−o(1) unless OVH fails. This holds even restricted to instances (n, x, y) with |x|, |y| ≤ γ · nα′m = O(min{M/n, max{d/δ, √d}}).
Proof. We only discuss the inequalities from Table 2 that are not straight-forward to verify. To see
α′δ ≤ α′m , note that αδ ≤ αM − 1 by assumption and αd /2 = αd − αd /2 ≤ αd − α′δ . The inequality
α′L ≤ α′d follows from αM − 1 ≤ αM − 1 + αδ and αd − α′δ ≤ αd . Furthermore, α′d ≤ 2α′L follows
from αδ + αM − 1 ≤ 2(αM − 1) (by assumption) and αd = 2(αd − αd /2) ≤ 2(αd − α′δ ). From
α′d ≤ 2α′L and α′L = α′m we also obtain α′M = 1 + α′m = 1 + α′L ≥ 1 + α′d − α′L , which corresponds
to the parameter relation M ≥ nd/(5L) that only holds for Σ = {0, 1}. Finally, α′M ≥ α′d follows
from α′M = 1 + α′m = min{αM , 1 + αd − α′δ } ≥ min{αM , αd } by α′δ ≤ αδ ≤ 1 and similarly
α′d = min{αδ + αM − 1, αd } ≤ min{αM , αd }.
Lemma 9.8 shows that some γ ≥ 1 exists such that LCSγ≤ (α′, {0, 1}) cannot be solved in time min{nα′d, nα′δ+α′m, nα′δ+α′∆}(1−o(1)), even restricted to instances (n, x, y) with |x|, |y| ≤ γ · nα′m = O(min{M/n, max{d/δ, √d}}). We simplify the running time bound by noting that α′∆ = 1 ≥ α′m, so that nα′δ+α′m ≤ nα′δ+α′∆. Moreover, we have α′δ + α′m = α′d = min{αδ + αM − 1, αd}. Indeed, if αδ ≤ αd/2, we have α′δ = αδ and hence α′δ + α′m = min{αδ + αM − 1, αd} = α′d. Otherwise, αd/2 < αδ ≤ αM − 1, forcing α′δ = αd/2 and α′m = αd/2, which yields α′δ + α′m = αd = min{αδ + αM − 1, αd} = α′d. Thus, min{α′d, α′δ + α′m, α′δ + α′∆} = α′d and the lower bound simplifies to nα′d(1−o(1)) = min{δM/n, d}1−o(1).
In this section, to pad the number of dominant pairs, we will construct instances x′ = a0ℓ x =
(01)R+S 0ℓ x, y ′ = b0ℓ y = 0R (01)S 0ℓ y, where we choose R, S proportional to nαR , nαS with
αS := min{αM − 1, max{αd − αδ , αd /2}},
αR := αd − αS .
Note that the use of αS , αR is a slight abuse of notation, since R, S are not actual input parameters,
but αS , αR depend only on α. We will later set R = c·nαR , S = c′ ·nαS with suitably large constants
c, c′ . However, depending on whether αS ≤ αR or αS > αR , we will extend and analyze the basic
construction differently. We start with the simpler case of αS ≤ αR .
Lemma 10.24 (Construction for αS ≤ αR ). Let (α, {0, 1}) be a parameter setting satisfying Table 2
with αL = αm < α∆ , αδ ≤ αM − 1, and αS ≤ αR . Given an instance (n, x, y) of LCSγ≤ (α′ , {0, 1})
with |y| ≤ |x| = O(min{M/n, max{d/δ, √d}}), we use the parameters
S = max{|x|, ⌊nαS ⌋},
R = ⌊d/S⌋,
ℓ = R + S + |x| + |y|,
β = δ,
to instantiate x′ = a0ℓ x = (01)R+S 0ℓ x and y ′ = 0β b0ℓ y = 0β 0R (01)S 0ℓ y as in Lemma 10.9. We
furthermore set m̃ := max{m, δ + R + 2S + ℓ + |y|} and κ := ⌊M/n⌋ and define
x′′ := 1∆+κ 0m̃ x′ = 1∆+κ 0m̃ a 0ℓ x = 1∆+κ 0m̃ (01)R+S 0ℓ x,
y′′ := 1κ 0m̃ y′ = 1κ 0m̃ 0δ b 0ℓ y = 1κ 0m̃ 0δ 0R (01)S 0ℓ y.
Then x′′, y′′ is an instance of LCSγ′ (α, {0, 1}) for some constant γ′ ≥ 1 and can be constructed in
time O(n) together with some integer τ such that L(x′′ , y ′′ ) = τ + L(x, y).
Proof. By greedy prefix matching and Lemma 7.6, we obtain
L(x′′ , y ′′ ) = κ + L(1∆ 0m̃ x′ , 0m̃ y ′ ) = κ + m̃ + L(x′ , y ′ ) = κ + m̃ + R + 2S + ℓ + L(x, y),
(13)
where the last equality follows from Lemma 10.9, which applies since S ≥ |x|. Clearly, x′′ , y ′′ and
τ = κ + m̃ + R + 2S + ℓ can be computed in time O(n).
It remains to verify that x′′, y′′ is an instance of LCSγ′ (α, {0, 1}) for some γ′ ≥ 1. We observe that |x|, |y| = O(nαS) by assumption, and hence S = Θ(nαS) = Θ(min{M/n, max{d/δ, √d}}). Thus, R = Θ(d/S) = Θ(nαR) = O(dn/M + min{δ, √d}) = O(m), where the last bound follows from
the parameter relations M ≥ nd/(5L) = Ω(nd/m) (since αL = αm ) and δ ≤ m. By assumption
αS ≤ αR , we have S = O(R) = O(m). Furthermore, we have κ = O(M/n) = O(m) by the relation
M ≤ mn and m̃ = Θ(m) by R, S, |x|, |y|, ℓ = O(m) and δ ≤ m. Thus, |x′′ | = κ + ∆ + m̃ + 2(R +
S) + ℓ + |x| = Θ(∆ + m) = Θ(n) and |y ′′ | = κ + m̃ + δ + R + 2S + ℓ + |y| = Θ(m). By (13), we also
conclude that L(x′′ , y ′′ ) = m̃ + O(m) = Θ(m) = Θ(L) by the assumption αL = αm .
Note that by ∆ ≥ δ, |a| ≥ |b| and |x| ≥ |y|, we have |x′′ | ≥ |y ′′ |. Hence by (13), ∆(x′′ , y ′′ ) =
∆ + R + (|x| − L(x, y)) = Θ(∆), since R, |x| = O(m) = O(∆) by α∆ > αm . Likewise, (13) yields
δ(x′′, y′′) = δ + (|y| − L(x, y)) = δ + δ(x, y) = Θ(δ) since δ(x, y) = O(δ) by δ(x, y) = O(nα′δ) and
α′δ ≤ αδ .
For the dominant pairs, we first compute
d(1∆ 0m̃ x′ , 0m̃ y ′ ) ≥ d(x′ , y ′ ) ≥ d(a, 0δ b) ≥ R · S = Ω(d),
using Lemma 10.20, Observation 7.2, and Lemma 7.3. For a corresponding upper bound, we use
Lemma 10.20 to obtain
d(1∆ 0m̃ x′ , 0m̃ y ′ ) = d(1∆ 0m̃ x′ , 0m̃ 0δ b0ℓ y) ≤ |y ′ | + m̃ + d(1∆ 0m̃ , b0ℓ y) + d(x′ , y ′ ).
By Lemma 6.8 we have d(1∆ 0m̃ , b0ℓ y) ≤ 5 · L(1∆ 0m̃ , b0ℓ y) · #1 (b0ℓ y) = O(|b0ℓ y| · (S + |y|)) = O(R ·
S) = O(d). Since |y ′ | + m̃ = O(m) = O(d) (using d ≥ L = Ω(m) since αL = αm ) and d(x′ , y ′ ) ≤
5 · L(x′ , y ′ ) · #1 (y ′ ) = O(|x′ | · #1 (y ′ )) = O(R · S) = O(d), we conclude that d(1∆ 0m̃ x′ , 0m̃ y ′ ) = Θ(d).
Finally, Lemma 7.1 yields d(x′′ , y ′′ ) = κ + d(1∆ 0m̃ x′ , 0m̃ y ′ ) = Θ(d) + O(m) = Θ(d), as desired.
It remains to count the matching pairs. Note that #0 (x′′ ), #0 (y ′′ ) = O(m̃ + |x′ | + |y ′ |) = O(m).
Furthermore #1 (x′′ ) = ∆ + κ + #1 (x′ ) = Θ(∆) + O(m) = Θ(n) (since α∆ > αm implies α∆ = 1)
and #1 (y ′′ ) = κ + S + #1 (y) = Θ(κ) = Θ(M/n), where we used S, |y| = O(M/n) and κ = Θ(M/n)
(since M ≥ n). Thus, M(x′′, y′′) = #1(x′′)#1(y′′) + #0(x′′)#0(y′′) = Θ(n · M/n) + O(m2) = Θ(M)
using that M ≥ L2 /|Σ| = Ω(m2 ) by αL = αm . Note that indeed Σ(x′′ , y ′′ ) = {0, 1}.
Before giving the construction for the case αS > αR , we present a technical lemma that is
similar to the dominant pair reduction technique of Lemma 7.8.
Lemma 10.25. Let x′ = a0ℓ x = (01)R+S 0ℓ x, y ′ = b0ℓ y = 0R (01)S 0ℓ y be an instance of Lemma 10.2
and R ≥ |y| − L(x, y). We set t := R + |y ′ | + 1 and define
x̄ := 1t 0t y′ 0R 1t+∆ 0t x′,
ȳ := 0R 1t 0t y′.
Then
(i) L(x̄, ȳ) = R + 2t + L(x′ , y ′ ),
(ii) d(x̄, ȳ) ≤ (2t + |y ′ |)(R + 1) + R2 ,
(iii) d(x̄, ȳ) ≥ R · S.
Proof. We first prove the following property.
(∗) For any prefixes x̃, ỹ of x′ , y ′ , we have
L(1t 0t y ′ 0R 1t+∆ 0t x̃, 0R 1t 0t ỹ) = max{2t + |ỹ|, R + 2t + L(x̃, ỹ)}.
Note that the lower bound immediately follows from either matching 1t 0t ỹ with 1t 0t y ′ , or by
matching 0R 1t 0t ỹ with 0R 1t+∆ 0t x̃. For the upper bound, fix an LCS and observe that it cannot
match symbols in the 1t -prefix of x̄ and symbols in the 0R -prefix of ȳ, so at least one of the two
prefixes stays unmatched. Thus,
L(1t 0t y ′ 0R 1t+∆ 0t x̃, 0R 1t 0t ỹ) ≤ max{L(0t y ′ 0R 1t+∆ 0t x̃, 0R 1t 0t ỹ), L(1t 0t y ′ 0R 1t+∆ 0t x̃, 1t 0t ỹ)}
= max{R + L(0t−R y ′ 0R 1t+∆ 0t x̃, 1t 0t ỹ), 2t + |ỹ|},
where the second line follows from greedy prefix matching. Setting x̂ := 0t−R y ′ 0R 1t+∆ 0t x̃ and
ŷ := 1t 0t ỹ, it remains to provide the upper bound R + L(x̂, ŷ) ≤ max{2t + |ỹ|, R + 2t + L(x̃, ỹ)}
to prove (∗). Assume that an LCS z of x̂, ŷ matches less than t − R symbols of the 1t -prefix of ŷ.
Then |z| ≤ t − R + L(x̂, 0t ỹ) ≤ 2t − R + |ỹ|, yielding R + L(x̂, ŷ) ≤ 2t + |ỹ|. Hence, assume instead
that at least t − R symbols of the 1t -prefix of ŷ are matched. Since the number of “1”s in the prefix
0t−R y ′ 0R of x̂ is only #1 (y ′ ) ≤ |y ′ | < t − R, all zeroes of this prefix have to be deleted, resulting in
L(x̂, ŷ) = |z| ≤ L(1#1(y′)+t+∆ 0t x̃, 1t 0t ỹ)
= t + L(1#1(y′)+R+∆ 0t x̃, 0t ỹ)
= 2t + L(x̃, ỹ),
where the second line follows from greedy prefix matching and the third follows from Lemma 7.6.
Thus, we have verified R + L(x̂, ŷ) ≤ max{2t + |ỹ|, R + 2t + L(x̃, ỹ)}, which implies (∗).
As an immediate consequence, (∗) yields L(x̄, ȳ) = max{2t + |y ′ |, R + 2t + L(x′ , y ′ )} = R + 2t +
L(x′ , y ′ ) since R ≥ |y| − L(x, y) = |y ′ | − L(x′ , y ′ ) (where the equality follows from Lemma 10.2).
This proves (i).
For (ii), an application of Lemma 7.7 yields d(x̄, ȳ) ≤ (2t + |y ′ |)(R + 1) + R2 .
Finally, for (iii) we consider, analogous to Lemma 7.3, for any 0 ≤ s ≤ S and s ≤ r ≤ R + s,
the prefixes x̃ = (01)r of x′ and ỹ = 0R (01)s of y ′ . Then by (∗),
L(1t 0t y ′ 0R 1t+∆ 0t x̃, 0R 1t 0t ỹ) = max{2t + |ỹ|, R + 2t + L(x̃, ỹ)}
= max{2t + R + 2s, R + 2t + r + s} = (R + 2t) + r + s,
where we used Lemma 7.3(∗) for the second equality and r ≥ s for the last equality. Analogously
to the proof of Lemma 7.3(ii), this yields d(x̄, ȳ) ≥ R · S, since any feasible choice of r, s gives rise
to a unique dominant pair.
We can now give the construction for αS > αR . Recall that
αS := min{αM − 1, max{αd − αδ , αd /2}},
αR := αd − αS .
Lemma 10.26 (Construction for αS > αR ). Let (α, {0, 1}) be a parameter setting satisfying
Table 2 with αL = αm < α∆, αδ ≤ αM − 1, and αS > αR. Given an instance (n, x, y) of LCSγ≤ (α′, {0, 1}) with |y| ≤ |x| = O(min{M/n, max{d/δ, √d}}), we can construct an instance x(4), y(4) of LCSγ′ (α, {0, 1}) (for some constant γ′ ≥ 1) in time O(n) together with some integer τ
such that L(x(4) , y (4) ) = τ + L(x, y).
Proof. We first set ℓ1 := |x| to define
x(1) := 0ℓ1 x,
y(1) := 1δ 0ℓ1 y,
which pads the parameter δ. For convenience, define δ1 := |y (1) | − L(x(1) , y (1) ). We use the
parameters
S := ⌊nαS ⌋,
ℓ2 := |x(1) | + |y (1) |,
R := max{⌊nαR ⌋, δ1 },
to define, as in Lemma 10.2,
x(2) := a 0ℓ2 x(1) ,
y (2) := b 0ℓ2 y (1) .
We then use the dominant pair reduction trick of Lemma 10.25, that additionally pads ∆, and
define
x(3) := 1ℓ3 0ℓ3 y (2) 0R 1ℓ3 +∆ 0ℓ3 x(2) ,
y(3) := 0R 1ℓ3 0ℓ3 y(2),
where ℓ3 := R + |y (2) | + 1. The final instance is then constructed as
x(4) = 1κ 0m x(3) ,
y (4) = 1κ 0m y (3) ,
where κ := ⌊M/n⌋.
We first compute
L(x(4) , y (4) ) = κ + m + L(x(3) , y (3) )
= κ + m + R + 2ℓ3 + L(x(2) , y (2) )
= κ + m + 2R + 2S + ℓ2 + 2ℓ3 + L(x(1), y(1))
= κ + m + 2R + 2S + ℓ1 + ℓ2 + 2ℓ3 + L(x, y),
(14)
where we used greedy prefix matching in the first line, Lemma 10.25(i) in the second, Lemma 10.2 in
the third, and Lemma 7.6 in the last line. Note that x(4) , y (4) , and τ := κ+m+2R+2S +ℓ1 +ℓ2 +2ℓ3
can be computed in time O(n).
It remains to verify that x(4), y(4) is an instance of LCSγ′ (α, {0, 1}) for some γ′ ≥ 1. We
first observe that |x|, |y| = O(nαS ) and hence |x(1) |, |y (1) | = O(nαS + δ). Note that by definition
S = Θ(nαS ).
Assume for contradiction that αR < αδ . Note that by definition αR = αd − αS = max{αd −
αM + 1, min{αδ , αd /2}} and hence αR < αδ only holds if αd − αM + 1 ≤ αR = αd /2 < αδ . But
then αd − αδ ≤ αd /2 < αδ ≤ αM − 1. This forces αS = αd /2 by definition, which contradicts the
assumption αS > αR . We therefore obtain αR ≥ αδ .
Note that δ1 = |y(1)| − L(x(1), y(1)) = δ + δ(x, y) by Lemma 7.6. Since δ(x, y) ≤ O(nα′δ) and
α′δ ≤ αδ , this yields δ1 = Θ(δ), and hence R = Θ(nαR ). Thus, R = O(S), since αR < αS . It
is immediate that |x(2) |, |y (2) | = O(R + S + |x(1) | + |y (1) |) = O(S). Furthermore, it follows that
|x(3) | = ∆ + O(S) and |y (3) | = O(S). Finally |y (4) | = m + O(M/n + S) = Θ(m), where the last
bound follows from S = O(M/n) = O(m) by the parameter relation M ≤ mn. Likewise, |x(4) | =
m+∆+O(M/n+S) = Θ(m+∆) = Θ(n). Finally, L(x(4) , y (4) ) = m+O(M/n+S) = Θ(m) = Θ(L)
by (14) and αL = αm .
Since |x| ≥ |y|, |a| ≥ |b| and ∆ ≥ δ it is easy to see that |x(4) | ≥ |y (4) |. Hence (14) yields
∆(x(4) , y (4) ) = 2ℓ3 + |y (2) | + ∆ + R + ∆(x, y) = ∆ + O(m) = Θ(∆),
since in particular ∆(x, y) ≤ |x| = O(m). Similarly, δ(x(4) , y (4) ) = δ + δ(x, y) = Θ(δ) as above.
For the number of dominant pairs, we observe that Lemma 10.25(iii) yields d(x(3) , y (3) ) ≥ R·S =
Ω(d). From Lemma 10.25(ii), the corresponding upper bound d(x(3) , y (3) ) ≤ (2ℓ3 + |y (2) |) · (R + 1) +
R2 = O(S · R + R2 ) = O(d) follows, since R = O(S) by αR < αS . Thus, by Lemma 7.1 we obtain
d(x(4) , y (4) ) = κ + m + d(x(3) , y (3) ) = Θ(d) + O(m) = Θ(d) by d ≥ L = Ω(m) since αL = αm .
It remains to count the number of matching pairs. We have #1 (y (4) ) = κ + #1 (y (3) ) = Θ(M/n),
since κ = Θ(M/n) by the parameter relation M ≥ n, and #1 (y (3) ) ≤ |y (3) | = O(S) = O(M/n).
Since |y (4) | = O(m), we have #0 (y (4) ) = O(m). Note that #1 (x(4) ) = κ + 2ℓ3 + ∆ + |x(2) | + |y (2) | =
∆ + O(S + κ) = Θ(n), since α∆ > αm implies α∆ = 1. Finally, #0 (x(4) ) = m + 2ℓ3 + R + #0 (y (2) ) +
#0 (x(2) ) = O(m). Thus, we obtain M (x(4) , y (4) ) = #1 (x(4) ) · #1 (y (4) ) + #0 (x(4) ) · #0 (y (4) ) =
Θ(n · M/n) + O(m2 ) = Θ(M ) by the relation M ≥ L2 /|Σ| = Ω(m2 ), since αL = αm .
Note that combining Lemmas 10.24 and 10.26 with Observation 10.23 yields Lemma 10.22.
11 New Algorithm for Binary Alphabet
In this section we prove Theorem 3.5, i.e., we assume that Σ = {0, 1} and provide an algorithm
running in time O(n + δM/n). More precisely, for any input x, y, by #0 (x) + #1 (x) = n we have
max{#0 (x), #1 (x)} ≥ n/2, so without loss of generality assume #1 (x) ≥ n/2 (otherwise exchange
0 and 1). Since M = #0 (x) · #0 (y) + #1 (x) · #1 (y), it follows that #1 (y) ≤ 2M/n. Hence, it suffices
to design an algorithm running in time O(n + δ · #1 (y)).
Theorem 11.1. For Σ = {0, 1}, LCS has an O(n + δ · #1 (y)) algorithm.
We preprocess x in time O(n) to support the following queries. For σ ∈ {0, 1}, 0 ≤ i ≤ n, and
t ≥ 1, Next_σ^t(i) returns the position of the t-th occurrence of symbol σ after position i in x, i.e., Next_σ^t(i) = i′ if and only if x[i′] = σ and #σ(x[i + 1..i′]) = t (if such a number i′ does not exist then Next_σ^t(i) := ∞). For convenience, we let Next_σ^0(i) := i. For t = 1 we also write Next_σ^1(i) = Next_σ(i). It is easy to implement Next_σ^t in time O(1) using rank/select data structures on the 0’s and 1’s in x, which can be built in time O(n) [52, 73]. The symbol succeeding i is Next_Σ(i) := i + 1 if i + 1 ≤ n, or ∞ otherwise, which can be computed in time O(1).
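As an illustration of this interface only: the sketch below uses sorted position lists with binary search, i.e. O(log n) per query, rather than the O(1) rank/select structures referenced above.

```python
# Minimal sketch of the Next oracle (assumed interface; not the rank/select version).
import bisect

class NextOracle:
    def __init__(self, x: str):
        self.n = len(x)
        # pos[c] = sorted 1-indexed positions of symbol c in x
        self.pos = {"0": [], "1": []}
        for i, c in enumerate(x, start=1):
            self.pos[c].append(i)

    def next(self, sigma: str, i: int, t: int = 1):
        # position of the t-th occurrence of sigma strictly after position i, or infinity
        if t == 0:
            return i
        lst = self.pos[sigma]
        k = bisect.bisect_right(lst, i) + (t - 1)
        return lst[k] if k < len(lst) else float("inf")

    def next_any(self, i):
        # Next_Sigma(i): the position of the symbol succeeding i
        return i + 1 if i + 1 <= self.n else float("inf")

# example: in x = 0110100, the 2nd 0 after position 2 sits at position 6
oracle = NextOracle("0110100")
assert oracle.next("0", 2, t=2) == 6
```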
Let λ = #1 (y) and 1 ≤ j1 < . . . < jλ ≤ m be the positions of all 1’s in y, and for convenience
set j0 := 0. We can assume that the last symbol in each of x and y is a 1, in particular jλ = m,
because appending symbol 1 to both x and y increases the LCS by exactly 1 (by Lemma 7.1). We
write zℓ for the number of 0’s between jℓ−1 and jℓ in y, i.e., y = 0z1 10z2 1 . . . 10zλ 1 with zℓ ≥ 0.
Consider the dynamic programming table T that contains for all 0 ≤ ℓ ≤ λ and k ≥ 0 (it
remains to fix an upper bound on k) the value
T [ℓ, k] = min{0 ≤ i ≤ n | L(x[1..i], y[1..jℓ ]) = jℓ − k},
(15)
where we set min ∅ = ∞. Observe that from T we can read off the LCS length as L(x, y) =
m − min{k | T [λ, k] < ∞}. In particular, we may initialize δ̃ := 1, and compute the table T for
0 ≤ ℓ ≤ λ and 0 ≤ k ≤ δ̃. If there is no 0 ≤ k ≤ δ̃ with T [λ, k] < ∞ then we double δ̃ and repeat.
This exponential search ends once we find a value δ̃ ∈ [δ, 2δ).
Next we show how to recursively compute T [ℓ, k]. For ℓ = 0, we clearly have T [0, 0] = 0 and
T [0, k] = ∞ for any k > 0. For ℓ > 0, the following dynamic programming recurrence computes
T [ℓ, k], as shown in Lemma 11.2 below.
T[ℓ, k] = min{ min{ Next_Σ(Next_0^{zℓ−k+k′}(T[ℓ − 1, k′])) | max{0, k − zℓ} ≤ k′ < k },
Next_1(Next_0^{zℓ}(T[ℓ − 1, k])),
T[ℓ − 1, k − zℓ − 1] }.     (16)
Note that the third line only applies if k − zℓ − 1 ≥ 0, as T [ℓ′ , k′ ] = ∞ for k′ < 0.
Let us discuss how to efficiently implement the above algorithm, assuming we already have
computed the values T [ℓ − 1, k], 0 ≤ k ≤ δ̃. Clearly, we can evaluate the second and third line in
constant time, using the Next data structures that we built in the preprocessing. For the first line,
observe that Next_0^t(i) is the position of the (t + #0(x[1..i]))-th 0 in x. Hence, Next_0^{zℓ−k+k′}(T[ℓ − 1, k′]) is the position of the (zℓ − k + k′ + #0(x[1..T[ℓ − 1, k′]]))-th 0 in x, so it is minimized if k′ + #0(x[1..T[ℓ − 1, k′]]) is minimized (here we interpret #0(x[1..∞]) as ∞). Thus, it suffices to compute a range minimum query over the interval [max{0, k − zℓ}, k) on the array Aℓ[0..δ̃] with Aℓ[k′] := k′ + #0(x[1..T[ℓ − 1, k′]]). From the answer Aℓ[r] to this range minimum query we can infer T[ℓ, k] in time O(1). Specifically, the first line evaluates to the next symbol after the position of the (zℓ − k + Aℓ[r])-th 0 in x, i.e., Next_Σ(Next_0^{zℓ−k+Aℓ[r]}(0)).
Note that range minimum queries can be performed in time O(1), after a preprocessing of O(|Aℓ|) = O(δ̃) [30, 76], where |Aℓ| is the size of array Aℓ. Since we can reuse the array Aℓ for all 0 ≤ k ≤ δ̃, we spend (amortized) preprocessing time O(1) per entry of T[ℓ, ·]. In total, this
yields time O(δ̃ · λ) = O(δ̃ · #1 (y)) to build the table T . The exponential search for δ yields time
O(δ · #1 (y)). Adding the preprocessing time O(n), we obtain an O(n + δ · #1 (y)) algorithm. It
remains to prove correctness of the recursive formula (16).
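Before turning to the correctness proof, the following sketch puts the pieces together: table (15), recurrence (16), and the exponential search for δ̃. It is an illustration under simplifying assumptions, not the implementation analyzed above — in particular the first line of (16) is evaluated naively instead of via range minimum queries, so it runs in roughly O(n + δ²·#1(y)·log n) time.

```python
# Minimal sketch of the binary-alphabet LCS algorithm (table (15), recurrence (16)).
import bisect

INF = float("inf")

def next0(pos0, i, t):
    # position of the t-th 0 strictly after position i (1-indexed), or INF
    if i == INF or t == 0:
        return i
    k = bisect.bisect_right(pos0, i) + (t - 1)
    return pos0[k] if k < len(pos0) else INF

def lcs_binary(x: str, y: str) -> int:
    # wlog #1(x) >= n/2 (only needed for the running-time bound); otherwise flip 0 <-> 1
    if 2 * x.count("1") < len(x):
        flip = str.maketrans("01", "10")
        x, y = x.translate(flip), y.translate(flip)
    x, y = x + "1", y + "1"          # appending a 1 to both strings raises the LCS by exactly 1
    n, m = len(x), len(y)
    pos0 = [i for i, c in enumerate(x, 1) if c == "0"]
    pos1 = [i for i, c in enumerate(x, 1) if c == "1"]
    ones = [j for j, c in enumerate(y, 1) if c == "1"]   # j_1 < ... < j_lambda = m
    z = [ones[0] - 1] + [ones[l] - ones[l - 1] - 1 for l in range(1, len(ones))]

    def next1(i):
        if i == INF:
            return INF
        k = bisect.bisect_right(pos1, i)
        return pos1[k] if k < len(pos1) else INF

    def next_any(i):
        return i + 1 if i != INF and i + 1 <= n else INF

    dlt = 1
    while True:                                          # exponential search for delta
        T = [0] + [INF] * dlt                            # row T[0, .] of table (15)
        for l in range(1, len(ones) + 1):
            zl = z[l - 1]
            row = [INF] * (dlt + 1)
            for k in range(dlt + 1):
                best = next1(next0(pos0, T[k], zl))      # second line of (16)
                if k - zl - 1 >= 0:
                    best = min(best, T[k - zl - 1])      # third line of (16)
                for kp in range(max(0, k - zl), k):      # first line of (16), evaluated naively
                    best = min(best, next_any(next0(pos0, T[kp], zl - k + kp)))
                row[k] = best
            T = row
        for k in range(dlt + 1):
            if T[k] < INF:
                return (m - k) - 1                       # undo the appended 1s
        dlt *= 2

assert lcs_binary("0101100", "110100") == 5
```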
Lemma 11.2. Table (15) follows the recursive formula (16).
Proof. For any 1 ≤ ℓ ≤ λ and 0 ≤ k ≤ δ̃ we show that the value T [ℓ, k] of (15) follows the recursive
formula (16). Let i = T [ℓ, k] and let i′ be minimal with
L(x[1..i], y[1..jℓ ]) = L(x[1..i′ ], y[1..jℓ−1 ]) + L(x[i′ + 1..i], y[jℓ−1 + 1..jℓ ]).
(17)
Let k′ = jℓ−1 − L(x[1..i′ ], y[1..jℓ−1 ]). Then we claim i′ = T [ℓ − 1, k ′ ]. Indeed, since i′ satisfies
the condition L(x[1..i′ ], y[1..jℓ−1 ]) = jℓ−1 − k′ of (15) we have i′ ≥ T [ℓ − 1, k′ ]. Moreover, if we
had i′ > T [ℓ − 1, k′ ] then we could replace i′ by T [ℓ − 1, k′ ], as both values satisfy the condition
L(x[1..i′ ], y[1..jℓ−1 ]) = jℓ−1 − k′ , contradicting minimality of i′ .
Let r = L(x[i′ + 1..i], y[jℓ−1 + 1..jℓ ]). By (17) we have jℓ − k = jℓ−1 − k′ + r, and we obtain
r = 1 + zℓ − k + k′ using zℓ = jℓ − jℓ−1 − 1. Note that i ≥ i′ is the smallest value attaining
L(x[i′ + 1..i], y[jℓ−1 + 1..jℓ ]) = r. Indeed, if there was a smaller value i′ ≤ i∗ < i with L(x[i′ +
1..i∗ ], y[jℓ−1 + 1..jℓ ]) = r, then L(x[1..i∗ ], y[1..jℓ ]) ≥ L(x[1..i′ ], y[1..jℓ−1 ]) + L(x[i′ + 1..i∗ ], y[jℓ−1 +
1..jℓ ]) = L(x[1..i′ ], y[1..jℓ−1 ]) + L(x[i′ + 1..i], y[jℓ−1 + 1..jℓ ]) = L(x[1..i], y[1..jℓ ]) = jℓ − k. Then
there also exists 0 ≤ i′′ ≤ i∗ < i with equality, i.e., L(x[1..i′′ ], y[1..jℓ ]) = jℓ − k. Indeed, if
L(x[1..i∗ ], y[1..jℓ ]) > jℓ − k then we can repeatedly reduce i∗ by 1, this reduces L(x[1..i∗ ], y[1..jℓ ])
by at most 1, and we eventually reach jℓ − k since L(x[1..t], y[1..jℓ ]) = 0 for t = 0. However,
existence of i′′ < i contradicts minimality of i = T [ℓ, k].
Now we show that i is one of the terms on the right hand side of (16), considering three cases.
Case 1: If 1 ≤ r < zℓ +1, then the LCS of x[i′ +1..i] and y[jℓ−1 +1..jℓ ] = 0zℓ 1 consists of r −1 0’s
and one additional symbol which is 0 or 1. Thus, the smallest i attaining r is Next_Σ(Next_0^{r−1}(i′)), accounting for r − 1 0’s and one additional symbol. Since r − 1 = zℓ − k + k′ and i′ = T[ℓ − 1, k′], we have shown that i = T[ℓ, k] is of the form Next_Σ(Next_0^{zℓ−k+k′}(T[ℓ − 1, k′])) for some k′. Observe
that 1 ≤ r < zℓ + 1 implies k − zℓ ≤ k′ < k. We clearly also have k′ ≥ 0. This corresponds to the
first line of (16).
Case 2: If r = zℓ + 1 then x[i′ + 1..i] contains y[jℓ−1 + 1..jℓ] = 0zℓ 1. Thus, i = Next_1(Next_0^{zℓ}(i′)), accounting for zℓ 0’s followed by a 1. In this case, we have k′ = k + r − zℓ − 1 = k so that i = T[ℓ, k] is of the form Next_1(Next_0^{zℓ}(T[ℓ − 1, k])). This corresponds to the second line of (16).
Case 3: If r = 0 then i = i′ , since the smallest value i ≥ i′ attaining L(x[i′ +1..i], y[jℓ−1 +1..jℓ ]) =
0 is i′ . In this case, we have k′ = k − zℓ − 1, and we obtain T [ℓ, k] = i = i′ = T [ℓ − 1, k ′ ] =
T [ℓ − 1, k − zℓ − 1]. This only applies if k − zℓ − 1 ≥ 0. This corresponds to the third line of (16).
This case distinction shows that i is one of the terms on the right hand side of (16). Also observe
that we have i ≤ Next_1(Next_0^{zℓ}(T[ℓ − 1, k])), since the number Next_1(Next_0^{zℓ}(T[ℓ − 1, k])) is part of the set of which i = T[ℓ, k] is the minimum. Similarly, we have i ≤ Next_Σ(Next_0^{zℓ−k+k′}(T[ℓ − 1, k′]))
for any max{0, k − zℓ } ≤ k′ < k, and i ≤ T [ℓ − 1, k − zℓ − 1] if k − zℓ − 1 ≥ 0. This proves that i
is the minimum over all expressions on the right hand side of (16), proving that i = T [ℓ, k] follows
the recursive formula (16).
12 Strengthening Hardness via BP-SETH
A recent and surprising result by Abboud, Hansen, Virginia Williams and Williams [2] proves conditional lower bounds for LCS and related problems under a natural weaker variant of SETH, called
BP-SETH. In this variant, the role of CNF formulas in SETH is replaced by branching programs,
providing a much more expressive class of Boolean functions – intuitively, the corresponding satisfiability problem becomes much harder. As a consequence, refuting a conditional lower bound based
on BP-SETH would yield stronger algorithmic consequences, strengthening the conditional lower
bound significantly. Furthermore, Abboud et al. show that even a sufficiently strong polylogarithmic
improvement for LCS would imply faster (formula) satisfiability algorithms than currently known.
In this section, we show how to adapt the proofs of our conditional lower bounds to also hold
under BP-SETH. To this end, we first introduce this weaker assumption, briefly state the main
construction of Abboud et al. (specialized to LCS) and then describe the necessary modifications
to our proofs.
Branching Programs and BP-SETH. Branching programs provide a popular model for nonuniform computation. Formally, a branching program P on N variables, length T and width W
consists of a directed graph G, whose vertex set V is divided into T disjoint layers V1 , . . . , VT . Each
layer contains at most W vertices. Every edge in G starts in some layer Vj and ends at the following
layer Vj+1 . Each edge is annotated with a constraint Xi = b, where X1 , . . . , XN are the Boolean
input variables and b ∈ {0, 1}. The constraints of all edges starting in layer Vj must use the same
variable Xi , but may have potentially different values of b. There are two distinguished nodes,
namely some start node v0 ∈ V1 , and an accept node v ∗ ∈ VT . Given an assignment X ∈ {0, 1}N
to the variables, we say that P accepts X if and only if there is a path from v0 to v∗ such that the constraints on all edges on the path are satisfied by X.
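As a concrete illustration of this acceptance condition (using a hypothetical encoding chosen for this note, not the representation used in [2]), a branching program can be evaluated on a fixed assignment by propagating the set of reachable vertices layer by layer:

```python
# Minimal sketch: does a branching program accept assignment X?
# Each edge is (u, v, i, b), meaning "from u in layer j to v in layer j+1, usable iff X[i] == b".
def bp_accepts(edges_by_layer, start, accept, X):
    reachable = {start}
    for layer_edges in edges_by_layer:            # edges from layer j to layer j+1
        reachable = {v for (u, v, i, b) in layer_edges if u in reachable and X[i] == b}
    return accept in reachable

# toy program on 2 variables computing X[0] AND X[1]:
# layer 1 = {s}, layer 2 = {a, r}, layer 3 = {acc, rej}
edges = [
    [("s", "a", 0, 1), ("s", "r", 0, 0)],          # layer 1 branches on X[0]
    [("a", "acc", 1, 1), ("a", "rej", 1, 0),       # layer 2 branches on X[1]
     ("r", "rej", 1, 0), ("r", "rej", 1, 1)],
]
assert bp_accepts(edges, "s", "acc", [1, 1]) is True
assert bp_accepts(edges, "s", "acc", [1, 0]) is False
```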
The corresponding satisfiability problem BP-SAT asks, given a branching program P on N
variables, length T and width W , to determine whether there exists an assignment X ∈ {0, 1}N
that is accepted by P . The results in [2] rely on the following hypothesis.
Hypothesis 12.1. For any ε > 0, BP-SAT with (log W )(log T ) = o(N ) cannot be solved in time
O((2 − ε)N ).
Note that this hypothesis is much weaker than SETH: By Barrington’s theorem, already branching programs of constant width and quasi-polynomial length T = 2^polylog(N) can simulate any NC circuit, and thus any CNF formula.
Central Construction of Abboud et al. We present an intermediate step of the construction
of Abboud et al. in a convenient form for our purposes.
Lemma 12.2 (Normalized BP-Vector Gadgets [2, implicit lemma in the proof of Theorem 6, specialized to LCS]). Let P be a branching program on N1 + N2 variables, width W and length T.
Let A = {a1, . . . , aA} ⊆ {0, 1}^{N1}, B = {b1, . . . , bB} ⊆ {0, 1}^{N2} be sets of partial assignments.
In linear time in the output size, we can construct binary strings x1, x2, . . . , xA of length ℓx and
y1, y2, . . . , yB of length ℓy (called normalized vector gadgets) with the following properties.
1. There is some function f(W, T) = T^{O(log W)} with ℓx, ℓy ≤ f(W, T).
2. We can compute ρ0, ρ1 with ρ0 > ρ1 such that L(xi, yj) ≥ ρ0 if the assignment given by
assigning ai to the variables X1, . . . , X_{N1} and assigning bj to X_{N1+1}, . . . , X_{N1+N2} is accepted
by P. Otherwise, we have L(xi, yj) = ρ1.
Proof modifications. Equipped with the above lemma, we sketch how to modify our lower
bounds to also hold under BP-SETH. Consider the following problem LCS-Pair with Guarantees:
Given binary strings x1, x2, . . . , xA of length ℓx and y1, y2, . . . , yB of length ℓy, where ℓx, ℓy =
(A + B)^{o(1)}, and given integers ρ0 > ρ1, decide whether either (1) L(xi, yj) ≥ ρ0 for some i, j, or
(2) L(xi, yj) = ρ1 for all i, j. The naive solution for this problem runs in time (AB)^{1+o(1)}.
Lemma 12.2 shows that any O((AB)^{1−ε})-time algorithm for LCS-Pair with Guarantees would
break BP-SETH. Indeed, given a branching program P on N variables, length T and width W,
we first split the variables into two sets of size N/2, and let A, B be the sets of all assignments to
these two sets of variables. Then testing whether there are partial assignments ai ∈ A and bj ∈ B
that together are accepted by P is equivalent to deciding satisfiability of P. Since A = |A| = B =
|B| = 2^{N/2}, any O((AB)^{1−ε})-time algorithm would break BP-SETH. Moreover, in BP-SETH we
can assume log T · log W = o(N) and thus ℓx, ℓy = T^{O(log W)} = 2^{o(N)} = (A + B)^{o(1)}, as required in
the definition of LCS-Pair with Guarantees. Furthermore, the same (AB)^{1−o(1)} lower bound under
BP-SETH also holds restricted to instances with B = A^{β±o(1)} for any 0 < β ≤ 1. To see this, in the
above reduction instead of splitting the N variables into equal halves N1 = N2 = N/2, we choose
N1, N2 such that N2 ≈ βN1, so that the sets of partial assignments A = {0, 1}^{N1} and B = {0, 1}^{N2}
satisfy B = |B| = |A|^{β±o(1)} = A^{β±o(1)}.
Observe that the normalized vector gadgets constructed in Lemma 9.5 show a similar claim:
any O((AB)^{1−ε})-time algorithm for LCS-Pair with Guarantees would break the OV hypothesis, and thus SETH. Since all reductions presented in this paper use normalized vector gadgets
either directly via Lemma 9.5 or indirectly via Lemma 9.1, they all implicitly go via LCS-Pair
with Guarantees. Hence, we can easily replace the first part of our reductions, i.e., the reduction
from SAT/OV to LCS-Pair with Guarantees, by the reduction from BP-SAT to LCS-Pair with
Guarantees given in Lemma 12.2. This yields a conditional lower bound based on BP-SETH.
There are two steps where we have to be careful: First, LCS-Pair with Guarantees does not
immediately give properties (i) and (iv) of Lemma 9.5; however, they can be ensured easily as in
the proof of Lemma 9.5. Second, Lemma 9.1 is the construction from [28], and thus to check that
it goes via LCS-Pair with Guarantees one needs to check that the proof in [28] indeed only uses
Lemma 9.5. Along these lines we obtain the following strengthening of our main result.
Theorem 12.3. Theorems 3.3 and 3.4 also hold after replacing SETH by BP-SETH.
References
[1] Amir Abboud, Arturs Backurs, and Virginia Vassilevska Williams. Quadratic-time hardness
of LCS and other sequence similarity measures. In Proc. 56th Annual IEEE Symposium on
Foundations of Computer Science (FOCS’15), pages 59–78, 2015.
[2] Amir Abboud, Thomas Dueholm Hansen, Virginia Vassilevska Williams, and Ryan Williams.
Simulating branching programs with edit distance and friends or: A polylog shaved is a lower
bound made. In Proc. 48th Annual ACM Symposium on Symposium on Theory of Computing
(STOC’16), 2016. To appear.
[3] Amir Abboud and Virginia Vassilevska Williams. Popular conjectures imply strong lower
bounds for dynamic problems. In Proc. 55th Annual IEEE Symposium on Foundations of
Computer Science (FOCS’14), pages 434–443, 2014.
[4] Amir Abboud, Virginia Vassilevska Williams, and Joshua Wang. Approximation and fixed
parameter subquadratic algorithms for radius and diameter. In Proc. 27th Annual ACM-SIAM
Symposium on Discrete Algorithms (SODA’16), pages 377–391, 2016.
[5] Amir Abboud, Ryan Williams, and Huacheng Yu. More applications of the polynomial method
to algorithm design. In Proc. 26th Annual ACM-SIAM Symposium on Discrete Algorithms
(SODA’15), pages 218–230, 2015.
[6] Muhammad Rashed Alam and M. Sohel Rahman. The substring inclusion constraint longest
common subsequence problem can be solved in quadratic time. Journal of Discrete Algorithms,
17:67–73, 2012.
[7] Jochen Alber, Jens Gramm, Jiong Guo, and Rolf Niedermeier. Towards optimally solving
the longest common subsequence problem for sequences with nested arc annotations in linear
time. In Proc. 13th Annual Symposium on Combinatorial Pattern Matching (CPM’02), pages
99–114, 2002.
[8] Stephen F. Altschul, Warren Gish, Webb Miller, Eugene W. Myers, and David J. Lipman.
Basic local alignment search tool. Journal of Molecular Biology, 215(3):403 – 410, 1990.
[9] Amihood Amir, Zvi Gotthilf, and B. Riva Shalom. Weighted LCS. Journal of Discrete Algorithms, 8(3):273–281, 2010.
[10] Amihood Amir, Tzvika Hartman, Oren Kapah, B. Riva Shalom, and Dekel Tsur. Generalized
LCS. Theoretical Computer Science, 409(3):438–449, 2008.
[11] Amihood Amir, Haim Paryenty, and Liam Roditty. On the hardness of the consensus string
problem. Information Processing Letters, 113(10-11):371–374, 2013.
[12] Amihood Amir, Haim Paryenty, and Liam Roditty. Configurations and minority in the string
consensus problem. Algorithmica, 74(4):1267–1292, 2016.
[13] Alberto Apostolico. Improving the worst-case performance of the Hunt-Szymanski strategy
for the longest common subsequence of two strings. Inf. Process. Lett., 23(2):63–69, 1986.
[14] Alberto Apostolico and Concettina Guerra. The longest common subsequence problem revisited. Algorithmica, 2:316–336, 1987.
[15] Alberto Apostolico, Gad M. Landau, and Steven Skiena. Matching for run-length encoded
strings. In Proc. 1997 International Conference on Compression and Complexity of Sequences
(SEQUENCES’97), pages 348–356, 1997.
[16] Arturs Backurs and Piotr Indyk. Edit distance cannot be computed in strongly subquadratic
time (unless SETH is false). In Proc. 47th Annual ACM on Symposium on Theory of Computing
(STOC’15), pages 51–58, 2015.
[17] Arturs Backurs and Piotr Indyk. Which regular expression patterns are hard to match? In
Proc. 57th Annual IEEE Symposium on Foundations of Computer Science (FOCS’16), pages
457–466, 2016.
[18] Ricardo A. Baeza-Yates, Ricard Gavaldá, Gonzalo Navarro, and Rodrigo Scheihing. Bounding the expected length of longest common subsequences and forests. Theory of Computing
Systems, 32(4):435–452, 1999.
[19] Nikhil Bansal, Moshe Lewenstein, Bin Ma, and Kaizhong Zhang. On the longest common rigid
subsequence problem. Algorithmica, 56(2):270–280, 2010.
[20] David Becerra, Juan Mendivelso, and Yoan Pinzón. A multiobjective optimization algorithm
for the weighted LCS. Discrete Applied Mathematics, 212:37–47, 2016.
[21] Gary Benson, Avivit Levy, S. Maimoni, D. Noifeld, and B. Riva Shalom. LCSk: a refined
similarity measure. Theoretical Computer Science, 638:11–26, 2016.
[22] Lasse Bergroth, Harri Hakonen, and Timo Raita. A survey of longest common subsequence
algorithms. In Proc. 7th International Symposium on String Processing and Information Retrieval (SPIRE’00), pages 39–48, 2000.
[23] Philip Bille and Martin Farach-Colton. Fast and compact regular expression matching. Theoretical Computer Science, 409(3):486 – 496, 2008.
[24] Guillaume Blin, Paola Bonizzoni, Riccardo Dondi, and Florian Sikora. On the parameterized
complexity of the repetition free longest common subsequence problem. Information Processing
Letters, 112(7):272–276, 2012.
[25] Guillaume Blin, Laurent Bulteau, Minghui Jiang, Pedro J. Tejada, and Stéphane Vialette.
Hardness of longest common subsequence for sequences with bounded run-lengths. In Proc.
23th Annual Symposium on Combinatorial Pattern Matching (CPM’12), pages 138–148, 2012.
[26] Karl Bringmann. Why walking the dog takes time: Frechet distance has no strongly subquadratic algorithms unless SETH fails. In Proc. 55th Annual IEEE Symposium on Foundations of Computer Science (FOCS’14), pages 661–670, 2014.
[27] Karl Bringmann, Allan Grønlund, and Kasper Green Larsen. A dichotomy for regular expression membership testing. In Proc. 58th Annual IEEE Symposium on Foundations of Computer
Science (FOCS’17), 2017. To appear, arXiv:1611.00918.
[28] Karl Bringmann and Marvin Künnemann. Quadratic conditional lower bounds for string
problems and dynamic time warping. In Proc. 56th Annual IEEE Symposium on Foundations
of Computer Science (FOCS’15), pages 79–97, 2015.
[29] Karl Bringmann and Wolfgang Mulzer. Approximability of the Discrete Fréchet Distance. In
Proc. 31st International Symposium on Computational Geometry (SoCG’15), pages 739–753,
2015.
[30] Gerth Stølting Brodal, Pooya Davoodi, and S. Srinivasa Rao. On space efficient two dimensional range minimum data structures. Algorithmica, 63(4):815–830, 2012.
[31] Horst Bunke and Janos Csirik. An improved algorithm for computing the edit distance of
run-length coded strings. Information Processing Letters, 54(2):93–96, 1995.
[32] Mauro Castelli, Riccardo Dondi, Giancarlo Mauri, and Italo Zoppis. The longest filled common
subsequence problem. In Proc. 28th Annual Symposium on Combinatorial Pattern Matching
(CPM’17), 2017. To appear.
[33] Wun-Tat Chan, Yong Zhang, Stanley P.Y. Fung, Deshi Ye, and Hong Zhu. Efficient algorithms
for finding a longest common increasing subsequence. Journal of Combinatorial Optimization,
13(3):277–288, 2007.
[34] Maxime Crochemore, Gad M. Landau, and Michal Ziv-Ukelson. A subquadratic sequence alignment algorithm for unrestricted scoring matrices. SIAM Journal on Computing, 32(6):1654–
1673, 2003.
[35] Marek Cygan, Marcin Kubica, Jakub Radoszewski, Wojciech Rytter, and Tomasz Waleń.
Polynomial-time approximation algorithms for weighted LCS problem. Discrete Applied Mathematics, 204:38–48, 2016.
[36] Sebastian Deorowicz. Quadratic-time algorithm for a string constrained LCS problem. Information Processing Letters, 112(11):423–426, 2012.
[37] David Eppstein, Zvi Galil, Raffaele Giancarlo, and Giuseppe F. Italiano. Sparse dynamic
programming I: Linear cost functions. J. ACM, 39(3):519–545, July 1992.
[38] Effat Farhana and M. Sohel Rahman. Doubly-constrained LCS and hybrid-constrained LCS
problems revisited. Information Processing Letters, 112(13):562–565, 2012.
[39] Anka Gajentaan and Mark H. Overmars. On a class of O(N²) problems in computational
geometry. Comput. Geom. Theory Appl., 5(3):165–185, October 1995.
[40] Pawel Gawrychowski. Faster algorithm for computing the edit distance between SLP-compressed strings. In Proc. 19th International Conference on String Processing and Information Retrieval (SPIRE'12), pages 229–236, 2012.
[41] Zvi Gotthilf, Danny Hermelin, Gad M. Landau, and Moshe Lewenstein. Restricted LCS.
In Proc. 17th International Conference on String Processing and Information Retrieval
(SPIRE’10), pages 250–257, 2010.
[42] Zvi Gotthilf, Danny Hermelin, and Moshe Lewenstein. Constrained LCS: Hardness and approximation. In Proc. 19th Annual Symposium on Combinatorial Pattern Matching (CPM’08),
pages 255–262, 2008.
[43] Zvi Gotthilf and Moshe Lewenstein. Approximating constrained LCS. In Proc. 14th International Conference on String Processing and Information Retrieval (SPIRE’07), pages 164–172,
2007.
[44] Danny Hermelin, Gad M. Landau, Shir Landau, and Oren Weimann. Unified compression-based acceleration of edit-distance computation. Algorithmica, 65(2):339–353, 2013.
[45] Daniel S. Hirschberg. Algorithms for the longest common subsequence problem. J. ACM,
24(4):664–675, 1977.
[46] J. W. Hunt and M. D. McIlroy. An algorithm for differential file comparison. Computing
Science Technical Report 41, Bell Laboratories, 1975.
[47] James W. Hunt and Thomas G. Szymanski. A fast algorithm for computing longest subsequences. Commun. ACM, 20(5):350–353, 1977.
[48] Costas S Iliopoulos, Marcin Kubica, M Sohel Rahman, and Tomasz Waleń. Algorithms for
computing the longest parameterized common subsequence. In Proc. 18th Annual Conference
on Combinatorial Pattern Matching (CPM’07), pages 265–273, 2007.
[49] Costas S. Iliopoulos and M. Sohel Rahman. Algorithms for computing variants of the longest
common subsequence problem. Theoretical Computer Science, 395(2–3):255–267, 2008.
[50] Costas S. Iliopoulos and M. Sohel Rahman. A new efficient algorithm for computing the longest
common subsequence. Theory of Computing Systems, 45(2):355–371, 2009.
[51] Russell Impagliazzo, Ramamohan Paturi, and Francis Zane. Which problems have strongly
exponential complexity? J. Computer and System Sciences, 63(4):512–530, 2001.
[52] Guy Jacobson. Space-efficient static trees and graphs. In Proc. 30th Annual IEEE Symposium
on Foundations of Computer Science (FOCS’89), pages 549–554, 1989.
[53] Guy Jacobson and Kiem-Phong Vo. Heaviest increasing/common subsequence problems. In
Proc. 3th Annual Symposium on Combinatorial Pattern Matching (CPM’92), pages 52–66,
1992.
[54] Tao Jiang, Guohui Lin, Bin Ma, and Kaizhong Zhang. The longest common subsequence
problem for arc-annotated sequences. Journal of Discrete Algorithms, 2(2):257–270, 2004.
[55] Orgad Keller, Tsvi Kopelowitz, and Moshe Lewenstein. On the longest common parameterized
subsequence. Theoretical Computer Science, 410(51):5347–5353, 2009.
[56] Tsvi Kopelowitz, Seth Pettie, and Ely Porat. Higher lower bounds from the 3sum conjecture.
In Proc. 27th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA’16), pages 1272–
1287, 2016.
[57] Keita Kuboi, Yuta Fujishige, Shunsuke Inenaga, Hideo Bannai, and Masayuki Takeda. Faster
STR-IC-LCS computation via RLE. In Proc. 28th Annual Symposium on Combinatorial Pattern Matching (CPM’17), 2017. To appear, arXiv:1703.04954.
[58] Martin Kutz, Gerth Stølting Brodal, Kanela Kaligosi, and Irit Katriel. Faster algorithms for
computing longest common increasing subsequences. Journal of Discrete Algorithms, 9(4):314–
325, 2011.
[59] Gad M. Landau, Avivit Levy, and Ilan Newman. LCS approximation via embedding into local
non-repetitive strings. In Proc. 20th Annual Symposium on Combinatorial Pattern Matching
(CPM’09), pages 92–105, 2009.
[60] Gad M. Landau, Eugene Myers, and Michal Ziv-Ukelson. Two algorithms for LCS consecutive
suffix alignment. Journal of Computer and System Sciences, 73(7):1095–1117, 2007.
[61] Gad M. Landau, Baruch Schieber, and Michal Ziv-Ukelson. Sparse LCS common substring
alignment. Information Processing Letters, 88(6):259–270, 2003.
[62] Kjell Lemström, Gonzalo Navarro, and Yoan Pinzon. Bit-parallel branch and bound algorithm
for transposition invariant LCS. In Proc. 11th International Conference on String Processing
and Information Retrieval (SPIRE’04), pages 74–75, 2004.
[63] Yury Lifshits. Processing compressed texts: A tractability border. In Proc. 18th Annual
Symposium on Combinatorial Pattern Matching (CPM’07), pages 228–240, 2007.
[64] George S. Lueker. Improved bounds on the average length of longest common subsequences.
J. ACM, 56(3):17, 2009.
[65] William J. Masek and Mike Paterson. A faster algorithm computing string edit distances. J.
Computer and System Sciences, 20(1):18–31, 1980.
[66] Webb Miller and Eugene W. Myers. A file comparison program. Softw., Pract. Exper.,
15(11):1025–1040, 1985.
[67] Johra Muhammad Moosa, M. Sohel Rahman, and Fatema Tuz Zohora. Computing a longest
common subsequence that is almost increasing on sequences having no repeated elements.
Journal of Discrete Algorithms, 20:12–20, 2013.
[68] Howard L. Morgan. Spelling correction in systems programs. Communications of the ACM,
13(2):90–94, 1970.
[69] Shay Mozes, Dekel Tsur, Oren Weimann, and Michal Ziv-Ukelson. Fast algorithms for computing tree LCS. Theoretical Computer Science, 410(43):4303–4314, 2009.
[70] Eugene W. Myers. An O(N D) difference algorithm and its variations. Algorithmica, 1(2):251–
266, 1986.
[71] Narao Nakatsu, Yahiko Kambayashi, and Shuzo Yajima. A longest common subsequence
algorithm suitable for similar text strings. Acta Inf., 18:171–179, 1982.
[72] Mike Paterson and Vlado Dancı́k. Longest common subsequences. In Proc. 19th International
Symposium on Mathematical Foundations of Computer Science (MFCS’94), pages 127–142,
1994.
[73] Mihai Patrascu. Succincter. In Proc. 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS’08), pages 305–313, 2008.
[74] Pavel Pevzner and Michael Waterman. Matrix longest common subsequence problem, duality and Hilbert bases. In Proc. 3th Annual Symposium on Combinatorial Pattern Matching
(CPM’92), pages 79–89, 1992.
[75] Liam Roditty and Virginia Vassilevska Williams. Fast approximation algorithms for the diameter and radius of sparse graphs. In Proc. 45th Annual ACM Symposium on Symposium on
Theory of Computing (STOC’13), pages 515–524, 2013.
[76] Kunihiko Sadakane. Compressed suffix trees with full functionality. Theory Comput. Syst.,
41(4):589–607, 2007.
[77] Thomas G. Szymanski. A special case of the maximal common subsequence problem. Technical
report, TR-170, Computer Science Laboratory, Princeton University, 1975.
[78] Alexander Tiskin. Longest common subsequences in permutations and maximum cliques in
circle graphs. In Proc. 17th Annual Symposium on Combinatorial Pattern Matching (CPM’06),
pages 270–281, 2006.
[79] Alexander Tiskin. Semi-local longest common subsequences in subquadratic time. Journal of
Discrete Algorithms, 6(4):570–581, 2008.
[80] Alexander Tiskin. Faster subsequence recognition in compressed strings. Journal of Mathematical Sciences, 158(5):759–769, 2009.
[81] Alexander Tiskin. Fast distance multiplication of unit-Monge matrices. In Proc. 21st Annual
ACM-SIAM Symposium on Discrete Algorithms (SODA’10), pages 1287–1296, 2010.
[82] Virginia Vassilevska Williams. Hardness of Easy Problems: Basing Hardness on Popular
Conjectures such as the Strong Exponential Time Hypothesis (Invited Talk). In Proc. 10th
International Symposium on Parameterized and Exact Computation (IPEC’15), pages 17–29,
2015.
[83] Virginia Vassilevska Williams and Ryan Williams. Subcubic equivalences between path, matrix
and triangle problems. In Proc. 51st Annual IEEE Symposium on Foundations of Computer
Science (FOCS’10), pages 645–654, 2010.
[84] Robert A. Wagner and Michael J. Fischer. The string-to-string correction problem. J. ACM,
21(1):168–173, 1974.
[85] Biing-Feng Wang, Gen-Huey Chen, and Kunsoo Park. On the set LCS and set-set LCS
problems. Journal of Algorithms, 14(3):466–477, 1993.
[86] Ryan Williams. A new algorithm for optimal 2-constraint satisfaction and its implications.
Theoretical Computer Science, 348(2):357–365, 2005.
[87] Sun Wu, Udi Manber, Gene Myers, and Webb Miller. An O(N P ) sequence comparison algorithm. Inf. Process. Lett., 35(6):317–323, 1990.
[88] I-Hsuan Yang, Chien-Pin Huang, and Kun-Mao Chao. A fast algorithm for computing a
longest common increasing subsequence. Information Processing Letters, 93(5):249–253, 2005.
Inter-Subject Analysis: Inferring Sparse Interactions
with Dense Intra-Graphs
arXiv:1709.07036v1 [stat.ME] 20 Sep 2017
Cong Ma∗
Junwei Lu∗
Han Liu∗
Abstract
We develop a new modeling framework for Inter-Subject Analysis (ISA). The
goal of ISA is to explore the dependency structure between different subjects
with the intra-subject dependency as nuisance. It has important applications in
neuroscience to explore the functional connectivity between brain regions under
natural stimuli. Our framework is based on the Gaussian graphical models,
under which ISA can be converted to the problem of estimation and inference
of the inter-subject precision matrix. The main statistical challenge is that
we do not impose sparsity constraint on the whole precision matrix and we
only assume the inter-subject part is sparse. For estimation, we propose to
estimate an alternative parameter to get around the non-sparse issue and it can
achieve asymptotic consistency even if the intra-subject dependency is dense.
For inference, we propose an “untangle and chord” procedure to de-bias our
estimator. It is valid without the sparsity assumption on the inverse Hessian
of the log-likelihood function. This inferential method is general and can be
applied to many other statistical problems, thus it is of independent theoretical
interest. Numerical experiments on both simulated and brain imaging data
validate our methods and theory.
Keywords: Gaussian graphical models; fMRI data; Nuisance parameter; Uncertainty assessment; Sample splitting.
1
Introduction
Inter-Subject Analysis (ISA) refers to the inference of the dependency structures between
different subjects with intra-subject dependencies as nuisance. The subject may be a pathway
consisting of an assembly of genes or a group of stocks from the same sector in financial
markets. Often the dependency structure between different subjects is of scientific interest
while the dependencies within each subject are complicated and hard to infer. The goal of
ISA is to explore the inter-subject dependencies with intra-subject dependencies as nuisance.
∗
Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ
08544, USA; Email: {congm, junweil, hanliu}@princeton.edu
Figure 1: (a) illustrates the intra-subject correlations, where the correlations among voxels
in the same subject are calculated. (b) illustrates the inter-subject correlations, where the
correlations of voxels are computed across subjects.
1.1
Motivating Example
To motivate the use of Inter-Subject Analysis (ISA), we consider the functional magnetic
resonance imaging (fMRI) data analysis. fMRI provides scientists a noninvasive way to observe
the neural activity in the human brain (Ashby, 2011). Traditionally, fMRI measurements are
obtained under highly controlled experimental settings where subjects are asked to perform
identical and demanding attention tasks. Recent studies show that neuronal responses and
brain activities are more reliable under naturalistic stimuli, for instance, watching a movie
episode or listening to an audiobook (Mechler et al., 1998; Yao et al., 2007; Belitski et al.,
2008). This motivates the use of fMRI under more naturalistic settings (Zacks et al., 2001;
Hartley et al., 2003; Bartels and Zeki, 2004). However, this brings substantial noise to the
fMRI measurements since intrinsic cognitive processes that are not related to the ongoing
stimuli cannot be constrained or removed as in controlled research settings (Hasson et al.,
2004; Simony et al., 2016). Conventional intra-subject analysis which computes voxel-byvoxel correlations in the same subject can be influenced by such noise and fail to detect the
stimulus-induced correlations.
Hasson et al. (2004) introduced Inter-subject correlations (ISC) to partially resolve this
problem. Instead of computing the intra-subject correlations, ISC calculates the correlation
coefficients of corresponding voxels across different experimental subjects. It is based on the
assumption that individual variations are uncorrelated across subjects and high inter-subject
correlations indicate stimulus related activations. Although ISC can isolate the individual
noise, as a measure of marginal dependence, it fails to eliminate the confounding effects of
other factors (Horwitz and Rapoport, 1988; Lee et al., 2011). Conditional dependence has
long been studied to remedy this problem in both statistics and biology community (Marrelec
et al., 2006; Huang et al., 2009; Varoquaux et al., 2012).
1.2
Modeling Framework
In this paper, we propose a new modeling framework named Inter-Subject Analysis (ISA) to infer the conditional dependency between different subjects. Formally, let X = (X1, . . . , Xd)⊤ ∼ N(0, Σ∗) be a d-dimensional Gaussian random vector with precision matrix Ω∗ = (ω∗jk). Let G1
and G2 be two disjoint subsets of {1, 2, . . . , d} with cardinality |G1| = d1 and |G2| = d2 := d − d1.
We use XG1 and XG2 to denote the corresponding subvectors of X; they represent features
of two different subjects. We use Σ∗1, Σ∗2 and Σ∗12 to denote the covariance within XG1, within
XG2 and between XG1 and XG2 respectively. The submatrices Ω∗1, Ω∗2 and Ω∗12 are defined
similarly. It is well known that Xj and Xk are conditionally independent given the remaining
variables if and only if ω∗jk = 0. Our modeling assumption is that Ω∗12 purely represents
the dependency driven by the common stimuli while Ω∗1 and Ω∗2 can be influenced by the
individual cognitive process. Hence we are willing to assume that Ω∗12 is sparse while reluctant
to put any sparsity constraint on Ω∗1 or Ω∗2. The main statistical challenge we address in
this paper is to obtain estimation consistency and valid inference of Ω∗12 under non-sparse
nuisance parameters in the high dimensional setting.
1.3
Contributions
There are two major contributions of this paper.
Our first contribution is a new estimator for Ω∗12 when the precision matrix Ω∗ is not
sparse. The idea is to find an alternative sparse matrix to estimate. The candidate we
propose is
Θ∗ := Ω∗ − (Σ∗G )−1 ,
(1.1)
where Σ∗G = diag(Σ∗1 , Σ∗2 ) and diag(A, B) denotes the diagonal block matrix whose diagonals
are A and B. The key observation is that if Ω∗12 is sparse, then Θ∗ is also sparse. More
precisely, we have ‖Θ∗‖_0 ≤ 2s² + 2s whenever ‖Ω∗12‖_0 ≤ s, where ‖A‖_0 counts the number of
non-zero entries in A. This observation holds true even if both Ω∗1 and Ω∗2 are dense. We
illustrate this phenomenon using a numerical example in Figure 2. Following this observation,
we can reparametrize the precision matrix using Ω∗ = Θ∗ + (Σ∗G )−1 , in which Θ∗ contains
the parameter of interest (as Θ∗ ’s off-diagonal block Θ∗12 = Ω∗12 ) and (Σ∗G )−1 is a nuisance
parameter. The estimator we propose named Sparse edge esTimatoR for Intense Nuisance
GraphS (STRINGS) has the following form:
b = arg min Tr(ΘΣ)
b − log |Σ
b G ΘΣ
bG + Σ
b G | + λkΘk1,1 ,
Θ
b and Σ
b G are plug-in estimators for Σ∗ and Σ∗ and kAk1,1 is the element-wise `1 norm
where Σ
G
of A. The consistency for the STRINGS estimator can be achieved under mild conditions.
Our second contribution is to propose a general “untangle and chord” procedure to de-bias
high dimensional estimators when the inverse Hessian of the log-likelihood function is not
sparse. In general, a de-biased estimator β̂u takes the following form:

β̂u = β̂ − M ∇Ln(β̂),
[Figure 2: four panels — (a) Graph, (b) Ω∗, (c) (Σ∗G)−1, (d) Θ∗ — showing the graph and the corresponding heatmaps; see the caption below.]
Figure 2: (a) shows a Gaussian graphical model with two subjects, where each color represents
a subject. It can be seen that the inter-subject connections are sparse while the connections
within each subject are dense. (b) shows the heatmap of the precision matrix of the Gaussian
graphical model. (c) shows the heatmap of the corresponding (Σ∗G)−1. (d) shows the heatmap
of Θ∗ = Ω∗ − (Σ∗G )−1 . As can be seen, Θ∗ is sparse even if both Ω∗1 and Ω∗2 are dense.
where β̂ is the regularized estimator for the parameter of interest β∗, M is a bias correction
matrix and Ln denotes the negative log-likelihood function. By the Taylor expansion of Ln(β̂)
at the true parameter β∗, we have

√n · (β̂u − β∗) ≈ −√n M ∇Ln(β∗) − √n [M ∇²Ln(β∗) − I](β̂ − β∗).

Clearly, the leading term √n M ∇Ln(β∗) is asymptotically normal under mild conditions.
And if the remainder term √n [M ∇²Ln(β∗) − I](β̂ − β∗) converges to 0 in probability, we
can conclude that β̂u is asymptotically normal. One way to achieve this goal is to let M be a
consistent estimator of the inverse of ∇²Ln(β∗). This is why previous inferential methods
require the sparsity assumption on the inverse Hessian.
It will be shown that the Hessian in ISA is not necessarily sparse. To get around this
issue, two crucial observations are needed. First, it suffices to use constrained optimization
to control the statistical error of M ∇2 Ln (β ∗ ) − I. Second, to prevent M from tangling with
∇Ln (β ∗ ) and sabotaging the asymptotic normality of the leading term, we can split the data
into two parts: the untangle step estimates the parameter on the first split and the chord
step constructs M using the second split. We show that the “untangle and chord” procedure
de-biases the STRINGS estimator and we can construct valid tests and confidence intervals
based on the de-biased estimator. The “untangle and chord” strategy is general and can be
applied to many other high dimensional problems without the sparsity assumption on the
inverse Hessian.
1.4
Related Work
Estimators for the precision matrix including the maximum likelihood estimator and the
column-wise estimator have been considered in (Yuan and Lin, 2007; Banerjee et al., 2008;
Rothman et al., 2008; Friedman et al., 2008; Ravikumar et al., 2011; Meinshausen and
Bühlmann, 2006; Yuan, 2010; Cai et al., 2011). All of them require the sparsity of the whole
precision matrix. Hence they are not applicable to ISA.
Inferential methods based on inverting KKT conditions (Zhang and Zhang, 2014; Van de
Geer et al., 2014) and decorrelated score functions (Ning and Liu, 2014) have been extended
to Gaussian graphical models (Jankova et al., 2015; Gu et al., 2015). They all require the
inverse Hessian of the log-likelihood function to be sparse and hence cannot be applied in
our setting. One exception is the inference for the high dimensional linear model proposed
by Javanmard and Montanari (2014). Their result heavily depends on the special structure
of the linear model. First, the design matrix is independent of the noise and second, the
inverse Hessian matrix is simply the inverse covariance of the design, which is irrelevant to
the regression parameters. Their method is difficult to extend to general estimators.
Efforts have also been made to relax the sparsity assumption on the precision matrix
in Gaussian graphical estimation. Yuan and Zhang (2014) propose a decoupling approach
to estimate Ω∗12 . However, their method requires at least one of the diagonal blocks Ω∗1
or Ω∗2 to be sparse, and it is no longer valid if both of them are dense. Liu et al. (2015)
share a goal similar to ours. They propose a density ratio framework to estimate
the dependency between two groups of variables. First, their estimation theory does not
apply to Gaussian distributions due to the unboundedness of the density ratio. Second, their
procedure relies on approximating the normalization function using two-sample U-statistics,
which are complicated and computationally expensive. Compared with the above works,
our work not only considers estimation consistency but also proposes valid procedures to
assess uncertainty in the high dimensional setting.
Besides the work mentioned above, Kolar et al. (2014) and Xia et al. (2016) focus on testing
whether the inter-subject dependency matrix is identically zero. However, our procedure can
give element-wise estimation and inference.
1.5
Notation
The following notations are used throughout the paper. For any n ∈ ℕ we use the shorthand
notation [n] = {1, . . . , n}. For a vector v = (v1, . . . , vd)⊤ ∈ R^d, let ‖v‖_q = (Σ_{i=1}^d v_i^q)^{1/q} for 1 ≤ q < ∞.
Furthermore, let ‖v‖_∞ = max_j |v_j|. For a matrix A = (Ajk) ∈ R^{m×n}, we define
supp(A) = {(j, k) | Ajk ≠ 0}. We use Aj∗ and A∗k to denote the j-th row and k-th column of
A respectively. We use ‖A‖_q = sup_{‖x‖_q = 1} ‖Ax‖_q to denote the induced ℓ_q-norm of a matrix.
In particular, ‖A‖_1 = max_{1≤k≤n} Σ_{j=1}^m |Ajk|, which is the maximum absolute column sum
of the matrix A. ‖A‖_2 is the largest singular value of A. ‖A‖_∞ = max_{1≤j≤m} Σ_{k=1}^n |Ajk|,
which is the maximum absolute row sum of the matrix A. We also use ‖A‖_max = max_{jk} |Ajk|,
‖A‖_{1,1} = Σ_{jk} |Ajk| and ‖A‖_F = (Σ_{jk} Ajk²)^{1/2} to denote the ℓ_max-, ℓ_{1,1}- and ℓ_F-norms of the
matrix A. λ_min(A) is used to denote the minimum eigenvalue of the matrix A and |A| is used
to denote the determinant of A. We use Φ(x) to denote the cumulative distribution function
of a standard normal random variable. For a sequence of random variables Xn, we write
Xn ⇝ X, for some random variable X, if Xn converges in distribution to X.
1.6
Organization of the Paper
The rest of the paper is organized as follows. In Section 2, we introduce the STRINGS
estimator for the inter-subject precision matrix. In Section 3, we provide the “untangle and
chord” procedure to de-bias the STRINGS estimator. In Section 4, we show theoretical
results on the statistical rate of convergence for the STRINGS estimator and the asymptotic
normality of the de-biased estimator. In Section 5, we demonstrate the validity of our
estimation and inferential methods through numerical experiments on both simulated and
brain imaging data. In Section 6, we discuss extensions of our methods and theories to
non-Gaussian data and multi-subjects settings.
2
The STRINGS Estimator
In this section, we present the STRINGS estimator for the inter-subject precision matrix Ω∗12 .
The basic idea is to use the maximum likelihood principle. Given a data matrix X ∈ Rn×d ,
where rows of X represent i.i.d. samples from a Gaussian distribution with mean 0 and
covariance Σ∗, the negative log-likelihood for the precision matrix is given by

L(Ω) = Tr(ΩΣ̂) − log |Ω|,    (2.1)

where Σ̂ = (1/n) · X⊤X is the sample covariance matrix.
Since our focus is to estimate the inter-subject dependency Ω∗12, a naive reparametrization
of the precision matrix is Ω∗ = Ω∗D + Ω∗O, where Ω∗D is the block diagonal matrix corresponding
to XG1 and XG2, i.e., Ω∗D = diag(Ω∗1, Ω∗2), and Ω∗O is the off-diagonal part involving Ω∗12 and
(Ω∗12)⊤. Under such a reparametrization, we can reformulate (2.1) as

L(ΩO, ΩD) = Tr[(ΩO + ΩD)Σ̂] − log |ΩO + ΩD|.    (2.2)

Adopting the maximum likelihood principle, we want to minimize the negative log-likelihood
(2.2) with respect to the parameter ΩO. Hence we can ignore the terms independent of ΩO in
(2.2). This gives us an equivalent minimization of Tr(ΩO Σ̂) − log |ΩO + ΩD| with respect to
ΩO. However, the objective function still depends on the nuisance parameter ΩD, and it is
difficult to obtain an estimator for ΩD. Thus this naive reparametrization will not work.
Recall that if Ω∗12 is s-sparse, then Θ∗ = Ω∗ − (Σ∗G)−1 is (2s² + 2s)-sparse. Based on this
key observation, we reparametrize the precision matrix using Θ∗ and (Σ∗G)−1, in which Θ∗
contains the parameter of interest (as Θ∗12 = Ω∗12) and (Σ∗G)−1 is the nuisance parameter.
Under the new reparametrization Ω∗ = Θ∗ + (Σ∗G)−1, we can rewrite (2.1) as
L(Θ, (ΣG)−1) = Tr[(Θ + (ΣG)−1)Σ̂] − log |Θ + (ΣG)−1|.    (2.3)

To separate Θ from (ΣG)−1, we further decompose (2.3) into the following form:

L(Θ, (ΣG)−1) = Tr(ΘΣ̂) + Tr((ΣG)−1Σ̂) − log |(ΣG)−1| − log |ΣG Θ ΣG + ΣG| − log |(ΣG)−1|.    (2.4)

Ignoring the terms independent of Θ in (2.4), we have that minimizing (2.4) with respect to
Θ is equivalent to minimizing Tr(ΘΣ̂) − log |ΣG Θ ΣG + ΣG| with respect to Θ. Now we still
have the nuisance parameter ΣG. However, in this case we can use the naive plug-in estimator
for ΣG, i.e., Σ̂G, the block diagonal matrix of Σ̂ corresponding to XG1 and XG2. This gives
us the following empirical loss function for Θ:

Ln(Θ) = Tr(ΘΣ̂) − log |Σ̂G Θ Σ̂G + Σ̂G|.    (2.5)

Correspondingly, we will use L(Θ) = Tr(ΘΣ∗) − log |Σ∗G Θ Σ∗G + Σ∗G| to denote the population
loss function. Since Θ∗ is sparse, we further impose a sparsity penalty on the objective
function. Here we choose the ℓ1,1 penalty, and the STRINGS estimator has the following form:

Θ̂ = argmin_Θ Ln(Θ) + λ‖Θ‖_{1,1}.    (2.6)
Note that for the empirical loss function Ln(Θ) in (2.5) to be convex, Σ̂G needs to be
positive definite. Otherwise, the log-determinant term will always be −∞. However, in the
high dimensional regime, the naive plug-in estimator Σ̂G will be rank deficient. To resolve
this issue, we can perturb our plug-in estimator with √(log d/n) · I. This perturbation trick
has also been used to solve the initialization problem in Cai et al. (2011). We choose the size
of the perturbation to be √(log d/n) so that it will not affect the concentration property of
the estimator Σ̂G. Although (2.6) is a convex program, solving it in high dimensions using
semidefinite programming is both time-consuming and memory-consuming. We propose
a computationally efficient algorithm based on the alternating direction method of multipliers
(ADMM) to solve (2.6). The details are deferred to Appendix Section B.
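The ADMM solver itself is in Appendix B and is not reproduced here. Purely as an illustration of the objective in (2.6), the following sketch encodes the program directly with CVXPY and a generic conic solver; the function name, the use of the SCS solver, and the perturbation step are our assumptions for the example, not the authors' implementation.

```python
import cvxpy as cp
import numpy as np

def strings_estimator(X, d1, lam):
    """Sketch of (2.6): minimize Tr(Theta S) - log det(S_G Theta S_G + S_G) + lam * ||Theta||_{1,1}."""
    n, d = X.shape
    S = X.T @ X / n                               # sample covariance Sigma_hat
    S_G = np.zeros((d, d))                        # block-diagonal plug-in Sigma_hat_G
    S_G[:d1, :d1], S_G[d1:, d1:] = S[:d1, :d1], S[d1:, d1:]
    S_G += np.sqrt(np.log(d) / n) * np.eye(d)     # perturbation keeping Sigma_hat_G positive definite

    Theta = cp.Variable((d, d), symmetric=True)
    objective = (cp.trace(Theta @ S)
                 - cp.log_det(S_G @ Theta @ S_G + S_G)
                 + lam * cp.sum(cp.abs(Theta)))   # element-wise l_{1,1} penalty
    cp.Problem(cp.Minimize(objective)).solve(solver=cp.SCS)
    return Theta.value

# Example with synthetic data; Theta_hat[:d1, d1:] then estimates Omega*_12.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
Theta_hat = strings_estimator(X, d1=5, lam=2 * np.sqrt(np.log(10) / 100))
```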
At the end of this section, we want to comment on why block-wise regression cannot
be applied to estimate Ω∗12 . Since neighborhood selection (Meinshausen and Bühlmann, 2006;
Yuan, 2010) can be used to estimate the sparse precision matrix, a natural generalization for
the inter-subject precision matrix is to consider a block-wise regression method. Namely, we
regress one group of variables XG1 on the other group of variables XG2 . However, this would
fail since the sparsity pattern of Ω∗21 cannot be translated into the sparsity pattern of the
regression coefficient matrix. More precisely, based on the formula for conditional Gaussian
distribution, we have

XG1 | XG2 = xG2 ∼ N(Σ∗12 (Σ∗2)−1 xG2, Σ∗1 − Σ∗12 (Σ∗2)−1 (Σ∗12)⊤).

Thus the regression function between XG1 and XG2 is given by XG1 = β⊤ XG2 + ε, where
β = (Σ∗2)−1 (Σ∗12)⊤ and ε ∼ N(0, Σ∗1 − Σ∗12 (Σ∗2)−1 (Σ∗12)⊤). Then by the formula for matrix inversion in
block form, we have Ω∗12 = −(Σ∗1 − Σ∗12 (Σ∗2)−1 (Σ∗12)⊤)−1 Σ∗12 (Σ∗2)−1. Hence, we have the following
relationship between the precision matrix Ω∗12 and the regression coefficients β:

(Ω∗12)⊤ (Σ∗1 − Σ∗12 (Σ∗2)−1 (Σ∗12)⊤) = −β ∈ R^{d2×d1}.

The reason why we can use neighborhood selection to recover the precision matrix in the traditional
Gaussian graphical model lies crucially in the fact that |G1| = 1. In this situation,
Σ∗1 − Σ∗12 (Σ∗2)−1 (Σ∗12)⊤ is a (positive) real number. Hence the sparsity pattern of β is exactly
the same as the sparsity pattern in Ω∗12. However, this does not hold when |G1| > 1. In
such a case, Σ∗1 − Σ∗12 (Σ∗2)−1 (Σ∗12)⊤ is a d1-by-d1 matrix. When Ω∗12 is s-sparse, β is s-row sparse,
i.e., β has s non-zero rows. However, the rate for estimating such a row sparse matrix is
‖β̂ − β‖²F = OP(sd/n) (Negahban and Wainwright, 2011; Rohde et al., 2011). Thus no
consistent estimator exists in the high dimensional setting when d > n.
3
“Untangle and Chord” the STRINGS
In this section, we introduce our proposed method to test the existence of certain inter-subject
interaction and construct a confidence interval for entries in Ω∗12 . Formally, for 1 ≤ j ≤ d1
and d1 + 1 ≤ k ≤ d, we are interested in the following two types of inferential problems:
• Confidence Interval: For a particular parameter θ∗jk, where θ∗jk is the (j, k)-th entry of
Θ∗, how to construct a confidence interval for it?
• Hypothesis testing: Consider the null hypothesis H0 : θ∗jk = 0; how to construct a valid
test for H0?
To address these two types of questions, we rely on obtaining an asymptotically normal
estimator of θ∗jk. After this, constructing confidence intervals and testing hypotheses follow
naturally. Hence in the following we introduce our way to de-bias the STRINGS estimator.
As we mentioned in the introduction, KKT-inversion type of methods (Zhang and
Zhang, 2014; Van de Geer et al., 2014) cannot be applied here since they require the
inverse Hessian to be sparse. Recall that the population loss function is given by L(Θ) =
Tr(ΘΣ∗) − log |Σ∗G Θ Σ∗G + Σ∗G|. Its Hessian can be calculated as follows:

∇²L(Θ) = [(Σ∗G Θ + I)−1 Σ∗G] ⊗ [(Σ∗G Θ + I)−1 Σ∗G].

Thus we have [∇²L(Θ∗)]−1 = Ω∗ ⊗ Ω∗. We can see that the inverse Hessian can be dense
since we do not impose any assumption on Ω∗. Getting around this difficulty requires
new sets of inferential tools. Rather than inverting the KKT conditions, we propose to
de-bias the STRINGS estimator in (2.6) utilizing the estimating equation for Θ∗ . Moreover,
sample splitting is adopted to achieve the desired asymptotic normality. To see this, recalling
the definition Θ∗ = Ω∗ − (Σ∗G)−1, we can derive the following estimating equation:
Σ∗ Θ∗ Σ∗G + Σ∗ − Σ∗G = 0.
(3.1)
We first present a heuristic explanation of our de-biasing procedure. Based on the sample
version of (3.1), we construct a de-biased estimator as follows:

Θ̂u = Θ̂ − M (Σ̂ Θ̂ Σ̂G + Σ̂ − Σ̂G) P⊤,    (3.2)
where M and P are two bias correction matrices to be specified later. To gain the intuition
why Θ̂u defined in (3.2) is an asymptotically normal estimator, we calculate the difference
between Θ̂u and Θ∗ as follows.

Θ̂u − Θ∗ = Θ̂ − Θ∗ − M [Σ̂ (Θ̂ − Θ∗ + Θ∗) Σ̂G + Σ̂ − Σ̂G] P⊤
         = −M (Σ̂ Θ∗ Σ̂G + Σ̂ − Σ̂G) P⊤ + Θ̂ − Θ∗ − M Σ̂ (Θ̂ − Θ∗) Σ̂G P⊤.

Through some algebra, we have Θ̂u − Θ∗ = Leading + Remainder, where

Leading = −M [(Σ̂ − Σ∗)(I + Θ∗ Σ∗G) − (I − Σ∗ Θ∗)(Σ̂G − Σ∗G)] P⊤,    (3.3)
Remainder = −M (Σ̂ − Σ∗) Θ∗ (Σ̂G − Σ∗G) P⊤ + Θ̂ − Θ∗ − M Σ̂ (Θ̂ − Θ∗) Σ̂G P⊤.    (3.4)

First, in order to make the Remainder term in (3.4) small, it requires M Σ̂ ≈ I and
P Σ̂G ≈ I. In other words, M and P should function as the inverse of Σ∗ and Σ∗G respectively.
Second, regarding the Leading term, we can see that it is an empirical process type quantity.
It is asymptotically normal provided that M and P are independent of the remaining random
quantities in (3.3). This motivates us to utilize sample splitting to obtain the two bias
correction matrices M and P. In all, we have an "untangle and chord" procedure to de-bias
the STRINGS estimator. Concretely, we split the data X ∈ R^{2n×d} into two parts D1 and
D2 with an equal number of samples. Note that here we inflate the sample size to 2n. This
is purely for simplifying the notation. The untangle step uses the first data D1 to get an
initial STRINGS estimator Θ̂. The chord step utilizes the second data D2 to obtain the bias
correction matrices M and P with the desired properties, i.e., M Σ̂ ≈ I and P Σ̂G ≈ I. Precisely,
we use a CLIME-type procedure to get M and P. For M ∈ R^{d×d}, we solve the following
convex program:
min ‖M‖_∞    subject to ‖M Σ̂′ − I‖_max ≤ λ′,    (3.5)

where Σ̂′ is the sample covariance matrix of the second sample D2 and λ′ is the approximation
error we want to achieve. Σ̂′ and λ′ can be viewed as two inputs to this CLIME-type procedure.
We solve a similar convex problem to obtain P ∈ R^{d×d} with different inputs and an additional
block constraint:

min ‖P‖_∞    subject to ‖P Σ̂′G − I‖_max ≤ λ′,    P = diag(P1, P2), P1 ∈ R^{d1×d1}, P2 ∈ R^{d2×d2},    (3.6)

where Σ̂′G is the block diagonal sample covariance matrix corresponding to XG1 and XG2
on the second sample D2. Notice here we add another constraint that P needs to be a
block diagonal matrix. This is expected since Σ̂′G is a block diagonal matrix. The overall
de-biasing procedure is presented in Algorithm 1.
Given the de-biased estimator Θ̂u = (θ̂u_jk) in (3.2), we can obtain a confidence interval for
θ∗jk and conduct hypothesis testing on H0 : θ∗jk = 0 under a valid estimation of the asymptotic
variance of θ̂u_jk. The asymptotic variance is involved, and we defer the details to Section 4.2.
Algorithm 1 "Untangle and Chord" the STRINGS
Input: X ∈ R^{2n×d}, where rows of X represent i.i.d. samples from N(0, Σ∗).
Output: Θ̂u, the de-biased estimator for Θ∗.
Data Splitting: Split the sample into two parts D1 and D2.
Covariance Estimation: Let Σ̂ and Σ̂′ be the sample covariance matrices on D1 and D2
respectively, and let Σ̂G and Σ̂′G be the block diagonal matrices of Σ̂ and Σ̂′.
Untangle Step: Get the STRINGS estimator Θ̂ by Algorithm 2 using D1.
Chord Step: Given the sample covariance matrices on D2, i.e., Σ̂′ and Σ̂′G, choose M to
be the minimizer of (3.5) and choose P to be the minimizer of (3.6).
Debias: Define the de-biased estimator as

Θ̂u = Θ̂ − M (Σ̂ Θ̂ Σ̂G + Σ̂ − Σ̂G) P⊤.
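As a rough illustration of the chord and de-bias steps (not the paper's implementation), the sketch below solves a row-wise CLIME-type program in the spirit of (3.5) with CVXPY and then applies the correction (3.2); the row-wise decomposition and all names are our assumptions. P can be obtained analogously on each diagonal block so that the constraint in (3.6) holds.

```python
import cvxpy as cp
import numpy as np

def clime_rows(S, lam):
    """Row-wise CLIME-type program: for each j, min ||m||_1 s.t. ||S m - e_j||_inf <= lam.
    Stacking the solutions as rows gives an M with ||M S - I||_max <= lam, as in (3.5)."""
    d = S.shape[0]
    M = np.zeros((d, d))
    for j in range(d):
        m = cp.Variable(d)
        e = np.zeros(d)
        e[j] = 1.0
        prob = cp.Problem(cp.Minimize(cp.norm1(m)),
                          [cp.norm_inf(S @ m - e) <= lam])
        prob.solve(solver=cp.ECOS)
        M[j] = m.value
    return M

def debias(Theta_hat, S1, S1_G, M, P):
    """De-biasing step (3.2): Theta_u = Theta_hat - M (S Theta_hat S_G + S - S_G) P^T,
    with S1, S1_G computed on the first split and M, P on the second split."""
    return Theta_hat - M @ (S1 @ Theta_hat @ S1_G + S1 - S1_G) @ P.T
```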
4
Theoretical Results
In this section, we provide theoretical properties of the proposed estimators. In Section 4.1,
we establish the rate of convergence of the STRINGS estimator in (2.6). In Section 4.2,
we prove the asymptotic normality of the de-biased estimator in (3.2). We also construct
asymptotically valid tests and confidence intervals for low dimensional parameters in Θ∗12 .
4.1
Estimation Consistency
In the next theorem, we give the convergence rate for the STRINGS estimator in (2.6).
Theorem 4.1. Suppose the inter-subject dependencies satisfy ‖Ω∗12‖_0 ≤ s and hence ‖Θ∗‖_0 ≤ s∗ :=
2s² + 2s. Further we assume ‖Σ∗‖_max ≤ K and ‖Θ∗‖_{1,1} ≤ R for some absolute constants K
and R. Under the sample size condition that s∗²√(log d/n) = o(1), there exists a constant
C > 0 such that for sufficiently large n, if λ = 2C√(log d/n), then with probability at least 1 − 4d−1
we have

‖Θ̂ − Θ∗‖_F ≤ (14C/ρ²_min) √(s∗ log d / n)    and    ‖Θ̂ − Θ∗‖_{1,1} ≤ (56C/ρ²_min) s∗ √(log d / n),

provided that λ_min(Σ∗) ≥ ρ_min > 0.
A few remarks on the assumptions are in order. First, kΩ∗12 k0 ≤ s is the main assumption
for our theoretical results. It imposes sparsity on Ω∗12 , i.e., the dependency structure between
XG1 and XG2 is sparse. Note that we do not make any assumption about the sparsity of the
overall precision matrix Ω∗ . It can be rather dense. Second, kΣ∗ kmax ≤ K and kΘ∗ k1,1 ≤ R
are two regularity conditions on the Gaussian graphical model. The first specifies that the
covariance between any two variables cannot be too large. It is weaker than ‖Σ∗‖_2 ≤ K
since ‖Σ∗‖_max ≤ ‖Σ∗‖_2. This assumption is commonly found in the literature on covariance
and precision matrix estimation (Bickel and Levina, 2008; Rothman et al., 2008). An easy
consequence is that ‖Σ∗G‖_max ≤ K since Σ∗G is the block diagonal of Σ∗. ‖Θ∗‖_{1,1} ≤ R requires
the inter-subject dependency to have constant sparsity. Similar conditions can be found in
the literature on differential networks (Zhao et al., 2014).
Theorem 4.1 shares the same spirit with the convergence results for the ℓ1-regularized maximum
likelihood estimator (Rothman et al., 2008); that is, the rate for estimating an s∗-sparse
parameter Θ∗ is √(s∗ log d/n) in Frobenius norm. However, there are two things worth noting
here. The first is that in Theorem 4.1, s∗ can be replaced with any upper bound of
‖Θ∗‖_0 and the result is still valid. We know s∗ is an upper bound of ‖Θ∗‖_0. In the worst
case, ‖Θ∗‖_0 can be as large as s∗, and when ‖Θ∗‖_0 is smaller, the rate in Theorem 4.1 can
be improved. Second, recall that s∗ ≍ s², where s is the sparsity of Ω∗12. Considering our
goal is to estimate Ω∗12, the rate seems to be sub-optimal. Especially in the case d1 = 1,
neighborhood selection (Meinshausen and Bühlmann, 2006; Yuan, 2010) and CLIME (Cai
et al., 2011) can obtain the optimal rate √(s log d/n) for the Frobenius norm. However, as we
pointed out in Section 1, these methods cannot be applied when d1 ≍ d2 due to the violation
of the sparsity assumption on Ω∗.
4.2
Asymptotic Inference
In this section, we give the limiting distribution of the de-biased estimator in (3.2). The
asymptotic normality result is presented in Theorem 4.5. Based on this, we propose valid
asymptotic confidence intervals and test statistics for parameters in Θ∗12 .
We first state a version of asymptotic normality result which involves population quantities.
Theorem 4.2 (Asymptotic Normality). Suppose the conditions in Theorem 4.1 hold. Further
assume ‖Ω∗‖_1 ≤ L for some absolute constant L > 0. Let Θ̂u be the de-biased estimator
with λ′ = C′√(log d/n), where C′ is a sufficiently large constant. For any 1 ≤ j ≤ d1 and
d1 + 1 ≤ k ≤ d, define the asymptotic variance as

ξ²jk = (Mj∗ Σ∗ Mj∗⊤)[Pk∗ (I + Σ∗G Θ∗) Σ∗G Pk∗⊤] + (Mj∗ Σ∗G Pk∗⊤)² − (Mj∗ Σ∗ Pk∗⊤)²
       − [Mj∗ (I − Σ∗ Θ∗) Σ∗G2 (I − Θ∗ Σ∗) Mj∗⊤](Pk∗ Σ∗G Pk∗⊤),    (4.1)

where Σ∗G2 = diag(0, Σ∗2). Under the scaling condition s∗ log d/√n = o(1), we have

√n · (θ̂u_jk − θ∗jk)/ξjk ⇝ N(0, 1).
Remark 4.3. kΩ∗ k1 ≤ L is a milder condition than the sparsity constraints on Ω∗ in the sense
that Ω∗ can be rather dense. And this is the case for ISA. To further understand the essence of
this assumption, we discuss connections between λmin (Σ∗ ) ≥ ρmin and kΩ∗ k1 ≤ L. Since Ω∗ =
(Σ∗ )−1 , it is not hard to see that λmax (Ω∗ ) ≤ 1/ρmin . Hence we have maxj∈[d] kΩ∗∗j k2 ≤ 1/ρmin .
Here, instead of the column-wise `2 -norm boundedness, we assume that maxj∈[d] kΩ∗∗j k1 ≤ L.
It is indeed stronger than the `2 one, but it is weaker than the sparsity assumption on Ω∗ .
Moreover, as shown by the lower bound in Cai et al. (2016), imposing this assumption does
not make it possible to consistently estimate the parameter. Based on Theorem 1.1 in Cai
et al. (2016), we have that the optimal rate for the matrix ℓ1-norm is E‖Ω̂ − Ω∗‖²_1 ≍ d² log d/n,
which means no consistent estimator for the whole precision matrix Ω∗ exists when d > n.
To obtain the formula for the asymptotic variance ξ²jk in (4.1), we use Isserlis' theorem
(Isserlis, 1916) to calculate the fourth order moments of the Gaussian distribution. We can see
that ξ²jk still depends on the population quantities Σ∗ and Θ∗. Thus ξ²jk is unknown in practice,
and we need to get a consistent estimator ξ̂²jk to construct confidence intervals for θ∗jk.
Lemma 4.4 (Variance Estimation). Define Σ̂G2 = diag(0, Σ̂2). For any 1 ≤ j ≤ d1 and
d1 + 1 ≤ k ≤ d, let ξ̂²jk be the empirical version of (4.1), i.e.,

ξ̂²jk = (Mj∗ Σ̂ Mj∗⊤)[Pk∗ (I + Σ̂G Θ̂) Σ̂G Pk∗⊤] + (Mj∗ Σ̂G Pk∗⊤)² − (Mj∗ Σ̂ Pk∗⊤)²
       − [Mj∗ (I − Σ̂ Θ̂) Σ̂G2 (I − Θ̂ Σ̂) Mj∗⊤](Pk∗ Σ̂G Pk∗⊤).    (4.2)

Then under the conditions in Theorem 4.2, ξ̂jk/ξjk converges in probability to 1.
Combining Theorem 4.2 and Lemma 4.4 with Slutsky’s theorem, we can obtain the final
version of the asymptotic normality result which does not involve any population quantity.
Theorem 4.5. Suppose the conditions in Theorem 4.2 hold. Let Θ̂u be the de-biased
estimator with λ′ = C′√(log d/n), where C′ is a sufficiently large constant. For any 1 ≤ j ≤ d1
and d1 + 1 ≤ k ≤ d, under the scaling condition s∗ log d/√n = o(1), we have

√n · (θ̂u_jk − θ∗jk)/ξ̂jk ⇝ N(0, 1).
Applying Theorem 4.5, it is easy to construct asymptotically valid confidence intervals and
test functions. For any 1 ≤ j ≤ d1 and d1 + 1 ≤ k ≤ d and significance level α ∈ (0, 1), let

Ijk(α) = [θ̂u_jk − δ(α, n), θ̂u_jk + δ(α, n)],    where δ(α, n) = (ξ̂jk/√n) · Φ−1(1 − α/2).    (4.3)

Also, for the null hypothesis H0 : θ∗jk = 0, we can construct the following test function:

Tjk(α) = 1 if |√n · θ̂u_jk/ξ̂jk| > Φ−1(1 − α/2), and Tjk(α) = 0 if |√n · θ̂u_jk/ξ̂jk| ≤ Φ−1(1 − α/2),    (4.4)
where α ∈ (0, 1) is the significance level of the test. The following Corollary proves the
validity of the confidence interval and the test function.
Corollary 4.6. Suppose the conditions in Theorem 4.5 hold. The confidence interval in
(4.3) is asymptotically valid and the type I error of (4.4) is asymptotically α, i.e.,

lim_{n→∞} P(θ∗jk ∈ Ijk(α)) = 1 − α    and    lim_{n→∞} P_{θ∗jk=0}(Tjk(α) = 1) = α.
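For concreteness, a small sketch (illustrative names, not from the paper) that turns a de-biased entry and its estimated standard deviation into the interval (4.3) and the decision (4.4):

```python
from math import sqrt
from scipy.stats import norm

def ci_and_test(theta_u_jk, xi_jk, n, alpha=0.05):
    """Confidence interval (4.3) and test (4.4) for a single entry theta*_jk."""
    z = norm.ppf(1 - alpha / 2)
    delta = xi_jk / sqrt(n) * z
    interval = (theta_u_jk - delta, theta_u_jk + delta)
    reject_h0 = abs(sqrt(n) * theta_u_jk / xi_jk) > z   # H0: theta*_jk = 0
    return interval, reject_h0

# Example: a de-biased entry of 0.12 with estimated sd 0.5 from n = 100 samples.
print(ci_and_test(0.12, 0.5, 100))
```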
Table 1: Mean (Standard Error) of different metrics for STRINGS and GLASSO when s = 10
d      Precision               Recall                  F-score
       STRINGS     GLASSO      STRINGS     GLASSO      STRINGS     GLASSO
30     0.37(0.11)  0.22(0.06)  1.00(0.00)  1.00(0.00)  0.53(0.11)  0.36(0.08)
60     0.31(0.09)  0.22(0.06)  1.00(0.00)  1.00(0.00)  0.47(0.11)  0.36(0.07)
100    0.31(0.10)  0.22(0.06)  1.00(0.02)  1.00(0.02)  0.47(0.11)  0.36(0.08)
250    0.21(0.07)  0.18(0.06)  0.99(0.02)  0.99(0.03)  0.34(0.10)  0.30(0.08)

5
Numerical Experiments
In this section, we conduct numerical experiments on both simulated and real data to validate
our STRINGS estimator and the “untangle and chord” procedure. We also compare our
method with the ℓ1-regularized maximum likelihood estimator (GLASSO).
5.1
Simulated Data
For each dimension d and sparsity s, we generate the precision matrix Ω∗ as follows. First,
we split the variables [d] into two groups G1 = {1, . . . , d/2} and G2 = {d/2 + 1, . . . , d} with
equal cardinality. For the intra-subject precision matrices, we set Ω∗1 = Ω∗2 to be the all-one
matrix. For the inter-subject precision matrix Ω∗12, we uniformly sample s indices from
G1 × G2 to be the support of Ω∗12 . And the corresponding values are set to be 0.5. Further
δI is added to Ω∗ to make its condition number equal to d. Finally the precision matrix is
standardized so that the diagonal entries of Ω∗ are all 1’s.
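A sketch of this construction might look as follows; the simple incremental search for δ is our illustrative choice, since the paper does not specify how δ is computed, and all names are ours.

```python
import numpy as np

def make_precision(d, s, rng):
    """Simulated Omega*: dense all-one intra-subject blocks, s random inter-subject
    entries set to 0.5, delta*I added until the condition number is at most d, then
    standardized to unit diagonal."""
    omega = np.ones((d, d))                      # Omega*_1 = Omega*_2 = all-one blocks
    omega[:d // 2, d // 2:] = 0.0
    omega[d // 2:, :d // 2] = 0.0
    idx = rng.choice((d // 2) ** 2, size=s, replace=False)
    rows, cols = idx // (d // 2), d // 2 + idx % (d // 2)
    omega[rows, cols] = 0.5
    omega[cols, rows] = 0.5
    delta = 0.0                                  # grow delta until cond(Omega + delta*I) <= d
    while np.linalg.cond(omega + delta * np.eye(d)) > d:
        delta += 0.01
    omega = omega + delta * np.eye(d)
    inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(omega)))
    return inv_sqrt @ omega @ inv_sqrt           # unit diagonal

omega_star = make_precision(d=30, s=10, rng=np.random.default_rng(0))
X = np.random.default_rng(1).multivariate_normal(
    np.zeros(30), np.linalg.inv(omega_star), size=100)
```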
Under this model, we generate n = 100 training samples from the multivariate normal
distribution with mean 0 and covariance Σ∗ = (Ω∗)−1, and an independent sample of size 100
with the same distribution to choose the tuning parameter λ in (2.6). The tuning parameter
has the form λ = C√(log d/n) according to Theorem 4.1, where C is set to be 50 uniform
values across [0, 5]. We evaluate the estimators on the validation sample using the estimating
equation in (3.1), i.e., we define the validation loss incurred by the estimator Θ̂ to be

Lval(Θ̂) = ‖Σ̂val Θ̂ Σ̂val_G + Σ̂val − Σ̂val_G‖_F,    (5.1)

where Σ̂val and Σ̂val_G are the empirical covariance matrices on the validation sample. The final
tuning parameter λ is chosen to be the one with the smallest validation loss in (5.1).
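A sketch of this tuning rule is given below; it reuses the hypothetical strings_estimator from the earlier sketch, and the grid and helper names are ours.

```python
import numpy as np

def validation_loss(theta_hat, X_val, d1):
    """Validation loss (5.1): || S_val Theta S_val_G + S_val - S_val_G ||_F."""
    n, d = X_val.shape
    S = X_val.T @ X_val / n
    S_G = np.zeros((d, d))
    S_G[:d1, :d1], S_G[d1:, d1:] = S[:d1, :d1], S[d1:, d1:]
    return np.linalg.norm(S @ theta_hat @ S_G + S - S_G, "fro")

def tune_lambda(X_train, X_val, d1, n_grid=50):
    """Pick lambda = C * sqrt(log d / n) with C on a uniform grid over [0, 5]."""
    n, d = X_train.shape
    best_loss, best_lam = np.inf, None
    for C in np.linspace(0, 5, n_grid):
        lam = C * np.sqrt(np.log(d) / n)
        theta_hat = strings_estimator(X_train, d1, lam)   # sketch defined earlier
        loss = validation_loss(theta_hat, X_val, d1)
        if loss < best_loss:
            best_loss, best_lam = loss, lam
    return best_lam
```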
5.1.1
Estimation Quality
Since we are estimating the support of Ω∗12 , we adopt standard metrics including Precision,
Recall and F-score to measure its performance. They are defined as follows
Precision = TP / (TP + FP),    Recall = TP / (TP + FN),    F-score = 2 · Precision · Recall / (Precision + Recall),
Table 2: Mean (Standard Error) of different metrics for STRINGS and GLASSO when s = 30
d      Precision               Recall                  F-score
       STRINGS     GLASSO      STRINGS     GLASSO      STRINGS     GLASSO
30     0.43(0.07)  0.34(0.05)  0.89(0.06)  0.94(0.05)  0.58(0.06)  0.49(0.06)
60     0.51(0.10)  0.25(0.04)  1.00(0.01)  1.00(0.01)  0.67(0.08)  0.40(0.04)
100    0.47(0.09)  0.23(0.03)  1.00(0.01)  1.00(0.01)  0.63(0.08)  0.37(0.05)
250    0.38(0.08)  0.19(0.03)  0.99(0.02)  1.00(0.01)  0.54(0.08)  0.32(0.04)
Table 3: Mean (Standard Error) of different metrics for STRINGS and GLASSO when s = 50
d      Precision               Recall                  F-score
       STRINGS     GLASSO      STRINGS     GLASSO      STRINGS     GLASSO
30     0.48(0.05)  0.40(0.04)  0.78(0.07)  0.86(0.05)  0.60(0.04)  0.54(0.04)
60     0.38(0.06)  0.33(0.04)  0.81(0.06)  0.86(0.06)  0.51(0.05)  0.47(0.04)
100    0.61(0.09)  0.23(0.02)  1.00(0.01)  1.00(0.00)  0.75(0.06)  0.38(0.03)
250    0.50(0.06)  0.20(0.03)  1.00(0.00)  1.00(0.00)  0.67(0.05)  0.33(0.04)
where TP is the number of true positives (nonzero entries in Θ∗12 are considered to be positives),
FP is the false positives and FN stands for the false negatives. By definition, high Precision
means the algorithm recovers substantially more truth than negatives, and high Recall means
the algorithm returns most of the truth. Hence, Precision measures the exactness or quality
of support recovery, and Recall is a measure of the algorithm’s completeness or quantity.
F-score, as the harmonic mean of Precision and Recall, combines these two into a single metric.
The higher the F-score, the better the algorithm is at support recovery. For Θ̂12, absolute
values above 1 × 10−4 are considered to be non-zeros, since we set the optimization accuracy
to 1 × 10−4. We consider different values of d ∈ {30, 60, 100, 250} and s ∈ {10, 30, 50}.
For each configuration, we replicate 100 times and report the mean and standard error
of Precision, Recall and F-score in Tables 1–3. We can see that the STRINGS estimator
outperforms GLASSO uniformly over all configurations of (d, s). The improvement is more
significant when the dimension d and the sparsity s are larger. This is expected since
GLASSO is not tailored for estimation under dense precision matrices. Due to the dense
intra-subject precision matrices, GLASSO tends to under-regularize, which results in too
many false positives. And this leads to poor Precision and F-score.
many false positives. And this leads to poor Precision and F-score.
5.1.2
Inference Quality
For inference, we only report the results for STRINGS since GLASSO performs poorly in
terms of estimation. We generate another sample of size 100 to de-bias the initial STRINGS estimator. Guided by Theorem 4.2, the tuning parameter λ′ in the CLIME-type procedure is chosen to be $0.5\sqrt{\log d/n}$. By Corollary 4.6, the (1 − α) × 100% asymptotic confidence
Table 4: Average coverage probabilities and average lengths over S and $S^c$ for 100 replications

  d  |  s | AvgcovS | AvgcovSc | AvglenS | AvglenSc
  30 | 10 | 0.9430  | 0.9503   | 0.2462  | 0.2419
  60 | 10 | 0.9430  | 0.9518   | 0.2479  | 0.2635
 100 | 10 | 0.9190  | 0.9524   | 0.2715  | 0.3095
 250 | 10 | 0.9060  | 0.9631   | 0.2173  | 0.2887
interval for parameter $\theta^*_{jk}$ is given by
$$I_{jk}(\alpha) = \Big[\,\widehat\theta^{\,u}_{jk} - \frac{\widehat\xi_{jk}}{\sqrt{n}}\,\Phi^{-1}\Big(1-\frac{\alpha}{2}\Big),\ \ \widehat\theta^{\,u}_{jk} + \frac{\widehat\xi_{jk}}{\sqrt{n}}\,\Phi^{-1}\Big(1-\frac{\alpha}{2}\Big)\Big],$$
where $\widehat\Theta^u$ is the de-biased estimator and $\widehat\xi_{jk}$ is specified in (4.2). Throughout this section,
we set α = 0.05. For every parameter $\theta^*_{jk}$, we estimate the probability that the true value $\theta^*_{jk}$ is covered by the confidence interval $I_{jk}(\alpha)$ using its empirical version, i.e., $\widehat\alpha_{jk}$ is the percentage of times that $\theta^*_{jk}$ is covered by $I_{jk}(\alpha)$ in 100 replications. Next, for $S = \mathrm{supp}(\Omega^*_{12})$, we define the average coverage probability over S and over $S^c$ to be
$$\mathrm{Avgcov}_S = \frac{1}{|S|}\sum_{(j,k)\in S}\widehat\alpha_{jk}, \qquad \mathrm{Avgcov}_{S^c} = \frac{1}{|S^c|}\sum_{(j,k)\in S^c}\widehat\alpha_{jk}. \qquad (5.2)$$
We also calculate the average length of the confidence intervals over S and $S^c$, and denote them by $\mathrm{Avglen}_S$ and $\mathrm{Avglen}_{S^c}$ respectively. The results for these four quantities over 100 replications can be seen in Table 4. The coverage probabilities over the support S and the non-support $S^c$ are around 95%, which matches the significance level α = 0.05. The coverage probability over S decreases as the dimension d increases, as expected.
In Figure 3, we show the QQ-plot of $\sqrt{n}\,(\widehat\theta^{\,u}_{jk} - \theta^*_{jk})/\widehat\xi_{jk}$ when d = 250. We choose $(j, k) \in \{(126, 1), (126, 2), (126, 3)\}$ to present. As we can see, the scattered points of $\sqrt{n}\,(\widehat\theta^{\,u}_{jk} - \theta^*_{jk})/\widehat\xi_{jk}$ in 100 replications are close to the line with zero intercept and unit slope.
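For completeness, the following Python sketch shows how the intervals $I_{jk}(\alpha)$ and the averaged coverage probabilities in (5.2) could be computed in such a study; the de-biased estimates and the variance estimates from (4.2) are assumed to be given, and the helper names are ours.

```python
import numpy as np
from scipy.stats import norm

def confidence_interval(theta_hat_u, xi_hat, n, alpha=0.05):
    # Entrywise (1 - alpha) confidence interval for theta*_jk
    half = xi_hat / np.sqrt(n) * norm.ppf(1 - alpha / 2)
    return theta_hat_u - half, theta_hat_u + half

def average_coverage(covered, S_mask):
    # covered: boolean array (replications x d1 x d2) of coverage events per entry;
    # S_mask:  boolean support mask of Omega*_12. Returns (Avgcov_S, Avgcov_Sc).
    alpha_hat = covered.mean(axis=0)
    return alpha_hat[S_mask].mean(), alpha_hat[~S_mask].mean()
```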
5.2
fMRI Data
In this section, we apply our estimation and inference methods to an fMRI data set studied in Chen et al. (2016). This data set includes fMRI measurements of 17 subjects while they
were watching a 23-minute movie (BBC’s “Sherlock”). The fMRI measurements were made
every 1.5 seconds, thus in total we have 945 brain images for each subject. As described
in Chen et al. (2016), the 23-minute movie is divided into 26 scenes for further analysis.
For the original fMRI data, there are 271, 633 voxels measured. We adopt the method
introduced in Baldassano et al. (2015) to reduce the dimension to 172 regions of interest
(ROIs). We use the average of the first eight subjects as XG1 and the average of the remaining
nine subjects as XG2 for conducting Inter-Subject Analysis. For preprocessing, each ROI is
Figure 3: The QQ-plots of $\sqrt{n}\,(\widehat\theta^{\,u}_{jk} - \theta^*_{jk})/\widehat\xi_{jk}$ against the standard normal quantiles, panels (a)-(c) for (j, k) = (126, 1), (126, 2) and (126, 3).
standardized to have zero mean and unit variance. For estimation, the tuning parameter λ is chosen through cross-validation. In inference, we threshold the de-biased estimator at level $\Phi^{-1}(1 - 4\alpha/d^2)\cdot\widehat\xi_{jk}/\sqrt{n}$, where α = 0.05 and $4/d^2$ accounts for the Bonferroni correction in
multiple hypothesis testing. We pick the eighth scene and the fifteenth scene for presentation.
Scene 8 represents a press conference held by the police department to describe the recent
suicides. Scene 15 contains the first meeting of Sherlock and Watson during which Sherlock
shows his deduction talent to Watson.
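A minimal sketch of this Bonferroni-corrected thresholding step, under the assumption that the de-biased estimate and the variance estimates from (4.2) are available as matrices, is as follows (the function name is ours).

```python
import numpy as np
from scipy.stats import norm

def threshold_debiased(Theta_hat_u, xi_hat, n, d, alpha=0.05):
    # Keep entries exceeding Phi^{-1}(1 - 4*alpha/d^2) * xi_hat_jk / sqrt(n)
    level = norm.ppf(1 - 4 * alpha / d**2) * xi_hat / np.sqrt(n)
    return np.abs(Theta_hat_u) > level      # boolean adjacency of retained edges
```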
In Figure 4, we show the brain networks for these two scenes estimated by our method.
Each purple circle represents a region of interest (ROI). We also show the snapshots of both
the left and the right brain hemispheres in Figure 5. The color encodes the degree of each ROI in the inter-subject conditional independence graph: a redder area corresponds to an ROI with higher degree. As we can see, for the eighth scene, when the press conference took place, the visual cortex and auditory cortex are highly activated, since the subjects were mostly receiving audio and visual information from the press conference. The high activation of the visual and auditory cortices is ubiquitous in all 26 scenes. This makes sense since the subjects were under an audio-visual stimulus (BBC’s “Sherlock”), and it also matches the results in Chen et al. (2016). During the fifteenth scene, when Sherlock and Watson met, we can see that the prefrontal cortex, especially the dorsolateral prefrontal cortex (DL-PFC), has a large degree. The DL-PFC is known for its function in working memory and abstract reasoning (Miller and Cummings, 2007), which is consistent with Scene 15, in which the subjects might reason along with Sherlock’s deduction about Watson’s job.
6
Extensions
In this section, we introduce two extensions to our methods and theories. Section 6.1 is
devoted to extensions to non-Gaussian distributions and in Section 6.2, we generalize the
Inter-Subject Analysis (ISA) to multiple subjects.
(a) Scene 8: Press conference
(b) Scene 15: Sherlock and Watson
Figure 4: The brain networks for two different scenes. Each purple circle represents an ROI
and the black edges represent the graph edges.
(a) Scene 8: press conference
(b) Scene 15: Sherlock and Watson met
Figure 5: The brain images for both the left and right hemispheres.
6.1
Non-Gaussian Distributions
In this section, we extend our estimation methods to non-Gaussian distributions including
the nonparanormal distribution studied in Liu et al. (2012) and Xue et al. (2012) and mixed
data distributions studied in Fan et al. (2016). First, we introduce the definition of the
nonparanormal distribution.
Definition 6.1. (Nonparanormal Distribution) Let f = {f1 , . . . , fd } be a set of monotone
univariate functions and let Σ∗ ∈ Rd×d be a positive definite correlation matrix with the diagonal being all 1’s. A d-dimensional random vector X = (X1 , . . . , Xd )> follows a nonparanormal
distribution X ∼ NPNd (f, Σ∗ ) if f (X) = (f1 (X1 ), . . . , fd (Xd ))> ∼ N (0, Σ∗ ).
A close look at the proofs of Theorem 4.1 will show that the validity of the STRINGS estimator only depends on the $\ell_{\max}$-norm of the difference between $\widehat\Sigma$ and $\Sigma^*$. If we can obtain an estimator $\widehat\Sigma$ which satisfies $\|\widehat\Sigma - \Sigma^*\|_{\max} = O_P(\sqrt{\log d/n})$, then our estimation procedure is still valid, and this is the case for the nonparanormal distribution. The estimator $\widehat\Sigma$ for the nonparanormal distribution comes from Kendall's tau statistics. Let $(j, k) \in [d]\times[d]$ and define the (j, k)-th Kendall's tau statistic as
$$\widehat\tau_{jk} = \frac{2}{n(n-1)}\sum_{1\le i<i'\le n}\mathrm{sign}\big[(X_{ij} - X_{i'j})(X_{ik} - X_{i'k})\big].$$
$\widehat\tau_{jk}$ can be viewed as the nonparametric correlation between $X_j$ and $X_k$. We define the following estimator $\widehat\Sigma^\tau = [\widehat\Sigma^\tau_{jk}]$ for the unknown correlation matrix $\Sigma^*$:
$$\widehat\Sigma^\tau_{jk} = \begin{cases} \sin\big(\tfrac{\pi}{2}\widehat\tau_{jk}\big), & \text{if } j \neq k,\\ 1, & \text{if } j = k.\end{cases}$$
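For illustration, a short Python sketch of this rank-based correlation estimator is given below; it uses SciPy's `kendalltau` for the pairwise statistics (which agrees with $\widehat\tau_{jk}$ above for continuous data without ties), and the function name is ours.

```python
import numpy as np
from scipy.stats import kendalltau

def nonparanormal_correlation(X):
    # Estimate Sigma* for nonparanormal data via sin(pi/2 * tau_hat_jk)
    n, d = X.shape
    Sigma_tau = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            tau, _ = kendalltau(X[:, j], X[:, k])
            Sigma_tau[j, k] = Sigma_tau[k, j] = np.sin(np.pi / 2 * tau)
    return Sigma_tau
```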
By Theorem 4.2 in Liu et al. (2012), we have $\|\widehat\Sigma^\tau - \Sigma^*\|_{\max} \le 2.45\pi\sqrt{\log d/n}$ with high probability. Based on this, we obtain the following theorem.
Theorem 6.2. (Inter-Subject Analysis for Nonparanormal Distributions) Suppose the assumptions in Theorem 4.1 are satisfied. Then there exists a constant C > 0 such that for sufficiently large n, if $\lambda = 2C\sqrt{\log d/n}$, with probability $1 - 4d^{-1}$ we have
$$\|\widehat\Theta - \Theta^*\|_F \le \frac{14C}{\rho_{\min}^2}\sqrt{\frac{s^*\log d}{n}} \qquad\text{and}\qquad \|\widehat\Theta - \Theta^*\|_{1,1} \le \frac{56C}{\rho_{\min}^2}\, s^*\sqrt{\frac{\log d}{n}}.$$
We can see that Theorem 6.2 shares the same statistical rate with Theorem 4.1 under the
Gaussian distribution. In the following we introduce the definition of the latent Gaussian
copula model for binary data described in Fan et al. (2016).
Definition 6.3. (Binary Data) Let X = (X1 , . . . , Xd )> ∈ {0, 1}d be a d-dimensional 0/1
random vector. We say X satisfies the latent Gaussian copula model if there exists a d-dimensional random vector $Z = (Z_1, \ldots, Z_d)^\top \sim \mathrm{NPN}_d(f, \Sigma^*)$ such that
$$X_j = \mathbf{1}_{\{Z_j > C_j\}} \quad \text{for all } j = 1, \ldots, d,$$
where C = (C1 , . . . , Cd ) is a vector of constants. Then we denote X ∼ LNPN(f, Σ∗ , C).
The sample estimator of $\Sigma^*$ is also built upon Kendall's tau statistics; however, it is more involved than in the nonparanormal case. We omit the details and denote the sample estimator by $\widehat\Sigma^b$. Then by Theorem 3.1 in Fan et al. (2016), we also have $\|\widehat\Sigma^b - \Sigma^*\|_{\max} = O_P(\sqrt{\log d/n})$, which gives results similar to Theorem 6.2.
In all, we can see that with a suitable choice of the sample covariance estimator $\widehat\Sigma$, our STRINGS estimator can be easily adapted to non-Gaussian distributions.
6.2
ISA with Multiple Subjects
In this section, we discuss the extension of ISA to multiple subjects.
Let $X = (X_1, \ldots, X_d)^\top$ be a d-dimensional random vector. Let $G_1, \ldots, G_L$ be L disjoint subsets of $\{1, \ldots, d\}$ with cardinality $|G_\ell| = d_\ell$ and $\sum_{\ell=1}^L d_\ell = d$. $X_{G_\ell}$ represents the features of the $\ell$-th subject. Let $\Sigma^* \in \mathbb{R}^{d\times d}$ be the covariance matrix of X, with $\Sigma^*_{jk} \in \mathbb{R}^{d_j\times d_k}$ being the covariance between $X_{G_j}$ and $X_{G_k}$. For the precision matrix $\Omega^* = (\Sigma^*)^{-1}$, we use $\Omega^*_{jk}$ to denote the dependency between $X_{G_j}$ and $X_{G_k}$. Defining $\Sigma^*_G = \mathrm{diag}(\Sigma^*_{11}, \ldots, \Sigma^*_{LL})$ and $\Theta^* = \Omega^* - (\Sigma^*_G)^{-1}$, we have a similar observation.
Proposition 6.4. If L = O(1) and $\|\Omega^*_{jk}\|_0 \le s$ for all $j \neq k$, then $\|\Theta^*\|_0 = O(s^2)$.
Given Proposition 6.4, we can see that the STRINGS estimator and the “untangle and chord” procedure are also valid for multiple-subject analysis.
References
Ashby, F. G. (2011). Statistical analysis of fMRI data. MIT press.
Baldassano, C., Beck, D. M. and Fei-Fei, L. (2015). Parcellating connectivity in spatial maps.
PeerJ 3 e784.
Banerjee, O., Ghaoui, L. E. and d'Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research 9 485–516.
Bartels, A. and Zeki, S. (2004). Functional brain mapping during free viewing of natural scenes.
Human brain mapping 21 75–85.
Belitski, A., Gretton, A., Magri, C., Murayama, Y., Montemurro, M. A., Logothetis,
N. K. and Panzeri, S. (2008). Low-frequency local field potentials and spikes in primary visual
cortex convey independent visual information. The Journal of Neuroscience 28 5696–5709.
Bickel, P. J. and Levina, E. (2008). Regularized estimation of large covariance matrices. The
Annals of Statistics 199–227.
Cai, T., Liu, W. and Luo, X. (2011). A constrained ℓ1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association 106 594–607.
Cai, T. T., Liu, W., Zhou, H. H. et al. (2016). Estimating sparse precision matrix: Optimal
rates of convergence and adaptive estimation. The Annals of Statistics 44 455–488.
Chen, J., Leong, Y. C., Norman, K. A. and Hasson, U. (2016). Shared experience, shared
memory: a common structure for brain activity during naturalistic recall. bioRxiv 035931.
Fan, J., Liu, H., Ning, Y. and Zou, H. (2016). High dimensional semiparametric latent graphical
model for mixed data. Journal of the Royal Statistical Society: Series B .
Friedman, J., Hastie, T. and Tibshirani, R. (2008). Sparse inverse covariance estimation with
the graphical lasso. Biostatistics 9 432–441.
Gu, Q., Cao, Y., Ning, Y. and Liu, H. (2015). Local and global inference for high dimensional
Gaussian copula graphical models. arXiv preprint arXiv:1502.02347 .
Hartley, T., Maguire, E. A., Spiers, H. J. and Burgess, N. (2003). The well-worn route
and the path less traveled: Distinct neural bases of route following and wayfinding in humans.
Neuron 37 877 – 888.
Hasson, U., Nir, Y., Levy, I., Fuhrmann, G. and Malach, R. (2004). Intersubject synchronization of cortical activity during natural vision. Science 303 1634–1640.
Horwitz, B. and Rapoport, S. I. (1988). Partial correlation coefficients approximate. J Nucl
Med 29 392–399.
Huang, S., Li, J., Sun, L., Liu, J., Wu, T., Chen, K., Fleisher, A., Reiman, E. and Ye, J.
(2009). Learning brain connectivity of alzheimer’s disease from neuroimaging data. In Advances
in Neural Information Processing Systems.
Ipsen, I. C. (2009). Numerical matrix analysis: Linear systems and least squares. Siam.
Isserlis, L. (1916). On certain probable errors and correlation coefficients of multiple frequency
distributions with skew regression. Biometrika 11 185–190.
Jankova, J., van de Geer, S. et al. (2015). Confidence intervals for high-dimensional inverse
covariance estimation. Electronic Journal of Statistics 9 1205–1229.
Javanmard, A. and Montanari, A. (2014). Confidence intervals and hypothesis testing for
high-dimensional regression. Journal of Machine Learning Research 15 2869–2909.
Kolar, M., Liu, H. and Xing, E. P. (2014). Graph estimation from multi-attribute data. Journal
of Machine Learning Research 15 1713–1750.
Lee, H., Lee, D. S., Kang, H., Kim, B.-N. and Chung, M. K. (2011). Sparse brain network
recovery under compressed sensing. IEEE Transactions on Medical Imaging 30 1154–1165.
Liu, H., Han, F., Yuan, M., Lafferty, J. and Wasserman, L. (2012). High-dimensional
semiparametric gaussian copula graphical models. The Annals of Statistics 2293–2326.
Liu, S., Suzuki, T., Sugiyama, M. and Fukumizu, K. (2015). Structure learning of partitioned
markov networks. arXiv preprint arXiv:1504.00624 .
Marrelec, G., Krainik, A., Duffau, H., Pélégrini-Issac, M., Lehéricy, S., Doyon, J.
and Benali, H. (2006). Partial correlation for functional brain interactivity investigation in
functional mri. Neuroimage 32 228–237.
Mechler, F., Victor, J. D., Purpura, K. P. and Shapley, R. (1998). Robust temporal coding
of contrast by v1 neurons for transient but not for steady-state stimuli. Journal of Neuroscience
18 6583–6598.
Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection
with the lasso. The annals of statistics 1436–1462.
Miller, B. L. and Cummings, J. L. (2007). The human frontal lobes: Functions and disorders.
Guilford press.
Negahban, S. and Wainwright, M. J. (2011). Estimation of (near) low-rank matrices with noise
and high-dimensional scaling. The Annals of Statistics 1069–1097.
Ning, Y. and Liu, H. (2014). A general theory of hypothesis tests and confidence regions for sparse
high dimensional models. arXiv preprint arXiv:1412.8765 .
Petersen, K. B. and Pedersen, M. S. (2012). The matrix cookbook. Version 20121115.
Ravikumar, P., Wainwright, M. J., Raskutti, G., Yu, B. et al. (2011). High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electronic Journal of Statistics 5 935–980.
Rohde, A., Tsybakov, A. B. et al. (2011). Estimation of high-dimensional low-rank matrices.
The Annals of Statistics 39 887–930.
Rothman, A. J., Bickel, P. J., Levina, E., Zhu, J. et al. (2008). Sparse permutation invariant
covariance estimation. Electronic Journal of Statistics 2 494–515.
Simony, E., Honey, C. J., Chen, J., Lositsky, O., Yeshurun, Y., Wiesel, A. and Hasson,
U. (2016). Dynamic reconfiguration of the default mode network during narrative comprehension.
Nature Communications 7.
Van de Geer, S., Bühlmann, P., Ritov, Y., Dezeure, R. et al. (2014). On asymptotically
optimal confidence regions and tests for high-dimensional models. The Annals of Statistics 42
1166–1202.
Varoquaux, G., Gramfort, A., Poline, J. B. and Thirion, B. (2012). Markov models for
fmri correlation structure: is brain functional connectivity small world, or decomposable into
networks? Journal of Physiology-Paris 106 212–221.
Vershynin, R. (2010). Introduction to the non-asymptotic analysis of random matrices. arXiv
preprint arXiv:1011.3027 .
Xia, Y., Cai, T. and Cai, T. T. (2016). Multiple testing of submatrices of a precision matrix with
applications to identification of between pathway interactions. J. Am. Stat. Assoc. to appear.
Xue, L., Zou, H. et al. (2012). Regularized rank-based estimation of high-dimensional nonparanormal graphical models. The Annals of Statistics 40 2541–2571.
Yao, H., Shi, L., Han, F., Gao, H. and Dan, Y. (2007). Rapid learning in cortical coding of
visual scenes. Nature neuroscience 10 772–778.
Yuan, M. (2010). High dimensional inverse covariance matrix estimation via linear programming.
Journal of Machine Learning Research 11 2261–2286.
Yuan, M. and Lin, Y. (2007). Model selection and estimation in the gaussian graphical model.
Biometrika 94 19–35.
Yuan, X.-T. and Zhang, T. (2014). Partial gaussian graphical model estimation. IEEE Transactions on Information Theory 60 1673–1687.
Zacks, J. M., Braver, T. S., Sheridan, M. A., Donaldson, D. I., Snyder, A. Z., Ollinger,
J. M., Buckner, R. L. and Raichle, M. E. (2001). Human brain activity time-locked to
perceptual event boundaries. Nature neuroscience 4 651–655.
Zhang, C.-H. and Zhang, S. S. (2014). Confidence intervals for low dimensional parameters in
high dimensional linear models. Journal of the Royal Statistical Society: Series B (Statistical
Methodology) 76 217–242.
Zhao, S. D., Cai, T. T. and Li, H. (2014). Direct estimation of differential networks. Biometrika
101 253–268.
A
Proof of the Key Observation
In this section, we give the proof of the key observation in Section 1.
Proof of the Key Observation. By the definition of $\Theta^*$, we have
$$\Theta^* = \Omega^* - (\Sigma^*_G)^{-1} = \begin{pmatrix}\Theta^*_1 & \Theta^*_{12}\\ \Theta^{*\top}_{12} & \Theta^*_2\end{pmatrix} = \begin{pmatrix}\Sigma^*_1 & \Sigma^*_{12}\\ \Sigma^{*\top}_{12} & \Sigma^*_2\end{pmatrix}^{-1} - \begin{pmatrix}\Sigma^*_1 & 0\\ 0 & \Sigma^*_2\end{pmatrix}^{-1},$$
where $\Theta^*_1$, $\Theta^*_2$ and $\Theta^*_{12}$ are submatrices of $\Theta^*$. By the formula for matrix inversion in block
form, we have $\Theta^*_1 = (\Sigma^*_1 - \Sigma^*_{12}\Sigma^{*-1}_2\Sigma^{*\top}_{12})^{-1} - \Sigma^{*-1}_1$, $\Theta^*_2 = (\Sigma^*_2 - \Sigma^{*\top}_{12}\Sigma^{*-1}_1\Sigma^*_{12})^{-1} - \Sigma^{*-1}_2$ and $\Theta^*_{12} = \Omega^*_{12} = -\Sigma^{*-1}_1\Sigma^*_{12}(\Sigma^*_2 - \Sigma^{*\top}_{12}\Sigma^{*-1}_1\Sigma^*_{12})^{-1}$. Further, by Woodbury's identity, we have
$$\Theta^*_1 = \Sigma^{*-1}_1\Sigma^*_{12}(\Sigma^*_2 - \Sigma^{*\top}_{12}\Sigma^{*-1}_1\Sigma^*_{12})^{-1}\Sigma^{*\top}_{12}\Sigma^{*-1}_1 = -\Omega^*_{12}\Sigma^{*\top}_{12}\Sigma^{*-1}_1. \qquad (A.1)$$
And similarly we have the following identity for $\Theta^*_2$:
$$\Theta^*_2 = \Sigma^{*-1}_2\Sigma^{*\top}_{12}(\Sigma^*_1 - \Sigma^*_{12}\Sigma^{*-1}_2\Sigma^{*\top}_{12})^{-1}\Sigma^*_{12}\Sigma^{*-1}_2 = -\Omega^{*\top}_{12}\Sigma^*_{12}\Sigma^{*-1}_2. \qquad (A.2)$$
By the assumption that $\|\mathrm{vec}(\Omega^*_{12})\|_0 \le s$, we know that there are at most s nonzero columns and s nonzero rows in $\Omega^*_{12}$. By the relationship in (A.1), we can see that there are at most s nonzero rows in $\Theta^*_1$. Moreover, since $\Theta^*_1$ is symmetric, there are at most s nonzero columns in $\Theta^*_1$. Hence $\|\mathrm{vec}(\Theta^*_1)\|_0 \le s^2$. Applying the same argument to (A.2), we have $\|\mathrm{vec}(\Theta^*_2)\|_0 \le s^2$. In all, we have $\|\mathrm{vec}(\Theta^*)\|_0 \le 2s^2 + 2s$.
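The observation can also be checked numerically. The following small Python example, with illustrative dimensions and an arbitrary sparsity pattern of our own choosing, builds a two-block precision matrix whose off-diagonal block has s nonzero entries and verifies that $\Theta^* = \Omega^* - (\Sigma^*_G)^{-1}$ has at most $2s^2 + 2s$ nonzero entries.

```python
import numpy as np

d1, d2, s = 6, 6, 3
rng = np.random.default_rng(0)
Omega12 = np.zeros((d1, d2))
Omega12.flat[rng.choice(d1 * d2, size=s, replace=False)] = 0.2   # s nonzero entries
Omega = np.eye(d1 + d2)                 # diagonally dominant, hence positive definite
Omega[:d1, d1:] = Omega12
Omega[d1:, :d1] = Omega12.T
Sigma = np.linalg.inv(Omega)
Sigma_G = np.zeros_like(Sigma)          # block-diagonal part of Sigma
Sigma_G[:d1, :d1] = Sigma[:d1, :d1]
Sigma_G[d1:, d1:] = Sigma[d1:, d1:]
Theta = Omega - np.linalg.inv(Sigma_G)
print(np.sum(np.abs(Theta) > 1e-10), "<=", 2 * s**2 + 2 * s)
```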
B
Computational Algorithm
In this section, we derive the updates for optimizing (2.6) using alternating direction method
of multipliers (ADMM).
We can rewrite (2.6) into the following form:
$$\widehat\Theta = \mathop{\arg\min}_{W \in \mathbb{R}^{d\times d}} \ \mathrm{Tr}(W\widehat\Sigma) - \log|Y| + \lambda\|Z\|_{1,1} \qquad (B.1)$$
$$\text{subject to}\quad W - Z = 0, \qquad \widehat\Sigma_G W \widehat\Sigma_G + \widehat\Sigma_G - Y = 0.$$
The augmented Lagrangian of (B.1) can be written as
$$L(W, Y, Z, U_1, U_2) = \mathrm{Tr}(W\widehat\Sigma) - \log|Y| + \lambda\|Z\|_{1,1} + \mathrm{Tr}[U_1(W - Z)] + \mathrm{Tr}[U_2(\widehat\Sigma_G W \widehat\Sigma_G + \widehat\Sigma_G - Y)] + \frac{\rho}{2}\|W - Z\|_F^2 + \frac{\rho}{2}\|\widehat\Sigma_G W \widehat\Sigma_G + \widehat\Sigma_G - Y\|_F^2.$$
ADMM is an iterative method that alternately optimizes over the primal variables W, Y, Z and the dual variables $U_1, U_2$. In the (k+1)-th step, for W, we solve the following:
$$W^{k+1} = \mathop{\arg\min}_W L(W, Y^k, Z^k, U_1^k, U_2^k) = \mathop{\arg\min}_W \mathrm{Tr}(W\widehat\Sigma) + \mathrm{Tr}[U_1^k(W - Z^k)] + \mathrm{Tr}[U_2^k(\widehat\Sigma_G W \widehat\Sigma_G + \widehat\Sigma_G - Y^k)] + \frac{\rho}{2}\|W - Z^k\|_F^2 + \frac{\rho}{2}\|\widehat\Sigma_G W \widehat\Sigma_G + \widehat\Sigma_G - Y^k\|_F^2. \qquad (B.2)$$
The objective is a quadratic function of W and the optimality condition of (B.2) is given by
$$\widehat\Sigma + U_1^k + \widehat\Sigma_G U_2^k \widehat\Sigma_G + \rho(W^{k+1} - Z^k) + \rho(\widehat\Sigma_G)^2 W^{k+1} (\widehat\Sigma_G)^2 + \rho\,\widehat\Sigma_G(\widehat\Sigma_G - Y^k)\widehat\Sigma_G = 0,$$
which is equivalent to the following simplified form
$$W^{k+1} + A W^{k+1} A = B^k, \qquad (B.3)$$
where $A = (\widehat\Sigma_G)^2$ and $B^k = Z^k - \widehat\Sigma_G(\widehat\Sigma_G - Y^k)\widehat\Sigma_G - \tfrac{1}{\rho}(\widehat\Sigma + U_1^k + \widehat\Sigma_G U_2^k \widehat\Sigma_G)$.
Let $A = VDV^\top$ be the eigen-decomposition of A; multiplying (B.3) by $V^\top$ from the left and V from the right gives $V^\top W^{k+1} V + D\, V^\top W^{k+1} V\, D = V^\top B^k V$. Viewing $V^\top W^{k+1} V$ as a new variable $T^k$, we have $T^k + D T^k D = V^\top B^k V$. Since D is a diagonal matrix, $T^k$ can be easily solved as $T^k = (V^\top B^k V)./[\mathbf{1}\mathbf{1}^\top + \mathrm{diag}(D)\,\mathrm{diag}(D)^\top]$, where ./ denotes element-wise division. Then $W^{k+1}$ can be recovered by $W^{k+1} = V T^k V^\top$.
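A small Python sketch of this step is given below: it solves $W + AWA = B$ for symmetric A by working entrywise in the eigenbasis of A. The function name is ours; the same routine is reused in the algorithm sketch following Algorithm 2.

```python
import numpy as np

def solve_w_update(A, B):
    # Solve W + A W A = B via A = V D V^T: (V^T W V)_ij * (1 + d_i d_j) = (V^T B V)_ij
    eigvals, V = np.linalg.eigh(A)
    T = (V.T @ B @ V) / (1.0 + np.outer(eigvals, eigvals))
    return V @ T @ V.T
```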
After obtaining $W^{k+1}$, we solve the following optimization problem for Y:
$$Y^{k+1} = \mathop{\arg\min}_Y L(W^{k+1}, Y, Z^k, U_1^k, U_2^k) = \mathop{\arg\min}_Y -\log|Y| + \mathrm{Tr}[U_2^k(\widehat\Sigma_G W^{k+1}\widehat\Sigma_G + \widehat\Sigma_G - Y)] + \frac{\rho}{2}\|\widehat\Sigma_G W^{k+1}\widehat\Sigma_G + \widehat\Sigma_G - Y\|_F^2. \qquad (B.4)$$
Although the objective in (B.4) involves the log-determinant term, the minimizer enjoys a closed-form solution. The first-order optimality condition of (B.4) is given by
$$-Y^{-1} - U_2^k + \rho\big[Y - (\widehat\Sigma_G W^{k+1}\widehat\Sigma_G + \widehat\Sigma_G)\big] = 0.$$
Define $C^k = U_2^k + \rho(\widehat\Sigma_G W^{k+1}\widehat\Sigma_G + \widehat\Sigma_G)$ with the eigen-decomposition $C^k = Q^k\Lambda^k Q^{k\top}$, then construct a diagonal matrix $\widetilde Y^k$ with $\widetilde Y^k_{jj} = \big(\Lambda^k_{jj} + \sqrt{(\Lambda^k_{jj})^2 + 4\rho}\big)/(2\rho)$; the solution to (B.4) has the form $Y^{k+1} = Q^k \widetilde Y^k Q^{k\top}$.
The remaining updates for Z, $U_1$ and $U_2$ are straightforward, thus we omit them here.
C
Proof of the Rate of Convergence for STRINGS
Here we outline the proof of Theorem 4.1 on the statistical rate of the STRINGS estimator.
To simplify notations, let S = supp(Θ∗ ) be the support of Θ∗ . By Assumption (A1), we
Algorithm 2 STRINGS: Sparse edge esTimator for Intense NuISAnce GraphS
Input: $X \in \mathbb{R}^{n\times d}$, where the rows of X represent i.i.d. samples from $N(0, \Sigma^*)$.
Output: $\widehat\Theta$, the STRINGS estimator for $\Theta^*$.
Covariance Estimation: Obtain the sample covariance $\widehat\Sigma = (1/n)\cdot X^\top X$ and $\widehat\Sigma_G = \mathrm{diag}(\widehat\Sigma_1, \widehat\Sigma_2)$, where $\widehat\Sigma_1$ and $\widehat\Sigma_2$ are the diagonal submatrices of $\widehat\Sigma$.
Preconditioning: if $\lambda_{\min}(\widehat\Sigma_G) = 0$, let $\widehat\Sigma_G = \widehat\Sigma_G + \sqrt{\log d/n}\cdot I$.
Initialization: Set $W^0 = Z^0 = 0$, $Y^0 = \widehat\Sigma_G$ and $U_1^0 = U_2^0 = 0$. Fix the step size $\rho > 0$.
Preprocessing: Form $A = (\widehat\Sigma_G)^2$ and compute the eigen-decomposition $A = VDV^\top$.
for $k = 0, 1, 2, \ldots$ do
  W update: Form $B^k = Z^k - \widehat\Sigma_G(\widehat\Sigma_G - Y^k)\widehat\Sigma_G - (1/\rho)\cdot(\widehat\Sigma + U_1^k + \widehat\Sigma_G U_2^k \widehat\Sigma_G)$. Let $T^k = (V^\top B^k V)./[\mathbf{1}\mathbf{1}^\top + \mathrm{diag}(D)\,\mathrm{diag}(D)^\top]$. Then set $W^{k+1} = V T^k V^\top$.
  Y update: Form $C^k = U_2^k + \rho(\widehat\Sigma_G W^{k+1} \widehat\Sigma_G + \widehat\Sigma_G)$ and its eigen-decomposition $C^k = Q^k\Lambda^k Q^{k\top}$. Then set $Y^{k+1} = Q^k\widetilde Y^k Q^{k\top}$, where $\widetilde Y^k$ is a diagonal matrix with $\widetilde Y^k_{jj} = \big(\Lambda^k_{jj} + \sqrt{(\Lambda^k_{jj})^2 + 4\rho}\big)/(2\rho)$.
  Z update: Set $Z^{k+1} = \tau_{\lambda/\rho}(W^{k+1} + U_1^k/\rho)$, where $\tau_a(v) = (v-a)_+ - (-v-a)_+$.
  $U_1$ update: Set $U_1^{k+1} = U_1^k + \rho(W^{k+1} - Z^{k+1})$.
  $U_2$ update: Set $U_2^{k+1} = U_2^k + \rho(\widehat\Sigma_G W^{k+1}\widehat\Sigma_G + \widehat\Sigma_G - Y^{k+1})$.
end for
return $\widehat\Theta = W^k$.
have $|S| \le s^*$. Define a cone $\mathcal{C} = \{\Delta \in \mathbb{R}^{d\times d} : \|\Delta_{S^c}\|_{1,1} \le 3\|\Delta_S\|_{1,1}\}$. Further, define a set $\mathcal{C}^{(t)} = \mathcal{C} \cap \{\Delta \in \mathbb{R}^{d\times d} : \|\Delta\|_F = t\}$. Define a function $H(\Delta): \mathbb{R}^{d\times d} \to \mathbb{R}$ to be $H(\Delta) = L_n(\Theta^* + \Delta) + \lambda\|\Theta^* + \Delta\|_{1,1} - [L_n(\Theta^*) + \lambda\|\Theta^*\|_{1,1}]$. Denote $\widehat\Delta = \widehat\Theta - \Theta^*$ and $t^* = (14C/\rho_{\min}^2)\sqrt{s^*\log d/n}$, which is the desired rate of convergence for $\|\widehat\Delta\|_F$; here C is a constant specified in Lemma C.1.
We first state several technical lemmas whose proofs are deferred to Section E. The first lemma bounds $\|\nabla L_n(\Theta^*)\|_{\max}$, thereby connecting $\|\nabla L_n(\Theta^*)\|_{\max}$ with λ.
Lemma C.1. ($\ell_{\max}$-norm Bound) Under Assumptions (A1)-(A4), with probability at least $1 - 2d^{-1}$, we have $\|\nabla L_n(\Theta^*)\|_{\max} \le C\sqrt{\log d/n}$, where C is a constant.
The following lemma provides the restricted strong convexity condition for the empirical loss function $L_n(\Theta)$ in a neighborhood of $\Theta^*$.
Lemma C.2. (Restricted Strong Convexity) Under Assumptions (A1)-(A4), with probability at least $1 - 2d^{-1}$, for all $\Delta \in \mathcal{C}^{(t^*)}$ and $\mu \in [0, 1]$,
$$|\mathrm{vec}(\Delta)^\top \nabla^2 L_n(\Theta^* + \mu\Delta)\,\mathrm{vec}(\Delta)| \ge \Big(\rho_{\min}^2 - C' s^{*2}\sqrt{\frac{\log d}{n}}\Big)\|\Delta\|_F^2,$$
where $C'$ is a constant. Moreover, if $s^{*2}\sqrt{\log d/n} = o(1)$, for all $\Delta \in \mathcal{C}^{(t^*)}$ and $\mu \in [0, 1]$ we have $|\mathrm{vec}(\Delta)^\top \nabla^2 L_n(\Theta^* + \mu\Delta)\,\mathrm{vec}(\Delta)| \ge (1/2)\rho_{\min}^2\|\Delta\|_F^2$.
We are now ready to prove Theorem 4.1.
Proof of Theorem 4.1. Define the following two events
r
n
o
log d
E1 = k∇Ln (Θ∗ )kmax ≤ C
, where C is the same constant as in Lemma C.1 ,
n
o
n
ρ2min
(t∗ )
> 2
∗
2
E2 = For all ∆ ∈ C , µ ∈ [0, 1], |vec(∆) ∇ Ln (Θ + µ∆)vec(∆)| ≥
k∆kF .
2
By Lemma C.1 and Lemma C.2, we have P(E1 ) ≥ 1 − 2d−1 and P(E2 ) ≥ 1 − 2d−1 . Thus
P(E1 ∩ E2 ) = 1 − P(E1c ∪ E2c ) ≥ 1 − P(E1c ) − P(E2c ) ≥ 1 − 4d−1 ,
where we use the union bound that P(E1c ∪ E2c ) ≤ P(E1c ) + P(E2c ). In the rest of the proof, we
are always conditioning on thepevent E = E1 ∩ E2 .
Under E1 , we have λ = 2C log d/n ≥ 2k∇Ln (Θ∗ )kmax . We first specify the space where
b lies in when λ is chosen as this. By the definition of H(∆) and ∆,
b we have
∆
b = Ln (Θ∗ + ∆)
b + λkΘ∗ + ∆k
b 1,1 − [Ln (Θ∗ ) + λkΘ∗ k1,1 ]
H(∆)
b − Ln (Θ∗ ) − (λkΘ∗ k1,1 − λkΘ∗ + ∆k
b 1,1 ) .
= Ln (Θ∗ + ∆)
|
{z
} |
{z
}
I1
(C.1)
I2
b Further
For the first term, since Ln (Θ) is a convex function, we have I1 ≥ Tr[∇Ln (Θ∗ )∆].
using generalized Cauchy-Schwarz inequality, we get
b 1,1 ,
b ≤ k∇Ln (Θ∗ )kmax k∆k
b 1,1 ≤ λ k∆k
Tr[∇Ln (Θ∗ )∆]
2
where we also use the fact that λ ≥ 2k∇Ln (Θ∗ )kmax . Hence we get
b ≥ − λ k∆k
b 1,1 .
I1 ≥ − Tr[∇Ln (Θ∗ )∆]
2
(C.2)
For the second term, by decomposing the `1,1 -norm into S and S c , we have
b S k1,1 + k∆
b S c k1,1 ) ≤ λk∆
b S k1,1 − λk∆
b S c k1,1 ,
I2 = λkΘ∗S k1,1 − λ(kΘ∗S + ∆
(C.3)
where we use the triangle inequality. Combining (C.1), (C.2) and (C.3) together with the
b ≤ 0, we get −(λ/2)k∆k
b 1,1 − (λk∆
b S k1,1 − λk∆
b S c k1,1 ) ≤ 0. And this gives us
fact that H(∆)
b S c k1,1 ≤ 3k∆
b S k1,1 , i.e., ∆
b ∈ C.
k∆
b F ≤ t. We prove
We further claim that if H(∆) > 0 for all ∆ ∈ C(t) , we must have k∆k
b F > t. Take µ = t/k∆k
b F < 1, then we
this argument by contradiction. Suppose we have k∆k
(t)
b
b
b
have kµ∆kF = t and µ∆ ∈ C since ∆ ∈ C and C is a cone. We also have
b = H (1 − µ)0 + µ∆
b ≤ (1 − µ)H(0) + µH(∆)
b ≤ 0,
H(µ∆)
where in the first inequality we use the fact that H(∆) is a convex function of ∆ and in the
b ≤ 0. Hence we find a matrix
second inequality we use the fact that H(0) = 0 and H(∆)
∗
(t)
∗
b ∈ C and H(∆ ) ≤ 0. This contradicts our assumption that H(∆) > 0 for all
∆ = µ∆
(t)
∆ ∈ C . Thus we prove the argument.
∗
∗
Based on this, it suffices to show that H(∆) > 0 for all ∆ ∈ C(t ) . For ∆ ∈ C(t ) , we have
H(∆) = Ln (Θ∗ + ∆) − Ln (Θ∗ ) − (λkΘ∗ k1,1 − λkΘ∗ + ∆k1,1 ) .
|
{z
} |
{z
}
I3
(C.4)
I4
For the first term, by the mean-value theorem, we have
1
e
I3 = Tr[∇Ln (Θ∗ )∆] + vec(∆)> ∇2 Ln (Θ)vec(∆),
2
e = Θ∗ + µ∆ for some µ ∈ [0, 1]. Since we are conditioning on E, we have
where Θ
e
(1/2)vec(∆)> ∇2 Ln (Θ)vec(∆)
≥ (ρ2min /4)k∆k2F . Combining with (C.2), we have
I3 ≥
ρ2min
λ
ρ2
λ
λ
k∆k2F − k∆k1,1 = min k∆k2F − k∆S k1,1 − k∆S c k1,1 .
4
2
4
2
2
(C.5)
For the second term, we have the same decomposition as in (C.3):
I4 = λkΘ∗S k1,1 − λ(kΘ∗S + ∆S k1,1 + k∆S c k1,1 ) ≤ λk∆S k1,1 − λk∆S c k1,1 .
(C.6)
Combing (C.4), (C.5) and (C.6), we have
λ
λ
ρ2min
k∆k2F − k∆S k1,1 − k∆S c k1,1 − (λk∆S k1,1 − λk∆S c k1,1 )
4
2
2
√
ρ2min
3
≥
k∆k2F − λ s∗ k∆kF ,
4
2
√
√
∗
where in the second inequality we use k∆S k1,1 ≤ s∗ k∆S kF ≤ s∗ k∆kF . Since ∆ ∈ C(t ) ,
we have k∆kF = t∗ . Hence following some algebra, we get H(∆) > 0. Thus we prove the rate
for the Frobenius norm.
b ∈ C. Thus we have k∆
b S c k1,1 ≤ 3k∆
b S k1,1 . And
For the `1,1 -norm, we use the fact that ∆
this leads to
√
√
b 1,1 ≤ 4k∆
b S k1,1 ≤ 4 s∗ k∆
b S kF ≤ 4 s∗ k∆k
b F,
k∆k
H(∆) ≥
where in the second inequality we use the Holder’s inequality and in the√third we use the fact
b S kF ≤ k∆k
b F . Further because k∆k
b F ≤ t∗ , we have k∆k
b 1,1 ≤ 4 s∗ · t∗ . This gives us
that k∆
the desired rate for `1,1 norm.
D
Proof of the De-biased Estimator
In this section, we outline the proof of Theorem 4.2 and Lemma 4.4. We first provide some
technical lemmas of which the proofs are deferred to Section F.
For asymptotics, we use the following standard notations: we write f (n) = O(g(n)) if
$f(n) \le Cg(n)$ for some positive constant C and all sufficiently large n, and $f(n) \asymp g(n)$ means f(n) = O(g(n)) and g(n) = O(f(n)).
The first lemma specifies the desired properties of the adjustment matrices M and P
which will be useful to bound the entries in the Remainder term in (3.4).
Lemma D.1. (Bias Correction Matrices) Under Assumptions (A1)-(A5), for $\lambda' = C'\sqrt{\log d/n}$, where $C'$ is a sufficiently large constant in (3.5) and (3.6), we have
$$\|M\|_\infty = O_P(1) \quad\text{and}\quad \|P\|_\infty = O_P(1),$$
$$\|M\widehat\Sigma - I\|_{\max} = O_P\Big(\sqrt{\frac{\log d}{n}}\Big) \quad\text{and}\quad \|P\widehat\Sigma_G - I\|_{\max} = O_P\Big(\sqrt{\frac{\log d}{n}}\Big).$$
Given the good properties of the bias correction matrices M and P , the next lemma
bounds the Remainder term in (3.4).
Lemma D.2. (Remainder Term) Suppose the conditions in Theorem 4.2 hold. We have
kRemainderkmax = OP (s∗ log d/n).
The next lemma establishes the lower bound for the asymptotic variance in (4.1).
Lemma D.3. (Variance Lower Bound) Under Assumptions of Theorem 4.2, 1/ξjk = OP (1).
The third lemma provides an explicit formula for the 4-th order moments of multivariate
Gaussian distribution. It will be used to show the asymptotic variance of the Leading term.
Lemma D.4. (4-th Order Moments) For a random vector X ∼ N (0, Σ) and four deterministic
matrices A, B, C, D of appropriate sizes, we have
E[(AX)(BX)> (CX)(DX)> ] = (AΣB > )(CΣD> ) + (AΣC > )(BΣD> ) + Tr(BΣC > )(AΣD> ).
Proof. This can be found in Petersen and Pedersen (2012).
The fourth lemma characterizes the tail behavior of a certain random variable. It will
later be used to show the asymptotic normality of the Leading term in (3.3).
Lemma D.5. (Tail Bound) Let X ∼ N (0, Σ∗ ). For any deterministic vectors u, v with
vG1 = 0, kuk1 = O(1) and kvk1 = O(1), define the following random variable
Z = −u> XX > (I + Θ∗ Σ∗G )v + u> (I − Σ∗ Θ∗ )XG2 XG>2 v + u> Σ∗G v − u> Σ∗ v.
We have that Z is a sub-exponential random variable with kZkψ1 = O(1).
Now we are ready to prove Theorem 4.2.
Proof of Theorem 4.2. From (3.3) and (3.4), we have for j ∈ [d1 ] and k ∈ [d1 + 1, d]
√
√
√
∗
u
) = n Leadingjk + n Remainderjk .
− θjk
n · (θbjk
(D.1)
∗
By Lemma
max = OP (s log d/n). Under the scaling condition that
√ D.2, we have kRemainderk
√
∗
s log d/ n = o(1), we have n Remainderjk = oP (1). By (3.3), we have
√
1 Xn
>
>
>
=√
Mj∗ Σ∗G Pk∗
− Mj∗ Σ∗ Pk∗
− Mj∗ Xi Xi> (I + Θ∗ Σ∗G )Pk∗
n i=1
o
>
>
>
)P
+
X
X
+ Mj∗ (I − Σ∗ Θ∗ )(Xi,G1 Xi,G
i,G2 i,G2
k∗ ,
1
n
n Leadingjk
(D.2)
where Xi denotes the i-th sample and Xi,G1 and Xi,G2 are features of Xi corresponding to G1
and G2 . Further due to the block structure of P and the fact that k ∈ [d1 + 1, d], we have for
>
every i ∈ [n], Xi,G
P > = 0. Thus (D.2) can be simplified as follows
1 k∗
√
1 Xn
>
>
>
Mj∗ Σ∗G Pk∗
− Mj∗ Σ∗ Pk∗
− Mj∗ Xi Xi> (I + Θ∗ Σ∗G )Pk∗
=√
n i=1
o
>
>
+ Mj∗ (I − Σ∗ Θ∗ )Xi,G2 Xi,G
P
.
2 k∗
n
n Leadingjk
(D.3)
To further simplify notations, for each i ∈ [n], we define the following random variable
>
>
>
>
Zjk,i = Mj∗ Σ∗G Pk∗
− Mj∗ Σ∗ Pk∗
− Mj∗ Xi Xi> (I + Θ∗ Σ∗G )Pk∗
+ Mj∗ (I − Σ∗ Θ∗ )Xi,G2 Xi,G
P >.
2 k∗
P
2
. Combining (D.1) and (D.3), we have
Denote Sn = ni=1 Zjk,i and s2n = nξjk
√
u
∗
n · (θbjk
− θjk
)/ξjk = Sn /sn + oP (1)/ξjk ,
(D.4)
√
where we also use the fact that n Remainderjk = oP (1). By Lemma D.3, we have oP (1)/ξjk =
oP (1). Thus it remains to show that Sn /sn weakly converges to a standard normal distribution.
We prove this using a conditioning argument.
For fixed n, D1 and D2 are two random variables (matrices) in Rn×d , where each row is a
Gaussian random variable. Define the following set:
r
n×d
0
∗
b − Σ kmax ≤ C log d },
Γ = {X ∈ R
kΣ
n
b 0 is the sample covariance obtained from X and C is a sufficiently large constant.
where Σ
We can see that Γ is a set of data sets of which the sample covariance matrix is close to the
population one.
Given any X ∈ Γ, we have M and P are fixed since they purely depend on X. By
the proof of Lemma D.1, we can see that under X ∈ Γ, kM k∞ = O(1), kP k∞ = O(1),
p
p
kM Σ∗ − Ikmax = O( log d/n) and kP Σ∗G − Ikmax = O( log d/n). Further, Zjk,1 , . . . , Zjk,n
are conditionally i.i.d random variables given D2 = X. Its conditional mean is
>
>
>
>
>
E[Zjk,i |D2 = X] = Mj∗ Σ∗G Pk∗
− Mj∗ Σ∗ Pk∗
− Mj∗ Σ∗ (I + Θ∗ Σ∗G )Pk∗
+ Mj∗
(I − Σ∗ Θ∗ )Σ∗G2 Pk∗
>
>
>
>
>
= Mj∗ Σ∗G Pk∗
− Mj∗ Σ∗ Pk∗
− Mj∗ Σ∗G Pk∗
+ Mj∗
(I − Σ∗ Θ∗ )Σ∗G Pk∗
>
>
>
> ∗ >
= Mj∗ Σ∗G Pk∗
− Mj∗ Σ∗ Pk∗
− Mj∗ Σ∗G Pk∗
+ Mj∗
Σ Pk∗ = 0,
where in the second identity we use the fact that Σ∗ (I + Θ∗ Σ∗G ) = Σ∗G and the fact that
>
>
Σ∗G2 Pk∗
= Σ∗G Pk∗
and in the third identity we use the fact that (I − Σ∗ Θ∗ )Σ∗G = Σ∗ .
For its conditional variance, we calculate it as follows.
>
> 2
Var(Zjk,i |D2 = X) = E[(Mj∗ XX > (I + Θ∗ Σ∗G )Pk∗
− Mj∗ (I − Σ∗ Θ∗ )XG2 XG>2 Pk∗
) ] −J,
|
{z
}
I
>
> 2
where J = (Mj∗ Σ∗G Pk∗
− Mj∗ Σ∗ Pk∗
) . For term I, we can decompose it into three terms.
>
>
I = E[Mj∗ XX > (I + Θ∗ Σ∗G )Pk∗
Pk∗ (I + Σ∗G Θ∗ )XX > Mj∗
]
{z
}
|
∗
∗
+ E[Mj∗ (I − Σ Θ
|
I1
> >
)XG2 XG2 Pk∗ Pk∗ (XG2 XG>2 )(I
>
−2E[Mj∗ XX (I + Θ
|
∗
{z
I2
∗
>
ΣG )Pk∗ Mj∗ (I
{z
I3
>
− Θ∗ Σ∗ )Mj∗
]
}
>
− Σ∗ Θ∗ )XG2 XG>2 Pk∗
].
}
(D.5)
In the following, we will extensively use Lemma D.4 to calculate I1 , I2 and I3 . For I1 , we have
>
>
>
>
I1 =2(Mj∗ Σ∗G Pk∗
)[Pk∗ (I + Σ∗G Θ∗ )Σ∗ Mj∗
] + [Pk∗ (I + Σ∗G Θ∗ )Σ∗G Pk∗
](Mj∗ Σ∗ Mj∗
)
> 2
>
>
=2(Mj∗ Σ∗G Pk∗
) + [Pk∗ (I + Σ∗G Θ∗ )Σ∗G Pk∗
](Mj∗ Σ∗ Mj∗
),
(D.6)
where we use (I + Σ∗G Θ∗ )Σ∗ = Σ∗G in the second equality. For I2 , we get
> 2
>
>
I2 = 2[Mj∗ (I − Σ∗ Θ∗ )Σ∗G Pk∗
] + (Pk∗ Σ∗G Pk∗
)[Mj∗ (I − Σ∗ Θ∗ )Σ∗G2 (I − Θ∗ Σ∗ )Mj∗
]
>
> 2
>
= 2(Mj∗ Σ∗ Pk∗
) + (Pk∗ Σ∗G Pk∗
)[Mj∗ (I − Σ∗ Θ∗ )Σ∗G2 (I − Θ∗ Σ∗ )Mj∗
],
where we use (I − Σ∗ Θ∗ )Σ∗G = Σ∗ in the second equality. And for I3 , we have
>
>
>
>
) + [Mj∗ Σ∗ I2 (I − Θ∗ Σ∗ )Mj∗
](Pk∗ Σ∗G Pk∗
)
I3 = − 2 (Mj∗ Σ∗G Pk∗
)(Mj∗ Σ∗ Pk∗
>
>
+ [Pk∗ Σ∗G I2 (I − Θ∗ Σ∗ )Mj∗
](Mj∗ Σ∗ Pk∗
) ,
(D.7)
(D.8)
where I2 = diag(0, Id2 ). Combing (D.5), (D.6), (D.7) and (D.8), we have
Var(Zjk,i |D2 = X) = I1 + I2 + I3
>
>
> 2
)
](Mj∗ Σ∗ Mj∗
) + (Mj∗ Σ∗G Pk∗
= [Pk∗ (I + Σ∗G Θ∗ )Σ∗G Pk∗
> 2
>
>
− (Mj∗ Σ∗ Pk∗
) − (Pk∗ Σ∗G Pk∗
)[Mj∗ (I − Σ∗ Θ∗ )Σ∗G2 (I − Θ∗ Σ∗ )Mj∗
].
2
Thus we have Var(Zjk,i |D2 = X) = ξjk
. Recall that conditional on D = X ∈ Γ, we have
|Mj∗ |1 ≤ kM k∞ = O(1) and |Pk∗ |1 ≤ kkk∞ = O(1). Thus by Lemma D.5, we have for all
i ∈ [i], kZjk,i |D2 = Xkψ1 = O(1). By the definition of ψ1 -norm, we have
ρ = E(|Zjk,i |3 |D2 = X) = O(1).
2
Also, from the proof of Lemma D.3, we have ξjk
≥ C 00 > 0 for some positive constant C 00 .
Thus by Berry-Esseen Theorem, there exists a universal constant C1 such that for every
X ∈ Γ and for every x ∈ R,
S
C1 ρ
C2
n
P
≤ x|D2 = X − Φ(x) ≤ 3 √ ≤ √ ,
(D.9)
sn
ξjk n
n
where the second inequality comes from the fact that ρ = O(1) and ξjk ≥ C 00 > 0 and C2 is
also a universal constant. Define the event E = {D2 ∈ Γ}. From Lemma E.5, we know that
P(E) ≥ 1 − 2d−1 . Using the law of total probability, we have for all x ∈ R,
S
S
S
n
n
n
≤ x − Φ(x) = P
≤ x|E P(E) + P
≤ x|E c P(E c ) − Φ(x)
P
sn
sn
sn
h S
i
h S
i
n
n
≤ P
≤ x|E − Φ(x) P(E) + P
≤ x|E c − Φ(x) P(E c )
sn
sn
Sn
≤ P(
≤ x|E) − Φ(x) + 2P(E c ),
sn
where in the first inequality we use the triangle inequality and in the second one we use the
fact that |P(E)| ≤ 1 and |P(Sn /sn ≤ x |E c ) − Φ(x)| ≤ 2. To continue, we have
Z h
S
i
Sn
C2
n
P
≤ x|E − Φ(x) =
P
≤ x|E, D2 = X − Φ(x) dPX ≤ √ ,
sn
sn
n
where the inequality comes from (D.9). Combining with P(E c ) ≤ 2d−1 ≤ 2n−1 , we have
S
2
C2
n
P
≤ x − Φ(x) ≤ √ + .
sn
n n
√
u
−
This shows that Sn /sn
N (0, 1). By (D.4) and Slutsky’s theorem, we have n · (θbjk
∗
θjk )/ξjk
N (0, 1). This finishes the proof of Theorem 4.2.
Next we prove the variance estimator in (4.2) is consistent for (4.1).
2
Proof of Lemma 4.4. In light of Lemma D.3, it suffices to show that ξbjk2 − ξjk
= oP (1). Recall
from (4.1) and (4.2) that for j ∈ [d1 ], k ∈ [d1 + 1, d], we have
> 2
> 2
2
>
>
) − (Mj∗ Σ∗ Pk∗
)
ξjk
= (Mj∗ Σ∗ Mj∗
)[Pk∗ (I + Σ∗G Θ∗ )Σ∗G Pk∗
] + (Mj∗ Σ∗G Pk∗
{z
}
{z
} |
|
{z
} |
∗
∗
− [Mj∗ (I − Σ Θ
|
I1
)Σ∗G2 (I
I3
I2
∗
−Θ Σ
{z
∗
>
>
)Mj∗
](Pk∗ Σ∗G Pk∗
).
I4
}
And the estimator ξbjk2 is the empirical version of the above formula
>
>
> 2
> 2
b G Θ)
b Σ
b G Pk∗
b j∗
b k∗
b G Pk∗
] + (Mj∗ Σ
)[Pk∗ (I + Σ
)
) − (Mj∗ ΣP
ξbjk2 = (Mj∗ ΣM
|
|
{z
}
{z
}
|
{z
}
Ib2
Ib1
Ib3
>
>
b G Pk∗
b Θ)
b Σ
b G2 (I − Θ
b Σ)M
b j∗
).
](Pk∗ Σ
− [Mj∗ (I − Σ
|
{z
}
Ib4
2
We quantify the difference between ξjk
and ξbjk2 term by term. For the first term, we have
>
b G Θ)
b Σ
b G − (I + Σ∗ Θ∗ )Σ∗ ]P >
b − Σ∗ )Mj∗
Pk∗ [(I + Σ
|I1 − Ib1 | ≤ Mj∗ (Σ
G
G k∗
∗
>
∗
∗
∗
>
b − Σ )Mj∗ Pk∗ (I + ΣG Θ )ΣG Pk∗ |
+ |Mj∗ (Σ
>
b G Θ)
b Σ
b G − (I + Σ∗ Θ∗ )Σ∗ ]P > .
+ Mj∗ Σ∗ Mj∗
Pk∗ [(I + Σ
G
G k∗
We will upper bound the right hand side using |u> Av| ≤ kuk1 kAkmax kvk1 . By Lemma D.1,
b − Σ∗ kmax ,
we have kMj∗ k1 = OP (1) and kPk∗ k1 = OP (1). Thus it remains to control kΣ
b G Θ)
b Σ
b G − (I + Σ∗ Θ∗ )Σ∗ kmax , k(I + Σ∗ Θ∗ )Σ∗ kmax and kΣ∗ kmax . By Assumption (A2),
k(I + Σ
G
G
G
G
p
∗
b − Σ∗ kmax = OP ( log d/n). Using
we have kΣ kmax = O(1) and by Lemma E.5, we have kΣ
the fact that kABCkmax ≤ kAkmax kBk1,1 kCkmax , we have
k(I + Σ∗G Θ∗ )Σ∗G kmax ≤ kΣ∗G kmax + kΣ∗G Θ∗ Σ∗G kmax ≤ kΣ∗G kmax + kΣ∗G kmax kΘ∗ k1,1 kΣ∗G kmax .
Thus k(I + Σ∗G Θ∗ )Σ∗G kmax = O(1) by Assumptions (A2) and (A3). For its perturbation,
b G Θ)
b Σ
b G − (I + Σ∗G Θ∗ )Σ∗G kmax ≤ kΣ
b G − Σ∗ kmax + kΣ
bGΘ
bΣ
b G − Σ∗ Θ∗ Σ∗ kmax ,
k(I + Σ
G
G
G
p
b G − Σ∗ kmax = OP ( log d/n) by Lemma E.5. And
where we use the triangle inequality. kΣ
G
∗ ∗ ∗
b
b
b
for the remaining term kΣG ΘΣG − ΣG Θ ΣG kmax , we have
bGΘ
bΣ
b G − Σ∗ Θ∗ Σ∗ kmax ≤ k(Σ
b G − Σ∗ )(Θ
b − Θ∗ )(Σ
b G − Σ∗ )kmax
kΣ
G
G
G
G
∗
∗
∗
∗
∗
∗
b G − Σ )(Θ
b − Θ )Σ kmax + k(Σ
b G − Σ )Θ (Σ
b G − Σ )kmax + k(Σ
b G − Σ∗ )Θ∗ Σ∗ kmax
+ k(Σ
G
G
G
G
G
G
∗ b
∗ b
∗
∗ b
∗
∗
∗ ∗ b
∗
+ kΣ (Θ − Θ )(ΣG − Σ )kmax + kΣ (Θ − Θ )Σ kmax + kΣ Θ (ΣG − Σ )kmax .
G
G
G
G
G
G
By extensively using kABCkmax ≤ kAk
pmax kBk1,1 kCkmax , Lemma E.5 and Theorem
p 4.1, we
∗
∗
∗
∗
∗
bGΘ
bΣ
b G − Σ Θ Σ kmax = OP (s log d/n). In all, we have |I1 − Ib1 | = OP (s log d/n).
have kΣ
G
G
The second term can be bounded as follows. By triangle inequality, we have
> 2
>
>
b G − Σ∗G )Pk∗
b G − Σ∗G )Pk∗
|I2 − Ib2 | ≤ |[Mj∗ (Σ
] | + 2|[Mj∗ (Σ
](Mj∗ Σ∗G Pk∗
)|.
p
Again, using |u> Av| ≤ kuk1 kAkmax kvk1 , we have |I2 − Ib2 | = OP ( log d/n), where we use
p
b G − Σ∗ kmax = OP ( log d/n) and kΣ∗ kmax = O(1).
kMj∗ k1 = OP (1), kPk∗ k1 = OP (1), kΣ
G
G
p
p
Similar proofs can show that |I3 − Ib3 | = OP ( log d/n) and |I4 − Ib4 | = OP (s∗ log d/n).
p
2
In all, we show that |ξbjk2 − ξjk
| = OP (s∗ log d/n) = oP (1).
E
Auxiliary Lemmas for Estimation Consistency
In this section, we give detailed proofs of the technical lemmas provided in Section C. We
first review basic definitions of sub-Gaussian and sub-exponential random variables.
Definition E.1. (Sub-Gaussian random variable and sub-Gaussian norm) A random variable
X is sub-Gaussian if there exists a constant K1 such that P(|X| ≥ t) ≤ exp(1 − t2 /K12 ) for
all t ≥ 0. The sub-Gaussian norm of X is defined as kXkψ2 = supp≥1 p−1/2 (E|X|p )1/p .
Definition E.2. (Sub-exponential random variable and sub-exponential norm) A random
variable X is sub-exponential if there exists a constant K2 such that P(|X| ≥ t) ≤
exp(1 − t/K2 ) for all t ≥ 0. The sub-exponential norm of X is defined as kXkψ1 =
supp≥1 p−1 (E|X|p )1/p .
The next two lemmas present the basic properties of the sub-Gaussian and sub-exponential
random variables.
Lemma E.3. (Product of Sub-Gaussians) Assume that X and Y are sub-Gaussian random
variables, then we have kXY kψ1 ≤ 2kXkψ2 kY kψ2 .
Proof. The proof of this can be found in Lemma 6.2 in Javanmard and Montanari (2014).
Lemma E.4. (Bernstein’s Inequality) Let X1 , . . . , Xn be independent centered sub-exponential
random variables, and B = maxi kXi kψ1 . Then for any t > 0, we have
n
1X
h
t2 t i
P
Xi ≥ t ≤ 2 exp − c min
,
n ,
n i=1
B2 B
where c > 0 is an absolute constant.
Proof. See Proposition 5.16 in Vershynin (2010) for a detailed proof of this.
Lemma E.5 gives an application of Lemma E.4 to the estimation of covariance matrices.
b = (1/n) Pn X> X, we have
Lemma E.5. (Sample Covariance Matrix) Recall that Σ
i=1
h
t2 t i
∗
2
b
P kΣ − Σ kmax ≥ t ≤ 2d exp − c min
,
n ,
K2 K
where c is an absolute constant and K is the upper bound specified in Assumption (A2).
b − Σ∗ kmax ≥ t ≤ d2 maxj,k P |Σ
b jk − σ ∗ | ≥ t , where
Proof. By union bound, we have P kΣ
jk
∗
σjk
is the (j, k)-th entry in Σ∗ . For any j ∈ [d] and k ∈ [d], we have
n
1X
∗
∗
b
P |Σjk − σjk | ≥ t = P
Xij Xik − σjk ≥ t ,
n i=1
where Xij denote the j-th variable in the i-th sample. By Assumption (A2), we have
kΣ∗ kmax ≤ K, which means max1≤j≤d Var(Xij ) ≤ K. Thus {Xij }dj=1 are sub-Gaussian
√
random variables with maxj kXij kψ2 ≤ C K, where C is an absolute constant. By Lemma
∗
E.3, we have kXij Xik kψ1 ≤ 2C 2 K and kXij Xik − σjk
kψ1 ≤ 4C 2 K by the centering property
of sub-exponential norm (Vershynin, 2010). By the Bernstein’s inequality in Lemma E.4,
n
1X
h
t2 t i
∗
n ,
(E.1)
P
Xij Xik − σjk
≥ t ≤ 2 exp − c min
,
n i=1
K2 K
b
where c is a constant. Combing with the union bound, we prove the tail bound for Σ.
Lemma E.6 provides perturbation bound for the inverse of a matrix.
Lemma E.6. (Matrix Inverse Perturbation) Let A ∈ Rd×d be the target matrix and E ∈ Rd×d
be the perturbation matrix. If kA−1 k1 kEk1 ≤ 1/2, then we have
k(A + E)−1 − A−1 k1 ≤ 2kA−1 k21 · kEk1 .
Proof. See Fact 2.25 in Ipsen (2009) for a detailed proof of this.
E.1
`max -norm Bound
Proof of Lemma C.1. Recall that the empirical loss function is defined as
b − log |Σ
b G ΘΣ
bG + Σ
b G |.
Ln (Θ) = Tr(ΘΣ)
Thus the gradient can be calculated as follows
b −Σ
b G (ΘΣ
b G + I)−1 .
∇Ln (Θ) = Σ
(E.2)
b G + I)−1 . Thus we have
b −Σ
b G (Θ∗ Σ
Substituting Θ∗ into (E.2), we get ∇Ln (Θ∗ ) = Σ
b −Σ
b G (Θ∗ Σ
b G + I)−1 ](Θ∗ Σ
b G + I)(Θ∗ Σ
b G + I)−1 kmax
k∇Ln (Θ∗ )kmax = k[Σ
b + ΣΘ
b ∗Σ
bG − Σ
b G )(Θ∗ Σ
b G + I)−1 kmax ,
= k(Σ
b G + I)(Θ∗ Σ
b G + I)−1 = I. Further using
where in the first equality we use the identity (Θ∗ Σ
the inequality kABkmax ≤ kAkmax kBk1 , we get
b + ΣΘ
b ∗Σ
bG − Σ
b G kmax · k(Θ∗ Σ
b G + I)−1 k1 .
k∇Ln (Θ∗ )kmax ≤ kΣ
|
{z
} |
{z
}
I1
∗
∗
∗
For the first term, since Σ + Σ Θ
Σ∗G
−
Σ∗G
I2
= 0, we have
b + ΣΘ
b ∗Σ
bG − Σ
b G − (Σ∗ + Σ∗ Θ∗ Σ∗G − Σ∗G )kmax
I1 = kΣ
b − Σ∗ kmax + kΣ
b G − Σ∗ kmax + k(Σ
b − Σ∗ )Θ∗ (Σ
b G − Σ∗ )kmax
≤ kΣ
G
G
∗
∗ ∗
∗ ∗ b
∗
b
+ k(Σ − Σ )Θ Σ kmax + kΣ Θ (ΣG − Σ )kmax
G
(E.3)
G
b − Σ∗ kmax + kΣ
b G − Σ∗ kmax + kΣ
b − Σ∗ kmax kΘ∗ k1,1 kΣ
b G − Σ∗ kmax
≤ kΣ
G
G
∗
∗
∗
∗
∗
b
b G − Σ∗ kmax , (E.4)
+ kΣ − Σ kmax kΘ k1,1 kΣG kmax + kΣ kmax kΘ k1,1 kΣ
G
where in the first inequality we use the triangle inequality and in the second inequality we
use the fact that kABCkmax ≤ kAkmax kBk1,1 kCkmax .
b − Σ∗ kmax ≤
By Lemma E.5, there exists a constant C1 such that for the event E = {kΣ
p
C1 log d/n}, we have P(E) ≥ 1 − 2d−1 . In the following, we are always conditioning on E.
p
b G − Σ∗ kmax ≤ C1 log d/n by the definitions of Σ
b G and
Under the event E, we also have kΣ
G
∗
ΣG . Considering (E.4), we have
r
r
r
log d
log
d
log
d
log d
I1 ≤ 2C1
+ C12 R
+ 2KRC1
≤ C2
,
(E.5)
n
n
n
n
p
where we use Assumptions (A2) and (A3) and the fact that log d/n = o(1).
For the second term, by triangle inequality we have
b G + I)−1 − (Θ∗ Σ∗ + I)−1 k1 + k(Θ∗ Σ∗ + I)−1 k1 .
I2 ≤ k(Θ∗ Σ
G
G
|
{z
} |
{z
}
I2,1
(E.6)
I2,2
I2,2 is a population quantity which we can bound easily as follows.
I2,2 = kI − Θ∗ Σ∗ k1 ≤ kIk1 + kΘ∗ Σ∗ k1 ≤ kIk1 + kΘ∗ k1,1 kΣ∗ kmax ≤ 1 + KR ≤ C3 ,
(E.7)
where in the first inequality we use the triangle inequality, in the second we use the fact that
kABk1 ≤ kAk1,1 kBkmax and in the third we use Assumptions (A2) and (A3) again. I2,1 is
b G − Σ∗ ). We
the perturbation of the matrix inverse. Denote A = Θ∗ Σ∗G + I and E = Θ∗ (Σ
G
first check the condition in Lemma E.6. We have
r
r
log
d
log d
b G − Σ∗ )kmax ≤ C1 C3 M
≤ C4
,
kA−1 k1 kEk1 ≤ k(Θ∗ Σ∗G + I)−1 k1 kΘ∗ k1,1 k(Σ
G
n
n
b G − Σ∗ kmax ≤
where
the inequality kABk1 ≤ kAk1,1 kBkmax and the fact that kΣ
G
p we use (E.7), p
C1 log d/n. Since log d/n = o(1), we have kA−1 k1 kEk1 ≤ 1/2, thus by Lemma E.6,
r
r
log
d
log d
b G − Σ∗ )k1 ≤ 2C1 C 2 R
≤ C5
.
(E.8)
I2,1 ≤ 2k(Θ∗ Σ∗G + I)−1 k21 · kΘ∗ (Σ
G
3
n
n
Combining (E.6), (E.7) and (E.8), we have
r
log d
I2 ≤ C5
+ C3 ≤ 2C3 ,
(E.9)
n
p
where we use the fact that log d/n = o(1). Using (E.3), (E.5) and (E.9), we have with
probability at least 1 − 2d−1 ,
r
r
log
d
log d
k∇Ln (Θ∗ )kmax ≤ 2C2 C3
≤C
.
n
n
Hence we prove the lemma.
E.2
Restricted Strong Convexity
Proof of Lemma C.2. By taking derivatives, we have
b G Θ + I)−1 Σ
b G ] ⊗ [(Σ
b G Θ + I)−1 Σ
b G ] and
∇2 Ln (Θ) = [(Σ
∇2 L(Θ) = [(Σ∗G Θ + I)−1 Σ∗G ] ⊗ [(Σ∗G Θ + I)−1 Σ∗G ].
∗
Since (Σ∗G Θ∗ + I)−1 Σ∗G = Σ∗ , we have ∇2 L(Θ∗ ) = Σ∗ ⊗ Σ∗ . For all ∆ ∈ C(t ) and µ ∈ [0, 1],
e = µ∆, Σ
e = (Σ
b G Θ∗ + I)−1 Σ
b G and Σ
e 0 = [Σ
b G (Θ∗ + ∆)
e + I]−1 Σ
b G , we have
denote ∆
e − ∇2 L(Θ∗ )kmax = kΣ
e0 ⊗ Σ
e 0 − Σ∗ ⊗ Σ∗ kmax
k∇2 Ln (Θ∗ + ∆)
e ⊗Σ
e − Σ∗ ⊗ Σ∗ kmax ,
e0 ⊗ Σ
e0 − Σ
e ⊗ Σk
e max + kΣ
≤ kΣ
|
{z
} |
{z
}
I1
(E.10)
I2
where we use the triangle inequality. For the second term, we have
e − Σ∗ + Σ∗ ) ⊗ (Σ
e − Σ∗ + Σ∗ ) − Σ∗ ⊗ Σ∗ kmax
I2 = k(Σ
e − Σ∗ ) ⊗ (Σ
e − Σ∗ ) + ( Σ
e − Σ∗ ) ⊗ Σ∗ + Σ∗ ⊗ (Σ
e − Σ∗ )kmax ,
= k(Σ
where we use the fact that Kronecker product is bilinear. By triangle inequality and the fact
that kA ⊗ Bkmax ≤ kAkmax kBkmax , we have
e − Σ∗ k2 + 2kΣ∗ kmax · kΣ
e − Σ∗ kmax .
I2 ≤ kΣ
max
(E.11)
e − Σ∗ kmax . For this, we have
Thus we need to control kΣ
e − Σ∗ kmax = k(Σ
b G Θ∗ + I)−1 Σ
b G − (Σ∗ Θ∗ + I)−1 Σ∗ kmax
kΣ
G
G
∗
−1
∗
∗
−1
b G Θ + I) − (Σ Θ + I) ]Σ
b G kmax + k(Σ∗ Θ∗ + I)−1 (Σ
b G − Σ∗ )kmax ,
≤ k[(Σ
G
G
G
where we use the triangle inequality. By kABkmax ≤ kAk∞ kBkmax = kA> k1 kBkmax , we get
e − Σ∗ kmax ≤ k(Θ∗ Σ
b G + I)−1 − (Θ∗ Σ∗ + I)−1 k1 kΣ
b G kmax + k(Θ∗ Σ∗ + I)−1 k1 k(Σ
b G − Σ∗ )kmax .
kΣ
G
G
G
∗
b
These
the same event E = {kΣ−Σ
kmax ≤
p four terms appear in the proof of Lemma C.1. Define−1
C1 log d/n} and by Lemma E.5, we have P(E) ≥ 1 − 2d . The following arguments are all
conditioning on the event E.
From the proof of Lemma C.1, specifically from (E.7) and (E.8), we have k(Θ∗ Σ∗G +I)−1 k1 ≤
p
b G + I)−1 − (Θ∗ Σ∗ + I)−1 k1 ≤ C5 log d/n. Further, we have
C3 and k(Θ∗ Σ
G
r
b G kmax ≤ kΣ
b G − Σ∗G kmax + kΣ∗G kmax ≤ kΣ
b − Σ∗ kmax + kΣ∗ kmax ≤ C1 log d + K ≤ 2K,
kΣ
n
p
b G and Σ∗ and the condition that log d/n = o(1). Combining
where we use the definitions of Σ
G
these together, we have
r
r
r
log
d
log
d
log d
e − Σ∗ kmax ≤ 2C5 K
kΣ
+ C3 C1
≤ C6
.
(E.12)
n
n
n
p
log d/n = o(1), we get
r
r
log
d
log d
log
d
+ 2K
≤ C7
.
I2 ≤ C62 ·
n
n
n
Putting (E.12) into (E.11) and using
(E.13)
For the first term in (E.10), we follow the same strategy as above. We have
e0 − Σ
e + Σ)
e ⊗ (Σ
e0 − Σ
e + Σ)
e −Σ
e ⊗ Σk
e max
I1 = k(Σ
e 0 − Σ)
e ⊗ (Σ
e 0 − Σ)
e + (Σ
e 0 − Σ)
e ⊗Σ
e +Σ
e ⊗ (Σ
e 0 − Σ)k
e max
= k(Σ
e 0 − Σk
e 2 + 2kΣk
e max kΣ
e 0 − Σk
e max .
≤ kΣ
max
(E.14)
e max and kΣ
e 0 − Σk
e max . By (E.12) and triangle inequality, we have
Thus we need to control kΣk
r
e max ≤ kΣ
e − Σ∗ kmax + kΣ∗ kmax ≤ C6 log d + K ≤ 2K.
kΣk
n
Use the fact that kABkmax ≤ kAk∞ kBkmax = kA> k1 kBkmax , we also have
e 0 − Σk
e max = k(Σ
b G (Θ∗ + ∆)
e + I)−1 Σ
b G − (Σ
b G Θ∗ + I)−1 Σ
b G kmax
kΣ
e Σ
b G + I]−1 − (Θ∗ Σ
b G + I)−1 k1 kΣ
b G kmax .
≤ k[(Θ∗ + ∆)
(E.15)
b G kmax ≤ 2K. For the other term, we need to use Lemma E.6. let
We know that kΣ
b G + I and E = ∆
eΣ
b G . We first need to check the condition of Lemma E.6. Since
A = Θ∗ Σ
b G + I)−1 k1 k∆
eΣ
b G k1 ≤ 4C3 Kk∆k
e 1,1 ,
kA−1 k1 kEk1 = k(Θ∗ Σ
where we use the same argument as in the proof of Lemma C.1. Since ∆ ∈ C, we have
k∆S c k1,1 ≤ 3k∆S k1,1 . And this leads to
√
√
k∆k1,1 ≤ 4k∆S k1,1 ≤ 4 s∗ k∆S kF ≤ 4 s∗ k∆kF ,
(E.16)
where in the second inequality we use the Holder’s inequality and in the third we use the fact
that k∆S kF ≤ k∆kF . Further because k∆kF = t∗ and µ ∈ [0, 1], we have
r
log d
∗
e
k∆k1,1 = kµ∆k1,1 ≤ k∆k1,1 = O s
.
n
Therefore we have kA−1 k1 kEk1 = o(1) < 1/2, and
r
log d
∗
−1
∗b
−1
∗b
2
∗
e
b
e
b
.
k[(Θ + ∆)ΣG + I] − (Θ ΣG + I) k1 ≤ 2kI − Θ ΣG k1 · k∆ΣG k1 = O s
n
p
e 0 − Σk
e max = O(s∗ log d/n). Further yy (E.14), we have
As a result,
by
(E.15),
we
have
k
Σ
p
I1 = O(s∗ log d/n). Combining it with (E.13), we have
r
log d
2
∗
2
∗
∗
e
k∇ Ln (Θ + ∆) − ∇ L(Θ )kmax = O s
.
n
2
e
e
For any ∆, we have |vec(∆)> ∇2 Ln (Θ∗ +∆)−∇
L(Θ∗ ) vec(∆)| ≤ k∆k21,1 k∇2 Ln (Θ∗ +∆)−
2
>
2
∗
∇ Ln (Θ )kmax , where we use the fact that
√ |x Ax| ≤ kxk1 kAkmax and kvec(∆)k1 = k∆k1,1 .
Moreover, by (E.16) we have k∆k1,1 ≤ 4 s∗ k∆kF ,. Combining all, we have
r log d
e − ∇2 L(Θ∗ )]vec(∆)| = O s∗2
|vec(∆)> [∇2 Ln (Θ∗ + ∆)
k∆k2F .
n
By triangle inequality, we also have
e
|vec(∆)> ∇2 L(Θ∗ )vec(∆) − vec(∆)> ∇2 Ln (Θ∗ + ∆)vec(∆)|
e
≥ |vec(∆)> ∇2 L(Θ∗ )vec(∆)| − |vec(∆)> ∇2 Ln (Θ∗ + ∆)vec(∆)|
e
= |vec(∆)> (Σ∗ ⊗ Σ∗ )vec(∆)| − |vec(∆)> ∇2 Ln (Θ∗ + ∆)vec(∆)|
e
≥ ρ2min k∆k2F − |vec(∆)> ∇2 Ln (Θ∗ + ∆)vec(∆)|,
where the first equality follows from the fact that ∇2 L(Θ∗ ) = Σ∗ ⊗ Σ∗ and in the second
inequality, we use Assumption (A4) and the fact that the eigenvalues of Kronecker products
of symmetric matrices are the products of the eigenvalues of the two matrices. Hence
r
log d
e
|vec(∆)> ∇2 Ln (Θ∗ + ∆)vec(∆)|
≥ ρ2min − C 0 s∗2
k∆k2F .
n
Thus we finish the proof.
F
Auxiliary Lemmas for the De-biased Estimator
In this section, we give the proof of the technical lemmas in Section D. For notational
simplicity, denote Ω∗G = (Σ∗G )−1 . First, we will state several useful lemmas.
b 0 − I and Ω∗ Σ
b0
The following two lemmas quantifies the size of Ω∗ Σ
G G − I.
b0
Lemma F.1. (Feasibility of Ω∗ ) Recall that
p Σ is the sample covariance matrix of the second
∗ b0
sample D2 , we have kΩ Σ − Ikmax = OP ( log d/n).
b 0 − I = Ω∗ (Σ
b 0 − Σ∗ ), we have kΩ∗ Σ
b 0 − Ikmax ≤ kΩ∗ k1 kΣ
b 0 − Σ∗ kmax , where
Proof. Since Ω∗ Σ
>
∗
we use the fact that kABk
max ≤ kA k1 kBkmax and Ω is symmetric. By Lemma E.5, we
p
b 0 − Σ∗ kmax = OP ( log d/n). Under Assumption (A5) that kΩ∗ k1 = O(1), we have
have kΣ
p
b 0 − Ikmax = OP ( log d/n).
kΩ∗ Σ
b 0 is the diagonal block of Σ
b 0 corresponding to
Lemma F.2. (Feasibility of Ω∗G ) Recall that Σ
G
p
b 0 − Ikmax = OP ( log d/n).
XG1 and XG2 , we have kΩ∗G Σ
G
∗
∗
b 0 − I = Ω∗ (Σ
b 0 − Σ∗ ), we have kΩ∗ Σ
b0
b0
Proof. Since Ω∗G Σ
G
G
G
G
G G − Ikmax ≤ kΩG k1 kΣG − ΣG kmax ,
where we use the fact that kABkmax ≤ kA> k1 kBkmax and Ω∗G is symmetric. By Lemma E.5,
p
b 0 − Σ∗ kmax = OP ( log d/n). For kΩ∗ k1 , we have
we have kΣ
G
G
G
kΩ∗G k1 = kΩ∗ − Θ∗ k1 ≤ kΩ∗ k1 + kΘ∗ k1 ≤ kΩ∗ k1 + kΘ∗ k1,1 ,
where we use the definition of Θ∗ , the triangle inequality and the fact that kAk1 ≤ kAk1,1 .
Further under Assumption (A5) that kΩ∗ k1 = O(1) and Assumption
(A3) that kΘ∗ k1,1 =
p
b 0 − Ikmax = OP ( log d/n).
O(1), we have kΩ∗G k1 = O(1). Thus kΩ∗G Σ
G
F.1
Bias Correction Matrices
p
Proof of Lemma D.1. By Lemma F.1 and Lemma F.2, we can see that for λ0 = C 0 log d/n,
where C 0 is a sufficiently large constant in (3.5) and (3.6), Ω∗ and Ω∗G are feasible solutions
to (3.5) and (3.6) with high probability. Since M and P minimizes the `∞ norm over the
feasible solutions, we can easily get kP k∞ ≤ kΩ∗G k1 and kM k∞ ≤ kΩ∗ k1 . Thus we have
kP k∞ = OP (1) and kM k∞ = OP (1). By triangle inequality, we have
b − Ikmax ≤ kM (Σ
b − Σ∗ )kmax + kM (Σ
b 0 − Σ∗ )kmax + kM Σ
b 0 − Ikmax
kM Σ
b − Σ∗ kmax + kM k∞ kΣ
b 0 − Σ∗ kmax + kM Σ
b 0 − Ikmax ,
≤ kM k∞ kΣ
b
where
p in the last inequality we use kABkmax ≤ kAk∞ kBkmax . pThus kM Σ − Ikmax =
b G − Ikmax = OP ( log d/n).
OP ( log d/n). Similar proofs can show that kP Σ
F.2
Bounding the Remainder Term
Proof of Lemma D.2. Recall from (3.4) that
b − Σ∗ )Θ∗ (Σ
b G − Σ∗ )P > + Θ
b − Θ∗ − M Σ(
b Θ
b − Θ∗ )Σ
bGP > .
Remainder = −M (Σ
G
{z
}
{z
} |
|
I2
I1
It can be decomposed into two terms. We bound each term separately. For I1 , we have
b − Σ∗ )kmax kΘ∗ k1,1 k(Σ
b G − Σ∗G )P > kmax
kI1 kmax ≤ kM (Σ
b − Σ∗ kmax kΘ∗ k1,1 kΣ
b G − Σ∗G kmax kP k∞ ,
≤ kM k∞ kΣ
where in the first inequality we use kABCkmax ≤ kAkmax kBk1,1 kCkmax and in the second
inequality we use kABkmax ≤ kAk∞ kBkmax . Combining Assumption (A3), Lemma D.1 and
Lemma E.5, we have kI1 kmax = OP (log d/n). For I2 , we have
b − Θ∗ − (M Σ
b − I + I)(Θ
b − Θ∗ )(Σ
b G P > − I + I)
I2 = Θ
b − I)(Θ
b − Θ∗ )(Σ
b G P > − I) − (M Σ
b − I)(Θ
b − Θ∗ ) − (Θ
b − Θ∗ )(Σ
b G P > − I).
= −(M Σ
Using the fact that kABCkmax ≤ kAkmax kBk1,1 kCkmax , kABkmax ≤ kAkmax kBk1,1 and
kABkmax ≤ kAk1,1 kBkmax , we have
b − Ikmax kΘ
b − Θ∗ k1,1 kΣ
b G P > − Ikmax + kM Σ
b − Ikmax kΘ
b − Θ∗ k1,1
kI2 kmax ≤ kM Σ
b − Θ∗ k1,1 kΣ
b G P > − Ikmax .
+ kΘ
By Theorem 4.1 and Lemma D.1, we have kI2 kmax = OP (s∗ log d/n). Combining with the
fact that kI1 kmax = OP (log d/n), we have kRemainderkmax = OP (s∗ log d/n).
F.3
Variance Lower Bound
Proof of Lemma D.3. Recall from (4.1), we have
>
>
> 2
2
> 2
] + (Mj∗ Σ∗G Pk∗
)[Pk∗ (I + Σ∗G Θ∗ )Σ∗G Pk∗
) − (Mj∗ Σ∗ Pk∗
= (Mj∗ Σ∗ Mj∗
)
ξjk
{z
}
{z
} |
|
{z
} |
∗
∗
− [Mj∗ (I − Σ Θ
|
I1
)Σ∗G2 (I
I3
I2
∗
−Θ Σ
{z
∗
>
>
).
](Pk∗ Σ∗G Pk∗
)Mj∗
}
I4
(F.1)
2
It is equivalent to showing that with high probability ξjk
is lower bounded by a constant.
To this end, we treat the four terms in (F.1) separately. For I1 , we aim to show that
∗ ∗
I1 ≥ (1 − λ0 )4 /(σjj
σkk ), where λ0 is same constant as in (3.5) and (3.6). For I2 , we use the
lower bound I2 ≥ 0. For the remaining two I3 and I4 , we aim to show that they are oP (1).
We first show that I1 is lower bounded by a constant. Due to the constraint in (3.5), we
b 0 ej | ≤ λ0 , where ej is the j-th natural basis in Rd . By the proof of Lemma D.1,
have |1 − Mj∗ Σ
we also have |1 − Mj∗ Σ∗ ej | = OP (λ0 ). Consider the following convex optimization problem
min
v∈Rd
v > Σ∗ v
subject to 1 − v > Σ∗ ej ≤ λ0 .
(F.2)
>
We can see that with high probability v = Mj∗
is a feasible solution to (F.2). To lower bound
∗
>
Mj∗ Σ Mj∗ , we consider the dual problem of (F.2), which is given by
max c(1 − λ0 ) −
c∈R
c2 ∗
σ
4 jj
subject to c ≥ 0.
(F.3)
∗
The optimal value of (F.3) is (1 − λ0 )2 /σjj
. Thus by weak duality, we have for any feasible v
> ∗
0 2
>
∗
>
of (F.2), v Σ v ≥ (1 − λ ) /σjj . Since v = Mj∗
is a feasible solution, we have Mj∗ Σ∗ Mj∗
≥
0 2
∗
∗ ∗
∗ >
0 2
∗
(1 − λ ) /σjj . Similarly we have Pk∗ (I + ΣG Θ )ΣG Pk∗ ≥ (1 − λ ) /σkk .
> 2
> 2
> >
For I3 , we have (Mj∗ Σ∗ Pk∗
) = [(Mj∗ Σ∗ − e>
j )Pk∗ ] , where we use the fact that ej Pk∗ = 0.
This is the case since P has a block structure and j, k are off diagonal indexes. Further by
0 2
2
2
Cauchy Schwarz inequality, we have I3 ≤ kMj∗ Σ∗ − e>
j k∞ kPk∗ k1 = OP ((λ ) ), where we use
∗
0
the fact that kPk∗ k1 ≤ kP k∞ = OP (1) and kMj∗ Σ∗ − e>
j k∞ ≤ kM Σ − Ikmax = OP (λ ).
∗ >
2
∗
Finally for I4 , we have |Pk∗ ΣG Pk∗ | ≤ kPk∗ k1 kΣG kmax = OP (1) due to Assumption (A2)
that kΣ∗G k = O(1) and the fact that kPk∗ k1 = OP (1). For the other term in I4 , we have
>
>
Mj∗ (I − Σ∗ Θ∗ )Σ∗G2 (I − Θ∗ Σ∗ )Mj∗
= Mj∗ Σ∗ I2 (I − Θ∗ Σ∗ )Mj∗
>
= (Mj∗ − Ω∗j∗ )Σ∗ I2 (I − Θ∗ Σ∗ )Mj∗
.
(F.4)
where in the first equality I2 = diag(0, Id2 ) and in the second equality we use the fact that
Ω∗j∗ Σ∗ I2 = e>
j I2 = 0. This is true because j ∈ [d1 ]. By Cauchy Schwarz inequality, we have
>
>
|(Mj∗ − Ω∗j∗ )Σ∗ I2 (I − Θ∗ Σ∗ )Mj∗
| ≤ k(Mj∗ − Ω∗j∗ )Σ∗ I2 (I − Θ∗ Σ∗ )k∞ kMj∗
k1 .
(F.5)
>
We already know that kMj∗
k1 = OP (1). To tackle the other term in (F.5), we use the fact
that kABkmax ≤ kAkmax kBk1 , then we get
k(Mj∗ − Ω∗j∗ )Σ∗ I2 (I − Θ∗ Σ∗ )k∞ ≤ kM Σ∗ − Ikmax kI2 (I − Θ∗ Σ∗ )k1
≤ kM Σ∗ − Ikmax kI − Θ∗ Σ∗ k1 ,
(F.6)
where in the first inequality we use the fact that k(Mj∗ − Ω∗j∗ )Σ∗ kmax ≤ kM Σ∗ − Ikmax . By the
proof of Lemma D.1, we have kM Σ∗ −Ikmax = OP (λ0 ). And from (E.7), we have kI −Θ∗ Σ∗ k1 =
>
O(1). Combining (F.4), (F.5) and (F.6), we have |Mj∗ (I −Σ∗ Θ∗ )Σ∗G2 (I −Θ∗ Σ∗ )Mj∗
| = OP (λ0 ).
>
Further because |Pk∗ Σ∗G Pk∗
| = OP (1), we have I4 = OP (λ0 ).
Combining all, we have with high probability
2
ξjk
(1 − λ0 )4
(1 − λ0 )4
0 2
0
,
≥ ∗ ∗ − (λ ) − λ ≥
∗ ∗
σjj σkk
2σjj
σkk
p
where we use the fact that λ0 log d/n and log d/n = o(1). In all, we show that with high
2
probability, ξjk
is lower bounded by a constant, thus 1/ξjk = OP (1).
F.4
Tail Bound
Proof of Lemma D.5. Define four random variables Y1 = −u> X, Y2 = X > (I + Θ∗ Σ∗G )v, Y3 =
u> (I − Σ∗ Θ∗ )XG2 and Y4 = XG>2 v. Hence Z = Y1 Y2 − E[Y1 Y2 ] + Y3 Y4 − E[Y3 Y4 ]. From
the definitions, we have that Y1 ∼ N (0, u> Σ∗ u), Y2 ∼ N (0, v > Σ∗G (I + Θ∗ Σ∗G )v), Y3 ∼
N (0, u> (I − Σ∗ Θ∗ )Σ∗2 (I − Θ∗ Σ∗ )u) and Y4 ∼ N (0, v > Σ∗2 v). It is easy to show that they all
have finite variance. Thus for i ∈ [4], kYi kψ2 = O(1). By Lemma E.3, we have kY1 Y2 kψ1 = O(1)
and kY3 Y4 kψ1 = O(1). Thus kZkψ1 = O(1) by triangle inequality and the centering property
of the sub-exponential norm.
| 1 |
On Oscillations in the Social Force Model
Tobias Kretz
PTV Group, D-76131 Karlsruhe, Germany
[email protected]
arXiv:1507.02566v1 [physics.soc-ph] 9 Jul 2015
July 10, 2015
Abstract
The Social Force Model is one of the most prominent models of pedestrian dynamics. Naturally, much discussion and criticism has arisen around it, some of which concerns the existence of oscillations in the
movement of pedestrians. This contribution investigates under which circumstances, parameter choices, and
model variants oscillations do occur and how this can be prevented. It is shown that oscillations can be excluded
if the model parameters fulfill certain relations. The fact that with some parameter choices oscillations occur
and with some not is exploited to verify a specific computer implementation of the model.
1
Introduction
The Social Force Model of pedestrian dynamics is a model that aims at describing the movement of pedestrians
with the predominant purpose of simulating pedestrian movement on computers. The force of pedestrian β on
pedestrian α typically has the form
$$\vec f_{\alpha\beta} = A_\alpha\, w(\cdot)\, e^{-g(\cdot)}\, \hat e_{\alpha\beta} \tag{1}$$
where g() is a function which grows with increasing distance between both pedestrians and can depend on the
velocities of one or both pedestrians. The function w() suppresses forces the more pedestrian β is located outside
the current walking direction of pedestrian α.
The Social Force Model has first been introduced in 1995 [1]. This variant later was called “elliptical specification I”. A second variant (circular specification) has been proposed in 2000 [2] and a third variant (elliptical
specification II) in 2007 [3]. The difference between the three variants lies mainly in the way the velocities of
two interacting pedestrians are considered in the computation of the force between them. The 1995 variant
considers only the velocity of the pedestrian who exerts the force. The 2000 variant does not at all consider
velocities (only the distance between pedestrians) and the 2007 variant considers the relative velocity between
both pedestrians (the pedestrian who exerts the force and the pedestrian on whom the force acts). For the
analytical considerations in this paper mainly the simplest variant from 2000 will be considered. Nevertheless it
will also be discussed how results will change qualitatively if the variants as of 1995 or 2007 are applied.
In this paper, "oscillations" are understood as unrealistic artifacts in the trajectory of a pedestrian approaching another pedestrian, his destination, or a wall. The occurrence of oscillations in this sense has been discussed in a number of contributions [4–7] and it is often claimed that oscillations cannot be avoided in the Social Force Model, but that they can only be made small. In this paper it will be shown that this is not correct, and exact conditions on the values of the model parameters under which oscillations occur will be derived.
In the remainder of the paper first a single pedestrian approaching a destination is investigated, then a
pedestrian approaching another pedestrian who is standing still and finally two pedestrians approaching each
other. In each case the model is reduced to one dimension and the noise term is set to zero. In the first section
on approaching a destination the problem will be shown to be most severe as with certain conditions oscillations
cannot be prevented and continue infinitely long. At the same time – as will be argued – for this case it is not
very relevant, as there are simple, pragmatic solutions. The second and third case yield restrictions to the choice
of parameters which can produce realistic, oscillation-free behavior.
2
A pedestrian approaching a destination
In this section we are interested in and discuss the equations of motion and their solution of a single pedestrian
approaching a destination coordinate (i.e. a point) where he is required to come to a standstill. We assume that
in the beginning the pedestrian is walking with his desired speed v0 straight towards the destination coordinate,
so there is no tangential component of the walking velocity. Then we can describe the pedestrian as walking
from positive x coordinate into negative x direction towards the destination which is at x = 0. Since the 1995,
2000, and 2007 variants of the Social Force Model only differ in the force between pedestrians and not the driving
force term all results of this section hold for all three variants.
We assume for now that the desired velocity is always some externally given v0 and is always pointing from the pedestrian's current position towards x = 0. This assumption is the simplest one and it can be questioned – as we will do below. With it, it is obvious that there will be oscillations around x = 0. Our intention here is to
investigate the quantitative details of these oscillations.
In general the equation of motion for this pedestrian reads
$$\ddot x(t) = \frac{-\operatorname{sign}(x(t))\,v_0 - \dot x(t)}{\tau} \tag{2}$$
where τ is an external parameter which typically has values between 0.1 and 1.0 seconds.
We require the pedestrian not only to reach x = 0, but also to stand still there as arrival condition. Because the pedestrian has a speed larger than 0 (or, considering walking direction: smaller than 0) he will walk over the destination and end up on the left (negative) side of the x axis. There the desired velocity points into the direction of positive x coordinates. So we have for the time following the moment when the pedestrian is at x = 0:
$$\ddot x(t) = \frac{v_0 - \dot x(t)}{\tau} \tag{3}$$
This is solved by
$$\dot x(t) = v_0 - a e^{-t/\tau} \tag{4}$$
$$x(t) = b + v_0 t + a\tau e^{-t/\tau} \tag{5}$$
where a and b are integration constants which need to be determined by initial conditions.
We choose t = 0 at the moment when the pedestrian is at x = 0. Then ẋ(t = 0) = −v0 . However, for later
usage we want to set here more general ẋ(t = 0) = −u and remember that for our particular case u = v0 . With
the two conditions x(0) = 0 and ẋ(0) = −u we can determine the values of the integration constants:
$$a = v_0 + u \tag{6}$$
$$b = -(v_0+u)\tau \tag{7}$$
So we have
$$\dot x(t) = v_0 - (v_0+u)\,e^{-t/\tau} \tag{8}$$
$$x(t) = v_0 t - (v_0+u)\tau\left(1 - e^{-t/\tau}\right) \tag{9}$$
Now we can compute the time t_turn at which the pedestrian stops (and turns around), i.e. ẋ(t_turn) = 0, and the position x(t_turn) at which this happens:
$$t_{\mathrm{turn}} = \tau\ln\left(\frac{v_0+u}{v_0}\right) \tag{10}$$
$$x(t_{\mathrm{turn}}) = \tau v_0\left(\ln\left(1+\frac{u}{v_0}\right) - \frac{u}{v_0}\right) \tag{11}$$
In the initial case, when u = v0 this simplifies to tturn = τ ln(2) and x(tturn ) = τ v0 (ln(2) − 1).
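As a quick numerical cross-check of (10) and (11) – a minimal sketch, not part of the model itself – one can integrate equation (3) with a simple explicit Euler scheme and compare the turning time and position with the closed-form expressions (the values τ = 0.4 s and v0 = 1.5 m/s are only illustrative):

```python
import numpy as np

v0, tau, dt = 1.5, 0.4, 1e-5     # illustrative values: v0 in m/s, tau in s, Euler step in s
x, v, t = 0.0, -v0, 0.0          # the pedestrian has just passed the destination at x = 0 with u = v0
while v < 0.0:                   # integrate eq. (3) until the turning point dx/dt = 0
    a = (v0 - v) / tau
    v += a * dt
    x += v * dt
    t += dt
print(t, tau * np.log(2.0))                 # t_turn of eq. (10) for u = v0
print(x, tau * v0 * (np.log(2.0) - 1.0))    # x(t_turn) of eq. (11) for u = v0
```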
This is only half the way to go. The actual question is how fast a pedestrian returns to x = 0 when he has
passed it before with speed u and how long such a loop takes.
Now we choose t = 0 for the moment when the pedestrian is standing still at x(tturn ). The time treturned
at which the pedestrian returns to x = 0 therefore has to be understood as time after tturn and not as absolute
point in time.
We begin again by determining the value of the integration constants. From equation (4) and ẋ(0) = 0 follows a = v_0. With equations (5) and (11) we have
$$x(0) = b + v_0\tau = v_0\tau\left(\ln\left(1+\frac{u}{v_0}\right) - \frac{u}{v_0}\right) \tag{12}$$
$$b = v_0\tau\left(\ln\left(1+\frac{u}{v_0}\right) - \left(1+\frac{u}{v_0}\right)\right) \tag{13}$$
$$x(t) = v_0\tau\left(\ln\left(1+\frac{u}{v_0}\right) - \left(1+\frac{u}{v_0}\right) + \frac{t}{\tau} + e^{-t/\tau}\right) \tag{14}$$
Because x(t_returned) = 0 we have the following equation for t_returned:
$$0 = v_0\tau\left(\ln\left(1+\frac{u}{v_0}\right) - \left(1+\frac{u}{v_0}\right) + \frac{t_{\mathrm{returned}}}{\tau} + e^{-t_{\mathrm{returned}}/\tau}\right) \tag{15}$$
$$0 = \phi_{\mathrm{returned}} - \alpha + \ln(\alpha) + e^{-\phi_{\mathrm{returned}}} \tag{16}$$
with the substitutions φreturned = treturned /τ and α = 1 + u/v0 . We do another substitution ϕ = φreturned −
α + ln(α), transforming the last equation to
$$0 = \varphi + e^{-(\varphi+\alpha-\ln(\alpha))} \tag{17}$$
$$-\alpha e^{-\alpha} = \varphi\, e^{\varphi} \tag{18}$$
Obviously one solution is ϕ = −α, but the general solution is
$$\varphi = W\!\left(-\alpha e^{-\alpha}\right) \tag{19}$$
where W () is the Lambert W function [8–10], which by definition is the inverse relation of f (y) = yey .
Resubstituting ϕ and φreturned (for reasons of convenience we do not resubstitute α) we have:
$$\frac{t_{\mathrm{returned}}}{\tau} = W\!\left(-\alpha e^{-\alpha}\right) + \alpha - \ln(\alpha) \tag{20}$$
In the interval y ∈ [−1/e..0] W (y) has two branches, denoted W−1 (y) and W0 (y); and with 1 ≤ α ≤ 2 for
−αe−α we are within this interval where W (y) has two branches. It holds
$$W_{-1}\!\left(-\alpha e^{-\alpha}\right) = -\alpha \tag{21}$$
In this case it would be treturned = −tturn . Thus the W−1 branch gives the backwards in time solution which
we are not interested in here (because we already have computed it above). Therefore we have to continue with
the W0 branch for which
$$W_0\!\left(-\alpha e^{-\alpha}\right) \ne -\alpha \tag{22}$$
although W is the inverse relation of f (y) = yey . In the remainder we write W for W0 .
In case u = v_0, i.e. α = 2, the numerical value of the solution is $t_{\mathrm{returned}}/\tau = W(-2/e^2) + 2 - \ln(2) = 0.90047$.
Using equation (20) in equation (4) we get the speed ẋ(treturned ) at the time when the pedestrian returns to
x = 0 in dependence from u, which is the speed at the time when the pedestrian last was at x = 0:
$$\dot x(t_{\mathrm{returned}}) = v_0\left(1 + W\!\left(-\alpha e^{-\alpha}\right)\right) \tag{23}$$
where we have used the defining equation of the W function y = W (y)eW (y) .
From here on the properties have an index that indicates the recurrence to the origin. For example t_0 is the (absolute) time when the pedestrian is for the first time at the origin, t_1 denotes the (absolute) time when the pedestrian returns for the first time to the origin and so on. In the case of properties that do not describe a passage of the origin, but the loop duration and turning position, the index enumerates the loops. This means that for these two properties there is no index 0 and that t_n = t_{n−1} + ∆t_n.
Equation (23) means that for the (n+1)-th passage of the origin α_{n+1} depends on α_n like
$$\alpha_{n+1} = 2 + W\!\left(-\alpha_n e^{-\alpha_n}\right) \tag{24}$$
The time ∆t_{n+1} it takes for the (n+1)-th loop depends on α_n like
$$\frac{\Delta t_{n+1}}{\tau} = \alpha_n + W\!\left(-\alpha_n e^{-\alpha_n}\right). \tag{25}$$
Rewriting equation (11) in terms of α and writing |x_n| for the turn-around distance of the n-th loop we have
$$\frac{|x_{n+1}|}{\tau v_0} = \alpha_n - 1 - \ln(\alpha_n) \tag{26}$$
Table 1 shows the results from the first 30 passages if the process begins at t = 0 and with speed v0 . Figures
(1) to (3) visualize these data.
As typical values are τ = 0.4 s and v0 = 1.5 m/s it takes about 7 passages before the amplitude |xn | gets
smaller than 1 cm. This takes about 2.1 seconds. The 7th oscillation has a period of 0.15 seconds. To resolve
this a computer implementation would have to have a time step which is at the very maximum half that value
0.075 seconds or 14 simulation steps per second.
Because αn+1 = 1 only if W (−αn e−αn ) = −1 which in turn is only the case if αn = 1 there will be infinitely
many oscillations and in A a proof is given that the sum of the ∆tn diverges, i.e. the pedestrian will oscillate
infinitely long around the destination coordinate.
At first sight this is an unsatisfactory result with regard to the Social Force Model, as such oscillations are
unrealistic. However, we have to ask how realistic our assumptions and initial conditions were. This concerns in
particular the desired velocity ~v0 . We demanded in the beginning that the pedestrian should come to a stop at
x = 0. Nevertheless we have set the desired velocity all of the time to one particular value |~v0 | > 0. This is too
simplistic. Real persons plan ahead and adjust their desired speed: if they desire to stop at a certain position
3
Passage/Loop (n) | |x_n|/(τ v_0) | ∆t_n/τ | t_n/τ | u_n/v_0 | α_n
0  | –     | –     | 0.000 | 1.000 | 2.000
1  | 0.307 | 1.594 | 1.594 | 0.594 | 1.594
2  | 0.128 | 1.018 | 2.611 | 0.424 | 1.424
3  | 0.071 | 0.754 | 3.365 | 0.330 | 1.330
4  | 0.045 | 0.600 | 3.966 | 0.270 | 1.270
5  | 0.031 | 0.499 | 4.465 | 0.229 | 1.229
6  | 0.023 | 0.428 | 4.893 | 0.199 | 1.199
7  | 0.017 | 0.374 | 5.266 | 0.175 | 1.175
8  | 0.014 | 0.332 | 5.599 | 0.157 | 1.157
9  | 0.011 | 0.299 | 5.898 | 0.142 | 1.142
14 | 0.005 | 0.199 | 7.062 | 0.096 | 1.096
19 | 0.003 | 0.150 | 7.898 | 0.073 | 1.073
29 | 0.001 | 0.100 | 9.088 | 0.049 | 1.049
Table 1: Numerical results for the first 30 passages resp. oscillations. The value for αn is understood before an
oscillation, while ∆tn /τ is the period of that oscillation, and the total time tn /τ after that same oscillation. All
values are dimensionless. The numerical values of the Lambert W function have been computed using Wolfram
Alpha [11].
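The recursion (24)–(26) can also be evaluated directly; the following minimal Python sketch (using SciPy's lambertw for the principal branch W_0; not part of the original computation) reproduces the dimensionless numbers of Table 1:

```python
import numpy as np
from scipy.special import lambertw

alpha, t = 2.0, 0.0                         # alpha_0 = 1 + u_0/v_0 with u_0 = v_0; t_0/tau = 0
for n in range(1, 10):
    w = lambertw(-alpha * np.exp(-alpha), 0).real   # principal branch W_0
    x_amp = alpha - 1.0 - np.log(alpha)             # |x_n|/(tau*v0), eq. (26)
    dt = alpha + w                                  # Delta t_n/tau, eq. (25)
    alpha = 2.0 + w                                 # alpha_n, eq. (24)
    t += dt
    print(n, round(x_amp, 3), round(dt, 3), round(t, 3), round(alpha - 1.0, 3), round(alpha, 3))
```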
Figure 1: Left: Value of αn at the beginning of oscillation n. Right: Maximum amplitude |xn |/(v0 τ ) of oscillation
n.
Figure 2: Left: Period ∆t_n/τ of oscillation n. Right: Total time t_n/τ at the n-th passage of the origin (i.e. time after the n-th oscillation; t = 0 is when the pedestrian reaches the origin for the first time and with a speed v_0).
4
Figure 3: This shows the evolution of the amplitude xn /(v0 τ ) over time in [s].
they adapt their desired velocity beforehand to just achieve that. So we have to ask how we have to modify v0
dynamically to account for that.
More precisely we ask: at which distance db does a pedestrian have to start braking in the Social Force
Model, if he sets vb as desired speed opposing his current speed u? And how long does it take before he comes
to a stand still?
For this the equation of motion reads
$$\ddot x(t) = \frac{v_b - \dot x(t)}{\tau} \tag{27}$$
with the following (initial) conditions:
$$\dot x(t=0) = -u \tag{28}$$
$$x(t=0) = d \tag{29}$$
$$\dot x(t_0) = 0 \tag{30}$$
$$x(t_0) = 0 \tag{31}$$
where we have defined the moment where the pedestrian starts to brake as t = 0 and the moment where he stops
at x = 0 as t0 ; d is the distance before stand still at which braking has to begin which we are looking for.
Solving this results in
$$t_0 = \tau\ln\left(1+\frac{u}{v_b}\right) \tag{32}$$
$$d_b = \tau v_b\left(\frac{u}{v_b} - \ln\left(1+\frac{u}{v_b}\right)\right) \tag{33}$$
This gives finite and positive t0 and db for all vb > 0. Thus, it is not sufficient to set the desired speed to zero
for a pedestrian who wants to stop as this would imply an infinitely large braking distance and an infinitely long
time to come to a stand still. If we assume that a pedestrian for braking can at maximum have a desired speed
vb = v0 the minimum time for braking is t0 = τ ln(2) and the minimum braking distance is db = τ v0 (1 − ln(2))
to come to standstill from an initial speed u = v0 .
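For completeness, equations (32) and (33) are easy to evaluate numerically; the sketch below is only illustrative (the function name and the chosen numbers are not from the paper):

```python
import numpy as np

def braking(u, v_b, tau):
    """Braking time and braking distance according to eqs. (32) and (33)."""
    t0 = tau * np.log(1.0 + u / v_b)
    d_b = tau * v_b * (u / v_b - np.log(1.0 + u / v_b))
    return t0, d_b

# example: u = v_b = v0 = 1.5 m/s and tau = 0.4 s gives t0 = tau*ln(2), d_b = tau*v0*(1 - ln(2))
print(braking(1.5, 1.5, 0.4))
```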
A different and pragmatic solution of the problem is that in real-world planning applications usually there is
no requirement set for the speed at which a pedestrian arrives at a destination. It is furthermore usual that as
destination not a point is given but that a cross section has to be crossed or the pedestrian has to move into a
specific area. If no restriction is given for the speed at arrival the problem disappears.
A third objection is that a real person also cannot comply with the request to stand still with his or her center exactly at some given coordinate. Real people always sway a little bit when they (try to) stand still [12]. Why then should one require this from a simulated pedestrian? The oscillations around the destination point which were found here surely do not match the swaying of real people standing "still" in their functional form, but their amplitude quickly falls below that of real people. When models are not required to reproduce the real swaying, a limit to the required precision is implicitly set, and any movement of that order of magnitude or below should be acceptable.
3
A pedestrian approaching a standing pedestrian
3.1
Theory
Now we want to investigate the situation when some pedestrian M is approaching another pedestrian S who is standing at a fixed position – again in one dimension; for an empirical investigation see for example [13].
Since the pedestrian who is exerting the force is not moving, the 1995 variant (elliptical specification I) of the
Social Force Model will produce the same result as the 2000 variant (circular specification) which is the one we
are investigating, but different from the 2007 variant (elliptical specification II) where the speed of the pedestrian
on whom the force acts modifies the strength of the force.
Assume pedestrian S is standing still at x = 0, facing into positive x direction and pedestrian M is approaching
from there, i.e. pedestrian M has positive x coordinate and a speed directed towards negative x, i.e. ẋ(t = 0) < 0
as well as the desired speed v0 > 0 which is constant in time. It is assumed that pedestrian M at time t=0 is far
away from pedestrian S and walking with his desired walking speed ẋ(t = 0) = −v0 .
Then the equation of motion for pedestrian M in the 1995 and 2000 variants of the Social Force Model is
$$\ddot x(t) = \frac{-v_0 - \dot x(t)}{\tau} + A\, e^{-\frac{x(t)-2R}{B}} \tag{34}$$
with R as radius of a pedestrian (we assume here for simplicity that all pedestrians have the same radius) and
τ , A, and B parameters of the Social Force Model.
Pedestrian M will come to a standstill at distance
$$d_s = B\ln\left(\frac{A\tau}{v_0}\right) + 2R \tag{35}$$
As pedestrians should not bump one into another ds must be larger than 2R. Therefore we have a first condition
for the parameters of the Social Force Model to yield realistic results:
Aτ > v0
(36)
We assume that oscillations only occur generally if they also occur close to equilibrium. Thus, if there are
no oscillations when pedestrian M is already close to x = ds then there are no oscillations at all. Expanding the
exponential function in equation (34) into a Taylor series around x = ds gives
$$\ddot\xi(t) + \frac{\dot\xi(t)}{\tau} + \frac{v_0}{B\tau}\,\xi(t) \approx 0, \tag{37}$$
where ξ(t) = x(t) − d_s denotes the deviation from the equilibrium distance.
Equation (37) is the equation of the damped harmonic oscillator, with the known three solutions: under damped (pedestrian M approaches the equilibrium point oscillating around it), critically damped (pedestrian M approaches the equilibrium point as quickly as possible but without oscillations) and over damped (pedestrian M approaches the equilibrium point slower than would be possible without having oscillations). Which of the three cases is realized depends on the relation of the parameters:
$$4\,\frac{v_0\tau}{B} > 1 \;\leftrightarrow\; \text{under damped} \tag{38}$$
$$4\,\frac{v_0\tau}{B} = 1 \;\leftrightarrow\; \text{critically damped} \tag{39}$$
$$4\,\frac{v_0\tau}{B} < 1 \;\leftrightarrow\; \text{over damped} \tag{40}$$
Thus in addition to equation (36) with
4v0 τ ≤ B
(41)
we have a second condition for the parameters of the 1995 and 2000 variants of the Social Force Model to have
a chance to yield realistic results.
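Conditions (36) and (38)–(41) translate directly into a small parameter check; the following sketch is an illustration only (the example uses parameter values that appear in Table 2):

```python
def damping_regime(v0, tau, B):
    """Classify the linearized dynamics near x = d_s according to eqs. (38)-(40)."""
    r = 4.0 * v0 * tau / B
    if r > 1.0:
        return "under damped (oscillations)"
    if r == 1.0:
        return "critically damped"
    return "over damped"

def parameters_realistic(A, B, tau, v0):
    """Both conditions of this section: eq. (36) A*tau > v0 and eq. (41) 4*v0*tau <= B."""
    return A * tau > v0 and 4.0 * v0 * tau <= B

print(damping_regime(0.8, 0.5, 0.08))              # -> under damped
print(parameters_realistic(26.67, 0.08, 0.5, 0.8)) # -> False, eq. (41) is violated
```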
3.2
Implications for parameters found in literature
In the literature one can find values for some or all of the parameters A, B, τ , and v0 gained from empirical
observations or laboratory experiments. If all four parameters are given, one can use relations (36) and (41) to
test whether realistic results for the scenario discussed here can be expected. If values are not given for all four parameters, those two equations allow one to set limits. Table 2 shows a compilation of such data and how they relate to relations (36) and (41). It can be seen that where all four parameters were taken from experiment
relation (36) easily holds while relation (41) is clearly violated. Whereas if parameters A and B (and τ ) have
been calibrated or given in a work then relations (36) and (41) require pedestrians to walk quite slowly, while if
v0 and τ are calibrated or given it needs relatively large values for A and B that relations (36) and (41) hold.
This is an indication that with the circular specification (alone) the parameter space that yields realistic results
is small.
6
Source
[3]
A in [m/s2 ]
0.42 ± 0.26
B in [m]
1.65 ± 1.01
[2]
[14]
[15]
[15]
[16]1
26.67†
> 2.25
0.16 ± 0.01
0.45 ± 0.41
12.0 ± 0.2
0.08
≥ 3.34
4.16 ± 0.65
13.5 ± 18.4
0.16 ± 0.08
τ in [s]
τ > v0 /Amax
τ ≤ Bmax /v0 /4
0.5
0.61
0.5 ‡
0.5 ‡
1.09 ± 0.35
v0 in [m/s]
< 0.67
eq. (36)
eq. (41)
0.8
1.37
< 0.087
< 0.43
1.34 ± 0.21
OK
violated
OK
violated
Table 2: Bold face marks value from literature; normal face is computed with equations (36) and (41). Where one
parameter was calculated from another one where a range is given the range was utilized such that the parameter
space for the derived parameter is as large as possible. † : from 2000N/75kg. ‡ : Assumption and input for
calibration. 1 : The simplified Nomad model discussed in said publication is identical to the circular specification
of the Social Force Model, except that the radius of pedestrians is not stated explicitly in the exponent. If – what
is not stated explicitly in said contribution – “distance” for the Nomad model means body surface to body surface
the parameter meanings are identical, while if it means center to center distance the value of parameter A would
change by a factor e−2R/B and even equation (36) could be violated.
3.3
Elliptical specification II in comparison
For elliptical specification II a rigid analysis is much more difficult than for the circular specification since the
elliptical specification II does not only depend on the distance of the two pedestrians but also on their relative
velocity. Still one can estimate the effect of the added consideration of relative velocity on the occurrence
of oscillations. In the elliptical specification II of the Social Force Model [3] the force from pedestrian β on
pedestrian α is defined as
$$\vec f_\alpha(\vec r_\alpha, \vec r_\beta, \dot{\vec r}_\alpha, \dot{\vec r}_\beta) = w(\theta_{\alpha\beta}(\vec d_{\alpha\beta}, \dot{\vec r}_\alpha))\,\vec g(\vec d_{\alpha\beta}, \dot{\vec d}_{\alpha\beta}) \tag{42}$$
$$w(\theta_{\alpha\beta}(\vec d_{\alpha\beta}, \dot{\vec r}_\alpha)) = \lambda_\alpha + (1-\lambda_\alpha)\,\frac{1+\cos(\theta_{\alpha\beta}(\vec d_{\alpha\beta}, \dot{\vec r}_\alpha))}{2} \tag{43}$$
$$\cos(\theta_{\alpha\beta}(\vec d_{\alpha\beta}, \dot{\vec r}_\alpha)) = -\,\frac{\dot{\vec r}_\alpha\cdot\vec d_{\alpha\beta}}{|\dot{\vec r}_\alpha|\,|\vec d_{\alpha\beta}|} \tag{44}$$
$$\vec d_{\alpha\beta} = \vec r_\alpha - \vec r_\beta \tag{45}$$
$$\vec g(\vec d_{\alpha\beta}, \dot{\vec d}_{\alpha\beta}) = -\vec\nabla_{\vec d_{\alpha\beta}} V_{\alpha\beta}(\vec d_{\alpha\beta}, \dot{\vec d}_{\alpha\beta}) \tag{46}$$
$$V_{\alpha\beta}(\vec d_{\alpha\beta}, \dot{\vec d}_{\alpha\beta}) = A_\alpha B_\alpha\, e^{-\frac{b_{\alpha\beta}(\vec d_{\alpha\beta}, \dot{\vec d}_{\alpha\beta})}{B_\alpha}} \tag{47}$$
$$b_{\alpha\beta}(\vec d_{\alpha\beta}, \dot{\vec d}_{\alpha\beta}) = \frac12\sqrt{\left(|\vec d_{\alpha\beta}| + |\vec d_{\alpha\beta} + \dot{\vec d}_{\alpha\beta}\Delta t_\alpha|\right)^2 - (\dot{\vec d}_{\alpha\beta}\Delta t_\alpha)^2} \tag{48}$$
where ~r gives the position of pedestrians and A, B, λ, and ∆t are model parameters. Pedestrians’ positions and
derived properties are time dependent, other properties are constant. In one dimension and with pedestrians
facing each other this simplifies to
$$\cos(\theta_{\alpha\beta}) = 1 \tag{49}$$
$$w(\theta_{\alpha\beta}) = 1 \tag{50}$$
$$d_{\alpha\beta} = x_\alpha - x_\beta \quad\text{(w.l.o.g. assumed to be} > 0) \tag{51}$$
$$f_\alpha(x_\alpha, x_\beta, \dot x_\alpha, \dot x_\beta) = g(d_{\alpha\beta}, \dot d_{\alpha\beta}) = -\frac{d}{d\,d_{\alpha\beta}}\,V_{\alpha\beta}(d_{\alpha\beta}, \dot d_{\alpha\beta}) \tag{52}$$
$$V_{\alpha\beta}(d_{\alpha\beta}, \dot d_{\alpha\beta}) = A_\alpha B_\alpha\, e^{-\frac{b_{\alpha\beta}(d_{\alpha\beta}, \dot d_{\alpha\beta})}{B_\alpha}} \tag{53}$$
$$b_{\alpha\beta}(d_{\alpha\beta}, \dot d_{\alpha\beta}) = \frac12\sqrt{\left(d_{\alpha\beta} + |d_{\alpha\beta} + \dot d_{\alpha\beta}\Delta t_\alpha|\right)^2 - (\dot d_{\alpha\beta}\Delta t_\alpha)^2} \tag{54}$$
$$= 0 \quad\text{for } (d_{\alpha\beta} + \dot d_{\alpha\beta}\Delta t_\alpha) \le 0 \tag{55}$$
$$= \sqrt{d_{\alpha\beta}^2 + d_{\alpha\beta}\dot d_{\alpha\beta}\Delta t_\alpha} \quad\text{otherwise} \tag{56}$$
Therefore the force is either zero if $(d_{\alpha\beta} + \dot d_{\alpha\beta}\Delta t_\alpha) \le 0$ or it is
$$f_\alpha(x_\alpha, x_\beta, \dot x_\alpha, \dot x_\beta) = g(d_{\alpha\beta}, \dot d_{\alpha\beta}) \tag{57}$$
$$= A_\alpha\,\frac{2 d_{\alpha\beta} + \dot d_{\alpha\beta}\Delta t_\alpha}{2\sqrt{d_{\alpha\beta}^2 + d_{\alpha\beta}\dot d_{\alpha\beta}\Delta t_\alpha}}\; e^{-\frac{\sqrt{d_{\alpha\beta}^2 + d_{\alpha\beta}\dot d_{\alpha\beta}\Delta t_\alpha}}{B_\alpha}} \tag{58}$$
$$= A_\alpha\,\frac{\bar d_a}{\bar d_g}\; e^{-\frac{\bar d_g}{B_\alpha}} \tag{59}$$
with d¯a being the arithmetic and d¯g the geometric mean of current and projected distance (dαβ resp. dαβ +
d˙αβ ∆tα ). It can directly be seen that for large distances equation (58) reduces approximately to the circular
specification from [2] which depends only on distance.
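To see this comparison numerically, here is a minimal sketch (illustrative only; the 2R offset of eq. (34) is omitted, so d plays the role of the distance that enters the exponent):

```python
import numpy as np

def force_circular(d, A, B):
    """Circular specification [2]: depends on the distance only (2R offset omitted here)."""
    return A * np.exp(-d / B)

def force_elliptical_II(d, d_dot, A, B, dt):
    """Eqs. (58)/(59): arithmetic and geometric mean of current and projected distance."""
    if d + d_dot * dt <= 0.0:
        return 0.0
    d_a = d + 0.5 * d_dot * dt              # arithmetic mean of d and d + d_dot*dt
    d_g = np.sqrt(d * (d + d_dot * dt))     # geometric mean
    return A * (d_a / d_g) * np.exp(-d_g / B)

# approaching pedestrian (d_dot < 0): the elliptical II force exceeds the circular one
print(force_circular(1.0, 2.0, 0.5), force_elliptical_II(1.0, -1.5, 2.0, 0.5, 0.5))
```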
For a pedestrian α approaching another pedestrian β from positive x values d is positive and d˙ is negative.
Therefore and because of the inequality of arithmetic and geometric means it holds dαβ + d˙αβ ∆tα < d¯g < d¯a <
$d_{\alpha\beta}$. Then obviously as long as $d > -\dot d\,\Delta t$
the force as in equation (58) is larger compared to the circular
specification and consequently pedestrian α will not overshoot for certain parameter choices where he does so
with the circular specification.
In case pedestrian α overshoots over the equilibrium point and turns around it would be desirable that the
force is not larger but smaller than with the circular specification. However, since in this case d˙ becomes positive
it holds that dαβ < d¯g < d¯a < dαβ + d˙αβ ∆tα and therefore in equation (59) the exponential factor gives a smaller
value than with the circular specification, yet the fraction factor has a value > 1 and may for large values of
parameter B outweigh the damping effect from the modification in the exponential function. There are three
indications that also in this phase of pedestrian α’s movement and in general oscillations are suppressed with
elliptical specification II: first, for large values of parameter B already in the circular specification there are no
oscillations as equation (41) tells us. This means that where in an isolated view on the “way back” the problem
is most pronounced the system may actually not even evolve to the point that it exists.
Second, one can expand equation (58) in a series for small values of $\dot d_{\alpha\beta}\Delta t_\alpha/d_{\alpha\beta}$:
$$f_\alpha(x_\alpha, x_\beta, \dot x_\alpha, \dot x_\beta) \approx A_\alpha\, e^{-\frac{d_{\alpha\beta}}{B_\alpha}}\left(1 - \frac{d_{\alpha\beta}\,\dot d_{\alpha\beta}\,\Delta t_\alpha}{B_\alpha\, d_{\alpha\beta}}\right) \tag{60}$$
In the moment pedestrian α turns around it is d˙αβ = 0 and therefore circular and elliptical specification II
yield an identical force. Starting to move backward with now positive d˙αβ equation (60) tells us that elliptical
specification II yields smaller forces with the difference to circular specification leveled for Bα → ∞.
Third, with the – admittedly arguable – additional assumptions that
$$\dot d_{\alpha\beta}\,\Delta t_\alpha \ll d_{\alpha\beta} \tag{61}$$
$$\dot d_{\alpha\beta}\,\Delta t_\alpha \ll B_\alpha \tag{62}$$
one cannot only expand for small ξ(t), but also reduce the complexity of equation (58) with regard to d˙αβ .
Various approximate forms of that equation can be derived in this way of which one is analytically solvable and
contains parameter ∆tα (omitting the indices α):
$$\ddot\xi(t) + \frac{1}{\tau}\left(1+\frac{v_0\Delta t}{B}\right)\dot\xi(t) + \frac{v_0}{B\tau}\,\xi(t) = 0 \tag{63}$$
where in comparison to equation (37) just the factor before ξ̇(t) has changed. This leads to a less strict requirement for avoiding oscillations:
$$4 v_0\tau \le B\left(1+\frac{v_0\Delta t}{B}\right)^2 \tag{64}$$
than equation (41).
Finally a note on the case $d + \dot d\,\Delta t < 0$ where the force is zero: if pedestrian α starts to approach β from a sufficiently large distance and with a typical pedestrian walking speed it will be $d + \dot d\,\Delta t > 0$. Since equation (58) diverges to positive values at $d + \dot d\,\Delta t \to 0^+$ the pedestrian will slow down and eventually be at rest or turn around before $d < -\dot d\,\Delta t$. For a departing pedestrian α it is $\dot d > 0$ and thus always $d + \dot d\,\Delta t > 0$. Only when a simulation is initiated with $d + \dot d\,\Delta t < 0$ it may yield dynamics that do not fit into this line of argumentation;
compare [17]. Such extrinsically prepared “Garden of Eden” states can be dealt with for example by a model
extension (if one does not simply want to exclude them by construction).
3.4
Simulations
For realistic applications it makes sense to choose the parameters such that no oscillations occur. However, to
verify a computer implementation of the Social Force Model it can be interesting to use parameters just around
the critically damped conditions and check if oscillations do and do not occur according to expectations. Model
specific validation work can amend validation tests like those in the RiMEA test cases [18] which usually are
formulated model independently. In this subsection such a model specific verification process will be carried out
exemplarily utilizing PTV Viswalk [19].
Figure 4: Simulation scenario.
Figure 4 shows how the one-dimensional setting has been implemented in the two-dimensional simulation
model. The walking area is modeled to be 0.55 m wide. A pedestrian facing into positive x direction is stopped
by a red traffic signal at x=0. Then a second pedestrian is set into the model at x=52 m moving into negative
x direction.
τ [s] | distance [m] | expected [m] | difference [m]
0.7 | 0.4556 | 0.4570 | 0.0014
0.8 | 0.4821 | 0.4837 | 0.0016
0.9 | 0.5044 | 0.5072 | 0.0028
1.0 | 0.5269 | 0.5283 | 0.0014
1.2 | 0.5645 | 0.5648 | 0.0002
1.5 | 0.6073 | 0.6094 | 0.0021
2.0 | 0.6655 | 0.6669 | 0.0014
3.0 | 0.7483 | 0.7480 | -0.0003
4.0 | 0.8044 | 0.8056 | 0.0011
5.0 | 0.8482 | 0.8502 | 0.0020
Table 3: Stand still distance [m] in dependence of parameter τ [s], with A = 1.6 m/s2 , B = 0.2 m and v0 = 1.5
m/s. Column “expected” shows the values according to equation (35).
Parameter settings: since A soc mean controls the contribution of the elliptical II specification it has been set to zero (in the remainder "A" refers to "A soc iso" in the software; for the pedestrian on the left it has been set to zero as well); λ = 1.0 to minimize effects from the in fact two-dimensional nature of the simulation; the stochastic noise parameter has been set to zero for both pedestrians to not unnecessarily blur the results.
B [m] | distance [m] | expected [m] | difference [m]
0.1  | 0.5831  | 0.5847  | 0.0016
0.2  | 0.6524  | 0.6540  | 0.0016
0.3  | 0.7240  | 0.7233  | -0.0007
0.5  | 0.8590  | 0.8620  | 0.0029
1.0  | 1.2073  | 1.2085  | 0.0013
2.0  | 1.9005  | 1.9017  | 0.0012
4.0  | 3.2870  | 3.2880  | 0.0010
6.0  | 4.6734  | 4.6743  | 0.0008
9.0  | 6.7530  | 6.7537  | 0.0007
12.0 | 8.8326  | 8.8332  | 0.0005
18.0 | 12.9917 | 12.9920 | 0.0003
24.0 | 17.1509 | 17.1509 | 0.0000
Table 4: Stand still distance [m] in dependence of parameter B [m], with A = 2.0 m/s2 , τ = 1.5 s and v0 = 1.5 m/s.
Column “expected” shows the values according to equation (35). The difference between theoretical expectation
and simulation is in all cases below 3 mm.
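The "expected" columns of Tables 3 and 4 follow directly from equation (35); a minimal sketch (illustrative, using the radius R = 0.2577 m given in this section):

```python
import numpy as np

def stand_still_distance(A, B, tau, v0, R=0.2577):
    """Equilibrium distance of eq. (35)."""
    return B * np.log(A * tau / v0) + 2.0 * R

# parameters of Table 4: A = 2.0 m/s^2, tau = 1.5 s, v0 = 1.5 m/s
for B in (0.1, 0.2, 0.5, 1.0, 4.0, 9.0, 24.0):
    print(B, round(stand_still_distance(2.0, B, 1.5, 1.5), 4))
```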
At first we investigate where the second pedestrian comes to rest. According to equation (35) this depends
on the values of the parameters B soc iso, A soc iso, v0 , τ , and R. The latter was set to be 0.2577 m. Keeping
A soc iso and v0 constant and increasing the value of τ in steps of 0.1 s, the second pedestrian for the first time does not pass through the first one at a value of τ = 0.6 s, and both do not overlap visually for τ ≥ 0.8 s. At
τ = 0.9375 s when Aτ = v0 the distance of the central points of both pedestrians is 0.5135 m which comes 2
mm close to 2R. Table 3 and figure 5 show values for the stand still distance in dependence of some values for
parameter τ with all other parameters kept constant. The theoretical expectation is met well in all cases.
Table 4 and figure 5 show values for the stand still distance in dependence of some values for parameter
B soc iso with all other parameters kept constant. The theoretical expectation is met well in all cases.
The stand still distances of table 4 are unrealistically high for all but the smallest values for parameter B.
We have chosen the parameters in this way not to scan for realistic parameter values, but because with these
parameters one can well demonstrate that for certain cases (small values for B) there are oscillations and in
others (large values for B) there are none. Figures 6 and 7 show the time evolution of the position of the
approaching pedestrian for various values of parameter B.
Figure 5: Left: Visualization of the data of table 3. The expectation for the regression curve would be y =
0.2000 ln(x) + 0.5283. Right: Visualization of the data of table 4. The expectation for the regression curve would
be y = 0.6931x + 0.5154.
Figure 6: Left: Position of approaching pedestrian over time for various values of parameter B. Right: As A = 2.0
m/s2 , v0 = 1.5 m/s, τ = 1.5 s the system is critically damped for B = 9.0 m. For increasing B the oscillations get
smaller and vanish for B = 9.0.
Neglecting the damping exponential function, the approximately expected time distance T_r between two reversal points in the under damped case is:
$$T_r = \frac{\pi}{\sqrt{\dfrac{v_0}{B\tau} - \dfrac{1}{4\tau^2}}} \tag{65}$$
Table 5 shows a comparison between actual and expected Tr for various values for B which generate under
damped behavior.
3.5
Two pedestrians approaching each other
If two pedestrians approach each other in one dimension they may come to a stand still at an equilibrium distance
or they might jointly (and with an equilibrium distance) move into one of the two possible directions. If all
Figure 7: Left: Position of approaching pedestrian vs. time for under damped and critical cases with regard to the
value of B. Right: Zoom to the region of oscillations.
parameters (v0 , A, B, and τ ) are identical for both pedestrians one can rewrite the coupled equations of motion
of both pedestrians as one equation for the movement of the center of mass and one equation for the relative
motion. The latter can again be approximated by an oscillator equation. Compared to the case above one has an additional factor 2 at the restoring force term, implying that the system is critically or over damped for
8v0 τ ≤ B
(66)
which leads to the conclusion that the case of two mutually approaching pedestrians sets a stronger restriction
on parameter choice than the case when one pedestrian is approaching another who is standing, at least if
oscillations are to be avoided. The 2007 variant of the Social Force Model in this case suppresses oscillations
even more than when one of the two pedestrians is standing still.
B [m] | number of reverses | simulation [s] | approximate expectation [s]
0.1 | 11 | 1.035 | 0.999
0.2 | 7  | 1.442 | 1.421
0.3 | 3  | 1.800 | 1.750
0.5 | 3  | 2.250 | 2.286
1.0 | 2  | 3.100 | 3.332
1.5 | 2  | 4.000 | 4.215
2.0 | 2  | 4.850 | 5.038
3.0 | 2  | 6.550 | 6.664
Table 5: Comparison of simulated and approximately expected time [s] between reversals.
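The "approximate expectation" column can be recomputed from equation (65); a minimal sketch (illustrative only):

```python
import numpy as np

def reversal_period(v0, tau, B):
    """Approximate time between two reversal points in the under damped case, eq. (65)."""
    disc = v0 / (B * tau) - 1.0 / (4.0 * tau ** 2)
    if disc <= 0.0:
        raise ValueError("not under damped: 4*v0*tau <= B")
    return np.pi / np.sqrt(disc)

# parameters of Table 5: tau = 1.5 s, v0 = 1.5 m/s
for B in (0.1, 0.2, 0.3, 0.5, 1.0, 1.5, 2.0, 3.0):
    print(B, round(reversal_period(1.5, 1.5, B), 3))
```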
4
Conclusions
It could be shown that the Social Force Model as proposed in 1995 (elliptical specification I) and 2000 (circular
specification) in a special and one-dimensional case and around equilibrium reduces to the equations of the
damped harmonic oscillator. This implies that indeed one has to be careful not to choose parameters with which
the pedestrians’ trajectories yield unrealistic results. However, at the same time it means that there are parts in
parameter space in which there are not only just small oscillations, but where there are exactly no oscillations.
A look at parameter values found in literature for the circular specification shows that parameters deemed to
yield realistic results in certain calibration scenarios do or may produce oscillating behavior unless the desired
walking speed(s) are set to rather small values.
The equations of the Social Force Model as of 2007 (elliptical specification II) are not as easily treated
analytically. Still in a discussion of its equations it could be argued that – compared to the circular specification
– oscillations are clearly suppressed. Elliptical specification II from 2007 therefore appears to be superior to the
two preceding variants also for the reasons discussed in this paper (this was found already in the paper from 2007
but for different reasons). It is therefore a good idea to either simply use elliptical specification II or combine it
with one of the earlier variants (e.g. by simply adding the forces) to reduce the risk of oscillations.
The phenomenon of oscillations was used to verify a specific computer implementation of the Social Force
Model. It was possible to show that it reproduces the expected results. This comparison can be seen as an
attempt to falsify either of the two – theoretical expectations and software implementation – and the attempt
failed, no potential issues could be found. The method can be applied to verify any implementation of the
Social Force Model variants of 1995 or 2000 and it is generally an example of model specific verification. The
tests carried out in this contribution are just examples. The phenomenon of oscillations bears the potential to
formulate further tests.
5
Acknowledgments
I thank Mohcine Chraibi and Normen Rochau for useful discussions.
6
References
[1] D. Helbing and P. Molnar, “Social force model for pedestrian dynamics”, Physical review E 51 no. 5,
(1995) 4282, arXiv:cond-mat/9805244.
[2] D. Helbing, I. Farkas, and T. Vicsek, “Simulating dynamical features of escape panic”, Nature 407
no. 6803, (2000) 487–490, arXiv:cond-mat/0009448.
[3] A. Johansson, D. Helbing, and P. Shukla, “Specification of the Social Force Pedestrian Model by
Evolutionary Adjustment to Video Tracking Data”, Advances in Complex Systems 10 no. 4, (2007)
271–288, arXiv:0810.4587 [physics.soc-ph].
[4] B. Steffen and A. Seyfried, “The repulsive force in continous space models of pedestrian movement”,
arXiv preprint (2008) eprint, arXiv:0803.1319 [physics.soc-ph].
[5] M. Chraibi, A. Seyfried, and A. Schadschneider, “Generalized centrifugal-force model for pedestrian
dynamics”, Physical Review E 82 no. 4, (2010) 046111, arXiv:1008.4297 [physics.soc-ph].
[6] M. Chraibi, U. Kemloh, A. Schadschneider, and A. Seyfried, “Force-Based Models of Pedestrian
Dynamics”, Networks and Heterogeneous Media 6 (2011) 425–442.
[7] G. Köster, F. Treml, and M. Gödel, “Avoiding numerical pitfalls in social force models”, Physical Review
E 87 no. 6, (2013) 063305.
[8] R. Corless, G. Gonnet, D. Hare, D. Jeffrey, and D. Knuth, “On the LambertW function”, Advances in
Computational Mathematics 5 no. 1, (1996) 329–359.
[9] R. Corless, D. Jeffrey, and D. Knuth, “A sequence of series for the Lambert W function”, in Proceedings of
the 1997 international symposium on Symbolic and algebraic computation, pp. 197–204, ACM. 1997.
[10] F. Chapeau-Blondeau and A. Monir, “Numerical evaluation of the Lambert W function and application to
generation of generalized Gaussian noise with exponent 1/2”, IEEE Transactions on Signal Processing 50
no. 9, (2002) 2160–2165.
[11] Wolfram—Alpha, 2014. Publisher: Wolfram Alpha LLC. Retrieved April 9th 2014. https:
//www.wolframalpha.com/input/?i=ProductLog%28a%29+with+a%3D-0.270670566473225&dataset=.
[12] D. Winter, “Human balance and posture control during standing and walking”, Gait & posture 3 no. 4,
(1995) 193–214.
[13] A. Gorrini, K. Shimura, S. Bandini, K. Ohtsuka, and K. Nishinari, “Experimental Investigation of
Pedestrian Personal Space”, Transportation Research Record: Journal of the Transportation Research
Board 2421 no. 1, (2014) 57–63.
[14] T. Werner and D. Helbing, “The social force pedestrian model applied to real life scenarios”, Pedestrian
and evacuation dynamics (2003) 17–26.
[15] S. Seer, C. Rudloff, T. Matyus, and N. Brändle, “Validating social force based models with comprehensive
real world motion data”, Transportation Research Procedia 2 (2014) 724–732.
[16] S. Hoogendoorn and W. Daamen, “Microscopic calibration and validation of pedestrian models:
Cross-comparison of models using experimental data”, in Traffic and Granular Flow 05, pp. 329–340.
Springer, 2007.
[17] A. Schadschneider and M. Schreckenberg, “Garden of Eden states in traffic models”, Journal of Physics
A: Mathematical and General 31 no. 11, (1998) L225, arXiv:cond-mat/9801061.
[18] U. Brunner, H. Kirchberger, C. Lebeda, M. Oswald, R. Könnecke, M. Kraft, A. Thoss, L. Mülli,
A. Seyfried, C. Hartnack, S. Wader, G. Spennes, and T. Kretz, RiMEA – Richtlinie für Mikroskopische
Entfluchtungs-Analysen. Initiatoren des RiMEA-Projekts: M. Schwendimann, N. Waldau, P. Gattermann,
C. Moroge, T. Meyer-König, and M. Schreckenberg, 2.2.1 ed., 2009. (eprint from http://www.rimea.de/).
(in German).
[19] PTV AG, PTV Vissim 6.0 – User Manual. PTV Group, Haid-und-Neu-Str. 15, D-76131 Karlsruhe,
Germany, 2013.
[20] Wolfram—Alpha, 2015. Publisher: Wolfram Alpha LLC. Retrieved January 18th 2015.
https://www.wolframalpha.com/input/?i=plot+2%2BLambertW%28-x*exp%28-x%29%29+for+x%3D1+to+2.
[21] Wolfram—Alpha, 2015. Publisher: Wolfram Alpha LLC. Retrieved January 18th 2015.
https://www.wolframalpha.com/input/?i=plot+%28LambertW%28-%281%2Bx%29*exp%28-%281%2Bx%29%
29%29%2C+-%28%2B+1+-+x+%2B+2%2F3*x*x+-+4%2F9*x*x*x+%2B+44%2F135*x*x*x*x+-+104%
2F405*x*x*x*x*x%29%29+for+x+from+0+to+1.
A Proof that a pedestrian heading for a destination point never
comes to rest
In this section we show that
$$t_n = \sum_{i=1}^{n}\Delta t_i \tag{67}$$
diverges for n → ∞.
First step: We prove that α_{n+1} < α_n: we begin with α_0 = 2 and with equation (24)
$$\alpha_{n+1} = 2 + W_0\!\left(-\alpha_n e^{-\alpha_n}\right) \tag{68}$$
For all 1 < z ≤ 2 it holds that
$$-z e^{-z} < 0 \tag{69}$$
$$\frac{\partial}{\partial z}\left(-z e^{-z}\right) > 0 \tag{70}$$
and for all −1/e < z < 0
$$W_0(z) < 0 \tag{71}$$
$$\frac{\partial}{\partial z}\, W_0(z) > 0 \tag{72}$$
Therefore for all 1 < z < 2 it holds that
$$W_0\!\left(-z e^{-z}\right) < 0 \tag{73}$$
$$\frac{\partial}{\partial z}\, W_0\!\left(-z e^{-z}\right) > 0 \tag{74}$$
From that and with α0 = 2 it follows that
$$\alpha_1 = 2 + W_0\!\left(-\alpha_0 e^{-\alpha_0}\right) < \alpha_0 \tag{75}$$
From equation (24) it follows that
$$\alpha_n - \alpha_{n+1} = W_0\!\left(-\alpha_{n-1} e^{-\alpha_{n-1}}\right) - W_0\!\left(-\alpha_n e^{-\alpha_n}\right) \tag{76}$$
and from it that if α_{n−1} > α_n also α_n > α_{n+1}. See figure 8 for a plot of $2 + W_0(-z e^{-z})$ and the visualization of the evolution of the α_n which allows one to see this first step very easily.
Second step: We verify that ∆t_n > 0 (physically this is obvious, mathematically it needs to be shown; this step in effect is a check on potential errors done before): Since
$$\alpha_n = -W_{-1}\!\left(-\alpha_n e^{-\alpha_n}\right) \tag{77}$$
and because for −1/e < z < 0 it holds that
$$W_0(z) > W_{-1}(z) \tag{78}$$

Figure 8: Plot of $2 + W_0(-x e^{-x})$ for 1 ≤ α ≤ 2 and a visualization of the evolution of the α_n; basically created with [20].

it results for the ∆t_n according to equation (25)
$$\frac{\Delta t_{n+1}}{\tau} = \alpha_n + W_0\!\left(-\alpha_n e^{-\alpha_n}\right) = W_0\!\left(-\alpha_n e^{-\alpha_n}\right) - W_{-1}\!\left(-\alpha_n e^{-\alpha_n}\right) > 0 \tag{79}$$
Third step: we prove that ∆t_{n+1} < ∆t_n. Equations (25) and (24) read
$$\frac{\Delta t_{n+1}}{\tau} = \alpha_n + W_0\!\left(-\alpha_n e^{-\alpha_n}\right) \tag{80}$$
$$\alpha_n = 2 + W_0\!\left(-\alpha_{n-1} e^{-\alpha_{n-1}}\right). \tag{81}$$
From them it follows that
$$\frac{\Delta t_{n+1}}{\tau} = \alpha_{n+1} + \alpha_n - 2 \tag{82}$$
thus the ∆t_n show the same behavior as the α_n and with α_{n+1} < α_n also ∆t_{n+1} < ∆t_n.
Fourth step: if ∆tn+1 /∆tn for large n would approach a value smaller than 1 the sum would converge, only
if it approaches 1 it may diverge. From (82) it follows that
$$\frac{\Delta t_{n+1}}{\Delta t_n} = 1 - \frac{\alpha_{n-1} - \alpha_{n+1}}{\alpha_{n-1} + \alpha_n - 2} \tag{83}$$
To compute the limit for this for large n we have to expand $W_0(-\alpha_n e^{-\alpha_n})$ into a series. For convenience we write
$$\beta_n = \alpha_n - 1 \tag{84}$$
knowing that this is β_n = u_n/v_0 and that these β_n will approach 0 with increasing n.
By computing the right side limit towards 0 of
$$W_0\!\left(-\alpha_n e^{-\alpha_n}\right) = W_0\!\left(-(1+\beta_n)e^{-(1+\beta_n)}\right) \tag{85}$$
and its derivatives one can write
$$W_0\!\left(-(1+\beta_n)e^{-(1+\beta_n)}\right) = -1 + \beta_n - \frac{2}{3}\beta_n^2 + \frac{4}{9}\beta_n^3 - \frac{44}{135}\beta_n^4 + \frac{104}{405}\beta_n^5 + O(\beta_n^6), \tag{86}$$
compare figure 9.
Considering only terms to second order
$$\beta_n \approx \beta_{n-1} - \frac{2}{3}\beta_{n-1}^2 \tag{87}$$
$$\beta_{n+1} \approx \beta_{n-1} - \frac{4}{3}\beta_{n-1}^2 \tag{88}$$
Using these approximations in equation (83) gives for large n
$$\frac{\Delta t_{n+1}}{\Delta t_n} \approx 1 - \frac{2\beta_{n-1}}{3-\beta_{n-1}} \tag{89}$$
which approaches 1 with n → ∞ where β_{n−1} → 0.
Thus, like with the harmonic series h_m, the ratio of subsequent terms approaches 1. We note that equation (82) implies $\frac{\Delta t_{n+1}}{\tau} = \beta_{n+1} + \beta_n$ and therefore
$$\frac{t_{n\to\infty}}{\tau} = \alpha_0 - 1 + 2\sum_{n=1}^{\infty}\beta_n \tag{90}$$
the sum of the ∆t_n depends trivially on the sum of the β_n and especially their convergence behavior is the same.
In the last step we prove that if βn > hm then also βn+1 > hm+1 and therefore beyond some n, m the harmonic
series is a lower estimate of the βn implying that with the harmonic series also the sum of βn diverges.
Assume that for some n, βn can be written as
$$\beta_n = \frac{q}{m} \tag{91}$$
with m ∈ N, m >> 1 and 1 ≤ q < m/(m − 1), i.e. βn is placed between two elements of the harmonic series.
This requires just that there is some 0 < βn < 1. This is obviously the case.
So, can there be a βn+1 < 1/(m + 1)?
$$\beta_{n+1} - \frac{1}{m+1} = \frac{q}{m} - \frac{2}{3}\frac{q^2}{m^2} - \frac{1}{m+1} \tag{92}$$
A lower estimate of the right side is obtained if at the positive term q is replaced by its minimum value q = 1 and at the negative term by its upper limit q = m/(m − 1):
$$\beta_{n+1} - \frac{1}{m+1} > \frac{1}{m} - \frac{2}{3}\frac{1}{(m-1)^2} - \frac{1}{m+1} \tag{93}$$
$$= \frac{m^2 - 8m + 3}{3m(m-1)^2(m+1)} \tag{94}$$
$$> 0 \quad \forall\, (m \ge 8) \tag{95}$$
Thus, if there is a β_{n0} < 1/8 – there is, as can be seen in table 1 – it will hold for all β_{n>n0} that if β_n > h_m then
also βn+1 > hm+1 . Therefore the harmonic series is a lower estimate for the series of βn . Thus the series of the
βn diverges and with it the series of the ∆tn .
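Numerically, the divergence can be illustrated by iterating (24) and accumulating the partial sums of (90); the sketch below (using SciPy's lambertw) shows β_n falling off roughly like 3/(2n) while the partial sums keep growing, in line with the harmonic lower bound (this is an illustration, not part of the proof):

```python
import numpy as np
from scipy.special import lambertw

beta, total = 1.0, 1.0     # beta_0 = alpha_0 - 1 = 1; partial sum of eq. (90) starts at alpha_0 - 1
for n in range(1, 100001):
    alpha = 1.0 + beta
    beta = 1.0 + lambertw(-alpha * np.exp(-alpha), 0).real   # beta_n = alpha_n - 1 via eq. (24)
    total += 2.0 * beta
    if n in (10, 100, 1000, 10000, 100000):
        print(n, round(total, 3), round(n * beta, 3))        # partial sum keeps growing; n*beta_n -> 3/2
```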
Figure 9: Plot comparing W (−(1 + y) exp(−(1 + y))) and its approximation [21].
| 5 |
arXiv:1708.00078v2 [math.ST] 28 Nov 2017
Bayesian Dyadic Trees and Histograms for Regression
Stéphanie van der Pas
Mathematical Institute
Leiden University
Leiden, The Netherlands
[email protected]
Veronika Ročková
Booth School of Business
University of Chicago
Chicago, IL, 60637
[email protected]
Abstract
Many machine learning tools for regression are based on recursive partitioning
of the covariate space into smaller regions, where the regression function can
be estimated locally. Among these, regression trees and their ensembles have
demonstrated impressive empirical performance. In this work, we shed light
on the machinery behind Bayesian variants of these methods. In particular, we
study Bayesian regression histograms, such as Bayesian dyadic trees, in the simple
regression case with just one predictor. We focus on the reconstruction of regression
surfaces that are piecewise constant, where the number of jumps is unknown. We
show that with suitably designed priors, posterior distributions concentrate around
the true step regression function at a near-minimax rate. These results do not require
the knowledge of the true number of steps, nor the width of the true partitioning
cells. Thus, Bayesian dyadic regression trees are fully adaptive and can recover the
true piecewise regression function nearly as well as if we knew the exact number
and location of jumps. Our results constitute the first step towards understanding
why Bayesian trees and their ensembles have worked so well in practice. As an
aside, we discuss prior distributions on balanced interval partitions and how they
relate to an old problem in geometric probability. Namely, we relate the probability
of covering the circumference of a circle with random arcs whose endpoints are
confined to a grid, a new variant of the original problem.
1
Introduction
Histogram regression methods, such as regression trees [1] and their ensembles [2], have an impressive
record of empirical success in many areas of application [3, 4, 5, 6, 7]. Tree-based machine learning
(ML) methods build a piecewise constant reconstruction of the regression surface based on ideas
of recursive partitioning. Perhaps the most popular partitioning schemes are the ones based on
parallel-axis splits. One recent example is the Mondrian process [8], which was introduced to the
ML community as a prior over tree data structures with interesting self-consistency properties. Many
efficient algorithms exist that can be deployed to fit regression histograms underpinned by some
partitioning scheme. Among these, Bayesian variants, such as Bayesian CART [9, 10] and BART
[11], have appealed to umpteen practitioners. There are several reasons why. Bayesian tree-based
regression tools (a) can adapt to regression surfaces without any need for pruning, (b) are reluctant to
overfit, (c) provide an avenue for uncertainty statements via posterior distributions. While practical
success stories abound [3, 4, 5, 6, 7], the theoretical understanding of Bayesian regression tree
methods has been lacking. In this work, we study the quality of posterior distributions with regard
to the three properties mentioned above. We provide first theoretical results that contribute to the
understanding of Bayesian Gaussian regression methods based on recursive partitioning.
Our performance metric will be the speed of posterior concentration/contraction around the true
regression function. This is ultimately a frequentist assessment, describing the typical behavior of the
posterior under the true generative model [12]. Posterior concentration rate results are now slowly
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
entering the machine learning community as a tool for obtaining more insights into Bayesian methods
[13, 14, 15, 16, 17]. Such results quantify not only the typical distance between a point estimator
(posterior mean/median) and the truth, but also the typical spread of the posterior around the truth.
Ideally, most of the posterior mass should be concentrated in a ball centered around the true value
with a radius proportional to the minimax rate [12, 18]. Being inherently a performance measure of
both location and spread, optimal posterior concentration provides a necessary certificate for further
uncertainty quantification [19, 20, 21]. Beyond uncertainty assessment, theoretical guarantees that
describe the average posterior shrinkage behavior have also been a valuable instrument for assessing
the suitability of priors. As such, these results can often provide useful guidelines for the choice of
tuning parameters, e.g. the latent Dirichlet allocation model [14].
Despite the rapid growth of this frequentist-Bayesian theory field, posterior concentration results
for Bayesian regression histograms/trees/forests have, so far, been unavailable. Here, we adopt this
theoretical framework to get new insights into why these methods work so well.
Related Work
Bayesian density estimation with step functions is a relatively well-studied problem [22, 23, 24]. The
literature on Bayesian histogram regression is a bit less crowded. Perhaps the closest to our conceptual
framework is the work by Coram and Lalley [25], who studied Bayesian non-parametric binary
regression with uniform mixture priors on step functions. The authors focused on L1 consistency.
Here, we focus on posterior concentration rather than consistency. We are not aware of any other
related theoretical study of Bayesian histogram methods for Gaussian regression.
Our Contributions
In this work we focus on a canonical regression setting with merely one predictor. We study
hierarchical priors on step functions and provide conditions under which the posteriors concentrate
optimally around the true regression function. We consider the case when the true regression function
itself is a step function, i.e. a tree or a tree ensemble, where the number and location of jumps is
unknown.
We start with a very simple space of approximating step functions, supported on equally sized intervals
where the number of splits is equipped with a prior. These partitions include dyadic regression trees.
We show that for a suitable complexity prior, all relevant information about the true regression
function (jump sizes and the number of jumps) is learned from the data automatically. During the
course of the proof, we develop a notion of the complexity of a piecewise constant function relative
to its approximating class.
Next, we take a larger approximating space consisting of functions supported on balanced partitions
that do not necessarily have to be of equal size. These correspond to more general trees with splits at
observed values. With a uniform prior over all balanced partitions, we are able to achieve a nearly
ideal performance (as if we knew the number and the location of jumps). As an aside, we describe
the distribution of interval lengths obtained when the splits are sampled uniformly from a grid. We
relate this distribution to the probability of covering the circumference of a circle with random arcs, a
problem in geometric probability that dates back to [26, 27]. Our version of this problem assumes
that the splits are chosen from a discrete grid rather than from a unit interval.
Notation
With ∝ and ≲ we will denote an equality and an inequality, up to a constant. The ε-covering number of a set Ω for a semimetric d, denoted by N(ε, Ω, d), is the minimal number of d-balls of radius ε needed to cover the set Ω. We denote by φ(·) the standard normal density and by $P_f^n = \prod_{i=1}^n P_{f,i}$ the n-fold product measure of the n independent observations under (1) with a regression function f(·). By $P_x^n = \frac{1}{n}\sum_{i=1}^n \delta_{x_i}$ we denote the empirical distribution of the observed covariates, by $\|\cdot\|_n$ the norm on $L_2(P_x^n)$ and by $\|\cdot\|_2$ the standard Euclidean norm.
2
Bayesian Histogram Regression
We consider a classical nonparametric regression model, where response variables Y (n) =
(Y1 , . . . , Yn )0 are related to input variables x(n) = (x1 , . . . , xn )0 through the function f0 as follows
Yi = f0 (xi ) + εi , εi ∼ N (0, 1), i = 1, . . . , n.
(1)
2
We assume that the covariate values xi are one-dimensional, fixed and have been rescaled so that
xi ∈ [0, 1]. Partitioning-based regression methods are often invariant to monotone transformations
of observations. In particular, when f0 is a step function, standardizing the distance between the
observations, and thereby the split points, has no effect on the nature of the estimation problem.
Without loss of generality, we will thereby assume that the observations are aligned on an equispaced
grid.
Assumption 1. (Equispaced Grid) We assume that the scaled predictor values satisfy $x_i = i/n$ for
each i = 1, . . . , n.
This assumption implies that partitions that are balanced in terms of the Lebesgue measure will be
balanced also in terms of the number of observations. A similar assumption was imposed by Donoho
[28] in his study of Dyadic CART.
The underlying regression function f0 : [0, 1] → R is assumed to be a step function, i.e.
$$f_0(x) = \sum_{k=1}^{K_0}\beta_k^0\,\mathbb{I}_{\Omega_k^0}(x),$$
where $\{\Omega_k^0\}_{k=1}^{K_0}$ is a partition of [0, 1] into $K_0$ non-overlapping intervals. We assume that $\{\Omega_k^0\}_{k=1}^{K_0}$ is minimal, meaning that $f_0$ cannot be represented with a smaller partition (with less than $K_0$ pieces). Each partitioning cell $\Omega_k^0$ is associated with a step size $\beta_k^0$, determining the level of the function $f_0$ on $\Omega_k^0$. The entire vector of $K_0$ step sizes will be denoted by $\beta^0 = (\beta_1^0, \ldots, \beta_{K_0}^0)'$.
One might like to think of f0 as a regression tree with K0 bottom leaves. Indeed, every step function
can be associated with an equivalence class of trees that live on the same partition but differ in
their tree topology. The number of bottom leaves K0 will be treated as unknown throughout this
paper. Our goal will be designing a suitable class of priors on step functions so that the posterior
concentrates tightly around f0 . Our analysis with a single predictor has served as a precursor to a
full-blown analysis for high-dimensional regression trees [29].
We consider an approximating space of all step functions (with K = 1, 2, . . . bottom leaves)
$$F = \bigcup_{K=1}^{\infty} F_K, \tag{2}$$
which consists of smaller spaces (or shells) of all K-step functions
$$F_K = \left\{ f_\beta : [0,1]\to\mathbb{R};\; f_\beta(x) = \sum_{k=1}^{K}\beta_k\,\mathbb{I}_{\Omega_k}(x) \right\},$$
each indexed by a partition $\{\Omega_k\}_{k=1}^{K}$ and a vector of K step heights β. The fundamental building block of our theoretical analysis will be the prior on F. This prior distribution has three main ingredients, described in detail below, (a) a prior on the number of steps K, (b) a prior on the partitions $\{\Omega_k\}_{k=1}^{K}$ of size K, and (c) a prior on step sizes $\beta = (\beta_1, \ldots, \beta_K)'$.
2.1
Prior πK (·) on the Number of Steps K
To avoid overfitting, we assign an exponentially decaying prior distribution that penalizes partitions
with too many jumps.
Definition 2.1. (Prior on K) The prior on the number of partitioning cells K satisfies
$$\pi_K(k) \equiv \Pi(K = k) \propto \exp(-c_K\, k\log k) \quad\text{for } k = 1, 2, \ldots. \tag{3}$$
This prior is no stranger to non-parametric problems. It was deployed for stepwise reconstructions of
densities [24, 23] and regression surfaces [25]. When cK is large, this prior is concentrated on models
with small complexity where overfitting should not occur. Decreasing cK leads to the smearing of
the prior mass over partitions with more jumps. This is illustrated in Figure 1, which depicts the prior
for various choices of cK . We provide recommendations for the choice of cK in Section 3.1.
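As an illustration of how (3) behaves in practice, the prior can be normalized numerically over a finite truncation (a minimal sketch; the truncation point k_max is an arbitrary choice, not part of the prior definition):

```python
import numpy as np

def prior_K(c_K, k_max=50):
    """Normalized truncation of eq. (3): pi_K(k) proportional to exp(-c_K * k * log k)."""
    k = np.arange(1, k_max + 1)
    w = np.exp(-c_K * k * np.log(k))      # k*log(k) = 0 for k = 1
    return w / w.sum()

for c_K in (1.0, 0.5, 0.2, 0.1):          # the values shown in Figure 1
    print(c_K, np.round(prior_K(c_K)[:5], 3))
```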
2.2
Prior π_Ω(· | K) on Interval Partitions $\{\Omega_k\}_{k=1}^{K}$
After selecting the number of steps K from πK (k), we assign a prior over interval partitions πΩ (· |K).
We will consider two important special cases.
Figure 1: (Left) Prior on the tree size for several values of cK , (Right) Best approximations of f0 (in
the `2 sense) by step functions supported on equispaced blocks of size K ∈ {2, 5, 10}.
2.2.1
Equivalent Blocks
Perhaps the simplest partition is based on statistically equivalent blocks [30], where all the cells are
required to have the same number of points. This is also known as the K-spacing rule that partitions
the unit interval using order statistics of the observations.
Definition 2.2. (Equivalent Blocks) Let x(i) denote the ith order statistic of x = (x1 , . . . , xn )0 ,
where $x_{(n)} \equiv 1$ and $n = Kc$ for some $c \in \mathbb{N}\setminus\{0\}$. Denote by $x_{(0)} \equiv 0$. A partition $\{\Omega_k\}_{k=1}^{K}$ consists of K equivalent blocks, when $\Omega_k = (x_{(j_k)}, x_{(j_{k+1})}]$, where $j_k = (k-1)c$.
A variant of this definition can be obtained in terms of interval lengths rather than numbers of
observations.
Definition 2.3. (Equispaced Blocks) A partition $\{\Omega_k\}_{k=1}^{K}$ consists of K equispaced blocks $\Omega_k$, when $\Omega_k = \left(\frac{k-1}{K}, \frac{k}{K}\right]$ for $k = 1, \ldots, K$.
When $K = 2^s$ for some $s \in \mathbb{N}\setminus\{0\}$, the equispaced partition corresponds to a full complete binary tree with splits at dyadic rationals. If the observations $x_i$ lie on a regular grid (Assumption 1), then Definitions 2.2 and 2.3 are essentially equivalent. We will thereby focus on equivalent blocks (EB) and denote such a partition (for a given K > 0) with $\Omega_K^{EB}$. Because there is only one such partition for each K, the prior $\pi_\Omega(\cdot\mid K)$ has a single point mass at $\Omega_K^{EB}$. With $\Omega^{EB} = \bigcup_{K=1}^{\infty}\Omega_K^{EB}$ we denote the set of all EB partitions for K = 1, 2, . . . . We will use these partitioning schemes as a jump-off point.
2.2.2
Balanced Intervals
Equivalent (equispaced) blocks are deterministic and, as such, do not provide much room for learning
about the actual location of jumps in f0 . Balanced intervals, introduced below, are a richer class of
partitions that tolerate a bit more imbalance. First, we introduce the notion of cell counts µ(Ωk ). For
each interval $\Omega_k$, we write
$$\mu(\Omega_k) = \frac{1}{n}\sum_{i=1}^{n}\mathbb{I}(x_i\in\Omega_k), \tag{4}$$
the proportion of observations falling inside $\Omega_k$. Note that for equivalent blocks, we can write $\mu(\Omega_1) = \cdots = \mu(\Omega_K) = c/n = 1/K$.
Definition 2.4. (Balanced Intervals) A partition $\{\Omega_k\}_{k=1}^{K}$ is balanced if
$$\frac{C_{\min}^2}{K} \le \mu(\Omega_k) \le \frac{C_{\max}^2}{K} \quad\text{for all } k = 1, \ldots, K \tag{5}$$
for some universal constants $C_{\min} \le 1 \le C_{\max}$ not depending on K.
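Condition (5) is straightforward to check on data; a minimal sketch (the constants C_min, C_max and the boundary handling of the histogram are illustrative choices, not prescriptions of the paper):

```python
import numpy as np

def cell_proportions(splits, x):
    """Empirical cell proportions mu(Omega_k) of eq. (4) for a split vector."""
    edges = np.concatenate(([0.0], np.sort(splits), [x.max()]))
    counts, _ = np.histogram(x, bins=edges)
    return counts / len(x)

def is_balanced(splits, x, K, C_min=0.5, C_max=2.0):
    """Check the balancing condition (5) with illustrative constants C_min, C_max."""
    mu = cell_proportions(splits, x)
    return bool(np.all((C_min ** 2 / K <= mu) & (mu <= C_max ** 2 / K)))

n, K = 20, 4
x = np.arange(1, n + 1) / n                               # equispaced design of Assumption 1
print(is_balanced(np.array([0.25, 0.5, 0.75]), x, K))     # (near-)equivalent blocks -> True
print(is_balanced(np.array([0.05, 0.10, 0.15]), x, K))    # highly unbalanced -> False
```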
Figure 2: Two sets EK of possible stick lengths that satisfy the minimal cell-size condition |Ωk | ≥ C
with n = 10, C = 2/n and K = 2, 3.
The following variant of the balancing condition uses interval widths rather than cell counts: $\widetilde C_{\min}^2/K \le |\Omega_k| \le \widetilde C_{\max}^2/K$. Again, under Assumption 1, these two definitions are equivalent. In the sequel, we will denote by $\Omega_K^{BI}$ the set of all balanced partitions consisting of K intervals and by $\Omega^{BI} = \bigcup_{K=1}^{\infty}\Omega_K^{BI}$ the set of all balanced intervals of sizes K = 1, 2, . . . . It is worth pointing out that the balance assumption on the interval partitions can be relaxed, at the expense of a log factor in the concentration rate [29].
With balanced partitions, the $K$th shell $\mathcal{F}_K$ of the approximating space $\mathcal{F}$ in (2) consists of all step functions that are supported on partitions $\Omega_K^{BI}$ and have $K-1$ points of discontinuity $u_k \in \mathcal{I}_n \equiv \{x_i : i = 1, \ldots, n-1\}$ for $k = 1, \ldots, K-1$. For equispaced blocks in Definition 2.3, we assumed that the points of subdivision were deterministic, i.e. $u_k = k/K$. For balanced partitions, we assume that the $u_k$ are random and chosen amongst the observed values $x_i$. The order statistics of the vector of splits $u = (u_1, \ldots, u_{K-1})'$ uniquely define a segmentation of $[0,1]$ into $K$ intervals $\Omega_k = (u_{(k-1)}, u_{(k)}]$, where $u_{(k)}$ designates the $k$th smallest value in $u$ and $u_{(0)} \equiv 0$, $u_{(K)} = x_{(n)} \equiv 1$.
Our prior over balanced intervals $\pi_\Omega(\cdot \mid K)$ will be defined implicitly through a uniform prior over the split vectors $u$. Namely, the prior over balanced partitions $\Omega_K^{BI}$ satisfies
$$\pi_\Omega\big(\{\Omega_k\}_{k=1}^K \mid K\big) = \frac{1}{\mathrm{card}(\Omega_K^{BI})}\,\mathbb{I}\big(\{\Omega_k\}_{k=1}^K \in \Omega_K^{BI}\big). \qquad (6)$$
In the following Lemma, we obtain upper bounds on $\mathrm{card}(\Omega_K^{BI})$ and discuss how they relate to an old problem in geometric probability. In the sequel, we denote with $|\Omega_k|$ the lengths of the segments defined through the split points $u$.
Lemma 2.1. Assume that $u = (u_1, \ldots, u_{K-1})'$ is a vector of independent random variables obtained by uniform sampling (without replacement) from $\mathcal{I}_n$. Then under Assumption 1, we have for $1/n < C < 1/K$
$$\Pi\Big(\min_{1\le k\le K}|\Omega_k| \ge C\Big) = \frac{\binom{\lfloor n(1-KC)\rfloor + K - 1}{K-1}}{\binom{n-1}{K-1}} \qquad (7)$$
and
$$\Pi\Big(\max_{1\le k\le K}|\Omega_k| \le C\Big) = 1 - \sum_{k=1}^{\widetilde n}(-1)^{k}\binom{n-1}{k}\,\frac{\binom{\lfloor n(1-kC)\rfloor + K - 1}{K-1}}{\binom{n-1}{K-1}}, \qquad (8)$$
where $\widetilde n = \min\{n-1, \lfloor 1/C\rfloor\}$.
Proof. The denominator of (7) follows from the fact that there are $\binom{n-1}{K-1}$ possible splits for the $K-1$ points of discontinuity $u_k$. The numerator is obtained after adapting the proof of Lemma 2 of Flatto and Konheim [31]. Without loss of generality, we will assume that $C = a/n$ for some $a = 1, \ldots, \lfloor n/K\rfloor$ so that $n(1-KC)$ is an integer. Because the jumps $u_k$ can only occur on the grid $\mathcal{I}_n$, we have $|\Omega_k| = j/n$ for some $j = 1, \ldots, n-1$. It follows from Lemma 1 of Flatto and Konheim [31] that the set $E_K = \{|\Omega_k| : \sum_{k=1}^K|\Omega_k| = 1 \text{ and } |\Omega_k| \ge C \text{ for } k = 1, \ldots, K\}$ lies in the interior of a convex hull of $K$ points $v_r = (1-KC)e_r + C\sum_{k=1}^K e_k$ for $r = 1, \ldots, K$, where $e_r = (e_{r1}, \ldots, e_{rK})'$ are unit base vectors, i.e. $e_{rj} = \mathbb{I}(r = j)$. Two examples of the set $E_K$ (for $K = 2$ and $K = 3$) are depicted in Figure 2. In both figures, $n = 10$ (i.e. 9 candidate split points) and $a = 2$. With $K = 2$ (Figure 2(a)), there are only $7 = \binom{n(1-KC)+K-1}{K-1}$ pairs of interval lengths $(|\Omega_1|, |\Omega_2|)'$ that satisfy the minimal cell condition. These points lie on a grid between the two vertices $v_1 = (1-C, C)$ and $v_2 = (C, 1-C)$. With $K = 3$, the convex hull of points $v_1 = (1-2C, C, C)'$, $v_2 = (C, 1-2C, C)'$ and $v_3 = (C, C, 1-2C)'$ corresponds to a diagonal dissection of a cube of a side length $(1-3C)$ (Figure 2(b), again with $a = 2$ and $n = 10$). The number of lattice points in the interior (and on the boundary) of such a tetrahedron corresponds to the arithmetic sum $\frac{1}{2}(n-3a+2)(n-3a+1) = \binom{n-3a+2}{2}$. So far, we showed (7) for $K = 2$ and $K = 3$. To complete the induction argument, suppose that the formula holds for some arbitrary $K > 0$. Then the size of the lattice inside (and on the boundary of) a $(K+1)$-tetrahedron of a side length $[1-(K+1)C]\sqrt{2}$ can be obtained by summing lattice sizes inside $K$-tetrahedrons of increasing side lengths $0, \sqrt{2}/n, 2\sqrt{2}/n, \ldots, [1-(K+1)C]\sqrt{2}$, i.e.
$$\sum_{j=K-1}^{n[1-(K+1)C]+K-1}\binom{j}{K-1} = \binom{n[1-(K+1)C]+K}{K},$$
where we used the fact $\sum_{j=K}^{N}\binom{j}{K} = \binom{N+1}{K+1}$. The second statement (8) is obtained by writing the event as a complement of the union of events and applying the method of inclusion-exclusion.
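The count in (7) is easy to verify numerically for small n by brute-force enumeration of the split points; the following sketch (ours, not part of the paper) reproduces the example n = 10, a = 2 used above.

```python
from itertools import combinations
from math import comb

def check_formula_7(n, K, a):
    """Brute-force check of (7) on the grid I_n with C = a/n: enumerate all
    (K-1)-subsets of {1,...,n-1} and count partitions with min cell length >= C."""
    hits = total = 0
    for splits in combinations(range(1, n), K - 1):
        gaps = [b - s for s, b in zip((0,) + splits, splits + (n,))]
        total += 1
        hits += min(gaps) >= a
    lhs = hits / total
    rhs = comb(n - K * a + K - 1, K - 1) / comb(n - 1, K - 1)
    return lhs, rhs

# Example from the proof: n = 10, a = 2 gives 7 admissible pairs when K = 2.
print(check_formula_7(10, 2, 2))   # both values equal 7/9
print(check_formula_7(10, 3, 2))   # both equal C(n-3a+2, 2) / C(n-1, 2) = 15/36
```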
Remark 2.1. Flatto and Konheim [31] showed that the probability of covering a circle with random
arcs of length C is equal to the probability that all segments of the unit interval, obtained with iid
random uniform splits, are smaller than C. Similarly, the probability (8) could be related to the
probability of covering the circle with random arcs whose endpoints are chosen from a grid of n − 1
equidistant points on the circumference.
There are $\binom{n-1}{K-1}$ partitions of size $K$, of which $\binom{\lfloor n(1-\widetilde C_{\min}^2)\rfloor + K - 1}{K-1}$ satisfy the minimal cell-width balancing condition (where $\widetilde C_{\min}^2 > K/n$). This number gives an upper bound on the combinatorial complexity of balanced partitions $\mathrm{card}(\Omega_K^{BI})$.
2.3 Prior π(β | K) on Step Heights β
To complete the prior on $\mathcal{F}_K$, we take independent normal priors on each of the coefficients. Namely
$$\pi(\beta \mid K) = \prod_{k=1}^{K}\phi(\beta_k), \qquad (9)$$
where $\phi(\cdot)$ is the standard normal density.
3 Main Results
A crucial ingredient of our proof will be understanding how well one can approximate $f_0$ with other step functions (supported on partitions $\Omega$, which are either equivalent blocks $\Omega^{EB}$ or balanced partitions $\Omega^{BI}$). We will describe the approximation error in terms of the overlap between the true partition $\{\Omega_k^0\}_{k=1}^{K_0}$ and the approximating partitions $\{\Omega_k\}_{k=1}^{K} \in \Omega$. More formally, we define the restricted cell count (according to Nobel [32]) as
$$m\big(V; \{\Omega_k^0\}_{k=1}^{K_0}\big) = \big|\{\Omega_k^0 : \Omega_k^0 \cap V \ne \emptyset\}\big|,$$
the number of cells in $\{\Omega_k^0\}_{k=1}^{K_0}$ that overlap with an interval $V \subset [0,1]$. Next, we define the complexity of $f_0$ as the smallest size of a partition in $\Omega$ needed to completely cover $f_0$ without any overlap.
Definition 3.1. (Complexity of $f_0$ w.r.t. $\Omega$) We define $K(f_0, \Omega)$ as the smallest $K$ such that there exists a $K$-partition $\{\Omega_k\}_{k=1}^{K}$ in the class of partitions $\Omega$ for which
$$m\big(\Omega_k; \{\Omega_k^0\}_{k=1}^{K_0}\big) = 1 \quad \text{for all } k = 1, \ldots, K.$$
The number $K(f_0, \Omega)$ will be referred to as the complexity of $f_0$ w.r.t. $\Omega$.
The complexity number $K(f_0, \Omega)$ indicates the optimal number of steps needed to approximate $f_0$ with a step function (supported on partitions in $\Omega$) without any error. It depends on the true number of jumps $K_0$ as well as the true interval lengths $|\Omega_k^0|$. If the minimal partition $\{\Omega_k^0\}_{k=1}^{K_0}$ resided in the approximating class, i.e. $\{\Omega_k^0\}_{k=1}^{K_0} \in \Omega$, then we would obtain $K(f_0, \Omega) = K_0$, the true number of steps. On the other hand, when $\{\Omega_k^0\}_{k=1}^{K_0} \notin \Omega$, the complexity number $K(f_0, \Omega)$ can be much larger. This is illustrated in Figure 1 (right), where the true partition $\{\Omega_k^0\}_{k=1}^{K_0}$ consists of $K_0 = 4$ unequal pieces and we approximate it with equispaced blocks with $K = 2, 5, 10$ steps. Because the intervals $\Omega_k^0$ are not equal and the smallest one has a length $1/10$, we need $K(f_0, \Omega^{EB}) = 10$ equispaced blocks to perfectly approximate $f_0$. For our analysis, we do not need to assume that $\{\Omega_k^0\}_{k=1}^{K_0} \in \Omega$ (i.e. $f_0$ does not need to be inside the approximating class) or that $K(f_0, \Omega)$ is finite. The complexity number can increase with $n$, where sharper performance is obtained when $f_0$ can be approximated error-free with some $f \in \Omega$, where $f$ has a small number of discontinuities relative to $n$.
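For equispaced blocks the complexity is simple to compute: $K(f_0, \Omega^{EB})$ is the smallest $K$ whose regular grid $\{j/K\}$ contains every true jump of $f_0$. The sketch below (ours) uses hypothetical jump locations chosen only to mimic the Figure 1 example (four pieces, smallest of length 1/10); the exact jump points of the figure are not given in the text.

```python
from fractions import Fraction

def complexity_eb(jump_points):
    """Smallest K such that every interior jump of f0 lies on the grid {j/K},
    i.e. equispaced blocks of size 1/K reproduce f0 without error."""
    K = 1
    while not all((Fraction(p) * K).denominator == 1 for p in jump_points):
        K += 1
    return K

# Hypothetical jumps giving K_0 = 4 unequal pieces with smallest length 1/10.
print(complexity_eb([Fraction(1, 10), Fraction(4, 10), Fraction(7, 10)]))  # -> 10
```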
Another way to view $K(f_0, \Omega)$ is as the ideal partition size on which the posterior should concentrate. If this number were known, we could achieve a near-minimax posterior concentration rate $n^{-1/2}\sqrt{K(f_0,\Omega)\log[n/K(f_0,\Omega)]}$ (Remark 3.3). The actual minimax rate for estimating a piece-wise constant $f_0$ (consisting of $K_0 > 2$ pieces) is $n^{-1/2}\sqrt{K_0\log(n/K_0)}$ [33]. In our main results, we will target the nearly optimal rate expressed in terms of $K(f_0, \Omega)$.
3.1 Posterior Concentration for Equivalent Blocks
Our first result shows that the minimax rate is nearly achieved, without any assumptions on the
number of pieces of f0 or the sizes of the pieces.
Theorem 3.1. (Equivalent Blocks) Let $f_0 : [0,1]\to\mathbb{R}$ be a step function with $K_0$ steps, where $K_0$ is unknown. Denote by $\mathcal{F}$ the set of all step functions supported on equivalent blocks, equipped with priors $\pi_K(\cdot)$ and $\pi(\beta \mid K)$ as in (3) and (9). Denote with $K_{f_0} \equiv K(f_0, \Omega^{EB})$ and assume $\|\beta^0\|_\infty^2 \lesssim \log n$ and $K_{f_0} \lesssim \sqrt{n}$. Then, under Assumption 1, we have
$$\Pi\Big(f \in \mathcal{F} : \|f - f_0\|_n \ge M_n\, n^{-1/2}\sqrt{K_{f_0}\log(n/K_{f_0})} \,\Big|\, Y^{(n)}\Big) \to 0 \qquad (10)$$
in $P_{f_0}^n$-probability, for every $M_n \to \infty$ as $n \to \infty$.
Before we proceed with the proof, a few remarks ought to be made. First, it is worthwhile to emphasize that the statement in Theorem 3.1 is a frequentist one as it relates to an aggregated behavior of the posterior distributions obtained under the true generative model $P_{f_0}^n$.
Second, the theorem shows that the Bayesian procedure performs an automatic adaptation to
K(f0 , ΩEB ). The posterior will concentrate on EB partitions that are fine enough to approximate f0
well. Thus, we are able to recover the true function as well as if we knew K(f0 , ΩEB ).
Third, it is worth mentioning that, under Assumption 1, Theorem 3.1 holds for equivalent as well as equispaced blocks. In this vein, it describes the speed of posterior concentration for dyadic regression trees. Indeed, as mentioned previously, with $K = 2^s$ for some $s \in \mathbb{N}\backslash\{0\}$, the equispaced partition corresponds to a full complete binary tree with splits at dyadic rationals.
Another interesting insight is that the Gaussian prior (9), while selected for mathematical convenience,
turns out to be sufficient for optimal recovery. In other words, despite the relatively large amount of
mass near zero, the Gaussian prior does not rule out optimal posterior concentration. Our standard
normal prior is a simpler version of the Bayesian CART prior, which determines the variance from
the data [9].
Let $K_{f_0} \equiv K(f_0, \Omega^{EB})$ be as in Definition 3.1. Theorem 3.1 is proved by verifying the three conditions of Theorem 4 of [18], for $\varepsilon_n = n^{-1/2}\sqrt{K_{f_0}\log(n/K_{f_0})}$ and $\mathcal{F}_n = \bigcup_{K=0}^{k_n}\mathcal{F}_K$, with $k_n$ of the order $K_{f_0}\log(n/K_{f_0})$. The approximating subspace $\mathcal{F}_n \subset \mathcal{F}$ should be rich enough to approximate $f_0$ well and it should receive most of the prior mass. The conditions for posterior contraction at the rate $\varepsilon_n$ are:
(C1) $\sup_{\varepsilon > \varepsilon_n}\log N\big(\tfrac{\varepsilon}{36}, \{f \in \mathcal{F}_n : \|f - f_0\|_n < \varepsilon\}, \|\cdot\|_n\big) \le n\varepsilon_n^2$,
(C2) $\dfrac{\Pi(\mathcal{F}\backslash\mathcal{F}_n)}{\Pi(f \in \mathcal{F} : \|f - f_0\|_n^2 \le \varepsilon_n^2)} = o\big(e^{-2n\varepsilon_n^2}\big)$,
(C3) $\dfrac{\Pi(f \in \mathcal{F}_n : j\varepsilon_n < \|f - f_0\|_n \le 2j\varepsilon_n)}{\Pi(f \in \mathcal{F} : \|f - f_0\|_n^2 \le \varepsilon_n^2)} \le e^{\frac{j^2}{4}n\varepsilon_n^2}$ for all sufficiently large $j$.
The entropy condition (C1) restricts attention to EB partitions with small $K$. As will be seen from the proof, the largest allowed partitions have at most (a constant multiple of) $K_{f_0}\log(n/K_{f_0})$ pieces.
Condition (C2) requires that the prior does not promote partitions with more than $K_{f_0}\log(n/K_{f_0})$ pieces. This property is guaranteed by the exponentially decaying prior $\pi_K(\cdot)$, which penalizes large partitions.
The final condition, (C3), requires that the prior charges a $\|\cdot\|_n$-neighborhood of the true function. In our proof, we verify this condition by showing that the prior mass on step functions of the optimal size $K_{f_0}$ is sufficiently large.
Proof. We verify the three conditions (C1), (C2) and (C3).
(C1) Let $\varepsilon > \varepsilon_n$ and $K \in \mathbb{N}$. For $f_\alpha, f_\beta \in \mathcal{F}_K$, we have $K^{-1}\|\alpha - \beta\|_2^2 = \|f_\alpha - f_\beta\|_n^2$ because $\mu(\Omega_k) = 1/K$ for each $k$. We now argue as in the proof of Theorem 12 of [18] to show that $N\big(\tfrac{\varepsilon}{36}, \{f \in \mathcal{F}_K : \|f - f_0\|_n < \varepsilon\}, \|\cdot\|_n\big)$ can be covered by the number of $\sqrt{K}\varepsilon/36$-balls required to cover a $\sqrt{K}\varepsilon$-ball in $\mathbb{R}^K$. This number is bounded above by $108^K$. Summing over $K$, we recognize a geometric series. Taking the logarithm of the result, we find that (C1) is satisfied if $\log(108)(k_n + 1) \le n\varepsilon_n^2$.
(C2) We bound the denominator by:
$$\Pi\big(f \in \mathcal{F} : \|f - f_0\|_n^2 \le \varepsilon^2\big) \ge \pi_K(K_{f_0})\,\Pi\big(\beta \in \mathbb{R}^{K_{f_0}} : \|\beta - \beta_0^{ext}\|_2^2 \le \varepsilon^2 K_{f_0}\big),$$
where $\beta_0^{ext} \in \mathbb{R}^{K_{f_0}}$ is an extended version of $\beta_0 \in \mathbb{R}^{K_0}$, containing the coefficients for $f_0$ expressed as a step function on the partition $\{\Omega_k^0\}_{k=1}^{K_{f_0}}$. This can be bounded from below by
$$\frac{\pi_K(K_{f_0})}{e^{\|\beta_0^{ext}\|_2^2/2}}\,\Pi\big(\beta \in \mathbb{R}^{K_{f_0}} : \|\beta\|_2^2 \le \varepsilon^2 K_{f_0}/2\big) > \frac{\pi_K(K_{f_0})}{e^{\|\beta_0^{ext}\|_2^2/2}}\int_0^{\varepsilon^2 K_{f_0}/2}\frac{x^{K_{f_0}/2-1}e^{-x/2}}{2^{K_{f_0}/2}\,\Gamma(K_{f_0}/2)}\,dx.$$
We bound this from below by bounding the exponential at the upper integration limit, yielding:
$$\frac{\pi_K(K_{f_0})}{e^{\|\beta_0^{ext}\|_2^2/2}}\,\frac{e^{-\varepsilon^2 K_{f_0}/4}}{2^{K_{f_0}}\,\Gamma(K_{f_0}/2+1)}\,\varepsilon^{K_{f_0}}K_{f_0}^{K_{f_0}/2}. \qquad (11)$$
For $\varepsilon = \varepsilon_n \to 0$, we thus find that the denominator in (C2) can be lower bounded with $e^{K_{f_0}\log\varepsilon_n - c_K K_{f_0}\log K_{f_0} - \|\beta_0^{ext}\|_2^2/2 - K_{f_0}/2[\log 2 + \varepsilon_n^2/2]}$. We bound the numerator:
$$\Pi(\mathcal{F}\backslash\mathcal{F}_n) = \Pi\Big(\bigcup_{k=k_n+1}^{\infty}\mathcal{F}_k\Big) \propto \sum_{k=k_n+1}^{\infty}e^{-c_K k\log k} \le e^{-c_K(k_n+1)\log(k_n+1)} + \int_{k_n+1}^{\infty}e^{-c_K x\log x}\,dx,$$
which is of order $e^{-c_K(k_n+1)\log(k_n+1)}$. Combining this bound with (11), we find that (C2) is met if:
$$e^{-K_{f_0}\log\varepsilon_n + (c_K+1)K_{f_0}\log K_{f_0} + K_{f_0}\|\beta^0\|_\infty^2 - c_K(k_n+1)\log(k_n+1) + 2n\varepsilon_n^2} \to 0 \quad\text{as } n\to\infty.$$
(C3) We bound the numerator by one, and use the bound (11) for the denominator. As $\varepsilon_n \to 0$, we obtain the condition $-K_{f_0}\log\varepsilon_n + (c_K+1)K_{f_0}\log K_{f_0} + K_{f_0}\|\beta^0\|_\infty^2 \le \frac{j^2}{4}n\varepsilon_n^2$ for all sufficiently large $j$.
Conclusion. With $\varepsilon_n = n^{-1/2}\sqrt{K_{f_0}\log(n/K_{f_0})}$, letting $k_n \propto n\varepsilon_n^2 = K_{f_0}\log(n/K_{f_0})$, the condition (C1) is met. With this choice of $k_n$, the condition (C2) holds as well as long as $\|\beta^0\|_\infty^2 \lesssim \log n$ and $K_{f_0} \lesssim \sqrt{n}$. Finally, the condition (C3) is met for $K_{f_0} \lesssim \sqrt{n}$.
Remark 3.1. It is worth pointing out that the proof will hold for a larger class of priors on K,
as long as the prior shrinks at least exponentially fast (meaning that it is bounded from above by
ae−bK for constants a, b > 0). However, a prior at this exponential limit will require tuning, because
the optimal a and b will depend on K(f0 , ΩEB ). We recommend using the prior (2.1) that prunes
somewhat more aggressively, because it does not require tuning by the user. Indeed, Theorem 3.1
holds regardless of the choice of cK > 0. We conjecture, however, that values cK ≥ 1/K(f0 , ΩEB )
lead to a faster concentration speed and we suggest cK = 1 as a default option.
Remark 3.2. When Kf0 is known, there is no need for assigning a prior πK (·) and the conditions
(C1) and (C3) are verified similarly as before, fixing the number of steps at Kf0 .
3.2 Posterior Concentration for Balanced Intervals
An analogue of Theorem 3.1 can be obtained for balanced partitions from Section 2.2.2 that correspond
to regression trees with splits at actual observations. Now, we assume that f0 is ΩBI -valid and carry
out the proof with K(f0 , ΩBI ) instead of K(f0 , ΩEB ). The posterior concentration rate is only
slightly worse.
Theorem 3.2. (Balanced Intervals) Let $f_0 : [0,1]\to\mathbb{R}$ be a step function with $K_0$ steps, where $K_0$ is unknown. Denote by $\mathcal{F}$ the set of all step functions supported on balanced intervals, equipped with priors $\pi_K(\cdot)$, $\pi_\Omega(\cdot \mid K)$ and $\pi(\beta \mid K)$ as in (3), (6) and (9). Denote with $K_{f_0} \equiv K(f_0, \Omega^{BI})$ and assume $\|\beta^0\|_\infty^2 \lesssim \log^{2\beta} n$ and $K(f_0, \Omega^{BI}) \lesssim \sqrt{n}$. Then, under Assumption 1, we have
$$\Pi\Big(f \in \mathcal{F} : \|f - f_0\|_n \ge M_n\, n^{-1/2}\sqrt{K_{f_0}\log^{2\beta}(n/K_{f_0})} \,\Big|\, Y^{(n)}\Big) \to 0 \qquad (12)$$
in $P_{f_0}^n$-probability, for every $M_n \to \infty$ as $n \to \infty$, where $\beta > 1/2$.
Proof. All three conditions (C1), (C2) and (C3) hold if we choose $k_n \propto K_{f_0}[\log(n/K_{f_0})]^{2\beta-1}$. The entropy condition will be satisfied when $\log\big(\sum_{k=1}^{k_n}C^k\,\mathrm{card}(\Omega_k^{BI})\big) \lesssim n\varepsilon_n^2$ for some $C > 0$, where $\varepsilon_n = n^{-1/2}\sqrt{K_{f_0}\log^{2\beta}(n/K_{f_0})}$. Using the upper bound $\mathrm{card}(\Omega_k^{BI}) < \binom{n-1}{k-1} < \binom{n-1}{k_n-1}$ (because $k_n < \frac{n-1}{2}$ for large enough $n$), the condition (C1) is verified. Using the fact that $\log\mathrm{card}(\Omega_{K_{f_0}}^{BI}) \lesssim K_{f_0}\log(n/K_{f_0})$, the condition (C2) will be satisfied when, for some $D > 0$, we have
$$e^{-K_{f_0}\log\varepsilon_n + (c_K+1)K_{f_0}\log K_{f_0} + D\,K_{f_0}\log(n/K_{f_0}) + K_{f_0}\|\beta^0\|_\infty^2 - c_K(k_n+1)\log(k_n+1) + 2n\varepsilon_n^2} \to 0. \qquad (13)$$
This holds for our choice of $k_n$ under the assumptions $\|\beta^0\|_\infty^2 \lesssim \log^{2\beta} n$ and $K_{f_0} \lesssim \sqrt{n}$. These choices also yield (C3).
Remark 3.3. When $K_{f_0} \gtrsim \sqrt{n}$, Theorem 3.1 and Theorem 3.2 still hold, only with the slightly slower concentration rate $n^{-1/2}\sqrt{K_{f_0}\log n}$.
4 Discussion
We provided the first posterior concentration rate results for Bayesian non-parametric regression with
step functions. We showed that under suitable complexity priors, the Bayesian procedure adapts to
the unknown aspects of the target step function. Our approach can be extended in three ways: (a)
to smooth f0 functions, (b) to dimension reduction with high-dimensional predictors, (c) to more
general partitioning schemes that correspond to methods like Bayesian CART and BART. These three
extensions are developed in our followup manuscript [29].
5 Acknowledgment
This work was supported by the James S. Kemper Foundation Faculty Research Fund at the University
of Chicago Booth School of Business.
References
[1] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Statistics/Probability Series. Wadsworth Publishing Company, Belmont, California, U.S.A., 1984.
[2] L. Breiman. Random forests. Mach. Learn., 45:5–32, 2001.
[3] A. Berchuck, E. S. Iversen, J. M. Lancaster, J. Pittman, J. Luo, P. Lee, S. Murphy, H. K. Dressman, P. G.
Febbo, M. West, J. R. Nevins, and J. R. Marks. Patterns of gene expression that characterize long-term
survival in advanced stage serous ovarian cancers. Clin. Cancer Res., 11(10):3686–3696, 2005.
[4] S. Abu-Nimeh, D. Nappa, X. Wang, and S. Nair. A comparison of machine learning techniques for phishing
detection. In Proceedings of the Anti-phishing Working Groups 2nd Annual eCrime Researchers Summit,
eCrime ’07, pages 60–69, New York, NY, USA, 2007. ACM.
[5] M. A. Razi and K. Athappilly. A comparative predictive analysis of neural networks (NNs), nonlinear
regression and classification and regression tree (CART) models. Expert Syst. Appl., 29(1):65 – 74, 2005.
[6] D. P. Green and J. L. Kern. Modeling heterogeneous treatment effects in survey experiments with Bayesian
Additive Regression Trees. Public Opin. Q., 76(3):491, 2012.
[7] E. C. Polly and M. J. van der Laan. Super learner in prediction. Available at: http://works.bepress.com/mark_van_der_laan/200/, 2010.
[8] D. M. Roy and Y. W. Teh. The Mondrian process. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou,
editors, Advances in Neural Information Processing Systems 21, pages 1377–1384. Curran Associates,
Inc., 2009.
[9] H. A. Chipman, E. I. George, and R. E. McCulloch. Bayesian CART model search. JASA, 93(443):935–948,
1998.
[10] D. Denison, B. Mallick, and A. Smith. A Bayesian CART algorithm. Biometrika, 95(2):363–377, 1998.
[11] H. A. Chipman, E. I. George, and R. E. McCulloch. BART: Bayesian Additive Regression Trees. Ann.
Appl. Stat., 4(1):266–298, 03 2010.
[12] S. Ghosal, J. K. Ghosh, and A. W. van der Vaart. Convergence rates of posterior distributions. Ann. Statist.,
28(2):500–531, 04 2000.
[13] T. Zhang. Learning bounds for a generalized family of Bayesian posterior distributions. In S. Thrun,
L. K. Saul, and P. B. Schölkopf, editors, Advances in Neural Information Processing Systems 16, pages
1149–1156. MIT Press, 2004.
[14] J. Tang, Z. Meng, X. Nguyen, Q. Mei, and M. Zhang. Understanding the limiting factors of topic
modeling via posterior contraction analysis. In T. Jebara and E. P. Xing, editors, Proceedings of the
31st International Conference on Machine Learning (ICML-14), pages 190–198. JMLR Workshop and
Conference Proceedings, 2014.
[15] N. Korda, E. Kaufmann, and R. Munos. Thompson sampling for 1-dimensional exponential family bandits.
In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in
Neural Information Processing Systems 26, pages 1448–1456. Curran Associates, Inc., 2013.
[16] F.-X. Briol, C. Oates, M. Girolami, and M. A. Osborne. Frank-Wolfe Bayesian quadrature: Probabilistic
integration with theoretical guarantees. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and
R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1162–1170. Curran
Associates, Inc., 2015.
[17] M. Chen, C. Gao, and H. Zhao. Posterior contraction rates of the phylogenetic indian buffet processes.
Bayesian Anal., 11(2):477–497, 06 2016.
[18] S. Ghosal and A. van der Vaart. Convergence rates of posterior distributions for noniid observations. Ann.
Statist., 35(1):192–223, 02 2007.
[19] B. Szabó, A. W. van der Vaart, and J. H. van Zanten. Frequentist coverage of adaptive nonparametric
Bayesian credible sets. Ann. Statist., 43(4):1391–1428, 08 2015.
[20] I. Castillo and R. Nickl. On the Bernstein von Mises phenomenon for nonparametric Bayes procedures.
Ann. Statist., 42(5):1941–1969, 2014.
[21] J. Rousseau and B. Szabo. Asymptotic frequentist coverage properties of Bayesian credible sets for sieve
priors in general settings. ArXiv e-prints, September 2016.
[22] I. Castillo. Polya tree posterior distributions on densities. Preprint available at http://www.lpma-paris.fr/pageperso/castillo/polya.pdf, 2016.
[23] L. Liu and W. H. Wong. Multivariate density estimation via adaptive partitioning (ii): posterior concentration. arXiv:1508.04812v1, 2015.
[24] C. Scricciolo. On rates of convergence for Bayesian density estimation. Scand. J. Stat., 34(3):626–642,
2007.
[25] M. Coram and S. Lalley. Consistency of Bayes estimators of a binary regression function. Ann. Statist.,
34(3):1233–1269, 2006.
[26] L. Shepp. Covering the circle with random arcs. Israel J. Math., 34(11):328–345, 1972.
[27] W. Feller. An Introduction to Probability Theory and Its Applications, Vol. 2, 3rd Edition. Wiley, 3rd
edition, January 1968.
[28] D. L. Donoho. CART and best-ortho-basis: a connection. Ann. Statist., 25(5):1870–1911, 10 1997.
[29] V. Rockova and S. L. van der Pas. Posterior concentration for Bayesian regression trees and their ensembles.
arXiv:1708.08734, 2017.
[30] T. Anderson. Some nonparametric multivariate procedures based on statistically equivalent blocks. In P.R.
Krishnaiah, editor, Multivariate Analysis, pages 5–27. Academic Press, New York, 1966.
[31] L. Flatto and A. Konheim. The random division of an interval and the random covering of a circle. SIAM
Rev., 4:211–222, 1962.
[32] A. Nobel. Histogram regression estimation using data-dependent partitions. Ann. Statist., 24(3):1084–1105,
1996.
[33] C. Gao, F. Han, and C.H. Zhang. Minimax risk bounds for piecewise constant models. Manuscript, pages
1–36, 2017.
| 2 |
Estimating the second-order parameter of regular
variation and bias reduction in tail index estimation
under random truncation
Nawel Haouas, Abdelhakim Necir∗, Brahim Brahimi
arXiv:1610.00094v2 [math.ST] 20 Oct 2016
Laboratory of Applied Mathematics, Mohamed Khider University, Biskra, Algeria
Abstract
In this paper, we propose an estimator of the second-order parameter of randomly right-truncated Pareto-type data and establish its consistency and asymptotic normality. Moreover, we derive an asymptotically unbiased estimator of the tail index and study
its asymptotic behaviour. Our considerations are based on a useful Gaussian approximation
of the tail product-limit process recently given by Benchaira et al. [Tail product-limit process
for truncated data with application to extreme value index estimation. Extremes, 2016; 19:
219-251] and the results of Gomes et al. [Semi-parametric estimation of the second order
parameter in statistics of extremes. Extremes, 2002; 5: 387-414]. We show, by simulation,
that the proposed estimators behave well, in terms of bias and mean square error.
Keywords: Bias-reduction; Extreme value index; Product-limit estimator; Random truncation; Second-order parameter.
AMS 2010 Subject Classification: 60F17, 62G30, 62G32, 62P05.
* Corresponding
author: [email protected]
E-mail addresses:
[email protected] (N. Haoues)
[email protected] (B. Brahimi)
1. Introduction
Let (Xi , Yi ) , 1 ≤ i ≤ N be a sample of size N ≥ 1 from a couple (X, Y) of independent
random variables (rv’s) defined over some probability space (Ω, A, P) , with continuous marginal distribution functions (df’s) F and G respectively. Suppose that X is truncated to the
right by Y, in the sense that Xi is only observed when Xi ≤ Yi . We assume that both
survival functions $\overline F := 1 - F$ and $\overline G := 1 - G$ are regularly varying at infinity with negative tail indices $-1/\gamma_1$ and $-1/\gamma_2$ respectively, that is, for any $x > 0$
$$\lim_{z\to\infty}\frac{\overline F(xz)}{\overline F(z)} = x^{-1/\gamma_1} \quad\text{and}\quad \lim_{z\to\infty}\frac{\overline G(xz)}{\overline G(z)} = x^{-1/\gamma_2}. \qquad (1.1)$$
(1.1)
Since the weak approximations of extreme value theory based statistics are achieved in the
second-order framework (see de Haan and Stadtmüller, 1996), then it seems quite natural
to suppose that both df’s F and G satisfy the well-known second-order condition of regular
variation specifying the rates of convergence in (1.1). That is, we assume that for any x > 0
$$\lim_{t\to\infty}\frac{U_F(tx)/U_F(t) - x^{\gamma_1}}{A_F(t)} = x^{\gamma_1}\frac{x^{\rho_1}-1}{\rho_1}, \qquad (1.2)$$
and
$$\lim_{t\to\infty}\frac{U_G(tx)/U_G(t) - x^{\gamma_2}}{A_G(t)} = x^{\gamma_2}\frac{x^{\rho_2}-1}{\rho_2}, \qquad (1.3)$$
lim
where ρ1 , ρ2 < 0 are the second-order parameters and AF , AG are functions tending to
zero and not changing signs near infinity with regularly varying absolute values at infinity with indices ρ1 , ρ2 respectively. For any df H, UH (t) := H ← (1 − 1/t) , t > 1, stands
for the quantile function. This class of distributions, which includes models such as Burr,
Fréchet, Generalized Pareto, Student,log-gamma, stable,... takes a prominent role in extreme value theory. Also known as heavy-tailed, Pareto-type or Pareto-like distributions,
these models have important practical applications and are used rather systematically in
certain branches of non-life insurance, as well as in finance, telecommunications, hydrology, etc. (see, e.g., Resnick, 2006). We denote the observed members of the truncated sample $(X_i, Y_i)$, $i = 1, \ldots, N$, by $(X_i, Y_i)$, $i = 1, \ldots, n$, as copies of a couple of rv's $(X, Y)$, where $n = n_N$ is a sequence of discrete rv's which, by the weak law of large numbers, satisfies $n_N/N \to p := P(\mathbf{X} \le \mathbf{Y})$ in probability, as $N\to\infty$. The usefulness of statistical analysis under random truncation is shown in Herbst (1999), where truncated-model techniques are applied to estimate loss reserves for incurred but not reported (IBNR) claim amounts. For a recent discussion on randomly right-truncated insurance
claims, one refers to Escudero and Ortega (2008). In reliability, a real dataset, consisting in lifetimes of automobile brake pads and already considered by Lawless (2002) in page
69, was recently analyzed in Gardes and Stupfler (2015) and Benchaira et al. (2016a) as an
application of randomly truncated heavy-tailed models. The joint distribution of $X_i$ and $Y_i$ is
$$H(x, y) := P(X \le x, Y \le y) = P(\mathbf{X} \le \min(x, \mathbf{Y}), \mathbf{Y} \le y \mid \mathbf{X} \le \mathbf{Y}),$$
which equals $p^{-1}\int_0^y F(\min(x, z))\,dG(z)$. The marginal distributions of the rv's $X$ and $Y$, respectively denoted by $F^*$ and $G^*$, are equal to $p^{-1}\int_0^x \overline G(z)\,dF(z)$ and $p^{-1}\int_0^y F(z)\,dG(z)$, respectively. The tail of df $F^*$ simultaneously depends on $G$ and $F$ while that of $G^*$ only relies on $G$. By using Proposition B.1.10 in de Haan and Ferreira (2006), applied to the regularly varying functions $\overline F$ and $\overline G$, we also show that both $\overline{G^*}$ and $\overline{F^*}$ are regularly varying at infinity, with respective
indices γ2 and γ := γ1 γ2 / (γ1 + γ2 ) . Recently Gardes and Stupfler (2015) addressed the estimation of the extreme value index γ1 under random right-truncation and used the definition
of γ to derive a consistent estimator as a quotient of two Hill estimators (Hill, 1975) of tail
indices γ and γ2 which are based on the upper order statistics Xn−k:n ≤ ... ≤ Xn:n and
Yn−k:n ≤ ... ≤ Yn:n pertaining to the samples (X1 , ..., Xn ) and (Y1 , ..., Yn ) respectively. The
sample fraction k = kn being a (random) sequence of integers such that, given n = m = mN ,
km → ∞ and km /m → 0 as N → ∞. Under the tail dependence and the second-order
regular variation conditions, Benchaira et al. (2015) established the asymptotic normality
of this estimator. Recently, Worms and Worms (2016) proposed an asymptotically normal
estimator for γ1 by considering a Lynden-Bell integration with a deterministic threshold.
The case of a random threshold, is addressed by Benchaira et al. (2016a) who propose a
Hill-type estimator for randomly right-truncated data, defined by
$$\widehat\gamma_1^{(BMN)} = \sum_{i=1}^{k}a_n^{(i)}\log\frac{X_{n-i+1:n}}{X_{n-k:n}}, \qquad (1.4)$$
where
$$a_n^{(i)} := \frac{F_n(X_{n-i+1:n})/C_n(X_{n-i+1:n})}{\sum_{i=1}^{k}F_n(X_{n-i+1:n})/C_n(X_{n-i+1:n})},$$
with $F_n(x) := \prod_{i:X_i>x}\exp\big(-\tfrac{1}{nC_n(X_i)}\big)$ being the well-known product-limit Woodroofe's estimator (Woodroofe, 1985) of the underlying df $F$ and $C_n(x) := n^{-1}\sum_{i=1}^{n}\mathbb{1}(X_i \le x \le Y_i)$. The authors show by simulation that, for small datasets, their estimator behaves better, in terms of bias and root mean squared error (rmse), than Gardes–Stupfler's estimator.
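For readers who wish to experiment with the estimator, the following sketch (ours; the paper provides no code, and the function and variable names are hypothetical) computes $C_n$, Woodroofe's product-limit estimator $F_n$ and the Hill-type estimator (1.4) from the observed pairs:

```python
import numpy as np

def bmn_estimator(x, y, k):
    """Hill-type estimator (1.4) of gamma_1 under random right truncation.
    Sketch: x, y are the observed pairs (so x[i] <= y[i]) and k < n."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    # C_n(t) = n^{-1} #{i : X_i <= t <= Y_i}
    C = lambda t: np.mean((x <= t) & (t <= y))
    # Woodroofe's product-limit estimator F_n(t) = prod_{i: X_i > t} exp(-1/(n C_n(X_i)))
    def F(t):
        return np.exp(-sum(1.0 / (n * C(v)) for v in x[x > t]))
    xs = np.sort(x)
    top = xs[n - k:]                          # X_{n-k+1:n}, ..., X_{n:n}
    w = np.array([F(t) / C(t) for t in top])  # F_n/C_n at the top order statistics
    a = w / w.sum()                           # weights a_n^{(i)}
    return float(np.sum(a * np.log(top / xs[n - k - 1])))  # threshold X_{n-k:n}
```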
Moreover, they establish the asymptotic normality by considering the second-order regular
variation conditions (1.2) and (1.3) and the assumption γ1 < γ2 . More precisely, they show
that, for a sufficiently large $N$,
$$\widehat\gamma_1^{(BMN)} = \gamma_1 + k^{-1/2}\Lambda(W) + \frac{A_0(n/k)}{1-\rho_1}(1+o_P(1)), \qquad (1.5)$$
where $A_0(t) := A_F\big(1/\overline F(U_{F^*}(t))\big)$, $t > 1$, and $\Lambda(W)$ is a centred Gaussian rv defined by
$$\Lambda(W) := \frac{\gamma}{\gamma_1+\gamma_2}\int_0^1(\gamma_2 - \gamma_1 - \gamma\log s)\,s^{-\gamma/\gamma_2-1}W(s)\,ds - \gamma W(1),$$
with $\{W(s);\,s\ge 0\}$ being a standard Wiener process defined on the probability space $(\Omega, \mathcal{A}, P)$. Thereby, they conclude that $\sqrt{k}\big(\widehat\gamma_1^{(BMN)} - \gamma_1\big) \xrightarrow{D} \mathcal{N}\big(\lambda/(1-\rho_1), \sigma^2\big)$, as $N\to\infty$, where $\sigma^2 := \gamma^2(1+\gamma_1/\gamma_2)\big(1+(\gamma_1/\gamma_2)^2\big)/(1-\gamma_1/\gamma_2)$, provided that, given $n = m$, $\sqrt{k_m}\,A_0(m/k_m) \to \lambda < \infty$. Recently, Benchaira et al. (2016b) adopted the same approach to introduce a kernel estimator of the tail index $\gamma_1$ which improves the bias of $\widehat\gamma_1^{(BMN)}$. It is
worth mentioning that the assumption γ1 < γ2 is required in order to ensure that it remains
enough extreme data for the inference to be accurate. In other words, they consider the
situation where the tail of the rv of interest is not too contaminated by the truncation rv.
The aim of this paper is the estimation of the second-order parameter $\rho_1$ given in condition (1.2), which, to our knowledge, has not yet been addressed in the extreme value literature. This parameter is of practical relevance in extreme value analysis due to its crucial importance in
selecting the optimal number of upper order statistics k in tail index estimation (see, e.g.,
de Haan and Ferreira, 2006, page 77) and in reducing the bias of such estimation. In the case
of complete data, this problem has received a lot of attention from many authors like, for
instance, Peng (1998), Fraga Alves et al. (2003), Gomes et al. (2002), Peng and Qi (2004),
Goegebeur et al. (2010), de Wet et al. (2012), Worms and Worms (2012), Deme el al. (2013).
Inspired by the paper of Gomes et al. (2002), we propose an estimator for ρ1 adapted to the
random right-truncation case. To this end, for α > 0 and t > 0, we introduce the following
tail functionals
M
(α)
(t; F) :=
(α)
Q
and
1
F (UF ∗ (t))
(t; F) :=
Z
∞
logα (x/UF ∗ (t)) dF (x) ,
(1.6)
UF ∗ (t)
M (α) (t; F) − Γ (α + 1) M (1) (t; F)
M (2) (t; F) − 2 (M (1) (t; F))
S (α) (t; F) := δ (α)
Q(2α) (t; F)
(Q(α+1) (t; F))
2,
2
α
,
(1.7)
(1.8)
5
where logα x := (log x)α and δ (α) := α (α + 1)2 Γ2 (α) / (4Γ (2α)) , with Γ (·) standing for
the usual Gamma function. From assertion (ii) of Lemma 5.1, we have, for any α > 0,
M (α) (t; F) → γ1α Γ (α + 1) , Q(α) (t; F) → qα (ρ1 ) and S (α) (t; F) → sα (ρ1 ) ,
as t → ∞, where
qα (ρ1 ) :=
γ1α−2 Γ (α + 1) 1 − (1 − ρ1 )α − αρ1 (1 − ρ1 )α−1
and
sα (ρ1 ) :=
ρ21 (1 − ρ1 )α−2
ρ21 1 − (1 − ρ1 )2α − 2αρ1 (1 − ρ1 )2α−1
1 − (1 − ρ1 )α+1 − (α + 1) ρ1 (1 − ρ1 )α
(1.9)
,
(1.10)
2 .
(1.11)
The three results (1.9) allow us to construct an estimator for the second-order parameter ρ1 .
Indeed, by recalling that $n = n_N$ is a random sequence of integers, let $\upsilon = \upsilon_n$ be a subsequence of $n$, different from $k$, such that given $n = m$, $\upsilon_m \to \infty$, $\upsilon_m/m \to 0$ as $N\to\infty$. The sequence $\upsilon$ has to be chosen so that $\sqrt{\upsilon_m}\,|A_0(m/\upsilon_m)| \to \infty$, which is a necessary condition to ensure the consistency of the $\rho_1$ estimator. On the other hand, as already pointed out, the asymptotic normality of $\widehat\gamma_1^{(BMN)}$ requires that, for a given $n = m$, $\sqrt{k_m}\,A_0(m/k_m) \to \lambda < \infty$. This means that both sample fractions $k$ and $\upsilon$ have to be distinctly chosen. Since $\overline{F^*}$ is regularly varying at infinity with index $-1/\gamma$, then from Lemma 3.2.1 in de Haan and Ferreira (2006), page 69, we infer that, given $n = m$, we have $X_{m-\upsilon_m:m} \to \infty$ as $N\to\infty$ almost surely. Then by using the total probability formula, we show that $X_{n-\upsilon:n} \to \infty$ almost surely too. By letting, in (1.6), $t = n/\upsilon$ and then replacing $U_{F^*}(n/\upsilon)$ by $X_{n-\upsilon:n}$ and $F$ by the product-limit estimator $F_n$, we get an estimator $M_n^{(\alpha)}(\upsilon) = M^{(\alpha)}(t; F_n)$ for $M^{(\alpha)}(t; F)$ as follows:
$$M_n^{(\alpha)}(\upsilon) = \frac{1}{\overline F_n(X_{n-\upsilon:n})}\int_{X_{n-\upsilon:n}}^{\infty}\log^\alpha\big(x/X_{n-\upsilon:n}\big)\,dF_n(x). \qquad (1.12)$$
Next, we give an explicit formula for $M_n^{(\alpha)}(\upsilon)$ in terms of the observed sample $X_1, \ldots, X_n$. Since $\overline F$ and $\overline G$ are regularly varying with negative indices $-1/\gamma_1$ and $-1/\gamma_2$ respectively, their right endpoints are infinite and thus they are equal. Hence, from Woodroofe (1985), we may write $\int_x^\infty dF(y)/F(y) = \int_x^\infty dF^*(y)/C(y)$, where $C(z) := P(X \le z \le Y)$ is the theoretical counterpart of $C_n(z)$ given in (1.4). Differentiating the previous two integrals leads to the crucial equation $C(x)\,dF(x) = F(x)\,dF^*(x)$, which implies that $C_n(x)\,dF_n(x) = F_n(x)\,dF_n^*(x)$, where $F_n^*(x) := n^{-1}\sum_{i=1}^{n}\mathbb{1}(X_i \le x)$ is the usual empirical df based on the observed sample $X_1, \ldots, X_n$. It follows that
$$M_n^{(\alpha)}(\upsilon) = \frac{1}{\overline F_n(X_{n-\upsilon:n})}\int_{X_{n-\upsilon:n}}^{\infty}\frac{F_n(x)}{C_n(x)}\,\log^\alpha\big(x/X_{n-\upsilon:n}\big)\,dF_n^*(x),$$
which equals
$$M_n^{(\alpha)}(\upsilon) = \frac{1}{n\,\overline F_n(X_{n-\upsilon:n})}\sum_{i=1}^{\upsilon}\frac{F_n(X_{n-i+1:n})}{C_n(X_{n-i+1:n})}\,\log^\alpha\frac{X_{n-i+1:n}}{X_{n-\upsilon:n}}.$$
Similarly, we show that $\overline F_n(X_{n-\upsilon:n}) = n^{-1}\sum_{i=1}^{\upsilon}F_n(X_{n-i+1:n})/C_n(X_{n-i+1:n})$. This leads to the following form of the $M^{(\alpha)}(t; F)$ estimator:
$$M_n^{(\alpha)}(\upsilon) := \sum_{i=1}^{\upsilon}a_n^{(i)}\,\log^\alpha\frac{X_{n-i+1:n}}{X_{n-\upsilon:n}}.$$
It is readily observable that $M_n^{(1)}(k) = \widehat\gamma_1^{(BMN)}$. Making use of (1.8) with the expression above, we get an estimator of $S^{(\alpha)}(t; F)$, that we denote $S_n^{(\alpha)} = S_n^{(\alpha)}(\upsilon)$. This, in virtue
of the third limit in (1.9) , leads to estimating sα (ρ1 ) . It is noteworthy that the function
ρ1 → sα (ρ1 ) , defined and continuous on the set of negative real numbers, is increasing for
0 < α < 1/2 and decreasing for α > 1/2, α 6= 1. Then, for suitable values of α, we may
invert $s_\alpha$ to get an estimator $\widehat\rho_1^{(\alpha)}$ for $\rho_1$ as follows:
$$\widehat\rho_1^{(\alpha)} := s_\alpha^{\leftarrow}\big(S_n^{(\alpha)}\big), \quad\text{provided that } S_n^{(\alpha)} \in \mathcal{A}_\alpha, \qquad (1.13)$$
where $\mathcal{A}_\alpha$ is one of the following two regions:
$$\big\{s : (2\alpha-1)/\alpha^2 < s \le 4(2\alpha-1)/(\alpha(\alpha+1))^2\big\}, \quad\text{for } \alpha\in(0, 1/2),$$
or
$$\big\{s : 4(2\alpha-1)/(\alpha(\alpha+1))^2 \le s < (2\alpha-1)/\alpha^2\big\}, \quad\text{for } \alpha\in(1/2, \infty)\backslash\{1\}.$$
For more details, regarding the construction of these two sets, one refers to Remark 2.1 and
Lemma 3.1 in Gomes et al. (2002). It is worth mentioning that, for $\alpha = 2$, we have $s_2(\rho_1) = (3\rho_1^2 - 8\rho_1 + 6)/(3-2\rho_1)^2$ and $s_2^{\leftarrow}(s) = \big(6s - 4 + \sqrt{3s-2}\big)/(4s-3)$, for $2/3 < s < 3/4$. Thereby, we obtain an explicit formula for the estimator of $\rho_1$ as follows:
$$\widehat\rho_1^{(2)} = \frac{6S_n^{(2)} - 4 + \sqrt{3S_n^{(2)} - 2}}{4S_n^{(2)} - 3}, \quad\text{provided that } 2/3 < S_n^{(2)} < 3/4. \qquad (1.14)$$
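A direct numerical implementation of (1.8) and (1.14) is obtained by plugging the empirical moments $M_n^{(\alpha)}$ into the formulas above. The sketch below (ours, with the same $C_n$/$F_n$ construction as before; names are hypothetical) is one way to do so; it returns None when $S_n^{(2)}$ falls outside the admissible region:

```python
import numpy as np
from math import gamma

def M_alpha(x, y, upsilon, alpha):
    """Empirical tail functional M_n^{(alpha)}(upsilon): weighted log-moments
    of the top `upsilon` order statistics above the threshold X_{n-upsilon:n}."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    C = lambda t: np.mean((x <= t) & (t <= y))
    F = lambda t: np.exp(-sum(1.0 / (n * C(v)) for v in x[x > t]))
    xs = np.sort(x)
    top, thr = xs[n - upsilon:], xs[n - upsilon - 1]
    w = np.array([F(t) / C(t) for t in top])
    a = w / w.sum()
    return float(np.sum(a * np.log(top / thr) ** alpha))

def rho_hat_2(x, y, upsilon):
    """Second-order parameter estimator rho_hat_1^(2), eqs. (1.8) and (1.14)."""
    M = {a: M_alpha(x, y, upsilon, a) for a in (1, 2, 3, 4)}
    delta = lambda a: a * (a + 1) ** 2 * gamma(a) ** 2 / (4 * gamma(2 * a))
    Q = lambda a: (M[a] - gamma(a + 1) * M[1] ** a) / (M[2] - 2 * M[1] ** 2)
    S2 = delta(2) * Q(4) / Q(3) ** 2          # S_n^{(2)} with alpha = 2
    if not (2.0 / 3.0 < S2 < 3.0 / 4.0):
        return None                            # outside the admissible region A_2
    return (6 * S2 - 4 + np.sqrt(3 * S2 - 2)) / (4 * S2 - 3)
```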
Next, we derive an asymptotically unbiased estimator for $\gamma_1$ that improves $\widehat\gamma_1^{(BMN)}$ by estimating the asymptotic bias $A_0(n/k)/(1-\rho_1)$ given in the weak approximation (4.19). Indeed, let $\upsilon$ be equal to $u_n := [n^{1-\epsilon}]$, for a fixed $\epsilon > 0$ close to zero (say $\epsilon = 0.01$), so that, given $n = m$, $u_m \to \infty$, $u_m/m \to 0$ and $\sqrt{u_m}\,|A_0(m/u_m)| \to \infty$. The validity of such a sequence is discussed in Gomes et al. (2002) (Subsection 6.1, conclusions 2 and 5). The estimator of $\rho_1$ pertaining to this choice of $\upsilon$ will be denoted by $\widehat\rho_1^{(*)}$. We are now in a position to define an estimator for $A_0(n/k)$. From assertion (i) in Lemma 5.1, we have $A_0(t) \sim (1-\rho_1)^2\big(M^{(2)}(t; F) - 2(M^{(1)}(t; F))^2\big)/\big(2\rho_1 M^{(1)}(t; F)\big)$, as $t\to\infty$. Then, by letting $t = n/k$ and by replacing, in the previous quantity, $U_{F^*}(n/k)$ by $X_{n-k:n}$, $F$ by $F_n$ and $\rho_1$ by $\widehat\rho_1^{(*)}$, we end up with
$$\widehat A_0(n/k) := \big(1-\widehat\rho_1^{(*)}\big)^2\Big(M_n^{(2)}(k) - 2\big(M_n^{(1)}(k)\big)^2\Big)\Big/\Big(2\widehat\rho_1^{(*)}M_n^{(1)}(k)\Big),$$
as an estimator for $A_0(n/k)$. Thus, we obtain an asymptotically unbiased estimator
$$\widehat\gamma_1 := M_n^{(1)}(k) + \frac{M_n^{(2)}(k) - 2\big(M_n^{(1)}(k)\big)^2}{2M_n^{(1)}(k)}\left(1 - \frac{1}{\widehat\rho_1^{(*)}}\right),$$
for the tail index $\gamma_1$, as an adaptation of Peng's estimator (Peng, 1998) to the random right-truncation case. The rest of the paper is organized as follows. In Section 2, we present our main results, which consist in the consistency and the asymptotic normality of the estimators $\widehat\rho_1^{(\alpha)}$ and $\widehat\gamma_1$, whose finite-sample behaviours are checked by simulation in Section 3. All proofs are gathered in Section 4. Two instrumental lemmas are given in the Appendix.
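In code, the bias-reduced estimator is a one-line combination of $M_n^{(1)}(k)$, $M_n^{(2)}(k)$ and $\widehat\rho_1^{(*)}$; a minimal sketch (ours):

```python
def gamma1_unbiased(M1, M2, rho_star):
    """Bias-reduced tail index estimator: M1 = M_n^(1)(k), M2 = M_n^(2)(k),
    and rho_star = rho_hat_1^(*) computed at upsilon = [n^(1-eps)]."""
    return M1 + (M2 - 2.0 * M1 ** 2) / (2.0 * M1) * (1.0 - 1.0 / rho_star)
```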
2. Main results
It is well known that, weak approximations of the second-order parameter estimators are
achieved in the third-order condition of regular variation framework (see, e.g., de Haan and Stadtmüller,
1996). Thus, it seems quite natural to suppose that df F satisfies
$$\lim_{t\to\infty}\left(\frac{U_F(tx)/U_F(t) - x^{\gamma_1}}{A_F(t)} - x^{\gamma_1}\frac{x^{\rho_1}-1}{\rho_1}\right)\Big/B_F(t) = x^{\gamma_1}\frac{1}{\rho_1}\left(\frac{x^{\rho_1+\beta_1}-1}{\rho_1+\beta_1} - \frac{x^{\rho_1}-1}{\rho_1}\right), \qquad (2.15)$$
where β1 < 0 is the third-order parameter and BF is a function tending to zero and not
changing sign near infinity with regularly varying absolute value at infinity with index β1 .
For convenience, we set B0 (t) := BF 1/F (UF ∗ (t)) and by keeping similar notations to
those used in Gomes et al. (2002), we write
(2)
µ(1)
α := Γ (α + 1) , µα (ρ1 ) :=
Γ (α) (1 − (1 − ρ1 )α )
,
ρ1 (1 − ρ1 )α
(1 − ρ1 )2
1
if α = 1,
2 log
ρ1
1 −2ρ1
µ(3)
(ρ
)
:=
1
α
Γ (α)
1
2
+1
if α 6= 1,
2
α−1 −
ρ1 (α − 1) (1 − 2ρ1 )
(1 − ρ1 )α−1
−1
(2)
µ(4)
µ(2)
α (ρ1 , β1 ) := β1
α (ρ1 + β1 ) − µα (ρ1 ) ,
2
(2)
(3)
(1)
(1) (2)
µ
(ρ
)
,
c
:=
µ
(ρ
)
−
µ
µ
(ρ
)
(ρ
)
−
µ
mα := µ(2)
1
α
1
1
1
α
1
α
α
1
α
(2.16)
(4)
(1) (4)
and dα := µα (ρ1 , β1 ) − µα µ1 (̺1 , β1 ) . For further use, we set rα := 2qα γ12−α Γ (α + 1) ,
1
(2α − 1) c2α
2cα+1 r2α
η1 :=
,
+ c2 r2α −
2
2γ1 m2 rα+1
Γ (2α)
rα+1 Γ (α)
η2 :=
τ1 :=
d2α
2dα+1 r2α
1
+ d2 r2α −
,
2
γ1 m2 rα+1 Γ (2α)
rα+1 Γ (α + 1)
2αr2α
1 − 2α − 3r2α
+ 3
,
ξ := γ
2
rα+1
m2
rα+1 m2
1
2
γ12α−1 rα+1
Γ (2α
+ 1) m2
, τ2 := −
2r2α
α 3
γ1 rα+1 Γ (α +
2) m2
,
−2αrα+1 + 2 (α + 1) r2α − 4rα+1 r2α
,
3
rα+1
m2
ρ1 − 1
1 − ρ1
1 − ρ1
1
τ5 :=
, τ6 := 1 + 2
and µ := γ 2 + 2
−
.
2γ1 ρ1
γ1 ρ1
γ1 ρ1
ρ1
τ3 :=
r2α
,
2
γ1 rα+1 2m2
τ4 :=
Theorem 2.1. Assume that both df ’s F and G satisfy the second-order conditions (1.2) and
(1.3) respectively with γ1 < γ2 . Let α, defined in (1.13), be fixed and let υ be a random sequence
√
of integers such that, given n = m, υm → ∞, υm /m → 0 and υm |A0 (m/υm )| → ∞, then
(α) P
ρb1 → ρ1 , as N → ∞.
If in addition, we assume that the third-order condition (2.15) holds, then whenever, given
√
√
n = m, υm A20 (m/υm ) and υm A0 (m/υm ) B0 (m/υm ) are asymptotically bounded, then
there exists a standard Wiener process {W (s) ; s ≥ 0} , defined on the probability space
(Ω, A, P) , such that
s′α
(ρ1 )
√
υA0 (n/υ)
(α)
ρb1
− ρ1 =
Z
1
0
s−γ/γ2 −1 ∆α (s)W (s) ds − ξW (1)
√
√
+ η1 υA20 (n/υ) + η2 υA0 (n/υ) B0 (n/υ) + oP (1) ,
where s′α is the Lebesgue derivative of sα given in (1.11) and
τ1 γ log2α s−γ 2ατ1 γ 2 log2α−1 s−γ τ2 γ logα+1 s−γ
+
+
γ1 + γ2
γ1
γ1 + γ2
∆α (s) :=
τ2 (α + 1) γ 2 logα s−γ τ3 γ log2 s−γ
+
γ1
γ1 + γ2
2τ3 γ 2
τ4 γ 2
τ4 γ
γ1 ξ
+
log s−γ +
+
−
γ1
γ1 + γ2
γ1
γ1 + γ2
+
If, in addition, we suppose that given n = m,
√
υm A20 (m/υm ) → λ1 < ∞ and
√
υm A0 (m/υm ) B0 (m/υm ) → λ2 < ∞,
then
√
(α)
D
υA0 (n/υ) ρb1 − ρ1 → N (η1 λ1 + η2 λ2 , σα2 ) , as N → ∞, where
σα2
:=
Z 1Z
0
1
s
−γ/γ2 −1 −γ/γ2 −1
t
0
min (s, t) ∆α (s)∆α (t)dsdt − 2ξ
Z
1
s−γ/γ2 ∆α (s)ds + ξ 2 .
0
Theorem 2.2. Let k be a random sequence of integers, different from υ, such that, given
√
n = m, km → ∞, km /m → 0 and km A0 (m/km ) is asymptotically bounded, then with the
same Wiener process {W (s) ; s ≥ 0} , for any ǫ > 0, we have
√
k (b
γ1 − γ1 ) =
Z
0
1
s−γ/γ2 −1 D(s)W (s) ds − µW (1) + oP (1) ,
where
2τ5 γ 3
γ 2 τ6
τ6 γ 2
γ1 µ
+
log s +
−
.
γ1
γ1 + γ2
γ1
γ1 + γ2
√
If, in addition, we suppose that, given n = m, km A0 (m/km ) → λ < ∞, then
γ 3 τ5
log2 s −
D(s) :=
γ1 + γ2
√
where
σ∗2
:=
Z 1Z
0
0
1
s
D
k (b
γ1 − γ1 ) → N 0, σ∗2 , as N → ∞,
−γ/γ2 −1 −γ/γ2 −1
t
min (s, t) D(s)D(t)dsdt − 2µ
Z
1
s−γ/γ2 D(s)ds + µ2 .
0
3. Simulation study
In this section, we study the performance of $\widehat\rho_1^{(\alpha)}$ (for $\alpha = 2$) and compare the newly introduced bias-reduced estimator $\widehat\gamma_1$ with $\widehat\gamma_1^{(BMN)}$. Let us consider two sets of truncated and truncation data respectively drawn from Burr's models, $\overline F(x) = (1+x^{1/\delta})^{-\delta/\gamma_1}$ and $\overline G(x) = (1+x^{1/\delta})^{-\delta/\gamma_2}$, $x \ge 0$, where $\delta, \gamma_1, \gamma_2 > 0$. By elementary analysis, it is easy to verify that $F$ satisfies the third-order condition (2.15) with $\rho_1 = \beta_1 = -\gamma_1/\delta$, $A_F(x) = \gamma_1 x^{\rho_1}/(1-x^{\rho_1})$ and $B_F(x) = (\delta/\gamma_1)A_F(x)$. We fix $\delta = 1/4$ and choose the values 0.6 and 0.8 for $\gamma_1$ and 70% and 90% for the percentage of observed data $p = \gamma_2/(\gamma_1+\gamma_2)$. For each couple $(\gamma_1, p)$, we solve the latter equation to get the pertaining $\gamma_2$-value. We vary the common size $N$ of both samples $(X_1, \ldots, X_N)$ and $(Y_1, \ldots, Y_N)$, then for each size, we generate 1000 independent replicates. For the selection of the optimal numbers $\upsilon^*$ and $k^*$ of upper order statistics used in the computation of the estimators $\widehat\rho_1^{(2)}$, $\widehat\gamma_1$ and $\widehat\gamma_1^{(BMN)}$, we apply the algorithm of Reiss and Thomas (2007), page 137. Our illustration and comparison, made with respect to the absolute biases (abias) and rmse's, are summarized in Tables 3.1 and 3.2. The obtained results in Table 3.1 show that $\widehat\rho_1^{(2)}$ behaves well in terms of bias and rmse, and it is clear from Table 3.2 that $\widehat\gamma_1$ performs better than $\widehat\gamma_1^{(BMN)}$, both in bias and rmse.
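A truncated Burr sample as used in this simulation design can be generated by inverting the survival functions; the following sketch (ours, not from the paper) mirrors the setup described above.

```python
import numpy as np

def simulate_truncated_burr(N, gamma1, p, delta=0.25, rng=None):
    """One right-truncated Burr sample: X ~ Burr(gamma1, delta),
    Y ~ Burr(gamma2, delta) with gamma2 chosen so that p = gamma2/(gamma1+gamma2);
    only the pairs with X <= Y are observed."""
    rng = rng or np.random.default_rng()
    gamma2 = p * gamma1 / (1 - p)        # solves p = gamma2 / (gamma1 + gamma2)
    # Inverse of the survival function (1 + x^{1/delta})^{-delta/gamma}
    def rburr(size, g):
        u = rng.uniform(size=size)
        return (u ** (-g / delta) - 1.0) ** delta
    X, Y = rburr(N, gamma1), rburr(N, gamma2)
    keep = X <= Y                        # right truncation: X observed when X <= Y
    return X[keep], Y[keep]
```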
                          p = 0.7                          p = 0.9
  N       n      υ*     abias   rmse       n      υ*     abias   rmse
                                 γ1 = 0.6
  100     70     27     0.009   0.047      89     38     0.004   0.048
  200     151    70     0.008   0.046      179    73     0.003   0.046
  500     349    208    0.005   0.043      450    243    0.002   0.048
  1000    697    667    0.001   0.027      896    641    0.001   0.030
                                 γ1 = 0.8
  100     70     30     0.011   0.050      90     40     0.013   0.048
  200     139    67     0.009   0.048      179    71     0.012   0.047
  500     350    198    0.008   0.043      449    232    0.006   0.049
  1000    730    301    0.002   0.037      903    378    0.002   0.029

Table 3.1. Absolute bias and rmse of the second-order parameter estimator
based on 1000 right-truncated samples of Burr's models.
                                  p = 0.7                                           p = 0.9
                         γ̂1                  γ̂1^(BMN)                      γ̂1                  γ̂1^(BMN)
  N       n      k*    abias   rmse     k*    abias   rmse       n      k*    abias   rmse     k*    abias   rmse
                                                     γ1 = 0.6
  100     70     12    0.068   0.263    11    0.127   0.259      89     16    0.013   0.152    15    0.118   0.217
  200     140    26    0.048   0.200    24    0.090   0.223      180    34    0.006   0.116    31    0.089   0.176
  500     349    63    0.020   0.123    58    0.072   0.173      449    83    0.002   0.078    78    0.056   0.129
  1000    703    115   0.007   0.097    112   0.011   0.121      898    176   0.001   0.037    174   0.016   0.056
                                                     γ1 = 0.8
  100     70     13    0.067   0.311    12    0.222   0.217      89     16    0.063   0.220    15    0.196   0.315
  200     140    25    0.014   0.219    24    0.163   0.282      179    33    0.033   0.150    31    0.131   0.220
  500     349    64    0.011   0.152    59    0.033   0.222      449    85    0.021   0.097    79    0.088   0.156
  1000    707    145   0.007   0.054    125   0.017   0.133      897    179   0.013   0.058    166   0.019   0.098

Table 3.2. Absolute biases and rmse's of the tail index estimators based on
1000 right-truncated samples of Burr's models.
4. Proofs
(α)
4.1. Proof of Theorem 2.1. We begin by proving the consistency of ρb1 defined in (1.13).
We let
Ln (x; υ) :=
Fn (Xn−υ:n x) F (Xn−υ:n x)
−
,
Fn (Xn−υ:n )
F (Xn−υ:n )
and we show that for any α > 0
Z ∞
(α)
α (1)
Mn (υ) = γ1 µα +
Ln (x; υ) d logα x + (1 + oP (1)) αγ1α−1 µ(2)
α (ρ1 ) A0 (n/υ) ,
(4.17)
1
(1)
(2)
(α)
where µα and µα (ρ1 ) are as in (2.16). It is clear that from formula (1.12) , Mn (υ) may
R∞
be rewritten into − 1 logα xdFn (Xn−υ:n x) /Fn (Xn−υ:n ) , which by an integration by parts
R∞
equals 1 Fn (Xn−υ:n x) /Fn (Xn−υ:n ) d logα x. The latter may be decomposed into
Z ∞
Z ∞
Z ∞
Fn (Xn−υ:n x)
α
α
−1/γ1
d log x +
x−1/γ1 d logα x.
−x
Ln (x; υ) d log x +
Fn (Xn−υ:n )
1
1
1
R ∞ −1/γ
(1)
1
It is easy to verify that 1 x
d logα x equals γ1α µα . Since, Xn−υ:n → ∞, almost surely,
then by making use of the uniform inequality of the second-order regularly varying functions,
to F, given in Proposition 4 of Hua and Joe (2011), we write: with probability one, for any
0 < ǫ < 1 and large N
ρ1 /γ1
F (Xn−υ:n x) /F (Xn−υ:n ) − x−1/γ1
−1
−1/γ1 x
≤ ǫx−1/γ1 +ǫ , for any x ≥ 1, (4.18)
−
x
−2 e
γ
/ρ
1
1
γ AF 1/F (Xn−υ:n )
1
e F (t) ∼ AF (t) , as t → ∞. This implies, almost surely, that
where A
Z ∞
Fn (Xn−υ:n x)
−1/γ1
d logα x
−x
F
(X
)
1
n
n−υ:n
Z ∞
Z ∞
ρ1 /γ1
−1
α
α
−1/γ1 +ǫ
−1/γ1 x
e
= AF 1/F (Xn−υ:n )
d log x + oP
x
d log x
.
x
γ1 ρ1
1
1
R∞
R ∞ −1/γ +ǫ
xρ1 /γ1 − 1
α
α−1 (2)
1
d
log
x
=
αγ
µ
(ρ
)
and
x
d logα x is finite.
α
1
1
1
1
γ1 ρ1
P
From Lemma 7.4 in Benchaira et al. (2016a), Xn−υ:n /UF ∗ (n/υ) → 1, as N → ∞, then
by using the regular variation property of AF 1/F (·) and the corresponding Potter’s
We check that
x−1/γ1
inequalities (see, for instance, Proposition B.1.10 in de Haan and Ferreira (2006)), we get
e F 1/F (Xn−υ:n ) = (1 + oP (1)) AF 1/F (UF ∗ (n/υ)) = (1 + oP (1)) A0 (n/υ) ,
A
therefore
Mn(α)
(υ) =
γ1α µ(1)
α
+
Z
1
∞
Ln (x; υ) d logα x + αγ1α−1 µ(2)
α (ρ1 ) A0 (n/υ) (1 + oP (1)) .
In the second step, we use the Gaussian approximation of Ln (x) recently given by Benchaira et al.
(2016a) (assertion (6.26)), saying that: for any 0 < ǫ < 1/2 − γ/γ2 , there exists a standard
Wiener process {W (s) ; s ≥ 0} , defined on the probability space (Ω, A, P) such that
sup x(1/2−ǫ)/γ−1/γ2
√
x≥1
P
υLn (x; υ) − L (x; W) → 0, as N → ∞,
(4.19)
where {L (x; W) ; x > 0} is a Gaussian process defined by
γ −1/γ1 1/γ
x
x W x−1/γ − W (1)
γ1
Z 1
γ
−1/γ1
s−γ/γ2 −1 x1/γ W x−1/γ s − W (s) ds.
x
+
γ1 + γ2
0
R
√ ∞
Let us decompose υ 1 Ln (x; υ) d logα x into
Z
∞
α
1
L (x; W) d log x +
Z
∞
1
√
υLn (x; υ) − L (x; W) d logα x.
R∞ √
By using approximation (4.19), we obtain 1 { υLn (x; υ) − L (x; W)} d logα x = oP (1) .
R∞
R∞
We showed in Lemma 5.2 that 1 L (x; W) d logα x = OP (1) , therefore 1 Ln (x; υ) d logα x =
OP υ −1/2 , it follows that
Mn(α)
(υ) =
γ1α µ(1)
α
+υ
−1/2
Z
1
∞
L (x; W) d logα x
(4.20)
−1/2
.
+ αγ1α−1 µ(2)
(ρ
)
A
(n/υ)
(1
+
o
(1))
+
o
υ
1
0
p
P
α
Once again, by using the fact that
R∞
1
L (x; W) d logα x = OP (1) , we get
α−1 (2)
µα (ρ1 ) A0 (n/υ) (1 + oP (1)) + oP υ −1/2 .
Mn(α) (υ) = γ1α µ(1)
α + αγ1
(1)
It particular, for α = 1, we have µ1 = 1, this means that
(2)
Mn(1) (υ) = γ1 + µ1 (ρ1 ) A0 (n/υ) (1 + op (1)) + oP υ −1/2 ,
which implies that
2
(2)
Mn(1) (υ) = γ12 + 2γ1 µ1 (ρ1 ) A0 (n/υ) (1 + oP (1)) + oP υ −1/2 .
(4.21)
(1)
Likewise, for α = 2, we have µ2 = 2, then
(2)
Mn(2) (υ) = 2γ12 + 2γ1 µ2 (ρ1 ) A0 (n/υ) (1 + oP (1)) + oP υ −1/2 .
(4.22)
(α)
(α)
Similar to the definition of Mn (υ) , let Qn (υ) be Qα (t; F) with UF ∗ (t) and F respectively
replaced by by Xn−υ:n and Fn . From (1.7) , we may write
α
(α)
(1)
Mn (υ) − Γ (α + 1) Mn (υ)
.
Q(α)
2
n (υ) =
(2)
(1)
Mn (υ) − 2 Mn (υ)
Then, by using the approximations above, we end up with
(2)
(1) (2)
αγ1α−1 µα (ρ1 ) − µα µ1 (ρ1 )
.
Q(α)
n (υ) = (1 + oP (1))
(2)
(1) (2)
2γ1 µ2 (ρ1 ) − µ2 µ1 (ρ1 )
(1)
(1)
(2)
(2)
By replacing µα , µ1 , µα (ρ1 ) and µ1 (ρ1 ) by their corresponding expressions, given in
(2.16), with the fact that αΓ (α) = Γ (α + 1) , we show that the previous quotient equals
(α)
P
(α)
P
qα (ρ1 ) given in (1.10). This implies that Qn (υ) → qα (ρ1 ) and therefore Sn (υ) → sα (ρ1 ) ,
P
(α)
(α)
←
as N → ∞, as well. By using the mean value theorem, we infer that ρb1 = sα Sn (υ) →
(α)
ρ1 , as sought. Let us now focus on the asymptotic representation of ρb1 . We begin by
(α)
fn(α) (υ) , Sen(α) (υ) and Q
e(α)
denoting M
(t; F) , S (α) (t; F) and
n (υ) the respective values of M
(α)
Q(α) (t; F) when replacing UF ∗ (t) by Xn−υ:n . It is clear that the quantity Sn (υ) − sα (ρ1 )
may be decomposed into the sum of
Tn1
2
2
(α+1)
(α+1)
e
Qn
(υ) − Qn
(υ)
(2α)
:= −δ (α)
2 Qn (υ; Fn ) ,
(α+1)
e(α+1)
Qn
(υ) Q
(υ)
n
(2α)
Tn2 := δ (α)
(α)
P
Qn
en(2α) (υ)
(υ) − Q
and Tn3 := Sen(α) (υ) − sα (ρ1 ) .
2
(α+1)
en
Q
(υ)
Since Qn (υ) → qα (ρ1 ) , then by using the mean value theorem, we get
−3
e(α+1) (υ) .
Q(α+1)
(υ)
−
Q
Tn1 = − (1 + op (1)) 2δ (α) q2α qα+1
n
n
Making use of the third-order condition (2.15), with analogy of the weak approximation
given in Gomes et al. (2002) (page 411), we write
Z ∞
(α)
α (1)
−1/2
Mn (υ) = γ1 µα + υ
L (x; W) d logα x + αγ1α−1 µ(2)
α (ρ1 ) A0 (n/υ)
1
−1/2
+ αγ1α−1 µ(4)
.
α (ρ1 , β1 ) A0 (n/υ) B0 (n/υ) (1 + op (1)) + oP υ
(4.23)
Since
R∞
L (x; W) d logα x = OP (1) , then
1
α−1 (2)
Mn(α) (υ) = γ1α µ(1)
µα (ρ1 ) A0 (n/υ)
α + αγ1
(4.24)
−1/2
+ αγ1α−1 µ(4)
.
α (ρ1 , β1 ) A0 (n/υ) B0 (n/υ) (1 + oP (1)) + oP υ
Let us write
e(α)
Q(α)
n (υ) − Qn (υ)
2
2
(α)
(1)
fn(α) (υ) − Γ (α + 1) M
fn(1) (υ)
Mn (υ) − Γ (α + 1) Mn (υ)
M
=
−
.
2
2
(2)
(1)
(2)
(1)
f
f
Mn (υ) − 2 Mn (υ)
Mn (υ) − 2 Mn (υ)
By reducing to the common denominator and by using the weak approximations (4.23)
√
√
P
and (4.24) with the fact that A0 (n/υ) → 0, υA20 (n/υ) and υA0 (n/υ) B0 (n/υ) are
stochastically bounded, we get
√
(α)
(α)
e
υA0 (n/υ) Qn (υ) − Qn (υ)
Z ∞
√
=
L (x; W) dg1 (x; α) + θ1 (α) υA0 (n/υ) B0 (n/υ) + oP (1) ,
1
where
γ α−1
g1 (x; α) := 1
2m2
γ1−α
−1
αΓ (α)
2
−2
(1)
log x −
rα γ1 log x − αµα − 2αΓ (α) rα γ1 log x ,
2
α
and θ1 (α) := αγ1α−2 {dα − Γ (α) rα d2 } / (2m2 ) with dα , rα and m2 being those defined in the
beginning of Section 2. It follows that
√
υA0 (n/υ) Tn1
= −2δ
−3
(α) q2α qα+1
Z
∞
1
L (x; W) dg1(x; α + 1) + θ1 (α + 1)
√
υA0 (n/υ) B0 (n/υ) + oP (1) .
Likewise, by similar arguments, we also get
√
υA0 (n/υ) Tn2
Z
−2
= δ (α) qα+1
1
∞
L (x; W) dg1 (x; 2α) + θ1 (2α) υA0 (n/υ) B0 (n/υ) + oP (1) .
Therefore
√
√
υA0 (n/υ) (Tn1 + Tn2 ) =
Z
1
∞
L (x; W) dg(x; α) + K (α)
√
−2
−3
where K (α) := δ (α) qα+1
θ1 (2α) − 2q2α qα+1
θ1 (α + 1) and
υA0 (n/υ) B0 (n/υ) + oP (1) ,
−2
−3
g(x; α) := δ (α) qα+1
g1 (x; 2α) − 2q2α qα+1
g1 (x; α + 1) .
P
Once again by using the third-order condition (2.15) with the fact that A0 (n/υ) → 0 and
√
√
√
υA0 (n/υ) B0 (n/υ) = OP (1) , we show that υA0 (n/υ) Tn3 = η1 υA20 (n/υ) + oP (1) . It
is easy to check that K (α) ≡ η2 , hence we have
υA0 (n/υ) Sn(α) (υ) − sα (ρ1 )
Z ∞
√
√
=
L (x; W) dg(x; α) + η1 υA20 (n/υ) + η2 υA0 (n/υ) B0 (n/υ) + oP (1) ,
√
1
(α)
where η1 and η2 are those defined in the beginning of Section 2. Recall that Sn (υ) =
(α)
(α)
sα ρb1 , then in view of the mean value theorem and the consistency of ρb1 , we end up
with
(α)
(ρ1 ) υA0 (n/υ) ρb1 − ρ1
Z ∞
√
√
=
L (x; W) dg(x; α) + η1 υA20 (n/υ) + η2 υA0 (n/υ) B0 (n/υ) + oP (1) .
√
s′α
1
Finally, integrating by parts with elementary calculations complete the proof of the second
(α)
part of the theorem, namely the Gaussian approximation of ρb1 . For the third assertion, it
suffices to calculate
E
Z
1
s
−γ/γ2 −1
0
∆α (s)W (s) ds − ξW (1)
2
to get the asymptotic variance σα2 , therefore we omit details.
4.2. Proof of Theorem 2.2. Let us write
√
k (b
γ1 − γ1 ) =
√
k
Mn(1)
(k) − γ1 +
(∗)
ρb1 − 1
(∗)
(1)
2b
ρ1 Mn (k)
√ (2)
2
(1)
.
k Mn (k) − 2 Mn (k)
(1)
From, Theorem 3.1 in Benchaira et al. (2016a) and Theorem 2.1 above both Mn (k) =
(BM N )
γ1
b
(∗)
and ρb1 are consistent for γ1 and ρ1 respectively. It follows that
√
k (b
γ1 − γ1 )
√
2
ρ1 − 1 √ (2)
(1)
(1)
(1 + oP (1)) .
k Mn (k) − 2 Mn (k)
= k Mn (k) − γ1 +
2γ1 ρ1
By applying the weak approximation (4.20) , for α = 1, we get
√
k
Mn(1)
(k) − γ1 =
Z
∞
1
L (x; W) d log x +
√
kA0 (n/k)
+ oP (1) .
1 − ρ1
(4.25)
(1)
Using the mean value theorem and the consistency of Mn (k) yield
√
2
k Mn(1) (k) − γ12
)
(Z
√
∞
kA0 (n/k)
+ oP (1) (1 + oP (1)) .
= 2γ1
L (x; W) d log x +
1 − ρ1
1
√
From Lemma 5.2 and the assumption kA0 (n/k) = OP (1) as N → ∞ we have
Z ∞
√
2
2γ1 √
(1)
2
k Mn (k) − γ1 =
kA0 (n/k) + oP (1) .
L (x; W) d (2γ1 log x) +
1 − ρ1
1
Once again, by applying the weak approximation (4.20 , for α = 2, we write
Z ∞
√
√
(2)
(2)
2
k Mn (k) − 2γ1 =
L (x; W) d log2 x + 2γ1 µ2 (ρ1 ) kA0 (n/k) + oP (1) ,
1
(2)
where µ2 (ρ1 ) = 1 − (1 − ρ1 )2 / ρ1 (1 − ρ1 )2 . It follows that
√ (2)
2
k Mn (k) − 2 Mn(1) (k)
Z ∞
2γ1 ρ1 √
kA0 (n/k) + oP (1) .
=
L (x; W) d log2 x − 4γ1 log x +
(1 − ρ1 )2
1
(4.26)
By combining approximations (4.25) and (4.26), we obtain
Z ∞
√
√
k (b
γ1 − γ1 ) =
kA0 (n/k) , as N → ∞,
L (x; W) dΨ (x) + oP
1
where Ψ (x) := τ6 log x + τ5 log2 x. Finally, making an integration by parts and a change of
variables, with elementary calculations, achieve the proof of the first assertion of the theorem.
The second part is straightforward.
References
Benchaira, S., Meraghni, D., Necir, A., 2015. On the asymptotic normality of the extreme
value index for right-truncated data. Statist. Probab. Lett. 107, 378-384.
Benchaira, S., Meraghni, D., Necir, A., 2016a. Tail product-limit process for truncated data
with application to extreme value index estimation. Extremes, 19, 219-251.
Benchaira, S., Meraghni, D., Necir, A., 2016b. Kernel estimation of the tail index of a
right-truncated Pareto-type distribution. Statist. Probab. Lett. 119, 186-193.
Deme, E., Gardes, L., Girard, S., 2013. On the estimation of the second order parameter for
heavy-tailed distributions. REVSTAT 11, 277-299.
Escudero, F., Ortega, E., 2008. Actuarial comparisons for aggregate claims with randomly
right-truncated claims. Insurance Math. Econom. 43, 255-262.
Fraga Alves, M. I., de Haan, L., Lin, T., 2003. Estimation of the parameter controlling the
speed of convergence in extreme value theory. Math. Methods Statist. 12, 155-176.
Gardes, L., Stupfler, G., 2015. Estimating extreme quantiles under random truncation. TEST
24, 207-227.
Gomes, M. I., de Haan, L., Peng, L., 2002. Semi-parametric estimation of the second order
parameter in statistics of extremes. Extremes 5, 387-414.
Goegebeur, Y., Beirlant, J., de Wet, T., 2010. Kernel estimators for the second order parameter in extreme value statistics. J. Statist. Plann. Inference 140, 2632-2652.
de Haan, L., Stadtmüller, U., 1996. Generalized regular variation of second order. J. Australian Math. Soc. (Series A) 61, 381-395.
de Haan, L., Ferreira, A., 2006. Extreme Value Theory: An Introduction. Springer.
Herbst, T., 1999. An application of randomly truncated data models in reserving IBNR
claims. Insurance Math. Econom. 25, 123-131.
Hill, B.M., 1975. A simple general approach to inference about the tail of a distribution.
Ann. Statist. 3, 1163-1174.
Hua, L., Joe, H., 2011. Second order regular variation and conditional tail expectation of
multiple risks. Insurance Math. Econom. 49, 537-546.
Lawless, J.F., 2002. Statistical Models and Methods for Lifetime Data, Second Edition.
Wiley Series in Probability and Statistics.
Peng, L., 1998. Asymptotically unbiased estimators for the extreme-value index. Statist.
Probab. Lett. 38, 107-115.
Peng, L., Qi, Y., 2004. Estimating the first- and second-order parameters of a heavy-tailed
distribution. Aust. N. Z. J. Stat. 46, 305-312.
Reiss, R.D., Thomas, M., 2007. Statistical Analysis of Extreme Values with Applications to
Insurance, Finance, Hydrology and Other Fields, 3rd ed. Birkhäuser Verlag, Basel, Boston,
Berlin.
Resnick, S., 2006. Heavy-Tail Phenomena: Probabilistic and Statistical Modeling. Springer.
de Wet, T., Goegebeur, Y., Guillou, A., 2012. Weighted moment estimators for the second
order scale parameter. Methodol. Comput. Appl. Probab. 14, 753-783.
Worms, J., Worms, R., 2012. Estimation of second order parameters using probability
weighted moments. ESAIM Probab. Stat. 16, 97-113.
Worms, J., Worms, R., 2016. A Lynden-Bell integral estimator for extremes of randomly
truncated data. Statist. Probab. Lett. 109, 106-117.
Woodroofe, M., 1985. Estimating a distribution function with truncated data. Ann. Statist.
13, 163-177.
5. Appendix
Lemma 5.1. Assume that the second-order regular variation condition (1.2) holds, then for
any α > 0
(1)
(i) limt→∞
M (α) (t; F) − µα
(M (1) (t; F))
M (1) (t; F)
α−1
A0 (t)
α
(2)
(1) (2)
= α µα (ρ1 ) − µα µ1 (ρ1 ) .
(ii) limt→∞ Q(α) (t; F) = qα (ρ1 ) .
(iii) limt→∞ S (α) (t; F) = sα (ρ1 ) .
Proof. Let us consider assertion (i) . We begin by letting
α
Z ∞
(1)
(α)
(1)
M
(t;
F)
−
µ
M
(t;
F)
α
U (α) := −
logα sds−1/γ1 and ℓ (t) :=
,
A0 (t)
1
to show that, for any α > 0
(1) (2)
lim ℓ (t) = αγ1α−1 µ(2)
(ρ
)
−
µ
µ
(ρ
)
.
1
1
α
α
1
t→∞
(5.27)
Indeed, let us first decompose ℓ (t) into
α
α
α
(1)
(1)
(t; F) − U (1)
U (α) (t) − µα U (1)
M (α) (t; F) − U (α)
(1) M
− µα
+
,
A0 (t)
A0 (t)
A0 (t)
R∞
R∞
(1)
and note that µα = Γ (α + 1) where Γ (a) = 0 e−x xa−1 dx = 1 t−2 loga−1 tdt, a > 0. It is
α
(1)
easy to verify that U (α) − µα U (1) = 0, therefore
α
α
(1)
(t; F) − U (1)
M (α) (t; F) − U (α)
(1) M
− µα
.
(5.28)
ℓ (t) =
A0 (t)
A0 (t)
R∞
Recall that M (α) (t; F) = u logα (x/u) dF (x) /F (u) , where u := UF ∗ (t) , which by a change
of variables and an integration by parts, may be rewritten into
Z ∞
F (ux)
d logα x =: Mu(α) (F) ,
F
(u)
1
Making use, once again, of Proposition 4 of Hua and Joe (2011), we write: for possibly
e F , with A
e F (y) ∼ AF (y) , as y → ∞, for any 0 < ǫ < 1 and x ≥ 1, we
different function A
have
ρ1 /γ1
−1
F (ux) /F (u) − x−1/γ1
−1/γ1 x
−
x
≤ ǫx−1/γ1 +ǫ , as u → ∞.
−2 e
γ
/ρ
1
1
γ AF 1/F (u)
1
By using elementary analysis, we easily show that this inequality implies that
(α)
Mu (F) − U (α)
→ αγ1α−1 µ(2)
α (ρ1 ) , as u → ∞.
e
AF 1/F (u)
e F 1/F (u) ∼ AF 1/F (u) = A0 (t) . This
Hence, since 1/F (u) → ∞ as u → ∞, then A
means that
M (α) (t, F) − U (α)
→ αγ1α−1 µ(2)
α (ρ1 ) , as t → ∞.
A0 (t)
(5.29)
(2)
Note that for α = 1, we have U (1) = γ1 and therefore M (1) (t, F) − γ1 /A0 (t) → µ1 (ρ1 ) ,
which implies that M (1) (t, F) → γ1 . By using the mean value theorem and the previous two
results we get
α
M (1) (t, F) − γ1α
(2)
→ αγ1α−1 µ1 (ρ1 ) , as t → ∞.
A0 (t)
(5.30)
Combining (5.28) , (5.29) and (5.30) leads to (5.27) . Finally, we use the fact that M (1) (t, F) →
γ1 to achieve the proof of assertion (i) . To show assertion (ii) , we apply assertion (i) twice,
for α > 0 and for α = 2, then we divide the respective results to get
α
(1)
M (α) (t; F) − µα M (1) (t; F)
(α)
Q (t; F) =
2
M (2) (t; F) − 2 [M (1) (t; F)]
α−1
(2)
(1) (2)
(1)
α µα (ρ1 ) − µα µ1 (ρ1 )
M (t; F)
.
∼
M (1) (t; F) 2 µ(2) (ρ ) − µ(1) µ(2) (ρ )
1
1
2
2
1
(1)
(2)
By replacing µα and µα (ρ1 ) by their expressions, given in (2.16), we get
(2)
(1) (2)
α µα (ρ1 ) − µα µ1 (ρ1 )
Γ (1) (1 − (1 − ρ1 ))
Γ (α) (1 − (1 − ρ1 )α )
− Γ (α + 1)
=α
ρ1 (1 − ρ1 )α
ρ1 (1 − ρ1 )
α
Γ (α) (1 − (1 − ρ1 ) ) Γ (α + 1)
.
−
=α
ρ1 (1 − ρ1 )α
1 − ρ1
Since M (1) (t, F) → γ1 , then
(α)
Q
(t; F) →
γ1α−2 Γ (α + 1) 1 − (1 − ρ1 )α − αρ1 (1 − ρ1 )α−1
ρ21 (1 − ρ1 )α−2
, as t → ∞,
which is qα (ρ1 ) given in (1.10) . For assertion (iii) , it is clear that
(2α)
Qt
ρ21 1 − (1 − ρ1 )2α − 2αρ1 (1 − ρ1 )2α−1
,
δ (α)
2 →
α+1
α 2
(α+1)
1
−
(1
−
ρ
)
−
(α
+
1)
ρ
(1
−
ρ
)
1
1
1
Qt
which meets the expression of sα (ρ1 ) given in (1.11) .
Lemma 5.2. For any α > 0, we have ∫_1^∞ L(x; W) d log^α x = O_P(1).
Proof. Observe that ∫_1^∞ L(x; W) d log^α x may be decomposed into the sum of
I_1 := (γ/γ_1) ∫_1^∞ x^{1/γ_2} W(x^{−1/γ}) d log^α x,    I_2 := −(γ/γ_1) W(1) ∫_1^∞ x^{−1/γ_1} d log^α x,
I_3 := (γ/(γ_1 + γ_2)) ∫_1^∞ x^{1/γ_2} ( ∫_0^1 s^{−γ/γ_2 − 1} W(x^{−1/γ} s) ds ) d log^α x,
and
I_4 := −(γ/(γ_1 + γ_2)) ( ∫_0^1 s^{−γ/γ_2 − 1} W(s) ds ) ∫_1^∞ x^{−1/γ_1} d log^α x.
Next we show that I_i = O_P(1), i = 1, ..., 4. To this end, we will show that E|I_i| is finite for i = 1, ..., 4. Indeed, we have
E|I_1| ≤ (γ/γ_1) ∫_1^∞ x^{−1/γ_1} x^{1/γ} E|W(x^{−1/γ})| d log^α x.
Since E|W(y)| ≤ √y, for any 0 ≤ y ≤ 1, then E|I_1| ≤ (αγ/γ_1) ∫_1^∞ x^{1/γ_2 − 1/(2γ) − 1} log^{α−1} x dx.
By successively making two changes of variables, log x = t, then (−1/γ_2 + 1/(2γ) + 1) t = s, we end up with E|I_1| ≤ γ γ_1^{α−1} (2γ/(2γ − γ_1))^α Γ(α + 1), which is finite for any α > 0. By similar arguments we also show that E|I_2| ≤ γ γ_1^{α−1} Γ(α + 1), which is finite as well. For the third term I_3, we have
E|I_3| ≤ (γ/(γ_1 + γ_2)) ∫_1^∞ x^{1/γ_2} ( ∫_0^1 s^{−γ/γ_2 − 1} E|W(x^{−1/γ} s)| ds ) d log^α x.
By elementary calculations, we get
E|I_3| ≤ [γ_2 (2γ)^{α+1} / ((γ_2 − 2γ)(γ_1 + γ_2))] (γ_1/(2γ − γ_1))^α Γ(α + 1),
which is also finite. By using similar arguments, we get
E|I_4| ≤ 2 γ_2 γ γ_1^α Γ(α + 1) / ((γ_1 + γ_2)(γ_2 − 2γ)) < ∞,
as sought.
| 1 |
arXiv:1709.10258v1 [cs.CC] 29 Sep 2017
Recognizing Matroids
Brahim Chaourar
Department of Mathematics and Statistics,
Al Imam University (IMSIU)
P.O. Box 90950, Riyadh 11623, Saudi Arabia
Correspondence address: P. O. Box 287574, Riyadh 11323, Saudi Arabia
email: [email protected]
Abstract
Let E be a finite set and P, S, L three classes of subsets of E, and r a function
defined on 2^E. In this paper, we give an algorithm for testing if the quadruple (P, S, L, r) is the locked structure of a given matroid, i.e., recognizing if
(P, S, L, r) defines a matroid. This problem is intractable. Our algorithm improves the running time complexity of a previous algorithm due to Spinrad. We
deduce a polynomial time algorithm for recognizing large classes of matroids
called polynomially locked matroids and uniform matroids.
2010 Mathematics Subject Classification: Primary 05B35, Secondary 90C27,
52B40.
Key words and phrases: recognizing matroids, locked structure of a matroid,
intractable problem, polynomially locked matroid.
1
Introduction
Sets and their characteristic vectors will not be distinguished. We refer to Oxley [5]
and Schrijver [8] about matroids and polyhedra terminology and facts, respectively.
Given a matroid M defined on a finite set E, suppose that M (and M ∗ ) is 2-connected. A subset L ⊂ E is called a locked subset of M if M|L and M ∗ |(E\L) are 2-connected, and their corresponding ranks are at least 2, i.e., min{r(L), r ∗ (E\L)} ≥ 2.
It is not difficult to see that if L is locked then both L and E\L are closed, respectively,
in M and M ∗ (That is why we call them locked). We denote by L(M) and ℓ(M),
respectively, the class of locked subsets of M and its cardinality, which is called the
locked number of M. Given a positive integer k, Lk , the class of k-locked matroids,
is the class of matroids M such that ℓ(M) ∈ O(|E|k ). M is 0-locked if L(M) = ∅,
i.e., ℓ(M) = 0, and the class of such matroids is L0 . For a given nonnegative integer
k, Lk is also called a polynomially locked class of matroids, and its elements k-locked
or polynomially locked matroids. It is not difficult to see that the class of locked
subsets of a matroid M is the union of the locked subsets of the 2-connected components
of M. The locked structure of M is the quadruple (P(M), S(M), L(M), ρ ), where
P(M) and S(M) are, respectively, the class of parallel and coparallel closures, and ρ
is the rank function restricted to P(M) ∪ S(M) ∪ L(M) ∪ {∅, E}.
A matroid can be completely characterized by its locked structure through its bases
polytope [3]. Chaourar [4] gave a new axiom system called the locked axioms for
defining matroids based on this quadruple with an extension of the function ρ to 2^E.
Let E be a finite set, P, S, L be three subclasses of 2^E, and r a function defined on
P ∪ S ∪ L ∪ {∅, E} ∪ P^C ∪ S^C, where X^C = {E\X such that X ∈ X} and X ∈ {P, S}.
In this paper, we give an algorithm for testing if the quadruple (P, S, L, r), called
a basic quadruple, is the locked structure of a given matroid, i.e., recognizing if a
basic quadruple defines a matroid. This problem is intractable (see [7]). A similar
study has been done by Provan and Ball for testing if a given clutter Ω, defined on
a finite set E, is the class of the bases of a matroid [6]. They provide an algorithm
with running time complexity O(|Ω|3 |E|). Spinrad [9] improves the running time to
O(|Ω|2|E|). In this paper, we give an algorithm for matroid recognition with running
time complexity O((|E| + |L|)2 + |E||L|log|L|). This complexity bound is better
than that of Spinrad’s algorithm. This algorithm becomes polynomial on |E| for
recognizing polynomially locked matroids. Many hard problems (Kth best base of a
matroid, maximum weight basis of a matroid, testing self duality of matroids, matroid
isomorphism) has been proved polynomial for polynomially locked classes of matroids
[1, 3, 4]. This motivates the interest of polynomially locked classes of matroids and
their recognition. We also deduce a polynomial time algorithm for deciding if a basic
quadruple defines a uniform matroid.
The remainder of this paper is organized as follows. In section 2, we describe the
locked axioms system for defining matroids, then, in section 3, we give an algorithm
for recognizing matroids based on a basic quadruple. Finally, we conclude in section
4.
2
The Locked Axioms for a Matroid
Given a finite set E, M = (E, P, S, L, r) is a locked system defined on E if:
(L1) E ≠ ∅,
(L2) P and S are partitions of E,
(L3) For any (P, S) ∈ P × S, if P ∩ S ≠ ∅ then |P| = 1 or |S| = 1,
(L4) L is a class of nonempty and proper subsets of E such that L ∩ P = L ∩ S = ∅,
(L5) For any (X, L) ∈ (P ∪ S) × L, X ∩ L = ∅ or X ⊂ L,
(L6) r is a nonnegative integer function defined on P ∪ S ∪ L ∪ {∅, E} ∪ P^C ∪ S^C,
where X^C = {E\X such that X ∈ X} and X ∈ {P, S},
(L7) r(∅) = 0 and r(E) ≥ r(X) for any X ⊆ E,
(L8) r(P) = min{1, r(E)} for any P ∈ P,
(L9) r(E\P) = min{|E\P|, r(E)} for any P ∈ P,
(L10) r(S) = min{|S|, r(E)} for any S ∈ S,
(L11) r(E\S) = min{|E\S|, r(E) + 1 − |S|} for any S ∈ S,
(L12) r(L) ≥ max{2, r(E) + 2 − |E\L|} for any L ∈ L,
(L13) r is increasing on P ∪ L ∪ {∅, E},
(L14) r is submodular on P ∪ S ∪ L ∪ {∅, E},
(L15) For any X ∉ P ∪ S ∪ L ∪ {∅, E}, one of the following holds:
(P1) There exists L ∈ L such that L ⊂ X, r(X) = r(L) + r(X\L), and X\L
verifies (P1) or (P2),
(P2) There exists P ∈ P such that P ∩ X ≠ ∅, r(X) = r(P) + r(X\P), and
X\P verifies (P1) or (P2),
(P3) There exists L ∈ L such that X ⊂ L, r(X) = r(L) + r(X ∪ (E\L)) − r(E),
and X ∪ (E\L) verifies (P3) or (P4),
(P4) There exists S ∈ S such that (E\S) ∪ X ≠ E, r(X) = r(E\S) + r(X ∪
S) + |S ∩ X| − r(E), and X ∪ S verifies (P3) or (P4),
(L16) For any (L1 , L2 ) ∈ L^2, if L1 ∩ L2 ≠ ∅ and L1 ∩ L2 ∉ L then L1 ∩ L2 verifies
(P1) or (P2) of (L15),
(L17) For any (L1 , L2 ) ∈ L^2, if L1 ∪ L2 ≠ E and L1 ∪ L2 ∉ L then L1 ∪ L2 verifies
(P3) or (P4) of (L15),
Without loss of generality, we can replace axioms (L8)-(L11) by the following axioms
respectively:
(LL8) r(P ) = 1 for any P ∈ P,
(LL9) r(E\P ) = r(E) for any P ∈ P,
(LL10) r(S) = |S| for any S ∈ S,
(LL11) r(E\S) = r(E) + 1 − |S| for any S ∈ S.
Chaourar [4] proved that the locked axioms uniquely define a matroid. So, recognizing matroids is equivalent to recognizing whether a basic quadruple is a locked system.
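As a concrete illustration, the cheapest of these axioms can be checked directly from their definitions. The following Python sketch uses our own hypothetical representation (the ground set as a set, the classes P, S, L as collections of frozensets, and r as a dictionary keyed by frozensets); it covers (L1)-(L5) together with the simplified rank axioms (LL8)-(LL11), and is not the full recognition algorithm of the next section.

```python
def is_partition(E, blocks):
    """Check that 'blocks' is a partition of the ground set E."""
    covered = set()
    for B in blocks:
        if not B or not B <= E or covered & B:
            return False
        covered |= B
    return covered == E

def check_basic_axioms(E, P, S, L, r):
    """Test axioms (L1)-(L5) and (LL8)-(LL11) of a basic quadruple.
    E: set of elements; P, S, L: collections of frozensets;
    r: dict mapping frozensets (including frozenset(E)) to integers."""
    ground = frozenset(E)
    if not E:                                              # (L1)
        return False
    if not (is_partition(E, P) and is_partition(E, S)):    # (L2)
        return False
    for Pb in P:                                           # (L3)
        for Sb in S:
            if Pb & Sb and len(Pb) > 1 and len(Sb) > 1:
                return False
    for Lk in L:                                           # (L4)
        if not Lk or Lk == ground or Lk in P or Lk in S:
            return False
    for X in list(P) + list(S):                            # (L5)
        for Lk in L:
            if X & Lk and not X <= Lk:
                return False
    rE = r[ground]
    for Pb in P:                                           # (LL8), (LL9)
        if r[Pb] != 1 or r[ground - Pb] != rE:
            return False
    for Sb in S:                                           # (LL10), (LL11)
        if r[Sb] != len(Sb) or r[ground - Sb] != rE + 1 - len(Sb):
            return False
    return True
```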
3
An efficient algorithm for matroid recognition
We now give the running time complexity for testing each of the locked axioms.
(L1) can be tested in O(1). (L2) can be tested in O(|E|^2). (L3) can be tested in
O(|E|^2). (L4) and (L5) can be tested in O(|E||L|). We need the following lemma for
(L6).
Lemma 3.1. We can replace axiom (L6) by the following axiom:
(LL6) r is a nonnegative integer function defined on L ∪ {E}.
Proof. Axioms (LL6) and (L7)-(L11) imply axiom (L6).
It follows that (LL6) can be tested in O(|L|). We need the following lemma for
(L7).
Lemma 3.2. We can replace axiom (L7) by the following axiom:
(LL7) r(∅) = 0.
Proof. Axioms (LL7) and (L13) imply axiom (L7).
It follows that (LL7) can be tested in O(1). (L8)-(L11) can be tested in O(|E|).
(L12) can be tested in O(|L|). We need the following lemma for (L13).
Lemma 3.3. (L13) can be tested in O(|E||L| log |L|).
Proof. We can construct a lattice (ordered by inclusion) for elements of P ∪L∪{∅, E}.
The root is the empty set and the sink is the ground set. Adjacent vertices to the root
are the elements of P because of axioms (L4) and (L5). After sorting the elements
of L according to their cardinalities, we can complete the lattice. We can test the
axiom (L13) at each step of the lattice construction.
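A simplified sketch of this monotonicity test follows (our own Python illustration, using the same representation as above; it sorts the locked subsets by cardinality and checks all subset-related pairs, which is a quadratic simplification rather than the explicit lattice construction that attains the stated bound).

```python
def check_L13(E, P, L, r):
    """Test axiom (L13): r is increasing on P ∪ L ∪ {∅, E}.
    Subsets come before supersets in 'chain' because parallel closures never
    contain locked subsets and locked subsets are ordered by cardinality."""
    ground, empty = frozenset(E), frozenset()
    chain = [empty] + sorted(P, key=len) + sorted(L, key=len) + [ground]
    rank = dict(r)
    rank[empty] = 0
    for i, X in enumerate(chain):
        for Y in chain[i + 1:]:
            # strict inclusion must not decrease the rank
            if X < Y and rank[X] > rank[Y]:
                return False
    return True
```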
(L14) can be tested in O((|E| + |L|)^2).
We need the following lemma for (L15).
Lemma 3.4. Axiom (L15) is equivalent to the following axiom:
(LL15) For any X ∉ P ∪ S ∪ L ∪ {∅, E}, one of the following holds:
(PP12) There exist {L1 , ..., Lp } ⊆ L and {P1 , ..., Pq } ⊆ P such that Li ⊂ X, i = 1, ..., p, Pj ∩ X ≠ ∅, j = 1, ..., q, and r(X) = r(L1 ) + ... + r(Lp ) + r(P1 ) + ... + r(Pq ), with p and q nonnegative integers such that p + q ≥ 2,
(PP34) There exist {L1 , ..., Lp } ⊆ L and {S1 , ..., Sq } ⊆ S such that Li ⊃ X, i = 1, ..., p, (E\Sj ) ∪ X ≠ E, j = 1, ..., q, and r(X) = r(L1 ) + ... + r(Lp ) + r(E\S1 ) + ... + r(E\Sq ) − |S1 ∩ X| + ... + |Sq ∩ X| − (p + q)r(E), with p and q nonnegative integers such that p + q ≥ 2.
Proof. Repeating recursively axiom (L15) gives the lemma.
It follows that axiom (LL15) gives a way to compute the function r outside
P ∪ S ∪ L ∪ {∅, E} ∪ P^C ∪ S^C, where X^C = {E\X such that X ∈ X} and X ∈ {P, S}.
So we do not need to test it; we only have to enforce it.
(L16) and (L17) can be tested in O(|L|^2) by using (LL15).
We can summarize all the previous steps as follows.
Theorem 3.5. We can decide if a basic quadruple (P, S, L, r) is a locked system or
not in O((|E| + |L|)^2 + |E||L| log |L|).
This algorithm improves on the running time complexity of Spinrad's
algorithm because, if the answer is yes, i.e., the given clutter forms the class of bases
of a matroid, then its running time complexity is O(|B|^2 |E|), where B is the class of
bases, and |B| > |P| + |S| + |L| > |E| + |L|, because the facets of the bases polytope
are completely described by P ∪ S ∪ L ∪ {E} [3] and the number of extreme points is
greater than the number of facets. Furthermore, Spinrad's algorithm takes as input
a clutter, which is not a basic structure like the basic quadruple used in our algorithm.
A consequence of Theorem 3.5 is the following corollary about recognition of polynomially locked classes of matroids.
Corollary 3.6. Let k be a nonnegative integer and Q a basic quadruple. We can decide
if Q defines a matroid in Lk or not as follows:
(1) If k ≥ 2 then Lk recognition can be done in O(|E|^{2k});
(2) L1 recognition can be done in O(|E|^2 log |E|);
(3) L0 recognition can be done in O(|E|^2).
Another consequence is recognizing uniform matroids in polynomial time. We
need the following theorem [3] for this purpose.
Theorem 3.7. A matroid M is uniform if and only if one of the following properties
holds:
(i) ℓ(M) = 0 and |P(M)| = |E| = |S(M)|;
(ii) |P(M)| = 1;
(iii) |S(M)| = 1;
(iv) r(M) = |E|;
(v) r(M) = 0.
It follows that:
Corollary 3.8. We can decide if a basic quadruple defines a uniform matroid or not in
O(|E|^2).
Proof. Testing (i) of Theorem 3.7 can be done in O(|E|^2) by using Corollary 3.6. (ii)
of Theorem 3.7 is equivalent to: |P(M)| = 1, |S(M)| = |E| and ℓ(M) = 0. So we
can test (ii) in O(|E|^2). We can use a similar argument for (iii). (iv) of Theorem 3.7
is equivalent to: r(M) = |E|, |S(M)| = |E| = |P(M)| and ℓ(M) = 0. So we can
test (iv) in O(|E|^2). Finally, a similar argument can be used for (v).
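For illustration, the characterization of Theorem 3.7 can be turned into a direct test on a basic quadruple. The sketch below (in Python, with our own hypothetical names and the same representation as before) follows the equivalences used in the proof of Corollary 3.8 for cases (ii)-(iv); it is a simplified illustration, not a verified implementation.

```python
def is_uniform(E, P, S, L, r):
    """Decide whether a basic quadruple defines a uniform matroid,
    using the five cases of Theorem 3.7 (a simplified sketch)."""
    n, rE, l = len(E), r[frozenset(E)], len(L)
    if l == 0 and len(P) == n == len(S):                 # case (i)
        return True
    if len(P) == 1 and len(S) == n and l == 0:           # case (ii)
        return True
    if len(S) == 1 and len(P) == n and l == 0:           # case (iii)
        return True
    if rE == n and len(P) == n == len(S) and l == 0:     # case (iv)
        return True
    if rE == 0:                                          # case (v)
        return True
    return False
```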
4
Conclusion
We have used a new system of axioms for defining a matroid, based essentially on
locked subsets, to describe an algorithm for matroid recognition when the input is a
basic quadruple. We have deduced a polynomial time algorithm for recognizing polynomially
locked classes of matroids. A second consequence is a polynomial time
algorithm to decide if a basic quadruple defines a uniform matroid. Future investigations
include improving the running time complexity of all problems treated in this paper,
i.e., matroid recognition in general, polynomially locked matroid recognition, and
uniform matroid recognition in particular.
References
[1] B. Chaourar (2008), On the Kth Best Basis of a Matroid, Operations Research
Letters 36 (2), 239-242.
[2] B. Chaourar (2011), A Characterization of Uniform Matroids, ISRN Algebra, Vol.
2011, Article ID 208478, 4 pages, doi:10.5402/2011/208478.
[3] B. Chaourar (2017), The Facets of the Bases Polytope of a Matroid and Two
Consequences, arXiv 1702.07128.
[4] B. Chaourar (2017), On the Matroid Isomorphism Problem, arXiv 1703.03744.
[5] J. G. Oxley (1992), Matroid Theory, Oxford University Press, Oxford.
[6] J. S. Provan and M. O. Ball (1988), Efficient Recognition of Matroid and 2-Monotonic Systems, In: R. D. Ringeisen and F. S. Roberts (eds), Applications
of Discrete Mathematics, SIAM, Philadelphia: 122-134.
[7] G. C. Robinson and D. J. A. Welsh (1980), The computational complexity of
matroid properties, Math. Proc. Cambridge Phil. Society 87, 29-45.
[8] A. Schrijver (1986), Theory of Linear and Integer Programming, John Wiley and
Sons, Chichester.
[9] J. Spinrad (1991), A Note on Recognition of Matroid Systems, Operations Research Letters 10: 313-314.
| 4 |
Salient Object Detection by Lossless Feature Reflection
arXiv:1802.06527v1 [cs.CV] 19 Feb 2018
Pingping Zhang1,3 , Wei Liu2,3 , Huchuan Lu1 , Chunhua Shen3
1 Dalian University of Technology, Dalian, 116024, P.R. China
2 Shanghai Jiao Tong University, Shanghai, 200240, P.R. China
3 University of Adelaide, Adelaide, SA 5005, Australia
[email protected]; [email protected]; [email protected]; [email protected]
Abstract
Salient object detection, which aims to identify and
locate the most salient pixels or regions in images,
has been attracting more and more interest due to
its various real-world applications. However, this
vision task is quite challenging, especially under
complex image scenes. Inspired by the intrinsic reflection of natural images, in this paper we propose
a novel feature learning framework for large-scale
salient object detection. Specifically, we design a
symmetrical fully convolutional network (SFCN)
to learn complementary saliency features under the
guidance of lossless feature reflection. The location
information, together with contextual and semantic
information, of salient objects are jointly utilized to
supervise the proposed network for more accurate
saliency predictions. In addition, to overcome the
blurry boundary problem, we propose a new structural loss function to learn clear object boundaries
and spatially consistent saliency. The coarse prediction results are effectively refined by these structural information for performance improvements.
Extensive experiments on seven saliency detection
datasets demonstrate that our approach achieves
consistently superior performance and outperforms
the very recent state-of-the-art methods.
1
Introduction
As a fundamental yet challenging task in computer vision,
salient object detection (SOD) aims to identify and locate distinctive objects or regions which attract human attention in
natural images. In general, SOD is regarded as a prerequisite
step to narrow down subsequent object-related vision tasks.
For example, it can be used in image retrieval, semantic segmentation, visual tracking and person re-identification, etc.
In the past two decades, a large number of SOD methods
have been proposed. Most of them have been well summarized in [Borji et al., 2015]. According to that work, conventional SOD methods focus on extracting discriminative local and global handcrafted features from pixels or regions to
[Figure 1: The semantic overview of our proposed SFCN. The left and right sibling branches take the O-Input and R-Input (384x384x3), process them with shared Conv <3x3,1> layers with AdaBN and ReLU and MaxPooling <2x2,2>, and the fusing branch concatenates and merges the multi-level features with DeConv <2x2,2> into a 384x384x1 prediction trained with the structural loss.]
represent their visual properties. With several heuristic priors, these methods predict salient scores according to the extracted features for saliency detection. Although great success has been made, there still exist many important problems which need to be solved. For example, the low-level
handcrafted features suffer from limited representation capability, and are difficult to capture the semantic and structural
information of objects in images, which is very important for
more accurate SOD. What’s more, to further extract powerful
and robust visual features manually is a tough mission for performance improvement, especially in complex image scenes,
such as cluttered backgrounds and low-contrast imaging patterns.
With the recent prevalence of deep architectures, many remarkable progresses have been achieved in a wide range of
computer vision tasks, e.g., image classification [Simonyan
and Zisserman, 2014], object detection [Girshick et al.,
2014], and semantic segmentation [Long et al., 2015]. Thus,
many researchers start to make their efforts to utilize deep
convolutional neural networks (CNNs) for SOD and have
achieved favourable performance, since CNNs have strong
ability to automatically extract high-level feature representations, successfully avoiding the drawbacks of handcrafted
features. However, most of state-of-the-art SOD methods
still require large-scale pre-trained CNNs, which usually employ the strided convolution and pooling operations. These
downsampling methods increase the receptive field of CNNs,
helping to extract high-level semantic features, nevertheless
they inevitably drop the location information and fine details
of objects, leading to unclear boundary predictions. Furthermore, the lack of structural supervision also makes SOD an
extremely challenging problem in complex image scenes.
In order to utilize the semantic and structural information
derived from deep pre-trained CNNs, we propose to solve
both tasks of complementary feature extraction and saliency
region classification with an unified framework which is
learned in the end-to-end manner. Specifically, we design a
symmetrical fully convolutional network (SFCN) architecture
which consists of two sibling branches and one fusing branch,
as illustrated in Fig. 1. The two sibling branches take reciprocal image pairs as inputs and share weights for learning complementary visual features under the guidance of lossless feature reflection. The fusing branch integrates the multi-level
complementary features in a hierarchical manner for SOD.
More importantly, to effectively train our network, we propose a novel loss function which incorporates structural information and supervises the three branches during the training
process. In this manner, our proposed model sufficiently captures the boundaries and spatial contexts of salient objects,
hence significantly boosts the performance of SOD.
In summary, our contributions are three folds:
• We present a novel network architecture, i.e., SFCN,
which is symmetrically designed to learn complementary visual features and predict accurate saliency maps
under the guidance of lossless feature reflection.
• We propose a new structural loss function to learn clear
object boundaries and spatially consistent saliency. This
loss function is able to utilize the location, contextual
and semantic information of salient objects to supervise
the proposed SFCN for performance improvements.
• Extensive experiments on seven large-scale saliency
benchmarks demonstrate that the proposed approach
achieves superior performance and outperforms the very
recent state-of-the-art methods by a large margin.
2
Related Work
Salient Object Detection. Recent years, deep learning based
methods have achieved solid performance improvements in
SOD. For example, [Wang et al., 2015] integrate both local
pixel estimation and global proposal search for SOD by training two deep neural networks. [Zhao et al., 2015] propose
a multi-context deep CNN framework to benefit from the local context and global context of salient objects. [Li and Yu,
2015] employ multiple deep CNNs to extract multi-scale features for saliency prediction. Then they propose a deep contrast network to combine a pixel-level stream and segmentwise stream for saliency estimation [Li and Yu, 2016]. Inspired by the great success of FCNs, [Wang et al., 2016] develop a recurrent FCN to incorporate saliency priors for more
accurate saliency map inference. [Liu and Han, 2016] also
design a deep hierarchical network to learn a coarse global estimation and then refine the saliency map hierarchically and
progressively. Then, [Hou et al., 2017] introduce dense short
connections to the skip-layers within the holistically-nested
edge detection (HED) architecture [Xie and Tu, 2015] to get
rich multi-scale features for SOD. [Zhang et al., 2017a] propose a bidirectional learning framework to aggregate multilevel convolutional features for SOD. And they also develop
a novel dropout to learn the deep uncertain convolutional features to enhance the robustness and accuracy of saliency detection [Zhang et al., 2017b]. [Wang et al., 2017b] provide
a stage-wise refinement framework to gradually get accurate
saliency detection results. Despite these approaches employ
powerful CNNs and make remarkable success in SOD, there
still exist some obvious problems. For example, the strategies of multiple-stage training reduce the efficiency. And the
explicit pixel-wise loss functions used by these methods for
model training cannot well reflect the structural information
of salient objects. Hence, there is still a large space for performance improvements.
Image Intrinsic Reflection. Image intrinsic reflection is a
classical topic in computer vision field. It aims to separate a
color image into two intrinsic reflection images: an image of
just the highlights, and the original image with the highlights
removed. It can be used to segment and analyze surfaces with
image color variations. Most of existing methods are based
on the Retinex model [Land and McCann, 1971], which captures image information for Mondrian images: images of a
planar canvas that is covered by small patches of constant reflectance and illuminated by multiple light sources. Recent
years, researchers have augmented the basic Retinex model
with non-local texture cues [Zhao et al., 2012] and sparsity
priors [Shen and Yeo, 2011]. Sophisticated techniques that
recover reflectance and shading along with a shape estimate
have also been proposed [Barron and Malik, 2012]. Inspired
by these works, we construct a reciprocal image pair based on
the input image (see Section 3.1). However, there are three
obvious differences between our method and previous intrinsic reflection methods: 1) the objective is different. The aim
of previous methods is explaining an input RGB image by estimating albedo and shading fields. Our aim is to learn complementary visual features for SOD. 2) the resulting image
pair is different. Image intrinsic reflection methods usually
factor an input image into a reflectance image and a shading image, while our method builds a reciprocal image pair
for each image as the input of deep networks. 3) the source
of reflection is different. The source of previous intrinsic reflection methods is the albedo of depicted surfaces, while our
reflection is originated from deep features in CNNs. Therefore, our reflection is feature-level not image-level.
3
The Proposed Method
Fig. 1 illustrates the semantic overview of our method. We
first convert an input RGB image into a reciprocal image pair,
including the origin image (O-Input) and the reflection image
(R-Input), by utilizing the ImageNet mean [Deng et al., 2009]
and a pixel-wise negation operator. Then the image pair is
fed into the sibling branches of our proposed SFCN, extracting multi-level deep features. Afterwards, the fusing branch
hierarchically integrates the complementary features into the
same resolution of input images. Finally, the saliency map is
predicted by exploiting integrated features and the structural
loss. In the following subsections, we elaborate the proposed
SFCN architecture and the weighted structural loss.
3.1
Symmetrical FCN
The proposed SFCN is an end-to-end fully convolutional network. It consists of three main branches with a paired reciprocal image input to achieve lossless feature reflection learning.
We describe each of them as follows.
Reciprocal Image Input. To capture complementary image information, we first convert the given RGB image X ∈
RW ×H×3 to a reciprocal image pair by the following reflection function,
Rec(X, k) = (X − M, k(M − X))      (1)
          = (X − M, −k(X − M))     (2)
          = (X_O, X_R^k).          (3)
where k is a hyperparameter to control the reflection scale and
M ∈ RW ×H×3 is the mean of an image or image dataset.
From the above equations, one can see that the converted image pair, i.e., X_O and X_R^k, is reciprocal with a reflection plane. In
detail, the reflection scheme is a pixel-wise negation operator, allowing the given images to be reflected in both positive
and negative directions while maintaining the same content of
images. In the proposed reflection, we use the multiplicative
operator to measure the reflection scale, but it is not the only
feasible method. For example, this reflection can be combined with other non-linear operators, such as quadratic form,
to add more diversity. For reducing the computation, in this
paper we use k = 1 and the mean of the ImageNet dataset.
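As a concrete illustration, Eq. (1)-(3) amounts to a few lines of array code. The sketch below is our own minimal NumPy version, assuming float images in RGB channel order and the usual per-channel ImageNet means; the function and variable names are ours.

```python
import numpy as np

# Per-channel ImageNet means (assumed RGB order), broadcast over an H x W x 3 image.
IMAGENET_MEAN = np.array([123.68, 116.779, 103.939], dtype=np.float32)

def reciprocal_pair(image, k=1.0, mean=IMAGENET_MEAN):
    """Return (X_O, X_R^k) of Eq. (1)-(3): the mean-subtracted image
    and its reflection around the mean plane, scaled by k."""
    x_o = image.astype(np.float32) - mean   # X - M
    x_r = -k * x_o                          # k(M - X)
    return x_o, x_r

# Example: build the two inputs fed to the sibling branches.
img = np.random.rand(384, 384, 3).astype(np.float32) * 255.0
x_o, x_r = reciprocal_pair(img, k=1.0)
```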
Sibling Branches with AdaBN. Based on the reciprocal image pair, we propose two sibling branches to extract complementary reflection features. More specifically, we build
each sibling branch, following the VGG-16 model [Simonyan
and Zisserman, 2014]. Each sibling branch has 13 convolutional layers (kernel size = 3 × 3, stride size = 1) and 4 max
pooling layers (pooling size = 2 × 2, stride = 2). To achieve
the lossless reflection features, the two sibling branches are
designed to share weights in convolutional layers, but with
adaptive batch normalization (AdaBN). In other words, we
keep the weights of corresponding convolutional layers of the
two sibling branches the same, while using different learnable
BN between the convolution and ReLU operators [Zhang et
al., 2017a]. The main reason of this design is that after the
reflection transform, the reciprocal images have different image domains. Domain related knowledge heavily affects the
statistics of BN layers. In order to learn domain invariant
features, it’s beneficial for each domain to keep its own BN
statistics in each layer.
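A minimal sketch of this idea, one shared convolution weight with two sets of normalization statistics, is shown below. It is our own simplified NumPy illustration, not the authors' Caffe implementation, and it only shows the normalization step applied to a feature map already produced by the shared convolution.

```python
import numpy as np

class SharedConvWithAdaBN:
    """Shared convolution weights with per-domain BN statistics:
    domain 0 for the original input, domain 1 for the reflected input."""
    def __init__(self, channels, eps=1e-5):
        self.mean = np.zeros((2, channels), dtype=np.float32)
        self.var = np.ones((2, channels), dtype=np.float32)
        self.gamma = np.ones((2, channels), dtype=np.float32)
        self.beta = np.zeros((2, channels), dtype=np.float32)
        self.eps = eps

    def normalize(self, feat, domain):
        """feat: H x W x C feature map from the shared convolution;
        domain selects which running statistics and affine parameters to use."""
        m, v = self.mean[domain], self.var[domain]
        out = (feat - m) / np.sqrt(v + self.eps)
        return self.gamma[domain] * out + self.beta[domain]
```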
Hierarchical Feature Fusion. After extracting multi-level
reflection features, we adhere an additional fusing branch to
integrate them for the saliency prediction. In order to preserve
the spatial structure and enhance the contextual information,
we integrate the multi-level reflection features in a hierarchical manner. Formally, the fusing function is defined by
f_l(X) = h([g_l(X_O), f_{l+1}(X), g_l^*(X_R^k)]),   l < L
f_l(X) = h([g_l(X_O), g_l^*(X_R^k)]),               l = L      (4)
where h denotes the integration operator, which is a 1 × 1
convolutional layer followed by a deconvolutional layer to ensure the same resolution. [·] is the concatenation operator in
channel-wise. gl and gl∗ are the reflection features of the l-th
convolutional layer in the two sibling branches, respectively.
In the end, we add a convolutional layer with two filters for
the saliency map prediction. The numbers in Fig. 1 illustrate
the detailed filter setting in each convolutional layer.
3.2
Weighted Structural Loss
Given the SOD training dataset S = {(Xn , Yn )}N
n=1 with N
training pairs, where Xn = {xni , i = 1, ..., T } and Yn =
{yin , i = 1, ..., T } are the input image and the binary groundtruth image with T pixels, respectively. yin = 1 denotes the
foreground pixel and yin = 0 denotes the background pixel.
For notional simplicity, we subsequently drop the subscript n
and consider each image independently. In most of existing
SOD methods, the loss function used to train the network is
the standard pixel-wise binary cross-entropy (BCE) loss:
L_bce = − Σ_{i∈Y+} log Pr(y_i = 1|X; θ) − Σ_{i∈Y−} log Pr(y_i = 0|X; θ).      (5)
where θ is the parameter of the network. Pr(yi = 1|X; θ) ∈
[0, 1] is the confidence score of the network prediction that
measures how likely the pixel belong to the foreground.
However, for a typical natural image, the class distribution
of salient/non-salient pixels is heavily imbalanced: most of
the pixels in the ground truth are non-salient. To automatically balance the loss between positive/negative classes, we
introduce a class-balancing weight β on a per-pixel term basis, following [Xie and Tu, 2015]. Specifically, we define the
following weighted cross-entropy loss function,
L_wbce = −β Σ_{i∈Y+} log Pr(y_i = 1|X; θ) − (1 − β) Σ_{i∈Y−} log Pr(y_i = 0|X; θ).      (6)
The loss weight β = |Y+ |/|Y |, and |Y+ | and |Y− | denote the
foreground and background pixel number, respectively.
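A small sketch of Eq. (6) on dense prediction maps is given below. It is our own NumPy illustration, where `pred` stands for Pr(y_i = 1|X; θ) after the network's output nonlinearity and `gt` is the binary ground truth; the names are assumptions, not the authors' code.

```python
import numpy as np

def weighted_bce(pred, gt, eps=1e-7):
    """Class-balanced cross-entropy of Eq. (6).
    pred, gt: arrays of the same shape; gt contains 0/1 labels."""
    pred = np.clip(pred, eps, 1.0 - eps)
    beta = gt.sum() / gt.size                                        # |Y+| / |Y|
    pos = -beta * np.sum(gt * np.log(pred))                          # foreground term
    neg = -(1.0 - beta) * np.sum((1.0 - gt) * np.log(1.0 - pred))    # background term
    return pos + neg
```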
For saliency detection, it is also crucial to preserve the
overall spatial structure and semantic content. Thus, rather
than only encouraging the pixels of the output prediction to
match with the ground-truth using the above pixel-wise loss,
we also minimize the differences between their multi-level
features by a deep convolutional network. The main intuition
behind this operator is that minimizing the difference between
multi-level features, which encode low-level fine details and
high-level coarse semantics, helps to retain the spatial structure and semantic content of predictions. Formally, let φ_l denote the output of the l-th convolutional layer in a CNN; our
semantic content (SC) loss is defined as
L_sc = Σ_{l=1}^{L} λ_l ||φ_l(Y; w) − φ_l(Ŷ; w)||_2,      (7)
where Ŷ is the overall prediction, w is the parameter of a pretrained CNN and λl is the trade-off parameter, controlling the
influence of the loss in the l-th layer. In our case, we use the
light CNN-9 model [Wu et al., 2015] to calculate the above
loss between the ground-truth and the prediction.
To overcome the blurry boundary problem [Li et al., 2016],
we also introduce the smooth L1 loss which encourages to
keep the details of boundaries of salient objects. Specifically,
the smooth L1 loss function is defined as
L_s1 = 0.5 ||D||_2^2,     if ||D||_1 < 0.5,
L_s1 = ||D||_1 − 0.5,     otherwise,      (8)
where D = Y − Ŷ . This training loss also helps to minimize
pixel-level differences between the overall prediction and the
ground-truth. By taking all above loss functions together, we
define our final loss function as
L = arg min L_wbce + µ L_sc + γ L_s1,      (9)
where µ and γ are hyperparameters to balance the specific terms. All the above losses are continuously differentiable, so we can use the standard stochastic gradient descent (SGD) method to obtain the optimal parameters. In addition, we use λ_l = 1, µ = 0.01 and γ = 20 to optimize the final loss function for our experiments without further tuning.
4
Experimental Results
4.1
Datasets and Evaluation Metrics
To train our model, we adopt the MSRA10K [Borji et al.,
2015] dataset, which has 10,000 training images with high
quality pixel-wise saliency annotations. Most of images in
this dataset have a single salient object. To combat overfitting, we augment this dataset by random cropping and mirror
reflection, producing 120,000 training images totally.
For the performance evaluation, we adopt seven public
saliency detection datasets as follows: DUT-OMRON [Yang
et al., 2013] dataset has 5,168 high quality natural images.
Each image in this dataset has one or more objects with
relatively complex image background. DUTS-TE dataset
is the test set of currently largest saliency detection benchmark (DUTS) [Wang et al., 2017a]. It contains 5,019 images with high quality pixel-wise annotations. ECSSD [Shi
et al., 2016] dataset contains 1,000 natural images, in which
many semantically meaningful and complex structures are included. HKU-IS-TE [Li and Yu, 2015] dataset has 1,447 images with pixel-wise annotations. Images of this dataset are
well chosen to include multiple disconnected objects or objects touching the image boundary. PASCAL-S [Li et al.,
2014] dataset is generated from the PASCAL VOC [Everingham et al., 2010] dataset and contains 850 natural images
with segmentation-based masks. SED [Borji, 2015] dataset
has two non-overlapped subsets, i.e., SED1 and SED2. SED1
has 100 images each containing only one salient object, while
SED2 has 100 images each containing two salient objects.
SOD [Jiang et al., 2013] dataset has 300 images, in which
many images contain multiple objects either with low contrast or touching the image boundary.
To evaluate the performance of varied SOD algorithms, we adopt four metrics, including the widely used precision-recall (PR) curves, F-measure, mean absolute error (MAE) [Borji et al., 2015] and the recently proposed S-measure [Fan et al., 2017]. The PR curve of a specific dataset exhibits the mean precision and recall of saliency maps at different thresholds. The F-measure is a weighted mean of average precision and average recall, calculated by
F_η = (1 + η^2) × Precision × Recall / (η^2 × Precision + Recall).      (10)
We set η^2 to be 0.3 to weigh precision more than recall as suggested in [Borji et al., 2015].
For fair comparison on non-salient regions, we also calculate the mean absolute error (MAE) by
MAE = (1 / (W × H)) Σ_{x=1}^{W} Σ_{y=1}^{H} |S(x, y) − G(x, y)|,      (11)
where W and H are the width and height of the input image. S(x, y) and G(x, y) are the pixel values of the saliency map and the binary ground truth at (x, y), respectively.
To evaluate the spatial structure similarities of saliency maps, we also calculate the S-measure, defined as
S_λ = λ * S_o + (1 − λ) * S_r,      (12)
where λ ∈ [0, 1] is the balance parameter. S_o and S_r are the object-aware and region-aware structural similarity, respectively. We set λ = 0.5 as suggested in [Fan et al., 2017].
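The scalar metrics of Eq. (10)-(12) can be sketched as follows. This is our own simplified NumPy illustration; `sal` and `gt` are maps in [0, 1], and the S-measure components S_o and S_r are assumed to be computed by separate routines that we do not reproduce here.

```python
import numpy as np

def f_measure(sal, gt, threshold, eta2=0.3, eps=1e-8):
    """F-measure of Eq. (10) at a single binarization threshold."""
    pred = (sal >= threshold).astype(np.float32)
    tp = (pred * gt).sum()
    precision = tp / (pred.sum() + eps)
    recall = tp / (gt.sum() + eps)
    return (1 + eta2) * precision * recall / (eta2 * precision + recall + eps)

def mae(sal, gt):
    """Mean absolute error of Eq. (11)."""
    return np.abs(sal - gt).mean()

def s_measure(s_object, s_region, lam=0.5):
    """S-measure of Eq. (12), given precomputed S_o and S_r."""
    return lam * s_object + (1.0 - lam) * s_region
```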
4.2
Implementation Details
We implement our proposed model based on the Caffe toolbox [Jia et al., 2014] with the MATLAB 2016 platform. We
train and test our method in a quad-core PC machine with an
NVIDIA Titan 1070 GPU (with 8G memory) and an i5-6600
CPU. We perform training with the augmented training images from the MSRA10K dataset. Following [Zhang et al.,
2017a; Zhang et al., 2017b], we do not use validation set and
train the model until its training loss converges. The input
image is uniformly resized into 384 × 384 × 3 pixels and subtracted the ImageNet mean [Deng et al., 2009]. The weights
of sibling branches are initialized from the VGG-16 model.
For the fusing branch, we initialize the weights by the “msra”
method. During the training, we use standard SGD method
with batch size 12, momentum 0.9 and weight decay 0.0005.
We set the base learning rate to 1e-8 and decrease the learning
rate by 10% when training loss reaches a flat. The training
process converges after 150k iterations. When testing, our
proposed SOD algorithm runs at about 12 fps. The source
code will be made publicly available upon the acceptance of
the work.
4.3
Comparison with the State-of-the-arts
To fully evaluate the detection performance, we compare our
proposed method with 14 other state-of-the-art ones, including 10 deep learning based algorithms (Amulet [Zhang et al.,
2017a], DCL [Li and Yu, 2016], DHS [Liu and Han, 2016],
DS [Li et al., 2016], ELD [Lee et al., 2016], LEGS [Wang
et al., 2015], MCDL [Zhao et al., 2015], MDF [Li and
Yu, 2015], RFCN [Wang et al., 2016], UCF [Zhang et al.,
2017b]) and 4 conventional algorithms (BL [Tong et al.,
2015], BSCA [Qin et al., 2015], DRFI [Jiang et al., 2013],
DSR [Li et al., 2013]). For fair comparison, we use either the
implementations with recommended parameter settings or the
saliency maps provided by the authors.
Quantitative Evaluation. As illustrated in Tab. 1, Tab. 2 and
Fig. 3, our method outperforms other competing ones across
all datasets in terms of near all evaluation metrics. From these
results, we have other notable observations: (1) deep learning
based methods consistently outperform traditional methods
with a large margin, which further proves the superiority of
deep features for SOD. (2) our method achieves higher Smeasure than other methods, especially on complex structure
datasets, e.g., the DUT-OMRON, SED and SOD datasets. We
attribute this result to our structural loss. (3) without segmentation pre-training, our method only fine-tuned from the image classification model still achieves better results than the
DCL and RFCN, especially on the HKU-IS and SED datasets.
(4) compared to the DHS and Amulet, our method is inferior on the DUTS-TE and PASCAL-S datasets. However, our
method ranks in the second place and is still very comparable.
Qualitative Evaluation. Fig. 2 provides several visual examples in various challenging cases, where our method outperforms other compared methods. For example, the images in
the first two rows are of very low contrast, where most of the
compared methods fail to capture the salient objects, while
our method successfully highlights them with sharper edges
preserved. The images in the 3-4 rows are challenging with
complex structures or salient objects near the image boundary, and most of the compared methods can not predict the
whole objects, while our method captures the whole salient
regions with preserved structures.
4.4
Ablation Analysis
We also evaluate the main components in our model. Tab.3
shows the experimental results with different model settings.
All models are trained on the augmented MSRA10K dataset
and share the same hyper-parameters described in subsection
4.2. Due to the limitation of space, we only show the results
on the ECSSD dataset. Other datasets show a similar performance trend. From the results, we can see that the SFCN only
using the channel concatenation operator without hierarchical
fusion (model (a)) has achieved comparable performance to
most deep learning methods. This confirms the effectiveness
of reflection features. With the hierarchical fusion, the resulting SFCN (model (b)) improves the performance by a large
margin. The main reason is that the fusion method introduces
more contextual information from high layers to low layers,
which helps to locate the salient objects. In addition, it’s no
wonder that training with the Lwbce loss achieves better results than Lbce . With other two losses Lsc and Ls1 , the model
achieves better performance in terms of MAE and S-measure.
These results demonstrate that individual components in our
model complement each other. When taking them together,
the model achieves best results in all evaluation metrics.
5
Conclusion
In this work, we propose a novel end-to-end feature learning
framework for SOD. Our method uses a symmetrical FCN
to learn complementary visual features under the guidance of
lossless feature reflection. For training, we also propose a
new weighted structural loss that integrates the location, semantic and contextual information of salient objects to boost
the detection performance. Extensive experiments on seven
large-scale saliency datasets demonstrate that the proposed
method achieves significant improvement over the baseline
and performs better than other state-of-the-art methods.
References
[Barron and Malik, 2012] Jonathan T Barron and Jitendra Malik.
Color constancy, intrinsic images, and shape estimation. In
ECCV, pages 57–70, 2012.
[Borji et al., 2015] Ali Borji, Ming-Ming Cheng, Huaizu Jiang, and
Jia Li. Salient object detection: A benchmark. IEEE TIP,
24(12):5706–5722, 2015.
[Borji, 2015] Ali Borji. What is a salient object? a dataset and a
baseline model for salient object detection. IEEE TIP, 24(2):742–
756, 2015.
[Deng et al., 2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li,
Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248–255, 2009.
[Everingham et al., 2010] Mark Everingham, Luc Van Gool,
Christopher K. I. Williams, John M. Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. IJCV,
88:303–338, 2010.
[Fan et al., 2017] Deng-Ping Fan, Ming-Ming Cheng, Yun Liu, Tao
Li, and Ali Borji. Structure-measure: A new way to evaluate
foreground maps. In ICCV, pages 4548–4557, 2017.
[Girshick et al., 2014] Ross Girshick, Jeff Donahue, Trevor Darrell,
and Jitendra Malik. Rich feature hierarchies for accurate object
detection and semantic segmentation. In CVPR, pages 580–587,
2014.
[Hou et al., 2017] Qibin Hou, Ming-Ming Cheng, Xiaowei Hu, Ali
Borji, Zhuowen Tu, and Philip Torr. Deeply supervised salient
object detection with short connections. In CVPR, pages 5300–
5309, 2017.
[Jia et al., 2014] Yangqing Jia, Evan Shelhamer, Jeff Donahue,
Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for
fast feature embedding. In ACM Multimedia, pages 675–678,
2014.
[Jiang et al., 2013] Huaizu Jiang, Jingdong Wang, Zejian Yuan,
Yang Wu, Nanning Zheng, and Shipeng Li. Salient object detection: A discriminative regional feature integration approach.
In CVPR, pages 2083–2090, 2013.
[Land and McCann, 1971] Edwin H Land and John J McCann.
Lightness and retinex theory. JOSA, 61(1):1–11, 1971.
[Lee et al., 2016] Gayoung Lee, Yu-Wing Tai, and Junmo Kim.
Deep saliency with encoded low level distance map and high
level features. In CVPR, pages 660–668, 2016.
[Li and Yu, 2015] Guanbin Li and Yizhou Yu. Visual saliency
based on multiscale deep features. In CVPR, pages 5455–5463,
2015.
Methods                          DUT-OMRON (Fη / MAE / Sλ)    DUTS-TE (Fη / MAE / Sλ)    ECSSD (Fη / MAE / Sλ)    HKU-IS-TE (Fη / MAE / Sλ)
Ours                             0.696 / 0.086 / 0.774        0.716 / 0.083 / 0.799      0.880 / 0.052 / 0.897    0.875 / 0.040 / 0.905
Amulet [Zhang et al., 2017a]     0.647 / 0.098 / 0.771        0.682 / 0.085 / 0.796      0.868 / 0.059 / 0.894    0.843 / 0.050 / 0.886
DCL [Li and Yu, 2016]            0.684 / 0.157 / 0.743        0.714 / 0.150 / 0.785      0.829 / 0.149 / 0.863    0.853 / 0.136 / 0.859
DHS [Liu and Han, 2016]          – / – / –                    0.724 / 0.066 / 0.809      0.872 / 0.060 / 0.884    0.854 / 0.053 / 0.869
DS [Li et al., 2016]             0.603 / 0.120 / 0.741        0.632 / 0.091 / 0.790      0.826 / 0.122 / 0.821    0.787 / 0.077 / 0.854
ELD [Lee et al., 2016]           0.611 / 0.092 / 0.743        0.628 / 0.098 / 0.749      0.810 / 0.080 / 0.839    0.776 / 0.072 / 0.823
LEGS [Wang et al., 2015]         0.592 / 0.133 / 0.701        0.585 / 0.138 / 0.687      0.785 / 0.118 / 0.787    0.732 / 0.118 / 0.745
MCDL [Zhao et al., 2015]         0.625 / 0.089 / 0.739        0.594 / 0.105 / 0.706      0.796 / 0.101 / 0.803    0.760 / 0.091 / 0.786
MDF [Li and Yu, 2015]            0.644 / 0.092 / 0.703        0.673 / 0.100 / 0.723      0.807 / 0.105 / 0.776    0.802 / 0.095 / 0.779
RFCN [Wang et al., 2016]         0.627 / 0.111 / 0.752        0.712 / 0.090 / 0.784      0.834 / 0.107 / 0.852    0.838 / 0.088 / 0.860
UCF [Zhang et al., 2017b]        0.621 / 0.120 / 0.748        0.635 / 0.112 / 0.777      0.844 / 0.069 / 0.884    0.823 / 0.061 / 0.874
BL [Tong et al., 2015]           0.499 / 0.239 / 0.625        0.490 / 0.238 / 0.615      0.684 / 0.216 / 0.714    0.666 / 0.207 / 0.702
BSCA [Qin et al., 2015]          0.509 / 0.190 / 0.652        0.500 / 0.196 / 0.633      0.705 / 0.182 / 0.725    0.658 / 0.175 / 0.705
DRFI [Jiang et al., 2013]        0.550 / 0.138 / 0.688        0.541 / 0.175 / 0.662      0.733 / 0.164 / 0.752    0.726 / 0.145 / 0.743
DSR [Li et al., 2013]            0.524 / 0.139 / 0.660        0.518 / 0.145 / 0.646      0.662 / 0.178 / 0.731    0.682 / 0.142 / 0.701
Table 1: Quantitative comparison with 15 methods on 4 large-scale datasets. The best three results are shown in red, green and blue,
respectively. “–” means corresponding methods are trained on that dataset. Our method ranks first or second on these datasets.
Methods                          PASCAL-S (Fη / MAE / Sλ)    SED1 (Fη / MAE / Sλ)       SED2 (Fη / MAE / Sλ)     SOD (Fη / MAE / Sλ)
Ours                             0.772 / 0.104 / 0.809       0.913 / 0.048 / 0.905      0.871 / 0.048 / 0.870    0.789 / 0.123 / 0.772
Amulet [Zhang et al., 2017a]     0.768 / 0.098 / 0.820       0.892 / 0.060 / 0.893      0.830 / 0.062 / 0.852    0.745 / 0.144 / 0.753
DCL [Li and Yu, 2016]            0.714 / 0.181 / 0.791       0.855 / 0.151 / 0.845      0.795 / 0.157 / 0.760    0.741 / 0.194 / 0.748
DHS [Liu and Han, 2016]          0.777 / 0.095 / 0.807       0.888 / 0.055 / 0.894      0.822 / 0.080 / 0.796    0.775 / 0.129 / 0.750
DS [Li et al., 2016]             0.659 / 0.176 / 0.739       0.845 / 0.093 / 0.859      0.754 / 0.123 / 0.776    0.698 / 0.189 / 0.712
ELD [Lee et al., 2016]           0.718 / 0.123 / 0.757       0.872 / 0.067 / 0.864      0.759 / 0.103 / 0.769    0.712 / 0.155 / 0.705
LEGS [Wang et al., 2015]         – / – / –                   0.854 / 0.103 / 0.828      0.736 / 0.124 / 0.716    0.683 / 0.196 / 0.657
MCDL [Zhao et al., 2015]         0.691 / 0.145 / 0.719       0.878 / 0.077 / 0.855      0.757 / 0.116 / 0.742    0.677 / 0.181 / 0.650
MDF [Li and Yu, 2015]            0.709 / 0.146 / 0.692       0.842 / 0.099 / 0.833      0.800 / 0.101 / 0.772    0.721 / 0.165 / 0.674
RFCN [Wang et al., 2016]         0.751 / 0.132 / 0.799       0.850 / 0.117 / 0.832      0.767 / 0.113 / 0.784    0.743 / 0.170 / 0.730
UCF [Zhang et al., 2017b]        0.735 / 0.115 / 0.806       0.865 / 0.063 / 0.896      0.810 / 0.068 / 0.846    0.738 / 0.148 / 0.762
BL [Tong et al., 2015]           0.574 / 0.249 / 0.647       0.780 / 0.185 / 0.783      0.713 / 0.186 / 0.705    0.580 / 0.267 / 0.625
BSCA [Qin et al., 2015]          0.601 / 0.223 / 0.652       0.805 / 0.153 / 0.785      0.706 / 0.158 / 0.714    0.584 / 0.252 / 0.621
DRFI [Jiang et al., 2013]        0.618 / 0.207 / 0.670       0.807 / 0.148 / 0.797      0.745 / 0.133 / 0.750    0.634 / 0.224 / 0.624
DSR [Li et al., 2013]            0.558 / 0.215 / 0.594       0.791 / 0.158 / 0.736      0.712 / 0.141 / 0.715    0.596 / 0.234 / 0.596
Table 2: Quantitative comparison with 15 methods on 4 complex structure image datasets. The best three results are shown in red, green and
blue, respectively. “–” means corresponding methods are trained on that dataset. Our method ranks first or second on these datasets.
Figure 2: Comparison of typical saliency maps. (a) Input images; (b) Ground truth; (c) Ours; (d) Amulet; (e) DCL; (f) DHS; (g) ELD; (h)
MCDL; (i) MDF; (j) RFCN; (k) UCF. Due to the limitation of space, we don’t show the results of DS, LEGS, BL, BSCA, DRFI and DSR.
We will release the saliency maps of all compared methods upon the acceptance.
[Figure 3: The PR curves of the proposed algorithm and other state-of-the-art methods on (a) DUT-OMRON, (b) DUTS-TE, (c) ECSSD, (d) HKU-IS-TE, (e) PASCAL-S, (f) SED1, (g) SED2 and (h) SOD. Each panel plots Precision against Recall for Amulet, BL, BSCA, DCL, DHS, DRFI, DS, DSR, ELD, LEGS, MCDL, MDF, Ours, RFCN and UCF.]
Models                      Fη       MAE      Sλ
(a) SFCN-hf + Lbce          0.824    0.102    0.833
(b) SFCN + Lbce             0.848    0.083    0.859
(c) SFCN + Lwbce            0.865    0.072    0.864
(d) SFCN + Lwbce + Lsc      0.873    0.061    0.880
(e) SFCN + Lwbce + Ls1      0.867    0.049    0.882
The overall                 0.880    0.052    0.897
Table 3: Results with different model settings on the ECSSD dataset. The best three results are shown in red, green and blue, respectively.
[Li and Yu, 2016] Guanbin Li and Yizhou Yu. Deep contrast learning for salient object detection. In CVPR, pages 478–487, 2016.
[Tong et al., 2015] Na Tong, Huchuan Lu, Xiang Ruan, and Ming-Hsuan Yang. Salient object detection via bootstrap learning. In CVPR, pages 1884–1892, 2015.
[Li et al., 2013] Xiaohui Li, Huchuan Lu, Lihe Zhang, Xiang Ruan, and Ming-Hsuan Yang. Saliency detection via dense and sparse reconstruction. In ICCV, pages 2976–2983, 2013.
[Wang et al., 2015] Lijun Wang, Huchuan Lu, Xiang Ruan, and Ming-Hsuan Yang. Deep networks for saliency detection via local estimation and global search. In CVPR, pages 3183–3192, 2015.
[Li et al., 2014] Yin Li, Xiaodi Hou, Christof Koch, James Rehg, and Alan Yuille. The secrets of salient object segmentation. In CVPR, pages 280–287, 2014.
[Li et al., 2016] Xi Li, Liming Zhao, Lina Wei, Ming-Hsuan Yang,
Fei Wu, Yueting Zhuang, Haibin Ling, and Jingdong Wang.
Deepsaliency: Multi-task deep neural network model for salient
object detection. IEEE TIP, 25(8):3919–3930, 2016.
[Liu and Han, 2016] Nian Liu and Junwei Han. Dhsnet: Deep hierarchical saliency network for salient object detection. In CVPR,
pages 678–686, 2016.
[Long et al., 2015] Jonathan Long, Evan Shelhamer, and Trevor
Darrell. Fully convolutional networks for semantic segmentation.
In CVPR, pages 3431–3440, 2015.
[Qin et al., 2015] Yao Qin, Huchuan Lu, Yiqun Xu, and He Wang.
Saliency detection via cellular automata. In CVPR, pages 110–
119, 2015.
[Shen and Yeo, 2011] Li Shen and Chuohao Yeo. Intrinsic images
decomposition using a local and global sparse representation of
reflectance. In CVPR, pages 697–704, 2011.
[Shi et al., 2016] Jianping Shi, Qiong Yan, Li Xu, and Jiaya Jia.
Hierarchical image saliency detection on extended cssd. IEEE
TPAMI, 38(4):717–729, 2016.
[Simonyan and Zisserman, 2014] Karen Simonyan and Andrew
Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
[Wang et al., 2016] Linzhao Wang, Lijun Wang, Huchuan Lu,
Pingping Zhang, and Xiang Ruan. Saliency detection with recurrent fully convolutional networks. In ECCV, pages 825–841,
2016.
[Wang et al., 2017a] Lijun Wang, Huchuan Lu, Yifan Wang,
Mengyang Feng, Dong Wang, Baocai Yin, and Xiang Ruan.
Learning to detect salient objects with image-level supervision.
In CVPR, pages 136–145, 2017.
[Wang et al., 2017b] Tiantian Wang, Ali Borji, Lihe Zhang, Pingping Zhang, and Huchuan Lu. A stagewise refinement model for
detecting salient objects in images. In ICCV, pages 4019–4028,
2017.
[Wu et al., 2015] Xiang Wu, Ran He, and Zhenan Sun. A lightened
cnn for deep face representation. arXiv:1511.02683, 2015.
[Xie and Tu, 2015] Saining Xie and Zhuowen Tu. Holistically-nested edge detection. In ICCV, pages 1395–1403, 2015.
[Yang et al., 2013] Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, and Ming-Hsuan Yang. Saliency detection via graph-based manifold ranking. In CVPR, pages 3166–3173, 2013.
[Zhang et al., 2017a] Pingping Zhang, Dong Wang, Huchuan Lu,
Hongyu Wang, and Xiang Ruan. Amulet: Aggregating multilevel convolutional features for salient object detection. In ICCV,
pages 202–211, 2017.
[Zhang et al., 2017b] Pingping Zhang, Dong Wang, Huchuan Lu,
Hongyu Wang, and Baocai Yin. Learning uncertain convolutional
features for accurate saliency detection. In ICCV, pages 212–221,
2017.
[Zhao et al., 2012] Qi Zhao, Ping Tan, Qiang Dai, Li Shen, Enhua
Wu, and Stephen Lin. A closed-form solution to retinex with nonlocal texture constraints. IEEE TPAMI, 34(7):1437–1444, 2012.
[Zhao et al., 2015] Rui Zhao, Wanli Ouyang, Hongsheng Li, and
Xiaogang Wang. Saliency detection by multi-context deep learning. In CVPR, pages 1265–1274, 2015.
| 1 |
Deep Private-Feature Extraction
Seyed Ali Osia, Ali Taheri, Ali Shahin Shamsabadi, Kleomenis Katevas, Hamed Haddadi, Hamid R. Rabiee
arXiv:1802.03151v2 [stat.ML] 28 Feb 2018
Abstract—We present and evaluate Deep Private-Feature Extractor (DPFE), a deep model which is trained and evaluated based on
information theoretic constraints. Using the selective exchange of information between a user’s device and a service provider, DPFE
enables the user to prevent certain sensitive information from being shared with a service provider, while allowing them to extract
approved information using their model. We introduce and utilize the log-rank privacy, a novel measure to assess the effectiveness of
DPFE in removing sensitive information and compare different models based on their accuracy-privacy tradeoff. We then implement
and evaluate the performance of DPFE on smartphones to understand its complexity, resource demands, and efficiency tradeoffs.
Our results on benchmark image datasets demonstrate that under moderate resource utilization, DPFE can achieve high accuracy for
primary tasks while preserving the privacy of sensitive information.
Index Terms—Feature Extraction, Privacy, Information Theory, Deep Learning.
1
INTRODUCTION
The increasing collection of personal data generated
by, or inferred from, our browsing habits, wearable
devices, and smartphones, alongside the emergence of the
data from the Internet of Things (IoT) devices are fueling
many new classes of applications and services. These include healthcare and wellbeing apps, financial management
services, personalized content recommendations, and social
networking tools. Many of these systems and apps rely on
data sensing and collection at the user side, and uploading
the data to the cloud for consequent analysis.
While many of the data-driven services and apps are
potentially beneficial, the underlying unvetted and opaque
data collection and aggregation protocols can cause excessive resource utilization (i.e., bandwidth and energy) [1],
and more importantly data security threats and privacy
risks [2]. Collection and processing of private information on the cloud introduces a number of challenges and
tradeoffs, especially when scalability of data collection and
uploading practice are taken into consideration. The data
is often fed into machine learning models for extracting
insights and features of commercial interest, where the information is exposed to data brokers and service providers.
While certain features of the data can be of interest for
specific applications (e.g., location-based services, or mobile
health applications), the presence of additional information
in the data can lead to unintended subsequent privacy
leakages [3], [4]. Current solutions to this problem, such
as cryptography [5], [6], complete data isolation and local
processing [7] are not efficient for big data and techniques
relying on deep learning [8]. In today’s data-driven ecosystem, these privacy challenges are an inherent side effect of
many big data and machine learning applications.
•
•
•
Seyed Ali Osia, Ali Taheri and Hamid R. Rabiee are with the Advanced
ICT Innovation Center, Department of Computer Engineering, Sharif
University of Technology, Iran.
Ali Shahin Shamsabadi and Kleomenis Katevas are with the School of
Electronic Engineering and Computer Science, Queen Mary University
of London.
Hamed Haddadi is with the Faculty of Engineering, Imperial College
London.
Client-Side
User
Data
Feature
Extractor
Cloud-Side
Private
Feature
Analyzer
Output
Fig. 1: The proposed hybrid framework for user-cloud collaboration.
In this paper, we focus on providing privacy at the first
step of this ecosystem: the exchange of acquired user data
between the end user and a service provider. We propose a
novel solution based on a compromise between scalability
and privacy. The proposed framework is based on the idea
that when preparing data for subsequent analysis by service
provider, the end user does not need to hide all the information by means of cryptographic methods, which can be
resource-hungry or overly complex for the end-user device.
Instead, it might suffice to remove the sensitive parts of the
information (e.g., identity features in a face image), while
at the same time preserving the necessary information for
further analysis. This is also the case in many surveillance
applications where a central node is required to process user
data that may be sensitive in some aspects.
The proposed hybrid framework in which the user and
cloud collaborate to analyze the raw user data in a private
and efficient manner is depicted in figure 1. Our work relies
on the assumption that the service provider releases a publicly verifiable feature extractor module based on an initial
training set. The user then performs a minimalistic analysis
and extracts a private-feature from the data and sends it to
the service provider (i.e., the cloud) for subsequent analysis.
The private-feature is then analyzed in the cloud and the result is returned to the user. The fundamental challenge in using this framework is the design of a feature extractor module that removes the sensitive information properly while not harming scalability by imposing heavy computational requirements on the user's device.
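As a concrete illustration of this collaboration, the following minimal Python sketch (with hypothetical functions and weights, not the models released with this work) separates the two roles: the user device runs only a small feature extractor and transmits the resulting private-feature, while the service provider runs the heavier analyzer on that feature.

import numpy as np

def feature_extractor(x, W):
    # Client-side: a deliberately small, publicly verifiable mapping
    # from raw data x to a low-dimensional private-feature.
    return np.tanh(W @ x)

def cloud_analyzer(f, V):
    # Cloud-side: the service provider's (possibly much larger) model
    # operates only on the private-feature, never on the raw data.
    scores = V @ f
    return np.argmax(scores)

rng = np.random.default_rng(0)
x = rng.normal(size=1024)          # raw user data (e.g. a flattened image)
W = rng.normal(size=(10, 1024))    # released feature-extractor weights (assumed)
V = rng.normal(size=(5, 10))       # provider-side analyzer weights (assumed)

f = feature_extractor(x, W)        # computed on the user's device
result = cloud_analyzer(f, V)      # computed in the cloud
print(f.shape, result)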
In the rest of this paper, we first discuss the privacy
issues in different aspects of machine learning and consider
the problem of “user data privacy in interaction with cloud
services” as the main purpose of this paper. To design the
feature extractor module, we express our privacy preservation concerns in an optimization problem based on mutual
information and relax it to be addressable by deep learning.
We then present the Deep Private-Feature Extractor (DPFE),
a tool for solving the aforementioned relaxed problem. We
then propose a new privacy measure, the log-rank privacy,
to verify the proposed feature extractor, measure its privacy,
and evaluate the efficiency of the model in removing sensitive information. The log-rank privacy can be interpreted
from different perspectives, including entropy, k-anonymity
and classification error. We evaluate this framework under
the facial attribute prediction problem by using face images.
In this context, we remove the face identity information
while keeping facial attribute information, and analyze the
privacy-accuracy performance tradeoff. Finally, we implement different private-feature extractors on a mobile phone to compare the performance of different solutions and address the scalability concern.
The main contributions of this paper are: (i) Proposing
a hybrid user-cloud framework for the user data privacy
preservation problem which utilizes a private-feature extractor as its core component; (ii) Designing the private-feature extractor based on information theoretic concepts
leading to an optimization problem (Section 3); (iii) Proposing a deep neural network architecture to solve the optimization problem (Section 4); (iv) Proposing a measure to
evaluate privacy and verify the feature extractor module
(Section 5).1

1. All the code and models for the paper are available on https://github.com/aliosia/DPFE
2 PRIVACY IN MACHINE LEARNING
Machine learning methods need to analyze sensitive data in many use cases to perform their desired tasks, which may violate users' privacy. This fundamental dichotomy appears in different aspects of machine learning, as listed in Figure 2. These concerns can be classified as public dataset privacy, training phase privacy, training-data privacy, model privacy and user data privacy, which are discussed in the rest of this section.

Fig. 2: Privacy concerns may exist when: (i) a data holder shares a public dataset: the anonymity of individuals is threatened; (ii) data holders participate in a model training procedure with their private data; (iii) a model provider shares a publicly-learned model: the privacy of the individuals' data used for training is at risk; (iv) an end user shares his/her data with the service provider: private information can be revealed to the service provider; (v) a service provider shares query answers with the end user: an attacker can infer the model itself by launching repeated queries.

2.1 Public Dataset Privacy
Training data is the crucial component of each learning
system. Collecting and sharing rich datasets for data mining
tasks can be highly beneficial to the learning community,
although it might come with privacy concerns that make it
a double-edged sword. Publishing a dataset that satisfies
both parties by preserving the users’ privacy and other
useful information for data mining tasks, is a challenging
problem with a long line of work. Agrawal and Srikant [9]
were some of the first to address the privacy concern in
data mining for sharing a generic dataset for learning tasks,
in addition to considering users’ privacy. They utilized a
randomization technique, in which by adding noise to data
they guaranteed its privacy. The resulting distribution of
noisy data might have been different from the original
distribution. To reconstruct the original distribution, a recovery method was introduced in the paper and extended
by Agrawal et al. in [10]. By utilizing this method, it is
possible to train a learning model on reconstructed data
with the same distribution as the original data. Many works
have followed this trend and extended this idea; however, this approach faces two important obstacles, the curse of dimensionality and non-robustness to attacks [11], which make it inefficient for high-dimensional data with side information.
k-anonymity is another popular option for addressing the problem of anonymous dataset publishing, first introduced by Sweeney [12]. Publishing a health database that contains sensitive patient information is one of the favored instances of k-anonymity usage. Assuming all data points have identity documents (IDs) that should be kept private, k-anonymity deals with transforming a dataset in such a way that, given an individual's data features, one cannot narrow down its ID to fewer than k candidate identities. Many approaches have been presented to make a database k-anonymous [13], [14], [15], [16], and most of them are based on generalization (e.g. removing the last digit of the patient zip code) or suppression of features (e.g. removing the name). Nevertheless, this approach faces important challenges under attacks [11], although [17], [18] and [19] tried to overcome these challenges. Furthermore, these methods are only well-suited for structured databases with high-level features (e.g. relational databases), which makes them hard to deploy for other types of data (e.g. images and video). Newton et al. [20] published a k-anonymous image dataset by proposing the k-same algorithm. While they built the desired dataset by constructing average images among k identities, their employed models are not reliable today.
2.2 Training Phase Privacy
A common problem of centralized learning is the collection
of training data, especially when dealing with individual’s
sensitive data (e.g. health information). People are usually
reluctant to share data that includes their habits, interests, and geographical positions. An emerging solution to this
problem is federated learning, where data holders keep their
data private, while they communicate with a central node in
order to train a learning model in a cooperative manner.
[21] tried to address this problem by using distributed
stochastic gradient descent (SGD), where each party loads
the latest parameters, updates them using SGD, and uploads the newly selected parameters to the central node that holds
the global model. While in that case direct leakage of private
data can be prevented, the uploaded gradients might still
include sensitive information from the training data. Thus,
a differentially private algorithm is required for sharing the
gradients which is proposed in that work. This approach
still has some major problems, e.g. a loose privacy bound, addressed by [22], and potential threats from generative adversarial networks, addressed by [23]. An alternative solution
to this problem could be the use of cryptographic techniques like secure multi-party computation, recently used
by [24]. However, these techniques are still not applicable
on complex neural networks, due to their low efficiency and
accuracy.
2.3 Training-Data Privacy
The growing popularity of public learning models raises
the concern of privacy of the individuals involved in the
training dataset. Differentially private algorithms brought
us a rigorous answer to this problem, by providing a method
to answer queries from a statistical database, without disclosing individuals’ information, as formalized by [25]. An
algorithm is called differentially private if the conditional
likelihood ratio of presence and absence of an individual,
given the transformed statistic, is close to one. Adding noise
to the original statistic is one popular method leading to
differential privacy. We can consider a learning model as
a complex statistic of its training data which should not
reveal information about the individuals. Answering complex queries by combining simple queries is the way various
learning models, such as Principal Component Analysis and
k-means, can be made differentially private (see the surveys
by [26] and [27]). Recently differentially private deep models
were proposed by [28]. The authors in [22] introduced
a privacy preservation framework by utilizing differential
privacy, which is not specific to the learning model and possesses a state-of-the-art privacy-accuracy tradeoff.
2.4 Model Privacy
Model privacy is the concern of the service provider and
deals with keeping the learning model private, while returning the inference results to the user. Throughout these years,
less attention has been paid to model privacy, although
some works such as [29] studied this problem. In general, an
adversary can infer the model parameters by making many
queries to the learning model and aggregate the answers.
[29] considered this approach for some basic models, e.g.
logistic regression, multilayer perceptron and decision tree.
2.5 User Data Privacy
The increasing usage of cloud-based systems has triggered
a situation where preserving privacy is a challenging but
important task. That is, when the user data and the pre-trained learning model are not accessible from the same place,
inevitably user data must be sent to the service provider
for further analysis. Usually, cryptographic schemes are
prevalent in these situations, where two parties do not
trust each other. Focusing on the deep models offered by
a cloud service, [30] introduced this problem and proposed
a homomorphic encryption method to execute the inference
directly on the encrypted data. Even though this work is
an interesting approach to the problem, a number of shortcomings make it impractical. In fact, approximating a deep neural network with a low-degree polynomial function may not be feasible without sacrificing accuracy. Furthermore, the complexity of the encryption is relatively high, which makes it inefficient for real-world online applications.
An alternative to homomorphic encryption was suggested
by [31]. They used the garbled circuit protocol and addressed some of the discussed challenges; however, they were limited to employing simple neural networks and had very high computational costs.
In summary, using cryptographic techniques on complex
deep neural networks is not feasible yet, while the problem
of user data privacy is getting more and more important
every day in the cloud computing era. In this paper we target this challenge and try to address it with a machine
learning solution, based on a specific kind of feature extraction model, formulated in the next section.
3 PROBLEM FORMULATION
In this section, we address the user data privacy challenge in
a different manner from encryption-based methods. The key
intuition is that for many applications we can remove all of
the user’s sensitive (unauthorized) information while retaining the ability to infer the primary (authorized) information.
This is as opposed to encryption-based solutions that try
to encode all the information such that only authorized
users can access it. For instance, we may want to focus
on hiding individuals’ identities in a video surveillance
system, but still allow counting the number of participants. In this scenario, a trivial solution is to censor people's faces in the frames; however, this solution fails when the purpose is to measure facial attributes such as emotion
or gender. Henceforth, we address this problem as a privacy
preservation problem and use the terms primary and sensitive
information as the information needed to be preserved
and removed, respectively. Assuming the service provider
knows the primary and sensitive random variables, we
abstract this concept as an optimization problem by utilizing mutual information (see Appendix A for information
theoretic preliminaries).
Let x be the input, z the primary, and y the sensitive
variables. We would like to extract a feature f, by applying
a function g on x, which is informative about the primary
variable and non-informative about the sensitive variable.
We refer to the extracted feature as private-feature. More
specifically, the desired private-feature is obtained through
maximizing mutual information between the feature and
primary variable I(f; z), while minimizing mutual information between the feature and sensitive variable I(f; y), as
follows:
$$\max_{f}\; I(f;z) - \beta\, I(f;y) \quad \text{s.t.}\quad f = g(x)$$
where I(A; B) represents the mutual information between two random variables A and B.
Fig. 3: Private-feature extraction probabilistic graphical model.
Even though at first glance it seems that the optimal solution of this problem is to set f equal to the best estimate of z, this is not applicable in many real-world applications because: (a) the optimal model which perfectly predicts z can be too complicated, and hence using such a feature extractor on the client side is impossible; and (b) the service provider may not share the whole model with the client for reasons such as copyright issues. Assuming we can accurately estimate f by using a member of a family
of functions G = {g(x; θ)|θ ∈ Θ}, then the optimization
problem becomes:
$$\max_{\theta}\; I(f;z) - \beta\, I(f;y) \quad \text{s.t.}\quad f = g(x;\theta) \tag{1}$$
where f is a deterministic function of the input variable,
parameterized by θ. The graphical model of this problem is
shown in Figure 3.
Optimizing mutual information has been widely used in
many information theoretic approaches of machine learning
problems. The authors in [32] formulated the Infomax and
tried to address the problem of unsupervised deterministic invertible feature extraction by maximizing the mutual
information between the input and feature. [33] relaxed
the limiting invertibility constraint and used a variational
approach which leads to the IM algorithm for maximizing the mutual information. Recently, [34] used a similar
method to maximize the mutual information in generative
adversarial networks. These works can be considered the fundamental works on unsupervised feature extraction from an information-theoretic viewpoint; however, since we are utilizing a supervised approach, those methods cannot be applied to our case. Among works considering supervised feature extraction, the information bottleneck introduced in [35] is the most relevant. In general, the information bottleneck provides an information-theoretic framework for analyzing the supervised feature extraction procedure. Although their optimization problem looks similar to ours, there is a fundamental difference between the two approaches. More specifically, they use I(f; x) instead of I(f; y), meaning that information irrelevant to z should be removed by minimizing I(f; x) in the process of feature extraction. Therefore, they cannot directly consider the privacy constraints about y. Moreover,
their optimization problem is solved through an analytical
approach that assumes that the joint probability distribution
p(x, z) is known. However, in practice this distribution is often unavailable. Although their analytical method is impractical, their framework nevertheless provides a powerful tool for the analysis of supervised feature extraction methods.
Similar to the information bottleneck optimization problem, the private-feature extraction problem (Equation 1) is non-convex and cannot be solved through known convex optimization algorithms. To overcome this challenge, it is common to bound the optimization problem and then, by using iterative methods similar to [33] and [36], obtain the desired results. To this end, we first obtain a lower bound for I(f; z) and an upper bound for I(f; y), and then try to maximize the lower bound of Equation 1. Henceforth, we assume y to be a discrete sensitive variable in order to address the classification privacy problem.
Lower bound for I(f; z). We derive a variational lower bound for the mutual information by first expressing Lemma 1 and then proving Theorem 2.
Lemma 1. For any arbitrary conditional distribution q(z|f), we have:
$$I(f;z) \ge \mathbb{E}_{f,z}\left[\log \frac{q(z|f)}{p(z)}\right] \tag{2}$$
Proof. See Appendix B.1.
Theorem 2. The lower bound L for I(f; z) is given by:
$$L = H(z) + \max_{\phi}\; \mathbb{E}_{f,z}\left[\log q(z|f;\phi)\right] \tag{3}$$
Proof. For all members of a parametric family of distributions {q(z|f; φ) | φ ∈ Φ}, the right-hand side of Equation 2 can be considered a lower bound for the mutual information. Equality holds when q(z|f) is equal to p(z|f). Therefore, if we consider a rich family of distributions for q in which some member can approximate p(z|f) well enough, we can obtain a sufficiently tight lower bound for the mutual information by maximizing the right-hand side of Equation 2 with respect to φ. By utilizing the definition of entropy, we obtain L as the desired lower bound.
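In practice, maximizing the inner expectation of Equation 3 over φ amounts to minimizing the cross-entropy of a classifier q(z|f; φ) that predicts the primary variable from the feature; the following NumPy sketch (with an assumed softmax form for q) computes a Monte Carlo estimate of that expectation.

import numpy as np

def log_q(z, f, phi):
    # Softmax classifier q(z|f; phi): a stand-in for the variational
    # distribution in Theorem 2 (the weights phi are an assumption here).
    logits = f @ phi
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return log_probs[np.arange(len(z)), z]

rng = np.random.default_rng(0)
N, d, cz = 256, 10, 4
f = rng.normal(size=(N, d))        # extracted features
z = rng.integers(0, cz, size=N)    # primary labels
phi = rng.normal(size=(d, cz))

# Monte Carlo estimate of E_{f,z}[log q(z|f; phi)]; maximizing it over phi
# is the same as minimizing the usual cross-entropy loss of the z-predictor.
print(log_q(z, f, phi).mean())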
Upper bound for I(f; y). A two-step procedure can be used to find an upper bound for the mutual information. First, we use Lemma 3 and the Jensen inequality to prove Theorem 4, and obtain U1 as a primitive upper bound for I(f; y). Then we use kernel density estimation (KDE) (see [37]) and Lemmas 5 and 6 to obtain U2 as the desired upper bound for I(f; y) through Theorem 7.
Lemma 3. Assume y is a discrete random variable with {y_a | 1 ≤ a ≤ c_y} as its range; then:
$$I(f;y) = \sum_{a} p(y_a) \int p(f|y_a)\, \log \frac{p(f|y_a)}{\sum_{b} p(y_b)\, p(f|y_b)}\; df$$
Proof. By substituting p(f, y)/(p(f)p(y)) with p(f|y)/p(f), and then p(f) with \sum_b p(y_b) p(f|y_b), in the definition of I(f; y), we obtain the desired relation.

By utilizing the Jensen inequality (see Appendix A) and manipulating Lemma 3, we can compute U1 as a primitive upper bound for the mutual information, as follows.

Theorem 4. The upper bound U1 for I(f; y) is given by:
$$U_1 = \sum_{a} \sum_{b:\, b \neq a} p(y_a)\, p(y_b)\, D_{KL}\!\left(p(f|y_a)\,\|\,p(f|y_b)\right) \tag{4}$$
Proof. See Appendix B.2.

Since computing U1 by Equation 4 is not tractable, we use an approximation technique to obtain the upper bound. By employing kernel density estimation, we can efficiently estimate p(f) [38]. We then utilize Silverman's rule of thumb [39] and use a Gaussian kernel with the desired diagonal covariance matrix. Next, by normalizing each dimension of the feature space to have zero mean and unit variance, we acquire a symmetric Gaussian kernel with a fixed covariance matrix, σI, where σ is a constant depending on the dimensionality of the feature space and the size of the training data. This kind of normalization is a common process in machine learning [40] and is agnostic to relations among different dimensions, including independence and correlation. Finally, conditioning on y, for each y_a we can think of p(f|y_a) as a Gaussian Mixture Model (GMM) (see [37]) and use the following lemmas from [41] to obtain a reliable upper bound.

Lemma 5. [41] For two multidimensional Gaussian distributions, p and q, with µ_p and µ_q as their expected values and the same covariance matrix σI, we have:
$$D_{KL}(p\,\|\,q) = \frac{1}{2\sigma}\, \|\mu_p - \mu_q\|_2^2$$

Lemma 6. [41] For two given GMMs p = \sum_a \pi_a p_a and q = \sum_b \omega_b q_b, we have:
$$D_{KL}(p\,\|\,q) \le \sum_{a,b} \pi_a\, \omega_b\, D_{KL}(p_a\,\|\,q_b)$$
where, for each a and b, p_a and q_b are Gaussian distributions forming the mixtures.

We can use Theorem 4, Lemma 5 and Lemma 6 to derive the desired upper bound for I(f; y).

Theorem 7. Having large training data, the upper bound U2 for I(f; y) is given by:
$$U_2 = \frac{1}{\sigma N^2} \sum_{(i,j):\, y_i \neq y_j} \|f_i - f_j\|_2^2 \tag{5}$$
where f_i is the feature extracted from data point x_i, and y_i is its corresponding label. The sum is over pairs of points with different y labels.
Proof. See Appendix B.3.

In other words, U2 is an upper bound that is proportional to the average Euclidean distance between pairs of feature vectors having different y labels. This value is practically hard to estimate, especially when we use SGD and have a large number of classes. Therefore, as stated in Theorem 8 and Corollary 9, we use an equivalent relation for U2 which is easier to optimize.

Theorem 8. Constraining the variance of each dimension of the feature space to be 1, we have:
$$U_2 = \frac{1}{\sigma N^2} \sum_{(i,j):\, y_i = y_j} \left(c - \|f_i - f_j\|_2^2\right) \tag{6}$$
where c is a constant function of the feature space dimension and the number of training data.
Proof. See Appendix B.4.

Corollary 9. We can optimize the right-hand side of Equation 6 instead of Equation 5 to obtain U2.

Considering Corollary 9 together with Theorem 8, we realize that for a random pair of feature points, we should decrease their distance if the y labels are different, and increase their distance if they are the same. This is very similar to the contrastive loss idea presented in [42], which is a popular loss function for the Siamese architecture [43]. Siamese networks are used for metric learning purposes and tend to form a feature space in which similar points are gathered near each other. This is the opposite of what we aim to achieve: increase the distance of similar points and decrease the distance of dissimilar points.

By utilizing the suggested lower and upper bounds, we can substitute the original private-feature extraction problem (Equation 1) with the following relaxed problem:
$$\min_{\theta,\phi}\; \sum_{i} -\log q(z_i|f_i;\phi) \;+\; \frac{\beta}{2\sigma N^2}\left[\, \sum_{(i,j):\, y_i \neq y_j} \|f_i - f_j\|_2^2 \;+\; \sum_{(i,j):\, y_i = y_j} \left(c - \|f_i - f_j\|_2^2\right) \right] \quad \text{s.t.}\; f_i = g(x_i;\theta) \tag{7}$$
Considering the above equation, we should optimize an objective function that consists of two loss functions: the loss for preserving the primary variable, modeled by a classification loss (first term), and the loss for eliminating the sensitive variable, modeled by a contrastive loss (second term). Thus, the general training framework of the private-feature extractor contains three main modules: a feature extractor, a primary variable predictor and a sensitive variable remover, as shown in Figure 4. Note that, according to the second term of Equation 7, the loss function for removing the sensitive variable is defined on pairs of samples, and as a result the y-remover module also operates on pairs of features.
We propose a general deep model along with SGD-based
optimizers to solve the optimization problem in Equation 7,
as explained in the next section.
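To make the two terms of Equation 7 concrete, here is a minimal NumPy sketch that evaluates the relaxed objective on a batch of already-extracted features. It is illustrative only: the softmax form of the predictor and the values of beta, sigma and c are assumptions, and the paper's actual training uses a deep network.

import numpy as np

def dpfe_objective(f, z, y, phi, beta, sigma, c):
    """Monte Carlo estimate of the relaxed objective in Equation 7 (sketch)."""
    N = len(f)
    # First term: classification loss of the primary-variable predictor.
    logits = f @ phi
    logits -= logits.max(axis=1, keepdims=True)
    log_q = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    primary_loss = -log_q[np.arange(N), z].sum()

    # Second term: pairwise sensitive-variable removal (contrastive-style):
    # pull together pairs with different sensitive labels, push apart
    # (up to the constant c) pairs with the same sensitive label.
    sq_dist = ((f[:, None, :] - f[None, :, :]) ** 2).sum(-1)
    same = (y[:, None] == y[None, :]) & ~np.eye(N, dtype=bool)
    diff = (y[:, None] != y[None, :])
    removal_loss = sq_dist[diff].sum() + (c - sq_dist[same]).sum()

    return primary_loss + beta / (2 * sigma * N ** 2) * removal_loss

rng = np.random.default_rng(0)
N, d, cz = 64, 10, 4
f = rng.normal(size=(N, d))          # private-features g(x; theta)
z = rng.integers(0, cz, size=N)      # primary labels
y = rng.integers(0, 20, size=N)      # sensitive labels (e.g. identities)
phi = rng.normal(size=(d, cz))
print(dpfe_objective(f, z, y, phi, beta=1.0, sigma=1.0, c=4.0))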
4 DEEP ARCHITECTURE
By utilizing the latest breakthroughs in the area of deep neural networks, we can practically find good local optima
of non-convex objective functions through SGD based algorithms, and accurately estimate complex non-linear functions. Today, a large portion of state of the art learning
models are deep. Therefore, having a general framework
for privacy preserving deep inference is necessary. In this
paper, we focus on image data (in the context of identity vs. gender, expression, or age recognition), and propose a
deep architecture based on CNN (Fig. 5) to optimize the
objective function of the relaxed problem (Equation 7). It
is worth mentioning that the proposed framework can be
generalized to other applications and deep architectures
(e.g. recurrent neural networks).
We call the proposed architecture the Deep Private-Feature Extractor (DPFE). We consider two consecutive CNNs: one as the feature extractor and the other as the primary variable predictor. A simple strategy for building these modules is the layer separation mechanism introduced in [44]. We can also employ a batch normalization layer [45] and normalize each dimension of the feature space, as stated in Section 3. In
the following, we first introduce the layer separation mechanism, and then proceed with dimensionality reduction and
noise addition issues that can enhance the preservation of
privacy.
4.1 Layer Separation Mechanism
To deploy our framework, we can start from a pre-trained
recognizer of primary variable (e.g. a deep gender recognition model), and make it private to the sensitive variable
(e.g. identity). In order to do this, we choose the output
of an arbitrary intermediate layer of the pre-trained model
as the preliminary private-feature and simply partition the
layers of the model into two sets: the elementary and the
secondary layers that form the feature extractor and the
primary variable predictor, respectively. In this way, the
same model can be easily fine-tuned by just appending the
contrastive loss function and continuing the optimization
process, leading to the private-feature as the intermediate
layer’s output. This procedure is shown in Fig. 5.
Fig. 5: Deep CNN architecture for private-feature extraction (DPFE architecture). x1 and x2 are independent random samples and y1 and y2 are their corresponding sensitive labels. The y-remover first checks the equality of the sensitive labels and then applies the information removal loss function.
One may argue that separating the layers of a deep
model is sufficient to obtain an ideal private-feature in the
intermediate layer due to the nature of deep networks. In
general, the higher layers of a deep architecture provide
a more abstract representation of the data and drop the
irrelevant information including the sensitive information
[46] and preserve the primary variable [47]. Therefore,
there is no need to fine-tune the model with the suggested
DPFE architecture. However, this argument can easily be
rejected by considering the counter example provided by
deep visualization techniques. For example, [48] provided
a method to reconstruct the input image from intermediate
layers of a deep network. Osia et al. used this method in
[44] and demonstrated that the original face image can be
reconstructed from some intermediate layers of the gender
recognition model. Thus, there is no guarantee that the
intermediate layers drop the sensitive information (identity
in this case).
Fig. 4: The private-feature extraction framework. z and y are the primary and sensitive variables, respectively. x1 and x2 are two independent samples, and f1 and f2 are their corresponding features. The z-predictor uses only f1 to compute the first term of the loss function, whereas the y-remover uses both f1 and f2 to compute the second term (see Equation 7). Solid lines show data flow and dotted lines indicate influence.
4.2 Dimensionality Reduction
Imagine that the extracted private-feature has a low dimension (in Section 5 we use a 10-dimensional feature space). In
this case, we will benefit from the following advantages:
• We can greatly decrease the communication cost between the user and the service provider, because instead of sending the raw input data to the cloud, the user will only send the private-feature of the input.
• As shown in Section 5, we need to estimate an expectation to measure the privacy, so a lower dimension will help us avoid the curse of dimensionality during the approximation process.
• Reducing the dimension of the private-feature will intrinsically improve privacy, as suggested by [44] and [49].
Nevertheless, a potential disadvantage of dimensionality reduction is that it can negatively affect the accuracy
of the primary variable prediction. However, we show in
our experiments that the adverse effect of dimensionality
reduction is negligible.
Reducing the dimensionality can be done as a preprocessing step on the pre-trained network. In fact, after
choosing the intermediate layer, we can first execute the
following operations: (i) Embed an auto-encoder with a low
dimensional hidden layer on top of the chosen layer; (ii)
Fine-tune the model to obtain the new primary variable
predictor, and (iii) Choose the auto-encoder’s hidden layer
as the new intermediate layer which is low dimensional.
Consequently, we can fine-tune the model with DPFE architecture to get a low dimensional private-feature.
4.3 Noise Addition
As mentioned earlier, many of the privacy preservation
methods, from randomization technique to differentially
private algorithms, rely on noise addition to gain privacy
as it increases the uncertainty. We can utilize this technique
after finishing the training procedure, in the test phase,
when the dimensionality reduction is employed and the
granularity of the sensitive variable is finer than the primary
variable (e.g. identity is finer than gender).
Adding noise to the private-feature will smooth out
the conditional distributions of both primary and sensitive
variables and form a tradeoff between privacy (of sensitive
variable) and accuracy (of primary variable). This tradeoff
can be helpful in real-world applications, because one can choose the desired point on the privacy-accuracy curve, based on the importance of privacy or accuracy in a specific application. We will discuss this tradeoff in detail in Section 6.
5 PRIVACY MEASURE
In this section, we propose a method for evaluating the
quality of privacy algorithms. Considering the problem
formulation by mutual information (Equation 1), one may
suggest the negative of mutual information between the extracted private-feature and the sensitive variable (−I(f; y)),
as a privacy measure. Since I(f; y) = H(y) − H(y|f) and
H(y) is constant, this approach is equivalent to considering
H(y|f) as the privacy measure. However, this measure has
two shortcomings: (i) it is difficult to obtain an efficient
estimation of p(y|f ); and (ii) there is no intuitive interpretation of this measure for privacy. In order to resolve these
problems, we can relax the definition of uncertainty. We
achieve this by partitioning the conditional probabilities by their rank order and building a lower bound for the conditional entropy:
$$H(y|f) = \int p(f) \sum_{y_a=1}^{c_y} p(y_a|f)\, \log \frac{1}{p(y_a|f)}\; df$$
It is known that, among a set of non-negative numbers that sum to one, the r-th highest value is at most 1/r. So if we consider r_{f,a} as the rank of p(y_a|f) in the set {p(y_j|f) | j ∈ {1, . . . , c_y}} sorted in descending order, we have:
$$H(y|f) \ge \int p(f) \sum_{y_a=1}^{c_y} p(y_a|f)\, \log r_{f,a}\; df = \mathbb{E}_{p(f,y)}[\log r] \triangleq L_{rank} \tag{8}$$
which leads to the following definition, dividing all formulas by log c_y in order to have a normalized measure between zero and one.
Definition 10 (Log-Rank Privacy). The log-rank privacy of a discrete sensitive variable y, given the observed feature vector f, is defined as:
$$LRP(y|f) = \frac{1}{\log c_y}\, \mathbb{E}_{p(f,y)}[\log r] \tag{9}$$
where r is a random function of f and y, corresponding to the rank of p(y|f) in the set {p(y_j|f) | j ∈ {1, . . . , c_y}}, sorted in descending order.
Assuming we have an estimate of p(y|f), the log-rank privacy can be empirically estimated by the sample mean of the log-rank over the training data:
$$\widehat{LRP}(y|f) = \frac{1}{N \log c_y} \sum_{i=1}^{N} \log\, \mathrm{rank}\!\left(p(y_i|f_i),\, S_i\right), \qquad S_i = \left\{p(y_j|f_i) \mid j \in \{1, \dots, c_y\}\right\}$$
where rank(a, S), for a ∈ S, is the rank of a in the descending-ordered set S. In the following, we provide some intuition about the log-rank privacy and its relation to entropy, k-anonymity and classification error.
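Before turning to those interpretations, here is a minimal NumPy implementation of the empirical estimator above, assuming an (approximate) conditional distribution p(y|f) is available as a matrix with one row of probabilities per test sample.

import numpy as np

def log_rank_privacy(p_y_given_f, y_true):
    """Empirical log-rank privacy, normalized to [0, 1] (sketch).

    p_y_given_f: (N, c_y) estimated probabilities p(y_j | f_i).
    y_true:      (N,) indices of the true sensitive labels y_i.
    """
    N, c_y = p_y_given_f.shape
    # Rank of the true label's probability within each row (1 = largest).
    order = np.argsort(-p_y_given_f, axis=1)
    ranks = np.argmax(order == y_true[:, None], axis=1) + 1
    return np.log(ranks).mean() / np.log(c_y)

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(608), size=1000)   # e.g. 608 identities (assumed)
labels = rng.integers(0, 608, size=1000)
# High (near its maximum) when the estimate is no better than random guessing.
print(log_rank_privacy(probs, labels))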
20-questions game interpretation. Consider the 20-questions game, in which we want to guess an unknown object by asking yes/no questions of an oracle. As stated in [50], the entropy is equivalent to the minimum number of questions one could ask in order to find the correct answer. Now consider the situation where we cannot ask arbitrary yes/no questions, but only questions that directly guess a candidate for the final answer, e.g. 'is the answer a chair?'. Also assume that if we can guess
the correct answer after k questions, we would be penalized
by log k ; so that the wrong guesses are punished more at
the beginning. Evidently, the optimal strategy to have the
minimum expected penalty is to guess the objects in the
same order with their probabilities. Using this strategy, the
expected penalty would be equal to the log-rank privacy.
k-anonymity and expected rank. k-anonymity deals
with the number of entities we are equally uncertain about.
Expected rank can be considered as a soft interpretation
of this number, relaxing the equal uncertainty with the
weighted sum of ranks. Thus, the rank variable expectation
can be thought of as the expected number of entities that we
are in doubt about.
Classification error extension. One could suggest using
classification error (zero-one loss) as the privacy measure,
as it represents the deficiency of the classifier. Using this
measure is equal to considering zero and one penalty for the
correct and wrong guesses of the first question, respectively.
Thus, two situations where we can find the correct label
in the second and tenth question are considered equal
and both penalized by one. The log-rank privacy handles
this issue by penalizing different questions using their
ranks’ logarithm and can be considered as an extension of
classification error.
Sensitivity analysis. Empirically approximating an expected value by drawing samples from a probability distribution is a common method in machine learning [37]. For
comparing the empirical estimation of log-rank privacy with that of entropy, we need to estimate only the order of the probabilities in the former, while the exact values of the probabilities are needed in the latter. In general, approximating the log-rank privacy is less sensitive to the error of the density estimation and can attain lower variance. A detailed sensitivity analysis is out of the scope of this paper and will be considered in future work.
6 EVALUATION
In this section, we evaluate the proposed private-feature
extractor by considering the problem of facial attribute prediction. We use each face image as an input and infer its
facial attributes such as gender, expression, or age, in a
supervised manner. We extract a feature for facial attribute
prediction, which at the same time is non-informative with
respect to the identity of the person (sensitive attribute). In
all of our experiments, we used the CelebA face dataset,
presented in [51], which includes 40 binary facial attributes,
such as gender (male/female), age (young/old), and smiling (yes/no) with the corresponding identity labels. In the
following, first we explain the experiment setting and then
we discuss the results.
6.1 Experiment Setting
In our evaluations, we used the layer separation mechanism followed by dimensionality reduction and noise addition. We selected the state-of-the-art pre-trained facial
attribute prediction model presented in [52] and called it the
original model.3 Then, we chose an attribute set (e.g. {gender
& age}) to preserve its information as the private-feature.
3. We used a similar implementation from https://github.com/
camel007/caffe-moon which used the tiny darknet architecture from
https://pjreddie.com/darknet/tiny-darknet/.
Next, we selected an intermediate layer (e.g. layer conv7)
of the chosen network. Since this layer can also be a high-dimensional tensor, we embedded a linear auto-encoder and applied batch normalization on its hidden layer to obtain the normalized intermediate features. Finally, by fine-tuning the network, we obtain an attribute prediction model with a low-dimensional intermediate feature, which we refer to as the Simple model in the rest of the paper. While the low-dimensional feature preserves the information of the attributes
(see Theorem 2), it does not necessarily omit the sensitive
information. Hence, we should fine-tune the network with
the proposed DPFE architecture (figure 5) to remove identity
information from the intermediate features. We refer to
this model as the DPFE model. These steps are depicted in
Procedure 1. We implemented all the models with the Caffe framework [53], utilizing the Adam optimizer [54], and a contrastive loss function.

Procedure 1 DPFE Training Phase
Input: training data, intermediate layer, attribute set
  M0 ← attribute prediction model
  L ← intermediate layer of M0 (e.g. conv7)
  |L| ← size of L's output
  A ← attribute set (e.g. {Gender & Age})
  AE ← linear auto-encoder with input/output size |L|
  H ← hidden layer of AE (private-feature layer)
  Initialize AE with PCA weights on L's output
  M1 ← embed AE into M0 on top of L
  S_{L,A} ← fine-tune M1 on A
  z ← A, y ← Identity
  P_{L,A} ← fine-tune S_{L,A} with the DPFE architecture
Output: S_{L,A}: Simple model, P_{L,A}: DPFE fine-tuned model, H: private-feature layer
We evaluated each fine-tuned model based on the following criteria (a small sketch of the second measure follows this list):
• Accuracy of the facial attribute prediction: achieving higher accuracy implies that the primary variable information is well preserved.
• Identity privacy: we evaluate the privacy of the feature extractor using two different measures. First, the log-rank privacy measure introduced in Section 5. Second, we utilize a 1NN identity classifier and consider its misclassification rate, which must be high in order to preserve privacy (although this condition is not sufficient, as discussed in Section 5). We also use the deep visualization technique presented in [48] to demonstrate that the higher layers of the deep network may not be reliable.
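A minimal sketch of the 1NN misclassification measure mentioned above, assuming a gallery of labeled private-features and a disjoint probe set, with Euclidean distance for the nearest-neighbour search.

import numpy as np

def one_nn_misclassification(gallery_f, gallery_y, probe_f, probe_y):
    # For each probe feature, find its nearest gallery feature and check
    # whether the predicted identity is wrong; a high rate suggests that
    # identity information has been removed from the features.
    d2 = ((probe_f[:, None, :] - gallery_f[None, :, :]) ** 2).sum(-1)
    predicted = gallery_y[d2.argmin(axis=1)]
    return float((predicted != probe_y).mean())

rng = np.random.default_rng(0)
gallery_f = rng.normal(size=(500, 10))
gallery_y = rng.integers(0, 50, size=500)
probe_f = rng.normal(size=(200, 10))
probe_y = rng.integers(0, 50, size=200)
print(one_nn_misclassification(gallery_f, gallery_y, probe_f, probe_y))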
To show the generality of the proposed method, we consider four different intermediate layers (conv4-2, conv5-1, conv6-1 and conv7) together with five attribute sets (listed below), and report the results for twenty Simple and twenty DPFE models.
• G: {gender}
• GA: {gender, age}
• GAS: {gender, age, smiling}
• GASL: {gender, age, smiling, big lips}
• GASLN: {gender, age, smiling, big lips, big nose}
In what follows, we first explain the accuracy-privacy
tradeoff based on the log-rank privacy measure and 1NN
misclassification rate (Subsection 6.2). We then present the
visualization result (Subsection 6.3), and finally address
the complexity issue of the private-feature extractor by
implementing the proposed framework on a smartphone
(Subsection 6.4).
6.2 Accuracy vs. Privacy
To evaluate Simple and DPFE models, we designed the
following four experiments and assessed different models
based on their accuracy-privacy trade-off:
1) We compared Simple and DPFE models to show the superiority of DPFE fine-tuning;
2) We assessed the effect of different intermediate layers to indicate the appropriateness of higher layers;
3) We evaluated the effect of extending the attribute set and showed that preserving privacy becomes harder;
4) We considered the mean and standard deviation of the rank-privacy measure to guarantee privacy.
In order to adjust the accuracy-privacy trade-off, we
used the noise addition mechanism. After the training
phase, we estimate the covariance matrix of the feature
space, scale it with different ratios and use it as a covariance
matrix of a Gaussian noise. By increasing the amount of
noise, the accuracy of the primary variable prediction decreases but the privacy of the sensitive variable increases. As
a result, we can build the accuracy-privacy trade-off curves
in a manner similar to the trade-off in rate-distortion theory
(see [50]). The evaluation steps are shown in Procedure 2.

Procedure 2 DPFE Test Phase
Input: test data, intermediate and private-feature layers, attribute set, model
  H ← private-feature layer
  A ← attribute set
  M ← model
  C ← covariance matrix of H in M
  for r ∈ {ratios} do
    Nr ← Gaussian noise layer with covariance rC
    Mr ← embed Nr as additive noise on H in M
    Hr ← output of H + Nr
    pr ← identity privacy of Hr
    ar ← average accuracy of Mr on A
  end for
  plot the accuracy-privacy curve using {(ar, pr) | r ∈ {ratios}}
Output: accuracy-privacy trade-off
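A compact sketch of the sweep in Procedure 2; the accuracy and privacy functions below are placeholders standing in for the fine-tuned model's attribute accuracy and the identity-privacy measure, and only illustrate how the curve is traced by varying the noise ratio.

import numpy as np

def accuracy_privacy_curve(F, ratios, accuracy_fn, privacy_fn, rng):
    # For each noise ratio, perturb the private-features with Gaussian noise
    # whose covariance is a scaled estimate of the feature covariance, then
    # record (accuracy, privacy) for that operating point.
    C = np.cov(F, rowvar=False)
    curve = []
    for r in ratios:
        noise = rng.multivariate_normal(np.zeros(F.shape[1]), r * C, size=len(F))
        Fr = F + noise
        curve.append((accuracy_fn(Fr), privacy_fn(Fr)))
    return curve

rng = np.random.default_rng(0)
F = rng.normal(size=(1000, 10))
# Placeholder metrics: accuracy degrades and privacy grows with the noise level.
acc = lambda Fr: float(np.exp(-np.mean((Fr - F) ** 2)))
priv = lambda Fr: float(1 - np.exp(-np.mean((Fr - F) ** 2)))
for a, p in accuracy_privacy_curve(F, [0.0, 0.5, 1.0, 2.0], acc, priv, rng):
    print(round(a, 3), round(p, 3))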
The accuracy-privacy curves of different models can be
compared based on the following definition.
Definition 11 (Acc-Priv superiority). For two models that try
to preserve privacy of a sensitive variable and maintain
accuracy of a primary variable, the one which always results in a higher value of privacy for a fixed value of accuracy is Acc-Priv superior.
Considering Equation 7, it seems that the relative importance of accuracy and privacy can be controlled by changing
the values of parameter β . However, this is not feasible
in practice due to the challenges in the training stage. For
example, by training with a constant β and using the subsequent noise addition mechanism, it is possible to set different accuracy-privacy strategies with a single trained model. This is not the case when we have various models trained with different values of β. We used cross-validation to choose a suitable fixed value for β in our experiments.
We computed the accuracy-privacy trade-off on the
test data with 608 identities. Setting noise to zero, for all
intermediate layers and attribute sets, Simple and DPFE
models reached the same accuracy level as the original
model with an error margin of less than 0.5%.4 Therefore,
we can conclude that all Simple and DPFE models preserve
the facial attribute information, and we may concentrate on their privacy performance.
4. In order to report the accuracy of an attribute set, we consider the average accuracy of predicting each binary attribute in the set.
Effect of DPFE fine-tuning. In order to verify the
superiority of DPFE fine-tuning over Simple fine-tuning, we
compared the accuracy-privacy curve of different models,
fine-tuned with DPFE or Simple architectures. Figure 6
shows the results for the combination of two layers and two
attribute sets, with different privacy measures. In all cases,
DPFE models have the Acc-Priv superiority over Simple
models. In other words, for a fixed value of accuracy, DPFE
consistently achieves higher levels of privacy.
Effect of higher layers. A comparison of the accuracy-privacy curves of different layers on the same attribute set is depicted in Figure 7. The results illustrate the Acc-Priv superiority of higher layers for two attribute sets and for both privacy measures. This observation is in line with our earlier assumptions about the higher layers.
Effect of attribute set extension. The accuracy-privacy
trade-off of the DPFE fine-tuned models for different
attribute sets, with conv7 as the intermediate layer, is shown in Figure 8. The results show that as we enlarge the attribute set and constrain the model to preserve more information, preserving privacy becomes more challenging, due to the intrinsic correlation of the identity with the facial attributes.
Guaranteeing privacy. As discussed in Section 5, instead of the log-rank, we could also consider the rank itself, analyzing its mean and variance. This idea is depicted in Figure 9 for the Simple and DPFE models. The results show that the DPFE model has Acc-Priv superiority over the Simple model. More importantly, it forces the conditional distribution of the sensitive variable to converge to a uniform distribution, at least in the rank-mean and standard-deviation sense. In fact, the mean and the standard deviation of the rank measure for the discrete uniform distribution are 0.5 and 0.28, respectively. As shown in Figure 9, as privacy increases, these statistics for the DPFE model converge to their corresponding values for the uniform distribution. If we consider a normal distribution for the rank variable, we can provide an (ε, δ) privacy guarantee, similar to the method used in differential privacy [25]. For example, as depicted in Figure 9, we can achieve a gender accuracy of up to 90% with a rank-mean of 0.3 and a standard deviation of 0.25. Hence, with a probability of 0.88 we can claim that the rank-privacy is greater than 0.1, and we have achieved 10% anonymity.

Fig. 9: Comparison of the mean and standard deviation of the rank variable for the DPFE and Simple models, for layer conv7.
6.3 Visualization
Visualization is a method for understanding the behavior
of deep networks. It provides an insightful intuition about
the flow of information through different layers. We used
an auto-encoder objective visualization technique [48] to
validate the sensitive information removal in DPFE. The
reconstruction of images is done by feeding the private-feature to the AlexNet decoder proposed in [48]. Therefore,
we may visually verify the identity removal property of the
private-feature by comparing the original and reconstructed
images. These images are shown in Figure 10 for different layers of the original and DPFE fine-tuned models.
The results can be analyzed in two aspects: the accuracy of the desired attributes and the privacy of identities. From the privacy perspective, the identity of the people in the reconstructed images of the original model can be readily observed in the last layers (e.g. conv7), while that is not the case for the DPFE models. Therefore, just relying on the output of higher layers in the original model cannot assure acceptable privacy preservation performance, while the DPFE models do assure the privacy of identities. Regarding accuracy, we can observe and detect the facial attributes in both models.

Fig. 10: Visualization of different layers for different models: from top to bottom, rows show input images, reconstructed images from the original model, and reconstructed images from the DPFE model. The second row shows that separating layers of a deep model and relying on the specificity of higher layers does not provide identity privacy.

Fig. 6: DPFE vs. Simple models: fine-tuned models with the DPFE architecture achieve Acc-Priv superiority over the corresponding Simple models in all layers and attribute sets.

Fig. 7: Layer comparison: in general, higher layers achieve Acc-Priv superiority over lower layers. In this figure, all models are fine-tuned with the DPFE architecture.

Fig. 8: Comparison of gender accuracy-privacy trade-offs when putting more preservation constraints on the model. The intermediate layer is set to conv7.
6.4 Complexity vs. Efficiency
Although higher intermediate layers may achieve a better accuracy-privacy trade-off, in some cases, such as low-power IoT devices or smartphones, their computational complexity may not be acceptable. Therefore, due to the limited resources on these devices (both memory and computational power), a privacy-complexity trade-off should also be considered. In order to address this problem, we evaluated the original architecture without dimensionality reduction on a smartphone and measured its complexity in different layers. The results are shown in Figure 11. By gradually reducing the complexity of the private-feature extractor (considering lower intermediate layers in the layer separation mechanism), we also managed to reduce the inference time, memory and CPU usage, while hiding the user's sensitive information.

TABLE 1: Device Specification
Device: Google (Huawei) Nexus 6P
Memory: 3 GB LPDDR4 RAM
Storage: 32 GB
CPU: Octa-core Snapdragon 810 v2.1
GPU: Adreno 430
OS: Android 7.1.2

We evaluated the proposed implementation on a modern handset device, as shown in Table 1. We evaluated the intermediate layers cumulatively, and compared them with the on-premise solution (full model). We used Caffe Mobile v1.0 [53] for Android to load each model and measured the inference time (Figure 11a) and model memory usage (Figure 11b) of each of the 17 configurations. We configured the model to use only one core of the device's CPU, as the aim of this experiment was a comparison between the different configurations on a specific device.

Fig. 11: Comparison of different layers on the mobile phone. (a) Layers time comparison; (b) layers memory usage comparison.

Results show a large increase in both inference time and memory use when loading the on-premise solution, due to the increased size of the model, proving the efficiency of our solution. More specifically, considering the layer conv4_2 as a baseline, we experienced a 14.44% inference time and 8.28% memory usage increase in conv5_1, a 43.96% inference time and 22.10% memory usage increase in conv6_1, a 90.81% inference time and 35.05% memory usage increase in conv7, and a 121.76% inference time and 54.91% memory usage increase with all layers (on premise). CPU usage also increases per configuration; however, due to the multitasking nature of an Android device, it is challenging to isolate the CPU usage of a single process, and naturally the results fluctuate. Moreover, use of the lower intermediate layers can significantly reduce the complexity of private-feature extractors, especially when implementing complex deep architectures, e.g. VGG-16, on edge devices and smartphones [55].

Analyzing the complexity of different layers can lead us to consider accuracy-privacy-complexity trade-offs. As an example, consider Figure 7 and suppose we want to preserve the gender information. Comparing conv7 with conv4-2 and setting the accuracy to 95%, we obtain 10% more log-rank privacy at the cost of about 90% more inference time. In this way we can choose the right strategy based on the importance of accuracy, privacy and complexity. Also, by using dimensionality reduction we can greatly decrease the communication cost (compare the size of an image to the size of 10 floating-point numbers), although in this case we should consider the effect of dimensionality reduction on the complexity, which is negligible.

We conclude that our algorithm can be implemented on a modern smartphone. By choosing a proper privacy-complexity trade-off and using different intermediate layers, we were able to significantly reduce the cost when running the model on a mobile device, while at the same time preserving important user information from being uploaded to the cloud.
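The measurements above were collected with Caffe Mobile on Android; the snippet below is only a generic Python illustration of how one might time a forward pass for two different split points (the models here are stand-ins, not the actual network).

import time
import numpy as np

def time_forward(fn, x, repeats=50):
    # Median wall-clock time of a single forward pass, in milliseconds.
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(x)
        times.append((time.perf_counter() - t0) * 1000.0)
    return float(np.median(times))

rng = np.random.default_rng(0)
W_small = rng.normal(size=(64, 1024))     # stand-in for a shallow split point
W_large = rng.normal(size=(4096, 1024))   # stand-in for the full on-premise model
x = rng.normal(size=1024)

print("shallow split:", time_forward(lambda v: np.tanh(W_small @ v), x), "ms")
print("full model:   ", time_forward(lambda v: np.tanh(W_large @ v), x), "ms")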
7 CONCLUSION AND FUTURE WORK
In this paper, we proposed a hybrid framework for user data
privacy preservation. This framework consists of a feature
extractor and an analyzer module. The feature extractor
provides the user with a private-feature which does not contain the user's sensitive information, but still maintains the information required by the service provider, so it can be used by the analyzer module in the cloud. In
order to design the feature extractor, we used an information
theoretic approach to formulate an optimization problem
and proposed a novel deep architecture (DPFE) to solve
it. To measure the privacy of the extracted private-feature
and verify the feature extractor, we proposed a new privacy measure called log-rank privacy. Finally, we considered
the problem of facial attribute prediction from face images, and attempted to extract a feature which contains facial attribute information while not containing identity information. By using DPFE fine-tuning and implementing the model on a mobile phone, we showed that we can achieve
a reasonable tradeoff between facial attribute prediction
accuracy, identity privacy and computational efficiency.
Our work can be extended in a number of ways. We used
the proposed framework in an image processing application, but it can be used in other learning applications, e.g. speech or text analysis, and can be extended to other deep architectures, e.g. recurrent neural networks. We formulated
the problem for discrete sensitive variables but it can be
extended for general cases. Analyzing the log-rank privacy
measure can also have many potential applications in the
privacy domain. An interesting future direction could be
involving the log-rank privacy in the design of learning to
rank algorithms. In an ongoing work, we are considering
the challenge of privacy in a Machine Learning-as-a-Service
platform.
ACKNOWLEDGMENTS
We acknowledge constructive feedback from Sina Sajadmanesh, Amirhossein Nazem and David Meyer. Hamed
Haddadi was supported by the EPSRC Databox grant
(Ref: EP/N028260/1), EPSRC IoT-in-the-Wild grant (Ref:
EP/L023504/1), and a Microsoft Azure for Research grant.
REFERENCES
[1] N. Vallina-Rodriguez, J. Shah, A. Finamore, Y. Grunenberger, K. Papagiannaki, H. Haddadi, and J. Crowcroft, "Breaking for commercials: characterizing mobile advertising," in Proceedings of the 2012 Internet Measurement Conference. ACM, 2012, pp. 343–356.
[2] A. Acquisti, L. Brandimarte, and G. Loewenstein, "Privacy and human behavior in the age of information," Science, vol. 347, no. 6221, pp. 509–514, 2015.
[3] M. Haris, H. Haddadi, and P. Hui, "Privacy leakage in mobile computing: Tools, methods, and characteristics," arXiv preprint arXiv:1410.4978, 2014.
[4] H. Haddadi and I. Brown, "Quantified self and the privacy challenge," Technology Law Futures, 2014.
[5] F. D. Garcia and B. Jacobs, "Privacy-friendly energy-metering via homomorphic encryption," in International Workshop on Security and Trust Management. Springer, 2010, pp. 226–238.
[6] C. Fontaine and F. Galand, "A survey of homomorphic encryption for nonspecialists," EURASIP Journal on Information Security, vol. 2007, no. 1, p. 013801, 2007.
[7] P. Garcia Lopez, A. Montresor, D. Epema, A. Datta, T. Higashino, A. Iamnitchi, M. Barcellos, P. Felber, and E. Riviere, "Edge-centric computing: Vision and challenges," ACM SIGCOMM Computer Communication Review, vol. 45, no. 5, pp. 37–42, 2015.
[8] S. A. Osia, A. S. Shamsabadi, A. Taheri, H. R. Rabiee, N. Lane, and H. Haddadi, "A hybrid deep learning architecture for privacy-preserving mobile analytics," arXiv preprint arXiv:1703.02952, 2017.
[9] R. Agrawal and R. Srikant, "Privacy-preserving data mining," in ACM Sigmod Record, vol. 29, no. 2. ACM, 2000, pp. 439–450.
[10] D. Agrawal and C. C. Aggarwal, "On the design and quantification of privacy preserving data mining algorithms," in ACM Symposium on Principles of Database Systems, 2001, pp. 247–255.
[11] C. C. Aggarwal and S. Y. Philip, "A general survey of privacy-preserving data mining models and algorithms," in Privacy-preserving Data Mining, 2008, pp. 11–52.
[12] L. Sweeney, "k-anonymity: A model for protecting privacy," International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 10, no. 05, pp. 557–570, 2002.
[13] B. C. Fung, K. Wang, and P. S. Yu, "Top-down specialization for information and privacy preservation," in IEEE International Conference on Data Engineering, 2005, pp. 205–216.
[14] K. Wang, P. S. Yu, and S. Chakraborty, "Bottom-up generalization: A data mining solution to privacy protection," in IEEE International Conference on Data Mining, 2004, pp. 249–256.
[15] R. J. Bayardo and R. Agrawal, "Data privacy through optimal k-anonymization," in IEEE International Conference on Data Engineering, 2005, pp. 217–228.
[16] K. LeFevre, D. J. DeWitt, and R. Ramakrishnan, "Mondrian multidimensional k-anonymity," in IEEE International Conference on Data Engineering, 2006, pp. 25–25.
[17] A. Machanavajjhala, D. Kifer, J. Gehrke, and M. Venkitasubramaniam, "l-diversity: Privacy beyond k-anonymity," ACM Transactions on Knowledge Discovery from Data, vol. 1, no. 1, p. 3, 2007.
[18] N. Li, T. Li, and S. Venkatasubramanian, “t-closeness: Privacy beyond k-anonymity and l-diversity,” in IEEE International Conference
on Data Engineering, 2007, pp. 106–115.
[19] D. Rebollo-Monedero, J. Forne, and J. Domingo-Ferrer, "From t-closeness-like privacy to postrandomization via information theory," IEEE Transactions on Knowledge and Data Engineering, vol. 22,
no. 11, pp. 1623–1636, 2010.
[20] E. M. Newton, L. Sweeney, and B. Malin, “Preserving privacy by
de-identifying face images,” IEEE transactions on Knowledge and
Data Engineering, vol. 17, no. 2, pp. 232–243, 2005.
[21] R. Shokri and V. Shmatikov, “Privacy-preserving deep learning,”
in ACM Conference on Computer and Communications Security, 2015,
pp. 1310–1321.
[22] N. Papernot, M. Abadi, U. Erlingsson, I. Goodfellow, and K. Talwar, “Semi-supervised Knowledge Transfer for Deep Learning
from Private Training Data,” in Proceedings of the International
Conference on Learning Representations (ICLR), 2017.
[23] B. Hitaj, G. Ateniese, and F. Pérez-Cruz, “Deep models under the
gan: information leakage from collaborative deep learning,” in
Proceedings of the 2017 ACM SIGSAC Conference on Computer and
Communications Security. ACM, 2017, pp. 603–618.
[24] P. Mohassel and Y. Zhang, “Secureml: A system for scalable
privacy-preserving machine learning,” in IEEE Symposium on Security and Privacy. IEEE, 2017, pp. 19–38.
[25] C. Dwork, “Differential privacy,” in International Colloquium on
Automata, Languages and Programming, 2006, pp. 1–12.
[26] ——, “Differential privacy: A survey of results,” in International
Conference on Theory and Applications of Models of Computation, 2008,
pp. 1–19.
[27] Z. Ji, Z. C. Lipton, and C. Elkan, “Differential privacy and machine
learning: A survey and review,” arXiv preprint arXiv:1412.7584,
2014.
[28] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov,
K. Talwar, and L. Zhang, “Deep learning with differential privacy,”
in Proceedings of the 2016 ACM SIGSAC Conference on Computer and
Communications Security. ACM, 2016, pp. 308–318.
[29] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Stealing machine learning models via prediction apis.” in USENIX
Security Symposium, 2016, pp. 601–618.
[30] R. Gilad-Bachrach, N. Dowlin, K. Laine, K. Lauter, M. Naehrig,
and J. Wernsing, “Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy,” in International
Conference on Machine Learning, 2016, pp. 201–210.
[31] B. D. Rouhani, M. S. Riazi, and F. Koushanfar, “Deepsecure: Scalable provably-secure deep learning,” arXiv preprint
arXiv:1705.08963, 2017.
[32] A. J. Bell and T. J. Sejnowski, “An information-maximization
approach to blind separation and blind deconvolution,” Neural
Computation, vol. 7, no. 6, pp. 1129–1159, 1995.
[33] D. Barber and F. Agakov, “The im algorithm: a variational approach to information maximization,” in Proceedings of the 16th
International Conference on Neural Information Processing Systems.
MIT Press, 2003, pp. 201–208.
[34] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and
P. Abbeel, “Infogan: Interpretable representation learning by information maximizing generative adversarial nets,” in Neural Information Processing Systems, 2016, pp. 2172–2180.
[35] N. Tishby, F. Pereira, and W. Bialek, “The information bottleneck
method,” in Proceedings of the 37-th Annual Allerton Conference on
Communication, Control and Computing, 1999, pp. 368–377.
[36] A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy, “Deep variational information bottleneck,” arXiv preprint arXiv:1612.00410,
2016.
[37] C. M. Bishop, Pattern Recognition and Machine Learning (Information
Science and Statistics). Secaucus, NJ, USA: Springer-Verlag New
York, Inc., 2006.
[38] T. Duong and M. L. Hazelton, “Convergence rates for unconstrained bandwidth matrix selectors in multivariate kernel density
estimation,” Journal of Multivariate Analysis, vol. 93, no. 2, pp. 417–
433, 2005.
[39] B. W. Silverman, Density estimation for statistics and data analysis.
CRC press, 1986, vol. 26.
[40] Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller, “Efficient backprop,” in Neural networks: Tricks of the trade. Springer, 1998, pp.
9–50.
[41] J. R. Hershey and P. A. Olsen, “Approximating the Kullback-Leibler
divergence between Gaussian mixture models,” in IEEE
International Conference on Acoustics, Speech and Signal Processing,
2007, pp. IV–317.
[42] R. Hadsell, S. Chopra, and Y. LeCun, “Dimensionality reduction
by learning an invariant mapping,” in IEEE Conference on Computer
Vision and Pattern Recognition, 2006, pp. 1735–1742.
[43] S. Chopra, R. Hadsell, and Y. LeCun, “Learning a similarity metric
discriminatively, with application to face verification,” in IEEE
Conference on Computer Vision and Pattern Recognition, 2005, pp.
539–546.
[44] S. A. Osia, A. S. Shamsabadi, A. Taheri, K. Katevas, H. R. Rabiee,
N. D. Lane, and H. Haddadi, “Privacy-preserving deep inference
for rich user data on the cloud,” arXiv preprint arXiv:1710.01727,
2017.
[45] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep
network training by reducing internal covariate shift,” in International conference on machine learning, 2015, pp. 448–456.
[46] R. Shwartz-Ziv and N. Tishby, “Opening the black box of deep
neural networks via information,” arXiv preprint arXiv:1703.00810,
2017.
[47] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable
are features in deep neural networks?” in Neural Information Processing Systems, 2014, pp. 3320–3328.
14
[48] A. Dosovitskiy and T. Brox, “Inverting visual representations with
convolutional networks,” in IEEE Conference on Computer Vision
and Pattern Recognition, 2016, pp. 4829–4837.
[49] M. Malekzadeh, R. G. Clegg, and H. Haddadi, “Replacement
autoencoder: A privacy-preserving algorithm for sensory data
analysis,” in The 3rd ACM/IEEE International Conference of Internet-of-Things Design and Implementation, 2018.
[50] T. M. Cover and J. A. Thomas, Elements of information theory. John
Wiley & Sons, 2012.
[51] Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes
in the wild,” in Proceedings of International Conference on Computer
Vision (ICCV), 2015.
[52] E. M. Rudd, M. Günther, and T. E. Boult, “Moon: A mixed objective
optimization network for the recognition of facial attributes,” in
European Conference on Computer Vision. Springer, 2016, pp. 19–35.
[53] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick,
S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture
for fast feature embedding,” arXiv preprint arXiv:1408.5093, 2014.
[54] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
[55] Y.-D. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin, “Compression of deep convolutional neural networks for fast and low
power mobile applications,” arXiv preprint arXiv:1511.06530, 2015.
[56] S. S. Haykin, Neural networks and learning machines. Pearson Upper
Saddle River, NJ, USA, 2009.
APPENDIX A
PRELIMINARIES

Quantifying intuitive concepts like uncertainty and information is one of the main advantages of information theory. In this part we briefly discuss these notions and refer the reader to [50] and [56] for a more detailed discussion.

The entropy of a discrete random variable x is defined as
H(x) = E_x[− log p(x)],
which can be used to measure the uncertainty we have about x. Differential entropy is the extension of this definition to continuous random variables: h(x) = −∫ f(x) log f(x) dx, where f(x) is the probability density function of x. We can also define entropy for joint and conditional probability distributions:
H(x, y) = E_{x,y}[− log p(x, y)],
H(x|y) = E_y E_{x|y}[− log p(x|y)].
Based on these definitions, we can define the mutual information between two random variables, which measures the amount of uncertainty reduction about one of them given the other one:
I(x; y) = H(x) − H(x|y) = H(y) − H(y|x).
It is also equal to the KL divergence between p(x, y) and p(x)p(y). The KL divergence between two probability distributions p and q is a non-negative distance measure between them, defined as
D_kl[p ‖ q] = E_p[log(p/q)].
So we have I(x; y) = D_kl[p(x, y) ‖ p(x)p(y)]. These are the information-theoretic definitions we used to define and solve the privacy preservation problem. Further information can be accessed through [50].

APPENDIX B

B.1 Proof of Lemma 1

From the positivity of the KL divergence we know that
kl(p(z|f) ‖ q(z|f)) = ∫ p(z|f) log [p(z|f)/q(z|f)] dz ≥ 0.
So we have
∫∫ p(f, z) log [ (p(z|f)/p(z)) (p(z)/q(z|f)) ] dz df ≥ 0.
Also we know that
I(f; z) = ∫∫ p(f, z) log [ p(f, z)/(p(f) p(z)) ] df dz = ∫ p(z) ∫ p(f|z) log [ p(z|f)/p(z) ] df dz.
Thus
I(f; z) ≥ ∫∫ p(f, z) log [ q(z|f)/p(z) ] dz df.

B.2 Proof of Theorem 4

From Lemma 3 we know that
I(f; y) = Σ_a p(y_a) ∫ p(f|y_a) log [ p(f|y_a) / Σ_b p(y_b) p(f|y_b) ] df
        = − Σ_a p(y_a) [ H(f|y_a) + ∫ p(f|y_a) log E_y p(f|y) df ].
So by using Jensen's inequality we have
I(f; y) ≤ − Σ_a p(y_a) [ H(f|y_a) + ∫ p(f|y_a) E_y log p(f|y) df ] = U_1.
We can manipulate U_1 as
U_1 = Σ_a p(y_a) ∫ p(f|y_a) [ log p(f|y_a) − Σ_b p(y_b) log p(f|y_b) ] df.
So we get
U_1 = Σ_a Σ_b p(y_a) p(y_b) D_kl( p(f|y_a) ‖ p(f|y_b) ) = Σ_a Σ_{b: b≠a} p(y_a) p(y_b) D_kl( p(f|y_a) ‖ p(f|y_b) ).

B.3 Proof of Theorem 7

By using Lemmas 5 and 6 we get
U_1 ≃ Σ_a Σ_{b: b≠a} (N_{y_a} N_{y_b} / N²) D_kl( p(f|y_a) ‖ p(f|y_b) )
    ≤ Σ_a Σ_{b: b≠a} (N_{y_a} N_{y_b} / N²) Σ_{(i,j): y_i=y_a, y_j=y_b} (1/(N_{y_a} N_{y_b})) (1/(2σ)) ‖f_i − f_j‖²
    = (1/(σ N²)) Σ_{(i,j): y_i≠y_j} (1/2) ‖f_i − f_j‖² = U_2.    (10)

B.4 Proof of Theorem 8

In order to prove this theorem, first we need to address the following lemma:

Lemma 12. Assuming f_1 and f_2 are two samples from p(f) with mean µ and covariance matrix Σ, we have
E[‖f_1 − f_2‖²] = E[(f_1 − f_2)^T (f_1 − f_2)] = 2 diag(Σ + µµ^T) − 2µ^T µ = 2 diag Σ.
So by normalizing the feature space to have variance one in each dimension, E[‖f_1 − f_2‖²] is fixed and equal to 2d, where d is the dimension.

Now we can state the proof of Theorem 8. Considering {f_i}_{i=1}^N as i.i.d. samples from p(f) and setting d_ij = ‖f_i − f_j‖², we have
Σ_{i,j} d_ij ≃ (N choose 2) · 2d.
We can also split the pairs by the similarity of their y labels:
Σ_{i,j: y_i=y_j} d_ij + Σ_{i,j: y_i≠y_j} d_ij ≃ (N choose 2) · 2d,
and get
(1/N²) Σ_{i,j: y_i≠y_j} d_ij ≃ 2d(N − 1)/N − (1/N²) Σ_{i,j: y_i=y_j} d_ij = (1/N²) [ 2dN(N − 1) − Σ_{i,j: y_i=y_j} d_ij ],
where the second sum runs over the pairs with equal labels and k is the number of similar pairs in the training data.
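The following NumPy sketch is not part of the paper; it is only a minimal illustration of the quantities defined in Appendix A. It computes entropy, KL divergence and mutual information for a small, arbitrarily chosen discrete joint distribution and checks the identity I(x; y) = D_kl(p(x, y) ‖ p(x)p(y)).

```python
import numpy as np

# Arbitrary illustrative joint distribution p(x, y) over 2 x 3 outcomes.
p_xy = np.array([[0.10, 0.20, 0.10],
                 [0.25, 0.05, 0.30]])
assert np.isclose(p_xy.sum(), 1.0)

p_x = p_xy.sum(axis=1)  # marginal p(x)
p_y = p_xy.sum(axis=0)  # marginal p(y)

def entropy(p):
    """Shannon entropy H(p) = -sum p log p (natural log), ignoring zero cells."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def kl(p, q):
    """KL divergence D_kl(p || q) for distributions on the same support."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Mutual information via I(x; y) = H(x) + H(y) - H(x, y).
mi_from_entropies = entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())

# Mutual information via I(x; y) = D_kl(p(x, y) || p(x) p(y)).
mi_from_kl = kl(p_xy.ravel(), np.outer(p_x, p_y).ravel())

print(mi_from_entropies, mi_from_kl)
assert np.isclose(mi_from_entropies, mi_from_kl)
```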
Ali Shahin Shamsabadi received his B.S. degree in electrical engineering from Shiraz University of Technology, in 2014, and the M.Sc. degree in electrical engineering (digital) from the Sharif University of Technology, in 2016. Currently, he is a Ph.D. candidate at the Queen Mary University of London. His research interests include deep learning and data privacy protection in distributed and centralized learning.

Kleomenis Katevas received his B.Sc. degree in Informatics Engineering from the University of Applied Sciences of Thessaloniki in 2006, and an M.Sc. degree in Software Engineering from Queen Mary University of London in 2010. He is currently a Ph.D. candidate at Queen Mary University of London. His research interests include Mobile & Ubiquitous Computing, Applied Machine Learning, Crowd Sensing and Human-Computer Interaction.

Seyed Ali Osia received his B.Sc. degree in Software Engineering from Sharif University of Technology in 2014. He is currently a Ph.D. candidate at the department of computer engineering, Sharif University of Technology. His research interests include Statistical Machine Learning, Deep Learning, Privacy and Computer Vision.

Ali Taheri received his B.Sc. degree in Software Engineering from Shahid Beheshti University in 2015. He received his M.Sc. degree in Artificial Intelligence from Sharif University of Technology in 2017. His research interests include Deep Learning and Privacy.

Hamed Haddadi received his B.Eng., M.Sc., and Ph.D. degrees from University College London. He was a postdoctoral researcher at the Max Planck Institute for Software Systems in Germany, and a postdoctoral research fellow at the Department of Pharmacology, University of Cambridge and The Royal Veterinary College, University of London, followed by a few years as a Lecturer and subsequently Senior Lecturer in Digital Media at Queen Mary University of London. He is currently a Senior Lecturer (Associate Professor) and the Deputy Director of Research in the Dyson School of Design Engineering, and an Academic Fellow of the Data Science Institute, at the Faculty of Engineering at Imperial College London. He is interested in User-Centered Systems, IoT, Applied Machine Learning, and Data Security & Privacy. He enjoys designing and building systems that enable better use of our digital footprint, while respecting users’ privacy. He is also broadly interested in sensing applications and Human-Data Interaction.
Hamid R. Rabiee received his B.S. and M.S.
degrees (with great distinction) in electrical engineering from California State University, Long
Beach, CA, in 1987 and 1989, respectively; the
EEE degree in electrical and computer engineering from the University of Southern California
(USC), Los Angeles, CA; and the Ph.D. degree
in electrical and computer engineering from Purdue University, West Lafayette, IN, in 1996. From
1993 to 1996, he was a Member of the Technical
Staff at AT&T Bell Laboratories. From 1996 to
1999, he worked as a Senior Software Engineer at Intel Corporation.
From 1996 to 2000, he was an Adjunct Professor of electrical and
computer engineering with Portland State University, Portland, OR;
with Oregon Graduate Institute, Beaverton, OR; and with Oregon State
University, Corvallis, OR. Since September 2000, he has been with the
Department of Computer Engineering, Sharif University of Technology,
Tehran, Iran, where he is a Professor of computer engineering, and
Director of Sharif University Advanced Information and Communication
Technology Research Institute (AICT), Digital Media Laboratory (DML),
and Mobile Value Added Services Laboratory (MVASL). He is also
the founder of AICT, Advanced Technologies Incubator (SATI), DML,
and VASL. He is currently on sabbatical leave (2017-2018 academic
year) as visiting professor at Imperial College of London. He has been
the Initiator and Director of national and international-level projects in
the context of United Nation Open Source Network program and Iran
National ICT Development Plan. He has received numerous awards and
honors for his industrial, scientific, and academic contributions. He is a
Senior Member of IEEE, and holds three patents. He has also initiated
a number of successful start-up companies in cloud computing, SDP,
IoT, and storage systems for big data analytics. His research interests
include statistical machine learning, Bayesian statistics, data analytics
and complex networks with applications in multimedia systems, social
networks, cloud and IoT data privacy, bioinformatics, and brain networks.
| 2 |
A port-Hamiltonian approach to the control of nonholonomic
systems
Joel Ferguson (a), Alejandro Donaire (b), Christopher Renton (c), Richard H. Middleton (a)
arXiv:1801.06954v1 [cs.SY] 22 Jan 2018
(a) School of Electrical Engineering and Computing and PRC CDSC, The University of Newcastle, Callaghan, NSW 2308, Australia.
(b) Department of Electrical Engineering and Information Theory and PRISMA Lab, University of Naples Federico II, Napoli 80125, Italy, and with the School of Electrical Eng. and Comp. Sc. of the Queensland University of Technology, Brisbane, QLD, Australia.
(c) School of Engineering, The University of Newcastle, Callaghan, NSW 2308, Australia.
Email addresses: [email protected] (Joel Ferguson), [email protected] (Alejandro Donaire), [email protected] (Christopher Renton), [email protected] (Richard H. Middleton).
Abstract
In this paper a method of controlling nonholonomic systems within the port-Hamiltonian (pH) framework is presented. It is well known that nonholonomic systems can be represented as pH systems without Lagrange multipliers by considering a reduced momentum space. Here, we revisit the modelling of these systems for the purpose of identifying the role that physical damping plays. Using this representation, a geometric structure generalising the well-known chained form is identified and called chained structure. A discontinuous control law is then proposed for pH systems with chained structure such that the configuration of the system asymptotically approaches the origin. The proposed control law is robust against the damping and inertial properties of the open-loop system. The results are then demonstrated numerically on a car-like vehicle.
Key words: Nonholonomic systems; Port-Hamiltonian systems; Discontinuous control; Robust control.
1 Introduction
The control problem of set-point regulation for nonholonomic systems has been widely studied within the literature [1,2,14,23,19]. This problem is inherently difficult
as nonholonomic systems do not satisfy Brockett’s necessary condition for smooth stabilisation which implies
that they cannot be stabilised using smooth control laws,
or even continuous control laws [2]. In response to these
limitations, the control community has utilised several
alternate classes of controllers to stabilise nonholonomic
systems including time-varying control [21,23], switching control [15] and discontinuous control [1,9].
One approach that has been utilised to solve the control problem is to assume that the system has a particular kinematic structure known as chained form [19]. This structure was previously utilised in [1] to propose a discontinuous control law to achieve set-point regulation for this class of systems. While it may seem restrictive to assume this kinematic structure, many systems of practical importance have been shown to be of chained form under suitable coordinate and input transformations. Examples of this are the kinematic car [19] and the n-trailer system [22]. This form of kinematic structure plays a central role in the developments presented here.
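As a rough, hedged illustration of what chained-form kinematics look like in simulation (the form itself is written out later in (18); this code is not from the paper), the sketch below integrates a four-state chained system ż1 = v1, ż2 = v2, ż3 = z2 v1, ż4 = z3 v1 under arbitrary open-loop inputs. The inputs, step size and initial state are illustrative assumptions only.

```python
import numpy as np

def chained_rhs(z, v):
    """Chained-form kinematics: z1' = v1, z2' = v2, zi' = z_{i-1} * v1 for i >= 3."""
    zdot = np.zeros_like(z)
    zdot[0] = v[0]
    zdot[1] = v[1]
    zdot[2:] = z[1:-1] * v[0]
    return zdot

# Illustrative open-loop inputs and forward-Euler integration (assumed values).
z = np.array([1.0, 0.5, -0.2, 0.3])   # n = 4 states
dt, T = 1e-3, 5.0
for step in range(int(T / dt)):
    t = step * dt
    v = np.array([np.cos(t), 0.2 * np.sin(2.0 * t)])  # arbitrary test inputs
    z = z + dt * chained_rhs(z, v)

print("final state:", z)
```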
Dynamic models for many nonholonomic systems (i.e. systems with drift) are able to be formulated within the port-Hamiltonian framework where the constraints enter the dynamics equations as Lagrange multipliers [11]. It was shown in [24] that the Lagrange multipliers, arising from the constraint equations, can be eliminated from the pH representation of such systems by appropriately reducing the dimension of the momentum space. Interestingly, the reduced equations have a non-canonical structure and the dimension of the momentum space is less than the dimension of the configuration space. It was further shown in [17] that stabilisation of the pH system can easily be achieved using the reduced representation via potential energy shaping. Asymptotic stability, however, was not considered in that work.

While the control of nonholonomic systems has been extensively studied within the literature, control methods that exploit the natural passivity of these systems are rather limited. Some exceptions to this trend are the works [5,4,18] which all utilised smooth control laws to achieve some control objective. In each of these cases, the control objective was to stabilise some non-trivial sub-manifold of the configuration space with characteristics such that Brockett's condition does not apply. Similar to this approach, a switching control law for a 3-degree of freedom mobile robot was proposed in [15]. Each of the individual control laws used in the switching scheme were smooth and stabilised a sub-manifold of the configuration space. The stabilised sub-manifolds were chosen such that their intersection was the origin of the configuration space. Using a switching heuristic, the switching control law was able to drive the 3-degree of freedom robot to a compact set containing the origin.

In our previous work [6], we considered a switching control law for the Chaplygin sleigh where each of the individual control laws were potential energy-shaping controllers where the target potential energy was a discontinuous function of the state. Each of the controllers stabilised a sub-manifold of the configuration space where the stabilised sub-manifolds were chosen such that they intersect at the origin. A switching heuristic was then proposed such that the system converged to the origin of the configuration space asymptotically. Likewise, asymptotic stability of 3-degree of freedom nonholonomic systems was considered in [8,9] where the proposed approach was to use a potential energy-shaping control law where the target potential energy was a discontinuous function of the state. This approach has the advantage of not requiring any switching heuristic to achieve convergence.

In this paper, inspired by the works [1] and [9], we propose a discontinuous potential energy-shaping control law for a class of nonholonomic systems 1. First, the procedure to eliminate the Lagrange multipliers from the pH representation of a nonholonomic system proposed in [24] is revisited and the role of physical damping is defined. Then, considering the reduced representation of the system, a special geometric structure that generalises the well-known chained form is identified and called chained structure. A discontinuous control law, with the interpretation of potential energy shaping together with damping injection, is then proposed for n-degree of freedom pH systems with chained structure such that the configuration asymptotically converges to the origin. The controller is shown to be robust against the damping and inertial properties of the open-loop system.

1 A short version of this paper has been accepted for presentation at LHMNC 2018 [7]. The conference version considers control of the Chaplygin sleigh system using a simplified version of the control law presented here. Lemma 13, Proposition 14 and Proposition 16 can be found within the conference version. The extension to n-dimensional systems, the presented example and all other technical developments are original to this work.

Notation: Given a scalar function f(x) : R^n → R, ∇_x f denotes the column of partial derivatives [∂f/∂x_1 ⋯ ∂f/∂x_n]^T. For a vector-valued function g(x) ∈ R^m, ∂g/∂x denotes the standard Jacobian matrix. I_n and 0_n denote the n × n identity and zero matrices, respectively. 0_{n×m} is the n × m zero matrix.

2 Problem formulation

This work is concerned with mechanical systems that are subject to constraints that are non-integrable linear combinations of the generalised velocities:

G_c^T(q) q̇ = 0_{k×1},    (1)

where q ∈ R^n is the configuration, k is the number of linearly independent constraints and G_c ∈ R^{n×k} is full rank. Such constraints are called nonholonomic, Pfaffian constraints [3] and naturally arise when considering non-slip conditions of wheels [2]. For the remainder of the paper, the term nonholonomic is used to refer to constraints of the form (1).

Nonholonomic constraints do not place a restriction on achievable configurations of the system, but rather, restrict the valid paths of the system through the configuration space. Mechanical systems with nonholonomic constraints can be modelled as pH systems where the constraints appear as Lagrange multipliers [17]:

[q̇; ṗ_0] = [0_n, I_n; −I_n, −D_0] [∇_q H_0; ∇_{p_0} H_0] + [0_{n×m}, 0_{n×k}; G_0, G_c] [u; λ],
y = G_0^T ∇_{p_0} H_0,
H_0(q, p_0) = ½ p_0^T M_0^{−1}(q) p_0 + V(q) = T_0 + V(q),    (2)

where p_0 ∈ R^n is the momentum, u, y ∈ R^m, with m = n − k, are the input and output respectively, G_0(q) ∈ R^{n×m} is the input mapping matrix, λ ∈ R^k are the Lagrange multipliers corresponding to the constraints (1), D_0(p_0, q) = D_0^T(p_0, q) ≥ 0 contains physical damping terms, M_0(q) = M_0^T(q) > 0 is the mass matrix, T_0 is the kinetic energy and V(q) > 0 is the potential energy [12], [20]. It is assumed that the matrix [G_0(q) G_c(q)] ∈ R^{n×n} is full rank. The constraint equation (1) has been used to determine G_c^T ∇_{p_0} H_0 = G_c^T q̇ = 0_{m×1} to simplify the output equation.
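The short sketch below is not from the paper; it only illustrates a Pfaffian constraint of the form (1) on the textbook unicycle, whose single rolling constraint is sin(θ) ẋ − cos(θ) ẏ = 0, and checks numerically that every velocity generated by the admissible directions is annihilated by G_c^T(q). All numerical values are arbitrary.

```python
import numpy as np

def G_c(q):
    """Pfaffian constraint matrix for a unicycle at q = (x, y, theta):
    no sideways slip, i.e. sin(th)*xdot - cos(th)*ydot = 0."""
    x, y, th = q
    return np.array([[np.sin(th)],
                     [-np.cos(th)],
                     [0.0]])

def admissible_qdot(q, v, omega):
    """Velocities generated by forward speed v and turning rate omega."""
    x, y, th = q
    return np.array([v * np.cos(th), v * np.sin(th), omega])

rng = np.random.default_rng(0)
for _ in range(5):
    q = rng.normal(size=3)
    qdot = admissible_qdot(q, v=rng.normal(), omega=rng.normal())
    # G_c(q)^T qdot should be numerically zero for every admissible velocity.
    print(G_c(q).T @ qdot)
```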
Problem statement: Given the nonholonomic pH system (2), design a discontinuous control law u = u(q, p_0) such that lim_{t→∞} q(t) = 0_{n×1}.

Throughout this paper, several coordinate transformations will be performed on the nonholonomic system (2) in order to address the problem statement. Figure 1 summarises the coordinate transformations utilised and states their respective purposes.

Fig. 1. The progression of coordinate transformations and their respective purposes: (q, p_0) →(eliminate Lagrange multipliers)→ (q, p) →(obtain chained structure)→ (z, p) →(discontinuous configuration transformation required to design control law)→ (w, p).

3 Elimination of Lagrange multipliers

In this section, the system (2) is simplified by eliminating the Lagrange multipliers from the pH representation. As was done in [24], this simplification is achieved via a reduction of the dimension of the momentum space. The presented formulation explicitly considers the role of physical damping which will be utilised in the following sections. To this end, we recall the following lemma:

Lemma 1 (Section 3, [25]) Let Q_0(q) ∈ R^{n×n} be any invertible matrix and define p̃ = Q_0^T(q) p_0. Then, the dynamics (2) can be equivalently expressed in coordinates (q, p̃) as:

[q̇; ṗ̃] = [0_n, Q_0; −Q_0^T, C̃ − D̃] [∇_q H̃; ∇_{p̃} H̃] + [0_{n×m}, 0_{n×k}; G, G̃] [u; λ],
y = [G^T; G̃^T] ∇_{p̃} H̃,
H̃(q, p̃) ≜ H_0(q, Q_0^{−T} p̃) = ½ p̃^T M̃^{−1}(q) p̃ + V(q) = T̃ + V(q),    (3)

where

D̃(q, p̃) = Q_0^T(q) D_0(Q_0^{−T}(q) p̃, q) Q_0(q),
C̃(q, p̃) = Q_0^T(q) [∂/∂q (Q_0^{−T}(q) p̃)]^T − [∂/∂q (Q_0^{−T}(q) p̃)] Q_0(q),    (4)
M̃(q) = Q_0^T(q) M_0(q) Q_0(q),
G(q) = Q_0^T(q) G_0(q),
G̃(q) = Q_0^T(q) G_c(q).

It is now shown that for an appropriate choice of Q_0, (3) can be equivalently described in a reduced momentum space without Lagrange multipliers. To see this, let G_c^⊥(q) be a left annihilator of G_c(q) with rank m = n − k. Using this definition we propose the new momentum variable

p̃ = [µ; p] = [A^T(q); Q^T(q)] p_0 = Q_0^T(q) p_0,    (5)

for the system (2) where µ ∈ R^k, p ∈ R^m and Q = (G_c^⊥)^T.

Lemma 2 Consider the matrix Q_0 in (5). If G_c^T A is invertible, then Q_0 is also invertible.

PROOF. The proof is provided in the Appendix.

The matrix A is then chosen such that G_c^T A is invertible which implies that Q_0 is invertible by Lemma 2.
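A minimal numerical sketch of this construction is given below. It is not from the paper and uses SciPy's null_space as an assumed tool: it builds Q whose columns span the admissible directions, takes the simple choice A = G_c (for which G_c^T A = G_c^T G_c is invertible), forms Q_0 = [A, Q] as in (5), and checks that Q_0 is invertible for a randomly generated full-rank G_c.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
n, k = 4, 2
Gc = rng.normal(size=(n, k))   # assumed full-rank constraint matrix

# Columns of Q form a basis of null(Gc^T), so Gc_perp = Q^T satisfies Gc_perp Gc = 0.
Q = null_space(Gc.T)           # n x (n - k)

A = Gc                         # one valid choice: Gc^T A = Gc^T Gc > 0
Q0 = np.hstack([A, Q])         # Q0 = [A  Q], as in (5)

print("rank of Gc^T A:", np.linalg.matrix_rank(Gc.T @ A))
print("rank of Q0    :", np.linalg.matrix_rank(Q0))  # expect n, i.e. invertible
```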
Proposition 3 Consider system (2) under the change
of momentum (5). The system dynamics can equivalently
3
From (10) and (11), it follows that
expressed as
" #
q̇
"
=
0n
Q
#"
∇q H
−Q> C − D ∇p H
∂H
y = G>
∂p
1 > −1
H(p, q) = p M p +V,
|2 {z }
ṗ
#
"
+
0n×m
G
−1
∇p H̃ = Q> M0 Q
p
#
u
(12)
−1
µ = A> M0 Q Q> M0 Q
p.
(6)
Thus µ is fully determined by p, rendering the µ dynamics redundant. The modified momentum p̃ may be
computed from the reduced momentum p using (12) as
follows:
T
p̃ = Q>
0 M0 Q0 ∇p̃ H̃
"
h
i
>
= Q0 M0 A Q
where
G(q) = Q> (q)G0 (q)
1 > −1
p̃ M̃ (q)p̃
2
1
= p> (Q> M0 Q)−1 p,
2
T̃ =
−1
>
p0 = Q−>
p.
0 p̃ = M0 Q Q M0 Q
The transformation used in [24] can be seen to be a
special case of the transformation (5) where A = Gc .
>
This satisfies the necessary condition that G>
c A = Gc Gc
be invertible.
An alternative to this transformation arises by choosing
A = M0−1 Gc . This satisfies the necessary condition that
−1
>
G>
c A = Gc M0 Gc is invertible. From (12), this choice
leads to:
(9)
>
µ = G>
c B B M0 B
>
As G>
c A is invertible and Gc Q = 0, then (9) implies
that
−1
p = 0k×1 ,
(16)
and the modified mass matrix becomes
(10)
"
M̃ =
p̃, we obtain
−1
∇p̃ H̃ = Q>
p̃
0 M0 Q0
" # "
#"
#
µ
A> M0 A A> M0 Q ∇µ H̃
=
.
p
Q> M0 A Q> M0 Q ∇p H̃
(15)
2
Considering the constraint equation (1),
Considering the ∇p̃ H̃ = M̃
(14)
which confirms our choice of Hamiltonian in (6) and mass
matrix in (7). Combining (5) and (13), the canonical
momentum p0 is given by
0n
A
Q
∇q H̃
q̇
µ̇ = −A> C̃11 − D̃11 −C̃ > − D̃> ∇µ H̃
21
21
−Q> C̃21 − D̃21 C̃22 − D̃22
∇p H̃
ṗ
0n×m 0n×k " #
u
(8)
>
>
+
A G0 A Gc
λ
Q> G0 0m×k
#"
#
"
>
∇
H̃
G>
A
G
Q
µ
0
0
.
y=
>
∇p H̃
A
G
G>
c Q
c
−1
(13)
Using (13), consider the Hamiltonian function in (3)
PROOF. By Proposition 1, system (2) under the momentum transformation (5) has the form
∇µ H̃ = 0k×1 .
#
−1
Q> M0 Q
p
−1
>
= Q>
p.
0 M0 Q Q M0 Q
D(p, q) = Q> (q)D0 (q, p0 )Q(q)
>h
−1 i
∂
>
C(p, q) = Q (q)
M0 Q Q> M0 Q
p
(7)
∂q
h
i
∂
−1
M0 Q Q> M0 Q
p Q(q)
−
∂q
M (q) = Q> (q)M0 (q)Q(q).
>
> >
G>
c q̇ = Gc ∇p0 H0 = Gc Q0 ∇p̃ H̃ = 0k×1 .
0k×1
#
−1
G>
c M0 Gc
0k×m
0m×k
Q> M0 Q
.
(17)
This transformation has the property that µ is equal
to the velocities in the directions that the forces due to
nonholonomic constraints act, and is thus trivially zero.
As a result of this, the mass matrix is block diagonalised,
which further reinforces the point that the decoupled
(11)
4
dynamics due to the constraints may be omitted from
the model.
Regardless of the choice of a suitable matrix A for isolating and eliminating redundant components of momentum in the presence of nonholonomic constraints, there is still freedom available in the elements of G_c^⊥. For example, we may construct G_c^⊥ to render the modified mass matrix constant, as done in [5].

Remark 4 In the derivation of [24], the term C has the form −p_0^T [Q_i, Q_j] where [·, ·] is the Lie bracket and Q_k denotes the k-th column of Q. This expression is equivalent to the form given in (7).

4 Chained structure

In this section, two configuration transformations are proposed for the system (6). The first transformation f_z : q → z is used to transform the system such that the transformed system has a chained structure, a generalisation of chained form. A second discontinuous configuration transformation f_w : z → w is proposed which serves two purposes: Firstly, the asymptotic behaviour of z(t) is reduced to the asymptotic behaviour of a single variable in the w space. Secondly, the control objective can be addressed by shaping the potential energy to be quadratic in w.

4.1 PH systems with chained structure

Chained form systems are two-input kinematic systems described by the equations:

[ż_1; ż_2; ż_3; ż_4; …; ż_n] = [1, 0; 0, 1; z_2, 0; z_3, 0; …; z_{n−1}, 0] [v_1; v_2] = Q_c(z) [v_1; v_2],    (18)

where Q_c(z) denotes the n × 2 matrix above, v_1, v_2 ∈ R are velocity inputs and z_i ∈ R are configuration variables [2,1,19]. The kinematic models of many interesting nonholonomic systems can be expressed in chained form under the appropriate coordinate and input transformations. A procedure to transform kinematic models into chained form was presented in [19].

Consider now a new set of generalised coordinates z = f_z(q) for the system (6) where f_z is invertible. By Lemma 2 of [10], the system (6) is equivalently described in the coordinates (z, p) by:

[ż; ṗ] = [0_n, Q_z; −Q_z^T, C_z − D_z] [∇_z H_z; ∇_p H_z] + [0_{n×m}; G_z] u,
y = G_z^T ∇_p H_z,
H_z = ½ p^T M_z^{−1} p + V_z(z),    (19)

where

H_z = H(q, p)|_{q=f_z^{−1}(z)},  M_z = M(q)|_{q=f_z^{−1}(z)},  V_z = V(q)|_{q=f_z^{−1}(z)},
Q_z = [∂(f_z)/∂q] Q(q)|_{q=f_z^{−1}(z)},  C_z = C(q, p)|_{q=f_z^{−1}(z)},  D_z = D(q, p)|_{q=f_z^{−1}(z)},  G_z = G(q)|_{q=f_z^{−1}(z)}.    (20)

Considering the nonholonomic system expressed in the coordinates (z, p) given by (19), pH systems with chained structure can now be defined.

Definition 5 A nonholonomic pH system of the form (19) has a chained structure if Q_z has a left annihilator of the form

Q_z^⊥(z) = [−z_2, 0, 1, 0, …, 0; −z_3, 0, 0, 1, …, 0; ⋮; −z_{n−1}, 0, 0, 0, …, 1] ∈ R^{(n−2)×n}.    (21)

The relationship between chained systems and chained structure is now apparent; Q_c as defined in (18) has the trivial left annihilator of the form (21). This annihilator is then used as the defining property in our definition of chained structure. By this definition, pH systems with chained structure are two-input systems (u ∈ R^2) with momentum space of dimension 2 (p ∈ R^2).

Remark 6 The kinematics associated with (6) are q̇ = Q ∇_p H where ∇_p H is considered an input to the kinematic system. If this kinematic system admits a feedback transformation that transforms it into chained form using the method presented in [19], then by Proposition 1 of [9], there exists a coordinate and momentum transformation that transforms (6) into (19) with Q_z = Q_c. Such a system clearly has a chained structure.

4.2 Discontinuous coordinate transformation

A discontinuous coordinate transformation f_w : z → w for systems with chained structure is now proposed. The purpose of this transformation is to render the open-loop dynamics in a form whereby the control problem can be addressed by shaping the potential energy to be quadratic in w.

Proposition 8 The function f_w : z → w, defined implicitly by (22), is well defined for all z_1 ≠ 0.
PROOF. First note that z1 = w1 is invertible for all
h
i>
h
i>
z1 . Let z 0 = z3 · · · zn , w0 = w3 · · · wn
and
The transformation fw is defined implicitly by its inverse
mapping:
z1
N (w1 ) = diag(w1 , w12 , . . . , w1n−2 ). z 0 and w0 are related
by
w1
z 0 = N (w1 )Sn N (w1 )w0 .
P
(i−2)
1
w1 w2 + n
z2
wi
i=3 (i−2)! w1
Pn
(i−1)
1
z3
w
w
i
i=3 (i−1)! 1
.
..
.. = fw−1 (w) =
.
.
Pn
(i+j−4)
1
zj
wi
i=3 (i+j−4)! w1
.
..
..
.
Pn
(i+n−4)
1
zn
wi
i=3 (i+n−4)! w1
N (w1 ) is invertible for all w1 6= 0 and Sn is invertible by
Lemma 7. Thus, we have that
w0 = N −1 (z1 )Sn−1 N −1 (z1 )z 0 .
n
w2 =
The inverse transformation (22) has been constructed
to satisfy two properties. Firstly, it can be seen that
the mapping fw−1 is smooth and if w1 = 0 then z = 0.
Thus the control problem can be addressed in the w
coordinates simply by controlling w1 . The second useful
property of (22) is that each element of z(w) = fw−1 (w)
satisfies the relationship
" #
ẇ
(23)
Sn =
..
.
···
.. . .
.
.
1
1
(n−1)! n!
···
1
(n−1)!
1
n!
..
.
1
(2n−4)!
#"
#
∇w H w
∇p H w
"
+
0n×2
Gw
#
u
(28)
Vw = Vz (z)|z=fw−1 (w)
Cw = Cz (z, p)|z=fw−1 (w)
Mw = Mz (z)|z=fw−1 (w)
Dw = Dz (z, p)|z=fw−1 (w)
∂
(fw )Qz (z)|z=fw−1 (w) Gw = Gz (z)| −1 .
Qw =
z=fw (w)
∂z
(29)
Qw
Hw = Hz (z, p)|z=fw−1 (w)
Lemma 7 The matrix Sn defined as
···
=
0n
where
The remainder of this section is devoted to proving that
fw : z → w, defined implicitly by (22), is well defined
for all z1 6= 0.
1
3!
1
4!
"
−Q>
w Cw − Dw
1
Hw = p> Mw−1 p + Vw ,
2
for i ≥ 2. Considering chained form systems (18), such a
definition is closely related to the underlying system but
integration now occurs spatially, rather than temporally.
1
2!
1
3!
(27)
By Lemma 2 of [10], (19) can be equivalently described
in the coordinates (w, p) by:
ṗ
1
z2 X
(i−2)
z
wi (z).
−
z1 i=3 (i − 3)! 1
2
Z
zi (w)|w2 =0 dw1
(26)
Finally, the transformation for w2 can be solved algebraically as the solution to
(22)
zi+1 (w) =
(25)
(24)
5
In this section, a discontinuous control law is proposed
for the nonholonomic pH system (2).
is invertible for all n ≥ 3.
PROOF. The proof is provided in the Appendix 2 .
Stabilisation via potential energy shaping and
damping injection
2
Assumption 9 Consider the nonholonomic pH system
(2) that has been expressed without Lagrange multipliers in the form (6). It is assumed that there exists
a coordinate transformation fz : q → z such that
2
The proof of Lemma 7 was proposed by user1551 on
math.stackexchange.com
6
fz−1 (0n×1 ) = 0n×1 and the dynamics expressed as a
function of z, given by (19), have a chained structure
with smooth Qz .
is given by
z
2
fw (z) =
z1 −
Under Assumption 9, (2) can be equivalently represented
in the (z, p) coordinates by (19) with a chained structure
or in the coordinates (w, p) as per (28).
5.1
z1
2z3
z12 .
(32)
2z3
z12
Figure 2 is a plot of the function
Stabilising control law
1
1
1
w1 (z)2 + w2 (z)2 |z2 =0 + w3 (z)2
2
2
2
(33)
1 2 4z32
= z1 + 4 ,
2
z1
Vz (z) =
Proposition 10 Consider the system (28) in closedloop with the control law
>
u = −G−1
Qw [Lw − ∇w Vw ]
w
(30)
k
>
+ D̂ + 2 Q>
e
e
Q
∇
H
,
1
w
p
w
w
1
w1
which is part of the shaped potential energy function,
projected onto z2 = 0. Interestingly, the level sets resemble “figure of eights” and the function diverges as z1
tends to 0, unless the ratio zz32 is bounded. Thus, it can
1
be seen that the potential function only allows the system to approach the origin from particular directions.
where D̂ ∈ R2×2 is positive definite, e1 ∈ Rn is the
first standard basis vector, k > 0 is a constant and L =
diag(l1 , . . . , ln ) is a constant positive matrix. The closedloop dynamics have the form
"
=
0n
Qw
#"
#
∇w H d
∇p H d
−Q>
w Cw − Dd
1
1
Hd = p> Mw−1 p + w> Lw,
2
|2 {z }
ṗ
Vz|z =0
2
" #
ẇ
Discontinuous potential function
(31)
5
0
5
4
2
0
0
Vd
-2
where Dd = Dw + D̂ +
z3
k
Q> e e> Qw .
w12 w 1 1
-5
-4
z1
Fig. 2. A component of the discontinuous potential energy
function Vw = 21 w> Lw with n = 3 and L = I3×3 expressed
as a function of z projected onto z2 = 0.
PROOF. The proof follows from direct computation. 2
Remark 11 As Qw is full rank, the term ∇p Hw can be
expressed as a function of w, ẇ. Thus, the control law (30)
can be expressed independent of the systems mass matrix
Mw . Further, the control law is independent of the openloop damping structure Dw . Thus the proposed control
scheme is robust against damping parameters. This is
similar to the case of energy shaping of fully-actuated
mechanical systems.
The proposed control law is comprised of two parts: potential energy shaping and damping injection. The term
>
−G−1
w Qw [Lw − ∇w Vw ] can be considered to be potential energy shaping as its role is to replace the potential
energy term Vw of (28) with Vd . The role of the potential
energy shaping is to drive the system to the configuration
z1 = w1 = 0 whilst
h keeping each wi ibounded. Likewise,
>
the term −G−1
D̂ + wk2 Q>
w
w e1 e1 Qw ∇p Hw can be con1
sidered damping injection as it increases the damping
from Dw to Dd . As the dynamics (28) are not defined
at z1 = w1 = 0, the role of the damping injection is to
ensure that the system cannot reach the configuration
z1 = 0 in finite time. The combination of the two terms
drives the system to the configuration w1 = 0 asymptotically, but prevents any finite time convergence.
Remark 12 The control law has been presented as a
function of (w, p) and stability analysis will be performed
primarily in these coordinates. However, the control law
(30) can be equivalently expressed as a function of (z, p)
via the mapping fw−1 :
u=
To visualise the potential function Vdw , consider the case
that n = 3. The resulting discontinuous transformation
7
−G−1
z
∂>
(fw )∇z Vdz − ∇z Vz
∂z
(34)
k > >
+ D̂ + 2 Qz e1 e1 Qz ∇p Hz ,
z1
Q>
z
it maximises w12 (t) on the interval [t1 , T ]. The time
derivative of the Hamiltonian satisfies
where Vdz = Vdw |w=fw (z) . The control law is discontin>
uous as a function of z due to the terms ∂∂z (fw )∇z Vdz
and zk2 Q>
z e1 ż1 .
k >
>
∇ Hd Q>
w e1 e1 Qw ∇p Hd
w12 p
k
= − 2 ẇ12 (t).
w1 (t)
Ḣd (t) ≤ −
1
5.2
Stability analysis
The remainder of this section is devoted to showing that
the closed-loop dynamics (31) are well defined for all
time and w1 → 0 as t → ∞. To do this, let x = (p, w)
and define the set
U = {x|Hd (x) ≤ Hd (0), w1 6= 0}.
Integrating with respect to time from t0 to T
Hd (T ) − Hd (t0 ) ≤ −
Z
T
t0
(35)
k
Hd (T ) − Hd (t ) ≤ − 2 0
w1 (t )
0
The following proposition demonstrates that the set U is
positively invariant which implies that provided that the
system is initialised with w1 (t0 ) 6= 0, then (31) describes
the system dynamics for all time.
Z
x2
2
Z
f (x)dx ≥ −
x1
x2
(39)
T
Z
t0
ẇ12 (t)dt.
(40)
Applying Lemma 13 to this inequality
Hd (T ) − Hd (t0 )
Lemma 13 Any real valued function f (x) satisfies the
inequality,
1
x2 − x1
k
ẇ2 (t)dt.
w12 (t) 1
As w1 (t0 ) = max{w1 (t)} ∀t ∈ [t1 , T ],
Recalling that fw is well defined for z1 6= 0, the closedloop dynamics (31) are well defined on U .
−
(38)
k
1
≤− 2 0
w1 (t ) T − t0
k
w12 (t0 ) T
k
≤− 2 0
w1 (t ) T
k
.
≤−
T − t0
≤−
f 2 (x)dx, (36)
x1
where x2 > x1 are in the domain of f .
PROOF. The proof follows from the Schwarz inequality [16]. Details are provided in the Appendix. 2
Z
T
!2
ẇ1 (t)dt
t0
1
2
(w1 (T ) − w1 (t0 ))
− t0
1
w2 (t0 )
− t0 1
(41)
As T − t0 ≤ T − t1 is arbitrarily small, the right hand
side of this inequality can be made arbitrarily large by
choosing T − t1 small enough. However, the Hamiltonian is lower bounded, thus we have a contradiction.
Thus, we conclude that there is no finite T such that
limt→T z1 (t) = 0 which implies that U is positively invariant. 2
Proposition 14 If the closed-loop dynamics (31) have
initial conditions such that w1 (t0 ) 6= 0, then the set U is
positively invariant. That is, x(t) ∈ U for all t ≥ t0 .
PROOF. The time derivative of Hd satisfies
Ḣd = −p> Mw−1 Dd Mw−1 p ≤ 0.
By Proposition 14, it is clear that the closed-loop dynamics are well defined for all finite time t < ∞. The
asymptotic behaviour of the system is now considered.
The underlying approach taken here is to show that the
system cannot approach any subset of U asymptotically.
To this end, the following Lemma shows that Ḣd cannot
be identically equal to zero on the set U .
(37)
For any time interval ∆t = [t0 , T ) with the property
that w1 (t) 6= 0 ∀t ∈ ∆t, the shaped Hamiltonian will
satisfy Hd (t) ≤ Hd (t0 ). Considering that Hd is quadratic
in p and w, this means that p(t) and w(t) are bounded
for all t ∈ ∆t. This means that z(t) is bounded on ∆t
as fw−1 is smooth (22). As w1 = z1 , p(t) is bounded
and Qz is smooth, considering the dynamics (19) reveal
that ẇ1 (t) = ż1 (t) is bounded for all t ∈ ∆t. As ẇ(t) is
bounded, limt→T w1 (t) exists for any T .
Lemma 15 Consider the closed-loop dynamics (31) defined for all w1 6= 0. On the set U there is no solution to
(31) satisfying Ḣd = 0 identically.
Now, for the sake of contradiction, assume that
limt→T w1 (t) = 0 for some finite T ∈ [t0 , ∞). Taking
any interval [t1 , T ], such that t1 ≥ t0 , pick t0 such that
PROOF. The time derivative of Hd along the trajectories of (31) are given by (37). As Dd , Mw > 0, for (37) to
8
PROOF. Recall that x = (p, w) and U defined in
(35) is positively invariant by Proposition 14. As Hd is
quadratic in p and w, it is radially unbounded which
implies that U is a bounded set.
be identically equal to zero, p must be identically equal
to zero. This means that ṗ = 02×1 along such a solution.
Evaluating the ṗ dynamics of (31) at p = ṗ = 02×1
results in
−Q>
w Lw = 0n×1 .
By the Bolzano-Weierstrass theorem, any solution x(t)
admits an accumulation point as t → ∞. The set of all
accumulation points is denoted L+ . To see that x(t) →
L+ , first presume that does not. Then, there exists a sequence tk such that d(x(tk ), L+ ) > . As x(t) is bounded,
x(tk ) has a convergent subsequence by the BolzanoWeierstrass theorem and such a subsequence converges
to L+ , which is a contradiction.
(42)
i
h
∂>
>
From (29), Q>
w = Qz (z) ∂z (fw )
−1
z=fw
(w)
, which al-
lows (42) to be rewritten as
Q>
z (z)
−1
z=fw
(w)
∂>
(fw ) z=f −1 (w) Lw = 0n×1 . (43)
w
{z
}
|∂z
As Hd (t) is monotonically decreasing and bounded below by zero, limt→∞ Hd = HL exists. Now suppose that
V = L+ ∩ U 6= ∅. By definition, for each y ∈ V , there
exists a sequence tn such that limn→∞ x(tn ) = y. As Hd
is continuous and limt→∞ Hd = HL , H(V ) = HL . By
the continuity of solutions on U and Lemma 14, a solution x(t) with x(0) = y is contained in V . Thus, such a
solution satisfies Ḣd (t) = 0.
Qra
The expression (43) is satisfied if the columns of Qra are
⊥
in the null-space of Q>
z . Letting Qz be any full rank left
annihilator of Qz , (43) is satisfied if
h
∂>
(fw )|z=fw−1 (w) Lw = Q⊥
z
∂z
i>
−1
z=fw
(w)
a(w), (44)
But by Proposition 15, there is no solution in the set
U satisfying Ḣd = 0 identically. Thus we conclude that
V = ∅ and L+ is contained in the set
where a(w) ∈ Rn−2 is an arbitrary vector. Rearranging
(44) results in
Lw =
∂ > −1 h ⊥
(f ) Qz
∂w w
i>
−1
z=fw
(w)
a(w).
(45)
Ū \ U = {x|H(x) ≤ H(0), w1 = 0}.
(47)
As x(t) → L+ , w1 (t) → 0.
⊥ >
∂>
−1
is exTaking Q⊥
z to be (21), the term ∂w (fw ) Qz
panded in (46), where ∗ denotes an unevaluated element.
Considering the coordinate transformation (22), and
noting that each wi (t) is bounded, w1 (t) → 0 implies
that each zi (t) tends towards zero. 2
Considering the second row of (45) with the evaluation
in (46), w2 = 0. Substituting in w2 = 0 and considering
the first row of (45), w1 = 0. Clearly, such a solution is
not contained in U . 2
Notice that although z tends towards the origin asymptotically, the asymptotic behaviour of p has not been established. Clearly p(t) ∈ L∞ as 21 p> Mw−1 p < Hd (t) ≤
Hd (0) for all time. Further analysis is considered beyond
the scope of this paper and left an area for future research.
When analysing the asymptotic behaviour of Hamiltonian systems, it is typical to invoke LaSalle’s invariance
principle to show that the system converges to the largest
invariant set contained within Ḣd = 0. However, we note
that as the closed-loop dynamics (31) have a discontinuous right hand side, LaSalle’s theorem does not apply.
6
Car-like system example
The following Proposition shows that w1 does indeed
tend towards zero. The intuition behind that Proposition
is noticing that if the system were to converge to a set
that is at least partially contained within U , then Ḣd
would be identically equal to zero on this set. Note that
the proof presented here is very similar in nature to the
proof of LaSalle’s theorem found in [13].
In this section, a car-like system is modelled and controlled. The system is shown to have a chained structure
and thus is able to be controlled with the control law
(30).
Proposition 16 If the closed-loop dynamics (31) have
initial conditions such that w1 (t0 ) 6= 0 then limt→∞ w1 =
0. Furthermore, this implies that limt→∞ z = 0n×1 .
The car-like system (Figure 3) can be modelled as a
mechanical pH system (2), subject to two Pfaffian constraints. The kinetic co-energy of the system is computed
6.1
9
Modelling the car-like system
P
(i−3)
1
1 w2 + n
wi z2 (w)|w2 =0
i=3 (i−3)! w1
0
w1
0
1
0
w2
w1
2! 1
2
1
1
0
w
w3
2! 1
3! 1
.
..
..
..
.
.
n−2
1
1
0
w
wn−1
(n−2)! 1
(n−1)! 1
|
{z
∂>
∂w
−1
(fw
)
···
···
···
···
..
.
···
−z2 −z3 · · · −zn−1
−w1 w2
0
0 ···
0 0
0
n−1
1
w
0 ···
0
(n−1)! 1
∗
1
=
n
1
0
1 ···
0 ∗
w
n! 1
.
.
.. ..
..
.
..
..
.
..
.
.
(2n−4)
1
0
0
·
·
·
1
∗
w
1
(2n−4)!
{z
}
}|
>
[Q⊥
z ]
zn−1 (w)
y
0 · · · 0
∗ · · · ∗
∗ · · · ∗
.. . . ..
. .
.
(46)
∗ ··· ∗
the constraints (49) results in
ẋ2 = ẋ1 − lθ̇ sin θ
φ
l
which can be substituted into (48) to find
(x1 , y1 )
θ
T∗=
x
Fig. 3. The car-like system is fixed to the ground at the points
(x1 , y1 ) and (x2 , y2 ). The point (x1 , y1 ) is able to move along
the direction of θ while (x2 , y2 ) can move in the direction of
θ + φ. The two wheels have a fixed distance given by l and
the rear wheel is unable to move relative to the frame. The
front wheel can pivot about its point of anchor to the frame.
We have two control inputs to this system; A torque can be
applied to the front wheel to cause it to turn about φ and
a force can be applied to the rear wheel along the direction
given by θ.
1
1
1
m1 (ẋ21 + ẏ12 ) + J1 θ̇2 + m2 (ẋ1 − lθ̇ sin θ)2
2
2
2
1
2
+(ẏ1 + lθ̇ cos θ) + J2 φ̇22 .
2
(51)
Taking the configuration to be q = (x1 , y1 , θ, φ), the
mass matrix for the car-like system to be
m1 + m2
0
−m2 l sin θ 0
0
m1 + m2 m2 l cos θ 0
M0 (q) =
. (52)
−m2 l sin θ m2 l cos θ m2 l2 + J1 0
0
0
0
J2
as
1
1
1
1
m1 (ẋ21 + ẏ12 ) + J1 θ̇12 + m2 (ẋ22 + ẏ22 ) + J2 θ̇22 ,
2
2
2
2
(48)
It is assumed that the system experiences linear viscous
damping with dissipation term
where mi , Ji are the masses and moment of inertias of
the rear and front wheels respectively. The system is
subject to two holonomic constraints
x2 = x1 + l cos θ
y2 = y1 + l sin θ,
(50)
ẏ2 = ẏ1 + lθ̇ cos θ,
(x2 , y2 )
T∗=
0 ··· 0
d
u
0
D0 =
0
0
(49)
which must be satisfied along any solution to the system.
The need for these auxiliary equations can be removed
by the appropriate selection of configuration variables.
0 0 0
du 0 0
,
0 dθ 0
0 0 dφ
(53)
where du , dθ , dφ > 0 are the damping coefficients. du is
the coefficient for the x1 and y1 directions while dθ is
the damping in the θ direction and dφ is the damping in
the φ direction. It is assumed that there is a force input
along θ to the rear wheel and a torque input about φ on
Our objective is to stabilise the configuration of the rear
wheel (x1 , y1 , θ) to the origin. As such, the coordinates
x2 and y2 are eliminated from our dynamic equations by
using the identities (49). Taking the time derivatives of
10
the front wheel, which gives the input mapping matrix
cos θ
sin θ
G0 (q) =
0
0
tem matrices are computed according to (7),
0
0
.
0
1
0
tan θ
0
Q=
1 sec θ tan φ 0
l
0
1
"
#
a(q) 0
M=
0 J2
"
#
b(q) 0
D=
0 dφ
"
#
0
c(q)p1
C=
−c(q)p1
0
"
#
l cos(θ1 − θ2 ) 0
G=
0
1
(54)
The system is subject to two nonholonomic constraints
that arise due to the non-slip conditions on the wheels:
ẏ1 cos θ − ẋ1 sin θ = 0
ẏ2 cos(θ + φ) − ẋ2 sin(θ + φ) = 0.
1
(55)
These constraints can be written without ẋ2 and ẏ2 using
the identities (50) and then expressed in the form (1)
with the matrix
(59)
V(q) = 0,
"
G>
c (q) =
sin θ
− cos θ
0
#
0
sin(θ + φ) − cos(θ + φ) −l cos φ 0
. (56)
where
J1 sin2 φ + l2 m2 + l2 m1 cos2 φ
l2 cos2 θ cos2 φ
2
dθ sin φ + du l2 − du l2 sin2 φ
(60)
b(q) =
l2 cos2 θ cos2 φ
(m2 l2 + J1 ) sin φ
c(q) =
.
cos φ(J1 sin2 φ + l2 m2 + l2 m1 cos2 φ)
a(q) =
The matrices (52), (53), (54) and (56) describe the car
like system in the form (2).
6.2
Elimination of Lagrange multipliers
Now the results of Section 3 are applied in order to express the equations of motion of the car-like vehicle without Lagrange multipliers. As per (5), we define the matrix
>
(G⊥
c (q))
1
tan θ
=
1
sec θ tan φ
l
0
0
6.3
The dynamics of the car-like system can be expressed
in a different set of generalised coordinates in order to
obtain a chained structure. Utilising the transformation
proposed in [19], the transformation fz is defined as
0
,
0
1
(57)
z1
x1
z
1 sec3 θ tan φ
2
l
z = = fz (q) =
,
z3
tan θ
z4
y
which satisfies G⊥
c (q)Gc (q) = 0. Note that this choice of
Gc coincides with the kinematic description of the carlike vehicle studied in [19]. Defining
>
Q(q) = (G⊥
c (q))
Coordinate transformation
(61)
which results in a new pH system of the form (19) with
(58)
1
3z2 z
22 3
Qz = z3 +1
z2
z3
allows us to express systems dynamics without Lagrange
multipliers according to Proposition 3. The car-like system can now be written in the form (6), where the sys-
11
1 2
l (z3
3
+ 1) 2
0
h
l2 z22
(z32 +1)3
0
0
i
+1
.
(62)
The control law can be implemented as a function of z
as per (34).
6.4 Numerical simulation
The car-like vehicle was simulated using the following
parameters:
m1 = 0.5
m2 = 2
J1 = 1
J2 = 1
L = diag(1, 10, 0.01, 0.0001)
l = 1.5
du = 4
dθ = 1
dφ = 2
k = 0.01.
(63)
Fig. 5. The car-like vehicle moving through the task space.
The ghosted images represent a time-lapse of the trajectory
at times t0 = 0.0s, t1 = 7.05s, t2 = 60.0s. The red and blue
lines are the paths of the front and rear wheel projected onto
the x − y plane respectively.
The simulation was run for 60 seconds using the initial conditions x(0) = 4, y(0) = 2, θ(0) = 0, φ(0) = 0. Figure 4 shows the time history of the states q(t) and control action u(t), and Figure 5 is a time-lapse plot of the car-like vehicle travelling from its initial conditions to the origin.
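The following Python sketch is not the paper's simulation: it only integrates the kinematic car model, with the wheelbase l = 1.5 from (63), under placeholder open-loop inputs, to show the shape of such a simulation loop. The full pH dynamics, the damping terms and the control law (30)/(34) are not implemented here.

```python
import numpy as np

l = 1.5  # wheelbase, value taken from (63)

def car_kinematics(q, v, omega):
    """Kinematic car: q = (x1, y1, theta, phi); v drives the rear wheel,
    omega steers the front wheel (a simplification of the full pH model)."""
    x1, y1, th, phi = q
    return np.array([v * np.cos(th),
                     v * np.sin(th),
                     v * np.tan(phi) / l,
                     omega])

q = np.array([4.0, 2.0, 0.0, 0.0])   # initial condition used in the paper
dt, T = 1e-3, 10.0
for step in range(int(T / dt)):
    t = step * dt
    v, omega = -0.5 * np.cos(0.2 * t), 0.3 * np.sin(0.5 * t)  # placeholder inputs
    q = q + dt * car_kinematics(q, v, omega)

print("state after 10 s:", q)
```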
Fig. 4. Time history of the configuration variables of the car-like vehicle (top panel: Configuration, q(t), with curves x1 and y1 labelled; bottom panel: Control action, u(t), with curves u1 and u2; horizontal axis t, 0-60 s). Using the discontinuous control law, all states converge to the origin asymptotically.

7 Conclusion

In this paper, a discontinuous control law for set-point regulation of nonholonomic, port-Hamiltonian systems with chained structure to the origin is presented. The control scheme relies on a discontinuous coordinate transformation that reduces the control problem to the stabilisation of a single variable in the transformed space. The proposed control law can be interpreted as potential energy shaping with damping injection and is robust against the inertial and damping properties of the open-loop system. Future work will be concerned with extending the analysis of the closed-loop system to consider the asymptotic behaviour of the momentum variables and control action.

References

[1] A. Astolfi. Discontinuous control of nonholonomic systems. Systems & Control Letters, 38(27):15–37, 1996.
[2] A. Bloch, J. Baillieul, P. Crouch, and J. Marsden. Nonholonomic Mechanics and Control. Springer-Verlag, New York, 2003.
[3] H. Choset, L. Lynch, S. Hutchinson, G. Kantor, W. Burgard, L. Kavraki, and T. Sebastian. Principles of Robot Motion: Theory, Algorithms, and Implementation. The MIT Press, 2005.
[4] S. Delgado and P. Kotyczka. Energy shaping for position and speed control of a wheeled inverted pendulum in reduced space. Automatica, 74:222–229, 2016.
[5] A. Donaire, J.G. Romero, T. Perez, and R. Ortega. Smooth stabilisation of nonholonomic robots subject to disturbances. In IEEE International Conference on Robotics and Automation, pages 4385–4390, Seattle, 2015.
[6] J. Ferguson, A. Donaire, and R.H. Middleton. Switched Passivity-Based Control of the Chaplygin Sleigh. In Proc. IFAC Symposium on Nonlinear Control Systems, pages 1012–1017, Monterey, California, 2016. Elsevier B.V.
[7] J Ferguson, A Donaire, and R.˜H. Middleton. Discontinuous
energy shaping control of the Chaplygin sleigh. In Proc.
IFAC Workshop on Lagrangian and Hamiltonian Methods
for Nonlinear Control, 2018, arXiv: 1801.06278.
[8] K. Fujimoto, K. Ishikawa, and T. Sugie. Stabilization of a
class of Hamiltonian systems with nonholonomic constraints
and its experimental evaluation. In IEEE Conference on
Decision and Control, pages 3478 – 3483, 1999.
12
there exists a non-trivial solution z to Q>
0 z = 0. Then,
[9] K. Fujimoto, S. Sakai, and T. Sugie.
Passivity-based
control of a class of Hamiltonian systems with nonholonomic
constraints. Automatica, 48(12):3054–3063, 2012.
[10] K. Fujimoto and T. Sugie. Canonical transformation and
stabilization of generalized Hamiltonian systems. Systems
and Control Letters, 42(3):217–227, 2001.
(A.1)
G⊥
c z = 0.
(A.2)
and
[11] H. Goldstein. Classical Mechanics. Addison-Wesley, Reading,
MA, 2 edition, 1980.
[12] F. Gómez-Estern and A.J. van der Schaft. Physical damping
in IDA-PBC controlled underactuated mechanical Systems.
European Journal of Control, 10(5):451–468, 2004.
Since z 6= 0, then (A.2) requires that z = Gc x for some
x 6= 0 as z must be in the range of Gc . Then (A.1) becomes A> Gc x = 0. As A> Gc is invertible by assumption,
x cannot be non-zero, which is a contradiction. Hence,
we conclude that Q0 must be invertible. 2
[13] H. Khalil. Nonlinear systems. Prentice Hall, New Jersey,
third edition, 1996.
[14] I. Kolmanovsky and N.H. Mcclamroch. Developments in
nonholonomic control problems. IEEE Control Systems,
15(6):20–36, 1995.
[15] D. Lee. Passivity-based switching control for stabilization
of wheeled mobile robots. In Proc. Robotics: Science and
Systems, 2007.
Proof of Lemma 7 Consider the matrix
(n+2)!
(n+k)!
(n+1)!
(n−k+2)! (n−k+3)! · · ·
(n+1)!
(n+1)!
(n+2)!
(n+k)!
(n−k+3)! (n−k+4)! · · · (n+2)!
..
..
..
..
.
.
.
.
Ak =
.
(n+2)!
(n+k)!
(n+1)!
· · · (n+k−2)!
(n−1)!
n!
(n+1)!
(n+2)!
(n+k)!
· · · (n+k−1)!
n!
(n+1)!
1
1
···
1
[16] E.H. Lieb and M. Loss. Analysis. American Mathematical
Society, Providence, RI, 2 edition, 2001.
[17] B.M. Maschke and A. van der Schaft. A Hamiltonian
approach to stabilization of nonholonomic mechanical
systems. In IEEE Conference on Decision and Control, pages
2950–2954, 1994.
[18] V.
Muralidharan,
M.T. Ravichandran, and A.D. Mahindrakar. Extending
interconnection and damping assignment passivity-based
control (IDA-PBC) to underactuated mechanical systems
with nonholonomic Pfaffian constraints: The mobile inverted
pendulum robot. In IEEE Conference on Decision and
Control, pages 6305–6310, 2009.
(A.3)
As An = Sn diag{(n+1)!, (n+2)!, · · · , 2n!}, invertibility
of An is equivalent to invertibility of Sn . We will show
that Ak is invertible by induction. Suppose that Ak−1 is
invertible. Subtracting each column of Ak by the column
on its left results in the matrix
[19] R.M. Murray and S.S. Sastry. Steering nonholonomic systems
in chained form. In IEEE Conference on Decision and
Control, pages 1121–1126, 1991.
[20] R. Ortega and E. Garcia-Canseco. Interconnection and
damping assignment passivity-based control: A survey.
European Journal of control, 10(5):432–450, 2004.
(n+1)!
(n+1)!
(n−k+3)! (k
(n−k+2)!
(n+1)!
(n+1)!
(n−k+3)! (n−k+4)! (k
[21] C. Samson. Control of chained systems application to
path following and time-varying point-stabilization of mobile
robots. IEEE Transactions on Automatic Control, 40(1):64–
77, 1995.
..
.
Ãk =
(n+1)!
(n−1)!
n+1
1
[22] O.J. Sørdalen. Conversion of the kinematics of a car with
n trailers into a chained form. International Conference on
Robotics and Automation, pages 382–387, 1993.
[23] Y.P. Tian and S. Li.
Exponential stabilization of
nonholonomic dynamic systems by smooth time-varying
control. Automatica, 38(7):1139–1146, 2002.
..
.
− 1) · · ·
− 2) · · ·
..
.
(n+1)!
n! (2)
···
1
···
0
···
(n+k−1)!
(n+1)! (k
(n+k−1)!
(n+2)! (k
− 1)
− 2)
..
.
.
(n+k−1)!
(n+k−2)! (2)
1
0
(A.4)
[24] A. van der Schaft and B.M. Maschke. On the Hamiltonian
formulation of nonholonomic mechanical systems. Reports
on Mathematical Physics, 34(2):225–233, 1994.
Notice that the top right (k − 1) × (k − 1) block is
diag(k − 1, k − 2, · · · , 2, 1)Ak−1 which is invertible by
our inductive hypothesis. Thus Ãk , and hence Ak , is invertible. To complete the proof, we note that A1 = [1] is
trivially invertible.
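As a quick numerical illustration (separate from the proof itself), invertibility of A_k can be checked for small k. The sketch below assumes the entry pattern A_k[i, j] = (n+j)!/(n+j−k+i)! (1-based indices) read off display (A.3); the choice of n is ours.

```python
# Numerical check of the invertibility of A_k for small k, assuming the
# entry pattern A_k[i, j] = (n + j)! / (n + j - k + i)!  (1-based indices)
# read off display (A.3).
from math import factorial
import numpy as np

def A(k, n):
    return np.array([[factorial(n + j) / factorial(n + j - k + i)
                      for j in range(1, k + 1)] for i in range(1, k + 1)])

for k in range(1, 6):
    n = 7  # any n with n - k + 2 >= 0 keeps all factorial arguments valid
    print(k, np.linalg.matrix_rank(A(k, n)) == k)  # expect True for every k
```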
[25] G. Viola, R. Ortega, and R. Banavar. Total energy shaping
control of mechanical systems: simplifying the matching
equations via coordinate changes. IEEE Transactions on
Automatic Control, 52(6):1093–1099, 2007.
Appendix A
Proof of Lemma 2. Assume that Q_0^⊤ is singular. Then (A.1) and (A.2) above hold for some z ≠ 0.
Proof of Lemma 13. By the Schwarz inequality [16], any two real valued functions f(x), g(x) satisfy
\[
\left( \int_{x_1}^{x_2} f(x) g(x)\, dx \right)^2 \le \int_{x_1}^{x_2} f^2(x)\, dx \int_{x_1}^{x_2} g^2(x)\, dx. \tag{A.5}
\]
Taking g(x) = 1, (A.5) simplifies to
\[
\left( \int_{x_1}^{x_2} f(x)\, dx \right)^2 \le \int_{x_1}^{x_2} 1\, dx \int_{x_1}^{x_2} f^2(x)\, dx
= (x_2 - x_1) \int_{x_1}^{x_2} f^2(x)\, dx,
\qquad
\frac{1}{x_2 - x_1} \left( \int_{x_1}^{x_2} f(x)\, dx \right)^2 \le \int_{x_1}^{x_2} f^2(x)\, dx. \tag{A.6}
\]
Taking the negative of this inequality results in
\[
-\frac{1}{x_2 - x_1} \left( \int_{x_1}^{x_2} f(x)\, dx \right)^2 \ge - \int_{x_1}^{x_2} f^2(x)\, dx \tag{A.7}
\]
as desired.
| 3 |
IEEE SIGNAL PROCESSING LETTERS
Look Wider to Match Image Patches with
Convolutional Neural Networks
arXiv:1709.06248v1 [cs.CV] 19 Sep 2017
Haesol Park, and Kyoung Mu Lee
Abstract—When a human matches two images, the viewer has
a natural tendency to view the wide area around the target
pixel to obtain clues of right correspondence. However, designing
a matching cost function that works on a large window in
the same way is difficult. The cost function is typically not
intelligent enough to discard the information irrelevant to the
target pixel, resulting in undesirable artifacts. In this paper, we
propose a novel convolutional neural network (CNN) module to
learn a stereo matching cost with a large-sized window. Unlike
conventional pooling layers with strides, the proposed per-pixel
pyramid-pooling layer can cover a large area without a loss
of resolution and detail. Therefore, the learned matching cost
function can successfully utilize the information from a large area
without introducing the fattening effect. The proposed method is
robust despite the presence of weak textures, depth discontinuity,
illumination, and exposure difference. The proposed method
achieves near-peak performance on the Middlebury benchmark.
Index Terms—stereo matching, pooling, CNN
I. INTRODUCTION
Most stereo matching methods first compute the matching
cost of each pixel with a certain disparity, before optimizing
the whole cost volume either globally or locally by using specific prior knowledge [1]. For decades, many researchers have
focused on the second step, designing a good prior function
and optimizing it [2], [3], [4], [5], [6]. Few studies have been
conducted on designing or selecting a better matching cost
function.
One of the most widely used matching cost functions is a
pixel-wise matching cost function, such as the one used in [7].
Along with sophisticated prior models, it sometimes produces
good results, especially in preserving the detailed structures
near the disparity discontinuities. However, the function fails
when the image contains weakly-textured areas or repetitive
textures. In such cases, a window-based matching cost, such
as CENSUS or SAD [8], produces a more reliable and distinctive measurement. The critical shortcoming of window-based
matching cost functions is their unreliability around disparity
discontinuities. Figure 1 visually illustrates the characteristics
of different matching cost measures.
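For concreteness, a minimal NumPy sketch of the two kinds of cost follows; the function names, the window size and the boundary handling are illustrative and are not taken from the references above.

```python
# Minimal sketch contrasting a pixel-wise absolute-difference cost with a
# window-based SAD cost for a single reference pixel.
import numpy as np

def pixelwise_cost(left, right, y, x, d):
    """|I_L(y, x) - I_R(y, x - d)| for one pixel and disparity d."""
    return abs(float(left[y, x]) - float(right[y, x - d]))

def sad_cost(left, right, y, x, d, win=5):
    """SAD over a (2*win+1) x (2*win+1) window; assumes both windows
    stay inside the images."""
    lp = left[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    rp = right[y - win:y + win + 1, x - d - win:x - d + win + 1].astype(float)
    return float(np.abs(lp - rp).sum())

# Toy usage: a synthetic pair where the right image is the left image
# shifted by 3 pixels, so the SAD minimiser should be disparity 3.
L = np.random.rand(50, 80)
R = np.roll(L, -3, axis=1)
costs = [sad_cost(L, R, 25, 40, d) for d in range(10)]
print(int(np.argmin(costs)))  # expected: 3
```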
One method to handle this trade-off is to make the window-based cost adaptive to its input patterns [10], [11], [12]. The key
idea is making the shape of the matching template adaptive so
that it can discard the information from the pixels that are irrelevant to the target pixel. However, knowing the background
pixels before the actual matching is difficult, making it a
H. Park and K. M. Lee are with Automation and Systems Research Institute,
Seoul National University, Seoul 151-744, Korea
[Figure 1 plots: the matching cost for x1 (left column) and x2 (right column), shown as a normalized function of disparity for the pixel-wise cost, SAD (11×11), SAD (37×37), CENSUS (11×11), CENSUS (37×37), MC-CNN-acrt [13] (11×11), and the proposed cost (37×37).]
Fig. 1. The top image shows the reference image with two interested points,
x1 and x2 . The pixel positions are marked as blue dots, whereas the red
and green boxes represent 37 × 37 and 11 × 11 windows centered on them,
respectively. At the bottom, the matching costs for each pixel are visualized
as a normalized function of disparity for different matching cost functions.
The positions of true disparities are marked as red vertical lines. The pixelwise cost shows the lowest values at the true disparity, but it also gives zero
costs for other disparities. The SAD and CENSUS matching cost functions [9]
become less ambiguous as the matching window becomes larger. However,
these functions are affected by pixels irrelevant to the target pixel (x2 ). Even
the matching cost learned by using the baseline convolutional neural network
(CNN) architecture fails when the surface has a nearly flat texture (x1 ). On
the other hand, the proposed CNN architecture works well both on weakly
textured regions and disparity discontinuities.
‘chicken-and-egg’ problem. Therefore, the use of a CNN [13],
[14] is appropriate, as it automatically learns the proper shape
of the templates for each input pattern.
The existing methods, however, are based on conventional
CNN architectures resembling the AlexNet [15] or VGG [16]
network, which are optimized for the image classification task
and not for image matching. The architectures comprise several convolution layers, each followed by a rectified linear
unit (ReLU) [15], and pooling layers with strides. One of the
limitations of using these architectures for matching is the
difficulty of enlarging the size of the patches that are to be
compared. The effective size of the patch is directly related to
the spatial extent of the receptive field of CNN, which can be
increased by (1) including a few strided pooling/convolution
layers, (2) using larger convolution kernels at each layer, or
(3) increasing the number of layers. However, use of strided
pooling/convolution layers makes the results downsampled,
losing fine details. Although the resolution can be recovered
by applying fractional-strided convolution [17], reconstructing
small or thin structures is still difficult once they are lost
after downsampling. Increasing the size of the kernels is also
problematic, as the number of feature maps required to represent the larger patterns increases significantly. Furthermore, a
previous study [18] reported that the repetitive usage of small
convolutions does not always result in a large receptive field.
This paper contributes to the literature by proposing a novel
CNN module to learn a better matching cost function. The
module is an innovative pooling scheme that enables a CNN
to view a larger area without losing the fine details and without
increasing the computational complexity during test times.
The experiments show that the use of the proposed module
improves the performance of the baseline network, showing
competitive results on the Middlebury [1], [19] benchmark.
II. RELATED WORKS
Given the introduction of high-resolution stereo datasets
with the ground-truth disparity maps [20], [19], [21], many
attempts have been made to learn a matching cost function
using machine learning algorithms [13], [14], [22]. The most
impressive results are obtained by using CNN [13], [14].
The architecture proposed in [13] takes a small 11 × 11
window and processes it without the use of pooling. The
computed cost volume is noisy due to the limited size of
the window. Thus, it is post-processed by using the cross-based cost aggregation (CBCA) [23], semi-global matching
(SGM) [3], and additional refinement procedures. On the other
hand, the method in [14] uses multiple pooling layers and
spatial-pyramid-pooling (SPP) [24] to process larger patches.
However, the results show a fattening effect owing to the loss
of information introduced by pooling.
The main contribution of this paper is in proposing a
novel pooling scheme that can handle information from a
large receptive field without losing the fine details. Recently,
several attempts have been made to accomplish the same
goal in the context of semantic segmentation [25], [26], [27].
These methods combine the feature maps from the high-level layers with those from the lower layers, with the aim
of correctly aligning the object-level information along the
pixel-level details. While this approach can successfully align
the boundaries of the big objects, its inherent limitation is
its inability to recover small objects in the final output once
they are lost during the abstraction due to multiple uses of
pooling. In the same context, the FlowNet [28] architecture
can upsample the coarse-level flow to the original scale by
using lower-level feature maps. However, it fails to recover the
extreme flow elements that are hidden due to the low resolution
of high-level feature maps.
The architecture most closely related to the current work
has been proposed in [24]. Unlike the other approaches, the
Fig. 2. The 4P module with pooling size vector s = [5, 3, 1] is visualized.
This figure shows its action for one channel of the feature maps for brevity;
it does the same job for all channels.
SPP network excludes pooling layers between convolutional
layers. Instead, it first computes highly-nonlinear feature maps
by cascading convolutional layers several times and then
generates high-level and mid-level information by pooling
them at different scales. By keeping the original feature maps
along with feature maps pooled at multiple scales, the SPP
network can combine the features from multiple levels without
losing fine details. Although the previously mentioned stereo
method in [14] uses SPP, it also employs conventional pooling
layers between convolutional layers, thus losing the detailed
information.
III. ARCHITECTURE OF THE NEURAL NETWORK
The proposed architecture takes two input patches and
produces the corresponding matching cost. In the following
subsections, the newly proposed module is first introduced.
Then the detailed architecture of the entire network is presented.
A. Per-pixel Pyramid Pooling (4P)
The use of pooling layers in CNN has been considered
desirable because of its accuracy and efficiency in image
classification tasks. While the use of max-pooling layers has
been reported to provide an additional invariance in spatial
transformation, the most important gain comes from the
downsampling of feature maps. By performing pooling with a
stride that is larger than one, the output feature maps after
the pooling are scaled down. The final scale of the CNN
output is decreased exponentially in terms of the number of
pooling layers. Given that no parameters related to a pooling
operation exist, this method is an effective way to widen the
receptive field area of a CNN without increasing the number
of parameters. The drawback of strided pooling is that the
network loses fine details in the original feature maps as the
pooling is applied. Thus, a trade-off exists in seeing a larger
area and preserving the small details.
Inspired by the idea discussed in [24], we propose a novel
pooling scheme to overcome this trade-off. Instead of using a
small pooling window with a stride, a large pooling window
is used to achieve the desired size of the receptive field. The
use of one large pooling window can lead to the loss of
finer details. Thus, multiple poolings with varying window
sizes are performed, and the outputs are concatenated to
TABLE I
The quantitative results on the ‘training dense’ set of the Middlebury benchmark [1] are shown. The error represents the percentage of bad pixels with a disparity threshold of 2.0, and the same weighting scheme as in [1] is applied when computing the average.
[Figure 3 diagram: both networks process the left (L) and right (R) patches with stacks of 3×3 Conv.+ReLU (112) layers followed by 1×1 Conv.+ReLU (384) layers and a final 1×1 Conv.+Sigmoid (1) layer producing the matching score; the proposed network additionally contains a 1×1 Conv. (96) layer and a 4P (384) module before its last 1×1 convolution layers.]
Fig. 3. The network structures are visualized for the baseline network, ‘MC-CNN-acrt’ [13], and the proposed network. The parenthesized numbers at
each layer represent the number of feature maps after the corresponding
operations. Note that this figure is drawn in terms of the fully convolutional
network.
create new feature maps. The resulting feature maps contain
the information from coarse-to-fine scales. The multi-scale
pooling operation is performed for every pixel without strides.
We call this whole procedure, “per-pixel pyramid pooling”
(4P), which is formally defined as follows:
P^{4P}(F, s) = [P(F, s_1), ⋯, P(F, s_M)],    (1)
where s is a vector having M number of elements, and
P (F, si ) is the pooling operation with size si and stride one.
The structure of this module is illustrated in Figure 2.
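A minimal PyTorch sketch of Eq. (1) is given below; the choice of max-pooling, stride 1 and zero ‘same’ padding for odd window sizes is our assumption, since these details are not fixed by the definition above.

```python
# Sketch of per-pixel pyramid pooling (4P): stride-1 poolings at several odd
# window sizes, concatenated along the channel axis.
import torch
import torch.nn.functional as F

def per_pixel_pyramid_pooling(feats, sizes=(27, 9, 3, 1)):
    """feats: (N, C, H, W); returns (N, C * len(sizes), H, W)."""
    pooled = []
    for s in sizes:
        if s == 1:
            pooled.append(feats)  # a size-1 pooling window is the identity
        else:
            pooled.append(F.max_pool2d(feats, kernel_size=s, stride=1,
                                       padding=s // 2))
    return torch.cat(pooled, dim=1)

x = torch.randn(1, 96, 37, 37)             # e.g. 96-channel feature maps
print(per_pixel_pyramid_pooling(x).shape)  # torch.Size([1, 384, 37, 37])
```

Note that with four window sizes the channel count grows by a factor of four while the spatial resolution is preserved.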
B. Proposed model
To validate the effect of the proposed module, we trained
and tested CNNs with and without the 4P module. The
baseline architecture is selected as the ‘MC-CNN-acrt’ [13].
The 4P module in the proposed architecture is constructed by
using the size vector s = [27, 9, 3, 1]. The structures of two
CNNs are visualized in Figure 3.
IV. IMPLEMENTATION DETAILS
For a fair comparison, we followed the details in [13]
to train the proposed architecture with a few exceptions
mentioned below. First, the size of the training patch became
37×37. Furthermore, we only fine-tuned the parameters of the
last three 1 × 1 convolution layers of the proposed architecture
in Figure 3. The parameters of the earlier layers are borrowed
from the pre-trained ‘MC-CNN-acrt’ [13] network. In our
experiments, this resulted in a better performance than the
end-to-end training of the network with random initializations.
Moreover, training a few convolution layers with pre-trained
features is easier, making it converge faster. We have run a
Methods (WTA):                            avg. error
  MC-CNN-acrt [13]                        22.91
  proposed                                11.75
Methods (after post-processing):
  MC-CNN-acrt [13]                        10.26
  proposed (w/ parameters in [13])         9.72
  proposed (w/ parameter tuning)           8.45
total of four epochs of training, where the last two epochs
were run with a decreased learning rate from 0.003 to 0.0003.
We also used the same post-processing pipeline as in [13]
during the test phase. The post-processing pipeline includes the
use of the CBCA [23] and SGM [3], and the disparity maps are
refined to have continuous values and undergo median filtering
and bilateral filtering.
V. EXPERIMENTS
To verify the effect of the proposed 4P module, we have
compared the results of the baseline and proposed network.
The performance is measured using the ‘training dense’ set
of the Middlebury benchmark [1]. The quantitative results are
briefly summarized in Table I using the average errors. All
experiments are performed by using the Intel core i7 4790K
CPU and a single Nvidia Geforce GTX Titan X GPU.
The proposed method outperforms the baseline architecture regardless of the use of post-processing. The benefit of
using the 4P module is clear when the disparity maps are
obtained by using the pixel-wise winner-takes-all (WTA)
rule without any post-processing. Given that the images in the
dataset contain many weakly-textured areas, the small-sized
11×11 window cannot distinguish the true matches from false
ones without the aid of post-processing. On the other hand,
the proposed architecture effectively sees the larger window,
37 × 37, by inserting the 4P module before the final decision
layer.
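A minimal sketch of the WTA rule on a hypothetical cost volume (the array shapes and names are ours, not from the paper):

```python
# Pixel-wise winner-takes-all: pick, at every pixel, the disparity with the
# lowest matching cost.
import numpy as np

def wta_disparity(cost_volume):
    """cost_volume: (D, H, W) array of costs; returns an (H, W) disparity map."""
    return np.argmin(cost_volume, axis=0)

cost_volume = np.random.rand(64, 100, 150)   # hypothetical example
print(wta_disparity(cost_volume).shape)      # (100, 150)
```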
It is less straightforward to understand why the proposed architecture still outperforms the baseline even after post-processing. In that sense, it is worth mentioning that the best parameter setting for post-processing of the proposed method differs considerably from that of the baseline.1 The most notable change from the original parameter setting is that we use far fewer CBCA [23] iterations, which means that
multiple uses of CBCA [23] become redundant in the proposed
architecture. From this fact, we can interpret the role of the 4P module as adaptive local feature aggregation. Compared to hand-designed algorithms such as CBCA [23], the influence of neighboring pixels on a certain pixel is automatically learned
1 Following the conventions in [13], the best parameter setting is as follows: cbca_num_iterations_1 = 0, cbca_num_iterations_2
= 1, sgm_P1 = 1.3, sgm_P2 = 17.0, sgm_Q1 = 3.6, sgm_Q2 =
36.0, and sgm_V = 1.4.
[Figure 4 columns, left to right: true disparity and left image, proposed, MC-CNN-acrt.]
Fig. 4. The results for PlaytableP and Vintage are visualized. For each datum, the upper row shows the disparity map and the bottom row shows the
corresponding error maps. While the ‘MC-CNN-acrt’ [13] shows errors around the weakly-textured areas, such as the surfaces of the chair and the table in
PlaytableP or the white wall in Vintage, the proposed method shows more reliable results.
and it can be jointly trained with the cost function itself.
Furthermore, the information exchange among pixels is done
in feature space which contains richer contextual information
than the final cost volume space.
Note that the improvement over the baseline clearly results
neither from the use of extra layers nor from the use of more
parameters, as the authors of [13] already have shown that the
additional use of fully-connected (FC) layers is less significant.
Using two additional FC layers leads to an improvement of
approximately 1.90%, whereas using the 4P module results in
a 21.42% improvement in terms of average error.
The main contribution of the proposed method lies in learning a less ambiguous matching cost function by inspecting a
larger area. Figure 4 shows that the proposed network actually
works better around the weakly-textured area than the ‘MC-CNN-acrt’ [13]. The quantitative and qualitative results of
each dataset, including the ones in the ‘test dense’ set, are
available at the Middlebury benchmark [1] website.
VI. CONCLUSIONS
Viewing a large area to estimate the dense pixel correspondence is necessary to fully utilize the texture information
to achieve less ambiguous and more accurate matching. A
conventional matching cost function fails because neighboring
pixels on the same surface as the target pixel are unknown. In
this paper, a novel CNN module is proposed to make the CNN
structure handle a large image patch without losing the small
details, which enables it to learn an intelligent matching cost
function for large-sized windows. The learned cost function
can discriminate the false matches for weakly-textured areas or
repeating textures, and can also preserve the disparity discontinuities well. The learned cost function achieves competitive
performance on the Middlebury benchmark.
REFERENCES
[1] D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense
two-frame stereo correspondence algorithms,” IJCV, vol. 47, no. 1-3,
pp. 7–42, 2002.
[2] V. Kolmogorov and R. Zabih, “Computing visual correspondence with
occlusions using graph cuts,” in ICCV, vol. 2. IEEE, 2001, pp. 508–515.
[3] H. Hirschmüller, “Stereo processing by semiglobal matching and mutual
information,” PAMI, vol. 30, no. 2, pp. 328–341, 2008.
[4] O. Woodford, P. Torr, I. Reid, and A. Fitzgibbon, “Global stereo
reconstruction under second-order smoothness priors,” PAMI, vol. 31,
no. 12, pp. 2115–2128, 2009.
[5] C. Rhemann, A. Hosni, M. Bleyer, C. Rother, and M. Gelautz, “Fast
cost-volume filtering for visual correspondence and beyond,” in CVPR.
IEEE, 2011, pp. 3017–3024.
[6] Q. Yang, “A non-local cost aggregation method for stereo matching,” in
CVPR. IEEE, 2012, pp. 1402–1409.
[7] S. Birchfield and C. Tomasi, “Depth discontinuities by pixel-to-pixel
stereo,” International Journal of Computer Vision, vol. 35, no. 3, pp.
269–293, 1999.
[8] H. Hirschmuller and D. Scharstein, “Evaluation of stereo matching costs
on images with radiometric differences,” IEEE transactions on pattern
analysis and machine intelligence, vol. 31, no. 9, pp. 1582–1599, 2009.
[9] H. Hirschmüller and D. Scharstein, “Evaluation of cost functions for
stereo matching,” in CVPR. IEEE, 2007, pp. 1–8.
[10] K. Wang, “Adaptive stereo matching algorithm based on edge detection,”
in ICIP, vol. 2. IEEE, 2004, pp. 1345–1348.
[11] K.-J. Yoon and I. S. Kweon, “Adaptive support-weight approach for
correspondence search,” PAMI, vol. 28, no. 4, pp. 650–656, 2006.
[12] F. Tombari, S. Mattoccia, L. D. Stefano, and E. Addimanda, “Classification and evaluation of cost aggregation methods for stereo correspondence,” in CVPR. IEEE, 2008, pp. 1–8.
[13] J. Žbontar and Y. LeCun, “Stereo matching by training a convolutional
neural network to compare image patches,” The Journal of Machine
Learning Research, vol. 17, no. 1, pp. 2287–2318, 2016.
[14] S. Zagoruyko and N. Komodakis, “Learning to compare image patches
via convolutional neural networks,” in CVPR, June 2015, pp. 4353–4361.
[15] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification
with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
[16] K. Simonyan and A. Zisserman, “Very deep convolutional networks for
large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[17] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation
learning with deep convolutional generative adversarial networks,” arXiv
preprint arXiv:1511.06434, 2015.
[18] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Object
detectors emerge in deep scene cnns,” arXiv preprint arXiv:1412.6856,
2014.
[19] D. Scharstein, H. Hirschmüller, Y. Kitajima, G. Krathwohl, N. Nešić,
X. Wang, and P. Westling, “High-resolution stereo datasets with
subpixel-accurate ground truth,” in Pattern Recognition. Springer, 2014,
pp. 31–42.
[20] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous
driving? the kitti vision benchmark suite,” in CVPR, 2012.
[21] M. Menze and A. Geiger, “Object scene flow for autonomous vehicles,”
in CVPR, 2015.
[22] L. Ladickỳ, C. Häne, and M. Pollefeys, “Learning the matching function,” arXiv preprint arXiv:1502.00652, 2015.
[23] K. Zhang, J. Lu, and G. Lafruit, “Cross-based local stereo matching
using orthogonal integral images,” Circuits and Systems for Video
Technology, vol. 19, no. 7, pp. 1073–1079, 2009.
[24] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep
convolutional networks for visual recognition,” in ECCV. Springer,
2014, pp. 346–361.
[25] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks
for semantic segmentation,” in CVPR, 2015, pp. 3431–3440.
[26] B. Hariharan, P. Arbelaez, R. Girshick, and J. Malik, “Hypercolumns
for object segmentation and fine-grained localization,” in CVPR, June
2015, pp. 447–456.
[27] H. Noh, S. Hong, and B. Han, “Learning deconvolution network for
semantic segmentation,” arXiv preprint arXiv:1505.04366, 2015.
[28] P. Fischer, A. Dosovitskiy, E. Ilg, P. Häusser, C. Hazırbaş, V. Golkov,
P. van der Smagt, D. Cremers, and T. Brox, “Flownet: Learning optical
flow with convolutional networks,” arXiv preprint arXiv:1504.06852,
2015.
| 1 |
arXiv:1611.00936v4 [math.GR] 21 Dec 2017
Journal of Knot Theory and Its Ramifications
© World Scientific Publishing Company
SIMPLY CONNECTED LATIN QUANDLES
MARCO BONATTO
Department of Algebra, Faculty of Mathematics and Physics, Charles University
Sokolovská 83, 18675 Praha, Czech Republic
[email protected]
PETR VOJTĚCHOVSKÝ
Department of Mathematics, University of Denver
2280 S Vine St, Denver, Colorado 80112, U.S.A.
[email protected]
ABSTRACT
A (left) quandle is connected if its left translations generate a group that acts transitively on the underlying set. In 2014, Eisermann introduced the concept of quandle
coverings, corresponding to constant quandle cocycles of Andruskiewitsch and Graña.
A connected quandle is simply connected if it has no nontrivial coverings, or, equivalently, if all its second constant cohomology sets with coefficients in symmetric groups
are trivial.
In this paper we develop a combinatorial approach to constant cohomology. We
prove that connected quandles that are affine over cyclic groups are simply connected
(extending a result of Graña for quandles of prime size) and that finite doubly transitive
quandles of order different from 4 are simply connected. We also consider constant
cohomology with coefficients in arbitrary groups.
Keywords: Quandle, connected quandle, latin quandle, quandle cohomology, non-abelian
cohomology, constant cocycle, simply connected quandle, covering.
Mathematics Subject Classification 2000: Primary 05C38, 15A15; Secondary 05A15,
15A18.
1. Introduction
Quandles were introduced by Joyce [13] in 1982 as algebraic objects whose elements
can be used to color oriented knots. In more detail, let K be an oriented knot and
X = (X, ⊲, ⊲̄) a set with two binary operations. Given a diagram of K, an assignment
of elements of X to arcs of K is consistent if the relations
[Crossing relations: at each crossing with over-arc x and incoming under-arc y, the outgoing under-arc is labeled x ⊲ y (respectively x ⊲̄ y, depending on the orientation of the crossing).]
are satisfied at every crossing of the diagram. If, in addition, the assignment remains
consistent when Reidemeister moves are applied to the diagram of K in all possible
ways, we say that the assignment is a coloring of K. We denote by colX (K) the
number of colorings of K by elements of X in which more than one element is used.
Whether colX(K) ≠ 0 is a delicate question that depends on both K and
X = (X, ⊲, ⊲̄). However, when the consistency conditions imposed by Reidemeister
moves are universally quantified, they force ⊲̄ to be the inverse operation to ⊲ (in
the sense that x ⊲̄ (x ⊲ y) = y and x ⊲ (x ⊲̄ y) = y) and they force precisely the
quandle axioms onto ⊲. In particular, if (X, ⊲) is a quandle and there is a consistent
assignment of elements of X to arcs of a diagram of K, it is also a coloring of K.
It is therefore customary to color arcs of oriented knots by elements of quandles.
For an oriented knot K, the knot quandle of K is the quandle freely generated
by the oriented arcs of K subject only to the above crossing relations. It was shown
early on by Joyce [13] and Matveev [17] that knot quandles are complete invariants
of oriented knots up to mirror image and reversal of orientation.
It has been conjectured that for any finite collection K of oriented knots there
exists a finite collection of finite quandles X1 , . . . , Xn such that the n-tuples
(colX1 (K), . . . , colXn (K)) distinguish the knots of K up to mirror image and reversal of orientation. (See [4] for a more precise formulation.) This conjecture has
been verified in [4] for all prime knots with at most 12 crossings, using a certain
collection of 26 quandles. Many oriented knots can be distinguished by a coarser
invariant, namely by merely checking whether colX(K) ≠ 0 [8,14].
Whenever an oriented knot K is colored by a quandle X, the elements of X
actually used in the coloring form a connected subquandle of X. Consequently, for
the purposes of quandle colorings, it is sufficient to consider connected quandles.
Although far from settled, the theory of connected quandles is better understood
than the general case [11], and connected quandles have been classified up to size
47 [11,19].
Our work is primarily concerned with simply connected quandles which can be
defined in several equivalent ways.
In a long paper [6], Eisermann developed the theory of quandle coverings and
offered a complete categorical characterization of coverings for a given quandle.
In [1], Andruskiewitsch and Graña introduced an extension theory for quandles.
Their (dynamical) quandle cocycles can be used to color arcs of oriented knots,
giving rise to knot invariants [2]. (These were first defined in [3] as an analog of
the Dijkgraaf-Witten invariants for 3-manifolds [10].) The quandle cocycle condition (see Definition 2.6) is rather difficult to work with, involving a 3-parameter
mapping. Constant quandle cocycles are precisely those quandle cocycles in which
one parameter is superfluous (see Definition 2.14). Even constant quandle cocycles
yield powerful knot invariants [2, Section 5].
The following conditions are equivalent for a connected quandle X (see Proposition 2.16):
• the fundamental group of X is trivial,
• every covering of X is equivalent to the trivial covering of X over some set
S,
• for every set S, the second constant cohomology set of X with coefficients
in the symmetric group SS is trivial.
If a connected quandle X satisfies any of these equivalent conditions, we say that
X is simply connected.
In this paper we develop a combinatorial approach to constant cohomology with
emphasis on simply connected quandles. From an algebraic point of view, our main
result is as follows:
Theorem 1.1. Let X be a finite connected quandle that is affine over a cyclic
group, or a finite doubly transitive quandle of size different from 4. Then X is
simply connected.
We offer two proofs of Theorem 1.1. The first proof is combinatorial in nature
and mostly self-contained. The second proof (whose main idea was suggested to us
by the referee of an earlier version of this paper) is much shorter and relies on an
explicit description of the fundamental group of affine quandles from an unpublished
note of Clauwens [5].
We also investigate constant cohomology with coefficients in arbitrary groups
and we prove:
Theorem 1.2. Let X be a latin quandle. Then the following conditions are equivalent:
(i) X is simply connected,
(ii) Hc2 (X, G) = 1 for every group G.
We can then easily obtain the following knot-theoretical corollary:
Corollary 1.3. Let X be a simply connected latin quandle. Then every conjugacy
quandle cocycle invariant based on X is trivial.
The paper is organized as follows. Basic results about quandles and their extensions, coverings, cohomology and constant cohomology are recalled in Section
2. Let X be a latin quandle, u ∈ X, and let S be a nonempty set. Every constant
quandle cocycle β : X × X → SS with coefficients in the symmetric group SS is cohomologous to a normalized (constant) quandle cocycle βu satisfying βu (x, u) = 1
for every x ∈ X, as is recalled in Section 3. In Section 4 we introduce three bijections f , g, h of X × X under which every u-normalized cocycle βu is invariant,
that is, βu (k(x, y)) = βu (x, y) for k ∈ {f, g, h}. To prove that a given connected
quandle is simply connected, it then suffices to show that for every (x, y) ∈ X × X
there exists some (x0 , y0 ) ∈ X × X and k ∈ hf, g, hi such that βu (x0 , y0 ) = 1
and k(x0 , y0 ) = (x, y). We therefore study the orbits of f , g, h in Section 5, and
again in Section 6 in the restricted case of connected affine quandles. Theorem 1.1
is proved in Section 7. Clauwens’ explicit description of the fundamental group of
affine quandles is recalled in Section 8 and then Theorem 1.1 is proved once more.
Finally, constant cohomology with coefficients in arbitrary groups is introduced in
Section 9, where we also prove Theorem 1.2 and Corollary 1.3.
2. Basic Results and Quandle Extensions
2.1. Quandles
For a groupoid (X, ·) and x ∈ X, let
L_x : X → X,  y ↦ x · y,    R_x : X → X,  y ↦ y · x
be the left translation by x and the right translation by x, respectively.
We will often suppress the binary operation while talking about groupoids and
denote them by X rather than by (X, ·). We denote by Aut(X) the automorphism
group of X. When X is merely a set, then Aut(X) = SX , the symmetric group on
X.
Definition 2.1. A groupoid (X, ⊲) is a quandle if it is a left quasigroup that is left
distributive and idempotent. That is, (X, ⊲) is a quandle if it satisfies the following
axioms:
Lx ∈ SX ,
(2.1)
x ⊲ (y ⊲ z) = (x ⊲ y) ⊲ (x ⊲ z),
(2.2)
x⊲x=x
(2.3)
for every x, y, z ∈ X.
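A small computational sketch of Definition 2.1 for a finite groupoid given by a Cayley table follows; the representation and names are ours.

```python
# Check the quandle axioms (2.1)-(2.3) on a Cayley table T with
# T[x][y] = x * y over the underlying set {0, ..., n-1}.
def is_quandle(T):
    n = len(T)
    X = range(n)
    # (2.1): every left translation L_x is a bijection of X.
    if any(sorted(T[x]) != list(X) for x in X):
        return False
    # (2.2): left distributivity x*(y*z) = (x*y)*(x*z).
    if any(T[x][T[y][z]] != T[T[x][y]][T[x][z]]
           for x in X for y in X for z in X):
        return False
    # (2.3): idempotence x*x = x.
    return all(T[x][x] == x for x in X)

# The projection quandle on three elements: x * y = y for all x, y.
print(is_quandle([[0, 1, 2]] * 3))  # True
```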
Note that the identity (2.1) is equivalent to X having a left division operation
defined by
x\y = L_x^{-1}(y),
and that the two identities (2.1) and (2.2) jointly state that Lx ∈ Aut(X) for every
x ∈ X.
Every quandle is flexible, indeed, x ⊲ (y ⊲ x) = (x ⊲ y) ⊲ (x ⊲ x) = (x ⊲ y) ⊲ x, and
it is therefore safe to write x ⊲ y ⊲ x.
For any groupoid X and ϕ ∈ Aut(X) we have ϕLx ϕ−1 = Lϕ(x) for every x ∈ X.
In particular, if X is a quandle then
L_y L_x L_y^{-1} = L_{y⊲x} for every x, y ∈ X.    (2.4)
Example 2.2. Quandles appear naturally as the following examples illustrate.
(i) The one element groupoid is called the trivial quandle. More generally, any
projection groupoid on a set X (that is, a groupoid satisfying x ⊲ y = y for
every x, y ∈ X) is a quandle, the projection quandle PX over X.
(ii) Let G be a group and H a union of (some) conjugacy classes of G. For x,
y ∈ H, let x ⊲ y = xyx−1 . Then (H, ⊲) is a quandle, the conjugation quandle
on H.
(iii) Let G be a group, α ∈ Aut(G) and H ≤ Fix(α) = {x ∈ G | α(x) = x}. Let
G/H be the set of left cosets {xH | x ∈ G}. Then G/H with multiplication
xH ⊲ yH = xα(x−1 y)H
is a quandle Q(G, H, α) = (G/H, ⊲), called the coset quandle (also known as
homogeneous quandle or Galkin quandle).
(iv) A coset quandle Q(G, H, α) with H = 1 is called principal and will be denoted
by Q(G, α). If, in addition, G is an abelian group, then Q(G, α) is an affine
quandle.
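As a concrete illustration of Example 2.2(iv), the following sketch builds the Cayley table of an affine quandle over a cyclic group; the particular choice n = 5, a = 2 is ours, and the table can also be fed to the axiom checker sketched above.

```python
# The affine quandle Q(Z_n, alpha) with alpha(y) = a*y mod n for a unit a:
# x * y = x + alpha(y - x) = (1 - a)*x + a*y (mod n).
from math import gcd

def affine_table(n, a):
    assert gcd(a, n) == 1, "alpha must be an automorphism of Z_n"
    return [[((1 - a) * x + a * y) % n for y in range(n)] for x in range(n)]

T = affine_table(5, 2)
# Idempotence and the latin property (every right translation is a bijection):
print(all(T[x][x] == x for x in range(5)))
print(all(sorted(T[x][y] for x in range(5)) == list(range(5))
          for y in range(5)))
```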
Definition 2.3. For a quandle X, we call the set
LX = {Lx | x ∈ X}
the left section of X, and the group
LMlt(X) = hLX i ≤ Aut(X)
the left multiplication group of X. The group LMlt(X) is often denoted by Inn(X)
and called the inner automorphism group of X.
The left section LX is closed under conjugation by (2.4), and the corresponding
conjugation quandle on LX will be denoted by L(X). Note that the mapping X →
L(X), x 7→ Lx is a homomorphism of quandles.
Definition 2.4. A quandle X is latin if Rx ∈ SX for every x ∈ X.
In a latin quandle we can define right division by x/y = R_y^{-1}(x). A latin
quandle X is therefore a quasigroup (X, ⊲, \, /) in which the multiplication ⊲ is
left distributive and idempotent. As in any quasigroup, a homomorphism of a
latin quandle (X, ⊲) is automatically a homomorphism of (X, ⊲, \, /). For instance,
x ⊲ (y/z) = Lx(y/z) = Lx(y)/Lx(z) = (x ⊲ y)/(x ⊲ z) holds in any latin quandle.
Definition 2.5. A quandle X is connected (or, sometimes, transitive) if LMlt(X)
acts transitively on X, and doubly transitive if LMlt(X) acts doubly transitively on
X.
All latin quandles are connected. Indeed, if X is a latin quandle and x, y ∈ X,
then Lx/y (y) = x. All finite quandles can be built from connected quandles but the
extension theory is not well understood, cf. [1, Proposition 1.17] or [12].
In order to simplify notation, we will from now on denote the quandle multiplication by · or by juxtaposition rather than by ⊲.
2.2. Quandle extensions
The notion of quandle extensions was introduced in [1]. Let X be a groupoid, S
a nonempty set, and suppose that (X × S, ·) is a groupoid. Then the canonical
projection π : X × S → X, (x, s) 7→ x is a homomorphism of groupoids if and only
if there is a mapping β : X × X × S × S → S such that
(x, s) · (y, t) = (xy, β(x, y, s, t)).
We will then denote (X × S, ·) by X ×β S.
Suppose now that X is a quandle. It is then easy to see that X ×β S is also a
quandle if and only if β(x, y, s) : S → S, t 7→ β(x, y, s, t) is a bijection for every x,
y ∈ X, s ∈ S, and the quandle cocycle conditions
β(xy, xz, β(x, y, s)(t))β(x, z, s) = β(x, yz, s)β(y, z, t),
(2.5)
β(x, x, s)(s) = s
(2.6)
hold for every x, y, z ∈ X and every s, t ∈ S. In the context of quandles, we will
therefore consider β as a mapping X × X × S → SS .
Definition 2.6 ([1, Definition 2.2]). Let X be a quandle and S a nonempty set.
A mapping β : X × X × S → SS is a quandle cocycle if (2.5) and (2.6) hold. The
set of all quandle cocycles X × X × S → SS will be denoted by Z 2 (X, SS ).
Proposition 2.7 ([1]). The following conditions are equivalent for quandles X,
Y:
(i) Y is a quandle defined on X × S for some set S and the canonical projection
Y → X is a quandle homomorphism.
(ii) Y ≅ X ×β S for some set S and some quandle cocycle β ∈ Z²(X, SS).
(iii) X ≅ Y/α for a uniform congruence α of Y, that is, a congruence α of Y such
that all blocks of α have the same cardinality.
Proof. We have already shown above that (i) and (ii) are equivalent. Suppose that
(ii) holds and Y = X ×β S for some β ∈ Z²(X, SS). Then X ≅ Y/ker(π), where
π : X ×β S → X is the canonical projection. Clearly, ker(π) is a uniform congruence,
each block having cardinality |S|. Conversely, let α be a uniform congruence on Y ,
X = Y /α, and let S be a set of the same cardinality as any of the blocks of α. Let
{h[x] : [x] → S | [x] ∈ X} be a family of bijections indexed by the blocks of α.
Then the mapping β : X × X × S → SS defined by
β([x], [y], s) = h_{[xy]} L_{h_{[x]}^{-1}(s)} h_{[y]}^{-1}
is a quandle cocycle and the mapping
Y → X ×β S,  x ↦ ([x], h_{[x]}(x))
is an isomorphism of quandles.
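The construction in the proof can be sketched computationally as follows; the cocycle and the underlying quandle chosen here are illustrative only.

```python
# Build the extension X x_beta S: on X x S define
# (x, s) * (y, t) = (x*y, beta(x, y, s)(t)), with beta(x, y, s) a permutation
# of S stored as a tuple.  Here X is the projection quandle on 3 elements,
# |S| = 2, and beta is identically the identity permutation.
from itertools import product

def extension_table(T, beta, S_size):
    elems = list(product(range(len(T)), range(S_size)))
    index = {e: i for i, e in enumerate(elems)}
    table = [[index[(T[x][y], beta(x, y, s)[t])] for (y, t) in elems]
             for (x, s) in elems]
    return elems, table

T = [[0, 1, 2]] * 3
elems, table = extension_table(T, lambda x, y, s: (0, 1), 2)
# The canonical projection (x, s) -> x is a quandle homomorphism:
print(all(elems[table[i][j]][0] == T[elems[i][0]][elems[j][0]]
          for i in range(len(elems)) for j in range(len(elems))))  # True
```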
We therefore define:
Definition 2.8. Let X, Y be quandles. Then Y is an extension of X if X ≅ Y/α
for some uniform congruence α of Y.
For a quandle X, let H(X) denote the class of all homomorphic images of X.
We have:
Proposition 2.9. Let Y be a connected quandle. Then the following conditions are
equivalent for a quandle X:
(i) X ∈ H(Y ),
(ii) Y is an extension of X.
Proof. By the Fundamental Homomorphism Theorem, X ∈ H(Y ) if and only if
X ≅ Y/α for some congruence α of Y. Since Y is connected, it is easy to show that
every congruence of Y is uniform.
The following equivalence relation partitions Z 2 (X, SS ) so that any two cocycles
in the same block give rise to isomorphic quandles. In a suitably defined category
of quandle extensions (see [1] or Proposition 2.11), two cocycles belong to the same
block of this partition if and only if the two quandle extensions are isomorphic.
Definition 2.10. Let X be a quandle and S a nonempty set. We say that β,
β ′ ∈ Z 2 (X, SS ) are cohomologous, and we write β ∼ β ′ , if there exists a mapping
γ : X → SS
such that
β ′ (x, y, s) = γ(xy)β(x, y, γ(x)−1 (s))γ(y)−1
holds for every x, y ∈ X and s ∈ S. The factor set
H 2 (X, SS ) = Z 2 (X, SS )/ ∼
is the second (non-abelian) cohomology set of X with coefficients in SS .
The following result makes clear the relationship between cohomologous cocycles
and isomorphisms of quandle extensions.
Proposition 2.11 ([1, pp. 194–195]). The following conditions are equivalent
for a quandle X, a set S and cocycles β, β ′ ∈ Z 2 (X, SS ):
(i) β ∼ β ′ ,
(ii) there exists an isomorphism φ : X ×β S −→ X ×β ′ S such that the following
diagram is commutative
[Commutative diagram: φ : X ×β S → X ×β′ S, and the canonical projections π from X ×β S and X ×β′ S onto X satisfy π ∘ φ = π.]
2.3. Quandle coverings and constant cocycles
We are interested in a special class of quandle extensions, so-called quandle coverings.
Definition 2.12 ([6, Definition 1.4]). A connected quandle Y is a covering of
a quandle X if there is a surjective quandle homomorphism f : Y → X such that
the left translations Lx , Ly of Y coincide whenever f (x) = f (y).
For a quandle Y , let ker(LY ) denote the equivalence relation on Y induced by
equality in the left section LY , that is, (x, y) ∈ ker(LY ) if and only if Lx = Ly . Then
ker(LY ) is in fact a congruence on Y , thanks to (2.4). Moreover, if Y is a connected
quandle then ker(LY ) is a uniform congruence, and hence Y is an extension of
Y /ker(LY ). Therefore, a connected quandle Y is a covering of a quandle X if and
only if X ≅ Y/α, where α is some uniform congruence of Y that refines ker(LY).
Here, we say that a congruence α refines a congruence β if (x, y) ∈ β whenever
(x, y) ∈ α.
Here are some nontrivial examples of quandle coverings:
Proposition 2.13. Let X1 = Q(G, H1 , α) and X2 = Q(G, H2 , α) be two coset
quandles such that H1 ≤ H2 . Then X1 is a covering of X2 .
Proof. Define ψ : X1 → X2 by ψ(xH1 ) = xH2 . The mapping ψ is surjective and
every block of ker(ψ) has the same cardinality as H2 /H1 . For x, y ∈ G we have
ψ(xH1 · yH1 ) = ψ(xα(x−1 y)H1 ) = xα(x−1 y)H2 = xH2 · yH2 = ψ(xH1 ) · ψ(yH1 ),
so ψ is a homomorphism.
Suppose that ψ(xH1 ) = ψ(yH1 ), i.e., x = yh for some h ∈ H2 ≤ Fix(α). Then
xα(x−1 ) = yhα(h−1 y −1 ) = yα(y −1 ) and thus
xH1 · zH1 = xα(x−1 z)H1 = yα(y −1 z)H1 = yH1 · zH1
for every z ∈ G. This shows that LxH1 = LyH1 in X1 , so X1 is a covering of X2 .
We proceed to identify those quandle cocycles that correspond to quandle coverings.
Definition 2.14 ([1, Definition 2.2]). Let X be a quandle and S a nonempty
set. A quandle cocycle β ∈ Z 2 (X, SS ) is a constant quandle cocycle if
β(x, y, r) = β(x, y, s)
for every x, y ∈ X and r, s ∈ S. Since the value of β(x, y, s) is then independent of
s ∈ S, we will think of constant cocycles as mappings β : X × X → SS .
The set of all constant quandle cocycles X × X → SS will be denoted by
Zc²(X, SS). The equivalence relation ∼ on Z²(X, SS) induces an equivalence relation
on Zc2 (X, SS ), and we define
Hc2 (X, SS ) = Zc2 (X, SS )/ ∼,
the second constant cohomology set of X with coefficients in SS .
We see immediately from (2.5) and (2.6) that a mapping β : X × X → SS is a
constant quandle cocycle if and only if it satisfies
β(xy, xz)β(x, z) = β(x, yz)β(y, z),    (CC)
β(x, x) = 1    (CQ)
for every x, y, z ∈ X.
Note that (CC) implies
β(xy, xz) = β(x, yz) ⇔ β(x, z) = β(y, z)
(WCC)
for every x, y, z ∈ X. We will call (WCC) the weaker cocycle condition.
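A small sketch checking (CC) and (CQ) for a given map β : X × X → SS follows; permutations are stored as tuples and products are read as function composition, (pq)(s) = p(q(s)), which is our convention.

```python
# Check the constant-cocycle conditions (CC) and (CQ) on a quandle with
# Cayley table T, for beta(x, y) a permutation of {0, ..., S_size-1}.
def compose(p, q):
    return tuple(p[q[s]] for s in range(len(p)))

def is_constant_cocycle(T, beta, S_size):
    X = range(len(T))
    identity = tuple(range(S_size))
    if any(beta(x, x) != identity for x in X):            # (CQ)
        return False
    for x in X:
        for y in X:
            for z in X:
                lhs = compose(beta(T[x][y], T[x][z]), beta(x, z))
                rhs = compose(beta(x, T[y][z]), beta(y, z))
                if lhs != rhs:                            # (CC)
                    return False
    return True

# The trivial cocycle on the projection quandle, with |S| = 2.
T = [[0, 1, 2]] * 3
print(is_constant_cocycle(T, lambda x, y: (0, 1), 2))     # True
```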
Just as quandle cocycles parametrize quandle extensions, the constant cocycles
parametrize quandle coverings.
Proposition 2.15 ([2, Lemma 5.1]). Let X, Y be connected quandles. Then the
following conditions are equivalent:
(i) Y is a covering of X,
(ii) Y ≅ X ×β S for some set S and β ∈ Zc²(X, SS).
Proof. Let Y be an extension of X, say Y = X ×β S for β ∈ Z 2 (X, SS ). Then
(x, r) · (y, t) = (x, s) · (y, t) for every x, y ∈ X, r, s, t ∈ S if and only if β(x, y, r) =
β(x, y, s) for every x, y ∈ X, r, s ∈ S, which is equivalent to β ∈ Zc2 (X, SS ).
Let X be a quandle and S a nonempty set. The mapping defined by
X × X −→ SS ,
(x, y) 7→ 1
is a constant cocycle, called the trivial cocycle and denoted by 1. It is easy to see
that X ×1 S is the direct product of X and the projection quandle over S. The
covering Y = X ×1 S is called a trivial covering of X.
Two coverings f : Y → X, f ′ : Y ′ → X of X are said to be equivalent if there
is a quandle isomorphism φ : Y → Y ′ such that f ′ ◦ φ = f .
Let X = (X, ·) be a quandle. The adjoint group Adj(X) of X is the group with
generators {e_x | x ∈ X} and presenting relations {e_{x·y} = e_x^{-1} e_y e_x | x, y ∈ X}. Following [6, Definitions 1.7, 1.10], let ε : Adj(X) → Z be the unique homomorphism
such that ε(e_x) = 1 for every x ∈ X. Let Adj(X)° be the kernel of ε. The fundamental group of X based at x ∈ X is defined as π1(X, x) = {g ∈ Adj(X)° | xg = x}. By
[6, Proposition 5.8], π1 (X, x) is conjugate to π1 (X, y) whenever x, y are in the same
orbit of LMlt(X). In particular, if X is a connected quandle then the isomorphism
type of π1 (X, x) is independent of the base point x, and it is safe to write π1 (X)
instead of π1 (X, x).
Proposition 2.16. The following conditions are equivalent for a connected quandle
X:
(i) π1 (X) is trivial,
(ii) every covering Y → X is equivalent to the trivial covering of X over some set
S,
(iii) Hc2 (X, SS ) is trivial for every set S.
Proof. The equivalence of (i) and (ii) is established in [6, Proposition 5.15]. Let
us prove the equivalence of (ii) and (iii).
By Proposition 2.15, any covering of X is of the form π : X ×β S → X for some
nonempty set S. If X ×β S → X and X ×β′ S′ → X are two equivalent coverings of
X, then S and S′ have the same cardinality. It therefore suffices to investigate two
coverings X ×β S → X and X ×β′ S → X with β, β′ ∈ Zc²(X, SS). By Proposition
2.11, these two coverings are equivalent if and only if β ∼ β ′ .
3. Normalized Constant Cocycles
In this section we start computing the constant cohomology set Hc2 (X, SS ) of a
latin quandle X. The situation is greatly simplified in the latin case because every
cocycle of Hc2 (X, SS ) can be assumed to be normalized:
Definition 3.1 (compare [9, Lemma 5.1]). Let X be a latin quandle, S a
nonempty set and u ∈ X. Then β ∈ Zc2 (X, SS ) is said to be u-normalized if
β(x, u) = 1
for every x ∈ X.
For β ∈ Zc2 (X, SS ) and σ ∈ SS define β σ ∈ Zc2 (X, SS ) by β σ (x, y) =
σβ(x, y)σ −1 .
Proposition 3.2. Let X be a latin quandle, S a nonempty set and u ∈ X. For
β ∈ Zc2 (X, SS ) define βu ∈ Zc2 (X, SS ) by
βu (x, y) = β((xy)/u, u)−1 β(x, y)β(y/u, u).
Then {βuσ | σ ∈ SS } is the set of all u-normalized cocycles cohomologous to β.
Proof. We have δ ∼ β if and only if δ(x, y) = γ(xy)β(x, y)γ(y)−1 for some mapping
γ : X → SS . The following conditions are then equivalent for δ: δ is u-normalized,
1 = δ(x, u) = γ(xu)β(x, u)γ(u)−1 for every x ∈ X, γ(xu) = γ(u)β(x, u)−1 for every
x ∈ X,
γ(x) = γ(u)β(x/u, u)−1
(3.1)
for every x ∈ X, where we have used the latin property in the last step. Conversely,
given σ = γ(u) ∈ SS , the formula (3.1) defines a map γ : X → SS (it is well defined
since γ(u)β(u/u, u)−1 = γ(u)β(u, u)−1 = γ(u) by (CQ)). Then
δ(x, y)=γ(xy)β(x, y)γ(y)−1 =σβ((xy)/u, u)−1 β(x, y)β(y/u, u)σ −1 =σβu (x, y)σ −1 ,
so δ = βuσ .
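The normalization of Proposition 3.2 can be sketched as follows; the permutation conventions and the demo quandle are illustrative only.

```python
# Given a constant cocycle beta on a latin quandle with Cayley table T,
# build beta_u(x, y) = beta((x*y)/u, u)^{-1} beta(x, y) beta(y/u, u),
# where / is right division.  Products are read as function composition.
def compose(p, q):
    return tuple(p[q[s]] for s in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for s, t in enumerate(p):
        inv[t] = s
    return tuple(inv)

def right_div(T, a, b):
    """The unique x with x * b = a (exists since the quandle is latin)."""
    return next(x for x in range(len(T)) if T[x][b] == a)

def normalize(T, beta, u):
    def beta_u(x, y):
        left = inverse(beta(right_div(T, T[x][y], u), u))
        return compose(compose(left, beta(x, y)), beta(right_div(T, y, u), u))
    return beta_u

# Demo on the affine quandle x*y = (1-a)x + a*y mod n (n = 5, a = 2) with
# the trivial cocycle; beta_u(x, u) is the identity for every x, as expected.
n, a, u = 5, 2, 0
T = [[((1 - a) * x + a * y) % n for y in range(n)] for x in range(n)]
beta_u = normalize(T, lambda x, y: (0, 1), u)
print(all(beta_u(x, u) == (0, 1) for x in range(n)))  # True
```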
Corollary 3.3. Let X be a latin quandle, S a nonempty set, and u ∈ X. Let β,
β ′ ∈ Zc2 (X, SS ), and let δ, δ ′ be u-normalized cocycles such that δ ∼ β and δ ′ ∼ β ′ .
Then β ∼ β ′ if and only if δ ′ = δ σ for some σ ∈ SS . Moreover, the following
conditions are equivalent:
(i) Hc2 (X, SS ) is trivial,
(ii) if β ∈ Zc2 (X, SS ), δ ∼ β and δ is u-normalized then δ = 1,
(iii) βu = 1 for every β ∈ Zc2 (X, SS ).
Proof. The first statement follows immediately from Proposition 3.2. Suppose that
(i) holds, let β ∈ Zc²(X, SS), and let δ ∼ β be u-normalized. Since 1 is also u-normalized and 1 ∼ β by triviality of Hc²(X, SS), we have δ = 1 by the first
statement, establishing (ii). Clearly, (ii) implies (iii). Finally, if (iii) holds then
β ∼ βu = 1 for every β ∈ Zc2 (X, SS ), so Hc2 (X, SS ) is trivial.
Many identities for normalized cocycles can be derived from (CC) and (CQ).
We will later need:
Lemma 3.4. Let X be a latin quandle and let β be a u-normalized cocycle. Then
β (u/ (u/x) , x) = β (u/x, x)
for every x ∈ X. Moreover u/(u/x) · x = u if and only if x = u.
Proof. Setting x = u/y and y = u/z in (CC), we get β(u/(u/z), z) = β(u/z, z)
for every z ∈ X. Moreover,
u/(u/x) · x = u ⇔ u/(u/x) = u/x ⇔ u/x = u ⇔ x = u.
4. Three Bijections on X × X that Preserve Normalized Cocycles
Given a mapping α : A → B and a bijection ℓ : A → A, we say that α is ℓ-invariant
if α(x) = α(ℓ(x)) for every x ∈ A.
In this section we introduce three bijections of X × X (where X is a latin quandle) under which all normalized cocycles are invariant. We will use these bijections
throughout the rest of the paper.
Let X be a latin quandle and u ∈ X. Define
f : X × X → X × X,
(x, y) 7→ (x · y/u, xu),
g : X × X → X × X,
(x, y) 7→ (ux, uy),
h : X × X → X × X,
(x, y) 7→ (y/(x\u) · x, y).
(4.1)
The element u on which f , g, h depend will always be understood from the context.
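For experimentation, the three bijections can be implemented directly from a Cayley table; the helper names below are ours.

```python
# The bijections f, g, h of (4.1) on X x X for a latin quandle with table T.
def ldiv(T, a, b):
    """a \\ b: the unique z with a * z = b."""
    return T[a].index(b)

def rdiv(T, a, b):
    """a / b: the unique x with x * b = a."""
    return next(x for x in range(len(T)) if T[x][b] == a)

def f(T, u, x, y):
    return T[x][rdiv(T, y, u)], T[x][u]

def g(T, u, x, y):
    return T[u][x], T[u][y]

def h(T, u, x, y):
    return T[rdiv(T, y, ldiv(T, x, u))][x], y

# On the affine quandle x*y = (1-a)x + a*y mod n (n = 5, a = 2), each of
# f, g, h should indeed permute X x X.
n, a, u = 5, 2, 0
T = [[((1 - a) * x + a * y) % n for y in range(n)] for x in range(n)]
for k in (f, g, h):
    images = {k(T, u, x, y) for x in range(n) for y in range(n)}
    print(len(images) == n * n)  # True
```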
Proposition 4.1. Let X be a latin quandle and u ∈ X. Then f ∈ SX×X and every
u-normalized cocycle is f -invariant.
Proof. It is easy to see that f has an inverse, namely the mapping (x, y) 7→
(y/u, (y/u)\x·u). Let β be a u-normalized cocycle. Then (WCC) implies β(xy, xu) =
β(x, yu) for every x, y ∈ X. With z = yu, we obtain β(x, z) = β(x, yu) =
β(xy, xu) = β(x(z/u), xu) = β(f (x, z)).
Lemma 4.2. Let X be a latin quandle, u ∈ X and β a constant cocycle. Then the
following conditions are equivalent:
(i) β(u, x) = 1 for every x ∈ X,
(ii) β is g-invariant.
Proof. If (i) holds, then
β(ux, uy) = β(ux, uy)β(u, y) = β(u, xy)β(x, y) = β(x, y) (the first and third equalities use (i), the second uses (CC))
for every x, y ∈ X. Conversely, if (ii) holds, let x = u/y and verify
β(u, y) = β(u · u/y, uy)^{-1} β(u, u/y · y) β(u/y, y)   [by (CC)]
= β(u · u/y, uy)^{-1} β(u, u) β(u/y, y)
= β(u · u/y, uy)^{-1} β(u/y, y)   [by (CQ)]
= 1   [by (ii)]
for every y ∈ X.
We remark that (i) implies (ii) in Lemma 4.2 even if X is an arbitrary quandle,
not necessarily latin.
Proposition 4.3. Let X be a latin quandle and u ∈ X. Then g ∈ SX×X and every
u-normalized cocycle is g-invariant.
Proof. Since g = Lu × Lu , we obviously have g ∈ SX×X for any quandle X.
Suppose now that β is a u-normalized cocycle. In view of Lemma 4.2, it suffices to
prove that β(u, x) = 1 for every x ∈ X. Now,
β(u, xu) = β(u, xu)β(x, u) = β(ux, u)β(u, u) = 1 (the second equality uses (CC)),
and we are done because Ru is a bijection.
Lemma 4.4. The following identities hold in a latin quandle:
(xy)/z = x(y/(x\z)),
(4.2)
(x/y)(zy) = ((x/y)z)x.
(4.3)
Proof. For (4.2), substitute x\z for z in x(y/z) = (xy)/(xz). The identity (4.3)
follows immediately from left distributivity and (x/y)y = x.
Proposition 4.5. Let X be a latin quandle and u ∈ X. Then h ∈ SX×X and every
u-normalized cocycle is h-invariant.
Proof. We will show that
k(x, y) = (u/((xy/u)\y), y)
is the inverse of h. (The mapping k was found by automated deduction [18].) It
suffices to show that h, k are inverse to each other in the first coordinate. We will
freely use the quasigroup identities x/(y\x) = y and (x/y)\x = y.
The first coordinate of h(k(x, y)) is equal to
y/[(u/((xy/u)\y))\u] · u/((xy/u)\y)
= y/[(xy/u)\y] · u/((xy/u)\y)
= (xy/u) · u/((xy/u)\y),
which is an expression of the form a · b/(a\c) and therefore, by (4.2), equal to
((xy/u)u)/y = xy/y = x.
The first coordinate of k(h(x, y)) is equal to
u/[(((y/(x\u) · x)y)/u)\y].
(4.4)
Since (y/(x\u)·x)y is of the form (a/b)c·a, it is by (4.3) equal to (y/(x\u))(x·x\u) =
y/(x\u) · u, and substituting this back into (4.4) yields
u/[((y/(x\u) · u)/u)\y] = u/[(y/(x\u))\y] = u/(x\u) = x.
We have proved that k is the inverse of h.
Let β be a u-normalized cocycle. Then we have
β(u/y, y) = β(u/y · x, u)β(u/y, y) = β(u/y · x, u/y · y)β(u/y, y) = β(u/y, xy)β(x, y), where the last equality uses (CC),
and also
β(u/y, y) = β(x, u)β(u/y, y) = β(x, u/y · y)β(u/y, y) = β(x · u/y, xy)β(x, y), where the last equality uses (CC).
Therefore β(u/y, xy) = β(x · u/y, xy), and with u/y = z, xy = v we have x = v/y =
v/(z\u) and β(z, v) = β(v/(z\u) · z, v) = β(h(z, v)).
5. Orbits of the Three Bijections on X × X
For ℓ ∈ SA and a ∈ A let Oℓ (a) be the orbit of a under the natural action of hℓi on
A, and let Oℓ = {Oℓ (a) | a ∈ A} be the collection of all ℓ-orbits.
In this section we study the orbits of the bijections f , g, h on X × X defined in
(4.1), and also the orbits of the induced action of hf, hi on Og .
Let X be a quandle. Denote by p the product mapping
p : X × X → X,
(x, y) 7→ xy.
For z ∈ X, the fiber p−1 (z) is equal to
p−1 (z) = {(x, y) ∈ X × X | xy = z} = {(x, x\z) | x ∈ X}
and hence has cardinality |X|. Moreover, since
p(f (x, y)) = p(x · y/u, xu) = (x · y/u)(xu) = x(y/u · u) = xy = p(x, y),
every fiber is a union of f -orbits.
We have O_g(u, u) = {(u, u)}, and we collect some additional orbits of g by
setting
O_g^f = {O_g(x, xu) | u ≠ x ∈ X},
O_g^u = {O_g(x, x\u) | u ≠ x ∈ X}.
The notation is explained as follows. By Lemma 5.7 below, ⋃ O_g^f = {(x, xu) |
u ≠ x ∈ X} and ⋃ O_g^u = {(x, x\u) | u ≠ x ∈ X}. By Proposition 5.2 then,
{(u, u)} ∪ ⋃ O_g^f are precisely the fixed points of f, while {(u, u)} ∪ ⋃ O_g^u is the fiber
p^{-1}(u).
We will ultimately prove that certain quandles are simply connected by the following strategy, exploiting the invariance under f , g and h of u-normalized cocycles
(see Propositions 4.1, 4.3 and 4.5). For a u-normalized cocycle β, we first partition
its domain X × X into g-orbits Og on which hf, hi acts by Proposition 5.5. By
Corollary 5.8, f acts on both Ogu and Ogf , while h most definitely does not. The
bijection h is much easier to understand in affine connected quandles, cf. Lemma
6.6. The affine-over-cyclic case of Theorem 1.1 then easily follows. In the doubly
transitive case, we will show that there are at most five orbits of hf, g, hi, namely
{Og (u, u)}, Ogu , Ogf and certain sets Og1 , Og2 introduced later. (We note that Ogf ,
Ogu need not be disjoint, but their intersection is easy to understand, cf. Lemma
5.7.) A careful analysis of the orbit sizes of h then shows that the four sets Ogu , Ogf ,
Og1 and Og2 must be linked by h (as long as |X| ≠ 4), establishing the main result.
Lemma 5.1. Let X be a latin quandle. Then for every x, y ∈ X and every k ∈ Z
we have
f k (x, y) = (fk (x, y), fk−1 (x, y)u),
where
f_k(x, y) = (L_x L_{y/u})^{k/2}(x) if k is even, and f_k(x, y) = (L_x L_{y/u})^{(k+1)/2}(y/u) if k is odd.    (5.1)
Proof. Fix x, y ∈ X. Let ϕ = L_x L_{y/u} and define f_k, f_k′ by f^k(x, y) = (f_k, f_k′). Then
(f_{k+1}, f_{k+1}′) = f(f_k, f_k′) = (f_k · f_k′/u, f_k u), so f_{k+1}′ = f_k u and f_{k+1} = f_k · f_k′/u =
f_k f_{k−1} for every k. Note that ϕ^{k+1}(y/u) = ϕ^k(ϕ(y/u)) = ϕ^k(x(y/u · y/u)) =
ϕ^k(x · y/u).
For the base step, we will show that (5.1) holds for k ∈ {0, 1}. Indeed, f0 = x =
ϕ0 (x) and f1 = x · y/u = x(y/u · y/u) = ϕ(y/u).
For the ascending induction, suppose that (5.1) holds for k − 1 and k. If k is
even, we have
fk+1 = fk fk−1 = ϕk/2 (x)ϕk/2 (y/u) = ϕk/2 (x · y/u) = ϕ(k+2)/2 (y/u).
If k is odd, we have
fk+1 = fk fk−1 = ϕ(k+1)/2 (y/u)ϕ(k−1)/2 (x) = ϕ(k−1)/2 (x · y/u)ϕ(k−1)/2 (x)
= ϕ(k−1)/2 ((x · y/u)x) = ϕ(k−1)/2 (x(y/u · x)) = ϕ(k+1)/2 (x),
where we have used flexibility.
For the descending induction, suppose that (5.1) holds for k and k + 1. If k is
even then
fk−1 = fk \fk+1 = ϕk/2 (x)\ϕk/2 (x · y/u) = ϕk/2 (x\(x · y/u)) = ϕk/2 (y/u).
If k is odd then
fk−1 = fk \fk+1 = ϕ(k+1)/2 (y/u)\ϕ(k+1)/2 (x)
= ϕ(k+1)/2 ((y/u)\x) = ϕ(k−1)/2 ϕ((y/u)\x) = ϕ(k−1)/2 (x),
finishing the proof.
Proposition 5.2. Let X be a latin quandle and x, y ∈ X. Then, using the notation
of Lemma 5.1, the following conditions are equivalent: f k (x, y) = (x, y), fk (x, y) =
x, fk−1 (x, y) = y/u. In particular,
(i) |Of (x, y)| = 1 if and only if y = xu,
(ii) |Of (x, y)| 6= 2,
(iii) |Of (x, y)| ≤ |X|.
Proof. Fix x, y ∈ X and let fk = fk (x, y). Clearly, f k (x, y) = (x, y) holds if and
only if both fk = x and fk−1 = y/u hold. Since xy = p(x, y) = p(f k (x, y)) =
p(fk , fk−1 u) = fk · fk−1 u, we have fk = x if and only if fk−1 u = y.
Part (i) now follows. Suppose that f 2 (x, y) = (x, y). The equality x = f2 says
x = x · (y/u · x), which implies x = x · y/u, x = y/u, y = xu. But then |Of (x, y)| = 1
by (i). Finally, (iii) follows from the above-mentioned fact that every fiber of p has
cardinality |X| and is a union of f -orbits.
Proposition 5.3. Let X be a finite latin quandle and x, y ∈ X. Then |Og (x, y)| =
lcm(|OLu (x)|, |OLu (y)|). Moreover, |Og (x, y)| = 1 if and only if (x, y) = (u, u).
Proof. This follows immediately from g k (x, y) = (Lku (x), Lku (y)).
Lemma 5.4. Let X be a latin quandle and x, y ∈ X. Then the following conditions
are equivalent:
(i) |Oh (x, y)| = 1,
(ii) p(h(x, y)) = p(x, y),
(iii) y = u.
Proof. Each of (i), (ii) is equivalent to y/(x\u) · x = x, which is equivalent to
y/(x\u) = x, that is, to y = u.
The action of hf, hi on X × X in fact induces an action on Og , the orbits of g:
Proposition 5.5. Let X be a latin quandle. Then
k(Og (x, y)) = Og (k(x, y))
for every k ∈ hf, hi.
Proof. It suffices to show that f and h commute with g. Let x, y ∈ X. Since
Lu ∈ Aut(X), we have
f (g(x, y)) = f (ux, uy) = (ux · (uy)/u, uxu) = (u · (x · (y/u)), u · xu) = g(f (x, y))
and
h(g(x, y))=h(ux, uy)=(uy/(ux\u) · ux, uy)=(u · (y/(x\u) · x), u · y)=g(h(x, y)).
Remark 5.6. The mappings f and h never commute on a nontrivial latin quandle.
More precisely, f h(x, y) = hf (x, y) if and only if x = y = u. Indeed, we certainly
have f (u, u) = (u, u) = h(u, u), so f h(u, u) = hf (u, u). Suppose now that
f h(x, y) = f (y/(x\u) · x, y) = ((y/(x\u) · x)(y/u), (y/(x\u) · x)u)
is equal to
hf(x, y) = h(x · y/u, xu) = (xu/((x · y/u)\u) · (x · y/u), xu).
By comparing the second coordinates we see that y = u, and substituting this into
the first coordinates we arrive at xu = xu/(xu\u) · xu, that is, x = u.
Lemma 5.7. Let X be a latin quandle and x ∈ X. Then
(i) Og (x, xu) = {(y, yu) | y ∈ OLu (x)},
(ii) Og (x, x\u) = {(y, y\u) | y ∈ OLu (x)}.
In particular, Og (x, y) ∈ Ogu ∩ Ogf if and only if y = xu and x · xu = u.
Proof. We have g(x, xu) = (ux, uxu), g^{−1}(x, xu) = (u\x, u\(xu)) = (u\x, (u\x)u), and (i) follows by induction. Similarly, g(x, x\u) = (ux, u · x\u) = (ux, (ux)\u) and g^{−1}(x, x\u) = (u\x, u\(x\u)) = (u\x, (u\x)\u) prove (ii). Hence
Og (x, y) ∈ Ogu ∩ Ogf if and only if (x, y) = (z, zu) = (w, w\u) for some z, w ∈ X,
which is equivalent to z = w = x, y = xu = x\u.
Corollary 5.8. Let X be a latin quandle. Then
(i) h(Ogu) ∩ Ogu = ∅,
(ii) h(Ogf) ∩ Ogf = ∅,
(iii) f(Og(x, xu)) = Og(x, xu) for every x ∈ X, in particular, f(Ogf) = Ogf,
(iv) f(Ogu) = Ogu.
Proof. For (i), suppose that h(Og (x, x\u)) ∈ Ogu for some x 6= u. By Lemma 5.7,
h(x, x\u) = (y, y\u) for some y ∈ X. But then x = y, h(x, x\u) = (x, x\u), and
thus x = u by Lemma 5.4, a contradiction.
The proof of (ii) is similar: If h(Og (x, xu)) ∈ Ogf for some x 6= u, then h(x, xu) =
(y, yu) for some y by Lemma 5.7, hence x = y, h(x, xu) = (x, xu), xu = u, x = u,
a contradiction.
For (iii), recall that f (x, xu) = (x, xu) by Proposition 5.2.
Finally, for (iv), note that p(f (x, x\u)) = p(x, x\u) = u, hence f (x, x\u) must
be equal to some (y, y\u). Moreover, if x 6= u then y 6= u because f fixes (u, u).
A permutation is said to be semiregular if all its nontrivial cycles are of the
same finite length. A quandle X is semiregular if there is a positive integer s such
that every nontrivial cycle of any left translation Lx has length s.
Clearly, if X is a connected quandle and Lu is semiregular for some u ∈ X, then
X is semiregular. In particular, a latin quandle is semiregular if and only if Lu is
semiregular for some u ∈ X.
Let us denote a typical orbit of the induced action of f on Og by Of (Og (x, y)),
cf. Proposition 5.5. Then |Of (Og (x, y))| ≤ |Of (x, y)| and the strict inequality can
occur in examples.
Lemma 5.9. Let X be a latin semiregular quandle. Then |Of (Og (x, y))| =
|Of (x, y)| and |Oh (Og (x, y))| = |Oh (x, y)| for every x, y ∈ X. Hence f (Og (x, y)) =
Og (x, y) if and only if Og (x, y) ∈ Ogf .
Proof. Suppose that f k (Og (x, y)) = Og (x, y) and k > 0 is smallest possible.
Then f k (x, y) = (Lru (x), Lru (y)) for some r ∈ Z. Therefore, xy = p(x, y) =
p(Lru (x), Lru (y)) = Lru (xy). But then semiregularity implies that |Lu | divides r,
f k (x, y) = (x, y), and |Of (Og (x, y))| ≥ |Of (x, y)|.
Similarly, suppose that hk (Og (x, y)) = Og (x, y) and k > 0 is smallest possible. Then hk (x, y) = (z, y) = (Lru (x), Lru (y)) for some r ∈ Z and some z ∈ X.
Then Lru (y) = y, hence Lru (x) = x by semiregularity, hk (x, y) = (x, y), and
|Oh (Og (x, y))| ≥ |Oh (x, y)|.
6. The Orbits on Connected Affine Quandles
In this section we take advantage of the affine representation to arrive at explicit
expressions for the mappings f and h in terms of the underlying group and the
automorphism α. Moreover, we compute the orbit lengths for f and h.
We will use additive notation for the underlying groups and set u = 0 in affine
quandles. We therefore also write Og0 for Ogu .
The results from previous sections apply to finite affine connected quandles
thanks to the following, well-known result.
Proposition 6.1 ([15, Proposition 1]). Let X = Q(A, α) be a finite affine
quandle. Then the following conditions are equivalent:
(i) X is latin,
(ii) X is connected,
(iii) 1 − α ∈ Aut(A).
Note that Proposition 6.1 implies that in any finite connected affine quandle
X = Q(A, α) we have L0 (y) = (1 − α)(0) + α(y) = α(y), that is, L0 = α.
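For readers who like to experiment, here is a small Python check (ours, not from the paper) of the equivalence (i) ⟺ (iii) of Proposition 6.1 for the quandles Q(Z_m, λ_n); connectivity, the remaining condition, is not tested.

```python
# Check "latin <=> 1 - alpha in Aut(A)" for the affine quandles Q(Z_m, lambda_n),
# lambda_n(x) = n*x with gcd(m, n) = 1, by brute force on small moduli.
from math import gcd

def is_latin(m, n):
    op = lambda x, y: ((1 - n) * x + n * y) % m    # x * y = (1 - alpha)(x) + alpha(y)
    rows = all(len({op(x, y) for y in range(m)}) == m for x in range(m))
    cols = all(len({op(x, y) for x in range(m)}) == m for y in range(m))
    return rows and cols

for m in range(2, 16):
    for n in range(1, m):
        if gcd(m, n) == 1:                         # lambda_n is an automorphism of Z_m
            assert is_latin(m, n) == (gcd(m, (1 - n) % m) == 1)
print("latin <=> (1 - alpha) invertible, verified for all Q(Z_m, lambda_n) with m < 16")
```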
Proposition 6.2. Let A be an abelian group and let α ∈ Aut(A) be such that
1 − α ∈ Aut(A). Then for every x ∈ A and every positive integer n we have
α^n(x) = x ⇔ ∑_{k=0}^{n−1} α^k(x) = 0.
Proof. We have 1 − α^n = (1 − α) ∑_{k=0}^{n−1} α^k. Since 1 − α ∈ Aut(A), we deduce that α^n(x) = x if and only if ∑_{k=0}^{n−1} α^k(x) = 0.
Lemma 6.3. Let X = Q(A, α) be an affine quandle. Then for every x, y ∈ X and
k ≥ 0, the element fk (x, y) from (5.1) is equal to
f_k(x, y) = x + ∑_{j=1}^{k} (−1)^j α^j(x − y/0).   (6.1)
Proof. Fix x, y ∈ X and let f_k = f_k(x, y). The formula (5.1) yields f_0 = x and f_1 = x · (y/0) = x − α(x) + α(y/0), in agreement with (6.1). Suppose that (6.1) holds for k − 1 and k, and recall that f_{k+1} = f_k f_{k−1}. Then with z = x − y/0 we have
f_{k+1} = (x + ∑_{j=1}^{k} (−1)^j α^j(z)) · (x + ∑_{j=1}^{k−1} (−1)^j α^j(z))
= x − α(x) + ∑_{j=1}^{k} (−1)^j α^j(z) − ∑_{j=1}^{k} (−1)^j α^{j+1}(z) + α(x) + ∑_{j=1}^{k−1} (−1)^j α^{j+1}(z)
= x + ∑_{j=1}^{k} (−1)^j α^j(z) − (−1)^k α^{k+1}(z) = x + ∑_{j=1}^{k+1} (−1)^j α^j(z).
Proposition 6.4. Let X = Q(A, α) be a finite connected affine quandle. Then for
every x, y ∈ X we have
(i) |Of(x, y)| = min{n ∈ N | ∑_{j=1}^{n} (−1)^j α^j(x − y/0) = 0},
(ii) (−1)^{|Of(x,y)|} α^{|Of(x,y)|}(x − y/0) = x − y/0,
(iii) if n = |Of(x, y)| then |OL0(x − y/0)| divides 2n/gcd(2, n),
(iv) if 2(x − y/0) = 0 then |Of(x, y)| = |OL0(x − y/0)|.
Proof. (i) By Proposition 5.2, |Of(x, y)| is the smallest positive k such that f_k(x, y) = x, or, equivalently, f_{k−1}(x, y) = y/0. By Lemma 6.3, f_k(x, y) = x if and only if ∑_{j=1}^{k} (−1)^j α^j(x − y/0) = 0.
(ii) Let z = y/0 and n = |Of(x, y)|. By Lemma 6.3 and the above remarks, we have
x = f_n(x, y) = x + ∑_{j=1}^{n} (−1)^j α^j(x − z),
z = f_{n−1}(x, y) = x + ∑_{j=1}^{n−1} (−1)^j α^j(x − z).
Taking the difference of these two equations yields (−1)^n α^n(x − z) = x − z.
(iii) Since L0 = α, part (ii) shows that |OL0(x − z)| divides 2n. If n is even, (ii) in fact shows that |OL0(x − z)| divides n.
(iv) Suppose that 2(x − z) = 0. Then 2α^j(x − z) = 0 for every j. Part (i) then yields 0 = ∑_{j=1}^{n} (−1)^j α^j(x − z) = ∑_{j=1}^{n} α^j(x − z) = ∑_{j=0}^{n−1} α^j(x − z), and
Proposition 6.2 implies α^n(x − z) = x − z. As n is minimal with 0 = ∑_{j=0}^{n−1} α^j(x − z), we deduce |OL0(x − z)| = n.
We will now express |Of (x, y)| as a function of |OL0 (x − y/0)| and the order of
a certain element of A. We present only the case when |OL0 (x − y/0)| is even (since
that is all we need later), but the argument in the odd case is similar.
Lemma 6.5. Let X = Q(A, α) be a finite connected affine quandle and let x,
y ∈ X. Suppose that ℓ = |OL0 (x − y/0)| is even. Then
|Of(x, y)| = kℓ if |Of(x, y)| is even, and |Of(x, y)| = k′ℓ/2 otherwise,
where k = |∑_{j=0}^{ℓ−1} (−1)^j α^j(x − y/0)| and k′ = |∑_{j=0}^{ℓ/2−1} (−1)^j α^j(x − y/0)|.
Proof. Let z = x − y/0, ℓ = |OL0 (z)| and n = |Of (x, y)|. Suppose that n is even.
Then, by Proposition 6.4(iii), n = rℓ for some r. By Proposition 6.4(i), we have
0 = ∑_{j=1}^{rℓ} (−1)^j α^j(z) = ∑_{j=0}^{rℓ−1} (−1)^j α^j(z) = r ∑_{j=0}^{ℓ−1} (−1)^j α^j(z),
where we have used rℓ even in the second equality and (−1)^ℓ α^ℓ(z) = z (due to ℓ even) in the third equality. Moreover, |∑_{j=0}^{ℓ−1} (−1)^j α^j(z)| = r because rℓ is the smallest positive integer for which ∑_{j=1}^{rℓ} (−1)^j α^j(z) = 0.
Suppose now that n is odd. Then, by Proposition 6.4(iii), n = (2s + 1)ℓ/2 for some s. Since n is odd, we have ℓ/2 odd and Proposition 6.4(ii) yields −z = α^n(z) = α^{(2s+1)ℓ/2}(z) = α^{ℓ/2}(z) and therefore (−1)^{ℓ/2} α^{ℓ/2}(z) = z. Using these observations, Proposition 6.4(i) implies
0 = ∑_{j=1}^{(2s+1)ℓ/2} (−1)^j α^j(z) = ∑_{j=0}^{(2s+1)ℓ/2−1} (−1)^j α^j(z) = (2s + 1) ∑_{j=0}^{ℓ/2−1} (−1)^j α^j(z).
We again have |∑_{j=0}^{ℓ/2−1} (−1)^j α^j(z)| = 2s + 1.
We conclude this section by explicitly calculating h and |Oh (x, y)| in a connected
affine quandle.
Lemma 6.6. Let X = Q(A, α) be a connected affine quandle and let β be a 0-normalized cocycle. Then
h(x, y) = (y + x, y),   (6.2)
β(ny + x, y) = β(x, y),   (6.3)
β(nx, x) = 1   (6.4)
for every integer n and every x, y ∈ X. In particular, |Oh(x, y)| = |y|.
Proof. Let x, y ∈ X, set z = x\0 and note that z = x − α−1 (x). Then
h(x, y) = (y/(x\0) · x, y) = (y/z · x, y)
= ((z + (1 − α)−1 (y − z)) · x, y) = ((1 − α)(z) + y − z + α(x), y)
= (−α(z) + y + α(x), y) = (−α(x) + x + y + α(x), y)
= (y + x, y).
The h-invariance of β (cf. Proposition 4.5) then yields (6.3). With y = x, (6.3)
yields β((n + 1)x, x) = β(x, x) = 1, which is (6.4). Finally, |Oh (x, y)| = |y| is an
immediate consequence of (6.2).
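As an illustration (ours, not part of the proof), the following Python sketch verifies (6.2) and the orbit-length formula |Oh(x, y)| = |y| of Lemma 6.6 on the connected affine quandle Q(Z_9, α) with α(x) = 2x; the encoding of ·, \ and / follows the affine formulas used above.

```python
# Check h(x, y) = (y + x, y) and |O_h(x, y)| = |y| on Q(Z_9, alpha), alpha(x) = 2x.
m, a = 9, 2
ia = (1 - a) % m                                        # 1 - alpha, here multiplication by 8
op = lambda x, y: (ia * x + a * y) % m                  # x * y
ld = lambda x, y: (pow(a, -1, m) * (y - ia * x)) % m    # x \ y
rd = lambda y, x: (pow(ia, -1, m) * (y - a * x)) % m    # y / x

def h(x, y):                                            # h(x, y) = (y/(x\0) * x, y)
    return (op(rd(y, ld(x, 0)), x), y)

def additive_order(y):                                  # |y| in the group (Z_9, +)
    k = 1
    while (k * y) % m != 0:
        k += 1
    return k

for x in range(m):
    for y in range(m):
        assert h(x, y) == ((y + x) % m, y)              # equation (6.2)
        orbit, p = {(x, y)}, h(x, y)
        while p != (x, y):
            orbit.add(p)
            p = h(*p)
        assert len(orbit) == additive_order(y)          # |O_h(x, y)| = |y|
print("Lemma 6.6 verified on Q(Z_9, alpha = 2)")
```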
7. Two Classes of Simply Connected Quandles
In this section we show that every finite connected affine quandle over a cyclic group
is simply connected (extending a result of Graña for connected quandles of prime
order), and that every finite doubly transitive quandle of order different from 4 is
simply connected.
7.1. Connected affine quandles over cyclic groups
Graña showed:
Proposition 7.1 ([9, Lemma 5.1]). Let q be a prime and X = Q(Zq , α) an
affine quandle. Then X is simply connected.
Proof. Let x, y ∈ X be nonzero elements and let β be a 0-normalized cocycle.
Then y = nx for a suitable n and (6.4) yields β(y, x) = β(nx, x) = 1.
It is known that every connected quandle of prime order q is isomorphic to an
affine quandle of the form Q(Zq , α), cf. [7]. Proposition 7.1 therefore states that
every connected quandle of prime order is simply connected.
Every automorphism of Zm is of the form λn for some n with gcd(m, n) = 1,
where λn (x) = nx. Suppose that gcd(m, n) = 1. As an immediate consequence of
Proposition 6.1, we see that the affine quandle Q(Zm , λn ) is connected if and only
if gcd(m, 1 − n) = 1. Note that the conditions gcd(m, n) = 1 = gcd(m, n − 1) imply
that m is odd.
Let U(Zm ) denote the group of units in the ring of integers modulo m.
Proposition 7.2. Let X = Q(Zm , λn ) be a connected affine quandle and let β
be a 0-normalized cocycle. Then β is u-normalized for every u ∈ U(Zm ), that is,
β(x, u) = 1 for every x ∈ X, u ∈ U(Zm ). In addition, β(u, x) = 1 and β(u·x, u·y) =
β(x, y) for every x, y ∈ X, u ∈ U(Zm ).
Proof. Let u ∈ U(Zm ) and x ∈ X. Then x = nu for a suitable n and we have
β(x, u) = β(nu, u) = 1 by (6.4), showing that β is u-normalized. By Proposition
4.3, β(u · x, u · y) = β(x, y) for every x, y ∈ X. Then by Lemma 4.2, β(u, x) = 1 for
every x ∈ X.
Lemma 7.3. Let X = Q(Zm , α) be a connected affine quandle with m odd. Then
for every x ∈ X there are u, v ∈ U(Zm ) such that x = u · v.
Proof. Since u + v = (1 − α)−1 (u) · α−1 (v) and α and 1 − α are automorphisms
of (Zm , +), it suffices to prove that for every x ∈ Zm there are u, v ∈ U(Zm ) such
that x = u + v. This is well-known and can be established as follows:
Let m = p_1^{n_1} ··· p_r^{n_r}, where p_1, ..., p_r are distinct primes. By the Chinese remainder theorem, Z_m ≅ Z_{p_1^{n_1}} × ··· × Z_{p_r^{n_r}} and also U(Z_m) ≅ U(Z_{p_1^{n_1}}) × ··· × U(Z_{p_r^{n_r}}). It therefore suffices to prove the claim when m = p^n for a prime p.
Consider x = ap + b, where 0 ≤ b < p. If x is invertible (that is, b ≠ 0), then so is
2x (since m is odd), and x = 2x + (−x) does the job. If x is not invertible, then
x = ap = (ap + 1) + (−1) finishes the proof.
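The sum-of-two-units step in this proof is easy to confirm computationally; the short Python check below (ours) does so for all odd moduli up to 199.

```python
# Every element of Z_m (m odd) is a sum of two units, as used in the proof of Lemma 7.3.
from math import gcd

for m in range(3, 200, 2):
    units = [u for u in range(m) if gcd(u, m) == 1]
    sums = {(u + v) % m for u in units for v in units}
    assert sums == set(range(m)), f"failed for m = {m}"
print("checked: every element of Z_m is a sum of two units for all odd m < 200")
```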
Theorem 7.4. Let X = Q(Zm , λn ) be a connected affine quandle. Then X is
simply connected.
Proof. Let x, y ∈ X and let β be a 0-normalized cocycle. By Lemma 7.3,
we can write x = u · v for some invertible elements u, v. By Proposition 7.2,
β(x, y) = β(u · v, u · u\y) = β(v, u\y) = 1.
7.2. Doubly transitive quandles
Finite doubly transitive quandles can be characterized as follows:
Theorem 7.5 ([20, Corollary 4]). Let X be a finite quandle. Then LMlt(X) is
doubly transitive if and only if X ≅ Q(Z_q^n, α) for some prime q, some n > 0 and α ∈ Aut(Z_q^n) with |α| = |X| − 1.
Lemma 7.6. Let X be a finite idempotent groupoid with a doubly transitive automorphism group. Then X is semiregular and the parameter s (length of any nontrivial orbit of any left translation) is a divisor of |X| − 1.
Proof. For any x ∈ X we have Lx (x) = x by idempotence. Given x 6= y with
Lkx (y) = y and some v 6= w, let ϕ ∈ Aut(X) be such that ϕ(x) = v, ϕ(y) = w. Then
w = ϕ(y) = ϕLkx (y) = Lkϕ(x) (ϕ(y)) = Lkv (w). Hence X is semiregular, and the rest
follows.
Combining Theorem 7.5 and Lemma 7.6, we see that if X ≅ Q(Z_q^n, α) is a finite doubly transitive quandle, then α is an (|X| − 1)-cycle (since α cannot be trivial by Proposition 6.1).
Proposition 7.7. Let X be a finite doubly transitive quandle. Then there is an
integer F > 1 such that |Of(x, y)| = 1 if x = y/0, and |Of(x, y)| = F otherwise.
Proof. By Proposition 5.2, |Of (y/0, y)| = 1. Suppose that x, y, v, w ∈ X are
such that x 6= y/0 and v 6= w/0. Let ϕ ∈ LMlt(X) ≤ Aut(X) be such that
ϕ(x) = v and ϕ(y/0) = w/0. Suppose that |Of (x, y)| = k is even, the odd case
being similar. By Lemma 5.1 and Proposition 5.2, we have (Lx Ly/u )k/2 (x) = x.
Then v = ϕ(x) = ϕ(Lx Ly/0 )k/2 (x) = (Lv Lw/0 )k/2 (v), and thus |Of (v, w)| ≤ k.
Corollary 7.8. Let X = Q(Z_q^n, α) be a doubly transitive quandle. Then
(i) Og0 and Ogf are singletons,
(ii) (∪Og0) ∩ (∪Ogf) = ∅ if |X| > 3,
(iii) if q = 2 then F = |X| − 1,
(iv) if q > 2 then F = |X| − 1 if it is even and F = (|X| − 1)/2 if it is odd.
Proof. (i) Since α = L0 is an (|X|−1)-cycle, we have |Og (x, y)| = |X|−1 whenever
(x, y) 6= (0, 0) by Proposition 5.3. The claim follows by Lemma 5.7.
(ii) By Lemma 5.7, (x, y) ∈ (∪Og0) ∩ (∪Ogf) if and only if x · (x · 0) = 0 and y = x · 0. We have x · (x · 0) = (1 + α)(1 − α)(x). Since 1 − α ∈ Aut(A), we have
x · (x · 0) = 0 if and only if α(x) = −x. Then α2 (x) = x, so |X| − 1 = |α| ≤ 2.
(iii) According to Proposition 7.7, it suffices to compute the length of an f-orbit for some (x, y) with y ≠ x · 0. We have 2x = 0 for every x ∈ X, and hence
F = |X| − 1 by Proposition 6.4(iv).
(iv) Since q > 2, |OL0 (x)| = |X| − 1 is even for every x ∈ X. By Proposition 5.2,
F ≤ |X| − 1. If F is even then F = k(|X| − 1) by Lemma 6.5 and thus F = |X| − 1.
If F is odd then the same lemma yields F = k(|X| − 1)/2, the case F = |X| − 1
cannot occur since |X| − 1 is even, and thus F = (|X| − 1)/2.
Lemma 7.9. Let X = Q(Z_q^n, α) be a doubly transitive quandle and β a 0-normalized cocycle.
(i) If F = |X| − 1 then β(x, y) = 1 for every (x, y) ∉ ∪(Ogf ∪ Og0).
(ii) If β(x, y) = 1 for every (x, y) ∉ ∪(Ogf ∪ Og0) and, in addition, there is 0 ≠ z ∈ X such that β(z, z\0) = 1, then β = 1.
Proof. (i) We have β(x, x) = 1 for any x ∈ X. Suppose that (0, 0) ≠ (x, y) ∈ (X × X) \ ∪(Ogf ∪ Og0). Then (x, y) is not a fixed point of f, and neither is (xy, xy), since xy ≠ 0. The fiber p^{−1}(xy) contains both (x, y) and (xy, xy), and it is a union of f-orbits. By assumption, one of the orbits has size F = |X| − 1, forcing the remaining element of p^{−1}(xy) to be a fixed point of f. Hence (x, y) ∈ Of(xy, xy). Then β(xy, xy) = 1 implies β(x, y) = 1.
(ii) Suppose that β(x, y) = 1 for every (x, y) ∉ ∪(Ogf ∪ Og0), and β(z, z\0) = 1 for some 0 ≠ z ∈ X. Then β = 1 on ∪Og0 since Og0 is a singleton by Corollary 7.8. Finally, let (x, y) ∈ ∪Ogf. By Corollary 5.8, h(x, y) ∉ ∪Ogf and thus β(x, y) = β(h(x, y)) = 1.
Lemma 7.10. Let X = Q(A, α) be a latin affine quandle and 0 ≠ x ∈ X. Then
(i) h(x, x\0) ∈ ∪Ogf if and only if 2x = 0,
(ii) Og(0/(0/x), x) ∈ Ogf if and only if (α² + α − 1)(x) = 0.
Proof. (i) Recall that (x, y) ∈ ∪Ogf if and only if y = x · 0. We have h(x, x\0) = (x + x\0, x\0) and x\0 = (1 − α^{−1})(x). Therefore h(x, x\0) ∈ ∪Ogf if and only if (x + x\0) · 0 = x\0, which is equivalent to (1 − α)((1 − α^{−1})(x) + x) = (1 − α^{−1})(x).
This is easily seen to be equivalent to (1 − α)(2x) = 0. Since 1 − α ∈ Aut(A), the
last condition is equivalent to 2x = 0.
(ii) Note that (0/(0/x)) · 0 = x is equivalent to x/0 · 0/x = 0. Also note that x/0 = (1 − α)^{−1}(x) and 0/x = −α(1 − α)^{−1}(x). Then x/0 · 0/x = 0 holds if and only if x − α²(1 − α)^{−1}(x) = 0, which is equivalent to (α² + α − 1)(x) = 0.
Theorem 7.11. Let X = Q(Z_q^n, α) be a doubly transitive quandle with q ≥ 3. Then
X is simply connected.
Proof. If n = 1 then X is simply connected by Theorem 7.4. Suppose that n > 1
and let β be a 0-normalized cocycle. Recall that X is semiregular with α an (|X| − 1)-cycle. Hence |Og(x, y)| = |X| − 1 for every (0, 0) ≠ (x, y) ∈ X × X. By Lemma
5.9 and Corollary 7.8, |Of (Og (x, y))| = |Of (x, y)| ∈ {1, F } and F ∈ {|X| − 1,
(|X| − 1)/2}. By Lemmas 5.9 and 6.6, |Oh (Og (x, y))| = |Oh (x, y)| = |y| ∈ {1, q} for
any x, y ∈ X. Moreover, by Corollary 5.8, Og0 and Ogf are disjoint singletons.
Suppose that F = |X| − 1. By Lemma 7.9(i), β = 1 on the complement of O = ∪(Og0 ∪ Ogf). Let 0 ≠ z ∈ X. By Lemma 7.9(ii), we will be done if we show that β(z, z\0) = 1. By Corollary 5.8, h(z, z\0) ∉ ∪Og0. Since q ≥ 3, Lemma 7.10(i) yields h(z, z\0) ∉ ∪Ogf. Hence h(z, z\0) ∉ O and β(z, z\0) = β(h(z, z\0)) = 1.
For the rest of the proof suppose that F = (|X| − 1)/2. The sets Og(u, u), Ogf and Og0 account for 1 + 2(|X| − 1) = 2|X| − 1 elements of X × X and for all fixed points of f, leaving |X|² − (2|X| − 1) = (|X| − 1)² points unaccounted for. The unaccounted points thus form two ⟨f, g⟩-orbits, each of size F(|X| − 1), say Og1 and Og2. We can certainly take Og1 = Of(Og(0, x)) for some 0 ≠ x. Since β(0, x) = 1 by Lemma 4.2, we have β = 1 on Og1. If we can show that β = 1 on Og2, too, then we can finish as in the case F = |X| − 1, completing the proof.
We will now show that if q ≠ 3 then the induced action of h on Og has only two orbits, namely {Og(0, 0)} and its complement. This will finish the proof for q ≠ 3. Since h has no fixed points in the set Og0 ∪ Ogf of size 2, it suffices to consider the following situations:
(a) Suppose that h acts on Ogf ∪ Og1 and thus also on Og0 ∪ Og2 , both sets of size
F + 1. Let 0 6= x ∈ X. Since Og (x, 0) is fixed by h and belongs to one of these sets,
we see that h acts on a set of size F , hence q divides F = (|X| − 1)/2, hence q
(being odd) divides |X| − 1, a contradiction. We reach a similar contradiction if h
acts on Ogf ∪ Og2 and Og0 ∪ Og1 .
(b) Suppose that h acts on Og0 ∪ Ogf ∪ Og1 and thus also on Og2 , sets of size F + 2
and F , respectively. Since Og2 contains no fixed-points of h, we reach a contradiction
as in (a).
(c) Suppose that h acts on Og0 ∪ Ogf ∪ Og2 and thus also on Og1 , sets of size F + 2
and F , respectively. Once we account for Og (x, 0) ∈ Og1 , we see that h acts on a set
of size F + 2 = (|X| + 3)/2 and on a set of size F − 1 = (|X| − 3)/2, forcing q = 3.
For the rest of the proof we can therefore assume that q = 3. Let 0 6= x ∈ X.
Setting x = z and y = 0 in (CC) yields β(x · 0, x) = β(x, 0 · x). We also have
(x · 0, x), (x, 0 · x) 6∈ O. Suppose for a while that (x · 0, x) and (x, 0 · x) are in the
same f -orbit, that is,
(x · 0, x) = f k (x, 0 · x) = (fk (x, 0 · x), fk−1 (x, 0 · x) · 0)
for some k ≥ 1. Note that (0·x)/0 = (1 − α)^{−1}α(x). Comparing coordinates, Lemma 6.3 yields
(1 − α)(x) = x · 0 = f_k(x, 0 · x) = x + ∑_{j=1}^{k} (−1)^j α^j(x − (1 − α)^{−1}α(x)),
x = f_{k−1}(x, 0 · x) · 0 = (1 − α)(x + ∑_{j=1}^{k−1} (−1)^j α^j(x − (1 − α)^{−1}α(x))).
Applying 1 − α to the first identity and using q = 3 then yields
(1 − α)²(x) = (1 − α)(x) + ∑_{j=1}^{k} (−1)^j α^j((1 − α)(x) − α(x)) = (1 − α)(x) + ∑_{j=1}^{k} (−1)^j α^j(1 + α)(x),
while the second identity can be rewritten as
x = (1 − α)(x) + ∑_{j=1}^{k−1} (−1)^j α^j((1 − α)(x) − α(x)) = (1 − α)(x) + ∑_{j=1}^{k−1} (−1)^j α^j(1 + α)(x).
Subtracting the two last identities now gives (1 − α)²(x) − x = (−1)^k α^k(1 + α)(x). Since (1 − α)² = 1 − 2α + α² = 1 + α + α², we can rewrite this as α(1 + α)(x) = (−1)^k α^k(1 + α)(x). Noting that 1 + α ∈ Aut(A) (if α(x) = −x then α² = 1 and |X| = 3) and canceling, we finally get x = (−1)^k α^{k−1}(x). If k is even, we deduce
k ≡ 1 (mod |X| − 1), thus also k ≡ 1 (mod F ), but then (x · 0, x) = f k (x, x · 0) =
(x, x · 0) implies x · 0 = x, x = 0, a contradiction. If k is odd, we deduce 2(k − 1) ≡ 0
(mod |X| − 1), therefore k ≡ 1 (mod F ), and we reach the same contradiction.
Consequently, the elements (x, x · 0) and (x · 0, x) are not in the same f-orbit, hence one of them lies in ∪Og1 while the other in ∪Og2, and we see that β = 1 on Og2.
Theorem 7.12. Let X = Q(Z_2^n, α) be a doubly transitive quandle with n ≠ 2. Then X is simply connected.
Proof. By Corollary 7.8(iii), F = |X| − 1. Then by Lemma 7.9, β(x, y) = 1 for every Og(x, y) ∉ Og0 ∪ Ogf and it suffices to show that β(z, z\0) = 1 for some z ≠ 0. Note that any element (z, z\0) can be written as (0/y, y) by setting y = z\0. By Proposition 3.4, β(0/y, y) = β(0/(0/y), y) and Og(0/(0/y), y) ∉ Og0. If we show that Og(0/(0/y), y) ∉ Ogf for some y ≠ 0, then β(0/y, y) = β(0/(0/y), y) = 1 and we are through.
By Lemma 7.10(ii), it suffices to show that (α² + α + 1)(y) = (α² + α − 1)(y) ≠ 0, which is equivalent to α³(y) ≠ y, and this follows from the fact that α is an (|X| − 1)-cycle (here we use |X| ≠ 4).
We have now proved Theorem 1.1. We will show in Section 8 that a doubly
transitive quandle of order 4 is not simply connected.
8. A Short Proof of Theorem 1.1
In this section we prove Theorem 1.1 once more, this time using results of Clauwens
[5] on the fundamental group of affine quandles.
Clauwens showed how to explicitly calculate the fundamental groups of affine
quandles. Following [5, Definition 1], let G = (G, +) be an abelian group and
Q(G, α) an affine quandle. Let
I(G, α) = ⟨x ⊗ y − y ⊗ α(x) | x, y ∈ G⟩,
S(G, α) = (G ⊗ G)/I(G, α),
F(G, α) = Z × G × S(G, α),
where the operation on F(G, α) is given by
(k, x, a)(m, y, b) = (k + m, α^m(x) + y, a + b + (α^m(x) ⊗ y + I(G, α))).
Then we have:
Theorem 8.1 ([5, Theorem 1 and page 4]). Let G = (G, +, 0) be an abelian
group and Q(G, α) an affine quandle. Then the groups Adj(Q(G, α)) and F (G, α)
are isomorphic, and the groups π1 (Q(G, α), 0) and S(G, α) are isomorphic.
We are ready to prove Theorem 1.1. Let X = Q(G, α) be a finite connected
affine quandle. By Proposition 2.16 and Theorem 8.1, X is simply connected if and
only if S(G, α) is trivial.
Suppose first that G = Z_n is cyclic. Then G ⊗ G = Z_n ⊗ Z_n ≅ Z_n is generated by 1 ⊗ 1. For y ∈ G we have 1 ⊗ (1 − α)(y) = 1 ⊗ y − 1 ⊗ α(y) = y ⊗ 1 − 1 ⊗ α(y) ∈ I(G, α).
Since X is connected, the homomorphism 1 − α is bijective by Proposition 6.1. In
particular, 1 ⊗ 1 ∈ I(G, α) and S(G, α) is trivial.
Now suppose that X = Q(G, α) is doubly transitive, G is not cyclic, and |X| ≠ 4. By the argument given in Subsection 7.2, G = Z_p^n for some prime p and n > 1, and
α is a cycle of length |X| − 1.
Let us write u ≡ v if u, v ∈ G ⊗ G coincide modulo I(G, α). For every x ∈ G
we have
0 ≡ x ⊗ (x + α(x)) − (x + α(x)) ⊗ α(x)
= x ⊗ x + x ⊗ α(x) − x ⊗ α(x) − α(x) ⊗ α(x) = x ⊗ x − α(x) ⊗ α(x).
Therefore x ⊗ x ≡ α(x) ⊗ α(x) for every x. Since α is a cycle of length |X| − 1, we
conclude that there is e ∈ G ⊗ G such that
x ⊗ x ≡ e for every 0 ≠ x ∈ G.   (8.1)
If x, y ∈ G are such that x ≠ 0 ≠ y and x ≠ y, equation (8.1) implies
e ≡ (x − y) ⊗ (x − y) = x ⊗ x − x ⊗ y − y ⊗ x + y ⊗ y ≡ 2e − x ⊗ y − y ⊗ x.
Therefore
x ⊗ y + y ⊗ x ≡ e for every x ≠ 0 ≠ y with x ≠ y.   (8.2)
We proceed to show that e ≡ 0.
Suppose that p ≠ 2. Since |X| > 4, there are x, y ∈ G such that x ≠ 0 ≠ y and x ≠ ±y. Then (8.1) implies
e ≡ (x + y) ⊗ (x + y) = x ⊗ x + x ⊗ y + y ⊗ x + y ⊗ y ≡ 2e + x ⊗ y + y ⊗ x
and we deduce x ⊗ y + y ⊗ x ≡ −e. But (8.2) holds for our choice of x, y as well, and thus e ≡ 0.
Suppose now that p = 2. Since |X| > 4, there are distinct and nonzero x, y, z ∈ G such that x + y + z ≠ 0. Then (8.1) and (8.2) imply
e ≡ (x + y + z) ⊗ (x + y + z) = x ⊗ x + y ⊗ y + z ⊗ z
+ (x ⊗ y + y ⊗ x) + (x ⊗ z + z ⊗ x) + (y ⊗ z + z ⊗ y) ≡ 6e ≡ 0,
and we again conclude that e ≡ 0.
Let us continue the proof with p arbitrary. We have shown that x ⊗ x ≡ 0 for
every x. The calculations leading to (8.2) can now be repeated for any x, y, and we
obtain x ⊗ y ≡ −y ⊗ x for every x, y. Hence
0 ≡ x ⊗ y − y ⊗ α(x) ≡ x ⊗ y + α(x) ⊗ y = (x + α(x)) ⊗ y
(8.3)
for every x, y ∈ G. We claim that 1+α is bijective. Indeed, suppose that (1+α)(x) =
0 for some x 6= 0. Then α(x) = −x and α2 (x) = x, a contradiction with α being a
cycle of length |X| − 1 (using |X| > 3). Now (8.3) shows that x ⊗ y ≡ 0 for every
x, y ∈ G, and Theorem 1.1 is proved.
Example 8.2. This example shows that a doubly transitive quandle Q(G, α) of order 4 is not simply connected. We will calculate I(G, α). Let {e_1, e_2} with e_1 = (1, 0) and e_2 = (0, 1) be a basis of G = Z_2², and suppose without loss of generality that α is the matrix with rows (1, 1) and (1, 0). Then α(e_1) = e_1 + e_2, α(e_2) = e_1 and α(e_1 + e_2) = e_2. Calculating in G ⊗ G, we get
e_1 ⊗ e_1 + e_1 ⊗ α(e_1) = e_1 ⊗ e_2,
e_2 ⊗ e_1 + e_1 ⊗ α(e_2) = e_1 ⊗ e_1 + e_2 ⊗ e_1,
e_1 ⊗ e_2 + e_2 ⊗ α(e_1) = e_1 ⊗ e_2 + e_2 ⊗ e_1 + e_2 ⊗ e_2,
e_2 ⊗ e_2 + e_2 ⊗ α(e_2) = e_2 ⊗ e_1 + e_2 ⊗ e_2.
Hence I(G, α) is the span of e_1 ⊗ e_1 + e_2 ⊗ e_1, e_2 ⊗ e_1 + e_2 ⊗ e_2 and e_1 ⊗ e_2, a subgroup of dimension 3. Since G ⊗ G ≅ Z_2⁴, the quotient S(G, α) ≅ Z_2 is nontrivial and the quandle Q(G, α) is not simply connected.
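The rank computation behind Example 8.2 can be replayed mechanically. The Python sketch below (ours, purely illustrative) lists the defining relations x ⊗ y − y ⊗ α(x) over GF(2) and confirms that they span a 3-dimensional subgroup of G ⊗ G ≅ Z_2⁴, so S(G, α) ≅ Z_2 is indeed nontrivial.

```python
# Verify Example 8.2: dim I(G, alpha) = 3 inside G (x) G = Z_2^4 for G = Z_2^2 and
# alpha(e1) = e1 + e2, alpha(e2) = e1 (the matrix with rows (1,1) and (1,0)).
from itertools import product

def alpha(v):
    return ((v[0] + v[1]) % 2, v[0])

def tensor(x, y):
    # coordinates in the basis e1(x)e1, e1(x)e2, e2(x)e1, e2(x)e2
    return [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]

relations = []
for x in product((0, 1), repeat=2):
    for y in product((0, 1), repeat=2):
        r = [(a + b) % 2 for a, b in zip(tensor(x, y), tensor(y, alpha(x)))]
        relations.append(r)                        # x (x) y - y (x) alpha(x), mod 2

def rank_gf2(rows):
    rows, r = [row[:] for row in rows], 0
    for col in range(4):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

print("dim I(G, alpha) =", rank_gf2(relations))    # prints 3, so S(G, alpha) is Z_2
```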
9. Constant Cocycles with Coefficients in Arbitrary Groups
Following [1], we have defined constant quandle cocycles as mappings β : X × X → S_S, but the definition makes sense for any group G.
Definition 9.1. Let X be a quandle and G a group. Let
Zc2 (X, G) = {β : X × X → G | β satisfies (CC) and (CQ)}.
For β, β ′ ∈ Zc2 (X, G), we write β ∼ β ′ if there exists a mapping γ : X → G such
that
β ′ (x, y) = γ(xy)β(x, y)γ(y)−1
holds for every x, y ∈ X. Then Hc2 (X, G) = Zc2 (X, G)/ ∼ is the second constant
cohomology set of X with coefficients in G.
A careful look at all our results shows that all calculations can be carried out
over an arbitrary group G, not just over symmetric groups.
Proposition 9.2. Let X be a quandle and G a group. Then Zc2 (X, G) embeds into
Zc2 (X, SG ).
Proof. Let λG : G → SG , λG (g)(h) = gh be the left regular representation of G.
Define
j : Zc2(X, G) → Zc2(X, SG),   j(β)(x, y) = λG(β(x, y)).   (9.1)
Then it is easy to see that j is injective.
Let us show that the embedding j in (9.1) induces a map j : Hc2(X, G) → Hc2(X, SG). Suppose that β, β′ ∈ Zc2(X, G) are cohomologous and let γ : X → G be such that γ(xy)β(x, y)γ(y)^{−1} = β′(x, y) for every x, y ∈ X. We claim that j(β), j(β′) ∈ Zc2(X, SG) are cohomologous via j(γ), defined by j(γ)(x) = λG(γ(x)). Indeed, for every a ∈ G and every x, y ∈ X we have
j(γ)(xy) j(β)(x, y) (j(γ)(y))^{−1}(a) = γ(xy)β(x, y)γ(y)^{−1}a = β′(x, y)a = j(β′)(x, y)(a).
However, the induced map j : Hc2 (X, G) → Hc2 (X, SG ) is not necessarily an
embedding as we shall see in Example 9.5. We start by having another look at
4-element doubly transitive quandles.
Example 9.3. Let X = Q(Z22 , α) be a doubly transitive quandle and G a group.
Suppose that α, e1 , e2 are as in Example 8.2. We claim that
Hc2 (X, G) = {[βa ]∼ | a ∈ G, a2 = 1},
(9.2)
where βa is given by the following table (the entry in row x and column y is βa(x, y)):

            0    e1   e1+e2   e2
  0         1    1     1      1
  e1        1    1     a      a
  e1+e2     1    a     1      a
  e2        1    a     a      1        (9.3)
and, moreover, βa ∼ βb if and only if a and b are conjugate in G.
To see this, first recall that x·y = x+α(−x+y) = x+α(x+y) and check that βa
defined by (9.3) is a 0-normalized cocycle. Conversely, suppose that β ∈ Zc2 (X, G)
is 0-normalized. Then β(x, 0) = 1 for every x ∈ X. Since β is g-invariant by
Proposition 4.3, we also have β(0, x) = 1 for every x ∈ X by Lemma 4.2, and
β(x, y) = β(g(x, y)) = β(0 · x, 0 · y) = β(α(x), α(y)) for every x, y ∈ X. By Proposition 4.5, β is h-invariant, and by Lemma 6.6 we have h(x, y) = (y + x, y), so β(x, y) = β(y + x, y) for every x, y ∈ X. Applying g, h as indicated, we get
β(e1, e2) = β(e1 + e2, e1) = β(e2, e1 + e2) = β(e1, e1 + e2) = β(e1 + e2, e2) = β(e2, e1)
(the first, second, fourth and fifth equalities by applying g, the third by applying h),
so there is a ∈ G such that β = βa . Setting x = z in (CC) yields β(xy, x) =
β(x, yx)β(y, x) and thus
1 = β(0, e1 ) = β(e1 · (e1 + e2 ), e1 ) = β(e1 , (e1 + e2 ) · e1 )β(e1 + e2 , e1 )
= β(e1 , e2 )β(e1 + e2 , e1 ) = a2 .
Finally, by Proposition 3.2, βa ∼ βb if and only if a and b are conjugate in G.
Remark 9.4. In [16], the second cohomology of X = Q(Z22 , α) was calculated
when G is the additive group of a finite field or G ∈ {Z, Q}. Since G is abelian here,
Hc2 (X, G) is also an abelian group under the operation (β+δ)(a, b) = β(a, b)+δ(a, b).
Our calculations in Example 9.3 agree with those of [16, Example 2 and Corollary
1.1]. We have Hc2(X, G) = 1 if G ∈ {Z, Q} or G = Z_p^k with p odd, and Hc2(X, Z_2^k) ≅ Z_2^k.
Example 9.5. Let X = Q(Z22 , α) be a doubly transitive quandle and let G = Z22 .
By Example 9.3, every a ∈ G yields βa ∈ Zc2 (X, G), and βa ∼ βb if and only if a = b
since G is abelian. Example 9.3 also shows that every σ ∈ SG with σ 2 = 1 yields
βσ in Zc2 (X, SG ), and βσ = βτ if and only if σ, τ have the same cycle structure.
Consider now the embedding j : Zc2 (X, G) → Zc2 (X, SG ) and note that for every
a ∈ G we have j(βa ) = βλG (a) . If a, b ∈ G are distinct nonzero elements of G then
βa 6∼ βb , but j(βa ) ∼ j(βb ) because λG (a), λG (b) have the same cycle structure.
We conclude that j does not induce an embedding of Hc2 (X, G) into Hc2 (X, SG ).
If X is latin, the embedding j commutes with the normalization procedure
described in Proposition 3.2.
Proposition 9.6. Let X be a latin quandle, G a group and β ∈ Hc2 (X, G). Then
j(βu ) = j(β)u for every u ∈ X.
Proof. We have
j(βu)(x, y) = λG(βu(x, y)) = λG(β((xy)/u, u)^{−1} β(x, y) β(y/u, u)) = λG(β((xy)/u, u))^{−1} λG(β(x, y)) λG(β(y/u, u)) = j(β)u(x, y)
for every x, y ∈ X.
We can now prove Theorem 1.2. By Proposition 2.16, a connected quandle X
is simply connected if and only if Hc2 (X, SS ) = 1 for every set S. Let X be a latin
quandle. If Hc2 (X, G) = 1 for every group G, then certainly Hc2 (X, SS ) = 1 for every
set S. Conversely, suppose that Hc2 (X, SS ) = 1 for every set S, let G be a group
and let β ∈ Zc2 (X, G). Let u ∈ X. Since Hc2 (X, SG ) = 1 and j(β) ∈ Zc2 (X, SG ),
Corollary 3.3 implies j(β)u = 1. By Proposition 9.6, λG (βu (x, y)) = j(βu )(x, y) =
j(β)u (x, y) = 1 and therefore βu (x, y) = 1 for every x, y ∈ X. By Corollary 3.3,
Hc2 (X, G) = 1.
Problem 9.7. Let X be a connected quandle. Are the following conditions equivalent?
(i) X is simply connected,
(ii) Hc2 (X, G) = 1 for every group G.
9.1. Conjugacy quandle cocycle invariants
We conclude the paper by establishing Corollary 1.3.
In [2, Section 5], a new family of knot invariants was defined by using constant
quandle cocycles. Let X be a quandle, G a group and β ∈ Zc2 (X, G). Let (τ1 , . . . , τk )
be all the crossings of an oriented knot K encountered while traveling around K
starting from some base point and following the orientation of K. For a crossing τ
and coloring C ∈ colX(K), let B(τ, C) = β(x_τ, y_τ)^{ε_τ}, where x_τ is the color on the understrand, y_τ is the color on the overstrand, and ε_τ is the sign of the crossing. Let ϕ(K, C) = ∏_{i=1}^{k} B(τ_i, C). For g ∈ G, let [g] be the conjugacy class of g in G. Then
ϕX,G,β (K) = {[ϕ(K, C)] | C ∈ colX (K)}
is a conjugacy quandle cocycle invariant of K. According to [2, Theorem 5.5], this
is indeed an invariant of oriented knots.
Let us prove Corollary 1.3. Let X be a simply connected latin quandle, G a
group and β ∈ Zc2 (X, G). By [2, Proposition 5.6], if β is cohomologous to the
trivial constant cocycle, then ϕX,G,β (K) is trivial. It therefore suffices to show that
Hc2 (X, G) = 1 and this follows from Theorem 1.2.
Acknowledgment
We thank an anonymous referee for several important comments and in particular
for pointing out the highly relevant preprint of Clauwens [5]. The first author would
like to thank Prof. David Stanovský for fruitful discussions during his visit to Ferrara in January 2016 from which this work was started and Dr. Giuliano Bianco
who introduced him to the topic.
Marco Bonatto was partially supported by Simons Foundation Collaboration
Grant 210176 to Petr Vojtěchovský. Petr Vojtěchovský was partially supported by
University of Denver PROF grant.
References
[1] Andruskiewitsch N., Graña M., From racks to pointed Hopf algebras, Advances in
Mathematics 178 (2) (2003), 177–243 .
[2] Carter J. S., Elhamdadi M., Graña M., Saito M., Cocycle knot invariants from quandle modules and generalized quandle homology, Osaka J. Math. 42 (2005), no. 3,
499–541.
[3] Carter J.S., Jelsovsky D., Kamada S., Langford L., Saito M., Quandle cohomology
and state-sum invariants of knotted curves and surfaces, Trans. Amer. Math. Soc.
355 (2003), 3947–3989.
[4] Clark W.E., Elhamdadi M., Saito M., Yeatman T., Quandle colorings of knots and
applications, J. Knot Theory Ramifications 23/6 (2014), 1450035.
[5] Clauwens F.J.-B.J., The adjoint group of an Alexander quandle, preprint,
https://arxiv.org/abs/1011.1587
[6] Eisermann M., Quandle coverings and their Galois correspondence, Fundamenta
Mathematicae 225 (2014), no. 1, 103–168.
[7] Etingof P., Guralnick R., Soloviev A., Indecomposable set-theoretical solutions to the
Quantum Yang–Baxter Equation on a set with prime number of elements, Journal of
Algebra 242 (2001), 709–719.
[8] Fish A., Lisitsa A., Stanovský D., Combinatorial approach to knot recognition, Embracing Global Computing in Emerging Economies, CCIS 514, Springer, 2015, 64–78.
[9] Graña M., Indecomposable racks of order p2 , Beiträge zur Algebra und Geometrie
45 (2004), no. 2, 665–676.
[10] Dijkgraaf R., Witten E., Topological gauge theories and group cohomology, Comm.
Math. Phys. 129 (1990), 393–429.
[11] Hulpke A., Stanovský D., Vojtěchovský P., Connected quandles and transitive groups,
J. Pure Appl. Algebra 220/2 (2016), 735–758.
[12] Jedlička P., Pilitowska A., Stanovský D., Zamojska-Dzienio A., The structure of medial quandles, J. Algebra 443 (2015), 300–334.
[13] Joyce D., A Classifying invariant of knots, the knot quandle, Journal of Pure and
Applied Algebra 23 (1982), 37–65.
[14] Kuperberg G., Knottedness is in NP, modulo GRH., Adv. Math. 256 (2014), 493–
506.
[15] Litherland R.A., Nelson S., The Betti numbers of some finite racks, Journal of Pure
and Applied Algebra 178, Issue 2 (2003), 187–202.
[16] Mochizuki T., Some calculations of cohomology groups of finite Alexander quandles,
Journal of Pure and Applied Algebra 179-3, (2003), 287–330.
[17] Matveev S. V., Distributive groupoids in knot theory, Math. USSR - Sbornik 47/1
(1984), 73–83.
[18] McCune W. W., Prover9, version 2009-11A, http://www.cs.unm.edu/~mccune/prover9/
[19] Vendramin L., On the classification of quandles of low order, J. Knot Theory Ramifications 21 (2012), no. 9, 1250088, 10 pp.
[20] Vendramin L., Doubly transitive groups and quandles, to appear in J. Math. Soc.
Japan.
Average Size of Implicational Bases
Giacomo Kahn¹ and Alexandre Bazin²
¹ LIMOS & Université Clermont Auvergne, France
² Le2i - Laboratoire Electronique, Informatique et Image, France
[email protected], [email protected]

arXiv:1802.04032v1 [cs.AI] 12 Feb 2018
Abstract Implicational bases are objects of interest in formal concept
analysis and its applications. Unfortunately, even the smallest base, the
Duquenne-Guigues base, has an exponential size in the worst case. In
this paper, we use results on the average number of minimal transversals
in random hypergraphs to show that the base of proper premises is, on
average, of quasi-polynomial size.
Keywords: Formal Concept Analysis, Implication Base, Average Case Analysis.
1 Introduction
Computing an implication base is a task that has been shown to be costly [1],
due to their size and to the enumeration delay. Even the smallest base (the
Duquenne-Guigues base) is, in the worst case, exponential in the size of the
relation [2]. While the extremal combinatorics of implicational bases is a well
studied subject, up to now, the average case has not received a lot of attention.
In this paper, we adapt the results presented in [3] to give some average-case
properties about implicational bases. We consider the base of proper premises
and the Duquenne-Guigues base. We give the average size of the base of proper
premises and show that the size of the base of proper premises is, on average,
quasi-polynomial. This implies that the size of the Duquenne-Guigues base is on
average at most quasi-polynomial. We then give an almost sure lower bound for
the number of proper premises.
The paper is organised as follows: in section 2 we introduce the definitions
and notations that we use in the remainder of the paper. Section 3 contains the
main results of this work. In section 4, we discuss randomly generated contexts
and the models that are used in this paper. We then conclude and discuss future
works.
2 Definitions and Notations
In this section, we provide the definitions and results that will be used in this
paper. Most of the FCA definitions can be found in [4]. From now on, we will
omit the brackets in the notation for sets when no confusion is induced by this
simplification.
2.1 Formal Concept Analysis
A formal context is a triple C = (O, A, R) in which O and A are sets of objects
and attributes and R ⊆ O × A is a binary relation between them. A pair (o, a) ∈
R is read “object o has attribute a”. Formal contexts can naturally be represented
by cross tables, where a cross in the cell (o, a) means that (o, a) ∈ R.
      a1  a2  a3  a4  a5
 o1   ×   ×
 o2       ×       ×   ×
 o3       ×   ×   ×
 o4           ×       ×
 o5               ×   ×

Table 1. Toy context C.
Table 1 shows a toy context with 5 objects and 5 attributes. It will serve as
a running example throughout this paper.
Let O be a set of objects and A a set of attributes, we denote by O′ the set of
all attributes that are shared by all objects of O and A′ the set of all objects that
have all the attributes of A. More formally, O′ = {a ∈ A | ∀o ∈ O, (o, a) ∈ R}
and A′ = {o ∈ O | ∀a ∈ A, (o, a) ∈ R}.
The composition of those two operators, denoted ·′′ , forms a closure operator.
A set X = X ′′ is said to be closed. A pair (O, A) with O ⊆ O, A ⊆ A, A′ = O
and O′ = A is called a (formal) concept of the (formal) context C. In this case,
we also have that A′′ = A and O′′ = O.
The set of all the concepts of a context, ordered by inclusion on either their
sets of attributes or objects forms a complete lattice. Additionally, every complete lattice is isomorphic to the one formed by the concepts of a particular
context.
Definition 1. An implication (between attributes) is a pair of sets X, Y ⊆ A.
It is noted X → Y .
Definition 2. An implication X → Y is said to hold in a context C if and only
if X ′ ⊆ Y ′ .
Many implications are redundant, that is if an implication a → c holds, then
ab → c holds and is redundant. The number of implications that hold can be
quite large [2]. It is necessary to focus on the interesting ones.
Definition 3. An implication set that allows for the derivation of all implications that hold in a context, and only them, through the application of Armstrong’s axioms is called an implication base of the context.
Definition 4 (Duquenne-Guigues Base). An attribute set P is a pseudo-intent if and only if P ≠ P′′ and Q′′ ⊂ P for every pseudo-intent Q ⊂ P. The
Duquenne-Guigues Base.
The Duquenne-Guigues Base, also called canonical base, or stem base has
first been introduced in [5] and is the smallest (cardinality-wise) of all the bases.
Here, we denote this base as Σstem . The complexity of enumerating the elements
of this base is studied in [1].
Base of Proper Premises. While the Duquenne-Guigues Base is the smallest base, the base of proper premises, or Canonical Direct Base, denoted here ΣProper, is the smallest base for which the logical closure can be computed with a single pass. The Canonical Direct Base was initially known under five independent definitions, shown to be equivalent by Bertet and Monjardet in [6].
2.2 Hypergraphs and Transversals
Let V be a set of vertices. A hypergraph H is a subset of the powerset 2^V. Each E ∈ H is called a (hyper)edge of the hypergraph. A set S ⊆ V is called a
hypergraph transversal of H if it intersects every edge of H, that is S ∩ E 6=
∅, ∀E ∈ H. A set S ⊆ V is called a minimal hypergraph transversal of H if
S is a transversal of H and S is minimal with respect to the subset inclusion
among all the hypergraph transversals of H. The set of all minimal hypergraph
transversals of H forms a hypergraph, that we denote T r(H) and that is called
the transversal hypergraph.
2.3 Proper Premises as Hypergraph Transversals
In this section, we introduce a definition of the base of proper premises based
on hypergraph transversals.
Proposition 1 (from [4]). P ⊆ A is a premise of a ∈ A if and only if (A \ o′) ∩ P ≠ ∅ holds for all o ∈ O such that (o, a) ∉ R. P is a proper premise for a if and only if P is minimal with respect to subset inclusion for this property.
Proposition 23 from [4] uses o ↙ a instead of (o, a) ∉ R. It is a stronger
The set of proper premises of an attribute is equivalent to the minimal
transversals of a hypergraph induced from the context with the following proposition:
Proposition 2 (From [7]). P is a premise of a if and only if P is a hypergraph
transversal of Ha where
Ha = {A \ o′ |o ∈ O, (o, a) 6∈ R}
The set of all proper premises of a is exactly the transversal hypergraph T r(Ha ).
To illustrate this link, we show the computation of the base of proper premises for the context of Table 1. We must compute the hypergraph Ha for each attribute. Let us begin with attribute a1. We have to compute Ha1 = {A \ o′ | o ∈ O, (o, a1) ∉ R} and Tr(Ha1). In C, there is no cross for a1 in the rows o2, o3, o4 and o5. We have:
Ha1 = {{a1, a3}, {a1, a5}, {a1, a2, a3}, {a1, a2, a4}}
and
Tr(Ha1) = {{a1}, {a2, a3, a5}, {a3, a4, a5}}.
We have the premises for a1 , which give implications a2 a3 a5 → a1 and
a3 a4 a5 → a1 . {a1 } is also a transversal of Ha1 but can be omitted here, since
a → a is always true.
In the same way, we compute the hypergraph and its transversal hypergraph
for all other attributes. For example,
Ha2 = {{a1 , a2 , a3 }, {a1 , a2 , a4 }} and T r(Ha2 ) = {{a1 }, {a2 }, {a3 , a4 }}
Ha5 = {{a1 , a5 }, {a3 , a4 , a5 }} and T r(Ha5 ) = {{a5 }, {a1 , a3 }, {a1 , a4 }}
The set of all proper premises of ai is exactly the transversal hypergraph T r(Hai ),
∀i ∈ {1, . . . , 5}, to which we remove the trivial transversals (ai is always a
transversal for Hai ). The base of proper premises for context C is the union of
the proper premises for each attribute:
ΣProper(C) = ⋃_{a∈A} (Tr(Ha) \ {a})
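The computation above is easy to reproduce by brute force. The following Python sketch (ours, not from the paper) rebuilds the hypergraphs Ha from the Table 1 intents and prints the implications of ΣProper(C); the attribute and object names are as in the running example, and the exhaustive transversal search is only meant for toy-sized contexts.

```python
# Recompute the proper premises of the toy context C (Table 1) via Propositions 1 and 2.
from itertools import combinations

A = ['a1', 'a2', 'a3', 'a4', 'a5']
intents = {                        # o -> o', read off Table 1
    'o1': {'a1', 'a2'},
    'o2': {'a2', 'a4', 'a5'},
    'o3': {'a2', 'a3', 'a4'},
    'o4': {'a3', 'a5'},
    'o5': {'a4', 'a5'},
}

def minimal_transversals(edges):
    """All inclusion-minimal subsets of A meeting every edge (brute force)."""
    hitting = [set(S) for k in range(len(A) + 1) for S in combinations(A, k)
               if all(set(S) & e for e in edges)]
    return [S for S in hitting if not any(T < S for T in hitting)]

for a in A:
    H_a = [set(A) - intents[o] for o in intents if a not in intents[o]]
    for P in minimal_transversals(H_a):
        if P != {a}:                               # drop the trivial implication a -> a
            print('{' + ','.join(sorted(P)) + '} -> ' + a)
```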
3 Average size of an implication base
In [7], Distel and Borchmann provide some expected numbers for proper premises
and concept intents. Their approach, like the one in [3], uses the Erdös-Rényi
model [8] to generate random hypergraphs. However, in [7], the probability for
each vertex to appear in a hyperedge is a fixed 0.5 (by definition of the model)
whereas the approach presented in [3] considers this probability as a variable of the problem and is thus more general.
3.1 Single parameter model
In the following, we assume all sets to be finite, and that |O| is polynomial in |A|.
We call p the probability that an object o has an attribute a. An object having
an attribute is independent from other attributes and objects. We denote by
q = 1 − p the probability that (o, a) 6∈ R. The average number of hyperedges of
Ha , ∀a ∈ A, is m = |O| × q. Indeed, Hai has one hyperedge for each (o, ai ) 6∈ R.
The probability of an attribute appearing in a hyperedge of Hai is also q.
We denote by n the number of vertices of Ha. At most all attributes appear in Ha, so n ≤ |A|.
Proposition 3 (Reformulated from [3]). In a random hypergraph with m
edges and n vertices, with m = βn^α, β > 0 and α > 0, and a probability p that a vertex appears in an edge, there exists a positive constant c such that the average number of minimal transversals is
O(n^{d(α) log_{1/q} m + c ln ln m}),
with q = 1 − p, d(α) = 1 if α ≤ 1 and d(α) = (α + 1)²/(4α) otherwise.
Proposition 3 bounds the average number of minimal transversals in a hypergraph where the number of edges is polynomial in the number of vertices.
In [3], the authors also prove that this quantity is quasi-polynomial.
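To get a feel for the growth of this bound, one can plug concrete values into the exponent; the small helper below (ours) does exactly that, treating the constant c of Proposition 3 as an input and assuming β = 1 when recovering α from m = βn^α.

```python
# Evaluate the bound of Proposition 3 for concrete (n, m, p); c is unspecified in [3],
# so it is left as a parameter, and we take beta = 1 so that alpha = log(m)/log(n).
import math

def transversal_bound(n, m, p, c=1.0):
    q = 1 - p
    alpha = math.log(m) / math.log(n)
    d = 1.0 if alpha <= 1 else (alpha + 1) ** 2 / (4 * alpha)
    exponent = d * math.log(m, 1 / q) + c * math.log(math.log(m))
    return n ** exponent

print(f"{transversal_bound(n=100, m=1000, p=0.3):.3e}")   # example values only
```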
From Prop. 3 we can deduce the following property for the number of proper
premises for an attribute.
Proposition 4. In a random context with |A| attributes, |O| objects and probability p that (o, a) ∈ R , the number of proper premises for an attribute is on
average:
O |A|
!
d(α)log 1 (|O|×q)+c ln ln(|O|×q))
p
and is quasi-polynomial in the number of objects.
Proposition 4 states that the number of proper premises of an attribute is
on average quasi-polynomial in the number of objects in a context where the
number of objects is polynomial in the number of attributes.
As attributes can share proper premises, |ΣProper| is on average less than
|A| × O(|A|^{d(α) log_{1/p}(|O|×q) + c ln ln(|O|×q)}).
Since |Σstem | ≤ |ΣP roper |, Prop. 4 immediately yields the following corollary:
Corollary 1. The average number of pseudo-intents in a context where the
number of objects is polynomial in the number of attributes is less than or equal
to:
|A| × O(|A|^{d(α) log_{1/p}(|O|×q) + c ln ln(|O|×q)}).
Corollary 1 states that in a context where the number of objects is polynomial in the number of attributes, the number of pseudo-intents is on average at most quasi-polynomial.
3.2 Almost sure lower bound for the number of proper premises
An almost sure lower bound is a bound that holds with probability tending to 1.
In [3], the authors give an almost sure lower bound for the number of minimal
transversals.
Proposition 5 (Reformulated from [3]). In a random hypergraph with m
edges and n vertices, and a probability p that a vertex appears in an edge, the
number of minimal transversals is almost surely greater than
L_MT = n^{log_{1/q} m + O(ln ln m)}.
Proposition 5 states that in a random context with probability p that a given object has a given attribute, one can expect at least L_MT proper premises for each attribute.
Proposition 6. In a random context with |A| attributes, |O| objects and probability q that a pair (o, a) ∉ R, the size of ΣProper is almost surely greater than
|A| × |A|^{log_{1/p}(|O|×q) + O(ln ln(|O|×q))}.
As Prop 6 states a lower bound on the number of proper premises, no bound
on the number of pseudo-intents can be obtained this way.
3.3 Multi-parametric model
In this section we consider a multi-parametric model, which is closer to real-life data. In this model, each attribute j has a probability p_j of being possessed by any given object, and the attributes are not all equiprobable.
We consider a context with m objects and n attributes. The set of attributes
is partitioned into 3 subsets:
– The set U represents the attributes that appear in a lot of objects' descriptions (ubiquitous attributes). For all attributes u ∈ U we have q_u = 1 − p_u < x/m, with x a fixed constant.
– The set R represents rare events, that is attributes that rarely appear. For all attributes r ∈ R we have p_r = 1 − 1/ln(n), so q_r = 1/ln(n) tends to 0.
– The set F = A \ (U ∪ R) of other attributes.
Proposition 7 (Reformulated from Theorem 3 of [3]). In the multi-parametric model, we have:
– If |F ∪ R| = O(ln |A|), then the number of minimal transversals is on average at most polynomial.
– If |R| = O((ln |A|)^c), then the number of minimal transversals is on average at most quasi-polynomial.
– If |R| = Θ(|A|), then the number of minimal transversals is on average at most exponential in |R|.
Proposition 7 states that when most of the attributes are common (that
is, are in the set U ), |ΣP roper | is on average at most polynomial. When there
is a logarithmic quantity of rare attributes (attributes in R), |ΣP roper | is on
average at most quasi-polynomial (in the number of objects). When most of the
attributes are rare events, |ΣP roper | is on average at most exponential.
As in the single parameter model, Prop. 7 also yields the same bounds on
the number of pseudo-intents.
4 Discussion on randomly generated contexts
The topic of randomly generated contexts is important in FCA, most notably
when used to compare performances of algorithms. Since [9], a few experimental
studies have been made. In [10], the authors investigate the Stegosaurus phenomenon that arises when generating random contexts, where the number of
pseudo-intents is correlated with the number of concepts [11].
As an answer to the Stegosaurus phenomenon raised by experiments on random contexts, in [12], the author discusses how to randomly and uniformly generate closure systems on 7 elements.
In [13], the authors introduce a tool to generate less biased random contexts, avoiding repetition while maintaining a given density, for any number of
elements. However this tool doesn’t ensure uniformity.
The partition of attributes induced by the multi-parametric model allows
for a structure that is close to the structure of real-life datasets [3]. However, we cannot conclude theoretically whether this model avoids the Stegosaurus phenomenon discussed in [10]. This issue would be worth further theoretical and experimental investigation.
5 Conclusion
In this paper, we used results on average-case combinatorics on hypergraphs to
bound the average size of the base of proper premises. Those results concern only the proper premises and cannot be applied to the average number of pseudo-intents. However, as the Duquenne-Guigues base is, by definition, smaller than
the base of proper premises, the average size of the base of proper premises can
serve as an average bound for the number of pseudo-intents.
This approach does not give indications on the number of concepts. However,
there exist some works on this subject [14, 15].
As the average number of concepts is known [14, 15], and this paper gives some insight on the average size of some implicational bases, future work can focus on the average number of pseudo-intents. It would also be interesting
to study the average number of n-dimensional concepts or implications, with
n ≥ 3 [16, 17].
Acknowledgments
This research was partially supported by the European Union’s “Fonds Européen
de Développement Régional (FEDER)” program.
References
1. Felix Distel and Baris Sertkaya. On the complexity of enumerating pseudo-intents.
Discrete Applied Mathematics, 159(6):450–466, 2011.
2. Sergei O. Kuznetsov. On the Intractability of Computing the Duquenne-Guigues
Base. J. UCS, 10(8):927–933, 2004.
3. Julien David, Loïck Lhote, Arnaud Mary, and François Rioult. An Average Study
of Hypergraphs and their Minimal Transversals. Theor. Comput. Sci., 596:124–141,
2015.
4. Bernhard Ganter and Rudolf Wille. Formal Concept Analysis - Mathematical
Foundations. Springer, 1999.
5. Duquenne V. Guigues, J. L. Familles Minimales d’Implications Informatives Résultant d’un Tableau de Données Binaires. Mathématiques et Sciences Humaines,
95:5–18, 1986.
6. Karell Bertet and Bernard Monjardet. The Multiple Facets of the Canonical Direct
Unit Implicational Basis. Theor. Comput. Sci., 411(22-24):2155–2166, 2010.
7. Uwe Ryssel, Felix Distel, and Daniel Borchmann. Fast Algorithms for Implication
Bases and Attribute Exploration Using Proper Premises. Ann. Math. Artif. Intell.,
70(1-2):25–53, 2014.
8. Paul Erdos and Alfréd Rényi. On the evolution of random graphs. Publ. Math.
Inst. Hung. Acad. Sci, 5(1):17–60, 1960.
9. Sergei O. Kuznetsov and Sergei A. Obiedkov. Comparing performance of algorithms for generating concept lattices. J. Exp. Theor. Artif. Intell., 14(2-3):189–
216, 2002.
10. Daniel Borchmann and Tom Hanika. Some experimental results on randomly generating formal contexts. In Proceedings of the Thirteenth International Conference
on Concept Lattices and Their Applications, Moscow, Russia, July 18-22, 2016.,
pages 57–69, 2016.
11. Daniel Borchmann. Decomposing finite closure operators by attribute exploration.
ICFCA 2011, Supplementary Proceedings, 2011.
12. Bernhard Ganter. Random extents and random closure systems. In Proceedings of
The Eighth International Conference on Concept Lattices and Their Applications,
Nancy, France, October 17-20, 2011, pages 309–318, 2011.
13. Andrei Rimsa, Mark A. J. Song, and Luis E. Zárate. Scgaz - A synthetic formal
context generator with density control for test and evaluation of FCA algorithms.
In IEEE International Conference on Systems, Man, and Cybernetics, Manchester,
SMC 2013, United Kingdom, October 13-16, 2013, pages 3464–3470, 2013.
14. Loïck Lhote, François Rioult, and Arnaud Soulet. Average number of frequent
(closed) patterns in bernouilli and markovian databases. In Proceedings of the 5th
IEEE International Conference on Data Mining (ICDM 2005), 27-30 November
2005, Houston, Texas, USA, pages 713–716, 2005.
15. Richard Emilion and Gérard Lévy. Size of random galois lattices and number of
closed frequent itemsets. Discrete Applied Mathematics, 157(13):2945–2957, 2009.
16. Fritz Lehmann and Rudolf Wille. A Triadic Approach to Formal Concept Analysis.
In Conceptual Structures: Applications, Implementation and Theory, Third International Conference on Conceptual Structures, ICCS ’95, Santa Cruz, California,
USA, August 14-18, 1995, Proceedings, pages 32–43, 1995.
17. George Voutsadakis. Polyadic Concept Analysis. Order, 19(3):295–304, 2002.
Accelerating Training of Deep Neural
Networks via Sparse Edge Processing
Sourya Dey, Yinan Shao, Keith M. Chugg, and Peter A. Beerel
Ming Hsieh Department of Electrical Engineering,
University of Southern California,
Los Angeles, California 90089, USA
{souryade,yinansha,chugg,pabeerel}@usc.edu

arXiv:1711.01343v1 [cs.NE] 3 Nov 2017
Abstract. We propose a reconfigurable hardware architecture for deep
neural networks (DNNs) capable of online training and inference, which
uses algorithmically pre-determined, structured sparsity to significantly
lower memory and computational requirements. This novel architecture
introduces the notion of edge-processing to provide flexibility and combines junction pipelining and operational parallelization to speed up
training. The overall effect is to reduce network complexity by factors
up to 30x and training time by up to 35x relative to GPUs, while maintaining high fidelity of inference results. This has the potential to enable
extensive parameter searches and development of the largely unexplored
theoretical foundation of DNNs. The architecture automatically adapts
itself to different network sizes given available hardware resources. As
proof of concept, we show results obtained for different bit widths.
Keywords: Machine learning, Neural networks, Deep neural networks,
Sparsity, Online Learning, Training Acceleration, Hardware Optimizations, Pipelining, Edge Processing, Handwriting Recognition
1 Introduction
DNNs in machine learning systems are critical drivers of new technologies such
as natural language processing, autonomous vehicles, and speech recognition.
Modern DNNs and the corresponding training datasets are gigantic with millions
of parameters [14], which makes training a painfully slow and memory-consuming
experimental process. For example, one of the winning entries in the ImageNet
Challenge 2014 takes 2-3 weeks to train on 4 GPUs [16]. As a result, despite using
costly cloud computation resources, training is often forced to exclude large scale
optimizations over model structure and hyperparameters. This scenario severely
hampers the advancement of research into the limited theoretical understanding
of DNNs and, unfortunately, empirical optimizations remain as the only option.
Recent research into hardware architectures for DNNs has primarily focused
on inference only, while performing training offline [2,4,10,13,15,19]. Unfortunately, this precludes reconfigurability and results in a network incapable of dynamically adapting itself to new patterns in data, which severely limits its
usability for pertinent real-world applications such as stock price prediction and
spam filtering. Moreover, offline-only learning exacerbates the problem of slow
DNN research and ultimately leads to lack of transparency at a time when precious little is understood about the working of DNNs.
There has been limited research into hardware architectures to support online training, such as [1,8,9]. However, due to the space-hungry nature of DNNs,
these works have only managed to fit small networks on their prototypes. While
other works [3,10,11,12,20] have proposed memory-efficient solutions for inference, none of them have addressed the cumbersome problem of online training.
Therefore, a hardware architecture supporting online training and reconfiguration of large networks would be of great value for exploring a larger set of models
for both empirical optimizations and enhanced scientific understanding of DNNs.
In this work, we propose a novel hardware architecture for accelerating training and inference of DNNs on FPGAs. Our key contributions are:
1. An architecture designed for FPGA implementation that can perform online
training of large-scale DNNs.
2. A pre-defined, structured form of sparsity that starts off with an algorithmically deterministic sparse network from the very outset.
3. Edge-based processing – a technique that decouples the available hardware
resources from the size and complexity of the network, thereby leading to
tremendous flexibility and network reconfigurability.
4. Hardware-based optimizations such as operational parallelization and junction pipelining, which lead to large speedups in training.
The paper is organized as follows. Section 2 analyzes our proposed form of
sparsity. Section 3 discusses our proposed technique of edge-based processing
and interleaving, along with hardware optimizations. Then section 4 presents
hardware results and section 5 concludes the paper.
2 Sparsity
The need for sparsity, or reducing the number of parameters in a network, stems
from the fact that both the memory footprint and computational complexity of
modern DNNs is enormous. For example, the well-studied DNN AlexNet [14]
has a weight size of 234 MB and requires 635 million arithmetic operations
only for feedforward processing [19]. Convolutional layers are sparse, but locally
connected, i.e. the spatial span of neurons in a layer connecting to a neuron
in the next layer is small. As a result, such layers alone are not suitable for
performing inference and therefore need to be followed by fully-connected (FC)
layers [14,16,18], which account for 95% of the connections in the network [19].
However, FC layers are typically over-parameterized [6,7] and tend to overfit to
the training data, which results in inferior performance on test data. Dropout
(deletion) of random neurons was proposed by [17], but incurs the disadvantage
of having to train multiple differently configured networks, which are finally
combined to regain the original full size network. Hashnet [3] randomly forced
the same value on collections of weights, but acknowledged that “a significant
number of nodes [get] disconnected from neighboring layers.” Other sparsifying
techniques such as pruning and quantization [11,12,20] first train the complete
network, and then perform further computations to delete parameters, which
increase the training time. In general, all of these architectures deal with the
complete non-sparsified FC layers at some point of time during their usage cycle
and therefore, fail to permanently solve the memory and complexity bottlenecks
of DNNs.
Contrary to existing works, we propose a class of DNNs with pre-specified
sparsity, implying that from the very beginning, neurons in a layer connect to
only a subset of the neurons in the next layer. This means that the original
network has a lower memory and computational complexity to begin with, and
there are no additional computations to change the network structure. The degrees of fan-out and fan-in (number of connections to the next layer and from
the previous layer, respectively) of each neuron are user-specified, and then the
connections are algorithmically assigned. This ensures that no particular neuron gets disconnected, while the algorithm provides good spatial spread ensuring
that activations from early layers can impact the output of the last layer.
As an example, consider MNIST digit classification over 5 epochs of training
using a (784,112,10) network, i.e. there are 784 input, 112 hidden and 10 output
neurons. If it is FC, the total number of weights is 88,928 (which is already less
than other works such as [5]). Now suppose we preset the fan-out of the input
and hidden neurons to 17 and 5, respectively. This leads to 13,888 total weights,
implying that the overall network has 15% connectivity, or 85% sparsity. Figure
1 compares the performance of sparse networks, keeping all hyperparameters
the same except for adjusting the learning rate to be inversely proportional
to connectivity, which compensates for parameter reduction. Notice that 15%
connectivity gives better performance than the original FC network. Moreover,
3% connectivity gives > 91% accuracy in 5 epochs, which is within 4% of the FC
case. This leads us to believe that the memory and processor requirements of FC
layers in DNNs can be reduced by over 30x with minimal impact on performance.
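The weight-count arithmetic behind this example can be checked in a few lines. The following is a minimal sketch: the layer sizes and fan-outs are the ones quoted above, the learning-rate scaling follows the stated rule of making it inversely proportional to connectivity, and the base learning rate is an illustrative placeholder.

```python
def count_weights(layers, fanouts=None):
    if fanouts is None:                                   # fully connected
        return sum(a * b for a, b in zip(layers, layers[1:]))
    return sum(a * f for a, f in zip(layers, fanouts))    # preset fan-out per neuron

layers = (784, 112, 10)
fc_weights = count_weights(layers)                        # 88,928
sparse_weights = count_weights(layers, fanouts=(17, 5))   # 13,888
connectivity = sparse_weights / fc_weights                # ~0.156, i.e. ~15% connectivity
base_lr = 0.1                                             # illustrative value, not from the paper
lr = base_lr / connectivity                               # learning rate scaled inversely with connectivity
print(fc_weights, sparse_weights, round(100 * connectivity, 1), round(lr, 3))
```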
3 Edge Processing and Interleaving
A DNN is made up of layers of interconnected neurons and the junctions between
adjacent layers contain connections or edges, each having an associated weight
value. The 3 major operations in a network are: a) feedforward (FF), which
primarily computes dot products between weights and the previous layer’s activation values; b) backpropagation (BP), which computes dot products between
weights and the next layer’s delta values and then multiplies them with derivatives of the previous layer’s activations; and c) update (UP), which multiplies
delta values from the next layer with activation values from the previous layer to
compute updates to the weights. Notice that the edges feature in all 3 operations,
and this is where the motivation for our approach stems from.
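For concreteness, the three operations reduce to the following per-junction arithmetic. The sketch below is dense and single-junction (the hardware operates on sparse, interleaved edges, but the per-edge arithmetic is identical); the sizes, the tanh nonlinearity and the learning rate are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_prev, n_next, lr = 8, 4, 0.1
W = rng.normal(size=(n_next, n_prev))      # one weight per edge of the junction
a_prev = rng.normal(size=n_prev)           # activations of the preceding layer
z_prev = a_prev                            # pre-activations of the preceding layer (placeholder)
d_next = rng.normal(size=n_next)           # deltas of the succeeding layer

a_next = np.tanh(W @ a_prev)                                # FF: weights (dot) previous activations
d_prev = (W.T @ d_next) * (1 - np.tanh(z_prev) ** 2)        # BP: weights (dot) next deltas, times activation derivative
W -= lr * np.outer(d_next, a_prev)                          # UP: next deltas (outer) previous activations
```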
Fig. 1. Classification performance of a (784,112,10) network with varying connectivity
percentage, trained for 5 epochs on the MNIST dataset.
We propose a DNN architecture which is processed from the point of view
of its edges (i.e., weights), instead of its neurons. Every junction has a degree
of parallelism (DoP), denoted as z, which is the number of edges processed in
parallel. All the weights in each junction are stored in a memory bank consisting
of z memories. All the activation, activation derivative and delta values of each
layer are also stored in separate memory banks of z memories each. The edges
coming into a junction from its preceding layer are interleaved, or permuted,
before getting connected to its succeeding layer. The interleaver algorithm is
deterministic and reconfigurable. It serves to ensure good spatial spread and
prevent regularity, thereby achieving a pseudo-random connection pattern. For
example, if 4 edges come out of the first input neuron of the (784,112,10) network,
they might connect to the 9th, 67th, 84th and 110th neuron in the hidden layer.
Figure 2a depicts a memory bank as a checkerboard, where each column is
a memory. A single cycle of processing (say the nth) comprises accessing the
nth cell in each of the z weight memories. This implies reading all z values from
the same row (the nth), which we refer to as natural order access. Reading a
row implies accessing weights of edges connected to consecutive neurons in the
succeeding layer, since that’s how they are numbered. Figure 2b gives an example
where z is 6 and fan-in is 3. The interleaver determines which neurons in the
preceding layer are connected to those z edges. For ideal spatial spread, these
will be z different neurons. The interleaver algorithm is also designed to be clashfree, i.e. it ensures that the activation values of these z preceding neurons are
stored in z different memories. Violating this condition leads to the same memory
needing to be accessed more than once in the same cycle, i.e. a clash, which
stalls processing. A consequence of clash-freedom and pseudo-random connection
pattern is that the activation memories are accessed in permuted order, as shown
in Figure 2a, where there is only 1 shaded cell in each column.
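The clash-free property can be illustrated with a toy interleaver. In the sketch below a simple modular mapping stands in for the paper's reconfigurable interleaver algorithm (an assumption, chosen only so the property is easy to verify): within every cycle of z edge reads, the z preceding-layer activations land in z different memories.

```python
z = 6                                  # degree of parallelism
n_prev = 24                            # preceding-layer neurons, spread across z activation memories

def interleave(edge_idx):
    # deterministic, pseudo-random-looking mapping from edge index to preceding neuron
    return (edge_idx * 7 + 3) % n_prev # stride 7 is coprime to 24, giving good spatial spread

for cycle in range(4):                 # each cycle reads z consecutive edges (natural order)
    edges = range(cycle * z, (cycle + 1) * z)
    neurons = [interleave(e) for e in edges]
    banks = {nrn % z for nrn in neurons}   # memory bank holding each activation (permuted order)
    assert len(banks) == z, "clash: two reads hit the same memory in one cycle"
```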
Noting the significant data reuse between FF, BP and UP, we used operational parallelization to make all of them occur simultaneously. Since every
operation in a junction uses data generated by an adjacent junction or layer, we
designed a junction pipelining architecture where all the junctions execute all 3
operations simultaneously on different inputs from the training set. This achieves a 3(L − 1) times speedup, where L is the total number of layers. The high level view is shown in Figure 2c.
Fig. 2. (a): Natural order and permuted order access of memory banks. (b): Reading z = 6 weights (corresponding to 2 succeeding layer neurons) in each cycle. (c): Junction pipelining and operational parallelization in the whole network.
As an example, consider the (784,112,10) network.
When the second junction is doing FF on input n + 1, it is also doing BP on the
previous input n which just finished FF, as well as updating (UP) its weights
from the finished BP results of input n − 1. Simultaneously, the first junction is
doing FF on the latest input n + 2, BP on input n − 1, and UP using the BP
results of input n − 2. Figure 3 shows the 3 simultaneous operations in more
detail inside a single junction. Notice that the memories associated with layer
parameters are both read from and written into during the same cycle. Moreover, the activation and its derivative memories need to store the FF results of
a particular input until it comes back to the same layer during BP. Hence these
memories are organized in queues. While this increases overall storage space,
the fraction is insignificant compared to the memory required for weights. This
problem is alleviated by using only 1 weight memory bank per junction for all
3 processes. Moreover, only 2 rows of this bank need to be accessed at a time,
which makes efficient memory management techniques possible.
A key contribution of our architecture is that z can be set to any value
depending on the area-speed tradeoff desired. z can be made small to process
a large network slowly using limited hardware. For powerful FPGAs, z can be
made large, which achieves tremendous increase in speed at the cost of a large
number of multipliers. z can also be individually adjusted for each junction
so that the number of clock cycles to process each junction is the same, which
ensures an always full pipeline and no stalls. Thus, the size and complexity of the
network is decoupled from the hardware resources available. Moreover, low values
of connectivity alleviate challenges with weight storage for very large DNNs.
Our architecture can be reconfigured to varying levels of fan-out and structured
sparsity, which is neither possible in other online learning architectures such as
[1,8,9], nor in architectures using forms of unstructured sparsity that suffer from
the overhead of lookups and cache misses [18]. Thus, we achieve the ideal case of
one-size-fits-all – an architecture that can adapt to a large class of sparse DNNs.
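The per-junction choice of z is a simple balancing calculation; a sketch follows, using the (1024,64,16), fan-out-8 configuration reported in Section 4 (the common cycle budget is our assumption, chosen so the resulting z values match the ones used there).

```python
def cycles(n_weights, z):
    return -(-n_weights // z)                  # ceiling division

junction_weights = [1024 * 8, 64 * 8]          # edges entering each junction
target_cycles = 16                             # assumed common cycle budget per junction
z_per_junction = [w // target_cycles for w in junction_weights]
print(z_per_junction)                                                    # [512, 32], as used in Section 4
print([cycles(w, z) for w, z in zip(junction_weights, z_per_junction)])  # [16, 16]: pipeline stays full
```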
Fig. 3. Operational parallelization in a single junction between layers k and k + 1,
showing natural and permuted order operations as solid and dashed lines, respectively.
As a concrete example of speedup, consider the network formed by the FC
layers of AlexNet. This has a (1728,4096,4096,1000) neuron configuration and accounts for 6% of the computational complexity [14,19]. Since the entire AlexNet
takes 6 days to train on 2 GPUs for 90 epochs, we estimate that training only
the FC network would take 0.36 days. The authors in [14] acknowledge the overparameterization problem, so we estimate from the data for Figure 1 that the
same FC network with only 6% connectivity can be trained with minimal performance degradation. Using our architecture, modern Kintex Ultrascale FPGA
boards will be able to support z = 256. This results in 4096 cycles being needed
to train a junction, which, at a reasonable clock frequency of 250 MHz, processes
each image through this sparse FC network in 16 µs. Training the network for
the complete 1.2 million images over 90 epochs is estimated to take half an hour,
which is a speedup of 35x over a single GPU.
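The estimate can be reproduced with a back-of-the-envelope calculation; the sketch below uses only the numbers given or implied in the preceding paragraph (small rounding differences remain).

```python
neurons = (1728, 4096, 4096, 1000)
connectivity = 0.06
z = 256
clock_hz = 250e6
weights = [a * b * connectivity for a, b in zip(neurons, neurons[1:])]
cycles_per_image = max(w / z for w in weights)       # largest junction dominates: ~3.9k cycles
time_per_image = cycles_per_image / clock_hz         # ~16 microseconds per image
total_hours = 1.2e6 * 90 * time_per_image / 3600     # ~0.5 hours for 90 epochs of 1.2M images
print(round(cycles_per_image), round(time_per_image * 1e6, 1), round(total_hours, 2))
```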
4 Results
As proof of concept, we used Verilog Hardware Description Language to develop
the register-transfer level (RTL) design for our hardware architecture, and simulated using the MNIST dataset for different fixed point bit widths. The neuron
configuration is (1024,64,16) (we used powers of 2 for ease of hardware implementation and set the extra neurons to 0) and the fan-out for both junctions
is 8, resulting in an 87% sparse network. The first and second junctions have
z = 512 and z = 32, respectively. Figures 4a, 4b and 4c show histograms for the
classification accuracy difference “fixed point - floating point” (i.e., more bars
on the positive side indicate better fixed point performance). Figure 4d indicates that 10-bit fixed point (we used 3 integer bits and 7 fractional bits) for
all network parameters and computed values is sufficient to obtain classification
Fig. 4. (a), (b), (c): Classification accuracy difference histograms for 16-bit, 12-bit
and 10-bit fixed point, respectively, with floating point. (d): Detailed comparison view
of a portion of the network learning curves for 10-bit fixed point vs floating point.
performance very close to that of floating point simulations. The plots are for
10,000 training images over 10 epochs.
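A software emulation of the quantization format is shown below. Whether the sign bit is counted inside the 3 integer bits is our assumption; the RTL also saturates on overflow, which this floating-point emulation only mimics with clipping.

```python
import numpy as np

def to_fixed(x, int_bits=3, frac_bits=7):
    scale = 2 ** frac_bits
    lo, hi = -(2 ** (int_bits - 1)), 2 ** (int_bits - 1) - 1 / scale
    return np.clip(np.round(np.asarray(x) * scale) / scale, lo, hi)

print(to_fixed([0.3141, -1.718, 2.9, -5.0]))   # 0.3141 -> 0.3125, and -5.0 saturates to -4.0
```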
5 Conclusion and Future Work
This work presents a flexible architecture that can perform both training and inference of large and deep neural networks on hardware. Sparsity is preset, which
greatly reduces the amount of memory and number of multiplier units required.
A reconfigurable degree of parallel edge processing enables the architecture to
adapt itself to any network size and hardware resource set, while junction pipelining and operational parallelization lead to fast and efficient performance. Our
ultimate goal is to propel a paradigm shift from offline training on CPUs and
GPUs to online training using the speed and ease of reconfigurability offered by
FPGAs and custom chips. Future work would involve extension to other types
of networks, tackling memory bandwidth issues, and extensive parameter space
exploration to advance the limited theoretical understanding of DNNs.
References
1. Ahn, B.: Computation of deep belief networks using special-purpose hardware architecture. In: IJCNN-2014. pp. 141–148 (2014)
2. Chen, T., Du, Z., Sun, N., Wang, J., Wu, C., Chen, Y., Temam, O.: Diannao: A
small-footprint high-throughput accelerator for ubiquitous machine-learning. In:
ASPLOS-2014. pp. 269–284. ACM, New York (2014)
3. Chen, W., Wilson, J.T., Tyree, S., Weinberger, K.Q., Chen, Y.: Compressing neural
networks with the hashing trick. In: ICML-2015. pp. 2285–2294. JMLR.org (2015)
4. Chen, Y., Luo, T., Liu, S., Zhang, S., He, L., Wang, J., Li, L., Chen, T., Xu,
Z., Sun, N., Temam, O.: Dadiannao: A machine-learning supercomputer. In: 47th
IEEE/ACM International Symposium on Microarchitecture. pp. 609–622 (2014)
5. Cireşan, D.C., Meier, U., Gambardella, L.M., Schmidhuber, J.: Deep, big, simple
neural nets for handwritten digit recognition. Neural Comput. 22(12), 3207–3220
(2010)
6. Cun, Y.L., Denker, J.S., Solla, S.A.: Optimal brain damage. In: NIPS-1989, pp.
598–605. Morgan Kaufmann Publishers Inc., San Francisco (1989)
7. Denil, M., Shakibi, B., Dinh, L., Ranzato, M., Freitas, N.D.: Predicting parameters
in deep learning. In: NIPS-2013, pp. 2148–2156 (2013)
8. Eldredge, J.G., Hutchings, B.L.: Rrann: A hardware implementation of the backpropagation algorithm using reconfigurable FPGAs. In: IEEE ICNN-1994. vol. 4,
pp. 2097–2102 (1994)
9. Gadea, R., Cerdá, J., Ballester, F., Mocholı́, A.: Artificial neural network implementation on a single FPGA of a pipelined on-line backpropagation. In: ISSS-2000.
pp. 225–230. IEEE Computer Society, Washington, DC (2000)
10. Han, S., Liu, X., Mao, H., Pu, J., Pedram, A., Horowitz, M.A., Dally, W.J.: EIE:
Efficient inference engine on compressed deep neural network. In: ISCA-2016. pp.
243–254 (2016)
11. Han, S., Mao, H., Dally, W.J.: Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In: ICLR-2016
(2016)
12. Han, S., Pool, J., Tran, J., Dally, W.: Learning both weights and connections for
efficient neural network. In: NIPS-2015. pp. 1135–1143 (2015)
13. Himavathi, S., Anitha, D., Muthuramalingam, A.: Feedforward neural network
implementation in FPGA using layer multiplexing for effective resource utilization.
IEEE Transactions on Neural Networks 18(3), 880–888 (2007)
14. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: NIPS-2012, pp. 1097–1105 (2012)
15. Sanni, K., Garreau, G., Molin, J.L., Andreou, A.G.: FPGA implementation of a
deep belief network architecture for character recognition using stochastic computation. In: CISS-2015. pp. 1–5 (2015)
16. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556 (2014), http://arxiv.org/abs/1409.1556
17. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.:
Dropout: A simple way to prevent neural networks from overfitting. Journal of
Machine Learning Research 15, 1929–1958 (2014)
18. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D.,
Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR-2015.
pp. 1–9 (2015)
19. Zhang, C., Wu, D., Sun, J., Sun, G., Luo, G., Cong, J.: Energy-efficient CNN
implementation on a deeply pipelined FPGA cluster. In: ISLPED-2016. pp. 326–
331. ACM, New York (2016)
20. Zhou, X., Li, S., Qin, K., Li, K., Tang, F., Hu, S., Liu, S., Lin, Z.: Deep adaptive
network: An efficient deep neural network with sparse binary connections. CoRR
abs/1604.06154 (2016), http://arxiv.org/abs/1604.06154
Steering the distribution of agents
in mean-field and cooperative games
Yongxin Chen, Tryphon Georgiou and Michele Pavon
arXiv:1712.03578v1 [cs.SY] 10 Dec 2017
Abstract
The purpose of this work is to pose and solve the problem to guide a collection of weakly interacting dynamical
systems (agents, particles, etc.) to a specified terminal distribution. The framework is that of mean-field and of
cooperative games. A terminal cost is used to accomplish the task; we establish that the map between terminal
costs and terminal probability distributions is onto. Our approach relies on and extends the theory of optimal mass
transport and its generalizations.
Keywords: Mean-field games, linear stochastic systems, weakly interacting particle system, McKean-Vlasov dynamics, optimal control.
I. INTRODUCTION
Mean-field game (MFG) theory is the study of noncooperative games involving a large number of agents. The
basic model requires agents to abide by identical dynamics and seek to minimize an individual cost function that
is also the same for all. As the number of agents increases to infinity, the empirical distribution of their states
becomes indifferent to the strategy of any single agent and yet it couples their individual responses. Thus, the
aggregate response of the agents (mean field) drives individual responses while the action of individual agents
is insignificant. On the flip side, cooperative games refer to the situation where agents seek to jointly optimize
a common performance index. Either way, the desire to minimize cost, individually or collectively, drives the
empirical distribution of agents in suitable ways. The purpose of this work is to study, for both MFG and cooperative
games, the control problem to steer the collective response of agents over a finite window of time between two
specified end-point marginal distributions by suitable choice of cost (i.e., incentives) in non-cooperative games and
centralized control with cooperative agents, and also the problem to ensure a desired stationary distribution under
similar conditions. This viewpoint is influenced by optimal mass transport (OMT) theory that deals with the flow
of time-marginal densities for a collective (agents, particles, resources) and corresponding control and modeling
problems.
The study of MFG’s was introduced into the engineering literature by Huang, Malhamé and Caines [1] and,
independently, by Lasry and Lions [2]. Earlier, in the economics literature, similar models were considered by
Jovanovic and Rosenthal [3]. The importance of the subject stems from the wide range of applications that include
modeling and control of multi-agent dynamical systems, stock market dynamics, crowd dynamics, power systems
and more; see [1], [2], [4], [5], and also see [6], [7], [8], [9] in the special case of linear dynamics and quadratic
cost. On the other hand, OMT originates in the work of Monge [10] and aims directly at relating/transporting
distributions under minimum cost. Kantorovich [11] introduced linear programming and duality theory for solving
Y. Chen is with the Department of Electrical and Computer Engineering, Iowa State University, IA 50011, Ames, Iowa, USA;
[email protected]
T.T. Georgiou is with the Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA 92697;
[email protected]
M. Pavon is with the Dipartimento di Matematica “Tullio Levi Civita”, Università di Padova, via Trieste 63, 35121 Padova, Italy;
[email protected]
Supported in part by the NSF under grant ECCS-1509387, the AFOSR under grants FA9550-15-1-0045 and FA9550-17-1-0435, and by
the University of Padova Research Project CPDA 140897.
OMT resource allocation problems and, in recent years, a fast developing phase was spurred by a wide range
of applications of OMT to probability theory, economics, weather modeling, biology and mathematical physics
[12], [13], [14], [15], [16], [17], [18], [19]. The connection between dynamic OMT [20] and stochastic control
has been explored in our work, e.g. [21], [22], where the focus has been on regulating stochastic uncertainty of
diffusion processes and of stochastically driven dynamical systems by suitable control action. These stochastic
control problems, in turn, relate to a classical maximum entropy problem on path space known as the Schrödinger
bridge problem, see e.g., [23], [24], [25], [26], [27], [28].
The goal of the present work is to study density steering problems in an MFG framework or, equivalently,
explore the role of interaction potential and decentralized strategies when steering an initial distribution to a terminal
one. In particular, we are interested on how to design an added terminal cost so as to provide incentives for
agents, under a Nash equilibrium strategy, to move collectively as specified. To this end, we establish that the map
between terminal costs and terminal probability distributions is onto. Thereby, we develop an MFG-based synthesis
framework for OMT-type stochastic control problems with or without stochastic excitation.
The paper evolves along the following lines. First, we discuss the motivation and problem formulation in Section
II. The solution is provided in Section III. In Section IV we study similar problems with less or no disturbance.
Section V is dedicated to the special case with Gaussian marginal distributions. Similar problems in the stationary
setting are investigated in Section VI. In Section VII, we developed the cooperative game counterpart of the density
steering problem. This is followed by a simple academic example in Section VIII and a brief concluding remark in
Section IX.
II. PROBLEM FORMULATION
We herein investigate the collective dynamical response of a group of agents (also thought of as particles,
players, and so on) that interact weakly with each other. The terminology “weakly” refers to the agents being statistically indistinguishable (anonymous) and affecting each other’s response only through their empirical distribution
[29]. Thus, we consider such a system of N agents with dynamics1 specified by
dx_i(t) = Ax_i(t)dt + (1/(N−1)) Σ_{j≠i} Āx_j(t)dt + Bu_i(t)dt + Bdw_i(t),   (1)
x_i(0) = x_0^i,  i = 1, . . . , N.
Here, xi , ui , wi represent the state, control input, white noise disturbance, respectively, for the ith agent, and
the model parameters are the same for all. We further assume that their initial conditions x_0^1, x_0^2, . . . , x_0^N are all independent with the same probability density ρ0. The ith agent interacts with the rest through the averaged position.
The matrices A, Ā ∈ Rn×n , B ∈ Rn×m are continuous functions of time; for notational simplicity we often use
e.g., A instead of A(t). It is assumed that the pair (A, B) is controllable in the sense that the reachability Gramian
M(t, s) = ∫_s^t Φ(t, τ)B(τ)B(τ)′Φ(t, τ)′ dτ
is invertible for all s < t. Here, Φ(·, ·) denotes the state transition matrix that is defined via
∂Φ(t, s)/∂t = AΦ(t, s),  Φ(s, s) = I.
Clearly, in case when A is time-invariant, Φ(t, s) = eA(t−s) .
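A direct way to get intuition for the model (1) is to simulate it. The sketch below is a minimal Euler–Maruyama simulation of the uncontrolled case (u_i = 0); the matrices, dimensions and step sizes are illustrative placeholders, not values used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, steps, dt = 500, 2, 100, 0.01
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
Abar = -0.5 * np.eye(n)                       # coupling through the empirical mean
B = np.array([[0.0], [1.0]])
x = rng.normal(size=(N, n))                   # i.i.d. initial conditions drawn from rho_0
for _ in range(steps):
    mean_others = (x.sum(axis=0) - x) / (N - 1)          # (1/(N-1)) * sum_{j != i} x_j
    dw = rng.normal(scale=np.sqrt(dt), size=(N, 1))
    x = x + (x @ A.T + mean_others @ Abar.T) * dt + dw @ B.T
print(x.mean(axis=0))                          # empirical mean after the uncontrolled evolution
```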
In MFG [1], each agent searches for an optimal control strategy to minimize its own cost2
J_i(u_i) = E{ ∫_0^1 f(t, x_i(t), u_i(t), µ^N(t)) dt + g(x_i(1), µ^N(1)) },   (2)
Footnote 1: This type of weakly coupled system of linear stochastic models has been studied in [6], [7], [8]. In our setting we further assume that the noise dw_i and control action u affect the dynamics in a similar manner, through the same matrix B. The more general case, where this is not so, is more demanding and will be pursued in future publication, cf. [27], [28].
Footnote 2: For simplicity of notation and without loss in generality we take the end point to be t = 1.
where
µ^N(t) = (1/N) Σ_{i=1}^N δ_{x_i(t)}   (3)
is the empirical distribution of the states of the N agents at time t. Thus, this is a non-cooperative game and the
cost of the ith agent is affected by the strategies of others only through the empirical distribution. An optimal
control corresponds to a Nash equilibrium for the game. We follow the arguments in [5], and restrict ourselves to
equilibria that correspond to symmetric Markovian control strategies (state feedback)
ui (t) = φ(t, xi (t)), i = 1, . . . , N.
(4)
When N is large, the empirical distribution µN is indifferent to small perturbations of control strategy of a
single agent. This points to the following approach [5] to obtain an approximate Nash equilibrium: fix a family
(µ(t))0≤t≤1 of probability measures and solve the standard stochastic control problem
φ⋆ = argmin_φ E{ ∫_0^1 f(t, x(t), φ(t, x(t)), µ(t)) dt + g(x(1), µ(1)) }   (5)
subject to the dynamics
dx(t) = Ax(t)dt + Āx̄µ (t)dt + Bφ(t, x(t))dt + Bdw(t), x(0) = x0 a.s.
(6)
where
x̄µ := hx, µ(t)i
denotes the mean3 of the distribution µ(t), and x0 is a random vector with probability density ρ0 . Considering
the choice (µ(t))0≤t≤1 as a parameter, the remaining issue is to choose this distribution flow so that the actual
distribution of the solution x(t) of (6) with optimal control strategy
u? (t) = φ? (t, x(t))
(7)
coincides with µ(t). The solution to the MFG problem involves establishing the existence and uniqueness of the
solution to two coupled partial differential equations (PDEs) [5]. It has been shown that a Nash equilibrium point
for this mean-field game exists under rather mild assumptions on the cost function [1], [2], [4], [5], [6], [7], [8], [9].
That is, there exists a family (µ(t))0≤t≤1 such that the distribution flow of the solution x(t) of (6) under optimal
control strategy φ? coincides with this same µ. In addition, this optimal control φ? is proven to be an ε-Nash
equilibrium to the N -player-game for N large [5], [30].
Departing from previous literature, this paper deals with the density steering problem of the N -player-game
system. More specifically, we are interested in introducing a suitable cost incentive so that the system is driven
to a specific distribution µ1 at time t = 1 under (7). In fact, it turns out that under mild conditions, a quadratic
running cost in both the control and state (i.e., group linear tracking as in the work of Huang, Malhamé and Caines
[1]), can be enhanced by a suitable terminal cost g as follows
J_i(u_i) = E{ ∫_0^1 [ ½‖u_i(t)‖² + ½‖x_i(t) − x̄(t)‖²_Q ] dt + g(x_i(1), µ^N(1)) }   (8)
so as to accomplish the task of steering the initial distribution to the desired terminal one. In other words, we show
that the mapping between a choice of g and the terminal distribution µ1 is onto. Formally, the problem we are
interested in can be stated as follows.
Problem 1: Given N agents governed by (1) with initial probability density ρ0 , find a terminal cost g such
that, in the Nash equilibrium with cost functional (8), the agents will reach a given terminal density ρ1 at time
t = 1, in the limit as N goes to ∞.
Footnote 3: Throughout, we use the expressions x̄_µ or ⟨x, µ(t)⟩ interchangeably.
III. GENERAL APPROACH AND SOLUTION
Without loss of generality and for simplicity of the exposition we consider a running cost only in the
control actuation (i.e., taking the matrix Q in (8) to be zero). We begin by considering the optimal steering problem
[31], [27], [28], [32] without terminal cost, i.e., for a fixed density flow (µ(t))0≤t≤1 , consider the control problem
to minimize
J(u) = E{ ∫_0^1 ½‖u(t)‖² dt }
subject to the dynamics
dx(t) = Ax(t)dt + Āx̄_µ(t)dt + Bu(t)dt + Bdw(t),  x(0) = x0 ∼ ρ0   (9)
and the constraint that x(1) has probability density ρ1. This problem can be (formally) posed as
inf_{ρ,u} ∫_0^1 ∫_{R^n} ½ ρ(t, x)‖u(t, x)‖² dx dt,   (10a)
∂ρ/∂t + ∇·((Ax + Āx̄_µ + Bu)ρ) − ½ tr(BB′∇²ρ) = 0,   (10b)
ρ(0, ·) = ρ0,  ρ(1, ·) = ρ1.   (10c)
Following a similar argument as in [22], we can establish the following sufficient condition for optimality.
Proposition 1: If there exists a function λ such that ρ? , λ satisfy
∂λ/∂t + ∇λ·Ax + ∇λ·Āx̄_µ + ½ tr(BB′∇²λ) + ½ ∇λ·BB′∇λ = 0,   (11a)
∂ρ⋆/∂t + ∇·((Ax + Āx̄_µ + BB′∇λ)ρ⋆) − ½ tr(BB′∇²ρ⋆) = 0,   (11b)
and boundary conditions
ρ⋆(0, ·) = ρ0,  ρ⋆(1, ·) = ρ1,   (11c)
then (ρ⋆, u⋆ = B′∇λ) is a solution to (10).
Replacing µ in (11) by ρ? we obtain the system of (nonlinear) PDE’s
∂λ/∂t + ∇λ·Ax + ∇λ·Āx̄_ρ⋆ + ½ tr(BB′∇²λ) + ½ ∇λ·BB′∇λ = 0,   (12a)
∂ρ⋆/∂t + ∇·((Ax + Āx̄_ρ⋆ + BB′∇λ)ρ⋆) − ½ tr(BB′∇²ρ⋆) = 0,   (12b)
ρ⋆(0, ·) = ρ0,  ρ⋆(1, ·) = ρ1.   (12c)
We will show below that, under mild assumptions, (12) has a solution. In fact, we will construct such a solution
relying on standard Schrödinger bridge theory [21], [22].
Remark 2: Note that the coupled PDEs (12a-12b) are the same as the PDEs that arise in classic MFG problems
corresponding to (1) and (8). However, the usual boundary conditions
ρ? (0, ·) = ρ0 ,
λ(1, x) = −g(x, ρ? (1, ·)),
are now different and given by (12c). Evidently, the Lagrange multiplier −λ is the value (cost-to-go) function of
the associated optimal control problem.
To solve (12), we first consider the Schrödinger bridge problem with prior dynamics
dx(t) = Ax(t)dt + Bdw(t).   (13)
Let
ρ̂0(x) = ρ0(x + x̄_ρ0)
and
ρ̂1 (x) = ρ1 (x + x̄ρ1 ),
then
hx, ρ̂0 i = 0, hx, ρ̂1 i = 0.
The Schrödinger bridge with prior dynamics (13) and marginal distributions ρ̂0 and ρ̂1 is [21], [22]
dx(t) = Ax(t)dt + BB′∇λ̂(t, x(t))dt + Bdw(t),   (14)
where λ̂ satisfies
∂λ̂/∂t + ∇λ̂·Ax + ½ tr(BB′∇²λ̂) + ½ ∇λ̂·BB′∇λ̂ = 0,   (15)
or, equivalently, ϕ = exp(λ̂) satisfies
∂ϕ/∂t + ∇ϕ·Ax + ½ tr(BB′∇²ϕ) = 0.
The boundary condition λ̂(1, ·) for λ̂ is chosen in a way so that the resulting density flow ρ̂(t, x) of (14), which is
∂ρ̂/∂t + ∇·((Ax + BB′∇λ̂)ρ̂) − ½ tr(BB′∇²ρ̂) = 0,
matches the marginal distributions ρ̂0 and ρ̂1. The pair (λ̂, ρ̂) satisfies
⟨∇λ̂(t, ·), ρ̂(t, ·)⟩ = 0   (16)
and therefore
⟨x, ρ̂(t, ·)⟩ = 0   (17)
for all 0 ≤ t ≤ 1. The intuition is that if the expectation of the control, i.e., ⟨∇λ̂(t, ·), ρ̂(t, ·)⟩, is not constantly 0,
then one can always shift the control by its mean to achieve a smaller cost. Now let
m(t) = Φ(1, t)′M̄_10⁻¹(x̄_ρ1 − Φ̄_10 x̄_ρ0),
y(t) the solution to
ẏ(t) = (A + Ā)y(t) + BB′m(t),  y(0) = x̄_ρ0,
and
γ(t) = −∫_0^t ( Āy(s)·m(s) + ½ m(s)·BB′m(s) ) ds.
Here Φ̄_10 := Φ̄(1, 0) with Φ̄ being the state transition matrix for the pair (A + Ā, B) and the “coupled” Gramian
M̄_10 = ∫_0^1 Φ̄(1, τ)BB′Φ(1, τ)′ dτ
is assumed to be invertible. Note that y(1) = x̄ρ1 .
With these ingredients, we construct a solution to (12) as follows. Define (λ, ρ? ) by
λ(t, x) = λ̂(t, x − y(t)) + m(t)·x + γ(t),   (18a)
and
ρ⋆(t, x) = ρ̂(t, x − y(t)).   (18b)
In so doing, (λ, ρ? ) is a solution of (12). On one hand, substituting (18) into (12b), in view of (17), we obtain
∂ρ⋆/∂t + ∇·((Ax + Āx̄_ρ⋆ + BB′∇λ)ρ⋆) − ½ tr(BB′∇²ρ⋆)
= ∂ρ̂/∂t − ∇ρ̂·((A + Ā)y + BB′m) + ∇·((Ax + BB′∇λ)ρ̂) + Ā⟨ξ, ρ̂(t, ξ − y)⟩·∇ρ̂ − ½ tr(BB′∇²ρ̂)
= −∇ρ̂·(Āy + BB′m) + ∇·(BB′mρ̂) + Ā⟨ξ, ρ̂(t, ξ − y)⟩·∇ρ̂ = 0,
where we referred to (17) in the last step. The fact that ρ? matches the boundary conditions (12c) follows directly
from the definition (18b). On the other hand, plugging (18) into (12a) yields
∂λ/∂t + ∇λ·Ax + ∇λ·Āx̄_ρ⋆ + ½ tr(BB′∇²λ) + ½ ∇λ·BB′∇λ
= ∂λ̂/∂t − ∇λ̂·((A + Ā)y + BB′m) + ṁ·x + γ̇ + (∇λ̂ + m)·(Ax + Āx̄_ρ⋆) + ½ tr(BB′∇²λ̂) + ½ (∇λ̂ + m)·BB′(∇λ̂ + m)
= −∇λ̂·Āy + ṁ·x + γ̇ + (∇λ̂ + m)·Āx̄_ρ⋆ + m·Ax + ½ m·BB′m = 0.
Therefore (λ, ρ⋆) in (18) is indeed a solution to (12). Finally, back to Problem 1, we assert that with terminal cost
g(x, µ) = −λ̂(1, x − x̄µ ) − m(1) · x − γ(1),
(19)
we can lead the agents to have terminal distribution ρ1 . To this extent, we follow the strategy in [5] as mentioned in
Section II. First fix µ = ρ? with ρ? as in (18b), and then solve the optimal control problem (5). Since g(x, ρ? (1, ·)) =
g(x, ρ1 ) = −λ(1, x), we have
E{ ∫_0^1 ½‖u(t)‖² dt + g(x(1), ρ⋆(1, ·)) }
= E{ ∫_0^1 ½‖u(t)‖² dt − λ(1, x(1)) }
= E{ ∫_0^1 [ ½‖u(t)‖² dt − dλ(t, x(t)) ] } − λ(0, x(0))
= E{ ∫_0^1 [ ½‖u(t)‖² dt − (∂λ/∂t)dt − ∇λ·dx(t) − ½ tr(BB′∇²λ)dt ] } − λ(0, x(0))
= E{ ∫_0^1 ½‖u(t) − B′∇λ(t, x(t))‖² dt } − E{λ(0, x(0))}.
Hence, the unique optimal control strategy is u? (t) = B 0 ∇λ(t, x(t)). It follows from (12) that the probability
distribution of the controlled state x(t) is ρ? . Therefore, with terminal cost g as in (19) we are able to steer the
system to terminal distribution ρ1 . Thus, we have established the following result.
Theorem 3: Consider N agents governed by (1) with initial density ρ0 . Suppose the terminal cost in (8) is as
in (19). Then, in the Nash equilibrium, the agents will reach density ρ1 at time t = 1, in the limit as N goes to ∞.
Remark 4: In fact, the dependence of g on µ is not necessary. One can simply take g(x, µ) = g(x) = −λ(1, x).
With this terminal cost, we can still conclude that (λ, ρ? ) corresponds to a Nash equilibrium as well. This is due
to the fact that we fix the density flow first when we derive a Nash equilibrium. We might need the dependence of
g on µ to conclude the uniqueness of the equilibrium. It is unclear to us if this is the case.
IV. ZERO-NOISE LIMIT
In this section, we study the same problem (Problem 1), with however reduced disturbance. More specifically,
we consider a system of N agents with dynamics
dx_i(t) = Ax_i(t)dt + (1/(N−1)) Σ_{j≠i} Āx_j(t)dt + Bu_i(t)dt + √ε Bdw_i(t),   (20)
x_i(0) = x_0^i,  i = 1, . . . , N,
where ε > 0 represents the variance of the noise. We are especially interested in the limit behavior of the solution
to Problem 1 with dynamics (20) when ε goes to 0. Following the same arguments as in Section III, we arrive at
the coupled PDEs
∂λ/∂t + ∇λ·Ax + ∇λ·Āx̄_ρ⋆ + (ε/2) tr(BB′∇²λ) + ½ ∇λ·BB′∇λ = 0,   (21a)
∂ρ⋆/∂t + ∇·((Ax + Āx̄_ρ⋆ + BB′∇λ)ρ⋆) − (ε/2) tr(BB′∇²ρ⋆) = 0,   (21b)
ρ⋆(0, ·) = ρ0,  ρ⋆(1, ·) = ρ1.   (21c)
The optimal control strategy is given by u(t) = B 0 ∇λ(t, x(t)) and terminal cost g is as in (19) with adjusted
diffusivity.
Taking the limit of (21) as ε → 0 gives
∂λ/∂t + ∇λ·Ax + ∇λ·Āx̄_ρ⋆ + ½ ∇λ·BB′∇λ = 0,   (22a)
∂ρ⋆/∂t + ∇·((Ax + Āx̄_ρ⋆ + BB′∇λ)ρ⋆) = 0,   (22b)
ρ⋆(0, ·) = ρ0,  ρ⋆(1, ·) = ρ1.   (22c)
With similar analysis as in Section III we conclude that the above PDEs system has a (viscosity) solution [33]. In
particular, the solution (λ, ρ? ) to (22) has the form (18) with λ̂ being
λ̂(t, x) = inf_y { λ̂(0, y) + ½ (x − Φ(t, 0)y)′M(t, 0)⁻¹(x − Φ(t, 0)y) },
where
λ̂(0, x) = ψ(M_10^{−1/2} Φ_10 x) − ½ x′Φ_10′M_10⁻¹Φ_10 x
and ψ corresponds to the optimal transport map with prior dynamics ẋ = Ax + Bu, and marginal distributions ρ̂0 and ρ̂1 after coordinate transformation, see [22, Proposition 2]. The solution to (22) in fact solves the following problem.
Problem 5: Given N agents governed by (20) with ε = 0, and initial probability density ρ0, find a function g such that, in the Nash equilibrium with cost function (8), the agents would reach a specified density ρ1 at time t = 1, in the limit as N goes to ∞.
With the solution to (22), we can choose a terminal cost as in (19). The corresponding equilibrium control strategy
is u(t, x) = B 0 ∇λ(t, x).
Theorem 6: Consider N agents governed by (20) with ε = 0 and initial density ρ0 . Suppose the terminal cost
g in (8) is as in (19), then, in the Nash equilibrium, the agents will reach density ρ1 at time t = 1, in the limit as
N goes to ∞.
V. GAUSSIAN CASE
In the special case when ρ0 and ρ1 are normal (Gaussian) distributions, the solutions have a nice linear structure.
Let the two marginal distributions be
ρ0 ∼ N [m0 , Σ0 ],
ρ1 ∼ N [m1 , Σ1 ],
i.e., Gaussian distributions with, respectively, means m0, m1 and covariances Σ0, Σ1. When ε = 1, λ̂ in (15) equals
λ̂(t, x) = −½ x′Π(t)x + ½ ∫_0^t tr(BB′Π(s)) ds,
where Π(t) is the solution to the Riccati equation
Π̇(t) = −A0 Π(t) − Π(t)A + Π(t)BB 0 Π(t)
(23)
with boundary condition
Π(0) = Σ0^{−1/2} [ I/2 + Σ0^{1/2}Φ_10′M_10⁻¹Φ_10Σ0^{1/2} − ( I/4 + Σ0^{1/2}Φ_10′M_10⁻¹Σ1M_10⁻¹Φ_10Σ0^{1/2} )^{1/2} ] Σ0^{−1/2},
where Φ10 = Φ(1, 0), M10 = M (1, 0). And so, in view of (19), one choice of terminal cost is
g(x, µ) = ½ (x − x̄_µ)′Π(1)(x − x̄_µ) − m(1)·x.   (24)
In the above we have discarded some constant terms as they do not affect the final result.
Theorem 7: Consider N agents governed by (1) with initial density ρ0 ∼ N [m0 , Σ0 ]. Suppose the terminal
cost in (8) is (24). Then, in the Nash equilibrium, the agents will reach density ρ1 ∼ N [m1 , Σ1 ] at time t = 1, in
the limit as N goes to ∞.
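In practice Π(1) is obtained by integrating the Riccati equation (23) from the boundary condition above. The sketch below is our own scalar instantiation of these formulas, using the data of the example in Section VIII (A = 1, B = 1, ρ0 = N[1, 4], ρ1 = N[−4, 1]); it is intended only as a sanity check, not as the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B = 1.0, 1.0
Sigma0, Sigma1 = 4.0, 1.0
Phi10 = np.exp(A)                          # state transition of the prior dynamics over [0, 1]
M10 = (np.exp(2 * A) - 1) / (2 * A)        # reachability Gramian of (A, B) over [0, 1]
Pi0 = (0.5 + Sigma0 * Phi10**2 / M10
       - np.sqrt(0.25 + Sigma0 * Phi10**2 * Sigma1 / M10**2)) / Sigma0

def riccati(t, p):                         # scalar form of (23): dPi/dt = -2*A*Pi + (B*Pi)^2
    return [-2 * A * p[0] + (B * p[0]) ** 2]

Pi1 = solve_ivp(riccati, (0.0, 1.0), [Pi0], rtol=1e-9).y[0, -1]
print(Pi0, Pi1, 0.5 * Pi1)                 # 0.5*Pi(1) ~ 0.98, consistent with the terminal cost in Section VIII
```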
Following the discussion in Section IV, the solution to the problem with noise intensity ε is almost identical
to the above except that the initial condition of the Riccati equation (23) becomes
Π_ε(0) = Σ0^{−1/2} [ εI/2 + Σ0^{1/2}Φ_10′M_10⁻¹Φ_10Σ0^{1/2} − ( ε²I/4 + Σ0^{1/2}Φ_10′M_10⁻¹Σ1M_10⁻¹Φ_10Σ0^{1/2} )^{1/2} ] Σ0^{−1/2}.
Taking the limit as ε → 0 we obtain the solution to the deterministic problem, which corresponds to the initial condition
Π_0(0) = Σ0^{−1/2} [ Σ0^{1/2}Φ_10′M_10⁻¹Φ_10Σ0^{1/2} − ( Σ0^{1/2}Φ_10′M_10⁻¹Σ1M_10⁻¹Φ_10Σ0^{1/2} )^{1/2} ] Σ0^{−1/2}.
VI. STATIONARY CASE AND INVARIANT MEASURE
We now turn to the stationary counterpart of the Problem 1. We would like to design a cost function that
will lead the agents to achieve a given invariant measure ρ, if the agents follow the equilibrium strategy. In
particular, given N agents with identical dynamics (1) that attempt to minimize their control effort
J_i(u_i) = lim sup_{T→∞} (1/T) E{ ∫_0^T ½‖u_i(t)‖² dt },
we look for an extra cost g(x, µ) term added to the above such that, in the equilibrium state, the agents have some
specified distribution. The new cost function is
J_i(u_i) = lim sup_{T→∞} (1/T) E{ ∫_0^T [ ½‖u_i(t)‖² + g(x_i(t), µ^N(t)) ] dt }   (25)
where µN is the empirical distribution (3). Again we are interested in the mean-field limit of the problem, that is,
the case when N goes to ∞.
Let’s first recall some relevant results in the stationary mean-field game problems. Suppose the N agents with
dynamics (1) attempt to minimize the cost function
Z T
1
N
Ji (ui ) = lim sup E
f (xi (t), ui (t), µ (t))dt .
T →∞ T
0
We restrict ourselves to equilibria with symmetric stationary Markovian strategies
ui (t) = φ(xi (t)).
In the mean-field limit, one can adopt the following steps [5]. First, fix a probability measure µ and then solve the
standard stochastic control problem (parametrized by µ)
φ⋆ = argmin_φ lim_{T→∞} (1/T) E{ ∫_0^T f(x(t), φ(x(t)), µ(t)) dt }   (26)
subject to the dynamics
dx(t) = Ax(t)dt + Āx̄µ dt + Bφ(x(t))dt + Bdw(t).
(27)
Once this standard optimal control problem is solved, the remaining issue is finding the correct distribution µ such
that the stationary distribution of (27) with optimal control strategy
u? (t) = φ? (x(t))
coincides with µ. The solution to this mean-field game problem involves the coupled PDEs [2], [5]
½ tr(BB′∇²λ) + η − H^{ρ⋆}(x, −∇λ) = 0,   (28a)
∇·((Ax + Āx̄_ρ⋆ + Bφ⋆)ρ⋆) − ½ tr(BB′∇²ρ⋆) = 0,   (28b)
ρ⋆ ≥ 0,  ∫ ρ⋆ = 1,   (28c)
where η is a constant, u⋆ = φ⋆(x) is the minimizer of
H^ρ(x, p) = min_{u∈R^m} { p′(Ax + Āx̄_ρ + Bu) + f(x, u, ρ) }.
When the cost function is of the form (25), the PDEs boil down to
½ tr(BB′∇²λ) + η + ∇λ·Ax + ∇λ·Āx̄_ρ⋆ + ½ ∇λ·BB′∇λ − g(x, ρ⋆) = 0,   (29a)
∇·((Ax + Āx̄_ρ⋆ + BB′∇λ)ρ⋆) − ½ tr(BB′∇²ρ⋆) = 0,   (29b)
ρ⋆ ≥ 0,  ∫ ρ⋆ = 1.   (29c)
The existence of a solution (ρ? , λ) can be shown under some proper assumptions on g , see [2], [5].
Back to our problem, the cost function g in (25) becomes a design parameter, which is different from the
classic MFG setting. Our goal is to choose a function g such that the corresponding stationary distribution in Nash
equilibrium is ρ? . The solution relies on the same PDEs (29), but with different variables. Given a distribution ρ? ,
we need to find λ and the proper cost g that solve (29). It turns out that (29) has a solution only for a small class
of distributions ρ? , which we call the feasible distributions. We next focus on Gaussian distributions. In this case,
the feasible distributions can be characterized by some algebraic equations. The cases of general distributions will
be investigated in future study.
Let ρ? be a Gaussian distribution with mean m and covariance Σ. Plugging the ansatz
λ(x) = −½ x′Πx + n′x
with Π = Π′, into (29) yields (after discarding constant terms)
−½ tr(BB′Π) + η + ½ x′(−ΠA − A′Π + ΠBB′Π)x + n′(A − BB′Π)x − m′Ā′Πx − g(x, ρ⋆) = 0   (30a)
(A − BB 0 Π)Σ + Σ(A − BB 0 Π)0 + BB 0 = 0
(30b)
(A + Ā − BB 0 Π)m + BB 0 n = 0.
(30c)
In order for the solution to exist, in view of (30b), it is necessary that Σ satisfies
AΣ + ΣA0 ∈ range(fB ),
(31a)
where fB (X) = BX 0 + XB 0 is a map from Rn×m to the space of symmetric matrices (see [28] for other equivalent
algebraic conditions). Likewise, by (30c), the mean m has to satisfy
(A + Ā)m ∈ range(B).
(31b)
On the other hand, given (m, Σ) satisfying (31), assuming B has full column rank, then (30b) has a unique
symmetric solution [28]. Therefore, these two conditions are also sufficient. Now from (30a) it is easy to conclude
that a possible cost function is
g(x, ρ) = ½ x′Qx + n·(A − BB′Π)x − x̄_ρ·Ā′Πx,   (32a)
with
Q = −ΠA − A′Π + ΠBB′Π,   (32b)
with Π being the unique solution to (30b). Therefore, we have established the following result.
Theorem 8: Consider N agents governed by (1). Suppose the g function in the cost (25) is as in (32), then,
in the Nash equilibrium, the agents will reach stationary Gaussian distribution with mean m and covariance Σ, in
the limit as N goes to ∞.
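The construction is easiest to see in the scalar case. The sketch below is our own scalar instantiation of (30)–(32): it reuses the dynamics of the example in Section VIII (A = 1, Ā = −2, B = 1), while the target pair (m, Σ) is an illustrative choice, not a value from the paper.

```python
A, Abar, B = 1.0, -2.0, 1.0
m_target, Sigma = 0.0, 0.25

# (30b) scalar: 2*(A - B^2*Pi)*Sigma + B^2 = 0  =>  Pi = (2*A*Sigma + B^2) / (2*B^2*Sigma)
Pi = (2 * A * Sigma + B**2) / (2 * B**2 * Sigma)
# (30c) scalar: (A + Abar - B^2*Pi)*m + B^2*n = 0
n = -(A + Abar - B**2 * Pi) * m_target / B**2
# (32b) scalar: Q = -2*A*Pi + B^2*Pi^2
Q = -2 * A * Pi + (B * Pi)**2

def g(x, mean):
    # (32a): g(x, rho) = 0.5*Q*x^2 + n*(A - B^2*Pi)*x - mean*Abar*Pi*x
    return 0.5 * Q * x**2 + n * (A - B**2 * Pi) * x - mean * Abar * Pi * x

print(Pi, Q, n, g(1.0, m_target))    # Pi = 3, Q = 3, n = 0; closed loop A - B^2*Pi = -2 is stable
```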
VII. COOPERATIVE GAME
In this section we shift to a slightly different problem. Given the same interacting agents’ system (1), we
would like to investigate the density steering problem in the cooperative game setting. How to select an optimal
controller to drive the agents from given initial distribution ρ0 to terminal distribution ρ1 ? Again, we restrict ourselves
to equilibria given by symmetric Markovian strategies in closed-loop feedback form
ui (t) = φ(t, xi (t)), i = 1, . . . , N.
The cost function we attempt to minimize is the average control energy
)
(N Z
X 11
2
J(u) = E
kui (t)k dt .
0 2
(33)
(34)
i=1
We are interested in the mean-field limit, namely, the asymptotic behavior of the solution when N → ∞.
Problem 9: Given N agents governed by (1) with initial density ρ0 , find a control strategy (33) with minimum
control energy (34) so that the agents will reach density ρ1 at time t = 1, as N goes to ∞.
The major difference between this problem and the mean-field game is that all the agents always use the same
control strategy. A small perturbation on the control will affect the probability density flow as the perturbation is
applied to the controllers of all the agents, see [5], [34] for more discussions on their differences. The average
control energy (34) is equivalent to relative entropy of the controller system with respect to the uncontrolled system
[35], [36], [29]. Therefore, the above problem can also be viewed as an Schrödinger bridge problem for interacting
particle systems.
Problem 9 can be formulated as an optimal control problem over the McKean-Vlasov model
dx(t) = Ax(t)dt + Āx̄(t)dt + Bu(t)dt + Bdw(t), x(0) = x0 ∼ ρ0 .
(35)
It has the following fluid dynamic formulation. Let ρ(t, ·) be the probability density of the controlled process x(t),
then the optimal control problem can be stated as
inf_{ρ,u} ∫_0^1 ∫_{R^n} ½ ρ(t, x)‖u(t, x)‖² dx dt,   (36a)
∂ρ/∂t + ∇·((Ax + Āx̄_ρ + Bu)ρ) − ½ tr(BB′∇²ρ) = 0,   (36b)
ρ(0, ·) = ρ0,  ρ(1, ·) = ρ1.   (36c)
Proposition 2: If there exists (λ, ρ? ) satisfying
∂λ/∂t + ∇λ·Ax + ∇λ·Āx̄_ρ⋆ + ½ tr(BB′∇²λ) + ½ ∇λ·BB′∇λ + Āx·⟨∇λ, ρ⋆⟩ = 0,   (37a)
∂ρ⋆/∂t + ∇·((Ax + Āx̄_ρ⋆ + BB′∇λ)ρ⋆) − ½ tr(BB′∇²ρ⋆) = 0,   (37b)
with boundary conditions
ρ⋆(0, ·) = ρ0,  ρ⋆(1, ·) = ρ1,   (37c)
then (ρ? , u? = B 0 ∇λ) is a solution to (36).
Equations (37) are highly coupled. In general, one may not expect a solution to exist. But interestingly, as
we will see below, (37) always has a solution. In fact, we are going to construct a solution based on standard
Schrödinger bridge theory.
Let (ρ̂, λ̂) be as in (13)-(16),
m(t) = Φ̄(1, t)′M̂_10⁻¹(x̄_ρ1 − Φ̄_10 x̄_ρ0),   (38a)
and y(t) the solution to
ẏ(t) = (A + Ā)y(t) + BB′m(t),  y(0) = x̄_ρ0,   (38b)
where M̂10 = M̂ (1, 0) with
M̂(t, s) = ∫_s^t Φ̄(t, τ)BB′Φ̄(t, τ)′ dτ.
Define
γ(t) = −∫_0^t ( Āy(s)·m(s) + ½ m(s)·BB′m(s) ) ds,
λ(t, x) = λ̂(t, x − y(t)) + m(t)·x + γ(t),   (39a)
and
ρ⋆(t, x) = ρ̂(t, x − y(t)),   (39b)
then (λ, ρ? ) solves (37). Apparently, (39b) satisfies the boundary conditions (37c). To verify (37b), substitute (39)
into (37b), which gives
∂ρ⋆/∂t + ∇·((Ax + Āx̄_ρ⋆ + BB′∇λ)ρ⋆) − ½ tr(BB′∇²ρ⋆)
= ∂ρ̂/∂t − ∇ρ̂·((A + Ā)y + BB′m) + ∇·((Ax + BB′∇λ)ρ̂) + Ā⟨ξ, ρ̂(ξ − y)⟩·∇ρ̂ − ½ tr(BB′∇²ρ̂)
= −∇ρ̂·(Āy + BB′m) + ∇·(BB′mρ̂) + Ā⟨ξ, ρ̂(ξ − y)⟩·∇ρ̂ = 0.
Similarly, combining (39) and (37a) yields
∂λ/∂t + ∇λ·Ax + ∇λ·Āx̄_ρ⋆ + ½ tr(BB′∇²λ) + ½ ∇λ·BB′∇λ + Āx·⟨∇λ, ρ⋆⟩
= ∂λ̂/∂t − ∇λ̂·((A + Ā)y + BB′m) + ṁ·x + γ̇ + (∇λ̂ + m)·(Ax + Āx̄_ρ⋆) + ½ tr(BB′∇²λ̂) + ½ (∇λ̂ + m)·BB′(∇λ̂ + m) + Āx·⟨∇λ̂ + m, ρ⋆⟩
= −∇λ̂·Āy + ṁ·x + γ̇ + (∇λ̂ + m)·Āx̄_ρ⋆ + m·Ax + ½ m·BB′m + Āx·m + Āx·⟨∇λ̂, ρ⋆⟩
= Āx·⟨∇λ̂(ξ − y), ρ̂(ξ − y)⟩ = 0.
Therefore, the pair (ρ? , u? = B 0 ∇λ) is indeed a solution to (37). Next we prove that this pair (ρ? , u? ) provides a
solution to the optimal control problem (36).
Let ū(t) = E{u(t)}, then, by (35), we have
dx̄(t) = (A + Ā)x̄(t)dt + Bū(t)dt   (40)
and
dx̃(t) = Ax̃(t)dt + Bũ(t)dt + Bdw,   (41)
where x̃ = x − x̄ and ũ = u − ū. The control energy can then be decomposed into two parts as
E{ ∫_0^1 ½‖u(t)‖² dt } = ∫_0^1 ½‖ū(t)‖² dt + E{ ∫_0^1 ½‖ũ(t)‖² dt }.
These two parts of control energy, corresponding to ū and u − ū respectively, can be minimized independently
since the dynamics (40) and (41) are decoupled. We next show that
i) ū? minimizes
∫_0^1 ½‖ū(t)‖² dt   (42)
ii) ũ⋆ minimizes
E{ ∫_0^1 ½‖ũ(t)‖² dt }.   (43)
For i), recalling
u? (t) = B 0 ∇λ = B 0 ∇λ̂(t, x(t) − y(t)) + B 0 m(t),
we have
ū? (t) = B 0 E{∇λ̂(t, x(t) − y(t))} + B 0 m(t)
= B 0 h∇λ̂(t, x − y(t)), ρ? (t, x)i + B 0 m(t) = B 0 m(t).
Using standard optimal control, it is easy to see that ū? (t) minimizes (42) subject to (40) and boundary conditions
x̄(0) = x̄ρ0 , x̄(1) = x̄ρ1 .
We next show ii). Note
ũ? (t) = B 0 ∇λ̂(t, x(t) − y(t)) = B 0 ∇λ̂(t, x(t) − x̄(t)) = B 0 ∇λ̂(t, x̃(t)).
By Schrödinger bridge theory [21], [22] (see also (14)-(15)), it minimizes
E{ ∫_0^1 ½‖ũ(t)‖² dt }
subject to
dx̃(t) = Ax̃(t)dt + B ũ(t)dt + Bdw
and marginal distributions of x̃(0) and x̃(1), which are ρ̂0 and ρ̂1 , respectively. Hence we have established the
following result.
Theorem 10: The pair (ρ? , u? = B 0 ∇λ) with ρ? , λ as in (39) solves (36).
A. Linear-Quadratic case
When both of the marginals ρ0 and ρ1 are normal distributions, the optimal control u? is a linear state-feedback
control. Let the two marginal distributions ρ0 and ρ1 be
ρ0 ∼ N [m0 , Σ0 ], ρ1 ∼ N [m1 , Σ1 ].
By Theorem 10, we need only to compute λ as in (39a), which is
λ(t, x) = λ̂(t, x − y(t)) + m(t) · x + γ(t).
The function λ̂ corresponds to the Schrödinger bridge (14), which satisfies [27]
∇λ̂(t, x) = −Π(t)x,
where Π(t) is the solution to the Riccati equation
Π̇(t) = −A0 Π(t) − Π(t)A + Π(t)BB 0 Π(t)
with boundary condition
−1/2
Π(0) = Σ0
I
1/2
1/2
−1
[ + Σ0 Φ010 M10
Φ10 Σ0
2
I
1/2
1/2
−1/2
−1
−1
− ( + Σ0 Φ010 M10
Σ1 M10
Φ10 Σ0 )1/2 ]Σ0 .
4
It follows the optimal control is
u? (t) = B 0 ∇λ(t, x(t)) = −B 0 Π(t)(x(t) − y(t)) + B 0 m(t) = −B 0 Π(t)x(t) + B 0 n(t),
where
n(t) = Π(t)y(t) + m(t)
−1
= Π(t)Φ̄(t, 1)M̂ (1, t)M̂10
Φ̄10 m0
−1
−1
+ Π(t)M̂ (t, 0)Φ̄(1, t)0 M̂10
m1 + Φ̄(1, t)0 M̂10
(m1 − Φ̄10 m0 ).
B. Zero-noise limit
We study the optimal steering problem for the McKean-Vlasov model (35) with smaller disturbance √ε dw(t), namely,
dx(t) = Ax(t)dt + Āx̄(t)dt + Bu(t)dt + √ε Bdw(t),  x(0) = x0 a.s.   (44)
In particular, we are interested in the zero-noise limit of this problem. That is, optimal steering problem for dynamics
dx(t) = Ax(t)dt + Āx̄(t)dt + Bu(t)dt, x(0) = x0 a.s..
(45)
We show that the probability flow of the solution to the latter is the limit of that of the former as ε goes to 0. This
is achieved in a constructive manner.
Let’s start with the steering problem for the dynamics
dx(t) = Ax(t)dt + Bu(t)dt
with marginals ρ̂0 and ρ̂1 . This problem has been studied in [22] and the solution is
ũ(t, x) = B′Φ(1, t)′M_10⁻¹( T ∘ T_t⁻¹(x) − Φ_10 T_t⁻¹(x) )   (46)
where T is the generalized optimal mass transport map [22] with marginals ρ̂0, ρ̂1, and
T_t(x) = Φ(t, 1)M(1, t)M_10⁻¹Φ_10 x + M(t, 0)Φ(1, t)′M_10⁻¹T(x).
The corresponding distribution flow is
ρ̂(t, ·) = (T_t)_♯ ρ̂0.
Note that ρ̂ and ũ satisfy the continuity equation
∂ρ̂/∂t + ∇·((Ax + Bũ)ρ̂) = 0.
We claim that
u? (t, x) = ũ(t, x − y(t)) + B 0 m(t)
(47)
with y, m in (38), is the optimal control strategy for the steering problem for the dynamics (45) and the corresponding
distribution flow is
ρ? (t, x) = ρ̂(t, x − y(t)).
(48)
We shall skip the proof as it is similar to the case with disturbance (see (39)-(43)).
The solution to the density steering problem with dynamics (44) weakly converges to ρ⋆ as ε goes to 0. This
follows directly from the fact that a Schrödinger bridge converges to the corresponding optimal transport solution [22]
as ε goes to 0.
VIII. EXAMPLES
Consider N agents with dynamics
dx_i(t) = x_i(t)dt − (2/(N−1)) Σ_{j≠i} x_j(t)dt + dw_i(t),  1 ≤ i ≤ N.
The two marginal distributions ρ0 and ρ1 are two normal distributions
ρ0 ∼ N [1, 4],
ρ1 ∼ N [−4, 1].
A. Noncooperative game
One choice of terminal cost that will steer the agents from ρ0 to ρ1 is
g(x, µ) = 0.9805(x − x̄µ )2 + 4.3679x.
Figure 1 showcases the evolution of the probability density in the Nash equilibrium. To show that the distribution
of the agents would evolve according to Figure 1, we simulated the dynamics for a system with N = 20000 agents
under the optimal strategy. Figure 2 and Figure 3 depict the empirical distributions of the particles at time t = 0
and t = 1. They match with the theoretical distributions ρ0 and ρ1 very well. We also show the empirical mean of
these particles in Figure 4, which perfectly matches the theoretical result.
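A compact version of such a particle simulation is sketched below for this scalar example (A = 1, Ā = −2, B = 1, ρ0 = N[1, 4], ρ1 = N[−4, 1]). The equilibrium feedback u(t, x) = −Π(t)(x − y(t)) + m(t) is assembled from our own scalar instantiation of the formulas in Sections III and V, so treat it as a sanity check rather than the authors' code.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
A, Abar = 1.0, -2.0
Phi10, M10 = np.exp(A), (np.exp(2 * A) - 1) / (2 * A)
Sigma0, Sigma1, m0, m1 = 4.0, 1.0, 1.0, -4.0

Pi0 = (0.5 + Sigma0 * Phi10**2 / M10
       - np.sqrt(0.25 + Sigma0 * Phi10**2 * Sigma1 / M10**2)) / Sigma0
Pi = solve_ivp(lambda t, p: [-2 * A * p[0] + p[0]**2], (0, 1), [Pi0],
               dense_output=True, rtol=1e-9).sol
m = lambda t: np.exp(1 - t) * (m1 - np.exp(A + Abar) * m0)    # the coupled Gramian equals 1 here
y = lambda t: np.exp(-t) * m0 + t * np.exp(1 - t) * (m1 - np.exp(A + Abar) * m0)

N, steps = 20000, 400
dt = 1.0 / steps
x = rng.normal(m0, np.sqrt(Sigma0), size=N)
for s in range(steps):
    t = s * dt
    u = -Pi(t)[0] * (x - y(t)) + m(t)                         # equilibrium feedback
    coupling = Abar * (x.sum() - x) / (N - 1)                 # (1/(N-1)) * sum_{j != i} Abar*x_j
    x = x + (A * x + coupling + u) * dt + rng.normal(scale=np.sqrt(dt), size=N)
print(x.mean(), x.var())                                      # close to m1 = -4 and Sigma1 = 1
```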
Fig. 1: Time evolution of probability densities
Fig. 2: Empirical distribution of x(0)
B. Cooperative game
Figure 5 depicts the time evolution of the probability densities with these two marginal distributions in the
cooperative game setting. Similarly, we ran some simulations for a particle system with N = 20000 and obtained
Figure 6 and Figure 7 as the empirical distributions of the agents at time t = 0 and t = 1. We also show the empirical
mean of these particles in Figure 8. Clearly the mean differs from that in the Nash equilibrium of the noncooperative
game setting.
Fig. 3: Empirical distribution of x(1)
Fig. 4: Time evolution of mean x̄(t)
IX. CONCLUSION
We introduce a paradigm to steer a large number of agents from one distribution to another. The problem lies
in the intersection of MFG, OMT and optimal control. We study such problems for linearly weakly interacting
agents and solve the problem using tools from all these three areas. Results for several extensions such as stationary
and cooperative game problems are also presented. We expect this paradigm to bring in a new dimension to the
study and applications of MFG and OMT.
Fig. 5: Time evolution of probability densities
Fig. 6: Empirical distribution of x(0)
REFERENCES
[1] M. Huang, R. P. Malhamé, and P. E. Caines, “Large population stochastic dynamic games: closed-loop mckean-vlasov systems and the
nash certainty equivalence principle,” Communications in Information & Systems, vol. 6, no. 3, pp. 221–252, 2006.
[2] J.-M. Lasry and P.-L. Lions, “Mean field games,” Japanese Journal of Mathematics, vol. 2, no. 1, pp. 229–260, 2007.
[3] B. Jovanovic and R. W. Rosenthal, “Anonymous sequential games,” Journal of Mathematical Economics, vol. 17, no. 1, pp. 77–87,
1988.
[4] M. Nourian and P. E. Caines, “ε-Nash mean field game theory for nonlinear stochastic dynamical systems with major and minor
agents,” SIAM Journal on Control and Optimization, vol. 51, no. 4, pp. 3302–3331, 2013.
Fig. 7: Empirical distribution of x(1)
Fig. 8: Time evolution of mean x̄(t)
[5] R. Carmona, F. Delarue, and A. Lachapelle, “Control of McKean–Vlasov dynamics versus mean field games,” Mathematics and
Financial Economics, vol. 7, no. 2, pp. 131–166, 2013.
[6] M. Huang, P. E. Caines, and R. P. Malhamé, “Individual and mass behaviour in large population stochastic wireless power control
problems: centralized and nash equilibrium solutions,” in Decision and Control, 2003. Proceedings. 42nd IEEE Conference on, vol. 1.
IEEE, 2003, pp. 98–103.
[7] ——, “Large-population cost-coupled LQG problems with nonuniform agents: Individual-mass behavior and decentralized ε-Nash
equilibria,” Automatic Control, IEEE Transactions on, vol. 52, no. 9, pp. 1560–1571, 2007.
[8] A. Bensoussan, K. Sung, S. C. P. Yam, and S.-P. Yung, “Linear-quadratic mean field games,” Journal of Optimization Theory and
Applications, vol. 169, no. 2, pp. 496–529, 2016.
[9] M. Bardi, “Explicit solutions of some linear-quadratic mean field games,” Networks and Heterogeneous Media, vol. 7, no. 2, pp. 243–261, 2012.
[10] G. Monge, Mémoire sur la théorie des déblais et des remblais.
De l’Imprimerie Royale, 1781.
[11] L. V. Kantorovich, “On the transfer of masses,” in Dokl. Akad. Nauk. SSSR, vol. 37, no. 7-8, 1942, pp. 227–229.
[12] W. Gangbo and R. J. McCann, “The geometry of optimal transportation,” Acta Mathematica, vol. 177, no. 2, pp. 113–161, 1996.
[13] L. C. Evans, “Partial differential equations and Monge-Kantorovich mass transfer,” Current developments in mathematics, vol. 1997,
no. 1, pp. 65–126, 1997.
[14] L. C. Evans and W. Gangbo, Differential equations methods for the Monge-Kantorovich mass transfer problem. American Mathematical
Soc., 1999, vol. 653.
[15] C. Villani, Topics in Optimal Transportation. American Mathematical Soc., 2003, no. 58.
[16] L. Ambrosio, N. Gigli, and G. Savaré, Gradient flows: in metric spaces and in the space of probability measures. Springer, 2006.
[17] C. Villani, Optimal Transport: Old and New. Springer, 2008, vol. 338.
[18] L. Ambrosio and N. Gigli, “A user’s guide to optimal transport,” in Modelling and optimisation of flows on networks. Springer, 2013,
pp. 1–155.
[19] F. Santambrogio, “Optimal transport for applied mathematicians,” Birkhäuser, NY, 2015.
[20] J.-D. Benamou and Y. Brenier, “A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem,” Numerische
Mathematik, vol. 84, no. 3, pp. 375–393, 2000.
[21] Y. Chen, T. T. Georgiou, and M. Pavon, “On the relation between optimal transport and Schrödinger bridges: A stochastic control
viewpoint,” Journal of Optimization Theory and Applications, vol. 169, no. 2, pp. 671–691, 2016.
[22] ——, “Optimal transport over a linear dynamical system,” IEEE Transactions on Automatic Control, vol. 62, no. 5, pp. 2137–2152,
2017.
[23] P. Dai Pra, “A stochastic control approach to reciprocal diffusion processes,” Applied mathematics and Optimization, vol. 23, no. 1,
pp. 313–329, 1991.
[24] C. Léonard, “From the Schrödinger problem to the Monge–Kantorovich problem,” Journal of Functional Analysis, vol. 262, no. 4, pp.
1879–1920, 2012.
[25] ——, “A survey of the Schrödinger problem and some of its connections with optimal transport,” Discrete Contin. Dyn. Syst. A, vol. 34,
no. 4, pp. 1533–1574, 2014.
[26] I. Gentil, C. Léonard, and L. Ripani, “About the analogy between optimal transport and minimal entropy,” arXiv:1510.08230, 2015.
[27] Y. Chen, T. T. Georgiou, and M. Pavon, “Optimal steering of a linear stochastic system to a final probability distribution, Part I,” IEEE
Trans. on Automatic Control, vol. 61, no. 5, pp. 1158–1169, 2016.
[28] ——, “Optimal steering of a linear stochastic system to a final probability distribution, Part II,” IEEE Trans. on Automatic Control,
vol. 61, no. 5, pp. 1170–1180, 2016.
[29] M. Fischer, “On the form of the large deviation rate function for the empirical measures of weakly interacting systems,” Bernoulli,
vol. 20, no. 4, pp. 1765–1801, 2014.
[30] P. Cardaliaguet, “Notes on mean field games,” Tech. Rep., 2010.
[31] Y. Chen, “Modeling and control of collective dynamics: From Schrödinger bridges to optimal mass transport,” Ph.D. dissertation,
University of Minnesota, 2016.
[32] Y. Chen, T. T. Georgiou, and M. Pavon, “Optimal steering of a linear stochastic system to a final probability distribution, Part III,”
IEEE Trans. on Automatic Control, in print, 2017.
[33] W. H. Fleming and H. M. Soner, Controlled Markov processes and viscosity solutions. Springer Science & Business Media, 2006, vol. 25.
[34] R. Carmona and F. Delarue, “Forward-backward stochastic differential equations and controlled McKean-Vlasov dynamics,” The Annals
of Probability, vol. 43, no. 5, pp. 2647–2700, 2015.
[35] D. A. Dawson and J. Gärtner, “Large deviations from the McKean-Vlasov limit for weakly interacting diffusions,” Stochastics: An
International Journal of Probability and Stochastic Processes, vol. 20, no. 4, pp. 247–308, 1987.
[36] J. Feng and T. G. Kurtz, Large deviations for stochastic processes.
American Mathematical Society Providence, 2006, vol. 131.
arXiv:1709.04240v2 [cs.AI] 16 Sep 2017
A Comparison of Public Causal Search Packages on Linear, Gaussian Data with No Latent Variables1
Joseph D. Ramsey and Bryan Andrews
Pittsburgh, PA, USA
Abstract
We compare Tetrad (Java) algorithms to the other public software packages
BNT (Bayes Net Toolbox, Matlab), pcalg (R), bnlearn (R) on the “vanilla”
task of recovering DAG structure to the extent possible from data generated
recursively from linear, Gaussian structural equation models (SEMs) with no
latent variables, for random graphs, with no additional knowledge of variable order or adjacency structure, and without additional specification of
intervention information. Each one of the above packages offers at least one
implementation suitable to this purpose. We compare them on adjacency
and orientation accuracy as well as time performance, for fixed datasets. We
vary the number of variables, the number of samples, and the density of the graph, for a total of 27 combinations, averaging all statistics over 10 runs,
for a total of 270 datasets. All runs are carried out on the same machine and
on their native platforms. An interactive visualization tool is provided for
the reader who wishes to know more than can be documented explicitly in
this report.
Keywords: Causal Search, Linear, Gaussian, benchmarking
1. Introduction
Understanding how variables relate to one another causally has been of
interest in many fields, including biology, neuroscience, climate science, economics, and the social sciences. It has been argued that causal research is best carried out through intervention on variables or through controlled randomized experiments, though this is often prohibitively expensive, unethical, or
1 Revision 1.
impossible. An alternative is the discovery of causal relations from observational data alone. Numerous algorithms have been constructed to this end,
which fall generally into one of two groups, constraint-based (like the PC
algorithm) or score-based (like the GES algorithm). In addition to understanding how these algorithms relate to one another theoretically, it is at a
basic level of interest to see how they relate to one another in performance.
To this end, we compare Tetrad implementations (in Java)2 to a number of
widely-used implementations from publicly accessible packages: BNT (Matlab), pcalg (R, C),3 bnlearn (R, C++),4 on their native platforms and on
the same machine. Notably, through the Center for Causal Discovery5 , the
Tetrad algorithms have been given wrappers for R and Python, so there is usually no need for researchers to give up their accustomed platform in order to use them. We compare them, however, in their native Java.6
We focus on what we call the “vanilla” task–data simulated i.i.d. recursively from linear, Gaussian structural equation models (SEMs), with no
latent variables, from graphs that are not particularly “surprising”, and with
no intervention data. To make random graphs, we simply add edges randomly
in the forward direction to variables in a linear order. Coefficients for edges
are drawn uniformly from (0.2, 0.9); exogenous error variances from (1, 3).7
We study this task, because each of the platforms studied has at least one
algorithm to address it, so the comparisons from one platform to the next on
the same datasets are meaningful. We vary sample size as 100, 500 or 1000;
we vary number of variables in the generating graph as 50, 100, or 500; we
vary the average degree of the graph as 2, 4, or 6. For each combination, we
simulate and store 10 random graphs and datasets, with different coefficients
and exogenous error variances.
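As an illustration of this generating process, a minimal sketch is given below (Python/NumPy; our own reconstruction, not the Tetrad simulator actually used, and the exact edge-selection routine, the handling of coefficient signs, and the absence of intercepts are assumptions).

```python
import numpy as np

# Sketch of the "vanilla" generator: a random DAG over variables in a linear order
# with a target average degree, coefficients drawn uniformly from (0.2, 0.9),
# exogenous error variances from (1, 3), and i.i.d. samples generated recursively
# in causal order.  Columns are shuffled so variable order carries no information.

def simulate_vanilla(n_vars=50, avg_degree=2, n_samples=500, rng=None):
    rng = rng or np.random.default_rng()
    n_edges = int(avg_degree * n_vars / 2)               # average degree = 2 * |E| / |V|
    pairs = [(i, j) for i in range(n_vars) for j in range(i + 1, n_vars)]
    chosen = rng.choice(len(pairs), size=n_edges, replace=False)
    B = np.zeros((n_vars, n_vars))                        # B[i, j] = coefficient of i -> j
    for k in chosen:
        i, j = pairs[k]
        B[i, j] = rng.uniform(0.2, 0.9)                   # sign convention is an assumption
    err_var = rng.uniform(1.0, 3.0, size=n_vars)
    X = np.zeros((n_samples, n_vars))
    for j in range(n_vars):                               # recursive generation in causal order
        X[:, j] = X @ B[:, j] + rng.normal(scale=np.sqrt(err_var[j]), size=n_samples)
    perm = rng.permutation(n_vars)
    return B, X[:, perm], perm                            # B is indexed in the original order
```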
We recognize that there are many types of data on which one may wish
to compare causal search algorithms. One may wish to look at linear or
nonlinear connection functions, Gaussian or non-Gaussian exogenous errors,
2 http://www.phil.cmu.edu/tetrad/
3 https://cran.r-project.org/package=pcalg
4 http://www.bnlearn.com/
5 http://www.ccd.pitt.edu/
6 http://www.phil.cmu.edu/tetrad/
7 A “surprising” graph might be one that is, for example, scale-free, so that some nodes have many adjacents, others very few. Typically nodes with many adjacents are difficult to analyze unless all arrows point out from them.
continuous or discrete variables, or a mixture of continuous and discrete.8 One
may wish to assume knowledge of time order or not, or knowledge of adjacent structure. We take the attitude, though, that these problems need to
be taken up one at a time as algorithms become publicly available. Since
the “vanilla” task has taken up so much of the effort of so many researchers
over so many years, it would be good to get a snapshot of progress on it.
We will draw some very basic conclusions from this study after presentation
of results. Our attitude is simply that, while there is great value in comparing algorithms based on their theoretical virtues, there is also at a base
level value in comparing them head to head on a common task, for common
parameter values, on a common machine, just on their performance. This
way, the strategies that are most effective quickly become apparent and may
be pursued further, and strategies that aren’t as effective may be pursued,
after some thought, perhaps less vigorously.
We run each algorithm on its native platform (Matlab, R, or Java), on
exactly the same datasets and on exactly the same machine, and compile
the results into a single table. We believe this renders the results comparable. To our knowledge, there is no such previous comparison of all of these
implementations together, though comparisons of subsets have been pursued.
The table of results that comes out of this is quite large, too large really
to draw good conclusions by eye without a tool to do so. To this end, we offer
an interactive visualization comparison of the results. We give some results
individually in the Results section below, but the reader is encouraged to try
the visualization tool and draw their own conclusions.
In Section 2 we give the simulation task on which we are comparing the
various algorithms. In Section 3, we specify the comparison procedure. In
Section 4, we preview the packages and algorithms we will compare. In
Section 5, we describe the visualization tool we will use. In Section 6, we
give some sample results. In Section 7, we give a discussion of these results.
In Section 8, we draw some conclusions. It should be noted (as we will in
fact note) that there are many more results than are described in the text
of the paper; we give these results in an Appendix. This table of results is
available as an attached file with this report for use with the visualization
tool. See Section 5.
8 See, for example, Andrews et al., 2017.
2. The Simulation Task
We generate data, at various sample sizes, from models where the structure should be recoverable given unbounded sample size. Most of the algorithms listed can recover the pattern (CPDAG) of the true DAG, some the
adjacency structure of the true DAG. We are interested to know how the
algorithms scale in three dimensions. First, we’d like to know how they scale
up with sample size. Second, we’d like to know how they scale up with the
number of variables being studied. Third, we’d like to know how they scale
up with the density of the generating graph. We, therefore, create datasets
for models that are varied along these three dimensions. For sample size, we
chose 100, 500, or 1000 samples. 100 we will take to be a “small” sample,
although for some problems it’s large. 1000 is usually a large sample, where
one expects near convergence in the typical case. For numbers of variables,
we choose 50, 100, or 500 variables. 500 is not an upper limit for all of the
algorithms we study, not by any means. But it is already out of reach for
some of them, so in the interest of providing a common set of datasets to
study across as many algorithms as possible, we limit it thus. Likewise, 50
is by no means necessarily a small number of variables; for some problems,
that’s in fact quite large, but we are interested in how these algorithms scale,
so we pick 50 as a relatively small number of variables to study. For the density of the graph, we vary the average degree, which is the average
total number of parents and children each node has in the true DAG. We
choose values of 2, 4, and 6. Again, a graph of average degree 6 is not necessarily the densest graph one may wish to analyze. In fact, for some scientific
problems, for instance in neuroscience, the density of the true model may
be much greater than that. Nevertheless, an average degree of 6 is already
out of reach for many algorithms, including many of the algorithms we are
comparing, so we limit ourselves thus.
This yields a total of 27 combinations. For each combination, we simulate
ten graphs and data sets randomly choosing simulation parameters for each
dataset, for a total of 270 datasets. These are stored in a public GitHub
repository.9 Readers are encouraged to try their hand at these datasets to
see if they can improve on the performances described here.
9 https://github.com/cmu-phil/comparison-data
3. The Comparison Procedure
We adapted the Algorithm Comparison Tool in Tetrad [10] to do the
comparisons. That tool automates the simulation of datasets and the running
of lists of algorithm variants on that data to produce a final tabular report
of performance statistics. For various reasons, this was ill-suited to the task
of comparing algorithms cross-platform, so adjustments were made. First,
in the original tool, elapsed times were calculated automatically on the fly without a record being made of them; we explicitly made a record of these. Second, result graphs for the existing tool were stored internally and results tabulated over them; as an adjustment, we wrote these graphs out to files in a directory structure. Third, since graphs on other platforms are saved in formats different from Tetrad's graph format, parsers needed to be
written to read graphs from each platform, sometimes for each algorithm, so
that in the end all graphs could be compared to the correct “true” DAG for
each simulation; we wrote these parsers. Graph outputs and elapsed time
files were stored in parallel directories and classes were added to reload the
original simulation data and true DAGs, the result graphs and elapsed times,
and to calculate performance statistics and present them in a table in the
manner of the original algorithm comparison tool.
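A minimal sketch of this record-keeping is shown below (Python; the directory layout and file naming are our own assumptions, not those of the adapted tool). Each algorithm run writes its estimated graph and its elapsed time to parallel directories so that results produced on different platforms can later be reloaded and scored against the stored true DAGs.

```python
import json
import os
import time

# Sketch only: run one algorithm on one dataset and record its output graph and
# elapsed time under parallel "graphs" and "elapsed" directories.

def run_and_record(algorithm, name, data, out_dir, run_index):
    start = time.time()
    graph = algorithm(data)                               # returns a collection of edges
    elapsed = time.time() - start
    os.makedirs(os.path.join(out_dir, "graphs", name), exist_ok=True)
    os.makedirs(os.path.join(out_dir, "elapsed", name), exist_ok=True)
    with open(os.path.join(out_dir, "graphs", name, f"graph.{run_index}.json"), "w") as f:
        json.dump(sorted(map(list, graph)), f)             # edges in a common, parseable format
    with open(os.path.join(out_dir, "elapsed", name, f"elapsed.{run_index}.txt"), "w") as f:
        f.write(f"{elapsed:.4f}\n")
    return elapsed
```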
Importantly, all algorithms were run on their native platforms, with timing results calculated on those platforms. To make the timing results comparable, we ran all algorithms on the same machine, with a 3.3 GHz Intel i7 processor and 16 GB of RAM. This processor has two cores with two hardware
threads each, so there is some speedup available for algorithms that can take
advantage of parallelization. The version of Matlab used was 9.2.0.556344
(R2017a); the version of R used was 3.4.1; the version of Java used was
1.8.0_131. The version of BNT used was last updated 17 Oct 2007; a newer
version could not be found. The pcalg code used was version 2.5.0-0, packaged 2017-07-11 and built from sources. The version of bnlearn used was 4.2,
published 2017-07-03, and built from sources. The version of Tetrad used was
commit 39b8abd in the Tetrad GitHub repository from 8/3/2017. The end
result of this analysis was a table of simulation results for each algorithm
or algorithm variant, each sample size, each number of variables, and each
average degree. All statistics were averaged over 10 runs.
While the algorithms comparison tool allows arbitrary statistics to be
calculated and recorded in the final table, we settle on the following list of
statistics, which seem to us to be fairly expressive. Let ATP be the number of true positive adjacency judgments, AFP the number of false positive adjacency judgments, and AFN the number of false negative adjacency judgments; let AHTP, AHFP, and AHFN be the corresponding judgments for arrowheads in the graph, where X → Y counts as a true positive arrowhead if it occurs in both the true DAG and the estimated graph. Then we have the following measures:

Vars: Number of variables in the graph
Deg: Average degree of the graph
N: Number of simulated samples
AP: Adjacency Precision = ATP / (ATP + AFP)
AR: Adjacency Recall = ATP / (ATP + AFN)
AHP: Arrowhead Precision = AHTP / (AHTP + AHFP)
AHR: Arrowhead Recall = AHTP / (AHTP + AHFN)
McAdj: Matthews correlation coefficient for adjacencies
  = (ATP · ATN − AFP · AFN) / sqrt((ATP + AFP)(ATP + AFN)(ATN + AFP)(ATN + AFN))
McArrow: Matthews correlation coefficient for arrowheads
  = (AHTP · AHTN − AHFP · AHFN) / sqrt((AHTP + AHFP)(AHTP + AHFN)(AHTN + AHFP)(AHTN + AHFN))
E: Elapsed Time in Seconds
Here, the Matthews correlation ranges from −1 to 1, with higher positive values indicating greater agreement between the estimated graph and the true DAG, for adjacency or arrowhead judgments respectively. The Matthews correlation is designed to give an aggregate accuracy over precision and recall, balancing well when the sizes of the constituent sets (TP, FP, FN) are very different. We report AP, AR, AHP, AHR, and E in the results section below but include McAdj and McArrow in the tables in the Appendix.
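As a concrete illustration, the adjacency statistics can be computed from edge sets as in the following sketch (Python; our own illustration, not the comparison tool's code). The arrowhead statistics are computed the same way over ordered pairs; the convention used for counting true negatives is an assumption here.

```python
import math

# Sketch: adjacency precision/recall and Matthews correlation from a true DAG and
# an estimated graph.  Adjacencies are frozensets {x, y}; arrowheads would be
# ordered pairs (x, y), scored analogously.

def prf_mcc(tp, fp, fn, tn):
    precision = tp / (tp + fp) if tp + fp else float("nan")
    recall = tp / (tp + fn) if tp + fn else float("nan")
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else float("nan")
    return precision, recall, mcc

def adjacency_stats(true_adj, est_adj, n_vars):
    tp = len(true_adj & est_adj)
    fp = len(est_adj - true_adj)
    fn = len(true_adj - est_adj)
    tn = n_vars * (n_vars - 1) // 2 - tp - fp - fn        # remaining unordered pairs are negatives
    return prf_mcc(tp, fp, fn, tn)
```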
In some cases results are not provided. These are cases where running
time for a single run of the algorithm was greater than 10 minutes.
4. The Packages and their Algorithms Relevant to this Task
We make some general remarks that apply to multiple packages. Algorithms in these packages, as is generally the case, are commonly said to be
of two sorts, constraint-based or score-based. The distinction is blurred, as
has been noted by [7].
The PC-style algorithms are all constraint-based because they operate
entirely from considerations of conditional independence, which constitute
constraints on the search. They take a conditional independence test as an
oracle; the data themselves are screened off by the conditional independence
test. In fact, the algorithms could operate without data if the conditional
independence facts alone were available. So a sensible way to check the
correctness of these algorithms is to feed them d-separation facts from a
known true graph and see if they can recover the CPDAG or adjacencies of
the DAG, depending on the algorithm.
The GES algorithm (Chickering 2002), by contrast, is said to be score-based, on the strength of the assertion that it relies on Bayesian scores like BIC for its search. However, it can also be carried out using conditional independence tests as oracles. To make this point, we include some runs of
FGES for Tetrad using the Fisher Z test. For the Fisher Z runs, we use α − p
as a “score,” where α is the alpha cutoff for PC and p is the p-value for
the conditional independence test. This is not a proper score, but it does
convey the conditional independence information properly, and as can be
seen, it works well. Also, we have undertaken to check the correctness of
FGES from d-separation facts alone. One way to do this is to return a score
of +1 for conditional d-connection and -1 for conditional d-separation. If one
does this, GES should, in fact, give the same graphs back from d-separation
facts alone as PC, and it does, for all of the FGES implementations. We
assume similar checks have been made for pcalg. In general, however, using
a score for GES that prioritizes true colliders changes the priority of addition
and removal of edges in the forward and backward stages, in ways that help
to find true colliders more quickly. This shortens and stabilizes the search.
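A sketch of the Fisher Z test and of the α − p pseudo-score is given below (Python; our own reconstruction of the idea, not the Tetrad code). An oracle version for correctness checks would simply return +1 for conditional d-connection and −1 for conditional d-separation, as described above.

```python
import numpy as np
from scipy.stats import norm

# Sketch: Fisher Z conditional-independence test and the "alpha - p" pseudo-score.
# corr is the sample correlation matrix, n the sample size; x, y are column indices
# and s is a list of conditioning column indices.

def fisher_z_pvalue(corr, n, x, y, s):
    idx = [x, y] + list(s)
    prec = np.linalg.inv(corr[np.ix_(idx, idx)])          # precision of the relevant block
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])    # partial correlation of x, y given s
    r = np.clip(r, -0.9999999, 0.9999999)
    z = 0.5 * np.log((1 + r) / (1 - r))
    stat = np.sqrt(n - len(s) - 3) * abs(z)
    return 2 * (1 - norm.cdf(stat))

def alpha_minus_p_score(corr, n, x, y, s, alpha=0.001):
    # positive when x and y look dependent given s (p < alpha), negative otherwise
    return alpha - fisher_z_pvalue(corr, n, x, y, s)
```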
For the PC-style algorithms, for small samples, there is always the possibility of a collider conflict–that is, a situation where one is given a model
X → Y ← Z − W , where the X − Y − Z triple has been oriented as a
collider, and for similar reasons one wants to orient the triple Y − Z − W as
a collider as well. For the PC algorithm this can happen if the edge X − Z
has been removed from the graph on the strength of the observation that
I(X, Z|S1), where Y is not in S1, but also I(Y, W|S2), where Z is not in S2. (Here, I(A, B|S) is true if and only if A is independent of B conditional on the set S of variables.) For models in which there are no latent variables, this is a
small sample error, but it may occur quite often, depending on the algorithm.
There are at least three possible graphs that could result. One could orient
both colliders to get X → Y ↔ Z ← W ; we call this the bidirected rule;
7
this has been pursued in the Tetrad package in previous versions.10 Or one
could orient just the first collider and refuse to orient the second, which we
call the priority rule–that is, X → Y ← Z − W . Or one could simply orient
the second without unorienting the X → Y edge, to get X → Y → Z ← W ,
which we call the overwrite rule. For these runs, Tetrad’s PC algorithms are
all using the priority rule. The pcalg package below uses the overwrite rule.
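The three policies can be illustrated with a small sketch over a toy graph representation (Python; this is our own illustration, not Tetrad's or pcalg's data structure).

```python
# Sketch: resolve a collider conflict under the bidirected, priority, or overwrite rule.
# The graph maps an undirected edge frozenset({u, v}) to the set of endpoints that
# carry arrowheads, e.g. {"y"} for x -> y.

def orient_collider(graph, a, b, c, rule="priority"):
    """Try to orient the unshielded triple a - b - c as a -> b <- c."""
    edges = [frozenset((a, b)), frozenset((c, b))]
    tails = [a, c]
    if rule == "priority" and any(t in graph.setdefault(e, set())
                                  for e, t in zip(edges, tails)):
        return False                          # an earlier collider wins; leave it alone
    for e, t in zip(edges, tails):
        heads = graph.setdefault(e, set())
        if rule == "overwrite":
            heads.discard(t)                  # drop any arrowhead pointing back at the tail
        heads.add(b)                          # arrowhead into the collider node
        # under the bidirected rule an existing arrowhead at t is kept, giving t <-> b
    return True
```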
For constraint-based algorithms we use the default conditional independence test for the package, except where noted. When there is an option
for setting an alpha level, we set it at 0.01 or 0.001, except where noted.
For the GES algorithms, we use BIC with a multiplier on the penalty, so BIC = 2L − c k ln N, where L is the likelihood, c the penalty discount, k the degrees of freedom, and N the sample size. We use penalty discounts 2 and 4; for pcalg, this corresponds to λ's of 2 ln N and 4 ln N (see [7]).
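As an illustration, a local version of this penalized score for one node given a candidate parent set might look as follows (Python sketch; the exact parameter count k and the likelihood bookkeeping differ slightly between implementations, so this is not the Tetrad or pcalg scorer).

```python
import numpy as np

# Sketch: penalized BIC, 2L - c*k*ln(N), as a local linear-Gaussian score for one node.

def local_bic(data, child, parents, penalty_discount=2.0):
    n = data.shape[0]
    y = data[:, child]
    X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n                             # MLE of the residual variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)   # maximized Gaussian log-likelihood
    k = X.shape[1]                                         # free coefficients (variance term omitted)
    return 2 * loglik - penalty_discount * k * np.log(n)
```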
Scripts used to generate all results are available. For Tetrad, they are
included in the GitHub results repository11 in a directory called “condition2”.
Since we were adapting the Tetrad algorithms comparison tool, we began
with the Tetrad algorithms and proceeded from there.
4.1. The Tetrad Suite
The Tetrad package contains more or less the same variety of algorithms
for the “vanilla” case as pcalg, with some notable variations in particular
cases. It contains almost all of the same variation in PC-style algorithms
(PC, PC-Stable, CPC, CPC-Stable) and also an optimized GES algorithm
(FGES). The package is written in Java. The indices given are the indices of
the variants in the Appendix.
The PC algorithm was defined in [11]; [5] later gave implied orientation
rules which have been adapted in the Tetrad implementation. The PC-Stable
algorithm [3] aims to render the adjacencies in the output independent of
the order of the input variables; we have implemented this based on the
10 The reason this has been pursued is that when latent common causes of pairs of measured variables affect the data, a bidirected edge of this form indicates the existence of one or more latent variables, a desired behavior. Here, since we know there are no latent variables, these bidirected edges are a nuisance, due to small sample errors of judgment, so we don't try to orient them.
11 https://github.com/cmu-phil/causal-comparisons; this repository is currently private because other incomplete comparisons are included, but will be made available to readers if a GitHub ID is sent to the authors. But all of the results graphs and scripts are included there.
description. The PC-Stable-Max algorithm alters the CPC algorithm to
orient colliders using maximum p-value conditioning sets and renders both
adjacencies and orientations order-independent. The CPC algorithm ([9])
orients colliders by insisting on uniformity of independence judgments. CPC-Stable combines CPC orientation with the PC-Stable adjacency procedure.
The “stable” adjacency search ([3]) was adapted from the pcalg package.
Notably, for the CPC algorithm, we interpret all ambiguous markings in
the graph as noncolliders (see [9]). CPC aims to mark all unshielded triple
X−Y −Z as unambiguous colliders, unambiguous noncolliders, or ambiguous.
Thus the output represents a collection of CPDAGs, not a specific CPDAG.
To get a specific CPDAG from the collections, one needs to choose for each
ambiguous triple whether it is a collider or a noncollider, and then apply the
Meek orientation rules. Here, we choose to interpret each ambiguous triple
as a noncollider, as does pcalg. We do this for compatibility.
The FGES algorithm ([8]) modifies the GES algorithm ([2]), with the
intention of making it more scalable. It forgoes the full loop in the forward
phase, using caching of information instead to avoid some repetition of tests.
There are two settings, faithfulness = true and faithfulness = false. If the
faithfulness flag is set to true, edges X − Y with X uncorrelated with Y are
not added to the graph in a first sweep; for paths X − Y − Z that should
be colliders, X − Z is then added to the graph and removed in a subsequent
backward step, orienting the collider. This strategy is similar to the one
described in [7] for the ARGES algorithm and results in significant speedup,
as they attest. With the faithfulness flag set to false, the same first sweep
(forward followed by backward) is performed, but then edges W − Z are recovered for variables W that are unconditionally d-connected to Z but not correlated with Z unconditionally, in a second forward sweep, and the backward sweep
is repeated. This allows for some edges with canceling paths to be recovered
and is equivalent to Chickering’s formulation of GES. Additionally, sections
of the algorithm that can be parallelized have been parallelized. This
includes in particular the initial step of adding a single edge to an empty
graph, a step which is particularly time-consuming.
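As an illustration of why this step parallelizes well, the score gain of each candidate single edge added to the empty graph is independent of all the others, so the pairs can be evaluated concurrently; a sketch is given below (Python; our own illustration, not the Tetrad code, and `edge_gain` stands in for whatever local score difference is being used).

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

# Sketch: score every single-edge addition to the empty graph in parallel.
# edge_gain is any callable mapping an (i, j) pair to a score difference.

def score_single_edges(edge_gain, n_vars, workers=4):
    pairs = list(combinations(range(n_vars), 2))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        gains = list(pool.map(edge_gain, pairs))
    return dict(zip(pairs, gains))
```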
As noted, all of the PC variants for the Tetrad runs are configured to use the priority rule to resolve colliders. Alphas of 0.01 and 0.001 are used. For FGES with Fisher Z, some smaller alphas are also included, 0.0001 and 0.00000001, since these are advantageous. For FGES with BIC, penalties 2 and 4 are included. The following are the specific variants included. The indices correspond to the indices in the full table in the Appendix.
1. PC (“Peter and Clark”), Priority Rule, using Fisher Z test, alpha = 0.01
2. PC (“Peter and Clark”), Priority Rule, using Fisher Z test, alpha = 0.001
3. PC-Stable (“Peter and Clark” Stable), Priority Rule, using Fisher Z test, alpha = 0.01
4. PC-Stable (“Peter and Clark” Stable), Priority Rule, using Fisher Z test, alpha = 0.001
5. PC-Stable-Max (“Peter and Clark”), Priority Rule, using Fisher Z test, alpha = 0.01
6. PC-Stable-Max (“Peter and Clark”), Priority Rule, using Fisher Z test, alpha = 0.001
7. CPC (Conservative “Peter and Clark”), Priority Rule, using Fisher Z test, alpha = 0.01
8. CPC (Conservative “Peter and Clark”), Priority Rule, using Fisher Z test, alpha = 0.001
9. CPC-Stable (Conservative “Peter and Clark” Stable), Priority Rule, using Fisher Z test, alpha = 0.01
10. CPC-Stable (Conservative “Peter and Clark” Stable), Priority Rule, using Fisher Z test, alpha = 0.001
11. FGES (Fast Greedy Equivalence Search) using Fisher Z Score, alpha = 0.001, faithfulnessAssumed = false
12. FGES (Fast Greedy Equivalence Search) using Fisher Z Score, alpha = 1.0E-4, faithfulnessAssumed = false
13. FGES (Fast Greedy Equivalence Search) using Fisher Z Score, alpha = 1.0E-8, faithfulnessAssumed = false
14. FGES (Fast Greedy Equivalence Search) using Sem BIC Score, penaltyDiscount = 2, faithfulnessAssumed = false
15. FGES (Fast Greedy Equivalence Search) using Sem BIC Score, penaltyDiscount = 4, faithfulnessAssumed = false
16. FGES (Fast Greedy Equivalence Search) using Fisher Z Score, alpha = 0.001, faithfulnessAssumed = true
17. FGES (Fast Greedy Equivalence Search) using Fisher Z Score, alpha = 1.0E-4, faithfulnessAssumed = true
18. FGES (Fast Greedy Equivalence Search) using Fisher Z Score, alpha = 1.0E-8, faithfulnessAssumed = true
19. FGES (Fast Greedy Equivalence Search) using Sem BIC Score, penaltyDiscount = 2, faithfulnessAssumed = true
20. FGES (Fast Greedy Equivalence Search) using Sem BIC Score, penaltyDiscount = 4, faithfulnessAssumed = true
4.2. The bnlearn Package
The bnlearn package ([13], [14], [15], [1]) consists of a variety of algorithms, largely centered around the idea of finding Markov blankets and using Markov blankets to construct estimates of CPDAGs or undirected graphs representing the adjacency structure of the true DAG. Judging from their description,
these algorithms are not primarily aimed at the purely continuous case; we
will come back to them in a subsequent study of multinomial data, for which
more complete test results from the authors are available. Nevertheless, we
give the continuous-only results here, using the default conditional independence test.
Two of the algorithms, MMPC and HITON, estimate adjacencies but not
orientations. For this reason, orientation statistics will not be given for them.
The bnlearn algorithms are all written natively in C and included in R
through the usual sort of wrappers.
We test the following algorithms and their variants. (The indices are the indices of these variants in our output table.) We use an alpha of 0.01 or 0.001. MMHC in the version of the software we tested does not take an alpha value.
21. MMPC alpha = 0.01
22. MMPC alpha = 0.001
23. GrowShrink alpha = 0.01
24. GrowShrink alpha = 0.001
25. IAMB alpha = 0.01
26. IAMB alpha = 0.001
27. Fast.IAMB alpha = 0.01
28. Fast.IAMB alpha = 0.001
29. Inter.IAMB alpha = 0.01
30. Inter.IAMB alpha = 0.001
31. si.hiton.pc alpha = 0.01
32. si.hiton.pc alpha = 0.001
33. MMHC
Finally, we took one of the bnlearn algorithms, iamb, and varied the
independence test from the available options. The “monte carlo” (mc) tests
did not scale well even to our smallest problems, so we focused on the options
without the Monte Carlo adjustment.
34. iamb alpha = 0.01, test = cor
35. iamb alpha = 0.001, test = cor
36. iamb alpha = 0.01, test = mi-g
37. iamb alpha = 0.001, test = mi-g
38. iamb alpha = 0.01, test = mi-g-sh
39. iamb alpha = 0.001, test = mi-g-sh
40. iamb alpha = 0.01, test = zf
41. iamb alpha = 0.001, test = zf
4.3. The pcalg Package
Descriptions for the pcalg package may be found in [4]; for more recent
documentation, please see the documentation provided in the R package
pcalg. We focus on just the algorithms that apply directly to the “vanilla” task: the PC-style algorithms and the GES implementation. As noted above, pcalg and Tetrad for this case contain very similar implementations. They
differ in implementation style, language, and platform. The pcalg algorithms
are all written natively in C++ and included in R through the usual sort
of wrappers. Exploring differences in implementation style and choices is an
interesting topic, but this lies beyond the scope of this paper; needless to say,
all of the code is publicly available for anyone who is interested.
In pcalg, the pc method is designed to adjust its behavior in response to
parameters to morph into the various PC-style algorithms. Most of these
have counterparts in the Tetrad suite; some do not (like the majority rule).
To make the comparison easy, we will use the Tetrad names for the algorithms. The specific pcalg commands used to run those algorithms are available in the comparison2 directory of the results repository. In all cases, we
use alpha values of 0.01 or 0.001. For PC-stable, which is readily parallelizable, we set the number of cores to 4.
The GES method in the pcalg package uses a λ rather than a penalty
discount parameter, but these are inter-convertible; we use λ = c ln N, where c is the penalty discount.
42. PC pcalg defaults alpha = 0.01
43. PC pcalg defaults alpha = 0.001
44. PC-Stable pcalg ncores=4 alpha = 0.01
45. PC-Stable pcalg ncores=4 alpha = 0.001
46. CPC pcalg defaults alpha = 0.01
47. CPC pcalg defaults alpha = 0.001
48. CPC pcalg majority.rule defaults alpha = 0.01
49. CPC pcalg majority.rule defaults alpha = 0.001
For GES, we test the following variants:
50. GES pcalg defaults 2*log(nrow(data))
51. GES pcalg defaults 4*log(nrow(data))
4.4. The BNT Package
BNT (“Bayes Net Toolbox”, [6]) contains a number of algorithms. Once
one eliminates algorithms that are for discrete variables only, algorithms that
assume time order of the variables or knowledge of adjacency structure, or
algorithms that are designed for the latent variable case, the only structure
learning algorithm left to compare with is the implementation of the PC
algorithm, learn struct dag pc. This takes an alpha level; we use 0.01 or
0.001. The BNT algorithms are written natively in Matlab. There are two
variants for PC in BNT in our table:
52. learn.struct.pdag.pc bnt alpha = 0.01
53. learn.struct.pdag.pc bnt alpha = 0.001
5. Visualization Tool
The results are shown in the Appendix in tabular form. Since this
table is quite large, we found it useful to provide an interactive visualization
tool12 to help readers understand the results. We render figures from this
tool in the Results section below. However, we invite interested readers to
explore it on their own, as there is much more data to explore (see Appendix).
One simply needs to download three files, stats.txt, config.txt, and std.txt,13 then go to the URL above for the tool, load them with the respective buttons, and set the number of runs to 10. By providing a file of standard
deviations (std.txt) and the number of runs, the tool additionally calculates
95% confidence intervals around the mean statistics; [12] is used to calculate
the inverse t CDF. Then click the Plot tab and select a combination of
parameters for which a result exists, and bar plots will be rendered as in
the Results section, below. As many or as few results as are desired may be
simultaneously plotted.
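Assuming the usual t-based interval, the confidence-interval computation amounts to the following sketch (Python; our own illustration of the calculation, not the tool's code).

```python
from scipy.stats import t

# Sketch: a 95% confidence interval around a mean statistic, given its standard
# deviation over `runs` independent runs, using the inverse t CDF.

def t_confidence_interval(mean, std, runs, level=0.95):
    half_width = t.ppf(0.5 + level / 2, df=runs - 1) * std / runs ** 0.5
    return mean - half_width, mean + half_width

# e.g. t_confidence_interval(0.83, 0.05, 10) is roughly (0.79, 0.87)
```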
12 http://www.pitt.edu/~bja43/causal
13 These files are currently provided as example files for the visualization tool, but they are also included in this arXiv download.
In order for the bar plots to be displayed, one needs to select a combination of parameters for which data is available in the table. One needs to pick
one or more numbers of variables, average degrees, and sample sizes. One needs to pick the parameter values at which the algorithms are compared. (Different algorithms
may require different parameters.) Then one needs to pick one or more algorithms from the four packages.14 Provided one has selected a combination
of settings for which data is available, charts will be rendered.
6. Results
There is one algorithm that is implemented on three of the platforms, viz., PC. We show results for this in Figure 1. Here we use an
alpha value of 0.001, with 100 variables, average degree 4, sample size 500.
In Figure 2 we show a comparison of all of the PC-style algorithms in
the Tetrad and pcalg packages. Here, we look at the 100 variables case, with
average degree 4, sample size 500, alpha = 0.001.
In Figure 3 we show a comparison of all of the GES algorithms in the
Tetrad and pcalg packages. Here, we look at the 100 variable case, average
degree 4, sample size 500. For algorithms that use the Fisher Z test, an alpha
of 0.001 was used; for algorithms that use a BIC score, a penalty of 2 was
used. (For pcalg, λ in this case is 2lnN .)
In Figure 4 we show a comparison of all of the algorithms tested in the
bnlearn package. Here, we look at the 100 variable case, average degree 4,
sample size 500, alpha = 0.001.
The plots in Figure 1, Figure 2, 3, and 4 are all for the same datasets and
so are comparable. Using the visualization tool one may obtain similar plots
for other cases.
In Figure 5 we show a comparison of elapsed times for all of the above
choices, for common datasets.
7. Discussion
For Figure 1, it is important to note that all datasets have had their
columns randomly rearranged, so that there is no opportunity to gain an
14 It is worth noting that this visualization tool is intended to be useful outside of this particular project, and already has been. One simply needs to make stats, config, and std files in the proper format and load them in. Plots may be edited using Plotly.
Figure 1: Adjacency and orientation accuracy of PC across platforms for 100 variables,
average degree 4, 500 samples, alpha = 0.001.
Figure 2: Adjacency and orientation accuracy of PC-style algorithm for Tetrad and pcalg
platforms. This is for 100 variables, average degree 4, sample size 500, alpha = 0.001.
Figure 3: Adjacency and orientation accuracy of GES for Tetrad and pcalg packages. This
is for 100 variables, average degree 4, sample size 500. For Fisher Z algorithms, alpha =
0.001; for BIC algorithms, penalty = 2 (for pcalg, λ = 2lnN ).
Figure 4: Adjacency and orientation accuracy of bnlearn algorithms. Algorithms with no
orientation statistics do not estimate orientations. This is for 100 variables, average degree
4, 500 samples, alpha = 0.001, where applicable.
Figure 5: Elapsed times for all algorithms, average degree 4, number of variables 100,
sample size 500, alpha 0.001, penalty 2. The elapsed times are on a log scale.
advantage from having the columns in causal order. Under these conditions,
PC (as with most algorithms for coefficients in the range +/-(0.2, 0.9)) gets
nearly 100% precision for adjacencies, with recall about 0.8, and orientation
precision of about 0.6, with orientation recall (judged against the true DAG)
about 0.45. This is a fairly common profile for PC for a problem of this size, and all three of the platforms more or less agree on it. The Tetrad algorithms get
slightly higher precision for orientations than the others, perhaps enough to
choose it over pcalg, but really the implementations are all very similar.
Figure 2 compares all of the PC-style algorithms from the pcalg and
Tetrad packages. All of these algorithms use the same adjacency search,
so it is not surprising that their adjacency statistics are identical. They
differ only in orientation. Here, the PC and PC-Stable algorithms yield
fairly unusable orientations; the algorithms that attempt to boost collider
orientation accuracy all do better. The algorithms that do the best are the
CPC variants, for both platforms. There is a trade-off of orientation precision
for orientation recall for these algorithms. PC-Stable Max and CPC Majority
get the higher orientation recalls, but both suffer somewhat on precision; the
performance of each is basically the same.
Figure 3 compares all of the GES variants from pcalg and Tetrad. Overall, the performances are very similar, though for orientation precision in
particular, the pcalg implementation is to be preferred on these grounds.
Figure 5 gives timing results for all of the algorithms compared in the
previous figures. The elapsed times are given on a log scale, as some of the
times are fairly slow. Generally, the Tetrad algorithms were faster. If we
consider CPC to be the most accurate of the PC-style algorithms, for pcalg,
it comes back in 3.9 seconds, whereas for Tetrad it comes back in 0.27 seconds.
For GES, the time posted for pcalg was 0.67 seconds, whereas for Tetrad the
best time was 0.08 seconds. For this size problem, of course, all of these times
are entirely reasonable; one should not be hesitant to wait a few seconds to
get a good result.
For scaling up, however, timing may become an issue. In Figure 6 we show
the faster algorithms, for the largest models we consider, 500 variables, average degree 6, 1000 samples. These are all variants of the FGES algorithms,
with one contrast, the pcalg GES algorithm. The fastest of these is for FGES
with the faithfulness flag turned on, using the BIC score with penalty 4. In
Figure 7 the accuracies of the same algorithms are given. As can be seen,
this particular algorithm is not only the fastest at this task but is also one
of the most accurate, though the pcalg GES still has a slight advantage in
accuracy.
8. Conclusion
We have taken an orthogonal approach to comparing algorithms, not
based on their theory but based purely on their performance. While we have
focused only on the “vanilla” task (data generated recursively and i.i.d. from
a linear, Gaussian model with no latent variables), this is a task that has
been taken up in several software packages, both public like these, but also
in private, and which has a long history in the literature. It is usually the
first sort of algorithm one tries for a dataset, for better or for worse. We
know of no comparison to date that compares all of the public packages to
one another directly on this task using exactly the same data on exactly the
same machine.
Overall, we find that the Tetrad package fares quite well in this comparison. The PC-style algorithms are comparable to those in other packages.
The best of these in terms of orientation precision, CPC, has a similar profile
to CPC in pcalg. The main difference is that the Tetrad PC-style algorithms
are much faster than their counterparts. There are no doubt algorithmic
Figure 6: Elapsed times for faster algorithms for the 500 variables, average degree 6,
sample size 1000 case, alpha = 0.001, penalty 4. The elapsed times are on a log scale
Figure 7: Accuracies for faster algorithms for the 500 variables, average degree 6, sample
size 1000 case. The elapsed times are on a log scale.
reasons for this. The pcalg algorithms are implemented in C++, the Tetrad
algorithms in Java; generally from benchmark testing C++ is 2 - 2.5 times
faster than Java with the same implementation on the same machine. We
surmise that if the Java implementation were simply translated intelligently
into C++, one would get at least a 2-fold speedup over Java, in this way.
Instead the C++ implementations are many times slower. For GES, we find
that overall the pcalg implementation has a small but persistent edge over
FGES for accuracy, though once again the Tetrad implementation is many
times faster. Partly this is due to the parallelization, though the particular
machine these algorithms were run on has only two cores, so this doesn’t
account for the 10 to 20 fold speedup we observe. If one takes into account
that C++ is at least two times faster than Java, one might expect that if
the Tetrad implementation were directly translated into C++ there would
be at least a 20 to 40 fold speedup over the pcalg implementation. This of
course suggests a programming project for someone. In any case, this kind
of speedup is important, since it outpaces by several fold the kind of speedup
one might expect from running the algorithm on a supercomputer.
While we compare these packages separately, we recognize that there has
been a considerable amount of “borrowing back and forth” between these
and other projects. The notion of inferring causal graphs from conditional
independence constraints goes back some time now and is used in all of
these projects. The PC algorithm, as noted, originally appearing in an early
version of Tetrad, is used in three of the projects, and algorithms derived
from PC in two of them. The CPC algorithm, originally in Tetrad, was
reimplemented in pcalg, along with some variants there. In bnlearn, the
building blocks of other constraint-based algorithms are shuffled to generate
CPDAGs from considerations of Markov blanket estimations and scoring
algorithms. For GES, Chickering’s (2002) algorithm was adjusted in each of
pcalg and Tetrad; the specific formulation of the BIC score difference was
adapted from ARGES [7] and is responsible for some of the speedup there.
Overall we find that for the PC-style algorithms, the algorithms of choice for accuracy are the ones based on CPC. However, these are dominated on recall by the GES-style algorithms, which among these options also include the fastest. This is perhaps surprising, since GES has had a reputation among a number of researchers as being a slow algorithm, but
both FGES and the pcalg implementation of GES prove this reputation to
be undeserved; the algorithm offers many opportunities for optimization.
For future work, if algorithms are identified that should have been included in this comparison, we will include them and update the comparison. Beyond this condition, there are several more conditions to explore, as suggested above. We will take these up as time permits.
Appendix A. Results Tables
Tables of results follow. Here, ’Alg’ is the algorithm as indexed in the
text, ’vars’ is the number of variables in each simulation, ’Deg’ is the average
degree of the graph, ’N’ is the sample size, ’AP’ is adjacency precision (as
defined in the text), ’AR’ adjacency recall, ’AHP’ arrowhead precision, ’AHR’
arrowhead recall, ’McAdj’ Matthews correlation for adjacencies, ’McArrow’ Matthews correlation for orientations, and ’E’ elapsed time in seconds. For
hardware and software specifications, please see text. There is one table for
each simulation; the conditions for the simulation are in the table and are
common for all algorithms in each table. ’*’ in a table indicates the value is
undefined due to division by zero and ’-’ means that the value is zero or that
the simulations in that position in the table took longer than 10 minutes to
complete. Each statistic is an average over 10 runs.
[Results tables: one table per simulation condition (each combination of number of variables, average degree, and sample size), with one row per algorithm variant 1–53 and columns Alg, Vars, Deg, N, AP, AR, AHP, AHR, McAdj, McArrow, and E, each value averaged over 10 runs. The tabular layout did not survive extraction to plain text; the same numbers are provided in the stats.txt and std.txt files attached to this report and can be browsed with the visualization tool described in Section 5.]
0.89
0.87
0.9
0.93
0.84
0.92
0.97
AR
0.35
0.27
0.33
0.26
0.33
0.26
0.35
0.27
0.33
0.26
0.6
0.51
0.24
0.64
0.44
0.5
0.42
0.22
0.54
0.37
0.31
AHP
0.53
0.55
0.52
0.51
0.66
0.74
0.83
0.91
0.83
0.94
0.63
0.64
0.65
0.61
0.65
0.63
0.62
0.65
0.62
0.64
*
30
0.69
0.64
0.51
0.47
0.51
0.47
0.63
0.55
0.68
0.67
0.83
0.76
0.53
0.49
0.92
0.91
0.91
0.9
0.91
0.9
0.91
0.9
0.91
0.9
0.94
0.91
0.91
0.9
AHR
0.16
0.11
0.15
0.1
0.18
0.1
0.09
0.04
0.08
0.04
0.44
0.36
0.11
0.48
0.3
0.35
0.28
0.09
0.39
0.23
-
McAdj
0.55
0.49
0.54
0.48
0.54
0.48
0.55
0.49
0.54
0.48
0.68
0.64
0.44
0.68
0.6
0.63
0.59
0.43
0.64
0.56
0.52
0.61
0.53
0.18
0.14
0.18
0.14
0.64
0.57
0.57
0.56
0.69
0.64
0.2
0.17
McArrow
0.03
0.03
0.02
7.93E-03
0.13
0.12
0.15
0.11
0.14
0.11
0.23
0.2
0.09
0.22
0.17
0.17
0.14
0.08
0.19
0.14
-
2.71
1.13
0.47
0.36
0.51
0.34
3.41
2.35
2.75
2.44
0.16
0.13
0.96
0.74
E
6.70E-03
3.70E-03
9.50E-03
6.30E-03
0.01
5.90E-03
0.01
0.01
0.01
8.60E-03
0.05
0.04
9.60E-03
0.06
0.02
0.03
0.02
3.70E-03
0.03
9.30E-03
0.12
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
0.99
0.98
0.99
0.99
0.98
0.98
0.99
0.99
0.99
0.95
0.98
0.95
0.99
0.98
0.99
0.98
0.99
0.99
0.99
0.99
0.96
0.98
0.96
0.98
0.96
0.98
0.96
0.98
0.91
0.94
0.94
0.97
0.24
0.34
0.26
0.35
0.25
0.36
0.26
0.35
0.25
0.34
0.27
0.39
0.35
0.25
0.36
0.27
0.34
0.24
0.35
0.26
0.33
0.26
0.33
0.26
0.33
0.26
0.33
0.26
0.51
0.24
0.35
0.28
Alg
1
2
3
Vars
50
50
50
Deg
6
6
6
N
AP AR
500 0.94 0.59
500 0.95 0.54
500 0.96 0.57
*
0.68
0.74
0.67
0.75
0.64
0.73
0.68
0.79
*
*
0.66
0.67
0.75
0.65
0.76
0.65
0.73
0.67
0.77
0.48
0.5
0.48
0.5
0.81
0.93
0.71
0.79
0.73
0.75
0.52
0.52
0.16
0.09
0.18
0.08
0.21
0.08
0.18
0.09
0.27
0.18
0.08
0.19
0.09
0.17
0.06
0.18
0.08
0.16
0.11
0.16
0.11
0.08
0.04
0.16
0.09
0.4
0.15
0.18
0.12
AHP AHR
0.61 0.35
0.61 0.32
0.59 0.32
31
0.46
0.55
0.48
0.56
0.48
0.57
0.49
0.56
0.48
0.55
0.49
0.58
0.56
0.48
0.57
0.49
0.55
0.47
0.57
0.48
0.54
0.48
0.54
0.48
0.54
0.48
0.54
0.48
0.66
0.45
0.55
0.49
0.13
0.12
0.14
0.12
0.13
0.11
0.14
0.13
0.17
0.14
0.12
0.13
0.13
0.12
0.09
0.14
0.12
-9.03E-03
-4.50E-03
-9.03E-03
-4.50E-03
0.14
0.11
0.15
0.14
0.29
0.17
0.02
0.01
0.08
0.42
0.19
0.18
0.12
0.24
0.15
0.2
0.13
0.04
0.03
0.22
0.21
0.11
0.23
0.12
0.14
0.09
0.22
0.11
0.15
0.07
0.13
0.08
0.47
0.15
0.29
0.16
0.12
0.05
0.51
0.34
McAdj
0.72
0.69
0.72
McArrow
0.15
0.13
0.11
E
0.08
0.05
0.17
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
0.97
0.96
0.97
0.94
0.95
0.96
0.97
0.71
0.77
0.87
0.73
0.83
0.71
0.77
0.88
0.73
0.85
0.97
0.98
1
1
1
1
1
1
1
1
0.96
0.96
0.96
1
1
1
1
1
1
1
1
0.53
0.57
0.53
0.59
0.54
0.57
0.53
0.91
0.89
0.78
0.9
0.82
0.89
0.85
0.68
0.87
0.75
0.56
0.51
0.6
0.56
0.61
0.56
0.61
0.56
0.61
0.56
0.57
0.52
0.59
0.61
0.56
0.61
0.56
0.62
0.56
0.61
0.56
0.57
0.74
0.72
0.87
0.86
0.88
0.87
0.59
0.64
0.75
0.6
0.7
0.58
0.64
0.74
0.61
0.71
*
*
0.71
0.69
0.73
0.71
0.72
0.7
0.72
0.72
*
*
0.74
0.73
0.71
0.73
0.71
0.72
0.72
0.73
0.71
32
0.28
0.42
0.37
0.35
0.27
0.35
0.26
0.74
0.72
0.65
0.73
0.68
0.71
0.69
0.55
0.7
0.61
0.38
0.36
0.41
0.37
0.4
0.36
0.4
0.38
0.45
0.41
0.37
0.41
0.36
0.42
0.37
0.41
0.36
0.69
0.72
0.69
0.72
0.69
0.72
0.69
0.78
0.8
0.81
0.78
0.81
0.77
0.78
0.75
0.77
0.77
0.72
0.68
0.75
0.72
0.76
0.73
0.76
0.73
0.76
0.73
0.72
0.69
0.73
0.76
0.73
0.76
0.73
0.77
0.73
0.76
0.73
0.09
0.31
0.27
0.37
0.31
0.38
0.31
0.35
0.4
0.46
0.37
0.43
0.33
0.38
0.39
0.35
0.39
0.25
0.23
0.29
0.25
0.27
0.24
0.27
0.26
0.32
0.29
0.25
0.29
0.24
0.28
0.26
0.29
0.25
0.07
0.16
0.14
0.28
0.1
0.3
0.1
0.46
0.29
0.13
0.28
0.12
0.43
0.26
0.07
0.24
0.09
0.53
0.39
9.47
2.84
2.76
0.95
4.54
1.3
3.07
1
0.23
0.18
1.01
3.49
0.9
4.17
0.91
3.09
1.06
3.06
0.89
42
43
44
45
46
47
48
49
50
51
52
53
50
50
50
50
50
50
50
50
50
50
50
50
6
6
6
6
6
6
6
6
6
6
6
6
500
500
500
500
500
500
500
500
500
500
500
500
Alg
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
Vars
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
Deg
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
N
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
0.96
0.97
0.96
0.97
0.96
0.97
0.96
0.97
0.85
0.91
0.95
0.96
AP
0.93
0.93
0.95
0.95
0.95
0.95
0.93
0.93
0.95
0.95
0.59
0.64
0.74
0.62
0.72
0.6
0.64
0.75
0.63
0.73
0.97
0.97
0.1
0.57
0.53
0.57
0.53
0.57
0.53
0.57
0.53
0.84
0.72
0.59
0.54
0.52
0.52
0.52
0.52
0.88
0.85
0.73
0.76
0.72
0.78
0.52
0.53
AR
0.63
0.59
0.61
0.57
0.61
0.57
0.63
0.59
0.61
0.57
0.94
0.92
0.85
0.92
0.86
0.93
0.9
0.79
0.91
0.82
0.6
0.56
0.06
33
0.3
0.28
0.3
0.28
0.35
0.26
0.41
0.39
0.7
0.61
0.32
0.3
AHP AHR
0.6
0.39
0.57 0.34
0.57 0.35
0.54 0.31
0.72 0.45
0.72 0.41
0.85 0.41
0.84 0.35
0.86 0.4
0.84 0.34
0.46 0.7
0.5
0.7
0.58 0.65
0.49 0.7
0.57 0.67
0.47 0.7
0.51 0.69
0.59 0.61
0.5
0.7
0.58 0.64
*
*
0.06 0.03
0.72
0.69
0.72
0.69
0.72
0.69
0.72
0.69
0.83
0.79
0.73
0.7
McAdj
0.74
0.72
0.74
0.71
0.74
0.71
0.74
0.72
0.74
0.71
0.71
0.73
0.76
0.72
0.76
0.7
0.73
0.74
0.72
0.74
0.74
0.71
0.07
0.03
0.03
0.03
0.03
0.38
0.3
0.29
0.31
0.47
0.47
0.04
0.05
McArrow
0.15
0.1
0.1
0.06
0.3
0.28
0.4
0.35
0.4
0.35
0.19
0.24
0.29
0.23
0.28
0.19
0.24
0.28
0.24
0.28
0.02
1.11
0.46
0.76
0.44
3.42
1.77
3.08
2.01
0.25
0.17
1.37
0.93
E
0.22
0.12
0.37
0.16
0.36
0.26
0.77
0.37
0.79
0.36
1.47
1.02
0.37
0.71
0.26
1.38
0.95
0.28
0.7
0.22
1.05
0.76
3.49
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
0.1 0.06 0.06
0.1 0.06 0.06
0.99 0.62 0.72
0.99 0.64 0.74
0.99 0.61 0.73
0.99 0.64 0.75
0.99 0.62 0.72
0.95 0.61 *
0.95 0.57 *
0.96 0.63 0.7
0.99 0.64 0.75
0.99 0.62 0.72
0.99 0.64 0.74
0.99 0.61 0.71
0.99 0.65 0.74
1
0.62 0.72
0.99 0.64 0.75
0.99 0.62 0.72
0.95 0.61 0.52
0.95 0.57 0.53
0.95 0.61 0.52
0.95 0.57 0.53
0.95 0.61 0.85
0.95 0.57 0.84
0.95 0.61 0.71
0.95 0.57 0.73
0.73 0.88 0.6
0.83 0.79 0.69
0.93 0.63 0.51
0.93 0.59 0.53
Alg
1
2
3
4
5
Vars
100
100
100
100
100
Deg
2
2
2
2
2
N
AP
100 0.89
100 0.97
100 0.91
100 0.98
100 0.91
AR
0.64
0.54
0.64
0.53
0.64
AHP
0.52
0.56
0.48
0.56
0.76
34
0.03
0.03
0.4
0.43
0.41
0.44
0.4
0.46
0.44
0.4
0.44
0.4
0.44
0.41
0.44
0.4
0.34
0.32
0.34
0.32
0.41
0.36
0.45
0.43
0.69
0.64
0.34
0.33
0.07
0.08
0.76
0.78
0.76
0.78
0.76
0.74
0.71
0.75
0.78
0.76
0.78
0.76
0.78
0.77
0.78
0.76
0.74
0.71
0.74
0.71
0.74
0.71
0.74
0.71
0.77
0.79
0.74
0.72
0.01
9.04E-03
0.27
0.31
0.28
0.32
0.28
0.28
0.32
0.27
0.32
0.27
0.31
0.28
0.32
0.27
0.04
0.05
0.04
0.05
0.4
0.35
0.3
0.3
0.33
0.4
0.03
0.05
2.33
1.03
14.59
70.99
15.85
119.75
12.02
0.5
0.35
1.83
101.94
13.09
101
14.95
85.95
11.59
101.31
13.65
1.27
0.76
1.34
0.73
4.76
2.95
4.42
3.38
0.34
0.22
1.87
1.25
AHR McAdj McArrow E
0.27 0.75
0.03
0.01
0.2
0.72
0.06
0.02
0.24 0.76
-8.49E-03 5.30E-03
0.2
0.72
0.05
4.40E-03
0.3
0.76
0.25
7.60E-03
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
0.98
0.89
0.97
0.91
0.98
0.91
0.98
1
0.83
0.99
0.91
0.98
1
0.84
0.99
0.89
0.98
0.89
0.97
0.98
0.88
0.97
0.89
0.98
0.89
0.98
0.71
0.89
0.98
0.88
0.97
0.94
0.99
0.89
0.97
0.91
0.98
0.53
0.64
0.54
0.64
0.53
0.7
0.59
0.32
0.74
0.54
0.69
0.59
0.32
0.73
0.54
0.63
0.53
0.64
0.54
0.55
0.65
0.54
0.64
0.55
0.64
0.54
0.7
0.64
0.55
0.65
0.55
0.62
0.51
0.64
0.55
0.64
0.53
0.84
0.97
1
0.99
1
0.79
0.87
0.98
0.7
0.89
0.79
0.87
0.98
0.71
0.89
*
*
0.8
0.9
0.9
0.8
0.91
0.79
0.9
*
*
0.48
0.8
0.9
0.78
0.9
0.86
0.95
0.8
0.9
0.5
0.56
35
0.21
0.13
0.05
0.13
0.05
0.4
0.29
0.05
0.48
0.24
0.4
0.29
0.05
0.46
0.24
0.21
0.13
0.13
0.22
0.12
0.22
0.13
0.47
0.23
0.13
0.23
0.13
0.16
0.07
0.23
0.13
0.29
0.23
0.72
0.75
0.72
0.76
0.72
0.79
0.76
0.56
0.78
0.73
0.79
0.76
0.56
0.78
0.72
0.74
0.72
0.75
0.72
0.73
0.75
0.72
0.75
0.73
0.75
0.73
0.7
0.75
0.73
0.75
0.73
0.76
0.71
0.75
0.73
0.76
0.72
0.26
0.25
0.15
0.25
0.15
0.35
0.33
0.15
0.31
0.3
0.34
0.33
0.15
0.31
0.3
0.23
0.22
0.22
0.24
0.21
0.23
0.22
0.08
0.24
0.22
0.23
0.23
0.23
0.17
0.24
0.22
0.02
0.06
5.70E-03
0.02
0.02
7.10E-03
6.20E-03
8.00E-03
6.60E-03
5.40E-03
0.02
6.70E-03
7.70E-03
7.00E-03
4.10E-03
0.01
6.80E-03
0.21
0.2
0.65
0.37
0.17
0.28
0.18
0.31
0.18
0.09
0.07
0.42
0.33
0.15
0.36
0.16
0.21
0.14
0.33
0.16
0.23
0.12
44
45
46
47
48
49
50
51
52
53
100
100
100
100
100
100
100
100
100
100
2
2
2
2
2
2
2
2
2
2
100
100
100
100
100
100
100
100
100
100
0.91
0.98
0.91
0.98
0.91
0.98
0.99
1
0.89
0.97
0.64
0.53
0.64
0.53
0.64
0.53
0.63
0.31
0.64
0.54
0.5
0.56
0.97
1
0.93
0.99
0.9
0.98
0.5
0.56
0.29
0.23
0.14
0.07
0.23
0.12
0.37
0.11
0.3
0.24
0.76
0.72
0.76
0.72
0.76
0.72
0.79
0.55
0.75
0.72
0.02
0.06
0.26
0.18
0.32
0.25
0.41
0.24
0.02
0.07
0.14
0.12
0.27
0.17
0.25
0.2
0.14
0.07
1.16
1.01
Alg
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
Vars
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
Deg
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
N
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
AP
0.92
1
0.93
1
0.93
1
0.92
1
0.93
1
0.95
0.99
1
0.97
1
0.95
0.99
1
0.97
1
0.93
1
0.93
1
-
AR
0.95
0.91
0.94
0.91
0.94
0.91
0.95
0.91
0.94
0.91
0.98
0.95
0.84
0.97
0.89
0.97
0.95
0.83
0.97
0.89
0.94
0.9
0.96
0.9
-
AHP
0.72
0.77
0.66
0.7
0.83
0.91
0.99
0.99
0.99
0.99
0.93
0.97
0.98
0.96
0.98
0.94
0.97
0.98
0.96
0.98
*
*
0.85
0.89
-
AHR
0.58
0.53
0.54
0.5
0.62
0.58
0.53
0.45
0.53
0.45
0.73
0.67
0.53
0.71
0.6
0.73
0.67
0.53
0.7
0.6
0.65
0.55
-
McAdj
0.93
0.95
0.93
0.95
0.93
0.95
0.93
0.95
0.93
0.95
0.96
0.97
0.91
0.97
0.94
0.96
0.97
0.91
0.97
0.94
0.93
0.94
0.94
0.95
-
McArrow
0.38
0.39
0.29
0.3
0.52
0.57
0.59
0.53
0.59
0.53
0.7
0.68
0.59
0.7
0.63
0.7
0.68
0.59
0.7
0.63
0.56
0.52
-
E
0.05
0.03
0.04
0.02
0.04
0.05
0.07
0.07
0.07
0.05
0.03
0.02
0.02
0.02
0.02
0.02
0.02
0.01
0.02
0.02
0.29
0.29
1.93
1.05
-
36
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
Alg
1
2
3
4
5
6
7
Vars
100
100
100
100
100
100
100
Deg
2
2
2
2
2
2
2
N
1000
1000
1000
1000
1000
1000
1000
1
0.93
1
0.93
1
0.92
1
0.8
0.93
1
0.93
1
0.96
1
0.93
1
0.93
1
0.93
1
0.93
1
0.93
1
0.99
1
0.92
1
0.91
0.95
0.91
0.95
0.91
0.95
0.9
0.96
0.95
0.91
0.95
0.91
0.94
0.9
0.95
0.91
0.94
0.91
0.94
0.91
0.94
0.91
0.94
0.91
0.95
0.85
0.95
0.91
0.89
0.86
0.89
0.86
0.89
*
*
0.63
0.86
0.89
0.86
0.89
0.87
0.87
0.86
0.89
0.62
0.65
0.62
0.65
0.99
0.99
0.95
0.97
0.98
0.98
0.65
0.68
0.55
0.64
0.55
0.64
0.55
0.76
0.64
0.55
0.65
0.55
0.62
0.55
0.64
0.55
0.57
0.51
0.57
0.51
0.54
0.46
0.63
0.56
0.67
0.57
0.58
0.53
AP AR AHP AHR
0.94 0.99 0.77 0.64
0.99 0.97 0.81 0.63
0.94 0.99 0.73 0.62
0.99 0.97 0.77 0.6
0.94 0.99 0.89 0.7
0.99 0.97 0.95 0.68
0.94 0.99 1
0.63
37
0.95
0.94
0.95
0.94
0.95
0.93
0.95
0.87
0.94
0.95
0.94
0.95
0.95
0.95
0.94
0.95
0.93
0.95
0.93
0.95
0.93
0.95
0.93
0.95
0.97
0.92
0.93
0.95
0.52
0.56
0.52
0.57
0.52
0.41
0.57
0.52
0.57
0.52
0.55
0.5
0.57
0.52
0.25
0.24
0.25
0.24
0.6
0.54
0.64
0.6
0.69
0.61
0.28
0.29
0.49
0.47
0.51
0.89
0.43
0.13
0.12
0.73
0.68
0.39
0.64
0.4
0.77
0.54
0.62
0.41
0.35
0.2
0.24
0.19
0.91
0.61
0.83
0.67
0.2
0.17
1.25
1.11
McAdj McArrow E
0.96
0.46
0.06
0.98
0.5
0.04
0.96
0.41
0.1
0.98
0.43
0.04
0.96
0.63
0.09
0.98
0.67
0.05
0.96
0.67
0.15
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
0.99
0.94
0.99
0.95
0.99
1
0.99
0.99
0.96
0.99
1
0.99
0.99
0.94
0.99
0.94
0.99
0.99
0.94
0.99
0.94
0.99
0.94
0.99
0.84
0.94
0.99
0.94
0.99
0.97
1
0.94
0.99
0.94
0.99
0.94
0.99
0.97
0.99
0.97
0.99
0.99
0.95
0.99
0.96
0.99
0.98
0.95
0.98
0.96
0.99
0.96
0.99
0.97
0.97
0.99
0.97
0.99
0.97
0.99
0.97
0.99
0.99
0.97
0.99
0.97
0.99
0.97
0.99
0.97
0.99
0.97
0.99
0.97
38
1
1
1
0.93
0.96
0.97
0.97
0.97
0.93
0.96
0.97
0.97
0.97
*
*
0.91
0.94
0.94
0.91
0.94
0.91
0.94
*
*
0.68
0.91
0.94
0.91
0.94
0.94
0.94
0.91
0.94
0.71
0.74
0.71
0.74
0.57
0.63
0.57
0.74
0.71
0.67
0.72
0.69
0.73
0.7
0.66
0.71
0.68
0.69
0.65
0.66
0.68
0.65
0.68
0.66
0.8
0.68
0.66
0.69
0.66
0.68
0.65
0.68
0.66
0.63
0.61
0.63
0.61
0.98
0.96
0.98
0.97
0.99
0.97
0.99
0.98
0.98
0.99
0.97
0.99
0.98
0.96
0.98
0.96
0.98
0.98
0.96
0.98
0.96
0.98
0.96
0.98
0.91
0.96
0.98
0.96
0.98
0.98
0.98
0.96
0.98
0.96
0.98
0.96
0.98
0.63
0.67
0.63
0.7
0.71
0.68
0.72
0.69
0.7
0.7
0.68
0.71
0.69
0.64
0.64
0.65
0.64
0.64
0.64
0.65
0.48
0.64
0.65
0.64
0.65
0.66
0.64
0.64
0.65
0.39
0.4
0.39
0.4
0.08
0.18
0.1
0.05
0.04
0.03
0.03
0.03
0.04
0.03
0.03
0.03
0.03
0.39
0.39
4.2
2.39
0.8
0.71
0.71
1.11
0.74
0.15
0.17
1.03
0.94
0.64
0.96
0.65
1.62
1.11
0.92
0.62
0.33
0.29
0.3
0.19
46
47
48
49
50
51
52
53
100
100
100
100
100
100
100
100
2
2
2
2
2
2
2
2
1000
1000
1000
1000
1000
1000
1000
1000
Alg
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
Vars
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
Deg
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
N
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
0.94
0.99
0.94
0.99
0.99
1
0.94
0.99
AP
0.96
0.99
0.97
0.99
0.97
0.99
0.96
0.99
0.97
0.99
0.91
0.94
0.98
0.84
0.96
0.93
0.95
0.98
0.89
0.97
0.97
1
0.97
0.99
1
0.96
0.99 1
0.97 1
0.99 0.97
0.97 0.99
0.98 0.97
0.94 0.96
0.99 0.73
0.97 0.77
AR
0.5
0.38
0.49
0.37
0.49
0.37
0.5
0.38
0.49
0.37
0.7
0.57
0.26
0.74
0.47
0.62
0.52
0.26
0.66
0.45
0.46
0.35
0.48
0.37
0.37
0.49
AHP
0.59
0.56
0.52
0.49
0.78
0.75
0.97
0.94
0.98
0.95
0.77
0.8
0.83
0.71
0.81
0.78
0.79
0.85
0.74
0.82
*
*
0.66
0.8
0.79
0.66
39
0.64
0.58
0.7
0.66
0.71
0.68
0.65
0.62
0.96
0.98
0.96
0.98
0.99
0.97
0.96
0.98
AHR
0.25
0.16
0.22
0.14
0.26
0.15
0.11
0.04
0.11
0.04
0.55
0.41
0.07
0.6
0.3
0.48
0.36
0.06
0.52
0.27
0.26
0.09
0.11
0.24
McAdj
0.69
0.61
0.68
0.6
0.68
0.6
0.69
0.61
0.68
0.6
0.79
0.72
0.49
0.78
0.66
0.75
0.69
0.49
0.76
0.65
0.66
0.59
0.67
0.6
0.6
0.68
0.69
0.64
0.7
0.69
0.71
0.69
0.42
0.44
McArrow
0.09
0.05
0.02
-0.01
0.25
0.16
0.22
0.13
0.23
0.12
0.41
0.35
0.13
0.4
0.3
0.38
0.32
0.12
0.38
0.29
0.17
0.15
0.15
0.15
1.22
0.94
1.15
0.95
0.21
0.17
1.35
1.21
E
0.03
0.02
0.02
5.90E-03
0.02
8.70E-03
0.03
0.02
0.04
6.90E-03
0.06
0.03
6.20E-03
0.11
0.02
0.02
0.02
5.50E-03
0.03
0.01
0.29
0.25
1.45
0.62
0.32
0.48
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
1
0.97
1
0.96
0.99
0.91
0.97
1
0.97
1
0.99
1
0.97
1
0.97
0.99
0.97
0.99
0.97
0.99
0.97
0.99
0.96
0.99
0.96
0.99
0.37
0.48
0.37
0.49
0.37
0.56
0.48
0.37
0.5
0.39
0.46
0.34
0.48
0.38
0.49
0.37
0.49
0.37
0.49
0.37
0.49
0.37
0.62
0.27
0.5
0.38
0.76
0.7
0.79
*
*
0.69
0.69
0.79
0.67
0.77
0.69
0.9
0.69
0.78
0.51
0.52
0.51
0.52
0.98
0.95
0.86
0.92
0.86
0.88
0.54
0.55
0.11
0.24
0.11
0.42
0.24
0.11
0.28
0.13
0.22
0.06
0.24
0.12
0.24
0.17
0.24
0.17
0.11
0.04
0.23
0.1
0.48
0.14
0.26
0.18
0.6
0.67
0.6
0.68
0.6
0.71
0.67
0.6
0.68
0.61
0.66
0.57
0.68
0.6
0.68
0.6
0.68
0.6
0.68
0.6
0.68
0.6
0.76
0.5
0.69
0.61
0.14
0.18
0.15
0.26
0.17
0.15
0.18
0.15
0.16
0.12
0.17
0.16
0.01
0.02
0.01
0.02
0.23
0.13
0.28
0.19
0.45
0.22
0.05
0.05
0.32
0.7
0.33
0.1
0.09
0.57
0.57
0.27
0.69
0.3
0.38
0.24
0.6
0.28
0.24
0.2
0.26
0.13
0.69
0.29
0.65
0.32
0.34
0.11
1.3
1.16
Alg
1
2
3
4
5
6
7
8
9
Vars
100
100
100
100
100
100
100
100
100
Deg
4
4
4
4
4
4
4
4
4
N
AP
500 0.98
500 0.99
500 0.98
500 0.99
500 0.98
500 0.99
500 0.98
500 0.99
500 0.98
AR
0.83
0.78
0.82
0.77
0.82
0.77
0.83
0.78
0.82
AHP
0.7
0.68
0.63
0.62
0.85
0.86
0.96
0.97
0.97
AHR
0.55
0.48
0.49
0.44
0.66
0.61
0.55
0.44
0.54
McAdj
0.9
0.87
0.9
0.87
0.9
0.87
0.9
0.87
0.9
McArrow
0.33
0.27
0.22
0.18
0.56
0.53
0.58
0.51
0.58
E
0.12
0.06
0.17
0.08
0.28
0.22
0.49
0.27
0.52
40
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
0.99
0.89
0.93
0.96
0.91
0.95
0.9
0.93
0.96
0.92
0.95
0.99
0.99
0.99
1
1
0.99
1
0.99
1
0.98
0.99
0.96
0.99
1
0.98
1
0.99
1
0.99
1
0.98
0.99
0.98
0.99
0.98
0.99
0.77
0.97
0.95
0.84
0.96
0.88
0.94
0.91
0.77
0.93
0.82
0.81
0.76
0.84
0.78
0.78
0.84
0.78
0.83
0.78
0.82
0.77
0.85
0.83
0.78
0.83
0.78
0.83
0.77
0.83
0.78
0.82
0.77
0.82
0.77
0.82
0.77
0.97
0.81
0.85
0.89
0.83
0.88
0.82
0.86
0.88
0.84
0.87
*
*
0.84
0.83
0.84
0.84
0.81
0.84
0.84
*
*
0.82
0.84
0.84
0.84
0.84
0.85
0.84
0.84
0.84
0.6
0.59
0.6
0.59
0.97
0.97
41
0.43
0.84
0.81
0.72
0.83
0.75
0.81
0.77
0.65
0.8
0.69
0.65
0.58
0.59
0.66
0.58
0.65
0.59
0.73
0.65
0.59
0.65
0.59
0.66
0.57
0.65
0.59
0.49
0.45
0.49
0.45
0.55
0.43
0.87
0.92
0.93
0.89
0.93
0.91
0.91
0.92
0.86
0.92
0.88
0.89
0.86
0.91
0.88
0.88
0.91
0.88
0.9
0.88
0.89
0.87
0.9
0.9
0.88
0.9
0.88
0.91
0.87
0.9
0.88
0.9
0.87
0.9
0.87
0.9
0.87
0.5
0.66
0.68
0.65
0.67
0.66
0.65
0.66
0.59
0.66
0.61
0.54
0.48
0.5
0.55
0.47
0.55
0.5
0.58
0.54
0.5
0.54
0.5
0.55
0.48
0.54
0.5
0.17
0.15
0.17
0.15
0.59
0.51
0.26
0.24
0.18
0.12
0.13
0.09
0.14
0.11
0.06
0.09
0.05
0.92
0.63
12.17
5.57
1.67
3.49
1.47
3.94
1.67
0.31
0.29
1.77
3.04
1.42
3.15
1.43
3.63
1.89
2.98
1.42
0.91
0.79
0.96
0.57
5.98
3.9
48
49
50
51
52
53
100
100
100
100
100
100
4
4
4
4
4
4
500
500
500
500
500
500
Alg
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
Vars
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
Deg
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
N
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
0.98
0.99
0.96
0.98
0.98
0.99
AP
0.99
1
0.99
1
0.99
1
0.99
1
0.99
1
0.87
0.9
0.94
0.9
0.93
0.88
0.9
0.94
0.89
0.93
0.99
1
0.3
0.2
1
0.99
1
0.99
0.82
0.77
0.93
0.84
0.83
0.78
0.88
0.9
0.91
0.93
0.63
0.63
AR
0.88
0.85
0.87
0.84
0.87
0.84
0.88
0.85
0.87
0.84
0.99
0.98
0.93
0.99
0.95
0.98
0.97
0.9
0.97
0.92
0.87
0.84
0.27
0.17
0.86
0.89
0.86
0.89
42
0.68
0.62
0.82
0.72
0.52
0.48
AHP AHR
0.73 0.6
0.74 0.58
0.68 0.55
0.66 0.52
0.9
0.72
0.9
0.68
0.97 0.68
0.96 0.6
0.97 0.68
0.96 0.59
0.81 0.87
0.85 0.86
0.88 0.81
0.84 0.86
0.87 0.82
0.82 0.85
0.84 0.84
0.88 0.77
0.83 0.85
0.87 0.79
*
*
0.26 0.22
0.17 0.13
0.88 0.68
0.9
0.75
0.87 0.68
0.89 0.74
0.9
0.87
0.95
0.9
0.9
0.87
McAdj
0.93
0.92
0.93
0.91
0.93
0.91
0.93
0.92
0.93
0.91
0.93
0.94
0.93
0.94
0.94
0.92
0.93
0.92
0.93
0.92
0.93
0.91
0.28
0.18
0.92
0.94
0.92
0.94
0.6
0.58
0.75
0.69
0.22
0.21
McArrow
0.39
0.39
0.3
0.26
0.65
0.62
0.69
0.62
0.69
0.62
0.69
0.72
0.71
0.72
0.71
0.68
0.69
0.68
0.69
0.68
0.19
0.11
0.6
0.67
0.6
0.67
6.12
4.31
0.67
0.5
2.27
1.84
E
0.25
0.18
0.47
0.21
0.6
0.49
1.28
0.71
1.41
0.93
0.48
0.42
0.26
0.19
0.12
0.31
0.25
0.14
0.13
0.09
1.82
1.32
22.24
4.75
6.06
21.54
5.86
19.52
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
Alg Vars
1
100
2
100
3
100
4
100
5
100
6
100
7
100
8
100
9
100
10
100
11
100
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1
0.99
1
0.97
0.99
1
0.99
1
0.99
1
0.99
1
0.99
1
0.99
1
0.99
1
0.99
1
0.95
0.97
0.99
1
0.86
0.88
0.85
0.9
0.89
0.86
0.89
0.86
0.89
0.85
0.89
0.86
0.87
0.84
0.87
0.84
0.87
0.84
0.87
0.84
0.98
0.93
0.88
0.85
Deg
6
6
6
6
6
6
6
6
6
6
6
N
AP AR
100 0.95 0.37
100 0.98 0.3
100 0.96 0.36
100 0.98 0.28
100 0.96 0.36
100 0.98 0.28
100 0.95 0.37
100 0.98 0.3
100 0.96 0.36
100 0.98 0.28
100 0.85 0.65
0.88
*
*
0.85
0.89
0.88
0.89
0.88
0.89
0.86
0.89
0.87
0.64
0.62
0.64
0.62
0.97
0.96
0.91
0.91
0.91
0.94
0.66
0.65
0.68
0.79
0.74
0.68
0.74
0.69
0.74
0.68
0.74
0.68
0.55
0.51
0.55
0.51
0.68
0.6
0.74
0.7
0.86
0.82
0.57
0.54
0.92
0.93
0.92
0.93
0.94
0.92
0.94
0.92
0.94
0.92
0.94
0.92
0.93
0.91
0.93
0.91
0.93
0.91
0.93
0.91
0.96
0.95
0.93
0.91
AHP
0.55
0.52
0.5
0.46
0.71
0.72
0.89
0.94
0.9
0.97
0.71
AHR
0.18
0.12
0.16
0.11
0.21
0.12
0.07
0.03
0.07
0.03
0.52
McAdj
0.58
0.52
0.58
0.51
0.58
0.51
0.58
0.52
0.58
0.51
0.73
43
0.6
0.65
0.67
0.6
0.67
0.6
0.66
0.58
0.67
0.6
0.24
0.2
0.24
0.2
0.69
0.62
0.67
0.65
0.78
0.77
0.28
0.25
McArrow
0.04
0.02
-1.16E-05
-0.03
0.17
0.14
0.16
0.1
0.16
0.1
0.34
6.36
0.63
0.56
3.37
18.31
5.53
19.61
5.89
27.46
10.38
18.19
5.45
1.32
1.38
1.31
0.89
8.93
6.38
9.89
7.07
0.61
0.49
2.8
2.27
E
0.02
0.03
0.03
0.02
0.03
0.02
0.03
0.03
0.06
0.03
0.16
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
0.91
0.96
0.8
0.93
0.88
0.92
0.96
0.85
0.94
0.98
0.99
0.98
0.99
1
0.98
0.99
0.99
1
0.97
0.98
0.95
0.99
1
0.99
1
0.99
1
0.99
1
0.96
0.98
0.96
0.98
0.96
0.98
0.96
0.98
0.54
0.23
0.7
0.47
0.52
0.43
0.22
0.58
0.38
0.34
0.26
0.35
0.26
0.27
0.38
0.28
0.36
0.27
0.36
0.29
0.42
0.36
0.27
0.37
0.28
0.34
0.25
0.36
0.28
0.36
0.28
0.36
0.28
0.36
0.28
0.36
0.28
0.75
0.77
0.66
0.77
0.71
0.72
0.76
0.69
0.75
*
*
0.66
0.75
0.75
0.64
0.67
0.65
0.74
*
*
0.68
0.66
0.75
0.67
0.72
0.64
0.74
0.66
0.73
0.52
0.52
0.52
0.52
0.92
0.91
0.75
0.83
44
0.43
0.1
0.57
0.36
0.41
0.33
0.09
0.46
0.27
0.19
0.09
0.09
0.22
0.1
0.2
0.1
0.3
0.2
0.09
0.22
0.1
0.18
0.06
0.2
0.1
0.19
0.13
0.19
0.13
0.07
0.03
0.2
0.09
0.69
0.46
0.73
0.65
0.67
0.62
0.45
0.69
0.58
0.57
0.49
0.57
0.5
0.51
0.59
0.52
0.58
0.51
0.58
0.52
0.62
0.59
0.51
0.59
0.52
0.57
0.49
0.59
0.51
0.58
0.51
0.58
0.51
0.58
0.51
0.58
0.51
0.33
0.14
0.33
0.31
0.28
0.25
0.13
0.29
0.24
0.13
0.12
0.12
0.12
0.09
0.13
0.12
0.19
0.13
0.12
0.15
0.11
0.11
0.09
0.14
0.11
0.02
0.02
0.02
0.02
0.16
0.1
0.2
0.15
0.12
0.02
0.22
0.07
0.08
0.03
1.00E-02
0.07
0.02
0.34
0.32
2.56
0.99
0.43
0.94
0.55
0.83
0.55
0.15
0.15
0.75
0.82
0.41
0.99
0.44
0.5
0.3
0.82
0.4
0.36
0.28
0.37
0.17
0.95
0.41
1.72
0.42
50
51
52
53
100
100
100
100
6
6
6
6
100
100
100
100
0.93
0.95
0.95
0.98
0.55
0.25
0.37
0.3
0.82
0.82
0.55
0.56
0.46
0.15
0.2
0.15
0.71
0.47
0.58
0.53
0.4
0.2
0.05
0.05
0.54
0.17
1.55
1.27
Alg
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
Vars
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
Deg
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
N
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
AP
0.96
0.97
0.97
0.98
0.97
0.98
0.96
0.97
0.97
0.98
0.75
0.81
0.91
0.77
0.88
0.77
0.83
0.91
0.79
0.89
0.98
0.99
0.2
1
1
0.99
1
0.99
1
0.98
AR
0.65
0.59
0.64
0.58
0.64
0.58
0.65
0.59
0.64
0.58
0.95
0.92
0.83
0.94
0.88
0.92
0.87
0.7
0.9
0.77
0.62
0.56
0.13
0.61
0.62
0.68
0.62
0.67
0.62
0.64
AHP
0.64
0.65
0.6
0.59
0.77
0.8
0.92
0.94
0.93
0.94
0.67
0.72
0.82
0.68
0.79
0.69
0.74
0.81
0.7
0.79
*
*
0.16
0.76
0.75
0.79
0.75
0.79
0.75
*
AHR
0.4
0.37
0.37
0.33
0.48
0.44
0.41
0.32
0.41
0.31
0.82
0.8
0.72
0.81
0.77
0.79
0.75
0.6
0.78
0.67
0.1
0.42
0.42
0.49
0.42
0.48
0.43
-
McAdj
0.78
0.75
0.78
0.74
0.78
0.74
0.78
0.75
0.78
0.74
0.83
0.85
0.86
0.84
0.87
0.83
0.84
0.79
0.83
0.82
0.77
0.74
0.16
0.77
0.78
0.81
0.78
0.81
0.78
0.78
McArrow
0.2
0.2
0.14
0.12
0.37
0.38
0.46
0.39
0.46
0.39
0.51
0.54
0.59
0.51
0.59
0.5
0.52
0.49
0.51
0.52
0.08
0.32
0.31
0.39
0.31
0.39
0.32
-
E
0.24
0.16
0.41
0.15
0.54
0.3
0.98
0.43
1.07
0.48
1.37
0.88
0.41
0.74
0.28
1.03
0.55
0.16
0.55
0.16
1.78
1.23
14.46
18.05
3.75
34.76
4.37
18.05
3.96
0.7
45
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
Alg
1
2
3
4
5
6
7
8
9
10
11
12
13
Vars
100
100
100
100
100
100
100
100
100
100
100
100
100
Deg
6
6
6
6
6
6
6
6
6
6
6
6
6
N
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
0.98
0.97
0.99
1
0.99
1
1
1
0.99
1
0.97
0.98
0.97
0.98
0.97
0.98
0.97
0.98
0.9
0.95
0.96
0.97
AP
0.97
0.98
0.97
0.98
0.97
0.98
0.97
0.98
0.97
0.98
0.72
0.76
0.84
0.58
0.67
0.67
0.62
0.68
0.62
0.68
0.62
0.67
0.62
0.64
0.58
0.64
0.58
0.64
0.58
0.64
0.58
0.9
0.79
0.65
0.59
*
0.85
0.79
0.75
0.79
0.75
0.77
0.76
0.79
0.75
0.51
0.51
0.51
0.51
0.93
0.93
0.77
0.81
0.83
0.89
0.52
0.52
AR
0.72
0.68
0.71
0.67
0.71
0.67
0.72
0.68
0.71
0.67
0.97
0.96
0.91
46
AHP
0.64
0.65
0.6
0.59
0.81
0.82
0.93
0.94
0.94
0.94
0.64
0.68
0.75
0.59
0.49
0.42
0.49
0.43
0.48
0.43
0.49
0.42
0.33
0.3
0.33
0.3
0.41
0.31
0.49
0.45
0.81
0.71
0.35
0.31
AHR
0.46
0.43
0.42
0.38
0.56
0.53
0.53
0.45
0.52
0.45
0.84
0.83
0.8
0.74
0.8
0.81
0.78
0.81
0.78
0.82
0.78
0.81
0.78
0.78
0.74
0.78
0.74
0.78
0.74
0.78
0.74
0.89
0.86
0.78
0.75
0.51
0.39
0.31
0.39
0.31
0.37
0.33
0.39
0.31
0.02
0.02
0.02
0.02
0.46
0.39
0.37
0.38
0.66
0.65
0.03
0.03
McAdj McArrow
0.83
0.22
0.81
0.21
0.82
0.15
0.8
0.13
0.82
0.45
0.8
0.45
0.83
0.55
0.81
0.5
0.82
0.55
0.8
0.5
0.82
0.49
0.84
0.52
0.86
0.57
0.58
3.67
231.15
3.38
25.51
3.54
14.2
4.1
20.48
3.33
2.26
1.62
2.31
1.18
12.1
6.28
16.28
6.99
1.25
0.88
3.81
2.76
E
0.66
0.33
1.24
0.69
1.51
1.06
3.23
1.41
3.54
1.7
3.42
2.46
1.16
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
0.75
0.82
0.72
0.77
0.85
0.75
0.82
0.98
0.99
1
0.1
1
1
0.98
0.98
0.97
0.1
1
1
1
1
0.97
0.98
0.97
0.98
0.97
0.98
0.97
0.98
0.84
0.91
0.96
0.93
0.96
0.94
0.85
0.95
0.88
0.7
0.66
0.69
0.07
0.7
0.69
0.71
0.67
0.73
0.07
0.69
0.69
0.7
0.69
0.71
0.67
0.71
0.67
0.71
0.67
0.71
0.67
0.94
0.89
47
0.66
0.73
0.64
0.69
0.76
0.67
0.73
*
*
0.8
0.08
0.81
0.8
*
*
0.86
0.08
0.8
0.8
0.8
0.8
0.57
0.56
0.57
0.56
0.94
0.94
0.81
0.82
0.77
0.84
0.83
0.81
0.83
0.81
0.74
0.82
0.76
0.51
0.05
0.52
0.51
0.64
0.05
0.51
0.51
0.52
0.51
0.42
0.38
0.42
0.38
0.52
0.45
0.57
0.54
0.83
0.8
0.83
0.86
0.82
0.84
0.84
0.83
0.84
0.82
0.8
0.82
0.08
0.83
0.82
0.82
0.8
0.83
0.08
0.82
0.82
0.83
0.82
0.82
0.8
0.82
0.8
0.82
0.8
0.82
0.8
0.88
0.89
0.51
0.56
0.48
0.52
0.53
0.5
0.53
0.41
0.04
0.42
0.41
0.56
0.04
0.41
0.41
0.42
0.41
0.12
0.09
0.12
0.09
0.55
0.49
0.46
0.45
0.62
0.66
1.2
0.57
2.81
1.88
0.63
1.16
0.41
4.21
2.84
50.12
-9.90E-03
82.52
50.31
2.49
1.46
7.42
58.65
46.87
54.84
54.88
53.1
3.99
2.87
4.14
2.29
20.2
16.28
23.95
14.37
1.66
1.19
52
53
100
100
6
6
1000
1000
Alg
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
Vars
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
Deg
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
N
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
0.97
0.98
AP
0.74
0.93
0.79
0.95
0.79
0.95
0.74
0.93
0.79
0.95
0.73
0.94
1
0.44
0.98
0.75
0.94
1
0.6
0.98
0.78
0.94
0.94
0.94
0.93
0.94
0.75
0.94
0.58
0.72
0.68
AR
0.66
0.55
0.65
0.54
0.65
0.54
0.66
0.55
0.65
0.54
0.72
0.62
0.34
0.8
0.56
0.71
0.62
0.33
0.75
0.56
0.62
0.53
0.54
0.54
0.53
0.54
0.65
0.54
0.68
0.57
0.57
0.43
0.4
0.83
0.81
AHP
0.42
0.52
0.43
0.5
0.63
0.84
0.96
0.99
0.97
0.99
0.6
0.85
0.96
0.35
0.89
0.62
0.85
0.95
0.48
0.89
*
*
0.87
0.88
0.88
0.88
*
*
0.39
AHR
0.29
0.18
0.27
0.17
0.3
0.19
0.12
0.05
0.12
0.05
0.49
0.29
0.05
0.61
0.23
0.47
0.29
0.05
0.56
0.23
0.11
0.11
0.1
0.11
0.47
McAdj
0.69
0.71
0.71
0.71
0.71
0.71
0.69
0.71
0.71
0.71
0.72
0.76
0.58
0.59
0.74
0.73
0.76
0.58
0.67
0.74
0.69
0.7
0.71
0.71
0.71
0.71
0.69
0.72
0.63
48
0.12
0.11
McArrow
-0.05
0.03
-0.04
7.10E-03
0.17
0.24
0.24
0.15
0.24
0.15
0.23
0.32
0.15
0.04
0.3
0.24
0.32
0.15
0.14
0.3
0.19
0.19
0.19
0.19
-9.76E-03
5.28
5.08
E
1.22
1.22
0.18
0.06
0.12
0.05
1.6
1.12
0.13
0.07
0.18
0.05
0.03
1.84
0.08
0.09
0.04
0.03
0.13
0.06
5.61
5.77
15.19
4.91
3.98
4.7
2.62
1.88
12.52
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
0.94
0.93
0.98
0.94
0.79
0.95
0.79
0.95
0.79
0.95
0.79
0.95
0.97
1
0.74
0.94
0.54
0.55
0.51
0.54
0.65
0.54
0.65
0.54
0.65
0.54
0.65
0.54
0.66
0.32
0.65
0.55
0.88
0.87
0.92
0.88
0.41
0.5
0.41
0.5
0.96
0.99
0.89
0.99
0.89
0.99
0.41
0.51
0.11
0.12
0.07
0.11
0.3
0.2
0.3
0.2
0.12
0.06
0.23
0.11
0.4
0.11
0.32
0.21
0.71
0.71
0.7
0.71
0.71
0.71
0.71
0.71
0.71
0.71
0.71
0.71
0.8
0.56
0.7
0.71
0.19
0.19
0.16
0.2
-0.07
3.60E-03
-0.07
3.60E-03
0.24
0.17
0.3
0.24
0.42
0.24
-0.07
0.02
5.03
6.06
3.76
5.11
3.44
3.26
3.37
3.08
4.15
5.3
4.52
4.15
3.33
1.75
29.44
30.23
Alg
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
Vars
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
Deg
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
N
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
AP
0.78
0.97
0.8
0.98
0.8
0.98
0.78
0.97
0.8
0.98
0.79
0.98
1
0.91
1
AR
0.95
0.91
0.95
0.91
0.95
0.91
0.95
0.91
0.95
0.91
0.98
0.95
0.84
0.96
0.89
AHP
0.53
0.71
0.53
0.68
0.7
0.92
0.99
0.99
0.99
0.99
0.73
0.95
0.98
0.87
0.97
AHR
0.55
0.52
0.54
0.5
0.68
0.63
0.55
0.45
0.55
0.45
0.79
0.7
0.57
0.74
0.63
McAdj
0.86
0.94
0.87
0.94
0.87
0.94
0.86
0.94
0.87
0.94
0.88
0.96
0.92
0.93
0.94
McArrow
0.16
0.33
0.15
0.28
0.44
0.61
0.61
0.53
0.61
0.53
0.55
0.68
0.62
0.65
0.66
E
1.56
1.23
0.39
0.18
0.46
0.25
1.66
1.53
0.73
0.29
0.58
0.16
0.11
0.22
0.17
49
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
0.83
0.98
1
0.91
1
0.81
0.97
0.97
0.97
0.97
0.97
0.79
0.97
0.54
0.97
0.97
0.99
0.97
0.8
0.98
0.8
0.98
0.8
0.98
0.8
0.98
1
1
0.78
0.97
0.94
0.84
0.96
0.89
0.95
0.91
0.91
0.91
0.91
0.91
0.95
0.91
0.97
0.91
0.91
0.9
0.91
0.95
0.91
0.95
0.91
0.95
0.91
0.95
0.91
0.95
0.84
0.95
0.77
0.95
0.98
0.88
0.97
*
*
0.87
0.86
0.86
0.86
*
*
0.43
0.86
0.86
0.86
0.86
0.51
0.65
0.51
0.65
0.99
1
0.93
0.99
0.98
0.99
0.52
50
0.78
0.7
0.57
0.74
0.63
0.6
0.6
0.59
0.6
0.78
0.6
0.6
0.58
0.6
0.58
0.54
0.58
0.54
0.55
0.46
0.65
0.57
0.7
0.59
0.59
0.9
0.96
0.92
0.94
0.94
0.88
0.94
0.94
0.94
0.94
0.94
0.87
0.94
0.72
0.94
0.94
0.94
0.94
0.87
0.94
0.87
0.94
0.87
0.94
0.87
0.94
0.97
0.92
0.86
0.58
0.68
0.62
0.66
0.66
0.53
0.53
0.52
0.53
0.22
0.53
0.53
0.52
0.53
0.13
0.25
0.13
0.25
0.61
0.54
0.63
0.63
0.72
0.64
0.15
0.16
0.13
0.1
0.17
0.15
7.37
6.33
44.89
12.23
8.48
11.66
2.73
2.19
43.54
12.68
13.02
15.67
13.27
4.93
3.6
4.24
3.57
9.9
8.05
10.6
7.09
4.65
3.77
31.43
53
500
2
500
Alg
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
Vars
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
Deg
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
N
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
0.97
AP
0.77
0.98
0.78
0.98
0.78
0.98
0.77
0.98
0.78
0.98
0.78
0.97
1
0.95
1
0.82
0.98
1
0.95
1
0.8
0.98
0.98
0.98
0.98
0.78
0.98
0.55
-
0.91
0.68
AR
0.99
0.98
0.99
0.98
0.99
0.98
0.99
0.98
0.99
0.98
1
0.99
0.95
0.99
0.96
0.99
0.99
0.95
0.99
0.96
0.99
0.97
0.97
0.97
0.97
0.99
0.98
0.99
51
0.55
AHP AHR
0.57 0.62
0.8
0.62
0.56 0.6
0.76 0.6
0.71 0.73
0.95 0.69
0.99 0.66
1
0.6
0.99 0.66
1
0.6
0.73 0.82
0.96 0.74
0.99 0.68
0.93 0.75
0.99 0.69
0.77 0.81
0.96 0.74
0.99 0.68
0.93 0.75
0.99 0.69
*
*
0.94 0.68
0.94 0.67
0.94 0.68
*
*
0.44 0.81
-
0.94
McAdj
0.87
0.98
0.88
0.98
0.88
0.98
0.87
0.98
0.88
0.98
0.88
0.98
0.97
0.97
0.98
0.9
0.98
0.97
0.97
0.98
0.89
0.98
0.97
0.97
0.97
0.88
0.98
0.74
-
0.3
McArrow
0.25
0.48
0.22
0.43
0.48
0.68
0.69
0.65
0.7
0.65
0.57
0.73
0.71
0.71
0.72
0.61
0.73
0.71
0.72
0.72
0.66
0.66
0.66
0.25
-
28.73
E
2.02
1.41
0.78
0.37
0.89
0.53
2.36
1.89
1.58
0.71
0.97
0.29
0.2
0.31
0.26
0.29
0.22
0.2
0.27
0.26
8.78
7.86
18.73
11.53
17.49
3.04
2.71
166.5
-
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
Alg
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
Vars
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
Deg
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
N
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
0.98
0.97
0.99
0.97
0.78
0.98
0.78
0.98
0.78
0.98
0.78
0.98
1
1
0.77
0.98
AP
0.91
0.98
0.93
0.99
0.93
0.99
0.91
0.98
0.93
0.99
0.78
0.94
0.99
0.59
0.97
0.86
0.97 0.94
0.97 0.94
0.97 0.94
0.97 0.94
0.99 0.53
0.98 0.71
0.99 0.53
0.98 0.71
0.99 0.99
0.98 1
0.99 0.94
0.98 0.99
0.99 0.99
0.95 0.99
0.99 0.53
0.98 0.74
AR
0.5
0.4
0.49
0.38
0.49
0.38
0.5
0.4
0.49
0.38
0.73
0.56
0.26
0.78
0.48
0.62
AHP
0.54
0.56
0.5
0.52
0.76
0.85
0.97
0.98
0.98
0.98
0.69
0.84
0.93
0.53
0.88
0.76
52
0.68
0.68
0.67
0.68
0.63
0.61
0.63
0.61
0.66
0.6
0.71
0.67
0.73
0.68
0.64
0.63
0.97
0.97
0.98
0.97
0.88
0.98
0.88
0.98
0.88
0.98
0.88
0.98
1
0.97
0.87
0.98
AHR McAdj
0.24 0.67
0.16 0.62
0.22 0.67
0.15 0.62
0.3
0.67
0.17 0.62
0.13 0.67
0.05 0.62
0.13 0.67
0.05 0.62
0.62 0.75
0.43 0.73
0.07 0.51
0.68 0.68
0.33 0.68
0.51 0.73
0.66
0.66
0.66
0.66
0.19
0.37
0.19
0.37
0.69
0.66
0.69
0.71
0.75
0.71
0.21
0.41
McArrow
0.05
0.05
7.30E-03
0.02
0.26
0.23
0.25
0.16
0.25
0.15
0.4
0.41
0.17
0.27
0.36
0.39
18.93
19.45
36.99
18.76
5.2
4.02
4.37
3.99
12.05
7.72
11.86
12.8
4.92
4.44
31.72
28.56
E
1.17
1.36
0.15
0.1
0.18
0.11
1.14
1.2
0.19
0.11
1.64
0.35
0.04
3.95
0.16
0.16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
0.95
0.99
0.77
0.97
0.94
0.99
0.98
0.98
0.98
0.98
0.92
0.99
0.82
0.98
0.98
1
0.98
0.93
0.99
0.93
0.99
0.93
0.99
0.93
0.99
0.97
0.99
0.91
0.98
0.52
0.26
0.66
0.47
0.46
0.36
0.37
0.37
0.38
0.37
0.49
0.39
0.54
0.37
0.38
0.34
0.37
0.49
0.38
0.49
0.38
0.49
0.38
0.49
0.38
0.65
0.29
0.5
0.4
0.85
0.94
0.68
0.88
*
*
0.82
0.8
0.79
0.8
*
*
0.62
0.8
0.78
0.79
0.79
0.49
0.5
0.49
0.5
0.98
0.98
0.91
0.98
0.92
0.97
0.51
0.52
53
0.39
0.07
0.56
0.31
0.12
0.12
0.12
0.12
0.41
0.12
0.14
0.08
0.13
0.25
0.17
0.25
0.17
0.13
0.05
0.27
0.12
0.54
0.17
0.26
0.18
0.7
0.51
0.71
0.67
0.66
0.6
0.6
0.6
0.61
0.6
0.67
0.62
0.66
0.6
0.61
0.58
0.6
0.67
0.62
0.67
0.62
0.67
0.62
0.67
0.62
0.79
0.53
0.67
0.62
0.38
0.17
0.35
0.35
0.18
0.17
0.17
0.17
0.2
0.17
0.18
0.13
0.17
4.03E-03
1.29E-03
4.03E-03
1.29E-03
0.25
0.16
0.34
0.24
0.54
0.29
0.02
0.02
0.08
0.03
0.26
0.1
6.66
6.92
29.68
8.99
6.98
7.85
1.87
1.7
11.78
7.81
9.33
5.36
7.95
3.7
3.65
3.9
3.01
6.32
4.9
6.73
6.69
9.87
3.17
31.32
31.43
Alg
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
Vars
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
Deg
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
N
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
AP
0.96
1
0.96
1
0.96
1
0.96
1
0.96
1
0.84
0.97
0.99
0.92
0.99
0.91
0.98
0.99
0.95
0.99
0.97
1
0.1
1
1
0.4
0.96
1
0.86
1
-
AR
0.87
0.81
0.87
0.8
0.87
0.8
0.87
0.81
0.87
0.8
0.98
0.95
0.81
0.97
0.87
0.95
0.92
0.78
0.94
0.84
0.86
0.79
0.08
0.79
0.79
0.31
0.87
0.8
0.9
0.79
-
AHP
0.68
0.69
0.63
0.63
0.88
0.93
0.99
0.99
0.99
0.99
0.81
0.95
0.98
0.9
0.98
0.89
0.96
0.98
0.93
0.98
*
*
0.09
0.84
0.85
0.34
*
*
0.78
0.84
54
AHR
0.58
0.51
0.53
0.47
0.72
0.66
0.6
0.48
0.6
0.48
0.91
0.85
0.7
0.89
0.77
0.88
0.82
0.67
0.86
0.74
0.06
0.61
0.61
0.24
0.81
0.61
-
McAdj
0.91
0.89
0.91
0.89
0.91
0.89
0.91
0.89
0.91
0.89
0.91
0.96
0.9
0.95
0.93
0.93
0.95
0.88
0.94
0.91
0.91
0.89
0.09
0.89
0.89
0.35
0.91
0.89
0.88
0.89
-
McArrow
0.32
0.29
0.23
0.2
0.64
0.64
0.65
0.56
0.65
0.56
0.73
0.82
0.72
0.79
0.77
0.77
0.8
0.69
0.8
0.75
0.05
0.52
0.52
0.21
0.6
0.52
-
E
2.16
1.76
1.46
0.63
2.33
1.41
5.91
3.02
5.15
2
3.6
1.32
0.64
1.34
0.58
0.66
0.5
0.24
0.48
0.36
12.2
10.12
24.44
38.28
24.05
15.19
3.88
3
34.09
38.23
-
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
Alg
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
Vars
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
Deg
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
N
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1
1
1
0.96
1
0.96
1
0.96
1
0.96
1
0.99
1
0.96
1
AP
0.97
1
0.97
1
0.97
1
0.97
1
0.97
1
0.84
0.97
0.99
0.95
0.99
0.91
0.98
0.99
0.79
0.79
0.79
0.87
0.8
0.87
0.8
0.87
0.8
0.87
0.8
0.96
0.86
0.87
0.81
0.85
0.83
0.84
0.58
0.59
0.58
0.59
0.99
0.99
0.93
0.97
0.98
0.99
0.61
0.63
AR
0.94
0.91
0.94
0.91
0.94
0.91
0.94
0.91
0.94
0.91
1
0.99
0.94
0.99
0.95
0.99
0.98
0.92
55
AHP
0.74
0.75
0.69
0.69
0.91
0.95
1
1
1
1
0.81
0.95
0.97
0.93
0.97
0.88
0.96
0.98
0.61
0.59
0.61
0.51
0.47
0.51
0.47
0.6
0.48
0.73
0.66
0.86
0.76
0.54
0.49
AHR
0.68
0.64
0.63
0.59
0.82
0.79
0.75
0.66
0.75
0.66
0.93
0.9
0.84
0.9
0.86
0.92
0.89
0.82
0.89
0.88
0.89
0.91
0.89
0.91
0.89
0.91
0.89
0.91
0.89
0.98
0.93
0.91
0.9
0.52
0.5
0.52
0.16
0.15
0.16
0.15
0.65
0.56
0.7
0.67
0.85
0.77
0.21
0.2
39.53
49.26
38.68
9.42
6.37
9.39
5.68
53.57
40.38
64.25
41.07
13.45
11.07
36.45
35.1
McAdj McArrow E
0.96
0.44
3.82
0.95
0.42
2.62
0.96
0.36
4.33
0.95
0.33
2.32
0.96
0.75
7.54
0.95
0.76
5.45
0.96
0.77
19.59
0.95
0.7
9.48
0.96
0.77
19.82
0.95
0.7
11.6
0.92
0.74
5.58
0.98
0.85
2.52
0.96
0.83
1.6
0.97
0.84
1.29
0.97
0.84
0.83
0.95
0.81
1.42
0.98
0.85
1
0.95
0.81
0.59
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
4
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
Alg
Vars
Deg
N
0.97
0.99
0.97
1
1
1
0.97
1
0.88
0.97
1
0.97
1
0.97
1
0.97
1
0.99
0.99
0.97
1
AP
0.98
0.94
0.94
0.9
0.9
0.9
0.94
0.91
0.96
0.94
0.91
0.94
0.91
0.94
0.91
0.94
0.91
0.99
0.95
0.94
0.91
AR
0.94
0.97
*
*
0.89
0.89
*
*
0.81
0.62
0.62
0.62
0.62
0.99
0.99
0.94
0.96
0.98
0.98
0.65
0.66
0.89
0.84
0.74
0.74
0.88
0.59
0.56
0.59
0.56
0.75
0.67
0.83
0.78
0.9
0.86
0.62
0.59
0.97
0.96
0.95
0.95
0.95
0.95
0.96
0.95
0.92
0.96
0.95
0.96
0.95
0.96
0.95
0.96
0.95
0.99
0.97
0.96
0.95
AHP
AHR
McAdj
56
0.85
0.83
0.66
0.67
0.69
0.24
0.22
0.24
0.22
0.77
0.7
0.78
0.77
0.88
0.85
0.29
0.29
McArrow
0.73
0.49
21.43
17.43
112.67
97.54
6.73
5.84
65.41
12.8
9.52
12.13
8.49
101.85
72.33
127.52
74.05
14.52
12.54
43.26
43.87
E
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
0.94
0.99
0.96
0.99
0.96
0.99
0.94
0.99
0.96
0.99
0.81
0.93
0.98
0.66
0.96
0.89
0.95
0.99
0.83
0.97
0.97
0.99
0.99
0.99
0.96
0.99
0.91
-
0.39
0.3
0.38
0.28
0.38
0.28
0.39
0.3
0.38
0.28
0.73
0.6
0.22
0.77
0.5
0.56
0.46
0.22
0.61
0.4
0.34
0.26
0.26
0.27
0.37
0.29
0.42
-
0.56
0.55
0.52
0.5
0.76
0.83
0.98
0.99
0.98
0.98
0.74
0.85
0.88
0.6
0.87
0.8
0.84
0.89
0.74
0.86
*
*
0.77
0.76
*
*
0.71
57
0.19
0.13
0.17
0.11
0.24
0.13
0.09
0.04
0.09
0.04
0.65
0.52
0.09
0.7
0.41
0.48
0.38
0.08
0.54
0.31
0.09
0.09
0.33
-
0.6
0.54
0.6
0.53
0.6
0.53
0.6
0.54
0.6
0.53
0.77
0.75
0.47
0.71
0.69
0.7
0.65
0.46
0.71
0.62
0.57
0.51
0.51
0.51
0.6
0.53
0.62
-
0.06
0.03
0.02
8.32E-04
0.22
0.19
0.21
0.14
0.21
0.13
0.47
0.47
0.17
0.36
0.42
0.4
0.37
0.17
0.39
0.34
0.13
0.13
0.24
-
1.22
1.24
0.26
0.16
0.28
0.14
1.33
1.35
0.32
0.12
4.26
2.4
0.07
9.46
1.29
0.31
0.17
0.06
0.69
0.15
6.78
6.08
11.9
9.87
2.12
1.65
13.55
-
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
0.96
0.99
0.96
0.99
0.96
0.99
0.96
0.99
0.96
0.99
0.94
0.99
0.38
0.28
0.38
0.28
0.38
0.28
0.38
0.28
0.64
0.26
0.39
0.3
0.5
0.49
0.5
0.49
0.97
0.98
0.87
0.96
0.91
0.94
0.52
0.51
0.19
0.13
0.19
0.13
0.09
0.04
0.22
0.09
0.58
0.17
0.2
0.14
0.6
0.53
0.6
0.53
0.6
0.53
0.6
0.53
0.78
0.5
0.6
0.54
8.02E-03 4.39
-4.94E-03 4.06
8.02E-03 5.01
-4.94E-03 3.01
0.21
8.23
0.14
4.56
0.28
11.5
0.21
4.86
0.57
25.03
0.28
4.64
0.02
32.08
0.01
32.29
Alg
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
Vars
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
Deg
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
N
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
AP
0.99
1
0.99
1
0.99
1
0.99
1
0.99
1
0.86
0.95
0.98
0.91
0.98
0.92
0.96
0.98
0.95
AR
0.75
0.68
0.74
0.67
0.74
0.67
0.75
0.68
0.74
0.67
0.98
0.96
0.87
0.98
0.91
0.93
0.89
0.73
0.92
AHP
0.65
0.65
0.6
0.59
0.88
0.9
0.99
0.99
0.99
0.99
0.84
0.93
0.96
0.89
0.96
0.9
0.94
0.96
0.92
AHR
0.48
0.42
0.43
0.38
0.64
0.58
0.52
0.4
0.52
0.4
0.94
0.91
0.82
0.93
0.86
0.88
0.84
0.68
0.87
McAdj
0.86
0.82
0.86
0.82
0.86
0.82
0.86
0.82
0.86
0.82
0.92
0.96
0.92
0.94
0.94
0.93
0.92
0.85
0.93
McArrow
0.24
0.21
0.15
0.13
0.57
0.55
0.58
0.49
0.58
0.49
0.78
0.85
0.79
0.83
0.83
0.79
0.79
0.68
0.8
58
E
3.86
2.4
4.57
1.87
7.39
3.92
16.45
6.27
16.86
6.01
14.27
7.67
5.18
4.67
2.78
4.53
1.86
0.66
1.84
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
Alg
1
Vars
500
Deg
6
N
1000
0.98
0.99
1
1
1
0.99
1
0.96
0.99
1
0.99
1
0.99
1
0.99
1
0.98
0.99
0.99
1
0.8
0.73
0.65
0.65
0.66
0.74
0.67
0.79
0.74
0.67
0.74
0.67
0.74
0.67
0.74
0.67
0.97
0.89
0.75
0.68
0.96
*
*
0.82
0.82
*
*
0.89
0.57
0.57
0.57
0.57
0.99
0.99
0.89
0.93
0.97
0.98
0.6
0.6
AP AR AHP
0.99 0.85 0.69
59
0.75
0.49
0.5
0.73
0.43
0.38
0.43
0.38
0.52
0.4
0.65
0.59
0.92
0.84
0.45
0.4
AHR
0.57
0.88
0.85
0.81
0.8
0.81
0.86
0.81
0.87
0.86
0.82
0.86
0.82
0.86
0.82
0.86
0.82
0.97
0.94
0.86
0.82
McAdj
0.91
0.73
0.42
0.43
0.65
0.12
0.1
0.12
0.1
0.58
0.49
0.59
0.59
0.89
0.83
0.16
0.14
McArrow
0.32
0.8
25.28
17.51
81.96
60.56
9.37
4.81
61.2
19.66
13.2
21.81
11.24
160.61
93.55
224.5
84.43
41.8
33.51
53.04
45.74
E
11.13
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
500
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1
0.99
1
0.99
1
0.99
1
0.99
1
0.86
0.94
0.97
0.93
0.97
0.91
0.95
0.97
0.95
0.97
0.99
1
0.1
0.99
1
0.97
-
0.8
0.84
0.79
0.84
0.79
0.85
0.8
0.84
0.79
0.99
0.99
0.95
0.99
0.97
0.98
0.96
0.87
0.96
0.9
0.83
0.78
0.08
0.84
0.79
0.87
60
0.68
0.64
0.62
0.9
0.91
0.99
0.99
0.99
0.99
0.84
0.92
0.95
0.91
0.95
0.89
0.93
0.95
0.92
0.95
*
*
0.09
*
*
0.92
-
0.53
0.53
0.48
0.74
0.7
0.68
0.58
0.67
0.58
0.95
0.94
0.9
0.94
0.92
0.93
0.91
0.82
0.91
0.85
0.06
0.82
-
0.89
0.91
0.89
0.91
0.89
0.91
0.89
0.91
0.89
0.93
0.96
0.96
0.96
0.97
0.94
0.95
0.92
0.95
0.93
0.91
0.88
0.09
0.91
0.89
0.92
-
0.29
0.23
0.2
0.67
0.65
0.7
0.63
0.7
0.63
0.79
0.86
0.86
0.86
0.87
0.83
0.84
0.78
0.84
0.81
0.05
0.76
-
5.79
15.7
6.37
31.96
17.17
81.91
30.85
82.92
29.51
17.82
11.68
9.05
4.95
3.49
6.64
4.46
2.13
2.85
1.39
71.41
44.48
-9.90E-03
24.8
13.24
161.08
-
40
41
42
43
44
45
46
47
48
49
50
51
52
53
500
500
500
500
500
500
500
500
500
500
500
500
500
500
6
6
6
6
6
6
6
6
6
6
6
6
6
6
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
1000
0.99
1
0.99
1
0.99
1
0.99
1
0.97
0.98
0.99
1
0.84
0.79
0.84
0.79
0.84
0.79
0.84
0.79
0.99
0.96
0.85
0.79
0.58
0.57
0.58
0.57
0.99
0.99
0.88
0.91
0.96
0.97
0.6
0.59
0.49
0.45
0.49
0.45
0.68
0.58
0.73
0.7
0.94
0.91
0.51
0.47
0.91
0.89
0.91
0.89
0.91
0.89
0.91
0.89
0.98
0.97
0.92
0.89
0.14
0.11
0.14
0.11
0.7
0.63
0.64
0.65
0.9
0.88
0.18
0.15
48.41
26.29
37.9
21.96
414.81
288.21
483.89
256.16
43.1
39.69
74.29
64.07
| 2 |
Attribute-Driven Community Search
Xin Huang, Laks V.S. Lakshmanan
University of British Columbia
arXiv:1609.00090v3 [cs.DB] 14 Feb 2017
{xin0,laks}@cs.ubc.ca
ABSTRACT
Recently, community search over graphs has attracted significant
attention and many algorithms have been developed for finding
dense subgraphs from large graphs that contain given query nodes.
In applications such as analysis of protein-protein interaction (PPI)
networks, citation graphs, and collaboration networks, nodes tend
to have attributes. Unfortunately, most previously developed community search algorithms ignore these attributes and result in communities with poor cohesion w.r.t. their node attributes. In this
paper, we study the problem of attribute-driven community search,
that is, given an undirected graph G where nodes are associated
with attributes, and an input query Q consisting of nodes Vq and
attributes Wq, find the communities containing Vq, in which most
community members are densely inter-connected and have similar
attributes.
We formulate our problem of finding attributed truss communities (ATC) as finding all connected and close k-truss subgraphs
containing Vq that are locally maximal and have the largest attribute relevance score among such subgraphs. We design a novel
attribute relevance score function and establish its desirable properties. The problem is shown to be NP-hard. However, we develop
an efficient greedy algorithmic framework, which finds a maximal
k-truss containing Vq, and then iteratively removes the nodes with
the least popular attributes and shrinks the graph so as to satisfy
community constraints. We also build an elegant index to maintain
the known k-truss structure and attribute information, and propose
efficient query processing algorithms. Extensive experiments on
large real-world networks with ground-truth communities show the efficiency and effectiveness of our proposed methods.
Figure 1: An example attributed graph G (nodes q1, q2, v1, ..., v10 carry attributes such as DB, DM, ML, and IR; the subgraph H is a 4-truss)
1. INTRODUCTION
Graphs have emerged as a powerful model for representing different types of data. For instance, unstructured data (e.g., text documents), semi-structured data (e.g., XML databases) and structured data (e.g., relational databases) can all be modeled as graphs, where the vertices (nodes) are respectively documents, elements, and tuples, and the edges can respectively be hyperlinks, parent-child relationships, and primary-foreign-key relationships [28]. In these
graphs, communities naturally exist as groups of nodes that are
densely interconnected. Finding communities in large networks
has found extensive applications in protein-protein interaction networks, sensor/communication networks, and collaboration networks.
Consequently, community detection, i.e., finding all communities
in a given network, serves as a global network-wide analysis tool,
and has been extensively studied in the literature. Specifically, various definitions of communities based on different notions of dense
subgraphs have been proposed and studied: quasi-clique [10], densest subgraph [37], k-core [35, 29, 11, 3], and k-truss [20, 22]. More
recently, a related but different problem called community search
has generated considerable interest. It is motivated by the need to
make answers more meaningful and personalized to the user [30,
20]. For a given set of query nodes, community search seeks to
find the communities containing the query nodes.
In the aforementioned applications, the entities modeled by the
network nodes often have properties which are important for making sense of communities. E.g., authors in collaboration networks
have areas of expertise; proteins have molecular functions, biological processes, and cellular components as properties. Such
networks can be modeled using attributed graphs [39] where attributes associated with nodes capture their properties. E.g., Figure 1 shows an example of a collaboration network. The nodes
qi , vj , ... represent authors. Node attributes (e.g., DB, ML) represent authors’ topics of expertise. In finding communities (with
or without query nodes) over attributed graphs, we might want to
ensure that the nodes in the discovered communities have homogeneous attributes. For instance, it has been found that communities
with homogeneous attributes among nodes more accurately predict
protein complexes [19]. Furthermore, we might wish to query, not
just using query nodes, but also using query attributes. To illustrate,
consider searching for communities containing the nodes {q1 , q2 }.
Based on structure alone, the subgraph H shown in Figure 1 is a
good candidate answer for this search, as it is densely connected.
However, attributes of the authors in this community are not homogeneous: the community is a mix of authors working in different
topics – DB, DM, IR, and ML. Previous community search methods include those based on k-core [35, 29, 11], k-truss [22], and
1.0-quasi-k-clique-ℓ-adjacent community [10]. A k-core [29] is
Figure 2: Attribute communities for queries on different query nodes Vq and query attributes Wq: (a) H1, a 4-truss community for Vq = {q1, q2}, Wq = {DB}; (b) H2, a 4-truss community for Vq = {q1, q2}, Wq = {DB, DM}; (c) H3, a 4-truss community for Vq = {q1, q2}, Wq = {DM}; (d) H4, a 3-truss community for Vq = {q1}, Wq = {ML}.
for a small subgraph of Figure 1 and a query. Keyword search finds
answers corresponding to trees or subgraphs with minimum communication cost that connect the input keywords/nodes, where the
communication cost is based on diameter, query distance, weight
of spanning tree or steiner tree. On this graph, if we search for the
query node q1 and attribute DB, we will get the single edge connecting q1 and DB as the answer as this is the subgraph with minimum communication cost connecting these two nodes. Clearly,
this is unsatisfactory as a community.
In sum, attributed graphs present novel opportunities for community search by combining dense structure of subgraphs with the
level of homogeneity of node attributes in the subgraph. Most
previous work in community search fails to produce satisfactory
answers over attributed graphs, while keyword search based techniques do not find dense subgraphs. The main problem we study
in this paper is finding top-r communities from attributed graphs,
given a community search query consisting of query nodes and
query attributes. This raises the following major challenges. Firstly,
how should we combine dense connectedness with the distribution
of attributes over the community nodes? We need a community
definition that promotes dense structure as well as attribute homogeneity. However, there can be tension between these goals: as illustrated in the example above, some denser subgraphs may be less
homogeneous in their node attributes than some sparser ones. Secondly, the definition should capture the intuition that the more input
attributes that are covered by a community, the better the community. Finally, we need to find the answer communities from large
input graphs in an efficient manner.
To tackle these challenges, we propose an attributed truss community (ATC) model. Given a query Q = (Vq , Wq ) consisting of
a set of query nodes Vq and a set of query attributes Wq , a good
community H must be a dense subgraph which contains all query
nodes and attributes Wq must be contained in numerous nodes of
the community. The more nodes with attribute w ∈ Wq , the more
importance to w commonly accorded by the community members.
Additionally, the nodes must share as many attributes as possible.
Notice that these two conditions are not necessarily equivalent.
Capturing these intuitions, we define an attribute score function
that strikes a balance between attribute homogeneity and coverage.
Moreover, as a qualifying cohesive and tight structure, we define
a novel concept of (k, d)-truss for modeling a densely connected
community. A (k, d)-truss is a connected k-truss containing all
query nodes, where each node has a distance no more than d from
every query node. This inherits many nice structural properties,
such as bounded diameter, k-edge connectivity, and hierarchical
structure. Thus, based on attribute score function and (k, d)-truss,
we propose a novel community model as attributed truss community (ATC), which is a (k, d)-truss with the maximum attribute
score. In this paper, we make the following contributions.
Figure 3: Keyword search with query Wq = {q1, DB}.
a subgraph in which each vertex has at least k neighbors within
the subgraph. A k-truss [22] is a subgraph in which each edge
is contained in at least (k − 2) triangles within the subgraph. The
1.0-quasi-k-clique-ℓ-adjacent community model [10] allows two k-cliques overlapping in ℓ vertices to be merged into one community.
In Figure 1, for k = 4 and ℓ = 3, all these community models
will report H as the top answer and are thus unsatisfactory. The
subgraph H2, obtained from H by removing node v7 with its unique attribute IR, is a more homogeneous community than H and is just
as densely connected (see Figure 2(b)). Intuitively, it is a better
answer than H. Thus, in general, communities found by most previous community search methods can be hard to interpret owing to
the heterogeneity of node attributes. Furthermore, the communities
reported could contain smaller dense subgraphs with more homogeneity in attributes, which are missed by most previous methods.
A recent work [14] proposed an attribute community model. A detailed comparison of [14] with our model can be found in Section
3. Consider now querying the graph of Figure 1 with query nodes
{q1 , q2 } and attributes (i.e., keywords) {DB, DM}. We would expect this search to return subgraph H2 (Figure 2(b)). On the other
hand, for the same query nodes, if we search with attribute {DB}
(resp., {DM}), we expect the subgraph H1 (resp., H3 ) to be returned as the answer (Figure 2(a)&(c)). Both H1 and H3 are dense
subgraphs where all authors share a common topic (DB or DM).
Given a query consisting of nodes and attributes (keywords), one
may wonder whether we can filter out nodes not having those attributes and then run a conventional community search method on
the filtered graph. To see how well this may work, consider querying the graph in Figure 1 with query node q1 and query attribute
ML. Filtering out nodes without attribute ML and applying community search yields the chain consisting of v10 , q1 , v8 , which is
not densely connected. On the other hand, the subgraph induced by
{q1 , v8 , v9 , v10 } is a 3-truss in Figure 2(d). Even though it includes
one node without ML it is more densely connected than the chain
above and is a better answer than the chain as it brings out denser
collaboration structure among the authors in the community. Thus,
a simple filtering-based approach will not work: some denser subgraphs may be less homogeneous in their node attributes than some sparser ones, and a careful balance has to be struck between density and attribute homogeneity.
Another topic related to our problem is keyword search over
graphs, which has been extensively studied [1, 18, 17, 5, 23, 12]. A
natural question is whether we can model the information suitably
and leverage keyword search to find the right communities. We
could model authors’ attributes also as nodes and directly connect
them to the author nodes and query the resulting graph with the
union of the author id’s and the keywords. Figure 3 illustrates this
• We motivate the problem of attributed community search,
and identify the desiderata of a good attributed community
(Section 2).
• We propose a novel dense and tight subgraph, (k, d)-truss,
and design an attribute score function satisfying the desiderata set out above. Based on this, we propose a community
model called attributed truss community (ATC), and formulate the problem of attributed community search as finding
ATC (Section 4).
• We analyze the structural properties of ATC and show that it is non-monotone, non-submodular and non-supermodular, properties that signal significant computational challenges. We also formally prove that the problem is NP-hard (Section 5).
Method | Topic | Participation Condition | Attribute Function | Cohesiveness Constraint | Communication Cost
[5]    | KS    | ✗ | ✓ | ✗ | ✓
[12]   | KS    | ✗ | ✓ | ✗ | ✓
[28]   | KS    | ✗ | ✓ | ✗ | ✓
[27]   | TF    | ✗ | ✓ | ✗ | ✓
[15]   | TF    | ✗ | ✓ | ✓ | ✓
[24]   | TF    | ✗ | ✓ | ✗ | ✓
[35]   | CS    | ✓ | ✗ | ✓ | ✓
[10]   | CS    | ✓ | ✗ | ✓ | ✗
[22]   | CS    | ✓ | ✗ | ✓ | ✓
[14]   | ACS   | ✓ | ✓ | ✓ | ✗
Ours   | ACS   | ✓ | ✓ | ✓ | ✓
Table 1: A comparison of representative works on keyword search (KS), team formation (TF), community search (CS) and attributed community search (ACS).
• We develop a greedy algorithmic framework to find an ATC
containing given query nodes w.r.t. given query attributes.
It first finds a maximal (k, d)-truss, and then iteratively removes nodes with smallest attribute score contribution. For
improving the efficiency and quality, we design a revised attribute marginal gain function and a bulk removal strategy
for cutting down the number of iterations (Section 6).
2. (Cohesiveness) A cohesiveness function coh(H) that measures the cohesive structure of H is high.
3. (Attribute Coverage and Correlation) An attribute score function f(H, Wq ) that measures the coverage and correlation of
query attributes in vertices of H is high.
• For further improving efficiency, we explore the local neighborhood of query nodes to search an ATC. This algorithm
first generates a Steiner tree connecting all query nodes, and
then expands the tree to a dense subgraph with the insertion
of carefully selected nodes, that have highly correlated attributes and densely connected structure (Section 7).
4. (Communication Cost) A communication cost function com(H)
that measures the distance of vertices in H is low.
The participation condition is straightforward. The cohesiveness
condition is also straightforward since communities are supposed
to be densely connected subgraphs. One can use any notion of
dense subgraph previously studied, such as k-core, k-truss, etc.
The third condition captures the intuition that the more query attributes covered by H, the higher f(H, Wq); also, the more attributes shared by
vertices of H, the higher f(H, Wq ). This motivates designing functions f(., .) with this property. Finally, keeping the communication
cost low helps avoid irrelevant vertices in a community. This is
related to the so-called free rider effect, studied in [22, 37]. Intuitively, the closer the community nodes to query nodes, subject to
all other conditions, the more relevant they are likely to be to the
query. Notice that sometimes a node that does not contain query
attributes may still act as a “bridge” between other nodes and help
improve the density. A general remark is that other than the first
condition, for conditions 2–4, we may either optimize a suitable
metric or constrain that the metric be above a threshold (below a
threshold for Condition 4). We formalize this intuition in Section 4
and give a precise definition of an attributed community and formally state the main problem studied in the paper.
• We conduct extensive experiments on 7 real datasets, and
show that our attribute community model can efficiently and
effectively find ground-truth communities and social circles
over real-world networks, significantly outperforming previous work (Section 8).
We discuss related work in Section 3, and conclude the paper
with a summary in Section 9.
2. PRELIMINARIES AND DESIDERATA
2.1 Preliminaries
We consider an undirected, unweighted simple graph G = (V,
E) with n = |V (G)| vertices and m = |E(G)| edges. We denote the set of neighbors of a vertex v by N (v), and the degree
of v by d(v) = |N (v)|. We let dmax = maxv∈V d(v) denote
the maximum vertex degree in G. W.l.o.g. we assume that the
graphs we consider are connected. Note that this implies that m ≥
n − 1. We consider attributed graphs and denote the set of all
attributes in a graph by A. Each node v ∈ V contains a set of
zero or more attributes, denoted by attr(v) ⊆ A. The multiset
union of attributes of all nodes in G is denoted attr(V). Note that |attr(V)| = Σ_{v∈V} |attr(v)|. We use Vw ⊆ V to denote the set of
nodes having attribute w, i.e., Vw = {v ∈ V | w ∈ attr(v)}.
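For concreteness, the following Python sketch shows one way to hold the notation above in memory: adjacency sets for N(v), a map from v to attr(v), and the inverted index Vw. The class and method names are ours, not part of the paper.

from collections import defaultdict

class AttributedGraph:
    """Undirected attributed graph: adjacency sets, attr(v), and the index Vw."""
    def __init__(self):
        self.adj = defaultdict(set)          # v -> N(v)
        self.attrs = defaultdict(set)        # v -> attr(v)
        self.nodes_with = defaultdict(set)   # w -> Vw = {v : w in attr(v)}

    def add_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)

    def add_attr(self, v, w):
        _ = self.adj[v]                      # make sure v exists even if isolated
        self.attrs[v].add(w)
        self.nodes_with[w].add(v)

    def degree(self, v):                     # d(v) = |N(v)|
        return len(self.adj[v])

# Usage: a small fragment inspired by Figure 1 (edges illustrative only).
G = AttributedGraph()
G.add_edge("q1", "v1"); G.add_edge("v1", "v2"); G.add_edge("q1", "v2")
for v, w in [("q1", "DM"), ("q1", "DB"), ("v1", "DM"), ("v2", "DM")]:
    G.add_attr(v, w)
print(sorted(G.nodes_with["DM"]))            # the set V_DM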
3. RELATED WORK
Work related to this paper can be classified into community search,
keyword search, team formation, and community detection in attributed graphs. Table 1 shows a detailed comparison of representative works on these topics.
2.2 Desiderata of a good community
Community Search. Community search on a graph aims to find
densely connected communities containing query nodes, and has
attracted a great deal of attention recently. Various models based on
different dense subgraphs have been proposed and studied: quasiclique [10], densest subgraph [37], k-core [35, 11, 3] and k-truss
[20, 22]. All these works focus on the structure of the community while ignoring node attributes. This can result in communities
with poor cohesion in the attribute sets of the community nodes. In
particular, while [20, 22] use k-truss as the basis structure of communities, the k-truss communities they find are not guaranteed to
have high cohesion in the attribute sets of the nodes.
Given a query Q = (Vq , Wq ) with a set of query nodes Vq ⊆ V
and a set of query attributes Wq , the attributed community search
(ACS) problem is to find a subgraph H ⊆ G containing all query
nodes Vq , where the vertices are densely inter-connected, cover
as many query attributes Wq as possible and share numerous attributes. In addition, the communication cost of H should be low.
We call the query Q = (Vq , Wq ) an ACS query. Before formalizing
the problem, we first identify the commonly accepted desiderata of
a good attributed community.
Criteria of a good attributed community: Given a graph G(V, E)
and an ACS query Q = (Vq, Wq), an attributed community is a connected subgraph H = (V(H), E(H)) ⊆ G that satisfies:
Keyword Search. Keyword search in relational databases has been
extensively studied. Most of the works focus on finding minimal
connected tuple trees from a relational database [1, 18, 17, 5, 23,
1. (Participation) H contains all query nodes as Vq ⊆ V (H);
12]. There are two basic approaches: DBXplorer [1], DISCOVER-I [18], and DISCOVER-II [17] use SQL to find tuple-trees. The
other approach materializes a relational database as a graph, and
finds trees from the graph: e.g., see BANKS-I [5] and BANKS-II
[23]. Keyword search over graphs finds a substructure containing
all or a subset of the input keywords. The works [28, 32] report
subgraphs instead of trees as keyword search answers. However,
keyword search does not consider the cohesive structure involving
the query nodes and keywords. As illustrated in the introduction,
keyword search cannot return the right communities over attributed
graphs.
4.1 (k, d)-truss
In the following, we introduce a novel definition of dense and
tight substructure called (k, d)-truss by paying attention to cohesiveness and communication cost.
Cohesiveness. While a number of definitions for dense subgraphs
have been proposed over the years, we adopt the k-truss model,
proposed by Cohen [9], which has gained popularity and has been
found to satisfy nice properties.
A subgraph H ⊆ G is a k-core, if every vertex in H has degree
at least k. A triangle in G is a cycle of length 3. We denote a
triangle involving vertices u, v, w ∈ V as △uvw . The support of
an edge e(u, v) ∈ E in G, denoted supG (e), is the number of
triangles containing e, i.e., supG (e) = |{△uvw : w ∈ V }|. When
the context is obvious, we drop the subscript and denote the support
as sup(e). Since the definition of k-truss [9, 36] allows a k-truss
to be disconnected, we define a connected k-truss below.
Team Formation. Lappas et al. [27] introduced the problem of
discovering a team of experts from a social network, that satisfies
all attributed skills required for a given task with low communication cost. Kargar and An [24] study the team formation problem with a team leader who communicates with each team member to monitor and coordinate the project. Most of the team formation studies focus on a tree substructure, as opposed to densely
connected subgraph required by community search. Gajewar and
Sarma [15] extend the team formation problem to allow for potentially more than one member possessing each required skill, and
use maximum density measure or minimum diameter as the objective. Compared with our problem, these studies do not consider
both dense structure and distance constraint at the same time, and
also have no constraint on query nodes.
DEFINITION 1 (CONNECTED K-TRUSS). Given a graph G
and an integer k, a connected k-truss is a connected subgraph
H ⊆ G, such that ∀e ∈ E(H), supH (e) ≥ (k − 2).
Intuitively, a connected k-truss is a connected subgraph in which
each connection (edge) (u, v) is “endorsed” by k − 2 common
neighbors of u and v [9]. A connected k-truss with a large value of
k signifies strong inner-connections between members of the subgraph. In a k-truss, each node has degree at least k − 1, i.e., it
is a (k − 1)-core, and a connected k-truss is also (k − 1)-edge-connected, i.e., it remains connected if fewer than (k − 1) edges
are removed [4].
Community Detection in Attributed Graphs. Community detection in attributed graphs is to find all densely connected components with homogeneous attributes [39, 7, 33]. Zhou et al.[39]
model the community detection problem as graph clustering, and
combine structural and attribute similarities through a unified distance measure. When high-dimensional attributed communities are
hard to interpret or discover, [21, 16] consider subspace clustering
on high-dimensional attributed graphs. A survey of clustering on
attributed graphs can be found in [6]. Community detection in attributed graphs is to find all communities of the entire graph, which
is clearly different from our goal of query-based community search.
Moreover, it is practically hard and inefficient to adapt the above
community detection approaches [39, 21, 33] for online attributed
community search: community detection is inherently global and
much of the work involved may be irrelevant to the community being searched.
Recently, Yang et al.[14] have proposed a model for community
search over attributed graphs based on k-cores. The key distinction
with our work is as follows. (1) Our community model is based on
k-trusses, which have well-known advantages over k-cores such
as denser structure. A connected k-core has no guarantee to be
2-edge-connected, even with a large core value k. (2) Our search
supports multiple query nodes whereas theirs is limited to a single query node. (3) Their approach may miss useful communities.
E.g., consider the example graph in Figure 1 with query node {q2 }
and attributes {DB, DM}, and parameter k = 3. Their model will
return the subgraphs H1 (Figure 2(a)) and H3 (Figure 2(c)) as answers. However, the subgraph H2 (Figure 2(b)) will not be discovered, due to their strict homogeneity constraints. (4) Furthermore,
unlike them, we minimize the query distance of the community
which has the benefit of avoiding the free rider effect. (5) Finally,
unlike them, we validate our model with experiments over datasets
with ground-truth communities.
EXAMPLE 1. Consider the graph G (Figure 1). The edge e(v1, v2)
is contained in three triangles △q1 v1 v2 , △q2 v1 v2 and △v3 v1 v2 ,
thus its support is supG (e) = 3. Consider the subgraph H3 of
G (Figure 2(c)). Every edge of H3 has support ≥ 2, thus H3 is
a 4-truss. Note that even though the edge e(v1 , v2 ) has support 3,
there exists no 5-truss in the graph G in Figure 1.
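To illustrate Definition 1 and Example 1 in code, the sketch below computes edge supports and peels edges of support below k − 2. It is a minimal, unoptimized version of our own (adjacency is assumed to be a dict of neighbor sets), not the efficient truss-decomposition algorithm cited later in the paper.

from collections import deque

def support(adj, u, v):
    """sup(e) for e = (u, v): number of triangles containing the edge."""
    return len(adj[u] & adj[v])

def k_truss_edges(adj, k):
    """Edge set of the k-truss: repeatedly drop edges with support < k - 2."""
    adj = {u: set(nbrs) for u, nbrs in adj.items()}           # work on a copy
    pending = deque((u, v) for u in adj for v in adj[u] if u < v)
    while pending:
        u, v = pending.popleft()
        if v not in adj[u]:
            continue                                          # edge already removed
        if support(adj, u, v) < k - 2:
            adj[u].discard(v); adj[v].discard(u)
            for w in adj[u] | adj[v]:                         # re-examine touched edges
                if w in adj[u]: pending.append((min(u, w), max(u, w)))
                if w in adj[v]: pending.append((min(v, w), max(v, w)))
    return {(u, v) for u in adj for v in adj[u] if u < v}

On the graph of Figure 1, for example, this would return an empty edge set for k = 5, consistent with Example 1.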
Communication Cost. For two nodes u, v ∈ G, let distG (u, v)
denote the length of the shortest path between u and v in G, where
distG (u, v) = +∞ if u and v are not connected. The diameter
of a graph G is the maximum length of a shortest path in G, i.e.,
diam(G) = maxu,v∈G {distG (u, v)}. We make use of the notion
of graph query distance in the following.
DEFINITION 2 (QUERY DISTANCE [22]). Given a graph G
and query nodes Vq ⊆ V , the vertex query distance of vertex
v ∈ V is the maximum length of a shortest path from v to a query
node q ∈ Vq in G, i.e., distG (v, Vq ) = maxq∈Vq distG (v, q).
Given a subgraph H ⊆ G and Vq ⊆ V (H), the graph query distance of H is defined as distH (H, Vq ) = maxu∈H distH (u, Vq )
= maxu∈H,q∈Vq distH (u, q).
Given a subgraph H ⊆ G and Vq ⊆ V (H), the query distance
distH (H, Vq ) measures the communication cost between the members of H and the query nodes. A good community should have a
low communication cost with small distH (H, Vq ).
For the graph G in Figure 1 and query nodes Vq = {q1 , q2 }, the
vertex query distance of v7 is distG (v7 , Vq ) = maxq∈Vq {distG (v7 , q)}
= 2. Consider the subgraph H1 in Figure 2(a). Then graph query
distance of H1 is distH1 (H1 , Vq ) = distH1 (q1 , q2 ) = 2. The diameter of H1 is diam(H1 ) = 2.
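The query distances used in Definition 2 and in this example reduce to one BFS per query node. A small sketch follows (our own helper names; adj is the adjacency-set dict of the subgraph H under consideration).

from collections import deque

def bfs_distances(adj, source):
    """Hop distances from source within the (sub)graph given by adj."""
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist

def vertex_query_distance(adj, v, Vq):
    """dist(v, Vq) = max over q in Vq of dist(v, q); inf if some q is unreachable."""
    return max(bfs_distances(adj, q).get(v, float("inf")) for q in Vq)

def graph_query_distance(adj, Vq):
    """dist(H, Vq) = max over u in H of dist(u, Vq)."""
    maps = [bfs_distances(adj, q) for q in Vq]
    return max(max(m.get(u, float("inf")) for m in maps) for u in adj)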
4. ATTRIBUTED COMMUNITY MODEL
In this section, we develop a notion of attributed community by formalizing the desiderata discussed in Section 2. We focus our discussion on conditions 2–4.
(k, d)-truss. We adapt the notions of k-truss and query distance, and propose a new notion of (k, d)-truss capturing dense cohesiveness and low communication cost.
DEFINITION 3 ((k, d)-TRUSS). Given a graph H, query nodes
Vq , and numbers k and d, we say that H is a (k, d)-truss iff H is a
connected k-truss containing Vq and distH (H, Vq ) ≤ d.
The contribution of an attribute w to the overall score is θ(H, w) × score(H, w) = |Vw ∩ V(H)|² / |V(H)|. This depends not only on the number
score(H, w) = |Vw|V∩V(H)|
of vertices covering w but also on w’s popularity in the community H. This choice discourages vertices unrelated to the query
attributes Wq which decrease the relevance score, without necessarily increasing the cohesion (e.g., trussness). At the same time, it
permits the inclusion of essential nodes, which are added to a community to reduce the cost of connecting query nodes. They act as
an important link between nodes that are related to the query, leading to a higher relevance score. We refer to such additional nodes
as steiner nodes. E.g., consider the query Q = ({q1 }, {M L}) on
the graph G in Figure 1. As discussed in Section 1, the community
H4 in Figure 2(d) is preferable to the chain of nodes v8 , q1 , v10 .
Notice that it includes v9 with attribute DM (but not M L); v9 is
thus a steiner node. It can be verified that f(H4, Wq) = 9/4, which is
smaller than the attribute score of the chain, which is 3. However,
H4 is a 3-truss whereas the chain is a 2-truss. It is easy to see that
any supergraph of H4 in Figure 1 is at most a 3-truss and has a
strictly smaller attribute score.
The more query attributes a community has that are shared by
more of its nodes, the higher its attribute score. For example, consider the query Q = ({q1 }, {DB, DM }) on our running example
graph of Figure 1. The communities H1 , H2 , H3 in Figure 2 are
all potential answers for this query. We find that f(H1, Wq) = 5 · 1 + 2 · (2/5) = 5.8; by symmetry, f(H3, Wq) = 5.8; on the other hand, f(H2, Wq) = 5 · (5/8) + 5 · (5/8) = 6.25. Intuitively, we can see that
H1 and H3 are mainly focused in one area (DB or DM) whereas
H2 has 5 nodes covering DB and DM each and also has the highest
attribute score.
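Since θ(H, w) · score(H, w) collapses to |Vw ∩ V(H)|² / |V(H)|, the attribute score is a few lines of code. The sketch below (our own naming, not the paper's implementation) reproduces the hand computations above, e.g. 6.25 for H2 with Wq = {DB, DM}.

def attribute_score(H_nodes, attrs, Wq):
    """f(H, Wq) = sum over w in Wq of |Vw ∩ V(H)|^2 / |V(H)|."""
    n = len(H_nodes)
    if n == 0:
        return 0.0
    total = 0.0
    for w in Wq:
        covered = sum(1 for v in H_nodes if w in attrs.get(v, ()))   # |Vw ∩ V(H)|
        total += covered * covered / n
    return total

# e.g. 8 nodes, 5 carrying DB and 5 carrying DM, give 5*5/8 + 5*5/8 = 6.25 as for H2.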
By definition, the cohesiveness of a (k, d)-truss increases with
k, and its proximity to query nodes increases with decreasing d.
For instance, the community H1 in Figure 2 (a) for Vq = {q1 , q2 }
is a (k, d)-truss with k = 4 and d = 2.
4.2 Attribute Score Function
We first identify key properties that should be obeyed by a good
attribute score function for a community. Let f(H, Wq ) denote the
attribute score of community H w.r.t. query attributes Wq . We say
that a node v of H covers an attribute w ∈ Wq , if w ∈ attr(v). We
say that a node of H is irrelevant to the query if it does not cover
any of the query attributes.
Principle 1: The more query attributes that are covered by some
node(s) of H, the higher should be the score f(H, Wq ). The rationale is obvious.
Principle 2: The more nodes contain an attribute w ∈ Wq , the
higher the contribution of w should be toward the overall score
f(H, Wq ). The intuition is that attributes that are covered by more
nodes of H signify homogeneity within the community w.r.t. shared
query attributes.
Principle 3: The more nodes of H that are irrelevant to the query,
the lower the score f(H, Wq ).
We next discuss a few choices for defining f(H, Wq ) and analyze their pros and cons, before presenting an example function
that satisfies all three principles. Note that the scores f(H, Wq )
are always compared between subgraphs H that meet the same
structural constraintPof (k, d)-truss. An obvious choice is to define f(H, Wq ) :=
w∈Wq score(H, w), where score(H, w), the
contribution of attribute w to the overall score, can be viewed as the
relevance of H w.r.t. w. This embodies Principle 1 above. Inspired
by Principle 2, we could define score(H, w) := |V (H) ∩ Vw |,
i.e., the number of nodes of H that cover w. Unfortunately, this
choice suffers from some limitations by virtue of treating all query
attributes alike. Some attributes may not be shared by many community nodes while others are and this distinction is ignored by the
above definition of f(H, Wq ). To illustrate, consider the community H1 in Figure 2(a) and the query Q = ({q1 }, {DB}); H1 has
5 vertices associated with the attribute DB and achieves a score
of 5. The subgraph H of the graph G shown in Figure 1 also has
the same score of 5. However, while the community in Figure 2(a)
is clearly a good community, as all nodes carry attribute DB, the
subgraph H in Figure 1 includes several irrelevant nodes without
attribute DB. Notice that both H1 and H are 4-trusses so we have
no way of discriminating between them, which is undesirable.
An alternative is to define score(H, w) as |Vw ∩ V(H)| / |V(H)|, as this captures the popularity of attribute w. Unfortunately, this fails to reward larger communities. For instance, consider the query
Q = ({q1 , v4 }, {DB}) over the graph G in Figure 1. The subgraph H1 in Figure 2(a) as well as its subgraph obtained by removing q2 is a 4-truss and both will be assigned a score of 1.
In view of these considerations, we define f(H, Wq ) as a weighted
sum of the score contribution of each query attribute, where the
weight reflects the popularity of the attribute.
REMARK 1. We stress that the main contribution of this subsection is the identification of key principles that an attribute score
function must satisfy in order to be effective in measuring the goodness of an attributed community. Specifically, these principles capture the important properties of high attribute coverage and high
attribute correlation within a community and minimal number of
nodes irrelevant to given query. Any score function can be employed as long as it satisfies these principles. The algorithmic
framework we propose in Section 6.1 is flexible enough to handle
an ATC community model equipped with any such score function.
We note that a natural candidate for attribute scoring is the entropy-based score function, defined as fentropy(H, Wq) = Σ_{w∈Wq} −(|Vw ∩ V(H)| / |V(H)|) log(|Vw ∩ V(H)| / |V(H)|). It measures homogeneity of query attributes very well. However, it fails to reward larger communities, specifically violating Principle 1. E.g., consider the query
Q = ({q1 , v4 }, {DB}) on the graph G in Figure 1. The subgraph
H1 in Figure 2(a) and its subgraph obtained by removing q2 are
both 4-trusses and both are assigned a score of 0. Clearly, H1 has
more nodes containing the query attribute DB.
4.3 Attributed Truss Community Model
Combining the structure constraint of (k, d)-truss and the attribute score function f(H, Wq ), we define an attributed truss community (ATC) as follows.
DEFINITION 5 (ATTRIBUTE TRUSS COMMUNITY). Given a graph G, a query Q = (Vq, Wq), and two numbers k and d, H is an attribute truss community (ATC) if H satisfies the following conditions:
DEFINITION 4 (ATTRIBUTE SCORE). Given a subgraph H ⊆ G and an attribute w, the weight of attribute w is θ(H, w) = |Vw ∩ V(H)| / |V(H)|, i.e., the fraction of nodes of H covering w. For a query Q = (Vq, Wq) and a community H, the attribute score of H is defined as f(H, Wq) = Σ_{w∈Wq} θ(H, w) × score(H, w), where score(H, w) = |Vw ∩ V(H)| is the number of nodes covering w.
Define the weight of a vertex v in a graph G as its degree in G,
P
deg G (v)+degG (v)
S
=
i.e., w(v) = degG (v). Then, χ(GS ) = v∈S
|S|
P
degG (v)
2ρ(GS ) + v∈S |S| . We define a problem, the WDalKProblem, as follows: given a graph G with weights as defined
above, and a density threshold α, check whether G contains an
induced subgraph H with at least k vertices such that χ(H) ≥ α.
We show it is NP-hard in Theorem 1. To establish this, we first
show that the WDK-Problem, i.e., finding whether G has a subgraph H with exactly k vertices with weighted density at least α,
i.e., χ(H) ≥ α, is NP-hard.1 We then extend this result to the
hardness of the WDalK-Problem.
1. H is a (k, d)-truss containing Vq .
2. H has the maximum attribute score f(H, Wq ) among subgraphs satisfying condition (1).
In terms of structure and communication cost, condition (1) not
only requires that the community containing the query nodes Vq be
densely connected, but also that each node be close to the query
nodes. In terms of query attribute coverage and correlation, condition (2) ensures that as many query attributes as possible are covered by as many nodes as possible.
EXAMPLE 2. For the graph G in Figure 1 and query Q = ({q1, q2}, {DB, DM}) with k = 4 and d = 2, H2 in Figure 2(b) is the corresponding ATC, since H2 is a (4, 2)-truss with the largest
score f(H, Wq ) = 6.25 as seen before.
LEMMA 1. WDK-Problem is NP-hard.
PROOF. We reduce the well-known NP-complete problem, CLIQUE,
to WDK-Problem. Given a graph G = (V, E) with n vertices and
a number k, construct a graph G′ = (V ∪ V ′ , E ∪ E ′ ) as follows.
For each vertex v ∈ V , add n − degG (v) new dummy vertices.
G′ contains an edge connecting each v ∈ V with each of its associated dummy vertices. Notice that the maximum degree of any
node in G′ is n. In particular, every vertex in V has degree n
in G′, whereas every dummy vertex in V′ has degree 1. So for any
The ATC-Problem studied in this paper can be formally formulated as follows.
Problem Statement: Given a graph G(V, E), query Q = (Vq , Wq )
and two parameters k and d, find an ATC H such that H is a (k, d)-truss with the maximum attribute score f(H, Wq).
We remark that in place of the (k, d)-truss with the highest attribute score, we could consider the problem of finding the r (k, d)-trusses with the highest attribute score. Our technical results and
algorithms easily generalize to this extension.
S ⊆ V ∪ V′, χ(G′S) = 2ρ(G′S) + Σ_{v∈S} degG′(v) / |S| ≤ 2ρ(G′S) + n.
Set α = n + k − 1. We claim that G contains a clique of size k iff
χ(G′ ) ≥ α.
(⇒): Suppose H ⊂ G is a k-clique. Since each vertex v of G has
degree n in G′ and 2ρ(H) is the average degree of a vertex in H,
we have χ(H) = 2ρ(H) + n = k − 1 + n.
(⇐): Suppose G′ contains an induced subgraph G′S with |S| = k
and with χ(G′S ) ≥ n + k − 1. It is clear that for any S with
S ∩ V ′ 6= ∅, χ(G′S ) < n + k − 1. The reason is that vertices in V ′
have degree 1 < n in G′ . Thus, we must have S ⊂ V . Now, for
any S ⊆ V ∪ V ′ , χ(G′S ) is upper bounded by n + k − 1. Thus,
χ(G′S ) = n + k − 1, and we can infer that 2ρ(G′S ) = k − 1,
implying G′S = GS is a k-clique.
5. PROBLEM ANALYSIS
In this section, we analyze the complexity of the problem and
show that it is NP-hard. We then analyze the properties of the structure and attribute score function of our problem. Our algorithms for
community search exploit these properties.
5.1 Hardness
Our main result in this section is that the ATC-Problem is NPhard (Theorem 2). The crux of our proof idea comes from the hardness of finding the densest subgraph with ≥ k vertices [25]. Unfortunately, that problem cannot be directly reduced to our ATCProblem. To bridge this gap, we extend the notion of graph density
to account for vertex weights and define a helper problem called
WDalK-Problem – given a graph, find the subgraph with maximum “weighed density” with at least k vertices. We then show that
it is NP-hard and then reduce the WDalK-Problem to our problem.
THEOREM 1. WDalK-Problem is NP-hard.
PROOF. We can reduce WDK-Problem to WDalK-Problem,
using the ideas similar to those used in reducing the densest k
subgraph problem to the densest at least k subgraph problem [25].
Weighted Density. Let G = (V, E) be an undirected graph. Let
w(v) be a non-negative weight associated with each vertex v ∈
V . Given a subset S ⊆ V , the subgraph of G induced by S is
GS = (S, E(S)), where E(S) = {(u, v) ∈ E | u, v ∈ S}. For a
vertex v in a subgraph H ⊆ G, its degree is degH (v) = |{(u, v) |
(u, v) ∈ E(H)}|. Next, we define:
THEOREM 2. ATC-Problem is NP-hard.
PROOF. We reduce the WDalK-Problem to ATC-Problem. Given
a graph G = (V, E) with |V | = n vertices, construct an instance
G′ as follows. G′ is a complete graph over n vertices. For simplicity, we use V to refer to the vertex set of both G and G′ , without
causing confusion. For each edge (u, v) ∈ E(G), create a distinct
attribute wuv for G′ . We assume wuv and wvu denote the same
attribute. Then, with each vertex v ∈ V in G′ , associate a set of
attributes: attr(v) = {wvu : (v, u) ∈ E(G)}. Notice that the cardinality of attr(v) is |attr(v)| = degG (v). Also, an attribute wvu
is present only in the attribute sets of v and u, i.e., Vwvu = {v, u}.
For a vertex set S ⊂ V , we will show that f(G′S , Wq ) = χ(GS ),
where GS = (S, E(S)) is the induced subgraph of G by S, G′S =
(S, E ′ (S)) is the induced subgraph of G′ by S, and Wq = {wvu :
DEFINITION 6 (WEIGHTED DENSITY). Given a subset of vertices S ⊆ V of a weighted graph G, the weighted density of subgraph GS is defined as χ(GS) = Σ_{v∈S} (degGS(v) + w(v)) / |S|.
Recall that the traditional edge density of an induced subgraph GS is ρ(GS) = |E(S)| / |S| = Σ_{v∈S} degGS(v) / (2|S|) [25, 2]. That is, 2ρ(GS) is the average degree of a vertex in GS. Notice that in Definition 6, if the weight of v is w(v) = 0 for all v, then the weighted density
χ(GS ) = 2ρ(GS ). It is well known that finding a subgraph with
the maximum edge density can be solved optimally using parametric flow or linear programming relaxation [2]. However, given a
number k, finding the maximum density of a subgraph GS containing at least k vertices is NP-hard [25].
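A direct computation of the quantities in Definition 6, with the specific vertex weights w(v) = degG(v) used in the reduction that follows, can be sketched as below; the function name and signature are ours.

def edge_and_weighted_density(adj_G, S):
    """Return (rho(G_S), chi(G_S)) for the subgraph induced by S, with w(v) = deg_G(v)."""
    S = set(S)
    deg_in_S = {v: len(adj_G[v] & S) for v in S}                  # deg_{G_S}(v)
    rho = sum(deg_in_S.values()) / (2 * len(S))                   # |E(S)| / |S|
    chi = sum(deg_in_S[v] + len(adj_G[v]) for v in S) / len(S)    # Definition 6
    return rho, chi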
Footnote 1: Notice that the hardness of finding the maximum density subgraph
with ≥ k vertices does not imply hardness of WDK-Problem for
a specific weight function over the vertices and thus it needs to be
proved.
5.2 Properties of (k, d)-truss
(v, u) ∈ E(G)}. That is, the query attributes are the set of attributes associated with every edge of G. We have
f(G′S, Wq) = Σ_{wvu ∈ Wq} |Vwvu ∩ S|² / |S| = Σ_{wvu ∈ Wq} (|Vwvu ∩ S|)(|Vwvu ∩ S| − 1) / |S| + Σ_{wvu ∈ Wq} |Vwvu ∩ S| / |S|.   (1)
Our attribute truss community model is based on the concept of
k-truss, so the communities inherit good structural properties of
k-trusses, such as k-edge-connected, bounded diameter and hierarchical structure. In addition, since the attribute truss community
is required to have a bounded query distance, it will have a small
diameter, as explained below.
A k-truss community is (k − 1)-edge-connected, since it remains
connected whenever fewer than k − 1 edges are deleted from the
community [9]. Moreover, a k-truss based community has hierarchical structure that represents the hearts of the community at different levels of granularity [20], i.e., a k-truss is always contained
in some (k − 1)-truss, for k ≥ 3. In addition, for a connected
k-truss with n vertices, the diameter is at most ⌊(2n − 2)/k⌋ [9]. Small diameter is considered an important property of a good community
[13].
Since the distance function satisfies the triangle inequality, i.e.,
for all nodes u, v, w, distG (u, v) ≤ distG (u, w)+distG (w, v), we
can express the lower and upper bounds on the community diameter
in terms of the query distance as follows.
For every attribute wvu ∈ Wq , exactly one of the following conditions holds:
• u, v ∈ S: In this case (u, v) ∈ E(S). Clearly, |Vwvu ∩ S| =
2, so (|Vwvu ∩ S|)(|Vwvu ∩ S| − 1) = 2.
• exactly one of u, v belongs to S and (u, v) ∈ E \ E(S). In
this case, |Vwvu ∩S| = 1, so (|Vwvu ∩S|)(|Vwvu ∩S|−1) =
0.
• u, v ∉ S. In this case, clearly (u, v) ∉ E(S) and |Vwvu ∩
S| = 0, so (|Vwvu ∩ S|)(|Vwvu ∩ S| − 1) = 0.
Therefore, Σ_{wvu ∈ Wq} (|Vwvu ∩ S|)(|Vwvu ∩ S| − 1) / |S| = Σ_{(v,u) ∈ E(S)} 2 / |S| = 2|E(S)| / |S| = 2ρ(GS).   (2)
On the other hand, we have
Σ_{wvu ∈ Wq} |Vwvu ∩ S| / |S| = Σ_{(v,u) ∈ E(S)} 2 / |S| + Σ_{v ∈ S, (v,u) ∈ E\E(S)} 1 / |S| = (2 Σ_{(v,u) ∈ E(S)} 1 + Σ_{v ∈ S} Σ_{(v,u) ∈ E(G)\E(S)} 1) / |S| = Σ_{v ∈ S} degG(v) / |S|.   (3)
Overall, f(G′S, Wq) = 2ρ(GS) + Σ_{v ∈ S} degG(v) / |S| = χ(GS).
Next, we show that an instance of WDalK-Problem is a YES-instance iff the corresponding instance of ATC-Problem admits a (k, d)-truss with attribute score at least α, w.r.t. the query Q = (Vq, Wq) where Vq = ∅ and Wq = {wvu : (v, u) ∈ E}, and the parameter d = 0 (Footnote 2). The hardness follows from this.
(⇐) : Suppose G is a YES-instance of WDalK-Problem, i.e.,
there exists a subset S ∗ ⊂ V such that for the induced subgraph
GS ∗ of G, we have χ(GS ∗ ) ≥ α. Then, the subgraph G′S ∗ =
(S ∗ , E ′ (S ∗ )) has f(G′S ∗ , Wq ) = χ(GS ∗ ) ≥ α. In addition, since
|S ∗ | ≥ k and G′ is an n-clique, G′S ∗ is a k-clique, and hence a ktruss. For Vq = ∅, trivially Vq ⊆ S ∗ and G′S ∗ satisfies the communication constraint on query distance. Thus, G′S ∗ is a (k, d)-truss
with f(G′S ∗ , Wq ) ≥ α, showing G′ is a YES-instance of ATCProblem.
(⇒) : Supose there exists a (k, d)-truss G′S ∗ = (S ∗ , E ′ (S ∗ )), a
subgraph of G′ induced by S ∗ ⊂ V , with f(G′S ∗ , Wq ) ≥ α. Then,
we have GS ∗ = (S ∗ , E(S ∗ )) and χ(GS ∗ ) = f(G′S ∗ , Wq ) ≥ α.
Since G′S ∗ is a k-truss and |S ∗ | ≥ k, we have χ(GS ∗ ) ≥ α, showing G is a YES-instance of WDalK-Problem.
OBSERVATION 1. For a (k, d)-truss H and a set of nodes Vq ⊆ H, we have d ≤ diam(H) ≤ min{(2|V(H)| − 2)/k, 2d}.
REMARK 2. Besides k-truss, there exist several other definitions of dense subgraphs including: k-(r,s)-nucleus [34], quasi-clique [10], densest subgraph [37], and k-core [35]. A k-(r,s)-nucleus, for positive integers k and r < s, is a maximal union of
s-cliques in which every r-clique is present in at least k s-cliques,
and every pair of r-cliques in that subgraph is connected via a
sequence of s-cliques containing them. Thus, k-(r,s)-nucleus is
a generalized concept of k-truss, which can achieve very dense structure for large parameters k, r, and s. However, finding a k-(r,s)-nucleus incurs a cost of Ω(m^(s/2)) time, where m is the number of edges, which is more expensive than the O(m^1.5) time taken for
computing k-trusses, whenever s > 3. A detailed comparison of
k-truss and other dense subgraph models can be found in [22]. In
summary, k-truss is a good dense subgraph model that strikes a
balance between good structural property and efficient computation.
5.3 Properties of attribute score function
We next investigate the properties of the attribute score function,
in search of prospects for an approximation algorithm for finding
ATC. From the definition of attribute score function f(H, Wq ), we
can infer the following useful properties.
Positive influence of relevant attributes. The more relevant attributes a community H has, the higher the score f(H, Wq ). E.g.,
consider the community H4 and Wq = {M L} in Figure 2 (d). If
the additional attribute “ML” is added to the vertex v9 , then it can
be verified that the score f(H4 , {M L}) will increase. We have:
OBSERVATION 2. Given an ATC H and a vertex v ∈ H, let a
new input attribute w ∈ Wq \attr(v) be added to v, and H ′ denote
the resulting community. Then f(H ′ , Wq ) > f(H, Wq ).
In view of the hardness, a natural question is whether efficient
approximation algorithms can be designed for ATC-Problem. Thereto,
we investigate the properties of the problem in the next subsections.
Observe that from the proof, it is apparent that the hardness comes
mainly from maximizing the attribute score of a ATC.
OBSERVATION 3. Given an ATC H and query attribute sets Wq ⊆
Wq′ , we have f(H, Wq ) ≤ f(H, Wq′ ).
Footnote 2: Since Vq = ∅, we can set d to any value; we choose to set it to the
tightest value.
Negative influence of irrelevant vertices. Adding irrelevant vertices with no query attributes to an ATC will decrease its attribute
In addition, we have the following easily verified observation.
Consider the graph G in Figure 1 and query Wq = {DB, DM }
with k = 2. Let the induced subgraphs of G by the vertex sets
S1 = {q1 , v4 } and S2 = {q1 , v4 , v5 } respectively be denoted G1
and G2 ; G1 ⊆ G2 . Let v ∗ be a vertex not in G2 . Let us compare
the marginal gains f(G1 ∪ {v ∗ }, Wq ) − f(G1 , Wq ) and f(G2 ∪
{v ∗ }, Wq ) − (G2 , Wq ), from adding the new vertex v ∗ to G1 and
G2 . Suppose v ∗ = v6 with attribute “DB”, then we have f(G2 ∪
{v6 }, Wq ) − f(G2 , Wq ) = (4 + 1/4) − (3 + 1/3) = 11/12 >
f(G1 ∪ {v6 }, Wq ) − f(G1 , Wq ) = (3 + 1/3) − (2 + 1/2) = 5/6,
violating submodularity of the attribute score function f(., .). On
the other hand, suppose v ∗ = q2 with attributes “DB” and “DM”.
Then we have f(G2 ∪ {q2 }, Wq ) − f(G2 , Wq ) = (4 + 1) − (3 +
1/3) = 5/3 < f(G1 ∪{q2 }, Wq )−f(G1 , Wq ) = (3+4/3)−(2+
1/2) = 11/6, which violates supermodularity. We just proved:
score. For example, for Wq = {DB}, if we insert the vertex
v7 with attribute IR into the community H1 in Figure 2(b), it decreases the score of the community w.r.t. the above query attribute
Wq = {DB} , i.e., f(H1 ∪ {v7 }, {DB}) < f(H1 , {DB}). The
following observation formalizes this property.
OBSERVATION 4. Given two ATCs H and H′ where H ⊂ H′,
suppose ∀v ∈ V (H ′ ) \ V (H) and ∀w ∈ Wq , attr(v) ∩ Vw = ∅.
Then f(H ′ , Wq ) < f(H, Wq ).
Non-monotone property and majority attributes. The attribute
score function is in general non-monotone w.r.t. the size of the
community, even when vertices with query related attributes are
added. For instance, for the community H1 in Figure 2(a), with
Wq = {DB, IR}, f(H1 , Wq ) = 4 · 44 = 4. Let us add vertex
v7 with attribute IR into H1 and represent the resulting graph as
H5 , then f(H5 , Wq ) = 4 · 45 + 1 · 15 = 17
< f(H1 , Wq ). If
5
vertex v7 has attribute DB instead of IR, then it is easy to verify
that the attribute score of the resulting graph w.r.t. Wq is strictly
higher than 4. Thus, f(., .) is neither monotone nor anti-monotone.
This behavior raises challenges for finding ATC with the maximum
attibute score. Based on the above examples, we have the following
observation.
LEMMA 3. The attribute score function f(H, Wq) is neither submodular nor supermodular.
In view of this result, we infer that the prospects for an efficient
approximation algorithm are not promising.
6. TOP-DOWN GREEDY ALGORITHM
In this section, we develop a greedy algorithmic framework for
finding a ATC. It leverages the notions of attribute score contribution and attribute marginal gain that we define. Our algorithm
first finds a (k, d)-truss, and then iteratively removes vertices with
smallest attribute score contribution. Then, we analyze the time
and space complexity of our algorithm. We also propose a more
efficient algorithm with better quality, based on attribute marginal
gain and bulk deletion.
OBSERVATION 5. There exist ATCs H and H′ with V(H′) = V(H) ∪ {v} and attr(v) ∩ Wq ≠ ∅ such that f(H′, Wq) < f(H, Wq), and there exist ATCs H and H′ with V(H′) = V(H) ∪ {v} and attr(v) ∩ Wq ≠ ∅ for which f(H′, Wq) > f(H, Wq).
The key difference between the two examples above is that DB is a “majority attribute” in H1, a notion we formalize next. Formally, given a community H and query Wq, we say that a set of attributes X includes majority attributes of H if θ(H, Wq ∩ X) = Σ_{w ∈ Wq ∩ X} θ(H, w) ≥ f(H, Wq) / (2|V(H)|). Recall that θ(H, w) is the fraction of vertices of H containing the attribute w. We have:
6.1 Basic Algorithm
We begin with attribute score contribution. Given a subgraph
H ⊂ G, a vertex v ∈ V (H), and attribute query Wq , let us examine the change to the score f (H, Wq ) from dropping v.
LEMMA 2. Let H be an ATC of a graph G. Suppose there is a vertex v ∉ V(H) such that the set of attributes Wq ∩ attr(v)
includes the majority attributes of H and that adding v to H results
in a ATC H ′ of G. Then f(H ′ , Wq ) > f(H, Wq ) holds.
PROOF. Suppose Wq = {w1, ..., wl} and, w.l.o.g., let Wq ∩ attr(v) = {w1, ..., wr}, where 1 ≤ r ≤ l. Let |V(H)| = b, and for each attribute wi ∈ Wq, let |V(H) ∩ Vwi| = bi. Since Wq ∩ attr(v) includes the majority attributes of H, θ(H, Wq ∩ attr(v)) = Σ_{i=1..r} bi / b ≥ f(H, Wq) / (2b), so we have Σ_{i=1..r} 2bi ≥ f(H, Wq). We have f(H, Wq) = Σ_{i=1..l} |V(H) ∩ Vwi|² / |V(H)| = Σ_{i=1..l} bi² / b, and f(H′, Wq) = Σ_{i=1..r} (bi + 1)² / (b + 1) + Σ_{i=r+1..l} bi² / (b + 1). As a result, f(H′, Wq) − f(H, Wq) = (b · Σ_{i=1..r} (2bi + 1) − Σ_{i=1..l} bi²) / (b(b + 1)) ≥ (b · f(H, Wq) + rb − b · f(H, Wq)) / (b(b + 1)) = r / (b + 1) > 0.
This lemma will be helpful in designing bottom-up algorithms,
by iteratively adding vertices with majority attributes to increase
attribute score.
f(H − {v}, Wq) = Σ_{w ∈ Wq} |Vw ∩ (V(H) − {v})|² / (|V(H)| − 1)
= Σ_{w ∈ Wq − attr(v)} |Vw ∩ V(H)|² / (|V(H)| − 1) + Σ_{w ∈ Wq ∩ attr(v)} (|Vw ∩ V(H)| − 1)² / (|V(H)| − 1)
= Σ_{w ∈ Wq} |Vw ∩ V(H)|² / (|V(H)| − 1) − Σ_{w ∈ Wq ∩ attr(v)} (2|Vw ∩ V(H)| − 1) / (|V(H)| − 1)
= f(H, Wq) · |V(H)| / (|V(H)| − 1) − Σ_{w ∈ Wq ∩ attr(v)} (2|Vw ∩ V(H)| − 1) / (|V(H)| − 1)
The second term represents the drop in the attribute score of H
from removing v. We would like to remove vertices with the least
drop in score. This motivates the following.
Non-submodularity and Non-supermodularity. A set function
g : 2U → R≥0 is said to be submodular provided for all sets
S ⊂ T ⊂ U and element x ∈ U \ T , g(T ∪ {x}) − g(T ) ≤
g(S ∪ {x}) − g(S), i.e., the marginal gain of an element has the
so-called “diminishing returns” property. The function g(.) is said
to be supermodular if −g(.) is submodular. Optimization problems
over submodular functions lend themselves to efficient approximation. We thus study whether our attribute score function f (., .) is
submodular w.r.t. its first argument, viz., set of vertices.
DEFINITION 7 (ATTRIBUTE SCORE CONTRIBUTION). Given a graph H and attribute query Wq, the attribute score contribution of a vertex v ∈ V(H) is defined as fH(v, Wq) = Σ_{w ∈ Wq ∩ attr(v)} (2|Vw ∩ V(H)| − 1).
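A direct transcription of Definition 7 (our own function name; attrs maps a vertex to its attribute set):

def score_contribution(H_nodes, attrs, Wq, v):
    """f_H(v, Wq) = sum over w in Wq ∩ attr(v) of (2|Vw ∩ V(H)| - 1)."""
    return sum(2 * sum(1 for u in H_nodes if w in attrs.get(u, ())) - 1
               for w in set(Wq) & attrs.get(v, set()))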
The intuition behind dropping a vertex v from H is as follows.
Since f(H, Wq ) is non-monotone (Section 5.3), the updated score
algorithm finds the ATC H2 with the maximum attribute score in Figure 2(b), which, for this example, is the optimal solution.
Algorithm 1 Basic(G, Q)
Input: A graph G = (V, E), a query Q = (Vq, Wq), numbers k and d.
Output: A (k, d)-truss H with the maximum f(H, Wq).
1: Find the set of vertices with query distance at most d, i.e., S0 = {u : distG(u, Q) ≤ d}.
2: Let G0 be the subgraph of G induced by S0, i.e., G0 = (S0, E(S0)), where E(S0) = {(v, u) : v, u ∈ S0, (v, u) ∈ E}.
3: Maintain G0 as a (k, d)-truss.
4: l ← 0;
5: while connectGl(Q) = true do
6:     Compute the attribute score f(Gl, Wq);
7:     Compute fGl(u, Wq) = Σ_{w ∈ Wq ∩ attr(u)} (2|Vw ∩ V(Gl)| − 1) for all u ∈ Gl;
8:     u* ← arg min_{u ∈ V(Gl) − Vq} fGl(u, Wq);
9:     Delete u* and its incident edges from Gl;
10:    Maintain Gl as a (k, d)-truss;
11:    Gl+1 ← Gl; l ← l + 1;
12: H ← arg max_{G′ ∈ {G0, ..., Gl−1}} f(G′, Wq);
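The listing above translates fairly directly into the following self-contained Python sketch. It is our own unoptimized rendering of Algorithm 1 under simplifying assumptions (adjacency dictionaries of sets, brute-force (k, d)-truss maintenance), not the implementation evaluated in the paper.

from collections import deque

def _bfs(adj, src):
    dist = {src: 0}; dq = deque([src])
    while dq:
        u = dq.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1; dq.append(v)
    return dist

def _kd_truss(adj_in, nodes, Vq, k, d):
    """(k, d)-truss maintenance: drop weak edges and far vertices until stable."""
    adj = {u: set(adj_in[u]) & set(nodes) for u in nodes}
    changed = True
    while changed:
        changed = False
        if not all(q in adj for q in Vq):
            return {}
        for u, v in [(u, v) for u in adj for v in adj[u] if u < v]:
            if len(adj[u] & adj[v]) < k - 2:                       # edge support too low
                adj[u].discard(v); adj[v].discard(u); changed = True
        maps = [_bfs(adj, q) for q in Vq]
        far = [u for u in adj if any(m.get(u, float("inf")) > d for m in maps)]
        for u in far:
            for v in adj.pop(u):
                if v in adj:
                    adj[v].discard(u)
            changed = True
    return adj if all(q in adj for q in Vq) else {}

def basic_atc(adj, attrs, Vq, Wq, k, d):
    """Algorithm 1 (Basic), sketched: peel the non-query vertex with the smallest
    attribute score contribution, keep the best-scoring (k, d)-truss seen."""
    Vq, Wq = set(Vq), set(Wq)
    f = lambda H: sum(sum(1 for v in H if w in attrs.get(v, ())) ** 2
                      for w in Wq) / len(H) if H else 0.0
    contrib = lambda H, v: sum(2 * sum(1 for u in H if w in attrs.get(u, ())) - 1
                               for w in Wq & attrs.get(v, set()))
    H = _kd_truss(adj, adj.keys(), Vq, k, d)
    best, best_score = set(H), f(H)
    while H and set(H) - Vq:
        v = min(set(H) - Vq, key=lambda u: contrib(H, u))
        H = _kd_truss(H, set(H) - {v}, Vq, k, d)
        if H and f(H) > best_score:
            best, best_score = set(H), f(H)
    return best

In a real implementation one would record edge and vertex removals per iteration instead of copying the graph, as the complexity analysis below assumes.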
6.2 Complexity Analysis
Let n = |V (G)| and m = |E(G)|, and let dmax be the maximum vertex degree in G. In each iteration i of Algorithm 1, we
delete at least one vertex and its incident edges from Gi . Clearly,
the number of removed edges is no less than k − 1, and so the total number of iterations is t ≤ min{n − k, m/(k − 1)}. We
have the following result on the time and space complexity of Algorithm 1. We note that we do not need to keep all candidate ATCs
in the implementation, but merely maintain a removal record of the
vertices/edges in each iteration.
THEOREM 3. Algorithm 1 takes O(mρ + t(|Wq|n + |Vq|m)) time and O(m + |attr(V)|) space, where t ∈ O(min{n, m/k}), and ρ is the arboricity of graph G with ρ ≤ min{dmax, √m}.
Proof Sketch: The time cost of Algorithm 1 mainly comes from
three key parts: query distance computation, k-truss maintenance,
and attribute score computation.
For query distance computation, finding the set of vertices S
within query distance d from Vq can be done by computing the
shortest distances using a BFS traversal starting from each query
node q ∈ VQ , which takes O(|Vq |m) time. Since the algorithm
runs in t iterations, the total time cost of this step is O(t|Vq |m).
Second, consider the cost of k-truss identification and maintenance. Finding and maintaining a series of k-truss graphs {G0 , ...,
Gl−1} in each iteration takes O(ρ·m) time in all, where ρ is the arboricity of graph G0. It has been shown that ρ ≤ min{dmax, √m} [8].
Third, consider the cost of computing attribute score contribution. In each iteration, the computation
of attribute score contribution for every vertex takes time O(Σ_{v∈V(G)} min{|attr(v)|, |Wq|}) =
O(min {|attr(V )|, |Wq | · n}) ⊆ O(|Wq | · n). Thus, the total cost
of attribute score computation is O(t|Wq |n).
Therefore, the overall time complexity of Algorithm 1 is O(mρ
+t (|Wq |n +|Vq |m) ).
Next, we analyze the space complexity. For graphs {G0 , ..., Gl },
we record the sequence of removed edges from G0 : attaching a
corresponding label to graph Gi at each iteration i, takes O(m)
space in all. For each vertex v ∈ Gi , we keep dist(v, Q), which
takes O(n) space. Hence, the space complexity is O(m + n +
|attr(V )|), which is O(m + |attr(V )|), due to the assumption n ≤
m.
from dropping v from H may increase or decrease, so we check if
f(H − {v}, Wq) > f(H, Wq).
Algorithm overview. Our first greedy algorithm, called Basic, has
three steps. First, it finds the maximal (k, d)-truss of G as a candidate. Second, it iteratively removes vertices with smallest attribute
score contribution from the candidate graph, and maintains the remaining graph as a (k, d)-truss, until no longer possible. Finally, it
returns a (k, d)-truss with the maximum attribute score among all
generated candidate graphs as the answer.
The details of the algorithm follow. First, we find the maximal
(k, d)-truss of G as G0 . Based on the given d, we compute a set of
vertices S having query distance no greater than d, i.e., S0 = {u :
distG (u, Q) ≤ d}. Let G0 ⊂ G be the subgraph of G induced
by S0 . Since G0 may contain edges with support < (k − 2), we
invoke the following steps to prune G0 into a (k, d)-truss.
(k, d)-truss maintenance: repeat until no longer possible:
(i) k-truss: remove edges contained in < (k − 2) triangles;
(ii) query distance: remove vertices with query distance > d,
and their incident edges;
Notice that the two steps above can trigger each other: removing edges can increase query distance and removing vertices can
reduce edge support. In the following, we start from the maximal
(k, d)-truss Gl where l = 0, and find a (k, d)-truss with large attribute score by deleting a vertex with the smallest attribute score
contribution.
Finding a (k, d)-truss with large attribute score. G0 is our first
candidate answer. In general, given Gl , we find a vertex v ∈
V (Gl ) \ Vq with the smallest attribute score contribution and remove it from Gl . Notice that v cannot be one of the query vertices.
The removal may violate the (k, d)-truss constraint so we invoke
the (k, d)-truss maintenance procedure above to find the next candidate answer. We repeat this procedure until Gl is not a (k, d)-truss any more. Finally, the candidate answer with the maximum
attribute score generated during this process is returned as the final answer, i.e., arg maxG′ ∈{G0 ,...,Gl−1 } f(G′ , Wq ). The detailed
description is presented in Algorithm 1.
6.3 An improved greedy algorithm
The greedy removal strategy of Basic is simple, but suffers from
the following limitations on quality and efficiency. Firstly, the attribute score contribution myopically considers the removal vertex
v only, and ignores its impact on triggering removal of other vertices, due to violation of k-truss or distance constraints. If these
vertices have many query attributes, it can severely limit the effectiveness of the algorithm. Thus, we need to look ahead to the effect of removing each vertex, and then decide which ones are better to delete. Secondly, Basic removes only one vertex from the graph
in each step, which leads to a large number of iterations, making
the algorithm inefficient.
In this section, we propose an improved greedy algorithm called
BULK, which is outlined in Algorithm 2. BULK uses the notion of
attribute marginal gain and a bulk removal strategy.
EXAMPLE 3. We apply Algorithm 1 on the graph G in Figure
1 with query Q = ({q1 }, {DB, DM }), for k = 4 and d = 2.
First, the algorithm finds the (k, d)-truss G0 as the subgraph H
shown in Figure 1. Next, we select vertex v7 with the minimum
attribute score contribution fG0 (v7 , Wq ) = 0 and remove it from
G0 . Indeed it contains neither of the query attributes. Finally, the
Attribute Marginal Gain. We begin with a definition.
DEFINITION 8 (ATTRIBUTE MARGINAL GAIN). Given a graph
H, attribute query Wq , and a vertex v ∈ V (H), the attribute
marginal gain is defined as gainH (v, Wq ) = f(H, Wq ) − f(H −
SH (v), Wq ), where SH (v) ⊂ V (H) is v together with the set of
vertices that violate (k, d)-truss after the removal of v from H.
Notice that by definition, v ∈ SH (v). For example, consider the
graph G in Figure 1 and the query Q = ({q1 }, {M L}), with
k = 3 and d = 2. The vertex v9 has no attribute “ML”, and
the attribute score contribution is fG (v9 , Wq ) = 0 by Definition 7,
indicating no attribute score contribution by vertex v9 . However,
the fact is that v9 is an important bridge for connections among the
vertices q1 , v8 , and v10 with attribute “ML”. The deletion of v9
will thus lead to the deletion of v8 and v10 , due to the 3-truss constraint. Thus, SG (v9 ) = {v8 , v9 , v10 }. The marginal gain of v9 is
gainG (v9 , Wq ) = f(G, Wq ) − f(G − SG (v9 ), Wq ) = 34 − 19 > 0.
This shows that the deletion of v9 from G decreases the attribute
score. It illustrates that attribute marginal gain can more accurately
estimate the effect of vertex deletion than the attribute score contribution, by naturally incorporating look-ahead.
One concern is that gainH (v, Wq ) needs the exact computation
of SH (v), which has to simulate the deletion of v from H by invoking (k, d)-truss maintenance, which is expensive. An important observation is that if vertex v is to be deleted, its neighbors
u ∈ N(v) with degree k − 1 will also be deleted, to maintain the k-truss. Let PH(v) be the set of v's 1-hop neighbors with degree k − 1 in H, i.e., PH(v) = {u ∈ N(v) : degH(u) = k − 1}.
We propose a local attribute marginal gain, viz., ĝainH(v, Wq) = f(H, Wq) − f(H − PH(v), Wq), to approximate gainH(v, Wq). Continuing with the above example, in graph G, for deleting vertex v9, note that deg(v8) = deg(v10) = 2 = k − 1, so we have PG(v9) = {v8, v9, v10}, which coincides with SG(v9). In general, ĝainH(v, Wq) serves as a good approximation to gainH(v, Wq) and can be computed more efficiently.
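A possible implementation of this approximation is sketched below, again under the assumptions of the earlier sketches: it collects PH(v), simulates its removal on a copy of the graph, and evaluates the score difference through an assumed callback f (standing in for f(H, Wq)).

```cpp
// Sketch of the local marginal gain approximation gain^_H(v, Wq).
#include <functional>
#include <vector>

double localMarginalGain(const Graph& g, int v, int k,
                         const std::function<double(const Graph&)>& f) {
    std::vector<int> P = {v};                          // P_H(v): v plus degree-(k-1) neighbors
    for (int u : g.at(v))
        if (static_cast<int>(g.at(u).size()) == k - 1) P.push_back(u);
    Graph h = g;                                        // simulate removing P_H(v)
    for (int u : P) {
        auto itU = h.find(u);
        if (itU == h.end()) continue;
        for (int w : itU->second) {
            auto itW = h.find(w);
            if (itW != h.end()) itW->second.erase(u);
        }
        h.erase(itU);
    }
    return f(g) - f(h);                                 // f(H, Wq) - f(H - P_H(v), Wq)
}
```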
Algorithm 2 BULK (G, Q)
Input: A graph G = (V, E), a query Q = (Vq, Wq), numbers k and d, parameter ε.
Output: A (k, d)-truss H with the maximum f(H, Wq).
1: Find the maximal (k, d)-truss G0.
2: Let l ← 0;
3: while connectGl(Q) = true do
4:   Find a set of vertices S with the smallest ĝainGl(v, Wq) and size |S| = ε/(1+ε)·|V(Gl)|;
5:   Delete S and their incident edges from Gl;
6:   Maintain the (k, d)-truss of Gl;
7:   Gl+1 ← Gl; l ← l + 1;
8: H ← arg maxG′∈{G0,...,Gl−1} f(G′, Wq);
Structural Trussness. Recall that trusses have a hierarchical structure, i.e., for k ≥ 3, a k-truss is always contained in some (k − 1)-truss [20]. For any vertex or any edge, there exists a k-truss with
the largest k containing it. We define the trussness of a subgraph,
an edge, and a vertex as follows.
DEFINITION 9 (TRUSSNESS). Given a subgraph H ⊆ G, the trussness of H is the minimum support of an edge in H plus 2, i.e., τ(H) = 2 + min{supH(e) : e ∈ E(H)}. The trussness of an edge e ∈ E(G) is τG(e) = max{τ(H) : H ⊆ G ∧ e ∈ E(H)}. The trussness of a vertex v ∈ V(G) is τG(v) = max{τ(H) : H ⊆ G ∧ v ∈ V(H)}.
Consider the graph G in Figure 1, and let the subgraph H be the
triangle △q1 v1 v2 . Then the trussness of H is τ (H) = 2 + mine∈H
supH (e) = 3, since each edge is contained in one triangle in H.
However, the trussness of the edge e(q1 , v1 ) is 4, because there
exists a 4-truss containing e(q1 , v1 ) in Figure 2(b), and any subgraph H containing e(q1 , v1 ) has τ (H) ≤ 4, i.e., τG (e(q1 , v1 )) =
maxH⊆G∧e∈E(H) {τ (H)} = 4. In addition, the vertex trussness
of q1 is also 4, i.e., τG (q1 ) = 4.
Based on the trussness of a vertex (edge), we can infer in constant time whether there exists a k-truss containing it. We construct
the structural trussness index as follows. For each vertex v ∈ V , we
keep the vertex trussness of v, and also maintain the edge trussness
of its incident edges in decreasing order of trussness. This supports efficient checking of whether vertex v or its incident edge is
present in a k-truss, avoiding expensive k-truss search. Also, it can
efficiently retrieve v’s incident edges with a given trussness value.
In addition, we use a hashtable to maintain all the edges and their
trussness. Notice that for a graph G, τ̄ (∅) denotes the maximum
structural trussness of G.
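The following sketch shows one plausible in-memory layout for this index; the struct and field names are illustrative assumptions of this sketch, not the paper's actual data structures.

```cpp
// Sketch of a structural trussness index layout with O(1) membership checks.
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

struct VertexEntry {
    int trussness;                                     // tau_G(v)
    std::vector<std::pair<int, int>> edgesByTruss;     // (neighbor, tau_G(e)), decreasing
};

struct TrussIndex {
    std::unordered_map<int, VertexEntry> vertices;
    std::unordered_map<uint64_t, int> edgeTruss;       // packed edge key -> tau_G(e)

    static uint64_t key(int u, int v) {
        if (u > v) std::swap(u, v);
        return (static_cast<uint64_t>(u) << 32) | static_cast<uint32_t>(v);
    }
    // O(1) check: is vertex v contained in some k-truss?
    bool vertexInKTruss(int v, int k) const {
        auto it = vertices.find(v);
        return it != vertices.end() && it->second.trussness >= k;
    }
    // O(1) check for an edge (u, v).
    bool edgeInKTruss(int u, int v, int k) const {
        auto it = edgeTruss.find(key(u, v));
        return it != edgeTruss.end() && it->second >= k;
    }
};
```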
Bulk Deletion. The second idea incorporated in BULK is bulk
deletion. The idea is that instead of removing one vertex with the
smallest attribute marginal gain, we remove a small percentage of
vertices from the current candidate graph that have the smallest attribute marginal gain. More precisely, let Gi be the current candidate graph and let ε > 0. We identify the set S of vertices with the smallest attribute marginal gain such that |S| = ε/(1+ε)·|V(Gi)|, and remove S from Gi, instead of removing one vertex at a time. Notice that the resulting candidate Gi+1 has size |V(Gi+1)| ≤ 1/(1+ε)·|V(Gi)| after the deletion of S. We can safely terminate the algorithm once the size of Gi drops below k vertices and return the best ATC obtained so far, due to the constraint of the k-truss. Thus, it follows that the number of iterations t drops from O(min{n, m/k}) to O(log_{1+ε}(n/k)).
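One bulk-deletion step can be implemented with a partial selection over the candidate vertices, as in the sketch below; the gains are assumed to have been computed with the local approximation sketched earlier, and query vertices are assumed to have been excluded from the candidate list.

```cpp
// Sketch of one bulk-deletion step: keep the eps/(1+eps) fraction of candidate
// vertices with the smallest (approximate) marginal gain.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Scored { int v; double gain; };

std::vector<int> pickBulk(std::vector<Scored> cand, double eps) {
    std::size_t cut = static_cast<std::size_t>(eps / (1.0 + eps) * cand.size());
    if (cut == 0 || cand.empty()) return {};
    std::nth_element(cand.begin(), cand.begin() + cut, cand.end(),
                     [](const Scored& a, const Scored& b) { return a.gain < b.gain; });
    std::vector<int> out;
    for (std::size_t i = 0; i < cut; ++i) out.push_back(cand[i].v);
    return out;                                        // vertices to delete in this iteration
}
```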
Attributed Trussness. The structural trussness index is not sufficient
for ATC queries. Given a vertex v in G with structural trussness
τG (v) ≥ k, there is no guarantee that v will be present in a (k, d)truss with large attribute score w.r.t. query attributes. E.g., consider
the graph G and vertex v1 with τG (v1 ) = 4 in Figure 1. Here, v1
will not be present in an ATC for query attributes Wq = {“M L”}
since it does not have attribute "ML". On the contrary, v1 is in an
ATC w.r.t. Wq = {“DM ”}. By contrast, v9 is not present in a
4-truss w.r.t. attribute “DM” even though it has that attribute. To
make such searches efficient, for each attribute w ∈ A, we consider an attribute projected graph, which only contains the vertices
associated with attribute w, formally defined below.
7. INDEX-BASED SEARCH ALGORITHM
While the BULK algorithm based on the framework of Algorithm 1 has polynomial time complexity, when the graph G is large
and the query Q has many attributes, finding ATCs entails several
ATC queries, which can be expensive. To help efficient processing
of ATC queries, we propose a novel index called attributed-truss
index (ATindex). It maintains both the graph structure and the attribute information.
DEFINITION 10 (ATTRIBUTE PROJECTED GRAPH & ATTRIBUTED TRUSSNESS). Given a graph G and an attribute w ∈ A(V), the projected graph of G on attribute w is the induced subgraph of G by Vw, i.e., Gw = (Vw, EVw) ⊆ G. Thus, for each vertex v and edge e in Gw, the attributed trussness of v and e w.r.t. w in Gw are respectively defined as τGw(v) = max{τ(H) : H ⊆ Gw ∧ v ∈ V(H)} and τGw(e) = max{τ(H) : H ⊆ Gw ∧ e ∈ E(H)}.
For instance, for the graph G in Figure 1, the projected graph Gw of G on w = "DB" is the graph H1 in Figure 2(a). For vertices v1 and v4, even though both have the same structural trussness τG(v1) = τG(v4) = 4, in graph H1 vertex v4 has attribute trussness τH1(v4) = 4 w.r.t. w = "DB", whereas vertex v1 is not even present in H1, indicating that vertex v4 is more relevant to "DB" than v1.
7.1 Attributed Truss Index
The ATindex consists of three components: structural trussness,
attribute trussness, and inverted attribute index.
Algorithm 3 ATindex Construction(G)
Input: A graph G = (V, E).
Output: ATindex of G.
1: Apply the truss decomposition algorithm [36] on G.
2: for v ∈ G do
3:   Keep the structural trussness of v and its incident edges in record.
4: for w ∈ A do
5:   Project G on attribute w as Gw.
6:   Apply the truss decomposition algorithm [36] on Gw.
7:   Construct an inverted node list invAw.
8: for e = (u, v) ∈ G do
9:   Build a hashtable to preserve its structural trussness value τG(e) and attribute trussness value τGw(e), where w ∈ A(v) ∩ A(u).

7.2 Index-based Query Processing
In this section, we propose an ATindex-based query processing algorithm by means of local exploration, called LocATC.
Algorithm overview. Based on the ATindex, the algorithm first efficiently detects a small neighborhood subgraph around query vertices, which tends to be densely and closely connected with the
query attributes. Then, we apply Algorithm 2 to shrink the candidate graph into a (k, d)-truss with large attribute score. The outline
of the algorithm LocATC is presented in Algorithm 4. Note that,
when no input parameters k and d are given in LocATC, we design
an auto-setting mechanism for parameters k and d, which will be
explained in Section 8.
To find a small neighborhood candidate subgraph, the algorithm
starts from the query vertices Vq , and finds a Steiner tree connecting
the query vertices. It then expands this tree by adding attribute-related vertices to the graph. Using a standard Steiner tree, however, leads to poor quality, which we next explain and address.
Algorithm 4 LocATC (G, Q)
Input: A graph G = (V, E), a query Q = (Vq , Wq ).
Output: A (k, d)-truss H with the maximum f(H, Wq ).
1: Compute an attribute Steiner tree T connecting Vq using attribute truss
distance as edge weight;
2: Iteratively expand T into graph Gt by adding adjacent vertices v, until
|V (Gt )| > η;
3: Compute a connected k-truss containing Vq of Gt with the largest
trussness k = kmax ;
4: Let the kmax-truss be the new Gt.
5: Apply Algorithm 2 on Gt to identify the ATC with parameters k = kmax and d = distGt(Gt, Vq).
Finding attributed Steiner tree T . As discussed above, a Steiner
tree connecting query vertices is used as a seed for expanding into
a (k, d)-truss. A naive method is to find a minimal weight Steiner
tree to connect all query vertices, where the weight of a tree is the
number of edges. Even though the vertices in such a Steiner tree
achieve close distance to each other, using this tree seed may produce a result with a small trussness and low attribute score. For
example, for the query Q = ({q1 , q2 }, {DB}) (see Figure 1), the
tree T1 = {(q1 , v1 ) , (v1 , q2 )} achieves a weight of 2, which is
optimal. However, the edges (q1 , v1 ) and (v1 , q2 ) of T1 will not
be present in any 2-truss with the homogeneous attribute of “DB”.
This suggests that growing T1 into a larger graph will yield a low attribute score for Wq = "DB". By contrast, the Steiner tree T2
= {(q1 , v4 ) , (v4 , q2 )} also has a total weight of 2, and both of its
edges have the attribute trussness of 4 w.r.t. the attribute “DB”, indicating it could be expanded into a community with large attribute
score. For discriminating between such Steiner trees, we propose a
notion of attributed truss distance.
DEFINITION 11 (ATTRIBUTE TRUSS DISTANCE). Given an edge e = (u, v) in G and query attributes Wq, let G = {Gw : w ∈ Wq} ∪ {G}. Then the attribute truss distance of e is defined as d̂istWq(e) = 1 + γ·(Σg∈G (τ̄(∅) − τg(e))), where τ̄(∅) is the maximum structural trussness in graph G.
Inverted Attribute Index. We propose an inverted index for each
attribute w ∈ A, denoted invAw . It maintains an inverted list of
the vertices in Vw , i.e., the vertices containing attribute w, in decreasing order of the vertex structural trussness. Thus, invAw is in
the format {(v1 , τG (v1 )), ..., (vl , τG (vl ))}, τG (vj ) ≥ τG (vj+1 ),
j ∈ [l − 1]. The inverted attribute index and structural trussness
index can both be used to speed up Algorithms 1 and 2.
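A straightforward realization of invAw is sketched below; attrsOf and vertexTruss are assumed inputs (vertex → attributes and vertex → structural trussness) rather than the paper's concrete interfaces.

```cpp
// Sketch of the inverted attribute index invA_w: for each attribute, the
// vertices carrying it, in decreasing order of their structural trussness.
#include <algorithm>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

struct InvertedAttributeIndex {
    // attribute -> list of (vertex, structural trussness), sorted by trussness desc
    std::unordered_map<std::string, std::vector<std::pair<int, int>>> lists;

    void build(const std::unordered_map<int, std::vector<std::string>>& attrsOf,
               const std::unordered_map<int, int>& vertexTruss) {
        for (const auto& [v, attrs] : attrsOf)
            for (const auto& w : attrs)
                lists[w].push_back({v, vertexTruss.at(v)});
        for (auto& [w, lst] : lists)
            std::sort(lst.begin(), lst.end(),
                      [](const auto& a, const auto& b) { return a.second > b.second; });
    }
};
```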
The set G consists of G together with all its attribute projected graphs Gw, for w ∈ Wq, and the difference (τ̄(∅) − τg(e)) measures the shortfall in the attribute trussness of edge e w.r.t. the maximum trussness in G. The sum Σg∈G (τ̄(∅) − τg(e)) indicates the overall shortfall of e across G as well as all its attribute projections. The smaller the shortfall of an edge, the lower its distance.
Finally, γ controls the extent to which small values of structural and attribute trussness, i.e., a large shortfall, are penalized. Using the ATindex, for any edge e and any attribute w, we can access the structural trussness τG(e) and attribute trussness τGw(e) in O(1) time. Since finding a minimum-weight Steiner tree is NP-hard, we apply the well-known algorithm of [26, 31] to obtain a 2-approximation, using the attributed truss distance. The algorithm
takes O(m|Wq | + m + n log n) ⊆ O(m|Wq | + n log n) time,
where O(m|Wq |) is the time taken to compute the attributed truss
distance for m edges.
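Given ATindex lookups, the distance of Definition 11 reduces to a small amount of arithmetic per edge. The sketch below assumes that an edge absent from a projected graph Gw contributes trussness 0; that convention, like the function signature, is an assumption of this illustration.

```cpp
// Sketch of the attributed truss distance of Definition 11.
#include <string>
#include <unordered_map>
#include <vector>

double attributeTrussDistance(int structuralTruss,                                   // tau_G(e)
                              const std::unordered_map<std::string, int>& attrTruss, // tau_{Gw}(e)
                              const std::vector<std::string>& Wq,
                              int maxTruss,                                           // tau_bar(empty) of G
                              double gamma) {
    // Shortfall of e in G itself plus in every attribute projection Gw, w in Wq.
    int shortfall = maxTruss - structuralTruss;
    for (const auto& w : Wq) {
        auto it = attrTruss.find(w);
        int t = (it == attrTruss.end()) ? 0 : it->second;  // absent from Gw -> 0 (assumption)
        shortfall += maxTruss - t;
    }
    return 1.0 + gamma * shortfall;
}
```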
ATindex Construction. Algorithm 3 outlines the procedure of
ATindex construction. It first constructs the index of structural trussness using the truss decomposition algorithm of [36], then constructs the index of attribute trussness, and finally the inverted attribute index. Now, we analyze the time and space complexity of the construction algorithm and the space requirement of the ATindex. It
takes O(mρ) time and O(m) space to apply the truss decomposition algorithm on the graph G with m edges [20], where ρ is the arboricity of G, and ρ ≤ min{dmax, √m}. Then, for each keyword w ∈ A, it invokes the truss decomposition algorithm on the projected graph Gw ⊆ G, which takes O(|E(Gw)|ρ) time and O(m) space. In our implementation, we deal with each Gw separately, releasing its memory after the truss decomposition completes and writing the attribute trussness index to disk. Overall, ATindex construction takes O(ρ(m + Σw∈A |E(Gw)|)) time and O(m) space, and the index occupies O(m + Σw∈A |E(Gw)|) space on disk.
Expand attribute Steiner tree T to Graph Gt . Based on the attribute Steiner tree T built above, we locally expand T into a graph
Gt as a candidate (k, d)-truss with numerous query attributes. Lemma
2 gives a useful principle to expand the graph by inserting one vertex at a time while increasing the attribute score. Specifically,
if θ(Gt, Wq ∩ attr(v)) ≥ f(Gt, Wq) / (2|V(Gt)|), then the graph Gt ∪ {v} has a larger attribute score than Gt. We can identify such vertices, whose attribute set includes the majority attributes of the current candidate graph, and add them to the current graph.
Now, we discuss the expansion process, conducted in a BFS
manner. We start from vertices in T , and iteratively insert adjacent vertices with the largest vertex attribute scores into Gt until
the number of vertices exceeds a threshold η (i.e., we keep inserting while |V(Gt)| ≤ η), where η is
empirically tuned. After that, for each vertex v ∈ V (Gt ), we add
all its adjacent edges e into Gt .
8. EXPERIMENTS
In this section, we evaluate the efficiency and effectiveness of our proposed ATC model and algorithms. All algorithms are implemented in C++, and the experiments are conducted on a Linux server with an Intel Xeon CPU X5570 (2.93 GHz) and 50GB of main memory.
Apply BULK on Gt with auto-setting parameters. Based on the
graph Gt constructed above, we apply Algorithm 2 with given parameters k and d on Gt to find an ATC. If input parameters k
and d are not supplied, we can set them automatically as follows.
We first compute a k-truss with the largest k connecting all query
vertices. Let kmax denote the maximum trussness of the subgraph
found. We set the parameter k to be kmax . We also compute the
query distance of Gt and assign it to d, i.e., d := distGt (Gt , Vq ).
We then invoke the BULK algorithm on Gt with parameters k, d to
obtain an ATC with large trussness and high attribute cohesiveness.
8.1 Experimental Setup
Datasets. We conduct experimental studies using 7 real-world networks. The network statistics are reported in Table 2.
The first dataset is PPI network, Krogan 2006, from the BioGRID database, where the PPI data are related to the yeast Saccharomyces cerevisiae [19]. Each protein has three kinds of attributes: biological processes, molecular functions, and cellular
components. There are 255 known protein complexes for Saccharomyces cerevisiae in the MIPS/CYGD [19], which we regard
as ground-truth communities.
The second dataset is Facebook ego-networks. For a given user
id X in Facebook network G, the ego-network of X, denoted egofacebook-X, is the induced subgraph of G by X and its neighbors.
The dataset contains 10 ego-networks indicated by its ego-user X,
where X ∈ {0, 107, 348, 414, 686, 698, 1684, 1912, 3437, 3890}.
For simplicity, we abbreviate ego-facebook-X to fX, e.g., f698.
Vertex attributes are collected from real profiles and anonymized,
e.g., political, age, education, etc. Each ego-network has several
overlapping ground-truth communities, called friendship circles [30].
Note that the statistics of Facebook in Table 2 are results averaged
over 10 networks.
The third and fourth datasets are web graphs gathered from the universities of Cornell and Texas, respectively.3 Webpages are
partitioned into five groups including course, faculty, student, project,
and staff. Vertex attributes are unique words frequently present in
webpages.
The other 5 networks, Amazon, DBLP, Youtube, LiveJournal
and Orkut, contain 5000 top-quality ground-truth communities. However, since the vertices on these networks have no attributes, we
generate an attribute set consisting of |A| = 0.005 · |V | different
attribute values in each network G. The average number of attribute/vertex |A|
= 0.005 is less than the proportion of attributes
|V |
to vertices in datasets with real attributes (e.g., the value of 0.12
in Facebook) in Table 2. A smaller attribute pool A makes homogeneity of synthetic attributes in different communities more likely, which stress-tests our algorithms. For each ground-truth community, we randomly select 3 attributes, and assign each of these attributes to a random 80% of the vertices in the community. In addition, to model noise in the data, for each vertex in the graph, we randomly assign a random number of attributes drawn from [1, 5]. Except
Krogan, all other datasets are available from the Stanford Network
Analysis Project.4
Friendly mechanism for query formulation. Having to set values for many parameters for posing queries using LocATC can be
daunting. To mitigate this, we make use of the auto-setting of parameters k and d. Additionally, we allow the user to omit the query
attribute parameter Wq in a query Q(Vq , Wq ) and write Q(Vq , ).
Thus, only query nodes need to be specified. Our algorithm will automatically set Wq := ∪v∈Vq A(v) by default. The rationale
is that the algorithm will take the whole space of all possible attributes as input, and leverage our community search algorithms to find communities with a proper subspace of attributes,
while achieving high scores. For example, consider the query Q =
({q1, q2}, ) on graph G in Figure 1; LocATC automatically sets
Wq := {DB, DM, M L}. The discovered community is shown in
Figure 2(b), which illustrates the feasibility of this strategy. This
auto-complete mechanism greatly facilitates query formulation and helps identify the relevant attributes of discovered communities, which benefits users in a simple way.
Handling bad queries. In addition to auto-complete query formulation, we discuss how to handle bad queries issued by users. Bad
queries contain query nodes and query attributes that do not constitute a community. Our solution is to detect outliers in bad queries and then suggest good candidate queries to users. The whole framework includes three steps. First, it identifies bad queries. Based on the structural constraint of the (k, d)-truss, if the query nodes span a long distance and are loosely connected in the graph, the query tends to be bad. In addition, if none of the query attributes is present in the proximity of the query nodes, there is likely no community with homogeneous attributes, again indicating a bad query. Instances of bad queries Q(Vq, Wq) have no (k, d)-truss containing Vq or achieving a non-zero score for attributes Wq. Second, it recommends good candidate queries. Because of the outliers contained in bad queries, we partition the given query into several smaller queries. Based on the distribution of graph distance, graph cohesiveness, and query attributes, we partition the given query nodes into several disjoint good queries. Specifically, we start from one query node and find the (k, d)-truss community containing it. The query nodes and query attributes present in this community form one new query. The process is repeated until every query node is either covered by one such community or found to have no community containing it. Thus, we obtain several new queries that are good for finding an ATC. Third, our approach quickly terminates by returning no communities, due to the violation of the (k, d)-truss constraint and irrelevant query attributes.
Algorithms Compared. To evaluate the efficiency and effectiveness of our proposed index and algorithms, we evaluate and compare the three algorithms – Basic, BULK, and LocATC. Here,
Basic is the top-down greedy approach in Algorithm 1, which removes a single node with the smallest node attribute contribution in
3 http://linqs.cs.umd.edu/projects/projects/lbc/
4 snap.stanford.edu
Table 2: Network statistics (K = 10^3 and M = 10^6)
Network      |V|    |E|    dmax    τ̄(∅)   |A|     |attr(V)|
Krogan       2.6K   7.1K   140     16     3064    28151
Facebook     1.9K   8.9K   416     29     228     3944
Cornell      195    304    94      4      1588    18496
Texas        187    328    104     4      1501    15437
Amazon       335K   926K   549     7      1674    1804406
DBLP         317K   1M     342     114    1584    1545490
Youtube      1.1M   3M     28,754  19     5327    2163244
LiveJournal  4M     35M    14,815  352    11104   12426432
Orkut        3.1M   117M   33,313  78     9926    10373866
Figure 4: Quality evaluation (F1 score) on networks with real-world attributes and ground-truth communities
Figure 5: Comparison of precision and recall on the f414 network
Figure 6: Efficiency evaluation (query time in seconds) on networks with real-world attributes and ground-truth communities
each iteration. BULK is the improved greedy algorithm in Algorithm 2, which removes a set of nodes of size ε/(1+ε)·|V(Gi)| from
graph Gi in each iteration. We empirically set ε = 0.03. LocATC
is the bottom-up local exploration approach in Algorithm 4. For all
methods, we set the parameter k = 4 and d = 4 by default. For
LocATC, we empirically set the parameter η = 1000 and γ = 0.2,
where η = 1000 is selected in order to achieve stable quality and
efficiency by testing η in the range [100, 2000], and γ = 0.2 is selected to balance the cohesive structure and homogeneous attributes
for communities explored.
In addition, to evaluate the effectiveness of the ATC model on
attributed graphs, we implemented three state-of-the-art community search methods – ACC, MDC and LCTC. The k-core based
attribute community search (ACC) [14] finds a connected k-core
containing one given query node with the maximum number of
common query attributes shared in this community. The minimum degree-based community search (MDC) [35] globally finds
a dense subgraph containing all query nodes with the highest minimum degree under distance and size constraints. The closest truss
community search (LCTC) [22] locally finds a connected k-truss
with the largest k containing all query nodes, and a small diameter.
Note that both MDC and LCTC only consider the graph structure
and ignore the attributes. ACC considers both graph structure and
attributes, but it only deals with a single query node with query
attributes and uses k-core as community model.
Networks with real-world attributes. We experiment with the
Krogan network and the 10 Facebook ego-networks, all having
real-world attributes. For every ground-truth community, we randomly select a set of query nodes with size drawn uniformly at
random from [1, 16]. We use 2 representative attributes from the
community as query attributes. We choose attributes occurring
most frequently in a given community and rarely occurring in other
communities as representative attributes. We evaluate the accuracy
of detected communities and report the averaged F1-score over all
queries on each network.
Figure 4 shows the F1-score on Krogan, Cornell, Texas, and
the 10 Facebook ego-networks. Our method (LocATC) achieves
the highest F1-score on most networks, except for the Facebook ego-networks f104 and f1684. The reason is that vertices of ground-truth communities in f104 and f1684 are strongly connected in
structure, but are not very homogeneous on query attributes. LCTC
has the second best performance, and outperforms MDC on all
networks. We can see that MDC and LCTC do not perform as
well as LocATC, because those community models only consider
structure metrics, and ignore attribute features. Note that for each
query with multiple query vertices, the attribute community search
method ACC randomly takes one query vertex as input. We make
this explicit and denote it as ACC-Q1 in Figure 4. For comparison, we apply the same query on our method LocATC, and denote
it as LocATC-Q1. LocATC-Q1 clearly outperforms ACC-Q1 in
terms of F1-score, showing the superiority of our ATC model. In
addition, LocATC achieves a higher score than LocATC-Q1, indicating that our method can discover more accurate communities with
more query vertices. Furthermore, we also compare the precision
and recall of all methods on the f414 network in Figure 5. MDC performs the worst on precision, since it considers no query attributes and includes many nodes that are not in ground-truth communities. ACC-Q1 is the winner on precision, which is explained by the strict attribute constraint in its definition. On the other hand, in
terms of recall, ACC-Q1 is the worst method as it only identifies a
small part of ground-truth communities. Overall, LocATC achieves
a good balance between precision and recall. This is also reflected
in LocATC achieving the best F1-score on most datasets (Figure
4).
Figure 6 shows the running time performance of all methods.
In terms of supporting multiple query vertices, LocATC runs up
to two orders of magnitude faster than MDC and LCTC on small
ego-networks in Facebook, and LCTC is the winner on the Cornell and Texas networks. For one query vertex, ACC-Q1 runs faster than LocATC-Q1, since k-cores can be computed more quickly than k-trusses.
Queries. For each dataset, we randomly test 100 sets of queries
Q = (Vq , Wq ), where we set both the number of query nodes |Vq |,
and the number of query attributes |Wq | to 2 by default.
Evaluation Metrics. To evaluate the quality of communities found
by all algorithms, we measure the F1-score reflecting the alignment
between a discovered community C and a ground-truth community Ĉ. Given a ground-truth community Ĉ, we randomly pick
query vertices and query attributes from it and query the graph
using different algorithms to obtain the discovered community C.
Then, F1 is defined as F1(C, Ĉ) = 2·prec(C, Ĉ)·recall(C, Ĉ) / (prec(C, Ĉ) + recall(C, Ĉ)), where prec(C, Ĉ) = |C ∩ Ĉ| / |C| is the precision and recall(C, Ĉ) = |C ∩ Ĉ| / |Ĉ| is the recall. For all efficiency experiments, we consistently report
the running time in seconds.
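For reference, this F1 computation amounts to the small routine below, a sketch over vertex sets rather than the authors' actual evaluation harness.

```cpp
// Minimal sketch of the F1 score between a discovered community C and a
// ground-truth community (both given as vertex-id sets).
#include <unordered_set>

double f1(const std::unordered_set<int>& C, const std::unordered_set<int>& truth) {
    int inter = 0;
    for (int v : C) if (truth.count(v)) ++inter;
    if (inter == 0 || C.empty() || truth.empty()) return 0.0;
    double prec = static_cast<double>(inter) / C.size();
    double rec  = static_cast<double>(inter) / truth.size();
    return 2.0 * prec * rec / (prec + rec);
}
```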
8.2 Quality Evaluation
To evaluate the effectiveness of different community models, we
compare LocATC with three state-of-the-art methods – ACC, MDC, and LCTC – on attributed networks with ground-truth communities.
Figure 7: Evaluation on networks with synthetic attributes and ground-truth communities: (a) F1 score, (b) query time (in seconds)
Figure 8: Varying query vertex size |Vq|: query time on (a) f414 and (b) DBLP
Networks with synthetic attributes. In this experiment, we test
on 5 large networks – DBLP, Amazon, Youtube, LiveJournal, and
Orkut, with ground-truth communities and synthetic attributes [38].
We randomly select 1000 communities from 5000 top-quality groundtruth communities as answers. For each community, we generate
a query Q = (Vq , Wq ), where query vertices Vq are randomly
selected from this community with a size randomly drawn from
[1, 16], and query attributes Wq are the 3 community attributes.
Figure 7 (a) shows the F1-score. Our method LocATC achieves
the best F1-score among all compared methods on all networks,
and MDC is the worst. The results clearly show the effectiveness
and superiority of our ATC model for attributed community search.
Moreover, LocATC-Q1 outperforms ACC-Q1 on most networks.
Figure 7 (b) reports the running times of all methods on all networks. As we can see, LocATC runs much faster than MDC, and is
close to LCTC. This indicates that LocATC can achieve high quality over large networks with an efficiency comparable to LCTC.
Thus, compared to LCTC, the additional overhead of reasoning
with attribute cohesiveness is small while the improvement in quality of communities discovered is significant. In addition, LocATCQ1 runs much faster than LocATC, which shows the high efficiency
of local exploration for one query vertex.
8.3 Efficiency Evaluation
We evaluate the various approaches using different queries on
ego-facebook-414 (aka f414) and DBLP.
Varying query vertex size |Vq |. We test 5 different values of |Vq |,
i.e., {1, 2, 4, 8, 16} with the default query attribute size |Wq | = 2.
For each value of |Vq |, we randomly generate 100 sets of queries,
and report the average running time in seconds. The results for
f414 and DBLP are respectively shown in Figure 8 (a) and (b).
LocATC achieves the best performance, and increases smoothly
with increasing query vertex size. BULK is more efficient than
Basic, thanks to the bulk deletion strategy. Most of the cost of
BULK and Basic comes from computing the maximal (k, d)-truss
G0. All methods take less time on f414 than on the DBLP network,
due to the small graph size of f414.
Figure 9: Varying query attribute size |Wq|: query time on (a) f414 and (b) DBLP
Table 3: Comparison of index size (in Megabytes) and index construction time (wall-clock time in seconds)
Network      Graph Size   Index Size (K-Truss / ATindex)   Index Time (K-Truss / ATindex)
Krogan       0.24         0.15 / 1.8                       0.11 / 0.786
Amazon       24           19 / 75                          6.7 / 21.7
DBLP         23           20 / 57                          14.2 / 35.2
Youtube      52           59 / 105                         75.6 / 113.6
LiveJournal  568          666 / 1091                       2142 / 3556
Orkut        1710         2190 / 3451                      21011 / 26545

8.4 Index Construction
Table 3 reports the size (MB) and construction time (seconds) of
the structural k-truss index (K-Truss) and ATindex, along with the
size of each network. The 10 Facebook ego-networks have similar
results, omitted from Table 3 for brevity. The size of ATindex is
comparable to the original graph size and structural k-truss index. It confirms that the ATindex scheme has O(m + Σw∈A |E(Gw)|) space complexity. Given |A| > 1000 on all these networks, it
shows the projected attribute graphs are very sparse. The ATindex
construction time is comparable to k-truss index construction and
is nearly as efficient.
It can be seen that query processing efficiency is greatly aided by the ATindex. For instance, consider
the index construction and query processing times on the DBLP network. In Figure 9(b) and Figure 8(b), the query times of BULK and Basic without the ATindex scheme take nearly 20 seconds, while the
construction time of ATindex is only 35.2 seconds (Table 3). That
is, merely processing two queries more than pays off for the index
construction effort.
8.5 Parameter Sensitivity Evaluation
In this experiment, we vary the various parameters used in the synthetic data generation, the query generation, and the algorithm definitions, and evaluate the quality and efficiency performance of
LocATC.
Varying homogeneous attributes in synthetic datasets.
For
each ground-truth community in Amazon, we randomly select 3 attributes, and assign each of these attributes to a random Z% of the vertices in the community, where Z is a random number in [50, Y]. Note that different attributes may have different values of Z. The parameter Y is varied from 60 to 90. As Y is increased, intuitively the level
of homogeneity in the network and in its communities increases.
The F1-score results are shown in Figure 10. As homogeneous attributes in communities increase, MDC and LCTC maintain the same F1-score, while the F1-score of all attributed community search methods – LocATC, LocATC-Q1, and ACC-Q1 – increases
as homogeneity increases. Once again, LocATC is the best method
even when the proportion of homogeneous attributes falls in [50, 60]. LocATC-Q1 beats ACC-Q1 for all settings of homogeneity. Similar results can also be observed on other synthetic datasets.
Varying query attribute size |Wq |. We test 5 different values of
|Wq | from 1 to 5. For each value of |Wq |, we randomly generate
100 sets of queries, and report the average running time. We show
the result for f414 and DBLP respectively in Figure 9 (a) and (b).
Figure 9 shows all methods register only a modest increase in running time as |Wq | increases. Again, the local exploration method
LocATC significantly outperforms other methods.
Figure 10: Varying homogeneous attributes on Amazon: F1-score
Figure 11: Varying parameter |A|/|V| on Amazon: F1-score
Figure 12: Varying queries on f414: F1-score, (a) varying |Vq|, (b) varying |Wq|
Figure 13: Varying ε on f414: (a) F1-score, (b) query time
Varying the average number of attributes per vertex |A|/|V| in synthetic datasets. In this experiment, we vary the average number of attributes per vertex |A|/|V| to generate different attribute sets in Amazon. The results are shown in Figure 11. With an increased |A|/|V|, LocATC performs better. This is because the size of the attribute set A becomes larger, which makes homogeneity of synthetic attributes in different communities more likely. Finally, it brings more challenges to detecting accurate communities for a smaller |A|/|V|. We can obtain similar results on other synthetic datasets.
Varying query vertex size |Vq | and query attribute size |Wq |.
We test the quality performance of LocATC using different queries
by varying |Vq | and |Wq |. The results are shown in Figure 12 (a)
and (b). As we can see, given more information of query vertices and query attributes within communities, our algorithm accordingly performs better.
8.7 Case Study on PPI network
Besides the quality evaluation measured by F1-score, we also apply the LocATC algorithm on the protein-protein interaction (PPI)
network Krogan. Recall that the performance of LocATC handling bad queries has been tested in Section 8.6, and we test good
queries here. We examine the details of the discovered protein complexes to investigate biologically significant clues, which help us to
better understand the protein complexes. Figure 17(a) shows one
complex, the "transcription factor TFIIIC complex" in Saccharomyces cerevisiae, which was previously identified by biologists. The graph
contains 6 nodes and 12 edges, with density 0.8 and diameter 2.
We adopt the following procedure for checking whether a protein
is present in a complex. Taking gene id “854277” as an example, we can go to the NCBI5 , input “854277” in the search box,
and select the category of “Gene”, then we will obtain information related to this gene, from which we can check whether this
gene is one of the proteins in the protein complex6. Similar to the procedure of good query generation in Sec. 8.2, we randomly
sample a query as Q = (Vq , Wq ) where Vq ={854277, 856100}
and Wq ={“GO:0001009”, “GO:0001041”}, and set the parameters k = 3 and d = 3. To illustrate the importance of the consideration of protein attributes in detecting protein complexes, we
simply use the structure and find the (3, 3)-truss shown in Figure 17(b). This community contains 11 proteins including 6 proteins of the ground-truth complex of Figure 17(a). The other 5
proteins not present in the ground-truth complex are associated
with no query attributes, but have other attributes w3 and w4 , as
shown in Figure 17(b). When we look up the database of Gene
Varying parameters ε, γ, and η. We test the performance of LocATC by varying ε, γ, and η. We used the same query nodes that were selected in Sec. 8.2 on the f414 network. Similar results can also be observed on other networks with real attributes. The F1-score and query time obtained by varying ε are reported in Figure 13 (a) and (b), respectively. As we can see, with a smaller ε, LocATC removes a smaller portion of nodes in each iteration, which achieves a higher F1-score at the cost of more query time. In addition, we test different values of γ and report the results in Figure 14 (a) and (b). The F1-score remains stable as γ increases from 0.1 to 0.5, and then decreases slightly for the larger value γ = 1.0. Thus, the default choice of γ = 0.2 is good at balancing the cohesive structure and homogeneous attributes in an efficient way. Furthermore, we also report
the results by varying the parameter η in Figure 15 (a) and (b). As
can be seen, the F1-score remains stable with increasing η, while
the running time increases a little with larger η. The results show
that the default setting η = 1000 is large enough for achieving a
good balance of efficiency and quality.
8.6 Bad Query Evaluation
In this experiment, we use bad queries to test the performance
of LocATC on f414 and DBLP networks. We generate bad queries
by randomly choosing query nodes and query attributes from different ground truth communities. We test a set of 100 bad queries
generated in this manner. We also test 100 queries that are selected
in Sec. 8.2 as good queries. We intuitively expect bad queries to result in discovered communities with poor density, compared to good queries. We compare the quality of the discovered communities C in terms of edge density, i.e., |E(C)|/|V(C)|, averaged across the 100 queries. Figure 16(a) shows the results of edge density. Compared with bad queries, LocATC can find communities with larger densities for good queries. Figure 16(b) shows the average running times on good and bad queries. LocATC processes bad queries much faster than good queries, achieving a 6.8 times efficiency improvement on the DBLP network. Intuitively, LocATC can quickly return empty answers for bad queries, if the algorithm can determine the weak structure of the query nodes and the heterogeneous attributes of neighbors in the proximity of the query nodes.
5 https://www.ncbi.nlm.nih.gov/
6 http://wodaklab.org/cyc2008/resources/CYC2008_complex.tab
Figure 14: Varying γ on f414: (a) F1-score, (b) query time
Figure 15: Varying η on f414: (a) F1-score, (b) query time
Figure 16: Testing bad queries on f414 and DBLP: (a) density, (b) query time
Figure 17: Q = ({q1, q2}, {w1, w2}) where q1 = 854277, q2 = 856100, w1 = "GO:0001009", and w2 = "GO:0001041": (a) ATC and a protein complex, (b) (3,3)-truss
[4] V. Batagelj and M. Zaversnik. An o (m) algorithm for cores decomposition of
networks. arXiv preprint cs/0310049, 2003.
[5] G. Bhalotia, A. Hulgeri, C. Nakhe, S. Chakrabarti, and S. Sudarshan. Keyword
searching and browsing in databases using banks. In ICDE, pages 431–440,
2002.
[6] C. Bothorel, J. D. Cruz, M. Magnani, and B. Micenkova. Clustering attributed
graphs: models, measures and methods. Network Science, 3(03):408–444, 2015.
[7] H. Cheng, Y. Zhou, X. Huang, and J. X. Yu. Clustering large attributed
information networks: an efficient incremental computing approach. DMKD,
25(3):450–477, 2012.
[8] N. Chiba and T. Nishizeki. Arboricity and subgraph listing algorithms. SIAM J.
Comput., 14(1):210–223, 1985.
[9] J. Cohen. Trusses: Cohesive subgraphs for social network analysis. Technical
report, National Security Agency, 2008.
[10] W. Cui, Y. Xiao, H. Wang, Y. Lu, and W. Wang. Online search of overlapping
communities. In SIGMOD, pages 277–288, 2013.
[11] W. Cui, Y. Xiao, H. Wang, and W. Wang. Local search of communities in large
graphs. In SIGMOD, pages 991–1002, 2014.
[12] B. Ding, J. X. Yu, S. Wang, L. Qin, X. Zhang, and X. Lin. Finding top-k
min-cost connected trees in databases. In ICDE, pages 836–845, 2007.
[13] J. Edachery, A. Sen, and F. J. Brandenburg. Graph clustering using distance-k
cliques. In Proceedings of the 7th International Symposium on Graph Drawing,
pages 98–106, 1999.
[14] Y. Fang, R. Cheng, S. Luo, and J. Hu. Effective community search for large
attributed graphs. PVLDB, 9(12):1233–1244, 2016.
[15] A. Gajewar and A. D. Sarma. Multi-skill collaborative teams based on densest
subgraphs. In SDM, pages 165–176, 2012.
[16] S. Günnemann, B. Boden, and T. Seidl. Db-csc: a density-based approach for
subspace clustering in graphs with feature vectors. In ECML/PKDD, pages
565–580, 2011.
[17] V. Hristidis, L. Gravano, and Y. Papakonstantinou. Efficient ir-style keyword
search over relational databases. In PVLDB, pages 850–861, 2003.
[18] V. Hristidis and Y. Papakonstantinou. Discover: Keyword search in relational
databases. In PVLDB, pages 670–681, 2002.
[19] A. L. Hu and K. C. Chan. Utilizing both topological and attribute information
for protein complex identification in ppi networks. TCBB, 10(3):780–792, 2013.
[20] X. Huang, H. Cheng, L. Qin, W. Tian, and J. X. Yu. Querying k-truss
community in large and dynamic graphs. In SIGMOD, pages 1311–1322, 2014.
[21] X. Huang, H. Cheng, and J. X. Yu. Dense community detection in multi-valued
attributed networks. Information Sciences, 314:77–99, 2015.
[22] X. Huang, L. V. Lakshmanan, J. X. Yu, and H. Cheng. Approximate closest
community search in networks. PVLDB, 9(4):276–287, 2015.
[23] V. Kacholia, S. Pandit, S. Chakrabarti, S. Sudarshan, R. Desai, and
H. Karambelkar. Bidirectional expansion for keyword search on graph
databases. In VLDB, pages 505–516, 2005.
[24] M. Kargar and A. An. Discovering top-k teams of experts with/without a leader
in social networks. In CIKM, pages 985–994, 2011.
[25] S. Khuller and B. Saha. On finding dense subgraphs. In ICALP, pages 597–608,
2009.
[26] L. Kou, G. Markowsky, and L. Berman. A fast algorithm for steiner trees. Acta
informatica, 15(2):141–145, 1981.
[27] T. Lappas, K. Liu, and E. Terzi. Finding a team of experts in social networks. In
KDD, pages 467–476, 2009.
[28] G. Li, B. C. Ooi, J. Feng, J. Wang, and L. Zhou. Ease: an effective 3-in-1
keyword search method for unstructured, semi-structured and structured data.
In SIGMOD, pages 903–914, 2008.
Ontology7, we know that the "biological process" attributes "GO:0001009" and "GO:0001041" respectively represent "transcription from RNA polymerase III hybrid type promoter" and "transcription from RNA polymerase III type 2 promoter". Except for the query attributes, we omit the details of other attributes from Figure 17 for simplicity. LocATC is able to identify all proteins that perform the same biological process of transcription from RNA polymerase. Overall, LocATC successfully identifies all proteins that constitute the ground-truth complex in Figure 17(a). Other than these two homogeneous attributes, interestingly, we also discover another two attributes shared by all proteins in terms of "molecular functions". Specifically, the attributes "GO:0001003" and "GO:0001005" respectively represent the DNA binding activities "RNA polymerase III type 2 promoter sequence-specific DNA binding" and "RNA polymerase III type 1 promoter sequence-specific DNA binding". Overall, this complex exists in the cell nucleus, according to the same "cellular component" attribute "GO:0005634" shared by all proteins.
9. CONCLUSION
In this work, we propose an attributed truss community (ATC)
model that finds a community containing the query nodes that has a cohesive and tight structure and shares homogeneous query attributes. The problem of finding an ATC is NP-hard. We also show
that the attribute score function is not monotone, submodular, or supermodular, indicating approximation algorithms may not be easy
to find. We propose several carefully designed strategies to quickly
find high-quality communities. We design an elegant and compact
index, ATindex, and implement an efficient query processing algorithm, which exploits local exploration and bulk deletion. Extensive experiments reveal that ground-truth communities and social circles can be accurately found by our ATC model, and that
our model and algorithms significantly outperform previous approaches. Several interesting questions remain. Some examples include attributed community search over heterogeneous graphs and
edge-weighted graphs, and w.r.t. weighted query attributes.
10. REFERENCES
[1] S. Agrawal, S. Chaudhuri, and G. Das. Dbxplorer: A system for keyword-based
search over relational databases. In ICDE, pages 5–16, 2002.
[2] B. Bahmani, R. Kumar, and S. Vassilvitskii. Densest subgraph in streaming and
mapreduce. PVLDB, 5(5):454–465, 2012.
[3] N. Barbieri, F. Bonchi, E. Galimberti, and F. Gullo. Efficient and effective
community search. DMKD, 29(5):1406–1433, 2015.
7 http://geneontology.org/ontology/go-basic.obo
[29] R.-H. Li, L. Qin, J. X. Yu, and R. Mao. Influential community search in large
networks. PVLDB, 8(5), 2015.
[30] J. J. McAuley and J. Leskovec. Learning to discover social circles in ego
networks. In NIPS, volume 272, pages 548–556, 2012.
[31] K. Mehlhorn. A faster approximation algorithm for the steiner problem in
graphs. Information Processing Letters, 27(3):125–128, 1988.
[32] L. Qin, J. X. Yu, L. Chang, and Y. Tao. Querying communities in relational
databases. In ICDE, pages 724–735, 2009.
[33] Y. Ruan, D. Fuhry, and S. Parthasarathy. Efficient community detection in large
networks using content and links. In WWW, pages 1089–1098, 2013.
[34] A. E. Sariyuce, C. Seshadhri, A. Pinar, and U. V. Catalyurek. Finding the
hierarchy of dense subgraphs using nucleus decompositions. In WWW, pages
927–937, 2015.
[35] M. Sozio and A. Gionis. The community-search problem and how to plan a
successful cocktail party. In KDD, pages 939–948, 2010.
[36] J. Wang and J. Cheng. Truss decomposition in massive networks. PVLDB,
5(9):812–823, 2012.
[37] Y. Wu, R. Jin, J. Li, and X. Zhang. Robust local community detection: On free
rider effect and its elimination. PVLDB, 8(7), 2015.
[38] J. Yang and J. Leskovec. Defining and evaluating network communities based
on ground-truth. In ICDM, pages 745–754, 2012.
[39] Y. Zhou, H. Cheng, and J. X. Yu. Graph clustering based on structural/attribute
similarities. PVLDB, 2(1):718–729, 2009.
A NEW METHOD TO INDEX AND STORE SPATIO-TEMPORAL
DATA
Guillermo de Bernardo, Departamento de Computación, Universidade da Coruña, A Coruña,
Spain, [email protected]
Ramón Casares, Departamento de Computación, Universidade da Coruña, A Coruña, Spain,
[email protected]
Adrián Gómez-Brandón, Departamento de Computación, Universidade da Coruña, A Coruña,
Spain, [email protected]
José R. Paramá, Departamento de Computación, Universidade da Coruña, A Coruña, Spain,
[email protected]
Abstract
We propose a data structure that stores object trajectories in a compressed way while, at the same time, allowing queries to be answered efficiently without decompressing the data. We use a data structure, called the K2-tree, to store the full positions of all objects at regular time intervals. For storing the positions of objects between two time instants represented with K2-trees, we only encode the relative movements. In order to save space, those relative movements are encoded with only one integer, instead of two (x,y) coordinates. Moreover, the resulting integers are further compressed with a technique that allows us to manipulate those movements directly in compressed form. In this paper, we show an
experimental evaluation of this structure, which shows important savings in space and good response
times.
Keywords: Spatio-temporal structures, K2-tree, SC-Dense Codes, Displacement Logs, Object temporal position.
1 INTRODUCTION
Nowadays, mobile phones and GPS devices are becoming more and more popular. Moreover, planes, ships, and even some cars have devices informing about their position. Hence, there is a large set of information about the positions of moving objects (or individuals). However, this vast amount of information represents an enormous challenge, because we need efficient ways to store and query these
data.
For example, we can be interested in knowing "who was at the science museum at five o'clock?", or
"who visited the cathedral between six o'clock and seven o'clock?". The spatio-temporal databases were
developed to answer this type of queries. The two main types of queries issued in spatial databases are
the time-slice queries, which return data at a specific time instant, and the time-interval (Pfoser, et al.,
2000) queries, which return data between two time instants.
The spatio-temporal database field has been extensively studied (Šaltenis et al., 2000; Beckmann et al.,
1990; Tao and Papadias, 2001), and thus it has reached a mature state at both the academic and industrial levels. In this work, we provide a new point of view on the spatio-temporal database field. We
present a new spatio-temporal index for moving objects, which is able to answer queries very efficiently,
but uses compact data structures to save space.
The setup of a classical database system is based on a set of data structures that use the memory hierarchy starting at the disk. The key component to index, and thus efficiently query, the data is a tree-based structure, such as the Spatio-Temporal R-tree and the Trajectory-Bundle tree (Pfoser, et
al., 2000) or the Time-Parameterized R-tree (Šaltenis et al., 2000).
Trees have several appealing characteristics. They provide logarithmic search times and a nice adaptation to the memory hierarchy, since they are easy to split between main memory and disk. This last feature
is a requirement in classical computing.
A new family of data structures, called compact data structures, aims at joining the data and the access methods in a single structure. This structure is compact in the sense that the space consumption is kept
as low as possible in order to be able to fit the entire structure in main memory (Jacobson, 1989). By
fitting data and access methods in main memory, we make a better use of the memory hierarchy, since
the upper levels have higher bandwidth and lower latency. For this sake, these data structures have to
use complex mechanisms in order to compress the data; at the same time, the compressed data must be queryable without prior decompression, otherwise the data could not be kept in main memory.
The main contribution of this work is the first (to the best of our knowledge) compact data structure to
represent and query data describing moving objects. We will show that we can represent the data using at most 25.59% of the space needed to represent them in plain form, while at the same time querying them, without decompression, with very good response times.
By providing a compact data structure to represent and query spatio-temporal information, we open a
new way to deal with this type of information, aligned with the new tendencies in computer science
designed to face the problem of processing and/or analysing vast amounts of information. The term Big
Data is often used to refer to this new way of computation, and one of its strategies is to perform "in-memory" data processing, that is, to avoid costly disk transfers by trying to fit all the information, or at least larger chunks of information, in main memory.
2 RELATED WORK
An object trajectory is often modelled as a sequence of locations of the object, each one represented as
the (x,y) coordinates of the object in the space. Each of those locations corresponds to a time instant,
and thus the trajectory is a vector of coordinate positions.
There has been much research work related to object trajectories. Most investigations were oriented to efficiently accessing and indexing the data. For example, the 3DR-Tree (Vazirgiannis et al., 1998) is a spatio-temporal method which considers time as another axis along with the spatial coordinates. In this structure, a line segment represents an object at a location during a time interval. The 3DR-Tree is efficient at processing interval queries, but inefficient at processing instant queries.
The RT-Tree (Xu et al. 1990) is a structure where the temporal information is maintained at the nodes
of a traditional R-tree. In this structure, the temporal information plays a secondary role because the
query is directed by spatial information. Therefore, queries with temporary conditions are not efficiently
processed (Nascimento et al., 1998).
The HR-Tree (Nascimento et al., 1999) uses an R-tree for each time instant, but trees are stored keeping
in mind the overlapping between them. The main idea is that, given two trees, the most recent
corresponds to an evolution of the oldest one and then the sub-trees that compose them can be shared
between both trees. The main advantage of the HR-Tree is that it can run time-instant queries with good performance; however, this structure requires too much space.
The MV3R-Tree (Tao et al., 2001) is a structure based on the manipulation of multiple versions of R-trees. It is an extension of the MVB-Tree (Becker et al., 1996).
The SEST-Index (Gutiérrez et al., 2005) is a structure that maintains snapshots of the state of objects
for some instants of time along with event logs between these snapshots. The events provide more
information than a simple change of position of the object. For example, an event can indicate when an object enters or exits a particular place, or whether an object collides with another.
From the previous structures, the only one that has a relationship with our work is the last one. In this
work, we also use snapshots and logs to represent object trajectories. The difference is that SEST-Index
is a classical approach, whereas our structure is a compact structure designed to fit in main memory.
3 COMPACT DATA STRUCTURES
As usual, compact data structures can be built on top of other data structures. We devote this section to
describe the compact data structures that are the basic components of our proposal.
In order to understand the ideas explained in this paper, we need to know two essential operations over
bit sequences (or bitmaps), which can be solved in constant time (Navarro and Mäkinen, 2007):
• rankb(B, p) counts the number of occurrences of the bit b in the bitmap B until position p. For
example, rank1(T, 2) returns the number of ones that appear in the bitmap T up to position 2.
• selectb(B, n) returns the position in the bitmap B where the n-th occurrence of the bit b is located (both operations are illustrated in the sketch below).
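The sketch below illustrates both operations naively in C++ with linear scans; real compact structures answer them in constant time by storing a small amount of precomputed counters on top of the bitmap, which is omitted here.

```cpp
// Naive illustration of rank and select over a bitmap (0-based positions).
// Constant-time versions require auxiliary counter structures not shown here.
#include <cstddef>
#include <vector>

std::size_t rank1(const std::vector<bool>& B, std::size_t p) {    // ones in B[0..p]
    std::size_t c = 0;
    for (std::size_t i = 0; i <= p && i < B.size(); ++i) c += B[i];
    return c;
}

std::size_t select1(const std::vector<bool>& B, std::size_t n) {  // position of the n-th one
    std::size_t c = 0;
    for (std::size_t i = 0; i < B.size(); ++i)
        if (B[i] && ++c == n) return i;
    return B.size();                                               // fewer than n ones
}
```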
The K2-tree (Brisaboa and Ladra, 2009) is a compact data structure to represent binary relations. In
essence, it represents a binary matrix, such as the one shown in Figure 1. The K2-tree structure takes advantage of
several properties of a matrix: sparseness, large areas of zeros, and clustering of ones. It achieves very
low space and supports efficient navigation over the compressed data.
The K2-tree is conceptually a tree. To form the root node, the binary matrix is subdivided into K2
submatrices of equal size. Each one of the K2 submatrices is represented in the root node by one bit. The
submatrices are ordered from left to right and from top to bottom. In Figure 1, we have four submatrices,
for each submatrix, the root node has a bit. Each bit indicates if the submatrix has at least one 1 or not.
Therefore, the root node of Figure 1 is 1011, since the first submatrix has 1s, the second one does not
have 1s, and so on. The submatrices that have 1s are divided again, following the same method until the
subdivision reaches the cell-level.
Instead of using a pointer-based representation, the K2-tree is stored using two bit arrays:
• T (tree): stores all the bits of the K2-tree except those in the last level. The bits are placed following
a levelwise traversal: first the k2 binary values of the root node, then the values of the second level,
and so on.
• L (last level leaves): stores the last level of the tree. Thus, it represents the values of the original cells of the binary matrix. A sketch of how a cell is retrieved by navigating T and L is shown below.
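The following sketch checks whether a single cell of the matrix is a 1, following the usual rank-based child computation for K2-trees; the struct layout and field names are assumptions of this sketch, and rank1 is deliberately naive (a real implementation answers it in constant time over T).

```cpp
// Sketch of K2-tree cell lookup over the T and L bitmaps.
#include <cstddef>
#include <vector>

struct K2Tree {
    std::vector<bool> T, L;
    int k;   // subdivision factor (k = 2 in the example)
    int n;   // matrix side, a power of k

    std::size_t rank1(long z) const {                  // ones in T[0..z]
        std::size_t c = 0;
        for (long i = 0; i <= z; ++i) c += T[i];
        return c;
    }
    bool cell(int r, int c) const { return check(n, r, c, -1); }

private:
    bool check(int size, int r, int c, long z) const {
        if (z >= static_cast<long>(T.size()))           // last level: the answer is in L
            return L[static_cast<std::size_t>(z) - T.size()];
        if (z != -1 && !T[z]) return false;             // empty submatrix, no 1s below
        int child = size / k;
        long y = static_cast<long>(rank1(z)) * k * k    // start of z's children in T:L
               + static_cast<long>(k) * (r / child) + (c / child);
        return check(child, r % child, c % child, y);
    }
};
```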
Another important data structure is the permutation. It is designed to efficiently answer two main
operations, π and π−1, over a sequence of numbers in which each number appears exactly once. The first
operation returns the number stored at a given position of the sequence, whereas the second one identifies
the position of the sequence where a given value is stored.
Figure 1: K2-tree example where the K value is 2.
The permutation can be considered as an array of size n, where for each position or index we have a
numeric value. This data structure allows us to return π in constant time because we only need to get the
value in the queried position. In order to answer π-1, we do not need any additional structure. That is, the
simple original sequence can answer π-1. The procedure is based on a simple observation: if we take the
number of a position as the index of the next position in a sequence of positions, any sequence will end
up forming a cycle. In fact, in a permutation, there are usually several cycles.
For example, in Figure 2 we have the cycle (6, 9), since a 9 is stored at position 6 and a 6 is found at
position 9, closing the cycle. In that permutation there are two more cycles, (11, 12) and (1, 4, 3, 7, 8);
the rest of the elements are not in a cycle, since they point to themselves.
If we want to know at which position the element with value 3 is located, π−1(3), we access position 3
and follow the cycle containing that position until we find a position pointing to 3. That is, from
position 3 we follow the cycle through 7, 8, 1 and, finally, 4, which is the position where the value 3 is
found.
Figure 2: Permutation structure with links.
A permutation could have a single cycle of length n, which would result in long searches to solve π−1. In
order to avoid this problem, the permutation is provided with two additional arrays (Munro et al., 2012).
The first array, called sampled, signals the starting positions of shortcuts that allow skipping some nodes
of a cycle. The other array, called rev_links, gives the position we jump to when we find a shortcut in the
cycle. Using the above example, we do not need to traverse states 7 and 8: we skip from state 3 to state 1,
and then we reach the fourth position. Observe that, to obtain π−1(3), we start the search at state 3 and
go through its cycle until it reaches 3 again; in fact, we are only interested in that last step, that is, in
closing the cycle, so the shortcuts are designed in such a way that we can never miss the last step of a
cycle started at a given position. For example, if we want to obtain π−1(4), we start the search at 4; when
we reach state 3, we take the shortcut to state 1, which is precisely the state we are looking for, since it
points to 4.
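The sketch below illustrates π and π−1 over the permutation of Figure 2, including the shortcut arrays. For simplicity, rev_links is stored uncompacted (one entry per position, 0 when unused) and at most one shortcut is taken per query; a real implementation would compact rev_links using a rank over the sampled bitmap. All names are ours.

#include <cstddef>
#include <iostream>
#include <vector>

struct Permutation {
    std::vector<std::size_t> pi;        // pi[i-1]: value stored at position i (1-based)
    std::vector<bool>        sampled;   // positions that own a shortcut
    std::vector<std::size_t> rev_link;  // shortcut target of each sampled position

    std::size_t direct(std::size_t i) const { return pi[i - 1]; }          // pi(i)

    std::size_t inverse(std::size_t v) const {                             // pi^-1(v)
        std::size_t pos = v;
        bool used_shortcut = false;
        while (direct(pos) != v) {              // stop when pos points to v
            if (!used_shortcut && sampled[pos - 1]) {
                pos = rev_link[pos - 1];        // skip part of the cycle
                used_shortcut = true;
            } else {
                pos = direct(pos);              // follow the cycle
            }
        }
        return pos;
    }
};

int main() {
    // Permutation of Figure 2: cycles (1,4,3,7,8), (6,9), (11,12); shortcut 3 -> 1.
    Permutation p;
    p.pi       = {4, 2, 7, 3, 5, 9, 8, 1, 6, 10, 12, 11};
    p.sampled  = {0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0};
    p.rev_link = {0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0};
    std::cout << p.direct(2) << ' ' << p.inverse(3) << ' ' << p.inverse(4) << '\n';  // 2 4 1
}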
Finally, we present a statistical compression method called (s,c)-Dense Code (Brisaboa, et al., 2003).
Statistical compression uses a model that gives the frequency of appearance of each source symbol
(words, integers, etc.). Then a compression scheme assigns longer codewords to the less frequent source
symbols and shorter codewords to the most frequent. Each original symbol is substituted by its codeword
in the compressed file, which should have a prelude informing the correspondence between original
symbols and codewords. The main feature of the (s,c)-Dense Code is that we can start decompression
at any point, without decompressing from the beginning, a key feature in our method.
4 OUR PROPOSAL
In this proposal, we assume that objects can move freely over the whole space; that is, they do not need to
follow roads or any other kind of network, although our structure can also be used in that scenario.
Examples are boats navigating in the sea or birds flying during their spring migration.
We assume that time is discretized, so that a time instant in our framework corresponds to a given period
of real time; for example, a time instant may last 5 seconds of real time. The length of that period is a
parameter. Similarly, the two-dimensional space is divided into small cells (or tiles), where the size of
each cell is another parameter.
First, we describe the data structures that hold the information, and then we describe the query
algorithms.
4.1 Data structures
Our proposal uses two data structures, called snapshots and logs. A snapshot gives the full position of all
objects at a given time instant; our method stores these full positions at regular time intervals. The logs
record the movements of the objects between two snapshots, as relative movements with respect to the
last known position.
4.1.1 Representation of snapshots
A snapshot can store and index m objects that are located in the two-dimensional space in a compact
way. Given that space is divided into cells, then it can be seen as a grid where each cell can hold objects.
Observe that, if the cell size is sufficiently large, one or more objects can be located in one cell.
Each snapshot uses a K2-tree to represent the distinct positions of our space. As explained, the K2-tree
is designed to represent a binary matrix. We map the binary matrix to the cells dividing the space, in
such a way that each bit of the matrix corresponds to one of the cells of the space. If the bit is set to 1,
this means that there is an object (or more) in the corresponding cell, whereas a 0, means that no object
is present in the corresponding cell.
To represent a binary matrix, the only alternative compact data structure is a wavelet-tree (Brisaboa, et
al., 2009). However, this structure has a poor performance when the number of cells set to 1 in a given
column or row is greater than 1.
The basic K2-tree only serves to check the presence or absence of objects in a given cell. In order to
know which objects are in the cell, we need to add another structure. This new structure is formed by an
array storing the object identifiers (perm) and a bitmap called quantity or Q.
Observe that each 1 in the binary matrix corresponds, on the one hand, to a bit set to 1 in the L bitmap of
the K2-tree and, on the other hand, to one or more object identifiers. The list of object identifiers
corresponding to each of those bits is stored in the perm array, where the object identifiers are sorted
following their order of appearance in L. That is, if L is 0101 and the first bit set to 1 represents a cell
where there are two objects with identifiers 10 and 30, and the second bit set to one corresponds to a
cell where we can find the objects 25, 28, and 29, then perm is 10, 30, 25, 28, 29. The quantity bitmap
is aligned with perm, that is, for each element in perm we have a bit in quantity. This bit indicates with
a 0 if the object in perm is the last object of a specific leaf, whereas a 1 signals that more objects exist.
Therefore, quantity for our previous case is 10110. The first 0 corresponds to the object 30, which is the
last object identifier corresponding to the first cell, whereas the second 0, corresponds to 29, the last
object identifier of the objects present in the second cell.
With this information we can answer two queries (see also the sketch after this list):
• Search for the objects of the n-th leaf: we count the number of 1s in the array of leaves L up to
position n, which gives the number of leaves with objects up to the n-th leaf, x = rank1(L, n). Then
we compute the position of the (x-1)-th 0 in Q, which marks the last bit of the previous leaf, and
we add 1 to obtain the first position of our leaf, p = select0(Q, x-1)+1. p is the position in perm of the
first object identifier corresponding to the searched leaf. From p, we read all the object identifiers
aligned with 1s in Q until we reach a 0, which signals the last object identifier of that leaf.
• Search for the leaf in L corresponding to the k-th position in Q: first, we count the number of leaves
before the object corresponding to k, that is, the number of 0s up to the position before k,
y = rank0(Q, k-1). Then we locate in L the position of the (y+1)-th 1, that is, select1(L, y+1).
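The first of these two queries is sketched below in C++, reusing the rank_bit and select_bit helpers introduced earlier; function and variable names are ours. With the toy example above (L = 0101, Q = 10110, perm = 10, 30, 25, 28, 29), the call objects_of_leaf(L, Q, perm, 4) returns {25, 28, 29}.

#include <cstddef>
#include <vector>

// Object identifiers stored in the n-th leaf position of L (1-based).
std::vector<int> objects_of_leaf(const std::vector<bool>& L,
                                 const std::vector<bool>& Q,
                                 const std::vector<int>& perm,
                                 std::size_t n) {
    std::vector<int> result;
    if (!L[n - 1]) return result;                      // no objects in this cell
    std::size_t x = rank_bit(L, true, n);              // leaves with objects up to n
    // p = select0(Q, x-1) + 1; when x = 1 there is no previous leaf, so p = 1.
    std::size_t p = (x == 1) ? 1 : select_bit(Q, false, x - 1) + 1;
    for (;; ++p) {
        result.push_back(perm[p - 1]);
        if (Q[p - 1] == false) break;                  // a 0 marks the last object
    }
    return result;
}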
The process to get the objects of a cell is very efficient; we only require to find the leaf in the K2-tree
(O(h) where h is the height of the tree, that is, O(log n)) and search the objects of this leaf, using perm
and Q.
However, when we want to know the position of a given object, with the structures shown so far we would
need to search perm sequentially. To avoid this sequential search, we add a permutation over perm. The
permutation allows us to find an object in the array (π−1) faster than a sequential search, while keeping
the constant time required to retrieve the objects associated with a cell (π).
Figure 3: Snapshot example with the K2-tree and the permutation.
For example, assume that we have the binary matrix and the associated K2-tree of Figure 1. Figure 3
shows an example of object identifiers matching that matrix. In addition to the arrays T and L, we add
the array Q and the array of object identifiers perm. Finally, over perm, we construct our permutation
with its cycles and over that cycles, we add three links, in order to improve the performance of the
queries. In this example, we are using the permutation shown in Figure 2.
If we want to find the objects that are in the cell corresponding to the third row and third column (3,3),
we start the search at the root of the K2-tree, and we traverse it downwards until we reach the leaf
representing that cell. First, we compute which child of the root node overlaps the cell (3,3), in this case,
it is the first child, that is, position 0 of T. That position is set to 1, so we continue the search in the next
level. In the second level, we check which submatrix overlaps the cell (3,3), and we obtain the fourth
child, which is represented by the bit at position 7 of T. Since that bit is set to 1, we continue, reaching
the last level. Finally, the leaf corresponding to the cell (3,3) is the first child of the considered
submatrix (position 4 in L) and its value is a 1, hence the cell holds objects.
Next, we need to know which positions of Q are linked to that leaf. In order to find them, we count the
number of leaves with objects before our leaf, rank1(L, p-1), where p is the position of our leaf in L. In
our case p is 4, so we obtain rank1(L, 3) = 1; therefore, we know that there is one leaf with objects before
ours. Then we find in Q where that previous leaf ends, that is, select0(Q, k), where k is the number of
leaves before the leaf we are searching for; thus, we compute select0(Q, 1) = 1. The next position is the
start of our leaf, so we add one and obtain 2. Finally, we access that position in the permutation: π(2)
(the object with id 7) is the first object of the leaf and, therefore, the first object located in the cell. We
keep retrieving objects sequentially until the aligned bit of Q is a 0. In our example, the next bit is Q[3],
which already stores a 0, so π(3) will be the last element of the cell. π(3) returns a 3, so the objects 7
and 3 are in the cell (3,3).
Although the K2-tree was designed to traverse the structure from top to bottom, with a small modification,
we can go from bottom to top. In this way, we can obtain the position in the grid of a given object.
For example, we can search for the cell where object 3 is placed. To this end, we access the
permutation and execute the operation π−1(3). We start the search at position 3 and follow its cycle.
We observe that there is a link between 3 and 1, because the sampled array has a 1 at that position and
the corresponding value in rev_links is 1, which indicates that the link points to state 1; therefore, we
skip states 7 and 8 and go to state 1. We need to continue because π(1) = 4, hence we advance to the next
state. In π(4) we find the value 3, the value we are searching for. Therefore, the position of object 3 is
the fourth position in perm.
Now we search for the leaf corresponding to the object. We compute the number of leaves before the
object, that is, we count the number of 0s up to position 3 in Q, rank0(Q, 3) = 2. Then we locate in
L the position of the leaf, select1(L, 3) = 4, so L[4] is the leaf where object 3 lies. Finally, we traverse
the tree from the bottom to the top and obtain the path indicating that the object is in the cell (3,3).
4.1.2 Representation of the log of relative positions
Although the K2-tree allows us to represent the positions of all objects in a compact way, it would be
prohibitive, in terms of space, to use a snapshot (a K2-tree and the associated perm and Q data structures)
for each time instant. Thus, in order to avoid this problem, we use a parameter that establishes the number
of time instants between two snapshots and, in order to keep the information about the movements of
objects between two snapshots, we add a light structure called the log.
The log stores the relative movements of all objects for all time instants between two snapshots. The
objects can change their positions in the two Cartesian axes, so we would reserve two integers for every
movement in the log, one for x-axis and another for y-axis. However, in order to save space, we encode
the two values with a unique positive integer.
For this sake, we enumerate the cells around the actual position of an object, following a spiral where
the origin is the initial object position, as it is shown in Figure 4.
Figure 4: Spiral matrix to encode relative positions.
For example, suppose an object moves during the instants t0 to t3. Instant t0 corresponds to a snapshot,
so we store its absolute position in the snapshot, but the following instants must be stored in the log.
Assuming that we have the following (x,y) coordinates:
{ t0: (5,6) ; t1: (7,8) ; t2: (5,8) ; t3: (3,10)}
we calculate the movements between the time instants:
{t1-t0= (2,2); t2-t1=(-2,0); t3-t2=(-2,2)}
Then, we encode these shifts with the spiral {t1-t0= 24; t2-t1=18; t3-t2=16}, which are the values stored
in the log.
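The following C++ sketch shows one possible way to turn a relative movement (dx, dy) into a single spiral index and back. The concrete enumeration below (counter-clockwise, starting towards +x) is only an assumption for illustration; the actual ordering is the one defined by Figure 4 of the paper, so the numeric codes produced here need not coincide with the values 24, 18 and 16 of the example above.

#include <iostream>
#include <utility>

// Index of the cell (dx, dy) in a square spiral centred on (0, 0); 0 is the centre.
int spiral_encode(int dx, int dy) {
    const int moves[4][2] = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}};   // R, U, L, D
    int x = 0, y = 0, code = 0, dir = 0, run = 1;
    while (!(x == dx && y == dy)) {
        for (int turn = 0; turn < 2; ++turn) {        // two runs per side length
            for (int s = 0; s < run; ++s) {
                x += moves[dir][0];
                y += moves[dir][1];
                ++code;
                if (x == dx && y == dy) return code;
            }
            dir = (dir + 1) % 4;
        }
        ++run;
    }
    return code;
}

// Inverse mapping: walk 'code' steps along the same spiral.
std::pair<int, int> spiral_decode(int code) {
    const int moves[4][2] = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}};
    int x = 0, y = 0, dir = 0, run = 1;
    while (code > 0) {
        for (int turn = 0; turn < 2 && code > 0; ++turn) {
            for (int s = 0; s < run && code > 0; ++s, --code) {
                x += moves[dir][0];
                y += moves[dir][1];
            }
            dir = (dir + 1) % 4;
        }
        ++run;
    }
    return {x, y};
}

int main() {
    int c = spiral_encode(-2, 2);                       // one of the shifts above
    std::pair<int, int> d = spiral_decode(c);
    std::cout << c << " -> (" << d.first << "," << d.second << ")\n";
}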
Observe that the basic idea is that with the spiral, we save space by using only one integer, instead of
the two integers needed when using pairs of coordinates of the Euclidean space. A problem would arise
if the displacement of the object between two consecutive time instants is large. In that case, the value
obtained using the spiral encoding will be a large number, which might require a considerable number
of bits to represent it. However, it is likely that the displacement between two time instants will be small,
and thus, in most cases, a short integer will be enough. In addition, instead of using 32 bits (or 64
bits) for each relative movement, we exploit the repetitiveness in the values of the movements to obtain
additional compression, since some movements are more frequent than others. For example, off the
Chilean coast it is more frequent that a boat sails towards the North than towards the East. Thus, we
do not store the relative movements as plain integers; instead, that sequence of integers is compressed
using the (s,c)-Dense Code technique.
Recall that one of the main features of this compression technique is that we can start the decompression
at any given point. This is needed to be able to decompress the log starting just after a snapshot, avoiding
to start the decompression from the beginning.
4.1.3 Disappearance and reappearance
In the real world, objects sometimes stop emitting signals or send erroneous information about their
location. The latter problem is easy to handle, because we can process and clean the dataset, removing
the incorrect data. The former, however, forces us to add two new possible movements:
• Relative reappearance: it occurs when a missing object sends a new position, but we can obtain its
new position with a relative movement with respect to the last snapshot or with respect to another
log position. We only use relative reappearance with objects that disappear and reappear between
the same snapshots. For example, if we have snapshots every two hours and we have a snapshot at
eight o'clock and another at ten o'clock, and an object disappears at eight o'clock and reappears at
nine o'clock, we can add a relative reappearance. However, if it reappears at eleven o'clock, we
cannot add a relative reappearance.
It is important to notice that we can have more than one relative reappearance per object, so we need
to add a new array to the snapshot structure. This array saves for each object the time elapsed and
the relative position between the instant when the object disappears and the instant when the object
reappears. Every relative reappearance in the log points to its related position in this array, allowing
us to recover all the information about the reappearance.
• Absolute reappearance: it occurs when a missing object sends a new position and it had disappeared
before the last snapshot. In this case, we cannot know where the object is located because we have
no reference. We could find the last snapshot containing the object, apply the log movements until its
disappearance and then compute the new position by decoding the spiral identifiers, but this would be
very expensive. Therefore, we store the absolute position of the object and the instant when the object
reappears without any compression. It should be noted that an object can only have one absolute
reappearance between snapshots, so we only have to store these three values, at most, once per object.
4.2 Querying
We can distinguish three types of queries: finding where an object is at a given time instant, finding the
objects that are in a zone at a given time instant (time slice), and finding the objects that are in a zone
during a time interval (time interval).
4.2.1 Search for an object at a given time instant
First, we obtain the snapshot immediately preceding the query time instant. Then, we need to find which
leaf in the array of leaves corresponds to the object; we do that using the permutation, as explained
before. Once we find the leaf linked to the object, we traverse the K2-tree bottom-up and, in this way,
we retrieve the position of the object at the snapshot instant. Then we apply the movements of the log
over this position up to the query time instant.
It is important to notice that, if the queried time instant corresponds to a snapshot, we do not need to
access the log. In addition, our structure can efficiently answer queries about the trajectory of an object
between two instants.
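A minimal sketch of this query is given below. Snapshot and Log are plain stand-ins for the compact structures of Section 4.1 (their real counterparts would answer position_of through the permutation and a bottom-up K2-tree traversal, and would decompress the (s,c)-Dense Code log on the fly); the names and interfaces are ours.

#include <cstddef>
#include <utility>
#include <vector>

struct Snapshot {
    std::size_t instant;                                // time instant it covers
    std::vector<std::pair<int, int>> position;          // position[id] at 'instant'
};
struct Log {
    // relative_move[id][j]: decoded (dx, dy) of object 'id' at instant
    // snapshot.instant + 1 + j (already decompressed and spiral-decoded).
    std::vector<std::vector<std::pair<int, int>>> relative_move;
};

std::pair<int, int> position_at(const Snapshot& snap, const Log& log,
                                int id, std::size_t t) {
    std::pair<int, int> pos = snap.position[id];        // position at the snapshot
    for (std::size_t j = 0; snap.instant + 1 + j <= t; ++j) {
        pos.first  += log.relative_move[id][j].first;   // replay the log movements
        pos.second += log.relative_move[id][j].second;  // up to the queried instant
    }
    return pos;
}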
4.2.2 Time slice query
The aim of this query is to know which objects are inside a region of space at a time instant tq. We can
distinguish two cases. First, if the time instant corresponds to a snapshot, we only need to traverse the
K2-tree down to the leaves, visiting those nodes that intersect the queried area; when we reach the leaves,
we retrieve from the permutation the objects that are in this area.
The second case occurs when tq is between two snapshots si and si+1. In this case, we check in si a region
r' larger than the query region r. This region r' is built keeping in mind that no object can move faster
than the fastest object of the dataset; thus, we limit r' to the space from which an object has a chance of
reaching the region r at the time instant tq. Then we obtain the objects that belong to the region r' in the
snapshot si; these are the candidates to be in the region r at tq. We follow the movements of these objects
in the log and, when we observe that an object can no longer reach the final region, we discard it. Hence,
we limit the number of objects to follow and we do not waste time on objects that cannot be in the
region r at instant tq.
For example, suppose that we want to find the objects located in a region r at time instant tq, and that
the closest previous snapshot to tq is at time instant t0. Assume that the fastest object can move only to
an adjacent cell in the period of time between t0 and tq. Therefore, we expand the query area in t0 to its
adjacent cells, as shown in Figure 5. In this expanded region we have three elements (1, 2, 3), which we
call candidates, and we discard the rest of the objects (4, 5), because they have no chance of being in the
region r at time instant tq. Then we track the candidates with the log until reaching the time instant tq,
where we observe that the object with id 3 is outside the region r; the rest of the candidates are the
result of the query.
Figure 5: Example of expanded region r' and query region r in a time-slice query.
4.2.3 Time interval query
This query returns the objects that are in a region r during a time interval [t0, tn]. To solve it, we divide
the time interval into smaller subintervals and run a subquery for each of them. The first subinterval is
[t0, ts1-1], where ts1 is the first snapshot after t0; the second is [ts1, ts2-1], where ts2 is the second
snapshot after t0, and so on. Each subquery returns the objects that are in the region r during its
subinterval; those objects are part of the final result, so we do not need to check their positions in the
following subqueries. Once all the subqueries have been processed, we join the partial results to obtain
the final answer. Note that this query is very similar to the time slice query; the difference is that we
need to consider the larger region r' for every snapshot of the subqueries except the last one. Although
this query is more expensive than the time slice query, it still performs well because we take advantage of
the objects rejected in each subquery, which are no longer considered. The decomposition into subqueries
is sketched below.
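The sketch below only shows the decomposition of the interval at the snapshot instants and the reuse of already-reported objects; the subquery itself (a time-slice style search over an expanded region) is passed in as a callable, and all names are ours. It assumes snapshot_instants is sorted in increasing order.

#include <cstddef>
#include <set>
#include <vector>

// Objects present in region r during [t0, tn], computed as a union of subqueries,
// one per stretch between consecutive snapshots. 'subquery(from, to, done)' is
// assumed to return the identifiers found during [from, to], skipping 'done'.
template <class SubQuery>
std::set<int> time_interval_query(std::size_t t0, std::size_t tn,
                                  const std::vector<std::size_t>& snapshot_instants,
                                  SubQuery subquery) {
    std::set<int> done;
    std::size_t from = t0;
    for (std::size_t s : snapshot_instants) {
        if (s <= from || s > tn) continue;              // snapshots inside (from, tn]
        for (int id : subquery(from, s - 1, done)) done.insert(id);
        from = s;                                       // next subinterval starts here
    }
    for (int id : subquery(from, tn, done)) done.insert(id);   // last, possibly partial, piece
    return done;
}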
5 EXPERIMENTAL EVALUATION
In this work, our aim is to develop a proof of concept of our structure. Observe that there is no clear
competitor, since our approach aims at storing the complete collection in main memory and, thus, the
saving of space is one of our main targets. Classical approaches to the same problem are disk-based and,
hence, they are not concerned with space consumption. Therefore, the goal of this experimental evaluation
is to show the compression level achieved and, at the same time, that we are capable of querying the
compressed data.
We used a real dataset, obtained from the site http://chorochronos.datastories.org/?q=node/81.
It provides the location signals of 4,856 boats moving during one month along the Greek coast. Like any
basic compression technique, our method requires the presence of repetitive patterns, which are the source
of compression; in this dataset, ships often follow the same course, so our compression method can take
advantage of this fact.
Observe that our structure deals with the positions of objects at regular intervals; however, in the dataset,
boat signals are emitted with varying frequencies and, moreover, some of them are erroneous. Therefore,
we need to clean the dataset and fill it with positions interpolated from previous/posterior signals. For
example, when a boat sends a position after a long period without signals and the new position coincides
with the last received one, we can assume that the boat remained in the same position all that time, so we
fill the dataset with the position received after the silent period. Another case arises when a boat sends
an erroneous location; we can detect it if the boat appears to move faster than 60 knots, which is the
highest speed in our dataset. In that case, we discard the location and interpolate the real position from
the previous and next positions.
Every position emitted by a boat is discretized into a matrix where the cell size is 30x30 meters. This
discretization is required, since the K2-trees that store the snapshots can represent binary rasters, and
any raster model represents the space as a tessellation of squares, each representing a square of the real
space with a given size, in our case 30x30 meters.
In addition, we normalize the times of the emitted signals using intervals of 1 minute. This is another
requirement of our method, since we represent the positions of the objects at regular time instants.
Therefore, we assign to each boat the nearest position every minute, with a margin of 30 seconds before
and after. For example, if a boat sends information between second 30 and second 90, we assign to time
instant 1 the location nearest to second 60, whereas if it does not emit within that window, we do not
assign any position to this boat at time instant 1. With this data normalization, we obtain
a matrix with 920,078,376 cells, 33,544 in the x-axis and 27,429 in the y-axis, and the data corresponds
to 44,640 instants of time.
With these data, which occupy 515.89 MB, we run two kinds of experiments using a C++ program on a
4-core 1.70 GHz Intel Core i5 computer with 8 GB of RAM running 64-bit Linux.
In the first experiment, we run a set of queries over the structure built with different snapshot periods.
These queries are composed of 100 random searches for an object at a time instant and 50 random time
slice queries. We measure the size of the structure (MB), the compression ratio and the response times
(milliseconds). The compression ratio indicates the percentage of space occupied by the compressed
version with respect to the original dataset. For example, with a snapshot period of 120, our structure
occupies 25.59% of the space consumed by the original dataset, whereas when the snapshot period is 15,
it occupies 143%, that is, 43% more than the original dataset.
Table 1: Compression ratio and response time (ms).

Snapshot period   Structure size (MB)   Compression ratio   Avg. object search (ms)   Avg. time slice query (ms)
120               132.03                25.59%              0.01784                   5.29740
60                218.59                42.37%              0.01422                   2.93332
30                392.35                76.05%              0.01347                   2.51360
15                739.96                143.43%             0.01329                   2.20784
As shown in Table 1, in the first experiment we can observe that the compression ratio improves as we
increase the snapshot period; however, the response time is penalized.
Table 2: Response time in different situations.

                                     Avg. object search (ms)   Avg. time slice query (ms)
Result in the snapshot               0.01262                   0.45002
Result in the first quarter of log   0.01508                   3.59028
Result in the middle of log          0.01760                   3.72642
Result in the third quarter of log   0.02010                   5.52748
In the second experiment, we want to show the behaviour of our structure when queries are solved using
only a snapshot and when the log must be accessed as well. In this experiment, we used the setup with the
best compression ratio of the first experiment, that is, the structure with a snapshot period of 120. Over
this structure, we run queries that can be solved directly in a snapshot and others that require traversing
up to the first quarter, the middle, or the third quarter of the log. For each case, we run 50 random
queries that return at least one result. Then, we measure the average time, in milliseconds, of searches
for an object and of time slice queries. Table 2 shows the results.
As shown in Table 2, both kinds of queries are faster when the answer is found in a snapshot than when
the log must be accessed. In the case of the time slice query, the response time grows faster than in
object searches; this is due to the tracking of the objects that could reach the queried region at the
considered time instant, whereas in searches by object we only have to follow a single object.
Finally, observe that all response times are very appealing, as they are below 6 ms, that is, less than a
single disk access, which would require around 10-20 ms.
6 IMPROVEMENTS
In order to improve the performance of our structure, we designed two enhancements that will allow us
to process the queries more efficiently.
6.1 Bidirectional log
Consider a query about a time instant tk that lies between the snapshots si and si+1, and suppose that tk
is closer to si+1 than to si. In that case, we must read a large portion of the log, since we have to start
from si and advance up to the time instant that we are checking. It would be desirable to have the chance
to start the search at si+1 and traverse the log backwards until we reach tk. In this way, we could reduce
the searches over the log to, at most, half of the size of the log between two snapshots.
In order to allow those backward searches, we simply add, to each portion of the log between two
snapshots si and si+1, the full position of all objects at the time instant immediately before the time
instant corresponding to si+1. With this, we can navigate backwards. For example, suppose that the
snapshots s0 and s1 delimit the time instants between t0 and t8, and that an object performs these
movements:
{t1-t0= (2,2); t2-t1=(-2,0); t3-t2=(-2,2); t4-t3=(-1,2); t5-t4=(-2,2); t6-t5=(-2,2); t7-t6=(-1,1); t8-t7=(0,2)}
Assuming that the position stored just before the time instant of s1 is (-8, 13), and that we want to know
where the object is at time instant t6: since we know that the position at t8 is (-8, 13) and we have the
shifts up to t8-t7, we can revert the changes from the last snapshot back to the time instant t6. Hence,
computing (-8,13)-(0,2)-(-1,1) = (-7, 10) gives the position of the object at the time instant t6.
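The backward replay just described can be sketched as follows; it reproduces the example above. The container layout is a stand-in (names are ours): moves holds the already decoded shifts for the instants t1..tm of the log portion, and end_pos is the full position stored for the instant tm just before si+1.

#include <cstddef>
#include <utility>
#include <vector>

std::pair<int, int> position_backwards(std::pair<int, int> end_pos,
                                       const std::vector<std::pair<int, int>>& moves,
                                       std::size_t target_instant) {
    for (std::size_t t = moves.size(); t > target_instant; --t) {
        end_pos.first  -= moves[t - 1].first;          // undo the move t-1 -> t
        end_pos.second -= moves[t - 1].second;
    }
    return end_pos;
}

// position_backwards({-8, 13}, {{2,2},{-2,0},{-2,2},{-1,2},{-2,2},{-2,2},{-1,1},{0,2}}, 6)
// returns (-7, 10), as in the example above.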
Now, we have to solve the problem of disappearances. If we find an identifier that indicates an object
reappearance, we simply follow the pointer to the reappearance array. In this array, we retrieve the
relative position and the time elapsed between the disappearance and the reappearance; thus, we
never lose the synchronization with the time.
In addition, this structure can help us when we run time-slice queries because we can use the closest
snapshot which allows us to reduce the region area r' and the number of objects to follow during the
query execution.
6.2 Accumulators
To reduce the running times of queries, we can add accumulators at regular time intervals in the logs.
An accumulator sums all the log values between a snapshot and the time instant at which the
accumulator is placed. Hence, instead of iterating over the log from the snapshot at the left end of a log
portion and summing the relative positions until the selected instant, we take the accumulator preceding
the checked instant and add only the relative positions from the accumulator up to the considered time
instant. In fact, if the right snapshot is closer, the same can be done from that snapshot, but subtracting.
Figure 6: Example of accumulators Ax in the logs Lx between snapshots si and si+1.
For example, in Figure 6, assume that we want to know the position of a boat at time instant t13. First,
we find the object position in the nearest snapshot, s1. Next, we obtain the value stored in the
accumulator A2. Finally, to the sum of those values we add the relative movements of the log from the
accumulator up to the instant t13.
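A short sketch of this accumulator-based lookup follows; it assumes (our naming) that accumulators[a] stores the sum of all log shifts from the snapshot up to the instant covered by the a-th accumulator, placed every 'period' instants.

#include <cstddef>
#include <utility>
#include <vector>

std::pair<int, int> position_with_accumulators(
        std::pair<int, int> snapshot_pos,
        const std::vector<std::pair<int, int>>& accumulators,   // one every 'period' instants
        const std::vector<std::pair<int, int>>& moves,          // per-instant decoded shifts
        std::size_t period, std::size_t t) {
    std::size_t a = t / period;                                  // last accumulator before t
    if (a > 0) {
        snapshot_pos.first  += accumulators[a - 1].first;        // jump over a*period instants
        snapshot_pos.second += accumulators[a - 1].second;
    }
    for (std::size_t j = a * period; j < t; ++j) {               // remaining shifts, one by one
        snapshot_pos.first  += moves[j].first;
        snapshot_pos.second += moves[j].second;
    }
    return snapshot_pos;
}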
7 CONCLUSIONS
We have developed a new data structure to represent object trajectories. The proposal follows the compact
data structures approach; that is, we aim at keeping the whole structure and its access methods in main
memory in order to avoid disk accesses. By fitting data and access methods in main memory, we make
better use of the memory hierarchy, since the upper levels have higher bandwidth and lower latency. If we
decompressed the structure completely, we would be forced to keep parts of it on disk, since the
uncompressed structure would likely not fit in main memory. With our approach, however, we only
decompress the parts of the structure that are needed to answer the query being run.
The structure obtains compression ratios down to 25.59%, hence we substantially reduce the space
consumption and have better chances of fitting the data structure in main memory. In addition, we obtain
good performance when searching for object positions at a time instant and when answering time slice
queries. In the experimental evaluation we observe that, if we reduce the snapshot period, the structure
size grows and the queries become faster; this is explained by the average portion of the log that must be
traversed in order to answer the queries.
As future work, it would be very interesting to implement the bidirectional log and the accumulator
structure; with them, the queries that access and traverse the log would be solved with better
performance.
Acknowledgments
This work was funded in part by the European Union's Horizon 2020 research and innovation programme
under the Marie Skłodowska-Curie grant agreement No 690941, by Ministerio de Economía y
Competitividad under grants [TIN2013-46238-C4-3-R], [CDTI IDI-20141259], [CDTI ITC-20151247]
and [CDTI ITC-20151305], and by Xunta de Galicia (co-funded with FEDER) under grant
[GRC2013/053].
References
Becker, B., Gschwind, S., Ohler, T., Seeger, B., Widmayer, P. (1996). An asymptotically optimal
multiversion B-tree. The VLDB Journal 5(4), 264–275
Beckmann, N., Kriegel, H.-P., Schneider, R., Seeger, B. (1990). The R*-Tree: An Efficient and Robust
Access Method for Points and Rectangles. ACM SIGMOD Record, 19(2), 322–331.
Brisaboa, N. R., Fariña, A., Navarro, G. (2003). (s,c)-Dense Coding: An Optimized Compression
Code for Natural Language Text Databases.
Brisaboa, N. R., and Ladra, S. (2009). K2 -Trees for Compact Web Graph Representation.
Proceedings of the 16th International Symposium on String Processing and Information Retrieval,
(2), 18–30.
Brisaboa, N.R., Luaces, M.R., Navarro, G., Seco, D. (2009). A new point access method based on
wavelet trees. Proceedings of Advances in Conceptual Modeling – Challenging Perspectives, ER
2009 Workshops CoMoL, ETheCoM, FP-UML, MOST-ONISW, QoIS, RIGiM, SeCoGIS, Lecture
Notes in Computer Science, vol. 5833, pp. 297–306
Gutiérrez, G., Navarro, G., Rodríguez, A., González, A., Orellana, J. (2005). A Spatio- Temporal
Access Method based on Snapshots and Events. In: Proceedings of the 13th ACM International
Symposium on Advances in Geographic Information Systems (GIS’05), 115–124.
Munro, J. I., Raman, R., Raman, V., & Rao, S. (2012). Succinct representations of permutations and
functions. Theoretical Computer Science, 438, 74-88.
Nascimento, M. A, and Theodoridis, Y. (1998). Access Structures for Moving Points. ReCALL.
Nascimento, M.A., Silva, J.R.O., Theodoridis, Y. (1999). Evaluation of Access Structures for
Discretely Moving Points: Proceedings of the International Workshop on Spatio-Temporal
Database Management (STDBM ’99), 171–188. Springer- Verlag, London, UK
Navarro, G. and Mäkinen, V. (2007). Compressed Full-Text Indexes. ACM Computing Surveys,
39(1), 2.
Pfoser, D., Jensen, C. and Theodoridis, Y. (2000). Novel Approaches to the Indexing of Moving
Object Trajectories. VLDB (OCTOBER 2001), 395-406.
Šaltenis, S., Jensen, C. S., Leutenegger, S. T. and Lopez, M. (2000). Indexing the Positions of
Continuously Moving Objects. SIGMOD Record, 29(2), 331-42.
Tao, Y., Papadias, D. (2001). MV3R-tree: A Spatio-Temporal Access Method for Timestamp and
Interval Queries. 27th International Conference on Very Large Data Bases,. 431-440. Morgan
Kaufmann Publishers Inc., San Francisco, CA, USA
Theodoridis, Y., Vazirgiannis, M., Sellis, T.K. (1996). Spatio-Temporal Indexing for Large
Multimedia Applications. 1996 International Conference on Multimedia Computing and Systems
(ICMCS ’96), 441–448. IEEE Computer Society, Washington, DC, USA.
Vazirgiannis, M., Theodoridis, Y. and Sellis, T. (1998). Spatio-Temporal Composition and Indexing
for Large Multimedia Applications. Multimedia Systems, 6(4), 284-98.
Xu, X., Han, J., Lu, W. (1990). RT-tree: An Improved R-tree Index Structure for Spatio-temporal
Database. In: 4th International Symposium on Spatial Data Handling, 1040-1049
arXiv:1502.03955v2 [math.ST] 22 Jul 2016
Nelson-Aalen tail product-limit process and extreme value index estimation under random censorship
Brahim Brahimi, Djamel Meraghni, Abdelhakim Necir*
Laboratory of Applied Mathematics, Mohamed Khider University, Biskra, Algeria
Abstract
On the basis of the Nelson-Aalen nonparametric estimator of the cumulative distribution function, we provide a weak approximation to the tail product-limit process for randomly right-censored heavy-tailed data. In this context, a new consistent reduced-bias estimator of the extreme value index is introduced and its asymptotic normality is established only by assuming the second-order regular variation of the underlying distribution function. A simulation study shows that the newly proposed estimator performs better than the existing ones.
Keywords: Extreme values; Heavy tails; Hill estimator; Nelson-Aalen estimator; Random censoring; Tail index.
AMS 2010 Subject Classification: 62P05; 62H20; 91B26; 91B30.
* Corresponding author: [email protected]
E-mail addresses: [email protected] (B. Brahimi), [email protected] (D. Meraghni)
1. Introduction
Let X1 , ..., Xn be n ≥ 1 independent copies of a non-negative random variable (rv) X, defined
over some probability space (Ω, A, P) , with a cumulative distribution function (cdf) F. These
rv’s are censored to the right by a sequence of independent copies Y1 , ..., Yn of a non-negative
rv Y, independent of X and having a cdf G. At each stage 1 ≤ j ≤ n, we can only observe the
rv’s Zj := min (Xj , Yj ) and δj := 1 {Xj ≤ Yj } , with 1 {·} denoting the indicator function.
The latter rv indicates whether there has been censorship or not. If we denote by H the cdf of
the observed Z 0 s, then, by the independence of X and Y, we have 1 − H = (1 − F ) (1 − G) .
Throughout the paper, we will use the notation $\overline{S}(x) := S(\infty) - S(x)$, for any S. Assume
further that F and G are heavy-tailed or, in other words, that $\overline{F}$ and $\overline{G}$ are regularly varying
at infinity with negative indices −1/γ1 and −1/γ2 respectively. That is,
$$\lim_{t\to\infty}\frac{\overline{F}(tx)}{\overline{F}(t)} = x^{-1/\gamma_1} \quad\text{and}\quad \lim_{t\to\infty}\frac{\overline{G}(tx)}{\overline{G}(t)} = x^{-1/\gamma_2}, \qquad (1.1)$$
for any x > 0. This class of distributions includes models such as Pareto, Burr, Fréchet,
α−stable (0 < α < 2) and log-gamma, known to be very appropriate for fitting large insurance claims, large fluctuations of prices, financial log-returns, ... (see, e.g., Resnick, 2006).
The regular variation of F and G implies that H is regularly varying as well, with index
−1/γ where γ := γ1 γ2 / (γ1 + γ2 ) . Since weak approximations of extreme value theory based
statistics are achieved in the second-order framework (see de Haan and Stadtmüller, 1996),
then it seems quite natural to suppose that cdf F satisfies the well-known second-order
condition of regular variation. That is, we assume that for any x > 0
$$\lim_{t\to\infty}\frac{U_F(tx)/U_F(t) - x^{\gamma_1}}{A_1^*(t)} = x^{\gamma_1}\,\frac{x^{\tau}-1}{\tau}, \qquad (1.2)$$
where τ ≤ 0 is the second-order parameter and A∗1 is a function tending to 0, not changing
sign near infinity and having a regularly varying absolute value at infinity with index τ. If τ =
0, interpret (xτ − 1) /τ as log x. In the sequel, the functions K ← (s) := inf {x : K (x) ≥ s} ,
0 < s < 1, and UK (t) := K ← (1 − 1/t) , t > 1, respectively stand for the quantile and
tail quantile functions of any given cdf K. The analysis of extreme values of randomly
censored data, is a new research topic to which Reiss and Thomas (2007) made a very brief
reference, in Section 6.1, as a first step but with no asymptotic results. Beirlant et al. (2007)
proposed estimators for the extreme value index (EVI) γ1 and high quantiles and discussed
their asymptotic properties, when the data are censored by a deterministic threshold. For
their part, Einmahl et al. (2008) adapted various classical EVI estimators to the case where
data are censored, by a random threshold, and proposed a unified method to establish their
asymptotic normality. Their approach is used by Ndao et al. (2014, 2016) to address the
nonparametric estimation of the conditional EVI and large quantiles. Based on Kaplan-Meier integration, Worms and Worms (2014) introduced two new estimators and proved
their consistency. They showed, by simulation, that they perform better, in terms of bias
and mean squared error (MSE) than the adapted Hill estimator of Einmahl et al. (2008), in
the case where the tail index of the censored distribution is less than that of the censoring
one. Brahimi et al. (2015) used the empirical process theory to approximate the adapted Hill
estimator in terms of Gaussian processes, then they derived its asymptotic normality only
under the usual second-order condition of regular variation. Their approach allows to relax
the assumptions, made in Einmahl et al. (2008), on the heavy-tailed distribution functions
and the sample fraction of upper order statistics used in estimate computation. Recently,
Beirlant et al. (2016) developed improved estimators for the EVI and tail probabilities by
reducing their biases which can be quite substantial. In this paper, we develop a new
methodology, for the estimation of the tail index under random censorship, by considering
the nonparametric estimator of cdf F, based on Nelson-Aalen estimator (Nelson, 1972; Aalen,
1976) of the cumulative hazard function
$$\Lambda(z) := \int_0^z \frac{dF(v)}{\overline{F}(v)} = \int_0^z \frac{dH^{(1)}(v)}{\overline{H}(v)},$$
where $H^{(1)}(z) := \mathbf{P}(Z_1 \le z,\, \delta_1 = 1) = \int_0^z \overline{G}(y)\,dF(y)$, $z \ge 0$. A natural nonparametric
estimator $\Lambda_n(z)$ of $\Lambda(z)$ is obtained by replacing $\overline{H}(v)$ and $H^{(1)}(v)$ by their respective empirical
counterparts $H_n(v) := n^{-1}\sum_{i=1}^n \mathbf{1}(Z_i \le v)$ and $H_n^{(1)}(v) := n^{-1}\sum_{i=1}^n \delta_i \mathbf{1}(Z_i \le v)$
pertaining to the observed Z-sample. However, since $\overline{H}_n(Z_{n:n}) = 0$, we use $\overline{H}_n(v-) := \lim_{u\uparrow v}\overline{H}_n(u)$
instead of $\overline{H}_n(v)$ (see, e.g., Shorack and Wellner, 1986, page 295) to get
$$\Lambda_n(z) := \int_0^z \frac{dH_n^{(1)}(v)}{\overline{H}_n(v-)} = \sum_{i=1}^{n} \frac{\delta_{[i:n]}\mathbf{1}(Z_{i:n} \le z)}{n-i+1}, \quad \text{for } z \le Z_{n:n}.$$
From the definition of Λ(z), we deduce that $\overline{F}(z) = \exp\{-\Lambda(z)\}$, which, by substituting Λn(z)
for Λ(z), yields the Nelson-Aalen estimator of the cdf F, given by
$$F_n^{(NA)}(z) := 1 - \prod_{i:\,Z_{i:n}\le z} \exp\left\{-\frac{\delta_{[i:n]}}{n-i+1}\right\}, \quad \text{for } z \le Z_{n:n},$$
where $Z_{1:n} \le \dots \le Z_{n:n}$ denote the order statistics pertaining to the sample $(Z_1,\dots,Z_n)$ and
$\delta_{[1:n]},\dots,\delta_{[n:n]}$ the associated concomitants, so that $\delta_{[j:n]} = \delta_i$ if $Z_{j:n} = Z_i$. Note that, for our
needs and in the spirit of what Efron (1967) did to the Kaplan-Meier estimator (Kaplan and Meier, 1958)
(given in (3.7)), we complete $F_n^{(NA)}$ beyond the largest observation by 1. In other words, we define a
nonparametric estimator of the cdf F by
$$F_n(z) := \begin{cases} F_n^{(NA)}(z) & \text{for } z \le Z_{n:n}, \\ 1 & \text{for } z > Z_{n:n}. \end{cases}$$
By considering samples of various sizes, Fleming and Harrington (1984) numerically compared Fn with Kaplan-Meier (nonparametric maximum likelihood) estimator of F (Kaplan
and Meier, 1958), given in (3.7) , and pointed out that they are asymptotically equivalent
and usually quite close to each other. A nice discussion on the tight relationship between
the two estimators may be found in Huang and Strawderman (2006).
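As a small illustration of the estimator Fn just defined, the following C++ sketch evaluates it at a point z from a censored sample of pairs (Z_i, delta_i); function and variable names are ours, and a real implementation would of course reuse the sorted sample across evaluations.

#include <algorithm>
#include <cmath>
#include <iostream>
#include <utility>
#include <vector>

// F_n(z) = 1 - prod_{i : Z_{i:n} <= z} exp( -delta_{[i:n]} / (n - i + 1) ),
// completed by 1 beyond the largest observation.
double nelson_aalen_cdf(std::vector<std::pair<double, int>> sample, double z) {
    std::sort(sample.begin(), sample.end());            // order statistics + concomitants
    const std::size_t n = sample.size();
    if (z >= sample.back().first) return 1.0;            // completion beyond Z_{n:n}
    double cum_hazard = 0.0;
    for (std::size_t i = 1; i <= n && sample[i - 1].first <= z; ++i)
        cum_hazard += static_cast<double>(sample[i - 1].second) / (n - i + 1);
    return 1.0 - std::exp(-cum_hazard);
}

int main() {
    // (Z_i, delta_i): delta_i = 1 means the lifetime X_i was actually observed.
    std::vector<std::pair<double, int>> zs = {{1.2, 1}, {2.5, 0}, {3.1, 1}, {4.0, 1}, {5.7, 0}};
    std::cout << nelson_aalen_cdf(zs, 3.5) << '\n';
}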
In the spirit of the tail product-limit process for randomly right-truncated data, recently
introduced by Benchaira et al. (2016), we define Nelson-Aalen tail product-limit process by
$$D_n(x) := \sqrt{k}\left(\frac{\overline{F}_n(xZ_{n-k:n})}{\overline{F}_n(Z_{n-k:n})} - x^{-1/\gamma_1}\right), \quad x > 0, \qquad (1.3)$$
where k = kn is an integer sequence satisfying suitable assumptions. In the case of complete
data, the process $D_n(x)$ reduces to $D_n(x) := \sqrt{k}\left(\frac{n}{k}\overline{F}_n(xX_{n-k:n}) - x^{-1/\gamma_1}\right)$, $x > 0$, where
$F_n(x) := n^{-1}\sum_{i=1}^n \mathbf{1}(X_i \le x)$ is the usual empirical cdf based on the fully observed sample
$(X_1,\dots,X_n)$. Combining Theorems 2.4.8 and 5.1.4 in de Haan and Ferreira (2006), we infer
that under the second-order condition of regular variation (1.2), there exists a sequence of
standard Wiener processes $\{W_n(s);\, 0 \le s \le 1\}$ such that, for any $x_0 > 0$ and $0 < \epsilon < 1/2$,
$$\sup_{x \ge x_0} x^{\epsilon/\gamma_1}\left|D_n(x) - J_n(x) - x^{-1/\gamma_1}\frac{x^{\tau/\gamma_1}-1}{\tau/\gamma_1}\sqrt{k}A_1(n/k)\right| \overset{p}{\to} 0, \ \text{as } n \to \infty, \qquad (1.4)$$
where $A_1(t) := A_1^*(1/\overline{F}(t))$ and $J_n(x) := W_n(x^{-1/\gamma_1}) - x^{-1/\gamma_1}W_n(1)$. One of the main
applications of this weak approximation is the asymptotic normality of tail index estimators.
Indeed, let us consider Hill’s estimator (Hill, 1975)
$$\widehat{\gamma}_1^{(H)} := k^{-1}\sum_{i=1}^{k}\log\left(X_{n-i+1:n}/X_{n-k:n}\right),$$
which may be represented, as a functional of the process $D_n(x)$, by
$$\sqrt{k}\left(\widehat{\gamma}_1^{(H)} - \gamma_1\right) = \int_1^{\infty} x^{-1} D_n(x)\,dx.$$
It follows, in view of approximation (1.4), that
$$\sqrt{k}\left(\widehat{\gamma}_1^{(H)} - \gamma_1\right) = \int_1^{\infty} x^{-1} J_n(x)\,dx + \frac{\sqrt{k}A_1(n/k)}{1-\tau} + o_p(1),$$
leading to $\sqrt{k}\left(\widehat{\gamma}_1^{(H)} - \gamma_1\right) \overset{D}{\to} \mathcal{N}\left(\frac{\widetilde{\lambda}}{1-\tau},\, \gamma_1^2\right)$,
provided that $\sqrt{k}A_1(n/k) \to \widetilde{\lambda} < \infty$. For
more details on this matter, see for instance, de Haan and Ferreira (2006) page 76.
The major goal of this paper is to provide an analogous result to (1.4) in the random censoring setting through the tail product-limit process Dn (x) and propose a new asymptotically
normal estimator of the tail index. To the best of our knowledge, this approach has not
been considered yet in the extreme value theory literature. Our methodology is based on the
uniform empirical process theory (Shorack and Wellner, 1986) and the related weak approximations (Csörgő et al., 1986). Our main result, given in Section 2 consists in the asymptotic
representation of Dn (x) in terms of Weiner processes. As an application, we introduce, in
Section 3, a Hill-type estimator for the tail index γ1 based on Fn . The asymptotic normality
of the newly proposed estimator is established, in the same section, by means of the aforementioned Gaussian approximation of Dn (x) and its finite sample behavior is checked by
simulation in Section 4. The proofs are postponed to Section 5 and some results that are
instrumental to our needs are gathered in the Appendix.
2. Main result
Theorem 2.1. Let F and G be two cdf's with regularly varying tails (1.1) and assume that
the second-order condition of regular variation (1.2) holds with γ1 < γ2. Let k = kn be an
integer sequence such that k → ∞, k/n → 0 and $\sqrt{k}A_1(h) = O(1)$, where $h = h_n := U_H(n/k)$.
Then, there exists a sequence of standard Wiener processes $\{W_n(s);\, 0 \le s \le 1\}$ defined on the
probability space (Ω, A, P), such that for every $0 < \epsilon < 1/2$, we have, as n → ∞,
$$\sup_{x \ge p^{\gamma}} x^{\epsilon/(p\gamma_1)}\left|D_n(x) - J_n(x) - x^{-1/\gamma_1}\frac{x^{\tau/\gamma_1}-1}{\tau/\gamma_1}\sqrt{k}A_1(h)\right| \overset{p}{\to} 0, \qquad (2.5)$$
where $J_n(x) = J_{1n}(x) + J_{2n}(x)$, with
$$J_{1n}(x) := \sqrt{\frac{n}{k}}\left\{x^{1/\gamma_2}\,W_{n,1}\!\left(\frac{k}{n}x^{-1/\gamma}\right) - x^{-1/\gamma_1}\,W_{n,1}\!\left(\frac{k}{n}\right)\right\},$$
and
$$J_{2n}(x) := \frac{x^{-1/\gamma_1}}{\gamma}\sqrt{\frac{n}{k}}\int_1^x u^{1/\gamma-1}\left\{p\,W_{n,2}\!\left(\frac{k}{n}u^{-1/\gamma}\right) - q\,W_{n,1}\!\left(\frac{k}{n}u^{-1/\gamma}\right)\right\}du,$$
where $W_{n,1}$ and $W_{n,2}$ are two independent Wiener processes defined, for $0 \le s \le 1$, by
$W_{n,1}(s) := \{W_n(\theta) - W_n(\theta - ps)\}\mathbf{1}(\theta - ps \ge 0)$ and $W_{n,2}(s) := W_n(1) - W_n(1-qs)$,
with $\theta := H^{(1)}(\infty)$ and $p = 1 - q := \gamma/\gamma_1$.
Remark 2.1. It is noteworthy that the assumption γ1 < γ2 is required to ensure that enough
extreme data is available for the inference to be accurate. In other words, the proportion p of
the observed extreme values has to be greater than 1/2. This assumption is already considered
by Worms and Worms (2014) and, in the random truncation context, by Gardes and Stupfler
(2015) and Benchaira et al. (2016).
Remark 2.2. In the complete data case, we use the ordinary empirical cdf $F_n$ instead of the Nelson-Aalen-based
estimator and we have p = 1 = θ. This implies that q = 0, H ≡ F, $h = U_F(n/k)$,
$J_{1n}(x) \overset{D}{=} W_n(x^{-1/\gamma_1}) - x^{-1/\gamma_1}W_n(1)$, $J_{2n}(x) = 0$ and $W_{n,1}(s) = W_n(1) - W_n(1-s)$. Since
$$\{W_n(s),\, 0 \le s \le 1\} \overset{D}{=} \{W_n(1) - W_n(1-s),\, 0 \le s \le 1\},$$
it follows that $J_{1n}(x) \overset{D}{=} W_n(x^{-1/\gamma_1}) - x^{-1/\gamma_1}W_n(1)$, and so approximations (2.5) and (1.4)
agree for $x_0 = p^{\gamma}$. The symbol $\overset{D}{=}$ stands for equality in distribution.
3. Tail index estimation
In the last decade, some authors have become interested in the estimation of the EVI γ1 when the data are
subject to random censoring. For instance, Einmahl et al. (2008) adapted the classical Hill estimator
(amongst others) to a censored sample and introduced $\widehat{\gamma}_1^{(EFG)} := \widehat{\gamma}^{(H)}/\widehat{p}$
as an asymptotically normal estimator of γ1, where $\widehat{\gamma}^{(H)} := k^{-1}\sum_{i=1}^k \log(Z_{n-i+1:n}/Z_{n-k:n})$
is Hill's estimator of γ based on the complete sample $Z_1,\dots,Z_n$ and $\widehat{p} := k^{-1}\sum_{i=1}^k \delta_{[n-i+1:n]}$.
By using empirical process theory tools, and only by assuming the second-order condition of regular
variation of the tails of F and G, Brahimi et al. (2015) derived, in Theorem 2.1 (assertion (2.9)), a useful
weak approximation to $\widehat{\gamma}_1^{(EFG)}$ in terms of a sequence of Brownian bridges. They deduced in
Corollary 2.1 that
$$\sqrt{k}\left(\widehat{\gamma}_1^{(EFG)} - \gamma_1\right) \overset{D}{\to} \mathcal{N}\left(\frac{\lambda}{1-p\tau},\, \frac{\gamma_1^2}{p}\right), \ \text{as } n \to \infty, \qquad (3.6)$$
provided that $\sqrt{k}A_1(h) \to \lambda$. For their part, Worms and Worms (2014) proposed two estimators which,
incidentally, can be derived, through a slight modification, from the one we will define later on. They
proved their consistency (but not the asymptotic normality) under assumptions similar to those of
Einmahl et al. (2008). These estimators are defined by
$$\widehat{\gamma}_1^{(W1)} := \frac{1}{n\left(1-F_n^{(KM)}(Z_{n-k:n})\right)}\sum_{i=1}^{k}\frac{\delta_{[n-i+1:n]}}{1-G_n^{(KM)}(Z_{n-i+1:n}-)}\log\frac{Z_{n-i+1:n}}{Z_{n-k:n}},$$
and
$$\widehat{\gamma}_1^{(W2)} := \frac{1}{n\left(1-F_n^{(KM)}(Z_{n-k:n})\right)}\sum_{i=1}^{k}\frac{i\,\log\left(Z_{n-i+1:n}/Z_{n-i:n}\right)}{1-G_n^{(KM)}(Z_{n-i+1:n}-)},$$
where, for $z < Z_{n:n}$,
$$F_n^{(KM)}(z) := 1 - \prod_{i:\,Z_{i:n}\le z}\left(\frac{n-i}{n-i+1}\right)^{\delta_{[i:n]}} \quad\text{and}\quad G_n^{(KM)}(z) := 1 - \prod_{i:\,Z_{i:n}\le z}\left(\frac{n-i}{n-i+1}\right)^{1-\delta_{[i:n]}}, \qquad (3.7)$$
are Kaplan-Meier estimators of F and G respectively. Thereafter, we will see that the
assumptions under which we establish the asymptotic normality of our estimator are lighter
and more familiar in the extreme value context. We start the definition of our estimator by
noting that, from Theorem 1.2.2 in de Haan and Ferreira (2006), the first-order condition of regular
variation (1.1) implies that $\lim_{t\to\infty}\int_1^{\infty} x^{-1}\,\overline{F}(tx)/\overline{F}(t)\,dx = \gamma_1$, which, after an
integration by parts, may be rewritten as
$$\lim_{t\to\infty}\frac{1}{\overline{F}(t)}\int_t^{\infty}\log\left(\frac{x}{t}\right)dF(x) = \gamma_1. \qquad (3.8)$$
By replacing F by $F_n$ and letting $t = Z_{n-k:n}$, we obtain
$$\widehat{\gamma}_1 = \frac{1}{\overline{F}_n(Z_{n-k:n})}\int_{Z_{n-k:n}}^{\infty}\log\left(\frac{x}{Z_{n-k:n}}\right)dF_n(x), \qquad (3.9)$$
as an estimator of γ1. Before we proceed with the construction of $\widehat{\gamma}_1$, we need to define a function
$H^{(0)}$, analogous to $H^{(1)}$, and its empirical version: $H^{(0)}(z) := \mathbf{P}(Z_1 \le z,\, \delta_1 = 0) = \int_0^z \overline{F}(y)\,dG(y)$
and $H_n^{(0)}(z) := n^{-1}\sum_{i=1}^n (1-\delta_i)\mathbf{1}(Z_i \le z)$, $z \ge 0$, respectively. Now, note that
$dF_n(z) = \exp\left\{\int_0^z dH_n^{(0)}(v)/\overline{H}_n(v-)\right\}dH_n^{(1)}(z)$, so we have
$$\widehat{\gamma}_1 = \frac{\displaystyle\frac{1}{n}\sum_{i=n-k+1}^{n}\delta_{[i:n]}\log\frac{Z_{i:n}}{Z_{n-k:n}}\exp\left\{\sum_{j=1}^{i}\frac{1-\delta_{[j:n]}}{n-j+1}\right\}}{\displaystyle\prod_{j=1}^{n-k}\exp\left\{-\frac{\delta_{[j:n]}}{n-j+1}\right\}},$$
which may be rewritten as
$$\frac{\displaystyle\frac{1}{n}\sum_{i=n-k+1}^{n}\delta_{[i:n]}\exp\left\{\sum_{j=1}^{i}\frac{1}{n-j+1}\right\}\prod_{j=1}^{i}\exp\left\{-\frac{\delta_{[j:n]}}{n-j+1}\right\}\log\frac{Z_{i:n}}{Z_{n-k:n}}}{\displaystyle\prod_{j=1}^{n-k}\exp\left\{-\frac{\delta_{[j:n]}}{n-j+1}\right\}},$$
which in turn simplifies to
$$\frac{1}{n}\sum_{i=n-k+1}^{n}\delta_{[i:n]}\exp\left\{\sum_{j=1}^{i}\frac{1}{n-j+1}\right\}\prod_{j=n-k+1}^{i}\exp\left\{-\frac{\delta_{[j:n]}}{n-j+1}\right\}\log\frac{Z_{i:n}}{Z_{n-k:n}}.$$
By changing i into n − i + 1 and j into n − j + 1, with the necessary modifications, we end
up with the following explicit formula for our new estimator of the EVI γ1:
$$\widehat{\gamma}_1 = \sum_{i=1}^{k} a_{i,n}\log\frac{Z_{n-i+1:n}}{Z_{n-k:n}},$$
where
$$a_{i,n} := n^{-1}\,\delta_{[n-i+1:n]}\prod_{j=i}^{n}\exp\{1/j\}\prod_{j=i}^{k}\exp\left\{-\delta_{[n-j+1:n]}/j\right\}. \qquad (3.10)$$
Note that for uncensored data we have X = Z and all the δ's are equal to 1, therefore
$a_{i,n} \approx k^{-1}$, for $i = 1,\dots,k$. Indeed,
$$a_{i,n} = \frac{1}{n}\prod_{j=k+1}^{n}\exp\left\{\frac{1}{j}\right\} \approx \frac{1}{n}\prod_{j=k+1}^{n}\left(1+\frac{1}{j}\right) = \frac{n+1}{n}\,\frac{1}{k+1} \approx \frac{1}{k}, \ \text{as } n \to \infty,$$
which leads to the formula of the famous Hill estimator of the EVI (Hill, 1975). The consistency and
asymptotic normality of $\widehat{\gamma}_1$ are established in the following theorem.
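Before stating the theorem, we give a small C++ sketch of how the estimator (3.10) might be evaluated from a censored sample; all names are ours, the products are accumulated in log scale for numerical stability, and the O(kn) inner loops could be replaced by cumulative sums in a real implementation. It assumes 1 <= k < n.

#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// gamma_hat_1 = sum_{i=1}^{k} a_{i,n} log(Z_{n-i+1:n} / Z_{n-k:n}), with a_{i,n} as in (3.10).
double evi_estimate(std::vector<std::pair<double, int>> sample, std::size_t k) {
    std::sort(sample.begin(), sample.end());             // Z_{1:n} <= ... <= Z_{n:n}
    const std::size_t n = sample.size();
    auto delta_top = [&](std::size_t i) { return sample[n - i].second; };  // delta_{[n-i+1:n]}
    const double z_nk = sample[n - k - 1].first;          // Z_{n-k:n}
    double estimate = 0.0;
    for (std::size_t i = 1; i <= k; ++i) {
        double log_weight = -std::log(static_cast<double>(n));
        for (std::size_t j = i; j <= n; ++j) log_weight += 1.0 / j;
        for (std::size_t j = i; j <= k; ++j) log_weight -= static_cast<double>(delta_top(j)) / j;
        const double a_in = delta_top(i) * std::exp(log_weight);
        estimate += a_in * std::log(sample[n - i].first / z_nk);
    }
    return estimate;
}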
Theorem 3.1. Let F and G be two cdf ’s with regularly varying tails (1.1) such that γ1 < γ2 .
Let k = kn be an integer sequence such that k → ∞ and k/n → 0; then $\widehat{\gamma}_1 \to \gamma_1$ in probability,
as n → ∞. Assume further that the second-order condition of regular variation (1.2) holds,
then for all large n
$$\sqrt{k}\left(\widehat{\gamma}_1 - \gamma_1\right) - \frac{\sqrt{k}A_1(h)}{1-\tau}
= \gamma\sqrt{\frac{n}{k}}\int_0^1 s^{-q-1}\left\{q\,W_{n,2}\!\left(\frac{k}{n}s\right) + \left(1-\frac{q}{p}\right)W_{n,1}\!\left(\frac{k}{n}s\right)\right\}ds
- \gamma_1\sqrt{\frac{n}{k}}\,W_{n,1}\!\left(\frac{k}{n}\right) + o_p(1),$$
where $W_{n,1}$ and $W_{n,2}$ are those defined in Theorem 2.1. If in addition $\sqrt{k}A_1(h) \to \lambda$, then
$$\sqrt{k}\left(\widehat{\gamma}_1 - \gamma_1\right) \overset{D}{\to} \mathcal{N}\left(\frac{\lambda}{1-\tau},\, \frac{p}{2p-1}\gamma_1^2\right), \ \text{as } n \to \infty. \qquad (3.11)$$
Remark 3.1. We clearly see that, when there is no censoring (i.e. p = 1 − q = 1) , the
Gaussian approximation and the limiting distribution above perfectly agree with those of
Hill’s estimator (see, e.g., de Haan and Ferreira, 2006, page 76).
Remark 3.2. Since 0 < p < 1 and τ ≤ 0, then 1 − τ ≥ 1 − pτ > 0. This implies that the absolute value of
the asymptotic bias of $\widehat{\gamma}_1$ is smaller than or equal to that of $\widehat{\gamma}_1^{(EFG)}$ (see (3.11) and (3.6)).
In other words, the new estimator $\widehat{\gamma}_1$ is of reduced bias compared to $\widehat{\gamma}_1^{(EFG)}$. However, for
any 1/2 < p < 1, we have p/(2p−1) > 1/p, meaning that the asymptotic variance of $\widehat{\gamma}_1$ is greater
than that of $\widehat{\gamma}_1^{(EFG)}$. This seems logical, because it is rare to reduce the asymptotic bias of an
estimator without increasing its asymptotic variance; this is the price to pay, see for instance Peng (1998)
and Beirlant et al. (2016). We also note that in the complete data case, both asymptotic biases coincide
with that of Hill's estimator and so do the variances.
4. Simulation study
We mentioned in the introduction that Worms and Worms (2014) showed, by simulation,
that their estimators outperform the adapted Hill estimator introduced by Einmahl et al.
(W 1)
(2008). Therefore, this study is intended for comparing our estimator γ
b1 to γ
b1
(W 2)
and γ
b1
,
with respect to biases and MSE’s. It is carried out through two sets of censored and censoring
data, both drawn from the following Burr models: $\overline{F}(x) = \left(1+x^{1/\eta_1}\right)^{-\eta_1/\gamma_1}$ and
$\overline{G}(x) = \left(1+x^{1/\eta_2}\right)^{-\eta_2/\gamma_2}$, $x \ge 0$, where $\eta_1, \eta_2, \gamma_1, \gamma_2 > 0$. We fix η1 = η2 = 1/4 and choose the values
0.2 and 0.8 for γ1 . For the proportion of really observed extreme values, we take p = 0.55,
0.70 and 0.90 for strong, moderate and weak censoring respectively. For each couple (γ1 , p) ,
we solve the equation p = γ2 /(γ1 + γ2 ) to get the pertaining γ2 -value. We generate 2000
independent replicates of size n = 300 then n = 1000 from both samples (X1 , ..., Xn ) and
(Y1 , ..., Yn ) . Our overall results are taken as the empirical means of the results obtained
Figure 4.1. Bias (left panel) and MSE (right panel) of the estimators $\widehat{\gamma}_1$ (solid line), $\widehat{\gamma}_1^{(W1)}$ (dashed line) and $\widehat{\gamma}_1^{(W2)}$ (dotted line) of the tail index γ1 = 0.2 (top) and γ1 = 0.8 (bottom) of the strongly right-censored Burr model, based on 2000 samples of size 300.
through all repetitions. We plot the absolute biases and the MSE’s as functions of the
number k of upper order statistics used in the computation of the three estimators. The
simulation results are illustrated, for the respective p-values, in Figures 4.1, 4.2 and 4.3 for
n = 300 and in Figures 4.4, 4.5 and 4.6 for n = 1000. On the light of all the figures, it is
clear that our estimator performs better than the other two. Indeed, in (almost) each case,
(W 1)
the minima of the absolute bias and MSE of γ
b1 are less than those of γ
b1
addition, our estimator reaches its minima a long way before
(W 1)
γ
b1 and
(W 2)
and γ
b1
(W 2)
γ
b1
. In
do, meaning
that the number k of extremes needed for the last two estimators to be accurate is much
larger than those needed for γ
b1 . In other words, the cost of our estimator in terms of upper
order statistics is very low compared to that of Worms and Worms estimators.
5. Proofs
In the sequel, we will use the following two empirical processes
$\sqrt{n}\left(\overline{H}_n^{(j)}(z) - \overline{H}^{(j)}(z)\right)$, j = 0, 1; z > 0, which may be represented, almost surely, by two
uniform empirical ones. Indeed, let us consider the independent and identically distributed (iid)
(0, 1)-uniform rv's $U_i := \delta_i H^{(1)}(Z_i) + (1-\delta_i)\left(\theta + H^{(0)}(Z_i)\right)$, $i = 1,\dots,n$, defined in Einmahl and
Koning (1992). The empirical cdf and the uniform empirical process based upon these rv's are
respectively denoted by
$$U_n(s) := \#\{i : 1 \le i \le n,\ U_i \le s\}/n \quad\text{and}\quad \alpha_n(s) := \sqrt{n}\left(U_n(s) - s\right), \ 0 \le s \le 1. \qquad (5.12)$$
Deheuvels and Einmahl (1996) state that almost surely (a.s.)
$$H_n^{(1)}(z) = U_n\left(H^{(1)}(z)\right) \quad\text{and}\quad H_n^{(0)}(z) = U_n\left(H^{(0)}(z) + \theta\right) - U_n(\theta), \qquad (5.13)$$
Figure 4.2. Bias (left panel) and MSE (right panel) of the estimators $\widehat{\gamma}_1$ (solid line), $\widehat{\gamma}_1^{(W1)}$ (dashed line) and $\widehat{\gamma}_1^{(W2)}$ (dotted line) of the tail index γ1 = 0.2 (top) and γ1 = 0.8 (bottom) of the moderately right-censored Burr model, based on 2000 samples of size 300.
Figure 4.3. Bias (left panel) and MSE (right panel) of the estimators $\widehat{\gamma}_1$ (solid line), $\widehat{\gamma}_1^{(W1)}$ (dashed line) and $\widehat{\gamma}_1^{(W2)}$ (dotted line) of the tail index γ1 = 0.2 (top) and γ1 = 0.8 (bottom) of the weakly right-censored Burr model, based on 2000 samples of size 300.
for 0 < H (1) (z) < θ and 0 < H (0) (z) < 1 − θ. It is easy to verify that a.s.
$$\sqrt{n}\left(\overline{H}_n^{(1)}(z) - \overline{H}^{(1)}(z)\right) = \alpha_n(\theta) - \alpha_n\left(\theta - \overline{H}^{(1)}(z)\right), \ \text{for } 0 < \overline{H}^{(1)}(z) < \theta, \qquad (5.14)$$
and
$$\sqrt{n}\left(\overline{H}_n^{(0)}(z) - \overline{H}^{(0)}(z)\right) = -\alpha_n\left(1 - \overline{H}^{(0)}(z)\right), \ \text{for } 0 < \overline{H}^{(0)}(z) < 1-\theta. \qquad (5.15)$$
Our methodology strongly relies on the well-known Gaussian approximation, given by Csörgő
et al. (1986) in Corollary 2.1. It says that on the probability space (Ω, A, P) , there exists a
sequence of Brownian bridges {Bn (s) ; 0 ≤ s ≤ 1} such that, for every 1/4 < η < 1/2, we
Figure 4.4. Bias (left panel) and MSE (right panel) of the estimators $\widehat{\gamma}_1$ (solid line), $\widehat{\gamma}_1^{(W1)}$ (dashed line) and $\widehat{\gamma}_1^{(W2)}$ (dotted line) of the tail index γ1 = 0.2 (top) and γ1 = 0.8 (bottom) of the strongly right-censored Burr model, based on 2000 samples of size 1000.
Figure 4.5. Bias (left panel) and MSE (right panel) of the estimators γ̂1 (solid line), γ̂1^(W1) (dashed line) and γ̂1^(W2) (dotted line) of the tail index γ1 = 0.2 (top) and γ1 = 0.8 (bottom) of moderately right-censored Burr model, based on 2000 samples of size 1000.
have
sup_{1/n ≤ s ≤ 1−1/n} |αn(s) − Bn(s)| / [s(1 − s)]^η = Op(n^{η−1/2}).   (5.16)
For the increments αn(θ) − αn(θ − s), we will need an approximation of the same type as (5.16). Following similar arguments, mutatis mutandis, as those used in the proofs of assertions (2.2) of Theorem 2.1 and (2.8) of Theorem 2.2 in Csörgő et al. (1986), we may show that, for every 0 < θ < 1, we have
sup_{1/n ≤ s ≤ θ} |{αn(θ) − αn(θ − s)} − {Bn(θ) − Bn(θ − s)}| / s^η = Op(n^{η−1/2}).   (5.17)
Figure 4.6. Bias (left panel) and MSE (right panel) of the estimators γ̂1 (solid line), γ̂1^(W1) (dashed line) and γ̂1^(W2) (dotted line) of the tail index γ1 = 0.2 (top) and γ1 = 0.8 (bottom) of weakly right-censored Burr model, based on 2000 samples of size 1000.
5.1. Proof of Theorem 2.1. The proof is carried out through two steps. First, we asymptotically represent Dn(x) in terms of the empirical cdf Hn and the empirical sub-distribution functions Hn^(0) and Hn^(1). Second, we rewrite it as a functional of the following two processes
βn(w) := √(n/k) {αn(θ) − αn(θ − H̄^(1)(wZ_{n−k:n}))}, for 0 < H̄^(1)(wZ_{n−k:n}) < θ,   (5.18)
and
β̃n(w) := −√(n/k) αn(1 − H̄^(0)(wZ_{n−k:n})), for 0 < H̄^(0)(wZ_{n−k:n}) < 1 − θ,   (5.19)
in order to apply the weak approximations (5.16) and (5.17). We begin, as in the proof of Theorem 2.1 in Benchaira et al. (2016), by decomposing k^{−1/2} Dn(x) into the sum of
Mn1(x) := (F̄n(xZ_{n−k:n}) − F̄(xZ_{n−k:n})) / F̄(Z_{n−k:n}),
Mn2(x) := (F̄(Z_{n−k:n})/F̄n(Z_{n−k:n}) − 1) (F̄n(xZ_{n−k:n}) − F̄(xZ_{n−k:n})) / F̄(Z_{n−k:n}),
Mn3(x) := − (F̄(xZ_{n−k:n})/F̄n(Z_{n−k:n})) (F̄n(Z_{n−k:n}) − F̄(Z_{n−k:n})) / F̄(Z_{n−k:n}),
and
Mn4(x) := (F̄(xZ_{n−k:n})/F̄(Z_{n−k:n}) − F̄(xh)/F̄(h)) + (F̄(xh)/F̄(h) − x^{−1/γ1}).
In the following two subsections, we show that, uniformly on x ≥ p^γ, for any 1/4 < η < p/2 and small 0 < ε0 < 1, we have
Σ_{i=1}^{3} √k Mni(x) − Jn(x) = op(x^{(2η−p)/γ±ε0}),   (5.20)
where Jn(x) is the Gaussian process defined in Theorem 2.1. It is worth mentioning that both Mn2(x) and Mn3(x) may be rewritten in terms of Mn1(x). Indeed, it is readily checked that
Mn2(x) = (F̄(Z_{n−k:n})/F̄n(Z_{n−k:n}) − 1) Mn1(x)  and  Mn3(x) = − (F̄(xZ_{n−k:n})/F̄n(Z_{n−k:n})) Mn1(1).
Then, we only focus on the weak approximation of √k Mn1(x), which will lead to that of √k Mn3(x) and to the asymptotic negligibility (in probability) of √k Mn2(x). Finally, the approximation of √k Mn4(x) will result in the asymptotic bias. In conclusion, we might say that representing Dn(x) amounts to representing Mn1(x).
5.1.1. Representation of Mn1(x) in terms of Hn, Hn^(0) and Hn^(1). We show that, for all large n and x ≥ p^γ,
Mn1(x) = Tn1(x) + Tn2(x) + Tn3(x) + Rn(x),   (5.21)
where
Tn1(x) := ∫_{xZ_{n−k:n}}^∞ (F̄(w)/F̄(Z_{n−k:n})) d(Hn^(1)(w) − H^(1)(w)) / H̄(w),
Tn2(x) := ∫_{xZ_{n−k:n}}^∞ (F̄(w)/F̄(Z_{n−k:n})) {∫_0^w d(Hn^(0)(v) − H^(0)(v)) / H̄(v)} dH^(1)(w),
Tn3(x) := ∫_{xZ_{n−k:n}}^∞ (F̄(w)/F̄(Z_{n−k:n})) {∫_0^w ((Hn(v) − H(v)) / H̄²(v)) dH^(0)(v)} dH^(1)(w),
and Rn(x) := Op(k^{−2η±ε0}) x^{(2η−p)/γ±ε0}, for every 1/4 < η < p/2 and small 0 < ε0 < 1.
For notational simplicity, we write b^{±ε0} := max(b^{ε0}, b^{−ε0}) and, without loss of generality, we attribute ε0 to any constant times ε0 and b^{±ε0} to any linear combination of b^{±c1 ε0} and b^{±c2 ε0}, for every c1, c2 > 0. We begin by letting ℓ(w) := F̄(w)/H̄(w), which may be rewritten into exp{∫_0^w dH^(0)(v)/H̄(v)}, whose empirical counterpart is equal to
ℓn(w) := exp{∫_0^w dHn^(0)(v)/H̄n(v)}.
Observe that F̄(xZ_{n−k:n}) = ∫_{xZ_{n−k:n}}^∞ ℓ(w) dH^(1)(w); then, by replacing ℓ and H^(1) by ℓn and Hn^(1) respectively, we obtain F̄n(xZ_{n−k:n}) = ∫_{xZ_{n−k:n}}^∞ ℓn(w) dHn^(1)(w). Thus
Mn1(x) = ∫_{xZ_{n−k:n}}^∞ (ℓn(w)/F̄(Z_{n−k:n})) dHn^(1)(w) − ∫_{xZ_{n−k:n}}^∞ (ℓ(w)/F̄(Z_{n−k:n})) dH^(1)(w).   (5.22)
By applying Taylor's expansion, we may rewrite ℓn(w) − ℓ(w) into
ℓ(w) {∫_0^w dHn^(0)(v)/H̄n(v) − ∫_0^w dH^(0)(v)/H̄(v)} + (1/2) ℓ̃n(w) {∫_0^w dHn^(0)(v)/H̄n(v) − ∫_0^w dH^(0)(v)/H̄(v)}²,
where ℓ̃n(w) is a stochastic intermediate value lying between ℓ(w) and ℓn(w). This allows us to decompose Mn1(x), in (5.22), into the sum of
T*n1(x) := ∫_{xZ_{n−k:n}}^∞ (ℓ(w)/F̄(Z_{n−k:n})) d(Hn^(1)(w) − H^(1)(w)),
T*n2(x) := ∫_{xZ_{n−k:n}}^∞ (ℓ(w)/F̄(Z_{n−k:n})) {∫_0^w d(Hn^(0)(v) − H^(0)(v)) / H̄(v)} dHn^(1)(w),
T*n3(x) := ∫_{xZ_{n−k:n}}^∞ (ℓ(w)/F̄(Z_{n−k:n})) {∫_0^w ((H̄(v) − H̄n(v)) / H̄²(v)) dHn^(0)(v)} dHn^(1)(w),
Rn1(x) := ∫_{xZ_{n−k:n}}^∞ (ℓ(w)/F̄(Z_{n−k:n})) {∫_0^w ((H̄(v) − H̄n(v))² / (H̄n(v) H̄²(v))) dHn^(0)(v)} dHn^(1)(w),
and
Rn2(x) := (1/2) ∫_{xZ_{n−k:n}}^∞ (ℓ̃n(w)/F̄(Z_{n−k:n})) {∫_0^w dHn^(0)(v)/H̄n(v) − ∫_0^w dH^(0)(v)/H̄(v)}² dHn^(1)(w).
First, we clearly see that T*n1(x) ≡ Tn1(x). Next, we show that T*n2(x) and T*n3(x) are approximations of Tn2(x) and Tn3(x) respectively, while Rni(x) = Op(k^{−2η±ε0}) x^{(2η−p)/γ±ε0},
(1)
i = 1, 2, as n → ∞, for any x ≥ pγ . Since, from representation (5.13) , we have H n (w) =
(1)
Un H (w) a.s., then without loss of generality, we may write
Z ∞
Z w d Hn(0) (v) − H (0) (v)
(1)
` (w)
T∗n2 (x) =
dUn H (w) .
H (v)
xZn−k:n F (Zn−k:n ) 0
n
o
(1)
(1)
Let Q(1) (u) := inf w : H (w) > u , 0 < u < θ, and set t = Un H (w) or, in other
words, w = Q(1) (Vn (t)) , where Vn denotes the empirical quantile function pertaining to
Un . By using this change of variables, we get
(0)
(0)
Z H (1)
Z
(1) (V (t))
(xZ
)
Q
d
(v)
−
H
H
(v)
n
n−k:n
n
n
L (Vn (t))
∗
Tn2 =
dt,
F (Zn−k:n ) 0
H (v)
0
where, for notational simplicity, we set L (s) := ` Q(1) (s) . Now, we decompose T∗n2 (x) into
the sum of
Z
(1)
H n (xZn−k:n )
An1 (x) :=
H
Z
H
(1)
(1)
(xZn−k:n )
(xZn−k:n )
An2 (x) :=
0
Z
An3 (x) :=
0
H
(1)
L (Vn (t))
F (Zn−k:n )
Z
Q(1) (Vn (t))
L (t)
F (Zn−k:n )
Z
(0)
Hn
(v) − H
(0)
(v)
H (v)
0
L (Vn (t)) − L (t)
F (Zn−k:n )
(xZn−k:n )
d
Z
Q(1) (Vn (t))
0
Q(1) (Vn (t))
Q(1) (t)
dt,
(0)
(0)
d H n (v) − H (v)
H (v)
(0)
(0)
d H n (v) − H (v)
H (v)
dt,
dt,
and
Z
H
(1)
(xZn−k:n )
An4 (x) :=
0
L (t)
F (Zn−k:n )
Q(1) (t)
Z
(0)
(0)
d H n (v) − H (v)
H (v)
0
dt.
It is clear that the change of variables w = Q(1) (t) yields that An4 (x) = Tn2 (x) . Hence,
one has to show that Ani (x) = Op (k −2η± ) x(2η−p)/γ±0 , i = 1, 2, 3, as n → ∞, uniformly on
x ≥ pγ . We begin by An1 (x) for which an integration by parts gives
Z H (1)
Z Q(1) (Vn (t)) d H (0) (v) − H (0) (v)
n (xZn−k:n )
n
L (Vn (t))
An1 (x) :=
dt,
(1)
H (v)
H (xZn−k:n ) F (Zn−k:n ) 0
Z
(1)
H n (xZn−k:n )
An1 (x) =
H
(1)
Z
−
(xZn−k:n )
Q(1) (Vn (t))
0
L (Vn (t))
F (Zn−k:n )
(0)
(
H n (v) − H
H (v)
(0)
(0)
Q(1) (Vn (t)) − H
Q(1) (Vn (t))
Hn
(0)
(v)
If In (x) denotes the interval of endpoints H
H (Q(1) (Vn (t)))
)
dH (v) dt.
(1)
(1)
(xZn−k:n ) and H n (xZn−k:n ) , then from
Lemma (6.1) , we have supt∈In (x) (L (t) /L (Vn (t))) = Op (1) , uniformly on x ≥ pγ . On
the other hand, from representation (5.13), we infer that
n (0)
o n (0)
o
D
H n (t) , 0 < s < 1 − θ = Un H (t) , 0 < s < 1 − θ .
Then by using assertion (ii) in Proposition 6.1, we have that
Mn+ (x)
L (t)
Mn− (x) F (Zn−k:n )
(0) 1/2−η
1−η Z
(0)
(1)
(1)
H
Q (Vn (t)) H
Q (Vn (t))
(v)
dH (v)
×
+
dt,
H (Q(1) (Vn (t)))
H (v)
0
An1 (x) = Op n
−η
Z
(1)
where Mn+ (x) and Mn− (x) denote, the maximum and the minimum of H n (xZn−k:n ) and
H
(1)
(xZn−k:n ) respectively. Observe that both H
(0)
◦ Q(1) and H ◦ Q(1) are regularly varying
at infinity with the same index equal to 1. Then by using assertion (iii) in Proposition 6.1,
we readily show that An1 (x) equals
(0) 1−η
1−η Z
(0)
(1)
+
Z
(1)
Mn (x)
Q (t) H
H
Q (t)
(v)
dH (v)
Op (n−η )
L (t)
+
dt.
F (Zn−k:n ) Mn− (x)
H (Q(1) (t))
H (v)
0
Note that H
(0)
< H, then it follows, after integration, that
Z Mn+ (x)
Op (n−η )
L (t)
η dt,
An1 (x) =
F (Zn−k:n ) Mn− (x) H (Q(1) (t))
which, by a change of variables, becomes
(1)
An1 (x) =
Op (n−η ) H (Zn−k:n )
F (Zn−k:n )
Z
Mn+ (x)/H
Mn− (x)/H
(1)
(1)
(Zn−k:n )
(Zn−k:n )
(1)
(Zn−k:n ) t
L H
h
(1)
iη dt.
H Q(1) H (Zn−k:n ) t
From now on, a key result related to the regular variation concept, namely Potter's inequalities (see, e.g., Proposition B.1.9, assertion 5 in de Haan and Ferreira, 2006), will be applied quite frequently. For this reason, we need to recall this very useful tool here. Suppose that Ψ is a regularly varying function at infinity with index κ and let c1, c2 be positive real numbers. Then there exists t0 = t0(c1, c2) such that for t ≥ t0, tx ≥ t0,
(1 − c1) x^κ min(x^{c2}, x^{−c2}) < Ψ(tx)/Ψ(t) < (1 + c1) x^κ max(x^{c2}, x^{−c2}).   (5.23)
Since L and H ◦ Q(1) are regularly varying at infinity with respective indices p − 1 and 1,
then we use (5.23) to write that, for sufficiently small 0 > 0 and for all large n, we have
(1)
(1)
−η
Op (n ) H (Zn−k:n ) L H (Zn−k:n ) Z Mn+ (x)/H (1) (Zn−k:n ) tp−1±0
h
(1)
iη
An1 (x) =
dt.
(1)
tη±0
Mn− (x)/H (Zn−k:n )
F (Zn−k:n ) H Q(1) H (Zn−k:n )
(1)
From assertion (i) of Lemma 4.1 in Brahimi et al. (2015), we have pH (t) /H (t) → 1, as
(1)
(1)
p
t → ∞, hence H (Zn−k:n ) /H (Zn−k:n ) → p. On the other hand, we have Q(1) H (t) = t
n
p
and H (Zn−k:n ) → 1, thus
k
(1)
(1)
H (Zn−k:n ) L H (Zn−k:n )
h
(1)
iη = (1 + op (1)) (n/k)η .
F (Zn−k:n ) H Q(1) H (Zn−k:n )
Then, after integration, we obtain
Op (n−η )
An1 (x) =
(k/n)η
We have
(1)
H n (xZn−k:n )
H
(1)
!p−η±0
(Zn−k:n )
−
H
(1)
H
(xZn−k:n )
(1)
!p−η±0
.
(Zn−k:n )
n (1)
p
H (Zn−k:n ) → p, therefore
k
p−η±0 (1)
p−η±0
Op (n−η ) (1)
H n (xZn−k:n )
− H (xZn−k:n )
.
An1 (x) =
(k/n)p±0
By applying the mean value theorem, then assertion (i) in Proposition 6.1, we have
(1)
p−η−1±0
Op (n−η )
(1)
(1)
An1 (x) =
H n (xZn−k:n ) − H (xZn−k:n ) H (xZn−k:n )
,
(k/n)p±0
and by assertion (ii) in Proposition 6.1, we get
An1 (x) =
p−2η±0
Op (n−2η ) (1)
H
(xZ
)
.
n−k:n
(k/n)p±0
n (1)
p
H (Zn−k:n ) → p together with routine manipulations
k
of Potter’s inequalities (5.23) , we end up with An1 (x) = Op (k −2η±0 ) x(2η−p)/γ±0 . Let us
Once again, by using the fact that
now consider the term An2 (x) , which may be decomposed into the sum of
Z Q(1) (Vn (t)) d H (0) (v) − H (0) (v)
Z H (1) (xZn−k:n )
n
L (Vn (t)) − L (t)
(1)
dt,
An2 (x) :=
F (Zn−k:n )
H (v)
x−1/γ p/n
0
and
Z
(2)
An2 (x) :=
0
x−1/γ p/n
L (Vn (t)) − L (t)
F (Zn−k:n )
Z
Q(1) (Vn (t))
(0)
(0)
d H n (v) − H (v)
0
H (v)
dt.
(1)
For the term An2 (x) , we integrate the second integral by parts to get
( (0)
Z H (1) (xZn−k:n )
(0)
Q(1) (Vn (t))
L (Vn (t)) − L (t) H n Q(1) (Vn (t)) − H
(1)
An2 (x) =
F (Zn−k:n )
H (Q(1) (Vn (t)))
x−1/γ p/n
)
Z Q(1) (Vn (t)) (0)
(0)
H n (v) − H (v)
dH (v) dt.
(5.24)
−
2
0
H (v)
Using assertion (ii) in Proposition 6.1 yields
Z (1)
H (xZn−k:n ) |L (Vn (t)) − L (t)|
dt
(1)
−η
η ,
An2 (x) = Op n
F (Zn−k:n )
H (Q(1) (t))
x−1/γ p/n
which may be rewritten into
Z H (1) (xZn−k:n )
Op n−η
x−1/γ p/n
In the interval of endpoints H
(1)
L (Vn (t))
L (t)
−1
L (t)
F (Zn−k:n )
dt
η .
H (Q(1) (t))
(xZn−k:n ) and x−1/γ p/n, we have, for large n, Vn (t) is
uniformly close to zero (in probability). On the other hand, assertion (i) in Proposition 6.1,
implies that (Vn (t) /t)± is uniformly stochastically bounded. Then, making use of Potter’s
inequalities (5.23) applied to the regularly varying function L (·) (with index −q), we have
(1 − 0 ) (Vn (t) /t)−0 −q <
L (Vn (t))
< (1 + 0 ) (Vn (t) /t)0 −q .
L (t)
It is clear that
L (Vn (t))
− 1 ≤ max (1 + 0 ) (Vn (t) /t)0 −q − 1 , (1 − 0 ) (Vn (t) /t)−0 −q − 1
L (t)
.
For the sake of simplicity, we rewrite this inequality into
|L (Vn (t)) /L (t) − 1| ≤ (1 ± 0 ) (Vn (t) /t)±0 −q − 1
which in turn is less than or equal to (Vn (t) /t)±0 −q − 1 + 0 (Vn (t) /t)±0 −q . In other
words, we have
(Vn (t))±0 −q − t±0 −q
|L (Vn (t)) /L (t) − 1| ≤
+ 0 (Vn (t) /t)±0 −q .
±
−q
0
t
By making use of assertion (ii) in Proposition 6.1 (once again), we show that
|L (Vn (t)) /L (t) − 1| ≤ Op n−η t(η−1)(±0 −q) + op (1) .
Therefore
(1)
An2
(x) ≤ Op
n−2η
Z
H
(1)
(xZn−k:n )
x−1/γ p/n
−η
+ op n
Z
H
(1)
(xZn−k:n )
x−1/γ p/n
L (t)
t(η−1)(±0 −q)
η dt
F (Zn−k:n ) H (Q(1) (t))
L (t)
dt
η .
F (Zn−k:n ) H (Q(1) (t))
(1)
By a similar treatment as the above (we omit the details), we end up with An2 (x) =
(2)
Op (k −2η±0 ) x(2η−p)/γ±0 . For the term An2 (x) , we start by noting that x−1/γ p/n ≤ 1/n, for
any x ≥ pγ . Since Vn (t) = U1:n , for 0 < t ≤ 1/n, it follows that
(0)
(Z −1/γ
) Z (1)
(0)
x
p/n
Q (U1:n ) d H n (v) − H
(v)
L (U1:n ) − L (t)
(2)
An2 (x) =
dt
.
F (Zn−k:n )
H (v)
0
0
For the second integral, we use analogous arguments based on assertion (ii) in Proposition
6.1, to write
(2)
An2
Op (n−η )
η
(x) =
F (Zn−k:n ) H (Q(1) (U1:n ))
Z
x−1/γ p/n
(L (U1:n ) + L (t)) dt.
0
Note that the latter integral is equal to L (U1:n ) px−1/γ /n +
R x−1/γ p/n
0
L (t) dt. By Potter’s in-
p
equalities (5.23) on L and the fact that nU1:n → 1, it becomes (1 + op (1)) pn−1 L (1/n) x−1/γ .
By replacing L by its expression, we get
(2)
An2
(x) =
F Q(1) (1/n)
F
(H ←
n−η−1
−1/γ
.
η+1 Op x
(1 − k/n)) H (Q(1) (1/n))
Now, we apply Potter’s inequalities (5.23) to F , to write
←
1/γ1 ±0
F Q(1) (1/n)
H (1 − k/n)
= O (1)
, as n → ∞.
Q(1) (1/n)
F (H ← (1 − k/n))
(5.25)
1
Since H (w) > H (w) , then Q(1) (s) > H ← (1 − s) and therefore
H ← (1 − k/n)
H ← (1 − k/n)
≤
,
Q(1) (1/n)
H ← (1 − 1/n)
which, by (once again) using Potter’s inequalities (5.23) to H ← (1 − s) , equals O (k −γ±0 ) .
Then the right-hand side of (5.25) is asymptotically equal to O k −γ/γ1 ±0 . On the other
η+1 (1) (1)
η+1
(2)
> H
Q (1/n)
= n−η−1 , it follows that An2 (x) =
hand, we have H Q(1) (1/n)
Op (k −p±0 ) x−1/γ , where p = γ/γ1 is assumed to be greater than 1/2. Consequently, we
(2)
have An2 (x) = Op (k −2η±0 ) x(2η−p)/γ±0 , for any 1/4 < η ≤ p/2, and thus An2 (x) =
Op (k −2η±0 ) x(2η−p)/γ±0 as well. By similar arguments, we show that An3 (x) asymptotically equals the same quantity, therefore we omit the details. Finally, we may write that
T∗n2 (x) = Tn2 (x) + Op (k −2η±0 ) x(2η−p)/γ±0 . For the term T∗n3 (x) we proceed as we did for
T∗n2 (x) but with more tedious manipulations. By two steps, we have to get rid of integrands
(0)
(1)
dHn (v) and dHn (w) and replace them by their theoretical counterparts dH (0) (v) and
dH (1) (w) . First, we define, for 0 < t < 1 − θ,
n
o
n
o
(0)
(0)
Q(0) (t) := inf v : H (v) > t and Q(0)
(t)
:=
inf
v
:
H
(v)
>
t
.
n
n
(0)
By the change of variables v = Qn (t) and similar arguments to those used for the terms
Ani (x) , we show that
Z
∗
Tn3 (x) =
)
(Z
w
` (w)
H (v) − H n (v) (0)
dH (v) dHn(1) (w)
2
xZn−k:n F (Zn−k:n )
0
H (v)
+ Op k −2η±0 x(2η−p)/γ±0 .
∞
(1)
Second, we use the change of variables w = Qn (s) and proceed as above to get T∗n3 (x) =
Tn3 (x)+Op (k −2η±0 ) x(2η−p)/γ±0 . At this stage, we proved that Tn1 (x) is exactly T∗n1 (x) and
that Tn2 (x) and Tn3 (x) are approximated by T∗n2 (x) and T∗n3 (x) respectively. For the first
remainder term Rn1 (x) , it suffices to follow the same procedure to show that, uniformly on
√
x ≥ pγ , kRn1 (x) = Op (k −2η±0 ) x(2η−p)/γ±0 . As for the last term Rn2 (x) , we use similar
techniques to the decomposition
Z w
Z w
(0)
dH (0) (v)
dHn (v)
−
H n (v)
H (v)
0
0
Z w d Hn(0) (v) − H (0) (v)
Z w
H n (v) − H (v) (0)
=
+
dH (v) ,
H n (v)
H n (v) H (v)
0
0
to show that √k Rn2(x) = Op(k^{−2η±ε0}) x^{(2η−p)/γ±ε0} as well, therefore we omit the details. Now,
the weak approximation (5.21) of Mn1 (x) is well established.
5.1.2. Gaussian approximation to Dn (x). We start by representing each Tnj (x) , j =
1, 2, 3, in terms of the processes (5.18) and (5.19) . Note that Tn1 (x) may be rewritten into
(1)
(1)
Z ∞
F (w) d H n (w) − H (w)
Tn1 (x) = −
,
H (w)
xZn−k:n F (Zn−k:n )
which, by integration by parts, becomes
"
#∞
(1)
(1)
F (w) H n (w) − H (w)
Tn1 (x) = −
F (Zn−k:n )
H (w)
Z
+
xZn−k:n
∞
xZn−k:n
(1)
H n (w) − H
H (w)
(1)
(1)
(w) dF (w)
.
F (Zn−k:n )
Observe that, for u ≥ Zn:n , H n (u) = 0 and, from Lemma 4.1 in Brahimi et al. (2015),
H
(1)
(w) /H (w) tends to p as w → ∞. It follows that
(1)
H (w) − H
lim F (w) n
w→∞
H (w)
(1)
(w)
(1)
H (w)
= − lim F (w)
= 0.
w→∞
H (w)
Thus, after a change of variables, we have
(1)
(1)
F (xZn−k:n ) H n (xZn−k:n ) − H (xZn−k:n )
Tn1 (x) =
F (Zn−k:n )
H (xZn−k:n )
Z ∞ (1)
(1)
H n (wZn−k:n ) − H (wZn−k:n ) dF (wZn−k:n )
+
.
H (wZn−k:n )
F (Zn−k:n )
x
Making the change of variables w = F ← 1 − sF (Zn−k:n ) /Zn−k:n =: ξn (s) and using (5.14)
and (5.18) yield
√
√ Z F (xZn−k:n )
F (Zn−k:n )
βn (ξn (s))
k F (xZn−k:n ) βn (x)
k
Tn1 (x) =
+
ds.
n F (Zn−k:n ) H (xZn−k:n )
n 0
H (ξn (s) Zn−k:n )
Next, we show that, for 0 > 0 sufficiently small, we have
Z
√
q ∞ 1/γ2 −1
−1/γ2
βn (x) +
w
βn (w) dw + op x(2η−p)/γ±0 .
kTn1 (x) = x
γ x
(5.26)
To this end, let us decompose Tn1 (x) into the sum of
√
√ Z F (xZn−k:n )
F (Zn−k:n )
k F (xZn−k:n ) βn (x)
k
βn (ξn (s))
(2)
(x) :=
, Tn1 (x) :=
ds,
n F (Zn−k:n ) H (xZn−k:n )
n x−1/γ1
H (ξn (s) Zn−k:n )
√ Z x−1/γ1
k
H (s−γ1 h)
βn (ξn (s))
(3)
Tn1 (x) :=
−1
ds,
n 0
H (ξn (s) Zn−k:n )
H (s−γ1 h)
√ Z x−1/γ1
k
βn (ξn (s)) − βn (s−γ1 )
(4)
Tn1 (x) :=
ds,
n 0
H (s−γ1 h)
√ Z x−1/γ1 γ /γ
√ Z x−1/γ1
k
k
βn (s−γ1 )
s 1 H (h)
βn (s−γ1 )
(5)
(6)
Tn1 (x) :=
−
1
ds,
T
(x)
:=
ds.
n1
n 0
n 0
H (s−γ1 h)
sγ1 /γ H (h)
sγ1 /γ H (h)
√ (i)
We shall show that kTn1 (x) = op x(2η−p)/γ±0 , i = 2, ..., 5, uniformly on x ≥ pγ , while
√ (6)
√ (1)
kTn1 (x) and kTn1 (x) are approximations to the first and second terms in (5.26) respec√ (2)
tively. Let us begin by kTn1 (x) and write
Z
√
k cn (x) |βn (ξn (s))|
(2)
k Tn1 (x) ≤
ds,
n 0
H (ξn (s) Zn−k:n )
where cn (x) := max F (xZn−k:n ) /F (Zn−k:n ) , x−1/γ1 . Next, we provide a lower bound to
(1)
Tn1
H (ξn (s) Zn−k:n ) by applying Potter’s inequalities (5.23) to F . Since Zn−k:n → ∞ a.s., then
from the right-hand side of (5.23) we have F (xZn−k:n ) /F (Zn−k:n ) < 2x−1/γ1 ±0 a.s., which
implies that cn (x) < 2x−1/γ1 ±0 a.s., for any x ≥ pγ , as well. Let us now rewrite the sequence
ξn (s) into
F ← 1 − sF (Zn−k:n )
, 0 < s < 1,
ξn (s) = ←
F
1 − F (Zn−k:n )
(5.27)
and use the left-hand side of Potter’s inequalities (5.23) for the quantile function u →
F ← (1 − u) , to get ξn (s) ≥ 2−1 s−γ1 ±0 a.s., for any x−1/γ1 < s < F (xZn−k:n ) /F (Zn−k:n ) .
This implies that ξn (s) ≥ 2−1 x1±0 ≥ 2−1 pγ±0 > 0. Then ξn (s) Zn−k:n → ∞ a.s. and therefore, from the left-hand side of (5.23) (applied to H), we have H (Zn−k:n ) /H (ξn (s) Zn−k:n ) =
1/γ±0
a.s. uniformly on s. This allows us to write that
O (ξn (s))
√ (2)
kTn1 (x) = O (1)
k/n
H (Zn−k:n )
Z
2x−1/γ1 ±0
(ξn (s))1/γ±0 |βn (ξn (s))| ds, a.s.
0
By combining Corollary 2.2.2 with Proposition B.1.10 in de Haan and Ferreira (2006), we
p
have (n/k) H (Zn−k:n ) → 1, hence
√
(2)
kTn1
2x−1/γ1 ±0
Z
(ξn (s))1/γ±0 |βn (ξn (s))| ds.
(x) = Op (1)
0
−1/γ1 ±0
From (5.23) , we infer that 0 < s < 2x0
=: s0 . On the other hand, in view of assertion
(ii) of Lemma 6.2, we have sup0<s<s0 (ξn (s))(1−η)/γ |βn (ξn (s))| = op (1) , therefore
Z 2x−1/γ1 ±0
√ (2)
(ξn (s))η/γ±0 ds.
kTn1 (x) = op (1)
0
Note that (1 − η) /γ ± 0 > 0, then by using the right-hand side of (5.23) (applied to
F ← (1 − ·)), we get
√
(2)
kTn1
Z
2x−1/γ1 ±0
s−ηγ1 /γ±0 ds,
(x) = op (1)
0
√ (2)
which equals op x
. Recall that γ1 = γ/p, then it is easy to verify that kTn1 (x) =
√ (i)
op x(2η−p)/γ±0 . By using similar arguments, we also show that kTn1 (x) = op x(2η−p)/γ±0 ,
η/γ−1/γ1 ±0
i = 3, 5, therefore we omit the details. For the term Tn4 (x) , we have
Z −1/γ1
√
k x
|βn (ξn (s)) − βn (s−γ1 )|
(4)
k Tn1 (x) ≤
ds.
n 0
H (s−γ1 h)
In view of (5.23) with the fact that H (h) = k/n, we have
Z x−1/γ1
√ (4)
kTn1 (x) = Op (1)
s−γ1 /γ±0 βn (ξn (s)) − βn s−γ1 ds.
0
From assertion (ii) of Lemma 6.2, we have βn (ξn (s) − s−γ1 ) = op s(1−η)γ1 /γ , uniformly
√ (4)
on 0 < s < x−1/γ1 , then after elementary calculation, we end up with kTn1 (x) =
(1)
op x(2η−p)/γ±0 . As for the first term Tn1 (x) , it suffices to use Potter’s inequalities (5.23)
(for F and H) to get
√
(1)
kTn1 (x) = x−1/γ2 βn (x) + op x(2η−p)/γ±0 .
√ (6)
R x−1/γ1 γ /γ
(6)
Finally, for Tn1 (x) we observe that kTn1 (x) = 0
s 1 βn (s−γ1 ) ds, which by a change
of variables meets the second term in (5.26) . Let us now consider the term Tn2 (x) . First,
notice that
Z
0
w
(0)
d Hn (v) − H (0) (v)
H (v)
Z
=−
0
w
(0)
(0)
d H n (v) − H (v)
H (v)
,
which, after an integration by parts, becomes
(0)
Hn
(0) − H
(0)
H (0) (w) − H (0) (w) Z w (0)
dH (v)
(0)
−
(0) − n
H n (v) − H (v)
.
2
H (w)
0
H (v)
It follows that Tn2 (x) may be written into the sum of
Z ∞
(0)
(0)
(1)
Tn2 (x) := − H n (0) − H (0)
(1)
xZn−k:n
(2)
Tn2
Z
(0)
∞
(x) :=
xZn−k:n
H n (w) − H
H (w)
(0)
(w)
F (w) dH (w)
,
F (Zn−k:n ) H (w)
(1)
F (w) dH (w)
,
F (Zn−k:n ) H (w)
and
(3)
Tn2
Z
∞
F (w)
(x) :=
xZn−k:n F (Zn−k:n )
For the first term, we replace dH
(1)
(Z
w
0
dH (v)
(0)
(0)
H n (v) − H (v)
2
H (v)
)
(1)
dH (w)
.
H (w)
(w) and H (w) by G (w) dF (w) and G (w) F (w) respec-
tively and we get
F (xZ
(0)
(0)
n−k:n )
(1)
.
Tn2 (x) = H n (0) − H (0)
F (Zn−k:n )
By using the routine manipulations of Potter’s inequalities (5.23) (applied to F ), we obtain
(0)
(0)
(1)
Tn2 (x) = H n (0) − H (0) Op x−1/γ1 ±0 . On the other hand, by the central limit
√ (1)
(0)
(0)
theorem, we have H n (0) − H (0) = Op n−1/2 , as n → ∞, it follows that kTn2 (x) =
p
√ (1)
k/nOp x−1/γ1 ±0 . Since x−1/γ1 ±0 = O x(2η−p)/γ±0 and k/n → 0, then kTn2 (x) =
(2)
op x(2η−p)/γ±0 . It is easy to verify that Tn2 (x) may be rewritten into
Z ∞
√ (2)
βen (w) dF (wZn−k:n )
kTn2 (x) =
.
H (wZn−k:n ) F (Zn−k:n )
x
By using similar arguments as those used for Tn1 (x) , we show that
Z
√ (2)
p ∞ 1/γ2 −1 e
kTn2 (x) = −
w
βn (w) dw + op x(2η−p)/γ±0 ,
γ x
(3)
therefore we omit the details. As for the third term Tn2 (x) , we have
)
Z ∞ (Z wZn−k:n
dH
(v)
F (wZn−k:n )
(0)
(0)
(3)
Tn2 (x) =
H n (v) − H (v)
d
.
2
F (Zn−k:n )
x
0
H (v)
After a change of variables and an integration by parts in the first integral, we apply Potter’s
inequalities (5.23) to F , to show that
(Z
)
Z
wZn−k:n √
√
√ (3)
p ∞ −1/γ1 −1
dH
(v)
(0)
(0)
kTn2 (x) = −
w
k H n (v) − H (v)
dw+ kRn3 (x) ,
2
γ x
0
H (v)
where
√
kRn3 (x) := oP x
−1/γ1 ±0
Z
x
∞
w−1/γ1 −1
(Z
wZn−k:n
√
k
0
(0)
Hn
(v) − H
(0)
(v)
dH (v)
2
H (v)
)
dw.
By using once again assertion (ii) in Proposition 6.1 with the fact that H
(0)
< H together
√
with Potter’s inequalities (5.23) routine manipulations, we readily show that kRn3 (x) =
op x(2η−p)/γ±0 , uniformly on x ≥ pγ , therefore we omit the details. Recall that γ1 = γ/p
(3)
and let us rewrite the first term in Tn2 (x) into
Z
∞
(Z
x
wZn−k:n
0
dH (v)
√ (0)
(0)
k H n (v) − H (v)
2
H (v)
)
dw−1/γ1 ,
which, by an integration by parts, becomes
dH (v)
√ (0)
(0)
k H n (v) − H (v)
2
0
H (v)
Z ∞
dH (wZ
√ (0)
(0)
n−k:n )
−1/γ1
−
w
k H n (wZn−k:n ) − H (wZn−k:n )
.
2
x
H (wZn−k:n )
Z
xZn−k:n
−
Z
k xe
2
βn (w) dH (wZn−k:n ) /H (wZn−k:n ) and
The first term above may be rewritten into −
n 0
by similar arguments as those used for Tn1 (x) , we show that the second term equals
1 R ∞ 1/γ2 −1 e
w
βn (w) dw + op x(2η−p)/γ±0 and thus we have
x
γ
Z
Z
√ (3)
dH (wZn−k:n ) 1 ∞ 1/γ2 −1 e
k xe
βn (w) dw + op x(2η−p)/γ±0 .
βn (w) 2
w
kTn2 (x) = −
+
n 0
H (wZn−k:n ) γ x
Consequently, we have
√
kTn2 (x) = −x
−1/γ1
k
n
Z
0
x
dH (wZn−k:n ) q
βen (w) 2
+
H (wZn−k:n ) γ
Z
∞
w1/γ2 −1 βen (w) dw+op x(2η−p)/γ±0 .
x
For the term Tn3 (x) , routine manipulations lead to
√
dH (0) (wZ
n−k:n )
e
kTn3 (x) = x
βn (w) + βn (w)
2
0
H (wZn−k:n )
Z ∞
√
q
−
w−1/γ2 −1 βn (w) + βen (w) dw + kRn4 (x) ,
γ x
−1/γ1
k
n
Z
x
where
√
Z
dH (0) (wZ
k x
n−k:n )
kRn4 (x) := op (1) x
|βn (w)| + βen (w)
2
n 0
H (wZn−k:n )
Z ∞
q
+
w−1/γ2 −1 |βn (w)| + βen (w) dw.
γ x
−1/γ1 ±0
By a similar treatment as that of
√
kRn3 (x) , we get
√
kRn4 (x) = op x(2η−p)/γ±0 . By
substituting the results obtained above, for the terms Tnj (x) , j = 1, 2, 3, in equation (5.21) ,
we end up with
(Z
(0)
x
√
k
dH (wZn−k:n )
−1/γ1
1/γ2
kMn1 (x) = x βn (x) + x
βn (w)
2
n
0
H (wZn−k:n )
)
Z x
(1)
(wZ
)
dH
n−k:n
+ op x(2η−p)/γ±0 .
−
βen (w)
2
0
H (wZn−k:n )
√
The asymptotic negligibility (in probability) of kMn2 (x) is readily obtained. Indeed,
√
note that we have kMn1 (x) = Op x(2η−p)/γ±0 and from Theorem 2 in Csörgő (1996),
√
we infer that F (Zn−k:n ) /F n (Zn−k:n ) − 1 = Op k −1/2 . This means that kMn2 (x) =
op x(2η−p)/γ±0 (because k → ∞). Recall now that
√
F (xZn−k:n ) √
kMn3 (x) = −
kMn1 (1) ,
F (Zn−k:n )
√
which, by applying Potter’s inequalities (5.23) to F , yields that kMn3 (x) is equal to
(Z
(0)
1
dH (wZn−k:n )
k
1/γ1
−1/γ1
− x βn (x) − x
βn (w)
2
n
0
H (wZn−k:n )
)
Z x
(1)
dH
(wZ
)
n−k:n
(2η−p)/γ±0
βen (w)
−
+
o
x
.
p
2
0
H (wZn−k:n )
Therefore
3
X
√
kMni (x) = x1/γ2 βn (x) − x−1/γ1 βn (1) + Dn βn , βen ; x + op x(2η−p)/γ±0 ,
i=1
where
(Z
)
Z x
(0)
(1)
x
dH
(wZ
)
dH
(wZ
)
k
n−k:n
n−k:n
βn (w)
−
βen (w)
,
Dn βn , βen ; x := x−1/γ1
2
2
n
1
1
H (wZn−k:n )
H (wZn−k:n )
which, by routine manipulations as the above, is shown to be equal to
Z x
Z x
−1
1/γ−1 e
−1
1/γ−1
−1/γ1
γ1
w
βn (w) dw − γ1
w
βn (w) dw + op x(2η−p)/γ±0 .
x
1
1
Therefore, we have
3
X
√
kMni (x) = x1/γ2 βn (x) + op x(2η−p)/γ±0
i=1
−x
−1/γ1
Z
−1
βn (1) − γ1
1
x
w1/γ−1 βen
(w) dw −
γ1−1
Z
x
w
1/γ−1
βn (w) dw .
1
We are now in position to apply the well-known Gaussian approximation (5.16) to get
r
r
3
X
√
n
n
1/γ2
−1/γ1
kMni (x) = x
Bn (x) − x
Bn (1)
k
k
i=1
(Z
)
Z 1
x
∗
B
(w)
B
(w)
n
n
1
(2η−p)/γ±0
+ x−1/γ1
dH
(w)
−
dH
(w)
+
o
x
,
p
2
2
1 H (w)
0 H (w)
where Bn (w) and B∗n (w) are two Gaussian processes defined by
(1)
e n (w) := −Bn 1 − H (0) (wZn−k:n ) .
Bn (w) := Bn (θ) − Bn θ − H (wZn−k:n ) and B
By similar arguments as those used in Lemma 5.2 in Brahimi et al. (2015), we end up with
3
X
√
kMni (x)
i=1
r
n
n
k −1/γ
k
−1/γ1
−x
=x
Bn
x
Bn
k
n
k
n
r
Z
k −1/γ
k −1/γ
x−1/γ1 n x 1/γ−1 e
pBn
− qBn
du + op x(2η−p)/γ±0 ,
u
u
u
+
γ
k 1
n
n
1/γ2
r
e n (s) , 0 < s < 1, are sequences of centred Gaussian processes defined
where Bn (s) and B
e n (s) := −Bn (1 − qs) . Let {Wn (t) ; 0 ≤ t ≤ 1} be a
by Bn (s) := Bn (θ) − Bn (θ − ps) and B
sequence of Wiener processes defined on (Ω, A, P) so that
D
{Bn (t) ; 0 ≤ t ≤ 1} = {Wn (t) − tWn (1) ; 0 ≤ t ≤ 1} .
(5.28)
P √
It is easy to verify that 3i=1 kMni (x) = Jn (x) + op x(2η−p)/γ±0 , which is exactly (5.20) .
Finally, we take care of the term Mn4 (x) . To this end, we apply the uniform inequality of
second-order regularly varying functions to F (see, e.g., the bottom of page 161 in de Haan
and Ferreira (2006)), to write
τ /γ1
F (xZn−k:n )
−1
−1/γ1
−1/γ1 x
−x
= (1 + op (1)) A1 (Zn−k:n ) x
,
τ /γ1
F (Zn−k:n )
and
τ /γ1
F (xh)
−1
−1/γ1
−1/γ1 x
−x
,
= (1 + o (1)) A1 (h) x
τ /γ1
F (h)
p
uniformly on x ≥ pγ . Since A1 is regularly varying at infinity (with index τ γ1 ) and Zn−k:n /h →
1, it follows that A1(Zn−k:n) = (1 + op(1)) A1(h), therefore
√
kMn4 (x) = (1 + op (1)) x−1/γ1
By assumption we have
√
xτ /γ1 − 1 √
kA1 (h) , as n → ∞.
τ /γ1
kA1 (h) = O (1) , then
Dn (x) − Jn (x) − x−1/γ1
xτ /γ1 − 1 √
kA1 (h) = op x(2η−p)/γ±0 .
τ /γ1
Let η0 > 0 such that 1/4 < η < η0 < p/2, then
τ /γ1
− 1√
(p−2η0 )/γ
−1/γ1 x
x
Dn (x) − Jn (x) − x
kA1 (h) = op x2(η−η0 )/γ±0 .
τ /γ1
Now, we choose 0 sufficiently small so that (η − η0 ) /γ + 0 < 0. Since x ≥ pγ > 0, then
op x2(η−η0 )/γ±0 = op (1) , hence for 1/4 < η0 < p/4
τ /γ1
− 1√
(p−2η0 )/γ
−1/γ1 x
x
Dn (x) − Jn (x) − x
kA1 (h) = op (1) ,
τ /γ1
uniformly on x ≥ pγ . To achieve the proof it suffices to replace γ by pγ1 and choose = p−2η0
so that 0 < < 1/2, as sought.
5.2. Proof of Theorem 3.1. For the consistency of γ̂1, we make an integration by parts and a change of variables in equation (3.9) to get
γ̂1 = ∫_1^∞ x^{−1} (F̄n(xZ_{n−k:n})/F̄n(Z_{n−k:n})) dx,
which may be decomposed into the sum of I1n := ∫_1^∞ x^{−1} F̄(xZ_{n−k:n})/F̄(Z_{n−k:n}) dx and I2n := ∫_1^∞ x^{−1} Σ_{i=1}^{3} Mni(x) dx. By using the regular variation of F̄ (1.1) and the corresponding Potter's inequalities (5.23), we get I1n → γ1 as n → ∞. Then, we just need to
show that I2n tends to zero in probability. From (5.20) we have
Z ∞
Z
op (1) ∞ −1−/(pγ1 )
1
−1
x Jn (x) dx + √
x
dx,
I2n = √
k 1
k 1
where the second integral above is finite and therefore the second term of I2n is negligible
R∞
R∞
in probability. On the other hand, we have 1 x−1 Jn (x) dx = 1 x−1 (J1n (x) + J2n (x)) dx,
where J1n (x) and J2n (x) are the two centred Gaussian processes given in Theorem 2.1. After
some elementary but tedious manipulations of integral calculus, we obtain
Z ∞
x−1 Jn (x) dx
(5.29)
1
r Z 1
r
q
k
n
k
k
n
−q−1
=γ
s
s + 1−
s
ds − γ1
Wn,1
Wn,2
Wn,1
.
k 0
n
p
n
k
n
Since {Wn(s), 0 ≤ s ≤ 1} is a sequence of Wiener processes, we may readily show that
r
r
n
k
n
k
1/2
E Wn,1
s ≤ (ps)
and
E Wn,2
s ≤ (qs)1/2 .
k
n
k
n
R∞
But γ1 < γ2 , hence 0 < q < 1/2 and thus it is easy to verify that E 1 x−1 Jn (x) dx < ∞.
√
P
This yields that I2n →
0 when n → ∞ (because 1/ k → 0), as sought. As for the Gaussian
√
R∞
representation result, we write k (b
γ1 − γ1 ) = 1 x−1 Dn (x) dx, then by applying Theorem
√
2.1 together with the representation (5.29) and the assumption, we have kA1 (h) → λ, we
√
λ
get k (b
γ1 − γ1 ) = C1n + C2n + C3n +
+ op (1) , where
1−τ
r Z 1
r Z 1
n
k
q
n
k
−q−1
−q−1
C1n := γ
s
Wn,2
s ds, C2n := γ 1 −
s
Wn,1
s ds
k 0
n
p
k 0
n
r
n
k
and C3n := −γ1
Wn,1
. The computation of the limit of E [C1n + C2n + C3n ]2 gives
k
n
2
2qγ 2
q
2pγ 2
γ2
γ2
q
γ2
+
1
−
+
−
2
1
−
=
,
2q 2 − 3q + 1
p 2q 2 − 3q + 1
p
p
p
p (2p − 1)
which by substituting pγ1 for γ completes the proof.
Concluding notes
On the basis of the Nelson–Aalen nonparametric estimator, we introduced a product-limit process for the tail of a heavy-tailed distribution of randomly right-censored data. The Gaussian approximation of this process proved to be a very useful tool in achieving the asymptotic normality of the estimators of tail indices and related statistics. Furthermore, we defined a Hill-type estimator for the extreme value index and determined its limiting Gaussian distribution. Intensive simulations show that the latter outperforms the already existing ones with respect to bias and MSE. It is noteworthy that the asymptotic behavior of the newly proposed estimator is assessed only under the second-order condition of regular variation of the underlying distribution tail, in contrast to the unfamiliar assumptions of Worms and Worms (2014) and Einmahl et al. (2008). This represents the main improvement brought by this paper. Our approach will have fruitful consequences and open interesting paths in the statistical analysis of extremes with incomplete data. The generalization of this approach to the whole range of maximum domains of attraction would make a very good topic for future work.
6. Appendix
The following Proposition provides useful results with regard to the uniform empirical and quantile functions Un(t) and Vn(t) respectively.
Proposition 6.1. (i) For n ≥ 1 and 0 < a < 1, we have
sup_{a/n ≤ t ≤ 1} (t/Un(t))^{±1} = Op(1) = sup_{a/n ≤ t ≤ 1} (t/Vn(t))^{±1}.
(ii) For n ≥ 1, 0 < b < 1 and 0 < η < 1/2, we have
sup_{0 < t ≤ 1} n^η |Un(t) − t| / t^{1−η} = Op(1) = sup_{b/n ≤ t ≤ 1} n^η |Vn(t) − t| / t^{1−η}.
(iii) Let ϕ be a regularly varying function at infinity with index α and let 0 < an < bn < 1 be such that n an = O(1) and bn ↓ 0. Then
sup_{an ≤ t ≤ bn} (ϕ(t)/ϕ(Vn(t))) = Op(1).
Proof. The proofs of assertion (i) and the first result of assertion (ii) may be found in Shorack
and Wellner (1986) in pages 415 (assertions 5-8 ) and 425 (assertion 16) respectively. The
second result of assertion (ii) is proved by using the first results of both assertions (i) and
(ii) . For the third assertion (iii) , it suffices to apply Potter’s inequalities (5.23) to function
ϕ together with assertion (i) corresponding to Vn (t) .
Lemma 6.1. Let In(x) be the interval of endpoints H̄^(1)(xZ_{n−k:n}) and H̄n^(1)(xZ_{n−k:n}). Then
sup_{x ≥ p^γ} sup_{t ∈ In(x)} (L(t)/L(Vn(t))) = Op(1), as n → ∞,
where L(s) := F̄(Q^(1)(s))/H̄(Q^(1)(s)), with Q^(1) being the generalized inverse of H̄^(1).
Proof. First, we show that, for any small 0 < ξ < 1, there exists a constant c > 0, such that In(x) is included in the half-open interval ((1 − ξ)p x^{−1/γ}/n, ck/n], with probability close to 1, as n → ∞. Indeed, following similar arguments as those used in the proof of part (i) in Lemma 4.1 in Brahimi et al. (2015), we infer that for all large n
H̄^(1)(xZ_{n−k:n}) = (1 + op(1)) p x^{−1/γ} H̄(Z_{n−k:n}),   (6.30)
uniformly on x ≥ p^γ. We have H̄(Z_{n−k:n}) = (1 + op(1)) k/n, which implies that
P((1 − ξ)p x^{−1/γ} k/n < H̄^(1)(xZ_{n−k:n}) < (1 + ξ)p x^{−1/γ} k/n) → 1, as n → ∞.
Since k > 1 and x ≥ p^γ, then
P((1 − ξ)p x^{−1/γ}/n < H̄^(1)(xZ_{n−k:n}) < (1 + ξ) k/n) → 1, as n → ∞.   (6.31)
On the other hand, it is obvious that for any x ≥ p^γ, P(H̄n^(1)(xZ_{n−k:n}) ≥ 1/n) = 1 and H̄n^(1)(xZ_{n−k:n}) = Un(H̄^(1)(xZ_{n−k:n})) a.s. Then, in view of assertion (i) in Proposition 6.1, we have H̄n^(1)(xZ_{n−k:n}) = Op(H̄^(1)(xZ_{n−k:n})); it follows, from (6.30), that H̄n^(1)(xZ_{n−k:n}) = Op(k/n), uniformly on x ≥ p^γ. This means that there exists d > 0 such that
P((1 − ξ)p x^{−1/γ}/n ≤ H̄n^(1)(xZ_{n−k:n}) < dk/n) → 1, as n → ∞.   (6.32)
Therefore, from (6.31) and (6.32), P(In(x) ⊂ ((1 − ξ)p x^{−1/γ}/n, ck/n)) tends to 1 as n → ∞, for any x ≥ p^γ, where c := max(d, 1 + ξ), as sought. Next, let an(x) := (1 − ξ)p x^{−1/γ}/n and bn := ck/n ↓ 0. It is clear that 0 < an(x) < bn < 1 and n an(x) < 1 − ξ, that is, n an(x) = O(1) for any x ≥ p^γ. Then, by using assertion (iii) of Proposition 6.1, we get sup_{t∈In(x)} (L(t)/L(Vn(t))) = Op(1), as n → ∞, uniformly on x ≥ p^γ.
Lemma 6.2. Let βn and β̃n be the two empirical processes respectively defined in (5.18) and (5.19). Then, for all large n and any 1/4 < η < p/2, we have
(i) sup_{w > 1} w^{(1−η)/γ} |βn(w)| = op(1) = sup_{w > 1} w^{(1−η)/γ} |β̃n(w)|.
Moreover, for any small ε0 > 0, we have, uniformly on 0 < s < x^{−1/γ1},
(ii) βn(ξn(s)) − βn(s^{−γ1}) = op(s^{(1−η)γ1/γ ± ε0}),
where ξn(s) is of the form (5.27).
Proof. Recall that
r n
o
n
(1)
(1)
βn (w) =
αn (θ) − αn θ − H (wZn−k:n ) , for 0 < H (wZn−k:n ) < θ,
k
D
and note that {αn (θ) − αn (θ − t) , 0r< t < θ} = {αn (t) , 0 < t < θ} . Then without loss of
n (1)
generality, we may write βn (w) =
αn H (wZn−k:n ) , for any w > 1. It is easy to
k
verify that
(1)
!η−1 η
(1)
(1)
n Un H (wZn−k:n ) − H (wZn−k:n )
1−η
H
(wZ
)
n
n−k:n
.
w(1−η)/γ |βn (w)| = √
(1)
η−1
w−1/γ
k
H (wZn−k:n )
Making use of Proposition 6.1 (assertion (ii)), we infer that
(1)
(1)
η
n Un H (wZn−k:n ) − H (wZn−k:n )
= Op (1) ,
(1)
η−1
H (wZn−k:n )
uniformly on w > 1, for any 1/4 < η < p/2. It follows that
!η−1
1−η
(1)
H
(wZ
)
n
n−k:n
w(1−η)/γ |βn (w)| = Op √
.
−1/γ
w
k
On the other hand, by using Potter’s inequalities (5.23) to both F and G, with similar
arguments as those used in the proof of Lemma 4.1 (assertion (i)) in Brahimi et al. (2015), we
n
(1)
p
may readily show that limz→∞ supw>1 H (zw) /H (z) − pw−1/γ = 0. Since H (Zn−k:n ) →
k
1, then w(1−η)/γ |βn (w)| = Op (k/n)η k 1/2−2η uniformly on w > 1. We have k → ∞,
k/n → 0 and 1/2 − 2η < 0, for any 1/4 < η < p/2, therefore w(1−η)/γ |βn (w)| = op (1) ,
uniformly on w > 1, leading to the first result of assertion (i) . The proof of the second result
follows similar arguments. For assertion (ii) , we first note that
D
βn (ξn (s)) − βn s−γ1 , 0 < s < 1 = βn ξn (s) − s−γ1 , 0 < s < 1 .
(η−1)/γ
By using assertion (i) , we write βn (ξn (s) − s−γ1 ) = op |ξn (s) − s−γ1 |
, uniformly
on 0 < s < x−1/γ1 and by applying (once again) Potter’s inequalities (5.23) to F ← (1 − ·) ,
we infer that ξn (s) − s−γ1 = op s−1/γ1 ±0 . It follows that
βn ξn (s) − s−γ1 = op s(1−η)γ1 /γ±0 ,
which completes the proof.
References
Aalen, O., 1976. Nonparametric inference in connection with multiple decrement models.
Scand. J. Statist. 3, 15-27.
Beirlant, J., Guillou, A., Dierckx, G. and Fils-Villetard, A., 2007. Estimation of the extreme
value index and extreme quantiles under random censoring. Extremes 10, 151-174.
Beirlant, J., Bardoutsos, A., de Wet, T. and Gijbels, I., 2016. Bias reduced tail estimation
for censored Pareto type distributions. Statist. Probab. Lett. 109, 78-88.
Benchaira, S., Meraghni, D. and Necir, A. , 2016. Tail product-limit process for truncated
data with application to extreme value index estimation. Extremes 19, 219-251.
Bingham, N.H., Goldie, C.M. and Teugels, J.L., 1987. Regular Variation. Cambridge University Press.
Brahimi, B., Meraghni, D. and Necir, A., 2015. Approximations to the tail index estimator
of a heavy-tailed distribution under random censoring and application. Math. Methods
Statist. 24, 266-279.
Csörgő, M., Csörgő, S., Horváth, L. and Mason, D.M., 1986. Weighted empirical and quantile
processes. Ann. Probab. 14, 31-85.
Csörgő, S., 1996. Universal Gaussian approximations under random censorship. Ann. Statist.
24, 2744-2778.
Deheuvels, P. and Einmahl, J.H.J., 1996. On the strong limiting behavior of local functionals
of empirical processes based upon censored data. Ann. Probab. 24, 504-525.
Efron, B., 1967. The two-sample problem with censored data. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics 4, 831-552.
Einmahl, J.H.J., Fils-Villetard, A. and Guillou, A., 2008. Statistics of extremes under random
censoring. Bernoulli 14, 207-227.
Einmahl, J.H.J. and Koning, A.J., 1992. Limit theorems for a general weighted process under
random censoring. Canad. J. Statist. 20, 77-89.
Fleming, T.R. and Harrington, D. P., 1984. Nonparametric estimation of the survival distribution in censored data. Comm. Statist. A-Theory Methods 13, 2469-2486.
Gardes, L. and Stupfler, G., 2015. Estimating extreme quantiles under random truncation.
TEST 24, 207-227.
de Haan, L. and Stadtmüller, U., 1996. Generalized regular variation of second order. J.
Australian Math. Soc. (Series A) 61, 381-395.
de Haan, L. and Ferreira, A., 2006. Extreme Value Theory: An Introduction. Springer.
Hill, B.M., 1975. A simple general approach to inference about the tail of a distribution.
Ann. Statist. 3, 1163-1174.
Huang, X. and Strawderman, R. L, 2006. A note on the Breslow survival estimator. J.
Nonparametr. Stat. 18, 45-56.
Kaplan, E.L. and Meier, P., 1958. Nonparametric estimation from incomplete observations.
J. Amer. Statist. Assoc. 53, 457-481.
Ndao, P., Diop, A. and Dupuy, J.-F., 2014. Nonparametric estimation of the conditional tail
index and extreme quantiles under random censoring. Comput. Statist. Data Anal. 79,
63–79.
Ndao, P., Diop, A. and Dupuy, J.-F., 2016. Nonparametric estimation of the conditional
extreme-value index with random covariates and censoring. J. Statist. Plann. Inference
168, 20-37.
Nelson, W., 1972. Theory and applications of hazard plotting for censored failure data. Technometrics 14, 945-966.
Peng, L., 1998. Asymptotically unbiased estimators for the extreme-value index. Statist.
Probab. Lett. 38, 107-115.
Reiss, R.-D. and Thomas, M., 2007. Statistical Analysis of Extreme Values with Applications
to Insurance, Finance, Hydrology and Other Fields, 3rd ed. Birkhäuser Verlag, Basel,
Boston, Berlin.
Resnick, S., 2006. Heavy-Tail Phenomena: Probabilistic and Statistical Modeling. Springer.
Shorack, G.R. and Wellner, J.A., 1986. Empirical Processes with Applications to Statistics.
Wiley.
Worms, J. and Worms, R., 2014. New estimators of the extreme value index under random
right censoring, for heavy-tailed distributions. Extremes 17, 337-358.
NORM GROWTH FOR THE BUSEMANN COCYCLE
arXiv:1704.02274v1 [math.GR] 7 Apr 2017
THIBAUT DUMONT
Abstract. Using explicit methods, we provide an upper bound to the norm of the
Busemann cocycle of a locally finite regular tree X, emphasizing the symmetries of the
cocycle. The latter takes values in a submodule of square summable functions on the edges of X, which corresponds to the Steinberg representation for rank one groups acting on their Bruhat-Tits tree. The norm of the Busemann cocycle is asymptotically linear with respect to the square root of the distance between any two vertices. Independently, Gournay and Jolissaint [10] proved an exact formula for harmonic 1-cocycles covering
the present case.
Contents
1. Introduction
2. Preliminaries
3. The proof
4. Symmetry and spherical rearrangement
References
1. Introduction
We present a method from the author’s doctoral dissertation [7]. Therein, the study
of the norm growth of the cocycles introduced by Klingler in [12] is transported, in rank
one, to the geometric Question 1 below.
Let q ≥ 2 be an integer and X be a q + 1 regular tree with vertex set also denoted X
by abuse, edge set E, and visual boundary ∂X. The Busemann cocycle B : X 2 → C(∂X)
is given by the usual formula
B(x, y)(ξ) = lim_{z→ξ} [d(y, z) − d(x, z)],   (1)
where d is the metric on X giving length 1 each edge. Let B̄ denote the composition
with the quotient map C(∂X) → C(∂X)/C modding out constant functions.
In a context of Bruhat-Tits building [13], Klingler introduces a transform named after
Poisson which, in the present setting, is a map P : C(∂X) → C^E defined by integration against an Aut(X)-equivariant field ν : E → M(∂X) of signed measures on ∂X:
Pφ(e) := ∫_{∂X} φ dνe,   (2)
where φ ∈ C(∂X) and e ∈ E, (see Section 2.3 for precise definitions). The Poisson
transform is Aut(X)-equivariant, factors through C(∂X)/C, and maps locally constant
functions into ℓ2 (E) (Proposition 5). This was first proved by Klingler for Bruhat-Tits
trees [13], where a similar statement is proved for more general Bruhat-Tits buildings
in relation to the Steinberg representation as explained in the motivations below.
Question 1. Find an upper bound for the norm ‖PB̄(x, y)‖_{ℓ²(E)} depending only on x, y (and q).
The present paper exposes the solution developed in [7, Chap. 4], which emphasizes the symmetries of the cocycle and hopes to inspire an approach to the higher rank case started in [7, Chap. 2–3].
For a fixed q, the norm depends only on d(x, y) thanks to the equivariance of the construction, and one may ask to determine the asymptotic growth type of ‖PB̄(x, y)‖ as d(x, y) → ∞. The difficulty lies in the search for an upper bound. In fact, we prove:
Theorem 2. For every integer q ≥ 2, there are constants C, K > 0 such that
4 d(x, y) ≤ ‖PB̄(x, y)‖² ≤ C d(x, y) + K,
for all x, y ∈ X, with constants given by:
C = 8(q + 1)²/(q − 1)²  and  K = 16q²(2q + 1)/((q − 1)³(q + 1)).
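For concreteness, here is a small numerical sketch (our own addition, not part of the original text) evaluating the constants of Theorem 2 and the resulting two-sided bound on ‖PB̄(x, y)‖² for a few values of q and d(x, y); the sample values of q and d are chosen arbitrarily for illustration.

```python
def theorem2_constants(q: int):
    # Constants of Theorem 2: C = 8(q+1)^2/(q-1)^2 and K = 16 q^2 (2q+1) / ((q-1)^3 (q+1)).
    C = 8 * (q + 1) ** 2 / (q - 1) ** 2
    K = 16 * q ** 2 * (2 * q + 1) / ((q - 1) ** 3 * (q + 1))
    return C, K

for q in (2, 3, 5):
    C, K = theorem2_constants(q)
    for d in (1, 10, 100):
        lower = 4 * d          # lower bound of Theorem 2
        upper = C * d + K      # upper bound of Theorem 2
        print(f"q={q}, d={d}: {lower} <= ||PB(x,y)||^2 <= {upper:.2f}")
```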
In an independent work, Gournay and Jolissaint obtained a formula for the norm of harmonic cocycles [10, Theorem 1.2] which subsumes our estimate. Indeed, since the average of B(x, y) over the neighbors y of x is proportional to the indicator function on ∂X, the cocycle B̄ : X² → C(∂X)/C is harmonic(1), i.e. satisfies Σ_{y∼x} B̄(x, y) = 0. Therefore PB̄ yields a harmonic 1-cocycle of Aut(X) for its regular representation into ℓ²(E). Viewing it as an inhomogeneous 1-cocycle, their result implies
Theorem 3 (Gournay-Jolissaint [10, Theorem 1.2]). For every integer q ≥ 2, there are constants C′, K′ > 0 such that
‖PB̄(x, y)‖² = C′ d(x, y) − K′ (1 − q^{−d(x,y)}),
for all x, y ∈ X.
The discrete Laplacian plays a central role in establishing the above formula as it is
invertible in regular trees.
1.1. On the proof of Theorem 2. Let e ∈ E be an oriented edge of X and let Aut(X)e denote its stabilizer. The measure νe is defined on the partition ∂X = Ωe^+ ⊔ Ωe^− into Aut(X)e-orbits as the difference νe^+ − νe^− of the probability measures supported on Ωe^+ and Ωe^− respectively and proportional to a fixed visual measure. For definiteness we take Ωe^− to be the shadow of t(e), the target of e, cast by light emanating from o(e), the origin of e, as in Figure 1. One should think of e as being the neck of an hourglass with sand flowing from Ωe^+ through e in the direction of Ωe^−.
By construction the map e 7→ νe is equivariant under any automorphism of X and
if ē denotes the reverse orientation then νē = −νe . Thus the Poisson transform satisfies
Pφ(ē) = −Pφ(e). Each geometric edge has therefore a preferred orientation e for which
PB̄(x, y)(e) is non-negative; Figure 2 illustrates this globally. In that case, the subset
of ∂X where B(x, y) takes its maximum, namely d(x, y), has νe+ -measure contributing
more to the value of PB̄(x, y)(e) than its νe− -measure. Symmetrically, the set where
B(x, y) is minimal and equal to −d(x, y) will have larger νe^−-measure. For the latter, signs cancel, which explains in principle the positivity of the Poisson transform for this
preferred orientation. One notices the central role of the barycenter of [x, y] in the
symmetry.
(1)This is distinct from the fact that Σ_{o(e)=x} νe = 0, which implies that P ranges in the space of harmonic functions on E, see Section 2.3.
Figure 1. An oriented edge e.
Figure 2. The preferred orientations.
The next step derives an integration formula (Propositions 9 and 11) for
PB̄(x, y)(e) = ∫_{∂X} B(x, y) dνe,
which exhibits a symmetry around the barycenter of [x, y]. More precisely, every edge e has an associated edge e′, which is aligned with [x, y] if and only if e is so, whose position relative to [x, y] is opposite with respect to the barycenter, and for which the integration formula shows PB̄(x, y)(e) = PB̄(x, y)(e′), see Corollary 16.
the aforementioned formulae for PB̄(x, y)(e) and PB̄(x, y)(e′ ) to obtain a spherical rearrangement of the terms into non-negative quantities (Sections 4.1 and 4.2) and in turn
yields the relevant upper bounds for PB̄(x, y)(e), see equations (8) and (9).
The upper bound of Theorem 2 is obtained by observing that the sum of PB̄(x, y)(e)2
over the edges not on [x, y] is bounded independently of x, y and that each edge on [x, y]
contributes a uniformly bounded quantity to the ℓ2 -norm, see Section 3.3.
The spherical rearrangement of Section 4.1 also provides the lower bound by applying
Cauchy-Schwarz to the indicator function of the edges of [x, y] pointing towards y.
1.2. Motivations. Let G be a group with a length function L, typically the word length
function of a compact generating set of a locally compact group, and let V be a Banach
space endowed with a linear isometric action of G. In this setting, cohomology theories
obtained by imposing a growth condition (with respect to L) on the norm of cocycles
have been extensively studied in recent decades and have proven themselves powerful
refinements of group cohomology, e.g. bounded cohomology [11], [15], or polynomially
bounded cohomology [16]. The polynomial theory has notable applications to the ℓ1 analogue of the Novikov conjecture [17].
Our result represents the first step of a program initiated in [7] aiming at determining
the norm growth of the natural cocycles introduced by Klingler [12] for some (almostsimple) algebraic groups over a non-Archimedean local field of characteristic 0 and their
Steinberg representation. The latter, introduced by Matsumoto [14] and Shalika [19], is
often called the special representation as in [3], [2].
For simplicity, we focus on the special linear group G = SLn (Qp ) over a p-adic
field, n ≥ 2. Among the (admissible) irreducible pre-unitary representation of G, the
Steinberg representation St is the only non-trivial coefficient module for which the cohomology of G does not vanish identically in positive degree. This result was implicit
in the work of Borel [1] for large p using ideas of Garland [9], but was first proved in
full generality by Casselman [6]. More precisely, the only non-vanishing positive degree
is the rank r = n − 1 of G for which Hr (G, St) = C. In [12], Klingler geometrically
builds an r-cocycle volG whose class spans the r-cohomology [12, Théorème 1]. Later
Monod [15, Problem P] suggested volG should be a good candidate to look for polynomial growth phenomenon. He is actually interested in ‘quasifying’ volG to obtain a new
polynomially bounded cohomology class of degree r + 1 which remains an open question.
To study the norm of the cocycle volG , one needs to understand the G-invariant
inner product of the Steinberg representation. The Poisson transform [13] relative to an
Iwahori subgroup B of G yields an explicit isomorphism of (admissible) representations
between St and a submodule H ⊂ L2 (G/B) of (smooth) harmonic functions. Borel–
Serre [3, §5.10] had earlier established this isomorphism using abstract arguments leaving
no hope for explicit computations; the same holds for the argument in [2, §6].
Both volG and the Poisson transform have geometric description using the BruhatTits building of G. Following Klingler [12] [13], the cocycle volG for G = SL2 (Qp )
corresponds to the Busemann cocycle B̄ of the Bruhat-Tits tree of G, whereas the
Poisson transform relative to an Iwahori subgroup (edge stabilizer) is the map P studied
here. More precisely, fixing a base vertex x, the map volG (g, g ′ ) := B̄(gx, g ′ x) defines
Klingler’s homogeneous 1-cocycle for Aut(X) valued into the representation C∞ (∂X)/C
of locally constant functions modulo constants. The latter is pre-unitary because it is
isomorphic, under the Poisson transform, to a submodule of ℓ2 (E), and identifies to St
when restricted to G. This geometric description allows one to somewhat ignore the role of G, and the definitions extend to arbitrary regular trees. More generally, if X is a regular locally finite Euclidean building, e.g. as studied in [7] for type Ã2, similar constructions can and will be considered in future work.
1.3. Acknowledgements. The author is indebted to many people and organizations
which supported the present research, and wishes to thank them warmly. Mainly performed at the EPF Lausanne, the project was funded by the Swiss Confederation and
the ERC Advance Grant 267635 attributed to Nicolas Monod. The author’s stay at the
University of Utah, which we thank for its generous hospitality, led to the completion of
the present work under the sponsorship of the Swiss NSF via an Early Postdoc.Mobility
grant (Project 171852, Cohomology and Bruhat-Tits Buildings).
We thank also Maxime Gheysens for discussions, notably leading to the proof of the
lower bound in Theorem 2. Finally, we benefited greatly from the supervision of Nicolas
Monod who suggested this research topic; we are profoundly grateful.
2. Preliminaries
We start with some preliminaries regarding the Poisson transform and the Busemann
cocycle. Our conventions for graphs and trees follow Serre’s book [18].
Let q ≥ 2 be an integer and X a (q + 1)-regular tree. In addition X comes with an
extremity map (o, t) : E → X × X, assigning to each edge e its origin o(e) and target
t(e), and an orientation reversing map e 7→ ē. A geometric edge is understood to be a
set of the form {e, ē}.
We identify the tree X with its geometric realization and endow it with the metric
for which geometric edges have length 1. We denote SR (x) the sphere about x of radius
R ∈ [0, ∞). The visual boundary of X, denoted ∂X, is the set of asymptotic classes of
geodesic rays r : [0, ∞) → X. Endowed with the cone topology, ∂X is compact metrizable and totally disconnected; the basis for the topology being given by the following
family of closed-open subsets:
Ωx(z) := {ξ ∈ ∂X | z sits on the geodesic ray from x to ξ},   (3)
with x, z ∈ X. The set Ωx(z) is called the shadow of z from x (or from light emanating
from x). The visual boundary ∂X is also homeomorphic to the projective limit of the
system
{Sn+1 (x) → Sn (x) | n ∈ N},
for any base point x ∈ X, where each sphere is given the discrete topology.
2.1. Busemann Cocycle. For every pair of vertices (x, y) ∈ X 2 , the function z 7→
d(y, z) − d(x, z) can be extended to the visual boundary via (1). The induced map
B(x, y) : ∂X → R is continuous and called the Busemann cocycle, a terminology justified
by the 1-cocycle identity:
B(x, z) = B(x, y) + B(y, z).
The Busemann cocycle is a locally constant function on ∂X as we now see by identifying
its values and level sets. We consider two fixed vertices x and y. Given ξ ∈ ∂X, let r
be the unique geodesic ray from x to ξ. By definition, the value of Busemann cocycle
at ξ ∈ ∂X is given by:
B(x, y)(ξ) = lim_{t→∞} [d(y, r(t)) − d(x, r(t))] = lim_{t→∞} [d(y, r(t)) − t].
The argument of the limit is in fact constant as soon as r(t) reaches the geodesic ray r ′
joining y to ξ. More precisely,
B(x, y)(ξ) = t′ − t
for all t, t′ ≥ 0 which satisfy r ′ (t′ ) = r(t). Set d := d(x, y) and k := d(x, r ′ ), then
B(x, y)(ξ) = d − 2k.
Consider the geodesic segment σ : [0, d] → X from x to y; the level set for the above
value is given by:
B(x, y)^{−1}(d − 2k) = {ξ | B(x, y)(ξ) = d − 2k} = Ωx(σ(k)) ∖ Ωx(σ(k + 1)),
for integers 0 ≤ k < d, and equals Ωx (y) otherwise.
2.2. Visual measure νx and Radon-Nikodym derivative. The well-known probability measures νx on ∂X were, to the best of our knowledge, introduced by Cartier in
[4] and [5, §8] as tree analogues of the visual measures on the visual boundary of the
hyperbolic plane. We refer also to the book of Figà-Talamanca and Nebbia [8, Chap II]
for visual measures. Fix a base vertex x, the Borel probability measure νx , called visual
measure at x, can be defined as the projective limit of the uniform probability measures
on the spheres Sn (x), with n ∈ N. In other words, it is the unique measure on ∂X
satisfying
νx(Ωx(z)) = card(Sn(x))^{−1} = 1/(q^{n−1}(q + 1)),
for all z ∈ Sn (x) and all n ∈ N∗ . Different base points x, y ∈ X yield absolutely
continuous measures for which the Radon–Nikodym derivative is given by:
(dνx/dνy)(ξ) = q^{B(x,y)(ξ)},
for all ξ ∈ ∂X. By construction, x 7→ νx is equivariant under automorphisms of X.
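As a quick illustration (our own sketch, not part of the original text), the level sets of B(x, y) described in Section 2.1 and the shadow measure νx(Ωx(z)) = 1/(q^{n−1}(q + 1)) make the Radon–Nikodym relation easy to check numerically: integrating q^{B(x,y)} against νy over the finitely many level sets should return the total mass νx(∂X) = 1. The choice of q and d below is arbitrary.

```python
from fractions import Fraction

def shadow_mass(q, m):
    # Visual measure at a base vertex of the shadow of a vertex at distance m:
    # 1 for m = 0, and 1 / (q**(m-1) * (q+1)) for m >= 1 (Section 2.2).
    return Fraction(1) if m == 0 else Fraction(1, q ** (m - 1) * (q + 1))

def level_set_masses(q, d):
    # nu_x- and nu_y-masses of the level sets {B(x,y) = d - 2k}, k = 0,...,d,
    # read off from the description of the level sets in Section 2.1.
    mass_x, mass_y = {}, {}
    for k in range(d + 1):
        mass_x[k] = shadow_mass(q, k) - (shadow_mass(q, k + 1) if k < d else 0)
        mass_y[k] = shadow_mass(q, d - k) - (shadow_mass(q, d - k + 1) if k > 0 else 0)
    return mass_x, mass_y

q, d = 2, 5
mass_x, mass_y = level_set_masses(q, d)
print("total nu_x mass:", sum(mass_x.values()))   # expect 1
print("total nu_y mass:", sum(mass_y.values()))   # expect 1
# Radon-Nikodym check: the integral of q^{B(x,y)} against nu_y equals nu_x(dX) = 1.
print("int q^B d(nu_y):", sum(Fraction(q) ** (d - 2 * k) * mass_y[k] for k in range(d + 1)))
```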
2.3. Visual measure νe and Poisson transform. We now detail the construction of
the signed measure νe on ∂X associated to an oriented edge e ∈ E. We merely translate
into a geometric language the Poisson transform relative to Iwahori subgroups and its
associated measures developed by Klingler in [13].
The field of measures e 7→ νe constructed below is naturally Aut(X)-equivariant and
satisfies νē = −νe. First, we consider νe^+, the Borel probability measure supported on Ωe^+ := Ω_{t(e)}(o(e)) (see (3) for the notation), obtained from νo(e) by restriction and scaling as follows:
(q/(q + 1)) · νe^+ = νo(e)|_{Ωe^+}.
On the complement of Ωe^+, we define νe^− to be the Borel probability measure on Ωe^− := Ω_{o(e)}(t(e)) given by:
(1/(q + 1)) · νe^− = νo(e)|_{Ωe^−}.
Finally, we define the signed measure associated to e to be νe := νe+ − νe− . The Poisson
transform Pφ : E → C of a continuous function φ : ∂X → C is defined by integration,
see equation (2).
Remark 4. If φ is constant then Pφ = 0, hence the Poisson transform of a function
depends only on its class modulo constant functions.
The Poisson transform ranges in the space of alternate harmonic functions on E. For
if x ∈ X, we have
Σ_{o(e)=x} νe = 0,  thus  Σ_{o(e)=x} Pφ(e) = 0.
We reproduce and adapt the argument from [7, Proposition 2.3.6] showing that locally
constant functions have square summable Poisson transform.
Proposition 5. Let φ be a locally constant function on ∂X, then Pφ is in ℓ2 (E).
Proof. By compactness, φ takes only finitely many values and consequently there is a
finite disjoint open cover {Ωx (u) | u ∈ SR (x)} into small shadows, R-distant from a base
point x, on each piece of which φ is constant. Let K be the pointwise stabilizer of the
ball BR(x). Then φ is K-invariant and so is Pφ. Each tree Tu rooted at u ∈ SR(x) in the
complement of BR (x) has visual boundary Ωx (u) and is acted upon by K with quotient
a ray. Thus Pφ is constant on each level of Tu provided all edges point toward u. Using
harmonicity, the sum of Pφ(e) over q adjacent edges of Tu on level j ≥ 1 equals q times
the value of their common ancestor edge of level j − 1, again provided the edges point
toward u. Then a geometric series argument on each ray shows that
‖Pφ‖_{ℓ²(E)} ≤ C max_{d(e,x) ≤ R+1} |Pφ(e)|,
where C depends on R and q only.
3. The proof
This section aims at proving the following estimate of the ℓ²(E)-norm of the Poisson transform of B(x, y):
Σ_{e∈E} PB(x, y)(e)² ≤ C d(x, y) + C′,   (4)
where C, C′ are the constants of Theorem 2 depending only on q. The lower bound of the theorem is proved at the end of Section 4.1.
Here is a detailed outline of the proof.
Firstly, in Section 3.1, the oriented edges are parametrized according to their position
relatively to x and y. We distinguish edges aligned with the geodesic segment [x, y] from
those not, the projection of the latter onto [x, y] is well defined and equals neither x
nor y. The distances of their extremities to x and y are implicitly used as parameters.
Secondly, we obtain the integration formulae for PB(x, y)(e) (Propositions 9 and 11)
depending only on the aforementioned parameters (and q). The strategy is to decompose
the boundary ∂X into countably many disjoint open sets whose νe -measure is easily
computed, on each of which B(x, y) is constant, and then apply countable additivity of
the measure. In general, removing a finite subtree of X yields a decomposition of ∂X
into finitely many disjoint open subsets, the boundaries of the connected components of
the resulting forest. For instance, the level sets of B(x, y) are obtained by removing the
segment [x, y], see Section 2.1. On the other hand, Section 2.3 defines νe by removing the geometric edge {e, ē} and looking at the resulting partition ∂X = Ω_e^+ ⊔ Ω_e^-. Suppose
there is a geodesic σ containing e and [x, y], the aligned case, one could naively remove
their convex hull and obtain a finite open partition of ∂X. However the resulting finite
sum representing PB(x, y)(e) has no obvious closed form. Instead, picking σ : R → X to
be a geodesic line, with σ(0) = x say, we obtain a countable partition of ∂X indexed over
Z into open sets with explicit νe -measure (Lemma 8) and on which B(x, y) is constant
and given by the piecewise affine function f defined in (6). The case where e is not
aligned with [x, y] requires more work, but follows similar principles, see Section 3.2.
Thirdly, from the integration formula emerges a symmetry described in the introductory Section 1.1. Averaging the integration formulae of two symmetric edges provides
a rearrangement into non-negative terms spherically distributed around a point. The
averaging technique depends again on whether e is aligned with [x, y] or not (Section
4.1 and Section 4.2 respectively). This is achieved in the last part of the paper.
Finally, we gather in Section 3.3 the above ingredients to complete the proof of inequality (4).
3.1. Parametrization of the edges. We fix x, y ∈ X and an edge e ∈ E. Without
loss of generality we may assume d := d(x, y) is at least 2. The possible configurations
of x, y, e in X fall into two cases depending implicitly only on the distances between x,
y and the extremities of e:
(A) e is aligned with [x, y],
(B) e is not aligned with [x, y].
In (A), we shall always assume that there is a geodesic line σ : R → X such that
x = σ(0), y = σ(d) and (o(e), t(e)) = (σ(i), σ(i + 1)) for some i ∈ Z. This corresponds
to the preferred orientation of e discussed in Section 1.1.
On the other hand, we choose for (B) to orientate e toward [x, y], hence we assume the
existence of a geodesic segment τ : [0, j] → X such that (o(e), t(e)) = (τ (0), τ (1)) and for
which τ (j) is the projection of e onto [x, y]. This orientation does not necessarily give
positiveness of the Poisson transform of B(x, y), but it provides a uniform treatment
of (B). In this case τ(j) ≠ x, y and one may extend τ to a geodesic τ : (−∞, j] → X.
When q ≥ 3, one can extend τ further into a geodesic line τ : R → X intersecting [x, y]
only at τ (j), forming a cross with [x, y]. However if q = 2, it is not possible to do so;
this case gets special attention in Remark 12. We therefore assume q ≥ 3. Like in case
(A), we fix a geodesic σ : R → X with x = σ(0), y = σ(d), so that the projection of e
onto [x, y] is σ(i) = τ (j), for some integer 1 < i < d.
Definition 6. We say that an oriented edge e has parameter i ∈ Z if it is described
as in case (A) above with (o(e), t(e)) = (σ(i), σ(i + 1)). Otherwise, e is said to have
parameters (i, j), with 1 < i < d and j ≥ 1, as in case (B), and its projection onto [x, y]
corresponds to σ(i) = τ (j).
Lemma 7. The number of edges with parameter i ∈ Z is given by:
$$n(i) = \begin{cases} q^{|i|} & \text{if } i < 0,\\ 1 & \text{if } 0 \le i < d,\\ q^{\,i-d} & \text{if } d \le i,\end{cases}$$
whereas there are n(i, j) = (q − 1)q^{j−1} edges with parameters (i, j), for all 1 ≤ i ≤ d − 1 and j ≥ 1.
3.2. Integration formulae. In this section, we derive the integration formula for
PB(x, y)(e) which depends only on the parameters of e. With the parametrizations
of the previous section, two countable partitions of ∂X are obtained by removing the
geodesic lines σ and τ .
The first partition {Ω_k^σ | k ∈ Z} is obtained by removing only σ and is used for the aligned case (A). For every k ∈ Z, let
$$\Omega_k^\sigma := \bigsqcup_{\substack{z\in S_1(\sigma(k))\\ z\ne\sigma(k\pm1)}} \Omega_{\sigma(0)}(z).$$
Equivalently, we remove from X the open geometric edges of the geodesic line σ and consider the visual boundary components of the resulting forest. Of course, the end points of σ are νe-null sets and can be ignored. In the right hand side, the base point σ(0) = x can be replaced by any node of σ.
The νe-measure of Ω_k^σ can be computed using Sections 2.2 and 2.3. Suppose Ω_k^σ ⊂ Ω_e^+; the intuition is that one starts at the origin o(e) = σ(i) of e with a bag of sand of total mass one and then distributes this sand equally to the neighbouring nodes except t(e) = σ(i + 1), hence dividing the total sand in q equal piles. Repeating this process along σ, one reaches σ(k) having divided by q as many times as there are edges between σ(i) and σ(k), namely d(σ(k), σ(i)) = |k − i|. From here 1/q of the mass continues its travel along σ and the remaining (q − 1)/q of the mass at σ(k) will constitute the νe-measure of Ω_k^σ.
Lemma 8 (Proposition 4.3.4, [7]). Suppose e is aligned with [x, y] and parametrized by i ∈ Z, then the νe-measure of Ω_k^σ is given by:
(5) $\displaystyle\nu_e(\Omega_k^\sigma) = \frac{q-1}{q}\begin{cases} q^{-|k-i|} & \text{if } k-i\le 0,\\ -q^{-(k-i)+1} & \text{if } k-i>0,\end{cases}$
for all k ∈ Z.
To synthesize the right hand side, we define the following continuous real functions: $g(x) = q^{-|x|}$ (Figure 4) and
$$g_{1/2}(x) = \begin{cases} g(x) & \text{if } x\le 0,\\ 1-2x & \text{if } 0\le x\le 1,\\ -g(x-1) & \text{if } 1\le x,\end{cases}$$
so that (5) becomes $\nu_e(\Omega_k^\sigma) = \frac{q-1}{q}\,g_{1/2}(k-i)$, for all k ∈ Z. The presence of the index 1/2 emphasizes the central symmetry of the graph of $g_{1/2}$ about (1/2, 0) ∈ R² (Figure 5).
Proposition 9 (Integration formula for parameter i). Let e be an edge parametrized by i ∈ Z. Then the Poisson transform of B(x, y) evaluated at e is given by:
$$PB(x,y)(e) = \frac{q-1}{q}\sum_{k\in\mathbb Z} f(k)\,g_{1/2}(k-i),$$
where f is the continuous piecewise affine function (of Figure 3) given by:
(6) $f(x) = \begin{cases} d & \text{if } x\le 0,\\ d-2x & \text{if } 0\le x\le d,\\ -d & \text{if } d\le x.\end{cases}$
Proof. By construction B(x, y) is constant on Ω_k^σ where it takes value f(k) thanks to Section 2.1. We can apply countable additivity to integrate B(x, y) against νe:
$$PB(x,y)(e) = \int_{\partial X} B(x,y)\,d\nu_e = \sum_{k\in\mathbb Z}\int_{\Omega_k^\sigma} B(x,y)\,d\nu_e = \sum_{k\in\mathbb Z} f(k)\,\nu_e(\Omega_k^\sigma) = \frac{q-1}{q}\sum_{k\in\mathbb Z} f(k)\,g_{1/2}(k-i).$$
Consider now an edge e with parameters (i, j). The adequate partition is obtained similarly by removing further the geodesic τ containing e and intersecting with the sets Ω_k^σ. More precisely, for every l ∈ Z, let
$$\Omega_l^\tau := \bigsqcup_{\substack{z\in S_1(\tau(l))\\ z\ne\tau(l\pm1)}} \Omega_{\tau(0)}(z)\qquad\text{and}\qquad \Omega_{k,l} := \Omega_k^\sigma\cap\Omega_l^\tau,\quad\text{for all } k\in\mathbb Z.$$
The tree X consists of the cross formed by σ and τ with rooted trees attached at the nodes of σ or τ. Note that Ω_{k,l} is empty whenever l ≠ j and k ≠ i. When l = j, the set Ω_{k,j} is the boundary of the tree attached to σ(k), and for k = i the set Ω_{i,l} is the boundary of the tree attached to τ(l). This is provided (k, l) ≠ (i, j); the branching at σ(i) = τ(j) has various configurations depending on the valency of X. For instance if l > j, the set Ω_{i,l} is non-empty if and only if τ can indeed be extended to form a cross with σ, that is if q ≥ 3. Finally, the center node of the cross also has a tree attached, with boundary Ω_{i,j}, which is empty if and only if q = 3. This is fortunately covered by the last formula of Lemma 10.
Intuitively, the mass spreads from e along τ following $g_{1/2}$ similarly to the previous case. At the node σ(i) = τ(j), the mass entering σ is $2/q\cdot g_{1/2}(j)$ and spreads uniformly along σ in both directions according to the function h : R → R of Figure 6. It is given by:
$$h(x) = \begin{cases} g(x+1) & \text{if } x\le -1,\\ 1 & \text{if } -1\le x\le 1 \text{ and } x\ne 0,\\ 0 & \text{if } x=0,\\ g(x-1) & \text{if } x\ge 1,\end{cases}$$
with a discontinuity at x = 0 introduced for later convenience.
Lemma 10 (Proposition 4.4.6, [7]). The νe-measure of Ω_{k,l} is given by:
$$\nu_e(\Omega_{k,l}) = 0 \quad\text{for all } k\ne i \text{ and } l\ne j,$$
$$\nu_e(\Omega_{i,l}) = \frac{q-1}{q}\,g_{1/2}(l)\ \bigl(= \nu_e(\Omega_l^\tau)\bigr) \quad\text{for all } l\ne j,$$
$$\nu_e(\Omega_{k,j}) = \frac{q-1}{q}\,g_{1/2}(j)\cdot\frac1q\cdot h(k-i) \quad\text{for all } k\ne i,$$
$$\nu_e(\Omega_{i,j}) = \frac{q-3}{q}\,g_{1/2}(j).$$
Proposition 11 (Integration formula for parameter (i, j)). Let e be an edge with parameters (i, j), then PB(x, y)(e) is given by:
$$PB(x,y)(e) = -\frac{2}{q}\,f(i)\,g_{1/2}(j) + \frac{q-1}{q^2}\,g_{1/2}(j)\left(\sum_{k\in\mathbb Z} f(k)\,h(k-i)\right).$$
Proof. We apply countable additivity to the partition {Ω_{k,l} | k, l ∈ Z}:
$$PB(x,y)(e) = \int_{\partial X}B(x,y)\,d\nu_e = \sum_{k,l\in\mathbb Z}\int_{\Omega_{k,l}}B(x,y)\,d\nu_e = \sum_{k,l\in\mathbb Z} f(k)\,\nu_e(\Omega_{k,l})$$
$$= f(i)\,\nu_e(\Omega_{i,j}) + \sum_{k\ne i} f(k)\,\nu_e(\Omega_{k,j}) + f(i)\sum_{l\ne j}\nu_e(\Omega_{i,l})$$
$$= \frac{q-3}{q}\,f(i)\,g_{1/2}(j) + \frac{q-1}{q^2}\,g_{1/2}(j)\sum_{k\ne i} f(k)\,h(k-i) + \frac{q-1}{q}\,f(i)\sum_{l\ne j} g_{1/2}(l).$$
One concludes using $\sum_{l\ne j} g_{1/2}(l) = -g_{1/2}(j)$.
Remark 12. When q = 2, the set Ω_{i,j} is empty and the ray τ : (−∞, j] → X cannot be extended further to form a cross with σ. In that case the above becomes
$$PB(x,y)(e) = \sum_{k\ne i} f(k)\,\nu_e(\Omega_{k,j}) + f(i)\sum_{l<j}\nu_e(\Omega_{i,l}) = \sum_{k\ne i} f(k)\,\nu_e(\Omega_{k,j}) - f(i)\sum_{l\ge j}\nu_e(\Omega_{i,l}).$$
This integration formula equals that of Proposition 11 provided $\sum_{l\ge j}\nu_e(\Omega_{i,l}) = g_{1/2}(j)$, which holds thanks to $\sum_{l\ge j} g_{1/2}(l) = q\cdot g_{1/2}(j)$ (and q = 2).
Let P(i) denote the value of the right hand side in Proposition 9 and P(i, j) that of Proposition 11. We use the notation ⟨·, ·⟩ for the standard pairing of ℓ^p(Z) and its dual ℓ^q(Z) and τ the regular representation of Z on ℓ^p(Z). The integration formulae for P(i) and P(i, j) can be written as:
(7) $\displaystyle P(i) = \frac{q-1}{q}\,\langle f,\tau_i g_{1/2}\rangle \qquad\text{and}\qquad P(i,j) = -\frac{2}{q}\,f(i)\,g_{1/2}(j) + \frac{q-1}{q^2}\,g_{1/2}(j)\,\langle f,\tau_i h\rangle.$
3.3. Summation. In the last section, we obtain the following bounds. On the one hand, for an edge e parametrized by i ∈ Z,
(8) $|PB(x,y)(e)| = |P(i)| \le \begin{cases}\frac{2}{q-1}\,q^{-(|i|-1)} & \text{if } i<0,\\[2pt] \frac{2(q+1)}{q-1} & \text{if } 0\le i<d,\\[2pt] \frac{2}{q-1}\,q^{-(i-d-1)} & \text{if } d\le i.\end{cases}$
On the other hand, if e is an edge parametrized by (i, j), with j ≥ 1, then
(9) $|PB(x,y)(e)| = |P(i,j)| \le \begin{cases}\frac{2}{q-1}\,q^{-(i+j-1)} & \text{if } 1\le i\le d/2,\\[2pt] \frac{2}{q-1}\,q^{-(d-i+j-1)} & \text{if } d/2\le i\le d-1.\end{cases}$
We now use them to prove (4), hence the upper bound of Theorem 2.
Proof of the upper bound of Theorem 2. Case (A). The number n(i) of edges aligned with [x, y] with parameter i is obtained in Lemma 7. It only accounts for the preferred orientations, hence the factor 1/2 below.
$$\frac12\sum_{\substack{e\text{ aligned}\\\text{with }[x,y]}} PB(x,y)(e)^2 = \sum_{i\in\mathbb Z} n(i)\,P(i)^2 \le \left(\frac{2(q+1)}{q-1}\right)^{\!2} d(x,y) + 2\sum_{i>0} q^{i}\left(\frac{2}{q-1}\right)^{\!2} q^{-2(i-1)} = \frac{4(q+1)^2}{(q-1)^2}\,d(x,y) + \frac{8q^2}{(q-1)^3}.$$
Case (B). The graph of the right hand side of (9) is symmetric with respect to the axis y = d/2, meaning it is invariant under i ↦ d − i. In fact we show in Section 4.2 that i ↦ P(i, j) satisfies the same symmetry. This allows us to only sum over 1 ≤ i ≤ d/2. Similarly to the previous proof:
$$\frac12\sum_{\substack{e\text{ not aligned}\\\text{with }[x,y]}} PB(x,y)(e)^2 = \sum_{\substack{1\le i\le d-1\\ j\ge1}} n(i,j)\,P(i,j)^2 \le 2\sum_{\substack{1\le i\le d/2\\ j\ge1}} n(i,j)\,P(i,j)^2 = 2\sum_{\substack{1\le i\le d/2\\ j\ge1}} (q-1)\,q^{j-1}\,P(i,j)^2$$
$$\le 2(q-1)\sum_{\substack{1\le i\le d/2\\ j\ge1}} q^{j-1}\left(\frac{2}{q-1}\,q^{-(i+j-1)}\right)^{\!2} = \frac{8}{q-1}\sum_{\substack{1\le i\le d/2\\ j\ge0}} q^{-2i}\,q^{-j} \le \frac{8}{q-1}\sum_{j\ge0}q^{-j}\sum_{i\ge0}q^{-2i} = \frac{8q^{3}}{(q-1)^{3}(q+1)}.$$
4. Symmetry and spherical rearrangement
This section presents the averaging methods which, applied to the integration formulae (7), yield the estimates (8) and (9). The series ⟨f, τ_i g_{1/2}⟩ and ⟨f, τ_i h⟩ in (7) are manipulated to obtain a spherical rearrangement of their terms.
Recall that σ is a geodesic with x = σ(0) and y = σ(d), so that the barycenter of
[x, y] corresponds to σ(d/2). As mentioned in the introduction, the Poisson transform of
B(x, y) is symmetric about σ(d/2). For edges e of type (i, j), the map i ↦ P(i, j) indeed shows a symmetry (with a sign) around d/2, notably because σ(i) is the projection of e onto [x, y]. On the other hand, the parameter i of an edge e aligned with [x, y] corresponds to its origin o(e) = σ(i). Therefore the symmetry around the barycenter translates into i ↦ P(i) being symmetric about (d − 1)/2, see Proposition 15.
To concretize the above discussion, we introduce some notations. For a real valued function f : R → R (or f : Z → R), we write τ_t f(x) = f(x − t), f̌(x) = (f)ˇ(x) = f(−x) and (τ̌_t f)(x) = f(x + t) = τ_{−t}f(x). The operators τ_t and ˇ correspond to the action of t and −1 for the canonical linear action of Isom(R) = R ⋊ {±1} (resp. Z ⋊ {±1}) onto the space of functions over R (resp. over Z). Denote further ⟨f₁, f₂⟩ = Σ_{k∈Z} f₁(k)f₂(k) when the series is well defined, e.g. absolutely convergent. We shall use the following identities freely:
$$\langle\tau_t f_1,\tau_t f_2\rangle = \langle f_1,f_2\rangle = \langle\check f_1,\check f_2\rangle,\qquad \check{\check f} = f,\qquad (\tau_t f)\check{\ } = \tau_{-t}\check f = \check\tau_t\check f.$$
Definition 13. A function f : R → R (or f : Z → R) is said to have a (central) symmetry about h ∈ R (resp. h ∈ ½Z) if its graph is invariant under the central symmetry about (h, 0) ∈ R². Equivalently f satisfies −f̌ = τ_{−2h}f.
We say that f has an (axial) symmetry about y = h if its graph is invariant under the reflection through the vertical line y = h. Equivalently f satisfies f̌ = τ_{−2h}f.
The following is clear from the graphs of Figures 3–6.
Lemma 14. The functions f, g, g_{1/2}, and h defined in Section 3.2 satisfy
$$-\check f = \tau_{-d}f\ \ \text{(symmetry about } d/2\text{)},\qquad \check g = g\ \text{ and }\ \check h = h\ \ \text{(symmetry about } y=0\text{)},\qquad -\check g_{1/2} = \tau_{-1}g_{1/2}\ \ \text{(symmetry about } 1/2\text{)}.$$
When studying P(i, j), there is no obstacle to work with arbitrary parameters i, j ∈ Z.
Proposition 15. Let P_j(i) := P(i, j), then
(10) $\check P = \tau_{-d+1}P$  (symmetry about y = (d − 1)/2),
(11) $-\check P_j = \tau_{-d}P_j$  (symmetry about d/2).
Proof. Using the above and (7), the following computation
$$\frac{q}{q-1}\,\check P(i) = \langle f,\tau_{-i}g_{1/2}\rangle = \langle f,\check\tau_i g_{1/2}\rangle = \langle\check f,\tau_i\check g_{1/2}\rangle = \langle\tau_{-d}f,\tau_{i-1}g_{1/2}\rangle = \langle f,\tau_{d+i-1}g_{1/2}\rangle = \frac{q}{q-1}\,P(d+i-1) = \frac{q}{q-1}\,\tau_{-d+1}P(i)$$
shows $\check P = \tau_{-d+1}P$, which means P is symmetric about y = (d − 1)/2. Similarly one obtains $-\check P_j = \tau_{-d}P_j$.
Figure 3. Graph of f.    Figure 4. Graph of g.
Figure 5. Graph of g_{1/2}.    Figure 6. Graph of h.
Corollary 16. Let e be an edge with parameter i ∈ Z, then PB(x, y)(e) = PB(x, y)(e′) for every edge e′ with parameter d − i − 1. On the other hand, if e has parameters (i, j), with 1 < i < d and j ≥ 1, then PB(x, y)(e) = −PB(x, y)(e′) for every edge e′ with parameters (d − i, j).
4.1. Averaging P. This section establishes the bound (8) using an average of P. The symmetry of P is encoded by equation (10) of Proposition 15, which can also be written P = τ_{d−1}P̌. The mean of the two sides of the latter is
$$P(i) = \frac12\bigl(P(i)+P(d-i-1)\bigr) = \frac{q-1}{2q}\Bigl(\langle f,\tau_i g_{1/2}\rangle + \langle f,\tau_{d-i-1} g_{1/2}\rangle\Bigr).$$
By transferring the operators τ to the left side of the pairing, one obtains
$$P(i) = \frac{q-1}{2q}\Bigl(\langle \tau_{-i}f, g_{1/2}\rangle + \langle \tau_{-d+i+1}f, g_{1/2}\rangle\Bigr).$$
Consider the linear operator T_i := ½(τ_{−i} + τ_{−d+i+1}); the previous equation becomes
(12) $\displaystyle P(i) = \frac{q-1}{q}\,\langle T_i f, g_{1/2}\rangle.$
This has the advantage of being computable, thanks to f being piecewise affine, yielding a simple expression for T_i f. Since f has a symmetry around d/2, one verifies that T_i f has one around 1/2. Indeed,
$$(T_i f)\check{\ } = \tfrac12\bigl((\tau_{-i}f)\check{\ } + (\tau_{-d+i+1}f)\check{\ }\bigr) = \tfrac12\bigl(\check\tau_{-i}\check f + \check\tau_{-d+i+1}\check f\bigr) = -\tfrac12\bigl(\tau_i\tau_{-d}f + \tau_{d-i-1}\tau_{-d}f\bigr) = -\tfrac12\,\tau_{-1}\bigl(\tau_{-d+i+1}f + \tau_{-i}f\bigr) = -\tau_{-1}T_i f.$$
This symmetry is key to proving the following proposition.
Proposition 17. With the above notations,
(13) $\displaystyle P(i) = \frac{2(q-1)}{q}\,\|T_i f\cdot g_{1/2}\|_{\ell^1(\mathbb N^*)}.$
Proof. Translates of f change sign only at their center of symmetry, so that the same holds for T_i f, which in turn has the same sign as g_{1/2}. Precisely, T_i f(x) ≥ 0 for x ≤ 1/2 and T_i f(x) < 0 otherwise. From (12), we have
$$\frac{q}{q-1}\,P(i) = \langle T_i f, g_{1/2}\rangle \overset{(i)}{=} \langle |T_i f|, |g_{1/2}|\rangle = \|T_i f\cdot g_{1/2}\|_{\ell^1(\mathbb Z)} \overset{(ii)}{=} 2\,\|T_i f\cdot g_{1/2}\|_{\ell^1(\mathbb N^*)},$$
where (i) follows from T_i f and g_{1/2} having the same sign and (ii) from the fact that the pointwise product preserves the shared symmetry about 1/2.
Proposition 18 (Proposition 4.3.18, [7]). For every k ∈ N*, the average
$$T_i f(k) = \tfrac12\bigl(f(k+i)+f(k+d-i-1)\bigr)$$
is non-negative and bounded above by:
(14) $T_i f(k) \le \begin{cases}(k-|i|)\cdot\mathbf 1_{[-i,\infty)}(k) & \text{if } i<0,\\ 2\bigl(k-\tfrac12\bigr) & \text{if } 0\le i<d,\\ \bigl(k-(i-d+1)\bigr)\cdot\mathbf 1_{[-d+i+1,\infty)}(k) & \text{if } d\le i.\end{cases}$
Therefore the ℓ¹(N*)-norm of the pointwise product of T_i f and g_{1/2} is bounded by:
(15) $\|T_i f\cdot g_{1/2}\|_{\ell^1(\mathbb N^*)} \le \begin{cases}\frac{q}{(q-1)^2}\,q^{-(|i|-1)} & \text{if } i<0,\\[2pt] \frac{q(q+1)}{(q-1)^2} & \text{if } 0\le i<d,\\[2pt] \frac{q^2}{(q-1)^2}\,q^{-(i-d)} & \text{if } d\le i,\end{cases}$
which, applied to (13), proves (8).
Proof. For the first part (14), one writes f as a sum of affine and constant functions each supported on an interval determined by the two cut points of f, namely 0 and d. Then one may compute explicitly the translates of f and T_i f in terms of affine and indicator functions. As a result of some cancelations, T_i f vanishes on the interval [1/2, −i] when i < 0 and on [1/2, −d + i + 1] when d ≤ i. It follows that |T_i f| is bounded by the affine functions of (14). The bounds are asymptotically sharp in the sense that |T_i f| converge pointwise to them as d → ∞. The second part (15) is deduced by direct computations by plugging (14) into (13).
This spherical rearrangement for the edges of [x, y] is sufficient to prove the lower bound of Theorem 2.
Proof of the lower bound of Theorem 2. As mentioned in the previous proof, T_i f does not vanish around 1/2 when 0 ≤ i < d. In fact, for such parameters i, one sees that |T_i f(k)| ≥ 1 for all k ∈ Z. Hence by (13),
$$P(i) = \frac{2(q-1)}{q}\,\|T_i f\cdot g_{1/2}\|_{\ell^1(\mathbb N^*)} \ge \frac{2(q-1)}{q}\,\|g_{1/2}\|_{\ell^1(\mathbb N^*)} = 2,$$
for all edges of [x, y] pointing from x to y. Applying Cauchy–Schwarz to the indicator function of the latter edges and PB(x, y) yields
$$\sum_{e\subset[x,y]} PB(x,y)(e) \le \sqrt{d}\,\|PB(x,y)\|.$$
The left hand side is at least 2d, hence the result.
4.2. Averaging P_j = P(−, j). We proceed similarly to the last section in order to establish (9), with a twist in complexity due to the particular form of the integration formula (7) for P(i, j). We may focus on the case 1 ≤ i ≤ d/2 thanks to the symmetry (11) of P_j. The latter translates into P_j = −τ_d P̌_j. Therefore,
$$P_j(i) = \frac12\bigl(P_j(i) - P_j(d-i)\bigr) = -\frac{2}{q}\,f(i)\,g_{1/2}(j) + \frac{q-1}{2q^2}\,g_{1/2}(j)\bigl(\langle f,\tau_i h\rangle - \langle f,\tau_{d-i}h\rangle\bigr) = -\frac{2}{q}\,f(i)\,g_{1/2}(j) + \frac{q-1}{2q^2}\,g_{1/2}(j)\,\langle\tau_{-i}f - \tau_{i-d}f,\, h\rangle.$$
We consider the linear operator $\tilde T_i := \frac12(\tau_{-i} - \tau_{i-d})$ to rewrite the above as:
(16) $\displaystyle P_j(i) = -\frac{2}{q}\,f(i)\,g_{1/2}(j) + \frac{q-1}{q^2}\,g_{1/2}(j)\,\langle\tilde T_i f, h\rangle.$
Here $\tilde T_i f$ is symmetric with respect to the axis y = 0; for one quickly verifies $(\tilde T_i f)\check{\ } = \tilde T_i f$.
Proposition 19 (Section 4.4.1, [7]). For 1 ≤ i ≤ d/2, we have $\tilde T_i f \ge 0$ and
$$\langle\tilde T_i f, h\rangle = 2\,\|\tilde T_i f\cdot h\|_{\ell^1(\mathbb N^*)}.$$
Moreover
(17) $\displaystyle P_j(i) = \frac{-2q^{-i}}{q-1}\,g_{1/2}(j)\bigl(1-q^{-f(i)}\bigr).$
Since f(i) = d − 2i ≥ 0, the absolute value of (17) yields (9).
Proof. We proceed as in Proposition 18. One may write $\tilde T_i f = \frac12(\tau_{-i}f - \tau_{i-d}f)$ explicitly as a piecewise affine map and observe its non-negativeness on its support [i − d, d − i]. Consequently,
$$\langle\tilde T_i f, h\rangle = \|\tilde T_i f\cdot h\|_{\ell^1(\mathbb Z)} = 2\,\|\tilde T_i f\cdot h\|_{\ell^1(\mathbb N^*)},$$
where in the last equation we used that h(0) = 0 and the symmetry about y = 0. For the second part, we use additional notations:
$$\Delta(i) := \|\tilde T_i f\cdot h\|_{\ell^1(\mathbb N^*)}\qquad\text{and}\qquad \Sigma(i) := \Delta(i) - \frac{q}{q-1}\,f(i),$$
so that P_j can be written as:
(18) $\displaystyle P_j(i) = \frac{2(q-1)}{q^2}\,g_{1/2}(j)\Bigl(\Delta(i) - \frac{q}{q-1}\,f(i)\Bigr) = \frac{2(q-1)}{q^2}\,g_{1/2}(j)\,\Sigma(i).$
For k ≥ 0, the function $\tilde T_i f$ is more explicitly described on its support by:
$$\tilde T_i f(k) = \begin{cases} d-2i = f(i) & \text{if } 0\le k\le i,\\ d-i-k = f(i)-(k-i) & \text{if } i\le k\le d-i.\end{cases}$$
Therefore
$$\Delta(i) = \|\tilde T_i f\cdot h\|_{\ell^1(\mathbb N^*)} = \sum_{k=1}^{d-i} f(i)\,q^{-k+1} - \sum_{k=i}^{d-i}(k-i)\,q^{-k+1} = f(i)\sum_{k=1}^{d-i} q^{-k+1} - q^{-i}\sum_{k=0}^{f(i)} k\,q^{-k+1}.$$
We set x := q^{−1} and plug the previous equation into Σ(i):
$$\Sigma(i) = \Delta(i) - \frac{1}{1-x}\,f(i) = f(i)\sum_{k=1}^{d-i}x^{k-1} - x^{i}\sum_{k=0}^{f(i)} k\,x^{k-1} - \frac{1}{1-x}\,f(i)$$
$$= f(i)\,\frac{1-x^{d-i}}{1-x} - \frac{x^{i}}{(1-x)^2}\Bigl(f(i)x^{f(i)+1} - (f(i)+1)x^{f(i)} + 1\Bigr) - \frac{1}{1-x}\,f(i)$$
$$= f(i)\,\frac{-x^{d-i}}{1-x} - \frac{x^{i}}{(1-x)^2}\Bigl(f(i)x^{f(i)+1} - (f(i)+1)x^{f(i)} + 1\Bigr)$$
$$= f(i)\,\frac{-x^{i}x^{f(i)}}{1-x} - \frac{x^{i}}{(1-x)^2}\Bigl(f(i)x^{f(i)+1} - (f(i)+1)x^{f(i)} + 1\Bigr)$$
$$= \frac{-x^{i}}{(1-x)^2}\Bigl(f(i)x^{f(i)}(1-x) + f(i)x^{f(i)+1} - (f(i)+1)x^{f(i)} + 1\Bigr)$$
$$= \frac{-x^{i}}{(1-x)^2}\bigl(1 - x^{f(i)}\bigr) = \frac{-q^{2}q^{-i}}{(q-1)^2}\bigl(1 - q^{-f(i)}\bigr).$$
Finally (18) becomes
$$P_j(i) = \frac{2(q-1)}{q^2}\,g_{1/2}(j)\,\Sigma(i) = \frac{2(q-1)}{q^2}\,g_{1/2}(j)\,\frac{-q^{2}q^{-i}}{(q-1)^2}\bigl(1-q^{-f(i)}\bigr) = \frac{-2q^{-i}}{q-1}\,g_{1/2}(j)\bigl(1-q^{-f(i)}\bigr),$$
as desired.
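As a further consistency check (again not part of the paper), one can compare the closed form (17) with the series expression (7) numerically, reusing the toy functions f, g_{1/2}, h and the routine P2 from the earlier sketch; the loop below assumes the same q = 3 and d = 6.

```python
# Check that the closed form (17) matches the integration formula (7)
# for 1 <= i <= d/2 and a few values of j; reuses P2, f, g_half, q, d above.
def P_closed(i, j):
    return -2 * q ** (-i) / (q - 1) * g_half(j) * (1 - q ** (-f(i)))

for i in range(1, d // 2 + 1):
    for j in range(1, 6):
        assert abs(P2(i, j) - P_closed(i, j)) < 1e-9
```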
References
[1] A. Borel, Cohomologie de certains groupes discretes et laplacien p-adique (d’après H. Garland),
Séminaire Bourbaki, 26e année (1973/1974), Exp. No. 437, 1975, pp. 12–35. Lecture Notes in Math.,
Vol. 431.
[2] A. Borel, Admissible representations of a semi-simple group over a local field with vectors fixed under an Iwahori subgroup, Invent. Math. 35 (1976), 233–259.
[3] A. Borel and J.-P. Serre, Cohomologie d’immeubles et de groupes S-arithmétiques, Topology 15
(1976), no. 3, 211–232.
[4] P. Cartier, Fonctions harmoniques sur un arbre, Symposia Mathematica, Vol. IX (Convegno di
Calcolo delle Probabilità, INDAM, Rome, 1971), 1972, pp. 203–270.
[5] P. Cartier, Géométrie et analyse sur les arbres, Séminaire Bourbaki, 24ème année (1971/1972), Exp. No. 407, 1973, pp. 123–140. Lecture Notes in Math., Vol. 317.
[6] W. Casselman, On a p-adic vanishing theorem of Garland, Bull. Amer. Math. Soc. 80 (1974), 1001–
1004.
[7] T. Dumont, Cocycle growth for the Steinberg representation, Ph.D. Thesis, Lausanne, 2016.
[8] A. Figà-Talamanca and C. Nebbia, Harmonic analysis and representation theory for groups acting
on homogeneous trees, London Mathematical Society Lecture Note Series, vol. 162, Cambridge
University Press, Cambridge, 1991.
[9] H. Garland, p-adic curvature and the cohomology of discrete subgroups of p-adic groups, Ann. of
Math. (2) 97 (1973), 375–423.
[10] A. Gournay and P.-N. Jolissaint, Conditionally negative type functions on groups acting on regular
trees, J. Lond. Math. Soc. (2) 93 (2016), no. 3, 619–642.
[11] M. Gromov, Volume and bounded cohomology, Inst. Hautes Études Sci. Publ. Math. 56 (1982), 5–99
(1983).
[12] B. Klingler, Volumes des représentations sur un corps local, Geom. Funct. Anal. 13 (2003), no. 5,
1120–1160.
[13] B. Klingler, Transformation de type Poisson relative aux groupes d'Iwahori, Algebraic groups and arithmetic, 2004, pp. 321–337.
[14] H. Matsumoto, Fonctions sphériques sur un groupe semi-simple p-adique, C. R. Acad. Sci. Paris
Sér. A-B 269 (1969), A829–A832.
[15] N. Monod, An invitation to bounded cohomology, International Congress of Mathematicians. Vol.
II, 2006, pp. 1183–1211.
[16] C. Ogle, Polynomially bounded cohomology and discrete groups, J. Pure Appl. Algebra 195 (2005),
no. 2, 173–209.
[17] C. Ogle, Polynomially bounded cohomology and the Novikov Conjecture, ArXiv e-prints (April 2010),
available at https://arxiv.org/abs/1004.4680.
[18] J.-P. Serre, Arbres, amalgames, SL2 , Société Mathématique de France, Paris, 1977.
[19] J. A. Shalika, On the space of cusp forms of a P-adic Chevalley group, Ann. of Math. (2) 92 (1970),
262–278.
Department of Mathematics, University of Utah, Salt Lake City, 84112-0090, UT, USA
E-mail address: [email protected]
| 4 |
An efficient exact model for the cell formation problem with a variable number of
production cells
Ilya Bychkov∗, Mikhail Batsyn
Laboratory of Algorithms and Technologies for Network Analysis,
National Research University Higher School of Economics,
136 Rodionova, Nizhniy Novgorod, 603093, Russian Federation
arXiv:1701.02472v2 [cs.DS] 27 Aug 2017
Abstract
The Cell Formation Problem has been studied as an optimization problem in manufacturing for more than 90 years. It consists
of grouping machines and parts into manufacturing cells in order to maximize loading of cells and minimize movement of parts
from one cell to another. Many heuristic algorithms have been proposed which are doing well even for large-sized instances.
However, only a few authors have aimed to develop exact methods and most of these methods have some major restrictions such
as a fixed number of production cells for example. In this paper we suggest a new mixed-integer linear programming model for
solving the cell formation problem with a variable number of manufacturing cells. The popular grouping efficacy measure is used
as an objective function. To deal with its fractional nature we apply the Dinkelbach approach. Our computational experiments are
performed on two testsets: the first consists of 35 well-known instances from the literature and the second contains 32 less popular instances. We solve these instances using CPLEX software. Optimal solutions have been found for 63 of the 67 considered problem instances and several new solutions unknown before have been obtained. The computational times are greatly decreased compared to the state-of-the-art approaches.
Keywords: cell formation problem, cellular manufacturing, fractional objective, two-index model, grouping efficacy
1. Introduction
The Cell Formation Problem as a part of Group Technology (GT) was introduced by Burbidge (1961) and Mitrofanov
(1966). In the most general formulation it is designed to reduce
production costs by grouping machines and parts into manufacturing cells (production shops). The goal of such grouping is to set up the manufacturing process in a way that maximizes
loading of machines within the cells and minimizes movement
of parts from one cell to another. In classical formulation the
problem is defined by a binary matrix A with m rows representing machines and p columns representing parts. In this
machine-part matrix ai j = 1 if part j is processed on machine i.
The objective is to form production cells, which consist of machines and parts together, optimizing some production metrics
such as machine loading and intercell movement.
As an example of input data we will consider the instance of
Waghodekar and Sahu (1984) shown in Table 1. This instance
consists of 5 machines and 7 parts. The ones in a machine-part matrix are called operations. In Table 2 a solution with
2 manufacturing cells is presented. The first manufacturing
cell contains machines m1 , m4 with parts p1 , p7 and the second manufacturing cell contains machines m2 ,m3 ,m5 with parts
p2, p3, p4, p5, p6. Some parts have to be moved from one cell to another for processing (e.g. part p6 needs to be processed on
machine m1 , so it should be transported from its cell 2 to cell
1). The operations lying outside cells are called exceptional elements or exceptions. There can be also non-operation elements
inside cells (ai j = 0). These elements reduce machine load and
are called voids. So the goal is to minimize the number of exceptions and the number of voids at the same time.
        p1  p2  p3  p4  p5  p6  p7
  m1     1   0   0   0   1   1   1
  m2     0   1   1   1   1   0   0
  m3     0   0   1   1   1   1   0
  m4     1   1   1   1   0   0   0
  m5     0   1   0   1   1   1   0

Table 1: Machine-part 5 × 7 matrix from Waghodekar and Sahu (1984)

        p7  p1  p6  p5  p4  p3  p2
  m1     1   1   1   1   0   0   0
  m4     0   1   0   0   1   1   1
  m2     0   0   0   1   1   1   1
  m3     0   0   1   1   1   1   0
  m5     0   0   1   1   1   0   1

Table 2: Solution with 2 production cells
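For illustration only (this snippet is not part of the paper), the number of exceptional elements and voids of the solution in Table 2 can be counted directly; the matrix and the cell assignments below are transcribed from Tables 1 and 2.

```python
# Machine-part matrix of Table 1 (rows m1..m5, columns p1..p7).
A = [
    [1, 0, 0, 0, 1, 1, 1],  # m1
    [0, 1, 1, 1, 1, 0, 0],  # m2
    [0, 0, 1, 1, 1, 1, 0],  # m3
    [1, 1, 1, 1, 0, 0, 0],  # m4
    [0, 1, 0, 1, 1, 1, 0],  # m5
]
machine_cell = [1, 2, 2, 1, 2]       # cells of m1..m5 in the Table 2 solution
part_cell = [1, 2, 2, 2, 2, 2, 1]    # cells of p1..p7 in the Table 2 solution

exceptions = sum(1 for i in range(5) for j in range(7)
                 if A[i][j] == 1 and machine_cell[i] != part_cell[j])
voids = sum(1 for i in range(5) for j in range(7)
            if A[i][j] == 0 and machine_cell[i] == part_cell[j])
print(exceptions, voids)  # this solution has 5 exceptions and 4 voids
```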
∗ Corresponding author. Email addresses: [email protected] (Ilya Bychkov), [email protected] (Mikhail Batsyn)
Preprint submitted to Computers & Operations Research
August 29, 2017
1.1. Related work
Many different approaches are proposed for solving the cell
formation problem. The majority of them provide heuristic solutions and only a few exact methods have been suggested.
Krushinsky and Goldengorin (2012) provided two MINpCUT exact models based on the well-known k-cut graph partition problem. The objective function considered in this research
is minimization of the number of exceptional elements for a fixed
number of cells. Unfortunately this objective function does not
address the load inside cells.
Elbenani & Ferland (2012) presented a mixed-integer linear
programming model which maximizes the most popular objective for the cell formation problem - the grouping efficacy, introduced by Kumar and Chandrasekharan (1990). These authors
suggested to apply Dinkelbach algorithm since the grouping efficacy is a fractional objective function. Their model allows
solving the cell formation problem only if the number of production cells is predefined. Thus the suggested approach cannot guarantee global optimality of the obtained solutions with
respect to a variable number of production cells. In many cases
the computational times for this model are quite long or memory limitations are exceeded and the optimal solutions cannot
be found.
Brusco (2015) introduced two approaches for solving the
cell formation problem with the grouping efficacy objective.
The first is a mixed-integer linear programming model which is
based on a general two-mode clustering formulation with some
simplifying assumptions (e.g. the numbers of clusters by rows
and columns are equal). This model looks interesting, but requires too much time to be solved for many even medium-sized
instances. The second approach is a branch-and-bound algorithm combined with a relocation heuristic to obtain an initial
solution. The branch and bound approach is able to solve about
two times more problem instances and the computational times
are greatly improved as well. Generally it runs fine on well-structured (with grouping efficacy value > 0.65–0.7) medium-sized problems. Two major assumptions are made for both
of these approaches: singletons are permitted (manufacturing
cells containing only one machine or one part) that is quite a
common practice; residual cells are permitted (cells containing
only machines without parts, or only parts without machines).
Also the number of production cells is predefined for both
approaches, but for some test instances several values for the
number of cells are considered in computational experiments.
Another model is provided in our earlier paper (Bychkov et
al., 2014). There we present a mixed-integer linear programming formulation for the cell formation problem with a variable number of production cells. It deals well with small-sized
instances, but nevertheless the number of variables and constraints is huge – O(m²p). This does not allow obtaining solutions even for some moderate-sized test instances and in some
cases this model runs too slowly.
Some authors used biclustering approaches to solve the cell
formation problem. Boutsinas (2013) applied simultaneous clustering for both dimensions (machines and parts) and minimized
the number of voids plus the number of exceptional elements. Pinheiro et al. (2016) reduced the cell formation problem to another biclustering problem, the bicluster graph editing problem, and suggested an exact method and a linear programming model which provide good computational results for the grouping efficacy objective.
1.2. Contributions of this research
In this paper we develop a fast compact model providing
optimal solutions for the cell formation problem with a variable number of manufacturing cells and the grouping efficacy
objective. Unlike the majority of linear programming models
our model does not contain a direct assignment of machines or
parts to cells. We use machine-machine and part-machine assignments instead of the widely used machine-part-cell assignment. This leads to a compact and elegant formulation considering only constraints which ensure a block-diagonal structure
of solutions. It allows us to drastically reduce the number of
variables and constraints in our programming model and obtain
globally optimal solutions even for some large-sized problem
instances.
Computational experiments show that our model outperforms all known exact methods. We have solved 63 of 67 problem instances to the global optimum with respect to a variable
number of production cells. We have also found several new
solutions unknown before.
We would like to highlight that many researchers in the
field use the 35 GT instances dataset provided by Gonçalves
and Resende (2004). These instances are taken from different cell formation research papers (references to the original
sources are shown in Table 3). Some problem instances in this
35 GT dataset have errors and differ from the ones presented
in the original papers. Many researchers including Elbenani &
Ferland (2012) and Pinheiro et al. (2016) have performed their
computational experiments using these data from Gonçalves
and Resende (2004). We have reviewed all the original sources,
comparing and forming the corrected version of this popular
dataset. We have also collected many other problem instances
less popular and formed a new dataset. All data can be downloaded from website opt-hub.com or researchgate.net (full urls
can be found in references).
The paper is organized as follows. In Section 2 we provide
the formulation of the cell formation problem. Then in Section
3 we present our new mixed-integer linear programming model.
Sections 4 contains the information about datasetes we used for
our experiments and the computational results and comparisons
to other exact approaches are given in Section 5. The conclusion is provided in Section 6.
2. Problem formulation
Cellular manufacturing systems are aimed to process similar parts within the same production cell, balance machine workload and minimize parts movement from one cell to another during the production process. The most popular objective for the cell formation problem is the grouping efficacy introduced by Kumar and Chandrasekharan (1990):
$$\tau = \frac{n_1^{in}}{n_1 + n_0^{in}},$$
where
n_1 – the total number of operations (ones) in the machine-part matrix,
n_1^{in} – the number of operations performed inside cells,
n_0^{in} – the number of voids (zeros inside cells).
In comparison to the other objectives the grouping efficacy function addresses the best block-diagonal structure of the cell formation problem solutions (Sarker, 2001).
In the literature several constraints related to the minimal size of a cell could be found. The following are the three most popular considerations:
• allowing residual cells (cells containing only machines or parts)
• allowing singletons (cells with one machine and several parts or vice versa) and prohibiting residual cells
• allowing only cells with at least 2 machines and 2 parts
The most popular option is allowing singletons and prohibiting residual cells. In this section for the classical formulation we assume that singletons can appear in solutions and residual cells are prohibited. In our new model and in computational experiments we consider the first two options.
A straightforward integer fractional programming (IFP) model for the cell formation problem with the grouping efficacy objective function, allowing singletons and prohibiting residual cells, is given below. We use the following notation: m is the number of machines, p is the number of parts, a_ij equals 1 if machine i processes part j, and c is the maximal possible number of production cells. Since each production cell has to contain at least one machine and at least one part we set c = min(m, p).
(IFP model):
Decision variables:
$$x_{ik} = \begin{cases}1, & \text{if machine } i \text{ belongs to cell } k,\\ 0, & \text{otherwise,}\end{cases}\qquad y_{jk} = \begin{cases}1, & \text{if part } j \text{ belongs to cell } k,\\ 0, & \text{otherwise.}\end{cases}$$
(1) $\displaystyle\max\ \frac{\sum_{i=1}^{m}\sum_{j=1}^{p}\sum_{k=1}^{c} a_{ij}x_{ik}y_{jk}}{\sum_{i=1}^{m}\sum_{j=1}^{p} a_{ij} + \sum_{i=1}^{m}\sum_{j=1}^{p}\sum_{k=1}^{c}(1-a_{ij})x_{ik}y_{jk}}$
Subject to:
(2) $\displaystyle\sum_{k=1}^{c} x_{ik} = 1,\qquad i = 1,\dots,m$
(3) $\displaystyle\sum_{k=1}^{c} y_{jk} = 1,\qquad j = 1,\dots,p$
(4) $\displaystyle\sum_{i=1}^{m} x_{ik} \le m\cdot\sum_{j=1}^{p} y_{jk},\qquad k = 1,\dots,c$
(5) $\displaystyle\sum_{j=1}^{p} y_{jk} \le p\cdot\sum_{i=1}^{m} x_{ik},\qquad k = 1,\dots,c$
Objective function (1) is the grouping efficacy measure, where the numerator is the number of ones inside cells (n_1^{in}) and the two sums in the denominator are the total number of ones (n_1) and the number of zeros inside cells (n_0^{in}) respectively. Constraints (2) and (3) require that each machine and each part is assigned to exactly one production cell. The following two inequalities (4) and (5) prohibit residual cells (without machines or parts). The left part of (4) is the total number of machines assigned to the particular cell (this sum is not greater than m) and the right part is the total number of parts assigned to that cell (multiplied by m). It means that if we have at least one machine assigned to some cell there should be at least one part assigned to this cell. This model allows us to have any number of cells in the optimal solution. For example, if the optimal solution has only two cells then variables x_{ik} and y_{jk} will be zero for all k except only two values of k.
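A minimal sketch (not from the paper) of the grouping efficacy computation for a given assignment of machines and parts to cells; with the matrix A and the cell vectors of the previous snippet it returns 15/24 ≈ 0.625 for the Table 2 solution.

```python
def grouping_efficacy(A, machine_cell, part_cell):
    """tau = n1_in / (n1 + n0_in), cf. Kumar and Chandrasekharan (1990)."""
    m, p = len(A), len(A[0])
    n1 = sum(map(sum, A))
    n1_in = sum(A[i][j] for i in range(m) for j in range(p)
                if machine_cell[i] == part_cell[j])
    n0_in = sum(1 - A[i][j] for i in range(m) for j in range(p)
                if machine_cell[i] == part_cell[j])
    return n1_in / (n1 + n0_in)
```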
3. MILP model
3.1. Objective linearization
In our paper Bychkov et al. (2014) we have proposed a mixed-integer linear programming model for the cell formation problem which is very similar to the one described in the previous section. One of the most important points there was linearization of the grouping efficacy objective. Our previous idea was to linearize the grouping efficacy objective function by fixing the value of the denominator n_1 + n_0^{in} and considering a range of all possible numbers of voids n_0^{in}. The lower bound for n_0^{in} equals 0 because generally there can be a solution without any voids. The upper bound is computed using the following proposition.
Proposition 1 (Bychkov et al., 2014). The number of voids in the optimal solution satisfies the following inequality:
$$n_0^{in} \le \left\lfloor\frac{1-\tau}{\tau}\,n_1\right\rfloor,$$
where τ is the grouping efficacy value of any feasible solution.
So if we know a feasible solution we can limit the range of possible values for the number of voids. Unfortunately, the performance of this approach strongly depends on the feasible solution we use for obtaining our bounds. This way solving problem instances where the grouping efficacy value is low takes too much computational resources (since the number of subtasks is too large) and sometimes we are unable to solve even medium-sized cell formation instances.
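As an illustration of Proposition 1 (using the introductory example rather than data from the original paper): for the Table 2 solution one has τ = 0.625 and n_1 = 20, so n_0^{in} ≤ ⌊(1 − 0.625)/0.625 · 20⌋ = 12, and the earlier approach would have to solve a separate subproblem for each of the 13 candidate values n_0^{in} = 0, 1, …, 12.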
In this paper, together with using our new mixed-integer linear model, we use another way of linearization – the Dinkelbach (1967) algorithm. The parametric approach introduced by W. Dinkelbach is one of the most general and popular strategies for fractional programming. It reduces the solution of a fractional programming problem to the solution of a sequence of simpler problems. If we consider a fractional programming model with the objective function
(6) $\displaystyle Q(x) = \frac{P(x)}{D(x)},$
then the Dinkelbach procedure is the following:
• Step 1: take some feasible solution x_0, compute λ_1 = P(x_0)/D(x_0) and let k = 1.
• Step 2: solve the original problem with objective function Q(x) replaced with F(λ_k) = P(x) − λ_k D(x) → max and let x_k be the optimal solution.
• Step 3: if F(λ_k) is equal to 0 (or less than some predefined tolerance) then stop the procedure and return x_k as the optimal solution. Else k = k + 1, λ_k = P(x_k)/D(x_k) and go to Step 2.
Elbenani & Ferland (2012) have also used the Dinkelbach approach for linearization of the grouping efficacy measure, although their computational times are quite high and the results are given only for one particular fixed number of production cells.
3.2. Suggested two-index model
Due to a large number of variables and constraints in the three-index model (Bychkov et al., 2014) CPLEX spends too much computational resources solving even small-sized instances (we have solved only 14 of 35 problem instances). Here we introduce a two-index mixed-integer linear programming model for the cell formation problem. The key idea of this model is removing the machine-part-cell relation as it has been done in many works before. Instead of mapping elements to cells we use a simple fact that machines within the same production cell have the same set of parts assigned to that cell. The two-index model combines well with the Dinkelbach algorithm and shows impressive performance even on large-sized problem instances.
Two-index model:
$$x_{ik} = \begin{cases}1, & \text{if machines } i \text{ and } k \text{ are in the same cell},\\ 0, & \text{otherwise,}\end{cases}\qquad y_{ij} = \begin{cases}1, & \text{if machine } i \text{ and part } j \text{ are in the same cell},\\ 0, & \text{otherwise.}\end{cases}$$
(7) $\displaystyle\max\ \sum_{i=1}^{m}\sum_{j=1}^{p} a_{ij}y_{ij} - \lambda\cdot\Bigl(\sum_{i=1}^{m}\sum_{j=1}^{p}(1-a_{ij})y_{ij} + \sum_{i=1}^{m}\sum_{j=1}^{p} a_{ij}\Bigr)$
Subject to:
(8) $2x_{ik} - y_{ij} - y_{kj} \ge -1,\qquad i, k = 1,\dots,m,\quad j = 1,\dots,p$
(9) $y_{ij} - y_{kj} - x_{ik} \ge -1,\qquad i, k = 1,\dots,m,\quad j = 1,\dots,p$
(10) $y_{kj} - y_{ij} - x_{ik} \ge -1,\qquad i, k = 1,\dots,m,\quad j = 1,\dots,p$
(11) $\displaystyle\sum_{j=1}^{p} y_{ij} \ge 1,\qquad i = 1,\dots,m$
(12) $\displaystyle\sum_{i=1}^{m} y_{ij} \ge 1,\qquad j = 1,\dots,p$
Technically, matrix [x_{ik}] here can be replaced by the one with part-part relations, however the number of machines in problem instances is usually lower than the number of parts (for large-sized instances the difference is significant). As a result we have m² variables from matrix [x_{ik}] and mp variables from matrix [y_{ij}].
Objective function (7) is the grouping efficacy measure linearized using the Dinkelbach method. Constraints (8), (9), (10) set relations between machines and parts to ensure the solution can be transformed into the block-diagonal form (which means its feasibility). The last two inequalities (11) and (12) are optional and prohibit residual cells.
We start with setting λ equal to the best known efficacy value for the considered cell formation problem instance. Then we sequentially solve several two-index problems according to the Dinkelbach algorithm and update the λ value with the solutions found until our objective function is above 0.
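To make the whole procedure concrete, here is a minimal sketch (not the authors' implementation) of the two-index model (7)-(12) solved inside the Dinkelbach loop. It uses the open-source PuLP modeller with its default CBC solver as a stand-in for CPLEX; the instance matrix a, the starting value lam0 and the tolerance are assumptions of the example.

```python
import pulp

def solve_two_index(a, lam, forbid_residual=True):
    """One Dinkelbach step: maximize F(lam) as in (7) subject to (8)-(12)."""
    m, p = len(a), len(a[0])
    n1 = sum(map(sum, a))
    prob = pulp.LpProblem("cell_formation", pulp.LpMaximize)
    x = [[pulp.LpVariable(f"x_{i}_{k}", cat="Binary") for k in range(m)] for i in range(m)]
    y = [[pulp.LpVariable(f"y_{i}_{j}", cat="Binary") for j in range(p)] for i in range(m)]
    # Objective (7): n1_in - lam * (n0_in + n1)
    prob += (pulp.lpSum(a[i][j] * y[i][j] for i in range(m) for j in range(p))
             - lam * (pulp.lpSum((1 - a[i][j]) * y[i][j]
                                 for i in range(m) for j in range(p)) + n1))
    for i in range(m):
        for k in range(m):
            if i == k:
                continue
            for j in range(p):
                prob += 2 * x[i][k] - y[i][j] - y[k][j] >= -1   # (8)
                prob += y[i][j] - y[k][j] - x[i][k] >= -1       # (9)
                prob += y[k][j] - y[i][j] - x[i][k] >= -1       # (10)
    if forbid_residual:
        for i in range(m):
            prob += pulp.lpSum(y[i][j] for j in range(p)) >= 1  # (11)
        for j in range(p):
            prob += pulp.lpSum(y[i][j] for i in range(m)) >= 1  # (12)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    y_val = [[int(round(v.value())) for v in row] for row in y]
    n1_in = sum(a[i][j] * y_val[i][j] for i in range(m) for j in range(p))
    n0_in = sum((1 - a[i][j]) * y_val[i][j] for i in range(m) for j in range(p))
    return n1_in / (n1 + n0_in), pulp.value(prob.objective)

def dinkelbach(a, lam0, tol=1e-6):
    lam = lam0                        # e.g. the best known efficacy (Section 5)
    while True:
        efficacy, f_value = solve_two_index(a, lam)
        if f_value <= tol:            # F(lam) = 0  <=>  lam is the optimal efficacy
            return lam
        lam = efficacy                # otherwise update lambda and repeat
```

Calling dinkelbach on the 5 × 7 example above with the efficacy of any feasible solution as lam0 would run the iterations described in the text; on realistic instances a commercial solver such as CPLEX is of course preferable to CBC.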
Table 3: Testset A - Instances

  ID    Source                                                m    p
  A1    King and Nakornchai (1982) - Figure 1a                5    7
  A2    Waghodekar and Sahu (1984) - Problem 2                5    7
  A3    Seifoddini (1989b)                                    5   18
  A4    Kusiak and Cho (1992)                                 6    8
  A5    Kusiak and Chow (1987)                                7   11
  A6    Boctor (1991) - Example 1                             7   11
  A7    Seifoddini and Wolfe (1986)                           8   12
  A8    Chandrasekaran and Rajagopalan (1986a)                8   20
  A9    Chandrasekaran and Rajagopalan (1986b)                8   20
  A10   Mosier and Taube (1985a)                             10   10
  A11   Chan and Milner (1982)                               15   10
  A12   Askin and Subramanian (1987)                         14   24
  A13   Stanfel (1985)                                       14   24
  A14   McCormick et al. (1972)                              16   24
  A15   Srinivasan et al. (1990)                             16   30
  A16   King (1980)                                          16   43
  A17   Carrie (1973)                                        18   24
  A18   Mosier and Taube (1985b)                             20   20
  A19   Kumar et al. (1986)                                  23   20
  A20   Carrie (1973)                                        20   35
  A21   Boe and Cheng (1991)                                 20   35
  A22   Chandrasekharan and Rajagopalan (1989) - Dataset 1   24   40
  A23   Chandrasekharan and Rajagopalan (1989) - Dataset 2   24   40
  A24   Chandrasekharan and Rajagopalan (1989) - Dataset 3   24   40
  A25   Chandrasekharan and Rajagopalan (1989) - Dataset 5   24   40
  A26   Chandrasekharan and Rajagopalan (1989) - Dataset 6   24   40
  A27   Chandrasekharan and Rajagopalan (1989) - Dataset 7   24   40
  A28   McCormick et al. (1972)                              27   27
  A29   Carrie (1973)                                        28   46
  A30   Kumar and Vannelli (1987)                            30   41
  A31   Stanfel (1985) - Figure 5                            30   50
  A32   Stanfel (1985) - Figure 6                            30   50
  A33   King and Nakornchai (1982)                           30   90
  A34   McCormick et al. (1972)                              37   53
  A35   Chandrasekharan and Rajagopalan (1987)               40  100
4. Test instances
For our computational experiments we have used two datasets,
Testset A and Testset B.
Testset A - Classic. The first dataset is a classical 35 GT
problem set taken from Gonçalves and Resende (2004). It contains 35 test instances with sizes from 5 × 7 up to 40 × 100 (machines × parts notation). This dataset is very popular among
cell formation researchers and there are lots of computational
results obtained by different methods (heuristics and metaheuristics generally). As we highlighted before some problem instances in this dataset have inconsistencies with the original papers they are published in. We have compared these instances
to the original ones and corrected the dataset.
Testset B - Extra. Another dataset named Testset B consists
of other instances taken from different papers. We have looked
through many papers on the cell formation problem and formed
this new set. There are 32 test instances with sizes from 6 × 6 to
50 × 150. A couple of instances from this set have been adopted
to the classical formulation of the cell formation problem.
Since the number of machines determines the size of our
model (the number of decision variables and constraints) we
consider 3 classes of problem instances.
• small (less than 10 machines)
• medium (from 10 to 20 machines)
• large (20 machines or greater)
For our data we can conclude that Testset A has 2 times
more large instances, but fewer medium and small instances (see
Table 4).
Table 4: Testsets instances size

              small   medium   large
  Testset A       9        8      18
  Testset B      11       13       8
Table 5: Testset B - Instances

  ID    Source                             m    p
  B1    Adil (1996)                        6    6
  B2    Parkin and Li (1997)               6    7
  B3    Brown and Sumichrast (2001)        6   11
  B4    Chan and Milner (1982)             7    5
  B5    Kusiak and Chow (1987)             7    8
  B6    Zolfaghari and Liang (2002)        7    8
  B7    Won and Kim (1997)                 7   10
  B8    Sarker and Khan (2001)             8    8
  B9    Nair (1999)                        8   10
  B10   Islam and Sarker (2000)            8   10
  B11   Kumar et al. (1986)                9   15
  B12   Ham et al. (1985)                 10    8
  B13   Viswanathan (1996)                10   12
  B14   Shargal et al. (1995)             10   38
  B15   Won and Kim (1997)                11   10
  B16   Seifoddini (1988)                 11   22
  B17   Moon and Chi (1992)               12   19
  B18   Li (2003)                         14   14
  B19   Chan and Milner (1982) - Fig.3a   15   10
  B20   Yang and Yang (2008) - Fig.6b     15   15
  B21   Yang and Yang (2008) - Fig.6c     15   15
  B22   Yang and Yang (2008) - Fig.6d     15   15
  B23   Harhalakis et al. (1994)          17   20
  B24   Seifoddini and Djassemi (1991)    18   24
  B25   Sandbothe (1998)                  20   10
  B26   Nagi et al. (1990)                20   51
  B27   Won and Kim (1997)                26   28
  B28   Yang and Yang (2008) - Fig.7      28   35
  B29   Seifoddini and Djassemi (1996)    35   15
  B30   Seifoddini and Djassemi (1996)    41   50
  B31   Yang and Yang (2008) - Fig.12     46  105
  B32   Zolfaghari and Liang (1997)       50  150
5. Computational results
For our computational experiments we consider the two most popular versions of cell size constraints:
1. Residual cells are prohibited, singletons are allowed (each cell has at least 1 machine and 1 part)
2. Residual cells are allowed (cells with only machines or only parts can appear in the final solution)
Several state-of-the-art exact approaches have been chosen for comparisons. As a platform for our computations we have used a system with an Intel Xeon processor running at 3.4 GHz with 16GB RAM and CPLEX 12.4.0 installed. Due to high-quality initial solutions the Dinkelbach algorithm makes only one or, in rare cases, two iterations.
5.1. Testset A
5.1.1. Experiments
The instances from Table 3 have been studied widely in the literature. We report results separately for the formulation where the minimal cell size is 1 × 1 (Table 7 and Figure 1) and the formulation with residual cells allowed (Table 8 and Figure 2). In the first case we have selected two approaches for the results comparison:
1. The MILP model by Elbenani & Ferland (2012)
2. The MILP model by Bychkov et al. (2014)
Elbenani & Ferland (2012) considered a simplified formulation of the cell formation problem, solving every problem instance only for one fixed number of production cells. These authors have performed computational experiments on an AMD processor 2.2 GHz with 4GB RAM. For Testset A we use the best efficacy results from the literature as initial values for the λ parameter.
In the case of unrestricted cell sizes (residual cells are allowed) we have compared our results to the following approaches:
1. The branch-and-bound algorithm by Brusco (2015)
2. The ILP model by Pinheiro et al. (2016)
3. The iterative exact method by Pinheiro et al. (2016)
Brusco (2015) considers several values for the number of cells for some problem instances, so in this case we compare our computational time with these times summed up for every test instance. As hardware platforms Brusco (2015) reports a 3.4 GHz Intel Core i7-2600 with 8GB RAM and Pinheiro et al. (2016) the same 3.4 GHz Intel Core i7-2600 with 32 GB RAM. Since Elbenani & Ferland (2012) and Brusco (2015) do not consider all possible numbers of production cells we show the number of cells in brackets for these approaches.
5.1.2. Results
The results for Testset A are summarized in Table 7 and Table 8. For each algorithm we report the grouping efficacy value and the running time in seconds. Since our testset is larger than the one used by Brusco (2015) the missing results are marked as "-". For some problems exact solutions have not been obtained because CPLEX runs too long or takes too much memory. These instances are marked as "*".
Table 7 shows the results for the case where we prohibit cells without machines or parts. Our previous model from Bychkov et al. (2014) also considers a variable number of production cells, but due to its complexity and not very effective linearization technique it is able to solve only 14 test problems of 35. The model suggested by Elbenani & Ferland (2012) solved 27 problem instances, but only for the one fixed number of production cells for each problem instance. Our new model provides globally optimal solutions (with respect to any possible number of cells) for 31 of 35 problem instances. For problem instance A33 we have found a new solution with grouping efficacy 0.48 unknown before. For 17 instances: A14-A21, A23-A26, A28, A30, A31, A34 and A35 we are the first to prove the global optimality of the best known solutions with respect to a variable number of production cells.
Figure 1: Testset A - No residual cells. Running times comparison.
Running times bar charts for Table 7 are presented in Figure 1. Here we have used a logarithmic scale with base 10 for the Y axis (running time). Our new model shows really good performance: it works from 7 to 43383 times faster than the model from Elbenani & Ferland (2012) and from 11 to 1833 times faster than the model from Bychkov et al. (2014). We must underline that although we use a better hardware platform than Elbenani & Ferland (2012), our problem formulation is more complicated than a formulation with a fixed number of cells.
The results for the formulation with no constraints on cell sizes are summarized in Table 8. The model suggested by Pinheiro et al. (2016) solved 27 problem instances to the global optimum. Our approach has obtained exact solutions for 32 of 35 test instances. In addition, for problem instances A18, A33 and A34 we have found new solutions unknown before.
Figure 2: Testset A - Allowed residual cells. Running times comparison.
Running times bar charts for Table 8 are presented in Figure 2. Here we have chosen the ILP model from Pinheiro et al. (2016) for comparison since it has a better performance than the exact iterative method of the same authors. In Figure 2 we have also used a logarithmic scale with base 10 for the first and second plots (instances with running times less than 60 seconds and less than 5000 seconds). For the last plot (instances with running times less than 1500000 seconds) we have used a logarithmic scale with base 100. We can see that the two-index model runs up to 1 million times faster than the branch-and-bound algorithm by Brusco (2015) and up to 161 times faster than the ILP model by Pinheiro et al. (2016).
5.1.3. Inconsistencies
The classical dataset of 35 GT problems from Gonçalves and Resende (2004) has been used for many years by cell formation researchers for computational experiments and results comparison. Unfortunately, the dataset contains several inconsistencies with the original sources: papers from King and Nakornchai (1982) to Chandrasekharan and Rajagopalan (1987) (see Table 3). Many researchers have used corrupted instances and sometimes added some new inconsistencies. Therefore obtaining results for these problems and comparing them to results of other approaches becomes a really difficult task. One of the goals of this paper is to provide correct data for the cell formation researchers. In this paper we mark usage of inconsistent data with superscript E.
We have not been able to obtain the results reported in Elbenani & Ferland (2012) for problem instances A15 and A31 using both possible data sets - the dataset from Gonçalves and Resende (2004) and our corrected version. Probably some other data have been used.
Several instances provided by Gonçalves and Resende (2004), which are different from their original sources (papers from King and Nakornchai (1982) to Chandrasekharan and Rajagopalan (1987), see Table 3), have been also used by Pinheiro et al. (2016). These instances are A1, A7, A14, A15, A17, A20, A21, A25 and A30. For a fair comparison we have also run our model using the same input data (see Table 6). Our experiments have confirmed all the results obtained by Pinheiro et al. (2016). Also we can conclude that the running times of our model have not changed much on these input data.

Table 6: Computational experiments on the data provided by Gonçalves and Resende (2004)

          Time, sec                            Efficacy
  #       Pinheiro et al. (2016)   two-index   Pinheiro et al. (2016)   two-index
  A1           0.01                    0.01         0.7500                 0.7500
  A7           0.03                    0.01         0.6944                 0.6944
  A14        144.91                    4.99         0.5333                 0.5333
  A15          0.54                    0.17         0.6992                 0.6992
  A17         42.32                    3.51         0.5773                 0.5773
  A20         14.55                    0.11         0.7938                 0.7938
  A21        305.48                   15.08         0.5879                 0.5879
  A25      48743.90                  678.53         0.5329                 0.5329
  A30         41.53                    8.58         0.6304                 0.6304

5.2. Testset B results
Since the test instances from Table 5 are less popular in research papers, our goal is just to obtain optimal solutions for this set. We have used our multi-start local search heuristic (Bychkov et al., 2013) to get good solutions which are then passed as initial values for the λ parameter (we pick the best solution found by the heuristic within 30 seconds).
The results for Testset B are shown in Table 9. Here we have found optimal solutions for 31 of 32 test problems. Another result is the excellent performance of our multi-start local search heuristic algorithm: only one of 32 instances solved by the heuristic differs from the exact solution (instance B18). Due to the high computational complexity we are unable to solve the largest problem in the set – problem B32 (50 × 150).
6. Conclusion
The cell formation problem is a well known combinatorial
optimization problem with a high computational complexity. Only a few authors have suggested exact approaches for the most
popular problem formulation with the grouping efficacy objective function. The majority of these works assume that the number of production cells is predefined. In this paper we suggest
a new compact and effective integer linear programming model
for the cell formation problem with a variable number of production cells. The model is based on the machine-machine and
part-machine relations instead of the widely used machine-partcell relation. It allows us to drastically reduce the number of
variables and constraints in the resulting integer linear program.
Computational experiments show that our new model outperforms the state-of-the-art exact methods. We have solved 63 of 67
problem instances to the global optimum with respect to a variable number of production cells. We have also found several
new solutions unknown before. Unfortunately many problem
instances from the cell formation datasets have inconsistencies
with the original papers. This makes it really difficult to perform computational experiments and compare results to other
approaches in the field. We have extracted and checked over 67
problem instances. All these data are available for downloading from website opt-hub.com or researchgate.net and we hope
it will help the researchers in this area. The suggested model
can also be used for solving biclustering problems and this is
one of the directions of our future work.
7. Acknowledgments
This work was conducted at National Research University
Higher School of Economics, Laboratory of Algorithms and
Technologies for Network Analysis and supported by RSF grant
14-41-00039.
References
Kumar, K. R., Chandrasekharan, M. P. (1990). Grouping efficacy: A quantitative criterion for goodness of block diagonal forms of binary matrices in
group technology. International Journal of Production Research, 28(2), 233–
243.
Kumar K.R., Vannelli A. (1987). Strategic subcontracting for efficient disaggregated manufacturing. International Journal of Production Research, 25(12),
1715-1728.
Kumar K. R., Kusiak A., Vannelli A. (1986). Grouping of parts and components in flexible manufacturing systems. European Journal of Operations
Research, 24, 387-397.
Kusiak, A., & Cho, M. (1992). Similarity coefficient algorithm for solving the
group technology problem. International Journal of Production Research, 30
, 2633-2646.
Kusiak, A., & Chow, W. (1987). Efficient solving of the group technology problem.Journal of Manufacturing Systems , 6(2), 117-124.
Li M.L.(2003). The algorithm for integrating all incidence matrices in multidimensional group technology. International Journal Production Economics,
86,121-131.
McCormick, W.T., Schweitzer, P.J., White, T.W., 1972. Problem decomposition and data reorganization by a clustering technique. Operations Research
20(5), 993-1009.
Mitrofanov, S. P. (1966). The Scientific Principles of Group Technology. Boston
Spa, Yorkshire: National Lending Library Translation. Translation of Mitrofanov (1959).
Moon Y.B., Chi S.C.(1992). Generalized part family formation using neural
network techniques. Journal of Manufacturing Systems, 11(3), 149-159.
Mosier, C. T., & Taube, L. (1985a).The facets of group technology and their
impact on implementation, OMEGA, 13(6), 381-391.
Mosier, C. T., & Taube, L. (1985b). Weighted similarity measure heuristics
for the group technology machine clustering problem. OMEGA, 13(6), 577583.
Nagi.R., Harhalakis G., Proth J-M. (1990) Multiple routeings and capacity considerations in group technology applications. International Journal of Production Research, 28:12, 1243-1257
Nair G.J.(1999). Accord: A bicriterion algorithm for cell formation using ordinal and ratio-level data.International Journal of Production Research, 37:3,
539-556.
Parkin R.E., Li M.-L. (1997). The multi-dimensional aspects of a group
technology algorithm. International Journal of Production Research, 35(8),
2345-2358.
Pinheiro R. G. S., Martins I. C., Protti F., Ochi, L. S.; Simonetti L.G., & Subramanian A. (2016), On solving manufacturing cell formation via Bicluster
Editing, European Journal of Operational Research, 254(3), 769 - 779
Sandbothe R.A (1998) Two observations on the grouping efficacy measure for
goodness of block diagonal forms. International Journal of Production Research, 36:11, 3217-3222.
Sarker B.R., Khan M.(2001). A comparison of existing grouping efficiency
measures and a new weighted grouping efficiency measure. IIE Transactions, 33, 11-27.
Sarker, B.R. (2001) Measures of grouping efficiency in cellular manufacturing
systems, European Journal of Operational Research, 130, 588-611
Seifoddini H., Djassemi M.(1991). The production data-based similarity coefficient versus Jaccard’s similarity coefficient. Computers Industrial Engineering, 21, 263-266.
Seifoddini H., Djassemi M. (1996) The threshold value of a quality index for
formation of cellular manufacturing systems. International Journal of Production Research, 34:12, 3401-3416
Seifoddini, H., & Wolfe, P. M. (1986). Application of the similarity coefficient
method in group technology. IIE Transactions, 18(3), 271-277.
Seifoddini H.(1988). Comparison between single linkage and average linkage
clustering techniques in forming machine cells. Computers Industrial Engineering, 15, 210-216.
Seifoddini H. (1989b). A note on the similarity coefficient method and the problem of improper machine assignment in group technology applications. International Journal of Production Research, 27(7), 1161-1165
Shargal M., Shekhar S., Irani S.A.(1995). Evaluation of search algorithms and
clustering efficiency measures for machine-part matrix clustering. IIE Transactions, 27:1, 43-59.
Srinivasan G., Narendran T.T., Mahadevan B.(1990). An assignment model for
the part-families problem in group technology. International Journal of Production Research, 28(1), 145-152,
Adil G.K, Rajamani D., Strong D.(1996). Cell formation considering alternate
routeings. International Journal of Production Research, 34(5), 1361-1380.
Askin, R. G., & Subramanian, S. (1987). A cost-based heuristic for group technology configuration. International Journal of Production Research, 25(1),
101-113.
Boctor, F. (1991). A linear formulation of the machine-part cell formation problem. International Journal of Production Research, 29(2), 343-356.
Boe, W. J., & Cheng, C. H. (1991). A close neighbour algorithm for designing cellular manufacturing systems. International Journal of Production Research, 29(10), 2097-2116.
Boutsinas, B.(2013) Machine-part cell formation using biclustering, European
Journal of Operational Research, 230, 563-572
Brown E.C., Sumichrast R.T. (2001). CF-GGA: A grouping genetic algorithm
for the cell formation problem. International Journal of Production Research, 39(16), 3651-3669.
Brusco, M.J. (2015). An exact algorithm for maximizing grouping efficacy in
part–machine clustering. IIE Transactions, 47(6), 653-671.
Burbidge, J. L. (1961). The new approach to production. Prod. Eng., December,
3 - 19.
Bychkov I., Batsyn M., Sukhov P., Pardalos P.M.(2013). Heuristic Algorithm
for the Cell Formation Problem. Models, Algorithms, and Technologies for
Network Analysis, Springer Proceedings in Mathematics & Statistics, 59,
43-69.
Bychkov, I.S., Batsyn, M.V., Pardalos P.M.(2014). Exact model for the cell
formation problem. Optimization Letters, 8(8), 2203-2210.
Carrie, A.S. (1973). Numerical taxonomy applied to group technology and
plant layout. International Journal of Production Research, 11, 399-416.
Chan, H.M., Milner, D.A., (1982). Direct clustering algorithm for group formation in cellular manufacture. Journal of Manufacturing Systems 1 (1), 64-76.
Chandrasekaran, M.P., Rajagopalan, R., (1986a). An ideal seed nonhierarchical clustering algorithm for cellular manufacturing. International
Journal of Production Research 24, 451-463.
Chandrasekaran, M.P., Rajagopalan, R., (1986b). MODROC: An extension of
rank order clustering of group technology. International Journal of Production Research 24 (5), 1221-1233.
Chandrasekharan, M. P., & Rajagopalan, R. (1987). ZODIAC - An algorithm
for concurrent formation of part families and machine cells. International
Journal of Production Research, 25(6), 835-850.
Chandrasekharan, M. P., & Rajagopalan, R. (1989). Groupability: An analysis
of the properties of binary data matrices for group technology. International
Journal of Production Research , 27(6), 1035-1052.
Dinkelbach, W.(1967) On nonlinear fractional programming, Management Science, 13(7), 492–498.
Elbenani, B., & Ferland, J.A. Cell Formation Problem Solved Exactly with
the Dinkelbach Algorithm, Montreal. Quebec. CIRRELT-2012-07, pp. 1–14
(2012)
Gonçalves J.F., Resende M.G.C. (2004). An evolutionary algorithm for manufacturing cell formation. Computers & Industrial Engineering, 47(2-3), 247-273.
Ham I., Hitomi K., Yoshida T.(1985). Layout Planning for Group Technology.
Group Technology, International Series in Management Science/Operations
Research, pp 153-169
Harhalakis G., Ioannou G., Minis I., Nagi R.(1994). Manufacturing cell formation under random product demand, International Journal of Production
Research, 32:1, 47-64
Islam K.M.S, Sarker B.R.(2000). A similarity coefficient measure and machineparts grouping in cellular manufacturing systems. International Journal of
Production Research, 38:3, 699-720
King, J. R., & Nakornchai, V. (1982). Machine-component group formation in
group technology: Review and extension. International Journal of Production Research , 20(2), 117-133.
King, J. R. (1980). Machine-component grouping in production flow analysis:
An approach using a rank order clustering algorithm. International Journal
of Production Research , 18(2), 213-232.
Krushinsky, D., & Goldengorin, B. (2012). An exact model for cell formation in group technology. Computational Management Science, 9(3), 323-338.
Stanfel, L. (1985). Machine clustering for economic production. Engineering
Costs and Production Economics, 9, 73-81.
Viswanathan S.(1996). A new approach for solving the P-median problem
in group technology. International Journal of Production Research, 34:10,
2691-2700
Waghodekar, P. H., & Sahu, S. (1984). Machine-component cell formation in
group technology: MACE. International Journal of Production Research,
22(6), 937-948.
Won Y., Kim S.(1997). Multiple criteria clustering algorithm for solving the
group technology problem with multiple process routings. Computers & Industrial Engineering, 32(1), 207-220.
Yang M.S., Yang J.H.(2008). Machine-part cell formation in group technology
using a modified ART1 method. European Journal of Operational Research,
188,140-152.
Zolfaghari S., Liang M. (1997) An objective-guided ortho-synapse Hopfield
network approach to machine grouping problems. International Journal of
Production Research, 35(10), 2773-2792
Zolfaghari S., Liang M.(2002). Comparative study of simulated annealing,
genetic algorithms and tabu search for solving binary and comprehensive
machine-grouping problems. International Journal of Production Research,
40(9), 2141-2158
http://opt-hub.com/problems/cfp
https://www.researchgate.net/publication/316648108 Test instances and solutions for the cell formation problem
Table 7: Testset A - Computational results (residual cells are prohibited, singletons are allowed)

       Time, sec                                              Efficacy
#      Elbenani &       Bychkov et al.   two-index            Elbenani & Ferland   Bychkov et al.   two-index
       Ferland (2012)   (2014)           model                (2012) (cells)       (2014)           model
A1     2.3              0.63             0.00                 0.8235(2)            0.8235           0.8235
A2     1.6              2.29             0.00                 0.6957(2)            0.6957           0.6957
A3     3.1              5.69             0.00                 0.7959(2)            0.7959           0.7959
A4     2.0              1.86             0.09                 0.7692(2)            0.7692           0.7692
A5     30.6             9.14             0.17                 0.6087(5)            0.6087           0.6087
A6     4.3              5.15             0.01                 0.7083(4)            0.7083           0.7083
A7     9.6              13.37            0.02                 0.6944(4)            0.6944           0.6944
A8     3.1              18.33            0.01                 0.8525(3)            0.8525           0.8525
A9     3.5              208.36           0.45                 0.5872(2)            0.5872           0.5872
A10    1.1              6.25             0.00                 0.7500(5)            0.7500           0.7500
A11    1.6              2.93             0.02                 0.9200(3)            0.9200           0.9200
A12    2188.7           259.19           0.19                 0.7206(7)            0.7206           0.7206
A13    593.2            179.21           0.23                 0.7183(7)            0.7183           0.7183
A14    15130.5          *                4.24                 0.5326(8)            *                0.5326
A15    252.5            *                0.25                 0.6953(6)E           *                0.6899
A16    183232.5         *                4.80                 0.5753(8)            *                0.5753
A17    2345.6           *                3.82                 0.5773(9)            *                0.5773
A18    *                *                32243.10             *                    *                0.4345
A19    131357.5         *                245.59               0.5081(7)            *                0.5081
A20    31.1             *                0.22                 0.7791(5)            *                0.7791
A21    14583.6          *                24.34                0.5798(5)            *                0.5798
A22    11.3             1.64             0.14                 1.0000(7)            1.0000           1.0000
A23    230.7            *                0.12                 0.8511(7)            *                0.8511
A24    1101.1           *                0.16                 0.7351(7)            *                0.7351
A25    *                *                1026.96              *                    *                0.5329
A26    *                *                178182.24            *                    *                0.4895
A27    *                *                *                    *                    *                *
A28    958714.1         *                1964.00              0.5482(5)            *                0.5482
A29    *                *                *                    *                    *                *
A30    378300.0         *                8.72                 0.6331(14)           *                0.6331
A31    *                *                136.00               0.6012(13)E          *                0.5977
A32    *                *                *                    *                    *                *
A33    *                *                *                    *                    *                0.4800
A34    268007.6         *                16323.71             0.6064(3)            *                0.6064
A35    7365.3           *                1.34                 0.8403(10)           *                0.8403
Table 8: Testset A - Computational results (residual cells are allowed)

       Time, sec                                                          Efficacy
#      Brusco        Pinheiro et al.   Pinheiro et al.   two-index        Brusco (2015)       Pinheiro et al.   two-index
       (2015)        (2016) IM         (2016) ILP                         (cells)             (2016)
A1     0.01          0.16              0.01              0.01             0.8235(2,3,4)       0.7500E           0.8235
A2     0.01          0.07              0.01              0.01             0.6957(2,3,4)       0.6956            0.6957
A3     0.02          0.09              0.03              0.01             0.8085(2,3,4)       0.8085            0.8085
A4     0.01          0.02              0.01              0.01             0.7916(2,3,4)       0.7917            0.7917
A5     0.6           0.29              0.06              0.17             0.6087(2,3,4,5,6)   0.6087            0.6087
A6     0.04          0.14              0.01              0.01             0.7083(2,3,4,5)     0.7083            0.7083
A7     0.08          0.18              0.03              0.01             0.6944(2,3,4,5)     0.6944E           0.6944
A8     0.01          2.06              0.04              0.01             0.8525(2,3,4)       0.8525            0.8525
A9     35.86         81.46             4.94              0.45             0.5872(2,3,4)       0.5872            0.5872
A10    0.06          0.03              0.01              0.01             0.7500(2,3,4,5,6)   0.7500            0.7500
A11    0.01          0.01              0.02              0.02             0.9200(2,3,4)       0.9200            0.9200
A12    633.91        0.49              0.09              0.03             0.7424(6,7,8)       0.7424            0.7424
A13    2631.76       0.49              0.11              0.03             0.7285(6,7,8)       0.7286            0.7286
A14    24716.34      600.98            144.91            4.88             0.5385(8)           0.5333E           0.5385
A15    1279.93       7.24              0.54              0.16             0.6992(5,6,7)       0.6992E           0.6992
A16    -             1156.23           125.62            4.24             -                   0.5804            0.5804
A17    20840.55      87.13             42.32             3.84             0.5773(9)           0.5773E           0.5773
A18    -             *                 *                 52810.10         -                   *                 0.4397
A19    1375608.66    23928.70          1771.99           249.52           0.5081(7)           0.5081            0.5081
A20    4830.00       1.78              14.55             0.09             0.7888(5,6,7)       0.7938E           0.7888
A21    -             2145.24           305.48            22.60            -                   0.5879E           0.5860
A22    0.01          0.02              0.15              0.14             1.0000(7)           1.0000            1.0000
A23    42.29         10.08             0.44              0.14             0.8511(7)           0.8511            0.8511
A24    208158.02     17.46             0.78              0.20             0.7351(7)           0.7351            0.7351
A25    -             371233.00         48743.90          759.70           -                   0.5329E           0.5329
A26    -             *                 *                 134418.65        -                   *                 0.4895
A27    -             *                 *                 *                -                   *                 *
A28    -             *                 *                 46361.97         -                   *                 0.5482
A29    -             *                 *                 *                -                   *                 *
A30    -             183.71            41.53             8.00             -                   0.6304E           0.6331
A31    -             13807.50          2622.06           64.82            -                   0.5977            0.5977
A32    -             *                 *                 234055.90        -                   *                 0.5084
A33    -             *                 *                 *                -                   *                 0.4829
A34    -             *                 *                 14212.57         -                   *                 0.6131
A35    -             325.53            18.22             1.61             -                   0.8403            0.8403
Table 9: Testset B - Computational results

       Time, sec                                         Heuristic   Efficacy
#      two-index          two-index                      bound       two-index          two-index
       (no residual       (residual cells                            (no residual       (residual cells
       cells)             allowed)                                   cells)             allowed)
B1     0.01               0.01                           0.8095      0.8095             0.8095
B2     0.01               0.01                           0.7222      0.7222             0.7222
B3     0.25               0.03                           0.6071      0.6071             0.6071
B4     0.01               0.01                           0.8889      0.8889             0.8889
B5     0.01               0.01                           0.7500      0.7500             0.7500
B6     0.01               0.01                           0.7391      0.7391             0.7391
B7     0.01               0.01                           0.8148      0.8148             0.8148
B8     0.01               0.01                           0.7222      0.7222             0.7222
B9     0.01               0.01                           0.7576      0.7576             0.7576
B10    0.01               0.01                           0.9000      0.9000             0.9000
B11    0.01               0.02                           0.7273      0.7273             0.7297
B12    0.01               0.01                           0.8276      0.8276             0.8276
B13    0.36               0.80                           0.5962      0.5962             0.6042
B14    0.25               0.30                           0.6405      0.6405             0.6405
B15    0.01               0.01                           0.8333      0.8333             0.8333
B16    0.16               0.06                           0.7391      0.7391             0.7444
B17    0.98               0.26                           0.6552      0.6552             0.6842
B18    1.82               1.65                           0.6027      0.6129             0.6129
B19    0.03               0.06                           0.8000      0.8000             0.8113
B20    0.05               0.03                           0.8710      0.8710             0.8710
B21    0.03               0.04                           0.8333      0.8333             0.8333
B22    0.05               0.01                           0.7258      0.7258             0.7258
B23    0.05               0.06                           0.8111      0.8111             0.8111
B24    4.79               7.80                           0.5673      0.5673             0.5728
B25    0.20               0.10                           0.7600      0.7600             0.8000
B26    13.81              25.75                          0.6068      0.6068             0.6078
B27    0.25               0.28                           0.7248      0.7248             0.7248
B28    0.83               1.04                           0.6729      0.6729             0.6729
B29    33.82              51.76                          0.5730      0.5730             0.5745
B30    4.76               8.67                           0.7308      0.7308             0.7325
B31    19.69              17.50                          0.6799      0.6799             0.6799
B32    *                  *                              0.6193      *                  *
| 8 |
arXiv:1311.5107v1 [math.AC] 20 Nov 2013
FACTORIZATION IN THE SELF-IDEALIZATION OF A PID
GYU WHAN CHANG AND DANIEL SMERTNIG
Abstract. Let D be a principal ideal domain and let $R(D) = \{(\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \mid a, b \in D\}$ be its self-idealization. It is known that R(D) is a commutative
noetherian ring with identity, and hence R(D) is atomic (i.e., every nonzero
nonunit can be written as a finite product of irreducible elements). In this
paper, we completely characterize the irreducible elements of R(D). We then
use this result to show how to factorize each nonzero nonunit of R(D) into
irreducible elements. We show that every irreducible element of R(D) is a
primary element, and we determine the system of sets of lengths of R(D).
1. Introduction
Let R be a commutative noetherian ring. Then R is atomic, which means that
every nonzero nonunit element of R can be written as a finite product of atoms
(irreducible elements) of R. The study of non-unique factorizations has found
a lot of attention. Indeed this area has developed into a flourishing branch of
Commutative Algebra (see some surveys and books [3, 6, 8, 5]). However, the focus
so far was almost entirely on commutative integral domains, and only first steps
were done to study factorization properties in rings with zero-divisors (see [2, 7]).
In the present note we study factorizations in a subring of a matrix ring over a
principal ideal domain, which will turn out to be a commutative noetherian ring
with zero-divisors.
To begin with, we fix our notation and terminology. Let R be a commutative
ring with identity and U (R) be the set of units of R. Two elements a, b ∈ R are said
to be associates if aR = bR. Clearly, if a = ub for some u ∈ U (R), then a and b are
associates. An a ∈ R is said to be irreducible if a = bc implies that either b or c is
associated with a. We say that R is atomic if every nonzero nonunit of R is a finite
product of irreducible elements. It is clear that noetherian rings are atomic (cf. [2,
Theorem 3.2]) and that 0 ∈ R is irreducible if and only if R is an integral domain.
A ring R is a half-factorial ring (HFR) (resp., bounded factorization ring (BFR))
if R is atomic and two factorizations of a nonzero nonunit into irreducible elements
have the same length (resp., for each nonzero nonunit x ∈ R, there is an integer
N (x) ≥ 1 so that for any factorization x = x1 · · · xn , where each xi is a nonunit,
we have n ≤ N (x)). R is a FFR (finite factorization ring) if R is atomic and each
nonzero nonunit has only finitely many factorizations into irreducibles, up to order
and associates. A nonzero nonunit x ∈ R is said to be prime (resp., primary) if xR
Date: 24.5.2013.
2010 Mathematics Subject Classification. 13A05, 13F15, 20M13, 20M14.
Key words and phrases. non-unique factorizations, matrix ring, noetherian ring, sets of lengths.
is a prime (resp., primary) ideal. Hence a prime element is primary but not vice
versa (for example, if Z is the ring of integers, then 4 ∈ Z is primary but not prime).
We say that R is a unique factorization ring (UFR) if every nonzero principal ideal
of R can be written as a product of principal prime ideals (cf. [2, Theorem 4.9]).
Clearly, a prime element is irreducible, and so a UFR is atomic.
For x ∈ R a nonzero nonunit, its set of lengths is defined as
L(x) = { k ∈ N | there exist irreducibles u1 , . . . , uk ∈ R with x = u1 · . . . · uk }.
Clearly, x is irreducible if and only if L(x) = { 1 }. If x ∈ U (R), we set L(x) = { 0 }.
The system of sets of lengths is defined as L(R) = L(x) | x ∈ R \ { 0 } . Sets
of lengths and invariants derived from them are some of the classical invariants
considered in non-unique factorization theory (see [8, Ch. 1.4]). The reader is
referred to [8] for undefined definitions and notations.
Let M be an R-module. The idealization R(+)M of M is a ring, which is defined
as an abelian group R ⊕ M , whose ring multiplication is given by (a, b) · (x, y) =
(ax, ay + bx) for all a, x ∈ R and b, y ∈ M . It is known that R(+)M is a noetherian
ring if and only if R is noetherian and M is finitely generated [4, Theorem 4.8].
Let D be an integral domain, let Mat2×2(D) be the ring of 2 × 2 matrices over D, and let $R(D) = \{(\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \mid a, b \in D\}$. It is easy to show that R(D) is a commutative ring with identity under the usual matrix addition and multiplication; in particular, R(D) is a subring of Mat2×2(D). Clearly, the map $a \mapsto (\begin{smallmatrix} a & 0 \\ 0 & a \end{smallmatrix})$ embeds D into R(D), and the map ϕ : D(+)D → R(D), given by $\varphi(a, b) = (\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix})$, is a ring isomorphism. In view of this isomorphism, R(D) is called the self-idealization of D (cf. [13]). There is also an isomorphism D[X]/⟨X²⟩ → R(D) mapping a + bX + ⟨X²⟩ to $(\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix})$. Some factorization properties of R(+)M have been studied in [2,
0 a
Theorem 5.2]. For more on basic properties of R(+)M (and hence of R(D)), see
[4] or [11, Section 25].
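As a quick computational check of the product rule (a, b) · (x, y) = (ax, ay + bx) and of the matrix picture of R(D) (this sketch is ours, not part of the original paper), the code below multiplies elements of R(Z) both as pairs and as 2 × 2 upper triangular matrices and confirms that the two agree.

```python
import numpy as np

def as_matrix(a, b):
    """Embed (a, b) of D(+)D as the 2x2 matrix [[a, b], [0, a]] in R(D)."""
    return np.array([[a, b], [0, a]])

def idealization_product(x, y):
    """Product in the idealization D(+)D: (a, b) * (c, d) = (ac, ad + bc)."""
    (a, b), (c, d) = x, y
    return (a * c, a * d + b * c)

alpha, beta = (2, 3), (5, -1)
prod_pairs = idealization_product(alpha, beta)
prod_matrix = as_matrix(*alpha) @ as_matrix(*beta)
assert prod_matrix.tolist() == as_matrix(*prod_pairs).tolist()
# The product is commutative, as expected for R(D):
assert (as_matrix(*beta) @ as_matrix(*alpha)).tolist() == prod_matrix.tolist()
print(prod_pairs)  # (10, 13)
```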
Let D be a principal ideal domain (PID). Then R(D) is noetherian, and thus
R(D) is atomic. In Section 2, we first characterize the irreducible elements of R(D),
and we then use this result to show how to factorize each nonzero nonunit
of R(D)
0 1
into irreducible elements via the factorization of D. We show that
is
0 0
the unique prime element (up to associates) of R(D). We prove that every nonzero
nonunit of R(D) can be written as a product of primary elements. Finally, in
Section 3, we completely describe the system of sets of lengths L(R(D)).
We write N = { 1, 2, 3, . . . } for the set of positive integers, and N0 = N ∪ { 0 } for
the set of non-negative integers.
2. Factorization in R(D) when D is a PID
Let D be an integral domain, and let
$R(D) = \{(\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \mid a, b \in D\}$
be the self-idealization of D. Clearly, $(\begin{smallmatrix} 1 & 0 \\ 0 & 1 \end{smallmatrix})$ is the identity of R(D).
If $\alpha = (\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \in R(D)$, then nr(α) = a is the norm, and this is a ring homomorphism R(D) → D. Observe that α is a zero-divisor if and only if a = 0. We write R(D)• for the monoid of non-zero-divisors of R(D).
We begin this paper by characterizing the units of R(D), which is very useful in
the proof of Theorem 5.
Lemma 1. (cf. [11, Theorem 25.1(6)]) If D is an integral domain, then $\alpha = (\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \in R(D)$ is a unit of R(D) if and only if a is a unit of D.
Proof. If α is a unit, then $\alpha \cdot (\begin{smallmatrix} x & y \\ 0 & x \end{smallmatrix}) = (\begin{smallmatrix} 1 & 0 \\ 0 & 1 \end{smallmatrix})$ for some $(\begin{smallmatrix} x & y \\ 0 & x \end{smallmatrix}) \in R(D)$. Thus ax = 1, and so a ∈ U(D). Conversely, assume that a is a unit, and let u = a⁻¹. Then $(\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix})(\begin{smallmatrix} u & -bu^2 \\ 0 & u \end{smallmatrix}) = (\begin{smallmatrix} 1 & 0 \\ 0 & 1 \end{smallmatrix})$ and $(\begin{smallmatrix} u & -bu^2 \\ 0 & u \end{smallmatrix}) \in R(D)$. Thus α is a unit.
For an arbitrary commutative ring R, there can be two elements a, b ∈ R such
that a and b are associates but a 6= ub for all u ∈ U (R) (see, for example, [2,
Example 2.3]). This cannot happen in the self-idealization of an integral domain.
Lemma 2. Let D be an integral domain, let α, β ∈ R(D), and let a, b, x, y ∈ D be such that $\alpha = (\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix})$ and $\beta = (\begin{smallmatrix} x & y \\ 0 & x \end{smallmatrix})$. The following statements are equivalent.
(1) α and β are associates.
(2) There exists θ ∈ U(R(D)) such that β = θα.
(3) There exists u ∈ U(D) such that x = au and y ≡ bu mod a.
Proof. (1) ⇒ (2): If α and β are associates, then there are some γ, δ ∈ R(D) so that α = βγ and β = αδ. Hence if $\gamma = (\begin{smallmatrix} a_1 & b_1 \\ 0 & a_1 \end{smallmatrix})$ and $\delta = (\begin{smallmatrix} x_1 & y_1 \\ 0 & x_1 \end{smallmatrix})$, then a = xa1 and x = ax1, and so a1, x1 ∈ U(D). Thus γ, δ ∈ U(R(D)) by Lemma 1.
(2) ⇒ (3): Let $\theta = (\begin{smallmatrix} u & v \\ 0 & u \end{smallmatrix})$. By Lemma 1, u ∈ U(D). From β = θα it follows that x = au and y = av + bu ≡ bu mod a.
(3) ⇒ (2) and (1): Let v ∈ D be such that y = bu + av. Define $\theta = (\begin{smallmatrix} u & v \\ 0 & u \end{smallmatrix})$. Then θ ∈ U(R(D)) by Lemma 1 and β = θα. Thus, α and β are associates.
We write α ≃ β if α, β ∈ R(D) are associates.
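For a concrete instance (an added illustration, with D = Z): the elements $\alpha = (\begin{smallmatrix} 2 & 1 \\ 0 & 2 \end{smallmatrix})$ and $\beta = (\begin{smallmatrix} -2 & 1 \\ 0 & -2 \end{smallmatrix})$ satisfy condition (3) with u = −1, since −2 = 2·(−1) and 1 ≡ −1 (mod 2); accordingly β = θα with $\theta = (\begin{smallmatrix} -1 & 1 \\ 0 & -1 \end{smallmatrix}) \in U(R(\mathbb{Z}))$, so α ≃ β.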
Lemma 3. Let D be a PID and let $\alpha = (\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \in R(D)^\bullet$. If a = cd with coprime c, d ∈ D, then there exist γ, δ ∈ R(D) with α = γδ and nr(γ) = c, nr(δ) = d. This representation is unique in the sense that, if γ′, δ′ ∈ R(D) with α = γ′δ′ and nr(γ′) ≃ c, nr(δ′) ≃ d, then γ ≃ γ′ and δ ≃ δ′.
Proof. Existence: Since 1 ∈ GCD(c, d) and D is a PID, there exist e, f ∈ D such that b = cf + ed. Then $\gamma = (\begin{smallmatrix} c & e \\ 0 & c \end{smallmatrix})$ and $\delta = (\begin{smallmatrix} d & f \\ 0 & d \end{smallmatrix})$ are as claimed.
Uniqueness: Let $\gamma' = (\begin{smallmatrix} c' & e' \\ 0 & c' \end{smallmatrix})$ and $\delta' = (\begin{smallmatrix} d' & f' \\ 0 & d' \end{smallmatrix})$ with c′, e′, d′, f′ ∈ D and suppose that α = γ′δ′. Let u, v ∈ U(D) be such that c′ = cu and d′ = dv. Since c′d′ = cd, necessarily v = u⁻¹. Since cf + ed = c′f′ + e′d′ = c(f′u) + d(e′v), we have c(f′u) ≡ cf mod d and f′u ≡ f mod d, i.e., f′ ≡ fv mod d. Therefore δ′ ≃ δ and similarly γ′ ≃ γ.
Corollary 4. Let D be a PID and let α ∈ R(D)• \ U(R(D)). Then there exist β1, . . . , βn ∈ R(D)• of pairwise distinct prime power norm such that α = β1 · . . . · βn. This representation is unique up to order and associates.
Proof. Let nr(α) = p1^{e1} · . . . · pn^{en} with n ≥ 0, p1, . . . , pn ∈ D pairwise distinct prime elements and e1, . . . , en ≥ 1. By induction on n and the previous lemma, there exist β1, . . . , βn ∈ R(D)• such that α = β1 · . . . · βn and nr(βi) = pi^{ei} for all i ∈ [1, n]. Suppose α = β′1 · . . . · β′m is another such factorization. Since D is a UFD, then m = n and there exists a permutation π ∈ Sn such that nr(β′_{π(i)}) ≃ nr(βi) for all i ∈ [1, n]. The uniqueness statement of the previous lemma implies β′i ≃ βi for all i ∈ [1, n].
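As a small added illustration with D = Z: for $\alpha = (\begin{smallmatrix} 6 & 1 \\ 0 & 6 \end{smallmatrix})$, the coprime factorization 6 = 2 · 3 gives $\alpha = (\begin{smallmatrix} 2 & 1 \\ 0 & 2 \end{smallmatrix})(\begin{smallmatrix} 3 & -1 \\ 0 & 3 \end{smallmatrix})$, a product of elements of norms 2 and 3; by Corollary 4 this splitting is unique up to order and associates.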
As a consequence, to study factorizations of α ∈ R(D)• , it is sufficient to study
factorizations of α ∈ R(D)• with prime power norm.
We next give the first main result of this paper, which completely characterizes
the irreducible elements of R(D) when D is a PID.
Theorem 5. Let D be a PID and $\alpha = (\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \in R(D)$. Then α is irreducible if and only if either (i) a = 0 and b ∈ U(D), (ii) a = p, or (iii) a = up^n and 1 ∈ GCD(a, b), for some prime p ∈ D, u ∈ U(D), and integer n ≥ 2.
Proof. Necessity. Assume that a = 0, and let $\beta = (\begin{smallmatrix} b & 0 \\ 0 & b \end{smallmatrix})$ and $\gamma = (\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix})$. Then α = β · γ and αR(D) ≠ βR(D) because b ≠ 0. Hence αR(D) = γR(D), and so γ = α · δ for some $\delta = (\begin{smallmatrix} x & y \\ 0 & x \end{smallmatrix}) \in R(D)$. Thus bx = 1.
Next, assume that a ≠ 0. If a is not of the form up^n, then Lemma 3 implies that α = β · γ with nr(β) and nr(γ) nonzero nonunits. Hence α is not irreducible, a contradiction. Thus a = up^n for some prime p ∈ D, u ∈ U(D), and integer n ≥ 1. If n = 1, then up is also a prime element of D and we have case (ii).
Finally, assume that n ≥ 2 and p^k ∈ GCD(a, b) for some integer k ≥ 1. Let b = b1 p^k, where b1 ∈ D. Then α = θ · ξ, where $\theta = (\begin{smallmatrix} p & 0 \\ 0 & p \end{smallmatrix})$ and $\xi = (\begin{smallmatrix} up^{n-1} & b_1 p^{k-1} \\ 0 & up^{n-1} \end{smallmatrix})$, but θ, ξ ∉ αR(D), a contradiction. This completes the proof.
Sufficiency. Let α = β · γ, where $\beta = (\begin{smallmatrix} x_1 & y_1 \\ 0 & x_1 \end{smallmatrix})$ and $\gamma = (\begin{smallmatrix} x_2 & y_2 \\ 0 & x_2 \end{smallmatrix})$. We will show that β or γ is a unit, and thus α is irreducible.
Case 1. a = 0 and b ∈ U(D). Note that x1 = 0 or x2 = 0; so for convenience, let x2 = 0. Then x1 y2 = b, and since b ∈ U(D), we have x1 ∈ U(D). Thus β is a unit of R(D) by Lemma 1.
Case 2. a = p for a prime p ∈ D. Then α = β · γ implies that either x1 or x2 is a unit in D. Hence β or γ is a unit in R(D) by Lemma 1.
Case 3. a = up^n for a prime p ∈ D, u ∈ U(D), n ≥ 2 and 1 ∈ GCD(a, b). Since p is a prime and α = β · γ, we have $\beta = (\begin{smallmatrix} vp^{k} & x \\ 0 & vp^{k} \end{smallmatrix})$ and $\gamma = (\begin{smallmatrix} wp^{n-k} & y \\ 0 & wp^{n-k} \end{smallmatrix})$ for some 0 ≤ k, n − k ≤ n, x, y ∈ D, and v, w ∈ U(D) with vw = u. Hence b = p^k vy + p^{n−k} wx, and thus k = 0 or n − k = 0 because a and b are coprime. Therefore β or γ is a unit in R(D) by Lemma 1.
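For example (an added illustration with D = Z): $(\begin{smallmatrix} 4 & 1 \\ 0 & 4 \end{smallmatrix})$ is irreducible by case (iii), since 4 = 2^2 and 1 ∈ GCD(4, 1), whereas $(\begin{smallmatrix} 4 & 2 \\ 0 & 4 \end{smallmatrix})$ is not irreducible because 2 ∈ GCD(4, 2); indeed $(\begin{smallmatrix} 4 & 2 \\ 0 & 4 \end{smallmatrix}) = (\begin{smallmatrix} 2 & 0 \\ 0 & 2 \end{smallmatrix})(\begin{smallmatrix} 2 & 1 \\ 0 & 2 \end{smallmatrix})$.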
Corollary 6. Let D be a PID and let $\alpha = (\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \in R(D)$ be a nonzero nonunit such that c ∈ GCD(a, b), a = ca1, and b = cb1 for some c, a1, b1 ∈ D. Let c = u p1^{e1} · · · pn^{en} and a1 = q1^{k1} · · · qm^{km} (when a ≠ 0) be prime factorizations of c and a1, respectively, where u ∈ U(D). The following is a factorization of α into irreducible elements.
(1) If a = 0, then $\alpha = (\begin{smallmatrix} 0 & u \\ 0 & 0 \end{smallmatrix}) \prod_{i=1}^{n} (\begin{smallmatrix} p_i & 0 \\ 0 & p_i \end{smallmatrix})^{e_i}$.
(2) If a ≠ 0, then $\alpha = (\begin{smallmatrix} u & 0 \\ 0 & u \end{smallmatrix}) \big(\prod_{i=1}^{n} (\begin{smallmatrix} p_i & 0 \\ 0 & p_i \end{smallmatrix})^{e_i}\big) \big(\prod_{j=1}^{m} (\begin{smallmatrix} q_j^{k_j} & c_j \\ 0 & q_j^{k_j} \end{smallmatrix})\big)$ for some c_j ∈ D with 1 ∈ GCD(c_j, q_j).
Proof. (1) Clear.
(2) We first note that
$\alpha = (\begin{smallmatrix} c & 0 \\ 0 & c \end{smallmatrix})(\begin{smallmatrix} a_1 & b_1 \\ 0 & a_1 \end{smallmatrix})$ and $(\begin{smallmatrix} c & 0 \\ 0 & c \end{smallmatrix}) = (\begin{smallmatrix} u & 0 \\ 0 & u \end{smallmatrix})(\begin{smallmatrix} p_1 & 0 \\ 0 & p_1 \end{smallmatrix})^{e_1} \cdots (\begin{smallmatrix} p_n & 0 \\ 0 & p_n \end{smallmatrix})^{e_n}$.
Next, assume that a1 = b2 d2 for some b2, d2 ∈ D with 1 ∈ GCD(b2, d2). Then there are some x, y ∈ D such that b2(xb1) + d2(yb1) = b1 because D is a PID, and hence $(\begin{smallmatrix} a_1 & b_1 \\ 0 & a_1 \end{smallmatrix}) = (\begin{smallmatrix} b_2 & yb_1 \\ 0 & b_2 \end{smallmatrix})(\begin{smallmatrix} d_2 & xb_1 \\ 0 & d_2 \end{smallmatrix})$. Note that 1 ∈ GCD(a1, b1); hence 1 ∈ GCD(b2, yb1) and 1 ∈ GCD(d2, xb1). So by repeating this process, we have $(\begin{smallmatrix} a_1 & b_1 \\ 0 & a_1 \end{smallmatrix}) = \prod_{j=1}^{m} (\begin{smallmatrix} q_j^{k_j} & c_j \\ 0 & q_j^{k_j} \end{smallmatrix})$ for some c_j ∈ D with 1 ∈ GCD(c_j, q_j).
Corollary 7. If D is a PID, then $(\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix})$ is the unique prime element (up to associates) of R(D).
Proof. Clearly, prime elements are irreducible, and hence by Theorem 5, we have three cases to consider. Let $\alpha = (\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \in R(D)$ be irreducible.
Case 1. a = 0 and b ∈ U(D). Note that if we set $I = (\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix})$, then $\alpha = I \cdot (\begin{smallmatrix} b & 0 \\ 0 & b \end{smallmatrix})$ and $(\begin{smallmatrix} b & 0 \\ 0 & b \end{smallmatrix}) \in U(R(D))$ by Lemma 1; so α and I are associates. Let $\beta = (\begin{smallmatrix} x & y \\ 0 & x \end{smallmatrix}), \gamma = (\begin{smallmatrix} c & d \\ 0 & c \end{smallmatrix}) \in R(D)$. Then βγ ∈ IR(D) if and only if xc = 0; so if x = 0 (for convenience), then β ∈ IR(D). Thus I is a prime.
Cases 2 and 3. a ≠ 0. Note that
$(\begin{smallmatrix} a & b-1 \\ 0 & a \end{smallmatrix})^2 = (\begin{smallmatrix} a^2 & 2a(b-1) \\ 0 & a^2 \end{smallmatrix}) = (\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix})(\begin{smallmatrix} a & b-2 \\ 0 & a \end{smallmatrix}) \in \alpha R(D)$,
but $(\begin{smallmatrix} a & b-1 \\ 0 & a \end{smallmatrix}) \notin \alpha R(D)$ because a ∉ U(D). Thus α is not a prime.
For zero-divisors and elements with prime power norm, the following lemma
further refines Corollary 6, by giving all possible factorizations, up to order and
associates. The general case can be obtained in combination with Corollary 4.
Lemma 8. Let D be a PID, and let $\alpha = (\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \in R(D) \setminus \{0\}$ with a, b ∈ D.
(1) Suppose a = 0 and b = q1 · . . . · qn, with (possibly associated) prime powers q1, . . . , qn ∈ D. Then, for every choice of a1, . . . , an ∈ D,
$\alpha = (\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}) \prod_{i=1}^{n} (\begin{smallmatrix} q_i & a_i \\ 0 & q_i \end{smallmatrix})$,
and this is a factorization into irreducibles if and only if for all i ∈ [1, n] either q_i is prime or 1 ∈ GCD(q_i, a_i).
(2) Suppose a = p^n with p ∈ D a prime element and n ∈ N. For all l ∈ [1, n] let m_l ∈ N0 and for all j ∈ [1, m_l] let a_{l,j} ∈ D. Then
$\alpha = \prod_{l=1}^{n} \prod_{j=1}^{m_l} (\begin{smallmatrix} p^l & a_{l,j} \\ 0 & p^l \end{smallmatrix})$
if and only if $n = \sum_{l=1}^{n} m_l\, l$ and $b = \sum_{l=1}^{n} p^{n-l} \big(\sum_{j=1}^{m_l} a_{l,j}\big)$. This is a product of irreducibles if and only if 1 ∈ GCD(a_{l,j}, p) for all l ∈ [2, n] and j ∈ [1, m_l].
Up to order and associativity of the factors, all the factorizations of α are of this form.
Proof. This is checked by a straightforward calculation. The statement about the irreducibles follows from the characterization of the irreducible elements in Theorem 5. That every representation of α as a product of irreducibles is, up to order and associates, one of the stated ones also follows from this characterization.
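To illustrate the notation of Lemma 8(2) (an added example): for $\alpha = (\begin{smallmatrix} p^2 & 0 \\ 0 & p^2 \end{smallmatrix})$ one may take m1 = 2, a_{1,1} = a and a_{1,2} = −a for any a ∈ D, since then b = p^{2−1}(a + (−a)) = 0; this gives $\alpha = (\begin{smallmatrix} p & a \\ 0 & p \end{smallmatrix})(\begin{smallmatrix} p & -a \\ 0 & p \end{smallmatrix})$, a factorization into irreducibles for every a. When D/pD is infinite these factorizations are, up to associates, infinitely many, which is exactly the phenomenon used in the proof of Corollary 9(2) below.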
Corollary 9. Let D be a PID.
(1) R(D) is a BFR.
(2) R(D) is a FFR if and only if D/pD is finite for all prime elements p ∈ D.
(3) If D is a field, then every nonzero nonunit of R(D) is a prime, and hence R(D) is a UFR with a unique nonzero (prime) ideal.
Proof. (1) By Corollary 6, R(D) is atomic, and if $\alpha = (\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \in R(D)$, then the lengths of factorizations of α into irreducible elements are less than or equal to that of the prime factorization of a or b in D, plus at most one. Thus R(D) is a BFR.
(2) Suppose first that D/pD is finite for all prime elements p ∈ D. Then also D/p^n D is finite for all n ≥ 1 and all prime elements p ∈ D. Hence, by the Chinese Remainder Theorem, D/cD is finite for all nonzero c ∈ D.
Let c ∈ D•. By Lemma 2(3) there exist, up to associativity, only finitely many elements γ ∈ R(D) with nr(γ) ≃ c. If α ∈ R(D)• and γ | α, then nr(γ) | nr(α), and therefore there are, up to associativity, only finitely many irreducibles that can possibly divide α. Together with (1), this implies that every α ∈ R(D)• has only finitely many factorizations.
If $\alpha = (\begin{smallmatrix} 0 & b \\ 0 & 0 \end{smallmatrix}) \in R(D)$ is a zero-divisor, then every factorization has exactly one factor associated to $(\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix})$, and if γ is any other factor in the factorization then nr(γ) | b (cf. Lemma 8(1)). By the same argument as before, α has only finitely many factorizations.
For the converse, suppose that p ∈ D is a prime element and |D/pD| = ∞. Since
$(\begin{smallmatrix} p^2 & 0 \\ 0 & p^2 \end{smallmatrix}) = (\begin{smallmatrix} p & a \\ 0 & p \end{smallmatrix})(\begin{smallmatrix} p & -a \\ 0 & p \end{smallmatrix})$
for all a ∈ D, $(\begin{smallmatrix} p^2 & 0 \\ 0 & p^2 \end{smallmatrix})$ has infinitely many (non-associated) factorizations in R(D).
(3) Let $\alpha = (\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \in R(D)$ be a nonzero nonunit. Since D is a field, by Lemma 1, a = 0 and b ∈ U(D). Hence α is associated with $I := (\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix})$, and so α is a prime by the proof of Corollary 7. Thus R(D) is a UFR and IR(D) is a unique nonzero (prime) ideal of R(D).
If D is a PID but not a field, we will see in Corollary 15 that R(D) is not a UFR,
even when D is the ring of integers.
We next prove that every nonunit of R(D) can be written as a (finite) product
of primary elements.
Lemma 10. Let R be a commutative ring. If a ∈ R is such that $\sqrt{aR}$ is a maximal ideal, then aR is primary.
Proof. Let x, y ∈ R be such that xy ∈ aR but $x \notin \sqrt{aR}$. Note that $\sqrt{aR} \subsetneq \sqrt{aR + xR}$; so $\sqrt{aR + xR} = R$ because $\sqrt{aR}$ is a maximal ideal. Hence 1 = as + xt for some s, t ∈ R. Thus y = y(as + xt) = a(ys) + (xy)t ∈ aR.
Corollary 11. If D is a PID, then every irreducible element of R(D) is primary.
In particular, each nonzero nonunit of R(D) can be written as a finite product of
primary elements.
Proof. Let $\alpha = (\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \in R(D)$ be irreducible. By Theorem 5, there are three cases that we have to consider.
Case 1. a = 0 and b ∈ U(D). By Corollary 7, α is a prime, and hence a primary element.
Cases 2 and 3. a = up^n for some prime element p ∈ D, u ∈ U(D), and n ∈ N. By Lemma 10, it suffices to show that $\sqrt{\alpha R(D)}$ is a maximal ideal. Let $\beta = (\begin{smallmatrix} x & y \\ 0 & x \end{smallmatrix}) \in R(D) \setminus \sqrt{\alpha R(D)}$. Note that if $\delta = (\begin{smallmatrix} 0 & d \\ 0 & 0 \end{smallmatrix}) \in R(D)$, then δ² = 0, and hence $\delta \in \sqrt{\alpha R(D)}$. Hence $(\begin{smallmatrix} x & 0 \\ 0 & x \end{smallmatrix}) \notin \sqrt{\alpha R(D)}$ and $(\begin{smallmatrix} up^n & 0 \\ 0 & up^n \end{smallmatrix}) \in \sqrt{\alpha R(D)}$. But then $(\begin{smallmatrix} p & 0 \\ 0 & p \end{smallmatrix}) \in \sqrt{\alpha R(D)}$. Note also that if x ∈ pD, then x = px1 for some x1 ∈ D, and so $(\begin{smallmatrix} x & 0 \\ 0 & x \end{smallmatrix}) = (\begin{smallmatrix} p & 0 \\ 0 & p \end{smallmatrix})(\begin{smallmatrix} x_1 & 0 \\ 0 & x_1 \end{smallmatrix}) \in \sqrt{\alpha R(D)}$, a contradiction. So x ∉ pD, and hence xz1 + pz2 = 1 for some z1, z2 ∈ D. Thus
$(\begin{smallmatrix} 1 & 0 \\ 0 & 1 \end{smallmatrix}) = \beta \cdot (\begin{smallmatrix} z_1 & 0 \\ 0 & z_1 \end{smallmatrix}) + (\begin{smallmatrix} p & 0 \\ 0 & p \end{smallmatrix})(\begin{smallmatrix} z_2 & 0 \\ 0 & z_2 \end{smallmatrix}) + (\begin{smallmatrix} 0 & -yz_1 \\ 0 & 0 \end{smallmatrix}) \in \beta R(D) + \sqrt{\alpha R(D)}$.
Therefore $\sqrt{\alpha R(D)}$ is maximal.
Remark 12. In view of Corollary 11, Corollary 4 in fact corresponds to the (unique)
primary decomposition of αR(D), as every prime ideal of R(D), except for 0(+)D,
is maximal (cf. [4, Theorem 3.2]).
Associativity is a congruence relation on (R(D)•, ·), and we denote by R(D)•_red the corresponding quotient monoid. Corollary 4 may also be viewed as a monoid isomorphism $R(D)^\bullet_{\mathrm{red}} \cong \coprod_{p} R(D_{(p)})^\bullet_{\mathrm{red}}$, where the coproduct is taken over all associativity classes of prime elements p of D, and D_(p) is the localization at pD.
3. The sets of lengths in R(D) when D is a PID
Let D be an integral domain and $R(D) = \{(\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \mid a, b \in D\}$. In this section, we characterize the sets of lengths in R(D) when D is a PID.
Lemma 13. Let D be a PID and α, β ∈ R(D).
(1) If αβ 6= 0, then L(α) + L(β) ⊂ L(αβ).
(2) If nr(α) and nr(β) are coprime, then L(α) + L(β) = L(αβ).
Proof. (1) Clear.
(2) Let n ∈ L(αβ). Then there exist irreducible γ1 , . . . , γn ∈ R(D)• such
that αβ = γ1 · . . . · γn . Then also nr(α) nr(β) = nr(γ1 ) · . . . · nr(γn ). Since
1 ∈ GCD(nr(α), nr(β)), we may without loss of generality assume nr(α) ≃ nr(γ1 ) ·
. . . · nr(γk ) and nr(β) ≃ nr(γk+1 ) · . . . · nr(γn ) for some k ∈ [0, n]. By Lemma 3,
therefore α ≃ γ1 ·. . .·γk and β ≃ γk+1 ·. . .·γn , and n = k+(n−k) ∈ L(α)+L(β).
For a prime element p ∈ D we denote by vp : D → N0 ∪ {∞} the corresponding valuation, i.e., vp(0) = ∞ and vp(ap^k) = k if k ∈ N0 and a ∈ D• is coprime to p.
Theorem 14. Let D be a PID, α ∈ R(D), and suppose $\alpha = (\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix})$ with a, b ∈ D.
(1) If a = 0, and b = p1^{e1} · . . . · pn^{en} with pairwise non-associated prime elements p1, . . . , pn ∈ D and e1, . . . , en ∈ N, then L(α) = [1 + n, 1 + e1 + . . . + en].
(2) Let p ∈ D be a prime element, n ∈ N and suppose a = p^n and vp(b) = k ∈ N0 ∪ {∞}. Then L(α) = {1} if and only if k = 0 or n = 1. If k ≥ n − 1, then
[3, n − 2] ∪ {n} ⊂ L(α) ⊂ [2, n − 2] ∪ {n},
and if k ∈ [1, n − 2], then
[3, k + 1] ⊂ L(α) ⊂ [2, k + 1].
Moreover, if k ≥ 1, then 2 ∈ L(α) if and only if n is even or k < n/2.
Proof. (1) This is clear from Lemma 8(1), as every factorization of b into prime
powers gives a factorization of α (choose ai = 1), and conversely.
(2) The cases k = 0 and n = 1 are clear from Theorem 5, so from now on we
assume k ≥ 1 and n > 1. Let b = upk with u ∈ D and 1 ∈ GCD(u, p). We
repeatedly make use of Lemma 8(2), and the notation used there to describe a
factorization, without explicitly mentioning this fact every time.
Claim A: L(α) ⊂ [2, min{k + 1, n}].
Proof. Because α is not an atom, 1 ∉ L(α). Any factorization of α is associated to one in Lemma 8(2); we fix a factorization of α with notation as in the lemma. The length of the factorization is then $t = \sum_{l=1}^{n} m_l$. Since $\sum_{l=1}^{n} m_l\, l = n$, clearly t ≤ n. Moreover, necessarily m_l = 0 for all l > n − (t − 1). Since $b = \sum_{l=1}^{n} p^{n-l} \big(\sum_{j=1}^{m_l} a_{l,j}\big)$, therefore k = vp(b) ≥ vp(p^{n−(n−t+1)}) = t − 1, i.e., t ≤ k + 1.
Claim B: 2 ∈ L(α) if and only if n is even or k < n/2.
Proof. Suppose 2 ∈ L(α) and n is odd. Then n = l + (n − l) and b = p^{n−l} a_{l,1} + p^{l} a_{n−l,1} with 1 ∈ GCD(a_{l,1}, p) and 1 ∈ GCD(a_{n−l,1}, p). Since n is odd, then n − l ≠ l and therefore k = vp(b) = min{n − l, l} < n/2.
For the converse suppose first 1 ≤ k < n/2. Then n = k + (n − k), n − k > k and b = p^{n−k} · 1 + p^{k}(u − p^{n−2k}) with 1 ∈ GCD(u − p^{n−2k}, p). If n is even and k ≥ n/2, then n = n/2 + n/2 and b = p^{n/2}(1 + (up^{k−n/2} − 1)) with 1 ∈ GCD(up^{k−n/2} − 1, p).
Claim C: If n − 1 ∈ L(α), then k = n − 2.
Proof. For a corresponding factorization we must have m1 = n − 2, m2 = 1,
and ml = 0 for all l > 2. Then b = pn−1 (a1,1 + . . . + a1,n−2 ) + pn−2 a2,1 with
1 ∈ GCD(a2,1 , p), whence k = vp (b) = n − 2.
Claim D: Let n ≥ 3 and k ≥ 2. If either k = 2 or n ≠ 4, then 3 ∈ L(α).
Proof. Suppose first that n is odd and set b′ = b/p. Then
(1)  $\alpha = (\begin{smallmatrix} p & 0 \\ 0 & p \end{smallmatrix}) \alpha'$ with $\alpha' = (\begin{smallmatrix} p^{n-1} & b' \\ 0 & p^{n-1} \end{smallmatrix})$,
and, by Claim B, 2 ∈ L(α′). Therefore 3 ∈ L(α).
If n is even, n ≥ 6, and k ≥ 3, then
$\alpha = (\begin{smallmatrix} p^2 & u \\ 0 & p^2 \end{smallmatrix})(\begin{smallmatrix} p^{n-2} & u(p^{k-2} - p^{n-4}) \\ 0 & p^{n-2} \end{smallmatrix})$,
where the first factor is irreducible and the second has a factorization of length 2 by Claim B.
If k = 2, then
$\alpha = (\begin{smallmatrix} p & 0 \\ 0 & p \end{smallmatrix})^2 (\begin{smallmatrix} p^{n-2} & u \\ 0 & p^{n-2} \end{smallmatrix})$
is a factorization of length 3.
Claim E: If k ≥ n − 1, then n ∈ L(α).
Proof. We use Lemma 8(2). Set m1 = n, a_{1,1} = up^{k−(n−1)} and a_{1,2} = . . . = a_{1,n} = 0. Then p^{n−1}(up^{k−(n−1)} + 0 + . . . + 0) = b.
Claim F: If k ∈ [1, n − 2], then [3, k + 1] ⊂ L(α).
Proof. If n ≤ 3 or k = 1, then the claim is trivially true, so we may assume k ≥ 2.
We proceed by induction on n. Suppose n ≥ 4, and that the claim is true for n − 1.
Let b′ = b/p and let α′ be as in (1). We have vp(b′) = k − 1 ≥ 1.
If k = 2, then 1 = k − 1 < (n − 1)/2, and hence 2 ∈ L(α′) (by Claim B). Therefore {3} = [3, k + 1] ⊂ {1} + L(α′) ⊂ L(α).
If k ≥ 3, then by induction hypothesis, [3, k] ⊂ L(α′ ), and thus [4, k + 1] =
{ 1 } + L(α′ ) ⊂ L(α), and by Claim D, also 3 ∈ L(α).
Claim G: If k ≥ n − 1, then [3, n − 2] ⊂ L(α).
Proof. If n ≤ 4, then the claim is trivially true. We again proceed by induction on
n. Suppose n ≥ 5 (then k ≥ 4), and that the claim is true for n − 1.
Let b′ = b/p and let α′ be as in (1). Again, vp (b′ ) = k − 1 ≥ 3 and by induction
hypothesis [3, n − 3] ⊂ L(α′ ). Therefore [4, n − 2] ⊂ L(α) and by Claim D also
3 ∈ L(α).
If k ≥ n − 1, then the claim of the theorem follows from claims A, B, C, E and
G. If k ∈ [2, n − 2], then the claim of the theorem follows from claims A, B and
F.
If α ∈ R(D) is a nonzero nonunit, and L(α) = {l1, l2, . . . , lk}, then the set of distances of α is defined as ∆(α) = {li − li−1 | i ∈ [2, k]}, and $\Delta(R(D)) = \bigcup_{\alpha \in R(D) \setminus (\{0\} \cup U(R(D)))} \Delta(\alpha)$. For k ∈ N≥2, set $U_k(R(D)) = \bigcup_{\alpha \in R(D),\, k \in L(\alpha)} L(\alpha)$.
Corollary 15. If D is a PID, but not a field, then U2(R(D)) = N≥2 and ∆(R(D)) = {1, 2}.
Proof. This follows directly from Theorem 14.
Corollary 16. Suppose D is a PID that has infinitely many pairwise non-associated prime elements. Then
L(R(D)) = { {0}, {1} } ∪ { [m, n] | m ∈ [2, n] }
          ∪ { [m, n] ∪ {n + 2} | m ∈ [2, n] and n even }
          ∪ { [m, n] ∪ {n + 2} | m ∈ [3, n] and n odd }
          ∪ { m + 2[0, n] | m ∈ N_{≥2n} and n ∈ N }.
Proof. The sets { 0 } and { 1 } correspond to units and irreducibles. For zerodivisors, the sets of lengths are discrete intervals and completely described in Theorem 14(1). By Corollary 4 and Lemma 13(2), the sets of lengths of nonunit
non-zero-divisors are arbitrary sumsets of sets as in Theorem 14(2), i.e., of sets of
the form { 1 }, [2, n] (for n ≥ 2), [3, n] (for n ≥ 3), [2, n] ∪ { n + 2 } for even n ≥ 2,
and [3, n] ∪ { n + 2 } for odd n ≥ 3.
Finally, we remark that other important invariants of factorization theory (their
definitions readily generalize to the zero-divisor case) are easily determined for
R(D) using the characterization of sets of lengths and Corollary 4.
Corollary 17. Let D be a PID but not a field. R(D) is a locally tame ring with catenary degree c(R(D)) = 4. In particular, ∆(R(D)) = [1, c(R(D)) − 2].
Proof. We first observe that the catenary degree (see [8, Chapter 1.6] for the definition in the non-zero-divisor case) of R(D) is 4: Let first α ∈ R(D) with nr(α) 6= 0.
Using Corollary 4, we can reduce to the case where nr(α) is a prime power. Since
then min L(α) ≤ 3, we can argue as in bifurcus semigroups (cf. [1, Theorem 1.1]),
to find c(α) ≤ 4. In view of Lemma 8(1), and with a similar argument, the catenary
degree of a zero-divisor is at most 2. Together this gives c(R(D)) ≤ 4. Since there
exists an element with set of lengths { 2, 4 }, also c(R(D)) ≥ 4.
We still have to show that R(D) is locally tame (see [8, Chapter 1.6] or [10] for
definitions). For this we have to show t(R(D), γ) < ∞ for all irreducible γ ∈ R(D).
Let α ∈ R(D) and γ ∈ R(D) be irreducible. If γ is prime, then t(R(D), γ) = 0,
hence we may suppose that γ is associated to one of the non-prime irreducibles
from Theorem 5, and hence there exist a prime element p ∈ D and n ∈ N such
that nr(γ) = pn . If α ∈ R(D) is a zero-divisor, then t(α, γ) = n follows easily from
Lemma 8(1).
A standard technique allows us to show t(R(D)• , γ) < ∞: By [10, Proposition
3.8], it suffices to show that two auxiliary invariants, ω(R(D)• , γ) and τ (R(D)• , γ)
are finite.
Suppose I ⊂ (R(D)• , ·) is a divisorial ideal. If we denote by R(D) hIi the ideal
of R(D) generated by I, one checks that R(D) hIi ∩ R(D)• = I. Since R(D) is
noetherian, R(D)• is therefore v-noetherian. By [10, Theorem 4.2], ω(R(D)• , γ) is
finite.
Recalling the definition of τ (α, γ) (from [10, Definition 3.1]), it is immediate from
Theorem 14 together with Corollary 4, that τ (R(D)• , γ) ≤ 3. Altogether, therefore
t(R(D), γ) < ∞.
Remark 18. Suppose D is a PID but not a field.
(1) Trivially, Theorem 14(2) holds true for R(D)•.
(2) Let K be the quotient field of D, and H = R(D)•. We have
$H = \{(\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \mid b \in D,\ a \in D^\bullet\}$,
and the complete integral closure of H is equal to
$\widehat{H} = \{(\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix}) \mid b \in K,\ a \in D^\bullet\}$
because $(\begin{smallmatrix} a & b \\ 0 & a \end{smallmatrix})^n = (\begin{smallmatrix} a^n & na^{n-1}b \\ 0 & a^n \end{smallmatrix})$ for all a, b ∈ K and n ∈ N. This shows $H \neq \widehat{H}$, and even more we have $\mathfrak{f} = (H : \widehat{H}) = \emptyset$ (note that (D : K) = ∅). Thus the monoid under discussion is neither a Krull nor a C-monoid, which have been extensively studied in recent literature (see [8, Chapters 2.9, 3.3, and 4.6], [9], [12]).
Acknowledgements. The first author’s work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded
by the Ministry of Education, Science and Technology (2010-0007069). The second
author was supported by the Austrian Science Fund (FWF): W1230. We thank
the anonymous referee for his/her careful reading and all the helpful comments, in
particular on the concept of self-idealization.
References
[1] D. Adams, R. Ardila, D. Hannasch, A. Kosh, H. McCarthy, V. Ponomarenko, and R. Rosenbaum, Bifurcus semigroups and rings, Involve 2 (2009) 351–356.
[2] D.D. Anderson and S. Valdes-Leon, Factorization in commutative rings with zero divisors,
Rocky Mountain J. Math. 26 (1996) 439–480.
[3] D.D. Anderson (ed.), Factorization in Integral Domains, Lect. Notes Pure Appl. Math., vol.
189, Marcel Dekker, 1997.
[4] D.D. Anderson and M. Winders, Idealization of a Module, J. Commutative Algebra 1 (2009)
3–56.
[5] N.R. Baeth and R.A. Wiegand, Factorization theory and decomposition of modules, Am.
Math. Mon. 120 (2013), 3–34.
[6] S.T. Chapman (ed.), Arithmetical Properties of Commutative Rings and Monoids, Lect.
Notes Pure Appl. Math., vol. 241, Chapman & Hall/CRC, 2005.
[7] C. Frei and S. Frisch, Non-unique factorization of polynomials over residue class rings of the
integers, Comm. Algebra 39 (2011) 1482–1490.
[8] A. Geroldinger and F. Halter-Koch, Non-unique factorizations, vol. 278 of Pure and Applied Mathematics (Boca Raton), Chapman & Hall/CRC, Boca Raton, FL, 2006. Algebraic,
combinatorial and analytic theory.
[9] A. Geroldinger and W. Hassler, Arithmetic of Mori domains and monoids, J. Algebra 329
(2008) 3419–3463.
[10] A. Geroldinger and W. Hassler, Local tameness of v-noetherian monoids, J. Pure Appl.
Algebra 212 (2008) 1509–1524.
[11] J. Huckaba, Commutative rings with zero divisors, Dekker, New York, 1988.
[12] A. Reinhart, On integral domains that are C-monoids, Houston J. Math., to appear.
[13] L. Salce, Transfinite self-idealization and commutative rings of triangular matrices, in Commutative algebra and its applications, 333–347, Walter de Gruyter, Berlin, 2009.
(Chang) Department of Mathematics, University of Incheon, Incheon 406-772, Korea.
E-mail address: [email protected]
(Smertnig) Institut für Mathematik und Wissenschaftliches Rechnen, Karl-FranzensUniversität Graz, Heinrichstraße 36, 8010 Graz, Austria
E-mail address: [email protected]
| 0 |
Residential Energy Storage Management with
Bidirectional Energy Control
arXiv:1712.00093v2 [cs.SY] 16 Mar 2018
Tianyi Li, and Min Dong, Senior Member, IEEE
Abstract—We consider the residential energy storage management system with integrated renewable generation, with the
availability of bidirectional energy flow from and to the grid
through buying and selling. We propose a real-time bidirectional
energy control algorithm, aiming to minimize the net system cost,
due to energy buying and selling and battery deterioration and
inefficiency from storage activities, within a given time period,
subject to the battery operational constraints and energy buying
and selling constraints. We formulate the problem as a stochastic
control optimization problem. We then modify and transform this
difficult problem into one that enables us to develop the realtime energy control algorithm through Lyapunov optimization.
Our developed algorithm is applicable to arbitrary and unknown
statistics of renewable generation, load, and electricity prices.
It provides a simple closed-form control solution only based
on current system states with minimum complexity for realtime implementation. Furthermore, the solution structure reveals
how the battery energy level and energy prices affect the
decision on energy flow and storage. The proposed algorithm
possesses a bounded performance guarantee to that of the optimal
non-causal T -slot look-ahead control policy. Simulation shows
the effectiveness of our proposed algorithm as compared with
alternative real-time and non-causal algorithms, as well as the
effect of selling-to-buying price ratio and battery inefficiency on
the storage behavior and system cost.
Index Terms—Energy Storage, renewable generation, energy
selling, home energy management, Lyapunov optimization, realtime control
I. INTRODUCTION
Energy storage and renewable energy integration are considered key solutions for future power grid infrastructure
and services to meet the fast rising energy demand and
maintain energy sustainability [1], [2]. For the grid operator,
energy storage can be exploited to shift energy across time
to meet the demand and counter the fluctuation of intermittent renewable generation to improve grid reliability [2]–
[7]. For the electricity customer, local energy storage can
provide means for energy management to control energy flow
in response to the demand-side management signal and to
reduce electricity cost. For example, dynamic pricing is one
of main demand-side management techniques to relieve grid
congestion [8], [9]. Its effectiveness relies on the customerside energy management solution to effectively control energy
flow and demand in response to the pricing change. With
local renewable generation and energy storage introduced to
residential and commercial customers, there are potentially
Corresponding author: Min Dong. Tianyi Li and Min Dong are with
Department of Electrical, Computer and Software Engineering, University of
Ontario Institute of Technology, Oshawa, Ontario, Canada (Email: {tianyi.li,
min.dong}@uoit.ca).
greater flexibility in energy control to respond to the dynamic
pricing and demand fluctuation, as well as maximally harness
the energy from renewable source to reduce electricity bills
[10]–[15].
With more local renewable generation at customers available, the utility retailer now allows customers to sell energy
back to the utility at a price dynamically controlled by the
utility in an attempt to harness energy from distributed renewable generation at customers and further improve stability and
reliability [16], [17]. This means both renewable generation
and previously stored energy, either purchased from the grid
or harnessed from the renewable source can be sold for profit
by the customer. The ability to sell energy back enables
bidirectional energy flow between the energy storage system
and the grid. This also gives the customer a greater control
capability to manage energy storage and usage based on the
dynamic pricing for both buying and selling. The repayment
provides return for the storage investment and further reduce
the net cost at the customer. An intelligent energy management
solution exploring these features to effectively manage storage
and control the energy flow, especially at a real-time manner,
is crucially needed to maximize the benefits.
Developing an effective energy management system faces
many challenges. For the energy storage system itself, the
renewable source is intermittent and random, and its statistical characteristics over each day are often inheritably timevarying, making it difficult to predict. The benefit of storage,
either for electricity supply or for energy selling back, also
comes at the cost of battery operation that should not be
neglected. The bidirectional energy flow between the energy
storage system and the grid under dynamic pricing complicates
the energy control of the system when facing future uncertainty, and creates more challenges. More control decisions
need to be made for energy flows among storage battery, the
grid, the renewable generation, and the load. The potential
profit for energy selling but with unpredictable pricing makes
control decisions much more involved on when and how much
to sell, store, or use. Moreover, the battery capacity limitation
further makes the control decisions coupled over time and
difficult to optimize. In this paper, we aim to develop a realtime energy control solution that addresses these challenges
and effectively reduces the system cost at minimum required
knowledge of unpredictable system dynamics.
Related Works: Energy storage has been considered at
power grid operator or aggregator to combat the fluctuation
of renewable generation, with many works in literature on
storage control and assessing its role in renewable generation
[2], for power balancing with fixed load [3], [4] or flexible load
control [6], [7], and for phase balancing [5]. Residential energy
storage system to reduce electricity cost has been considered
without renewable [18] and with renewable integration [10]–
[13], [19]–[25]. Only energy buying was considered in these
works. Among them, off-line storage control strategies for
dynamics systems are proposed [10], [11], [19], where combined strategies of load prediction and day-ahead scheduling
on respective large and small timescales are proposed. The
knowledge of load statistics and renewable generation are
known ahead of time, while no battery operational cost are
considered.
Real-time energy storage management amid unknown system dynamics is much more challenging. Assuming known
distributions of system dynamics (e.g., load, renewable generation, and prices), the storage control problem is formulated as a Markov Decision Process (MDP) and Dynamic
Programming is applied to solve it numerically [14], [19].
However, besides the requirement of known statistics, this
method suffers from high computational complexity to be implementable for practical systems. Due to unpredictable nature
of system dynamics, these statistics are difficult to acquire
or predict in practice. Without the statistical knowledge of
system dynamics, Lyapunov optimization technique [26] has
been employed to develop real-time control strategies in [12],
[13], [20]–[23]. For independent and identically distributed or
stationary system dynamics (pricing, renewable, and load),
energy control algorithms are proposed in [20], [21] without
considering battery operational cost, and in [22] with battery
charging and discharging operational cost considered. All the
above works aim to minimize the long-term average system
cost. A real-time energy control algorithm to minimize the
system cost within a finite time period is designed in [12] for
arbitrary system dynamics. Furthermore, joint storage control
and flexible load scheduling is considered in [13] where the
closed-form sequential solution was developed to minimize the
system cost while meeting the load deadlines.
The idea of energy selling back or trading is considered
in [27]–[29], where [27], [28] focus on demand-side management via pricing schemes using game approaches for load
scheduling among customers, and [29] considers a microgrid
operation and supply. In addition, although not explicitly
modeled, the system considered in [25] can be generalized
to include energy selling under a simplified model, provided
that buy and sell prices are constrained such that the overall
cost function is still convex. All these works consider the grid
level operation and the cost associated with it, and use a simple
battery storage model without degradation or operational cost.
Since the consumers may prefer a cost saving solution in a
customer defined time period and system dynamics may not be
stationary, it is important to provide a cost-minimizing solution
to meet such need. To the best of our knowledge, there is no
such existing bidirectional energy control solution with energy
selling-back capability. In addition, most existing works ignore
battery inefficiency in charging and discharging, which results
in energy loss that affects the storage behaviors and should be
taken into account in the energy control design.
Contributions: In this paper, we consider a residential
energy storage management system with integrated renewable
generation, with the availability of bidirectional energy flow
from and to the grid through buying and selling. We develop
a real-time bidirectional energy control algorithm, aiming to
minimize the net system cost within a finite time period subject
to the battery operational constraints and energy buying and
selling constraints. In considering the system cost, we include
the storage cost by carefully modeling both battery operational
cost and inefficiency associated with charging/discharging
activities. The system dynamics, including renewable source,
buying/selling electricity price, and the customer load, can
have arbitrary distributions which may not be stationary, and
are unknown to us.
We formulate the net system cost minimization as a stochastic optimization problem over a finite time horizon. The
interaction of storage, renewable, and the grid, as well as
cost associated with energy buying and selling, and battery
storage complicate the energy control decision making and
optimization over time. To tackle this difficult problem, we
use special techniques to modify and transform the original
problem into one which we are able to apply Lyapunov
optimization to develop a real-time algorithm to find the
control solution. Our developed real-time algorithm provides
a simple closed-form energy control solution, which only
relies on current battery level, pricing, load, and renewable
generation, and thus is very simple for real-time implementation. In addition, the closed-form expression reveals how
the battery energy level and prices affect the decisions of
energy buying and selling, energy storage and usage from
the battery, and the priority order of multiple sources for
storing or selling energy from multiple sources. We show that
our proposed real-time algorithm provides the performance
within a bounded gap to that of the optimal T -slot look-ahead
solution which has full system information available ahead
of time. The algorithm is also shown to be asymptotically
optimal as the battery capacity and the time duration go to
infinity. Simulation results demonstrate the effectiveness of
our proposed algorithm as compared with alternative realtime or non-causal control solutions. Furthermore, simulation
studies are provided to understand the effects of bidirectional
energy control, the selling and buying price ratio, and battery
efficiency on the energy storage behavior and the system cost.
Organization: The rest of this paper is organized as follows.
In Section II, we describe the energy storage and management
system model. In Section III, we formulate the ESM stochastic
control optimization problem within a finite period. In Section IV, we develop our real-time energy control algorithm.
In Section V, we analyze the performance of our algorithm.
Simulation results are provided in Section VI, and followed
by conclusion in Section VII.
Notations: The main symbols used in this paper are summarized in Table I.
II. SYSTEM MODEL

We consider a residential-side energy storage management (ESM) system as shown in Fig. 1. The system contains an energy storage battery which is connected to an on-site renewable generator (RG) and the conventional grid (CG). Energy can be charged into the battery from both the RG and the CG, discharged from the battery for customer electricity demand, or sold back to the CG. Both the RG and the CG can also directly supply energy to the customer. We assume the ESM system operates in discrete time slots with t ∈ {1, 2, · · · }, and all energy control operations are performed per time slot t.

TABLE I
LIST OF MAIN SYMBOLS
Wt   : customer load at time slot t
St   : renewable generation at time slot t
Sw,t : portion of renewable energy serving load Wt at time slot t
Sc,t : portion of renewable energy stored to battery at time slot t
Ss,t : portion of renewable energy sold to grid at time slot t
Et   : energy purchased from conventional grid at time slot t
Qt   : portion of Et stored into battery at time slot t
Fd,t : amount of energy from battery serving load Wt at time slot t
Fs,t : amount of energy from battery sold to grid at time slot t
Pb,t : energy unit buying price from conventional grid at time slot t
Ps,t : energy unit selling price to conventional grid at time slot t
Bt   : battery energy level at time slot t
xe,t : entry cost for battery usage at time slot t: xe,t = 1R,t Crc + 1D,t Cdc
xu,t : net change of battery energy level in time slot t: xu,t = |Qt + Sr,t − Dt|
at   : control actions at time slot t: at ≜ [Et, Qt, Fd,t, Fs,t, Sc,t, Ss,t]
µt   : system inputs at time slot t: µt ≜ [Wt, St, Pb,t, Ps,t]
xe   : average entry cost for battery usage over To slots
xu   : average net change of battery energy level over To slots
J    : average cost of buying energy from the grid in To slots
C(·) : average usage cost function of the battery
ηc   : battery charging efficiency factor
ηd   : battery discharging efficiency factor
Rmax : maximum charging amount into the battery
Dmax : maximum discharging amount from the battery
Γ    : max{ηc Rmax, Dmax/ηd}
Bmin : minimum energy required in battery
Bmax : maximum energy allowed in battery
Crc  : entry cost for battery usage due to each charging activity
Cdc  : entry cost for battery usage due to each discharging activity
∆a   : desired change amount of battery energy level in To slots

Fig. 1. An ESM system with RG and bidirectional energy flow from/to CG.

1) RG: Let St be the amount of energy harvested from the RG at time slot t. Due to the uncertainty of the renewable source, St is random, and we assume no prior knowledge of St or its statistics. Let Wt be the customer load at time slot t. We assume a priority of using St to directly serve Wt. Let Sw,t be this portion of St at time slot t. We have Sw,t = min{Wt, St}. A controller will determine whether the remaining portion, if any, should be stored into the battery and/or sold back to the CG. We denote the stored amount and sold amount by Sc,t and Ss,t, respectively, satisfying
Sc,t + Ss,t ∈ [0, St − Sw,t].    (1)
2) CG: The customer can buy energy from or sell energy to the CG at real-time unit buying price Pb,t ∈ [Pbmin, Pbmax] and selling price Ps,t ∈ [Psmin, Psmax], respectively. Both Pb,t and Ps,t are known at time slot t. To avoid energy arbitrage, the buying price is strictly greater than the selling price at any time, i.e., Pb,t > Ps,t. Let Et denote the amount of energy bought from the CG at time slot t. Let Qt denote the portion of Et that is stored into the battery. The remaining portion Et − Qt directly serves the customer load. Let Fs,t denote the amount of energy from the battery that is sold back to the CG. The total energy sold from the battery and the RG is bounded by
Fs,t + Ss,t ∈ [0, Umax]    (2)
where Umax is the maximum amount of energy that can be sold back to the CG (this amount may be regulated by the utility).
Note that while Ss,t from the RG can be sold back to the CG at any time, energy buying from or selling to the CG should not happen at the same time to avoid energy arbitrage, which is ensured by the constraint Pb,t > Ps,t. With this constraint, we expect the following condition to be satisfied
Et · Fs,t = 0.    (3)
We will verify that our proposed algorithm satisfies (3) in Section V.
3) Battery Storage: The battery operation for storage causes the battery to deteriorate, contributing to the storage cost, which has been ignored in many prior works. We model battery charging and discharging activities and the degradation cost associated with them as follows.
i) Storage operation: The battery can be charged from multiple sources (i.e., the CG and the RG) at the same time. The total charging amount at time slot t should satisfy
Sc,t + Qt ∈ [0, Rmax]    (4)
where Rmax is the maximum charging amount per time slot.
Similarly, energy stored in the battery can be used to either
serve the customer load and/or sell back to the CG. Let Fd,t
denote the amount of energy from the battery used to serve the
customer at time slot t. The total amount of energy discharged
is bounded by
Fd,t + Fs,t ∈ [0, Dmax ]
(5)
1 This
amount may be regulated by the utility.
4
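For readers who want to experiment with the model, the per-slot feasibility conditions (1), (2), (4) and (5) translate directly into a few inequality checks. The following is a minimal Python sketch; the container type and function names are illustrative and not from the paper.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Per-slot control action a_t = [E_t, Q_t, F_d,t, F_s,t, S_c,t, S_s,t] (names illustrative)."""
    E: float    # energy bought from the CG
    Q: float    # portion of E stored into the battery
    F_d: float  # battery energy serving the load
    F_s: float  # battery energy sold back to the CG
    S_c: float  # renewable energy stored into the battery
    S_s: float  # renewable energy sold back to the CG

def feasible(a: Action, W: float, S: float, R_max: float, D_max: float, U_max: float,
             tol: float = 1e-9) -> bool:
    """Check constraints (1), (2), (4) and (5) for one time slot."""
    S_w = min(W, S)  # renewable portion serving the load, S_w,t = min{W_t, S_t}
    ok = -tol <= a.S_c + a.S_s <= S - S_w + tol        # (1)
    ok &= -tol <= a.F_s + a.S_s <= U_max + tol         # (2)
    ok &= -tol <= a.S_c + a.Q <= R_max + tol           # (4)
    ok &= -tol <= a.F_d + a.F_s <= D_max + tol         # (5)
    return bool(ok)
```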
where Dmax is the maximum discharging amount per time slot. We assume that there is no simultaneous charging and discharging, i.e.,

(Sc,t + Qt) · (Fd,t + Fs,t) = 0.  (6)

Let Bt be the battery energy level at time slot t, bounded by

Bt ∈ [Bmin, Bmax]  (7)

where Bmin and Bmax are the minimum energy required and maximum energy allowed in the battery, respectively, whose values depend on the battery type and capacity. Based on the charging and discharging activities and taking the battery charging/discharging inefficiency into account, Bt evolves over time as

Bt+1 = Bt + ηc(Sc,t + Qt) − (Fd,t + Fs,t)/ηd  (8)

where ηc and ηd denote the battery charging and discharging efficiency factors, respectively, with ηc, ηd ∈ [0, 1]. Finally, the demand-and-supply balance requirement is given by

Wt = Et − Qt + Sw,t + Fd,t.  (9)

ii) Battery degradation cost: It is well known that frequent charging/discharging activities cause a battery to degrade [30]. We model two types of battery degradation cost: entry cost and usage cost. The entry cost is a fixed cost incurred due to each charging or discharging activity. Define two indicator functions to represent charging and discharging activities: 1R,t = {1: if Qt + Sc,t > 0; 0: otherwise} and 1D,t = {1: if Fd,t + Fs,t > 0; 0: otherwise}. Denote the entry cost for charging by Crc and that for discharging by Cdc. The entry cost for battery usage at time slot t is given by xe,t ≜ 1R,t Crc + 1D,t Cdc.

The battery usage cost is the cost associated with the charging/discharging amount. Define the net change of battery energy level at time slot t by xu,t ≜ |ηc(Qt + Sc,t) − (Fd,t + Fs,t)/ηd|. From (4) and (5), it follows that xu,t is bounded by

xu,t ∈ [0, Γ]  (10)

where Γ ≜ max{ηc Rmax, Dmax/ηd}. (Accurate modeling of the battery deterioration due to charging and discharging activities is a challenging problem; some recent work provides a more detailed study on practical modeling of deterioration due to battery operation [31].) It is known that typically faster charging/discharging, i.e., larger xu,t, has a more detrimental effect on the lifetime of the battery. Thus, we assume the associated cost function for usage xu,t, denoted by C(·), is a continuous, convex, non-decreasing function with maximum derivative C′(·) < ∞.

III. PROBLEM FORMULATION

For the ESM system, the system cost includes the energy buying cost minus the selling profit and the battery degradation cost. Within a pre-defined To-slot time period, the average net cost of energy buying and selling over the CG is given by J ≜ (1/To) Σ_{t=0}^{To−1} [Et Pb,t − (Fs,t + Ss,t)Ps,t]. For the battery operation, the average entry cost and average net change over the To-slot period are respectively given by

x̄e ≜ (1/To) Σ_{t=0}^{To−1} xe,t,   x̄u ≜ (1/To) Σ_{t=0}^{To−1} xu,t  (11)

where by (10), x̄u is bounded by

x̄u ∈ [0, Γ],  (12)

and the battery average usage cost is C(x̄u). Thus, the average battery degradation cost over the To-slot period is x̄e + C(x̄u).

Denote the system inputs by µt ≜ [Wt, St, Pb,t, Ps,t] and the control actions for energy storage management by at ≜ [Et, Qt, Fd,t, Fs,t, Sc,t, Ss,t] at time slot t. With only the current input µt known, our objective is to determine a control policy πt for at (i.e., a mapping πt: µt → at) to minimize the average system cost within the To-slot period. This stochastic control optimization problem is formulated as

P1: min_{πt}  J + x̄e + C(x̄u)
s.t. (1), (2), (6), (9), (12), and
0 ≤ Sc,t + Qt ≤ min{Rmax, (Bmax − Bt)/ηc}  (13)
0 ≤ Fd,t + Fs,t ≤ min{Dmax, ηd(Bt − Bmin)}  (14)

where constraints (13) and (14) are derived from constraints (4)–(8).

P1 is a difficult stochastic control optimization problem due to the finite time period and the control actions {at} being correlated over time as a result of the time-coupling dynamics of Bt in (8). Furthermore, the input distributions in µt are unknown. The optimal control policy is difficult to obtain. Instead, we develop a real-time algorithm for the control action at over time, which provides a suboptimal solution for P1 with the cost objective minimized as much as possible. In the following, we first summarize our approach and technique to develop this real-time algorithm and then present the details.

A. Summary of Our Approach

We develop our real-time algorithm for P1 using the technique of Lyapunov optimization [26], which is a powerful tool for designing dynamic control. However, to use Lyapunov optimization, the problem needs to have a time-averaged objective function and constraints, which is not the case for P1. To overcome these difficulties, we first modify P1 and then transform the modified problem into an equivalent problem P3 that belongs to the class of stochastic optimization problems to which the Lyapunov optimization technique can be applied. Then, using the Lyapunov technique, we develop our real-time algorithm to determine the control action at for P3. Our algorithm solves a per-slot opportunistic optimization problem P4 for at at each time slot t, for which the solution is presented in Section IV. Finally, since {at} is obtained for P3, we design the parameters of our algorithm in Section IV-B to ensure that it is also feasible for the original problem P1.
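The battery dynamics (8), the balance requirement (9), and the per-slot cost terms entering P1 are simple to evaluate numerically. Below is a small sketch under the same notation; the function names are ours, not the paper's.

```python
def battery_update(B, Q, S_c, F_d, F_s, eta_c, eta_d):
    """Battery evolution (8): B_{t+1} = B_t + eta_c*(S_c + Q) - (F_d + F_s)/eta_d."""
    return B + eta_c * (S_c + Q) - (F_d + F_s) / eta_d

def slot_cost_terms(E, Q, F_d, F_s, S_c, S_s, W, S, P_b, P_s, C_rc, C_dc, eta_c, eta_d):
    """Per-slot pieces of the P1 objective: grid cost, entry cost x_e,t and usage x_u,t.
    Also returns the violation of the demand-and-supply balance (9) as a sanity check."""
    S_w = min(W, S)
    grid_cost = E * P_b - (F_s + S_s) * P_s                        # buying cost minus selling profit
    x_e = C_rc * float(Q + S_c > 0) + C_dc * float(F_d + F_s > 0)  # entry cost x_e,t
    x_u = abs(eta_c * (Q + S_c) - (F_d + F_s) / eta_d)             # net change x_u,t
    balance_gap = W - (E - Q + S_w + F_d)                          # should be 0 by (9)
    return grid_cost, x_e, x_u, balance_gap
```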
B. Problem Modification and Transformation
1) Modification: To make P1 tractable, we first remove the coupling of the control actions over time by modifying the constraints (13) and (14) on the per-slot charging and discharging amounts. We set the change of Bt over the To-slot period, i.e., BTo − B0, to be a desired value ∆a. From (8), this means

Σ_{τ=0}^{To−1} (ηc(Qτ + Sc,τ) − (Fd,τ + Fs,τ)/ηd) = ∆a  (15)

where by (4), (5) and (7), the range of ∆a is given by |∆a| ≤ ∆max, with ∆max ≜ min{Bmax − Bmin, To max{ηc Rmax, Dmax/ηd}}. We point out that ∆a is only a desired value we set; it may not be achieved at the end of the To-slot period by a control algorithm. In Section V, we will quantify the amount of mismatch with respect to ∆a under our proposed control algorithm. We now modify P1 into the following optimization problem

P2: min_{πt}  J + x̄e + C(x̄u)
s.t. (1), (2), (4), (5), (6), (9), (12), (15).

From P1 to P2, by imposing the new constraint (15), we remove the dependency of the per-slot charging/discharging amounts on Bt in constraints (13) and (14), and replace them by (4) and (5), respectively.
2) Problem Transformation: The objective of P2 contains C(x̄u), which is a cost function of the time-averaged net change x̄u. Directly dealing with such a function is difficult. Adopting the technique introduced in [32], we transform the problem into one that contains the time average of the cost function. To do so, we introduce an auxiliary variable γt and its time average γ̄ ≜ (1/To) Σ_{τ=0}^{To−1} γτ satisfying

γt ∈ [0, Γ], ∀t  (16)
γ̄ = x̄u.  (17)

These constraints ensure that the auxiliary variable γt and xu,t lie in the same range and have the same time-averaged behavior. Define C̄(γ) ≜ (1/To) Σ_{t=0}^{To−1} C(γt) as the time average of C(γt). By using γt instead of xu,t, we replace constraint (12) with (16) and (17), and transform P2 into the following problem, which is to optimize the control policy πt′ for (γt, at) (i.e., πt′: µt → (γt, at)) to minimize the To-slot time average of the system cost

P3: min_{πt′}  J + x̄e + C̄(γ)
s.t. (1), (2), (4), (5), (6), (9), (15), (16), (17).
It can be shown that P2 and P3 are equivalent problems (see
Appendix A). The modification and transformation from P1 to
P3 have enabled us to utilize Lyapunov optimization techniques [26] to design a real-time control policy that solves P3.
C. Real-Time Control via Lyapunov Optimization
To design a real-time algorithm, in Lyapunov optimization a virtual queue is introduced for each time-averaged constraint, and the Lyapunov drift of the queues is defined. It is shown that keeping each queue stable is equivalent to satisfying the corresponding constraint [26]. Thus, instead of the original cost objective of the optimization problem, a drift-plus-cost metric is considered in Lyapunov optimization, and a real-time algorithm is developed to minimize this metric. In the following, we develop our algorithm using this technique.
1) Virtual queues: Based on the Lyapunov framework, we introduce two virtual queues Zt and Ht for the time-averaged constraints (15) and (17), respectively, as

Zt+1 = Zt + ηc(Qt + Sc,t) − (Fd,t + Fs,t)/ηd − ∆a/To,  (18)
Ht+1 = Ht + γt − xu,t.  (19)

From (8) and (18), Zt and Bt have the following relation

Zt = Bt − At  (20)

where At ≜ Ao + (∆a/To)t, in which Ao is a constant shift and the term (∆a/To)t ensures that the equality in (15) is satisfied. We will revisit the value of Ao to ensure a feasible solution for P1.
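The two virtual queues are plain scalar recursions, so the updates (18) and (19) amount to a few lines of code; a minimal sketch (function name is ours):

```python
def update_queues(Z, H, Q, S_c, F_d, F_s, gamma, x_u, eta_c, eta_d, delta_a, T_o):
    """Virtual queue updates (18) and (19); delta_a/T_o is the per-slot share of the desired change."""
    Z_next = Z + eta_c * (Q + S_c) - (F_d + F_s) / eta_d - delta_a / T_o   # (18)
    H_next = H + gamma - x_u                                               # (19)
    return Z_next, H_next
```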
2) Lyapunov drift: Define Θt ≜ [Zt, Ht] and the quadratic Lyapunov function L(Θt) ≜ ½(Zt² + Ht²). Divide the To slots into M sub-frames of T-slot duration, with To = MT for M, T ∈ N+. We define a one-slot sample-path Lyapunov drift as ∆(Θt) ≜ L(Θt+1) − L(Θt), which only depends on the current system inputs µt.
3) Drift-plus-cost metric: We define a drift-plus-cost metric which is a weighted sum of the drift ∆(Θt) and the system cost at the current time slot t, given by

∆(Θt) + V [Et Pb,t − (Fs,t + Ss,t)Ps,t + xe,t + C(γt)]  (21)

where the constant V > 0 sets the relative weight between the drift and the system cost. In Lyapunov optimization, instead of minimizing the system cost objective in P3, we aim to minimize this drift-plus-cost metric. However, directly using this metric to design a control policy is still difficult. Instead, we obtain an upper bound on the drift ∆(Θt), which will be used for designing our real-time control algorithm.
Lemma 1: The Lyapunov drift ∆(Θt) is upper bounded by

∆(Θt) ≤ Zt(Et + Sc,t + Sw,t − Wt − Fs,t − ∆a/To) + Ht γt − g(Ht)(Et + Sc,t + lt) + G  (22)

where G ≜ ½ max{(ηc Rmax − ∆a/To)², (Dmax/ηd + ∆a/To)²} + ½Γ², and

g(Ht) ≜ ηc Ht if Ht ≥ 0;  Ht/ηd if Ht < 0,  (23)
lt ≜ sgn(Ht)(Sw,t − Wt − Fs,t)  (24)

where sgn(·) is the sign function.

Proof: See Appendix B.

By Lemma 1, an upper bound on the per-slot drift-plus-cost metric in (21) is readily obtained. In the following, we use this upper bound to develop a real-time control algorithm.
IV. REAL-TIME BIDIRECTIONAL CONTROL ALGORITHM

We now propose our real-time control algorithm to minimize the upper bound on the drift-plus-cost metric per slot. Removing all the constant terms in the upper bound of the drift-plus-cost metric that are independent of at and γt, we obtain an equivalent optimization problem which can be further separated into two sub-problems for γt and at, respectively, as follows

P4a: min_{γt}  Ht γt + V C(γt)   s.t. (16).

P4b: min_{at}  Et(Zt − g(Ht) + V Pb,t) + Sc,t(Zt − g(Ht)) − Fs,t(Zt − |g(Ht)| + V Ps,t) − Ss,t V Ps,t + V xe,t
     s.t. (1), (2), (4), (5), (6), (9), (15).

First, we solve P4a to obtain the optimal solution γt∗. Note that P4a is convex since C(·) is convex. Thus, we can directly solve it and obtain the optimal γt∗ of P4a.
Lemma 2: The optimal solution γt∗ of P4a is given by

γt∗ = 0,  if Ht ≥ 0;
γt∗ = Γ,  if Ht < −V C′(Γ);
γt∗ = C′−1(−Ht/V),  otherwise,  (25)

where C′(·) is the first derivative of C(·), and C′−1(·) is the inverse function of C′(·).
Proof: See Appendix C.
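Equation (25) is straightforward to evaluate numerically once C′ and its inverse are available. A short sketch follows, using the quadratic usage cost adopted later in the simulations as an example; the function and variable names are ours.

```python
def gamma_star(H, V, C_prime, C_prime_inv, Gamma):
    """Optimal auxiliary variable gamma_t^* of P4a from (25)."""
    if H >= 0:
        return 0.0
    if H < -V * C_prime(Gamma):
        return Gamma
    return C_prime_inv(-H / V)

# Example with the quadratic usage cost C(x) = k*x^2 (so C'(x) = 2*k*x and C'^{-1}(y) = y/(2*k)):
k, V, Gamma = 0.1, 1.0, 0.15
print(gamma_star(-0.02, V, lambda x: 2 * k * x, lambda y: y / (2 * k), Gamma))
```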
Next, we obtain the optimal a∗t of P4b and provide the
conditions under which a∗t is feasible to P1.
A. The Optimal Control a∗t for P4b

Denote the objective function of P4b by J(at). Define the idle state of the battery as the state in which there is no charging or discharging activity. The control decision in the idle state is given by a^id_t = [E^id_t, Q^id_t, F^id_d,t, F^id_s,t, S^id_c,t, S^id_s,t], where E^id_t = Wt − Sw,t, Q^id_t = F^id_d,t = F^id_s,t = S^id_c,t = 0, and S^id_s,t = min{St − Sw,t, Umax}. Then, in the idle state, we have J(a^id_t) = (Wt − Sw,t)(Zt − g(Ht) + V Pb,t) − V Ps,t min{St − Sw,t, Umax}. We derive the optimal control decision a∗t = [E∗t, Q∗t, F∗d,t, F∗s,t, S∗c,t, S∗s,t] in five cases in Proposition 1 below. Define [a]+ ≜ max{a, 0}.

Proposition 1: Define (S^a_c,t, S^a_s,t) as follows: if V Ps,t ≥ g(Ht) − Zt, then S^a_s,t ≜ min{St − Sw,t, Umax} and S^a_c,t ≜ min{St − Sw,t − S^a_s,t, Rmax}; otherwise, S^a_c,t ≜ min{St − Sw,t, Rmax} and S^a_s,t ≜ min{St − Sw,t − S^a_c,t, Umax}.

Denote by a^w_t = [E^w_t, Q^w_t, F^w_d,t, F^w_s,t, S^w_c,t, S^w_s,t] the control decision in the charging or discharging state. The optimal control solution a∗t for P4b is given in the following cases:

1) For Zt − g(Ht) + V Pb,t ≤ 0: The battery is either in the charging state or the idle state. Let

F^w_d,t = F^w_s,t = 0,  S^w_c,t = S^a_c,t,  S^w_s,t = S^a_s,t,
Q^w_t = Rmax − S^w_c,t,
E^w_t = Wt − Sw,t + Rmax − S^w_c,t.  (26)

Then, a∗t = arg min_{at ∈ {a^w_t, a^id_t}} J(at).

2) For max{Zt − g(Ht), Zt − |g(Ht)| + V Ps,t} < 0 ≤ Zt − g(Ht) + V Pb,t: The battery is either in the charging state (from the RG only), the discharging state (to the customer load only), or the idle state. Let

F^w_d,t = min{Wt − Sw,t, Dmax},
F^w_s,t = Q^w_t = 0,  S^w_c,t = S^a_c,t,  S^w_s,t = S^a_s,t,  (27)
E^w_t = [Wt − Sw,t − Dmax]+.

Then, a∗t = arg min_{at ∈ {a^w_t, a^id_t}} J(at).

3) For Zt − g(Ht) ≤ 0 ≤ Zt − |g(Ht)| + V Ps,t: The battery is either in the charging state (from the RG only), the discharging state, or the idle state. Define a^dc_t in the discharging state as

F^dc_d,t = min{Wt − Sw,t, Dmax},
S^dc_s,t = min{St − Sw,t, Umax},
F^dc_s,t = min{Dmax − F^dc_d,t, Umax − S^dc_s,t},  (28)
S^dc_c,t = Q^dc_t = 0,  E^dc_t = [Wt − Sw,t − Dmax]+;

and define a^rc_t in the charging state as

F^rc_d,t = F^rc_s,t = Q^rc_t = 0,
S^rc_c,t = S^a_c,t,  S^rc_s,t = S^a_s,t,  (29)
E^rc_t = Wt − Sw,t.

Then, a∗t = arg min_{at ∈ {a^rc_t, a^dc_t, a^id_t}} J(at).

4) For Zt − |g(Ht)| + V Ps,t < 0 ≤ Zt − g(Ht): The battery is in the discharging state (to the customer load only) or the idle state. Let

F^w_d,t = min{Wt − Sw,t, Dmax},
F^w_s,t = Q^w_t = 0,
S^w_s,t = min{St − Sw,t, Umax},  S^w_c,t = 0,  (30)
E^w_t = [Wt − Sw,t − Dmax]+.

Then, a∗t = arg min_{at ∈ {a^w_t, a^id_t}} J(at).

5) For min{Zt − g(Ht), Zt − |g(Ht)| + V Ps,t} > 0: The battery is either in the discharging state or the idle state. If Zt > |g(Ht)|, let

F^w_d,t = min{Wt − Sw,t, Dmax},
F^w_s,t = min{Dmax − F^w_d,t, Umax},
S^w_s,t = min{St − Sw,t, Umax − F^w_s,t},  (31)
S^w_c,t = Q^w_t = 0,  E^w_t = [Wt − Sw,t − Dmax]+;

otherwise, let

F^w_d,t = min{Wt − Sw,t, Dmax},
S^w_s,t = min{St − Sw,t, Umax},
F^w_s,t = min{Dmax − F^w_d,t, Umax − S^w_s,t},  (32)
S^w_c,t = Q^w_t = 0,  E^w_t = [Wt − Sw,t − Dmax]+.

Then, a∗t = arg min_{at ∈ {a^w_t, a^id_t}} J(at).

Proof: See Appendix D.
Proposition 1 provides the closed-form control solution in
five cases, depending on the battery energy level (via Zt ),
battery usage cost (via Ht), and the prices. In each case, J(a^w_t) in the charging (or discharging) state is compared with J(a^id_t)
in the idle state, and the optimal a∗t is the control solution of
the state with the minimum objective value.
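To make the comparison concrete, the quantities that Proposition 1 evaluates, the P4b objective J(at) and the idle-state decision a^id_t, can be coded directly from their definitions above; a small sketch (function names are ours):

```python
def g_of_H(H, eta_c, eta_d):
    """Piecewise weight g(H_t) from (23)."""
    return eta_c * H if H >= 0 else H / eta_d

def p4b_objective(E, Q, F_d, F_s, S_c, S_s, Z, H, V, P_b, P_s, C_rc, C_dc, eta_c, eta_d):
    """J(a_t): the objective of the per-slot problem P4b."""
    g = g_of_H(H, eta_c, eta_d)
    x_e = C_rc * float(S_c + Q > 0) + C_dc * float(F_d + F_s > 0)
    return (E * (Z - g + V * P_b) + S_c * (Z - g)
            - F_s * (Z - abs(g) + V * P_s) - S_s * V * P_s + V * x_e)

def idle_action(W, S, U_max):
    """Idle-state decision a_t^id: buy only the residual load and sell surplus renewable."""
    S_w = min(W, S)
    return (W - S_w, 0.0, 0.0, 0.0, 0.0, min(S - S_w, U_max))  # (E, Q, F_d, F_s, S_c, S_s)
```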
Remarks: Note that there are two sources to be controlled
for selling energy back, Fs,t from the battery and Ss,t from the
RG. Whether to sell energy from the battery back to the grid
depends on the battery energy level. When the battery energy
level is low (Case 1), energy is kept in the battery. When the
battery has a moderately low energy level (Case 2), it may be
in either the charging or discharging state. For the latter, the
battery only supplies enough energy to the customer but does
not sell energy back. When the battery energy level is higher
but still moderate (Case 3), it may still be in either the charging
or discharging state. For the latter, the battery may sell energy
back to the grid. When the battery has just sufficient energy
(Case 4), it may supply energy to the customer, but will not sell
energy back to the grid. When the energy level in the battery
is high (Case 5), it may supply energy to the customer and at the same time sell energy back. In contrast, the renewable energy can be sold to the grid regardless of the battery energy level, state (charging, discharging, or idle), and the price, to make an additional profit. As a result, energy generated by the RG will be utilized as much as possible. However,
when the system wants to sell energy from both the battery and
the renewable, the order to determine Ss,t and Fs,t depends on
which results in the minimum cost in P4b . In Case 5, for the
control decision in (31), Ss,t is determined after Fs,t , while
in (32), Fs,t is determined after Ss,t .
Algorithm 1 Real-time battery management control algorithm
Initialize: Z0 = H0 = 0.
Determine To. Set ∆a ∈ [−∆max, ∆max].
Set Ao and V ∈ (0, Vmax] as in (34) and (33), respectively.
At time slot t:
1: Observe the system inputs µt and the queues Zt and Ht.
2: Solve P4a to obtain γt∗ in (25); solve P4b to obtain a∗t following cases (26)–(32).
3: Use a∗t and γt∗ to update Zt+1 and Ht+1 in (18) and (19), respectively.
4: Output the control decision a∗t.
The proposed real-time bidirectional energy control algorithm is summarized in Algorithm 1. We emphasize that i) our
proposed algorithm does not require or rely on any statistical
distributions of system inputs µt (prices, load, and renewable
generation processes), and thus can be applied to general
scenarios, especially when these processes are non-ergodic or
difficult to predict in a highly dynamic environment. ii) The
control solution provided by our real-time algorithm is given
in closed-form which requires little computation, and thus is
attractive for practical implementation.
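As a rough illustration of how Algorithm 1 operates, the sketch below runs the control loop with the queue updates (18)–(19) and the quadratic-cost version of (25). For brevity the per-slot problem P4b is minimized here by brute force over a coarse grid of feasible actions rather than by the closed-form five-case solution of Proposition 1, the battery limits are not enforced explicitly (the choice of V and Ao in Proposition 2 handles that in the paper), and all numerical values and inputs are placeholders.

```python
import random

eta_c = eta_d = 0.98
R_max = D_max = 0.15
U_max = 0.2
C_rc = C_dc = 1e-3
k = 0.1                                   # quadratic usage cost C(x) = k*x^2
Gamma = max(eta_c * R_max, D_max / eta_d)
T_o, delta_a, V = 288, 0.0, 1.0           # V should be chosen in (0, V_max] per (33)

def g_of_H(H):
    return eta_c * H if H >= 0 else H / eta_d

def gamma_star(H):                        # (25) specialized to C(x) = k*x^2
    if H >= 0:
        return 0.0
    if H < -2 * k * V * Gamma:
        return Gamma
    return -H / (2 * k * V)

def p4b_cost(a, Z, H, P_b, P_s):
    E, Q, F_d, F_s, S_c, S_s = a
    g = g_of_H(H)
    x_e = C_rc * float(S_c + Q > 0) + C_dc * float(F_d + F_s > 0)
    return (E * (Z - g + V * P_b) + S_c * (Z - g)
            - F_s * (Z - abs(g) + V * P_s) - S_s * V * P_s + V * x_e)

def candidates(W, S):
    """A coarse grid of actions satisfying (1), (2), (4), (5), (6) and (9)."""
    S_w = min(W, S)
    rem = S - S_w
    for S_c in (0.0, min(rem, R_max)):
        for S_s in (0.0, min(rem - S_c, U_max)):
            for Q in (0.0, R_max - S_c):
                for F_d in (0.0, min(W - S_w, D_max)):
                    for F_s in (0.0, min(D_max - F_d, U_max - S_s)):
                        if (S_c + Q) > 0 and (F_d + F_s) > 0:
                            continue                      # (6): no simultaneous charge/discharge
                        E = W - S_w - F_d + Q             # from the balance (9)
                        yield (E, Q, F_d, F_s, S_c, S_s)

Z = H = 0.0
B = 1.5
for t in range(T_o):
    W, S = random.uniform(0.05, 0.2), random.uniform(0.0, 0.17)   # stand-in inputs mu_t (kWh/slot)
    P_b, P_s = 0.099, 0.9 * 0.099
    gamma = gamma_star(H)                                          # step 2: solve P4a
    a = min(candidates(W, S), key=lambda x: p4b_cost(x, Z, H, P_b, P_s))  # step 2: solve P4b
    E, Q, F_d, F_s, S_c, S_s = a
    x_u = abs(eta_c * (Q + S_c) - (F_d + F_s) / eta_d)
    Z += eta_c * (Q + S_c) - (F_d + F_s) / eta_d - delta_a / T_o   # step 3: update (18)
    H += gamma - x_u                                               # step 3: update (19)
    B += eta_c * (Q + S_c) - (F_d + F_s) / eta_d                   # battery evolution (8)
```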
B. Feasible {a∗t} for P1

The optimal solution a∗t of P4b provides a real-time solution for P3. However, it may not be feasible for P1, because the battery capacity constraint (7) on Bt may be violated. By properly designing Ao and V, we can guarantee that a∗t satisfies constraint (7), and thus ensure the feasibility of the solution. Define [a]− ≜ min{a, 0}. The result is stated below.

Proposition 2: For the proposed real-time control algorithm, by setting V ∈ (0, Vmax] with

Vmax = [Bmax − Bmin − ηc Rmax − (Dmax + 2Γ)/ηd − |∆a|] / [Pbmax + C′(Γ)/ηd + [C′(Γ)/ηd − Psmin]+],  (33)

and At in (20) with

Ao = Bmin + V Pbmax + (V C′(Γ) + Γ + Dmax)/ηd + ∆a/To − [∆a]−,  (34)

Bt satisfies the battery capacity constraint (7), and the control solution a∗t of P4b, for any t, is feasible for P1.

Proof: See Appendix E.

Note that Vmax > 0 in (33) is generally satisfied for practical battery storage capacities and |∆a| set relatively small. We should also point out that since ∆a is a desired value set by our proposed algorithm, the solution a∗t of P4b may not necessarily satisfy constraint (15) at the end of the To-slot period, and thus may not be feasible for P2. However, Proposition 2 provides the values of Ao and V that guarantee the control solutions {a∗t} are feasible for P1.
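The parameter choices in Proposition 2 are closed-form and can be computed once from the battery specification; a small sketch (function and variable names are ours, and V is simply set to Vmax, as the analysis in Section V suggests):

```python
def control_parameters(B_min, B_max, R_max, D_max, eta_c, eta_d,
                       P_b_max, P_s_min, C_prime_Gamma, delta_a, T_o):
    """V_max from (33) and the constant shift A_o from (34)."""
    Gamma = max(eta_c * R_max, D_max / eta_d)
    num = B_max - B_min - eta_c * R_max - (D_max + 2 * Gamma) / eta_d - abs(delta_a)
    den = P_b_max + C_prime_Gamma / eta_d + max(C_prime_Gamma / eta_d - P_s_min, 0.0)
    V_max = num / den
    V = V_max
    A_o = (B_min + V * P_b_max + (V * C_prime_Gamma + Gamma + D_max) / eta_d
           + delta_a / T_o - min(delta_a, 0.0))
    return V_max, A_o

# Example with the simulation defaults used later (k = 0.1, so C'(Gamma) = 2*k*Gamma):
Gamma_ex = max(0.98 * 0.15, 0.15 / 0.98)
Vmax, Ao = control_parameters(0.0, 3.0, 0.15, 0.15, 0.98, 0.98,
                              0.118, 0.9 * 0.063, 2 * 0.1 * Gamma_ex, 0.0, 288)
```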
V. PERFORMANCE ANALYSIS

To analyze the performance of our real-time solution in Algorithm 1 with respect to P1, let u∗(V) denote the To-slot average system cost objective of P1 achieved by Algorithm 1, which depends on the value of V set by Algorithm 1. For comparison, we partition the To slots into M frames of T slots each, with To = MT for some integers M, T ∈ N+. Within each frame m, we consider a T-slot look-ahead optimal control policy, where {Wt, St, Pb,t, Ps,t} are known non-causally for the entire frame beforehand. Let u^opt_m denote the minimum T-slot average cost for frame m achieved by this optimal policy. We can view u^opt_m as the minimum objective value of P1 with To = T under the optimal T-slot look-ahead solution. The performance gap of our proposed real-time algorithm to the optimal T-slot look-ahead policy is bounded in the following theorem.
Theorem 1: For any arbitrary system inputs {µt}, and any M, T ∈ N+ with To = MT, the To-slot average system cost under Algorithm 1 and that under the optimal T-slot look-ahead policy satisfy

u∗(V) − (1/M) Σ_{m=0}^{M−1} u^opt_m ≤ GT/V + [L(Θ0) − L(ΘTo)]/(V To) + C′(Γ)(H0 − HTo)/To  (35)

with the bound on the right-hand side being finite. Asymptotically, as To → ∞,

lim_{To→∞} u∗(V) − lim_{To→∞} (1/M) Σ_{m=0}^{M−1} u^opt_m ≤ GT/V.  (36)

Proof: See Appendix F.
By Theorem 1, the performance gap of Algorithm 1 to the T-slot look-ahead optimal policy is upper bounded as in (35), for any T with To = MT. To minimize the gap, we should always set V = Vmax. From (36), as the duration goes to infinity, the asymptotic gap is in the order of O(1/V). Since Vmax increases with Bmax, when V = Vmax, Algorithm 1 becomes asymptotically equivalent to the T-slot look-ahead optimal policy as the battery capacity and the time duration increase.

As discussed at the end of Section IV-B, constraint (15) in P2 sets a desired value ∆a which may not be achieved by our proposed algorithm at the end of the To slots. Denote this mismatch under Algorithm 1 by ǫ ≜ Σ_{τ=0}^{To−1} (ηc(Qτ + Sc,τ) − (Fd,τ + Fs,τ)/ηd) − ∆a. This mismatch is quantified below.

Proposition 3: For any arbitrary system inputs {µt} and any initial queue value Z0 ∈ R, the mismatch for constraint (15) under Algorithm 1 is given by ǫ = ZTo − Z0, and is bounded by

|ǫ| ≤ 2Γ/ηd + V C′(Γ)/ηd + V [C′(Γ)/ηd − Psmin]+ + V Pbmax + ηc Rmax + Dmax/ηd.  (37)

Proof: See Appendix G.

Finally, we expect constraint (3) to be satisfied by Algorithm 1, i.e., buying energy (Et > 0) and selling back from the battery storage (Fs,t > 0) should not occur at the same time. This is verified in the following result.

Proposition 4: For any system inputs µt, the optimal control solution a∗t under Algorithm 1 guarantees that constraint (3) is satisfied.

Proof: See Appendix H.

VI. SIMULATION RESULTS

We set the slot duration to be 5 minutes, and assume that the system input µt remains unchanged within each slot. We set the buying price Pb,t using the data collected from the Ontario Energy Board [33]. We use solar energy for the RG to generate {St}. As a result, {St} is a non-ergodic process, with the mean S̄t = E[St] changing periodically over 24 hours. As shown in Fig. 2 top, we model {S̄t} by a three-stage pattern as {S̄h, S̄m, S̄l} = {1.98, 0.96, 0.005}/12 kWh, and set the standard deviation σSi = 0.4 S̄i, for i = h, m, l. We also model the load Wt as a non-ergodic process with mean W̄t = E[Wt] following a three-stage pattern over each day as {W̄h, W̄m, W̄l} = {2.4, 1.38, 0.6}/12 kWh, shown in Fig. 2 middle, and set the standard deviation σWi = 0.2 W̄i, for i = h, m, l. The buying price Pb,t follows a three-stage price pattern repeated each day as {Pbh, Pbm, Pbl} = {$0.118, $0.099, $0.063} per kWh, as shown in Fig. 2 bottom. The battery and storage parameters are set as follows: Bmin = 0, Rmax = 0.15 kWh, Dmax = 0.15 kWh, Crc = Cdc = 0.001, Umax = 0.2 kWh, and the initial battery energy level B0 = Bmax/2. Unless specified otherwise, we set the following default values: Bmax = 3 kWh and ηc = ηd = 0.98 (a Li-ion battery has 0.99 charging efficiency).

Fig. 2. Example of Wt, St and Pb,t over 24 hours.

Quadratic battery usage cost: We use a quadratic function for the battery usage cost as an exemplary case, given by C(xu) = k xu², where k > 0 is the battery cost coefficient depending on the battery specification and xu is given in (11). The optimal γt∗ of P4a in (25) in this case can be derived as: i) γt∗ = 0 for Ht > 0; ii) γt∗ = Γ for Ht < −2kV Γ; iii) γt∗ = −Ht/(2kV) for −2kV Γ ≤ Ht ≤ 0. We use this cost function throughout our simulation study. Unless specified otherwise, we set k = 0.1 as the default value.

Fig. 3. Average energy bought (sold) into (from) the battery at different prices (selling-to-buying price ratio ρp = 0.9).

Fig. 4. Average energy bought (sold) into (from) the battery at different prices (selling-to-buying price ratio ρp = 0.3).
We consider a 24-hour duration with To = 288 slots. Since
a positive (negative) ∆a allows battery to charge (discharge)
more than discharge (charge) over a To -period, we alternate
the sign of ∆a over multiple To -slot periods to control this
tendency: we set ∆a = +c (−c) for the odd (even) To -slot
periods, for some constant c > 0. Unless specified, we set
V = Vmax as the default value.
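A sketch of how such non-ergodic three-stage inputs can be generated is given below. The stage means and standard deviations are the ones listed above, while the hour boundaries of the high/medium/low stages are an assumption made for illustration (the paper only shows them in Fig. 2).

```python
import numpy as np

slots_per_day = 288                                   # 5-minute slots, T_o = 288
hours = np.arange(slots_per_day) * 24 / slots_per_day

def three_stage(means, sigmas, high=(9, 17), med=(6, 21), rng=None):
    """Piecewise-constant daily mean with Gaussian noise; stage windows are assumed."""
    if rng is None:
        rng = np.random.default_rng(0)
    m_h, m_m, m_l = means
    s_h, s_m, s_l = sigmas
    mean = np.full(slots_per_day, m_l)
    sig = np.full(slots_per_day, s_l)
    med_mask = (hours >= med[0]) & (hours < med[1])
    high_mask = (hours >= high[0]) & (hours < high[1])
    mean[med_mask], sig[med_mask] = m_m, s_m
    mean[high_mask], sig[high_mask] = m_h, s_h
    return np.clip(rng.normal(mean, sig), 0.0, None)

S_means = np.array([1.98, 0.96, 0.005]) / 12          # renewable generation stages (kWh/slot)
W_means = np.array([2.4, 1.38, 0.6]) / 12             # load stages (kWh/slot)
S_t = three_stage(S_means, 0.4 * S_means)
W_t = three_stage(W_means, 0.2 * W_means)
P_b = three_stage([0.118, 0.099, 0.063], [0, 0, 0])   # buying price stages ($/kWh), deterministic
P_s = 0.9 * P_b                                       # selling price via the ratio rho_p
```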
Fig. 5. Performance of Algorithm 1 at different ∆a settings (selling-to-buying price ratio ρp = 0.9). Left: average system cost at different |∆a|; Right: CDF of the mismatch ǫ at different ∆a.

Fig. 6. Average system cost vs. battery capacity Bmax.
1) Energy buying and selling vs. prices: We study the energy buying and selling behaviors under different selling-to-buying price ratios. Define the selling-to-buying price ratio by ρp ≜ Ps,t/Pb,t with 0 ≤ ρp < 1; it is fixed over the To time slots. Define the average amount of purchased and sold energy at each price stage of Pb,t (high, medium, low) as

Q̄i ≜ (1/|Tb^i|) Σ_{t∈Tb^i} Q∗t,   F̄s^i ≜ (1/|Ts^i|) Σ_{t∈Ts^i} F∗s,t,   i = h, m, l

where Tb^i ≜ {t: Pb,t = Pb^i} and Ts^i ≜ {t: Ps,t = Ps^i}. Also, denote by Q̄ and F̄s the overall energy bought and sold, respectively. Figs. 3 and 4 show the average amount of energy for ρp = 0.9 and 0.3 (high and low selling prices), respectively. Comparing Fig. 3 bottom with Fig. 4 bottom, we see that more energy is sold at higher ρp to increase the profit, while at lower ρp, selling at this price is not cost-effective, and the system tends to keep the energy in storage for future usage. Furthermore, in Fig. 3 bottom, at higher ρp, the average amount of energy sold at Psm and Psh increases with the battery capacity Bmax. This is because a larger capacity offers more flexibility for charging and discharging activities, and allows more energy to be sold back at higher prices. For the same reason, a larger capacity allows more energy to be bought at a lower price, as shown in Figs. 3 and 4 top, where, as Bmax increases, the amount of energy bought from the grid at the higher price Pb,t = Pbh decreases and at the lower price Pb,t = Pbl increases.
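These price-conditional averages are easy to compute from a simulated trajectory; a sketch (function and variable names are ours):

```python
import numpy as np

def price_conditional_averages(P_b, P_s, Q_star, F_s_star, stages):
    """Average energy charged from the grid (Q_bar^i) and discharged for sale (F_s_bar^i)
    over the slots whose buying/selling price is at stage i in {'h', 'm', 'l'}."""
    out = {}
    for i, (pb_i, ps_i) in stages.items():
        T_b = np.isclose(P_b, pb_i)            # T_b^i = {t : P_b,t = P_b^i}
        T_s = np.isclose(P_s, ps_i)            # T_s^i = {t : P_s,t = P_s^i}
        Q_bar = Q_star[T_b].mean() if T_b.any() else 0.0
        F_bar = F_s_star[T_s].mean() if T_s.any() else 0.0
        out[i] = (Q_bar, F_bar)
    return out

# e.g. stages = {'h': (0.118, 0.9*0.118), 'm': (0.099, 0.9*0.099), 'l': (0.063, 0.9*0.063)}
```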
2) Setting ∆a and the mismatch ǫ: Fig. 5 shows how the average system cost varies with the value of ∆a set by the proposed Algorithm 1, for ρp = 0.9. We simulate the system over multiple To-slot periods for a given |∆a|. It shows that the system cost increases with |∆a|. More energy needs to be bought and stored in the battery for ∆a > 0, leading to a higher system cost, while it is the opposite for ∆a < 0. Overall, the system cost is the smallest when ∆a = 0. Fig. 5 right shows the CDF of the mismatch ǫ at different values of ∆a under Algorithm 1. We see that ǫ at different values of ∆a is relatively small for
ρp = 0.9, as the availability to sell energy helps keep the mismatch relatively small. Also, from the sign, it shows that the net energy change in the battery over the period tends to be less than |∆a|. The mismatch ǫ is the smallest when ∆a = 0. Based on the above results, we conclude that setting ∆a = 0 in Algorithm 1 is the most desirable, and it is used as the default value for our simulation study.

Fig. 7. Average system cost vs. selling-to-buying price ratio ρp (Default: Bmax = 3 kWh, k = 0.1).

Fig. 8. Comparison of average system cost over ρp at different battery charging (discharging) efficiency ηc (= ηd).
3) Performance Comparison: We consider three other algorithms for comparison: i) 3-slot look-ahead: The non-causal T-slot look-ahead optimal solution with T = 3, where the system inputs {µt} for each 3-slot frame are known non-causally ahead of time. The resulting 3-slot minimum system cost is u^opt_m for frame m. (The 3-slot look-ahead policy is obtained through exhaustive search; in general, the optimal solution can only be obtained by exhaustive search, which is difficult for larger T.) ii) Greedy: A greedy algorithm that
minimizes the per-slot system cost, based on the current
inputs µt . For one-slot cost minimization, to avoid the battery
operation cost, the system directly purchases energy from the
grid to serve the load without storing energy into the battery.
Thus, this greedy method is essentially a method without
storage. iii) Storage without selling-back: With no energy selling-back capability in the model, the problem is reduced to the one considered in [12], and the algorithm proposed therein is used. Note that among the three methods, the first one is a non-causal approach and the other two are real-time approaches.
Fig. 6 compares the average system cost of different algorithms at different battery capacity Bmax , where the cost
is converted into dollars per day. The system cost under
Algorithm 1 reduces as Bmax increases, because a larger
battery capacity gives more flexibility on charging/discharging,
allowing the system to buy more energy at a lower price and
sell more at a higher price. Also, higher selling-to-buying price
ratio ρp = 0.9 results in a lower average system cost. For the
greedy algorithm, since it is essentially a method without the
use of storage, the system cost does not vary with battery
capacity. Compared with the greedy algorithm, our proposed
Algorithm 1 offers more cost reduction as the battery capacity
becomes larger. Moreover, compared with the storage without
selling-back, the extra cost saving by the availability to sell
back energy at different selling price is clearly seen.
Fig. 7 provides a comparison of the system cost at various selling-to-buying price ratios ρp. For the other algorithms, the default Bmax = 3 kWh and k = 0.1 are used. At the default setting, our proposed algorithm outperforms all three other methods for ρp ∈ [0, 1]. By comparing Algorithm 1 with the storage-without-selling-back method, we see the cost saving obtained by selling extra energy back to the grid at different ρp. We also plot the performance of Algorithm 1 at a larger battery capacity and at a higher value of k (higher battery usage cost) to see the effect of different battery specifications on the cost performance.
Finally, in Fig. 8, we show the effect of different battery
charging and discharging efficiencies on the performance.
Depending on the quality of battery, the efficiency may range
from 0.85 to 0.99. As we see, compared to the 3-slot look-ahead solution, the increase of the system cost at lower ηc (ηd) under Algorithm 1 is more noticeable. The performance of the greedy algorithm does not change with different ηc since the algorithm does not use the storage. When the charging
efficiency is low (0.85 ∼ 0.9) and selling-to-buying price
ratio is high, the greedy and 3-slot look-ahead methods can
have a lower system cost, indicating that the benefit of storage
diminishes in this scenario.
VII. CONCLUSION

In this work, we proposed a real-time bidirectional energy control algorithm for a residential ESM system to minimize the system cost within a given time period. The ESM system has an integrated RG in addition to being connected to the CG, and is capable of buying/selling energy from/to the CG. For the system cost, besides the energy buying cost and selling profit, we included the storage cost by accounting for the battery operational cost and the inefficiency due to charging/discharging activities. Our real-time algorithm provides a closed-form control solution for the ESM system which is very simple to implement. It does not rely on any statistical knowledge of the system inputs and is applicable to arbitrary system input dynamics. We showed that our proposed algorithm results in a guaranteed bounded performance gap from the optimal non-causal T-slot look-ahead control solution. Simulations demonstrated that our proposed algorithm outperforms other non-causal or real-time alternative methods, and provided insight into how the availability of selling energy, the selling and buying price setting, and the battery inefficiency affect the storage behavior and the system cost.
APPENDIX A
EQUIVALENCE OF PROBLEMS P2 AND P3
The proof follows the general argument in [32]. The optimal solution of P2 satisfies all constraints of P3, and therefore it is a feasible solution of P3. Let uo2 and uo3 denote the minimum objective values of P2 and P3, respectively. We have uo3 ≤ uo2. Conversely, by Jensen's inequality and the convexity of C(·), we have C̄(γ) ≥ C(γ̄) = C(x̄u), which means uo3 ≥ uo2. Hence, uo2 = uo3, and P2 and P3 are equivalent.
APPENDIX B
PROOF OF LEMMA 1
Proof: By the definition of the Lyapunov drift ∆(Θt),

∆(Θt) ≜ L(Θt+1) − L(Θt) = ½(Z²_{t+1} − Z²_t + H²_{t+1} − H²_t)
= Zt(ηc(Qt + Sc,t) − (Fd,t + Fs,t)/ηd − ∆a/To) + Ht(γt − xu,t) + ½(γt − xu,t)² + ½(ηc(Qt + Sc,t) − (Fd,t + Fs,t)/ηd − ∆a/To)².  (38)

Let gt denote the sum of the last two terms in (38). From (15), we have ∆a/To ≤ max{ηc Rmax, Dmax/ηd} = Γ. For a given value of ∆a, by (4), (5), (10) and (16), gt is upper bounded by

gt ≤ ½ max{(ηc Rmax − ∆a/To)², (Dmax/ηd + ∆a/To)²} + Γ²/2 ≜ G.  (39)

We now find the upper bound of −Ht xu,t in the second term of (38). Note that Sw,t, Wt and Ht are known for the current time slot t. Also note that Sw,t − Wt ≤ 0, because Sw,t = min{Wt, St}. The upper bound of −Ht xu,t is obtained as follows:

1) For Ht ≥ 0: Let lt ≜ Sw,t − Wt − Fs,t ≤ 0. We have xu,t = |ηc(Sc,t + Qt) − (Fd,t + Fs,t)/ηd| ≥ ηc|(Sc,t + Qt) − (Fd,t + Fs,t)| = ηc|Et + Sc,t + Sw,t − Wt − Fs,t|, where the last equality is by the supply-demand balance requirement in (9). Thus,

−Ht xu,t ≤ −Ht ηc|Et + Sc,t + Sw,t − Wt − Fs,t|
≤ −Ht ηc(|Et + Sc,t| + Sw,t − Wt − Fs,t)
≤ −Ht ηc(Et + Sc,t + lt).

2) For Ht < 0: Let lt ≜ Wt − Sw,t + Fs,t ≥ 0. We have xu,t ≤ |(Sc,t + Qt) − (Fd,t + Fs,t)|/ηd = |Et + Sc,t + Sw,t − Wt − Fs,t|/ηd. Thus,

−Ht xu,t ≤ −Ht|Et − Fs,t + Sc,t + Sw,t − Wt|/ηd
≤ −Ht(|Et + Sc,t| + |Sw,t − Wt| + |Fs,t|)/ηd
= −(Ht/ηd)(Et + Sc,t + lt).

Finally, for the first term in (38), we have Zt(ηc(Qt + Sc,t) − (Fd,t + Fs,t)/ηd − ∆a/To) ≤ Zt(Qt + Sc,t − Fd,t − Fs,t − ∆a/To) = Zt(Et + Sc,t + Sw,t − Wt − Fs,t − ∆a/To). Combining the above results and (39), we have the upper bound of ∆(Θt) in (22).
APPENDIX C
PROOF OF LEMMA 2
Proof: The derivation follows the same steps as those of Lemma 3 in [12]; we provide it here briefly. C(γt) is a continuous, convex, non-decreasing function of γt, so C′(γt) ≥ 0 and C′(γt) is non-decreasing in γt. Denote the objective of P4a by J(γt). Since P4a is convex, we examine the derivative of J(γt), given by J′(γt) = Ht + V C′(γt).

1) For Ht ≥ 0: We have J′(γt) ≥ 0, thus J(γt) is non-decreasing, with its minimum obtained at γt∗ = 0.
2) For Ht < −V C′(Γ): Since V C′(Γ) ≥ V C′(γt), we have Ht + V C′(γt) < 0. J(γt) monotonically decreases, and its minimum is reached at γt∗ = Γ.
3) For −V C′(Γ) ≤ Ht ≤ 0: In this case, γt∗ is the root of Ht + V C′(γt) = 0, given by γt∗ = C′−1(−Ht/V).
APPENDIX D
PROOF OF PROPOSITION 1
Proof: Removing the constant terms in (21) and from lt
in (22), and after regrouping the rest of terms, we have the
objective function of P4b .
Determine S^w_s,t and S^w_c,t: To determine how to use the remaining renewable St − Sw,t, we need to minimize the term S^w_c,t(Zt − g(Ht)) + V Crc − S^w_s,t V Ps,t in the objective of P4b:

S1) If Zt − g(Ht) > 0: We should maximize S^w_s,t and minimize S^w_c,t. Thus, the remaining amount should be sold back to the grid and not stored into the battery. We have S^w_c,t = 0, and S^w_s,t = min{St − Sw,t, Umax} or S^w_s,t = min{St − Sw,t − F^w_s,t, Umax}.

S2) If Zt − g(Ht) ≤ 0: The remaining renewable can be stored into the battery and/or sold back to the grid. To minimize the cost, if V Ps,t ≥ g(Ht) − Zt, we set S^a_s,t = min{St − Sw,t, Umax} and S^a_c,t = min{St − Sw,t − S^a_s,t, Rmax}, i.e., we first maximize the renewable sold back to the grid and then the amount stored into the battery. If V Ps,t < g(Ht) − Zt, we set S^a_c,t = min{St − Sw,t, Rmax} and S^a_s,t = min{St − Sw,t − S^a_c,t, Umax}, i.e., we first maximize the amount charged into the battery and then consider the rest to be sold to the grid.

In the objective function of P4b, the optimal Fs,t, Et and Sc,t depend on the signs of Zt − |g(Ht)| + V Ps,t, Zt − g(Ht) + V Pb,t, and Zt − g(Ht), respectively. These three functions have the following relations. If Ht ≥ 0,

Zt − g(Ht) ≤ Zt − g(Ht) + V Ps,t < Zt − g(Ht) + V Pb,t.  (40)

If Ht < 0, the following two relations are possible:

Zt − g(Ht) ≤ Zt − |g(Ht)| + V Ps,t < Zt − g(Ht) + V Pb,t  (41)
Zt − |g(Ht)| + V Ps,t ≤ Zt − g(Ht) < Zt − g(Ht) + V Pb,t.  (42)

Based on the relations in (40)–(42), we have the following five cases and derive the optimal solution in each case.
1) For Zt − g(Ht ) + V Pb,t ≤ 0: From (40) and (42), to
minimize the objective function of P4b , Fs,t = 0, and we want
to maximize Etw . This means the battery is in the charging
w
w
state. We have 1R,t = 1, 1D,t = 0, Fd,t
= Fs,t
= 0, and use
w
w
maximum charging rate Sc,t + Qt = Rmax if possible. Since
w
Zt − g(Ht ) ≤ Zt − g(Ht ) + V Pb,t ≤ 0, between Qw
t and Sc,t ,
w
determining Sc,t first will further reduce the objective value
w
w
a
a
of P4b . Since Zt − g(Ht ) ≤ 0, (Sc,t
, Ss,t
) = (Sc,t
, Ss,t
) as in
S2) given earlier. By supply-demand balancing equation (9),
w
we obtain Qw
t and Et in (26). Alternatively, we can keep the
battery idle and only buy energy Etid from the grid, where
id
Sc,t
+ Qid
t = 0. In this case, the battery cost can be avoided:
1R,t = 1D,t = 0, but Etid will be smaller. The optimal a∗t is
the one that achieves the minimum objective value.
2) For max{Zt − g(Ht ), Zt − |g(Ht )| + V Ps,t } < 0 ≤
Zt − g(Ht ) + V Pb,t : In this case, to minimize the objective of
w
P4b , we want to set Etw as small as possible and Fs,t
= 0. It
is possible that the battery is in either charging or discharging
state. If we charge the battery, it should only be charged
w
from renewable Sc,t
, while Qw
t = 0. Since Zt − g(Ht ) < 0,
w
w
a
a
(Sc,t , Ss,t ) = (Sc,t , Ss,t
) as in S2) given earlier. Note that, if
w
in the charging state, it means Sc,t
> 0, which happens when
w
Wt = Sw,t < St , and we have Fd,t
= 0. Thus, constraint
(6) is satisfied. On the other hand, if Wt > Sw,t , it means
w
w
St = Sw,t and Ss,t
= Sc,t
= 0. We could either use the
battery and/or buy energy Etw (i.e., idle or discharging state)
to serve the load. If the battery is in the discharging state, the
w
amount Fd,t
should be set as large as possible to minimize Etw .
Based on the above, we have the control solution aw
t as shown
in (27). Alternatively, we can keep the battery idle to avoid
battery cost, and only buy energy Etid from the grid. Thus,
the optimal a∗t is chosen by whichever achieves the minimum
objective value.
3) For Zt − g(Ht ) ≤ 0 ≤ Zt − |g(Ht )| + V Ps,t : This is the
case when (40) and (41) hold. To minimize the objective of
P4b , one possible solution is to minimize Etw and maximize
w
w
w
Fs,t
. Due to constraint (6), Sc,t
· Fs,t
= 0 must be satisfied.
w
w
Thus, we have two conditions: i) If Fs,t
≥ 0 and Sc,t
=
0, it means the remaining amount St − Sw,t , if any, will be
only sold back to the grid. Since Zt ≤ g(Ht ), it is easy to
see that 0 < Zt − |g(Ht )| + V Ps,t ≤ V Ps,t . Thus, we first
w
w
maximize Ss,t
and then maximize Fs,t
. Since Zt −g(Ht ) ≤ 0,
w
w
a
a
(Sc,t , Ss,t ) = (Sc,t , Ss,t ) as in S2). The control solution aw
t
w
w
is shown as in (28). ii) If Sc,t
≥ 0 and Fs,t
= 0, the battery
w
will be charged from Sc,t
only and no energy from the battery
w
w
a
a
will be sold. We have (Sc,t
, Ss,t
) = (Sc,t
, Ss,t
) as in S2). The
w
control solution at is shown as in (29). After comparing i) and
ii), aw
t is the one whichever achieves the less objective value.
Alternatively, we can keep the battery idle. Thus, the optimal
a∗t is chosen by whichever achieves the minimum objective
id
value between aw
t and at .
4) For Zt − |g(Ht )| + V Ps,t < 0 ≤ Zt − g(Ht ): Note that
this case happens when Ht < 0 and (42) holds. We want to
w
set Etw as small as possible and Fs,t
= 0. Since Zt ≥ g(Ht ),
w
from earlier we have Sc,t = 0. Thus, the battery can be in
the discharging state, and it is straightforward to obtain aw
t in
(30). After comparing to the alternative idle state, the optimal
a∗t is chosen by whichever achieves the minimum objective
value.
5) For min{Zt − g(Ht ), Zt − |g(Ht )| + V Ps,t } > 0:
w
Since Zt > g(Ht ), Sc,t
= 0. By (40)-(42), to minimize the
w
objective of P4b , we want to minimize Etw and maximize Fs,t
.
This means no charging: Qw
=
0.
Thus,
only
the
discharging
t
or idle state could be considered. For the discharging state,
since Zt − |g(Ht )| + V Ps,t < Zt − g(Ht ) + V Pb,t , we
w
=
should first maximize the discharging amount as Fd,t
w
.
min{Wt − Sw,t , Dmax } to minimize Etw , then maximize Fs,t
w
w
, to minimize the
and Ss,t
For energy selling, between Fs,t
w
cost, if Zt − |g(Ht )| > 0, we should first maximize Fs,t
from
w
w
the battery and then sell from the renewable, as Fs,t and Ss,t
in (31). Otherwise, we first sell as much as possible from
w
the renewable, and then determine Fs,t
as given in (32). By
w
supply-demand equation (9), Et can be obtained as in (31)
and (32). Alternatively, we can keep the battery idle and only
buy energy Etid from the grid. The optimal a∗t is the one that
achieves the minimum objective value.
APPENDIX E
PROOF OF PROPOSITION 2
Proof: Before going into the details, we first provide an
outline of our proof. Using the solutions γt∗ and a∗t of P4a
and P4b , respectively, we can show that both Zt and Ht are
upper and lower bounded. Then, by applying these bounds to
(20) and using the battery capacity constraint (7), we obtain
Ao as the minimum value that can be achieved with a given
value of ∆a . With Ao obtained, we derive the upper bound of
V , i.e., Vmax , to ensure that (7) is satisfied.
To prove Proposition 2, we first introduce Lemma 3 and
Lemma 4 below.
Lemma 3: For γt∗ in (25), Ht is bounded by
Hmin ≤ Ht ≤ Hmax
(43)
where Hmin , −V C ′ (Γ) − Γ < 0, and Hmax , Γ.
Proof:
1) Upper bound: From (10), xu,t ≥ 0. If Ht ≥ 0, from
(25), we have γt∗ = 0. Thus, based on the dynamics of Ht in
(19), Ht+1 ≤ Ht , i.e., non-increasing. When Ht < 0, from
(25), the maximum increment of Ht+1 in (19) is when γt∗ = Γ
and xu,t = 0, and thus Ht+1 ≤ Γ = Hmax .
2) Lower bound: From (25), if Ht < −V C ′ (Γ), we have
∗
γt = Γ, and Ht+1 is non-decreasing in (19). If Ht ≥
−V C ′ (Γ), the maximum decrement of Ht+1 from Ht in (19)
is when γt∗ = 0 and xu,t = Γ, and Ht+1 ≥ −V C ′ (Γ) − Γ =
Hmin . Since C(·) is non-decreasing, C ′ (·) ≥ 0, we have
Hmin < 0.
Lemma 4: Under the proposed solution in Proposition 1, we
have
∗
∗
1) If Zt < −V Pbmax + Hmin /ηd , then Fd,t
+ Fs,t
= 0;
min
∗
2) If Zt > max{ηc Hmax , |Hmin |/ηd − V Ps }, then Sr,t
+
∗
Qt = 0.
Proof: Note that Hmin < 0 and Hmax > 0. 1) This
case corresponds to Case 1) in Proposition 1. We have
Zt < −V Pbmax + Hmin /ηd ≤ −V Pbmax + g(Ht ), thus
∗
∗
Fd,t
+ Fs,t
= 0 is the optimal control action. 2) This case
corresponds to Case 5) in Proposition 1. From Lemma 3, we
know |Hmin | > |Hmax |. Thus, it is easy to see that if Zt >
∗
max{ηc Hmax , |Hmin |/ηd − V Psmin }, then Sc,t
= Q∗t = 0 are
the optimal control action.
Now, we are ready to prove Proposition 2. We first show that under Ao and V in (34) and (33), Bt is always upper bounded by Bmax; then we prove that Bt is lower bounded by Bmin.
A. Upper Bound Bt ≤ Bmax
∗
∗
Based on Lemma 4.1), we have Fd,t
+ Fs,t
= 0 if
max
Zt < −V Pb
+ Hmin /ηd , no discharging from the battery.
When Zt ≥ −V Pbmax + Hmin /ηd , from (5), the maximum
decreasing amount from Zt to Zt+1 in (18) in the next time
slot is Dmax /ηd , and we have, for t ∈ [0, To − 1],
Zt+1 ≥ −V Pbmax −
∆a Dmax −Hmin
−
.
To
ηd
(44)
a
In (20), we have Bt = Zt + Ao + ∆
To t. To satisfy the lower
a
bound of Bt in (7), we must ensure Zt + Ao + ∆
To t ≥ Bmin .
∆a
max
From (44), it is sufficient to let −V Pb
− To − (Dmax −
a
Hmin )/ηd + Ao + ∆
t
≥
B
,
which
means
min
To
Ao ≥ Bmin + V Pbmax +
Dmax −Hmin ∆a
+
(1 − t),
ηd
To
(45)
for all t ∈ [0, To ]. We next determine the minimum possible
value of Ao based on the sign of ∆a .
1) If ∆a ≥ 0: The minimum value of Ao in (45) is
Ao,min = Bmin + V Pbmax +
Dmax −Hmin ∆a
+
.
ηd
To
(46)
a
As a result, we have At = Ao,min + ∆
To t.
∗
a
Based on Lemma 4.2), we have Sr,t + Q∗t = 0 if Zt − ∆
To >
∆a
min
max{ηc Hmax , |Hmin |/ηd − V Ps } − To , i.e., no charging
a
for the battery. When Zt − ∆
To ≤ max{ηc Hmax , |Hmin |/ηd −
∆a
min
V Ps } − To , based on the maximum increment of Zt to
Zt+1 in (18), we have, for t ∈ [0, To ],
Zt ≤ max{ηc Hmax ,
|Hmin |
∆a
−V Psmin }−
+ ηc Rmax , (47)
ηd
To
Substituting At with Ao,min in (46) into (20), and from (47),
we have
|Hmin |
Bt ≤Bmin + max{ηc Hmax ,
− V Psmin } + ηc Rmax
ηd
Dmax −Hmin ∆a
+ V Pbmax +
+
t
(48)
ηd
To
For the control solution to be feasible, we need Bt ≤ Bmax .
This is satisfied if RHS of (48) ≤ Bmax , for all t ∈ [0, To ].
Using expressions in Lemma 3, this means
|Hmin |
Hmin
− max{ηc Hmax ,
− V Psmin }
ηd
ηd
V C ′ (Γ)
1
≤ B̃t −
− Γ( + ηc )
ηd
ηd
C ′ (Γ)
1
− max{0, V (
− Psmin ) + Γ( − ηc )}
ηd
ηd
V C ′ (Γ)−2Γ
1
C ′ (Γ)
= B̃t −
−max{Γ(ηc − ), V (
−Psmin )}
ηd
ηd
ηd
V Pbmax ≤ B̃t +
a
where B̃t ≜ Bmax − Bmin − ηc Rmax − Dmax/ηd − (∆a/To)t. To satisfy the above inequality, it suffices that the following holds
APPENDIX F
PROOF OF THEOREM 1
Proof: A T -slot sample path Lyapunov drift is defined by
∆T (Θt ) , L(Θt+T ) − L(Θt ). We upper bound it as follows
1
2
2
− Zt2 + Ht+T
− Ht2
Zt+T
2
t+T
X−1
∆a
= Zt
ηc (Qτ + Sr,τ ) − (Fd,τ + Fs,τ )/ηd −
To
τ =t
"
#
2
t+T
t+T −1
X−1
1 X
+ Ht
(γτ − xu,τ ) +
(γτ − xu,τ )
2 τ =t
τ =t
"t+T −1
#2
1 X
∆a
+
ηc (Qτ + Sr,τ ) − (Fd,τ + Fs,τ )/ηd −
2 τ =t
To
t+T
−1
X
∆a
ηc (Qτ + Sr,τ ) − (Fd,τ + Fs,τ )/ηd −
≤ Zt
To
τ =t
∆T (Θt ) =
+ Ht
t+T
X−1
(γτ − xu,τ ) + GT 2
(53)
τ =t
C ′ (Γ)
V C ′ (Γ) − 2Γ
− V max{0, (
− Psmin )}, where G is defined in Lemma 1.
ηd
ηd
Let To = M T . We consider a per-frame optimization
problem below, with the objective of minimizing the timewhich is satisfied if V ∈ (0, Vmax ] with
averaged system cost within the mth frame of length T time
B̃0 − 2Γ/ηd − ∆a
slots.
Vmax = max
.
(49)
Pb
+ C ′ (Γ)/ηd + [C ′ (Γ)/ηd − Psmin ]+
(m+1)T −1
X
1
2) If ∆a < 0: The minimum value of Ao in (45) is
[Et Pb,t − (Fs,t + Ss,t )Ps,t
Pf : min
{at ,γt } T
t=mT
∆
D
−
H
a
max
min
+
− ∆a . (50)
Ao,min = Bmin +V Pbmax +
+ xe,t + C(γt )]
ηd
To
s.t
(9),
(1),
(6),
(13),
(14),
(17),
and (16).
Substituting Ao,min in At , and from (20) and (47), we have
V Pbmax ≤ B̃t −
|Hmin |
− V Psmin } + ηc Rmax
ηd
Dmax −Hmin
∆a
+ V Pbmax +
− ∆a +
t.
(51)
ηd
To
Bt ≤Bmin + max{ηc Hmax ,
Again, to satisfy Bt ≤ Bmax, it suffices that the RHS of (51) ≤ Bmax. Using a similar analysis to the case of ∆a ≥ 0,
the bound is satisfied if V ∈ (0, Vmax ] with
Vmax =
Pbmax
+
B̃0 − 2Γ/ηd − |∆a |
′
C (Γ)/ηd + [C ′ (Γ)/ηd
− Psmin ]+
.
(52)
Combining (49) and (52) leads to (33), and from (46) or (50),
we have Ao in (34).
We show that Pf is equivalent to P1 in which To is replaced
by T . Let ufm denote the minimum objective value of Pf .
The optimal solution of P1 satisfies all constraints of Pf and
therefore is feasible to Pf . Thus, we have ufm ≤ uopt
m . By
Jensen’s inequality and convexity of C(·), we have C(γ) ≥
C(γ) = C(xu ). Note that introducing the auxiliary variable γt
with constraints (16) and (17) does not modify the problem.
opt
f
This means ufm ≥ uopt
m . Hence, we have um = um and Pf
and P1 are equivalent.
From (53) and the objective of Pf , we have the T -slot driftplus-cost metric for the mth frame upper bounded by
∆T (Θt )
(m+1)T −1
B. Lower Bound Bt ≥ Bmin .
We now show that using Ao,min in (46) or (50) for ∆a ≥ 0
or ∆a < 0, respectively, and V ∈ (0, Vmax ] with Vmax in (49)
or (52), respectively, we have Bt ≥ Bmin for all t.
1) If ∆a ≥ 0: Substitute Ao,min in (46) in At , and Zt in
∆a
a
(20) into (44), we have Bmin + ∆
To t ≤ Bt . Since To t > 0,
for t ∈ [0, To − 1], Bt ≥ Bmin is satisfied for ∆a ≥ 0.
2) If ∆a < 0: Substitute Ao,min in (50) in At , and Zt
in (20) into (44), we have Bmin + ∆a ( Tto − 1) ≤ Bt . Since
∆a ( Tto − 1) > 0, for t ∈ [0, To − 1], Bt ≥ Bmin is satisfied
for ∆a < 0.
X
+V
[Et Pb,t − (Fs,t + Ss,t )Ps,t + xe,t + C(γt )]
t=mT
(m+1)T −1
X
≤ Zt
ηc (Qτ + Sr,τ ) − (Fd,τ + Fs,τ )/ηd −
t=mT
(m+1)T −1
+ Ht
X
∆a
To
(γt − xu,t ) + GT 2
t=mT
(m+1)T −1
+V
X
t=mT
[Et Pb,t − (Fs,t + Ss,t )Ps,t + xe,t + C(γt )]. (54)
Let {ãt , γ̃t } denote a pair of feasible solution of Pf , satisfying
the following relations
!
(m+1)T −1
(m+1)T
X −1 F̃d,t + F̃s,t
X
∆a
ηc Q̃t + S̃r,t =
(55)
+
ηd
To
t=mT
t=mT
(m+1)T −1
(m+1)T −1
X
X
γ̃t =
t=mT
x̃u,t
(56)
t=mT
with the corresponding objective value denoted as ũfm .
Note that, compared with P1, we impose the per-frame constraints (55) and (56) as opposed to (15) and (17) for the To-slot
period. Let δ ≥ 0 denote the gap of ũfm to the optimal objective
opt
f
value uopt
m , i.e., ũm = um + δ.
Among all feasible control solutions satisfying (55) and
(56), there exists a solution which leads to δ → 0. The upper
bound in (54) can be rewritten as
(m+1)T −1
∆T (Θt ) + V
X
[Et Pb,t − (Fs,t + Ss,t )Ps,t + xe,t + C(γt )]
t=mT
2
opt
≤ GT 2 + V T lim uopt
m + δ = GT + V T um .
δ→0
(57)
Summing both sides of (57) over m for m = 0, . . . , M − 1,
and dividing them by V M T , we have
L(ΘTo ) − L(Θ0 )
V MT
M −1 (m+1)T
X −1
1 X
+
[Et Pb,t − (Fs,t + Ss,t )Ps,t + xe,t + C(γt )]
M T m=0 t=mT
≤
M −1
1 X opt GT
.
u +
M m=0 m
V
(58)
C(γ) ≥ C(γ) for the convex function C(·) where
SinceP
To −1
γt , from (58), we have
γ , T1o t=0
To −1
1 X
[Et Pb,t − (Fs,t + Ss,t )Ps,t ] + xe + C(γ)
To t=0
≤
To −1
1 X
[Et Pb,t − (Fs,t + Ss,t )Ps,t + xe,t + C(γt )]. (59)
To t=0
For a continuously differentiable convex function f (·), the
following inequality holds [34]
f (x) ≥ f (y) + f ′ (y)(x − y).
(60)
Applying (60) to C(xu ) and C(γ), we have
C(xu ) ≤ C(γ) + C ′ (xu )(xu − γ) ≤ C(γ) + C ′ (Γ)(xu − γ)
H T − H0
(61)
= C(γ) − C ′ (Γ) o
To
where the last term in (61) is obtained by summing both sides
of (19) over To .
Applying the inequality (61) to C(γ) at the LHS of (59),
and further applying the inequality (59) to the LHS of (58), we
have the bound of the objective value u∗ (V ) of P1 achieved
by Algorithm 1 as in (35).
For the bound in (35), note that Ht is bounded as in (43),
and Zt is bounded by (44) and (47). It follows that L(Θt )
C ′ (Γ)(H0 −HTo )
→ 0 and
is bounded. As To → ∞, we have
To
L(Θ0 )−L(ΘTo )
→ 0, and (36) follows.
V To
APPENDIX G
PROOF OF PROPOSITION 3
Proof: For t = To , from the dynamic shifting in (20),
we have ZTo = BT0 − Ao − ∆a . For t = 0, we have Z0 =
B0 −Ao . Thus, we have the following relation: T1o (ZTo −Z0 ) =
∆a
1
To (BTo −B0 )− To . Substituting the first equation in (15) into
the above, we have
PTo −1
Fd,τ +Fs,τ
] − ∆a
ZTo − Z0
τ =0 [ηc (Qτ + Sr,τ ) −
ηd
=
.
To
To
Note that Zt in (18) is derived from the above. Since this is a finite-time-horizon algorithm, (15) is satisfied with error ǫ =
ZTo − Z0 . Because Zt is bounded by (44) and (47) and Ht is
bounded by (43), the error ǫ has the following upper bound
|ǫ| ≤ max{ηc Hmax , |Hmin |/ηd − V Psmin } + ηc Rmax
+ V Pbmax − Hmin /ηd + Dmax /ηd
≤ max{Hmax /ηd + |Hmin |/ηd , 2|Hmin |/ηd − V Psmin }
+ V Pbmax + ηc Rmax + Dmax /ηd .
= (2Γ + V C ′ (Γ))/ηd + max{0, V C ′ (Γ)/ηd − V Psmin }
+ V Pbmax + ηc Rmax + Dmax /ηd .
(62)
Thus, we complete the proof.
APPENDIX H
PROOF OF PROPOSITION 4
Proof: To ensure that (3) is satisfied, we show that the optimal control solutions (26)–(32) in Proposition 1 guarantee it. For Cases 1, 2 and 4, from their optimal control solutions (26), (27) and (30), it is easy to see that (3) is satisfied. For Cases 3 and 5, from their optimal control solutions (28), (29), (31) and (32), if F^w_d,t = Wt − Sw,t < Dmax, then E^w_t = 0 and F^w_s,t ≥ 0; if F^w_d,t = Dmax, then F^w_s,t = 0 and E^w_t ≥ 0. If the battery is in the idle state, we always have F^id_s,t = 0. Thus, (3) is satisfied under Algorithm 1.
REFERENCES
[1] P. Denholm, E. Ela, B. Kirby, and M. Milligan, “The role of energy
storage with renewable electricity generation,” National Renewable
Energy Laboratory, Tech. Rep. NREL/TP-6A2-47187, Jan. 2010.
[2] A. Castillo and D. F. Gayme, “Grid-scale energy storage applications
in renewable energy integration: A survey,” Energy Convers. Manage,
vol. 87, pp. 885–894, Nov. 2014.
[3] H.-I. Su and A. El Gamal, “Modeling and analysis of the role of energy
storage for renewable integration: Power balancing,” IEEE Trans. Power
Syst., vol. 28, pp. 4109–4117, Nov. 2013.
[4] S. Sun, M. Dong, and B. Liang, “Real-time power balancing in electric
grids with distributed storage,” IEEE J. Select. Topics Signal Processing,
vol. 8, pp. 1167–1181, Dec. 2014.
[5] S. Sun, B. Liang, M. Dong, and J. A. Taylor, “Phase balancing using
energy storage in power grids under uncertainty,” IEEE Trans. Power
Syst., vol. 31, no. 5, pp. 3891–3903, Sep. 2016.
[6] S. Sun, M. Dong, and B. Liang, “Distributed real-time power balancing
in renewable-integrated power grids with storage and flexible loads,”
IEEE Trans. Smart Grid, vol. 7, no. 5, pp. 2337–2349, Sep. 2016.
[7] Y. Zhang, N. Gatsis, and G. Giannakis, “Robust energy management
for microgrids with high-penetration renewables,” IEEE Trans. Sustain.
Energy, vol. 4, pp. 944–953, Oct 2013.
[8] A. Faruqui, R. Hledik, and J. Tsoukalis, “The power of dynamic pricing,”
Electr. J., vol. 22, no. 3, pp. 42–56, 2009.
[9] C. Joe-Wong, S. Sen, S. Ha, and M. Chiang, “Optimized day-ahead
pricing for smart grids with device-specific scheduling flexibility,” IEEE
J. Sel. Areas Commun., vol. 30, pp. 1075–1085, Jul. 2012.
[10] Y. Wang, X. Lin, and M. Pedram, “Adaptive control for energy storage
systems in households with photovoltaic modules,” IEEE Trans. Smart
Grid, vol. 5, no. 2, pp. 992–1001, Mar. 2014.
[11] M. He, S. Murugesan, and J. Zhang, “A multi-timescale scheduling
approach for stochastic reliability in smart grids with wind generation
and opportunistic demand,” IEEE Trans. Smart Grid, vol. 4, pp. 521–
529, Mar. 2013.
[12] T. Li and M. Dong, “Real-time energy storage management with
renewable integration: Finite-time horizon approach,” IEEE J. Sel. Areas
Commun., vol. 33, no. 12, pp. 2524–2539, Dec. 2015.
[13] ——, “Real-time residential-side joint energy storage management and
load scheduling with renewable integration,” IEEE Trans. Smart Grid,
2016, available IEEE Xplore Early Access.
[14] Y. Zhang and M. van der Schaar, “Structure-aware stochastic storage
management in smart grids,” IEEE J. Sel. Topics Signal Process., vol. 8,
pp. 1098–1110, Dec. 2014.
[15] M. R. V. Moghadam, R. Zhang, and R. T. B. Ma, “Demand response for
contingency management via real-time pricing in smart grids,” in Proc.
IEEE Int. Conf. Smart Grid Commun. (SmartGridComm), Nov. 2014,
pp. 632–637.
[16] “Grid-connected renewable energy systems,” U.S. Department of
Energy. [Online]. Available: https://www.energy.gov/energysaver.
[17] “Information
for
renewable
generators,”
Ontario
Energy
Board.
[Online].
Available:
https://www.oeb.ca/industry/tools-resources-and-links/information-renewable-generators.
[18] R. Urgaonkar, B. Urgaonkar, M. J. Neely, and A. Sivasubramaniam,
“Optimal power cost management using stored energy in data centers,”
in Proc. ACM SIGMETRICS, Jun. 2011.
[19] K. Rahbar, J. Xu, and R. Zhang, “Real-time energy storage management for renewable integration in microgrid: An off-line optimization
approach,” IEEE Trans. Smart Grid, vol. 6, pp. 124–134, Jan 2015.
[20] L. Huang, J. Walrand, and K. Ramchandran, “Optimal demand response
with energy storage management,” in Proc. IEEE SmartGridComm,
2012.
[21] S. Salinas, M. Li, P. Li, and Y. Fu, “Dynamic energy management for
the smart grid with distributed energy resources,” IEEE Trans. Smart
Grid, vol. 4, pp. 2139–2151, Sep. 2013.
[22] T. Li and M. Dong, “Online control for energy storage management
with renewable energy integration,” in Proc. IEEE ICASSP, May 2013.
[23] ——, “Real-time energy storage management with renewable energy of
arbitrary generation dynamics,” in Proc. IEEE ASILOMAR, Nov. 2013.
[24] ——, “Real-time energy storage management: Finite-time horizon approach,” in Proc. IEEE Int. Conf. Smart Grid Commun. (SmartGridComm), Nov. 2014, pp. 115–120.
[25] J. Qin, Y. Chow, J. Yang, and R. Rajagopal, “Distributed online modified
greedy algorithm for networked storage operation under uncertainty,”
IEEE Trans. Smart Grid, vol. 7, no. 2, pp. 1106–1118, Mar. 2016.
[26] M. J. Neely, Stochastic Network Optimization with Application to
Communication and Queueing Systems. Morgan & Claypool, 2010.
[27] B. G. Kim, S. Ren, M. van der Schaar, and J. W. Lee, “Bidirectional
energy trading and residential load scheduling with electric vehicles in
the smart grid,” IEEE J. Sel. Areas Commun., vol. 31, no. 7, pp. 1219–
1234, Jul. 2013.
[28] C. Mediwaththe, E. Stephens, D. Smith, and A. Mahanti, “Competitive
energy trading framework for demand-side management in neighborhood area networks,” IEEE Trans. Smart Grid, 2017.
[29] Y. Huang, S. Mao, and R. Nelms, “Adaptive electricity scheduling in
microgrids,” IEEE Trans. Smart Grid, vol. 5, pp. 270–281, Jan. 2014.
[30] P. Ramadass, B. Haran, R. White, and B. N. Popov, “Performance study
of commercial LiCoO2 and spinel-based Li-ion cells,” J. Power Sources,
vol. 111, pp. 210–220, Sep. 2002.
[31] Y. Shi, B. Xu, Y. Tan, and B. Zhang, “A convex cycle-based degradation model for battery energy storage planning and operation,”
arXiv:1703.07968, Tech. Rep., 2017.
[32] M. J. Neely, “Universal scheduling for networks with arbitrary traffic,
channels, and mobility,” arXiv:1001.0960, Tech. Rep., 2010.
[33] “Electricity prices,” Ontario Energy Board, 2012. [Online]. Available:
http://www.ontarioenergyboard.ca/OEB/Consumers/Electricity
[34] S. Boyd and L. Vandenberghe, Convex Optimization.
Cambridge
University Press, 2004.
| 3 |
arXiv:1210.2885v1 [math.CO] 10 Oct 2012
A rank computation problem related to the
HK function of trinomial hypersurfaces
Shyamashree Upadhyay
Department of Mathematics
Indian Institute of Technology, Guwahati
Assam-781039, INDIA
email:[email protected]
Abstract
In this article, I provide a solution to a rank computation problem related to the computation of the Hilbert-Kunz function for any
disjoint-term trinomial hypersurface, over any field of characteristic 2.
This rank computation problem was posed in [3]. The formula for the
Hilbert-Kunz function for disjoint-term trinomial hypersurfaces can
be deduced from the result(s) of this article, modulo a lot of tedious
notation.
Contents

1 Introduction
2 Stating the problem in a general way
  2.1 Statement of Problem I
  2.2 Statement of Problem II
3 Structure of the string of Binomial coefficients
4 Solutions to Problems I and II
  4.1 Solution to Problem I
  4.2 Solution to Problem II
5 Concluding remarks about rationality

1 Introduction
A ‘disjoint-term trinomial hypersurface’ is defined in §2 of [3]. In the last
section of [3], I gave an algorithm for computing the Hilbert-Kunz function
for any disjoint-term trinomial hypersurface in general, over any field of arbitrary positive characteristic. But the algorithm was given modulo a rank
computation problem for 3 types of systems of linear equations which were
given in tables 5, 6 and 8 of [3]. In this article, I provide a solution to this
rank computation problem over fields of characteristic 2. It would be nice if one could provide a similar solution over fields of arbitrary positive characteristic p.
The formula for the Hilbert-Kunz function for disjoint-term trinomial
hypersurfaces follows as a corollary to the solution of this rank computation
problem, but a lot of tedious notation is needed for deducing the formula.
Hence, in this article I avoid the work of deducing the Hilbert-Kunz function from this result.
In the article [3], I suspected that, due to the weird combinatorial pattern of the matrices in tables 5, 6 and 8, there could be examples of disjoint-term trinomial hypersurfaces for which the corresponding Hilbert-Kunz multiplicity is irrational. But my suspicion turns out to be incorrect (see the papers [2] and [1] for a reasoning). However, if we consider trinomial hypersurfaces which are defined by a polynomial not having disjoint terms in it,
then we will encounter a similar-looking (but larger and more complicated)
rank computation problem for those. And I suspect that the Hilbert-Kunz
multiplicity may become irrational for those hypersurfaces because of the
more complicated nature of the system of linear equations involved in it. In
fact, the solution to the rank computation problem given in this article is
also important for the next article in which I will consider trinomial hypersurfaces which are defined by a polynomial not having disjoint terms in it,
because something similar happens there.
2 Stating the problem in a general way
The following combinatorial fact is well known (called the Chu-Vandermonde identity):
For any two natural numbers m and n and for any non-negative integer j, the standard inner product of the two vectors (^m C_0, ^m C_1, ..., ^m C_j) and (^n C_j, ^n C_{j−1}, ..., ^n C_0) equals ^{m+n} C_j, where we declare that for any positive integer M, ^M C_l = 0 if l > M.
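As a quick sanity check (not part of the original argument), the identity can be verified numerically for small values. The following Python sketch is illustrative only; the helper C simply encodes the convention ^M C_l = 0 for l > M.

```python
# Numerical check of the Chu-Vandermonde identity stated above.
from math import comb

def C(n, r):
    # math.comb already returns 0 when r > n; we also guard r < 0
    # to match the paper's convention.
    return comb(n, r) if 0 <= r <= n else 0

for m in range(1, 8):
    for n in range(1, 8):
        for j in range(0, 10):
            lhs = sum(C(m, t) * C(n, j - t) for t in range(j + 1))
            assert lhs == C(m + n, j), (m, n, j)
print("Chu-Vandermonde identity verified for all tested m, n, j")
```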
It follows from the above fact and from the pattern of the matrices present
in tables 5, 6 and 8 of the article [3] that
• for the linear systems corresponding to the matrices in tables 5 and 8,
we need to solve the general combinatorial problem mentioned below
as Problem I and
• for the linear systems corresponding to the matrices in table 6, we
need to solve the general combinatorial problem mentioned below as
Problem II.
It is worthwhile to mention here that in this article, we will be considering
systems of linear equations over fields of characteristic 2 only.
2.1 Statement of Problem I
Given any positive integer M and any non-negative integer j, for which values of positive integers k and l does the following system of (l + 1) many linear equations in (k + 1) many unknowns have a solution? The system is AX = b, where the coefficient matrix A is the (l + 1) × (k + 1) matrix given below in Table 1 and b is the (l + 1) × 1 column vector (0, 0, ..., 1)^t. The lines along the south-west↔north-east direction (in Table 1) indicate the continuation of the same entry along that direction.
Table 1: The coefficient matrix A of Problem I. It is the (l + 1) × (k + 1) matrix whose (r, c) entry (with 0 ≤ r ≤ l and 0 ≤ c ≤ k) is ^M C_{j+l+k−r−c}; in particular the first row is (^M C_{j+l+k}, ^M C_{j+l+k−1}, ..., ^M C_{j+l}), the last row is (^M C_{j+k}, ..., ^M C_{j+1}, ^M C_j), and the entries are constant along the south-west↔north-east diagonals.

2.2 Statement of Problem II
Given any positive integer M and non-negative integer j, for which values of positive integers k, l and q does the following system of (l + q + 1) many linear equations in (k + 1) many unknowns have a solution? The system is AX = b, where the coefficient
matrix A is the (l + q + 1) × (k + 1) matrix given below in table 2 and b is
the (l + q + 1) × (1) column vector (0, 0, . . . , 1)t . In table 2,
Table 2: The coefficient matrix A of Problem II. It is the (l + q + 1) × (k + 1) matrix whose top q rows are (♥_i^0, ♥_i^1, ..., ♥_i^k) for i = q (top row) down to i = 1, and whose bottom (l + 1) rows form exactly the matrix of Table 1.

Here M = α + δ for some natural numbers α and δ (given to us). For any 1 ≤ i ≤ q and 0 ≤ r ≤ k,
♥_i^r := the standard inner product of the two vectors (^α C_0, ^α C_1, ..., ^α C_{j+k+l−r}) and (^δ C_{j+k+l+i−r}, ..., ^δ C_{i+1}, ^δ C_i).
The lines along the south-west↔north-east direction indicate the continuation of the same entry along that direction.
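The entries ♥_i^r are straightforward to compute directly from this definition. The following Python helper is only a transcription of the definition above (the function and variable names are mine, not from the paper).

```python
# Compute the entry ♥_i^r of Table 2 as the inner product of two vectors of
# binomial coefficients of alpha and delta, with C(n, r) = 0 whenever r > n.
from math import comb

def C(n, r):
    return comb(n, r) if 0 <= r <= n else 0

def heart(i, r, alpha, delta, j, k, l):
    length = j + k + l - r + 1
    # first vector: (C(alpha,0), C(alpha,1), ..., C(alpha, j+k+l-r))
    first = [C(alpha, t) for t in range(length)]
    # second vector: (C(delta, j+k+l+i-r), ..., C(delta, i+1), C(delta, i))
    second = [C(delta, j + k + l + i - r - t) for t in range(length)]
    return sum(a * b for a, b in zip(first, second))

# example call with arbitrary small parameters
print(heart(i=1, r=0, alpha=3, delta=4, j=1, k=2, l=1))
```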
3 Structure of the string of Binomial coefficients
For providing solutions to both problems I and II, we first need to have an account of the position of all odd entries in the string of binomial coefficients of any given positive integer M, starting from ^M C_0 to ^M C_M. This is given as follows:
Let M = 2^{l_m} + 2^{l_{m−1}} + ··· + 2^{l_1} be the binary expansion of M, where l_m > l_{m−1} > ··· > l_1 are non-negative integers.
Let D_M := the set of all possible (m + 1)-tuples (α_0, α_1, ..., α_m) generated out of the set {0, 2^{l_1}, 2^{l_2}, ..., 2^{l_m}} which satisfy the following 3 properties simultaneously:
• α_0 = 1.
• α_1 ≥ α_2 ≥ ··· ≥ α_m.
• If α_i ≠ 0 for some i ∈ {1, ..., m}, then α_i > α_j for any j ∈ {i + 1, ..., m}.
Clearly, the total number of elements in the set D_M is 2^m. Let us put an order ⪯ on D_M by declaring that
(α_0, α_1, ..., α_m) ⪯ (α′_0, α′_1, ..., α′_m) if and only if α_i ≤ α′_i for all i ∈ {0, 1, ..., m}.
Observe that ⪯ is a total order on the set D_M. The positions of the odd entries in the string of binomial coefficients of M are indexed by elements of the set D_M, and are arranged (in ascending order) according to the order ⪯ on D_M. This indexing is given by:
For any d = (α_0, α_1, ..., α_m) ∈ D_M, the position of the odd entry corresponding to it in the string of binomial coefficients of M is Σ_{i=0}^{m} α_i.
Since the total number of elements in the set D_M is 2^m, we can denote the elements of D_M by d_1^{(M)}, d_2^{(M)}, ..., d_{2^m}^{(M)}, where d_1^{(M)} ⪯ d_2^{(M)} ⪯ ··· ⪯ d_{2^m}^{(M)}. And let us denote the corresponding positions of the odd entries in the string of binomial coefficients of M by Σd_1^{(M)}, Σd_2^{(M)}, ..., Σd_{2^m}^{(M)}.
Looking at the positions of all odd entries in the string of binomial coefficients of the given positive integer M, we can deduce some important facts about the position of all even entries in the string of binomial coefficients of M. These facts will be very useful in writing down solutions to both Problems I and II. They are as given below:
Let the binary expansion of M be as mentioned above. Then
1. the number of even entries between any two consecutive odd entries in the continuous string of binomial coefficients of M is of the form 2^{l_g} − 2^{l_{g−1}} − ··· − 2^{l_1} − 1 for some g ∈ {1, ..., m}.
2. If l_g < l_ĝ, then a bunch of 2^{l_g} − 2^{l_{g−1}} − ··· − 2^{l_1} − 1 many even entries appears on both the right and left sides of the bunch of 2^{l_ĝ} − 2^{l_{ĝ−1}} − ··· − 2^{l_1} − 1 many even entries, at equal distance from the central bunch.
3. Between any two consecutive bunches of 2^{l_g} − 2^{l_{g−1}} − ··· − 2^{l_1} − 1 many even entries, there exist (2^g − 1) many bunches of even entries such that the number of even entries in each of these (2^g − 1) many bunches is different from 2^{l_g} − 2^{l_{g−1}} − ··· − 2^{l_1} − 1.
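The indexing of the odd entries described above can be checked by brute force for small M. The following Python sketch (illustrative only; the function names are mine) compares the positions obtained from the subset sums Σ_{i} α_i with a direct parity computation of the binomial coefficients.

```python
# Check: the odd entries of C(M,0), ..., C(M,M) (position 1 being C(M,0))
# sit exactly at the positions 1 + (subset sums of the powers of 2 in M).
from math import comb

def odd_positions_direct(M):
    return [pos for pos in range(1, M + 2) if comb(M, pos - 1) % 2 == 1]

def odd_positions_from_DM(M):
    powers = [1 << e for e in range(M.bit_length()) if (M >> e) & 1]
    sums = {0}
    for p in powers:
        sums |= {s + p for s in sums}
    # each d = (alpha_0, ..., alpha_m) in D_M contributes the position
    # alpha_0 + ... + alpha_m = 1 + (a subset sum of the powers of 2 in M)
    return sorted(1 + s for s in sums)

for M in range(1, 200):
    assert odd_positions_direct(M) == odd_positions_from_DM(M), M
print("odd-entry positions match for M = 1, ..., 199")
```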
4 Solutions to Problems I and II
In subsection 4.1 below, the notation and terminology will remain the same as in subsection 2.1. Similarly, in subsection 4.2 below, the notation and terminology will remain the same as in subsection 2.2.
4.1 Solution to Problem I
Problem I can be restated in the following way:
Given any positive integer M and any non-negative integer j, for which values of positive integers k and l does the string
^M C_j, ^M C_{j+1}, ..., ^M C_{j+l+k}
of consecutive binomial coefficients of M satisfy all the following 3 properties simultaneously:
• ^M C_j and ^M C_{j+k+1} have different parity (that is, exactly one of them is even).
• ^M C_{j+t} = ^M C_{(j+t)+q(k+1)} for every t ∈ {1, ..., k + 1} and for every positive integer q such that (j + t) + q(k + 1) ≤ j + l + k. In other words, the string of length k + 1 beginning from ^M C_{j+1} and ending at ^M C_{j+k+1} should be repeated until we reach ^M C_{j+l+k} (there can be a truncated string of length k + 1 at the end).
• The string of length k + 1 beginning from ^M C_{j+1} and ending at ^M C_{j+k+1} should contain evenly many odd elements in it.
There are many possible values of positive integers k and l which do the job.
A list of all possible answers is given below in cases I and II.
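Before going through the cases, note that for small M the three properties above can be tested directly by brute force; since we work over a field of characteristic 2, only the parities of the coefficients matter. The following Python sketch (names, ranges and the parity-based reading of the second property are mine) is useful for checking the case analysis below on small examples.

```python
# Brute-force search for pairs (k, l) satisfying the three restated properties.
from math import comb

def parity(M, t):
    return comb(M, t) % 2 if 0 <= t <= M else 0

def satisfies_problem_I(M, j, k, l):
    # property 1: C(M, j) and C(M, j+k+1) have different parity
    if parity(M, j) == parity(M, j + k + 1):
        return False
    # property 2: the block of length k+1 starting at C(M, j+1) repeats
    # (in parity) until position j+l+k
    for t in range(1, k + 2):
        q = 1
        while (j + t) + q * (k + 1) <= j + l + k:
            if parity(M, j + t) != parity(M, (j + t) + q * (k + 1)):
                return False
            q += 1
    # property 3: the block of length k+1 contains evenly many odd entries
    odd_count = sum(parity(M, j + t) for t in range(1, k + 2))
    return odd_count % 2 == 0

def solutions(M, j, k_max=12, l_max=12):
    return [(k, l) for k in range(1, k_max + 1) for l in range(1, l_max + 1)
            if satisfies_problem_I(M, j, k, l)]

print(solutions(M=10, j=0))
```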
Case I: When ^M C_j is odd.
Suppose that ^M C_j occupies the position Σd_i^{(M)} for some i ∈ {1, ..., 2^m}; then we provide a solution to Problem I by considering different sub cases depending upon the value Σd_{i+1}^{(M)} − Σd_i^{(M)} − 1 being 0 or > 0. Observe that Σd_{i+1}^{(M)} − Σd_i^{(M)} − 1 is nothing but the total number of zeroes lying in between the positions Σd_{i+1}^{(M)} and Σd_i^{(M)}.
Sub case I.a: When Σd_{i+1}^{(M)} − Σd_i^{(M)} − 1 = 0.
If there exists an even natural number s such that Σd_{i+s+1}^{(M)} − Σd_{i+s}^{(M)} − 1 > 0, then take the string ^M C_{j+1}, ..., ^M C_{j+k+1} of length (k + 1) to be the continuous string of binomial coefficients of M starting from position number 1 + Σd_i^{(M)} and ending at position number Σd_{i+s}^{(M)} + L (inclusive of the beginning and end points), where L is some natural number such that 0 < L ≤ Σd_{i+s+1}^{(M)} − Σd_{i+s}^{(M)} − 1. Hence the natural number k is given by k = Σd_{i+s}^{(M)} + L − Σd_i^{(M)} − 1, where L and s are as mentioned above.
To determine the possible values of l in this sub case, look at the entry in position number Σd_{i+s}^{(M)} + L + 1. If it is even, then take l = 1. If it is odd, then look at the number of odd entries in the string of binomial coefficients of M that come in continuum starting from position number 1 + Σd_i^{(M)} (including the odd entry at this position). Let p_j^{(M)} denote this number. Similarly, look at the number of odd entries in the string of binomial coefficients of M that come in continuum starting from position number 1 + L + Σd_{i+s}^{(M)} (including the odd entry at this position). Let q_j^{(M)} denote this number. Take l to be any natural number such that l ≤ 1 + min{p_j^{(M)}, q_j^{(M)}}.
Remark 4.1.1. An even natural number s as mentioned above in this sub case may not exist for some M and for some position number Σd_i^{(M)}. In that situation, this sub case is invalid.
Sub case I.b: When Σd_{i+1}^{(M)} − Σd_i^{(M)} − 1 > 0.
I will now list down the various possibilities under this sub case:
Either
Take k to be any natural number such that k + 1 ≤ Σd_{i+1}^{(M)} − Σd_i^{(M)} − 1. And take the string ^M C_{j+1}, ..., ^M C_{j+k+1} of length (k + 1) to be the continuous string of binomial coefficients of M starting from position number 1 + Σd_i^{(M)} and ending at position number 1 + k + Σd_i^{(M)}. And take l to be any natural number such that l ≤ Σd_{i+1}^{(M)} − Σd_i^{(M)} − (k + 1).
Or
For computing the possible values of the natural number k, proceed similarly as in sub case (I.a). And to determine the possible values of l, look at the entry in position number Σd_{i+s}^{(M)} + L + 1. If it is odd, then take l = 1. If it is even, then look at the number of even entries in the string of binomial coefficients of M that come in continuum starting from position number 1 + Σd_i^{(M)} (including the even entry at this position). Let p_j^{(M)} denote this number. Similarly, look at the number of even entries in the string of binomial coefficients of M that come in continuum starting from position number 1 + L + Σd_{i+s}^{(M)} (including the even entry at this position). Let q_j^{(M)} denote this number. Take l to be any natural number such that l ≤ 1 + min{p_j^{(M)}, q_j^{(M)}}.
Case II: When M Cj is even.
The position of M Cj in the string of binomial coefficients of M (starting
from M C0 to M CM , that is, from left to right) is (j + 1)-th.
Sub case II.a: When j + 2 = Σd_i^{(M)} for some i ∈ {1, ..., 2^m} and there exists an odd natural number u such that all the entries at the position numbers Σd_i^{(M)}, 1 + Σd_i^{(M)}, ..., u + Σd_i^{(M)} are odd and the entry at position number u + 1 + Σd_i^{(M)} is even.
I will now list down the various possibilities under this sub case:
Either
Take k to be any odd natural number such that k + 1 ≤ u + 1. And take the string ^M C_{j+1}, ..., ^M C_{j+k+1} of length (k + 1) to be the continuous string of binomial coefficients of M starting from position number Σd_i^{(M)} and ending at position number k + Σd_i^{(M)}. Take l to be any natural number such that l ≤ (u + 1) − k.
Or
Take the string ^M C_{j+1}, ..., ^M C_{j+k+1} of length (k + 1) to be the continuous string of binomial coefficients of M starting from position number j + 2 (= Σd_i^{(M)}) and ending at position number Σd_{i+s}^{(M)}, where s is any odd natural number such that i + s ≤ 2^m. [Remark: Such an odd natural number s may not always exist. If it does not exist, then this is an invalid possibility.] And to determine the possible values of l, look at the entry in position number Σd_{i+s}^{(M)} + 1. If it is even, then take l = 1. If it is odd, then look at the number of odd entries in the string of binomial coefficients of M that come in continuum starting from position number Σd_i^{(M)} (including the odd entry at this position). Let p_j^{(M)} denote this number. Similarly, look at the number of odd entries in the string of binomial coefficients of M that come in continuum starting from position number 1 + Σd_{i+s}^{(M)} (including the odd entry at this position). Let q_j^{(M)} denote this number. Take l to be any natural number such that l ≤ 1 + min{p_j^{(M)}, q_j^{(M)}}.
Sub case II.b: When j + 2 = Σd_i^{(M)} for some i ∈ {1, ..., 2^m} and Σd_{i+1}^{(M)} − Σd_i^{(M)} − 1 > 0.
Take the string ^M C_{j+1}, ..., ^M C_{j+k+1} of length (k + 1) to be the continuous string of binomial coefficients of M starting from position number j + 2 (= Σd_i^{(M)}) and ending at position number Σd_{i+s}^{(M)}, where s is any odd natural number such that i + s ≤ 2^m. [Remark: Such an odd natural number s may not always exist. If it does not exist, then this is an invalid possibility.] In this sub case, take l = 1.
Sub case II.c: When j + 2 ≠ Σd_i^{(M)} for any i ∈ {1, ..., 2^m}.
Clearly there exists i ∈ {1, ..., 2^m} such that j + 2 < Σd_i^{(M)}. Look at the smallest such i, call it i_0. Take the string ^M C_{j+1}, ..., ^M C_{j+k+1} of length (k + 1) to be the continuous string of binomial coefficients of M starting from position number j + 2 and ending at position number Σd_{i_0+s}^{(M)}, where s is any odd natural number such that i_0 + s ≤ 2^m. [Remark: Such an odd natural number s may not always exist. If it does not exist, then this is an invalid possibility.]
And to determine the possible values of l, look at the entry in position number 1 + Σd_{i_0+s}^{(M)}. If it is odd, then take l = 1. If it is even, then look at the number of even entries in the string of binomial coefficients of M that come in continuum starting from position number j + 2 (including the even entry at this position). Let p_j^{(M)} denote this number. Similarly, look at the number of even entries in the string of binomial coefficients of M that come in continuum starting from position number 1 + Σd_{i_0+s}^{(M)} (including the even entry at this position). Let q_j^{(M)} denote this number. Take l to be any natural number such that l ≤ 1 + min{p_j^{(M)}, q_j^{(M)}}.
There is another non-trivial possibility under this sub case, which is the following:
Let M = 2^{N_r} + 2^{N_{r−1}} + ··· + 2^{N_1} be the binary expansion of M, where N_r > N_{r−1} > ··· > N_1 are non-negative integers. Let N_{s_0} be the least element of the set {N_r, N_{r−1}, ..., N_1} which is ≥ 2. Let {N_{s_t}, N_{s_{t−1}}, ..., N_{s_1}} be the subset of the set {N_r, N_{r−1}, ..., N_1} defined by {N_b | 1 ≤ b ≤ r − 1 and N_{1+b} − N_b > 1}. It may happen in some cases that N_{s_0} = N_{s_1}.
Given any d ∈ {1, ..., t}, let U_d denote the set of all possible sums generated from elements of the set {2^{N_r}, 2^{N_{r−1}}, ..., 2^{N_{1+s_d}}} (where each element of this set can appear at most once in any such sum) which are strictly less than 2^{N_r} + 2^{N_{r−1}} + ··· + 2^{N_{1+s_d}}. Given any d ∈ {1, ..., t} and any x(d) ∈ U_d, suppose ^M C_j is at the (1 + x(d) + 2^{1+N_{s_d}})-th position in the string of binomial coefficients of M counting from the end (that is, from right to left). In this situation, the following solutions are possible:
(i) Take k + 1 to be equal to 2^{N_{s_d}}. To determine the possible values of l, look at the number of even entries in the string of binomial coefficients of M that come in continuum after ^M C_j (i.e., excluding ^M C_j and to the right of it). Let p_j^{(M)} denote this number. Similarly, look at the number of even entries in the string of binomial coefficients of M that come in continuum after ^M C_{j+2^{(1+N_{s_d})}} (i.e., excluding ^M C_{j+2^{(1+N_{s_d})}} and to the right of it). Let q_j^{(M)} denote this number. Take l to be any natural number such that l ≤ 1 + 2^{N_{s_d}} + min{p_j^{(M)}, q_j^{(M)}}.
(ii) If d > 1 and there exist integer(s) z such that N_{s_d} > z > N_{s_{d−1}} and z ∈ {N_r, N_{r−1}, ..., N_1}, then for any such z, take k + 1 to be equal to 2^z. To determine the possible values of l, look at the number of even entries in the string of binomial coefficients of M that come in continuum after ^M C_j (i.e., excluding ^M C_j and to the right of it). Let p_j^{(M)} denote this number. Similarly, look at the number of even entries in the string of binomial coefficients of M that come in continuum after ^M C_{j+2^{(1+N_{s_d})}} (i.e., excluding ^M C_{j+2^{(1+N_{s_d})}} and to the right of it). Let q_j^{(M)} denote this number. Take l to be any natural number such that l ≤ 1 + 2^z(2^{(N_{s_d}−z+1)} − 1) + min{p_j^{(M)}, q_j^{(M)}}.
(iii) If d = 1 and N_{s_1} > N_{s_0}, then there always exist integer(s) z such that N_{s_1} > z ≥ N_{s_0} and z ∈ {N_r, N_{r−1}, ..., N_1}. For any such z, proceed as in (ii) above, replacing d there by 1.
4.2 Solution to Problem II
Problem II can be restated in the following way:
Given any two positive integers α and δ and any non-negative integer j, for which values of positive integers k, l and q does the string
^M C_j, ^M C_{j+1}, ..., ^M C_{j+l+k}
(where M = α + δ) of consecutive binomial coefficients of M become a solution to Problem I as mentioned above, with the following properties also satisfied (simultaneously):
• ^M C_{j+k+l+1} + ^M C_{j+k+l} + ··· + ^M C_{j+l+1} and ^α C_{j+k+l+1} + ^α C_{j+k+l} + ··· + ^α C_{j+l+1} should have the same parity.
• ^M C_{j+k+l+2} + ^M C_{j+k+l+1} + ··· + ^M C_{j+l+2} and ^α C_{j+k+l+2} + ^α C_{j+k+l+1} + ··· + ^α C_{j+l+2} differ in parity if and only if ^δ C_1 (^α C_{j+k+l+1} + ^α C_{j+k+l} + ··· + ^α C_{j+l+1}) is odd.
• ^M C_{j+k+l+3} + ^M C_{j+k+l+2} + ··· + ^M C_{j+l+3} and ^α C_{j+k+l+3} + ^α C_{j+k+l+2} + ··· + ^α C_{j+l+3} differ in parity if and only if ^δ C_1 (^α C_{j+k+l+2} + ^α C_{j+k+l+1} + ··· + ^α C_{j+l+2}) + ^δ C_2 (^α C_{j+k+l+1} + ^α C_{j+k+l} + ··· + ^α C_{j+l+1}) is odd.
• and so on, till
• ^M C_{j+k+l+q} + ^M C_{j+k+l+q−1} + ··· + ^M C_{j+l+q} and ^α C_{j+k+l+q} + ^α C_{j+k+l+q−1} + ··· + ^α C_{j+l+q} differ in parity if and only if Σ_{p=1}^{q−1} ^δ C_p (^α C_{j+k+l+q−p} + ^α C_{j+k+l+q−1−p} + ··· + ^α C_{j+l+q−p}) is odd.
Given any positive integers α, δ and non-negative integer j, we can proceed similarly as in subsection 4.1 and find the possible values of the positive integers k and l for which the string
^M C_j, ^M C_{j+1}, ..., ^M C_{j+l+k}
of consecutive binomial coefficients of M becomes a solution to Problem I. But these values of positive integers k and l should also satisfy the additional q-many properties (about parities of sums of strings of length (k + 1) of the binomial coefficients of M and α) as mentioned above.
For any two natural numbers Z and y, let s_y(Z) denote the sum of the first y-many (starting from ^Z C_0) binomial coefficients of Z. It is an easy exercise to check that for any such Z and y, s_y(Z) = ^{Z−1}C_{y−1}. Since we are working over a field of characteristic 2, it is easy to see that ^M C_{j+k+l+1} + ^M C_{j+k+l} + ··· + ^M C_{j+l+1} equals s_{j+k+l+1}(M) + s_{j+l}(M), which in turn equals ^{M−1}C_{j+k+l} + ^{M−1}C_{j+l−1}. Similarly one can say that ^α C_{j+k+l+1} + ^α C_{j+k+l} + ··· + ^α C_{j+l+1} equals ^{α−1}C_{j+k+l} + ^{α−1}C_{j+l−1}. Therefore the condition that ^M C_{j+k+l+1} + ^M C_{j+k+l} + ··· + ^M C_{j+l+1} and ^α C_{j+k+l+1} + ^α C_{j+k+l} + ··· + ^α C_{j+l+1} should have the same parity translates into the condition that
^{α+δ−1}C_{j+k+l} + ^{α+δ−1}C_{j+l−1} and ^{α−1}C_{j+k+l} + ^{α−1}C_{j+l−1} should have the same parity. ········· condition (∗)
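The identity s_y(Z) = ^{Z−1}C_{y−1} is used here only modulo 2 (we are in characteristic 2), and in that form it is easy to verify numerically. The Python sketch below is only a sanity check, not part of the argument.

```python
# Check, mod 2:  C(Z,0) + C(Z,1) + ... + C(Z,y-1)  ==  C(Z-1, y-1)
from math import comb

for Z in range(1, 60):
    for y in range(1, Z + 2):
        s = sum(comb(Z, i) for i in range(y)) % 2
        assert s == comb(Z - 1, y - 1) % 2, (Z, y)
print("s_y(Z) = C(Z-1, y-1) holds mod 2 for all tested Z, y")
```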
The other (q − 1)-many conditions (about parity of sums of strings of length (k + 1)) imply that the positive integer q should be such that for every a ∈ {2, ..., q}, the sums ^M C_{j+k+l+a} + ^M C_{j+k+l+a−1} + ··· + ^M C_{j+l+a} and ^α C_{j+k+l+a} + ^α C_{j+k+l+a−1} + ··· + ^α C_{j+l+a} differ in parity if and only if the following condition holds:
If p_1 < ··· < p_f is the collection of all elements of the set {p | p ∈ {1, ..., a − 1} and ^α C_{j+k+l+a−p} + ^α C_{j+k+l+a−1−p} + ··· + ^α C_{j+l+a−p} is odd}, then an odd number of elements of the set {^δ C_{p_1}, ..., ^δ C_{p_f}} should be odd. And this should hold true for every a ∈ {2, ..., q}. ········· condition (∗∗)
We therefore need to determine when the value of the string sum ^M C_{j+k+l+a} + ^M C_{j+k+l+a−1} + ··· + ^M C_{j+l+a} is odd and when it is even as the integer a ranges over the set {1, ..., q}; and similarly for the string sums ^α C_{j+k+l+a} + ^α C_{j+k+l+a−1} + ··· + ^α C_{j+l+a} as the integer a ranges over the set {1, ..., q}. But this problem is similar to solving Problem I (or a problem equivalent to Problem I where the column vector b is replaced by a suitable vector which is either (1, 0, ..., 0)^t or (0, 1, ..., 1)^t) for both the integers M and α, assuming (to begin with) that the string sums ^M C_{j+k+l+1} + ^M C_{j+k+l} + ··· + ^M C_{j+l+1} and ^α C_{j+k+l+1} + ^α C_{j+k+l} + ··· + ^α C_{j+l+1} have the same parity. Recall that while solving Problem I by considering various sub cases, we obtained solutions for all possible values of the integer l. Speaking more precisely, we obtained all possible 'upper bounds' of the natural number l. These upper bounds are nothing but the maximum number of possible strings (in continuum) of length (k + 1) for which the parity of the string sum remains the same. That means, as soon as the value of l exceeds this 'upper bound', the string sum changes parity. Hence the positive integer q should be such that the various upper bounds of the values of l under consideration (for α and M both) tally with conditions (∗) and (∗∗) mentioned above.
5 Concluding remarks about rationality
Looking at the rhythm of the parity of sums of continuous strings of binomial coefficients of any given positive integer M (as described in subsections 4.1 and 4.2 above), one can see that over fields of characteristic 2, the Hilbert-Kunz multiplicity in this case of 'disjoint-term trinomial hypersurfaces' will turn out to be rational. In fact, it will depend upon the positions of the odd and even entries in the entire string of binomial coefficients of M, a precise account of which is given in section 3. I hope a similar thing will hold true over fields of arbitrary positive characteristic p.
For any hypersurface defined by a polynomial having 'disjoint terms' in it (not just trinomial hypersurfaces), it is known that the corresponding Hilbert-Kunz multiplicity is rational [see [2] and [1] for a reasoning]. The 'φ's attached in the papers [2] and [1] to disjoint-term trinomials (or more generally to any sum of monomials that are pairwise prime) are 'p-fractals'. As a consequence, the Hilbert-Kunz series is a rational function and the corresponding HK multiplicity lies in Q.
But for trinomial hypersurfaces NOT having 'disjoint terms' in them, the situation becomes more interesting. There we need to solve 'similar' rank computation problems related to larger and more complicated systems of linear equations [in fact, there we can have infinitely many linear equations inside a single system]. But the solution to the rank computation problem mentioned in this article provides a foundation for the work in the more general case. Due to the more complicated nature of the systems in that case, I suspect that the Hilbert-Kunz multiplicity can become irrational there.
References
[1] Paul Monsky, Pedro Teixeira, p-Fractals and power series I: Some 2 variable results, Journal of Algebra 280 (2004), 505–536.
[2] Paul Monsky, Pedro Teixeira, p-Fractals and power series II: Some applications to Hilbert-Kunz theory, Journal of Algebra 304 (2006), 237–255.
[3] Shyamashree Upadhyay, An algorithm for the HK function for disjoint-term trinomial hypersurfaces, arXiv:1204.5417 [math.CO].
| 4 |
On a conjecture of Demailly and new bounds on Waldschmidt constants in P^N
Grzegorz Malara, Tomasz Szemberg, Justyna Szpond
arXiv:1701.04848v1 [math.AG] 17 Jan 2017
January 19, 2017
Abstract
In the present note we prove a conjecture of Demailly for finite sets of sufficiently many very general points in projective spaces. This gives a lower bound
on Waldschmidt constants of such sets. Waldschmidt constants are asymptotic
invariants of subschemes receiving recently considerable attention [1], [2], [6],
[11], [14].
Keywords point configurations, Waldschmidt constants, symbolic powers,
projective space
Mathematics Subject Classification (2000) MSC 14C20 and MSC 13A15
and MSC 13F20 and 32S25
1
Introduction
In 1980 Jean-Charles Moreau proved the following version of the Schwarz Lemma
in several complex variables, [15, Theorem 1.1].
Theorem 1.1 (Moreau). Let Z ⊂ C^N be a finite set of points. For every positive m ∈ Z, there exists a real number r_m(Z) such that for all R > r > r_m(Z) and for all holomorphic functions f vanishing to order at least m at every point of Z there is
|f|_r ≤ (2e^{N/2} r / R)^{α(mZ)} |f|_R,   (1)
where |f|_s = sup_{|z|≤s} |f(z)| and α(kW) is the least degree of a polynomial vanishing at all points of a finite set W to order at least k.
The number α(mZ) in the Theorem is optimal, i.e., the statement fails with
any larger number. Several authors, in particular Chudnovsky, were interested in
obtaining an exponent in (1) independent of m. To this end one defines the following
quantity [16].
Definition 1.2 (Waldschmidt constant). Let Z ⊂ C^N be a finite set of points. The Waldschmidt constant of Z is the real number
α̂(Z) = lim_{m→∞} α(mZ)/m.
The existence of the limit has been shown by Chudnovsky [3, Lemma 1]. It is well known that α̂(Z) = inf_{m≥1} α(mZ)/m. Chudnovsky also established the following fundamental fact, see [3, Theorem 1].
Theorem 1.3. Let Z ⊂ C^N be a finite set of points. Then
α̂(Z) ≥ α(Z)/N.   (2)
The bound in (2) can now be easily derived from the seminal results of Ein,
Lazarsfeld and Smith [9]. We discuss it briefly below in Section 2. Chudnovsky
suspected that the bound in (2) is not optimal and raised the following Conjecture,
see [3, Problem 1].
Conjecture 1.4 (Chudnovsky). Let Z ⊂ C^N be a finite set of points. Then
α̂(Z) ≥ (α(Z) + N − 1)/N.   (3)
(3)
This has been subsequently generalized by Demailly, see [4, p. 101].
Conjecture 1.5 (Demailly). Let Z ⊂ CN be a finite set of points. Then for all
m>1
α(mZ) + N − 1
α
b(Z) >
.
(4)
m+N −1
Of course, for m = 1 Demailly’s Conjecture reduces to that of Chudnovsky.
There has been recently considerable progress on the Chudnovsky Conjecture for
general points obtained independently by Dumnicki and Tutaj-Gasińska in [8] and
Fouli, Mantero and Xie in [12].
Our main result here is the following.
Main Theorem. Demailly's Conjecture (4) holds for s ≥ (m + 1)^N very general points in P^N.
Remark 1.6. For m = 1 we recover the aforementioned result [8] that the Chudnovsky Conjecture holds for s ≥ 2^N very general points in P^N.
Throughout the paper we work over the field C of complex numbers.
2 Around the Chudnovsky Conjecture
Esnault and Viehweg using methods of complex projective geometry have proved
the following useful result, see [10, Inégalité A].
Theorem 2.1 (Esnault – Viehweg). Let I be a radical ideal of a finite set of points in P^N with N ≥ 2. Let m ≤ k be two integers. Then
(α(I^{(m)}) + 1)/(m + N − 1) ≤ α(I^{(k)})/k,   (5)
in particular
(α(I^{(m)}) + 1)/(m + N − 1) ≤ α̂(I).
For N = 2 the inequality in (5) establishes Demailly's Conjecture in P^2.
Corollary 2.2. Conjecture 1.5 holds for arbitrary finite sets of points in P^2.
(5)
3
Around 2000 Ein, Lazarsfeld and Smith established a uniform containment result
for symbolic and ordinary powers of homogeneous ideals. For the purpose of this
paper we recall here a somewhat simplified version of their general result.
Definition 2.3 (Symbolic power). Let Z = {P1 , . . . , Ps } be a finite set of points
in PN . For an algebraic set W ⊂ PN , let I(W ) be its homogeneous defining ideal.
Then
I(Z) = I(P1 ) ∩ . . . ∩ I(Ps )
and for a positive integer m
I (m) (Z) = I(P1 )m ∩ . . . ∩ I(Ps )m
is the mth symbolic power of I(Z).
Theorem 2.4 (Ein – Lazarsfeld – Smith). Let Z be a finite set of points in P^N and let I = I(Z) be its defining ideal. Then the containment
I^{(m)} ⊂ I^r   (6)
holds for all m ≥ Nr.
Theorem 1.3 is an immediate consequence of Theorem 2.4. Indeed, let Z ⊂ C^N ⊂ P^N be a finite set of points with the defining ideal I = I(Z). Then
α(I^{(Nr)}) ≥ α(I^r) = rα(I)
follows from the containment in (6). Hence
α(I^{(Nr)})/(Nr) ≥ α(I)/N
for all r ≥ 1. Passing with r to infinity we obtain
α̂(I) ≥ α(I)/N.

3 A combinatorial inequality
In this section we prove the following auxiliary fact.
Lemma 3.1. For all N ≥ 3, m ≥ 1 and k ≥ m + 1 there is
\binom{k(m + N − 1) + 1}{N} ≥ \binom{m + N − 1}{N} (k + 1)^N.
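Before the formal proof, the inequality can be sanity-checked numerically for small parameters; the Python sketch below is illustrative only and plays no role in the argument.

```python
# Numerical check of Lemma 3.1 on a small range of N, m, k.
from math import comb

for N in range(3, 9):
    for m in range(1, 7):
        for k in range(m + 1, m + 8):
            lhs = comb(k * (m + N - 1) + 1, N)
            rhs = comb(m + N - 1, N) * (k + 1) ** N
            assert lhs >= rhs, (N, m, k)
print("Lemma 3.1 verified on the tested range")
```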
Proof. It is convenient to abbreviate q := m + N − 1. The claim in the Lemma is equivalent to the following inequality
(kq + 1) · (kq) · ... · (kq + 2 − N) ≥ q · (q − 1) · ... · m · (k + 1)^N.   (7)
We will group factors in (7) and show that
(kq + 1 − i)(kq + 2 − N + i) ≥ (q − i)(m + i)(k + 1)^2   (8)
holds for all i = 0, ..., ⌊(N − 1)/2⌋. To this end we define
u(N, m, k, i) := (kq + 1 − i)(kq + 2 − N + i) − (q − i)(m + i)(k + 1)^2   (9)
and show that this function is non-negative.
Reduction 1. In the first step, we will show that the difference function
dk(N, m, k, i) = u(N, m, k + 1, i) − u(N, m, k, i)
is non-negative. Taking this for granted, in order to show that the function in (9) is
non-negative, it suffices to check it for the least allowed value of k, i.e. for k = m+1.
In other words the claim in (9) reduces to the claim that the function
uk(N, m, i) := u(N, m, m + 1, i)
(10)
is non-negative for all N, m, i in the given range.
Turning to the proof of the Reduction 1 claim, since the difference function is
linear in k, it suffices to show
a) the leading coefficient of dk(N, m, k, i) treated as a polynomial in k is positive
and
b) the function is non-negative for k = m + 1.
The leading coefficient in a) can be written as
(i^2 − Ni + i + (2/3)N^2) + ((1/3)N^2 + mN − m − 2N + 1).
It is elementary to check that the terms in brackets are non-negative.
Evaluating dk(N, m, m + 1, i) we obtain the following expression
2Nm^2 − 4m^2 + 2mN^2 − 4mN − 2imN + 2i^2 m + 2im + 4m + 2N^2 − 2N + 5i − 5iN + 5i^2.   (11)
The term (2N − 4) · m^2 is positive. The remaining summands in (11) can be rearranged in the following way:
(2m + 2) · N^2 − (4m + 2 + 5i + 2im)N + (2im + 2i^2 m + 5i + 4m + 5i^2).   (12)
This is a quadratic function in N whose discriminant
∆ = 4 − 16m^2 − 12i^2 m^2 − 16m − 8im − 36i^2 m − 20i − 15i^2
is negative for all m and i in the allowed range. Thus the expression in (12) is positive. This concludes the proof of Reduction 1.
We study now the function uk(N, m, i) defined in (10). Our approach is similar.
We show in
Reduction 2. that the difference function
dN (N, m, i) = uk(N + 1, m, i) − uk(N, m, i)
is non-negative. This follows immediately from the following presentation of this
function:
dN(N, m, i) = m^3 + ((N − 1)/2 − i)m^2 + (2N − 4i − 2)m + ((3(N − 1)/2)m − 3i + 1).   (13)
Indeed, all terms in brackets in (13) are non-negative.
Hence, it is enough to check that the function in (9) is non-negative for N = 3
(and k = m + 1). But this is immediate since
uk(3, m, i) = (m + 3)(m + 1)(i − 1)^2.
This ends the proof of the Lemma.
4 A proof of the Main Theorem
In this section we prove the Main Theorem. First we recall from [8, Theorem 3] the
following crucial observation.
Theorem 4.1 (Lower bound on Waldschmidt constants). Let Z be a set of s very general points in P^N. Then
α̂(Z) ≥ ⌊ s^{1/N} ⌋.
Turning to the proof of the Main Theorem, let Z be a set of s ≥ (m + 1)^N very general points in P^N. Since the result holds in P^2 by Corollary 2.2, we may assume here N ≥ 3. There exists a unique integer k ≥ m + 1 such that
k^N ≤ s < (k + 1)^N.
By Theorem 4.1 we have α̂(Z) ≥ k.
We claim that there exists a form of degree k(m + N − 1) − N + 1 vanishing to order at least m at every point of Z. This follows from a dimension count. Indeed, we need to show that
\binom{k(m + N − 1) + 1}{N} > \binom{N + m − 1}{N} · s
holds. Since s ≤ (k + 1)^N − 1, it is enough to show that
\binom{k(m + N − 1) + 1}{N} ≥ \binom{N + m − 1}{N} · (k + 1)^N
holds. This is exactly the statement of Lemma 3.1.
It follows that
α(mZ) ≤ k(m + N − 1) − N + 1.
But then
(α(mZ) + N − 1)/(m + N − 1) ≤ k ≤ α̂(Z),
and we are done.
We conclude this note with examples showing that the inequality in Conjecture
1.5 cannot be improved in general. To this end we recall first the notion of star
configurations, see [13].
Definition 4.2 (Star configuration of points). We say that Z ⊂ P^N is a star configuration of degree d if Z consists of all intersection points of d ≥ N general hyperplanes in P^N. By intersection points we mean the points which belong to exactly N of the given d hyperplanes.
The assumption of generality in the Definition means that any N of the d given hyperplanes meet in a single point and there is no point belonging to N + 1 or more hyperplanes. In particular a star configuration of degree d consists of exactly \binom{d}{N} points.
Example 4.3. Let Z ⊂ P^N be a star configuration of degree d. Then it is easy to check that for any k ≥ 1
α((1 + kN)Z) = (k + 1)d − N + 1
and hence
(α((1 + kN)Z) + N − 1)/(1 + kN + N − 1) = α̂(Z) = d/N,
so that there is equality in (4) for infinitely many values of m = 1 + kN.
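The arithmetic behind this equality is elementary and can be checked mechanically; the short Python sketch below (illustrative only) verifies that, given α((1 + kN)Z) = (k + 1)d − N + 1, the bound (4) is attained with value d/N.

```python
# Verify (alpha(mZ) + N - 1)/(m + N - 1) == d/N for m = 1 + kN.
from fractions import Fraction

for N in range(2, 8):
    for d in range(N, N + 8):
        for k in range(1, 6):
            m = 1 + k * N
            alpha_mZ = (k + 1) * d - N + 1
            assert Fraction(alpha_mZ + N - 1, m + N - 1) == Fraction(d, N)
print("equality in (4) holds for the tested star configurations")
```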
The second example is in a sense more exotic.
Example 4.4. Let Z be the set of points in P^2 defined by the ideal
I = ⟨x(y^3 − z^3), y(z^3 − x^3), z(x^3 − y^3)⟩.
Then Z is the union of the points
P1 = (1 : 0 : 0),    P2 = (0 : 1 : 0),    P3 = (0 : 0 : 1),
P4 = (1 : 1 : 1),    P5 = (1 : ε : ε^2),  P6 = (1 : ε^2 : ε),
P7 = (ε : 1 : 1),    P8 = (1 : ε : 1),    P9 = (1 : 1 : ε),
P10 = (ε^2 : 1 : 1), P11 = (1 : ε^2 : 1), P12 = (1 : 1 : ε^2),
which together with the lines
L1 : x − y = 0,      L2 : y − z = 0,      L3 : z − x = 0,
L4 : x − εy = 0,     L5 : y − εz = 0,     L6 : z − εx = 0,
L7 : x − ε^2 y = 0,  L8 : y − ε^2 z = 0,  L9 : z − ε^2 x = 0
form a (12_3, 9_4) configuration, see [7].
The Waldschmidt constant α̂(Z) = 3 has been computed in passing in the proof of Theorem 2.1 in [5]. In fact the proof shows that
α(3kZ) = 9k   (14)
for all k ≥ 1. We claim now that
α((3k + 2)Z) = 9k + 8   (15)
for all k ≥ 0. For k = 0 this can be checked by computing I^{(2)} explicitly. Clearly (14) implies
α((3k + 2)Z) ≤ 9k + 8.
Indeed, any partial derivative of a polynomial computing α(3(k + 1)Z) has degree 9k + 8 and the right order of vanishing at Z.
Assume that there is a k ≥ 2 such that
α((3k + 2)Z) ≤ 9k + 7.
Then there is a divisor D of degree 9k + 7 vanishing to order at least 3k + 2 at every point P_i of Z. Intersecting D with any of the lines L_j for j = 1, ..., 9, we conclude by the Bezout Theorem that L_j is a component of D. Hence there exists a divisor D′ = D − Σ_{j=1}^{9} L_j of degree 9(k − 1) + 7 vanishing to order at least 3(k − 1) + 2 at every point of Z. Repeating this argument k times we get a contradiction with α(2Z) = 8.
Now, for m = 3k + 2 with k ≥ 1 we obtain the equality in (4).
2014/15/B/ST1/02197. Research of Malara was partially supported by National
Science Centre, Poland, grant 2016/21/N/ST1/01491. We thank Marcin Dumnicki
and Michal Kapustka for helpful conversations.
7
References
[1] Bocci, C., Franci, B.: Waldschmidt constants for StanleyReisner ideals of a class of simplicial
complexes, J. Algebra Appl. Vol. 15, No. 6 (2016) 1650137 (13 pages)
[2] Bocci, C., Cooper, S., Guardo, E., Harbourne, B., Janssen, M., Nagel, U., Seceleanu, A., Van
Tuyl, A., Vu, T.: The Waldschmidt constant for squarefree monomial ideals, J. Algebraic
Combin. 2016, DOI: 10.1007/s10801-016-0693-7
[3] Chudnovsky, G. V.: Singular points on complex hypersurfaces and multidimensional Schwarz
Lemma, Seminaire de Theorie des Nombres, Paris 1979–80, Seminaire Delange-Pisot-Poitou,
Progress in Math vol. 12, M-J Bertin, editor, Birkhauser, Boston-Basel-Stutgart 1981
[4] Demailly, J.-P.: Formules de Jensen en plusieurs variables et applications arithmétiques, Bull.
Soc. Math. France 110 (1982), 75–102
[5] Dumnicki, M., Harbourne, B., Nagel, U., Seceleanu, A. Szemberg, T., Tutaj-Gasińska, H.:
Resurgences for ideals of special point configurations in PN coming from hyperplane arrangements, J. Algebra 443 (2015), 383–394
[6] Dumnicki, M., Harbourne, B., Szemberg, T., Tutaj-Gasińska, H.: Linear subspaces, symbolic
powers and Nagata type conjectures. Adv. Math. 252 (2014), 471–491
[7] Dumnicki M., Szemberg T., Tutaj-Gasińska H.: Counterexamples to the I (3) ⊂ I 2 containment, J. Algebra 393 (2013), 24–29
[8] Dumnicki, M., Tutaj-Gasińska, H.: A containment result in Pn and the Chudnovsky conjecture, arXiv:1603.03708
[9] Ein, L., Lazarsfeld, R., Smith, K.: Uniform bounds and symbolic powers on smooth varieties,
Invent. Math. 144 (2001), 241–252
[10] Esnault, H., Viehweg, E.: Sur une minoration du degré d’hypersurfaces s’annulant en certains
points, Ann. Math. 263 (1983), 75–86
[11] Farnik, L Gwoździewicz, J., Hejmej, B., Lampa-Baczyńska, M., Malara, G., Szpond, J.: Initial
sequences and Waldschmidt constants of planar point configurations, arXiv:1607.01031
[12] Fouli, L., Mantero, P., Xie, Y.: Chudnovsky's Conjecture for very general points in P^N_k, arXiv:1604.02217
[13] Geramita, A.V., Harbourne, B., Migliore, J: Star configurations in Pn , J. Algebra 376 (2013),
279–299
[14] Harbourne, B.: The atlas of Waldschmidt constants,
http://www.math.unl.edu/∼bharbourne1/GammaFile.html
[15] Moreau, J.-C.: Lemmes de Schwarz en plusieurs variables et applications arithmétiques,
Séminaire P. Lelong - Henri Skoda(Analyse), 1978/79, 174–190, Lecture Notes Math. 822,
Springer-Verlag, 1980
[16] Waldschmidt, M.: Propriétés arithmétiques de fonctions de plusieurs variables II, Séminaire
P. Lelong (Analyse), 1975/76, 108–135, Lecture Notes Math. 578, Springer-Verlag, 1977
Grzegorz Malara, Department of Mathematics, Pedagogical University of Cracow, Podchora̧żych 2, PL-30-084 Kraków, Poland.
E-mail address: [email protected]
Tomasz Szemberg, Department of Mathematics, Pedagogical University of Cracow, Podchora̧żych 2, PL-30-084 Kraków, Poland.
E-mail address: [email protected]
Justyna Szpond, Department of Mathematics, Pedagogical University of Cracow, Podchora̧żych 2, PL-30-084 Kraków, Poland.
E-mail address: [email protected]
| 0 |
Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies∗
Hanbyul Joo
Tomas Simon
Yaser Sheikh
Carnegie Mellon University
arXiv:1801.01615v1 [cs.CV] 5 Jan 2018
{hanbyulj,tsimon,yaser}@cs.cmu.edu
Figure 1: Frankenstein (silver) and Adam (gold). This paper presents a 3D human model capable of concurrently tracking
the large-scale posture of the body along with the smaller details of a person's facial expressions and hand gestures.
Abstract
We present a unified deformation model for the markerless capture of multiple scales of human movement, including facial expressions, body motion, and hand gestures. An initial model is generated by locally stitching together models of the individual parts of the human body, which we refer to as the "Frankenstein" model. This model enables the full expression of part movements, including face and hands, by a single seamless model. Using a large-scale capture of people wearing everyday clothes, we optimize the Frankenstein model to create "Adam". Adam is a calibrated model that shares the same skeleton hierarchy as the initial model but can express hair and clothing geometry, making it directly usable for fitting people as they normally appear in everyday life. Finally, we demonstrate the use of these models for total motion tracking, simultaneously capturing the large-scale body movements and the subtle face and hand motion of a social group of people.
∗ Website: http://www.cs.cmu.edu/~hanbyulj/totalcapture
1. Introduction
Social communication is a key function of human motion [7]. We communicate tremendous amounts of information with the subtlest movements. Between a group of interacting individuals, gestures such as a gentle shrug of the shoulders, a quick turn of the head, or an uneasy shifting of weight from foot to foot, all transmit critical information about the attention, emotion, and intention to observers. Notably, these social signals are usually transmitted by the organized motion of the whole body: with facial expressions, hand gestures, and body posture. These rich signals layer upon goal-directed activity in constructing the behavior of humans, and are therefore crucial for the machine perception of human activity.
However, there are no existing systems that can track, without markers, the human body, face, and hands simultaneously. Current markerless motion capture systems focus at a particular scale or on a particular part. Each area has its own preferred capture configuration: (1) torso and limb motions are captured in a sufficiently large working volume where people can freely move [17, 21, 44, 19]; (2) facial motion is captured at close range, mostly frontal, and assuming little global head motion [5, 24, 6, 9, 51]; (3) finger motion is also captured at very close distances from hands, where the hand regions are dominant in the sensor measurements [36, 49, 42, 50]. These configurations make it difficult to analyze these gestures in the context of social communication.
In this paper, we present a novel approach to capture the motion of the principal body parts for multiple interacting
people (see Fig. 1). The fundamental difficulty of such capture is caused by the scale differences of each part. For
example, the torso and limbs are relatively large and necessitate coverage over a sufficiently large working volume,
while fingers and faces, due to their smaller feature size, require close distance capture with high resolution and frontal
imaging. With off-the-shelf cameras, the resolution for face
and hand parts will be limited in a room-scale, multi-person
capture setup.
To overcome this sensing challenge, we use two general approaches: (1) we leverage keypoint detection (e.g.,
faces [18], bodies [54, 14, 35], and hands [41]) in multiple views to obtain 3D keypoints, which is robust to multiple people and object interactions; (2) to compensate for
the limited sensor resolution, we present a novel generative body deformation model, which has the ability to express the motion of the each of the principal body parts.
In particular, we describe a procedure to build an initial
body model, named “Frankenstein”, by seamlessly consolidating available part template models [33, 13] into a single skeleton hierarchy. We optimize this initialization using a capture of 70 people, and learn a new deformation
model, named “Adam”, capable of additionally capturing
variations of hair and clothing, with a simplified parameterization. We present a method to capture the total body
motion of multiple people with the 3D deformable model.
Finally, we demonstrate the performance of our method on
various sequences of social behavior and person-object interactions, where the combination of face, limb, and finger
motion emerges naturally.
2. Related Work
Motion capture systems based on tracking retro-reflective markers [55] are the most widely used motion capture technology due to their high accuracy.
motion capture methods [23, 17, 21, 44] have been explored
over the past two decades to achieve the same goal without markers, but they tend to implicitly admit that their performance is inferior by treating the output of marker based
methods as a ground truth or an upper bound. However,
over the last few years, we have witnessed a great advance
in key point detections from images (e.g., faces [18], bodies [54, 14, 35], and hands [41]), which can provide reliable anatomical landmark measurements for markerless
motion capture methods [19, 28, 41], while the performance of marker based methods relatively remains the same
with their major disadvantages including: (1) a necessity
of sparsity in marker density for reliable tracking which
limits the spatial resolution of motion measurements, and
(2) a limitation in automatically handling occluded markers
which requires an expensive manual clean-up. Especially,
capturing high-fidelity hand motion is still challenging in
marker-based motion capture systems due to the severe self-
occlusions of hands [59], while occlusions are implicitly
handled by guessing the occluded parts with uncertainty
using the prior learnt from a large scale dataset [41]. Our
method shows that the markerless motion capture approach
potentially begins to outperform the marker-based counterpart by leveraging the learning based image measurements.
As an evidence we demonstrate the motion capture from
total body, which has not been demonstrated by other existing marker based methods. In this section, we review the
most relevant markerless motion capture approaches to our
method.
Markerless motion capture largely focuses on the motion of the torso and limbs. The standard pipeline is based
on a multiview camera setup and tracking with a 3D template model [32, 23, 15, 10, 29, 16, 52, 11, 44, 17, 20, 19].
In this approach, motion capture is performed by aligning
the 3D template model to the measurements, which distinguish the various approaches and may include color, texture, silhouettes, point clouds, and landmarks. A parallel
track of related work therefore focuses on capturing and
improving body models for tracking, for which a highly
controlled multiview capture system—specialized for single person capture—is used to build precise models. With
the introduction of commodity depth sensors, single-view
depth-based body motion capture became a popular direction [3, 40]. A recent collection of approaches aims to reconstruct 3D skeletons directly from monocular images, either by fitting 2D keypoint detections with a prior on human
pose [60, 8] or getting even closer to direct regression methods [61, 34, 48].
Facial scanning and performance capture has been
greatly advanced over the last decade. There exist multiview based methods showing excellent performance on
high-quality facial scanning [5, 24] and facial motion capture [6, 9, 51]. Recently, light-weighed systems based on
a single camera show a compelling performance by leveraging morphable 3D face model on 2D measurements[22,
18, 31, 47, 13, 12, 56]. Hand motion captures are mostly
lead by single depth sensor based methods [36, 46, 49, 30,
57, 45, 53, 43, 39, 42, 50, 58], with few exceptions based
on multi-view systems [4, 43, 38]. In this work, we take
the latter approach and use the method of [41] who introduced a hand keypoint detector for RGB images which can
be directly applicable in multiview systems to reconstruct
3D hand joints.
As a way to reduce the parameter space and overcome
the complexity of the problems, generative 3D template
models have been proposed in each field, for example the
methods of [2, 33, 37] in body motion capture, the method
of [13] for facial motion capture, and very recently, the
combined body+hands model of Romero et al. [38]. A generative model with expressive power for total body motion
has not been introduced.
Figure 2: Part models and a unified Frankenstein model. (a)
The body model [33]; (b) the face model [13]; and (c) a
hand rig, where red dots have corresponding 3D keypoints
reconstructed from detectors in (a-c). (d) Aligned face and
hand models (gray meshes) to the body model (the blue
wireframe mesh); and (e) the seamless Frankenstein model.
3. Frankenstein Model
The motivation for building the Frankenstein body
model is to leverage existing part models—SMPL [33] for
the body, FaceWarehouse [13] for the face, and an artist-defined hand rig—each of which captures shape and motion
details at an appropriate scale for the corresponding part.
This choice is not driven merely by the free availability
of the component models: note that due to the trade-off
between image resolution and field of view of today’s 3D
scanning systems, scans used to build detailed face models
will generally be captured using a different system than that
used for the rest of the body. For our model, we merge all
transform bones into a single skeletal hierarchy but keep the
native parameterization of each component part to express
identity and motion variations, as explained below. As the
final output, the Frankenstein model produces motion parameters capturing the total body motion of humans, and
generates a seamless mesh by blending the vertices of the
component meshes.
3.1. Stitching Part Models
Each of the component part models maps from a subset of the above parameters to a set of vertices, respectively V^B ∈ R^{N^B×3}, V^F ∈ R^{N^F×3}, V^LH ∈ R^{N^H×3}, and V^RH ∈ R^{N^H×3}, where the number of vertices of each mesh part is N^B = 6890, N^H = 2068, and N^F = 11510. The final mesh of the Frankenstein model, V^U ∈ R^{N^U×3}, is defined by linearly blending them with a matrix C ∈ R^{N^U×(N^B+N^F+2N^H)}:
V^U = C [ (V^B)^T  (V^F)^T  (V^LH)^T  (V^RH)^T ]^T,   (4)
where T denotes the transpose of a matrix. Note that VU
has fewer vertices than the sum of part models because
there are redundant parts in the body model (e.g., face and
hands of the body model). In particular, our final mesh
has N U =18540 vertices. Figure 2 shows the part models which are aligned by manually clicking corresponding
points between parts, and also shows the final mesh topology of Frankenstein model at the mean shape in the rest
pose. The blending matrix C is a very sparse matrix and
most rows have a single column set to one with zeros elsewhere, simply copying the vertex locations from the corresponding part models with minimal interpolation at the
seams.
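For readers who prefer code, the data flow of Eq. (4) can be sketched as below. This is a minimal illustration, not the authors' implementation: the vertex counts follow the paper, but the contents of C here are a placeholder identity-like copy rather than the real seam-aware blending weights.

```python
# Sketch of the vertex blending in Eq. (4): V^U = C [V^B; V^F; V^LH; V^RH].
import numpy as np
from scipy import sparse

N_B, N_F, N_H, N_U = 6890, 11510, 2068, 18540

V_B = np.zeros((N_B, 3))   # body vertices (placeholder values)
V_F = np.zeros((N_F, 3))   # face vertices
V_LH = np.zeros((N_H, 3))  # left-hand vertices
V_RH = np.zeros((N_H, 3))  # right-hand vertices

V_parts = np.vstack([V_B, V_F, V_LH, V_RH])   # stacked part vertices

# C is very sparse: most rows copy a single source vertex.  Here we simply
# copy the first N_U stacked vertices as a stand-in for the real weights.
rows = np.arange(N_U)
cols = np.arange(N_U)
vals = np.ones(N_U)
C = sparse.csr_matrix((vals, (rows, cols)), shape=(N_U, V_parts.shape[0]))

V_U = C @ V_parts
print(V_U.shape)  # (18540, 3)
```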
In the Frankenstein model, all parts are rigidly linked by
a single skeletal hierarchy. This unification is achieved by
substituting the hands and face branches of the SMPL body
skeleton with the corresponding skeletal hierarchies of the
detailed part models. All parameters of the Frankenstein
model are jointly optimized for motion tracking and identity
fitting. The parameterization of each of the part models is
detailed in the following sections.
The Frankenstein model $M^U$ is parameterized by motion parameters $\theta^U$, shape (or identity) parameters $\phi^U$, and a global translation parameter $t^U$,

$$V^U = M^U(\theta^U, \phi^U, t^U), \qquad (1)$$

where $V^U$ is a seamless mesh expressing the motion and shape of the target subject. The motion and shape parameters of the model are a union of the part models' parameters:

$$\theta^U = \{\theta^B, \theta^F, \theta^{LH}, \theta^{RH}\}, \qquad (2)$$
$$\phi^U = \{\phi^B, \phi^F, \phi^{LH}, \phi^{RH}\}, \qquad (3)$$

where the superscripts represent each part model: B for the body model, F for the face model, LH for the left hand model, and RH for the right hand model.

3.2. Body Model

For the body, we use the SMPL model [33] with minor modifications. In this section, we summarize the salient aspects of the model in our notation. The body model, $M^B$, is defined as follows,

$$V^B = M^B(\theta^B, \phi^B, t^B), \qquad (5)$$

with $V^B = \{v_i^B\}_{i=1}^{N^B}$. The model uses a template mesh of $N^B = 6890$ vertices, where we denote the $i$-th vertex as $v_i^B \in \mathbb{R}^3$. The vertices of this template mesh are first displaced by a set of blendshapes describing the identity or body shape. Given the vertices in the rest pose, the posed mesh vertices are obtained by linear blend skinning using transformation matrices $T_j^B \in SE(3)$ for each of $J$ joints,

$$v_i^B = I_{3\times 4} \cdot \sum_{j=1}^{J} w_{i,j}^B\, T_j^B \begin{bmatrix} v_i^{B0} + \sum_{k=1}^{K_b} b_i^k \phi_k^B \\ 1 \end{bmatrix}, \qquad (6)$$
where $b_i^k \in \mathbb{R}^3$ is the $i$-th vertex of the $k$-th blendshape, $\phi_k^B$ is the $k$-th shape coefficient in $\phi^B \in \mathbb{R}^{K_b}$, with $K_b = 10$ the number of identity body shape coefficients, and $v_i^{B0}$ is the $i$-th vertex of the mean shape. The transformation matrices $T_j^B$ encode the transform for each joint $j$ from the rest pose to the posed mesh in world coordinates, which is constructed by following the skeleton hierarchy from the root joint with pose parameter $\theta^B$ (see [33]). The $j$-th pose parameter $\theta_j^B$ is the angle-axis representation of the relative rotation of joint $j$ with respect to its parent joint. $w_{i,j}^B$ is the weight with which transform $T_j^B$ affects vertex $i$, with $\sum_{j=1}^{J} w_{i,j}^B = 1$, and $I_{3\times 4}$ is the $3 \times 4$ truncated identity matrix used to transform from homogeneous coordinates to a 3-dimensional vector. We use $J^B = 21$ with $\theta^B \in \mathbb{R}^{21 \times 3}$, ignoring the last joint of each hand of the original body model. For simplicity, we do not use the pose-dependent blendshapes.
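The following Python sketch illustrates the posing operation of Eq. (6): linear blend skinning applied to blendshape-displaced template vertices. It is a simplified stand-in (generic kinematics, no joint regressor and no pose blendshapes), not the SMPL reference implementation:

```python
import numpy as np

def rodrigues(axis_angle):
    """Convert an angle-axis vector to a 3x3 rotation matrix."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-8:
        return np.eye(3)
    k = axis_angle / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def lbs(v_template, blendshapes, phi, joints, parents, theta, weights):
    """Simplified sketch of Eq. (6).

    v_template: (N, 3) mean-shape vertices
    blendshapes: (K, N, 3) identity blendshapes, phi: (K,) coefficients
    joints: (J, 3) rest-pose joint locations, parents: (J,) parent indices (-1 = root)
    theta: (J, 3) per-joint angle-axis, weights: (N, J) skinning weights
    """
    # Identity displacement: v0 + sum_k b^k phi_k
    v_shaped = v_template + np.einsum('k,knc->nc', phi, blendshapes)
    J = joints.shape[0]
    T = np.zeros((J, 4, 4))
    for j in range(J):
        local = np.eye(4)
        local[:3, :3] = rodrigues(theta[j])
        local[:3, 3] = joints[j] - (joints[parents[j]] if parents[j] >= 0 else 0)
        T[j] = local if parents[j] < 0 else T[parents[j]] @ local
    # Express each joint transform relative to its rest-pose location.
    for j in range(J):
        rest = np.eye(4); rest[:3, 3] = joints[j]
        T[j] = T[j] @ np.linalg.inv(rest)
    # Blend transforms per vertex and apply to homogeneous rest-pose vertices
    # (the final [:, :3] plays the role of the truncated identity I_{3x4}).
    A = np.einsum('nj,jab->nab', weights, T)
    v_h = np.concatenate([v_shaped, np.ones((len(v_shaped), 1))], axis=1)
    return np.einsum('nab,nb->na', A, v_h)[:, :3]
```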
3.3. Face Model
As a face model, we build a generative PCA model from the FaceWarehouse dataset [13]. Specifically, the face part model, $M^F$, is defined as follows,

$$V^F = M^F(\theta^F, \phi^F, T^F), \qquad (7)$$

with $V^F = \{v_i^F\}_{i=1}^{N^F}$, where the $i$-th vertex is $v_i^F \in \mathbb{R}^3$, and $N^F = 11510$. The vertices are represented by the linear combination of the subspaces:

$$\hat{v}_i^F = v_i^{F0} + \sum_{k=1}^{K_f} f_i^k \phi_k^F + \sum_{s=1}^{K_e} e_i^s \theta_s^F, \qquad (8)$$

where, as before, $v_i^{F0}$ denotes the $i$-th vertex of the mean shape, and $\phi_k^F$ and $\theta_s^F$ are the $k$-th face shape identity (shape) and the $s$-th facial expression (pose) parameters respectively. Here, $f_i^k \in \mathbb{R}^3$ is the $i$-th vertex of the $k$-th identity blendshape ($K_f = 150$), and $e_i^s \in \mathbb{R}^3$ is the $i$-th vertex of the $s$-th expression blendshape ($K_e = 200$).
Finally, a transformation $T^F$ brings the face vertices into world coordinates. To ensure that the face vertices transform in accordance with the rest of the body, we manually align the mean face $v_i^{F0}$ with the body mean shape, as shown in Fig. 2. This way, we can apply the transformation of the body model's head joint $T^B_{j=F}(\theta^B)$ as a global transformation for the face model in Eq. 9. However, to keep the face in alignment with the body, an additional transform matrix $\Gamma^F \in SE(3)$ is required to compensate for displacements in the root location of the face joint due to body shape changes in Eq. 6. Finally, each face vertex position is given by:

$$v_i^F = I_{3\times 4} \cdot T^B_{j=F} \cdot \Gamma^F \begin{bmatrix} \hat{v}_i^F \\ 1 \end{bmatrix}, \qquad (9)$$

where the transform $\Gamma^F$, directly determined by the body shape parameters $\phi^B$, aligns the face model with the body model.

3.4. Hand Model

We use an artist rigged hand mesh. Our hand model has $J^H = 16$ joints and the mesh is deformed via linear blend skinning. The hand model has a fixed shape, but we introduce scaling parameters for each bone to allow for different finger sizes. The transform for each joint $j$ is parameterized by the Euler angle rotation with respect to its parent, $\theta_j \in \mathbb{R}^3$, and an additional anisotropic scaling factor along each axis, $\phi_j \in \mathbb{R}^3$. Specifically, the linear transform for joint $j$ in the bone's local reference frame becomes $\mathrm{eul}(\theta_j) \cdot \mathrm{diag}(\phi_j)$, where $\mathrm{eul}(\theta_j)$ converts from an Euler angle representation to a $3 \times 3$ rotation matrix and $\mathrm{diag}(\phi_j)$ is the $3 \times 3$ diagonal matrix with the $X, Y, Z$ scaling factors $\phi_j$ on the diagonal. The vertices of the hand in world coordinates are given by LBS with weights $w_{i,j}^H$:

$$v_i^H = I_{3\times 4} \cdot T^B_{j=H} \cdot \Gamma^H \cdot \sum_{j=1}^{J^H} w_{i,j}^H\, T_j^H \begin{bmatrix} v_i^{H0} \\ 1 \end{bmatrix}, \qquad (10)$$

where $T_j^H$ is each bone's composed transform (with all parents in the hierarchy), $T^B_{j=H}$ is the transformation of the corresponding hand joint in the body model, and $\Gamma^H$ is the transformation that aligns the hand model to the body model. As with the face, this transform depends on the shape parameters of the body model.
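A small sketch of this hand joint parameterization, composing eul(θ_j)·diag(φ_j) local transforms along a kinematic chain; the Euler angle order and the bone offsets are assumptions for illustration, not the rig's actual convention:

```python
import numpy as np

def eul(theta):
    """Euler angles (x, y, z, radians) to a 3x3 rotation matrix (XYZ order assumed)."""
    cx, cy, cz = np.cos(theta)
    sx, sy, sz = np.sin(theta)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def bone_local_transform(theta_j, phi_j, offset_j):
    """4x4 local transform of hand joint j: rotation times per-axis scaling,
    plus a bone offset to its parent (the offset is a placeholder for rig geometry)."""
    M = np.eye(4)
    M[:3, :3] = eul(theta_j) @ np.diag(phi_j)   # eul(theta_j) . diag(phi_j)
    M[:3, 3] = offset_j
    return M

def compose_chain(thetas, phis, offsets):
    """Compose local transforms along a kinematic chain (root first)."""
    T = [None] * len(thetas)
    for j in range(len(thetas)):
        local = bone_local_transform(thetas[j], phis[j], offsets[j])
        T[j] = local if j == 0 else T[j - 1] @ local
    return T   # T[j] plays the role of T^H_j (composed with all parents)
```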
4. Motion Capture with Frankenstein Model
We fit the Frankenstein model to data to capture the total
body motion, including the major limbs, the face, and fingers. Our motion capture method relies heavily on fitting
mesh correspondences to 3D keypoints, which are obtained
by triangulation of 2D keypoint detections across multiple
camera views. To capture shape information we also use
point clouds generated by multiview stereo reconstructions.
Model fitting is performed by an optimization framework
to minimize distances between corresponded model joints
and surface points and 3D keypoint detections, and iterative
closest point (ICP) to the 3D point cloud.
4.1. 3D Measurements
We incorporate two types of measurements in our framework as shown in Fig. 3: (1) corresponded 3D keypoints,
which map to known joints or surface points on the mesh
models (see Fig. 2), and (2) uncorresponded 3D points from
multiview stereo reconstruction, which we match using ICP.
3D Body, Face, and Hand Keypoints: We use the OpenPose detector [25] in each available view, which produces 2D keypoints on the body with the method of [14], and hand and face keypoints using the method of [41]. 3D body skeletons are obtained from the 2D detections using the method of [28], which uses known camera calibration parameters for reconstruction. The 3D hand keypoints are obtained by triangulating 2D hand pose detections, following the method of [41], and similarly for the facial keypoints. Note that subsets of 3D keypoints can be entirely missing if there aren't enough 2D detections for triangulation, which can happen in challenging scenes with inter-occlusions or motion blur.
3D Feet Keypoints: Important cues missing from the OpenPose detector are landmarks on the feet. For motion capture, these are essential to prevent footskate, as well as to accurately determine the orientation of the feet. We therefore train a keypoint detector for the tip of the big toe, the tip of the little toe, and the ball of the foot. We annotate these 3 keypoints per foot in each of around 5000 person instances of the COCO dataset, and use the neural network architecture presented by [54] with a bounding box around the feet determined by the 3D body detections¹.
3D Point Clouds: We use the commercial software Capturing Reality to obtain 3D point clouds from the multiview images, with associated point normals.

¹ More details provided in the supplementary material.

Figure 3: 3D measurements and Frankenstein fitting result.

4.2. Objective Function

We initially fit every frame in the sequence independently. For clarity, we drop the time index from the notation and describe the process for a single frame, which optimizes the following cost function:

$$E(\theta^U, \phi^U, t^U) = E_{\text{keypoints}} + E_{\text{icp}} + E_{\text{seam}} + E_{\text{prior}} \qquad (11)$$

Anatomical Keypoint Cost: the term $E_{\text{keypoints}}$ matches 3D keypoint detections which are in direct correspondence with our mesh models. This includes joints (or end effectors) in the body and hands, and also contains points corresponding to the surface of the mesh (e.g., facial keypoints and the tips of fingers and toes). Both of these types of correspondence are expressed as combinations of vertices via a regression matrix $J \in \mathbb{R}^{C \times N^U}$, where $C$ denotes the number of correspondences and $N^U$ is the number of vertices in the model. Let $D$ denote the set of available detections in a particular frame. The cost is then:

$$E_{\text{keypoints}} = \lambda_{\text{keypoints}} \sum_{i \in D} \| J_i V - y_i^T \|^2, \qquad (12)$$

where $J_i$ indexes a row in the correspondence regression matrix and represents an interpolated position using a small number of vertices, and $y_i \in \mathbb{R}^{3 \times 1}$ is the 3D detection. $\lambda_{\text{keypoints}}$ is a relative weight for this term.
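As an illustration of Eq. (12), the following sketch evaluates the keypoint term with a sparse regression matrix and a mask for the available detections D; the sizes and the identity-like regressor are hypothetical:

```python
import numpy as np
from scipy.sparse import csr_matrix

def keypoint_cost(J_reg, V, Y, valid, lam=1.0):
    """Anatomical keypoint cost of Eq. (12), as a sketch.

    J_reg: (C, N) sparse regression matrix (each row interpolates a few vertices)
    V:     (N, 3) current model vertices
    Y:     (C, 3) triangulated 3D detections
    valid: (C,) boolean mask of detections available in this frame (the set D)
    """
    pred = J_reg @ V                        # (C, 3) regressed model keypoints
    res = (pred - Y)[valid]                 # residuals for available detections only
    return lam * float(np.sum(res ** 2))

# Tiny usage example with hypothetical sizes.
N, C = 50, 4
J_reg = csr_matrix((np.full(C, 1.0), (np.arange(C), np.arange(C))), shape=(C, N))
V, Y = np.random.rand(N, 3), np.random.rand(C, 3)
print(keypoint_cost(J_reg, V, Y, valid=np.array([True, True, False, True])))
```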
ICP Cost: The 3D point cloud measurements are not a priori in correspondence with the model meshes. We therefore establish their correspondence to the mesh using Iterative Closest Point (ICP) during each solver iteration. We find the closest 3D point in the point cloud to each of the mesh vertices,

$$i^* = \arg\min_i \| x_i - v_j \|^2, \qquad (13)$$

where $x_{i^*}$ is the closest 3D point to vertex $j$, and $v_j$ is a vertex² in $V^U$ of the Frankenstein model. To ensure that this is a correct correspondence, we use thresholds for the distance and normals during the correspondence search. Finally, for each vertex $j$ we compute the point-to-plane residual, i.e., the distance along the normal direction,

$$E_{\text{icp}} = \lambda_{\text{icp}} \sum_{v_j \in V^U} n(x_{i^*})^T (x_{i^*} - v_j), \qquad (14)$$

where $n(\cdot) \in \mathbb{R}^3$ represents the point's normal, and $\lambda_{\text{icp}}$ is a relative weight for this term.
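A possible implementation sketch of the ICP correspondence search and point-to-plane residual of Eqs. (13)-(14), using a k-d tree and simple distance/normal gates; the residual is squared here so it can serve directly as a least-squares cost, and all thresholds are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_plane(V, mask, points, normals, lam=1.0,
                       max_dist=0.05, max_normal_angle=np.pi / 3, V_normals=None):
    """Point-to-plane ICP cost in the spirit of Eqs. (13)-(14).

    V: (N, 3) model vertices, mask: (N,) bool of vertices to consider
    points/normals: (M, 3) point cloud with per-point normals
    Distance and (optionally) normal-compatibility thresholds gate correspondences.
    """
    tree = cKDTree(points)
    dist, idx = tree.query(V[mask])                 # closest cloud point per vertex (Eq. 13)
    keep = dist < max_dist
    if V_normals is not None:                       # optional normal check
        cosang = np.sum(V_normals[mask] * normals[idx], axis=1)
        keep &= cosang > np.cos(max_normal_angle)
    x = points[idx[keep]]
    n = normals[idx[keep]]
    v = V[mask][keep]
    residual = np.sum(n * (x - v), axis=1)          # distance along the normal (Eq. 14)
    return lam * float(np.sum(residual ** 2))       # squared for use as a cost
```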
Seam Constraints: The part models composing the
Frankenstein model are rigidly linked by the skeletal hierarchy. However, the independent surface parameterizations of
each of the part models may introduce discontinuities at the
boundary between parts (e.g., a fat arm with a thin wrist).
To avoid this artifact, we encourage the vertices around the
seam parts to be close by penalizing differences between
the last two rings of vertices around the seam of each part,
and the corresponding closest point in the body model in
the rest pose expressed as barycentric coordinates (see the
supplementary materials for details).
Prior Cost: Depending on the number of measurements available in a particular frame, the set of parameters of $M^U$ may not be uniquely determined (e.g., the width of the fingers). More importantly, the 3D point clouds are noisy and cannot be well explained by the model due to hair and clothing, which are not captured by the SMPL and FaceWarehouse meshes and which can result in erroneous correspondences during ICP. Additionally, the joint locations of the models are not necessarily consistent with the annotation criteria used to train the 2D detectors. We are therefore forced to set priors over the model parameters to prevent the model from overfitting to these sources of noise, $E_{\text{prior}} = E^F_{\text{prior}} + E^B_{\text{prior}} + E^H_{\text{prior}}$. The prior for each part is defined by corresponding shape and pose priors, for which we use 0-mean standard normal priors for each parameter except for the scaling factors, which are encouraged to be close to 1. Details and relative weights can be found in the supplementary material.

² We do not consider some parts (around the hands and face), as the depth sensor resolution is too low to improve the estimate. These parts are defined as a mask.

Figure 4: Regressing detection target positions. (Left) The template model is aligned with the target object. (Middle) The torso joints of the template model (magenta) have a discrepancy from the joint definitions of the 3D keypoint detection (cyan). (Right) The newly regressed target locations (green) are more consistent with the 3D keypoint detections.
4.3. Optimization Procedure
The complete model is highly nonlinear, and due to the
limited degrees of freedom of the skeletal joints, the optimization can get stuck in bad local minima. Therefore, instead of optimizing the complete model initially, we fit the
model in phases, starting with a subset of measurements and
strong priors that are relaxed as optimization progresses.
Model fitting is performed on each frame independently.
To initialize the overall translation and rotation, we use four
keypoints on the torso (left and right shoulders and hips)
without using the ICP term, and with strong weight on the
priors. Once the torso parts are approximately aligned, we
use all available keypoints of all body parts, with small
weight for the priors. The results at this stage already provide reasonable motion capture but do not accurately capture the shape (i.e., silhouette) of the subject. Finally, the
entire optimization is performed including the ICP term
to find correspondences with the 3D point cloud. We run
the final optimization two times, finding new correspondences each time. For the optimization we use Levenberg-Marquardt with the Ceres Solver library [1].
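The staged fitting strategy can be sketched as follows; the paper's implementation uses Levenberg-Marquardt via Ceres in C++, so this scipy-based version is only a schematic stand-in, and the stage definitions and prior weighting are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_frame(residual_fn, x0, stages):
    """Phased model fitting in the spirit of Sec. 4.3 (a sketch, not the actual solver setup).

    residual_fn(x, weights) -> 1D residual vector, where `weights` selects/weights
    the active terms (torso keypoints only, all keypoints, keypoints + ICP, ...).
    `stages` is a list of (weights, prior_strength) pairs, applied in order.
    """
    x = np.asarray(x0, dtype=float)
    for weights, prior_strength in stages:
        def staged(x):
            data = residual_fn(x, weights)
            prior = prior_strength * x          # 0-mean Gaussian prior on parameters
            return np.concatenate([data, prior])
        x = least_squares(staged, x, method='lm').x   # Levenberg-Marquardt
    return x

# Hypothetical usage: three stages with progressively relaxed priors.
# x = fit_frame(my_residuals, x0,
#               stages=[('torso', 10.0), ('all_keypoints', 1.0), ('with_icp', 0.1)])
```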
5. Creating Adam
We derive a new model, which we call Adam, enabling
total body motion capture with a simpler parameterization
than the part-based Frankenstein model. In particular, this
new model has a single joint hierarchy and a common parameterization for all shape degrees of freedom, tying together the face, hand, and body shapes and avoiding the
need for seam constraints. To build the model, it is necessary to reconstruct the shape and the motion of all body parts (face, body, and hands) from diverse subjects, so that the model can learn the variations. To do this, we leverage our Frankenstein model and apply it to a dataset of 70 subjects, each of whom performs a short range of motion in a multiview camera system. We select 5 frames for each person in different poses and use the reconstruction results to build Adam. From this data, both the joint location information and the linear shape blendshapes are learnt. Because we derive the model from clothed people, the blendshapes also explain some of the variation due to clothing.
5.1. Regressing Detection Targets
There exists a discrepancy between the joint locations of
the body model (e.g., SMPL model in our case) and the location of the keypoint detections (i.e., a model joint vs. a
detection joint), as shown in Fig. 4. This affects mainly the
shoulder and hip joints, which are hard to precisely annotate. This difference has the effect of pulling the Frankenstein model towards a bad fit even while achieving a low
keypoint cost, $E_{\text{keypoints}}$. We alleviate this problem by computing the relative location of the 3D detections with respect to the fitted mesh vertices, leveraging the data reconstructed from the 70 subjects. This allows us to define new targets
for the keypoint detection cost that, on average, are a better match for the location of the 3D detections with respect
to the mesh model, as shown in Fig. 4. In particular, given
the fitting results of 70 identities, we approximate the target
3D keypoint locations as a function of the final fitted mesh
vertices following the procedure of [33] to find a sparse,
linear combination of vertices that approximates the position of the target 3D keypoint. Note that we do not change
the joint location used in the skeleton hierarchy during LBS
deformation, only the regression matrices Ji in Eq. (12).
5.2. Fitting Clothes and Hair
The SMPL model captures the shape variability of human bodies, but does not account for clothing or hair. Similarly, the FaceWarehouse template mesh was not designed to model hair. However, for the natural interactions that we are most interested in capturing, people wear everyday clothing and sport typical hairstyles. To learn a new set of linear blendshapes that better capture the rough geometry of clothed people and jointly model the face, we need to reconstruct accurate geometry for the source data. For this purpose, we reconstruct the geometry lying outside the original shape spaces in the 70 subjects obtained by Frankenstein model fitting.
For each vertex in the Frankenstein model, we write
ṽi = vi + n(vi )δi ,
(15)
where δi ∈ R is a scalar displacement meant to compensate
for the discrepancy between the Frankenstein model vertices and the 3D point cloud, along the normal direction at
each vertex. We pose the problem as a linear system,

$$\begin{bmatrix} \mathbf{N}^T \\ (\mathbf{W}\mathbf{L}\mathbf{N})^T \end{bmatrix} \boldsymbol{\Delta} = \begin{bmatrix} (\mathbf{P} - \mathbf{V}^U)^T \\ \mathbf{0} \end{bmatrix}, \qquad (16)$$

where $\Delta \in \mathbb{R}^{N^U}$ contains the stacked per-vertex displacements, $\mathbf{V}^U$ are the vertices in the Frankenstein model, $\mathbf{P} \in \mathbb{R}^{N^U \times 3}$ are the corresponding point cloud points, $\mathbf{N} \in \mathbb{R}^{N^U \times 3}$ contains the mesh vertex normals, and $\mathbf{L} \in \mathbb{R}^{N^U \times N^U}$ is
the Laplace-Beltrami operator to regularize the deformation. We also use a weight matrix W to avoid large deformations where the 3D point cloud has lower resolution
than the original mesh, like details in the face and hands.
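One plausible way to set up and solve such a regularized fit is sketched below: each displacement is driven toward the point-to-plane residual while a weighted Laplacian smooths the displacement field. This is a reading of Eq. (16) for illustration, not the authors' exact formulation:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

def fit_displacements(V, normals, P, L, w_smooth):
    """Per-vertex displacements along normals fitting a point cloud (sketch of Sec. 5.2).

    V, normals: (N, 3) model vertices and vertex normals
    P: (N, 3) corresponding point-cloud points (one per vertex, from ICP)
    L: (N, N) sparse Laplace-Beltrami (or graph Laplacian) operator
    w_smooth: (N,) per-vertex regularization weights (large where the cloud is unreliable)

    Solves a least-squares system stacking a data term (delta_i ~ n_i . (p_i - v_i))
    and a smoothness term (W L delta ~ 0).
    """
    N = V.shape[0]
    d = np.sum(normals * (P - V), axis=1)           # point-to-plane targets
    W = sparse.diags(w_smooth)
    A = sparse.vstack([sparse.identity(N), W @ L]).tocsr()
    b = np.concatenate([d, np.zeros(N)])
    delta = lsqr(A, b)[0]
    return V + normals * delta[:, None]             # Eq. (15): v~_i = v_i + n(v_i) delta_i
```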
Figure 5: (Top) Visualization of silhouettes from different methods together with the ground truth. The ground truth is drawn on the red channel and the rendered silhouette mask from each model is drawn on the green channel, so the correctly overlapping region appears yellow. (Bottom) Silhouette accuracy compared to the ground truth silhouette.
5.3. Building the Shape Deformation Space
After ∆ fitting, we warp each frame’s surface to the rest
pose, applying the inverse of the LBS transform. With the
fitted surfaces warped to this canonical pose, we do PCA
analysis to build a joint linear shape space that captures
shape variations across the entire body. As in Section 3.3,
we separate the expression basis for the face and retain the
expression basis from the FaceWarehouse model, as our
MVS point clouds are of too low resolution to fit facial expressions.
This model can now express shape variation for all parts, including the body, hands, and face. The model also includes deformation of hair and clothing; that is, this model can substitute the parameters $\phi^F$, $\phi^B$, and $\phi^H$:

$$M^T(\theta^T, \phi^T, t^T) = V^T, \qquad (17)$$

with $V^T = \{v_i^T\}_{i=1}^{N^T}$ and $N^T = 18540$. As in SMPL, the vertices of this template mesh are first displaced by a set of blendshapes in the rest pose, $\hat{v}_i^T = v_i^{T0} + \sum_{k=1}^{K_T} s_i^k \phi_k^T$, where $s_i^k \in \mathbb{R}^3$ is the $i$-th vertex of the $k$-th blendshape, $\phi_k^T$ is the $k$-th shape coefficient of $\phi^T \in \mathbb{R}^{K_T}$, $K_T = 40$ is the number of identity coefficients, $v^{T0}$ is the mean shape, and $v_i^{T0}$ is its $i$-th vertex. However, these blendshapes now capture variation across the face, hands, and body. These are then posed using LBS as in Eq. (6). We define the joints and weights for LBS following the part models, as explained further in the supplementary material.
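The PCA step can be sketched as follows, assuming the fitted surfaces have already been warped to the canonical rest pose; alignment, weighting, and the separate facial expression basis are omitted:

```python
import numpy as np

def build_shape_space(unposed_meshes, K_T=40):
    """Build a joint linear shape space from unposed (rest-pose) meshes, as in Sec. 5.3.

    unposed_meshes: (S, N, 3) fitted surfaces warped back to the canonical rest pose
    Returns the mean shape (N, 3) and up to K_T blendshapes (K_T, N, 3) from PCA.
    (K_T = 40 matches the number of identity coefficients quoted in the text;
    the actual pipeline details are not reproduced here.)
    """
    S, N, _ = unposed_meshes.shape
    X = unposed_meshes.reshape(S, N * 3)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    K = min(K_T, Vt.shape[0])
    blendshapes = Vt[:K].reshape(K, N, 3)
    return mean.reshape(N, 3), blendshapes

# A new shape is then v^T = mean + sum_k phi_k * blendshapes[k], posed with LBS as in Eq. (6).
```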
5.4. Tracking with Adam
The cost function to capture total body motion using the Adam model is similar to Eq. 11, without the seam term:

$$E(\theta^T, \phi^T, t^T) = E_{\text{keypoints}} + E_{\text{icp}} + E_{\text{prior}}. \qquad (18)$$
Table 1: Accuracy of silhouettes from different models

Model        | Mean    | Std.
SMPL [33]    | 84.79%  | 4.55
Franken      | 85.91%  | 4.57
Franken ICP  | 87.68%  | 4.53
Adam ICP     | 87.74%  | 4.18
However, Adam is much easier to use than Frankenstein,
because it only has a single type of shapes and pose parameters for all parts. Conceptually, it is based on the SMPL
model parameterization, but with additional joints for the
hands and facial expression blendshapes.
Optical Flow Propagation: While fitting each frame independently has benefits (it does not suffer from error accumulation and frames can be fit in parallel), it typically
produces jittery motion. To reduce this jitter, we use optical flow to propagate the initial, per-frame fit to neighboring frames to find a smoother solution. More concretely,
given the fitting results at the frame t, we propagate this
mesh to frames t−1 and t+1 using optical flow at each vertex, which is triangulated into 3D using the method of [27].
Therefore, each vertex has at most three candidate positions: the original mesh, and the forward and backward
propagated vertices (subject to a forward-backward consistency check). Given these propagated meshes, we reoptimize the model parameters by using all propagated mesh
vertices as additional keypoints to find a compromise mesh.
We run this process multiple times (3, in our case), to further reduce jitter and fill in frames with missing detections.
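A schematic version of how propagated candidates might be collected and gated before re-optimization is shown below; the consistency test and thresholds are simplifications of the forward-backward check described above, not the exact procedure:

```python
import numpy as np

def propagation_targets(v_t, v_from_prev=None, v_from_next=None, max_disagreement=0.02):
    """Collect candidate vertex targets for re-optimization (sketch of Sec. 5.4).

    v_t:          (N, 3) vertices from the independent fit at frame t
    v_from_prev:  (N, 3) or None, vertices propagated forward from frame t-1
    v_from_next:  (N, 3) or None, vertices propagated backward from frame t+1
    A propagated candidate is kept per vertex only when it stays close to the
    per-frame fit (a crude stand-in for the forward-backward consistency check).
    """
    targets = [v_t]
    for cand in (v_from_prev, v_from_next):
        if cand is None:
            continue
        ok = np.linalg.norm(cand - v_t, axis=1) < max_disagreement
        targets.append(np.where(ok[:, None], cand, v_t))
    return targets   # each entry is used as an extra (N, 3) set of keypoint-like targets
```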
6. Results
We perform total motion capture using our two models,
Frankenstein and Adam, on various challenging sequences.
Figure 6: Total body reconstruction results on various human body motions. For each example scene, the fitting results from
three different models are shown by different colors (pink for SMPL [33], silver for Frankenstein, and gold for Adam).
For experiments, we use the dataset captured in the CMU
Panoptic Studio [26]. We use 140 VGA cameras to reconstruct 3D body keypoints, 480 VGA cameras for feet, and
31 HD cameras for face and hand keypoints, and 3D point clouds. We compare the fits produced by our models with
the body-only SMPL model [33].
6.1. Quantitative Evaluation
We evaluate how well each model can match a moving
person by measuring overlap with the ground truth silhouette across 5 different viewpoints for a 10 second range of
motion sequence. To obtain the ground truth silhouette,
we run a background subtraction algorithm using a Gaussian model for the background of each pixel with a postprocessing to remove noise by morphological transforms.
As an evaluation metric, we compute the percentage of the overlap region relative to the union of the GT silhouettes and the rendered foreground masks after fitting each model. Here, we compare the fitting results of 3 different models: SMPL, our Frankenstein, and our Adam models. An example result is shown in Figure 5, and the quantitative results are reported in Fig. 5 and Table 1. We first compare accuracy between the SMPL and Frankenstein models using only 3D keypoints as measurement cues. The major source of improvement of Frankenstein over SMPL is in the articulated hand model (by construction, the body is almost identical), as seen in Fig. 5 (a). Including the ICP term as an additional cue provides better accuracy. Finally, comparing our two models, they show almost similar performance. Ideally we would expect Adam to outperform Frankenstein because it has more expressive power for hair and clothing, and it does show better performance for a certain body shape (frames 50-75 in Fig. 5). However, Adam sometimes produces artifacts that lower accuracy; it tends to generate thinner legs, mainly due to poor 3D point cloud reconstructions in the source data on which Adam is trained. However, Adam is simpler to use for total body motion capture and has the potential to improve once a larger-scale dataset with a more optimized capture setup is available.
6.2. Qualitative Results
We run our method on sequences where face and hand
motions are naturally emerging with body motions. The sequences include short range of motions for 70 people used
to build Adam, social communications of multiple people,
a furniture building sequence with dexterous hand motions,
musical performances such as cello and guitars, and commonly observable daily motions such as keyboard typing.
Most of these sequences are rarely demonstrated in previous markerless motion capture work, since capturing subtle details is key to achieving this goal. The example results
are shown in Figure 6. Here, we also qualitatively compare
our models (in silver color for Frankenstein, and gold for
Adam) with the SMPL model (in pink) [33]. It should be
noted that the total body motion capture results based on
our models produce much better realism for the scene by
capturing the subtle details from hands and faces. Our results are best shown in the accompanying videos.
7. Discussion
We present the first markerless method to capture total
body motion including facial expression, coarse body motion from torso and limbs, and hand gestures at a distance.
To achieve this, we present two types of models which can
express motion in each of the parts. Our reconstruction results show compelling and realistic results, even when using
only sparse 3D keypoint detections to drive the models.
References
[1] S. Agarwal, K. Mierle, and Others. Ceres solver. http://ceres-solver.org.
[2] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers,
and J. Davis. Scape: shape completion and animation of
people. In ToG, 2005.
[3] A. Baak, M. Müller, G. Bharaj, H.-P. Seidel, and C. Theobalt.
A data-driven approach for real-time full body pose reconstruction from a depth camera. In Consumer Depth Cameras
for Computer Vision. Springer, 2013.
[4] L. Ballan, A. Taneja, J. Gall, L. Van Gool, and M. Pollefeys. Motion capture of hands in action using discriminative
salient points. In ECCV, 2012.
[5] T. Beeler, B. Bickel, P. Beardsley, B. Sumner, and M. Gross.
High-quality single-shot capture of facial geometry. In TOG,
2010.
[6] T. Beeler, F. Hahn, D. Bradley, B. Bickel, P. Beardsley,
C. Gotsman, R. Sumner, and M. Gross. High-quality passive
facial performance capture using anchor frames. In TOG,
2011.
[7] R. Birdwhistell. Kinesics and context: Essays on body motion communication. In University of Pennsylvania Press,
Philadelphia., 1970.
[8] F. Bogo, A. Kanazawa, C. Lassner, P. V. Gehler, J. Romero,
and M. J. Black. Keep it SMPL: automatic estimation of 3d
human pose and shape from a single image. In CoRR, 2016.
[9] D. Bradley, W. Heidrich, T. Popa, and A. Sheffer. High resolution passive facial performance capture. In TOG, 2010.
[10] C. Bregler, J. Malik, and K. Pullen. Twist based acquisition and tracking of animal and human kinematics. In IJCV,
2004.
[11] T. Brox, B. Rosenhahn, J. Gall, and D. Cremers. Combined
region and motion-based 3D tracking of rigid and articulated
objects. In TPAMI, 2010.
[12] C. Cao, D. Bradley, K. Zhou, and T. Beeler. Real-time highfidelity facial performance capture. In TOG, 2015.
[13] C. Cao, Y. Weng, S. Zhou, Y. Tong, and K. Zhou. Facewarehouse: A 3d facial expression database for visual computing.
In TVCG, 2014.
[14] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multiperson 2d pose estimation using part affinity fields. In CVPR,
2017.
[15] K. M. Cheung, S. Baker, and T. Kanade. Shape-fromsilhouette across time part i: Theory and algorithms. In IJCV,
2005.
[16] S. Corazza, L. Mündermann, E. Gambaretto, G. Ferrigno,
and T. P. Andriacchi. Markerless Motion Capture through Visual Hull, Articulated ICP and Subject Specific Model Generation. In IJCV, 2010.
[17] E. de Aguiar, C. Stoll, C. Theobalt, N. Ahmed, H.-P. Seidel,
and S. Thrun. Performance capture from sparse multi-view
video. In SIGGRAPH, 2008.
[18] F. De la Torre, W.-S. Chu, X. Xiong, F. Vicente, X. Ding, and
J. F. Cohn. Intraface. In FG, 2015.
[19] A. Elhayek, E. Aguiar, A. Jain, J. Tompson, L. Pishchulin,
M. Andriluka, C. Bregler, B. Schiele, and C. Theobalt. Efficient convnet-based marker-less motion capture in general
scenes with a low number of cameras. In CVPR, 2015.
[20] Y. Furukawa and J. Ponce. Dense 3d motion capture from
synchronized video streams. In CVPR, 2008.
[21] J. Gall, C. Stoll, E. De Aguiar, C. Theobalt, B. Rosenhahn,
and H.-P. Seidel. Motion capture using joint skeleton tracking and surface estimation. In CVPR. IEEE, 2009.
[22] P. Garrido, L. Valgaerts, C. Wu, and C. Theobalt. Reconstructing detailed dynamic face geometry from monocular
video. In TOG, 2013.
[23] D. Gavrila and L. Davis. Tracking of humans in action: A
3-D model-based approach. In ARPA Image Understanding
Workshop, 1996.
[24] A. Ghosh, G. Fyffe, B. Tunwattanapong, J. Busch, X. Yu,
and P. Debevec. Multiview face capture using polarized
spherical gradient illumination. In TOG, 2011.
[25] G. Hidalgo, Z. Cao, T. Simon, S.-E. Wei, H. Joo,
and Y. Sheikh. Openpose. https://github.com/CMU-Perceptual-Computing-Lab/openpose.
[26] H. Joo, H. Liu, L. Tan, L. Gui, B. Nabbe, I. Matthews,
T. Kanade, S. Nobuhara, and Y. Sheikh. Panoptic studio:
A massively multiview system for social motion capture. In
ICCV, 2015.
[27] H. Joo, H. S. Park, and Y. Sheikh. Map visibility estimation
for large-scale dynamic 3d reconstruction. In CVPR, 2014.
[28] H. Joo, T. Simon, X. Li, H. Liu, L. Tan, L. Gui, S. Banerjee,
T. Godisart, B. Nabbe, I. Matthews, et al. Panoptic studio:
A massively multiview system for social interaction capture.
In TPAMI, 2017.
[29] R. Kehl and L. V. Gool. Markerless tracking of complex
human motions from multiple views. In CVIU, 2006.
[30] C. Keskin, F. Kıraç, Y. E. Kara, and L. Akarun. Hand pose
estimation and hand shape classification using multi-layered
randomized decision forests. In ECCV, 2012.
[31] H. Li, J. Yu, Y. Ye, and C. Bregler. Realtime facial animation
with on-the-fly correctives. In TOG, 2013.
[32] Y. Liu, J. Gall, C. Stoll, Q. Dai, H.-P. Seidel, and C. Theobalt.
Markerless motion capture of multiple characters using multiview image segmentation. In TPAMI, 2013.
[33] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J.
Black. Smpl: A skinned multi-person linear model. In TOG,
2015.
[34] D. Mehta, S. Sridhar, O. Sotnychenko, H. Rhodin,
M. Shafiei, H. Seidel, W. Xu, D. Casas, and C. Theobalt.
Vnect: Real-time 3d human pose estimation with a single
RGB camera. In TOG, 2017.
[35] A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. In ECCV, 2016.
[36] I. Oikonomidis, N. Kyriazis, and A. A. Argyros. Tracking
the articulated motion of two strongly interacting hands. In
CVPR, 2012.
[37] G. Pons-Moll, J. Romero, N. Mahmood, and M. J. Black.
Dyna: A model of dynamic human shape in motion. In TOG,
2015.
[38] J. Romero, D. Tzionas, and M. J. Black. Embodied hands:
Modeling and capturing hands and bodies together. In TOG,
2017.
[39] T. Sharp, C. Keskin, D. Robertson, J. Taylor, J. Shotton,
D. Kim, C. Rhemann, I. Leichter, A. Vinnikov, Y. Wei, et al.
Accurate, robust, and flexible real-time hand tracking. In
CHI, 2015.
[40] J. Shotton, A. Fitzgibbon, M. Cook, and T. Sharp. Real-time
human pose recognition in parts from single depth images.
In CVPR, 2011.
[41] T. Simon, H. Joo, I. Matthews, and Y. Sheikh. Hand keypoint
detection in single images using multiview bootstrapping. In
CVPR, 2017.
[42] S. Sridhar, F. Mueller, A. Oulasvirta, and C. Theobalt. Fast
and robust hand tracking using detection-guided optimization. In CVPR, 2015.
[43] S. Sridhar, A. Oulasvirta, and C. Theobalt. Interactive markerless articulated hand motion tracking using RGB and depth
data. In ICCV, 2013.
[44] C. Stoll, N. Hasler, J. Gall, H.-P. Seidel, and C. Theobalt.
Fast articulated motion tracking using a sums of gaussians
body model. In ICCV, 2011.
[45] X. Sun, Y. Wei, S. Liang, X. Tang, and J. Sun. Cascaded
hand pose regression. In CVPR, 2015.
[46] D. Tang, H. Jin Chang, A. Tejani, and T.-K. Kim. Latent regression forest: Structured estimation of 3D articulated hand
posture. In CVPR, 2014.
[47] J. Thies, M. Zollhofer, M. Stamminger, C. Theobalt, and
M. Nießner. Face2face: Real-time face capture and reenactment of rgb videos. In CVPR, 2016.
[48] D. Tome, C. Russell, and L. Agapito. Lifting from the deep:
Convolutional 3d pose estimation from a single image. In
CVPR, 2017.
[49] J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint training of a convolutional network and a graphical model for
human pose estimation. In NIPS, 2014.
[50] D. Tzionas, L. Ballan, A. Srikantha, P. Aponte, M. Pollefeys,
and J. Gall. Capturing hands in action using discriminative
salient points and physics simulation. In IJCV, 2016.
[51] L. Valgaerts, C. Wu, A. Bruhn, H.-P. Seidel, and C. Theobalt.
Lightweight binocular facial performance capture under uncontrolled lighting. In TOG, 2012.
[52] D. Vlasic, I. Baran, W. Matusik, and J. Popović. Articulated
mesh animation from multi-view silhouettes. In TOG, 2008.
[53] C. Wan, A. Yao, and L. Van Gool. Direction matters: hand
pose estimation from local surface normals. In arXiv preprint
arXiv:1604.02657, 2016.
[54] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In CVPR, 2016.
[55] H. Woltring. New possibilities for human motion studies by
real-time light spot position measurement. In Biotelemetry,
1973.
[56] C. Wu, D. Bradley, M. Gross, and T. Beeler. An anatomically-constrained local deformation model for monocular face capture. In TOG, 2016.
[57] C. Xu and L. Cheng. Efficient hand pose estimation from a
single depth image. In ICCV, 2013.
[58] Q. Ye, S. Yuan, and T.-K. Kim. Spatial attention deep net
with partial pso for hierarchical hybrid hand pose estimation.
In ECCV, 2016.
[59] W. Zhao, J. Chai, and Y.-Q. Xu. Combining marker-based
mocap and rgb-d camera for acquiring high-fidelity hand motion data. In ACM SIGGRAPH/eurographics symposium on
computer animation, 2012.
[60] X. Zhou, S. Leonardos, X. Hu, and K. Daniilidis. 3d shape
estimation from 2d landmarks: A convex relaxation approach. In CVPR, 2015.
[61] X. Zhou, X. Sun, W. Zhang, S. Liang, and Y. Wei. Deep
kinematic pose regression. In ECCV Workshop on Geometry
Meets Deep Learning, 2016.
Full groups of bounded automaton groups
arXiv:1512.02133v2 [math.GR] 9 Feb 2016
Nicolás Matte Bon
February 2016
Abstract
We show that every bounded automaton group can be embedded in a finitely generated,
simple amenable group. The proof is based on the study of the topological full groups associated
to the Schreier dynamical system of the mother groups. We also show that if G is a minimal
étale groupoid with unit space the Cantor set, the group [[G]]t generated by all torsion elements
in the topological full group has simple commutator subgroup.
1 Introduction
Groups generated by finite automata are a classical source of groups acting on rooted trees.
A well-studied family of automaton groups consists of the groups generated by bounded automata. Examples of such groups are the Grigorchuk group of intermediate growth [Gri84], Gupta-Sidki
groups [GS83], the Basilica group [GŻ02], iterated monodromy groups of post-critically finite
polynomials [Nek05]. One feature of automata groups is that they provide many examples
of amenable groups [Gri84, BV05, BKN10, AAV13, JNdlS13], that are often non-elementary
amenable [Gri84, GŻ02, Jus15]. In particular, every bounded automaton group is amenable by
a theorem of Bartholdi, Kaimanovich and Nekrashevych [BKN10].
Another notion that has attracted some attention in relation to amenability is the topological full group of a group of homeomorphisms. Let G y X be a countable group acting by
homeomorphisms on the Cantor set. The topological full group of G y X is the group [[G]] of
all homeomorphisms of X that are locally given by the action of elements of G. More generally
a topological full group can be defined for every étale groupoid G [Mat12] (these definitions are
recalled in Section 2). This recovers the previous definition if G is the groupoid of germs of the
action G y X. Topological full groups were first defined and studied by Giordano, Putnam and
Skau in [GPS99] in the case of minimal Z-actions on the Cantor set. The commutator subgroups
of topological full groups of minimal Z-subshifts provided the first examples of finitely generated infinite simple groups that are amenable (their simplicity and finite generation were established by Matui [Mat06] and their amenability by Juschenko and Monod [JM13]). Amenability of some
topological full groups that are not given by Z-actions was established in [JNdlS13, JMBMdlS15].
Recently new examples were given by Nekrashevych in [Nek16], who obtained the first examples
of infinite, finitely generated, simple periodic groups that are amenable.
Many connections between the theory of automata groups and topological full groups exist.
This was first noticed by Nekrashevych in [Nek06] and became more apparent recently, see
[JNdlS13, MB15]. One motivation for this paper is to further explore this connection.
Let G be a group acting level-transitively on a regular rooted tree Td . It is well-known
that the topological full group of the action G y ∂Td on the boundary of the rooted tree does
not provide any substantially new example of finitely generated groups (more precisely, all its
finitely generated subgroups embed in a finite extension of a direct power of G).
We consider instead the topological full group of the Schreier dynamical system of G in the
sense of Grigorchuk. In the terminology of Glasner and Weiss [GW15], this can be defined as
the uniformly recurrent subgroup (URS) arising from the stability system of the action on the
boundary of the rooted tree ∂Td . Namely, consider the stabiliser map
Stab : ∂Td → Sub(G)
where Sub(G) is the space of subgroups of G, endowed with the Chabauty topology. This map
is not, in general, continuous. Let Y ⊂ ∂Td be the set of continuity points of Stab (which is
always a Gδ -dense subset of ∂Td), and let X ⊂ Sub (G) be the closure of the image of Y . Then
G acts continuously on X by conjugation. The action G y X is called the Schreier dynamical
system of G.
We define a full group [[G]] associated to G as the topological full group of the Schreier
dynamical system G y X.
As a consequence of the results from Sections 4 and 5, we have:
Theorem 1.1. Every group generated by a bounded activity automaton can be embedded in a
finitely generated, simple amenable group.
The proof is based on a detailed study of [[G]] when G is one of the bounded activity mother
groups, a family of bounded automaton groups introduced in [BKN10, AAV13] that contain all
other bounded automaton groups as subgroups. We show that [[G]] is amenable and admits a
finitely generated, infinite simple subgroup that contains G. The preliminary embedding in the
mother group cannot be avoided to obtain a finitely generated group, as finite generation does
not hold for some different choices of G (see Lemma 3.10).
We also study simplicity of subgroups of topological full groups for more general group
actions and étale groupoids. Matui proves in [Mat06, Mat12] that, for some classes of group
actions and groupoids, the topological full group has simple commutator subgroup (in particular
this holds for any minimal Zd -action on the Cantor set, and more generally for every almost
finite and purely infinite minimal étale groupoid, see [Mat12]). It is not known whether this
holds true for every minimal action of a countable group on the Cantor set (more generally for
every minimal étale groupoid with Cantor set unit space).
Given an étale groupoid G with Cantor set unit space (e.g. the groupoid of germs of a group
action on the Cantor set G y X), we denote by [[G]]t the subgroup of the topological full group
generated by torsion elements. We show the following result:
Theorem 1.2 (Theorem 2.7). Let G be a minimal étale groupoid with unit space the Cantor
set. Then [[G]]′t is simple.
Here [[G]]′t denotes the derived subgroup of [[G]]t . It is not known whether there exists G as
in the statement, for which the inclusion [[G]]′t ≤ [[G]]′ is strict.
A very similar result has been recently shown by Nekrashevych in [Nek15], and appeared
while the writing of this paper was being completed1 . He defines a subgroup A(G) of [[G]]
analogous to the alternating group, and shows that A(G) is simple if G is minimal, and that
A(G) is finitely generated if G is expansive. We refer the reader to [Nek15] for the definition of
A(G). Since it is apparent from the definitions that A(G) E [[G]]′t , it follows from Theorem 1.2
that A(G) = [[G]]′t if G is minimal.
This paper is structured as follows. In Section 2 we recall preliminaries on étale groupoids
and their topological full groups, and prove Theorem 1.2. In Section 3 we recall preliminaries
on groups acting on rooted trees and their Schreier dynamical systems. In Section 4 we study
the Schreier dynamical system of the alternating mother groups M y X. We show that this
action can be efficiently encoded by a Bratteli diagram representation. Combined with a result
from [JNdlS13] this allows to show that [[M ]] is amenable. Finally in Section 5 we show that
[[M ]]′t is finitely generated.
1
In a preliminary version of this paper, Theorem 1.2 was proven under an additional assumption on G, that was
removed in the present version.
Acknowledgements I am grateful to Anna Erschler for many discussions, and to G. Amir
for a very useful conversation. I thank R. Grigorchuk, H. Matui and V. Nekrashevych for useful
comments on a previous version. Part of this work was completed at the Bernoulli Center
during the trimester “Analytic and Geometric Aspects of Probability on Graphs”, the author
acknowledges the CIB and the organizers of the program.
2 Étale groupoids and topological full groups
2.1 Preliminary notions
A groupoid G is a small category (more precisely, its set of morphisms) in which every morphism
is an isomorphism. Recall that a category is said to be small if the collection of its objects and
morphisms are sets. The set of objects of a groupoid is called its unit space.
Throughout the section let G be a groupoid and X be its unit space. For every γ ∈ G the
initial and final object of γ are denoted s(γ) and r(γ). The maps s, r : G → X are called the
source and the range maps. Thus, the product of γ, δ ∈ G is defined if and only if r(δ) = s(γ).
In this case we have s(γδ) = s(δ) and r(γδ) = r(γ). We will systematically identify X with a
subset of G via the identification x ↦ Idx (the identity isomorphism of the object x).
An étale groupoid is a groupoid G endowed with a second countable locally compact groupoid
topology so that the range and source maps r, s : G → X are open local homeomorphisms. Note
that we do not require G to be Hausdorff.
Example 2.1. Let G be a countable group acting on a topological space X by homeomorphisms.
The action groupoid is the groupoid G = G × X. The product of (g, x) and (h, y) is defined if
and only if hy = x and in this case (g, x)(h, y) = (gh, y). The unit space of G naturally identifies
with X. The range and source maps are given by s(g, x) = x and r(g, x) = gx. The topology
on G is the product topology on G × X, where G is endowed with the discrete topology. This
topology makes G a Hausdorff étale groupoid.
Example 2.2. Let again G be a countable group acting on a topological space X by homeomorphisms. Given g ∈ G and x ∈ X the germ of g at x is denoted germ(g, x). Germs of elements of
G form a groupoid G, the groupoid of germs of the action. The groupoid of germs is naturally
endowed with a topology where a basis for open sets is given by sets of the form
U (g, V ) = {germ(g, x) : x ∈ V },
where g ∈ G is fixed and V ⊂ X is an open subset. This topology makes G an étale groupoid,
which is non-Hausdorff in many interesting cases.
Example 2.3. More generally let F be a pseudogroup of partial homeomorphisms of a topological
space X. A partial homeomorphism of X is a homeomorphisms between open subsets of X
τ : U → V,
U, V ⊂ X open
where U and V are called the domain and the range of τ . A pseudogroup F is a collection
of partial homeomorphisms which is closed under taking inverses, taking restriction to an open
subset of U , taking composition on the locus where it is defined, and verifying the following:
whenever τ is a partial homeomorphism of X as above and the domain U admits a covering
by open subsets U = ∪i∈I Ui so that τ |Ui ∈ F for every i ∈ I then τ ∈ F . One can naturally
associate to F a groupoid of germs G, and endow it with a topology in the same way as in
Example 2.2, which makes it an étale groupoid.
Two étale groupoids are said to be isomorphic if they are isomorphic as topological groupoids.
An étale groupoid is said to be minimal if it acts minimally on its unit space (i.e. every
orbit is dense in X).
Let G be an étale groupoid. A bisection is an open subset U ⊂ G so that the source and range
maps s : U → s(U) and r : U → r(U) are homeomorphism onto their image. To any bisection U
3
one can associate a partial homeomorphism of X (i.e. a homeomorphism between open subsets
of X) given by
τU := r ◦ s−1 : s(U) → r(U).
A bisection U is said to be full if s(U) = r(U) = X.
The topological full group of an étale groupoid G is the group
[[G]] = {τU : U ⊂ G full bisection } ≤ Homeo(X).
Example 2.4. Let G be either an action groupoid as in Example 2.1, or a groupoid of germs as
in Example 2.2. Then [[G]] coincides with the group of homeomorphisms h of X the following
property: for every x ∈ X there exists a neighbourhood V of x and an element g ∈ G so that
h|V = g|V .
Example 2.5. Let G be the groupoid of germs associated to a pseudogroup F as in Example
2.3. Then [[G]] coincides with the set of all elements of F whose domain and range are equal
to X.
For a groupoid G with unit space X the set Gx = {γ ∈ G : s(γ) = r(γ) = x} forms a group,
the isotropy group at the point x ∈ X. A groupoid is said to be principal (or an equivalence
relation) if Gx = {Idx } for every x ∈ X.
From now on, all the groupoids that we consider have a unit space X homeomorphic to the
Cantor set.
An elementary subgroupoid K ≤ G is a principal compact open subgroupoid with unit space
X.
An étale groupoid is said to be an AF-groupoid if it is a countable union of elementary
subgroupoids. In particular, AF-groupoids are principal.
We now recall basic concepts concerning Bratteli diagrams and their relation to AF-groupoids.
A Bratteli diagram is a labelled directed graph B, possibly with multiple edges, whose vertex
set V is a disjoint union of finite levels Vn , n ≥ 0. The level V0 contains only one vertex, called
the top vertex. Every edge e connects a vertex in a level Vn to a vertex in the next level Vn+1
for some n ≥ 0. The starting vertex and the ending vertex of an edge e are called the origin
and target of e and are denoted o(e) and t(e) respectively. We denote En the set of edges going
from vertices in Vn to vertices in Vn+1 .
The path space of a Bratteli diagram B is the set of infinite directed paths in B, i.e.
XB = {e0 e1 e2 · · · : ei ∈ Ei , t(ei ) = o(ei+1 )}.
It is endowed with the topology induced by the product of the discrete topology on each Ei .
A Bratteli diagram is said to be simple if for every n and every vertex v ∈ Vn there exists
m > n so that v is connected by a path to every vertex of Vm .
To a Bratteli diagram one can associate an AF-groupoid HB , the tail groupoid, as follows.
Let γ = f0 f1 · · · fn be a path in B starting at the top vertex, i.e. fi ∈ Ei and t(fi ) = o(fi+1 ).
We denote t(γ) = t(fn ). The sets of infinite paths in XB starting with γ is called a cylinder
subset of XB and is denoted Cγ . Clearly cylinders subsets are clopen and form a basis for the
topology on XB .
Given v ∈ Vn , the tower corresponding to v, denoted Tv , is the collection of all cylinder
subsets Cγ so that t(γ) = v.
Let Cγ , Cγ ′ be two cylinders in the same tower. Then there is a partial homeomorphism of
XB with domain Cγ and range Cγ ′ given by
τγ,γ ′ : Cγ → Cγ ′
γ en+1 en+2 · · · ↦ γ′ en+1 en+2 · · · .
Let FB be the pseudogroup generated by all partial homeomorphisms of this form. The
groupoid of germs of this pseudogroup, endowed with a topology as in Example 2.3, is called
4
the tail groupoid of B and it is denoted HB. The tail groupoid is an increasing union of the subgroupoids HB^(n) consisting of all germs of partial homeomorphisms of the form τγ,γ′ where γ and γ′ have length at most n. It is easy to check that each HB^(n) is an elementary subgroupoid of HB. In particular HB is an AF-groupoid. It is not difficult to check that the groupoid HB is minimal if and only if the diagram B is simple.
The groups Hn = [[HB^(n)]] are finite and are isomorphic to a direct product of symmetric groups. The group [[HB]]′ is simple if and only if B is simple, see [Mat06].
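To make the combinatorics concrete, the following small Python sketch (an illustrative diagram, not one appearing in the text) encodes a simple Bratteli diagram and the prefix-exchange maps τγ,γ′ that generate the tail pseudogroup:

```python
# Level n edges: list of (origin_vertex_in_V_n, target_vertex_in_V_{n+1}).
edges = [
    [(0, 0), (0, 1)],                      # E_0: top vertex to V_1 = {0, 1}
    [(0, 0), (0, 1), (1, 0), (1, 1)],      # E_1: complete connections (simple diagram)
    [(0, 0), (0, 1), (1, 0), (1, 1)],      # E_2
]

def is_path(p):
    """A finite path is a tuple of edge indices (e_0, ..., e_n) with matching endpoints."""
    return all(edges[n][p[n]][1] == edges[n + 1][p[n + 1]][0] for n in range(len(p) - 1))

def target(p):
    """Vertex where the finite path p ends."""
    return edges[len(p) - 1][p[-1]][1]

def tau(p, q, x):
    """The partial homeomorphism tau_{p,q}: exchange the prefix p of x for q.
    Defined when p and q are paths of the same length ending at the same vertex."""
    assert is_path(p) and is_path(q) and len(p) == len(q) and target(p) == target(q)
    assert x[:len(p)] == p, "x must lie in the cylinder C_p"
    return q + x[len(p):]

# Two paths of length 2 ending at the same vertex of V_2, and a point of C_p:
p, q = (0, 1), (1, 3)          # both end at vertex 1 of V_2
x = p + (1,)                   # a (finite truncation of a) point in the cylinder C_p
print(tau(p, q, x))            # -> (1, 3, 1): same tail, different prefix
```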
The following fundamental result says that every minimal AF-groupoid arises in this way.
Theorem 2.6 (Giordano–Putnam–Skau [GPS04]). Let G be a minimal étale groupoid with unit
space the Cantor set. The following are equivalent.
(i) G is an AF-groupoid;
(ii) there exists a simple Bratteli diagram B so that G is isomorphic to HB .
Moreover in this case the topological full group [[G]] is locally finite.
2.2 A characteristic simple subgroup of topological full groups
Given an étale groupoid G, we denote by [[G]]t < [[G]] the subgroup generated by torsion
elements, and by [[G]]′t the derived subgroup of [[G]]t .
Theorem 2.7. Let G be a minimal groupoid with compact totally disconnected unit space X.
Then [[G]]′t is simple.
The reader may compare the present proof with the proof of Bezuglyi and Medynets [BM08]
of simplicity of [[G]]′ in the special case where G is the groupoid of germs of a minimal Z-action
on the Cantor set (this result was first shown by Matui [Mat06] with a rather different proof).
We use the notations [g, h] = ghg −1 h−1 or the commutator and g h = hgh−1 for the conjugation.
Let us fix some notation and terminology in the following definition.
Definition 2.8. Let H be a group of homeomorphisms of a topological space X. Given two subsets U, V ⊂ X we write U ≺H V if there exists h ∈ H such that h(U) ⊂ V.
We say that H is infinitesimally generated if for every open subset U ⊂ X the set
SU = {h ∈ H : supp(h) ≺H U}
generates H.
We say that H is doubly infinitesimally generated if the following holds. For every open subset U ⊂ X and every g, h ∈ H there exist g1, . . . , gm, h1, . . . , hn ∈ H such that g = g1 · · · gm, h = h1 · · · hn, and the condition
supp(gi) ∪ supp(hj) ≺H U
holds for every 1 ≤ i ≤ m and 1 ≤ j ≤ n.
Remark 2.9. To check that H is doubly infinitesimally generated, it is enough to check that the
condition above is satisfied for g, h in a generating set of H.
The idea in the proof of the following proposition is classical and has been used in many
proofs of simplicity of groups (cf. in particular the argument at the end of the proof of [BM08,
Theorem 3.4].)
Proposition 2.10. Let H be a group of homeomorphisms of a topological Hausdorff space X. Assume that H is doubly infinitesimally generated. Then every non-trivial normal subgroup of H
contains the derived subgroup H ′ . In particular, if H is perfect then it is simple.
Proof. Let N ⊴ H be a non-trivial normal subgroup. Let g, h ∈ H. We show that [g, h] ∈ N.
Let f ∈ N be non-trivial. Let U ⊂ X be an open set so that f(U) ∩ U = ∅ (here we use the assumption that X is Hausdorff). Write g = g1 · · · gm and h = h1 · · · hn as in Definition 2.8 (with respect to the open subset U). The commutator [g, h] belongs to the subgroup normally generated by the [gi, hj] for all i, j. Hence it is enough to show that [gi, hj] ∈ N for every i and j. Let k ∈ H be such that k(supp(gi) ∪ supp(hj)) ⊂ U. Since k normalizes N it is sufficient to show that [gi^k, hj^k] ∈ N. Hence, up to replacing gi, hj with gi^k, hj^k, we may assume that supp(gi) ∪ supp(hj) ⊂ U. After this assumption is made, we have that gi^f commutes with gi and hj, since its support is contained in f(U). Since f ∈ N we have [gi, f] ∈ N. Using that gi^f commutes with gi and hj, this implies
[hj, gi] = [hj, gi (gi^{-1})^f] = [hj, [gi, f]] ∈ N,
thereby concluding the proof.
Lemma 2.11. Let G be as in Theorem 2.7. Then [[G]]t is doubly infinitesimally generated.
Proof. Let us first show that [[G]]t is infinitesimally generated. Let U ⊂ X be clopen. It is
enough to show that any torsion element g ∈ [[G]]t can be written as a product of elements in
SU . Let d be the order of g. Pick a point x ∈ X and enumerate its g-orbit by x = x1 , x2 . . . xl (for
some l|d). By minimality one can find y1 , . . . yl ∈ U lying in the same G-orbit of x, and such that
x1 . . . xl , y1 , . . . yl are all distinct. Clearly there exists an element k ∈ [[G]]t of order 2 such that
k(xi ) = yi for every i = 1, . . . l. To see this, consider γi ∈ G such that s(γi ) = xi and r(γi ) = yi
for every i = 1, . . . l. For each i let Ui ⊂ G be a bisection containing γi such that xi ∈ s(Ui ),
yi ∈ r(Ui ) are clopen sets and are small enough so that the sets s(U1 ), . . . , s(Ul ), r(U1 ), . . . , r(Ul )
are all disjoint. Let k ∈ [[G]]t be the element that coincides with τUi on s(Ui ) and with τU−1
i
on r(Ui ) for all i and with the identity elsewhere. More
formally
k
=
τ
for
the
full
bisection
V
S
S
V = U1 ∪ . . . Ul ∪ U1−1 ∪ · · · Ul−1 ∪ X ′ where X ′ = X \ ( s(Ui ) ∪ r(Ui )).
i
Now let W be a clopen neighbourhood of x and set V = ∪d−1
i=0 g (W ). Then V is g-invariant,
and we have k(V ) ⊂ U if W is small enough.
By compactness we have proven that there exists a finite covering of X by g-invariant clopen sets V such that V ≺[[G]]t U. Up to taking a refinement we may assume that this
covering is a partition (since taking intersections and differences preserves the g-invariance).
Then g = g1 · · · gm where each gi coincides with the restriction of g on each element of the
partition and with the identity elsewhere, hence gi ∈ SU . This proves that [[G]]t is infinitesimally
generated.
Now observe that the construction above yields the following more precise conclusion. For
every torsion element g ∈ [[G]]t and every clopen set U ⊂ X there exists a writing g = g1 · · · gm,
a partition into clopen subsets X = V1 ⊔ . . . ⊔ Vm and elements k1 , . . . km ∈ [[G]]t such that for
every i = 1, . . . , m we have
1. supp(gi ) ⊂ Vi ;
2. the elements ki have order 2, ki (Vi ) ⊂ U and moreover ki (Vi ) ∩ Vi = ∅;
We now show that [[G]]t is doubly infinitesimally generated. Let g, g ′ ∈ [[G]]t be torsion
elements (by Remark 2.9, it is enough to check that the condition in Definition 2.8 is satisfied
for g and g ′ in the generating set consisting of torsion elements). Let U ⊂ X be a non-empty
clopen set and consider a partition U = U1 ∪ U1′ into two non-empty clopen sets.
Consider decompositions g = g1 · · · gm , sets V1 , . . . Vm , elements k1 , . . . , km as above such
that ki (Vi ) ⊂ U1 , and similarly g ′ = g1′ · · · gn′ , V1′ , . . . Vn′ , k1′ , . . . , kn′ such that ki′ (Vi′ ) ⊂ U1′ . Fix
i and j. Set Wi = Vi \ U and Wj′ = Vj′ \ (U ∪ Wi). Then by construction, the four clopen sets Wi, ki(Wi), Wj′, kj′(Wj′) are all disjoint. Let h ∈ [[G]]t be the element that coincides with ki on
Wi ∪ ki (Wi ), with kj′ on Wj′ ∪ kj′ (Wj′ ) and with the identity elsewhere. Then h ∈ [[G]]t and we
have h(Vi ∪ Vj′ ) ⊂ U , thereby concluding the proof.
Lemma 2.12. The group [[G]]′t is doubly infinitesimally generated and perfect.
The proof is a modification of an argument used by Cornulier (cf. the end of the proof of
Théorème 3.1.6 in [Cor14]).
Proof. Let us say that an element g ∈ [[G]]t is an n-cycle if it has the following form. g has
order n, and its support decomposes as a disjoint union of clopen sets
supp(g) = V1 ⊔ · · · ⊔ Vn ,
such that g(Vi) = Vi+1, taking i modulo n. Let N be the subgroup of [[G]]t generated by 3-cycles (the same reasoning applies for any odd n ≥ 3). Then N is normal in [[G]]t, since any conjugate of an n-cycle is still an n-cycle. It is easy to check that N ≠ {e}. Namely, consider 3 points x, y, z ∈ X lying in the same G-orbit, and let γ1, γ2 ∈ G be such that s(γ1) = x, r(γ1) = y = s(γ2), r(γ2) = z. Consider bisections U1, U2 containing γ1 and γ2. Let V1 be a clopen neighbourhood of x small enough so that V1 ⊂ s(U1), V1 ∩ τU1(V1) = ∅, τU1(V1) ⊂ s(U2), and τU2 ◦ τU1(V1) ∩ τU1(V1) = ∅. The element g ∈ [[G]]t that coincides with τU1 on V1, with τU2 on τU1(V1), with τU1^{-1} ◦ τU2^{-1} on τU2 ◦ τU1(V1), and with the identity elsewhere is a non-trivial 3-cycle.
By Lemma 2.11 and Proposition 2.10 it follows that [[G]]′t ⊂ N.
Moreover it is easy to see that every 3-cycle is the commutator of two 2-cycles. Hence N ⊂ [[G]]′t. Thus [[G]]′t = N.
Using the fact that [[G]]′t is generated by 3 cycles, the same proof as in Lemma 2.11 can
be repeated here to show that it is doubly infinitesimally generated (with a minor modification
to choose the elements ki there to be 3-cycles instead of involutions, thereby ensuring that
ki ∈ [[G]]′t ).
It remains to be seen that [[G]]′t = [[G]]′′t . By Proposition 2.10 and Lemma 2.11 it is enough
to show that [[G]]′′t is non-trivial (since it is normal in [[G]]t ). The same reasoning as above can
be used to see that there exist non-trivial 5-cycles and that every 5-cycle is a commutator of
3-cycles, and therefore belongs to [[G]]′′t .
Remark 2.13. The reason why we considered the group [[G]]′t instead of [[G]]′ is that we are not
able to show, in general, that [[G]] is doubly infinitesimally generated. If one is able to show
this, the same proof applies to show simplicity of the group [[G]]′ .
3 Preliminaries on groups acting on rooted trees
3.1 Basic definitions and bounded automata
Let Td be the infinite d-regular rooted tree. The group of automorphisms of Td is denoted
Aut(Td ).
We fix an identification of vertices of Td with the set of words on the finite alphabet A =
{0, . . . d − 1}. The root of Td corresponds to the empty word ∅. We identify the symmetric
group Sd with the subgroup of Aut(Td ) permuting the first level and acting trivially below.
Every g ∈ Aut(Td ) fixes the root and preserves the levels of the tree. Thus for every vertex
v ∈ Td the automorphism g induces an isomorphism between the sub-trees rooted at v and at
g(v). The choice of an indexing of Td by the alphabet A allows to identify this isomorphism
with a new element of Aut(Td ), which is called the section of g at v and is denoted g|v .
A subgroup G < Aut(Td ) is called self-similar if for every g ∈ G and every v ∈ Td we have
g|v ∈ G. Any self-similar group admits a wreath recursion, i.e. an injective homomorphism
M
G ֒→G ≀E Sd :=
G ⋊ Sd
E
g 7→(g|0 , . . . g|d−1 )σ
7
where g|0 · · · g|d−1 are the sections of g at vertices at the first level (identified with the
alphabet E) and the permutation σ ∈ Sd gives the action of g on the first level.
If G is a self-similar group, the section at a vertex v ∈ Td defines a homomorphism
ϕv : Stab(v) → G,    g ↦ g|v .
An important special case of self-similar groups are automaton groups. A finite-state automaton over the alphabet A is a finite subset S ⊂ Aut(Td) which is closed under taking sections: for every g ∈ S and every vertex v ∈ Td we have g|v ∈ S. Such a set can naturally be given the
structure of an automaton in the more usual sense, see [Nek05].
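As a concrete illustration (not taken from the paper), the binary adding machine a = (e, a)σ over A = {0, 1}, where σ swaps the two letters, is a bounded activity automaton: only one vertex per level carries a non-trivial section. A minimal Python sketch of its action on finite words and of its sections:

```python
# The binary adding machine: a = (e, a) sigma, where sigma swaps 0 and 1
# (the section at 0 is trivial and the section at 1 is a again).
# Its set of states {a, e} is closed under sections, and only one vertex per
# level has a non-trivial section, so the automaton has bounded activity.

def act(state, word):
    """Apply the automaton state ('a' or 'e') to a word over {0, 1} (a tuple)."""
    if state == 'e' or not word:
        return word
    x, rest = word[0], word[1:]
    if x == 0:                      # a: 0w -> 1w  (section at 0 is trivial)
        return (1,) + rest
    return (0,) + act('a', rest)    # a: 1w -> 0 a(w)  (carry; section at 1 is a)

def section(state, letter):
    """Section of the state at a first-level vertex."""
    if state == 'e':
        return 'e'
    return 'a' if letter == 1 else 'e'

# a acts on length-3 words as addition of 1 mod 8 (least significant digit first):
print([act('a', w) for w in [(0, 0, 0), (1, 0, 0), (1, 1, 1)]])
# -> [(1, 0, 0), (0, 1, 0), (0, 0, 0)]
```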
The activity function of an automaton S is the function pS : N → N that counts the number
of vertices at level n for which at least one element of S has a non-trivial section:
pS(n) = |{v ∈ An : ∃g ∈ S, g|v ≠ e}|.
It can be shown that this function grows either polynomially with some well defined integer
exponent d ≥ 0, or exponentially. In the former case the integer d is called the activity degree
of the automaton S. We will mostly be interested in the case d = 0; in this case the function
pS (n) is uniformly bounded in n and the automaton is said to be of bounded activity (for short,
a bounded automaton).
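To illustrate the definition, for the adding machine a of the example above the set S = {e, a} is closed under taking sections (since a|0 = e and a|1 = a), and pS (n) = 1 for every n: the only vertex of level n at which a has a non-trivial section is 11 · · · 1. Hence S is a bounded automaton, of activity degree 0.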
An automaton group is a self-similar group G < Aut(Td ) generated by a finite-state automaton.
3.2 The Schreier dynamical system of a group acting on a rooted tree
Every level-transitive self-similar group has an associated Schreier dynamical system, a well-studied object, see [DDMN10, Vor12]. It fits in the framework of uniformly recurrent subgroups
(URS), the topological analogue of an invariant random subgroup (IRS), recently introduced
and studied by Glasner and Weiss [GW15].
Let G be a countable group, and consider the space Sub(G) of subgroups of G endowed with
the Chabauty topology (in the countable case this is simply the topology induced by the product
topology on {0, 1}^G ). If G is finitely generated, the choice of a finite symmetric generating set
S allows one to identify Sub(G) with a space of pointed labelled Schreier graphs (where edges are
labelled by generators in S), and the Chabauty topology coincides with the topology inherited
by the space of pointed labelled graphs. The group G acts on Sub(G) by conjugation and this
defines an action by homeomorphisms. If Sub(G) is identified with the space of Schreier graphs
with respect to a generating set S, the conjugation action corresponds to the action by “moving
the basepoint” as follows. Given a pointed labelled Schreier graph (Γ, γ) and g ∈ G, choose
a writing g = sn · · · s1 , with si ∈ S. Then g · (Γ, γ) = (Γ, gγ), where by definition gγ is the
endpoint of the unique path starting from γ whose edges are labelled s1 , . . . , sn .
A uniformly recurrent subgroup [GW15], or URS, is a nonempty closed minimal G-invariant
subset X ⊂ Sub(G).
A construction of uniformly recurrent subgroups is given by the stability system associated
to a minimal G-dynamical system [GW15]. Namely let Y be a compact space and G y Y be a
minimal action by homeomorphisms. Consider the stabiliser map
Y → Sub(G)
y ↦ Stab(y).
This map need not be continuous, however there is always a Gδ -subset Y0 ⊂ Y on which it is
continuous, see [GW15, §1]. The following proposition is proven in [GW15].
Proposition 3.1 (Glasner-Weiss [GW15]). Let G y Y be a minimal action of a countable
group on a compact space by homeomorphisms, and Y0 ⊂ Y the continuity locus of the stabiliser
map. Then
X = closure of {Stab(y) : y ∈ Y0 } ⊂ Sub(G)
is a URS, called the stability system associated to the system G y Y .
Definition 3.2 (Grigorchuk). Let G be a group acting level-transitively on a rooted tree Td .
The Schreier dynamical system of G is the uniformly recurrent subgroup X ⊂ Sub(G) given by
the stability system for the action on the boundary of the tree.
Remark 3.3. The assumption that the group acts level-transitively is equivalent to minimality of
the action on the boundary of the tree, and thus Proposition 3.1 applies. In particular, G y X
is also minimal.
Let G be a countable group acting by homeomorphisms on a compact space Y . We say that
a point y ∈ Y is topologically regular if every g ∈ G that fixes y fixes a neighbourhood of y
pointwise (in other words, if the isotropy group Gy for the groupoid of germs of the action is
trivial). If every point is topologically regular, we say that the action G y Y is topologically
regular.
Lemma 3.4. A point y ∈ Y is topologically regular if and only if the stabiliser map is continuous
at y.
Proof. Topological regularity is equivalent to the fact that for every g ∈ G the map Y → {0, 1}
given by z ↦ 1{gz=z} is constant on a neighbourhood of y and hence continuous at y. It follows
that the product map Y → {0, 1}^G , z ↦ (1{gz=z} )g∈G is continuous at y. But this is exactly the
stabiliser map, after identifying Sub(G) with a subset of {0, 1}^G .
For groups generated by bounded automata, the Schreier dynamical system admits a rather
more explicit description. To give it we first recall some terminology.
Let now G be a group acting on a rooted tree Td . We say that a ray γ = x1 x2 · · · ∈ ∂Td is
regular if for every g ∈ G the section g|x1 ···xn is eventually trivial, and singular otherwise. The
following lemma is well-known and straightforward.
Lemma 3.5. If a ray γ ∈ ∂Td is regular, then it is topologically regular, while the converse does
not hold.
We say that an orbit for the action of G on ∂Td is regular if one (equivalently, all) of its
points is regular, and singular otherwise. The following lemma is also straightforward to check
from the definition of activity.
Lemma 3.6. Let G be a group generated by a bounded activity automaton. Then G has finitely
many singular orbits.
In particular, it follows from minimality that
Corollary 3.7. Let G be a level-transitive bounded automaton group and let γ ∈ ∂Td be any
regular ray. Then the Schreier dynamical system X of G is given by
X = closure of {Stab(gγ) | g ∈ G} in Sub(G).
We now state the following definition.
Definition 3.8. Let G be a bounded automaton group. The full group of G, denoted [[G]], is
the topological full group of the Schreier dynamical system G y X.
Recall that we denote [[G]]t the subgroup generated by torsion elements, and [[G]]′t its commutator subgroup. It immediately follows from Theorem 2.7 that
Corollary 3.9. If G is level-transitive, the group [[G]]′t is simple.
The group [[G]]′t is not, in general, finitely generated, as the following lemma shows:
Lemma 3.10. Assume that G acts topologically regularly on ∂Td . Then the group [[G]]′t is not
finitely generated unless X is finite.
An example of a bounded automaton group acting topologically regularly on ∂Td is given
by the Basilica group. In fact, it follows from results in [DDMN10] that for the Basilica group,
the map Stab : ∂Td → Sub(G) is a homeomorphism onto its image.
Proof. Recall that the action G y X is a subshift if there exists a finite partition of X in clopen
subsets such that the G-translates of this partition separate points. A necessary condition for the
groups [[G]], [[G]]′ to be finitely generated is that the action G y X is a subshift [Mat06, Cor14],
and the same argument applies to [[G]]t , [[G]]′t as well. Hence it is enough to show that if G
is as in the statement, then the Schreier dynamical system G y X is not a subshift. In fact,
it is a generalized odometer : the orbit of every clopen set is finite. Indeed by Lemma 3.4,
topological regularity of G y ∂Td implies that Stab : ∂Td → X is a continuous equivariant
surjective map. Let U ⊂ X be clopen. It is enough to show that the orbit of Stab−1 (U ) is finite.
But Stab−1 (U ) ⊂ ∂Td is clopen, hence consists of a finite union of cylinders corresponding to
a deep enough level of the tree. Since the G-action preserves levels of the tree, the conclusion
follows.
The group [[G]]′t is however finitely generated in some cases. In the next section, we will
study the group [[G]] when G is the alternating mother group. In this case the group [[G]]′t is
finitely generated and, as we shall see, this is enough to prove Theorem 1.1.
4 The Schreier dynamical system of the mother group
Mother groups are a family of bounded automaton groups, first defined in [BKN10], that contain
all bounded automaton groups as subgroups [BKN10, AAV13]. We work with a variant of the
original definition, defined only in terms of alternating permutations, considered by Brieussel in
[Bri14].
Definition 4.1. Fix d ≥ 5. The (alternating) mother group over d elements is the group
M < Aut(Td ) generated by the two finite subgroups A, B < Aut(Td ), where
• A is the alternating group over d elements acting on the first level with trivial sections.
• B is the set of all elements having a wreath recursion of the form
g = (g, σ1 , · · · , σd−1 )ρ,
where σ1 · · · σd−1 , ρ are alternating permutations and ρ fixes 0.
Observe that B is a finite subgroup of Aut(Td ), isomorphic to the permutational wreath
product Ad ≀{1,··· ,d−1} Ad−1 where Ad is the alternating group.
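As a concrete illustration of the sizes involved, for d = 5 one gets |A| = |A5 | = 60 and |B| = |A5 |^4 · |A4 | = 60^4 · 12 = 155,520,000; both are finite subgroups of Aut(T5 ).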
The interest of the mother group lies in the following fact:
Theorem 4.2 ([BKN10], [AAV13]). Let G be a bounded automaton group. Then G embeds in
an alternating mother group (over a possibly bigger alphabet).
Remark 4.3. In fact, the mother groups defined in [BKN10, AAV13] are slightly different: generators are defined by the same recursive rules but without the constraint that the permutations
involved belong to the alternating group. However, it is straightforward to see that the mother
group over d elements embeds in the alternating mother group over 2d elements.
The following fact is proven by Brieussel in [Bri14].
Lemma 4.4 (Proposition 3.1 in [Bri14]). If d ≥ 5 the wreath recursion map defines an isomorphism
M ≅ (⊕A M ) ⋊ Ad = M ≀A Ad
where Ad is the alternating group.
A consequence (that can also be proven directly by induction) is the following
Corollary 4.5. The action M y ∂Td is totally non-free (two distinct points have different
stabilizers).
Proof. Let γ ≠ η ∈ ∂Td and let w be their longest common prefix. Let x, y ∈ A be the letters
following w in γ, η. Using Lemma 4.4, one can find g ∈ M so that g|w is an element of A given
by a permutation σ such that σ(x) = x and σ(y) ≠ y. Hence gγ = γ but gη ≠ η.
We also collect here some well-known facts about the group M that we will use. They can
be easily proven by induction, or proofs can be found e.g. in [AV14].
Lemma 4.6.
1. M acts level-transitively on Td .
2. Two rays γ, η ∈ ∂Td are in the same M -orbit if and only if they are cofinal. Moreover
there is only one singular orbit, namely the orbit of γ = 0∞ .
3. Let ρ = x0 x1 · · · ∈ ∂Td be in the orbit of 0∞ and g ∈ M . Then the section g|x0 ···xn is
eventually constant and belongs to B. We denote g|ρ its eventual value.
4. The group M is contracting with nucleus A ∪ B: for every g ∈ M there exists n so that
all sections of g at levels r ≥ n belong to A ∪ B.
From now on, we shall fix d ≥ 5, and denote M y X the Schreier dynamical system of
M , and M its groupoid of germs. We further denote [[M ]] the topological full group of M.
Recall that we denote [[M ]]t the subgroup of [[M ]] generated by torsion elements, and [[M ]]′t
the commutator subgroup of [[M ]]t . We have
Lemma 4.7. If d ≥ 6, M embeds in [[M ]]′t .
Proof. First observe that the action M y X is faithful. Namely let γ ∈ ∂Td be a regular ray
and O(γ) be its orbit. By Lemma 4.6(1), O(γ) is dense in ∂Td and thus M y O(γ) is faithful.
By Corollary 3.7 the space X is given by the closure of stabilizers of points in O(γ), and by
Corollary 4.5 the stabiliser map restricted to O(γ) is injective. It follows that X contains an
invariant subset on which the action is faithful. Hence M embeds in [[M ]]. To check that it is
actually contained in [[M ]]′t , it is enough to show that generators in A and B can be written as
commutators of torsion elements. This is obvious for generators in A (since A is the alternating
group over d ≥ 6 elements), and observe that B is generated by its subgroups B0 , B1 , . . . Bd−1
where B0 is the subgroup consisting of elements with all the σi trivial (it is thus isomorphic
to an alternating group over d − 1 ≥ 5 elements) and for 1 ≤ i ≤ d − 1 Bi is the subgroup of
B consisting of elements that have ρ and σj trivial for all j 6= i (it is thus isomorphic to the
alternating group over d elements).
Thus, Theorem 1.1 follows from the combination of Theorem 4.2 with
Theorem 4.8. The group [[M ]]′t is a finitely generated, simple amenable group.
The rest of the paper is devoted to the proof of this result. Simplicity has already been
established (see Corollary 3.9).
4.1 Bratteli diagram representation and amenability
Let B be a Bratteli diagram. Given a vertex v of B, recall that we denote Tv the tower
corresponding to v. Recall that this is the collection of all cylinder subsets of XB corresponding
to paths ending in v.
Definition 4.9. A homeomorphism g of XB is said to be of bounded type if for every v ∈ B the
number of cylinders Cγ ∈ Tv so that g|Cγ is not equal to τγ,γ ′ for some γ ′ is bounded uniformly
in v, and the set of points x ∈ XB such that the germ of g in x does not belong to HB is finite.
The following result is due to Juschenko, Nekrashevych and de la Salle [JNdlS13, Theorem
4.2].
Theorem 4.10 ([JNdlS13]). Let G be a group of homeomorphisms of bounded type of the path
space XB of a Bratteli diagram, and G be the groupoid of germs of G y XB . Assume that
for every x ∈ XB the isotropy group Gx is amenable. Then G is amenable. Moreover [[G]] is
amenable.
Recall that M y X is the Schreier dynamical system of the mother group. We denote M its
groupoid of germs. We have:
Theorem 4.11. There exists a stationary, simple Bratteli diagram B and a homeomorphism
X ≃ XB that conjugates the action M y X to an action by homeomorphisms of bounded type.
Moreover the action M y XB is topologically regular (equivalently, Mx is trivial for every
x ∈ XB ).
By Theorem 4.10, this implies:
Corollary 4.12. The group [[M ]] is amenable.
Before proving the theorem, let us first describe the idea of the construction of the Bratteli
diagram in Theorem 4.11 and fix some notation. The path space of the diagram B will be
obtained from the boundary of the tree by “removing” the orbit of the ray 0∞ and replacing it
by finitely many copies of it, each carrying two extra letters “at infinity”.
More precisely, the path space of B will be in bijection with the set X̃ defined as follows. Let
O ⊂ ∂Td be the orbit of the zero ray 0∞ . Recall from Lemma 4.6 that O consists exactly of rays
that are co-final with 0∞ , and that O is the only singular orbit of M . For a, b ∈ A = {0, . . . , d−1}
with a ≠ 0, denote Oab the set of formal words of the form ρab where ρ ∈ O. We say that the
two letters ab lie “at infinity” in positions ω, ω + 1. Set O∗ = ∪Oab , where the union is taken
over all a, b ∈ A with a ≠ 0.
Definition 4.13. With the notations above, we denote the set
X̃ = (∂Td \ O) ∪ O∗ .
We make the group M act on X̃ as follows. The action on ∂Td \ O is given by the restriction
of the action on the boundary of the tree. If ρab ∈ O∗ and g ∈ M we set g(ρab) = g(ρ)g|ρ (ab),
where the section g|ρ is as in point 3 of Lemma 4.6.
Note that we do not consider any topology on X̃ yet. We will now construct a Bratteli
diagram B so that the path space XB is in bijection with X̃, and then we will consider on X̃ the
topology induced by this bijection. We will then show that M y X̃ = XB (with this topology)
is conjugate to the Schreier dynamical system M y X.
Construction of the diagram
We define a Bratteli diagram B as follows. The level set V0 consists of a single vertex, the top
vertex. All levels Vn , n ≥ 1 are identified with a copy of the set {(ab, i)} where a ∈ A \ {0}, b ∈ A
and i ∈ {0, ∗}. Every vertex (ab, 0) at level n is connected with the vertices (ab, 0) and (ab, ∗)
at level n + 1. We label edges of this form by the symbol 0. Every vertex of the form (ab, ∗)
with b ≠ 0 is connected to all vertices of the form (bc, ∗) with c arbitrary. We label edges of this
type with the symbol a. Finally every vertex of the form (a0, ∗) is connected to all vertices
of the form (cd, 0) with c 6= 0 and d arbitrary. These edges are also labelled a. The top vertex
is connected to each vertex of V1 by d edges, labelled by {0, . . . , d − 1}. Let B be the Bratteli
diagram obtained in this way. Note that B is simple.
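As a quick check on the construction, every level Vn with n ≥ 1 contains 2d(d − 1) vertices (d − 1 choices for a, d choices for b and two choices for i); for d = 5 this gives 40 vertices per level. Since the levels and the edge pattern do not depend on n ≥ 1, the diagram B is stationary, as asserted in Theorem 4.11.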
The path space XB is in bijection with X̃ as follows. Every path γ ∈ XB corresponds to
the sequence γ̃ ∈ X̃ read on the labels of its edges, if this sequence does not end with infinitely
many zeros. If the sequence read ends with infinitely many 0s, then observe that the path γ
must end with an infinite sequence of edges between vertices of the form (ab, 0) for some fixed
a ≠ 0, b ∈ A, and in this case we put the two letters ab “at infinity”. Conversely, given any
sequence γ̃ ∈ X̃ one can find a unique path γ ∈ XB , so that the labels of its first n edges
coincide with the first n letters of γ̃, and the n-th vertex vn = (ab, i) ∈ Vn of γ encodes the
following information: the symbol i ∈ {0, ∗} tells whether the (n + 1)-th letter is zero or non-zero,
a is the first non-zero letter in γ appearing after position n and b is the letter following a.
The group M acts on XB through the identification X̃ ≃ XB . We will systematically use
this identification. We put on X̃ the topology induced by this identification. Let us describe
this topology in a more explicit way.
Lemma 4.14. Let v ∈ B be a vertex at level n and η be a finite path ending at v. Let w ∈ An
be the sequence of labels read on η. Consider the cylinder subset Cη ⊂ XB , and view it as a
subset of X̃ through the bijection XB ≃ X̃ described above. Then
1. If v has the form (ab, ∗) then Cη consists exactly of all sequences starting with wab.
2. If v has the form (ab, 0) then Cη consists exactly of all sequences in X̃ starting with a
prefix of the form w0^n for some n ∈ N+ ∪ {∞}, and such that the first non-zero letter
following this prefix is a and the next letter is b (the letters ab could be “at infinity”).
Proof. Follows directly by inspecting the constructed bijection X̃ ≃ XB .
Lemma 4.15. The action of M on XB is by homeomorphisms of bounded type.
Proof. Pick a vertex v in B and let η be a path ending at v. Let w be the word read on η, and
note that η is the unique path ending at v on which w is read. Let g ∈ M . It follows from the
definition of the action M y X̃ and from Lemma 4.14 that g restricted to Cη coincides with
a transformation of the form τη,η′ unless g|w ≠ e. Since M has bounded activity, there are only
boundedly many such w.
Moreover for elements in the standard generating set of M , the germs at every point of XB
belong to HB except perhaps for points that correspond to sequences of the form 0∞ ab ∈ X̃,
that may be sent to sequences of the same form for a different choice of ab. The conclusion
follows.
Lemma 4.16. The action M y XB is topologically regular.
Proof. Let γ ∈ XB and g ∈ M so that gγ = γ. View γ as an element of X̃. By Lemma 4.6(3),
there exists an n so that the section of g at the n-th prefix of γ belongs to the finite subgroup B.
Consider the cylinder Cγn ⊂ XB corresponding to the first n edges of the path γ (now viewed
as a path in the Bratteli diagram). We claim that g fixes Cγn point-wise. The reason is that
every point of Cγn carries the same letters ab, where a is the first non-zero letter that follows the nth position in γ
and b is the letter following a (b may be 0). An element of B only acts on
the first non-zero letter of a ray and on the next one. Since g fixes γ and its section at the nth
level belongs to B, it follows that g fixes Cγn pointwise.
Proof of Theorem 4.11. Consider the stabiliser map Stab : XB → Sub(M ). This map is continuous by Lemma 4.16 and Lemma 3.4. Moreover its image is exactly the Schreier dynamical
system X, since XB contains an invariant dense subset on which the action of M is conjugate
to the action on ∂Td \ O. Hence we only need to check that it is injective. Let γ ≠ γ ′ ∈ XB .
It is easy to construct g ∈ M so that gγ = γ and gγ ′ ≠ γ ′ (e.g. using Lemma 4.4) and thus
Stab(γ) 6= Stab(γ ′ ).
Remark 4.17. It follows from the proof that every Schreier graph in X has no non-trivial
automorphism (as an unrooted labelled graph). Indeed the proof shows that X coincides with
its stability system, and thus every element of X (regarded now as a subgroup of M ) coincides
with its normalizer in M .
5 Finite generation
In this section we show
Theorem 5.1. [[M ]]′t is finitely generated.
We provide two proofs: the first (that was added after the appearance of [Nek15]) consists in
using the explicit description of M y X obtained in the previous section to show that the action
M y X is conjugate to a subshift, and then applying a theorem of Nekrashevych from [Nek15].
The second consists in constructing an explicit generating set and is based on combinatorial
properties of Schreier graphs of M .
We denote points in the Schreier dynamical system X with the notation (Γ, ρ), where Γ is a
labelled Schreier graph and ρ ∈ Γ is its basepoint.
Recall that a group action by homeomorphisms on the Cantor set G y X is said to be
conjugate to a subshift over a finite alphabet, if there exists a finite partition P of X into
clopen sets such that the G-translates of elements of P separate points.
Lemma 5.2. M y X is conjugate to a subshift over a finite alphabet.
Proof. We define a clopen partition P of X so that M -translates of P separate points. By
definition (Γ, γ) and (Γ′ , ρ) are in the same element of P if the loops at γ, ρ have the same
labels. Let us show that the M -translates of P separate points. Use the identification X ≃ X̃
introduced in the previous section (see Definition 4.13). Let γ ≠ ρ ∈ X̃. Since γ ≠ ρ, there is a
first bit in which γ and ρ differ, say a position r ∈ N ∪ {ω, ω + 1}. Assume at first that r ∈ N.
Let this bit be x in γ and y in ρ. If r = 1 then there are generators in A that fix γ but not ρ
and we are done. Otherwise let w be the common prefix of length r − 1. Since the group M
acts level-transitively on Td and using Lemma 4.4, we can find g ∈ M so that g(w) = 0^{r−2} 1 and
g|w = e. Then g(γ) = 0^{r−2} 1x · · · and g(ρ) = 0^{r−2} 1y · · · . It is easy to see that the generators in
B that fix g(γ) and g(ρ) are different and thus g(γ), g(ρ) lie in different elements of P. The case
r ∈ {ω, ω + 1} is similar, but choose instead g so that g(w) = 0 · · · 0 (where w is the non-zero
prefix common to γ and ρ).
Remark 5.3. The mother group plays an important role in the previous proof: by the proof
of Lemma 3.10, the Schreier dynamical system of a bounded automaton group is not always
conjugate to a subshift.
First proof of Theorem 5.1. By Nekrashevych’s result [Nek15], Lemma 5.2 is enough to conclude
the proof. Namely it is proved in [Nek15] that if G is an expansive groupoid with infinite orbits,
the alternating subgroup A(G) < [[G]] is finitely generated (see [Nek15] for the definition of
A(G)), and that the groupoid of germs of a subshift is expansive. In this situation the group
A(G) coincides with [[M ]]′t , indeed it is a normal subgroup of [[M ]]′t and the latter is simple by
Theorem 2.7. This is enough to conclude.
We now turn to the second proof.
5.1 Basic properties of Schreier graphs of M
In this subsection we collect some preliminary considerations on Schreier graphs of M .
We let X̃ = (∂Td \ O) ∪ O∗ be the space of sequences introduced in Definition 4.13. We will
often use the identification X ≃ X̃.
Fix ρ ∈ X̃ and let O(ρ) be its orbit. Denote Γ the corresponding Schreier graph, with respect
to the generating set S = A ∪ B.
Projection to the Gray code line
For γ ∈ X̃, let γ̄ be the sequence in the binary alphabet {0, ∗} obtained by replacing all non-zero
letters of γ by ∗.
This defines a graph projection p : Γ → Γ̄ where Γ̄ is the following graph. Its vertex set
consists of the sequences γ̄ for γ ∈ O(ρ). Two such sequences are neighbours if and only if they
differ either in the first bit only, or in the bit that follows the first appearance of ∗ only. Edges
of the first type are called edges “of type A”, and edges of the second type are called edges “of
type B”. Since every vertex has exactly two neighbours, Γ̄ is a line. The projection p : Γ → Γ̄
preserves adjacency. More precisely, edges given by actions of generators in A project to edges
of type A (unless their endpoints are mapped to the same vertex), and the same with B.
It is convenient to think of the graph Γ as “fibered over” the line Γ̄.
Definition 5.4. We call Γ̄ the Gray code line associated to Γ.
The reason for the name is the connection to the Gray code ordering of binary sequences.
This connection, and the fact that Schreier graphs of M project to lines, was used in [AV14,
AV12, AAMBV16].
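To illustrate the adjacency rule defining the line, a possible segment of Γ̄ reads, showing only the first three letters of each projected sequence (the remaining letters being equal): 000, ∗00, ∗∗0, 0∗0, 0∗∗, ∗∗∗, ∗0∗, 00∗, in this order along the line. Each consecutive pair differs either in the first bit or in the bit following the first appearance of ∗, exactly as in the reflected binary Gray code.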
Definition 5.5. Let γ ∈ X̃. We say that the bit at position r ∈ N ∪ {ω, ω + 1} is visible in γ
if it is either the first bit (in which case we say that it is A-visible), or if it is the first non-zero
bit, or the following one (in which cases we say that it is B-visible).
The same terminology applies to bits in the projected sequence γ̄.
If Ī ⊂ Γ̄ is a segment, we say that a bit is visible in Ī if it is visible in at least one sequence
in Ī.
Remark 5.6. Acting on γ with a generator in A, B only modifies a bit that is A, B-visible (this
is a straightforward consequence of the definition of the groups A, B).
The following definition will allow us to define a basis for the topology of the Schreier
dynamical system X, particularly convenient to work with.
Definition 5.7 (Gray code piece). 1. A Gray code piece I is a subgraph of Γ which is a connected component of p−1 (Ī), where Ī = [γ̄ 0 , · · · , γ̄ n−1 ] ⊂ Γ̄ is a finite segment. We still
denote p : I → Ī the restriction of the projection map to I. The length of a Gray code
piece is the length of Ī (i.e. the number of its vertices).
2. A pointed Gray code piece (I, γ) is a Gray code piece I together with a preferred vertex
γ ∈ I. We will always denote v̄ ∈ Ī the projection of a vertex v ∈ I to Ī.
3. A pointed Gray code piece (I, γ) is said to be central if γ projects to the midpoint of Ī (in
particular, a central Gray code piece has odd length).
Lemma 5.8. Gray code pieces are finite graphs.
Proof. Let I ⊂ Γ be a Gray code piece, and pick a vertex γ ∈ I. Since I is connected, every
vertex γ ′ of I can be reached from γ by acting with generators in A ∪ B without ever getting
out of Ī in the projection. Thus, the corresponding sequence γ ′ only differs from γ in bits that
are visible in Ī. Thus there are only finitely many possibilities for γ ′ .
Definition 5.9 (Marginals). Let (I, γ) be a central Gray code piece of length 2n + 1 with n ≥ 2
and Ī = [γ̄ −n , · · · , γ̄ 0 , · · · , γ̄ n ]. Denote by Ī l = [γ̄ −n , · · · , γ̄ n−2 ] ⊂ Ī the segment consisting of the
2n−1 leftmost vertices of Ī and by Ī r the segment consisting of the rightmost 2n−1 vertices. Let
Il , Ir ⊂ I be respectively the connected components of γ in p−1 (Ī l ), p−1 (Ī r ). Then (Il , γ), (Ir , γ)
are two (non-central) pointed Gray code pieces, that we call the marginals of (I, γ).
We also call Ī c ⊂ Ī the segment consisting of the 2n − 3 central vertices, and let Ic ⊂ I be
the connected component of γ in p−1 (Ī c ), so that (Ic , γ) is a central Gray code piece of length
2n − 3.
The main combinatorial feature of the graphs that we will use is contained in the next
proposition.
Proposition 5.10. Let (I, γ) ⊂ Γ be a centered Gray code piece. Then the isomorphism class
of (I, γ) as a labelled graph is uniquely determined by the isomorphism classes of its marginals.
The proof of this proposition requires a more detailed analysis of the Schreier graphs, which
will not be relevant for the rest of the proof. For this reason, it is postponed to Subsection 5.3.
5.2 Second proof of Theorem 5.1
The strategy is to follow the same reduction steps as in Matui’s proof [Mat06, Theorem 5.4] of
finite generation for the topological full group of a minimal Z-subshift. However, the analysis is
substantially more involved, as we need to deal with the more complicated nature of the graphs.
We define a convenient basis for the topology on X.
Definition 5.11 (Gray code cylinder).
1. Let (I, γ) be a pointed Gray code piece. The corresponding Gray code cylinder CI,γ ⊂ X is the collection of all (Γ, ρ) ∈ X so that a Gray
code piece around ρ is isomorphic to (I, γ). We say that a Gray code cylinder is central if
the corresponding Gray code piece is central.
2. For (Γ, ρ) ∈ X we denote (Γ|n , ρ) the pointed central Gray code piece of length 2n + 1
around ρ.
Central Gray code cylinders are a basis of clopen sets for the topology on X.
The following is a consequence of Proposition 5.10.
Lemma 5.12. Let (I, γ) be a central Gray code piece, and (Il , γ), (Ir , γ) be its marginals. Then
CI,γ = CIl ,γ ∩ CIr ,γ .
Proof. The inclusion ⊂ is obvious. The reversed inclusion is a consequence of Proposition
5.10.
We define a partial action of M on pointed Gray code pieces. The action of g ∈ M on (I, γ)
is defined if and only if it is possible to write g as g = sn · · · s1 with si ∈ S, in such a way that
the path in I starting from γ with labels s1 , · · · , sn is entirely contained in I. In that case we
set g(I, γ) = (I, gγ) where gγ is the endpoint of the above path.
Definition 5.13.
1. Given a clopen subset U ⊂ X, and elements g, h ∈ M we call the triplet
(U, g, h) admissible if the sets U, g −1 U, hU are disjoint.
2. Given an admissible triplet (U, g, h) we define ηU,g,h to be the element of [[M ]] that acts
as g, h, g −1 h−1 on g −1 U, U, hU respectively, and the identity elsewhere.
Lemma 5.14. Let (U, g, h) be an admissible triplet. Then ηU,g,h ∈ [[M ]]′t .
Proof. Clearly ηU,g,h belongs to a subgroup of [[M ]] isomorphic to the symmetric group S3
(which is thus contained in [[M ]]t ) and corresponds to a 3-cycle (which is thus a commutator).
Lemma 5.15. The elements ηU,g,h generate [[M ]]′t as (U, g, h) varies among admissible triplets.
Proof. As in [Mat06], the starting observation is that since [[M ]]′t is simple, it is generated by
its elements of order 3 (note that there are such elements, for instance by Lemma 5.14). Let
k ∈ [[M ]]t with order 3. Let us show that it can be written as a product of elements of the
form ηU,g,h . Since the action of M on X is topologically regular (Theorem 4.11), the set of
fixed points of k is clopen. Let V be its complement. V is covered by clopen sets of the form
W = W1 ⊔ W2 ⊔ W3 so that k permutes cyclically the Wi and so that the restriction of k to
each Wi coincides with the restriction of an element of M . After taking a finite sub-cover and
refining it, we may suppose that V is partitioned into clopen sets W of this form. Note that
given such a set W , the element of [[M ]] that acts as k on W and as the identity elsewhere
can be written in the form ηW1 ,g,h for some g, h ∈ M . Thus k can be written as a product of
commuting elements of this form.
Lemma 5.16. For every R > 0, there exists n > 0 such that the following holds. Let (Γ, γ) ∈ X.
Assume that ρ ≠ γ is a vertex of Γ lying at distance less than R from γ. Then (Γ|n , γ) ≠ (Γ|n , ρ).
Proof. Assume that there is a sequence (Γn , γn , ρn ) verifying the assumptions so that for every
n we have (Γn |n , γn ) = (Γn |n , ρn ). Up to taking a subsequence we may assume that (Γn , γn )
and (Γn , ρn ) converge to limits (Γ, γ) and (Γ, ρ), where the limit graph Γ is the same with two
different basepoints (here we use the assumptions that γn and ρn are at bounded distance in
Γn ). Then γ and ρ are indistinguishable in Γ. Arguing as in the proof of Lemma 5.18, this
contradicts Remark 4.17, since then Γ has a non-trivial automorphism.
We will need the following simple fact.
Lemma 5.17. Let ∆ be a finite connected graph, with vertex set {1, . . . , n}. Then the alternating
group over n elements is generated by 3-cycles of the form (x, y, z), where x, y, z ∈ {1, . . . , n}
are the vertices of a simple path of length 3 in ∆.
Lemma 5.18. The elements ηU,s,t generate [[M ]]′t as (U, s, t) varies among admissible triplets
so that s, t ∈ S = A ∪ B.
Proof. Consider ηU,g,h with g, h arbitrary. By writing U as a disjoint union of central Gray code
cylinders, we may suppose U = CI,γ is a central Gray code cylinder whose Gray code piece (I, γ)
is such that the length of I is bigger than 2 max{|g|S , |h|S } so that (I, gγ) and (I, h−1 γ) are
defined, and gU and h−1 U are the corresponding Gray-code cylinders. Let ∆ ⊂ I be a minimal
connected subgraph containing h−1 γ, γ and gγ, let R be its diameter, and let n0 be given
by Lemma 5.16. Up to decomposing again into Gray code cylinders of bigger length, we may
assume that the length of I is at least n0 + R. Then Lemma 5.16 guarantees the following: if
δ, δ ′ ∈ ∆ are distinct, then the Gray code cylinders CI,δ and CI,δ′ are disjoint. Hence [[M ]]
contains a copy of the symmetric group acting over |∆| elements that acts by permuting such
cylinders. Then ηU,g,h corresponds to the 3-cycle (h−1 γ, γ, gγ). Moreover 3-cycles permuting
adjacent elements of ∆ are of the form ηCI,δ ,s,t with s, t ∈ S. Hence ηU,g,h belongs to the group
generated by such elements.
From this point on, let n0 be the integer furnished by Lemma 5.16 for R = 8. In the next
definition and the following lemma, it will be convenient to slightly extend the generating set by
setting S̃ = S^2 . Note that this is still a generating set since e ∈ S.
Definition 5.19. We say that (U, s, t) is a convenient admissible triplet if it is an admissible
triplet that has the following form: U = CI,γ is a central Gray code cylinder where I has length
at least 2n0 + 1, s, t ∈ S̃, and s−1 γ, γ, tγ project to three distinct consecutive points in the Gray
code line.
The following Lemma is based on the same commutator trick as in [Mat06, Lemma 5.3]. The
possibility to apply it in this situation relies on the combinatorial nature of Gray code cylinders
(Proposition 5.10 and Lemma 5.12).
Lemma 5.20. Let (U, s, t) be a convenient admissible triplet, with U = CI,γ . Let (Il , γ), (Ir , γ)
be the marginals of (I, γ). Observe that the definition of convenient admissible triplet implies
that (Il , s−1 γ) and (Ir , tγ) are central Gray code pieces of smaller length. Set Ul = CIl ,s−1 γ and
Ur = CIr ,tγ . Choose s′ , t′ such that the triplets (Ul , s′ , s) and (Ur , t, t′ ) are admissible. Then we
have
[ηUr ,t,t′ , (ηUl ,s′ ,s )−1 ] = ηU,s,t .
Proof. First, observe that by Lemma 5.12
s(Ul ) ∩ t−1 (Ur ) = CIl ,v ∩ CIr ,v = CI,v = U,
where in the second equality we used Lemma 5.12.
Second observe that Lemma 5.16 together with the assumption that the length of I is at least
2n0 +1 (made in Definition 5.19) implies that all other pairs in the list s′−1 (Ul ), Ul , s(Ul ), t−1 (Ur ), Ur , t′ (Ur )
are disjoint.
Third and last, observe that if (W, s′ , s), (V, t, t′ ) are any two admissible triplets such that
sW ∩ t−1 V ≠ ∅ and such that all other pairs in the list s′−1 (W ), W, s(W ), t−1 (V ), V, t′ (V ) are
disjoint, then we have the identity
[ηV,t,t′ , (ηW,s′ ,s )−1 ] = ηsW ∩t−1 V,s,t .
The last identity applied to W = Ul , V = Ur gives the desired conclusion.
Proof of Theorem 5.1. Consider the set T = {ηV,s,t }, where (V, s, t) runs over convenient admissible triplets such that V is a Gray-code cylinder of length 2n0 + 1. We shall show that T
generates [[M ]]′t .
Let us first show that ⟨T ⟩ contains all elements ηU,s,t where (U, s, t) is any convenient admissible triplet. Let (I, v) be the Gray code piece corresponding to U . We prove the claim by
induction on 2n + 1, the length of I. Since (Il , s−1 v) and (Ir , t(v)) are centered of length 2n − 1,
by the inductive assumption we have that ηUl ,s′ ,s and ηUr ,t,t′ belong to ⟨T ⟩, where Ul , Ur , s′ , t′
are as in Lemma 5.20. By Lemma 5.20 also ηU,s,t ∈ ⟨T ⟩.
To conclude the proof, by Lemma 5.18 it is enough to show that every element of the form
ηU,s,t , with U clopen and s, t ∈ S, lies in ⟨T ⟩ (here the triplet (U, s, t) is admissible, but not
necessarily convenient). By taking a partition of U into central Gray code cylinders, we may
assume that U is a centered Gray code cylinder of depth 2n + 1 ≥ 2n0 + 1. Let it correspond
to (I, v). If s−1 (v), v, tv project to three consecutive points on the Gray code line, then the
triplet is convenient and we are done. The other cases are covered by taking suitable conjugates
of convenient admissible triplets (the reason why we used the generating set S 2 instead of S
in the definition of a convenient admissible triplet is that this gives enough room to perform
this step). Consider the case where v, tv project to the same point on the Gray code line. Pick
t′ ∈ S so that the points s−1 (v), v, t′ (v) project to three distinct consecutive points (hence,
the triplet U, s, t′ is convenient admissible). Choose also s′ ∈ S^2 so that s′−1 (v) ≠ s−1 (v) and
s′−1 (v), s−1 (v) project to the same point on the line (note that this may not be possible if we
considered the generating set S only). Let V be the cylinder corresponding to (I, t(v)). Then
the triplet V, s′ , t′ t−1 is convenient admissible, and we have
ηU,s,t = (ηV,s′ ,t′ t−1 )−1 ηU,s,t′ ηV,s′ ,t′ t−1 .
The other cases are done similarly.
5.3 Proof of Proposition 5.10
Definition 5.21. Let (I, γ) ⊂ Γ be a central Gray code piece. We say that (I, γ) branches at
the left if the left marginal (Il , γ) contains vertices that project to I c but that do not belong
to Ic . The connected components of p−1 (I c ) ∩ Il are called the left branches. The left branch
containing γ coincides with Ic and is called the core branch. In the same way we define branching
at the right. We say that (I, γ) bi-branches if it branches both at the left and at the right.
We begin with the following special case of Proposition 5.10.
Lemma 5.22. Let (I, γ) ⊂ Γ be a central Gray code piece. If (I, γ) does not bi-branch, then its
isomorphism class is uniquely determined by the isomorphism class of its marginals.
Proof. Let (Il , γ) and (Ir , γ) be the marginals. Since the position of γ is given, the identification
between vertices of the marginals is uniquely determined on the core branch. So if one of the
marginals does not have any other branches, this determines the isomorphism class of (I, γ)
completely.
To conclude, we need to understand in which situations bi-branching may occur.
Lemma 5.23. Let (I, γ) ⊂ Γ be a Gray code piece. Then (I, γ) branches at the left if and only
if there is a bit which is B-visible in I l \ I c , is not visible in Ic and is non-zero in at least one
point of I c . The same characterization holds for branching at the right.
Proof. This is an elementary consequence of the definitions and of Remark 5.6.
Let I ⊂ Γ. We say that a point γ ∈ I is a root for I if the position of its first non-zero bit,
say j, is maximal among all points of I. We further say that a point of I is an anti-root if the
position of its first non-zero bit is exactly j − 1. This terminology is inspired by [AV14].
Remark 5.24. It follows from the definition of the Gray code line Γ that every connected segment
has at least one and at most two roots, and if there are two distinct ones, they are neighbours
(this follows from the fact that between any two non-neighbouring points of Γ beginning with a
sequence of j zeros there is at least one point beginning with a longer sequence of zeros). It could
be that I has no anti-roots.
Lemma 5.25. Let (I, γ) be a Gray code piece, and ρ ∈ I be a root. Then p−1 (ρ) ∩ I is a
connected sub-graph of I.
Proof. Let j be the position of the first ’∗’ bit in ρ. We claim that any two sequences ρ, ρ′ ∈
p−1 (ρ) ∩ I can only differ in the jth bit, and in the j + 1th bit if the latter is non-zero. Assume
that they differ in the rth bit with r > j + 1. Then this bit must be visible in I. This implies
that there is a sequence in I whose first ′ ∗′ bit is at position > j, contradicting the definition
of a root. This is enough, since then ρ and ρ′ are connected by an edge corresponding to a
generator in B.
Definition 5.26. Let (I, γ) ⊂ Γ be a central Gray code piece with I = [γ −n , · · · , γ 0 , · · · γn ].
We say that (I, γ) is a quasi-level if the roots of I are contained in the two leftmost vertices
{γ −n , γ −n+1 }, its antiroots are contained in the two rightmost vertices {γ n−1 , γ n }, and there is
at least one anti-root, or if the symmetric situation (exchanging left and right) occurs.
A quasi-level has depth j, where j is the position of the first non-zero bit in a root.
The reason for the name is that a quasi-level is essentially isomorphic to the finite Schreier
graph for the action of M on the j-th level of the tree, up to a bounded number of edges close
to the left and right extremities.
Lemma 5.27. If (I, γ) bi-branches, then it is a quasi-level.
Proof. Assume that (I, γ) bi-branches. Let ρ ∈ I be a root. By Lemma 5.25 and the fact that
(I, γ) bi-branches, we conclude that ρ ∉ I c . Hence, by Remark 5.24 and up to exchanging left
and right, we may suppose that all roots are contained in I l \ I c . We need to check that there
is at least one anti-root and that anti-roots belong to I r \ I c . Let j be the position of the first
∗ in a root. By Lemma 5.23, there is at least one visible bit in I r \ I c which is not visible in
I c and is ∗ in at least one point of I c . We claim that such a bit is necessarily at position j.
Let r be its position. Assume that r > j. Then it is preceded either by the prefix 0^{r−1} or by
the prefix 0^{r−2} ∗. In both cases the first ∗-bit in the corresponding sequence is at position ≥ j,
contradicting the fact that the roots are contained in the two left-most vertices. Assume that
r < j. Since the bit at position r is zero in the root and ∗ in a point in I c , there is an edge in I c
where it switches from 0 to ∗. But then at this point it is also visible, a contradiction. It follows
that there is a point in I r \ I c that has a visible bit at position j. Since this point is not a root,
it also has a ∗ at position j − 1. It follows that it is an anti-root. If there was another anti-root
within I c , the bit at position j would be visible in I c , contradicting the previous reasoning.
Lemma 5.28. Let (I, γ) be a quasi-level. Then it is uniquely determined by its marginals.
Proof. Let α ∈ I l \ I c be the root closest to the center of I, and ω ∈ I r \ I c be the anti-root
closest to the center. Then the preimages p−1 (α) and p−1 (ω) in Il and Ir are connected graphs,
and can be recognized by looking at the isomorphism classes of Il and Ir only. By looking at
these finite graphs and at their positions it is possible to recognize the letters at position j and
j + 1 in γ and the exact value of j. This information is enough to reconstruct the quasi-level.
References
[AAMBV16] Gideon Amir, Omer Angel, Nicolás Matte Bon, and Bálint Virág. The Liouville property for groups acting on rooted trees. Annales de l'IHP, 2016. To appear.
[AAV13] Gideon Amir, Omer Angel, and Bálint Virág. Amenability of linear-activity automaton groups. J. Eur. Math. Soc. (JEMS), 15(3):705–730, 2013.
[AV12] Gideon Amir and Bálint Virág. Speed exponents of random walks on groups. 2012. Preprint, arXiv:1203.6226.
[AV14] Gideon Amir and Bálint Virág. Positive speed for high-degree automaton groups. Groups Geom. Dyn., 8(1):23–38, 2014.
[BKN10] Laurent Bartholdi, Vadim A. Kaimanovich, and Volodymyr V. Nekrashevych. On amenability of automata groups. Duke Math. J., 154(3):575–598, 2010.
[BM08] S. Bezuglyi and K. Medynets. Full groups, flip conjugacy, and orbit equivalence of Cantor minimal systems. Colloq. Math., 110(2):409–429, 2008.
[Bri14] Jérémie Brieussel. Folner sets of alternate directed groups. Ann. Inst. Fourier (Grenoble), 64(3):1109–1130, 2014.
[BV05] Laurent Bartholdi and Bálint Virág. Amenability via random walks. Duke Math. J., 130(1):39–56, 2005.
[Cor14] Yves Cornulier. Groupes pleins-topologiques [d'après Matui, Juschenko, Monod,...]. Astérisque, (361):Exp. No. 1064, 2014. Séminaire Bourbaki. Vol. 2012/2013.
[DDMN10] Daniele D'Angeli, Alfredo Donno, Michel Matter, and Tatiana Nagnibeda. Schreier graphs of the Basilica group. J. Mod. Dyn., 4(1):167–205, 2010.
[GPS99] Thierry Giordano, Ian F. Putnam, and Christian F. Skau. Full groups of Cantor minimal systems. Israel J. Math., 111:285–320, 1999.
[GPS04] Thierry Giordano, Ian Putnam, and Christian Skau. Affable equivalence relations and orbit structure of Cantor dynamical systems. Ergodic Theory Dynam. Systems, 24(2):441–475, 2004.
[Gri84] R. I. Grigorchuk. Degrees of growth of finitely generated groups and the theory of invariant means. Izv. Akad. Nauk SSSR Ser. Mat., 48(5):939–985, 1984.
[GS83] Narain Gupta and Saïd Sidki. On the Burnside problem for periodic groups. Math. Z., 182(3):385–388, 1983.
[GW15] Eli Glasner and Benjamin Weiss. Uniformly recurrent subgroups. In Recent trends in ergodic theory and dynamical systems, volume 631 of Contemp. Math., pages 63–75. Amer. Math. Soc., Providence, RI, 2015.
[GŻ02] Rostislav I. Grigorchuk and Andrzej Żuk. On a torsion-free weakly branch group defined by a three state automaton. Internat. J. Algebra Comput., 12(1-2):223–246, 2002. International Conference on Geometric and Combinatorial Methods in Group Theory and Semigroup Theory (Lincoln, NE, 2000).
[JM13] Kate Juschenko and Nicolas Monod. Cantor systems, piecewise translations and simple amenable groups. Ann. of Math. (2), 178(2):775–787, 2013.
[JMBMdlS15] Kate Juschenko, Nicolás Matte Bon, Nicolas Monod, and Mikael de la Salle. Extensive amenability and an application to interval exchanges. 2015. Preprint, arXiv:1503.04977.
[JNdlS13] Kate Juschenko, Volodymyr Nekrashevych, and Mikael de la Salle. Extensions of amenable groups by recurrent groupoids. 2013. Preprint, arXiv:1305.2637v2.
[Jus15] Kate Juschenko. Non-elementary amenable subgroups of automata groups. 2015. Preprint, arXiv:1504.00610.
[Mat06] Hiroki Matui. Some remarks on topological full groups of Cantor minimal systems. Internat. J. Math., 17(2):231–251, 2006.
[Mat12] Hiroki Matui. Homology and topological full groups of étale groupoids on totally disconnected spaces. Proc. Lond. Math. Soc. (3), 104(1):27–56, 2012.
[MB15] Nicolás Matte Bon. Topological full groups of minimal subshifts with subgroups of intermediate growth. J. Mod. Dyn., 9(01):67–80, 2015.
[Nek05] Volodymyr Nekrashevych. Self-similar groups, volume 117 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2005.
[Nek06] Volodymyr Nekrashevych. Self-similar inverse semigroups and Smale spaces. Internat. J. Algebra Comput., 16(5):849–874, 2006.
[Nek15] Volodymyr Nekrashevych. Simple groups of dynamical origin. 2015. Preprint, arXiv:1601.01033.
[Nek16] Volodymyr Nekrashevych. Periodic groups from minimal actions of the infinite dihedral group. 2016. Preprint, arXiv:1501.00722.
[Vor12] Yaroslav Vorobets. Notes on the Schreier graphs of the Grigorchuk group. In Dynamical systems and group actions, volume 567 of Contemp. Math., pages 221–248. Amer. Math. Soc., Providence, RI, 2012.
Plausibility and probability in deductive reasoning
Andrew MacFie∗
arXiv:1708.09032v2 [cs.AI] 22 Sep 2017
Abstract
We consider the problem of rational uncertainty about unproven mathematical statements, which Gödel and others have remarked on. Using
Bayesian-inspired arguments we build a normative model of fair bets
under deductive uncertainty which draws from both probability and the
theory of algorithms. We comment on connections to Zeilberger’s notion of
“semi-rigorous proofs”, particularly that inherent subjectivity is an obstacle.
Contents
1 A natural problem: Quantified deductive uncertainty
  1.1 The phenomenon
  1.2 The (modeling) problem
2 Solutions
  2.1 Formal representation of plausibilities
    2.1.1 Plausibility functions
    2.1.2 Languages
    2.1.3 Epistemic quality vs. computation costs
    2.1.4 Conditional plausibility
  2.2 Rational plausibilities
    2.2.1 On bets and scoring rules
    2.2.2 Epistemic quality
3 (Potential) applications and extensions
  3.1 Foundations of “semi-rigorous proofs”
  3.2 Logical omniscience in Bayesian epistemology
  3.3 Decision theory
References

∗ School of Mathematics and Statistics, Carleton University, Ottawa, Canada
1 A natural problem: Quantified deductive uncertainty
1.1 The phenomenon
The phenomenon
Epistemic uncertainty is usually defined as uncertainty among physical states
due to lack of data or information. By information we mean facts which we
observe from the external world. For example, whether it rains tomorrow is
a piece of information we have not observed, so we are uncertain about its
truth value. However, we also may have uncertainty about purely deductive
statements, which are completely determined by the information we have, due
to limited reasoning ability. That is, before we have proved or refuted a mathematical statement, we have some deductive uncertainty about whether there
is a proof or refutation.
Under deductive uncertainty, there is a familiar process of appraising a degree
of belief one way or the other, saying a statement has high or low plausibility.
We may express rough confidence levels in notable open conjectures such as
P ≠ NP [20] or the Goldbach conjecture, and we also deal with plausibility in
everyday mathematical reasoning. Sometimes general patterns show up across
problems and we extrapolate them to new ones. If we have an algorithm and
are told it runs in time O(n lg n), we usually assume that this implies good
practical performance because this is a commonly observed co-occurrence. So
the plausibility of the running time being 10^{10!} · n⌈lg n⌉ is considered particularly
low. Any mathematical result seen as “surprising” must have been a priori
implausible. Et cetera. Many more examples of plausibility in mathematics
may be found in [27, 32].
In some instances it may be natural to quantify deductive uncertainty, and
perhaps speak of “probabilities”. For example, let d be the 10^100th decimal
digit of π. If we have not computed d and all we know is that, say, d is odd,
it feels like d has a uniform probability distribution over {1, 3, 5, 7, 9}. Mazur
[27, Sec. 2] would describe this as an application of the principle of insufficient
reason. We use the same “symmetry” argument to state the probability that
a given number n is prime via the prime number theorem or Fermat primality
test. Probabilities also show up in enumerative induction, where confidence in
a universal quantification increases as individual instances are verified. The
four color theorem is one of many cases where only positive instances could
be found and eventually a proof was given. Furthermore, to this theorem and
other similar claims there are relevant 0-1 laws [3] which state that the “conditional probability” of a uniform random instance being a counterexample,
given that counterexamples exist, goes to 1 asymptotically. With this fact one
can use individual instances to “update” a Bayesian probability on the universal statement. Bayesianism in practical mathematics has been discussed
previously in [5].
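To make the primality example concrete, the following Python sketch (illustrative only; the particular prior and the factor 1/2 per passed test are assumptions made for the example, and Carmichael numbers are ignored) assigns a betting-style plausibility to the statement “n is prime”, starting from the prime number theorem and updating on Fermat tests.

import math
import random

def plausibility_prime(n: int, fermat_rounds: int = 5) -> float:
    # Heuristic plausibility in [0, 1] that n is prime.
    if n < 2:
        return 0.0
    if n in (2, 3):
        return 1.0
    # Prior from the prime number theorem: an integer near n is prime
    # with probability roughly 1 / ln(n).
    p = 1.0 / math.log(n)
    for _ in range(fermat_rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return 0.0  # a Fermat witness refutes primality outright
        # Update: primes always pass, composites are assumed to pass
        # a single test with probability at most 1/2.
        p = p / (p + (1.0 - p) * 0.5)
    return p

For instance, plausibility_prime(10**9 + 7, 10) returns roughly 0.98, while most composite inputs are sent to 0 after a round or two.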
Notwithstanding the examples above, mathematicians generally leave their uncertainty unquantified. This may be due to haziness about, for example, what
a “60% chance” means, and how probability should be used, in the context
of deductive reasoning. One point to emphasize is that we are referring to
subjective uncertainty rather than any due to inherent “randomness” of mathematics. Of course there is nothing random about whether a proof exists of
a given mathematical statement. Frege, speaking on mathematical reasoning,
appears to note this lack of randomness as a problem: “the ground [is] unfavorable for induction; for here there is none of that uniformity which in other
fields can give the method a high degree of reliability” [11]. However, we only
require subjective uniformity in order to have an analogy to other forms of
uncertainty.
1.2 The (modeling) problem
Gödel mentions deductive probabilities in a discussion of empirical methods
in mathematics [14]:
It is easy to give examples of general propositions about integers
where the probability can be estimated even now. For example, the
probability of the proposition which states that for each n there
is at least one digit ≠ 0 between the n-th and n^2-th digits of the
decimal expansion of π converges toward 1 as one goes on verifying
it for greater and greater n.
In commentary, Boolos naturally asks how such probabilities would be computed [4]:
One may, however, be uncertain whether it makes sense to ask
what the probability is of that general statement, given that it has
not been falsified below n = 1000000, or to ask for which n the
probability would exceed .999.
With Boolos, we want to know, how would subjective deductive probabilities
work in general? Are there right and wrong ways to assign these probabilities?
Do they even make sense? These questions have both positive and normative
versions; we focus on the normative.
Bayesianism is the theory of probabilities for physical uncertainty [40]. It
gives an interpretation of probability, where we take a probability space, and
interpret the probability measure as assigning subjective degrees of belief to
events which represent expressible physical states. Looking from the reverse
direction, Bayesianism argues from the nature of physical uncertainty to reach
a conclusion that degrees of belief should form a probability space. In this
second sense we can think of Bayesianism as a proof, where the premise is
physical uncertainty, the inferences are rationality arguments, and the conclusion is probability theory. There are two ways to make use of a proof to
learn something new. First, we can apply the theorem if we are able to satisfy
the premise. Here this would mean reducing deductive uncertainty to physical uncertainty by defining virtual information states. This general problem
of deductive probabilities has received some attention in the literature (the
3
sample space is taken to consist of complete formal theories) as we mention in
later sections. But what if there is no valid way to define virtual information
for deductive uncertainty, i.e. what if probability theory is not appropriate for
deductive uncertainty in this manner? What if something else is? The second
way to learn from a proof is to imitate the proof technique. Here we would
start with the premise of deductive uncertainty, proceed using analogous but
adapted rationality arguments, and reach a conclusion which is a set of mathematical rules possibly different from probability theory. We focus on this
approach.
The first step is to fix a concrete and unambiguous way to quantify uncertainty.
If we assign a number to our belief in a statement, what does that mean?
And is it always possible for us to do so? In Bayesianism, uncertainty is
quantified by a single real number from [0, 1] and a prominent operational
definition of this quantification is based on fair bets [35, Ch. 13, 40, Sec. 2.2.1].
This operationalization appears to work equally well for deductive uncertainty.
That is, anyone can express their uncertainty about a mathematical statement
φ using a number in [0, 1] which encodes the betting payoffs they consider
acceptable if they were to bet on φ. We call these values plausibilities. (This
is not to be confused with other usages such as “plausibility measures” [17,
Sec. 2.8]). We assume this operationalization is meaningful and understood;
for further details and justifications see Sec. 2.2.1.
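For instance, under this betting reading a plausibility of 0.6 for φ means that one regards as fair a bet in which one pays 0.6 units for a ticket worth 1 unit if φ turns out to be true and nothing otherwise, i.e. odds of 3 : 2 on φ; in the π example above, a plausibility of 1/5 for “d = 7” corresponds to risking one unit to win four.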
In the context of physical uncertainty, Bayesianism adds constraints on what
plausibilities/probabilities should be for a rational agent, namely coherence (so
plausibilities are probabilities; see [30] for a modern asset pricing perspective),
conditionalization, regularity and other guidance for selecting priors [40]. Also,
probabilistic forecasts may be rated on accuracy using loss functions [25].
However, the assumptions of Bayesianism on which these constraints are based
do not necessarily still apply to deductive plausibilities and indeed we may
have additional or different requirements. Thus the precise question to answer is, what constraints should be put on deductive plausibilities and what
mathematical structure results?
2 Solutions
2.1 Formal representation of plausibilities
2.1.1 Plausibility functions
Fix an encoding of deductive statements into finite strings so that there is
a decision problem Π ⊆ {0, 1}∗ corresponding to the true statements. We
take an association of plausibilities to encoded statements as a function p :
{0, 1}∗ → [0, 1]. We call p a plausibility function. A plausibility function
represents an agent’s uncertainty about Π. One could also have defined a
plausibility function to take a finite sequence of input strings and return a
finite sequence of plausibilities, that is, working at the level of bet systems
instead of bets.
2.1.2 Languages
Finding proofs is a matter of computation, so our reasoning abilities are equivalent to our computational resources; and generally we will experience deductive
uncertainty when faced with any intractable computational problem. Importantly, we cannot meaningfully talk about problems with only one input, since
obviously the best output is the actual truth value. So we must talk of uncertainty about an entire set of inputs simultaneously.
Probability spaces consist of a measurable space and a probability measure. In
Bayesianism, the measurable space may be substituted by a “sentence space”
which is closed under logical operations. In the deductive case, any nontrivial
problem Π has an input set that is trivially closed under logical operations,
since any input is logically equivalent to “true” or “false”. We conclude that
the problem Π need not have any particular syntactic structure and we may
consider standard problems from theoretical computer science.
There is a line of research on deductive uncertainty where the inputs come
from a first-order language. Typically this work aims at finding composites of
logic and probability theory, and there is less focus on practicality. The most
recent work is by Garrabrant et al. [13] and [13, Sec. 1.2] reviews previous
literature. In the present work we instead restrict our attention to decidable
problems. We do this because inconsistent logics do not make sense in terms
of betting. So first-order logics are problematic due to Gödel’s first and second
incompleteness theorems.
2.1.3 Epistemic quality vs. computation costs
For a given problem Π ⊆ {0, 1}∗ we seek an epistemic improvement relation ≺Π
on plausibility functions, where q ≺Π p iff p is a strictly better uncertainty assignment for Π than q, ignoring any computational costs of the functions. For
example, if we decide to require regularity, we would say that for all functions
p and q, if p is regular and q is not then q ≺Π p. Guidance for selecting plausibility functions is then based on weighing ≺Π against computational costs. If
we are given a probability distribution on inputs, we take the distributional decision problem (Π, D) and consider a distribution-specific relation ≺(Π,D) . We
refer to the relation as an order but it is not necessarily total.
Improvement relations can be found in other modeling contexts. For algorithm running time sequences we use asymptotic relations such as big O or
polynomial growth vs. non-polynomial growth. Fine-grained relations may be
used if the model of computation is fixed. The Nash equilibrium represents a
kind of ordering which expresses that a strategy profile can be improved from
one player’s perspective. Pareto optimality is similar but for group rationality.
Another example is the Marshall/Kaldor-Hicks social welfare improvement relation in economics [9, 12]. This last relation is useful even though it cannot
be defined to be both transitive and antisymmetric.
In general we have a tradeoff between epistemic quality (whatever we determine that to be) and computational complexity. A theory of deductive uncertainty must not only define gradients of epistemic quality but dictate how to
make this tradeoff. If we allow arbitrary computations, the problem immediately disappears. E.g. there is a temptation to look into Solomonoff induction
[26] as a model of inductive reasoning applied to mathematical knowledge.
This would be an attempt to formalize, e.g. Pólya’s patterns of plausible reasoning [27, 32], such as “A analogous to B, B more credible =⇒ A somewhat
more credible”. However we must be cautious, because an incomputable prior
cannot be the correct tradeoff between quality and efficiency.
Computation costs may or may not be measured asymptotically. Asymptotic
means no finite set of inputs can make a difference. If we use asymptotic complexity this forces us to define ≺Π so that it is compatible, i.e. also asymptotic.
As an example, utility in game theory is generally not compatible with asymptotic computation costs. There are, however, game models which trade off
running time and other considerations in a non-trivial way, for example using
discounted utility. In economics and game theory, the concept of “bounded rationality” refers to decision making with limitations on reasoning/optimization
power, such as imperfect recall, time constraints, etc. [34]. We note some
economic models which incorporate bounded computation: game-playing Turing machines with bounded state set [28], automata models [31], machines as
strategies and utility affected by computation cost [10, 18], information asymmetry in finance [1]. Rubinstein [34] said in 1998, “I have the impression
that many of us feel that the attempts to model bounded rationality have yet
to find the right track.” Perhaps progress has been made since then. If a game
model uses a practically awkward criterion for algorithm performance, simple models of computation may be used, or equilibria may be reasoned about
without analyzing a particular algorithm.
A simple approach to the tradeoff is to fix a resource bound and consider
as “feasible” only functions that can be computed within the bound. Then,
given ≺Π , we optimize over the subset of feasible plausibility functions. This
is the method we focus on in the remainder. E.g. we may assume the Cobham-Edmonds thesis and consider ≺Π restricted to polynomial-time-computable
functions.
We make the assumption that we, as modelers, are always capable of analyzing given plausibility functions in whatever way is necessary to evaluate ≺Π and
analyze computational complexity. This is of course not true, as discussed
in [2, 29] which consider bounded rationality in economic systems. However
this is a worthy assumption since building meta-uncertainty into the model
creates a regress which would add significant complexity. Thus we can say
that optimizing ≺Π is the rational way to select a plausibility function even if
we are not currently able to do so constructively. Particularly, when we analyze functions according to an input distribution, the business of defining the
distribution is that of the unbounded analyst. In practice, e.g. approximation
algorithms are analyzed, even if the problem they attempt to approximate is
intractable.
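As a toy illustration of this approach (an assumption-laden sketch, not a prescription from the text), one can fix a small set of candidate functions regarded as feasible and let the unbounded analyst, who may call a membership oracle freely, pick the candidate with the best expected Brier score on a sample from the input distribution D:

```python
from typing import Callable, Iterable, List

def brier(truth: int, x: float) -> float:
    """Quadratic (Brier) score; lower is better."""
    return (truth - x) ** 2

def select_feasible(candidates: List[Callable[[str], float]],
                    sample: Iterable[str],
                    in_pi: Callable[[str], bool]) -> Callable[[str], float]:
    """Return the candidate plausibility function with the lowest average
    Brier score on a sample drawn from D. The membership oracle `in_pi`
    is used only for this analysis, never by the agent itself."""
    sample = list(sample)
    def avg_score(p: Callable[[str], float]) -> float:
        return sum(brier(int(in_pi(w)), p(w)) for w in sample) / len(sample)
    return min(candidates, key=avg_score)
```

The candidates might be, say, the constant function 1/2 and a cheap heuristic such as the Fermat-test function sketched in Sec. 2.1.1.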
2.1.4 Conditional plausibility
If we select plausibility functions by optimizing ≺Π over feasible functions, the
definition of feasibility could change over time or simply vary across contexts,
so in general we speak of conditional plausibility functions p(·|S), where S is an
oracle or complexity class or other representation of the resources available to
compute the function. Another interpretation is that, over time, computation
costs may effectively come down, enlarging the budget set of feasible functions.
This notation and terminology indicate an analogy where knowledge, in the
computational sense (roughly that of [15, Sec. 9.2.3, 16, Sec. 7.2]), takes the
place of information.
In full generality, an agent may have an internal state which affects the plausibility function computed. Specifically, over the life of the agent it may perform
background computations which add to a growing “knowledge base” so that
later plausibility functions are informed by a greater knowledge base than
previous ones. The background processes themselves are outside the scope
of our analysis; their performance is determined by such things as expected
times when queries arrive which would add complication. However, if the
background processes are assumed to run for a long time, they may generate knowledge inaccessible to an efficient algorithm serving a query. Different
knowledge may be (pre-)computed at different times, i.e. if all processes are
dovetailed and have different sequences of times when a certain input length
is ready. Alternatively, an agent may gain access to an external oracle. We
assume that any distribution on inputs stays fixed.
In Bayesianism, conditionalization is the process by which updates on new
information must occur. I.e. after observing A, our new probability of B is
P (B|A) = P (A ∩ B)/P (A). We note that conditionalization is a uniform
process in that there is a finite rule that performs updates for all events A
at all times. If there is an infinite set of possible S, we could restrict to
uniform-updating plausibility functions, i.e. those which take a representation
of S as a parameter. In, for example, Garrabrant’s model [13], the plausibility
function takes an additional parameter n, which is the number of stages to
run. However this level of analysis is beyond our scope.
2.2 Rational plausibilities
2.2.1 On bets and scoring rules
We give some further clarifying comments on betting situations. With fair
bets we implicitly assume linear utility in money but we can discharge the
assumption: First find the agent’s utility of money with lotteries then scale
units of money to have a linear relationship with utility. Bet decisions have
to include the chance to buy, sell short or do nothing; only choosing between
buy and nothing allows p ≡ 0 to mean no bets at all. Wagers must be finite,
otherwise analysis is trivial. It is possible that a bet event itself includes new
knowledge on which we should update before betting. This would be the case
if the party offering the bet is known to have access to a powerful oracle. For
operationalization purposes we consider situations where we do not update on
the bet itself; for example, if the offering party is known to be stupid or not self-interested. Bets on mathematical facts may not have predetermined resolution
times. For some propositions neither a positive nor a negative resolution is
likely in a reasonable timeframe. And when the truth finally does come out,
an agent’s circumstances will have changed. Agents will put at least some
probability on resolution by any given time, so wagers can be scaled up so
that agents at least consider it worth their time to make a bet.
Other methods of eliciting quantified uncertainty are equivalent to betting.
For example, fair bet odds are equivalent to asking “what is an objective
chance p̄ such that your uncertainty of this event is equivalent to that of an
event with objective chance p̄?”. The betting elicitation is also equivalent
to strictly proper scoring rules, which go back to de Finetti and beyond [6].
This refers to a situation where the agent is asked for a number x ∈ [0, 1]
representing uncertainty about a proposition φ, and then the agent receives a
score of B([φ], x). Here we use Iverson brackets where [φ] is 1 if φ is true, and
0 otherwise. (The word “score” is a bit of a misnomer because lower scores
are better.) The scoring function B is strictly proper iff
\[ \operatorname*{arg\,min}_{x}\; y\,B(1, x) + (1 - y)\,B(0, x) \;=\; y, \]
i.e. if [φ] is a Bernoulli(y)-distributed random variable, we obtain the optimal expected score by choosing x = E[φ] = y. Situations in which proper
scoring rules are being applied are subject to considerations similar to those
above for betting. We note that with both betting and scoring, the agent is
presented with a simple decision where the only computational difficulty comes
from the one instance of some decision problem. This allows us to conclude
that their equivalence holds in the computational setting as well.
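The displayed propriety condition is easy to check numerically. The sketch below (illustrative only; names and grid are arbitrary choices) evaluates the expected score y·B(1, x) + (1 − y)·B(0, x) over a grid of reports x for the Brier and logarithmic rules and confirms that the minimizer is x = y:

```python
import numpy as np

def brier(truth, x):
    return (truth - x) ** 2

def log_score(truth, x):
    eps = 1e-12                                  # avoid log(0)
    return -np.log(x + eps) if truth == 1 else -np.log(1.0 - x + eps)

def best_report(rule, y, grid=np.linspace(0.001, 0.999, 999)):
    """Report x minimizing the expected score when [phi] ~ Bernoulli(y)."""
    expected = y * rule(1, grid) + (1 - y) * rule(0, grid)
    return grid[np.argmin(expected)]

for y in (0.1, 0.37, 0.9):
    # Both rules are strictly proper, so the minimizer is (about) y itself.
    print(y, best_report(brier, y), best_report(log_score, y))
```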
2.2.2 Epistemic quality
It is desirable to be calibrated in terms of frequencies, which means
\[ \frac{\bigl|\{\omega \in \Pi \cap \Omega : p(\omega) \approx c\}\bigr|}{\bigl|\{\omega \in \Omega : p(\omega) \approx c\}\bigr|} \;\approx\; c, \]
for all c ∈ [0, 1], where Ω is some set of inputs. We would expect this, for example, if plausibilities are compatible with objective chances. More generally, let
D = (Dn ) be a distribution ensemble and let F be a set of “simple” functions.
If the members of F are easy to compute, our plausibilities should make use
of any knowledge contained in these simple functions. We say a plausibility
function p is calibrated with respect to F if for all f ∈ F ,
\[ \bigl|\operatorname{Cov}_{D_n}(f, 1_\Pi) - \operatorname{Cov}_{D_n}(f, p)\bigr| \le \epsilon, \]
for small ε. If p fails to be calibrated for some f, then we are leaving easy
knowledge on the table. Formally we could say if p is calibrated and q is not,
then q ≺(Π,D) p.
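Concretely, the calibration condition can be estimated by Monte Carlo. The following sketch (illustrative names and choices, again relying on the unbounded analyst for the membership oracle) computes the largest covariance gap over a finite family F on a sample from D_n:

```python
import numpy as np

def calibration_gap(p, F, sample, in_pi):
    """Estimate max_{f in F} |Cov_{D_n}(f, 1_Pi) - Cov_{D_n}(f, p)|.

    p      : plausibility function, str -> [0, 1]
    F      : list of "simple" functions, str -> float
    sample : list of strings drawn from D_n
    in_pi  : membership oracle, used only for the analysis
    """
    truth = np.array([float(in_pi(w)) for w in sample])
    plaus = np.array([p(w) for w in sample])
    gaps = []
    for f in F:
        fv = np.array([f(w) for w in sample])
        gaps.append(abs(np.cov(fv, truth)[0, 1] - np.cov(fv, plaus)[0, 1]))
    return max(gaps)
```

On this empirical reading, p is calibrated with respect to F when the returned gap is below the chosen ε.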
Beyond calibration, we look at utility considerations. Since plausibilities encode how we make simple decisions, more desirable plausibility values are those
that lead to better outcomes from decisions. Dutch book arguments justifying
Bayesianism are essentially saying if you can avoid losing money in a bet, you
should. On the other hand, scoring rules are generally considered to index epistemic quality. In fact, the concepts of betting and scoring rules are essentially
the same. It is shown in [33] that Dutch books exist iff an agent’s forecasts are
strictly dominated, where domination means there is another plausibility assignment that obtains a better score in all outcomes, according to a continuous
proper scoring rule. Conditionalization also follows from optimizing a proper
scoring rule. We take this scoring rule characterization of Bayesianism (which
led to probability theory in that case) and apply it to deductive plausibilities
via analysis of plausibility functions.
Proper scoring rules conveniently associate a real number to each of our plausibilities. There are three possible approaches for choosing a rule. First, we may
simply say q ≺Π p if p performs no worse than q according to all proper scoring
rules, and p performs strictly better according to at least one; or alternatively,
if p performs strictly better for all proper scoring rules. Second, we may put a
distribution on proper scoring rules, and consider the expected score over all
rules. Third, we may use a single rule (technically a degenerate distribution).
The third option is simplest, although there are various proper scoring rules
and they may generally give different results. The Brier score has some desirable properties [36]. The logarithmic score also has desirable properties and
is closely related to information-theoretic entropy. However, given any proper
scoring rule, one can always construct a decision environment such that performance in the environment is precisely performance according to the scoring
rule.
Worst-case scoring of plausibility functions leads to trivialities since p ≡ 1/2
is optimal unless the problem can be exactly solved. An alternative in some
cases would be to consider inputs of length ≤ n rather than n. We focus on
the standard practice of considering inputs of the same length, but we employ
average-case analysis, i.e. using expected scores. We break into two cases
depending on whether the environment (input distribution) is fixed or may be
a function of p. We say arbitrage of the first type exists if p scores suboptimally
for a fixed input distribution. Arbitrage of the second type exists if there is
some computationally bounded agent which looks at p, then generates an input
distribution and scores better on that distribution than p does.
If inputs are distributed according to an ensemble D, we may say that q ≺(Π,D) p
if the expected score of p is less than that of q. This is the model used in
[24]. A very similar view is considering p as an unnormalized distribution on
strings of length n, and finding the statistical distance between the normalized
distribution and the normalized distribution corresponding to 1Π . There we
would require at least that E(p) = E(1Π ).
If the input distribution is not fixed, we are in a situation where an adversary
may choose the distribution based on p. If I prove that u ∈ Π =⇒ v ∈ Π
but p(u) > p(v), I have found an inconsistency; likewise, if I know w ∈ Π and
p(w) = 0, this is an inconsistency as well. Adversaries may exploit both kinds
of inconsistencies if they can find the flawed plausibilities, dominate them, and
produce an input distribution which highly weights those inputs. Technically,
if we know we are at risk to be exploited by an adversary offering a bet, we
will update rather than bet. But we want to do this as little as possible. We
still want to maximize the situations where we do bet, because those are the
ones where we stand to win.
We focus on worst-case adversaries because these are the ones most likely to
“find” us (thinking of zero-sum games). We must talk about bets asymptotically since there will always be individual bets that we lose. “Bets” here are
the adversary selling us the difference between our score and the adversary’s
score, on the input distribution generated by the adversary. For each input
length n, the bettor takes as input 1^n and outputs the bet. An “implicit”
adversarial bettor would randomly generate a set of inputs and the bettor’s
plausibilities for those inputs. An “explicit” bettor deterministically outputs
a small set of inputs, weights on those inputs, and the bettor’s plausibilities
for the inputs. Small sets of weighted inputs could be made into a single bet,
where we are sold the difference of expected scores. The adversary’s output
distribution, or the weights, determine D.
We can consider the case of an infinite loss over infinite time, or finite loss in
finite time. Fix a scoring rule B. For infinite loss, a function p is exploited
by a complexity class C if there is a C^p-samplable distribution D and a C^p-computable plausibility function p_C such that
\[ \sum_n \left( \mathbb{E}_{D_n} B(1_\Pi, p) - \mathbb{E}_{D_n} B(1_\Pi, p_C) \right) = \infty \]
and the terms are eventually nonnegative. Here C^p-samplable and C^p-computable mean samplable/computable by a machine with resources associated with a complexity class C, and oracle access to p. So we say q ≺Π p if for some complexity class C, the plausibility function q is exploited by C and p is not. In the case of finite loss, we say a function p is exploited by a complexity class C if there is a C^p-samplable distribution D and a C^p-computable plausibility function p_C such that
\[ \mathbb{E}_{D_n} B(1_\Pi, p) \gg \mathbb{E}_{D_n} B(1_\Pi, p_C), \qquad n \to \infty. \]
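The finite-loss condition can be probed empirically. In the sketch below (a simplification: in the definition, D and p_C are produced by the adversary with oracle access to p, whereas here they are simply passed in), we estimate the per-length expected Brier score gap and ask whether it remains large as n grows:

```python
import numpy as np

def score_gap_by_length(p, p_c, sample_dn, in_pi, lengths, m=1000):
    """Monte-Carlo estimate of E_{D_n} B(1_Pi, p) - E_{D_n} B(1_Pi, p_c)
    for each n in `lengths`, with the Brier score B(t, x) = (t - x)^2.
    sample_dn(n, m) draws m strings of length n from D_n."""
    gaps = []
    for n in lengths:
        ws = sample_dn(n, m)
        t = np.array([float(in_pi(w)) for w in ws])
        gap = np.mean((t - np.array([p(w) for w in ws])) ** 2) \
            - np.mean((t - np.array([p_c(w) for w in ws])) ** 2)
        gaps.append(gap)
    return gaps  # exploitation (finite-loss sense) shows up as persistently large gaps
```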
3 (Potential) applications and extensions
3.1 Foundations of “semi-rigorous proofs”
Zeilberger presented the concept of a “semi-rigorous proof” and predicted that
it might become acceptable by the mathematical community: “I can envision
an abstract of a paper, c. 2100, that reads: ‘We show, in a certain precise sense,
that the Goldbach conjecture is true with probability larger than 0.99999’” [41].
In order for such a result to be useful, there must be cases where plausibilities
are objective.
Are plausibilities objective or subjective? Should we expect people to agree on
plausibilities? There are various sources of subjectivity: how to embed individual questions in problems, first-order logic issues, and problems Π where ≺Π
has no unique optimum. For example, take Gödel’s π problem from Sec. 1.2:
Let φ represent the sentence, for each n there is at least one digit ≠ 0 between
the n-th and n²-th digits of the decimal expansion of π. First, there may not
be a proof or refutation of φ, in which case betting outcomes are undefined.
Second, if φ is decidable, its truth value is a single bit, so the optimal plausibility is simply the truth value. We would need to embed φ in a class of
problem instances to avoid this triviality, and there are different ways of doing
so. If the problem is too easy, the answer is still trivial. If it is hard, we do
not learn much from the answer. In the context of mathematics, we need not
consider adversarial inputs, so we fix an environmental distribution. Analyzing general methods of computing plausibilities, rather than individual values,
does make sense, since people typically use heuristics in a consistent manner
across problems. To condition on observed digits of π, we can allow access to
an oracle that checks φ for large ranges of the digits of π, but again we have
to define the oracle not to be too weak or too powerful.
We stated at the end of Sec. 1.1 that mathematicians may be uncomfortable
with putting too much focus on plausibilities. Gödel says, “I admit that every
mathematician has an inborn abhorrence to giving more than heuristic significance to such inductive arguments” [14]. Also, Corfield notes, “Pólya himself
had the intuition that two mathematicians with apparently similar expertise
in a field might have different degrees of belief in the truth of a result and treat
evidence for that result differently” [5]. However, the physics community has
had notable success using non-rigorous mathematical methods.
One practical issue with publishing probabilities for mathematical problems is
error amplification. If we take the conjunction of two “independent” uncertain
statements we end up with uncertainty greater than that of either of the original statements, which means confidence erodes over time. In mathematics
this is undesirable since we are used to taking arbitrarily long sequences of
conjunctions and implications with no loss of validity.
More on probabilistic proofs is found in [7]. The potential for models of uncertainty in mathematics to explain the use of large computations to increase
confidence in conjectures is noted in [5].
3.2 Logical omniscience in Bayesian epistemology
Even within traditional Bayesianism, the assumption of computational unboundedness can be undesirable; this is known as the problem of logical omniscience [19, 21, 39]. Some work has been done on formal models for logical nonomniscience, including a resource-bounded AIXI [22] and resource-bounded
Solomonoff prior [26]. Even approximate inference in Bayesian networks is
NP-hard although there is research on methods such as belief propagation
which succeed in special cases. The general problem remains, at least in the
following form. Solomonoff’s recursion-theoretic notion of extrapolation [23,
26] gives an ideal family of “universal” prior distributions m suitable for general agents. However, it lacks a useful improvement relation on approximations
since there is only an ideal, which is incomputable, and no guidance on how
to approximate it optimally. This sort of problem is why Ben Goertzel said in
2011, “the theory of rationality under limited computing resources is just not
that clear”.
Perhaps we can generalize some of the above modeling ideas to get a concept
of optimal feasible approximation to a probability measure. In the above we
could view our task as approximating trivial probability measures, with 0 for
false events, 1 for true, and then consider extending to other measures such
as m. We would have the rule: The epistemically normative feasible measure
is the optimal feasible approximation to the normative infeasible measure. For
measures such as m which are defined on {0, 1}∗, we may define normalized
probability distributions mn on strings of length n, or Bernoulli distributions
for each individual input. For each n, we can compare mn or m and the
approximating function using statistical distance to get a real sequence of error
values that can be used to define the quality of the approximation. This would
involve generalizing from proper scoring rules to, for example, the generalized
Kullback-Leibler divergence.
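For reference, both comparisons mentioned above are one-liners once the measures are tabulated on strings of a fixed length n. The sketch below (assuming the 2^n values fit in memory, which only makes sense for small n) computes the total variation distance between the normalized distributions and the generalized Kullback-Leibler divergence between the unnormalized measures:

```python
import numpy as np

def total_variation(p, q):
    """TV distance between the normalizations of two nonnegative vectors."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return 0.5 * float(np.abs(p / p.sum() - q / q.sum()).sum())

def generalized_kl(p, q, eps=1e-12):
    """Generalized KL divergence sum p*log(p/q) - sum p + sum q,
    defined for unnormalized nonnegative measures."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)) - p.sum() + q.sum())
```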
3.3 Decision theory
Our approach has been to focus on an epistemic problem and simple decision contexts, and not consider more complex decision making. It is not clear
what relationship plausibilities should have with a more general decision theory.
Also, in either physical or deductive uncertainty, expected utility maximization
must be modified in the case of bounded agents, because tasks other
than plausibility assignment can be computationally hard as well. The usual
assumption of consistency allows simple decisions about bets and utilities to
uniquely determine arbitrarily complex decisions. In the face of computational
problems which can only be approximated by an actual agent, such as integration or optimization, this kind of consistency is infeasible. Thus lifting a
theory of bets to a theory of general decisions involves additionally considering
computational solutions to the various problems that are involved in general
decisions. Our approach here may already include important pieces of and
concepts for a more general theory.
Models of deductive uncertainty based on formal logic have been connected
to applications in highly reliable artificial intelligence [37], updateless decision
theory, reflective oracles [8], and logical counterfactuals [38]. It is a possibility
that the present models have uses there too.
Acknowledgements. I thank Zhicheng Gao and Corey Yanofsky for useful pointers, and Boaz Barak and Vadim Kosoy for crucial extended discussions.
References
[1] S. Arora et al. “Computational complexity and information asymmetry in financial products”. In: ICS. 2010, pp. 49–65.
[2] R. J. Aumann. “Musings on information and knowledge”. In: Econ Journal Watch 2.1 (2005), pp. 88–96.
[3] E. A. Bender, Z.-C. Gao, and L. B. Richmond. “Submaps of maps. I. General 0–1 laws”. In: Journal of Combinatorial Theory, Series B 55.1 (1992), pp. 104–117.
[4] G. Boolos. “Introductory note to *1951”. In: Kurt Gödel Collected Works, Vol. III. Ed. by S. Feferman. Oxford University Press, New York, 1995.
[5] D. Corfield. Towards a Philosophy of Real Mathematics. Cambridge Univ Press, 2003.
[6] B. de Finetti. “The role of ‘Dutch books’ and of ‘proper scoring rules’”. In: The British Journal for the Philosophy of Science 32 (1981), p. 55.
[7] K. Easwaran. “Probabilistic Proofs and Transferability”. In: Philosophia Mathematica 17.3 (2009), pp. 341–362. url: http://philmat.oxfordjournals.org/content/17/3/341.abstract.
[8] B. Fallenstein, J. Taylor, and P. F. Christiano. “Reflective oracles: A foundation for game theory in artificial intelligence”. In: International Workshop on Logic, Rationality and Interaction. Springer. 2015, pp. 411–415.
[9] A. M. Feldman. “Kaldor-Hicks Compensation”. In: The New Palgrave Dictionary of Economics and the Law 2 (1998), pp. 417–421. url: http://www.brown.edu/Departments/Economics/Facult
[10] L. Fortnow and R. Santhanam. “Bounding rationality by discounting time”. In: arXiv preprint arXiv:0911.3162 (2009).
[11] G. Frege. The foundations of arithmetic. Die Grundlagen der Arithmetik. A logico-mathematical enquiry into the concept of number. Eine logisch mathematische Untersuchung über den Begriff der Zahl. Containing a 1974 translation by J. L. Austin and a reprinting of the 1884 German original; third reprinting of the second revised edition of 1953. Northwestern University Press, Evanston, Ill., 1884, xii+xi+119 pp. (opposite pages numbered in duplicate).
[12] D. Friedman. Price Theory: An Intermediate Text. South-Western, 1986. url: http://www.daviddfriedman.com/Academic/Price_Theory/PThy_ToC.html.
[13] S. Garrabrant et al. “Logical Induction”. In: CoRR abs/1609.03543 (2016). url: http://arxiv.org/abs/1609.03543.
[14] K. Gödel. “Some basic theorems on the foundations of mathematics and their implications (*1951)”. In: Kurt Gödel Collected Works, Vol. III. Ed. by S. Feferman. Oxford University Press, New York, 1995.
[15] O. Goldreich. Computational Complexity. A Conceptual Perspective. Cambridge University Press, Cambridge, 2008, pp. xxiv+606.
[16] S. Goldwasser, S. Micali, and C. Rackoff. “The knowledge complexity of interactive proof systems”. In: SIAM Journal on Computing 18.1 (1989), pp. 186–208.
[17] J. Y. Halpern. Reasoning about uncertainty. MIT Press, 2005.
[18] J. Y. Halpern and R. Pass. “Algorithmic rationality: Game theory with costly computation”. In: Journal of Economic Theory 156 (2015), pp. 246–268.
[19] J. Y. Halpern and R. Pucella. “Dealing with logical omniscience: Expressiveness and pragmatics”. In: Artificial Intelligence 175.1 (2011), pp. 220–235.
[20] L. A. Hemaspaandra. “Complexity Theory Column 36”. In: SIGACT News 33.2 (June 2002), pp. 34–47. issn: 0163-5700. url: http://doi.acm.org/10.1145/564585.564599.
[21] V. Hendricks and J. Symons. “Epistemic Logic”. In: The Stanford Encyclopedia of Philosophy. Ed. by E. N. Zalta. Fall 2015. Metaphysics Research Lab, Stanford University, 2015.
[22] M. Hutter. Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Berlin: Springer, 2005. url: http://www.hutter1.net/ai/uaibook.htm.
[23] R. Impagliazzo and L. A. Levin. “No better ways to generate hard NP instances than picking uniformly at random”. In: Foundations of Computer Science, 1990. Proceedings., 31st Annual Symposium on. IEEE. 1990, pp. 812–821.
[24] V. Kosoy. “Optimal Polynomial-Time Estimators: A Bayesian Notion of Approximation Algorithm”. In: CoRR abs/1608.04112 (2016). url: http://arxiv.org/abs/1608.04112.
[25] T. L. Lai, S. T. Gross, D. B. Shen, et al. “Evaluating probability forecasts”. In: The Annals of Statistics 39.5 (2011), pp. 2356–2382. url: http://statweb.stanford.edu/~ckirby/lai/pubs/201
[26] M. Li and P. Vitányi. An introduction to Kolmogorov Complexity and its Applications. Springer Science & Business Media, 2009.
[27] B. Mazur. “Is it plausible?” In: Mathematical Intelligencer 36.1 (2014), pp. 25–33.
[28] N. Megiddo and A. Wigderson. “On play by means of computing machines: preliminary version”. In: Proceedings of the 1986 Conference on Theoretical Aspects of Reasoning about Knowledge. Morgan Kaufmann Publishers Inc. 1986, pp. 259–274.
[29] S. Modarres-Mousavi. “Methodological Foundations for Bounded Rationality as a Primary Framework”. PhD thesis. Virginia Tech, 2002.
[30] R. F. Nau. “De Finetti was right: probability does not exist”. In: Theory and Decision 51.2 (2001), pp. 89–124.
[31] C. H. Papadimitriou and M. Yannakakis. “On Complexity As Bounded Rationality (Extended Abstract)”. In: Proceedings of the Twenty-sixth Annual ACM Symposium on Theory of Computing. STOC ’94. Montreal, Quebec, Canada: ACM, 1994, pp. 726–733. url: http://doi.acm.org/10.1145/195058.195445.
[32] G. Pólya. Mathematics and Plausible Reasoning: Patterns of Plausible Inference. Vol. 2. Princeton University Press, 1968.
[33] J. B. Predd et al. “Probabilistic coherence and proper scoring rules”. In: IEEE Transactions on Information Theory 55.10 (2009), pp. 4786–4792. url: http://www.princeton.edu/~osherson/papers/nuEll11.pdf.
[34] A. Rubinstein. Modeling Bounded Rationality. MIT Press, 1998.
[35] S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. 3rd ed. Upper Saddle River, NJ, USA: Prentice Hall Press, 2009.
[36] R. Selten. “Axiomatic characterization of the quadratic scoring rule”. In: Experimental Economics 1.1 (1998), pp. 43–62.
[37] N. Soares and B. Fallenstein. Aligning Superintelligence with Human Interests: A Technical Research Agenda. Tech. rep. MIRI, 2014. url: https://pdfs.semanticscholar.org/d803/3a31449
[38] N. Soares and B. Fallenstein. “Toward Idealized Decision Theory”. In: CoRR abs/1507.01986 (2015). url: http://arxiv.org/abs/1507.01986.
[39] W. Talbott. “Bayesian Epistemology”. In: The Stanford Encyclopedia of Philosophy. Ed. by E. N. Zalta. Winter 2016. Metaphysics Research Lab, Stanford University, 2016.
[40] J. Weisberg. “Varieties of Bayesianism”. In: Handbook of the History of Logic. Ed. by D. Gabbay, S. Hartmann, and J. Woods. Vol. 10. Elsevier, 2011, pp. 477–552. url: http://www.utm.utoronto.ca/~weisber3/articles/VarietiesvF.pdf.
[41] D. Zeilberger. “Theorems for a price: tomorrow’s semi-rigorous mathematical culture”. In: Notices of the American Mathematical Society 40.8 (1993), pp. 978–981. url: https://arxiv.org/pdf/math/9301202.pdf.
Adaptive and Resilient Soft Tensegrity Robots
John Rieffel1,∗ and Jean-Baptiste Mouret2,3,4,∗,†
1
2
3
4
∗
Union College, Schenectady, NY 12308, USA
Inria Nancy Grand - Est, Villers-lès-Nancy, F-54600, France
CNRS, Loria, UMR 7503, Vandœuvre-lès-Nancy, F-54500, France
Université de Lorraine, Loria, UMR 7503, Vandœuvre-lès-Nancy, F-54500, France
J.R. and J.-B. M. contributed equally to this work
†To whom correspondence should be addressed; E-mail:[email protected]
Preprint — February 21, 2018
arXiv:1702.03258v2 [cs.RO] 19 Feb 2018
Living organisms intertwine soft (e.g., muscle) and
hard (e.g., bones) materials, giving them an intrinsic
flexibility and resiliency often lacking in conventional
rigid robots. The emerging field of soft robotics seeks
to harness these same properties in order to create
resilient machines. The nature of soft materials, however, presents considerable challenges to aspects of
design, construction, and control – and up until now,
the vast majority of gaits for soft robots have been
hand-designed through empirical trial-and-error. This
manuscript describes an easy-to-assemble tensegrity-based soft robot capable of highly dynamic locomotive
gaits and demonstrating structural and behavioral resilience in the face of physical damage. Enabling this
is the use of a machine learning algorithm able to discover effective gaits with a minimal number of physical trials. These results lend further credence to soft-robotic approaches that seek to harness the interaction of complex material dynamics in order to generate
a wealth of dynamical behaviors.
Introduction
Unlike machines, animals exhibit a tremendous amount of resilience, due in part to their intertwining of soft tissues and rigid
skeletons. In nature, this suppleness leads to several compelling
behaviors which exploit the dynamics of soft systems. Octopi, for
example, are able to adaptively shape their limbs with “joints” in
order to perform efficient grasping (Sumbre et al., 2005). Jellyfish exploit their inherent elasticity in order to passively recover energy during swimming (Gemmell et al., 2013). Manduca sexta caterpillars have a mid-gut which acts like a “visceral-locomotory piston” – sliding forward ahead of the surrounding
soft tissues, shifting the animal’s center of mass forward well before any visible exterior change (Simon et al., 2010).
Taking inspiration from the natural world, the field of soft
robotics seeks to address some of the constraints of conventional
rigid robots through the use of compliant, flexible, and elastic
materials (Lipson, 2014; Wehner et al., 2016). Trimmer et al., for
instance, construct soft robots from silicone rubber, using shape-memory alloy (SMA) micro-coil actuation, which can slowly
crawl in controlled fashion (Trimmer, 2008) or roll in an uncontrolled ballistic fashion (Lin et al., 2011). Similarly, research
Fig. 1. Concept of our soft tensegrity robot. Tensegrity structures are combinations of rigid elements (struts) joined at their
endpoints by tensile elements (spring or cables) that are kept stable by the interplay of pre-stress forces. A. The first tensegrity
structures appeared in art, with the sculptures of Kenneth Snelson (Skelton and de Oliveira, 2009; Snelson, 2012). B. They
have been subsequently used in architecture, for instance for the
Kurilpa bridge (Brisbane, Australia). C. More recently, tensegrity
has been found to be a good model of the mechanotransduction of living cells (Wang et al., 2001). D. Our tensegrity robot is
based on carbon struts and springs. It is actuated by 3 vibrators
(glued to 3 of the struts) whose frequency is automatically tuned
by a trial-and-error learning algorithm (Methods). E. Thanks to
the tensegrity structure and to the compliance of the springs, our
robot will keep its integrity when deformed and spring back into its
initial form. A video is available in supplementary materials (Video
S1 — https://youtu.be/SuLQDhrk9tQ).
by Whitesides et al. uses pneumatic inflation to produce slow, dynamically stable crawling motions (Shepherd et al., 2011) as well
as fast, but less controlled tentacle-like grippers (Martinez et al.,
2013), combustion-driven jumpers (Bartlett et al., 2015) and a
self-contained microfluidic “octobot” (Wehner et al., 2016).
Despite their advantages, soft-material robots are difficult to
control by conventional means (Lipson, 2014; Shepherd et al.,
2011). They are by their very nature high dimensional dynamic
systems with an essentially infinite number of degrees of freedom. The elasticity and deformability which provide their appeal
come at the cost of resonances and tight dynamic coupling be-
tween components (Trimmer, 2008), properties which are often
avoided, or at least suppressed, in conventional engineering approaches to robotic design. This complexity precludes the use
of many of the traditional kinematic and inverse-dynamics approaches to robotic control (Craig, 1989).
As a result, up until now, the locomotive gaits of most soft
robots have been developed by hand through empirical trial-anderror (Shepherd et al., 2011). This process can be both challenging and time consuming, particularly when seeking to fully
exploit the dynamical complexity of soft mechanisms. Importantly, this manual process also prevents these robots from adapting their control strategy when the context changes, for instance
when they encounter an unexpected type of terrain, or when they
are physically damaged.
In this work, we describe a new class of soft robot based upon
a tensegrity structure driven by vibration. Like many other soft
robots, this tensegrity robot is resilient, and can resist damage
when perturbed or crushed. Unlike other soft robots, however,
this particular modular tensegrity robot is easy to build, easy to
control, and thanks to a data-efficient reinforcement learning algorithm (Cully et al., 2015), it can autonomously discover how to
move, and quickly relearn and adapt its behavior when damaged.
Vibration is an increasingly popular method of sensor-free manipulation and control for automated systems (Berretty et al.,
2001). Reznik et al., for instance, developed a vibration-driven
planar manipulator (Reznik and Canny, 1998) able to perform
large-scale distributed planar control of small parts (Reznik
et al., 2000). In mobile robotics, stick-and-slip frictional motion (Joe, 2015; Vartholomeos and Papadopoulos, 2006) driven
by paired vibrating motors has been used in a variety of mobile robots (Parshakova et al., 2016; Rubenstein et al., 2012).
Often, these approaches use empirically-derived hand-tuned frequencies to generate motion, using linear interpolation of their
two motor speeds in order to smoothly generate a range of behaviors. One weakness of vibration-based approaches to locomotion
is that vibration of this type leads to unpredictable motion, even
when assuming perfectly consistent surfaces (Joe, 2015), which
presents a challenge to modeling and simulation.
Tensegrities are relatively simple mechanical systems, consisting of a number of rigid elements (struts) joined at their endpoints by tensile elements (cables or springs), and kept stable
through a synergistic interplay of pre-stress forces (Fig. 1 A-C).
Beyond engineering, properties of tensegrity have been demonstrated at all scales of the natural world, ranging from the tendinous network of the human hand (Valero-Cuevas et al., 2007)
to the mechanotransduction of living cells (Wang et al., 2001).
At every size, tensegrity structures exhibit two interesting features (Skelton and de Oliveira, 2009; Snelson, 2012): they have
an impressive strength-to-weight ratio, and they are structurally
robust and stable in the face of deformation. Moreover, unlike
many other soft robots, tensegrity structures are inherently modular (consisting of only struts and springs) and are therefore relatively easy to construct. They are simple enough to be baby toys
and featured in books for children activities (Ceceri, 2015), while
complex enough to serve as the basis for the next generation of
NASA’s planetary rovers (Caluwaerts et al., 2014).
The most common control method for tensegrity robots is to
slowly change the lengths of the struts and/or cables, causing
large-scale, quasi-static (rather than dynamic) structural deformations, which, in turn, make the robot move via tumbling and
rolling (Caluwaerts et al., 2014; Koizumi et al., 2012). As they
assume that the structure is relatively stiff throughout locomotion, such control strategies are not suitable for more compliant
soft tensegrity robots. In addition, they lead to slow locomotion
speeds.
Lately, researchers have begun investigating more dynamical
methods of tensegrity robot control. Bliss et al. have used central
pattern generators (CPGs) to produce resonance entrainment of
simulated non-mobile tensegrity structures (Bliss et al., 2012).
Mirletz et al. have used CPGs to produce goal-directed behavior
in simulated tensegrity-spine-based robots (Mirletz et al., 2015).
These efforts, however valuable, were all produced in simulated
environments, and have not yet been successfully transferred into
real-world robots. As Mirletz et al point out (Bliss et al., 2012),
the dynamic behavior of tensegrities is highly dependent upon the
substrate they interact with – this means that results developed in
simulated environments cannot necessarily be simply transferred
to real robots (in Evolutionary Robotics, this is known as the “Reality Gap” (Jakobi et al., 1995; Koos et al., 2013)).
More recently, Böhm and Zimmermann developed a
tensegrity-inspired robot actuated by a single oscillating electromagnet (Böhm and Zimmermann, 2013). Although this robot
was not a pure tensegrity (it rigidly connected multiple linear
struts), it was, compellingly, able to change between forward and
backward locomotion by changing the frequency of the oscillator. Vibration has been proposed as a means of controlling much
softer robots as well (Berger et al., 2015).
Here we explore the hypothesis that the inherent resonance and
dynamical complexity of real-world soft tensegrity robots can be
beneficially harnessed (rather than suppressed), and that, if properly excited (Oppenheim and Williams, 2001), it can resonate so
that the robot performs step-like patterns that enable it to locomote. To test this hypothesis and demonstrate the potential of
soft tensegrity robots, we designed a pocket-sized, soft tensegrity
robot whose parameters were tuned to maximize resonance, and
whose goal is to locomote as fast as possible across flat terrain.
To find the right vibrational frequencies, we equipped the robot
with a data-efficient trial-and-error algorithm, which also allows
it to adapt when needed.
Results
Our soft tensegrity robot (Fig. 1D-E) is based upon a canonical six-bar tensegrity shape consisting of equal length composite
struts connected via 24 identical helical springs, with four springs
emanating from each strut end. Unlike most tensegrity structures,
which seek to maximize stiffness by, among other things, using
nearly inelastic cables (Oppenheim and Williams, 2001), here we
replace the cables with very elastic springs, with spring constants
chosen with the goal of producing suitably low natural frequencies of the structure, with corresponding large displacements –
in other words, to maximize suppleness. This allows the pocket-sized robot to maintain its structural shape under normal operation, and yet be easily compressed flat in one’s hand. A variable
speed motor coupled to offset masses was then attached to three
of the struts in order to excite the natural frequencies of the structure. The motor and weight were chosen to be in a range consistent with preliminary models. Because of the difficulty in modeling real-world interactions of these tensegrity robots, as well as
the fact that we use a real-world optimization process described
below, variability in the exact placement of each motor on a strut
is allowed.
Like many robots, the tensegrity robot needs to use different
gaits in order to achieve locomotion, depending on terrain. In our
case, these gaits are determined by the speeds of the three vi-
A
B
A
B
missing spring
Fig. 2. Evaluation of the learning algorithm. A. Locomotion
speed after each of the 30 trials. The light zones represent the
25th and 75th percentiles. B. Locomotion speed after 30 trials.
The central mark is the median, the edges of the box are the
25th and 75th percentiles (inter-quartile range – IQR), the whiskers
corresponds to the range [25%−1.5×IQR, 75%+1.5×IQR], and
points outside of the whiskers are considered to be outliers (this
corresponds to the “interquartile rule”). Each condition is tested
with 20 independent runs of the algorithms.
Fig. 3. Experiments with a damaged robot. A. Damaged robot.
A spring is disconnected from the robot of Fig. 1. B. Locomotion speed after 30 trials. The central mark is the median,
the edges of the box are the 25th and 75th percentiles (interquartile range – IQR), the whiskers correspond to the range
[25% − 1.5 × IQR, 75% + 1.5 × IQR], and points outside of the
whiskers are considered to be outliers (this corresponds to the
“interquartile rule”). Each condition is tested with 20 independent
runs of the algorithms.
bratory motors. As the exact properties of the terrain are seldom
known a priori, and because hand-designing gaits is time consuming (not to mention impossible when the robot is in remote
or hazardous environments) this robot finds effective motor frequencies by using a trial-and-error learning algorithm whose goal
is to maximize the locomotion speed.
Earlier work of ours (Khazanov et al., 2013, 2014) used interactive trial-and-error as well as automated hill climbing techniques to find optimal gaits for a tensegrity robot. These gaits,
could in turn, be incorporated into a simple state machine for directional control. However, these techniques required hundreds
of physical trials that were time consuming and produced significant wear on the physical robot. More importantly, the interactive procedure required a human in the loop, whereas we envision robots that can adapt autonomously to new situations (e.g. a
damage or a new terrain). The work described in this paper, by
automating the optimization process while minimizing the number of physical trials required, substantially reduces the amount
of human interaction required, and is an important step toward
full autonomy.
Here, as a substantial improvement upon these earlier timeintensive methods, we employ a Bayesian optimization algorithm
(Cully et al., 2015; Ghahramani, 2015; Shahriari et al., 2016),
which is a mathematical optimizer designed to find the maximum
of a performance function with as few trials as possible.
Conceptually, Bayesian optimization fits a probabilistic model
(in this case a Gaussian process (Rasmussen and Williams,
2006), see Methods) that maps motor speeds to locomotion
speed. Because the model is probabilistic, the algorithm can not
only predict which motor speeds are the most likely to be good,
but also associate it to a confidence level. Bayesian optimization
exploits this model to select the next trial by balancing exploitation – selecting motor speeds that are likely to make the robot
move faster – and exploration – trying combinations of motor
speeds that have not been tried so far (Methods). As an additional
benefit, this algorithm can take into account that observations are
by nature uncertain.
The Bayesian optimization algorithm usually starts with a constant prior for the expected observation (e.g., the expected speed
is 10 cm/s) and a few randomly chosen trials to initialize the
model. For this robot, however, common sense, along with preliminary modeling, suggests that speeds near the motor maximums are more likely to produce successful gaits, and that near-zero motor speeds are not expected to make the robot move. This
insight was substantiated in preliminary experiments: many effective gaits were produced by high motor speeds, both forward
and backward. Therefore, to speed up learning, we use a nonlinear prior model as follows: (i) if the three motor speeds are
close to 0, then we should expect a locomotion speed close to 0
and (ii) if all the motors are close to full speed (in any direction),
then we should expect the maximum locomotion speed (Methods
and Fig. 5D). Thanks to this prior, the Bayesian optimization algorithm does not need any random sample points to seed the prior
and instead starts with promising solutions. In spite of this prior,
learning is still needed, because many combinations of motors at
full speeds make the robot tumble or rotate on itself, resulting
in low performance; in addition, subtle changes to motor speeds
can have dramatic effects upon the resulting robot gait.
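The Methods section specifies the actual kernel, acquisition function, and hyperparameters; the following is only a minimal sketch of the idea, with guessed values for the prior mean, length scale, and noise, and a hypothetical robot_trial function standing in for a 3-second physical trial. A Gaussian process regresses the residual between observed speeds and the prior mean, and a UCB-style acquisition picks the next motor-speed triple in [−100%, 100%]^3.

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_mean(x):
    # One possible encoding of the prior: near-zero speeds -> ~0 cm/s,
    # all motors near full speed (either sign) -> ~10 cm/s (guessed scale).
    return 10.0 * np.min(np.abs(x), axis=-1) / 100.0

def rbf(A, B, ls=40.0, var=4.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls ** 2)

def ucb_suggest(X, y, n_cand=2000, kappa=2.0, noise=0.25, kvar=4.0):
    """Suggest the next motor-speed triple by maximizing a GP-UCB
    acquisition; the GP models the residual y - prior_mean(x)."""
    cand = rng.uniform(-100, 100, size=(n_cand, 3))
    if len(X) == 0:
        return cand[0]
    X, y = np.asarray(X, float), np.asarray(y, float)
    K = rbf(X, X, var=kvar) + noise * np.eye(len(X))
    Kinv = np.linalg.inv(K)
    Ks = rbf(cand, X, var=kvar)
    mu = prior_mean(cand) + Ks @ Kinv @ (y - prior_mean(X))
    var = np.clip(kvar - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks), 0.0, None)
    return cand[np.argmax(mu + kappa * np.sqrt(var))]

# X, y = [], []
# for trial in range(30):
#     x = ucb_suggest(X, y)
#     X.append(x)
#     y.append(robot_trial(x))   # hypothetical: measured speed in cm/s
```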
We first evaluate the effectiveness of the learning algorithm
(Fig. 2). The performance function is the locomotion speed,
measured over 3 seconds, in any direction. If the robot turns
too much, that is if the yaw exceeds a threshold, the evaluation
is stopped (Methods). The covered distance is measured with
an external motion capture system (Methods), although similar
measurements can be obtained with an onboard visual odometry
system (Cully et al., 2015; Davison et al., 2007). We compare
three algorithms: random search, Bayesian optimization without
prior (using 10 random points to initialize the algorithm), and
Bayesian optimization with prior. Each algorithm is allowed to
test 30 different motor combinations (resulting in 90 seconds of
learning for each experiment) and is independently run 20 times
Weight (W): 89 g
Body length: 13 cm
Best locomotion speed (S) [after learning]: 15 cm·s−1 (1.15 body lengths per second)
Current drawn (full speed): 700 mA
Power drawn at 5 V (P): 3.5 W
Cost of transport at maximum speed (COT ≜ P/(W·S)): 262
Table 1. Best locomotion speed, power consumption, and cost of transport for the gait that corresponds to the maximum speed at 5V.
For reference, a cost of transport (COT) of 262 is comparable to the COT of a mouse (more than 100), but much higher than a car or
a motorcycle (around 3) (Tucker, 1970).
to gather statistics. The results show that the best locomotion
speeds are obtained with the prior-based Bayesian optimization
(11.5cm/s, 5th and 95th percentiles [8.1, 13.7]), followed by the
prior-free Bayesian optimization (6.3cm/s[5.5, 12.4]). The worst
results are obtained with the random search (5.4cm/s[3.5, 9.9]).
The absolute best locomotion speed (15 cm·s−1) was found
Bayesian optimization (Table 1) and corresponds to 1.15 body
lengths per second. Overall, these experiments demonstrate that
the prior-based Bayesian optimization is an effective way to automatically discover a gait in only 30 trials with this robot. Videos
of a typical gait are available as supplementary material (Video
S1).
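For what it is worth, the cost-of-transport entry in Table 1 can be reproduced from the other tabulated values if, as the table suggests, mass is taken in kilograms and speed in metres per second with no factor of g included:

\[ \mathrm{COT} \;\triangleq\; \frac{P}{W\,S} \;=\; \frac{3.5\ \mathrm{W}}{0.089\ \mathrm{kg} \times 0.15\ \mathrm{m\,s^{-1}}} \;\approx\; 262. \]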
We then investigate our hypothesis that the interplay between a
flexible tensegrity structure and vibration is the key for effective
locomotion. To do so, we designed a rigid replica of our robot
that does not contain any springs: the carbon fiber struts are held
in place with small carbon fiber rods (Fig. 4A). All the dimensions, strut positions, and motor positions are the same as for the
tensegrity version (Fig. 1D-E). We used the same learning algorithm as for the tensegrity robot and the same prior, since we have
the same intuitions about good control policies for the rigid robot
as for the soft one. We replicated the learning experiment 20
times. The results (Fig. 4B) show that the rigid replica moves at
about 60% of the speed of the tensegrity robot (7.1cm/s[5.6, 9.3]
vs 11.5cm/s[8.1, 13.7]), which suggests that the flexibility of the
tensegrity structure plays a critical role in its effective locomotion. In addition, we measured the amplitude of movement along
the vertical axis for the end of 4 struts, both with the soft tensegrity robot and the rigid replica; we repeated this measure with 50
random gaits in both cases. These measurements (Fig. 4C) show
that the markers move at least twice as much when the structure is
flexible (2.3[1.5, 4.8] cm vs 0.99[0.61, 2.1] cm), which demonstrates that the structure amplifies the movements induced by the
vibrators.
In addition to being deformable, tensegrity structures often
maintain most of their shape when a link (a spring or a strut) is
missing, leading to relatively smooth failure modes. We evaluate
the ability of our robot to operate after such damage by removing
a spring (Fig. 3A). As the shape of the robot is changed, we relaunch the learning algorithms. The experiments reveal that successful, straight gaits can be found in 30 trials, although they are
significantly lower-performing than gaits obtained with the intact
robot (11.5 cm/s [8.1, 13.7] versus 6.5 cm/s [5.6, 8.2]; Fig. 3B).
During all the reported experiments, we evaluated 20 × 30 × 3 = 1800 different gaits on the intact robot, 20 × 30 = 600 gaits on the rigid robot (20 replicates, 30 trials for each replicate, and 3 treatments), and 20 × 30 = 600 gaits on the damaged robot. We can use these points to draw a picture of the search space that does not depend on the learning algorithm (Fig. 5). Since the search space is too high-dimensional to be easily visualized (3 dimensions + performance, resulting in a 4D plot), we compute performance profiles (Mouret and Clune, 2015; Reuillon et al., 2015): for each combination of two motor speeds (v1, v2), we report the best performance measured regardless of the speed of the third motor (Methods). The performance profiles (Fig. 5A) for the intact robot reveal that there
are two high-performing regions, roughly positioned around (−100%, 100%, −100%) and (−100%, −100%, 100%), and that the first region, (−100%, 100%, −100%), is where most
high-performing solutions can be found. This finding is consistent with the prior given to the learning algorithm (Fig. 5D),
which models that the best performance should be obtained with
a combination of −100% and +100% values. It should be emphasized that the best gaits do not correspond to the most extreme
values for the motor speeds: the most reliable optimum is around (−90%, 100%, −90%), mostly because too extreme values tend
to make the robot tumble. The best solutions for the rigid robots
are also found in the corners, that is, for combinations of +100%
and −100% motor speeds, but the measurements suggest that the
optimum might be different from the one obtained with the intact robot (more data would be needed to conclude). The data
for the damaged
robot show more clearly that the best solutions
are around (−100%, −100%, 100%), which corresponds to the
second optimum found for the intact robot (the lowest performing one).
The performance profiles thus demonstrate that the prior
knowledge given to the learning algorithm is consistent with the
three different robots (intact, rigid, and damaged), which suggests that it might be helpful in other situations (e.g., different
damage conditions). They also demonstrate that gaits that work
the best on the intact robot do not work on the damaged robot
(Fig. 5 A versus C, second column): this shows that the learning
algorithm is needed to adapt the gait if the robot is damaged.
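For concreteness, the projection behind these profiles can be computed with a few lines of array code; the sketch below is illustrative (bin count and speed range are arbitrary choices) and takes the recorded trials as rows (v1, v2, v3, speed):

```python
import numpy as np

def performance_profile(trials, bins=20, lo=-100.0, hi=100.0):
    """Return a bins x bins grid where cell (i, j) holds the best speed
    observed for motor speeds (v1, v2) falling in that cell, regardless
    of the value of v3 (NaN where no trial falls in the cell)."""
    trials = np.asarray(trials, dtype=float)
    grid = np.full((bins, bins), np.nan)
    to_bin = lambda v: np.clip(((v - lo) / (hi - lo) * bins).astype(int),
                               0, bins - 1)
    i, j = to_bin(trials[:, 0]), to_bin(trials[:, 1])
    for k in range(len(trials)):
        cur = grid[i[k], j[k]]
        grid[i[k], j[k]] = trials[k, 3] if np.isnan(cur) else max(cur, trials[k, 3])
    return grid
```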
Discussion
Soft tensegrity robots are highly resilient, easy to assemble with
the current technology, and made with inexpensive materials. In
summary, vibratory soft tensegrity robots recast most of the complexity of soft robotics – building and actuating soft structures –
into a much simpler class of robots – easy to build and to actuate – while keeping many of the attractive properties of soft
robots – e.g., resilience, deformability. Thanks to the learning algorithm, our prototype can achieve locomotion speeds of more than 10 cm/s (more than 1 body length per second) and learn new gaits in fewer than 30 trials, which allows it to adapt to damage or new situations. To our knowledge, this places it among the fastest soft robots. Our soft tensegrity robots achieve this speed because they uniquely harness the flexibility and the resonance of tensegrity structures. Discovering methods of exploiting flexibility and
A
B
C
Fig. 4. Experiments with a rigid robot (with priors). A. Rigid replica of our soft tensegrity robot. This robot is identical to
the robot of Fig. 1, except that it contains no spring: the carbon fiber struts are held in place with small carbon fiber rods. All the
dimensions, strut positions, and motor positions are the same as for the tensegrity version. B. Locomotion speed after 30 trials for
the intact (Fig. 1) and the rigid robot (A). Each condition is tested with 20 independent runs of the algorithm (Bayesian optimization
with priors). C. Maximum amplitude of the markers for random gaits. In each case, we captured the vertical position of the 4
markers for 50 random gaits of 3 seconds. We report the maximum height minus the minimum height (over the 4 markers). For the
box plots, the central mark is the median, the edges of the box are the 25th and 75th percentiles, the whiskers extend to the most
extreme data points not considered outliers, and outliers are plotted individually.
resonance in this manner opens new research avenues for future
tensegrity structures, in particular when mechanical design can
be coupled with machine learning algorithms that automatically
identify how to control the resonances.
Although our soft tensegrity robots also benefit to a large extent
from anisotropic friction, our effort is distinct from other vibration-based robots such as Kilobot (Rubenstein et al., 2012) and
RatChair (Parshakova et al., 2016) in several important ways.
First, because of the nature of the structure, opposing pairs
of vibrating motors aren’t effective - and as our results show,
small changes to our robot’s motor speeds can have large and
non-linear effects upon its behavior. This renders the linear-interpolation approach of that work ineffective. As a consequence, rather than relying upon hand-tuning, we instead employ
Bayesian Optimization in order to determine the most effective
vibrational frequencies in a minimum number of physical trials.
Another distinction is that our soft tensegrity robot's intrinsic resonance is tuned to respond to the vibratory input of its actuators. The benefit of this tuned resonance is particularly noticeable
when the performance of the soft tensegrity robot is compared to
the rigid mock-tensegrity robot described in experiments (Fig.
4). These soft tensegrity robots also stand in contrast to other
more rigid tensegrity robots (Caluwaerts et al., 2014; Koizumi
et al., 2012), which generally try to suppress their resonance.
Harnessing flexibility and resonance opens new research avenues
for future soft robots, in particular when mechanical design can
be coupled with machine learning algorithms that automatically
identify how to control the resonances.
One of the more thought-provoking illustrations of the potential of soft tensegrity robots is best observed in the supplementary video, played at slow speed: once properly tuned by the learning
algorithm, the vibrations induce large, visible deformations of
the structures that create a step-like pattern for the “feet” at the
end of the rigid struts (more quantitative results can be seen on
Fig. 4-C). These step-like patterns have the potential to allow
tensegrity robots to step over small irregularities of the ground
like a walking robot. Importantly, these patterns are made possible by the mix of soft and rigid elements in the same structure:
they are likely to be much harder to induce and control both with
a fully soft robot and with a fully rigid robot. A promising research avenue is to focus on how to control the movement of the feet explicitly and make steps that are as little disturbed as possible by the irregularities of the floor.
An added benefit of vibrational locomotion for soft robotics is
that, although our current robot is tethered, it could in principle
be easy to power soft tensegrity robots with an embedded battery, by contrast with the many fluid-actuated soft robots (Lipson,
2014; Shepherd et al., 2011), which need innovative ways to store
energy (Wehner et al., 2016). Nevertheless, soft tensegrity robots
could excite their structure by other means; for instance, a flywheel that is rapidly decelerated could help the robot to achieve
fast movements (Romanishin et al., 2013), or high-amplitude,
low-frequency oscillations could be generated by moving a pendulum inside the structure (Chase and Pandya, 2012).
Early work of ours on mobile tensegrities (Khazanov et al.,
2013, 2014) used a rather simple interactive hill-climber in order to discover effective locomotive gaits, however this type of
simplistic stochastic search was suboptimal. While there may be
little qualitative difference between our earlier gaits and those described here, there are profound differences in terms of the time
and data efficiency of this Bayesian Optimization approach. Most
significantly, the hill-climber places no emphasis on reducing the
number of physical trials performed, and as a consequence required hundreds of trials and hours of experimentation before
discovering effective gaits. These repeated physical trials put unnecessary wear on the robot, and required a substantial amount of
human effort in resetting the robot between trials. Furthermore,
the OpenCV-based optical tracking of the robot was rudimentary
Fig. 5. Performance profiles for all the conditions. These performance profiles show the performance potential of each combination
of 2 motor speeds (the third motor is considered as a “free variable”). Three plots are required to get a comprehensive picture of the
performance space: v1 vs v2 , v1 vs v3 , and v2 vs v3 . A. Intact robot (Fig. 1D). The profiles are computed with 1800 policy evaluations
(20 replicates × 30 trials × 3 sets of experiments – with prior, without prior, random search). B. Rigid robot (Fig 4A). The profiles
are computed with 600 policy evaluations (30 trials × 20 replicates). C. Damaged robot (Fig. 3). The profiles are computed with 600
policy evaluations (30 trials × 20 replicates). D. Prior knowledge. Prior knowledge used to guide the learning algorithm (Methods).
and lacked the spatial precision required of more effective algorithms. The Bayesian Optimization approach we have used here,
along with the high precision Optitrack system, profoundly reduces the number of physical trials and the corresponding wear
on the robot, thereby increasing its capacity for faster and more
autonomous resilience and adaptivity.
We purposely designed the robot so that the search space is
as small as possible, which, in turn, makes it more likely for
the robot to be capable of adapting in a few trials. Put differently, one of the main strengths of vibration-based locomotion is
to make the search problem as simple as possible. Although, in
principle, a variety of optimization techniques (e.g., simulated
annealing (Kirkpatrick et al., 1983)) might have been used, there
are compelling reasons why our adaptation algorithm is based on
Bayesian optimization, namely because (i) it is a principled approach to optimize an unknown cost/reward function when only a few dozen samples are possible (Shahriari et al., 2016) (by contrast, the simulated annealing algorithm relies on the statistical properties of the search space, which are valid only with a large number of samples (Kirkpatrick et al., 1983)), (ii) it can incorporate prior knowledge in a theoretically sound way (including trusting real samples more than prior information) (Cully et al., 2015), and (iii) it takes into account the acquisition
noise (Rasmussen and Williams, 2006). For instance, Bayesian
optimization is the current method of choice for optimizing
the hyper-parameters of neural networks (Shahriari et al., 2016;
Snoek et al., 2012), because evaluating the learning abilities of
a neural network is both noisy and time-intensive. The downside
of Bayesian optimization is a relatively high computational cost:
the next sample is chosen by optimizing the acquisition function, which typically requires using a costly, non-linear optimizer
like DIRECT (Finkel, 2003) or CMA-ES (Hansen et al., 2003)
(our implementation uses CMA-ES, see Methods). Put differently, Bayesian optimization trades data with computation, which
makes it data-efficient, but computationally costly. As we mostly
care about data-efficiency, we neglect this cost in this work, but
it could be an issue on some low-power embedded computers.
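The following is a minimal sketch, in Python, of the kind of Gaussian-process/UCB loop discussed here. It is not the authors' implementation (which builds on the Limbo library and maximizes the acquisition function with CMA-ES); instead, the acquisition function is maximized over random candidates, and `evaluate_gait` is a hypothetical stand-in for one physical trial.

```python
# Minimal sketch of a GP + UCB adaptation loop over three motor speeds.
# Assumptions: evaluate_gait() stands in for a 3-second physical trial and
# the acquisition function is maximized over random candidates, not CMA-ES.
import numpy as np

def exp_kernel(a, b, beta=0.15):
    # k(x1, x2) = exp(-||x1 - x2||^2 / beta^2)
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / beta ** 2)

def gp_posterior(X, y, Xq, noise=1e-3, beta=0.15):
    # Standard Gaussian-process regression with a zero prior mean.
    K = exp_kernel(X, X, beta) + noise * np.eye(len(X))
    k = exp_kernel(Xq, X, beta)
    Kinv = np.linalg.inv(K)
    mu = k @ Kinv @ y
    var = 1.0 + noise - np.sum((k @ Kinv) * k, axis=1)
    return mu, np.sqrt(np.maximum(var, 0.0))

def evaluate_gait(x):
    # Placeholder: run the policy on the robot and return the speed in cm/s.
    raise NotImplementedError

def ucb_search(n_trials=30, kappa=0.2, n_candidates=10000, dim=3, seed=0):
    rng = np.random.default_rng(seed)
    X = [rng.uniform(0.0, 1.0, dim)]          # motor speeds in [0, 1]
    y = [evaluate_gait(X[0])]
    for _ in range(n_trials - 1):
        cand = rng.uniform(0.0, 1.0, (n_candidates, dim))
        mu, sigma = gp_posterior(np.array(X), np.array(y), cand)
        nxt = cand[np.argmax(mu + kappa * sigma)]   # UCB acquisition
        X.append(nxt)
        y.append(evaluate_gait(nxt))
    best = int(np.argmax(y))
    return X[best], y[best]
```

The expensive step is `gp_posterior`, which is recomputed at every iteration; this is the "trades data with computation" cost mentioned above.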
Most black-box optimization algorithms (e.g., CMA-ES (Hansen et al., 2003)) and direct policy search algorithms (e.g., policy gradients (Peters and Schaal, 2006)) could substitute Bayesian optimization as an adaptation algorithm by directly optimizing the reward (instead of first modeling it with a Gaussian process). While they would not need time-intensive optimizations to select the next sample to acquire, these algorithms are tailored for at least a thousand evaluations (e.g., 10^4 to 10^5 evaluations in benchmarks of 2D functions for black-box optimizers (Hansen et al., 2010)), are not designed to incorporate priors on the reward function, and are, at best, only tolerant to noisy functions. As a consequence, while algorithms like CMA-ES could work as an adaptation algorithm, they appear to be a sub-optimal choice for online adaptation when only a few dozen evaluations are possible.
Traditional Bayesian optimization uses a constant mean as a
prior (Calandra et al., 2014; Lizotte et al., 2007), that is, the only
prior knowledge is the expectation of the cost/reward. By contrast, we show here that it is effective to introduce some basic
intuitions about the system as a non-constant prior on the reward
function. We thus increase the data-efficiency while keeping the
learning algorithm theoretically consistent. Cully et al. (Cully
et al., 2015) also used a non-constant prior; however, (i) they
generated it using a physics simulator, which is especially challenging for a vibrating tensegrity robot, and (ii) they only computed this prior for a discrete set of potential solutions, which,
in turn, constrains Bayesian optimization to search only in this
set. Here we follow a more continuous approach as our prior is
a continuous function, and we show that relevant priors can be
defined without needing a physics simulator. The more general
problem of how to generate “ideal” priors is far from trivial. Intuitively, priors should come from a meta-learning process (Lemke
et al., 2015), for instance, an evolution-like process (Kirschner
and Gerhart, 2006), which would search for priors that would
work well in as many situations as possible (i.e., instincts). Effectively implementing such a process remains an open grand
challenge in machine learning and is sometimes called “metalearning” (Lemke et al., 2015).
Putting all these attractive features together, soft tensegrity robots combine simplicity, flexibility, performance, and resiliency, which makes this new class of robots a promising building block for future soft robots.
Of course, additional work is needed to have a more complete
theory of the “optimal suppleness” of soft robots. Intuitively, too
much suppleness would absorb the energy transmitted by the vibrator and prevent effective gaits; but, at the other end of the
spectrum, a rigid robot cannot generate the form changes that are
necessary for the most interesting gaits. This may be what we observed when we damaged the robot: by making the structure less
constrained, the shape of the robot may have become looser, and
“softer”, which impacted the maximum locomotion speed (alternatively, the removal of a spring might have prevented the transmission of some oscillations or some resonance modes). Nevertheless, for every kind of suppleness that we tried, the Bayesian
optimization algorithm was always capable of finding some effective gaits, which means that the “optimal softness” does not
need to be known a priori in order to discover effective locomotion. In this regard, trial-and-error approaches like the ones used
here provide a valuable ability to respond and adapt to changes
online in a rather robust manner, much like living systems (Cully
et al., 2015).
Several exciting open questions remain. So far, we have only
demonstrated the effectiveness of this technique on a single substrate rather than across an entire range of environments. A compelling question we look forward to exploring in future work,
for instance, is the extent to which the locomotive gaits we have
discovered are robust and self-stabilizing in the face of external
perturbations and changes in the substrate. Of course, the general
problem of robust locomotion of any robot, much less soft robots,
across multiple substrates and environments remains a relatively
open topic. Recent work has, for instance, explored hand-picked
strategies for the quasi-static locomotion of a cable-actuated
tensegrity on inclined surfaces (Chen et al., 2017). Our own ability to harness tensegrity vibration in order to induce large-scale and dynamic deformations of the structure offers a compelling and promising method of discovering much more dynamic gaits for these environments.
Indeed, our robot design is already capable of interesting behavioral diversity, including several unique rolling behaviors, which
might be beneficial across environments; however, we were unable to explore these more deeply due to the tethered nature of
this design. Nonetheless, the speed with which our algorithm
can learn effective gaits, especially when damaged, provides a
glimpse into how future soft robots could adapt to new and unexpected environments in situ, with no pre-existing knowledge or
experience of that environment.
This leads to the recognition that the present prototype, while more than sufficient to demonstrate the claims of this paper, is not yet fully autonomous: it relies on a tether for power, uses an external motion capture system to evaluate its performance (locomotion speed), and uses an offboard computer for the learn-
Fig. 6. An untethered version of the tensegrity robot. This new robot, still under development, will allow for more interesting dynamical behaviors, such as rolling, as well as operation in more complex environments. This could in principle allow for completely on-board learning as well.
ing algorithm. We are in the process of designing a fully wireless
and autonomous tensegrity robot, as illustrated by Figure 6. This
next generation of robot will be capable of substantially more
dynamical behaviors, such as rolling and jumping, and more capable of exploring complex environments. Evaluating the performance of locomotion techniques using on-board processing could in principle be achieved either with accelerometers or with an embedded camera paired with a visual odometry algorithm (Cully et al., 2015; Forster et al., 2014), but the vibrations and
the fast movements of the struts are likely to disturb many visual
algorithms. Additionally, the modular nature of this wireless strut
design means that we could explore an entire range of tensegrity
robot morphologies, including those with considerably more than
six struts.
Overall, our soft tensegrity robots move thanks to the complex interactions between the actuators (vibrators), the structure
(springs and struts), and the environment (the ground). This kind
of emergent behavior is central in the embodied intelligence theory (Pfeifer et al., 2007), which suggests that we will achieve better and more life-like robots if we encourage such deep couplings
between the body and the “mind” – here, the controller. However,
as demonstrated in the present work, trial-and-error learning algorithms offer a strongly viable approach to discovering these
emergent behaviors.
Material and Methods
Robot
The tensegrity used is defined by six equal length composite
struts which are connected to each other via 24 identical helical
springs, with four springs emanating from each strut end. This
follows the geometry described as TR-6 by Skelton (Skelton and
de Oliveira, 2009). Few actual machining operations are required
to produce the tensegrity. The six 9.4 cm long composite struts
are cut from 6.35 mm square graphite composite tubes (Goodwinds). The three 12mm vibrational motors (Precision Microdrives Model 312-107) were mounted to the flat outer surface of
the struts using hot melt adhesive. Both ends of each strut were
then tapped for 10-24 nylon screws fitted with nylon washers.
The hooked ends of the helical springs (Century Spring Stock
No. 5368) were attached directly to holes drilled through the nylon washers. The motors were connected via thin gauge magnet wire to Serial Motor Controllers (Pololu Qik 2s9v1 Dual Serial Motor Controller) connected in turn to a USB Serial Adapter
(SparkFun FTDI Basic Breakout board).
The specific spring constants were chosen in order to produce
relatively low natural frequencies and correspondingly large displacements of the structure while at the same time limiting estimated static deflection to 5% of strut length. In order to determine
this, a single strut was modeled as being connected to four linear
springs at each end, equally spaced around the radius, each at a
45◦ angle. Limiting static deflection to 5% of strut length results
in a spring constant value of 0.209 N/cm. Subsequently, the entire 6-bar structure was modeled by assuming that one strut was
anchored in place and then using matrix structural analysis
to determine the natural frequencies. The vibrational motor was
then chosen that was capable of generating sufficient centrifugal
force at a suitable range of frequencies. Details of the modeling
and design are provided in (Khazanov et al., 2013).
Control policy
Each policy is defined by three PWM values that determine the
input voltage of the 3 vibrating motors (χ = [v1 , v2 , v3 ]), which
can take values between 0 (full speed, backward) and 1 (full
speed, forward); 0.5 corresponds to a speed of 0, that is, to no
movement.
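As a minimal illustration of this encoding, the sketch below maps a policy χ = [v1, v2, v3] ∈ [0, 1]³ (0.5 = stop) to signed motor commands. The `send_motor_command` callback is a hypothetical placeholder, not the actual serial protocol of the Pololu Qik controllers.

```python
# Sketch: map a policy chi in [0, 1]^3 (0.5 = stop) to signed commands in [-1, 1].
# send_motor_command(motor_id, signed_speed) is an assumed placeholder.
def apply_policy(chi, send_motor_command):
    for motor_id, v in enumerate(chi):
        signed = 2.0 * v - 1.0   # 0 -> full reverse, 0.5 -> stop, 1 -> full forward
        send_motor_command(motor_id, signed)

# Example: full forward on motor 0, stop on motor 1, full reverse on motor 2.
# apply_policy([1.0, 0.5, 0.0], lambda m, s: print(m, s))
```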
Performance function
Each controller is tested for 3 seconds, then the Euclidean distance between the starting point and the end point is recorded.
The performance function is the distance (in cm) divided by 3 s, i.e., the average locomotion speed in cm/s.
If during the 3 second evaluation period the yaw of the robot exceeds 1 radian, the evaluation is stopped and the recorded distance is the distance between the starting point and the point
reached by the robot when it exceeded the yaw limit.
The policies are evaluated externally with a motion tracking
system (Optitrack Prime 13 / 8 cameras), but the same measurements can be obtained with an embedded camera connected to a
visual odometry system (Cully et al., 2015; Davison et al., 2007).
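A minimal sketch of this performance function follows. The trajectory format, a list of (time, x, y, yaw) samples in seconds, centimeters, and radians, is an assumed stand-in for the motion-capture stream.

```python
# Sketch of the performance function: average speed (cm/s) over a 3 s trial,
# stopping early if the yaw exceeds 1 radian. The (t, x, y, yaw) tuple format
# is an assumed stand-in for the Optitrack data stream.
import math

def performance(trajectory, duration=3.0, yaw_limit=1.0):
    t0, x0, y0, _ = trajectory[0]
    end = trajectory[-1]
    for sample in trajectory:
        t, x, y, yaw = sample
        if abs(yaw) > yaw_limit or t - t0 >= duration:
            end = sample
            break
    _, x1, y1, _ = end
    # Distance from start to end point, divided by the full trial duration.
    return math.hypot(x1 - x0, y1 - y0) / duration
```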
Profile plots
We use the profile plots to depict the search space and the prior
used by the learning algorithm (Fig. 5). For each pair of dimensions, we discretize the motor speeds into 25 bins. For each bin, we compute p_profile(v1, v2) = max_{v3} p(v1, v2, v3), where p(v1, v2, v3) is the performance of the robot for motor speeds v1, v2, v3 and p_profile(v1, v2) is the performance reported in the profile. To get a comprehensive picture, we need three plots: p_profile(v1, v2), p_profile(v1, v3), and p_profile(v2, v3).
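A minimal sketch of one such profile is shown below; `evaluations` is an assumed list of ((v1, v2, v3), performance) pairs collected during the experiments.

```python
# Sketch of one profile plot: bin the evaluated gaits into a 25 x 25 grid over
# two motor speeds and keep, per bin, the best performance over the free variable.
import numpy as np

def profile(evaluations, dims=(0, 1), n_bins=25):
    grid = np.full((n_bins, n_bins), np.nan)
    for x, perf in evaluations:
        i = min(int(x[dims[0]] * n_bins), n_bins - 1)
        j = min(int(x[dims[1]] * n_bins), n_bins - 1)
        if np.isnan(grid[i, j]) or perf > grid[i, j]:
            grid[i, j] = perf
    return grid

# The three plots of Fig. 5 correspond to dims=(0, 1), (0, 2) and (1, 2).
```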
Learning algorithm
Our learning algorithm allows the robot to discover by trial-and-error the best rotation speeds for its three motors. It essentially implements a variant of Bayesian optimization, which is a state-of-the-art optimization algorithm designed to maximize expensive performance functions (a.k.a. cost functions) whose gradient cannot be evaluated analytically (Ghahramani, 2015; Shahriari et al., 2016). Like other model-based optimization algorithms
(e.g., surrogate-based algorithms (Booker et al., 1999; Forrester
and Keane, 2009; Jin, 2011), kriging (Simpson et al., 1998), or
DACE (Jones et al., 1998; Sacks et al., 1989)), Bayesian optimization models the objective function with a regression method,
uses this model to select the next point to acquire, then updates
the model, etc. until the algorithm has exhausted its budget of
function evaluations.
Here a Gaussian process models the objective function (Rasmussen and Williams, 2006), which is a common choice for
Bayesian optimization (Brochu et al., 2010; Calandra et al., 2014;
Ghahramani, 2015; Lizotte et al., 2007; Shahriari et al., 2016).
For an unknown cost function f , a Gaussian process defines the
probability distribution of the possible values f (x) for each point
x. These probability distributions are Gaussian, and are therefore
defined by a mean (µ) and a variance (σ 2 ). However, µ and σ 2
can be different for each x; a Gaussian process therefore defines
a probability distribution over functions:
P(f(x) | x) = N(µ(x), σ²(x))    (1)

where N denotes the normal distribution.

At iteration t, if the performance [P_1, · · · , P_t] = P_{1:t} of the points [χ_1, · · · , χ_t] = χ_{1:t} has already been evaluated, then µ_t(x) and σ²_t(x) are fitted as follows (Rasmussen and Williams, 2006):

µ_t(x) = k^⊤ K^{-1} P_{1:t}
σ²_t(x) = k(x, x) + σ²_noise − k^⊤ K^{-1} k    (2)

where:

K = [ k(χ_1, χ_1) · · · k(χ_1, χ_t) ; ... ; k(χ_t, χ_1) · · · k(χ_t, χ_t) ] + σ²_noise I
k = [ k(x, χ_1)  k(x, χ_2)  · · ·  k(x, χ_t) ]

The matrix K is called the covariance matrix. It is based on a kernel function k(x_1, x_2) which defines how samples influence each other. Kernel functions are classically variants of the Euclidean distance. Here we use the exponential kernel (Brochu et al., 2010; Cully et al., 2015; Rasmussen and Williams, 2006; Shahriari et al., 2016):

k(x_1, x_2) = exp( − ||x_1 − x_2||² / β² )    (3)

because this is the most common kernel in Bayesian optimization and we did not see any reason to choose a different one (Brochu et al., 2010; Shahriari et al., 2016). We fixed β to 0.15.

An interesting feature of Gaussian processes is that they can easily incorporate a prior µ_p(x) for the mean function, which helps to guide the optimization process to zones that are known to be promising:

µ_t(x) = µ_p(x) + k^⊤ K^{-1} (P_{1:t} − µ_p(χ_{1:t}))    (4)

In our implementation, the prior is a second Gaussian process defined by hand-picked points (see the "prior" section below).

To select the next χ to test (χ_{t+1}), Bayesian optimization maximizes an acquisition function, a function that reflects the need to balance exploration – improving the model in the less known parts of the search space – and exploitation – favoring parts that the model predicts as promising. Numerous acquisition functions have been proposed (e.g., the probability of improvement, the expected improvement, or the Upper Confidence Bound (UCB) (Brochu et al., 2010; Calandra et al., 2014; Shahriari et al., 2016)); we chose UCB because it provided the best results in several previous studies (Brochu et al., 2010; Calandra et al., 2014) and because of its simplicity. The equation for UCB is:

χ_{t+1} = arg max_x ( µ_t(x) + κ σ_t(x) )    (5)

where κ is a user-defined parameter that tunes the tradeoff between exploration and exploitation. We chose κ = 0.2.

Prior for the learning algorithm
The learning algorithm is guided by a prior that captures the idea that the highest-performing gaits are likely to be a combination of motors at full speed (in forward or in reverse). In our implementation, it is implemented with a Gaussian process defined by 9 hand-picked points and whose variance is ignored (equation 2). The kernel function is the exponential kernel (equation 3), with β = 0.15.
The 9 hand-picked points (χ_1, · · · , χ_9) are as follows (Fig. 5-D):

χ_1 = [−100%, −100%, −100%]
χ_2 = [−100%, −100%, +100%]
χ_3 = [−100%, +100%, −100%]
χ_4 = [−100%, +100%, +100%]
χ_5 = [+100%, −100%, −100%]
χ_6 = [+100%, +100%, +100%]
χ_7 = [+100%, −100%, −100%]
χ_8 = [+100%, −100%, +100%]
χ_9 = [0%, 0%, 0%]

P(χ_1), · · · , P(χ_8) = 0.3; P(χ_9) = 0

Statistics
For all experiments, we report the 5th and 95th percentiles. We used a two-tailed Mann-Whitney U test for all statistical tests. For the box plots, the central mark is the median, the edges of the box are the 25th and 75th percentiles (inter-quartile range – IQR), the whiskers correspond to the range [25% − 1.5 × IQR, 75% + 1.5 × IQR], and points outside of the whiskers are considered to be outliers (this corresponds to the "interquartile rule"). For each box plot, the result of the Mann-Whitney U test (two-tailed) is indicated with stars: * means p ≤ 0.05, ** means p ≤ 0.01, *** means p ≤ 0.001, and **** means p ≤ 0.0001.

Computer code
http://members.loria.fr/JBMouret/src/limbo-tensegrity.tar.gz ; this code will be released with an open-source license on Github for the final publication.

Data availability
http://members.loria.fr/JBMouret/data/tensegrity.tar.gz ; these data will be released on Dryad for the final publication.
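To make the prior of equation (4) concrete, here is a minimal Python sketch of a non-constant prior mean built from the hand-picked points listed above (mapped to the [0, 1] motor-speed range used by the control policy, so −100% → 0, 0% → 0.5, +100% → 1). It is an illustration, not the authors' Limbo-based implementation.

```python
# Sketch of the non-constant prior mean used in equation (4): a Gaussian-process
# mean fitted to the hand-picked points listed in the text (variance ignored),
# with the exponential kernel and beta = 0.15.
import numpy as np

def exp_kernel(a, b, beta=0.15):
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / beta ** 2)

# Points exactly as listed in the text, mapped from [-100%, +100%] to [0, 1].
chi_prior = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1],
                      [1, 0, 0], [1, 1, 1], [1, 0, 0], [1, 0, 1],
                      [0.5, 0.5, 0.5]], dtype=float)
p_prior = np.array([0.3] * 8 + [0.0])

def prior_mean(x, noise=1e-6):
    # Small jitter keeps the system well-conditioned even with repeated points.
    K = exp_kernel(chi_prior, chi_prior) + noise * np.eye(len(chi_prior))
    k = exp_kernel(np.atleast_2d(np.asarray(x, dtype=float)), chi_prior)
    return (k @ np.linalg.solve(K, p_prior)).item()
```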
References
Nicholas W Bartlett, Michael T Tolley, Johannes TB
Overvelde, James C Weaver, Bobak Mosadegh, Katia
Bertoldi, George M Whitesides, and Robert J Wood.
A 3d-printed, functionally graded soft robot powered by
combustion. Science, 349(6244):161–165, 2015.
Benjamin Berger, Alvin Andino, Andrew Danise, and John Rieffel. Growing and evolving vibrationally actuated soft robots. In Proceedings of the Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation, pages 1221–1224. ACM, 2015.

Robert-Paul Berretty, Ken Goldberg, Mark H Overmars, and A Frank van der Stappen. Trap design for vibratory bowl feeders. The International Journal of Robotics Research, 20(11):891–908, 2001.

Thomas K Bliss, Tetsuya Iwasaki, and Hilary Bart-Smith. Resonance entrainment of tensegrity structures via cpg control. Automatica, 48(11):2791–2800, 2012.

Valter Böhm and Klaus Zimmermann. Vibration-driven mobile robots based on single actuated tensegrity structures. In Robotics and Automation (ICRA), 2013 IEEE International Conference on, pages 5475–5480. IEEE, 2013.

A. J. Booker, J. E. Dennis Jr, P. D. Frank, D. B. Serafini, V. Torczon, and M. W. Trosset. A rigorous framework for optimization of expensive functions by surrogates. Structural Optimization, 17(1):1–13, 1999.

Eric Brochu, Vlad M Cora, and Nando De Freitas. A tutorial on bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.

Roberto Calandra, André Seyfarth, Jan Peters, and Marc Peter Deisenroth. An experimental comparison of bayesian optimization for bipedal locomotion. In Proc. of the IEEE International Conference on Robotics and Automation (ICRA), 2014.

Ken Caluwaerts, Jérémie Despraz, Atıl Işçen, Andrew P Sabelhaus, Jonathan Bruce, Benjamin Schrauwen, and Vytas SunSpiral. Design and control of compliant tensegrity robots through simulation and hardware validation. Journal of The Royal Society Interface, 11(98):20140520, 2014.

Kathy Ceceri. Making Simple Robots: Exploring Cutting-Edge Robotics with Everyday Stuff. Maker Media, Inc., 2015.

Richard Chase and Abhilash Pandya. A review of active mechanical driving principles of spherical robots. Robotics, 1(1):3–23, 2012.

L.-H. Chen, B. Cera, E.L. Zhu, R. Edmunds, F. Rice, E. Tang, A. Bronars, S.R. Malekshahi, O. Romero, and A.K. Agogino. Inclined surface locomotion strategies for spherical tensegrity robots. In Proceedings of the International Conference on Intelligent Robotics and Systems (IROS 2017), 2017.

J.J. Craig. Introduction to Robotics. Addison-Wesley, 1989.

Antoine Cully, Jeff Clune, Danesh Tarapore, and Jean-Baptiste Mouret. Robots that can adapt like animals. Nature, 521(7553):503–507, 2015.

Andrew J Davison, Ian D Reid, Nicholas D Molton, and Olivier Stasse. Monoslam: Real-time single camera slam. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):1052–1067, 2007.

Daniel E Finkel. Direct optimization algorithm user guide. Center for Research in Scientific Computation, North Carolina State University, 2, 2003.

A. I. J. Forrester and A. J. Keane. Recent advances in surrogate-based optimization. Progress in Aerospace Sciences, 45(1):50–79, 2009.

Christian Forster, Matia Pizzoli, and Davide Scaramuzza. Svo: Fast semi-direct monocular visual odometry. In Robotics and Automation (ICRA), 2014 IEEE International Conference on, pages 15–22. IEEE, 2014.

Brad J Gemmell, John H Costello, Sean P Colin, Colin J Stewart, John O Dabiri, Danesh Tafti, and Shashank Priya. Passive energy recapture in jellyfish contributes to propulsive advantage over other metazoans. Proceedings of the National Academy of Sciences, 110(44):17904–17909, 2013.

Zoubin Ghahramani. Probabilistic machine learning and artificial intelligence. Nature, 521(7553):452–459, 2015.

Nikolaus Hansen, Sibylle D Müller, and Petros Koumoutsakos. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (cma-es). Evolutionary Computation, 11(1):1–18, 2003.

Nikolaus Hansen, Anne Auger, Raymond Ros, Steffen Finck, and Petr Pošík. Comparing results of 31 algorithms from the black-box optimization benchmarking bbob-2009. In Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation, pages 1689–1696. ACM, 2010.

Nick Jakobi, Phil Husbands, and Inman Harvey. Noise and the reality gap: The use of simulation in evolutionary robotics. In European Conference on Artificial Life, pages 704–720. Springer, 1995.

Y Jin. Surrogate-assisted evolutionary computation: Recent advances and future challenges. Swarm and Evolutionary Computation, 1(2):61–70, 2011.

Woong Yeol Joe. Vibration learning and control towards vibration actuated robotic systems. In Machine Learning and Applications (ICMLA), 2015 IEEE 14th International Conference on, pages 375–377. IEEE, 2015.

Donald R Jones, Matthias Schonlau, and William J Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492, 1998.

Mark Khazanov, Ben Humphreys, William Keat, and John Rieffel. Exploiting dynamical complexity in a physical tensegrity robot to achieve locomotion. Advances in Artificial Life, ECAL, 12:965–972, 2013.
Mark Khazanov, Jules Jocque, and John Rieffel. Evolution of locomotion on a physical tensegrity robot. In ALIFE 14: The Fourteenth Conference on the Synthesis and Simulation of Living Systems, pages 232–238, 2014.

Scott Kirkpatrick, C Daniel Gelatt, Mario P Vecchi, et al. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.

Marc W Kirschner and John C Gerhart. The Plausibility of Life: Resolving Darwin's Dilemma. Yale University Press, 2006.

Yuusuke Koizumi, Mizuho Shibata, and Shinichi Hirai. Rolling tensegrity driven by pneumatic soft actuators. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), pages 1988–1993. IEEE, 2012.

Sylvain Koos, Jean-Baptiste Mouret, and Stéphane Doncieux. The transferability approach: Crossing the reality gap in evolutionary robotics. IEEE Transactions on Evolutionary Computation, 17(1):122–145, 2013.

Christiane Lemke, Marcin Budka, and Bogdan Gabrys. Metalearning: a survey of trends and technologies. Artificial Intelligence Review, 44(1):117–130, 2015.

H.-T. Lin, G Leisk, and B.A. Trimmer. Goqbot: A caterpillar-inspired soft-bodied rolling robot. Bioinspiration and Biomimetics, 6:026007, 2011.

Hod Lipson. Challenges and opportunities for design, simulation, and fabrication of soft robots. Soft Robotics, 1(1):21–27, 2014.

Daniel J Lizotte, Tao Wang, Michael H Bowling, and Dale Schuurmans. Automatic gait optimization with Gaussian process regression. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), volume 7, pages 944–949, 2007.

Ramses V Martinez, Jamie L Branch, Carina R Fish, Lihua Jin, Robert F Shepherd, Rui Nunes, Zhigang Suo, and George M Whitesides. Robotic tentacles with three-dimensional mobility based on flexible elastomers. Advanced Materials, 25(2):205–212, 2013.

Brian T Mirletz, Perry Bhandal, Ryan D Adams, Adrian K Agogino, Roger D Quinn, and Vytas SunSpiral. Goal-directed cpg-based control for tensegrity spines with many degrees of freedom traversing irregular terrain. Soft Robotics, 2(4):165–176, 2015.

Jean-Baptiste Mouret and Jeff Clune. Illuminating search spaces by mapping elites. arXiv:1504.04909 [cs, q-bio], April 2015. URL http://arxiv.org/abs/1504.04909.

Irving J Oppenheim and William O Williams. Vibration of an elastic tensegrity structure. European Journal of Mechanics-A/Solids, 20(6):1023–1031, 2001.

Tetiana Parshakova, Minjoo Cho, Alvaro Casinelli, and Daniel Saakes. Ratchair: furniture learns to move itself with vibration. In ACM SIGGRAPH 2016 Emerging Technologies, page 19. ACM, 2016.

Jan Peters and Stefan Schaal. Policy gradient methods for robotics. In Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, pages 2219–2225. IEEE, 2006.

Rolf Pfeifer, Max Lungarella, and Fumiya Iida. Self-organization, embodiment, and biologically inspired robotics. Science, 318(5853):1088–1093, 2007.

C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006. ISBN 0-262-18253-X.

R. Reuillon, C. Schmitt, R. De Aldama, and J.-B. Mouret. A new method to evaluate simulation models: The calibration profile (cp) algorithm. Journal of Artificial Societies and Social Simulation, 18(1):12, 2015.

Dan Reznik and John Canny. The coulomb pump: A novel parts feeding method using a horizontally-vibrating surface. In Robotics and Automation, 1998. Proceedings. 1998 IEEE International Conference on, volume 1, pages 869–874. IEEE, 1998.

Dan Reznik, Emil Moshkovich, and John Canny. Building a universal planar manipulator. In Distributed Manipulation, pages 147–171. Springer, 2000.

John W Romanishin, Kyle Gilpin, and Daniela Rus. M-blocks: Momentum-driven, magnetic modular robots. In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4288–4295. IEEE, 2013.

Michael Rubenstein, Christian Ahler, and Radhika Nagpal. Kilobot: A low cost scalable robot system for collective behaviors. In Proc. of the IEEE International Conference on Robotics and Automation (ICRA), pages 3293–3298. IEEE, 2012.

Jerome Sacks, William J Welch, Toby J Mitchell, Henry P Wynn, et al. Design and analysis of computer experiments. Statistical Science, 4(4):409–423, 1989.

Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando de Freitas. Taking the human out of the loop: A review of bayesian optimization. Proceedings of the IEEE, 104(1):148–175, 2016.

Robert F Shepherd, Filip Ilievski, Wonjae Choi, Stephen A Morin, Adam A Stokes, Aaron D Mazzeo, Xin Chen, Michael Wang, and George M Whitesides. Multigait soft robot. Proceedings of the National Academy of Sciences, 108(51):20400–20403, 2011.

Michael A Simon, William A Woods Jr, Yevgeniy V Serebrenik, Sharotka M Simon, Linnea I van Griethuijsen, John J Socha, Wah-Keat Lee, and Barry A Trimmer. Visceral-locomotory pistoning in crawling caterpillars. Current Biology, 20(16):1458–1463, 2010.

Timothy W Simpson, Timothy M Mauery, John J Korte, and Farrokh Mistree. Comparison of response surface and kriging models for multidisciplinary design optimization. American Institute of Aeronautics and Astronautics, 98(7):1–16, 1998.
R. E. Skelton and M. C. de Oliveira. Tensegrity systems,
volume 1. Springer, 2009.
K. Snelson. The art of tensegrity. International Journal of Space Structures, 27(2-3):71–80, 2012.

Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012.

Germán Sumbre, Graziano Fiorito, Tamar Flash, and Binyamin Hochner. Neurobiology: motor control of flexible octopus arms. Nature, 433(7026):595–596, 2005.

B. Trimmer. New challenges in biorobotics: incorporating soft tissue into control systems. Applied Bionics and Biomechanics, 5(3), 2008.
Vance A Tucker. Energetic cost of locomotion in animals.
Comparative Biochemistry and Physiology, 34(4):841–
846, 1970.
Francisco J Valero-Cuevas, Jae-Woong Yi, Daniel Brown,
Robert V McNamara, Chandana Paul, and Hood Lipson.
The tendon network of the fingers performs anatomical
computation at a macroscopic scale. IEEE Transactions
on Biomedical Engineering, 54(6):1161–1166, 2007.
Panagiotis Vartholomeos and Evangelos Papadopoulos.
Analysis, design and control of a planar micro-robot
driven by two centripetal-force actuators. In Robotics and
Automation, 2006. ICRA 2006. Proceedings 2006 IEEE
International Conference on, pages 649–654. IEEE,
2006.
Ning Wang, Keiji Naruse, Dimitrije Stamenović, Jeffrey J
Fredberg, Srboljub M Mijailovich, Iva Marija TolićNørrelykke, Thomas Polte, Robert Mannix, and Donald E
Ingber. Mechanical behavior in living cells consistent
with the tensegrity model. Proceedings of the National
Academy of Sciences, 98(14):7765–7770, 2001.
Michael Wehner, Ryan L Truby, Daniel J Fitzgerald, Bobak
Mosadegh, George M Whitesides, Jennifer A Lewis, and
Robert J Wood. An integrated design and fabrication
strategy for entirely soft, autonomous robots. Nature,
536(7617):451–455, 2016.
Author Contributions
J.R. designed the robot; J.R. and J.-B.M. designed the study, performed the experiments, analyzed the data, and wrote the paper.

Acknowledgments
This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Project: ResiBots, grant agreement No 637972); J.R. received funding for a 1-month visit at University of Lorraine, France. The authors would like to thank Dorian Goepp for his invaluable help and the ResiBots team for their comments on this manuscript. The authors would also like to thank Bill Keat for his help and insight into the design of the robot, and all the undergraduates of Union College's Evolutionary Robotics Group.
The pictures of Fig. 1 are distributed under a Creative Commons License. A: © Jonathan Rieke. B: © Margaret Donald. C: © National Institute of Dental and Craniofacial Research, National Institutes of Health (NIH).

Supplementary information
Video S1. Presentation of our soft tensegrity robot. The video shows the soft tensegrity robot in action: how it can locomote and how it can learn to compensate when damaged. The video is available online at: https://youtu.be/SuLQDhrk9tQ
CPL: A Core Language for Cloud Computing
Technical Report for the Conference Publication [3]
Oliver Bračevac¹   Sebastian Erdweg¹   Guido Salvaneschi¹   Mira Mezini¹,²
¹ TU Darmstadt, Germany   ² Lancaster University, UK
arXiv:1602.00981v2 [cs.PL] 5 Feb 2016

Abstract
Running distributed applications in the cloud involves deployment. That is, distribution and configuration of application services and middleware infrastructure. The considerable complexity of these tasks resulted in the emergence of declarative JSON-based domain-specific deployment languages to develop deployment programs. However, existing deployment programs unsafely compose artifacts written in different languages, leading to bugs that are hard to detect before run time. Furthermore, deployment languages do not provide extension points for custom implementations of existing cloud services such as application-specific load balancing policies.
To address these shortcomings, we propose CPL (Cloud Platform Language), a statically-typed core language for programming both distributed applications as well as their deployment on a cloud platform. In CPL, application services and deployment programs interact through statically typed, extensible interfaces, and an application can trigger further deployment at run time. We provide a formal semantics of CPL and demonstrate that it enables type-safe, composable and extensible libraries of service combinators, such as load balancing and fault tolerance.

1. Introduction
Cloud computing [30] has emerged as the reference infrastructure for concurrent distributed services with high availability, resilience and quick response times, providing access to on-demand and location-transparent computing resources. Companies develop and run distributed applications on specific cloud platforms, e.g., Amazon AWS (https://aws.amazon.com) or Google Cloud Platform (https://cloud.google.com). Services are bought as needed from the cloud provider in order to adapt to customer demand.
An important and challenging task in the development process of cloud applications is deployment. Especially, deployment involves the distribution, configuration and composition of (1) virtual machines that implement the application and its services, and of (2) virtual machines that provide middleware infrastructure such as load balancing, key-value stores, and MapReduce. Deploying a cloud application can go wrong and cause the application to malfunction. Possible causes are software bugs in the application itself, but also wrong configurations, such as missing library dependencies or inappropriate permissions for a shell script. Fixing mistakes after deployment causes high costs and loss of reputation. For example, in 2012, Knight Capital lost over $440 Million over the course of 30 minutes due to a bug in its deployed trading software (http://www.bloomberg.com/bw/articles/2012-08-02/knight-shows-how-to-lose-440-million-in-30-minutes), causing the disappearance of the company from the market.
Considering that cloud applications can have deployment sizes in the hundreds or thousands of virtual machines, manual configuration is error-prone and does not scale. Cloud platforms address this issue with domain-specific languages (DSLs) such as Amazon CloudFormation or Google Cloud Deployment Manager. The purpose of these DSLs is to write reusable deployment programs, which instruct the cloud platform to perform deployment steps automatically. A typical deployment program specifies the required virtual machines for the application infrastructure, how these virtual machines are connected with each other, and how the application infrastructure connects to the pre-existing or newly created middleware infrastructure of the cloud platform.
However, the modularity of current cloud deployment DSLs is insufficient (detailed discussion in Section 2):
Unsafe Composition: Application services and deployment programs are written in different languages. Deployment DSLs configure application services by lexically expanding configuration parameters into application source code before its execution. This approach is similar to a lex-
ical macro system and makes deployment programs unsafe
because of unintentional code injection and lexical incompatibilities.
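As a minimal illustration of this point (in Python rather than CloudFormation, and with hypothetical names), the sketch below shows why lexically splicing a deployment parameter into application code is fragile: an unexpected parameter value silently changes the meaning of the generated script.

```python
# Minimal illustration (not CloudFormation itself) of the lexical-expansion
# problem: a deployment parameter is spliced into a shell script as text, so
# an unexpected value changes the meaning of the generated code.
def render_user_data(region: str) -> str:
    # Mirrors the "UserData" pattern of Figure 1: plain string concatenation.
    return "#!/bin/bash -xe\n" + "cfn-init --region " + region + "\n"

print(render_user_data("eu-central-1"))                    # intended use
print(render_user_data("eu-central-1; rm -rf /tmp/app"))   # value injects a command
# Nothing in the deployment language checks the type or syntax of the result.
```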
No Extensibility: Middleware cloud services (e.g., elastic
load balancing, which may dynamically allocate new virtual
machines) are pre-defined in the cloud platform and only
referenced by the deployment program through external
interfaces. As such, there is no way to customize those
services during deployment or extend them with additional
features.
Stage Separation: Current deployment DSLs finish their
execution before the application services are started. Therefore, it is impossible to change the deployment once the application stage is active. Thus, applications cannot self-adjust
their own deployment, e.g., to react to time-varying customer
demand.
We propose CPL (Cloud Platform Language), a staticallytyped core language for programming cloud applications and
deployments. CPL employs techniques from programming
language design and type systems to overcome the issues outlined above. Most importantly, CPL unifies the programming
of deployments and applications into a single language. This
avoids unsafe composition because deployments and applications can exchange values directly via statically typed interfaces. For extensibility, CPL supports higher-order service
combinators with statically typed interfaces using bounded
polymorphism. Finally, CPL programs run at a single stage
where an application service can trigger further deployment.
To demonstrate how CPL solves the problems of deployment languages, we implemented a number of case studies.
First, we demonstrate type-safe composition through generically typed worker and thunk abstractions. Second, on top of
the worker abstraction, we define composable and reusable
service combinators in CPL, which add new features, such as
elastic load balancing and fault tolerance. Finally, we demonstrate how to model MapReduce as a deployment program in
CPL and apply our combinators, obtaining different MapReduce variants, which safely deploy at run time.
In summary, we make the following contributions:
• We analyze the problems with current cloud deployment
DSLs.
 1  { //...
 2    "Parameters": {
 3      "InstanceType": {
 4        "Description": "WebServer EC2 instance type",
 5        "Type": "String",
 6        "Default": "t2.small",
 7        "AllowedValues": [ "t2.micro", "t2.small" ],
 8        "ConstraintDescription": "a valid EC2 instance type."
 9      } //...
10    },
11    "Resources": {
12      "WebServer": {
13        "Type": "AWS::EC2::Instance",
14        "Properties": {
15          "InstanceType": { "Ref" : "InstanceType" },
16          "UserData": { "Fn::Base64" : { "Fn::Join" : ["", [
17            "#!/bin/bash -xe\n",
18            "yum update -y aws-cfn-bootstrap\n",
19            "/opt/aws/bin/cfn-init -v ",
20            "    --stack ", { "Ref" : "AWS::StackName" },
21            "    --resource WebServer ",
22            "    --configsets wordpress_install ",
23            "    --region ", { "Ref" : "AWS::Region" }, "\n"
24          ]]}}, //...
25        },
26      }
27      //...
28    },
29    "Outputs": {
30      "WebsiteURL": {
31        "Value":
32          { "Fn::Join" :
33            ["", ["http://", { "Fn::GetAtt" :
34              [ "WebServer", "PublicDnsName" ]}, "/wordpress" ]]},
35        "Description" : "WordPress Website"
36      }
37    }
38  }
Figure 1. A deployment program in CloudFormation
(details omitted, full version: https://s3.eu-central-1.amazonaws.
com/cloudformation-templates-eu-central-1/WordPress_Multi_AZ.
template).
• We define the formal syntax and semantics of CPL to model cloud platforms as distributed, asynchronous message passing systems. Our design is inspired by the Join Calculus [12].
• We define the type system of CPL as a variant of System F with bounded quantification [24].
• We formalize CPL in PLT Redex [9] and we provide a concurrent implementation in Scala.
• We evaluated CPL with case studies, including a library of typed service combinators that model elastic load balancing and fault tolerance mechanisms. Also, we apply the combinators to a MapReduce deployment specification.
The source code of the PLT Redex and Scala implementations and of all case studies is available online: https://github.com/seba--/djc-lang.

2. Motivation
In this section, we analyze the issues that programmers encounter with current configuration and deployment languages on cloud platforms by a concrete example.

2.1 State of the Art
Figure 1 shows an excerpt of a deployment program in CloudFormation, a JSON-based DSL for Amazon AWS. The example is from the CloudFormation documentation. We summarize the main characteristics of the deployment language below.
• Input parameters capture varying details of a configuration (Lines 2-10). For example, the program receives the virtual machine instance type that should host the web server for a user blog (Line 3). This enables reuse of the program with different parameters.
• CloudFormation programs specify named resources to be created in the deployment (Lines 11-28), e.g., deployed virtual machines, database instances, load balancers and
even other programs as modules. The program in Figure 1
allocates a "WebServer" resource (Line 12), which is a
virtual machine instance. The type of the virtual machine
references a parameter (Line 15), that the program declared earlier on (Line 3). Resources can refer to each
other, for example, configuration parameters of a web
server may refer to tables in a database resource.
• Certain configuration phases require executing application code inside virtual machine instances after the deployment stage. Application code is often directly specified in resource bodies (Lines 17-24). In the example, a bash script defines the list of software packages to install on the new machine instance (in our case a WordPress blog, http://wordpress.org). In principle, arbitrary programs in any language can be specified.
• Deployment programs specify output parameters (Lines 29-37), which may depend on input parameters and resources. Output parameters are returned to the caller after executing the deployment program. In this example, it is a URL pointing to the new WordPress blog.
• The deployment program is interpreted at run time by the cloud platform which performs the deployment steps according to the specification.

2.2 Problems with Deployment Programs
In the following we discuss the issues with the CloudFormation example described above.
Internal Safety: Type safety for deployment programs is limited. Developers define "types" for resources of the cloud platform, e.g., AWS::EC2::Instance (Line 13) represents an Amazon EC2 instance. However, the typing system of current cloud deployment languages is primitive and only relies on the JSON types.
Cross-language Safety: Even more problematic are issues caused by cross-language interaction between the deployment language and the language(s) of the deployed application services. For example, the AWS::Region variable is passed from the JSON specification to the bash script (Line 24). However, the sharing mechanism is just syntactic replacement of the current value of AWS::Region inside the script. Neither are there type-safety checks nor syntactic checks before the script is executed. More generally, there is no guarantee that the data types of the deployment language are compatible with the types of the application language nor that the resulting script is syntactically correct. This problem makes cloud applications susceptible to hygiene-related bugs and injection attacks [4].
Low Abstraction Level: Deployment languages typically are Turing-complete but the abstractions are low-level and not deployment-specific. For example, (1) deployment programs receive parameters and return values similar to procedures and (2) deployment programs can be instantiated from inside other deployment programs, which resembles modules. Since deployment is a complex engineering task, advanced language features are desirable to facilitate programming in the large, e.g., higher-order combinators, rich data types and strong interfaces.
Two-phase Staging: Deployment programs in current DSLs execute before the actual application services, that is, information flows from the deployment language to the deployed application services, but not the other way around. As a result, an application service cannot reconfigure a deployment based on the run time state. Recent scenarios in reactive and big data computations demonstrate that this is a desirable feature [10].
Lack of Extensibility: Resources and service references in deployment programs refer to pre-defined abstractions of the cloud platform, which have rigid interfaces. Cloud platforms determine the semantics of the services. Programmers cannot implement their own variants of services that plug into the deployment language with the same interfaces as the native services.
Informal Specification: The behavior of JSON deployment scripts is only informally defined. The issue is exacerbated by the mix of different languages. As a result, it is hard to reason about properties of systems implemented using deployment programs.
The issues above demand a radical change in the way programmers deploy cloud applications and in the way application and deployment configuration code relate to each other.

3. The Cloud Platform Language
A solution to the problems identified in the previous section requires a holistic approach where cloud abstractions are explicitly represented in the language. Programmers should be able to modularly specify application behavior as well as reconfiguration procedures. Run time failures should be prevented at compilation time through type checking.
These requirements motivated the design of CPL. In this section, we present its syntax and the operational semantics.

3.1 Language Features in a Nutshell
Simple Meta-Theory: CPL should serve as the basis for investigating high-level language features and type systems designed for cloud computations. To this end, it is designed as a core language with a simple meta-theory. Established language features and modeling techniques, such as lexical scoping and a small-step operational semantics, form the basis of CPL.
Concurrency: CPL targets distributed concurrent computations. To this end, it allows the definition of independent computation units, which we call servers.
Asynchronous Communication: Servers can receive parameterized service requests from other servers. To realistically model low-level communication within a cloud, the language only provides asynchronous end-to-end communication, where the success of a service request is not guaranteed. Other forms of communication, such as synchronous, multicast, or error-checking communication, can be defined on top of the asynchronous communication.
Local Synchronization: Many useful concurrent and asynchronous applications require synchronization. We adopt join patterns from the Join Calculus [12]. Join patterns are declarative synchronization primitives for machine-local synchronization.
First-Class Server Images: Cloud platforms employ virtualization to spawn, suspend and duplicate virtual machines. That is, virtual machines are data that can be stored and sent as payload in messages. This is the core idea behind CPL's design and enables programs to change their deployment at run time. Thus, CPL features servers as values, called first-class servers. Active server instances consist of an address, which points to a server image (or snapshot). The server image embodies the current run time state of a server and a description of the server's functionality, which we call server template. At run time, a server instance may be overwritten by a new server image, thus changing the behavior for subsequent service requests to that instance.
Transparent Placement: Cloud platforms can reify new machines physically (on a new network node) or virtually (on an existing network node). Since this difference does not influence the semantics of a program but only its non-functional properties (such as performance), our semantics is transparent with respect to placement of servers. Thus, actual languages based on our core language can freely employ user-defined placement definitions and automatic placement strategies. Also, we do not require that CPL-based languages map servers to virtual machines, which may be inefficient for short-lived servers. Servers may as well represent local computations executing on a virtual machine.

3.2 Core Syntax
Figure 2 displays the core syntax of CPL. An expression e is either a value or one of the following syntactic forms:⁵
• A variable x is from the countable set N. Variables identify services and parameters of their requests.
• A server template (srv r) is a first-class value that describes the behavior of a server as a sequence of reaction rules r. A reaction rule takes the form p . e, where p is a sequence of joined service patterns and e is the body. A service pattern x0⟨x⟩ in p declares a service named x0 with parameters x, and a rule can only fire if all service patterns are matched simultaneously. The same service pattern can occur in multiple rules of a server.
• A server spawn (spwn e) creates a new running server instance at a freshly allocated server address i from a given server image (srv r, m) represented by e. A server image is a description of a server behavior plus a server state – a buffer of pending messages. A real-world equivalent of server images are, e.g., virtual machine snapshots. A special case of a server image is the value 0, which describes an inactive or shut down server.
• A fully qualified service reference e♯x, where e denotes a server address and x is the name of a service provided by the server instance at e. Service references to server instances are themselves values.
• A self-reference this refers to the address of the lexically enclosing server template, which, e.g., allows one service to call upon other services of the same server instance.
• An asynchronous service request e0⟨e⟩, where e0 represents a service reference and e the arguments of the requested service.
• A parallel expression (par e) of service requests e to be executed independently. The empty parallel expression (par ε) acts as a noop expression, unit value, or null process and is a value.
• A snapshot snap e yields an image of the server instance which resides at the address denoted by e.
• A replacement repl e1 e2 of the server instance at address e1 with the server image e2.

Notation: In examples, p & p denotes pairs of join patterns and e ∥ e denotes pairs of parallel expressions. We sometimes omit empty buffers when spawning servers, i.e., we write spwn (srv r) for spwn (srv r, ε). To improve readability in larger examples, we use curly braces to indicate the lexical scope of syntactic forms. We write service names and meta-level definitions in typewriter font, e.g., this♯foo and MyServer = srv { }. We write bound variables in italic font, e.g., srv { left⟨x⟩ & right⟨y⟩ . pair⟨x, y⟩ }.

Example. For illustration, consider the following server template Fact for computing factorials, which defines three rules with 5 services.⁶

⁵ We write a to denote the finite sequence a1 . . . an and we write ε to denote the empty sequence.
⁶ For the sake of presentation, we use ordinary notation for numbers, arithmetics and conditionals, all of which is church-encodable on top of CPL (cf. Section 3.5).

1  Fact = srv {
2    main⟨n, k⟩ . //initialization
3      this♯fac⟨n⟩ ∥ this♯acc⟨1⟩ ∥ this♯out⟨k⟩
4
5    fac⟨n⟩ & acc⟨a⟩ . //recursive fac computation
e ::= v | x | this | srv r | spwn e | e♯x | e⟨e⟩ | par e | snap e | repl e e    (Expressions)
v ::= srv r | i | i♯x | par ε | (srv r, m) | 0    (Values)
r ::= p . e    (Reaction Rules)
p ::= x⟨x⟩    (Join Patterns)
m ::= x⟨v⟩    (Message Values)
i ∈ N    (Server Addresses)
x, y, z ∈ N    (Variable Names)
µ ::= ∅ | µ; i ↦ (srv r, m) | µ; i ↦ 0    (Routing Tables)
E ::= [·] | spwn E | E♯x | E⟨e⟩ | e⟨e E e⟩ | par e E e | snap E | repl E e | repl e E    (Evaluation Contexts)

Figure 2. Expression Syntax of CPL.
(Cong)   e | µ −→ e′ | µ′  ⟹  E[e] | µ −→ E[e′] | µ′

(Par)    par e1 (par e2) e3 | µ −→ par e1 e2 e3 | µ

(Rcv)    µ(i) = (s, m)  ⟹  i♯x⟨v⟩ | µ −→ par ε | µ; i ↦ (s, m · x⟨v⟩)

(React)  s = srv r1 (p . eb) r2    µ(i) = (s, m)    match(p, m) ⇓ (m′, σ)    σb = σ ∪ {this := i}
         ⟹  par e | µ −→ par e σb(eb) | µ; i ↦ (s, m′)

(Spwn)   i ∉ dom(µ)    (s = 0 ∨ s = (srv r, m))  ⟹  spwn s | µ −→ i | µ; i ↦ s

(Snap)   µ(i) = s    (s = 0 ∨ s = (srv r, m))  ⟹  snap i | µ −→ s | µ

(Repl)   i ∈ dom(µ)    (s = 0 ∨ s = (srv r, m))  ⟹  repl i s | µ −→ par ε | µ; i ↦ s

Matching Rules:

(Match0)  match(ε, m) ⇓ (m, ∅)

(Match1)  m = m1 (x⟨v1 . . . vk⟩) m2    σ = {xi := vi | 1 ≤ i ≤ k}    dom(σ) ∩ dom(σr) = ∅    match(p, m1 m2) ⇓ (mr, σr)
          ⟹  match(x⟨x1 . . . xk⟩ p, m) ⇓ (mr, σ ∪ σr)

Figure 3. Small-step Operational Semantics of CPL.
α, β, γ, . . .                                            (Type Variables)
T ::= Top | Unit | α | ⟨ T ⟩ | srv x : T | srv ⊥
      | inst T | img T | ∀α <: T. T                      (Types)
e ::= . . . | Λα <: T. e | e [T]                          (Extended Expressions)
v ::= . . . | Λα <: T. e                                  (Extended Values)
p ::= x⟨x : T⟩                                            (Typed Join Patterns)
Γ ::= ∅ | Γ, α <: T | Γ, x : T | Γ, this : T              (Type Contexts)
Σ ::= ∅ | Σ, i : T                                        (Location Typings)

Figure 4. Expression Syntax of CPL with Types.
Example. For illustration, consider the following server template Fact for computing factorials, which defines three rules with 5 services. (For the sake of presentation, we use ordinary notation for numbers, arithmetic and conditionals, all of which are Church-encodable on top of CPL; cf. Section 3.5.)

Fact = srv {
  main⟨n, k⟩ . //initialization
    this]fac⟨n⟩ ∥ this]acc⟨1⟩ ∥ this]out⟨k⟩

  fac⟨n⟩ & acc⟨a⟩ . //recursive fac computation
    if (n ≤ 1)
    then this]res⟨a⟩
    else (this]fac⟨n - 1⟩ ∥ this]acc⟨a * n⟩)

  res⟨n⟩ & out⟨k⟩ . k⟨n⟩ //send result to k
}

To compute a factorial, we create a server instance from the template Fact and request service main:

(spwn Fact)]main⟨5, k⟩.

The first rule defines a service main with two arguments, an integer n and a continuation k. The continuation is necessary because service requests are asynchronous and thus, the factorial server must notify the caller when the computation finishes. Upon receiving a main request, the server sends itself three requests: fac represents the outstanding factorial computation, acc is used as an accumulator for the ongoing computation, and out stores the continuation provided by the caller.

The second rule of Fact implements the factorial function and synchronously matches and consumes requests fac and acc using join patterns. Upon termination, the second rule sends a request res to the running server instance; otherwise it decreases the argument of fac and updates the accumulator.

Finally, the third rule of Fact retrieves the user-provided continuation k from the request out and the result res. The rule expects the continuation to be a service reference and sends a request to it with the final result as argument.

An example reduction trace is in the appendix.
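To relate the calculus to a mainstream host language, the following is a minimal, single-threaded Scala sketch of the same Fact protocol, with the message buffer and rule firing written out by hand. The class and message names are illustrative assumptions, not part of CPL or its implementation, and the sketch replaces CPL's nondeterministic scheduling with a fixed rule order.

```scala
import scala.collection.mutable

object FactDemo {
  // Pending requests in the server instance's buffer.
  sealed trait Msg
  case class Main(n: Int, k: Int => Unit) extends Msg
  case class Fac(n: Int)                  extends Msg
  case class Acc(a: Int)                  extends Msg
  case class Out(k: Int => Unit)          extends Msg
  case class Res(n: Int)                  extends Msg

  // One "server instance" of Fact: fire any rule whose join pattern
  // is satisfied by the buffered messages, consuming them atomically.
  final class FactInstance {
    private val buffer = mutable.ListBuffer.empty[Msg]

    def send(m: Msg): Unit = { buffer += m; react() }

    private def take[A](pf: PartialFunction[Msg, A]): Option[A] = {
      val i = buffer.indexWhere(pf.isDefinedAt)
      if (i < 0) None else Some(pf(buffer.remove(i)))
    }

    private def react(): Unit = {
      // rule 1: main⟨n, k⟩
      take { case Main(n, k) => (n, k) }.foreach { case (n, k) =>
        send(Fac(n)); send(Acc(1)); send(Out(k))
      }
      // rule 2: fac⟨n⟩ & acc⟨a⟩ — only fire when both messages are present
      if (buffer.exists(_.isInstanceOf[Fac]) && buffer.exists(_.isInstanceOf[Acc])) {
        val n = take { case Fac(n) => n }.get
        val a = take { case Acc(a) => a }.get
        if (n <= 1) send(Res(a)) else { send(Fac(n - 1)); send(Acc(a * n)) }
      }
      // rule 3: res⟨n⟩ & out⟨k⟩
      if (buffer.exists(_.isInstanceOf[Res]) && buffer.exists(_.isInstanceOf[Out])) {
        val n = take { case Res(n) => n }.get
        val k = take { case Out(k) => k }.get
        k(n)
      }
    }
  }

  def main(args: Array[String]): Unit =
    // (spwn Fact)]main⟨5, k⟩, with k printing the result
    new FactInstance().send(Main(5, r => println(s"5! = $r")))
}
```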
3.3 Operational Semantics
We define the semantics of CPL as a small-step structural
operational semantics using reduction contexts E (Figure 2)
in the style of Felleisen and Hieb [8].
Figure 3 shows the reduction rules for CPL expressions.
Reduction steps are atomic and take the form e | µ −→ e′ | µ′.
A pair e | µ represents a distributed cloud application, where
expression e describes its current behavior and µ describes
its current distributed state. We intend e as a description
of the software components and resources that execute and
reside at the cloud provider and do not model client devices.
We call the component µ a routing table, which is a finite
map. Intuitively, µ records which addresses a cloud provider
(T-VAR)   x ∈ N ∪ {this},  Γ(x) = T  ⟹  Γ | Σ ⊢ x : T

(T-PAR)   ∀i. Γ | Σ ⊢ ei : Unit  ⟹  Γ | Σ ⊢ par e : Unit

(T-SRV)   ri = pi . ei,   pi,j = xi,j⟨yi,j : Ti,j⟩,   Si,j = ⟨Ti,j⟩,
          (∀i, j, k. j ≠ k → yi,j ∩ yi,k = ∅),   (∀i, j, k, l. xi,j = xk,l → Ti,j = Tk,l),
          T = srv xi,j : Si,j,   ftv(T) ⊆ ftv(Γ),   ∀i. Γ, yi,j : Ti,j, this : T | Σ ⊢ ei : Unit
          ⟹  Γ | Σ ⊢ srv r : T

(T-0)     Γ | Σ ⊢ 0 : img srv ⊥

(T-IMG)   Γ | Σ ⊢ srv r : T,   ri = pi . ei,   pi,j = xi,j⟨yi,j : Ti,j⟩,
          (∀k. ∃i, j. (mk = xi,j⟨vi,j⟩ ∧ Γ | Σ ⊢ vi,j : Ti,j))
          ⟹  Γ | Σ ⊢ (srv r, m) : img T

(T-SNAP)  Γ | Σ ⊢ e : inst T  ⟹  Γ | Σ ⊢ snap e : img T

(T-REPL)  Γ | Σ ⊢ e1 : inst T,   Γ | Σ ⊢ e2 : img T  ⟹  Γ | Σ ⊢ repl e1 e2 : Unit

(T-SPWN)  Γ | Σ ⊢ e : img T  ⟹  Γ | Σ ⊢ spwn e : inst T

(T-INST)  Σ(i) = img T  ⟹  Γ | Σ ⊢ i : inst T

(T-SVC)   Γ | Σ ⊢ e : inst (srv x : T)  ⟹  Γ | Σ ⊢ e]xi : Ti

(T-REQ)   Γ | Σ ⊢ e : ⟨T1 . . . Tn⟩,   ∀i. Γ | Σ ⊢ ei : Ti  ⟹  Γ | Σ ⊢ e⟨e1 . . . en⟩ : Unit

(T-TABS)  Γ, α <: T | Σ ⊢ e : U  ⟹  Γ | Σ ⊢ Λα <: T. e : ∀α <: T. U

(T-TAPP)  Γ | Σ ⊢ e : ∀α <: T2. T,   Γ ⊢ T1 <: T2,   ftv(T1) ⊆ ftv(Γ)  ⟹  Γ | Σ ⊢ e [T1] : T{α := T1}

(T-SUB)   Γ | Σ ⊢ e : T,   Γ ⊢ T <: U  ⟹  Γ | Σ ⊢ e : U

Figure 5. Typing rules of CPL.
Γ ` T <: Top
α <: T ∈ Γ
Γ ` α <: T
∀j. ∃i. (yj = xi ∧ Γ ` Ti <: Uj )
Γ ` srv x : T <: srv y : U
Γ ` T <: U
Γ ` inst T <: inst U
Γ ` srv ⊥ <: srv T
Γ ` T <: U
(S-TOP)
Γ ` img T <: img U
∀i. Γ ` Ui <: Ti
(S-TVAR)
Γ ` h T1 , . . . , Tn i <: h U1 , . . . , Un i
Γ, α1 <: T ` U1 <: U2 {α2 := α1 }
(S-S RV)
Γ ` (∀α1 <: T. U1 ) <: (∀α2 <: T. U2 )
(S-I NST)
Γ ` T <: T
Γ ` T1 <: T2
(S-S RV⊥ )
Γ ` T2 <: T3
Γ ` T1 <: T3
(S-I MG)
(S-S VC)
(S-U NIV)
(S-R EFL)
(S-T RANS)
Figure 6. Subtyping rules of CPL.
assigns to the server instances that the cloud application creates during its execution (this bears similarity to lambda calculi enriched with references and a store [31]). We abstract over technical details, such as the underlying network.
The first reduction rule (CONG) defines the congruence rules of the language and is standard. The second rule (PAR) is technical. It flattens nested parallel expressions in order to have a simpler representation of parallel computations. The third rule (RCV) lets a server instance receive an asynchronous service request, where the request is added to the instance's buffer for later processing. Our semantics abstracts over the technicalities of network communication. That is, we consider requests i]x⟨v⟩ that occur in a CPL expression to be in transit, until a corresponding (RCV) step consumes them. The fourth rule (REACT) fires reaction rules of a server. It selects a running server instance (s, m), selects a reaction rule (p . eb) from it, and tries to match its join patterns p against the pending service requests in the buffer m. A successful match consumes the service requests, instantiates the body eb of the selected reaction rule and executes it independently in parallel.
Finally, let us consider the rules for spwn, snap and repl, which manage server instances and images. Reduction rule (SPWN) creates a new server instance from a server image, where a fresh unique address is assigned to the server instance. This is the only rule that allocates new addresses in µ. One can think of this rule as a request to the cloud provider to create a new virtual machine and return its IP address. Importantly, the address that spwn yields is only visible to the caller. The address can only be accessed by another expression if it shares a common lexical scope with the caller. Thus, lexical scope restricts the visibility of addresses. This also means that the map µ is not a shared memory, but a combined, flat view of disjoint distributed information (this approach is comparable to sets of definitions in the chemical soup of the Join Calculus [12]).

Reduction rule (SNAP) yields a copy of the server image at address i, provided the address is in use. Intuitively, it represents the invocation of a cloud management API to create a virtual machine snapshot. Reduction rule (REPL) replaces the server image at address i with another server image.
We define spwn, snap and repl as atomic operations. At the implementation level, each operation may involve multiple communication steps with the cloud provider and thus take noticeable time to complete, blocking execution, especially when the operation translates to booting a new OS-level virtual machine. On the other hand, as we motivated at the beginning of this section, servers may not necessarily map to virtual machines, but to in-memory computations. In this case, we expect our three atomic operations to be reasonably fast. Also, we do not impose any synchronization mechanism on server addresses, which may result in data races if multiple management operations access the same address in parallel. Instead, programmers have to write their own synchronization mechanisms on top of CPL if required.
Matching satisfies the following property:

Proposition 1 (Match soundness and completeness). Let p be a sequence of join patterns with pi = xi⟨yi⟩, m and mr sequences of service request values, and σ a substitution. Then match(p, m) ⇓ (mr, σ) if and only if there exists a sequence mc such that:

1. Sequence mc represents the request values consumed from m, that is, mr mc = m modulo permutation.
2. All consumed requests mc match the join patterns p, that is, mc and p have the same length and mc,i = xi⟨vi⟩, where yi and vi have the same length.
3. σ substitutes the parameters of the matched join patterns with the actual arguments, that is, σ = {yi := vi | 1 ≤ i ≤ k}, where k is the length of p.

Proof. Soundness (⇒): Straightforward induction on the derivation of the judgment match(p, m1) ⇓ (m2, σ). Completeness (⇐): Straightforward by induction on the number k of service patterns in p.
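For intuition, the following Scala sketch spells out one possible implementation of the matching judgment over a buffer of pending requests. Pattern, Msg and matchPatterns are illustrative names, not part of the formal development, and the sketch resolves the calculus' nondeterministic choice by always taking the first fitting message.

```scala
// Join patterns name a service and bind its parameters; messages carry
// a service name and argument values.
final case class Pattern(service: String, params: List[String])
final case class Msg(service: String, args: List[String])

object Matcher {
  // Returns the remaining buffer and the substitution (parameter -> value) if
  // every pattern can consume one pending message, mirroring (MATCH0)/(MATCH1).
  // Linearity of patterns (disjoint parameter names) is assumed, as in (T-SRV).
  def matchPatterns(ps: List[Pattern], buffer: List[Msg]): Option[(List[Msg], Map[String, String])] =
    ps match {
      case Nil => Some((buffer, Map.empty))                               // (MATCH0)
      case p :: rest =>
        val i = buffer.indexWhere(m => m.service == p.service && m.args.length == p.params.length)
        if (i < 0) None
        else {
          val m = buffer(i)
          val remaining = buffer.patch(i, Nil, 1)                         // consume the request
          matchPatterns(rest, remaining).map { case (rem, sigma) =>
            (rem, sigma ++ p.params.zip(m.args).toMap)                    // (MATCH1)
          }
        }
    }
}
```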
Our semantics is nondeterministic along three dimensions:

• If multiple server instances can fire a rule, (REACT) selects one of them nondeterministically. This models concurrent execution of servers that can react to incoming service requests independently.

• If multiple rules of a server instance can fire, (REACT) selects one of them nondeterministically. This is of lesser importance and languages building on ours may fix a specific order for firing rules (e.g., in the order of definition).

• If multiple service request values can satisfy a join pattern, (MATCH1) selects one of them nondeterministically. This models asynchronous communication in distributed systems, i.e., the order in which a server serves requests is independent of the order in which services are requested. More concrete languages based on CPL may employ stricter ordering (e.g., to preserve the order of requests that originate from a single server).
3.4 Placement of Servers
We intentionally designed the semantics of CPL with transparency of server placement in mind. That is, a single abstraction in the language, the server instance, models all computations, irrespective of whether the instance runs on its own
physical machine or as a virtual machine hosted remotely –
indeed, placement transparency is a distinguishing feature of
cloud applications.
However, despite the behavior of servers being invariant to
placement, placement has a significant impact in real-world
scenarios and influences communication and computation
performance [2, 19]. The need to account for placement in
an implementation is critical considering that – servers being the only supported abstraction – every single let binding
and lambda abstraction desugars to a server spawn (cf. Section 3.5). In our concurrent Scala implementation, we support
an extended syntax for server spawns that allows programmers to declare whether a server instance runs in a new thread
or in the thread that executes the spawn. This provides a
simple mechanism for manually implementing placement
strategies.
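As a point of reference, an annotated spawn of this kind could look as follows in a Scala embedding; this is only a sketch under assumed names (Placement, ServerRef, Spawn.spwn), not the actual API of the implementation.

```scala
import java.util.concurrent.Executors

// Placement hint for a spawn: run the server's reactions on the spawning
// thread or on a dedicated thread. Semantics is unchanged; only the
// non-functional mapping to threads differs (placement transparency).
sealed trait Placement
case object SameThread extends Placement
case object NewThread  extends Placement

final class ServerRef(val task: Runnable)

object Spawn {
  private val pool = Executors.newCachedThreadPool()

  def spwn(serverTask: Runnable, placement: Placement): ServerRef = {
    placement match {
      case SameThread => serverTask.run()        // execute on the caller's thread
      case NewThread  => pool.submit(serverTask) // execute on its own thread
    }
    new ServerRef(serverTask)
  }
}
```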
A viable alternative to manual specification of placement
is automatic placement strategies. Together with server
migration, automatic placement strategies can adapt the
server layout to changing conditions. Based on our language,
a management system for a cloud infrastructure can formally
reason about optimal placement strategies. In future work,
we plan to implement these ideas in a distributed run-time
system for CPL (cf. Section 5.4).
3.5 Derived syntax and base operations

Our core language is expressive enough to encode a wide range of typical language constructs. To illustrate its expressiveness and for convenience in expressing example computations in the rest of the paper, we define derived syntax for let-expressions, first-class functions, thunks, and base operations, all of which can be desugared to the core syntax introduced above.

Let bindings. The derived syntax for let bindings desugars to the core syntax of CPL as follows:

let x = e1 in e2
    (spwn (srv let⟨x⟩ . e2))]let⟨e1⟩.

Evaluating let amounts to (a) spawning a new server instance that offers a service called let that will run e2 when requested and (b) requesting this service with the bound expression e1 as an argument.

We also define derived syntax for a variant of let called letk for cases in which the bound expression provides its result through a continuation. This is to account for the fact that often expressions in CPL involve asynchronous service calls that, instead of returning a value, pass it to a continuation. The definition of letk is as follows:

letk x = e1⟨e⟩ in e2
    e1⟨e, (spwn (srv k⟨x⟩ . e2))]k⟩.

Here, we bind the variable x via a continuation that we add to the service request e1⟨e⟩, assuming e1 takes a continuation as final argument. When e1 terminates, it calls the continuation and thus triggers execution of e2. For example, we can use letk to bind and use the result of the Fact server shown above:

letk n = (spwn Fact)]main⟨5⟩ in Log]write⟨n⟩

Note that the desugaring for both variants of let wraps the body e2 in a server template, which changes the meaning of the self-reference this in e2. To counter this effect and to make the derived syntax transparent, the desugaring that we actually implemented substitutes free occurrences of this in e2 with the server instance surrounding the let.

First-class functions. We can encode first-class functions as server instances with a single service app:

λx. e
    spwn (srv app⟨x, k⟩ . T(e, k)),  where k is fresh

Recall that service requests in CPL are asynchronous. In order to correctly propagate argument values and the result of function bodies, we need to transform argument expressions and function bodies into continuation-passing style, for example using the following transformation T:

T(λx. e, k)  =  k⟨spwn (srv app⟨x, k⟩ . T(e, k))⟩
T((f e), k)  =  T(f, (spwn (srv k1⟨vf⟩ .
                    T(e, (spwn (srv k2⟨ve⟩ .
                       vf]app⟨ve, k⟩))]k2)))]k1)      where vf, ve are fresh
T(e, k)      =  k⟨e⟩

For example, we can define and apply a function that instantiates a server-template argument:

(λx. spwn x)]app⟨Fact, k0⟩  −→∗  k0⟨Fact0⟩

Our encoding of first-class functions is similar to the one proposed for the Join Calculus [12] and it also shows that our language is Turing-complete. Moreover, it enables Church encodings of data types such as numbers or lists.

Thunks. A thunk is a first-class value that represents a packaged, delayed computation. Servers can force the computation of a thunk and they can pass thunks to other servers. Thunks play a significant role in distributed systems, because they enable servers to distribute work over other servers dynamically.

Interestingly, lambdas as defined above do not give rise to a useful implementation of thunks, because a computation that is encoded as a lambda is already installed on a concrete spawned server: every lambda expression gives rise to exactly one server instance that solely executes the body of this lambda. In contrast, we desire an implementation of thunks that allows us to dynamically allocate servers for executing a thunk. To this end, we represent thunks as server templates:

thunk e
    srv force⟨k⟩ . k⟨e⟩

Since server templates are first-class in CPL, thunks can be passed to other servers. A server can instantiate a thunk any number of times and call the force request with a continuation to get the result of the thunk.

Note that similarly to let, we substitute this in thunks and lambda abstractions by the enclosing server instance to make our encodings transparent.

Base operations. While we can use Church encodings to represent data types and their operations, it is more convenient (and more efficient in practice) to assume some built-in base operations. In particular, we can take the liberty of assuming that base operations are synchronous and in direct style, that is, base operations return values directly and do not require continuation-passing style. For the remainder of the paper, we assume built-in base operations on Booleans, integers, floating points, tuples and lists. We added these operations in our implementation and it is easy to add further base operations. To distinguish synchronous calls to base operations from asynchronous service requests, we use rounded parentheses for base operations, for example, max(7, 11).

4. Type System

We designed and formalized a type system for CPL in the style of System F with subtyping and bounded quantification [24]. The type system ensures that all service requests in a well-typed program refer to valid service declarations with the correct number of arguments and the right argument types. Subtyping enables us to define public server interfaces, where the actual server implementation defines private services, for example, to manage internal state.

Figure 4 shows the syntax of types, typing contexts, location typings as well as extensions to expressions and values. Similar to lambda calculi with references, alongside standard typing contexts Γ we also record the type of server instances at each allocated address via location typings Σ. A type T is either the top type Top, the unit type Unit, a type variable α, a service type ⟨ T ⟩ representing a service with arguments of type Ti, a server-template type srv x : T of a server with services xi of type Ti, the special server-template type srv ⊥ for inactive servers, a server-instance type inst T, a server-image type img T or a universal
type ∀α <: T1 . T2 . The syntax of typing contexts and the
extensions of expressions and values is standard: We have
type abstraction Λα<:T. e, and type application e [T ]. Finally,
we require type annotations in join patterns.
We define the typing judgment Γ | Σ ⊢ e : T by the rules depicted in Figure 5. The rules are mostly straightforward.
(T-VAR) looks up the type of a variable or this in the context.
(T-PAR) requires that all expressions of a parallel expression
have type Unit.
(T-S RV) is the most complicated type rule. Intuitively,
the type of a server template is the set of all services that
the server offers. ri represents rule number i of the server
template, where pi,j is pattern number j of rule number i.
Patterns pi,j provide services xi,j , which have service type
Si,j . The type T of the server template then consists of all
provided services with their types. To make sure the server
template is well-typed, we check that join patterns are linear
(service parameters are distinct), services in different patterns
have consistent types and that all free type variables are bound
(ftv(T ) ⊆ ftv(Γ)). Finally, we check the right-hand side ei
of each reaction rule, where we bind all service parameters
yi,j as well as this.
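As a concrete illustration (not spelled out in the original figures), applying (T-SRV) to the Fact template of Section 3.2 with integer arguments yields a server-template type along the lines of srv main : ⟨Int, ⟨Int⟩⟩, fac : ⟨Int⟩, acc : ⟨Int⟩, out : ⟨⟨Int⟩⟩, res : ⟨Int⟩: each service is typed by the argument types of its join patterns, and the continuation parameters are themselves services that accept the Int result.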
Next, we define three introduction rules for server image
types. The first is (T-0), which specifies that 0 is an image of
an inert server. The second rule (T-IMG) types server image values (srv r, m), where we require that srv r is a well-typed server template and each service request value in the buffer m is understood by this server template. That is, each value mk in m must correspond to a join pattern mentioned in r and the arguments must have the types which are annotated in the join pattern. The last introduction rule for server image types is (T-SNAP) for snapshots, which requires that the argument to snap is actually a server instance in order to yield a corresponding server image.
Rule (T-REPL) types replacements repl e1 e2 as Unit and requires that replacements preserve the interface of the server instance to be replaced. That is, the first argument must be an instance with interface type T and the second argument an image type for the same interface type.

There are two introduction rules for server instances. (T-SPWN) requires the argument of spwn to be a server image in order to yield a corresponding instance. Rule (T-INST) handles server addresses, which must be allocated in the location typing Σ to a server image.

(T-SVC) defines that a service reference is well-typed if the queried server provides a service of the required name. (T-REQ) requires that the target of a service request indeed is a service reference and that the request has the right number of arguments with the right types. The remaining four type rules are standard.
Figure 6 defines the subtyping relation Γ ⊢ T <: T. We employ width subtyping and depth subtyping for server-template types, that is, the server subtype can provide more services than the server supertype and the server subtype can provide specialized versions of services promised by the server supertype. A special case is rule (S-SRV⊥), which specifies that the type srv ⊥ for inert servers is a subtype of every other server template type. This ensures that 0 can be placed in every context requiring an image of type img srv T. The other subtyping rules are straightforward.
Preservation. We prove preservation for our type system using standard substitution lemmas [24]. The proofs appear in the appendix at the end of the paper.

Lemma 1 (Substitution Lemma). If Γ, x : T1 | Σ ⊢ e2 : T2 and Γ | Σ ⊢ e1 : T1 then Γ | Σ ⊢ e2{x := e1} : T2.

Lemma 2 (Type Substitution Preserves Subtyping). If Γ, α <: T′, Γ′ ⊢ S <: T and Γ ⊢ S′ <: T′ then Γ, Γ′σ ⊢ Sσ <: Tσ where σ = {α := S′}.

Lemma 3 (Type Substitution Lemma). If Γ, α <: S, Γ′ | Σ ⊢ e : T and Γ ⊢ S′ <: S then Γ, Γ′σ | Σσ ⊢ eσ : Tσ where σ = {α := S′}.

Lemma 4 (Location Typing Extension Preserves Types). If Γ | Σ ⊢ e : T and Σ ⊆ Σ′, then Γ | Σ′ ⊢ e : T.

Lemma 5 (Replacement). If D is a derivation with root Γ ⊢ E[e] : U, D′ a subderivation of D with root Γ′ ⊢ e : U′, and Γ′ ⊢ e′ : U′, then Γ ⊢ E[e′] : U.

Theorem 1 (Preservation). If Γ | Σ ⊢ e : T and Γ | Σ ⊢ µ and e | µ −→ e′ | µ′, then Γ | Σ′ ⊢ e′ : T for some Σ′, where Σ ⊆ Σ′ and Γ | Σ′ ⊢ µ′.

Note that the proof of the preservation theorem requires the match soundness property from Proposition 1, in order to verify that after the reduction step of rule (REACT) (Figure 3), consumption of service requests and instantiation of rule bodies preserves the type of the enclosing parallel composition.
Progress. Our type system does not satisfy progress. For
example, the following program is well-typed and not a value
but cannot reduce:
(spwn (srv foo⟨⟩ & bar⟨⟩ . par ε))]foo⟨⟩.
The service request foo resolves fine, but the server’s rule
cannot fire because it is lacking a request bar joined with foo.
Since our type system does not correlate service requests, it
cannot guarantee that join patterns must succeed eventually.
The integration of such a property is an interesting direction
of future work, but orthogonal to the main contributions of
this work.
Auxiliary Notation. We adopt the following conventions. We omit the type bound if it is Top, e.g., Λα <: Top. e becomes Λα. e. It is sometimes convenient to abbreviate longer type expressions. We introduce an abbreviation syntax for faux type constructors, e.g., SVC[α1, . . . , αn] := ⟨α1, ⟨α2, . . . , αn⟩⟩ defines a type SVC with n free type variables. Writing SVC[T1, . . . , Tn] denotes the type obtained by substituting the free occurrences of α1, . . . , αn with the provided types T1, . . . , Tn. Instead of repeating type annotations of service parameters in every join pattern, we declare the service types once at the beginning of server templates. For example,

srv (a⟨x : Int⟩ . foo⟨⟩) (a⟨x : Int⟩ & b⟨y : ⟨Bool⟩⟩ . bar⟨⟩)

becomes

srv a : ⟨Int⟩, b : ⟨⟨Bool⟩⟩
    (a⟨x⟩ . foo⟨⟩) (a⟨x⟩ & b⟨y⟩ . bar⟨⟩).

We define function types as

T1, . . . , Tn → T := ⟨T1, . . . , Tn, ⟨T⟩⟩

following our function encoding in Section 3.5. We define the union (srv x : T) ∪ (srv y : U) of two server-template types as the server template that contains all services of both types. The union is only defined if service names that occur in both types have identical type annotations. Unions are merely for syntactic convenience and do not represent real union types, which we leave for future work.
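For example, under the function-type abbreviation a signature such as work : TThunk[α] → α, used in the next section, stands for the service type ⟨TThunk[α], ⟨α⟩⟩: the service receives the thunk plus a continuation that accepts the α result.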
5. CPL at Work

We present two case studies to demonstrate the adequacy of CPL for solving the deployment issues identified in Section 2. The case studies will also be subsequently used to answer research questions about CPL's features.

Firstly, we developed a number of reusable server combinators, expressing deployment patterns found in cloud computing. Our examples focus on load balancing and fault tolerance, demonstrating that programmers can define their own cloud services as strongly-typed, composable modules and address nonfunctional requirements with CPL. Secondly, we use our language to model MapReduce [18] deployments for distributed batch computations. Finally, we apply our server combinators to MapReduce, effortlessly obtaining a type-safe composition of services.

5.1 Server Combinators

In Section 2, we identified extensibility issues with deployment languages, which prevent programmers from integrating their own service implementations. We show how to implement custom service functionality with server combinators in a type-safe and composable way. Our combinators are similar in spirit to higher-order functions in functional programming.

As the basis for our combinators, we introduce workers, i.e., servers providing computational resources. A worker accepts work packages as thunks. Concretely, a worker models a managed virtual machine in a cloud and thunks model application services.

Following our derived syntax for thunks (Section 3.5), given an expression e of type α, the type of thunk e is:

TThunk[α] := srv force : ⟨⟨α⟩⟩.

Service force accepts a continuation and calls it with the result of evaluating e. A worker accepts a thunk and executes it. At the type level, workers are values of a polymorphic type

TWorker[α] := srv init : ⟨⟩, work : TThunk[α] → α.

That is, to execute a thunk on a worker, clients request the work service which maps the thunk to a result value. In addition, we allow workers to provide initialization logic via a service init. Clients of a worker should request init before they issue work requests. Figure 7 defines a factory for creating basic workers, which have no initialization logic and execute thunks in their own instance scope. In the following, we define server combinators that enrich workers with more advanced features.

MkWorker[α] = srv {
  make: () → TWorker[α]
  make⟨k⟩ .
    let worker = spwn srv {
      init: ⟨⟩, work: TThunk[α] → α
      init⟨⟩ . par ε //stub, do nothing
      work⟨thnk, k⟩ . (spwn thnk)]force⟨k⟩
    } in k⟨worker⟩
}

Figure 7. Basic worker factory.

To model locality – a worker uses its own computational resources to execute thunks – the spawn of a thunk should in fact not yield a new remote server instance. As discussed in Section 3.4, to keep the core language minimal the operational semantics does not distinguish whether a server is local or remote to another server. However, in our concurrent implementation of CPL, we allow users to annotate spawns as being remote or local, which enables us to model worker-local execution of thunks.

The combinators follow a common design principle. (i) The combinator is a factory for server templates, which is a server instance with a single make service. The service accepts one or more server templates which implement the TWorker interface, among possibly other arguments. (ii) Our combinators produce proxy workers. That is, the resulting workers implement the worker interface but forward requests of the work service to an internal instance of the argument worker.

5.1.1 Load Balancing

A common feature of cloud computing is on-demand scalability of services by dynamically acquiring server instances and distributing load among them. CPL supports the encoding of on-demand scalability in the form of a server combinator that distributes load over multiple workers dynamically, given a user-defined decision algorithm.

Dynamically distributing load requires a means to approximate worker utilization. Our first combinator MkLoadAware enriches workers with the ability to answer getLoad requests, which sends the current number of pending
requests of the work service, our measure for utilization. Therefore, the corresponding type for load-aware workers is

TLAWorker[α] := TWorker[α] ∪ srv getLoad : ⟨⟨Int⟩⟩.

(The union ∪ on server types is for notational convenience at the meta level and not part of the type language.)

1   MkLoadAware[α, ω <: TWorker[α]] = srv {
2     make: ω → TLAWorker[α]
3
4     make⟨worker, k⟩ .
5       let lWorker = srv {
6         instnc: ⟨inst ω⟩, getLoad: () → Int, load: ⟨Int⟩
7         work: TThunk[α] → α, init: ⟨⟩
8         //... initialization logic omitted
9
10        //forwarding logic for work
11        work⟨thnk, k⟩ & instnc⟨w⟩ & load⟨n⟩ .
12          this]load⟨n+1⟩ ∥ this]instnc⟨w⟩
13          ∥ letk res = w]work⟨thnk⟩
14              in (k⟨res⟩ ∥ this]done⟨⟩)
15
16        //callback logic for fulfilled requests
17        done⟨⟩ & load⟨n⟩ . this]load⟨n-1⟩
18
19        getLoad⟨k⟩ & load⟨n⟩ . k⟨n⟩ ∥ this]load⟨n⟩
20      } in k⟨lWorker⟩
21  }

Figure 8. Combinator for producing load-aware workers.

The make service of the combinator accepts a server template worker implementing the TWorker interface and returns its enhanced version (bound to lWorker) back on the given continuation k. Lines 10-13 implement the core idea of forwarding and counting the pending requests. Continuation passing style enables us to intercept and hook on to the responses of worker after finishing work requests, which we express in Line 13 by the letk construct.

By building upon load-aware workers, we can define a polymorphic combinator MkBalanced that transparently introduces load balancing over a list of load-aware workers. The combinator is flexible in that it abstracts over the scheduling algorithm, which is an impure polymorphic function of type

Choose[ω] := List[inst ω] → Pair[inst ω, List[inst ω]].

Given a (Church-encoded) list of possible worker instances, such a function returns a (Church-encoded) pair consisting of the chosen worker and an updated list of workers, allowing for dynamic adjustment of the available worker pool (elastic load balancing).

Figure 9 shows the full definition of the MkBalanced combinator. Similarly to Figure 8, the combinator is a factory which produces a decorated worker, the only difference being that now there is a list of possible workers to forward requests to. Choosing a worker is just a matter of querying the scheduling algorithm choose (Lines 15-16). Note that this combinator is only applicable to server templates implementing the TLAWorker[α] interface (Line 1), since choose should be able to base its decision on the current load of the workers.

1   MkBalanced[α, ω <: TLAWorker[α]] = srv {
2     make: (List[ω], Choose[ω]) → TWorker[α]
3     make⟨workers, choose, k⟩ .
4       let lbWorker = srv {
5         insts: ⟨List[inst ω]⟩,
6         work: TThunk[α] → α, init: ⟨⟩
7
8         init⟨⟩ . //spawn and init all child workers
9           letk spawned = mapk⟨workers, λw:ω. spwn w⟩
10          in (this]insts⟨spawned⟩
11              ∥ foreach⟨spawned, λinst:inst ω. inst]init⟨⟩⟩)
12
13        //forward to the next child worker
14        work⟨thnk, k⟩ & insts⟨l⟩ .
15          letk (w, l′) = choose⟨l⟩
16          in (w]work⟨thnk, k⟩ ∥ this]insts⟨l′⟩)
17      } in k⟨lbWorker⟩
18  }

Figure 9. Combinator for producing load-balanced workers.

In summary, mapping a list of workers with MkLoadAware and passing the result to MkBalanced yields a composite, load-balancing worker. It is thus easy to define hierarchies of load balancers programmatically by repeated use of the two combinators. Continuation passing style and the type system enable flexible, type-safe compositions of workers.
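For illustration, a scheduling strategy of the Choose[ω] shape can be written directly against this interface; the Scala sketch below uses ordinary Scala lists and pairs as stand-ins for the Church-encoded CPL types, so it is only an analogy, not CPL code.

```scala
// Stand-in for Choose[ω]: pick a worker and return the updated pool.
type Choose[W] = List[W] => (W, List[W])

// Round-robin strategy: the chosen worker is rotated to the back of the pool.
def roundRobin[W]: Choose[W] = {
  case w :: rest => (w, rest :+ w)
  case Nil       => sys.error("empty worker pool")
}

// A least-loaded variant would instead query each worker's getLoad service
// (asynchronously, via continuations) before deciding.
```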
5.1.2 Failure Recovery

Cloud platforms monitor virtual machine instances to ensure their continual availability. We model failure recovery for crash/omission, permanent, fail-silent failures [27], where a failure makes a virtual machine unresponsive and is recovered by a restart.

Following the same design principles of the previous section, we can define a failure recovery combinator MkRecover that produces fault-tolerant workers. Its definition is in the appendix of this report.

Self-recovering workers follow a basic protocol. Each time a work request is processed, we store the given thunk and continuation in a list until the underlying worker confirms the request's completion. If the wait time exceeds a timeout, we replace the worker with a fresh new instance and replay all pending requests. Crucial to this combinator is the repl syntactic form, which swaps the running server instance at the worker's address: repl w (worker, ε). Resetting the worker's state amounts to setting the empty buffer ε in the server image value we pass to repl.

5.2 MapReduce

In this section, we illustrate how to implement the MapReduce [7] programming model with typed combinators in CPL, taking fault tolerance and load balancing into account.
MapReduce facilitates parallel data processing – cloud platforms are a desirable deployment target. The main point we want to make with this example is that CPL programs do not exhibit the unsafe composition, non-extensibility and staging problems we found in Section 2. Our design is inspired by Lämmel's formal presentation in Haskell [18].

1   MapReduce[κ1,ν1,κ2,ν2,ν3] = spwn srv {
2     make: (TMap[κ1,ν1,κ2,ν2],
3            TReduce[κ2,ν2,ν3],
4            TPartition[κ2],
5            ∀α.() → TWorker[α],
6            Int) → TMR[κ1,ν1,κ2,ν3]
7
8     make⟨Map, Reduce, Partition, R, mkWorker, k⟩ .
9       let sv = srv {
10        app⟨data, k0⟩ . let
11          mworker =
12            mapValues(data, λv. mkWorker[List[Pair[κ2, ν2]]])
13          rworker =
14            mkMap(map(range(1, R), λi. (i, mkWorker[ν3])))
15          grouper =
16            MkGrouper⟨Partition, R, Reduce,
17                      size(mworker), rworker, k0⟩
18        in foreach⟨data, λkey, val. {
19             let thnk = thunk Map⟨key, val⟩
20             in get(mworker, key)]work⟨thnk, grouper]group⟩}⟩
21      } in k⟨sv⟩
22  }

Figure 10. MapReduce factory.

1   MkGrouper[κ2,ν2,ν3] = spwn srv {
2     make⟨Partition: (κ2, Int) → Int, R: Int,
3          Reduce: TReduce[κ2,ν2,ν3],
4          R: Int,
5          rworker: Map[Int, TWorker[ν3]],
6          kr: ⟨Pair[κ2,ν3]⟩,
7          k: ⟨inst srv (group: ⟨List[Pair[κ2,ν2]]⟩)⟩⟩ .
8       let grpr = spwn srv {
9         //accumulated per-partition values for reduce
10        state: ⟨Map[Int, Map[κ2,ν2]]⟩,
11
12        //result callback invoked by mappers
13        group: ⟨List[Pair[κ2,ν2]]⟩,
14
15        //waiting state for phase one
16        await: ⟨Int⟩,
17
18        //trigger for phase two
19        done: ⟨Map[Int, Map[κ2,ν2]]⟩
20
21        //phase one: wait for mapper results
22        state⟨m⟩ & group⟨kvs⟩ & await⟨n⟩ .
23          letk m′ = foldk⟨kvs, m,
24                     λm″, kv. letk i = Partition⟨fst(kv), R⟩
25                              in updateGroup(m″, i, kv)⟩
26          in if (n > 0)
27             then (this]await⟨n - 1⟩ ∥ this]state⟨m′⟩)
28             else this]done⟨m′⟩
29
30        //phase two: distribute to reducers
31        done⟨m⟩ .
32          foreach⟨m, λi, data2.
33            foreach⟨data2, λkey, vals.
34              let thnk = thunk Reduce⟨key, vals⟩
35              in get(rworker, i)]work⟨thnk, kr⟩⟩⟩
36      } in grpr]state⟨emptyMap⟩ ∥ grpr]await⟨R⟩ ∥ k⟨grpr⟩
37  }

Figure 11. Grouper factory.

Figure 10 shows the main combinator for creating a MapReduce deployment, which is a first-class server. Following Lämmel's presentation, the combinator is generic in the key and value types. κi denotes type parameters for keys and νi denotes type parameters for values.

The combinator takes as parameters the Map function for decomposing input key-value pairs into a list of intermediate pairs, the Reduce function for transforming grouped intermediate values into a final result value, the Partition function which controls grouping and distribution among reducers and the number R of reducers to allocate (Line 8). Parameter mkWorker is a polymorphic factory of type ∀α.() → TWorker[α]. It produces worker instances for both the map and reduce stage.

Invoking make creates a new server template that on invocation of its app service deploys and executes a distributed MapReduce computation for a given set of (Church-encoded) key-value pairs data and returns the result on continuation k0 (Lines 9-10).

Firstly, workers for mapping and reducing are allocated and stored in the local map data structures mworker and rworker, where we assume appropriate CPS-encoded functions that create and transform maps and sequences (Lines 11-14). Each key in the input data is assigned a new mapping worker and each partition from 1 to R is assigned a reducing worker. Additionally, a component for grouping and distribution among reducers (grouper) is allocated.

Secondly, the foreach invocation (Lines 18-20) distributes key-value pairs in parallel among mapping workers. For each pair, the corresponding worker should invoke the Map function, which we express as a thunk (Line 19, cf. Section 3.5). All resulting intermediate values are forwarded to the grouper's group service.

The grouper (Figure 11) consolidates multiple intermediate values with the same key and forwards them to the reducer workers. It operates in phases: (1) wait for all mapper workers to finish, meanwhile grouping incoming results (Lines 22-28) and (2) assign grouped results to reducer workers with the Partition function and distribute as thunks, which invoke the Reduce function (Lines 31-35). All reduction results are forwarded to the continuation kr. For brevity we omit the final merge of the results.
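As a point of comparison, the parameter types TMap, TReduce and TPartition have natural analogues as plain function types in a host language. The following Scala sketch shows such analogues together with a word-count instantiation; the type aliases and example values are illustrative assumptions, not the paper's formal definitions.

```scala
// Plausible Scala analogues of the MapReduce parameter types.
type TMap[K1, V1, K2, V2] = (K1, V1) => List[(K2, V2)]
type TReduce[K2, V2, V3]  = (K2, List[V2]) => V3
type TPartition[K2]       = (K2, Int) => Int   // key and reducer count R to partition index

// Example instantiation: word count.
val wordMap: TMap[String, String, String, Int] =
  (_, line) => line.split("\\s+").toList.map(w => (w, 1))

val wordReduce: TReduce[String, Int, Int] =
  (_, counts) => counts.sum

val hashPartition: TPartition[String] =
  (key, r) => ((key.hashCode % r) + r) % r + 1   // partitions numbered 1..R as in Figure 10
```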
Thanks to our service combinators, we can easily address non-functional requirements and non-intrusively add new features. The choice of the mkWorker parameter determines which variants of MapReduce deployments we obtain: The default variant just employs workers without advanced features, i.e.,

let make = Λα.(spwn MkWorker[α])]make
in MapReduce[κ1,ν1,κ2,ν2,ν3]]make⟨f, r, p, R, make, k⟩

for appropriate choices of the other MapReduce parameters.

In order to obtain a variant where worker nodes are elastically load-balanced, one replaces make with makeLB below, which composes the combinators from the previous section:

let choose = ... //load balancing algorithm
    makeLB = Λα.λk. {
      letk w  = (spwn MkWorker[α])]make⟨⟩
           lw = (spwn MkLoadAware[α, TWorker[α]])]make⟨w⟩
      in (spwn MkBalanced[α, TLAWorker[α]])
           ]make⟨mkList(lw), choose, k⟩
    }
in makeLB

A similar composition with the fault tolerance combinator yields fault tolerant MapReduce, where crashed mapper and reducer workers are automatically recovered.
5.3 Discussion

We discuss how CPL performed in the case studies answering the following research questions:

Q1 (Safety): Does CPL improve safety of cloud deployments?

Q2 (Extensibility): Does CPL enable custom and extensible service implementations?

Q3 (Dynamic self-adjustment): Does CPL improve flexibility in dynamic reconfiguration of deployments?

Safety. CPL is a strongly-typed language. As such, it provides internal safety (Section 2). The issue of cross-language safety (Section 2) does not occur in CPL programs, because configuration and deployment code are part of the same application. In addition, the interconnection of components is well-typed. For example, in the MapReduce case study, it is guaranteed that worker invocations cannot go wrong due to wrongly typed arguments. It is also guaranteed that workers yield values of the required types. As a result, all mapper and reducer workers are guaranteed to be compatible with the grouper component. In a traditional deployment program, interconnecting components amounts to referring to each other's attributes, but due to the plain syntactic expansion, there is no guarantee of compatibility.

Extensibility. The possibility to define combinators in CPL supports extensible, custom service implementations. At the type system level, bounded polymorphism and subtyping ensure that service implementations implement the required interfaces. The load balancing example enables nested load balancing trees, since the combinator implements the well-known Composite design pattern from object-oriented programming. At the operational level, continuation passing style enables flexible composition of components, e.g., for stacking multiple features.

Dynamic Self-Adjustment. In the case studies, we encountered the need of dynamically adapting the deployment configuration of an application, which is also known as "elasticity". For example, the load balancer combinator can easily support dynamic growth or shrinkage of the list of available workers: New workers need to be dynamically deployed in new VMs (growth) and certain VMs must be halted and removed from the cloud configuration when the respective workers are not needed (shrinkage). Dynamic reconfiguration is not directly expressible in configuration languages, due to the two-phase staging. For example, configurations can refer to external elastic load balancer services provided by the cloud platform, but such services only provide a fixed set of balancing strategies, which may not suit the application. The load balancer service can be regarded as a black box, which happens to implement elasticity features. Also, a configuration language can request load balancing services only for the fixed set of machines which is specified in a configuration, but this is not possible if the number of machines is unknown before execution, as in the MapReduce case study. In contrast, CPL users can specify their own load balancing strategies and apply them programmatically.

5.4 Interfacing with Cloud Platforms

A practical implementation of CPL requires (1) a mapping of its concepts to real-world cloud platforms and (2) integration with existing cloud APIs and middleware services written in other languages. In the following, we sketch a viable solution; we leave a detailed implementation for future work.

For (1), CPL programs can be compiled to bytecode and be interpreted by a distributed run time hosted on multiple virtual machines.

Concerning (2), we envision our structural server types as the interface of CPL's run time with the external world, i.e., pre-existing cloud services and artifacts written in other languages. CPL developers must write wrapper libraries to implement typed language bindings. Indeed, CPL's first-class servers resemble (remote) objects, where services are their methods and requests are asynchronous method invocations (returning results on continuations). CPL implementations hence can learn from work on language bindings in existing object-oriented language run times, e.g., the Java ecosystem. To ensure type safety, dynamic type checking is necessary at the boundary between our run time and components written in dynamically or weakly typed languages.

Note that the representation of external services and artifacts as servers requires immutable addresses. That is, the run time should forbid snap and repl on such objects,
because it is in general impossible to reify a state snapshot of
the external world.
For the primitives spwn, snap, and repl, the run time
must be able to orchestrate the virtualization facilities of the
cloud provider via APIs. Following our annotation-based approach to placement (Section 3.4), these primitives either
map to local objects or to fresh virtual machines. Thus, invoking spwn v may create a new virtual machine hosting the
CPL run time, which allocates and runs v. For local servers,
v is executed by the run time that invoked spwn. One could
extend the primitives to allow greater control of infrastructure-level concerns, such as machine configuration and geographic
distribution. From these requirements and CPL’s design targeting extensible services and distributed applications, it follows that CPL is cross-cutting the three abstraction layers
in contemporary cloud platforms: Infrastructure as a Service
(IaaS), Platform as a Service (PaaS) and Software as a Service
(SaaS) [30].
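To make the envisioned wrapper libraries concrete, the following Scala sketch shows how an external cloud API could be exposed behind a server-like, continuation-passing interface. The ExternalVmApi trait and its operations are illustrative assumptions, not an existing provider API.

```scala
// An assumed external cloud API, accessed synchronously.
trait ExternalVmApi {
  def createVm(image: String): String     // returns a VM identifier
  def terminateVm(id: String): Boolean
}

// Typed binding that presents the API as a "server": each service takes its
// arguments plus a continuation, mirroring asynchronous requests e]x⟨v, k⟩.
final class VmServerBinding(api: ExternalVmApi) {
  def create(image: String)(k: String => Unit): Unit  = k(api.createVm(image))
  def terminate(id: String)(k: Boolean => Unit): Unit = k(api.terminateVm(id))
  // snap and repl are deliberately not offered: external objects have
  // immutable addresses, as required above.
}
```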
6. Related Work
Programming Models for Cloud Computing. The popularity of cloud computing infrastructures [30] has encouraged
the investigation of programming models that can benefit
from on-demand, scalable computational power and feature
location transparency. Examples of these languages, often
employed in the context of big data analysis, are Dryad [16],
PigLatin [23] and FlumeJava [6]. These languages are motivated by refinements and generalizations of the original
MapReduce [7] model.
Unlike CPL, these models specifically target only certain
kinds of cloud computations, i.e., massive parallel computations and derivations thereof. They deliberately restrict the
programming model to enable automated deployment, and
do not address deployment programmability in the same language setting as CPL does. In this paper, we showed that the
server abstraction of CPL can perfectly well model MapReduce computations in a highly parametric way, but it covers
at the same time a more generic application programming
model as well as deployment programmability. Especially,
due to its join-based synchronization, CPL is well suited to
serve as a core language for modeling cloud-managed stream
processing.
Some researchers have investigated by means of formal
methods specific computational models or specific aspects of
cloud computing. The foundations in functional programming
of MapReduce have been studied by Lämmel [18]. In CPL
it is possible to encode higher-order functions and hence we
can model MapReduce’s functionality running on a cloud
computing platform. Jarraya et al. [17] extend the Ambient
calculus to account for firewall rules and permissions to verify
security properties of cloud platforms. To the best of our
knowledge, no attempts have been done in formalizing cloud
infrastructures in their generality.
Formal Calculi for Concurrent and Distributed Services. Milner's CCS [20], the π calculus [21] and Hoare's CSP [15] have been studied as the foundation of parallel execution and process synchronization.

Fournet's and Gonthier's Join Calculus [12] introduced join patterns for expressing the interaction among a set of processes that communicate by asynchronous message passing over communication channels. The model of communication channels in this calculus more adequately reflects communication primitives in real world computing systems, which allows for a simpler implementation. In contrast, the notion of channel in the previously mentioned process calculi would require expensive global consensus protocols in implementations.

The design of CPL borrows join patterns from the Join Calculus. Channels in the Join Calculus are similar to services in CPL, but the Join Calculus does not have first-class and higher-order servers with qualified names. Also, there is no support for deployment abstractions.

The Ambient calculus [5] has been developed by Cardelli and Gordon to model concurrent systems that include both mobile devices and mobile computation. Ambients are a notion of named, bounded places where computations occur and can be moved as a whole to other places. Nested ambients model administrative domains and capabilities control access to ambients. CPL, in contrast, is location-transparent, which is faithful to the abstraction of a singular entity offered by cloud applications.

Languages for Parallel Execution and Process Synchronization. Several languages have been successfully developed/extended to support features studied in formal calculi.
JoCaml is an ML-like implementation of Join Calculus
which adopts state machines to efficiently support join patterns [11]. Polyphonic C# [1] extends C# with join-like concurrency abstractions for asynchronous programming that
are compiler-checked and optimized. Scala Joins [14] uses
Scala’s extensible pattern matching to express joins. The Join
Concurrency Library [26] is a more portable implementation of Polyphonic C# features by using C# 2.0 generics.
JEScala [29] combines concurrency abstraction in the style
of the Join Calculus with implicit invocation.
Funnel [22] uses the Join Calculus as its foundations and
supports object-oriented programming with classes and inheritance. Finally, JErlang [25] extends the Erlang actor-based
concurrency model. Channels are messages exchanged by
actors, and received patterns are extended to express matching of multiple subsequent messages. Turon and Russo [28]
propose an efficient, lock-free implementation of the join
matching algorithm demonstrating that declarative specifications with joins can scale to complex coordination problems
with good performance – even outperforming specialized algorithms. Fournet et al. [13] provide an implementation of the
Ambient calculus. The implementation is obtained through a
formally-proved translation to JoCaml.
CPL shares some features with these languages, basically
those built on the Join Calculus. In principle, the discussion
about the relation of CPL to Join Calculus applies to these
languages as well, since the Join Calculus is their shared
foundation. Implementations of CPL can benefit from the
techniques developed in this class of works, especially [26].
7. Conclusions and Future Work

We presented CPL, a statically typed core language for defining asynchronous cloud services and their deployment on cloud platforms. CPL improves over the state of the art for cloud deployment DSLs: It enables (1) statically safe service composition, (2) custom implementations of cloud services that are composable and extensible and (3) dynamic changes to a deployed application. In future work, we will implement and expand core CPL to a practical programming language for cloud applications and deployment.

Acknowledgments

This work has been supported by the European Research Council, grant No. 321217.

References

[1] N. Benton, L. Cardelli, and C. Fournet. Modern concurrency abstractions for C#. ACM TOPLAS, 26(5):769–804, Sept. 2004.
[2] N. Bobroff, A. Kochut, and K. A. Beaty. Dynamic Placement of Virtual Machines for Managing SLA Violations. In Integrated Network Management, pages 119–128. IEEE, 2007.
[3] O. Bračevac, S. Erdweg, G. Salvaneschi, and M. Mezini. CPL: A Core Language for Cloud Computing. In MODULARITY '16. ACM, 2016.
[4] M. Bravenboer, E. Dolstra, and E. Visser. Preventing Injection Attacks with Syntax Embeddings. In GPCE '07, pages 3–12. ACM, 2007.
[5] L. Cardelli and A. D. Gordon. Mobile ambients. Theoretical Computer Science, 240(1):177–213, 2000.
[6] C. Chambers, A. Raniwala, F. Perry, S. Adams, R. R. Henry, R. Bradshaw, and N. Weizenbaum. FlumeJava: Easy, efficient data-parallel pipelines. In PLDI '10, pages 363–375, 2010.
[7] J. Dean and S. Ghemawat. MapReduce: Simplified Data Processing on Large Clusters. Commun. ACM, 51(1):107–113, Jan. 2008.
[8] M. Felleisen and R. Hieb. The Revised Report on the Syntactic Theories of Sequential Control and State. Theoretical Computer Science, 103(2):235–271, 1992.
[9] M. Felleisen, R. B. Findler, and M. Flatt. Semantics Engineering with PLT Redex. MIT Press, 2009.
[10] R. C. Fernandez, M. Migliavacca, E. Kalyvianaki, and P. Pietzuch. Integrating Scale Out and Fault Tolerance in Stream Processing using Operator State Management. In SIGMOD '13, pages 725–736. ACM, June 2013.
[11] F. L. Fessant and L. Maranget. Compiling Join-Patterns. Electronic Notes in Theoretical Computer Science, 16(3):205–224, 1998. HLCL '98.
[12] C. Fournet and G. Gonthier. The reflexive CHAM and the join-calculus. In POPL '96, pages 372–385. ACM, 1996.
[13] C. Fournet, J.-J. Lévy, and A. Schmitt. An Asynchronous, Distributed Implementation of Mobile Ambients. In TCS '00, pages 348–364. Springer-Verlag, 2000.
[14] P. Haller and T. Van Cutsem. Implementing Joins Using Extensible Pattern Matching. In COORDINATION '08, volume 5052 of LNCS, pages 135–152. Springer, 2008.
[15] C. A. R. Hoare. Communicating Sequential Processes. Commun. ACM, 21(8):666–677, Aug. 1978.
[16] M. Isard and Y. Yu. Distributed Data-parallel Computing Using a High-level Programming Language. In SIGMOD '09, pages 987–994. ACM, 2009.
[17] Y. Jarraya, A. Eghtesadi, M. Debbabi, Y. Zhang, and M. Pourzandi. Cloud calculus: Security verification in elastic cloud computing platform. In CTS '12, pages 447–454, May 2012.
[18] R. Lämmel. Google's MapReduce programming model — Revisited. Science of Computer Programming, 70(1):1–30, 2008.
[19] X. Meng, V. Pappas, and L. Zhang. Improving the Scalability of Data Center Networks with Traffic-aware Virtual Machine Placement. In INFOCOM, pages 1154–1162. IEEE, 2010.
[20] R. Milner. A Calculus of Communicating Systems. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1982.
[21] R. Milner, J. Parrow, and D. Walker. A Calculus of Mobile Processes, I. Information and Computation, 100(1):1–40, 1992.
[22] M. Odersky. An Introduction to Functional Nets. In Applied Semantics, volume 2395 of LNCS, pages 333–377. Springer, 2002.
[23] C. Olston, B. Reed, U. Srivastava, R. Kumar, and A. Tomkins. Pig Latin: A Not-so-foreign Language for Data Processing. In SIGMOD '08, pages 1099–1110. ACM, 2008.
[24] B. C. Pierce. Types and Programming Languages. MIT Press, 2002.
[25] H. Plociniczak and S. Eisenbach. JErlang: Erlang with joins. In COORDINATION '10, volume 6116 of LNCS, pages 61–75. Springer, 2010.
[26] C. Russo. The joins concurrency library. In PADL '07, volume 4354 of LNCS, pages 260–274. Springer, 2007.
[27] A. S. Tanenbaum and M. van Steen. Distributed Systems: Principles and Paradigms (2nd Edition). Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 2006.
[28] A. J. Turon and C. V. Russo. Scalable Join Patterns. In OOPSLA '11, pages 575–594. ACM, 2011.
[29] J. M. Van Ham, G. Salvaneschi, M. Mezini, and J. Noyé. JEScala: Modular Coordination with Declarative Events and Joins. In MODULARITY '14, pages 205–216. ACM, 2014.
[30] L. M. Vaquero, L. Rodero-Merino, J. Caceres, and M. Lindner. A Break in the Clouds: Towards a Cloud Definition. SIGCOMM Comput. Commun. Rev., 39(1):50–55, Dec. 2008.
[31] A. Wright and M. Felleisen. A Syntactic Approach to Type Soundness. Information and Computation, 115(1):38–94, 1994. ISSN 0890-5401.
A. Example Reduction
To illustrate the small-step operational semantics of the CPL,
we investigate the reduction trace of a service request for
computing the factorial of 3 using the server template Fact
defined above. Here, we assume k0 is some continuation
interested in the result of the computation. We write ∅ to
denote the empty routing table and (Factn , m) to denote a
routing table entry at address n for a server instance of the
template Fact with buffer m. Further we write n/ri to refer to
rule number i of the server at n. For brevity, reductions of the
rule (PAR), as well as reductions of if-then-else and arithmetic
expressions are omitted. Multiple subsequent reduction steps
of a rule (R) are denoted by (R)∗ .
(spwn Fact)]main⟨3, k0⟩                          | ∅
  —SPWN→            0]main⟨3, k0⟩                | (Fact0, ε)
  —RCV→             par ε                        | (Fact0, main⟨3, k0⟩)
  —REACT 0/r1→      0]fac⟨3⟩ ∥ 0]acc⟨1⟩ ∥ 0]out⟨k0⟩ | (Fact0, ε)
  —RCV*→            par ε                        | (Fact0, fac⟨3⟩ acc⟨1⟩ out⟨k0⟩)
  —RCV*, REACT 0/r2→ 0]fac⟨2⟩ ∥ 0]acc⟨3⟩          | (Fact0, out⟨k0⟩)
  —RCV*, REACT 0/r2→ 0]fac⟨1⟩ ∥ 0]acc⟨6⟩          | (Fact0, out⟨k0⟩)
  —RCV*, REACT 0/r2→ 0]res⟨6⟩                     | (Fact0, out⟨k0⟩)
  —RCV, REACT 0/r3→  k0⟨6⟩                        | (Fact0, ε).
The reduction starts with the instantiation of the server
template Fact. Next, we serve service main on the server
instance Fact0 , which yields 3 service requests to Fact0 . We
can fire the second rule of Fact0 three times by matching
services fac and acc. In fact, the second rule of Fact0 is the
only rule that can fire. The first two times, the argument of
fac is larger than 1, so we execute the else branch, which
yields updated values for fac and acc. The third time, the
argument of fac is 1, so we execute the then branch, which
yields a service request res with the final result. Finally, we
execute the third rule of Fact0 , which matches services res
and out to forward the final result to the continuation k0 .
We can spawn multiple instances of Fact and compute
factorials in parallel:
(spwn Fact)]main⟨3, k0⟩ ∥ (spwn Fact)]main⟨5, k1⟩
Due to the nondeterminism of our semantics, some of the
possible reduction traces interleave computations of both
factorials. However, since requests always contain the target
address and rule (REACT) operates solely on a server instance’s
buffer, there cannot be any interference between two different
instances of Fact. This way, e.g., each instance of Fact has
its own accumulator.
B. Case Studies

In the following, we give the full definition of the server combinators and actor supervision case studies, which we omitted in section 5 due to space limitations.

B.1 Server Combinators for Cloud Computing

B.1.1 Failure Recovery

The combinator for failure recovery:

    MkRecover[α, ω <: TWorker[α]] = spwn srv {
      make: (ω, Int) → TWorker[α]

      make⟨worker, timeout, k⟩ .
        let selfrecovering = srv {
          init: ⟨⟩,
          work: TThunk[α] → α,
          instnc: ⟨inst ω⟩,
          pending: ⟨List[(Int, Int, TThunk[α], ⟨α⟩)]⟩,
          done: ⟨Int⟩

          //initialization
          init⟨⟩ . let w = (spwn worker) in
            w]init⟨⟩ ∥ this]instnc⟨w⟩ ∥ this]pending⟨Nil⟩

          //store and forward work requests to instnc
          work⟨thnk, k⟩ & instnc⟨w⟩ & pending⟨xs⟩ .
            let ID = freshID() in
            let now = localTime() in
              this]pending⟨(ID, now, thnk, k) :: xs⟩
              ∥ this]instnc⟨w⟩
              ∥ (letk r = w]work⟨thnk⟩
                 in (k⟨r⟩ ∥ this]done⟨ID⟩))

          //work completion by instnc
          done⟨ID⟩ & pending⟨xs⟩ .
            filterk(xs, λp. fst(p) ≠ ID, this]pending)

          //check for timeouts, restart instnc if needed
          pending⟨xs⟩ & instnc⟨w⟩ .
            let now = localTime() in
            letk late = exists(xs, λp. now - snd(p) > timeout) in
              if (late) then
                (repl w (worker, ε); (w]init⟨⟩ ∥ this]instnc⟨w⟩))
                ∥ foreach⟨xs, λp. this]work⟨thrd(p), frth(p)⟩⟩
              else
                (this]pending⟨xs⟩ ∥ this]instnc⟨w⟩)
        } in k⟨selfrecovering⟩
    }

Service make accepts a stoppable worker and an integer timeout parameter. The first rule of the self-recovering worker initializes the list of pending requests to the empty list Nil. The second rule accepts work requests. It generates a fresh ID for the request and adds the request to the list of pending requests together with the ID, the local timestamp and the continuation (we assume functions freshID and localTime). The rule forwards the work request to the underlying worker and installs a continuation that notifies the proxy that the request completed using service done. The third rule accepts this request and removes the corresponding request from the list of pending requests. Finally, the last rule checks if any of the pending requests has a timeout. If this happens, the rule replaces the old worker instance by a new one via repl, effectively resetting the state of the worker and re-initializing it. In parallel, all pending requests are replayed.

C. Type System Proofs

Definition 1. The typed language extends evaluation contexts with type applications:
E ::= . . . | E[T].
The reduction relation is extended by an additional contraction rule:
(Λα <: U. e)[T] −→ e{α := T} .   (TAppAbs)

Definition 2. We write Σ ⊆ Σ' if for all (i : T) ∈ Σ, (i : T) ∈ Σ' holds.

Definition 3. A routing table µ is well typed with respect to Γ, Σ (written Γ | Σ ⊢ µ), if dom(µ) = dom(Σ) and for all i ∈ dom(µ), Γ | Σ ⊢ µ(i) : Σ(i) holds.

Note. In the proofs we use the standard variable convention. That is, bound variables are assumed to be distinct and can be renamed if necessary so that no variable capture can occur in substitutions.

Lemma 6 (Substitution Lemma). If Γ, x : T1 | Σ ⊢ e2 : T2 and Γ | Σ ⊢ e1 : T1 then Γ | Σ ⊢ e2{x := e1} : T2.

Proof. By induction on the typing derivation D of Γ, x : T1 | Σ ⊢ e2 : T2. In each case we assume Γ | Σ ⊢ e1 : T1.

Basis:
(T-VAR): Therefore e2 = y for y ∈ N ∪ {this} and (Γ, x : T1)(y) = T2. Case distinction:
x = y: Therefore e2 = x, T2 = T1 and e2{x := e1} = e1. From this and Γ | Σ ⊢ e1 : T1 we obtain a derivation of Γ | Σ ⊢ e2{x := e1} : T2.
x ≠ y: Therefore e2{x := e1} = y{x := e1} = y = e2, hence Γ | Σ ⊢ e2{x := e1} : T2, since the assumption x : T1 can be dropped.
(T-INST): Immediate, since the context Γ is not considered in the premise.
(T-0): Immediate.

Inductive step:
Induction hypothesis (IH): The property holds for all proper subderivations of the derivation D of Γ, x : T1 | Σ ⊢ e2 : T2.
(T-PAR): From the conclusion of the rule it holds that e2 = par e2', T2 = Unit and from its premises Γ, x : T1 | Σ ⊢ e2,i' : T2 for each e2,i' in the sequence e2'. Applying (IH) to each of the derivations in the premise yields Γ | Σ ⊢ e2,i'{x := e1} : T2 for each i. Together with rule (T-PAR) we obtain a derivation for Γ | Σ ⊢ par e2'{x := e1} : T2, which is also a derivation for Γ | Σ ⊢ e2{x := e1} : T2 as desired, since par e2'{x := e1} = (par e2'){x := e1} = e2{x := e1}.
(T-TAPP): Therefore e2 = e2'[T2'], T2 = T2''{α := T2'}, ftv(T2') ⊆ ftv(Γ), Γ, x : T1 ⊢ T2' <: T2''' and Γ, x : T1 | Σ ⊢ e2' : ∀α <: T2'''. T2''. Applying (IH) yields a derivation of Γ | Σ ⊢ e2'{x := e1} : ∀α <: T2'''. T2''. Note that Γ, x : T1 ⊢ T2' <: T2''' implies Γ ⊢ T2' <: T2''', since assumptions on variables do not play a role in subtyping rules. With rule (T-TAPP) we obtain a derivation of Γ | Σ ⊢ (e2'{x := e1})[T2'] : T2, which is also a derivation of Γ | Σ ⊢ e2{x := e1} : T2 as desired.
(T-SUB): By premise of the rule, Γ, x : T1 | Σ ⊢ e2 : T2' and Γ, x : T1 ⊢ T2' <: T2. The latter implies Γ ⊢ T2' <: T2, since assumptions on variables are not required in subtyping rules. Apply (IH) to obtain a derivation of Γ | Σ ⊢ e2{x := e1} : T2'. Together with the previously established facts and (T-SUB) we obtain a derivation of Γ | Σ ⊢ e2{x := e1} : T2 as desired.
(T-SRV): It holds that e2 = srv r, ri = pi . ei', T2 = srv xi,j : Si,j, ftv(T2) ⊆ ftv(Γ, x : T1), pi,j = xi,j⟨yi,j : Ti,j⟩, Si,j = ⟨Ti,j⟩ and Γ, x : T1, yi,j : Ti,j, this : T | Σ ⊢ ei' : Unit for each ri in r. Note that ftv(T2) ⊆ ftv(Γ), since ftv(Γ, x : T1) = ftv(Γ) by definition of ftv. Case distinction:
x = this: From the derivations of Γ, x : T1, yi,j : Ti,j, this : T | Σ ⊢ ei' : Unit we obtain derivations for Γ, yi,j : Ti,j, this : T | Σ ⊢ ei' : Unit since the assumption this : T shadows x : T1. Since server templates bind this, it follows that e2{x := e1} = e2. Together with the other assumptions from the original derivation we obtain a derivation for Γ | Σ ⊢ e2{x := e1} : T2 with rule (T-SRV) as desired.
x ≠ this: For each ri in r it holds that x is distinct from yi,j by the variable convention. From the derivation of Γ, x : T1, yi,j : Ti,j, this : T | Σ ⊢ ei' : Unit we obtain by permutation a derivation of Γ, yi,j : Ti,j, this : T, x : T1 | Σ ⊢ ei' : Unit. With (IH) we obtain a derivation for Γ, yi,j : Ti,j, this : T | Σ ⊢ ei'{x := e1} : Unit. From these intermediate derivations and the assumptions from the original (T-SRV) derivation we obtain by (T-SRV) a derivation of Γ | Σ ⊢ srv (pi . ei'{x := e1}) : T2, which is also a derivation of Γ | Σ ⊢ e2{x := e1} : T2 as desired.
(T-SPWN): Therefore e2 = spwn e2', T2 = inst T2', and Γ, x : T1 | Σ ⊢ e2' : img T2'. Applying (IH) yields Γ | Σ ⊢ e2'{x := e1} : img T2'. Together with rule (T-SPWN) we obtain a derivation of Γ | Σ ⊢ spwn (e2'{x := e1}) : T2, which is also a derivation of Γ | Σ ⊢ e2{x := e1} : T2 as desired, since spwn (e2'{x := e1}) = (spwn e2'){x := e1} = e2{x := e1}.
(T-SVC), (T-REQ), (T-IMG), (T-SNAP), (T-REPL): Straightforward application of (IH) and substitution.
(T-TABS): Therefore e2 = Λα <: T2'. e2', T2 = ∀α <: T2'. T2'' and Γ, x : T1, α <: T2' | Σ ⊢ e2' : T2''. By the variable convention, it holds that α is not free in T1. Therefore, by permutation we obtain a derivation of Γ, α <: T2', x : T1 | Σ ⊢ e2' : T2''. Together with (IH) we obtain a derivation for Γ, α <: T2' | Σ ⊢ e2'{x := e1} : T2''. Extending this derivation with rule (T-TABS), we obtain a derivation of Γ | Σ ⊢ Λα <: T2'. (e2'{x := e1}) : T2, which is also a derivation of Γ | Σ ⊢ e2{x := e1} : T2 as desired.
Lemma 7 (Type Substitution Preserves Subtyping). If Γ, α <: T', Γ' ⊢ S <: T and Γ ⊢ S' <: T' then Γ, Γ'σ ⊢ Sσ <: Tσ where σ = {α := S'}.

Proof. By induction on the typing derivation D of Γ, α <: T', Γ' ⊢ S <: T. In each case we assume Γ ⊢ S' <: T' and σ = {α := S'}.

Basis:
(S-TOP): Therefore T = Top. By rule (S-TOP) it holds that Γ, Γ'σ ⊢ Sσ <: Top, i.e., Γ, Γ'σ ⊢ Sσ <: Tσ as desired.
(S-REFL): Therefore S = T. Γ, Γ'σ ⊢ Sσ <: Tσ holds by rule (S-REFL).
(S-TVAR): Therefore S = α', α' <: T ∈ (Γ, α <: T', Γ'). Case distinction:
α' ≠ α: Immediate by rule (S-TVAR).
α' = α: Therefore S = α, Sσ = T', and T' = T = Tσ. Apply rule (S-REFL).
(S-SRV⊥): Immediate by rule (S-SRV⊥).

Inductive step:
Induction hypothesis (IH): The property holds for all proper subderivations of the derivation Γ, α <: T', Γ' ⊢ S <: T.
(S-SRV): Therefore S = srv x : S2, T = srv y : T2 and for each j there is i such that yj = xi and Γ, α <: T', Γ' ⊢ S2,i <: T2,j. Applying (IH) yields Γ, Γ'σ ⊢ S2,iσ <: T2,jσ. Together with rule (S-SRV) we obtain a derivation of Γ, Γ'σ ⊢ srv x : S2σ <: srv y : T2σ, i.e., Γ, Γ'σ ⊢ Sσ <: Tσ as desired.
(S-INST), (S-IMG), (S-SVC), (S-TRANS): Straightforward application of (IH).
(S-UNIV): Therefore S = ∀α1 <: U. S2, T = ∀α2 <: U. T2 and Γ, α <: T', Γ', α1 <: U ⊢ S2 <: T2{α2 := α1}. Together with (IH) we obtain a derivation of Γ, Γ'σ, α1 <: Uσ ⊢ S2σ <: T2{α2 := α1}σ, i.e., Γ, Γ'σ, α1 <: Uσ ⊢ S2σ <: T2σ{α2 := α1} (since by our variable convention, we may assume α2 ≠ α and α1 ≠ α). Together with rule (S-UNIV) we obtain a derivation of Γ, Γ'σ ⊢ ∀α1 <: Uσ. S2σ <: ∀α2 <: Uσ. T2σ, i.e., Γ, Γ'σ ⊢ Sσ <: Tσ as desired.

Lemma 8 (Type Substitution Lemma). If Γ, α <: S, Γ' | Σ ⊢ e : T and Γ ⊢ S' <: S then Γ, Γ'σ | Σσ ⊢ eσ : Tσ where σ = {α := S'}.

Proof. By induction on the typing derivation D of Γ, α <: S, Γ' | Σ ⊢ e : T. In each case we assume Γ ⊢ S' <: S and σ = {α := S'}.

Basis:
(T-VAR): Therefore e = x for x ∈ N ∪ {this} and (Γ, α <: S, Γ')(x) = T. By the variable convention, it holds that α is not bound in Γ, therefore (Γ, Γ'σ)(x) = Tσ, which holds by a structural induction on Γ'. Together with (T-VAR) we obtain Γ, Γ'σ | Σσ ⊢ eσ : Tσ as desired.
(T-INST): Immediate, since Γ is not considered in the premises.
(T-0): Immediate by rule (T-0).

Inductive step:
Induction hypothesis (IH): The property holds for all proper subderivations of the derivation D of Γ, α <: S, Γ' | Σ ⊢ e : T.
(T-PAR): From the conclusion of the rule it holds that e = par e2, T = Unit and from its premises Γ, α <: S, Γ' | Σ ⊢ e2,i : T for each e2,i in the sequence e2. Applying (IH) yields Γ, Γ'σ | Σσ ⊢ e2,iσ : Tσ for each i. Together with Tσ = Unitσ = Unit = T, by (T-PAR) we obtain a derivation of Γ, Γ'σ | Σσ ⊢ par e2σ : T, which is also a derivation of Γ, Γ'σ | Σσ ⊢ eσ : Tσ as desired.
(T-SRV): Therefore e = srv r, ri = pi . ei, T = srv xi,j : Si,j, ftv(T) ⊆ ftv(Γ, α <: S, Γ'), pi,j = xi,j⟨yi,j : Ti,j⟩, Si,j = ⟨Ti,j⟩ and Γ, α <: S, Γ', yi,j : Ti,j, this : T | Σ ⊢ ei : Unit for each ri in r. Applying (IH) yields Γ, Γ'σ, yi,j : Ti,jσ, this : Tσ | Σσ ⊢ eiσ : Unit for each i in ri. Note that ftv(T) ⊆ ftv(Γ, α <: S, Γ'), Γ ⊢ S' <: S and σ = {α := S'} imply ftv(Tσ) ⊆ ftv(Γ, Γ'σ). By applying σ to the types in the assumptions of the original derivation D, we obtain with the previously established facts a derivation of Γ, Γ'σ | Σσ ⊢ eσ : Tσ as desired by rule (T-SRV).
(T-IMG), (T-SNAP), (T-REPL), (T-SPWN), (T-SVC), (T-REQ), (T-TABS): Straightforward application of (IH) and substitution.
(T-TAPP): Therefore e = e2[T2], T = T'{α' := T2}, ftv(T2) ⊆ ftv(Γ, α <: S, Γ'), Γ, α <: S, Γ' ⊢ T2 <: T3 and Γ, α <: S, Γ' | Σ ⊢ e2 : ∀α' <: T3. T'. Applying (IH) yields Γ, Γ'σ | Σσ ⊢ e2σ : (∀α' <: T3. T')σ, hence Γ, Γ'σ | Σσ ⊢ e2σ : ∀α' <: T3σ. T'σ. By lemma 7 and Γ, α <: S, Γ' ⊢ T2 <: T3 it holds that Γ, Γ'σ ⊢ T2σ <: T3σ. From ftv(T2) ⊆ ftv(Γ, α <: S, Γ') it holds that ftv(T2σ) ⊆ ftv(Γ, Γ'σ). Applying rule (T-TAPP) yields a derivation of Γ, Γ'σ | Σσ ⊢ e2σ [T2σ] : (T'σ){α' := T2σ}, i.e., Γ, Γ'σ | Σσ ⊢ (e2[T2])σ : (T'{α' := T2})σ, i.e., Γ, Γ'σ | Σσ ⊢ eσ : Tσ as desired.
(T-SUB): By premise of the rule, Γ, α <: S, Γ' | Σ ⊢ e : T' and Γ, α <: S, Γ' ⊢ T' <: T. Applying (IH) to the former yields Γ, Γ'σ | Σσ ⊢ eσ : T'σ. By lemma 7 and Γ, α <: S, Γ' ⊢ T' <: T it holds that Γ, Γ'σ ⊢ T'σ <: Tσ. Thus, by rule (T-SUB) we obtain a derivation of Γ, Γ'σ | Σσ ⊢ eσ : Tσ as desired.

Lemma 9 (Location Typing Extension Preserves Types). If Γ | Σ ⊢ e : T and Σ ⊆ Σ', then Γ | Σ' ⊢ e : T.

Proof. Straightforward induction on the derivation of Γ | Σ ⊢ e : T.

Theorem 2 (Preservation). If Γ | Σ ⊢ e : T and Γ | Σ ⊢ µ and e | µ −→ e' | µ', then Γ | Σ' ⊢ e' : T for some Σ', where Σ ⊆ Σ' and Γ | Σ' ⊢ µ'.

Proof. By induction on the typing derivation D of Γ | Σ ⊢ e : T. We always assume e | µ −→ e' | µ' for some e', µ'. Otherwise, e is stuck (it does not reduce) and the property trivially holds. We also assume Γ | Σ ⊢ µ in each case.

Basis:
(T-VAR), (T-INST), (T-0): Immediate, since e is stuck.

Inductive step:
Induction hypothesis (IH): The property holds for all proper subderivations of the derivation D of Γ | Σ ⊢ e : T.
(T-SRV), (T-IMG), (T-TABS): The property trivially holds, since in each case, e is stuck.
(T-PAR): From the conclusion of the rule it holds that e = par e1, T = Unit and from its premises Γ | Σ ⊢ e1,i : Unit for each e1,i in the sequence e1. By the structure of e, there are three possible rules which can be at the root of the derivation for e | µ −→ e' | µ':
(PAR): Therefore e = par e11 (par e12) e13 and e' = par e11 e12 e13 and µ' = µ. From the premises of (T-PAR) it holds that Γ | Σ ⊢ e12,k : Unit for each e12,k in the sequence e12. Choose Σ' = Σ. Together with the previously established facts we obtain a derivation of Γ | Σ' ⊢ par e11 e12 e13 : T by (T-PAR), Σ ⊆ Σ' and Γ | Σ' ⊢ µ' as desired.
(REACT): Therefore e = par e'', e' = par e'' σb(eb), µ' = µ[i ↦ (s, m')]. By the premises of (REACT), µ(i) = (srv r1 (p . eb) r2, m), match(p, m) ⇓ (m', σ) and σb = σ ∪ {this := i}. Choose Σ' = Σ. Since Γ | Σ ⊢ µ, µ(i) is well typed. From its shape
it is typed by rule (T-I MG) as some img T 0 . Thus,
by the premises of (T-I MG), s is typed as srv T 0 and
each element in the buffer m is a valid request value
for the server template s. By the match soundness
and completeness lemma from the paper, each request
value in the buffer m0 occurs in m. Hence, we obtain
a derivation for Γ, Σ ` (s, m0 ) : img T 0 . Thus,
Γ | Σ ` µ0 and also Γ | Σ0 ` µ0 .
From the shape of server template s, it must be
typed by rule (T-S RV) as the last step in a derivation Ds . Therefore, from the premises of this rule, we
obtain a derivation for Γ, yl,k : Tl,k , this : srv T 0 |
Σ ` eb : Unit, i.e., Γ, yl,k : Tl,k , this : srv T 0 |
Σ0 ` eb : Unit, since Σ0 = Σ. The yl,k : Tl,k are
the arguments in the join pattern p. Applying the
substitution lemma 6 multiple times to the latter
derivation yields a derivation of Γ | Σ0 ` eb : Unit.
Γ | Σ0 ` σb (eb ) : Unit. The first application of the
lemma to eliminate this : srv T 0 is justified by the
derivation Ds . The other applications to eliminate
yl,k : Tl,k are justified by the match soundness and
completeness lemma, which guarantees that the selection of argument values in the substitution σ are
from matching service request values in m, which is
well-typed under Γ | Σ0 . Hence Γ | Σ0 ` σ(yl,k ) : Tl,k
holds.
Finally, since Σ0 = Σ, Γ | Σ ` e00 : Unit, we have
Γ | Σ0 ` e00 : Unit. Together with Γ | Σ0 ` eb : Unit
by rule (T-PAR), we obtain Γ | Σ0 ` par e00 eb : Unit,
i.e., Γ | Σ0 ` e0 : T . This together with the previously
established Γ | Σ0 ` µ0 is the property we wanted to
show.
(C ONG): Therefore e = E[e1,j ] for an expression e1,j in
the sequence e1 and e0 = E[e01,j ] for some e01,j , where
e1,j | µ −→ e01,j | µ0 . Since Γ | Σ ` e1,j : Unit,
it follows from (IH) that there is a derivation of Γ |
Σ0 ` e01,j : Unit with Σ ⊆ Σ0 and Γ | Σ ` µ0 From
Γ | Σ ` e1,i : Unit for each i 6= j, Σ ⊆ Σ0 and
lemma 9, we obtain derivations Γ | Σ0 ` e1,i : Unit.
Together with Γ | Σ0 ` e01,j : Unit we obtain a
derivation for Γ | Σ0 ` e0 : T by rule (T-PAR) as
desired.
(T-S NAP): Therefore e = snap e1 , T = img T 0 and
Γ | Σ ` e1 : inst T 0 . By the structure of e, there are two
possible rules which can be at the root of the derivation
for e | µ −→ e0 | µ0 :
(S NAP): Therefore, e1 = i ∈ N, e0 = (srv r, m) or
e0 = 0, and µ0 = µ. From Γ | Σ ` e1 : inst T 0
and the shape of e1 , rule (T-I NST) is the root of the
corresponding derivation. Thus, i ∈ Σ and Σ(i) =
inst T 0 by the premises of this rule. Choose Σ0 = Σ.
Since Γ | Σ ` µ, µ = µ0 and Σ0 = Σ, it also holds
that Γ | Σ0 ` µ0 . Hence Γ | Σ0 ` e0 : T as desired.
(C ONG): Therefore, e0 = snap e2 for some e2 , and
e1 | µ −→ e2 | µ0 holds. Applying this together with
Γ| Σ ` e1 : inst T 0 to (IH) yields Γ | Σ00 ` e2 : inst T 0
for Σ00 , where Σ ⊆ Σ00 and Γ | Σ00 ` µ0 . Together with
rule (T-S NAP) we obtain Γ | Σ00 ` snap e2 : img T 0 ,
i.e., Γ | Σ00 ` e0 : T . Choose Σ0 = Σ00 .
(T-R EPL): Therefore e = repl e1 e2 , T = Unit,
Γ| Σ ` e1 : inst T 0 and Γ| Σ ` e2 : img T 0 . By the
structure of e, there are two possible rules which can be
at the root of the derivation for e | µ −→ e0 | µ0 :
(R EPL): Therefore, e1 = i ∈ N, i ∈ dom(µ), e2 =
(srv r, m) or e2 = 0, e0 = par ε and µ0 = mu[i 7→
s]. Hence i is well typed under Γ | Σ as inst T 0 .
Together with Γ| Σ ` e2 : img T 0 and Γ | Σ ` µ
it holds that Γ | Σ ` µ0 . Choose Σ0 = Σ. Apply rule
(T-PAR) to obtain Γ | Σ0 ` e0 : T as desired.
(C ONG): Therefore, e0 = repl e01 e02 for some e01 , e02 and
either e1 | µ −→ e01 | µ0 , e2 = e02 or e2 | µ −→ e02 |
µ0 , e1 = e01 holds. We only show the first case, the
other is similar. Apply (IH) to Γ| Σ ` e1 : inst T 0
and e1 | µ −→ e01 | µ0 to obtain Σ00 with Σ ⊆ Σ00
and Γ | Σ00 ` µ0 and Γ | Σ00 ` e01 : inst T 0 . Choose
Σ0 = Σ00 . Apply lemma 9 to Γ| Σ ` e2 : img T 0 in
order to obtain Γ| Σ0 ` e2 : img T 0 . Finally, apply
rule (T-R EPL) to obtain Γ | Σ0 ` e0 : T as desired.
(T-S PWN): Therefore e = spwn e00 , T = inst T 0 and
Γ | Σ ` e00 : img T 0 . By the structure of e, there are two
possible rules which can be at the root of the derivation of
e −→ e0 :
(S PWN): Therefore e00 = (srv r, m) or e00 = 0, e0 =
i ∈ N, i ∈
/ dom(µ) and µ0 = µ[i 7→ e00 ]. With
Γ | Σ ` µ and definition 3 it follows that i ∈
/ dom(Σ).
Choose Σ0 = Σ[i 7→ img T 0 ]. By rule (T-I NST) and
definition of Σ0 , it holds that Γ | Σ0 ` i : inst T 0 , i.e.,
Γ | Σ0 ` e0 : T . By construction, Σ ⊆ Σ0 .
What is left to show is Γ | Σ0 ` µ0 :
First note dom(µ) = dom(Σ) and for all j ∈ dom(Σ),
Σ(j) = Σ0 (j), µ(j) = µ0 (j) and Γ | Σ ` µ(j) : Σ(j).
Hence Γ | Σ ` µ0 (j) : Σ0 (j) and by lemma 9,
Γ | Σ0 ` µ0 (j) : Σ0 (j) for each j ∈ dom(Σ).
From Γ | Σ ` e00 : img T 0 , Σ ⊆ Σ0 and lemma 9
we obtain Γ | Σ0 ` e00 : img T 0 . Together with
the definitions of Σ0 , µ0 , this is a derivation for Γ |
Σ0 ` µ0 (i) : Σ0 (i).
In summary, we have established that
Γ | Σ0 ` µ0 (j) : Σ0 (j) for all j ∈ dom(µ) ∪ {i} =
dom(µ0 ) = dom(Σ0 ). By definition 3, this means
Γ | Σ0 ` µ0 , what was left to show.
(C ONG): Therefore, by the structure of e, it holds that
e = E[e00 ] for the context E[·] = spwn [·]. By the
premise of (C ONG) we obtain e00 | µ −→ e000 | µ0 ,
hence e0 = E[e000 ] = spwn e000 . Applying the (IH) to
Γ | Σ ` e00 : img T 0 and e00 | µ −→ e000 | µ0 yields a
derivation of Γ | Σ0 ` e000 : img T 0 for some Σ0 with
Σ ⊆ Σ0 and Γ | Σ0 ` µ0 . From the previous typing
derivation and rule (T-S PWN) we obtain a derivation
for Γ | Σ0 ` spwn e000 : inst T 0 , i.e., Γ | Σ0 ` e0 : T
as desired.
(T-SVC): Therefore e = e'']xi, T = T1,i, and Γ | Σ ⊢ e'' : inst srv x : T1, where xi : T1,i occurs in the sequence x : T1. Since e | µ −→ e' | µ' by assumption, e'' cannot be a value, otherwise e too would be a value and hence stuck. Together with the structure of e, reduction rule (CONG) is the only possible root of the derivation of e | µ −→ e' | µ', where e = E[e'']. Hence e' = E[e'''] = e''']xi and e'' | µ −→ e''' | µ' by the premise of (CONG). Applying the (IH) to e'' | µ −→ e''' | µ' and Γ | Σ ⊢ e'' : inst srv x : T1 yields a derivation of Γ | Σ' ⊢ e''' : inst srv x : T1, for some Σ' with Σ ⊆ Σ' and Γ | Σ' ⊢ µ'. By rule (T-SVC) and the previously established facts, we obtain a derivation for Γ | Σ' ⊢ e''']xi : T1,i, which is also a derivation of Γ | Σ' ⊢ e' : T as desired.
(T-REQ): Therefore e = e''⟨e1 . . . en⟩, T = Unit, Γ ⊢ e'' : ⟨T1 . . . Tn⟩ and (Γ ⊢ ei : Ti) for i ∈ 1 . . . n. Since e −→ e' by assumption, there is an expression in the set {e'', e1, . . . , en} which is not a value, otherwise e is a value and stuck. Together with the structure of e, reduction rule (CONG) is the only possible root of the derivation of e −→ e'. Therefore e = E[e'''], where E[·] = [·]⟨e1 . . . en⟩ or E[·] = e''⟨ē1 [·] ē2⟩. For any of the possible shapes of E, we can straightforwardly apply the (IH) to obtain a derivation of Γ ⊢ e' : T as desired.
(T-TA PP): Therefore e = e00 [T1 ], T = T 0 {α := T1 },
ftv(T1 ) ⊆ ftv(Γ), Γ ` T1 <: T2 and Γ | Σ ` e00 : ∀α <:
T2 . T 0 . By the structure of e, there are two possible rules
which can be at the root of the derivation of e | µ −→ e0 |
µ0 :
(TA PPA BS): Therefore e00 = Λα <: T2 . e000 and hence
e0 = e000 {α := T1 }. From the structure of e, e00 and
the available rules, there is a proper subderivation in
D of Γ, α <: T2 | Σ ` e000 : T 0 . Together with Γ `
T1<: T2 and the type substitution lemma 8, we obtain a
derivation for Γ | Σ ` e000 {α := T1 } : T 0 {α := T1 }.
Choose Σ0 = Σ, then the previous derivation also is a
derivation of Γ | Σ0 ` e0 : T as desired.
(C ONG): Straightforward application of the (IH) similar
to the previous cases.
(T-S UB): By premise of the rule, Γ | Σ ` e : T 0 and
Γ ` T 0<: T . Apply (IH) to the former and then (T-S UB) to
obtain a derivation of Γ | Σ0 ` e0 : T and an appropriate
Σ0 .
| 6 |
Empirical Evaluation of a Thread-Safe Dynamic
Range Min-Max Tree using HTM
arXiv:1603.05922v1 [cs.DC] 18 Mar 2016
Erick Elejalde, Jose Fuentes-Sepúlveda, and Leo Ferres
University of Concepcion, Concepcion, Chile,
{eelejalde|jfuentess|lferres}@udec.cl
Abstract. Succinct trees, such as wavelet trees and those based on, for instance,
range Min-Max trees (RMMTs), are a family of practical data structures that
store information close to their information-theoretic space lower bound. These
structures are often static; meaning that once they are built, nodes cannot be
added, deleted or modified. This read-only property simplifies concurrency. However, newer versions of these data structures allow for a fair degree of dynamism.
Parallel programming using Hardware Transactional Memory (HTM) has been
available in mainstream microprocessors for a few years now. One limitation
of HTM is still the size of each transaction. This is why HTM's use, for the
moment, is limited to operations that involve few memory addresses that need to
be updated atomically, or where the level of concurrency is low. We provide the
first available implementation of a concurrent, dynamic RMMT based on HTM,
and we compare empirically how well HTM performs compared to a naive implementation using locks. We have shown that because of the formal properties of
RMMTs, HTM is a good fit for adding concurrency to otherwise slow lock-based
alternatives. We have also shown that HTM performs better than locks when
the number of write operations increase, making it a practical structure to use
in several write-intensive contexts. This is, as far as we know, the only practical
implementation of RMMTs thoroughly tested using HTM.
Keywords: concurrent data structures, dynamic range min-max tree, hardware
transactional memory
1 Introduction
Succinct trees (such as wavelet trees (WTs) [1] and those based on, for instance, range
Min-Max trees [2], to name some of the most common) are a family of practical data
structures that store information close to their information-theoretic space lower bound.
These structures are often “static”; meaning that once they are built, nodes cannot be
added, deleted or modified. This read-only property simplifies concurrency. However,
newer versions of these data structures allow for a fair degree of dynamism [2]. As far as
we know, no research has been done on making succinct data structures both dynamic
and concurrent, taking advantage of the processing power of multicore architectures.
One problem in making these structures concurrent is that common locks are usually
slow. Parallel programming using Hardware Transactional Memory (HTM) [3] has been
available in mainstream microprocessors for a few years now. Perhaps given its novelty,
there are still a few restrictions using HTM in practice. One is the limitation on the size
of each transaction. The more instructions a transaction involves, the more likely it will
fail. This is why HTM’s use, for the moment, is limited to operations that involve few
memory addresses that need to be updated atomically, or where the level of concurrency
is low.
Interestingly, the critical regions of dynamic structures such as the range Min-Max
tree are small [4,5]. In WTs, for example, the height of the tree depends on the size of
the alphabet, not of the text. For the range Min-Max tree, in turn, which is used to index the succinct
representation of any tree, the height is constant under the word-RAM model. However,
implementing concurrent dynamic trees always poses a challenge. Particularly when any
data modification involves the update of the entire path from the root to, at least, one
leaf. Since every path shares at least the root with other paths, update operations generate a lot of contention for concurrent accesses, even for read-only queries. Some other
methods for allowing concurrency in dynamic trees, like patching the tree, usually need
a considerable amount of extra space. The use of extra space is generally not advised
whenever succinct structures are required. The alternative is a lock-based implementation. In this case, since every writing operation must reach the leaves, modify it and then
traverse back to the root updating all nodes in the path, it is almost unavoidable to use
a global lock over the entire data structure to protect threads from leaving the tree in
an invalid state.
Our contribution in this paper is two-fold: we provide the first available implementation of a concurrent, dynamic range Min-Max tree based on HTM, and we compare
empirically how well HTM performs compared to a naive implementation using locks,
given that these two techniques present a similar degree of complexity for the programmer.
2 Implementation
In these pages we focus on dynamic, concurrent Range Min-Max trees (RMMTs). The
RMMT structure has been described at length in [2], but only theoretically, and its
practical behavior is still poorly understood. Concurrency aspects of RMMTs, in turn,
have not yet been explored at all. RMMTs may be implemented in different ways. The
original paper lists two of them: a balanced binary tree and a B-tree. The binary tree
version has been implemented by [6], where the authors use a splay tree to improve
the practical performance of some operations. Given that our idea hinges on the control
of the height of the tree and the alignment of the nodes in memory, we opted for the
alternative route, a B+ tree. In [7] the authors evaluate the performance of HTM for a
similar indexing structure but with a focus on Data Bases. In our case each node needs to
store more information about the excesses but the arity of our tree is smaller. Given the
underlying information, the operations over the data structure need to follow a different
semantic. For example, insertions and deletions in our data structure have to be made in
pairs (we have to remove/insert open-close matching parenthesis corresponding to a node
of the underlying tree). We also complement their experiments with a bigger number of
threads to investigate the behavior of our data structure under stress conditions.
To encode any tree succinctly, we can use a balanced parentheses (BP) representation,
created from a depth-first search on the tree: upon reaching a new node, a bit value is
written, say 1, representing an open parenthesis. A bit value 0 is written after visiting
all of its children. Since we only need one bit to represent an open or close parenthesis,
and every node involves both an open and a close parenthesis, this representation takes
2n bits of space. To be able to answer queries, such as navigating to an specific node and
deleting or inserting nodes, and make it efficiently, we need to build an index over the
BP sequence. Here is where the RMMT comes into play. The BP sequence is partitioned
in blocks of S bits and each block is assigned to a leaf of the RMMT. From the leaves
to the root, each node represents the range of the BP sequence covered by its subtree.
Each node stores the total excess on its range. The excess is the number of parenthesis
opened but not yet closed in the range. The excess up to a given position in the sequence
represents the depth of the underlying tree at that point. Each node in the RMMT also
stores the maximum excess in the range, the minimum excess, how many times this
minimum value appears, and finally the number of parenthesis in its range.
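For concreteness, the following C++ sketch shows how the per-block summary values just described (total excess, minimum excess, number of occurrences of the minimum, and maximum excess) can be computed in a single scan of a BP block. The struct and function names are ours, not taken from the implementation evaluated in this paper.

    #include <cstdint>
    #include <algorithm>

    // Illustrative only: summary values a RMMT leaf would store for one BP block.
    struct BlockSummary {
        int32_t excess;    // total excess of the block (opens minus closes)
        int32_t min_exc;   // minimum prefix excess inside the block
        int32_t num_min;   // how many times the minimum is reached
        int32_t max_exc;   // maximum prefix excess inside the block
        int32_t length;    // number of parentheses (bits) in the block
    };

    // Scan a block of `len` bits (1 = open parenthesis, 0 = close parenthesis).
    BlockSummary summarize_block(const uint8_t* bits, int len) {
        int exc = 0, min_exc = INT32_MAX, max_exc = INT32_MIN, num_min = 0;
        for (int i = 0; i < len; ++i) {
            exc += bits[i] ? +1 : -1;
            if (exc < min_exc)       { min_exc = exc; num_min = 1; }
            else if (exc == min_exc) { ++num_min; }
            max_exc = std::max(max_exc, exc);
        }
        return BlockSummary{exc, min_exc, num_min, max_exc, len};
    }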
Since the set of HTM instructions that we used (Intel TSX) have a granularity of CPU
cache lines (64 bytes for Haswell), this means that one transaction might abort because
of false-sharing with another transaction. So we tried to ensure that each transaction’s
updates affect as few nodes as possible, counting as affected nodes all those whose cache
line is modified. To make the tree more HTM friendly, we force each node to take an
entire cache line, so modifications on a given node do not affect cache lines of other
transactions working in parallel over a different set of nodes. To do this without wasting
any memory, we select the k-ary of the tree in a way that the k pointers added to the
information needed by the node will complete the 64 bytes of memory of a cache line.
In our implementation, each node of the RMMT may have up to five children. In
our 64-bit architecture the five pointers plus the five values of excess mentioned above
add up to 60 bytes per node. The other 4 bytes needed to pad the cache line are used
to differentiate the internal nodes from the leaves. In the leaves, the five-word size space
used by the pointers is used instead to store the actual range of bits of the BP sequence.
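A minimal sketch of such a cache-line-sized node is given below, assuming a 64-bit architecture; the field names are illustrative and the exact layout of the evaluated implementation may differ.

    #include <cstdint>

    // Sketch of the node layout described above: five child pointers plus five
    // summary values occupy 60 bytes, and the remaining 4 bytes tag internal
    // nodes vs. leaves.  Leaves reuse the pointer space for the BP bits.
    struct alignas(64) RMMTNode {
        RMMTNode* child[5];   // 5 * 8 = 40 bytes
        int32_t   excess;     // total excess of the covered range
        int32_t   min_exc;    // minimum excess in the range
        int32_t   num_min;    // occurrences of the minimum
        int32_t   max_exc;    // maximum excess in the range
        int32_t   num_paren;  // number of parentheses in the range
        uint32_t  is_leaf;    // padding byte(s): internal node or leaf
    };
    static_assert(sizeof(RMMTNode) == 64, "one node per cache line");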
To keep the tree balanced, the normal procedure of B-trees is followed. A node is
split if it overflows the maximum number of children. If we perform a delete operation
and a node is left with less than half the maximum number of children, it tries to “steal”
some children from an adjacent node, but if the victim node cannot spare any child, both
nodes are merged.
3 Experiments
All algorithms were implemented in the C programming language and compiled using
GCC 4.9.2 using the -O3 optimization flag. The experiments were carried out on an
Intel Xeon E3-1225 v3, with 4 physical cores running at 3.2 GHz. There were only 4
hardware threads running. The computer runs Linux 3.13.0-49-generic, in 64-bit mode.
This machine has a per-core L1 and L2 caches of sizes 32KB and 256KB, respectively
and a shared L3 cache of 8192KB, with a 12,211,900KB (∼12GB) DDR RAM memory.
All alternative solutions were compared in terms of throughput using the usual high-resolution (nanosecond) C functions in <time.h>.
Our experiments consisted of creating a given number of threads and then letting each
one run for a specific amount of time (10 seconds in our setup) in a loop. In every
iteration, each thread has a predefined chance of entering the data structure as a reader
or as a writer. For the lock-based version we use a global lock over the entire data
structure. In the case of HTM, one transaction includes all operations, from finding the
leaf to updating the path to the root. Each transaction, if it aborts, falls back to a
lock-protected critical section.
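The following C++ sketch illustrates this transaction-with-fallback pattern using the Intel TSX (RTM) intrinsics. It is a simplified illustration, not the code used in the experiments: it uses a test-and-set flag instead of the RW-lock, and a placeholder update routine standing in for "find the leaf, modify it, and update the path to the root". It requires a TSX-capable CPU and compilation with -mrtm.

    #include <immintrin.h>   // RTM intrinsics: _xbegin, _xend, _xabort
    #include <atomic>

    static std::atomic<bool> g_locked{false};   // fallback lock flag
    static int g_path[4];                       // stand-in for nodes on a root-to-leaf path

    void update_tree() {                        // placeholder critical section
        for (int& node : g_path) ++node;
    }

    void write_operation(int max_retries) {
        for (int attempt = 0; attempt < max_retries; ++attempt) {
            unsigned status = _xbegin();
            if (status == _XBEGIN_STARTED) {
                // Abort if the fallback lock is held, so the slow path stays exclusive.
                if (g_locked.load(std::memory_order_relaxed))
                    _xabort(0xff);
                update_tree();                  // runs transactionally
                _xend();                        // commit
                return;
            }
            // Transaction aborted: retry a bounded number of times.
        }
        // Slow path: lock-protected critical section.
        while (g_locked.exchange(true, std::memory_order_acquire)) { /* spin */ }
        update_tree();
        g_locked.store(false, std::memory_order_release);
    }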
We ran the experiments allowing the transaction to retry from 0 to 2 times after an
abort before going through the fallback. Although the absolute values change, the
Fig. 1. Scaling on HTM and RW-Lock (wikipedia).
Fig. 2. Varying the writing percentage on HTM and RW-Lock (wikipedia).
relative trend between HTM and Locks keeps the same. Even the commit-fallback ratio
stays almost the same.
Besides the critical section, each iteration runs a non-critical section that repeatedly
modifies a private vector. The non-critical section always runs for 1/100 of the total amount
of time assigned to the outer loop. We were more interested in the interaction between
multiple threads than in how many operations a single thread could perform, so we ran
experiments varying the number of concurrent threads from 10 to 260.
For the lock version we use the RW-lock from the POSIX pthread library since, in
practical applications, an important part of the accesses to this structure are read-only.
We tested our algorithm on different inputs. All of them are succinct representations of a tree. We report here the results for the XML tree of the Wikipedia dump1
(498,753,914 parentheses).
Figure 1 shows that, at the start, both the HTM and lock-based implementation of the
RMMT behave similarly (we report the mean over three repetitions of the experiment).
Beyond 50 threads, however, HTM shows noticeably higher throughput. As expected,
the slowdown in the lock-based version coincides with an increase in HTM fallbacks
(slow path). At this point, threads begin to interfere with each other. Even with this
increase in fallbacks, the HTM throughput continues growing at a fast pace. At
approximately 120 threads HTM also starts to decelerate as it gets closer to 10,000 ops,
where our implementation reaches its peak performance.
We have also compared the behavior of both alternatives (HTM and Locks) varying
the percentage of writers. Figure 2 shows these results. For environments with a low
percentage of writers the lock-based version behaves comparably to the HTM implementation because of the RW-lock. In other words, a few writers do not generate too much
contention. At 50% of writing operations, HTM throughput exceeds the lock-based implementation by 20% for 160 threads and 26% with 260 threads. Thus, in write-intensive
environments, such as many XML documents in the web, the use of HTM represents an
important improvement over locks.
1 http://dumps.wikimedia.org/enwiki/20150112/enwiki-20150112-pages-articles.
4 Conclusions and Future Work
We have shown that because of the formal properties of RMMTs, HTM is a good fit for
adding concurrency to otherwise slow lock-based alternatives. We have also shown that
HTM performs better than locks when the number of write operations increase, making
it a practical structure to use in several write-intensive contexts. This is, as far as we
know, the only practical implementation of RMMTs thoroughly tested using HTM.
In the future, we plan to test the behavior of other alternatives to allow concurrency,
such as NUMA-based locking. These may well complement the current proposal.
We could also explore batch processing, grouping a few insertions and deletions in the
same BP block in a single transaction. It could be the case that a set of update operations
“cancel” each other’s effect on the excesses counters and prevent having to spread the
actualization to higher levels in the RMMT, conflicting with other transactions.
We also want to experiment changing the data structure to make it more HTM
friendly. For instance, storing the length of the segment on a different structure may
reduce conflicts. The length value changes frequently, even when all other values remain
unchanged.
References
1. G. Navarro, “Wavelet trees for all,” in Combinatorial Pattern Matching, ser. Lecture Notes
in Computer Science, J. Kärkkäinen and J. Stoye, Eds. Springer Berlin Heidelberg, 2012,
vol. 7354, pp. 2–26. [Online]. Available: http://dx.doi.org/10.1007/978-3-642-31265-6 2
2. G. Navarro and K. Sadakane, “Fully functional static and dynamic succinct trees,”
ACM Trans. Algorithms, vol. 10, no. 3, pp. 16:1–16:39, May 2014. [Online]. Available:
http://doi.acm.org/10.1145/2601073
3. M. Herlihy and J. E. B. Moss, “Transactional memory: Architectural support for lock-free
data structures,” in Proceedings of the 20th Annual International Symposium on Computer
Architecture, ser. ISCA ’93. New York, NY, USA: ACM, 1993, pp. 289–300. [Online].
Available: http://doi.acm.org/10.1145/165123.165164
4. J. Fuentes-Sepúlveda, E. Elejalde, L. Ferres, and D. Seco, “Efficient wavelet tree construction
and querying for multicore architectures,” in Experimental Algorithms - 13th International
Symposium, SEA 2014, Copenhagen, Denmark, June 29 - July 1, 2014. Proceedings, 2014,
pp. 150–161.
5. L. Ferres, J. Fuentes-Sepúlveda, M. He, and N. Zhe, “Parallel construction of succinct trees,”
in Experimental Algorithms - 14th International Symposium, SEA 2015, Paris, France, June
29 - July 1, 2015. Proceedings.
6. S. Joannou and R. Raman, “Dynamizing succinct tree representations,” in Proceedings of the
11th International Conference on Experimental Algorithms, ser. SEA’12. Berlin, Heidelberg:
Springer-Verlag, 2012, pp. 224–235.
7. T. Karnagel, R. Dementiev, R. Rajwar, K. Lai, T. Legler, B. Schlegel, and
W. Lehner, “Improving in-memory database index performance with Intel® transactional
synchronization extensions,” in 20th IEEE International Symposium on High Performance
Computer Architecture, HPCA 2014, Orlando, FL, USA, February 15-19, 2014, 2014, pp.
476–487. [Online]. Available: http://dx.doi.org/10.1109/HPCA.2014.6835957
| 8 |
Exemplar or Matching: Modeling DCJ Problems
with Unequal Content Genome Data
Zhaoming Yin^2, Jijun Tang^{1,3}*, Stephen W. Schaeffer^4 and David A. Bader^2*
arXiv:1705.06559v1 [cs.DS] 18 May 2017
^1 School of Computer Science and Technology, Tianjin University, China
^2 School of Computational Science and Engineering, Georgia Institute of Technology, USA
^3 Dept. of Computer Science and Engineering, University of South Carolina, USA
^4 The Huck Institutes of Life Sciences, Pennsylvania State University, USA
Abstract. The edit distance under the DCJ model can be computed in
linear time for genomes with equal content or with Indels. But it becomes
NP-Hard in the presence of duplications, a problem largely unsolved
especially when Indels are considered. In this paper, we compare two
mainstream methods to deal with duplications and associate them with
Indels: one by deletion, namely DCJ-Indel-Exemplar distance; versus
the other by gene matching, namely DCJ-Indel-Matching distance. We
design branch-and-bound algorithms with set of optimization methods
to compute exact distances for both. Furthermore, median problems are
discussed in alignment with both of these distance methods, which are to
find a median genome that minimizes distances between itself and three
given genomes. Lin-Kernighan (LK ) heuristic is leveraged and powered
up by sub-graph decomposition and search space reduction technologies
to handle median computation. A wide range of experiments are conducted on synthetic data sets and real data sets to show pros and cons
of these two distance metrics per se, as well as putting them in the median computation scenario.
Keywords: Genome Rearrangement, Double-cut and Join (DCJ), Lin-Kernighan Heuristic.

1 Introduction
Over the last years, many distance metrics have been introduced to calculate the
dissimilarity between two genomes by genome rearrangement [2,3,5,30]. Among
them, DCJ distance is largely studied in recent years due to its capability to
model various forms of rearrangement events, with a cheap cost of linear time
computation. However, when considering duplications, the distance computation
becomes NP-hard [10] and APX-hard [1,12] for various distance models. There
are two approaches to treat duplications, both targeted at removing duplicated genes so that existing linear algorithms can be utilized subsequently.
* Corresponding Authors
The first approach identifies the so-called exemplar genes [23] in order to retain
one copy of the gene in each duplicated gene family, while the other assigns a one-to-one matching to the duplicated genes in each gene family [24, 25]. Situated in
the context of duplications, gene insertions and deletions (Indels) are also important rearrangement events that result in unequal content [8]. Pioneering works
were conducted to study sorting and distance computation by reversals with
Indels [17]. Later on, the DCJ-Indel distance metric was introduced to take advantages of the DCJ model. Braga et al [7] proposed the first framework to
compute the DCJ-Indel distance; Compeau later simplified the problem with a
much more elegant distance formula [13]. In this paper, we adapt the previous
research results to design algorithms that procure the ability to handle both
duplications and Indels when computing DCJ distance.
As evolutionary analysis generally involves more than two species, it is necessary
to extend the above distances to deal with multiple genomes. Since three species
form the smallest evolutionary tree, it is critical to study the median problem,
which is to construct a genome that minimizes the sum of distances from itself
to the three input genomes [6, 18]. The median problem is NP-hard under most
distance metrics [4, 9, 21, 27]. Several exact algorithms have been implemented
to solve the DCJ median problems on both circular [27, 29] and linear chromosomes [26,28]. Some heuristics are brought forth to improve the speed of median
computation, such as linear programming (LP) [9], local search [16], evolutionary programming [14], or simply searching on one promising direction [22]. All
these algorithms are intended for solving median problems with equal content
genomes, which are highly unrealistic in practice. In this paper, we implement a
Lin-Kernighan heuristic leveraging the aforementioned two distance metrics to
compute DCJ median when duplications and Indels are considered.
2 Background
2.1 Genome Rearrangement Events and their Graph Representations
Genome Rearrangement Events The ordering of a genome can be changed
through rearrangement events such as reversals and transpositions. Fig 1 shows
examples of different events of a single chromosome (1 -2 3 4 -5 6 7). In the examples, we use signed numbers to represent different genes and their orientations.
Genome rearrangement events involve multiple combinatorial optimization
problems, and graph representations are commonly used to abstract these problems. In this
part, we will address the foundations of using the breakpoint graph to abstract
genome rearrangement events.
Breakpoint Graph Given an alphabet A, two genomes Γ and Π are represented
by two strings of signed (+ or −) numbers (representing genes) from A. Each
gene a ∈ A is represented by a pair of vertices head ah and tail at ; If a is positive
Fig. 1. Example of different rearrangement events.
Fig. 2. Examples of (a) a BPG and (b) DCJ operations.
ah is put in front of at, otherwise at is put in front of ah. For a, b ∈ A, if
a, b ∈ Γ and they are adjacent to each other, their adjacent vertices will be connected
by an edge. For a telomere gene, if it exists in a circular chromosome, its two end
vertices will be connected by an edge; if it exists in a linear chromosome, its two end
vertices will be connected to a special vertex called the CAP vertex. If we use one
type of edges to represent adjacencies of gene order Γ and another type of edges
to represent adjacencies of gene order Π, the resulting graph with two types of
edges is called breakpoint graph (BPG). Fig 2(a) shows the BPG for gene order
Γ (1,-2,3,-6,5) (edge type: solid edges) which has one circular chromosome and
Π (1,2,3,7,4) (edge type: dashed edges) which has one linear chromosome.
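As an illustration, the following C++ sketch builds the adjacency edges of one linear chromosome in the head/tail vertex encoding described above. The vertex numbering and helper names are ours and serve only to make the construction concrete; building these edge sets for Γ with one edge type and for Π with another, over the same vertex set, yields the breakpoint graph of the two genomes.

    #include <vector>
    #include <utility>
    #include <cstdlib>

    // Gene g (numbered from 1) gets tail vertex 2*(g-1) and head vertex 2*(g-1)+1.
    inline int tail(int g) { return 2 * (std::abs(g) - 1); }
    inline int head(int g) { return 2 * (std::abs(g) - 1) + 1; }

    // The vertex of signed gene a facing the next (resp. previous) gene.
    inline int right_end(int a) { return a > 0 ? head(a) : tail(a); }
    inline int left_end(int a)  { return a > 0 ? tail(a) : head(a); }

    // Adjacency edges of one linear chromosome; CAP = -1 marks telomeres.
    std::vector<std::pair<int,int>> adjacency_edges(const std::vector<int>& chrom) {
        const int CAP = -1;
        std::vector<std::pair<int,int>> edges;
        if (chrom.empty()) return edges;
        edges.push_back({CAP, left_end(chrom.front())});
        for (size_t i = 0; i + 1 < chrom.size(); ++i)
            edges.push_back({right_end(chrom[i]), left_end(chrom[i + 1])});
        edges.push_back({right_end(chrom.back()), CAP});
        return edges;
    }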
DCJ operation Double-cut and join (DCJ ) operations are able to simulate
all rearrangement events. In a BPG, these operations cut two edges (within one
genome) and rejoin them using two possible combinations of end vertices (shown
in Fig 2(b)).
2.2 Distance computation
DCJ distance DCJ distance of genomes with the same content can be easily
calculated by enumerating the number of cycles/paths in the BPG [30], which
is of linear complexity.
DCJ-Indel distance When Indels are introduced in BPG, with two genomes
Γ and Π, the vertices and edges of a closed walk form a cycle. In Fig 2(a), the
walk (1t , (1t ; 2h ), 2h , (2h ; 3h ), 3h , (3h ; 2t ), 2t , (2t ; 1t ), 1t ) is a cycle. A vertex v is
π-open (γ-open) if v 6∈ Γ (v 6∈ Π). An unclosed walk in BPG is a path. Based
on different kinds of ends points of paths, we can classify paths into different
types. If the two ends of a path are CAP vertices, we simply denote this path
as p0 . If a path is ended by one open vertex and one CAP, we denote it as pπ
(pγ ). If a path is ended by two open vertices, we denote it by the types of its two
open vertices: for instance, pπ,γ represents a path that ends with a π-open vertex
and a γ-open vertex. In Fig 2(a), the walk (5t , (5t ; 1h ), 1h , (1h ; CAP ), CAP ) is a
pγ path and the walk (6t , (6t ; 3t ), 3t , (3t ; 7h ), 7h ) is a pγ,π path. A path is even
(odd), if it contains even (odd) number of edges. In [13], if |A|= N the DCJ
distance between two genomes with Indels but without duplications is calculated
by equation (1). We call this distance DCJ-Indel distance. From this equation,
we can easily get the DCJ-Indel distance between Γ and Π in Fig 2(a) as 4.
dindel(Γ, Π) = N − [ |c| + |p^{π,π}| + |p^{γ,γ}| + ⌊|p^{π,γ}|/2⌋ ]
               + 1/2 ( |p^0_even| + min(|p^γ_odd|, |p^π_even|) + min(|p^π_odd|, |p^γ_even|) + δ )        (1)

where δ = 1 only if p^{π,γ} is odd and either |p^π_odd| > |p^γ_even|, |p^γ_odd| > |p^γ_even| or
|p^π_odd| < |p^γ_even|, |p^γ_odd| < |p^γ_even|; otherwise, δ = 0.
DCJ-Exemplar(Matching) distance There are in general two approaches
to cope with duplicated genes. One is by removing all but keeping one copy in
a gene family to generate an exemplar pair [23] and the other is by relabeling
duplicated genes to ensure that every duplicated gene has unique number [24,
25]. Both of these two distances can be computed with BPG using branch-andbound methods. For both of the distance metrics, the upper bound can be easily
derived by assigning an arbitrary mapping to two genomes then computing their
mutual distance. In the paper [23] regarding exemplar distance, it is proved that
by removing all occurrences of unfixed duplicated gene families, the resulting
distance is monotonically decreasing; hence the resulting distance can serve as
a lower bound. In the paper [11] regarding matching distance, the authors proposed
a way for computing lower bounds by measuring the number of breakpoints
between two genomes, which might not directly imply the lower bound between
genomes with Indels. However, it is still possible to use this method to find a
‘relaxed’ lower bound.
Distance Estimation Note that mathematically optimized distance might not
reflect the true number of biological events, thus several estimation methods such
as EDE or IEBP are used to rescale these computed distances [19] to better fit
true evolutionary history.
2.3 Median Computation
If there are three given genomes, the graph constructed by pre-defined BPG
rule is called a Multiple Breakpoint Graph (MBG). Figure 3(a) shows an ex-
Fig. 3. (a) Example of an MBG with three input genomes: (1,2,3,4) (solid edges), (1,2,-3,4) (dashed edges) and (2,3,1,-4) (dotted edges); (b) the 0-matching operation; (c) an adequate subgraph and edge shrinking operations.
ample of MBG with three input genomes. When there are only equal content
genomes, the DCJ median problem can be briefly described by finding a maximum matching (which is called 0-matching) in MBG. Figure 3(b) shows an
example of 0-matching which is represented by gray edges. In [29], it is proven
that a type of sub-graph called adequate sub-graph (AS) could be used to decompose the graph with edge shrinking operations, which are shown in Figure 3(c).
Unfortunately, there is no branch-and-bound based median algorithm that deals
with unequal content genomes. In the following section, we will show that it is
actually difficult to design such algorithm.
3 Approaches
3.1 Proposed Distance Metrics
We have discussed DCJ, DCJ-Indel and DCJ-Exemplar(Matching) distances,
here we formally define the DCJ-Indel-Exemplar(Matching) distances as follows:
Definition 1. An exemplar string is constructed by deleting all but one occurrence of each gene family. Among all possible exemplar strings, the minimum
distance that one exemplar string returns is the DCJ-Indel-Exemplar distance.
Definition 2. A matching string is constructed by assigning a one-to-one mapping to each occurrence of genes in a gene family and relabel them to distinct
markers. Among all possible matching strings, the minimum distance that one
matching string returns is the DCJ-Indel-Matching distance.
Fig. 4. Examples of exemplar and matching distance in the form of BPG representation.
Figure 4 shows examples of BPG representation of exemplar mapping from
genome Γ (1, -2, 3, 2, -6, 5) and genome Π (1, 2, 3, 7, 2, 4) to Γ (1, 3, 2, 6, 5) and genome Π (1, 3, 7, 2, 4), and a matching that mapping from genome
Γ (1, -2, 3, 2, -6, 5) and genome Π (1, 2, 3, 7, 2, 4) to Γ (1, -2, 3, 2’, -6, 5) and
genome Π (1, 2’, 3, 7, 2, 4).
We can use branch-and-bound methods which are applied in DCJ-Exemplar
(Matching) distances to solve these two distances.
3.2 Optimization Methods
Optimal Assignments Although branch-and-bound algorithms are based on
enumerating the number of cycles/path in BPG, it is not necessary to enumerate
every component in the graph, as both [11, 25] indicated that there are some
specific patterns in BPG which can be fixed before the distance computation.
In this paper, we will extend their result in our optimization methods for DCJIndel-Exemplar(Matching) distances.
To begin with, we define some terms for future explanation. There are two categories of vertices in a BPG: one connects exactly one edge of each edge type (in
this paper edge types are expressed by such as dotted, dashed edges etc.), they
are called regular vertices; the other connects fewer or more than one edges of
each edge type, they are called irregular vertices. A subgraph in a BPG that only
contains regular vertices is defined as regular subgraph, while one that contains
irregular vertices is defined as irregular subgraph. In BPG with two genomes Γ
and Π, vertices and edges of a closed walk form a cycle.
Theorem 1. In a BPG, an irregular subgraph which is a cycle of length 2 can
be fixed before computation without losing accuracy.
Proof. Without loss of generality, the proof is sound for both DCJ-Indel-Exemplar
and DCJ-Indel-Matching distances. We prove the theorem under two cases:
1. for the subgraph in the component which only contains cycles, this is a case
that is exactly the same as mentioned in [25], proof.
2. for the subgraph in the component which contains paths, since no type of
the paths has count more than one (which is the count of a cycle), following
the similar proof strategy in [25], we can get the same conclusion.
Adopting Morph Graph Methods to Condense BPG If a gene family has
multiple copies of the gene, its corresponding two vertices (head and tail) in the
BPG will have degree of more than one. In contrary, vertex representations of
those singleton genes always have degree of one or zero. Once an ‘exemplar’ or
‘matching’ is fixed, only edges incident to vertices that have degree of more than
one have been changed. We can view the computation of exemplar or matching
distance as the process of morphing (or streaming) [32] the BPG in order to find
an ad hoc shape of the BPG that achieves optimality. Following this hint, we
can bridge out all vertices that are stable and investigate only the dynamically
changing vertices without losing accuracy. Suppose there are V vertices in the
BPG, where Vs are stable and Vd are dynamic; the asymptotic speedup of this
morph-BPG strategy will be O(V/Vd).
Harness the Power of Divide-and-Conquer Approach to Reduce the
Problem Space In the paper by Nguyen et al [20], the authors proposed a
divide and conquer method to quickly calculate the exemplar distance. Inspired
by their idea, we propose the following divide-and-conquer method to compute
the above two distances based on the BPG. We have the follow observation:
Theorem 2. The DCJ-Indel-Exemplar (Matching) distance is optimal iff the
choices of exemplar edges (cycle decomposition) in each connected components
of BPG are optimal.
Proof. Since it’s obvious that for regular connected component of BPG, there
is only one choice of edges, the proof under this case is trivial. For irregular
connected component of BPG, we prove by contrary: suppose there is another
edge selection that can result in a better distance, based on the corresponding
BPG, there must be at least one connected component that has a better edge
selection, replacing it with a better edge selection will result in a better distance,
which violates the assumption.
Algorithm 1: DCJIndelExem(Matc)Distance
Input: G1 and G2
Output: Minimum distance d
  Apply optimization methods on G1, G2;
  G1', G2' ← randomly init exemplar (matching) of all duplicated genes of G1, G2;
  G1*, G2* ← remove all duplicated genes of G1, G2;
  min_ub ← DCJIndel(G1', G2');
  min_lb ← DCJIndel(G1*, G2*);
  Init search list L of size min_ub − min_lb and insert G1, G2;
  while min_ub > min_lb do
    G1+, G2+ ← pop from L[min_lb];
    for pair ∈ all mappings of next available duplicated gene do
      G1+, G2+ ← G1+, G2+ with the exemplar (matching) of pair fixed;
      G1+', G2+' ← randomly init exemplar (matching) of the remaining duplicated genes of G1+, G2+;
      G1+*, G2+* ← remove the remaining duplicated genes of G1+, G2+;
      ub ← DCJIndel(G1+', G2+');
      lb ← DCJIndel(G1+*, G2+*);
      if lb > min_ub then
        discard G1+, G2+;
      if ub < min_ub then
        min_ub ← ub;
      else if ub = max_lb then
        return d = ub;
      else
        insert G1+, G2+ into L[lb];
  return d = min_lb;
Combining three optimization methods in tandem with the branch-and-bound
framework, we can summarize our algorithm to compute DCJ-Indel-Exemplar
(Matching) distance as outlined in Algorithm 1.
3.3 Adapting Lin-Kernighan Heuristic to Find the Median Genome
Problem Statement Not surprisingly, finding the median genome that minimizes the DCJ-Indel-Exemplar(Matching) distance is challenging. To begin with,
given three input genomes, there are multiple choices of possible gene content
selections for the median; however, since identifying gene content is simpler and
there exist very accurate and fast methods to fulfil the task [15], we are more
interested in a relaxed version of the median problem that assumes known gene
content on the median genome, which is formally defined as:
Definition: Given the gene content of a median genome, and gene orders of
three input genomes. Find an adjacency of the genes of the median genome
that minimize the DCJ-Indel-Exemplar(Matching) distance between the median
genome and the three input genomes.
The DCJ-Indel-Exemplar(Matching) median problem is not even in the class
of NP because there is no polynomial time algorithm to verify the results.
It is hard to design an exact branch-and-bound algorithm for the DCJ-IndelExemplar(Matching) median problem mainly because the DCJ-Indel distance
violates the property of the triangle inequality, which is required for a distance
metric [31]. Furthermore, when there are duplicated genes in a genome, it is
possible that there are multiple edges of the same type connecting to the same
vertex of a 0-matching, which leads to ambiguity in the edge shrinking step and
makes the followed branch-and-bound search process very complicated and extremely hard to implement. To overcome these problems, we provide an adaption
of Lin-Kernighan (LK ) heuristic to help solving this challenging problem.
Design of the Lin-Kernighan Heuristic The LK heuristic can generally be
divided into two steps: initialize the 0-matching for the median genome, and LK
search to get the result.
The initialization problem can be described as: given the gene contents of three
input genomes, find the gene content of the median genome that minimizes the
sum of the number of Indels and duplications operations required to transfer the
median gene content to the gene contents of the other three genomes. In this
paper, we design a very simple rule to initialize the median gene content: given
the counts of each gene family occurred in the three genomes, if two or three
counts are the same, we simply select this count as the number of occurrence of
the gene family in the median genome; if all three counts are different, we select
the median count as the number of occurrence of the gene family in the median
genome.
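A small C++ sketch of this initialization rule is given below; the function names are illustrative and we assume gene-family counts are given as maps from family identifiers to counts, with the first genome's map listing every family of interest.

    #include <algorithm>
    #include <map>
    #include <string>

    // If at least two of the three counts agree, take that count;
    // otherwise take the median of the three counts.
    int median_content(int c1, int c2, int c3) {
        if (c1 == c2 || c1 == c3) return c1;
        if (c2 == c3) return c2;
        return std::max(std::min(c1, c2), std::min(std::max(c1, c2), c3));  // median of 3
    }

    std::map<std::string, int> init_median_content(const std::map<std::string, int>& g1,
                                                   const std::map<std::string, int>& g2,
                                                   const std::map<std::string, int>& g3) {
        auto get = [](const std::map<std::string, int>& g, const std::string& f) {
            auto it = g.find(f);
            return it == g.end() ? 0 : it->second;
        };
        std::map<std::string, int> median;
        for (const auto& kv : g1) {
            const std::string& family = kv.first;
            median[family] = median_content(kv.second, get(g2, family), get(g3, family));
        }
        return median;
    }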
After fixing the gene content for the median genome, we randomly set up the
0-matching in the MBG. The LK heuristic then selects two 0-matching edges
on the MBG of a given search node and performs a DCJ operation, obtaining the
MBG of a neighboring search node. We expand the search frontier by keeping all
neighboring search nodes up to search level L1. Then we only examine
and add the most promising neighbors to the search list until level L2. The
search is continued when there is a neighbor solution yielding a better median
score. This solution is then accepted and a new search is initialized from
scratch. The search is terminated if there is no improvement on the result
once the search level limits have been reached and all possible neighbors have been
enumerated. If L1 = L2 = K, the algorithm is called a K-OPT algorithm.
Adopting Adequate Sub-graphs to Simplify Problem Space By using
the adequate subgraphs [26, 29], we can prove that they are still applicable for
decomposing the graph in the DCJ-Indel-Exemplar(Matching) median problem.
Algorithm 2: DCJIndelExem(Matc)Median

Input: MBG G, search levels L1 and L2
Output: 0-matching of G

Init search list L of size L1;
Init 0-matching of G;
currentLevel ← 0 and Improved ← true;
while Improved = true do
    currentLevel ← 0 and Improved ← false;
    Insert G into L[0];
    while currentLevel < L2 do
        G′ ← pop from list L[currentLevel];
        if G′ improves the median score then
            G ← G′;
            Improved ← true and break;
        if currentLevel < L1 then
            for each 0-matching edge pair x of G′ do
                G′ ← perform DCJ on G′ using x;
                if num_pair(x) > δ then insert G′ into L[currentLevel + 1];
        else
            G′ ← perform DCJ on G′ using x = argmax_x num_pair(x);
            if num_pair(x) > δ then insert G′ into L[currentLevel + 1];
        currentLevel ← currentLevel + 1;
return 0-matching of G;
Lemma 1. As long as no irregular vertices are involved, regular adequate subgraphs can be used to decompose the MBG.

Proof. If there are d vertices in the MBG that carry duplicated edges, we can disambiguate the MBG by generating subgraphs that each contain only one of the duplicated edges. We call these subgraphs disambiguated MBGs (d-MBGs), and there are O(∏_{i<d} deg(i)) of them. If a regular adequate subgraph exists in the MBG, it must also exist in every d-MBG. Based on the 0-matching solution, we can transform every d-MBG into a completed d-MBG (cd-MBG) by constructing the optimal completion [13] between the 0-matching and the other three types of edges. After this step, the adequate subgraphs of every d-MBG still exist in every cd-MBG; thus we can use these adequate subgraphs to decompose the cd-MBG for each median problem without losing accuracy.
Search Space Reduction Methods The performance bottleneck of the median computation lies in the exhaustive search step: at each search level we need to consider O(|E|^2) edge pairs, which is O(|E|^{2L1}) in total. Unlike the well-studied traveling salesman problem (TSP), where finding the best neighbor is cheap, here each evaluation requires computing the NP-hard DCJ-Indel-Exemplar (Matching) distance, which makes this step extremely expensive. Noticing that if we search neighbors on edges that lie on the same 0-i color-alternating connected component (0-i-comp), the DCJ-Indel-Exemplar (Matching) distance between genome 0 and genome i is more likely to decrease [32], we can sort the edge pairs by how many 0-i-comps they share. Let num_pair(x) denote the number of 0-i-comps shared by an edge pair x. In the exhaustive search step (currentLevel < L1), we set a threshold δ and only add edge pairs satisfying num_pair(x) > δ to the search list. In the recursive deepening step, we only add the edge pair maximizing num_pair(x). This strategy has two merits: 1) non-promising neighbor solutions are eliminated, reducing the search space; 2) the expensive evaluation step, which makes a call to the DCJ-Indel-Exemplar (Matching) distance, is postponed until a solution is retrieved from the search list.
The LK-based median computation algorithm is outlined in Algorithm 2.
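For readers who prefer runnable code to pseudocode, the following Python sketch mirrors the two-level search of Algorithm 2. The helpers median_score, dcj_neighbors and num_pair are placeholders for the multiple-breakpoint-graph routines of our implementation; they, as well as the exact bookkeeping of the search lists, are assumptions of this sketch rather than a faithful transcription of the released code.

    def lk_median_search(G, L1, L2, delta, median_score, dcj_neighbors, num_pair):
        # median_score(G): DCJ-Indel-Exemplar(Matching) score of the current 0-matching
        # dcj_neighbors(G): iterable of (edge_pair, neighbor_graph) DCJ moves
        # num_pair(pair): number of 0-i components shared by the edge pair
        improved = True
        while improved:
            improved = False
            best = median_score(G)
            levels = [[] for _ in range(L2 + 1)]
            levels[0].append(G)
            for level in range(L2):
                while levels[level]:
                    cand = levels[level].pop()
                    if median_score(cand) < best:        # better median score:
                        G, improved = cand, True         # accept and restart from scratch
                        break
                    moves = list(dcj_neighbors(cand))
                    if level >= L1:                      # recursive deepening: best move only
                        moves = [max(moves, key=lambda m: num_pair(m[0]))]
                    for pair, neighbor in moves:         # exhaustive step (level < L1)
                        if num_pair(pair) > delta:
                            levels[level + 1].append(neighbor)
                if improved:
                    break
        return G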
4 Experimental Results
We implemented our code in Python and C++: the Python code realizes the optimization methods, while the C++ code builds on the parallel branch-and-bound framework OPTKit. We conducted extensive experiments to evaluate the accuracy and speed of our distance and median algorithms using both simulated and real biological data. Experiments ran on a Linux machine with 16 GB of memory and a 16-core Intel(R) Xeon(R) E5530 CPU running at 2.4 GHz; all experiments used a single thread, and the code was compiled with g++ 4.8.1.
4.1 Distance Computation
To the best of our knowledge, there is no software package that can handle both
duplications and Indels. We compare our DCJ-Indel-Exemplar (Matching) distances with GREDO [25], a software package based on linear programming that
can handle duplications.
Simulated Data The simulated data sets are generated with genomes containing 1,000 genes. The Indel rate (γ) is set to 5% and the duplication rate (φ) to 10%. Since GREDO cannot process Indel data, all Indels are removed for GREDO. We compare how the distance estimates change as the mutation rate θ grows from 10% to 100%. The experimental results for simulated data are displayed in Figure 5.
1. For computational time, since the running times span a range of thousands of seconds, we display them on a log scale for clarity.
Fig. 5. Experimental results for distance computation using simulated data: (a) time results; (b) distance results.
Fig. 6. Experimental results for median computation applying the DCJ-Indel-Exemplar distance: (a) γ = φ = 0% and θ varies from 10% to 100%; (b) γ = φ = 5% and θ varies from 10% to 60%.
When the mutation rate is less than 50%, all three methods perform similarly, although GREDO is faster than both of our branch-and-bound methods. However, GREDO slows down dramatically as the mutation rate increases.
Data               Distance Results              Time Results
                   GREDO   Exem    Matc          GREDO    Exem     Matc
brownrat_chicken   1678    24546   24704         3604.28  172.73   7.45
brownrat_gorilla   1274    17922   17966         5707.13  12.64    12.10
brownrat_human     1083    17858   17900         3725.76  12.14    12.19
brownrat_mouse     790     15433   15445         3725.66  14.51    15.06
chicken_gorilla    1491    16379   16421         3725.62  7.54     7.57
chicken_human      1521    16231   16276         3725.65  7.74     7.47
chicken_mouse      1528    15712   15745         3726.03  9.82     8.16
gorilla_human      486     17798   17798         3607.63  13.94    13.81
gorilla_mouse      860     18914   18935         4816.31  12.60    12.13
human_mouse        749     18126   18144         94.64    12.45    12.61

Table 1. Experimental results for distance computation with the real data set.
Fig. 7. Experimental results for median computation applying the DCJ-Indel-Matching distance: (a) γ = φ = 5% and θ varies from 10% to 100%; (b) γ = φ = 10% and θ varies from 10% to 100%.
In contrast, the running time of our branch-and-bound methods grows far more slowly.
2. For computational accuracy, we show the distance results corrected by the EDE approach, one of the best true-distance estimators. On simulated data, we see that when the mutation rate is small (< 50%), GREDO underestimates the distance compared with our two branch-and-bound methods, and it overestimates the distance as the mutation rate grows.
Real data We prepared the real data sets using genomes downloaded from Ensembl, processed following the instructions in [25]. The real data set contains five species: brown rat, chicken, human, mouse and gorilla. For the DCJ-Indel-Exemplar (Matching) distance, we only convert the Ensembl format to adapt the data to our program. As with the simulated data, all Indels in the real data set are removed for GREDO. The results for real data are shown in Table 1.
1. For computational time, the branch-and-bound method shows orders of magnitude of speed-up compared with GREDO. Analyzing the data, the reason can be attributed to the existence of multiple connected components in the BPG, which allows our method to divide the graph into much smaller pieces, whereas GREDO has no such mechanism.
2. For computational accuracy, the distance results on the real data give a sense of how frequently Indels occur in genome evolution. We see orders of magnitude of difference between our distance results and those of GREDO, which is mainly due to the large number of Indels in the real data set. Note that we did not change the way GREDO computes its distance from the description in [25]; in a real distance computation, Indels should be considered together with duplications.
4.2 Median Computation
Median Computation We simulate median data for three genomes using the same strategy as in the distance simulation. In our experiments, each genome is "evolved" from a seed genome (the identity), and all three evolve with the same evolutionary rates (θ, γ and φ). The sequence length in the median experiments is reduced to 50 due to performance issues.
DCJ-Indel-Exemplar median We analyze the results of the LK algorithm with L1 = 2 and L2 = 3, and of the K-OPT algorithm with K = 2. Search space reduction is used, with δ = 2 and δ = 3 respectively.
1. We first compare our results on equal-content data, since benchmark programs are available for this setting. We run the exact DCJ median solver from [32] to compare our heuristic with the exact median results. Fig. 6(a) shows the accuracy of our heuristic versus the exact result: when θ ≤ 60%, the results of both the LK and K-OPT methods are quite close to those of the exact solver. For δ = 2, both the LK and K-OPT methods generate exactly the same results in most cases.
2. For median results with unequal gene content, we set both γ and φ to 5% and increase the mutation (inversion) rate θ from 10% to 60%. We compare our results with the accumulated distance of the three genomes to their simulation seed. Although this cannot establish the accuracy of our method (since no exact solver is available), it serves as an indicator of how close our method comes to the true evolutionary history. Fig. 6(b) shows that for δ = 3, both the LK and K-OPT algorithms obtain results quite close to the true evolutionary distance.
DCJ-Indel-Matching median Since the DCJ-Indel-Exemplar median experiments already show how LK performs against the exact solver and how its different parameter settings behave, we choose LK with L1 = 2, L2 = 3 and δ = 2 as the configuration for our DCJ-Indel-Matching median solver. We use the same data as in the previous experiments; the results are shown in Figures 7(a) and 7(b). In general, the new implementation is quite close to the true result when γ = φ = 5% and slightly worse when γ = φ = 10%.
5 Conclusion
In this paper, we proposed a new way to compute the distance and median between genomes with unequal content (allowing Indels and duplications). Our distance method can handle Indels, which are ubiquitous in real data sets, and proves more efficient than GREDO. We designed a Lin-Kernighan-based method to compute medians, which obtains results close to those of the exact median solver while also handling duplications and Indels.
6 Acknowledgements
This research was sponsored in part by NSF grants OCI-0904461 (Bader), OCI-0904179, IIS-1161586 (Tang) and OCI-0904166 (Schaeffer).
References
1. Angibaud, S., Fertin, G., Rusu, I., Thévenin, A., Vialette, S.: On the approximability of comparing genomes with duplicates. J. Graph Algorithms Appl. 13(1),
19–53 (2009)
2. Bader, D.A., Moret, B.M.E., Yan, M.: A linear-time algorithm for computing inversion distance between signed permutations with an experimental study. Journal
of Computational Biology 8, 483–491 (2001)
3. Bafna, V., Pevzner, P.A.: Sorting by transpositions. SIAM J. Discrete Math. 11(2),
224–240 (1998)
4. Bergeron, A., Mixtacki, J., Stoye, J.: On sorting by translocations. In: Journal of
Computational Biology. pp. 615–629. Springer (2005)
5. Blin, G., Chauve, C., Fertin, G.: The breakpoint distance for signed sequences.
In: Proc. CompBioNets 2004. vol. Text in Algorithms, Volume 3, pp. 3–16. King’s
College London (2004)
6. Bourque, G., Pevzner, P.A.: Genome-Scale Evolution: Reconstructing Gene Orders
in the Ancestral Species. Genome Res. 12(1), 26–36 (2002)
7. Braga, M.D.V., Willing, E., Stoye, J.: Genomic distance with dcj and indels. In:
Proceedings of the 10th international conference on Algorithms in bioinformatics.
pp. 90–101. WABI’10, Springer-Verlag, Berlin, Heidelberg (2010)
8. Brewer, C., Holloway, S., Zawalnyski, P., Schinzel, A., FitzPatrick, D.: A chromosomal duplication map of malformations: Regions of suspected haplo and triplolethality and tolerance of segmental aneuploidy in humans. The American Journal
of Human Genetics 64(6), 1702 – 1708 (1999)
9. Caprara, A.: The Reversal Median Problem. INFORMS Journal on Computing
15(1), 93–113 (2003)
10. Chauve, C., Fertin, G., Rizzi, R., Vialette, S.: Genomes containing duplicates are
hard to compare. In: Proc Int. Workshop on Bioinformatics Research and Applications (IWBRA). LNCS, vol. 3992, pp. 783–790. Springer-Verlag, Reading, UK
(2006)
11. Chen, X., Zheng, J., Fu, Z., Nan, P., Zhong, Y., Lonardi, S., Jiang, T.: Assignment of orthologous genes via genome rearrangement. IEEE/ACM Trans. Comput.
Biology Bioinform. 2(4), 302–315 (2005)
12. Chen, Z., Fu, B., Zhu, B.: Erratum: The approximability of the exemplar breakpoint distance problem. In: FAW-AAIM. p. 368 (2012)
13. Compeau, P.E.C.: A simplified view of dcj-indel distance. In: Proceedings of
the 12th international conference on Algorithms in Bioinformatics. pp. 365–377.
WABI’12, Springer-Verlag, Berlin, Heidelberg (2012)
14. Gao, N., Yang, N., Tang, J.: Ancestral genome inference using a genetic algorithm
approach. PLoS ONE 8(5) (2013)
15. Hu, F., Zhou, J., Zhou, L., Tang, J.: Probabilistic reconstruction of ancestral gene
orders with insertions and deletions. IEEE/ACM Trans. Comput. Biology Bioinform. 11(4), 667–672 (2014), http://doi.ieeecomputersociety.org/10.1109/
TCBB.2014.2309602
16. Lenne, R., Solnon, C., Stutzle, T., Tannier, E., Birattari, M.: Reactive Stochastic
Local Search Algorithms for the Genomic Median Problem. In: Carlos Cotta, J.v.H.
(ed.) Eighth European Conference on Evolutionary Computation in Combinatorial
Optimisation (EvoCOP). pp. 266–276. LNCS, Springer (Mar 2008)
17. Mabrouk, N.E.: Sorting Signed Permutations by Reversals and Insertions/Deletions of Contiguous Segments. Journal of Discrete Algorithms 1(1), 105–
122 (2001)
18. Moret, B.M.E., Tang, J., Wang, L.S., Warnow, T.: Steps toward accurate reconstructions of phylogenies from gene-order data. J. Comput. Syst. Sci. 65, 508–525
(2002)
19. Moret, B.M.E., Wang, L.S., Warnow, T., Wyman, S.K.: New approaches for reconstructing phylogenies from gene order data. In: ISMB (Supplement of Bioinformatics). pp. 165–173 (2001)
20. Nguyen, C.T., Tay, Y.C., Zhang, L.: Divide-and-conquer approach for the exemplar
breakpoint distance. Bioinformatics 21(10), 2171–2176 (May 2005)
21. Pe’er, I., Shamir, R.: The median problems for breakpoints are np-complete. Elec.
Colloq. on Comput. Complexity 71 (1998)
22. Rajan, V., Xu, A.W., Lin, Y., Swenson, K.M., Moret, B.M.E.: Heuristics for the
inversion median problem. BMC Bioinformatics 11(S-1), 30 (2010)
23. Sankoff, D.: Genome rearrangement with gene families. Bioinformatics 15(11), 909–
917 (1999)
24. Shao, M., Lin, Y.: Approximating the edit distance for genomes with duplicate
genes under dcj, insertion and deletion. BMC Bioinformatics 13(S-19), S13 (2012)
25. Shao, M., Lin, Y., Moret, B.M.E.: An exact algorithm to compute the dcj distance
for genomes with duplicate genes. In: RECOMB. pp. 280–292 (2014)
26. Xu, A.W.: Dcj median problems on linear multichromosomal genomes: Graph representation and fast exact solutions. In: RECOMB-CG. pp. 70–83 (2009)
27. Xu, A.W.: A fast and exact algorithm for the median of three problem: A graph decomposition approach. Journal of Computational Biology 16(10), 1369–1381 (2009)
28. Xu, A.W., Moret, B.M.E.: Gasts: Parsimony scoring under rearrangements. In:
WABI. pp. 351–363 (2011)
29. Xu, A.W., Sankoff, D.: Decompositions of multiple breakpoint graphs and rapid
exact solutions to the median problem. In: Proceedings of the 8th international
workshop on Algorithms in Bioinformatics. pp. 25–37. WABI ’08, Springer-Verlag,
Berlin, Heidelberg (2008)
30. Yancopoulos, S., Attie, O., Friedberg, R.: Efficient sorting of genomic permutations
by translocation, inversion and block interchange. Bioinformatics 21(16), 3340–
3346 (2005)
31. Yancopoulos, S., Friedberg, R.: Sorting genomes with insertions, deletions and
duplications by dcj. In: Nelson, C.E., Vialette, S. (eds.) RECOMB-CG. Lecture
Notes in Computer Science, vol. 5267, pp. 170–183. Springer (2008)
32. Yin, Z., Tang, J., Schaeffer, S.W., Bader, D.A.: Streaming breakpoint graph analytics for accelerating and parallelizing the computation of dcj median of three
genomes. In: ICCS. pp. 561–570 (2013)
arXiv:1708.02854v1 [math.ST] 9 Aug 2017
FUNCTIONAL ESTIMATION AND HYPOTHESIS TESTING
IN NONPARAMETRIC BOUNDARY MODELS
MARKUS REISS AND MARTIN WAHL
Abstract. Consider a Poisson point process with unknown support
boundary curve g, which forms a prototype of an irregular statistical
model. We address the problem of estimating non-linear functionals of
the form ∫ Φ(g(x)) dx. Following a nonparametric maximum-likelihood
approach, we construct an estimator which is UMVU over Hölder balls
and achieves the (local) minimax rate of convergence. These results hold
under weak assumptions on Φ which are satisfied if Φ(u) = |u|^p. As
an application, we consider the problem of estimating the L^p-norm and
derive the minimax separation rates in the corresponding nonparametric
hypothesis testing problem. Structural differences to results for regular
nonparametric models are discussed.
1. Introduction
Point processes serve as canonical models for dealing with support estimation. Poisson point processes (PPP) appear in the continuous limit of
nonparametric regression models with one-sided or irregular error variables,
cf. Meister and Reiß [13], and thus form counterparts of the Gaussian white
noise (GWN) model. In this paper we consider a PPP on [0, 1] × R with
intensity function

    λ_g(x, y) = n·1(y ≥ g(x)),    x ∈ [0, 1], y ∈ R,    (1.1)
where g is an unknown support boundary curve. In Korostelev and Tsybakov [8, Chapter 8] the problem of functional estimation for related image
boundary detection problems has been studied. The minimax rate of convergence for linear functionals is n^{−(β+1/2)/(β+1)} over the Hölder ball

    C^β(R) = { g : [0, 1] → R : |g(x) − g(y)| ≤ R|x − y|^β  for all x, y ∈ [0, 1] }

with β ∈ (0, 1] and radius R > 0. For the PPP model Reiß and Selk [15] build up a nonparametric maximum-likelihood approach and construct an unbiased estimator achieving this rate. Besides minimax optimality, their estimator has the striking property of being UMVU (uniformly of minimum variance among all unbiased estimators) over C^β(R).
2010 Mathematics Subject Classification. 62G05, 62G10, 62G32, 62M30.
Key words and phrases. Poisson point process, Support estimation, Non-linear functionals, Minimax hypothesis testing.
Financial support by the DFG via Research Unit 1735 Structural Inference in Statistics:
Adaptation and Efficiency is gratefully acknowledged.
Rate                 PPP                        GWN
estimate ‖g‖_p^p     n^{−(β+1/2)/(β+1)}         p = 2: n^{−4β/(4β+1)} ∨ n^{−1/2}
estimate ‖g‖_p       n^{−(β+1/(2p))/(β+1)}      p even: n^{−β/(2β+1−1/p)}
testing              n^{−(β+1/(2p))/(β+1)}      n^{−β/(2β+1/2+(1/2−1/p)_+)}

Table 1. Minimax rates in the Poisson point process (PPP) and Gaussian white noise (GWN) model.
We are interested in non-linear functionals of the form

    F(g) = ∫_0^1 Φ(g(x)) dx,    (1.2)

where Φ : R → R is a known weakly differentiable function with derivative Φ′ ∈ L^1_loc(R) (i.e. Φ(u) = Φ(0) + ∫_0^u Φ′(v) dv holds for all u ∈ R). We show that it is still possible to construct an unbiased estimator of F(g) which is UMVU over C^β(R). Moreover, under weak assumptions on Φ′, the estimator achieves the local minimax rate of convergence ‖Φ′ ∘ g‖_2 n^{−(β+1/2)/(β+1)}. An important class of functionals of the form (1.2) is given by Φ(u) = |u|^p, p ≥ 1.
Based on these results we consider the testing problem H_0 : g = g_0 versus H_1 : g ∈ {g_0 + h ∈ C^β(R) : ‖h‖_p ≥ r_n}, where the nonparametric alternative is separated by a ball of radius r_n > 0 in L^p-norm ‖·‖_p. We show that the minimax separation rate is n^{−(β+1/(2p))/(β+1)} and that this rate can be achieved by a plug-in test, using a minimax optimal estimator of the L^p-norm of g. In particular, the minimax rates of testing and estimation coincide, and they are located strictly between the parametric rate 1/n and the rate n^{−β/(β+1)} corresponding to the problem of estimating the function g itself (see e.g. Jirak, Meister and Reiß [5] and the references therein).
These fundamental questions have been studied extensively in the mean regression and Gaussian white noise (GWN) model (in the sequel, we consider the noise level 1/√n for comparison). Significant differences appear. Consider, for instance, the case Φ(u) = |u|^p with p ∈ N. For p even and β large enough, the smooth functional (1.2) can be estimated with the parametric rate of convergence n^{−1/2}, using the method from Ibragimov, Nemirovski and Khasminski [3] (see Table 1 for the case p = 2 and the monograph by Nemirovski [14] for more general functionals). Estimation of the L^p-norm has been considered by Lepski, Nemirovski and Spokoiny [11]. For p even, the optimal rate of convergence is n^{−β/(2β+1−1/p)}, while for p odd, the standard nonparametric rate n^{−β/(2β+1)} can only be improved by log n factors. The first two rows of Table 1 compare these GWN estimation rates with the PPP rates. A structural difference is that for vanishing regularity β ↓ 0 the GWN exponents tend to zero such that the convergence rates become arbitrarily slow, while in the PPP case the rates always remain faster than n^{−1/2} and n^{−1/(2p)}, respectively. This phenomenon will be further explained at the beginning of Section 2. More generally, the PPP rates hold universally for all 1 ≤ p < ∞, while the GWN rates depend on p in a very delicate way, showing that L^p-norm estimation is to some extent a regular estimation problem in the otherwise rather irregular PPP statistical model.

Figure 1. Testing rate exponents for the Poisson point process (PPP) and Gaussian white noise (GWN) model as a function of the regularity β.
Further differences arise in the testing problem, which for the GWN model is studied in the monograph by Ingster and Suslina [4]. There the testing problem H_0 : g = 0 versus H_1 : g ∈ {h ∈ L²([0, 1]) : ‖h‖_p ≥ r_n and ‖h‖_{β,q} ≤ R} is considered, where ‖·‖_{β,q} is either a Sobolev or a Besov norm in which smoothness is measured in L^q-norm. For instance, in the case 1 ≤ p ≤ 2 and q = ∞, the minimax separation rate is n^{−2β/(4β+1)}, which coincides with the minimax rate for estimating the L^p-norm if p = 2 but not if p = 1. The general minimax GWN separation rates for the case q ≥ p are given in the last row of Table 1 (for the cases 1 ≤ p ≤ 2, p ≤ q ≤ ∞ and 2 < p = q < ∞); results for the case q < p can be found in Lepski and Spokoiny [12]. Figure 1 visualises the differences between the GWN and the PPP case by plotting the separation rate exponents for the range of p ∈ [1, ∞) as a function of the regularity β. In the GWN model the rates become arbitrarily slow when β approaches zero and they do not change for p ∈ [1, 2] (elbow effect), which is not the case in the PPP model. The absence of an elbow effect in the PPP model may be explained by a different Hellinger geometry: the Hellinger distance is given by an L¹-distance between the curves, while it is based on the L²-distance in the GWN model.
The paper is organised as follows. In Section 2 we construct the estimator, compute its mean and variance using the underlying point process geometry and martingale arguments, and derive the (local) minimax rates of convergence. In Sections 3 and 4 we focus on the special case where Φ(u) = |u|^p and apply our results to the problem of estimating the L^p-norm and to the above hypothesis testing problem.
2. Estimation of non-linear functionals

2.1. The estimator. Let (X_j, Y_j)_{j≥1} be the observed support points of a Poisson point process on [0, 1] × R with intensity function given by (1.1). The support boundary curve g is supposed to lie in the Hölder ball C^β(R) with β ∈ (0, 1]. The aim is to estimate the functional in (1.2). Similarly to [15], our estimation method can be motivated as follows. Suppose that we know a deterministic function ḡ ∈ C^β(R) with ḡ(x) ≥ g(x) for all x ∈ [0, 1]. Then the sum

    (1/n) Σ_{j≥1} Φ′(Y_j) 1( ḡ(X_j) ≥ Y_j )    (2.1)

is a.s. finite, has expectation equal to

    (1/n) ∫_0^1 ∫_R Φ′(y) 1( ḡ(x) ≥ y ) λ_g(x, y) dy dx = ∫_0^1 ( Φ(ḡ(x)) − Φ(g(x)) ) dx

and variance equal to

    (1/n²) ∫_0^1 ∫_R Φ′(y)² 1( ḡ(x) ≥ y ) λ_g(x, y) dy dx
        = (1/n) ∫_0^1 ∫_R Φ′(y)² 1( ḡ(x) ≥ y ≥ g(x) ) dy dx,    (2.2)

provided the last integral is finite (see e.g. [9, Lemma 1.1] or [10, Theorem 4.4]). Thus,

    F̂^pseudo = ∫_0^1 Φ(ḡ(x)) dx − (1/n) Σ_{j≥1} Φ′(Y_j) 1( ḡ(X_j) ≥ Y_j )

forms an unbiased pseudo-estimator (relying on the knowledge of ḡ) of F(g) whose variance is given by (2.2). The closer ḡ is to g, the smaller the variance. Concerning the rate results for L^p-norms in Table 1, note that already the very minor knowledge of some upper bound of g suffices to construct an estimator with convergence rate n^{−1/2}, which explains why in the PPP case estimation and testing rates remain consistent even for β ↓ 0.

The main idea is now to find a data-driven upper bound of g which is as small as possible. A solution to this problem is given by

    ĝ^MLE(x) = min_{k≥1} ( Y_k + R|x − X_k|^β ),    x ∈ [0, 1],

which is the maximum-likelihood estimator over C^β(R) [15, Section 3]. Then the sum

    (1/n) Σ_{j≥1} Φ′(Y_j) 1( ĝ^MLE(X_j) ≥ Y_j )

is a.s. finite and satisfies

    E[ (1/n) Σ_{j≥1} Φ′(Y_j) 1( ĝ^MLE(X_j) ≥ Y_j ) ]
        = (1/n) ∫_0^1 ∫_R Φ′(y) E[ 1( ĝ^MLE(x) ≥ y ) ] λ_g(x, y) dy dx
        = ∫_0^1 E[ Φ(ĝ^MLE(x)) ] dx − ∫_0^1 Φ(g(x)) dx,

provided that the integral in the second line is well-defined. For the first equality observe that

    1( ĝ^MLE(X_j) ≥ Y_j ) = 1( min_{k≥1, k≠j} ( Y_k + R|X_j − X_k|^β ) ≥ Y_j ),

where the term k = j can be dropped. This implies that the observation (X_j, Y_j) can be integrated out, by following the usual arguments for computing sums with respect to a Poisson process (see e.g. [10, Theorem 4.4]). To summarise, we propose the following estimator

    F̂^MLE = ∫_0^1 Φ(ĝ^MLE(x)) dx − (1/n) Σ_{j≥1} Φ′(Y_j) 1( ĝ^MLE(X_j) ≥ Y_j ),    (2.3)

which is indeed an unbiased estimator of F(g) under the appropriate integrability condition.

2.1. Proposition. Suppose that

    ∫_0^1 ∫_0^∞ |Φ′(g(x) + u)| P( ĝ^MLE(x) − g(x) ≥ u ) du dx < ∞.    (2.4)

Then F̂^MLE from (2.3) is an unbiased estimator of F(g).

The above argument also works for more general functionals of the form ∫_0^1 ⋯ ∫_0^1 Φ(x_1, g(x_1), . . . , x_m, g(x_m)) dx_1 . . . dx_m, but then involves complex expressions in mixed partial derivatives of Φ. We therefore focus on estimation of the basic functional F.
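To make the construction concrete, the following minimal Python sketch (ours, not part of the paper) simulates the PPP model (1.1) on a vertically truncated strip, computes the boundary estimator ĝ^MLE and evaluates the plug-in estimator of (2.8) below for Φ(u) = |u|^p. The truncation level y_max, the grid size and all function names are assumptions made for illustration; the truncation is harmless as long as ĝ^MLE stays below y_max, because points far above the boundary never enter the correction sum.

    import numpy as np

    rng = np.random.default_rng(0)

    def estimate_Fp(g, n, R, beta, p, y_max=3.0, grid=2000):
        # Simulate a PPP with intensity n*1{y >= g(x)} by thinning a homogeneous
        # PPP of intensity n on [0, 1] x [y_min, y_max].
        xs = np.linspace(0.0, 1.0, grid)
        y_min = g(xs).min()
        N = rng.poisson(n * (y_max - y_min))
        X = rng.uniform(0.0, 1.0, N)
        Y = rng.uniform(y_min, y_max, N)
        keep = Y >= g(X)
        X, Y = X[keep], Y[keep]

        # Boundary MLE: ghat(x) = min_k ( Y_k + R |x - X_k|^beta ).
        ghat = np.min(Y[None, :] + R * np.abs(xs[:, None] - X[None, :]) ** beta, axis=1)
        ghat_at_X = np.min(Y[None, :] + R * np.abs(X[:, None] - X[None, :]) ** beta, axis=1)

        # Plug-in estimator of ||g||_p^p, mirroring (2.8).
        correction = (p * np.abs(Y) ** (p - 1) * np.sign(Y) * (ghat_at_X >= Y)).sum() / n
        return np.trapz(np.abs(ghat) ** p, xs) - correction

    # Toy example with a smooth boundary lying in C^beta(R).
    print(estimate_Fp(lambda x: 0.5 + 0.3 * np.sin(2 * np.pi * x), n=500, R=2.0, beta=1.0, p=2))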
2.2. The martingale approach. In this section, we present a martingale-based analysis of the estimator F̂^MLE in (2.3). The following result extends [15, Theorem 3.2] to non-linear functionals.

2.2. Theorem. Suppose that the right-hand side in (2.5) below is finite. Then the estimator F̂^MLE is UMVU over g ∈ C^β(R) with variance

    Var(F̂^MLE) = (1/n) ∫_0^1 ∫_0^∞ ( Φ′(g(x) + u) )² P( ĝ^MLE(x) − g(x) ≥ u ) du dx.    (2.5)

2.3. Remark. If the right-hand side in (2.5) is finite, then Condition (2.4) holds since P( ĝ^MLE(x) − g(x) ≥ u ) is integrable in u, see also (2.7) below.
Proof. We first show the formula for the variance. Let λ = (λ_t) be the process defined by λ_t = n ∫_0^1 1( g(x) ≤ t ≤ ĝ^MLE(x) ) dx, t ∈ R. Making a linear change of variables, the right-hand side in (2.5) can be written as

    n^{−2} E[ ∫_{t_0}^∞ Φ′(s)² λ_s ds ],

where t_0 is a lower bound for g. In the proof of Theorem 3.2 in [15], it is shown that the pure counting process N = (N_t) defined by

    N_t = Σ_{j≥1} 1( Y_j ≤ t ∧ ĝ^MLE(X_j) ),    t ≥ t_0,

has compensator A = (A_t) given by A_t = ∫_{t_0}^t λ_s ds and that M = N − A is a square-integrable martingale with respect to the filtration F_t = σ( (X_j, Y_j) 1(Y_j ≤ t), j ≥ 1 ). Its predictable quadratic variation is

    ⟨M⟩_t = ∫_{t_0}^t λ_s ds

(see also [7, Proposition 2.32]). We conclude that (e.g. by [6, Theorem 26.2])

    (Φ′ · M)_t = ∫_{t_0}^t Φ′(s) dM_s = Σ_{j≥1} Φ′(Y_j) 1( Y_j ≤ t ∧ ĝ^MLE(X_j) ) − ∫_{t_0}^t Φ′(s) λ_s ds

is an L²-bounded martingale with

    ⟨Φ′ · M⟩_t = ∫_{t_0}^t Φ′(s)² λ_s ds,

noting that E[⟨Φ′ · M⟩_t] is bounded by the right-hand side in (2.5), which is finite by assumption. For t → ∞ the process ((Φ′ · M)_t) converges almost surely to

    (Φ′ · M)_∞ = Σ_{j≥1} Φ′(Y_j) 1( Y_j ≤ ĝ^MLE(X_j) ) − ∫_{t_0}^∞ Φ′(s) λ_s ds
        = Σ_{j≥1} Φ′(Y_j) 1( Y_j ≤ ĝ^MLE(X_j) ) − n ∫_0^1 Φ(ĝ^MLE(x)) dx + n ∫_0^1 Φ(g(x)) dx.

Moreover, the process (⟨Φ′ · M⟩_t) converges almost surely and in L¹ to

    ⟨Φ′ · M⟩_∞ = ∫_{t_0}^∞ Φ′(s)² λ_s ds.

Hence, unbiasedness and (2.5) follow from

    E[ (Φ′ · M)_∞ ] = 0    and    E[ (Φ′ · M)_∞² − ⟨Φ′ · M⟩_∞ ] = 0,    (2.6)

which holds due to the L²-convergence of Φ′ · M [6, Corollary 6.22].

Finally, the fact that F̂^MLE is UMVU follows from the Lehmann-Scheffé theorem and [15, Proposition 3.1], which says that (ĝ^MLE(x) : x ∈ [0, 1]) is a sufficient and complete statistic for C^β(R).
2.3. Rates of convergence. In this section, we derive convergence rates for the estimator F̂^MLE. By [15, Equation (3.3)], we have the following deviation inequality for x ∈ [0, 1]:

    P( ĝ^MLE(x) − g(x) ≥ u ) ≤  exp( −n(2R)^{−1/β} u^{(β+1)/β} / (β+1) )   if u ∈ [0, 2R],
                                exp( −n( u − 2R/(β+1) ) )                  if u > 2R.       (2.7)

Thus, the right-hand side in (2.5) is finite if (Φ′)² has at most exponential growth with parameter strictly smaller than n. In particular, this holds for Φ(u) = |u|^p, p ≥ 1, in which case we have Φ′(u) = p|u|^{p−1} sgn(u). A more detailed analysis gives:
2.4. Corollary. Let p ≥ 1 be a real number and consider Φ(u) = |u|^p, g ∈ C^β(R). Then

    F̂_p^MLE = ∫_0^1 |ĝ^MLE(x)|^p dx − (1/n) Σ_{j≥1} p |Y_j|^{p−1} sgn(Y_j) 1( ĝ^MLE(X_j) ≥ Y_j )    (2.8)

is an unbiased estimator of ‖g‖_p^p with

    E[ (F̂_p^MLE − ‖g‖_p^p)² ] ≤ C max( ‖g‖_{2p−2}^{2p−2} n^{−(2β+1)/(β+1)}, n^{−(2βp+1)/(β+1)} ),    (2.9)

where C is a constant depending only on R, β and p. Here we use the notation ‖·‖_q also for q < 1, with ‖g‖_0^0 := 1.
2.5. Remark. In the proof, a more precise upper bound is derived in which
the dependence on the constant R is explicit, see (2.11). For an asymptotically more precise result see Corollary 2.8 below.
2.6. Remark. Since Φ(u) = |u|^p is non-negative, the positive part (F̂_p^MLE)_+ of F̂_p^MLE always improves the estimator. This means that F̂_p^MLE is not an admissible estimator in the decision-theoretic sense, while (F̂_p^MLE)_+ on the other hand is no longer unbiased.
Proof. Throughout the proof C > 0 denotes a constant depending only on β and p that may change from line to line. By Theorem 2.2 and the discussion above, we have

    E[ (F̂_p^MLE − ‖g‖_p^p)² ] = (1/n) ∫_0^1 ∫_0^∞ p² |u + g(x)|^{2p−2} P( ĝ^MLE(x) − g(x) ≥ u ) du dx.

Applying (2.7) and the inequality |u + g(x)|^{2p−2} ≤ 2^{2p−2} ( u^{2p−2} + |g(x)|^{2p−2} ), the last term is bounded by

    (p² 2^{2p−2}/n) ∫_0^{2R} ( ‖g‖_{2p−2}^{2p−2} + u^{2p−2} ) exp( −n(2R)^{−1/β} u^{(β+1)/β} / (β+1) ) du
      + (p² 2^{2p−2}/n) ∫_{2R}^∞ ( ‖g‖_{2p−2}^{2p−2} + u^{2p−2} ) exp( −n( u − 2R/(β+1) ) ) du
    =: (I) + (II).

By a linear substitution, we have for q ≥ 0

    ∫_0^{2R} u^q exp( −n(2R)^{−1/β} u^{(β+1)/β} / (β+1) ) du
        ≤ (β+1)^{β(q+1)/(β+1)} (2R)^{(q+1)/(β+1)} n^{−β(q+1)/(β+1)} ∫_0^∞ v^q exp( −v^{(β+1)/β} ) dv
        = β (β+1)^{(βq−1)/(β+1)} (2R)^{(q+1)/(β+1)} Γ( β(q+1)/(β+1) ) n^{−β(q+1)/(β+1)}.    (2.10)

Consequently,

    (I) ≤ C R^{1/(β+1)} ‖g‖_{2p−2}^{2p−2} n^{−(2β+1)/(β+1)} + C R^{(2p−1)/(β+1)} n^{−(2βp+1)/(β+1)}.

Next, consider the remainder term (II). We have

    ∫_{2R}^∞ exp( −n( u − 2R/(β+1) ) ) du = n^{−1} e^{−2βRn/(β+1)}

and

    ∫_{2R}^∞ u^{2p−2} exp( −n( u − 2R/(β+1) ) ) du ≤ ∫_{2R}^∞ u^{2p−2} exp( −nβu/(β+1) ) du
        ≤ ( (β+1)/(nβ) )^{2p−1} ∫_{2βRn/(β+1)}^∞ v^{2p−2} exp(−v) dv ≤ C n^{−2p+1} e^{−βRn/(β+1)}.

Note that the last integral can be computed using partial integration. Thus

    (II) ≤ C ‖g‖_{2p−2}^{2p−2} n^{−2} e^{−2βRn/(β+1)} + C n^{−2p} e^{−βRn/(β+1)}.

Summarising, we have

    E[ (F̂^MLE − ‖g‖_p^p)² ] ≤ C R^{1/(β+1)} ‖g‖_{2p−2}^{2p−2} n^{−(2β+1)/(β+1)} + C R^{(2p−1)/(β+1)} n^{−(2βp+1)/(β+1)}
        + C ‖g‖_{2p−2}^{2p−2} n^{−2} e^{−2βRn/(β+1)} + C n^{−2p} e^{−βRn/(β+1)},    (2.11)

and the claim follows.
One might wonder whether F̂_p^MLE achieves the rate n^{−(β+1/2)/(β+1)} uniformly over g ∈ C^β(R) ∩ B_p(R) with the L^p-ball B_p(R) = { g ∈ L^p([0, 1]) : ‖g‖_p ≤ R }. For 1 ≤ p ≤ 2 this follows from the inclusion B_p(R) ⊆ B_{2p−2}(R). For p > 2 this holds as well and is a consequence of the following useful lemma (applied with q = 2p − 2), which provides a simple interpolation result. Results of this type are well known (cf. Bergh and Löfström [1]), but since only Hölder semi-norms appear, we provide a self-contained proof in the appendix.

2.7. Lemma. Let 1 ≤ p ≤ q ≤ ∞ and f ∈ C^β(R). Then we have

    ‖f‖_q ≤ C ‖f‖_p max( 1, R/‖f‖_p )^{(1/p − 1/q)/(β + 1/p)},

where C > 0 is a constant depending only on β, p and q and the right-hand side is understood to be zero for f = 0.
Let us come to another corollary of Theorem 2.2 which provides an asymptotic local minimax result under weak assumptions on the functional:

2.8. Corollary. Suppose that there is a constant C > 0 such that |Φ′(u)| ≤ C exp(C|u|) for all u ∈ R. Let f ∈ C^β(R). Suppose that ‖Φ′ ∘ f‖_2 ≠ 0 and that the map F′ : C^β(R) ⊆ L²([0, 1]) → L²([0, 1]), F′(g) = Φ′ ∘ g, is continuous at g = f with respect to the L²-norms. Then the estimator F̂_n^MLE = F̂^MLE satisfies the local asymptotic upper bound

    lim_{δ→0} limsup_{n→∞} sup_{g ∈ C^β(R): ‖f−g‖_2 ≤ δ} n^{(2β+1)/(β+1)} E_g[ (F̂_n^MLE − F(g))² ] ≤ βΓ( β/(β+1) ) ( 2R/(β+1) )^{1/(β+1)} ‖Φ′ ∘ f‖_2².

2.9. Remark. By Lemma 2.7, continuity of F′ on C^β(R) with respect to the L²-norm implies continuity with respect to the supremum norm. Under the assumptions of Corollary 2.8, one can indeed show that the functional F is Fréchet-differentiable in f along C^β(R) with derivative F′(f) = Φ′ ∘ f.
Proof. By Theorem 2.2 and Equation (2.7), we have

    E_g[ (F̂_n^MLE − F(g))² ] ≤ (1/n) ∫_0^{2R} ‖Φ′ ∘ (u + g)‖_2² exp( −n(2R)^{−1/β} u^{(β+1)/β} / (β+1) ) du
        + (1/n) ∫_{2R}^∞ ‖Φ′ ∘ (u + g)‖_2² exp( −nβu/(β+1) ) du.

By Lemma 2.7, applied to f − g and with p = 2, q = ∞, we infer from g ∈ C^β(R) with ‖f − g‖_2 ≤ δ that

    ‖f − g‖_∞ ≤ C′ R^{1/(2β+1)} δ^{2β/(2β+1)}    (2.12)

holds with some constant C′, provided that δ ≤ 1/R. Using that Φ′ has at most exponential growth, we get that ‖Φ′ ∘ (u + g)‖_2² ≤ C exp(C|u|) uniformly over all g ∈ C^β(R) with ‖f − g‖_2 ≤ δ (adjusting C appropriately). This shows that the second integral is of order n^{−1} and thus asymptotically negligible for our result. For every fixed δ′ > 0 the first integral from δ′ to 2R is exponentially small in n. Thus, for any δ′ > 0 the left-hand side in Corollary 2.8 is bounded by

    lim_{δ→0} limsup_{n→∞} sup_{g ∈ C^β(R): ‖f−g‖_2 ≤ δ} n^{β/(β+1)} ∫_0^{δ′} ‖Φ′ ∘ (u + g)‖_2² exp( −n(2R)^{−1/β} u^{(β+1)/β} / (β+1) ) du.    (2.13)

By the continuity of F′ at f and the fact that ‖Φ′ ∘ f‖_2 ≠ 0, for every ε > 0 there exist δ, δ′ > 0 such that ‖Φ′ ∘ (u + g)‖_2 ≤ (1 + ε) ‖Φ′ ∘ f‖_2 for all |u| ≤ δ′ and g ∈ C^β(R) with ‖f − g‖_2 ≤ δ. We conclude that (2.13) is bounded by (using the computation in (2.10) for q = 0)

    βΓ( β/(β+1) ) ( 2R/(β+1) )^{1/(β+1)} ‖Φ′ ∘ f‖_2²,

and the claim follows.
2.4. Lower bounds. In this section we establish lower bounds corresponding to Corollaries 2.4 and 2.8. We will apply the method of two fuzzy
hypotheses (see [16, Chapter 2.7.4]) with a prior corresponding to independent non-identical Bernoulli random variables. Our main result states a
local asymptotic lower bound in the case that Φ is continuously differentiable. Possible extensions are discussed afterwards.
2.10. Theorem. Let Φ be continuously differentiable and f ∈ C^β(R) with ‖Φ′ ∘ f‖_2 ≠ 0. Then there is a constant c_1 > 0, depending only on β, such that

    lim_{δ→0} liminf_{n→∞} inf_{F̂} sup_{g ∈ C^β(R): ‖f−g‖_2 ≤ δ} n^{(2β+1)/(β+1)} E_g[ (F̂ − F(g))² ] ≥ c_1 R^{1/(β+1)} ‖Φ′ ∘ f‖_2².

The infimum is taken over all estimators in the PPP model with intensity (1.1).
Proof. We want to apply the χ²-version of the method of two fuzzy hypotheses as described in [16, Theorem 2.15]. Consider the functions

    g_θ = Σ_{k=1}^m θ_k g_k    with θ_k ∈ {0, 1}

and

    g_k(x) = cR h^β K( (x − (k−1)h)/h ) = cR h^{β+1} K_h( x − (k−1)h )

with h = 1/m, triangular kernel K(u) = 4(u ∧ (1 − u)) 1_{[0,1]}(u), K_h(·) = K(·/h)/h and c > 0 sufficiently small such that g_θ ∈ C^β(R) for all m and θ. Let π_n be the probability measure on {0, 1}^m obtained when θ_1, . . . , θ_m are independent (non-identical) Bernoulli random variables with success probabilities p_1, . . . , p_m. Let P_g denote the law of the observations in the PPP model with intensity function (1.1). We set P_{0,n} = P_f and

    P_{1,n}(·) = ∫ P_{f+g_θ}(·) π_n(dθ).

In order to obtain the result, it suffices to find m ≥ 1 and probabilities p_1, . . . , p_m (both depending on n) as well as a constant c_1 > 0, only depending on β, and an absolute constant c_2 < ∞, such that

    (i) For n → ∞ the prior satisfies π_n( ‖g_θ‖_2 ≤ δ ) → 1 and
        π_n( F(f + g_θ) ≥ F(f) + 2 c_1^{1/2} ‖Φ′ ∘ f‖_2 R^{1/(2(β+1))} n^{−(β+1/2)/(β+1)} ) → 1;
    (ii) limsup_{n→∞} χ²(P_{1,n}, P_{0,n}) ≤ c_2.

We start with the following lemma on the χ²-distance.
2.11. Lemma. Suppose that the success probabilities satisfy Σ_{k=1}^m p_k² = 1. Then

    χ²(P_{1,n}, P_{0,n}) = ∫ ( dP_{1,n}/dP_{0,n} )² dP_{0,n} − 1 ≤ exp( exp( n ∫_{I_1} g_1(x) dx ) − 1 ) − 1

holds, where I_1 = [0, h).

Proof of Lemma 2.11. We abbreviate ∫ g_k = ∫_{I_k} g_k(x) dx, where I_k = [(k−1)h, kh). Let us first see that

    dP_{1,n}/dP_{0,n} = Π_{k=1}^m ( 1 − p_k + p_k e^{n ∫ g_k} 1( ∀ X_j ∈ I_k : Y_j ≥ f(X_j) + g_k(X_j) ) ).    (2.14)

Indeed, by definition the left-hand side is equal to

    Σ_{θ ∈ {0,1}^m} Π_{k: θ_k=0} (1 − p_k) Π_{k: θ_k=1} p_k · dP_{f+g_θ}/dP_f
        = Σ_{θ ∈ {0,1}^m} Π_{k: θ_k=0} (1 − p_k) Π_{k: θ_k=1} p_k e^{n ∫ g_k} 1( ∀ X_j ∈ I_k : Y_j ≥ f(X_j) + g_k(X_j) )
        = Π_{k=1}^m ( 1 − p_k + p_k e^{n ∫ g_k} 1( ∀ X_j ∈ I_k : Y_j ≥ f(X_j) + g_k(X_j) ) ),

where we used the formula (see [9, Theorem 1.3] or [15, Section 3])

    dP_{f+g_θ}/dP_f = e^{n ∫ g_θ} 1( ∀ j : Y_j ≥ f(X_j) + g_θ(X_j) )
        = Π_{k: θ_k=1} e^{n ∫ g_k} 1( ∀ X_j ∈ I_k : Y_j ≥ f(X_j) + g_k(X_j) )

in the first equality. By the defining properties of the PPP, under P_{0,n}, the right-hand side in (2.14) is a product of independent random variables and the corresponding indicators have success probabilities e^{−n ∫ g_k}. Thus we obtain

    ∫ ( dP_{1,n}/dP_{0,n} )² dP_{0,n} = Π_{k=1}^m ( (1 − p_k)² + 2 p_k (1 − p_k) + p_k² e^{n ∫ g_k} )
        = Π_{k=1}^m ( 1 + p_k² ( e^{n ∫ g_k} − 1 ) )
        ≤ Π_{k=1}^m e^{p_k² ( e^{n ∫ g_1} − 1 )} = e^{e^{n ∫ g_1} − 1},

where we used the bound 1 + x ≤ e^x and the assumption Σ_{k=1}^m p_k² = 1.

Using Lemma 2.11 and the identity

    n ∫_{I_1} g_1(x) dx = cRn h^{β+1},

we get (ii) provided that we choose m = 1/h of size (Rn)^{1/(β+1)} and p_1, . . . , p_m such that Σ_{k=1}^m p_k² = 1. Thus it remains to choose the p_k such that the second convergence in (i) is satisfied.
We first consider the case that Φ′ ∘ f ≥ 0. Let ε > 0 be a small constant to be chosen later. Since Φ′ is uniformly continuous on compact intervals, there is a δ > 0 such that

    ∫_0^1 Φ(f(x) + g(x)) dx − ∫_0^1 Φ(f(x)) dx ≥ ∫_0^1 Φ′(f(x)) g(x) dx − ε ∫_0^1 |g(x)| dx

for all g ∈ C^β(R) with ‖f − g‖_2 ≤ δ (using (2.12) above). Thus, for n sufficiently large, we get

    F(f + g_θ) − F(f) ≥ ⟨Φ′ ∘ f, g_θ⟩ − ε ⟨1, g_θ⟩
        = Σ_{k=1}^m θ_k ⟨Φ′ ∘ f, g_k⟩ − ε Σ_{k=1}^m θ_k ⟨1, g_k⟩
        = cR h^{β+1} ( Σ_{k=1}^m θ_k ⟨Φ′ ∘ f, K_h(· − (k−1)h)⟩ − ε Σ_{k=1}^m θ_k ).

Setting a_k = ⟨Φ′ ∘ f, K_h(· − (k−1)h)⟩, this can be written as

    F(f + g_θ) − F(f) ≥ cR h^{β+1} ( Σ_{k=1}^m a_k θ_k − ε Σ_{k=1}^m θ_k ).    (2.15)

The first sum is a weighted sum of independent non-identical Bernoulli random variables and the maximising choice for the success probabilities is

    p_k = a_k / ‖a‖_2    (2.16)

(the a_k satisfy a_k ≥ 0 since we assumed Φ′ ∘ f ≥ 0). By the mean value theorem and the fact that Φ′ ∘ f is continuous, we get a_k = Φ′(f(x_k)) with x_k ∈ [(k−1)h, kh] and also

    (1/m) ‖a‖_q^q = (1/m) Σ_{k=1}^m a_k^q → ∫_0^1 ( Φ′(f(x)) )^q dx = ‖Φ′ ∘ f‖_q^q    as n → ∞    (2.17)

for each q ≥ 1. Using the Chebyshev inequality we get

    π_n( Σ_{k=1}^m a_k θ_k < ‖a‖_2 / 2 ) = π_n( Σ_{k=1}^m a_k (θ_k − p_k) < −‖a‖_2 / 2 )
        ≤ 4 Σ_{k=1}^m a_k² p_k (1 − p_k) / ‖a‖_2² ≤ 4 ‖a‖_3³ / ‖a‖_2³

and the latter converges to 0 as n → ∞ by (2.17). Similarly,

    π_n( Σ_{k=1}^m θ_k > 2 ‖a‖_1 / ‖a‖_2 ) = π_n( Σ_{k=1}^m (θ_k − p_k) > ‖a‖_1 / ‖a‖_2 )
        ≤ ‖a‖_2² Σ_{k=1}^m p_k (1 − p_k) / ‖a‖_1² ≤ ‖a‖_2 / ‖a‖_1

and the latter converges to 0 as n → ∞ by (2.17). Combining these two bounds with (2.15) we get

    π_n( F(f + g_θ) − F(f) ≥ cR h^{β+1/2} ( ‖a‖_2 / (2√m) − 2ε ‖a‖_1 / (√m ‖a‖_2) ) ) → 1    (2.18)

as n → ∞. This implies (i) if ε is chosen small enough, since ‖a‖_2/√m and ‖a‖_1/(√m ‖a‖_2) have non-zero limits by (2.17) and the assumption ‖Φ′ ∘ f‖_2 ≠ 0. This completes the proof in the case Φ′ ∘ f ≥ 0.

If Φ′ ∘ f ≤ 0, then we may follow the same line of arguments where (ii) is replaced with a left-deviation inequality (which corresponds to applying the above arguments to the functional F_{−Φ}). Next, if Φ′ ∘ f takes both positive and negative values, then we may choose p_k = a_{k,+}/‖a_+‖_2 (resp. p_k = a_{k,−}/‖a_−‖_2), leading to a lower bound with ‖Φ′ ∘ f‖_2² replaced by ‖(Φ′ ∘ f)_+‖_2² (resp. ‖(Φ′ ∘ f)_−‖_2²). Summing up both lower bounds gives the claim in the general case.
F (f + gθ ) − F (f ) > hΦ0 ◦ f, gθ i = cRhβ+1
m
X
a k θk ,
k=1
leading to a shortening of the above proof. In this case the lower bound also
holds without continuity of Φ0 . The arguments, however, must be adapted
slightly since the convergence in (2.17) may not hold in this case.
2.13. Remark. By making the constants in the proof of Proposition 2.10
explicit, one can also establish non-asymptotic lower bounds which include
lower-order terms. Consider for instance Φ(u) = |u|p , p ∈ N and f ≡ a > 0.
Then we have
X
X
p
m
p p−j j j βj+1
p
F (a + gθ ) − a =
θk
a c R h
kKkjj
j
j=1
k=1
X
m
>
θk max(pap−1 cRhβ+1 , cp Rp kKkpp hβp+1 ). (2.19)
k=1
We choose
√
p1 = · · · = pm = 1/ m and m = b2(cRn)1/(β+1) c.
In order to ensure kgθ k2 6 δ, it suffices that m > 1 and 2cRhβ 6 δ hold,
which is satisfied if n > c1 with c1 depending only on c, R and δ. Now,
by Lemma 2.11 and the choice of m we have χ2 (P0,n , P1,n ) 6 ee−1 − 1.
Moreover, using the simplification of Remark 2.12, (2.18) becomes
√
1
πn F (a+gθ ) > ap + max(pap−1 cRhβ+1/2 , cp Rp kKkpp hβp+1/2 ) > 1−4/ m.
2
Inserting the value of h and applying [16, Theorem 2.15 (iii)], we get
1/2
β+1/2
p−1/2
βp+1/2
−
−
inf sup Pg |F̂ − F (g)| > max(c2 pap−1 R β+1 n β+1 , c3 R β+1 n β+1 )
F̂ g∈C β (R):
ka−gk2 6δ
√
1
exp(−(ee−1 − 1)) − 2/ m,
4
provided that n > c1 , where c2 is a constant depending only on R and β
and c3 is a constant depending only on R, β and p. Thus we obtain a lower
bound which has the form of the upper bound in Corollary 2.4 (resp. (2.11)).
>
2.14. Remark. In the case of linear functionals the above proof can be used
to obtain the lower bound in [15, Theorem 2.6]. Instead of using the method
of fuzzy hypothesis, one can also try to apply the method used in Reiß and
Selk [15] and Korostelev and Tsybakov [8] which is based on a comparison
of the minimax risk with a Bayesian risk. This works for instance for the
special case Φ(s) = |s|p , p ∈ N, and f ≡ a > 0, but it is not clear whether
this structurally different prior can produce the correct lower bounds more
generally.
3. Hypothesis testing

3.1. Main result. In this section we use the previous results to address the hypothesis testing problem

    H_0 : g = g_0    vs.    H_1 : g ∈ g_0 + G_n,

where g_0 is a known function and

    G_n = G_n(β, R, p, r_n) = { g ∈ C^β(R) : ‖g‖_p ≥ r_n }.

In the sequel, we restrict to the case g_0 = 0, since the general case can be reduced to this one by a simple shift of the observations. We propose the following plug-in test

    ψ_{n,p} = 1( F̂_p^MLE ≥ r_n^p / 2 ),    (3.1)

with the estimator F̂_p^MLE from (2.8). We follow a minimax approach to hypothesis testing, see e.g. [4, Chapter 2.4]. Our main result of this section states that ψ_{n,p} achieves the minimax separation rates:

3.1. Theorem. Let p ≥ 1 be a real number and r_n* = n^{−(β+1/(2p))/(β+1)}. Then the following holds as n → ∞:

    (a) If r_n/r_n* → ∞, then the tests ψ_{n,p} from (3.1) satisfy
        E_0[ψ_{n,p}] + sup_{g ∈ G_n} E_g[1 − ψ_{n,p}] → 0.
    (b) If r_n/r_n* → 0, then we have
        inf_{ψ_n} ( E_0[ψ_n] + sup_{g ∈ G_n} E_g[1 − ψ_n] ) → 1,
    where the infimum is taken over all tests in the PPP model with intensity (1.1).
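For illustration only, the plug-in test (3.1) is a one-line threshold rule on top of the estimator sketched after Proposition 2.1 (there called estimate_Fp); the function and argument names below are our own.

    def plug_in_test(F_hat_p, r_n, p):
        # Reject H0: g = 0 iff the estimated p-th moment exceeds r_n^p / 2.
        return int(F_hat_p >= r_n ** p / 2.0)

    # Separation rate of Theorem 3.1 (beta, p and n assumed to be given):
    # r_n_star = n ** (-(beta + 1.0 / (2 * p)) / (beta + 1.0))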
3.2. Proof of the upper bound. Throughout the proof C > 0 denotes
a constant depending only on R, β and p that may change from line to
line. Under the null hypothesis we have, using the Chebyshev inequality
and Corollary 2.4,
∗ 2p
− 2βp+1
4E0 [(F̂pM LE )2 ]
n β+1
rn
M LE
p
E0 [ψn,p ] = P0 (F̂p
> rn /2) 6
6C
=C
2p
2p
rn
rn
rn
(3.2)
and by assumption the right-hand side tends to zero as n → ∞. Next,
consider the type-two error Eg [1 − ψn,p ] with g ∈ Gn . Let k ∈ N be such that
2k−1 rnp 6 kgkpp < 2k rnp and set rn,k = 2k/p rn . By the Chebyshev inequality,
we have
Eg [1 − ψn,p ] = Pg (F̂pM LE < rnp /2) = Pg (kgkpp − F̂pM LE > kgkpp − rnp /2)
p
/4)
6 Pg (kgkpp − F̂pM LE > rn,k
6
16Eg [(F̂pM LE − kgkpp )2 ]
2p
rn,k
.
(3.3)
Now, we may restrict ourselves to the case that
2p−2
n
kgk2p−2
− 2β+1
β+1
>n
− 2βp+1
β+1
.
(3.4)
Indeed, if (3.4) does not hold, then the maximal type-two error converges
to zero as n → ∞, by using the same argument as in (3.2). By (3.3), (3.4)
and Corollary 2.4, we obtain
Eg [1 − ψn,p ] 6
− 2β+1
β+1
2p−2 n
Ckgk2p−2 2p
rn,k
.
(3.5)
Let us consider the cases 1 6 p 6 2 and p > 2 separately. If 1 < p 6 2, then
we have kgk2p−2 6 kgkp 6 rn,k by the Hölder inequality and the definition
of k. Thus, for 1 6 p 6 2, we get
∗ 2
∗ 2
− 2β+1
2β+1/p
n β+1
rn
rn
− 2β+1
+ β+1
β+1
Eg [1 − ψn,p ] 6 C 2
6C
n
6C
.
rn,k
rn
rn,k
Taking the supremum over all g ∈ Gn , the right-hand side tends to zero as
n → ∞. Next, consider the case p > 2. Applied with q = 2p−2 > p, Lemma
2.7 gives
1−2/p
2p−2
β+1/p .
kgk2p−2
max(1, kgk−1
p )
2p−2 6 Ckgkp
(3.6)
If kgkp > 1, then the claim follows as in the case 1 6 p 6 2. If kgkp 6 1,
then by (3.5) and (3.6), we have
−2−
1−2/p
− 2β+1
Eg [1 − ψn,p ] 6 Crn,k β+1/p n β+1
∗ 2β+1
∗ 2β+1
2β+1 β+1/2p
rn β+1/p β+1/p
rn β+1/p
− 2β+1
β+1
β+1
=C
n
6C
.
rn,k
rn
Again, taking the supremum over all g ∈ Gn , the right-hand side tends to
zero as n → ∞. This completes the proof of (i).
R
3.3. Proof of the lower bound. We set P1,n (·) = Pgθ (·)πn (dθ) and
P0,n = P0 with gθ and πn as in the proof of Proposition 2.10 with the choice
√
p1 = · · · = pm = 1/ m.
(3.7)
By [4, Proposition 2.9 and Proposition 2.12], in order that Theorem 3.1 (ii)
holds, we have to show that as n → ∞,
(i) πn (gθ ∈ Gn ) → 1,
(ii) χ2 (P1,n , P0,n ) → 0.
For (i), note that
kgθ kp =
m
X
θk
1/p
cRhβ+1/p kKkp .
k=1
By the Chebyshev inequality, we have
X
X
m
m
1/p
√
√
πn
θk
6 2−1/p m1/(2p) = πn
(θk − 1/ m) 6 − m/2
k=1
k=1
√
√
4m(1/ m)(1 − 1/ m)
6
,
m
where the right-hand side tends to zero as m → ∞. Thus (i) holds provided
that we choose m−1 = h of size
1
c1 rnβ+1/(2p)
with c1 > 0 depending only on R and p. Moreover, by Lemma 2.11 and
(3.7), we have
χ2 (P1,n , P0,n ) 6 exp exp(cRnhβ+1 ) − 1 − 1.
Inserting the above choice of h, the last expression goes to zero as n → ∞,
since
β+1
β+1
nrnβ+1/(2p) = (rn /rn∗ ) β+1/(2p) → 0.
This completes the proof.
4. Estimating the L^p-norm

Finally let us consider the problem of estimating the L^p-norm of g. We define the estimator T̂ of ‖g‖_p by

    T̂ = ( max(F̂_p^MLE, 0) )^{1/p} = ( (F̂_p^MLE)_+ )^{1/p}.

Our main result of this section is as follows:

4.1. Theorem. Let p ≥ 1 be a real number. Then we have

    sup_{g ∈ C^β(R)} E_g[ |T̂ − ‖g‖_p| ] ≤ C n^{−(β+1/(2p))/(β+1)}

with a constant C > 0 depending only on R, β and p.
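In code, T̂ is again a one-liner on top of the estimator F̂_p^MLE sketched earlier; as before, the naming is ours and the snippet is only an illustration.

    def lp_norm_estimate(F_hat_p, p):
        # T_hat = (max(F_hat_p, 0))^(1/p), the positive-part plug-in estimator of ||g||_p.
        return max(F_hat_p, 0.0) ** (1.0 / p)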
4.2. Remark. For the problem of estimating g in L∞ -norm, Drees, Neumeyer
and Selk [2] established the rate (n−1 log n)β/(β+1) (in a boundary regression
model). In particular, they use this result to analyse goodness-of-fit tests
for parametric classes of error distributions.
4.3. Remark. Note that we can consider the minimax risk over the whole
Hölder class C β (R) in the case of estimating the norm kgkp . In distinction
to Corollary 2.4, the upper bound does not depend on any Lq -norm of g.
Inspecting the proof, we see more precisely that the minimax rate is driven
by functions whose Lp -norm is smaller than n−(β+1/(2p))/(β+1) . For functions which have a substantially larger norm we get the rate of convergence
n−(β+1/2)/(β+1) corresponding to a smooth functional. This is explained by
the fact that the Lp -norm is a non-smooth functional at g = 0.
4.4. Remark. There is a close connection between Theorem 4.1 and Theorem 3.1. First of all, the upper bound in Theorem 3.1 follows from Theorem
4.1 by using e.g. [4, Proposition 2.17]. Moreover, the lower bound in Theorem 3.1 implies that the rate in Theorem 4.1 is optimal (if not, the plug-in
test ψn,p would give a better minimax rate of testing, again by [4, Proposition 2.17]). In particular, we conclude that the minimax rates of testing
and estimation coincide.
Proof. Throughout the proof C > 0 denotes a constant depending only on
R, β and p that may change from line to line. Since the case p = 1 is
covered in Corollary 2.4, we restrict to the case p > 1. By the convexity
of y 7→ y p , we have (for non-negative real numbers a 6= b the inequality
(bp − ap )/(b − a) > max(a, b)p−1 holds)
|T̂ − kgkp | 6
|T̂ p − kgkpp |
kgkp−1
p
.
Hence,
Eg [|T̂ − kgkp |] 6
Eg [(T̂ p − kgkpp )2 ]1/2
kgkp−1
p
6
Eg [(F̂pM LE − kgkpp )2 ]1/2
kgkp−1
p
,
(4.1)
where we also used the fact that T̂ p = (F̂pM LE )+ improves F̂pM LE (see also
Remark 2.6). On the other hand, we also have |T̂ − kgkp | 6 |T̂ | + kgkp ,
which leads to
Eg [|T̂ − kgkp |] 6 Eg [T̂ p ]1/p + kgkp
6 Eg [|T̂ p − kgkpp |]1/p + 2kgkp
6 Eg [(F̂pM LE − kgkpp )2 ]1/(2p) + 2kgkp ,
(4.2)
where we applied the Hölder inequality and the concavity of the function
y 7→ y 1/p (for non-negative real numbers a 6= b the inequality (a + b)1/p 6
a1/p + b1/p holds).
If kgkp 6 n−(β+1/(2p))/(β+1) , then by (4.2) and Corollary 2.4 it suffices to
show
max kgk2p−2
2p−2 n
− 2β+1
β+1
− 2βp+1
β+1
1/(2p)
,n
−
6 Cn
β+1/(2p)
β+1
,
which itself follows from kgk2p−2 6 Cn−β/(β+1) . For p 6 2 the latter holds
because of kgk2p−2 6 kgkp 6 n−(β+1/(2p))/(β+1) . For p > 2 this is implied by
Lemma 2.7:
kgk2p−2 6 C max(kgkp , kgk(β+1/(2p−2))/(β+1/p)
) 6 Ckgkpβ/(β+1/(2p)) 6 Cn−β/(β+1) ,
p
using first kgkp 6 1 and then 1/(2p − 2) > 1/(2p).
In the opposite case kgkp > n−(β+1/(2p))/(β+1) we apply (4.1), Corollary
2.4 and obtain the result if
β+1/2
βp+1/2
− β+1
− β+1
−(β+1/(2p))/(β+1)
n
max kgkp−1
6 Ckgkp−1
.
,
n
p n
2p−2
For p 6 2 this follows again by kgk2p−2 6 kgkp . For p > 2 Lemma 2.7 yields
the bound
p−1
(1/2−1/(2p))/(β+1)
max(1, kgkp(1/2−(p−1)/p)/(β+1/p) ) 6 Ckgkp−1
,
kgkp−1
p n
2p−2 6 Ckgkp
using ((p − 1)/p − 1/2)(β + 1/(2p)) = (1/2 − 1/p)(β + 1/(2p)) < (1/2 −
1/(2p))(β+1/p). Inserting the bound thus gives the result also for p > 2.
5. Appendix: proof of Lemma 2.7
Let us first show that the general case can be deduced from the special
case q = ∞ and suppose that
1/p
kf k∞ 6 Ckf kp max 1, R/kf kp β+1/p
(5.1)
holds. Clearly, we have
q−p
kf kqq 6 kf k∞
kf kpp .
(5.2)
Now, if kf kp > R, then (5.1) and (5.2) give kf kq 6 C 1−p/q kf kp . On the
other hand, if kgkp 6 R, then (5.1) and (5.2) give
kf kqq 6 C q−p kf kqp (R/kf kp )
(q−p)/p
β+1/p
and thus
kf kq 6 C 1−p/q kf kp (R/kf kp )
1/p−1/q
β+1/p
.
C β (R),
It remains to prove (5.1). Using the definition of
we get
Z min(1,(kf k∞ /R)1/β )
Z 1
(kf k∞ − Rxβ )p dx.
|f (x)|p dx >
kf kpp =
0
0
Setting a = kf k∞ and b = (kf k∞ /R)1/β , we obtain
Z 1∧b
Z 1∧b
Z 1
(1 − (x/b)β )p dx
(a − a(x/b)β )p dx = ap
|f (x)|p dx >
0
0
0
Z
1
p p
> min a , a b
(1 − y β )p dy,
0
where we make the substitution x = by if b 6 1 and use the inequality
1 − (x/b)β > 1 − xβ if b > 1. Thus we have proven
1
kf kp > kf k∞ min 1, kf k∞ /R βp k1 − y β kp ,
which gives (5.1).
References
[1] J. Bergh and J. Löfström. Interpolation spaces. An introduction. Springer-Verlag,
Berlin-New York, 1976.
[2] H. Drees, N. Neumeyer, and L. Selk. Estimation and hypotheses testing in boundary
regression models. https://arxiv.org/abs/1408.3979, 2014.
[3] I. Ibragimov, A. Nemirovski, and R. Khasminski. Some problems of nonparametric
estimation in Gaussian white noise. Theory Probab. Appl., 31:391–406, 1986.
[4] Y. I. Ingster and I. A. Suslina. Nonparametric goodness-of-fit testing under Gaussian
models. Springer-Verlag, New York, 2003.
[5] M. Jirak, A. Meister, and M. Reiß. Adaptive function estimation in nonparametric
regression with one-sided errors. Ann. Stat., 42:1970–2002, 2014.
[6] O. Kallenberg. Foundations of modern probability. Springer-Verlag, New York, second
edition, 2002.
[7] A. F. Karr. Point processes and their statistical inference. Marcel Dekker, Inc., New
York, second edition, 1991.
[8] A. P. Korostelev and A. B. Tsybakov. Minimax theory of image reconstruction.
Springer-Verlag, New York, 1993.
[9] Y. A. Kutoyants. Statistical inference for spatial Poisson processes. Springer-Verlag,
New York, 1998.
[10] G. Last and M. Penrose. Lectures on the Poisson Process. to be published as IMS
Textbook by Cambridge University Press.
[11] O. Lepski, A. Nemirovski, and V. Spokoiny. On estimation of the lr norm of a regression function. Probab. Theory Related Fields, 113:221–253, 1999.
[12] O. V. Lepski and V. G. Spokoiny. Minimax nonparametric hypothesis testing: the
case of an inhomogeneous alternative. Bernoulli, 5:333–358, 1999.
[13] A. Meister and M. Reiß. Asymptotic equivalence for nonparametric regression with
non-regular errors. Probab. Theory Relat. Fields, 155:201–229, 2013.
[14] A. Nemirovski. Topics in non-parametric statistics. Springer, Berlin, 2000.
[15] M. Reiß and L. Selk. Efficient estimation of functionals in nonparametric boundary
models. Bernoulli, 23:1022–1055, 2017.
[16] A. Tsybakov. Introduction to nonparametric estimation. Springer, 2009.
Institut für Mathematik, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany.
E-mail address: [email protected] and [email protected]
Cyber-Physical Architecture Assisted by Programmable
Networking
Jose Rubio-Hernan1 , Rishikesh Sahay2 , Luca De Cicco3 , and Joaquin Garcia-Alfaro1
arXiv:1802.02360v1 [cs.CR] 7 Feb 2018
1
Institut Mines-Télécom, Télécom SudParis, France
2
Technical University of Denmark, Denmark
3
Politecnico di Bari, Italy
Abstract. Cyber-physical technologies are prone to attacks, in addition to faults and
failures. The issue of protecting cyber-physical systems should be tackled by jointly addressing security at both cyber and physical domains, in order to promptly detect and
mitigate cyber-physical threats. Towards this end, this letter proposes a new architecture
combining control-theoretic solutions together with programmable networking techniques
to jointly handle crucial threats to cyber-physical systems. The architecture paves the way
for new interesting techniques, research directions, and challenges which we discuss in our
work.
Keywords: Cyber-Physical Systems (CPS), Programmable Networking, Software-Defined
Networks (SDN), Supervisory Control and Data Acquisition (SCADA), Control Theory.
1
Introduction
Cyber-physical systems integrate physical infrastructures over computing network resources, in
an effort to reduce complexity and costs of traditional control systems. Modern industrial control systems are a proper example. They have evolved from networked-control systems that route
their underlying information in terms of physical measurement and feedback control. Traditional
security, from a network and computing security standpoint, is able to cover cyber threats, but
fails at addressing physical threats. Control-theoretic solutions, combined with network computing security techniques, can lead to powerful solutions to cover both physical and cyber-physical
attacks at the same time [13, 15, 18]. Nevertheless, guaranteeing the resilience of these systems
(i.e., to keep offering critical functionality under attack conditions) is still an open and critical
problem to solve [23]. We argue that the use of Programmable Networking is a plausible solution
to efficiently handle such a problem.
Programmable networking is a paradigm to manage network resources in a programmatic
way. It can be used to decouple static network architectures and ease their management. Additionally, it facilitates the decentralization of network control and processing of network data.
Control and data domains can be reprogrammed dynamically to enforce novel network applications and functionalities. Software-Defined Networking (SDN) [6] is a promising technology
associated to the concept of programmable networking. The use of programmable networking
allows decoupling the control domain from the data domain [4]. Novel network functionality can
be devised and deployed depending on the current requirements. This includes the enforcement
of protection schemes to recover the system elements from attacks [3, 9, 22]. The use of programmable networking increases the visibility of controllers in terms of failure and attacks. It
enables network operators to take control over the network flows based on dynamic conditions [1]. It also allows controlling data-domain devices from application-layer controllers, hence increasing the flexibility in terms of network reconfiguration. For instance, applications at the control domain can dynamically deploy policies over the networked devices via high-level programming languages [20].
In this letter, we present a novel approach towards the development of a programmable cyber-physical system. Network and physical controllers get connected to coordinate resilience strategies, e.g., to maintain the resilience properties of the system under failures and attacks, at any layer of the resulting architecture. The proposed architecture is experimentally validated by means of current programmable and cyber-physical technologies. More specifically, we show a proof-of-concept design combining SCADA (Supervisory Control And Data Acquisition) protocols and SDN-enhanced topologies, in order to evaluate the improvement of the detection and mitigation capabilities of the resulting system.
The remainder of this letter is organized as follows. Section 2 presents our envisioned architecture. Section 3 discusses the advantages and limitations of our proposal with respect to the experimental feasibility of our work. Section 4 closes the letter with some conclusions and perspectives for future work.
2 Our Proposal
2.1 Combining two complementary paradigms
Cyber-physical systems and programmable networking are two complementary paradigms, but they are often addressed separately by two different research communities (the control and the computing-network communities). Both paradigms use similar elements that can be presented either following
control-theoretic architectures [10] or via computer-based programmatic definitions [12]. In the
control community, cyber-physical systems are regarded as a particular class of networked-control
systems [10] (cf. Figure 1). Particularly, feedback control is managed by the following elements.
Controllers, located within the cyber layer of the system (i.e., the layer related to the network
and computing resources), monitor and supervise information produced by physical sensors reporting measurements from physical processes. Based on the sensor measurements, controllers
dynamically compute corrective actions which are put in place by system actuators, to steer the
physical processes to the desired states.
Programmable networking can be represented using similar elements and definitions [12], as
depicted in Figure 1. In such a representation, the controller is governed by software elements,
supervising both the management and the control domains of a programmable architecture.
The controller manages the behavior of all the interactions at the data domain, where network
elements (e.g., network switches) follow some precise policies directed by the controller (e.g., to
enforce routing and security strategies). The remaining elements of the network are permanently monitored by the controller and orchestrated via control policies to improve, e.g., the routing of traffic or the enforcement of network security countermeasures. To conduct monitoring at
the data domain, several network probes (referred to as meters in Figure 1) report networking
measurements to the controller.
2.2 Towards resilient control systems
A crucial goal of both cyber-physical designs and programmable networking is to ensure resilient
control systems. Resilience is a property that has been studied for many years in different areas
of knowledge. Laprie [8] defines the resilience of a system as the persistence of dependability when
facing changes in the system. Dependability has also been expressed by Ellison et al. as the
avoidance of failures that are unacceptably frequent or severe [2]. From these two definitions, we
see that resilience deals with the management of operational functionality that is crucial for a
system, and that cannot be stopped; in other words, system functionality that shall be properly accomplished. For instance, the cooling service of a reactor in a nuclear plant is a proper example of a critical functionality.
Other system functionalities may be seen as less important. They can even be temporarily
stopped or partially accomplished. Such types of functions can be seen as secondary functions. A printing service for employees in the aforementioned nuclear plant scenario is a proper example of a secondary function. Another important element to take into consideration is the severity of failures. The more severe a failure is, the more it will affect the related system. Also, the more severe the failure, the harder it will be for the system to recover the nominal system functionality.
Several works have focused on these issues when addressing the survivability of critical system
functionalities (see, for instance, reference [11] and citations thereof).
Finally, a resilient control system shall be able to [16]: (i) detect undesirable threats; (ii) minimize the impact of such threats; and (iii) mitigate them (i.e., recover to normal operation in a reasonable time). Solutions include the use of redundancy (e.g., the use of software replicas), the enforcement of system segmentation, and the inclusion of different levels of protection (e.g., secure handshakes, message signatures, and encryption). In the end, the goal is to include enough security measures to close the system monitoring loop with additional responses that mitigate the undesirable events. In other words, mitigation techniques must be included to keep processes in a normal operating state while the system is confronted with failures and attacks. The challenge in satisfying these requirements on a cyber-physical system is the difficulty of properly differentiating failures from attacks, since their underlying security models (e.g., in terms of mitigation) differ as well.
Fig. 1. Proposed architecture: (a) the proposed architecture, combining feedback control and programmable networking; (b) closed-loop feedback control (physical system, sensors, actuators, feedback controller); (c) programmable networking (network, probes, effectors, programmable network controller).
2.3 Cyber-Physical Architecture Assisted by Programmable Networking
Taking into account the aforementioned descriptions and goals, in this section we propose a
resilient cyber-physical architecture assisted by programmable networking. The proposed architecture allows creating a cross control layer between the physical and the cyber layers. The
resulting design aims to enable the combination of different security models, e.g., from a security
and safety standpoint. For instance, it enables adaptive mitigation of threats, making it possible to differentiate between faults, failures, and attacks prior to enforcing the eventual mitigation strategies. The use of programmable networking allows moving to a higher level of abstraction when analyzing the system threats (in contrast to traditional solutions anchored at the data domain), moving the defense to the same level as the cyber-physical adversaries, which are assumed to be entities with equivalent powers (e.g., in terms of observability and controllability).
Figure 1(a) shows our proposed framework of a programmable-networking-assisted cyber-physical system. The framework contains different components of programmable networking (PN) technologies and cyber-physical systems (CPS): (1) The data domain is mainly comprised of two sub-spaces: a physical space composed of physical sensors and actuators, which are used by the CPS to communicate with the physical processes located in the physical system (also located in the physical space). These devices communicate with the Feedback controller to manage the physical processes. And a network space composed of PN switches, i.e., programmable network switches that are controlled dynamically. This dynamic control enables us to minimize the deployment cost of the network framework, as well as to improve and manage many other network features [21], e.g., QoS and security requirements. PN switches use a centralized framework with an interface to control and manage all the network features dynamically; (2) The management and control domain contains two different controllers working in a joint and coordinated way, to fully cover the control of the resulting framework. The controllers are: (i) a Feedback controller and (ii) a
Programmable Network (PN) controller.
The Feedback controller is made of two sub-components: (a) A feedback controller that is
in charge of enforcing the dynamical control objectives (fast dynamics are involved); (b) A
supervisor controller that communicates bi-directionally with the PN controller as follows. The PN controller, based on measurements provided by probes and feedback
provided by the Feedback controller, is able to detect a possible threat acting on the control
path. In response to such a threat, the PN controller provides a corrective measure to mitigate
the impact. The PN controller can be seen as a computing entity that is located at an external
location (e.g., a kind of Network Operating System [6]). For instance, it provides resources and
abstractions to manage the system using a centralized or a decentralized model [17].
Together, both controllers manage the data domain. The Feedback controller manages the physical system through physical sensors and actuators deployed at the physical layer, while the PN controller estimates and manages the data domain through network probes and effectors deployed at the management and control domain. It is worth noting that the PN controller
uses system identification tools to estimate the behavior matrices of the physical system. As
a result, it can compare the nominal cyber-physical system model (i.e., the behavior matrices)
with the model estimated by the Feedback controller. The correlation between both estimations allows detecting anomalies in the system (cf. reference [17] for a sample technique based on
the same principle). Notice that such anomalies can either be unintentional failures, or stealthy
attacks disguised by intentional modifications produced by a malicious entity who is willing to
disrupt the physical system [18, 19]. Assuming a system with appropriate measures to detect both failures and attacks, the PN controller can react to those anomalies by enforcing some mitigation policies. At the data domain, we have network probes and effectors, conducting data monitoring if instructed by the control domain. Network probes monitor the traffic in the data domain and provide the information to the PN controller. The PN controller analyzes the information and forwards control actions to the effectors. Network rules at the control domain are responsible for enforcing such actions. For instance, when a network probe finds tampered traffic in a network path, it provides the tampered information to the control domain. Then, the PN controller, located at the control domain, checks for the available resources in, e.g., a path lookup component, and provides new routes to enforce the action. For instance, it may redirect the tampered traffic to provide a fair share of network bandwidth w.r.t. legitimate traffic.
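To make the model-comparison step more concrete, the following is a minimal Python sketch (ours, not taken from the prototype) of how a PN controller might compare the nominal behavior matrices against freshly identified ones and raise an alarm when the deviation or an innovation residual exceeds a configured tolerance; all function names, matrices, and thresholds are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def model_deviation(A_nominal, A_estimated):
    """Frobenius-norm deviation between nominal and newly identified behavior matrices."""
    return np.linalg.norm(A_nominal - A_estimated, ord="fro")

def residual_alarm(y_measured, y_predicted, threshold):
    """Raise an alarm when the innovation residual exceeds a configured threshold."""
    residual = np.linalg.norm(y_measured - y_predicted)
    return residual > threshold, residual

# Illustrative usage: nominal model vs. a model re-identified from probe data.
A_nominal = np.array([[0.9, 0.1], [0.0, 0.8]])
A_estimated = A_nominal + 0.05 * np.random.randn(2, 2)   # stand-in for a system-identification output
deviation = model_deviation(A_nominal, A_estimated)
anomalous = deviation > 0.2                               # assumed tolerance
print(f"model deviation={deviation:.3f}, anomalous={anomalous}")
```

The Frobenius-norm distance is only one possible statistic; any consistent model-distance or residual test could be substituted without changing the overall architecture.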
Another important component of the management and control domain is the Path Lookup.
Paths in the framework are pre-computed. Normally, a list of paths is maintained at the control domain. The path lookup component maintains a table of paths (from the ingress switch to the egress switch), sorted according to the quality of service provided by the paths and associated with unique labels. Paths are later assigned to flows based on the traffic class that they belong to. This is a crucial component of the overall system. Resiliency to attacks can be implemented by using multi-path communication between cyber-physical system components or by activating new paths (even suboptimal ones) to evade an attack. For example, legitimate flows are assigned to high-priority paths, while suspicious flows are assigned to paths containing middleboxes or paths with low bandwidth and more hops, and malicious flows are finally forwarded through paths which lead to a sinkhole.
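The path lookup component described above can be illustrated with a minimal sketch, assuming a simple Python data structure; the labels, QoS metric, and traffic classes below are hypothetical and only meant to show how flows could be mapped to pre-computed paths of different priorities.

```python
from dataclasses import dataclass, field

@dataclass
class PathEntry:
    label: str             # unique label assigned to the pre-computed path
    hops: list             # ordered list of switch identifiers
    bandwidth_mbps: float  # quality-of-service metric used for sorting

@dataclass
class PathLookup:
    """Table of pre-computed ingress-to-egress paths, sorted by quality of service."""
    paths: list = field(default_factory=list)

    def add(self, entry: PathEntry):
        self.paths.append(entry)
        self.paths.sort(key=lambda p: p.bandwidth_mbps, reverse=True)

    def assign(self, traffic_class: str) -> PathEntry:
        # Legitimate flows get the best path; suspicious flows get a low-priority
        # (longer / lower-bandwidth) path; malicious flows are sent to a sinkhole.
        if traffic_class == "legitimate":
            return self.paths[0]
        if traffic_class == "suspicious":
            return self.paths[-1]
        return PathEntry("sinkhole", ["quarantine"], 0.0)

# Illustrative usage with two hypothetical paths.
lookup = PathLookup()
lookup.add(PathEntry("p1", ["s1", "s2", "s4"], 1000.0))
lookup.add(PathEntry("p2", ["s1", "s3", "s5", "s4"], 100.0))
print(lookup.assign("suspicious").label)   # -> "p2"
```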
3 Validation and Discussion
To validate the feasibility of our proposal, we are currently implementing a proof-of-concept prototype based on the architecture proposed in this letter. The prototype combines the components
of a cyber-physical system together with programmable networking technology. It builds upon Mininet [7], a network emulator tool which provides rapid prototyping for programmable networking. The protocol used to instantiate the programmable networking techniques is OpenFlow. It allows controlling the path of the network traffic through programmable networking switches. For the implementation of the cyber-physical system, we use the SCADA (Supervisory Control And Data Acquisition) Modbus protocol [14]. Further information about the proof-of-concept prototype and ongoing results is available online.
For the time being, the programmable networking controller of the prototype instructs the
network probes of a cyber-physical system (e.g., autonomous movable vehicles) to monitor traffic from the data domains, in order to investigate the existence of evidence of malicious traffic. More specifically, the programmable networking controller receives traffic header details (such as source IP address, destination IP address, and protocol) along with payload data containing networked-feedback measurements from the physical domains. The programmable networking controller cooperates with the Feedback controller. It analyzes anomalous details and may share information with the Feedback controller. Whenever the level of suspicious events reaches a configurable threshold, the programmable networking controller may decide to send the measurements to, e.g., a quarantine subnetwork, by redirecting the suspicious traffic through alternative routing paths [5].
This solution, based on the path lookup module of Mininet, uses network traffic labeling to mark
down the suspicious events. This is implemented in practice by adding an additional functionality to the Mininet programmable networking controllers that deploys mitigation rules to handle
suspicious traffic at the edge of the programmable networking switches of the cyber-physical
system. The redirection of traffic is enforced by the network effector of the system, in order to
enable quarantining plans. Some more practical details and video captures of the proof-of-concept
prototype are available online at http://j.mp/TSPCPSDN.
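As a rough illustration of the configurable-threshold logic (not the actual prototype code), the sketch below accumulates probe reports per flow and switches the requested action to quarantine once the assumed threshold is exceeded; the flow identifier format and the threshold value are illustrative assumptions.

```python
from collections import defaultdict

SUSPICION_THRESHOLD = 5        # assumed, configurable value

suspicion_counters = defaultdict(int)

def report_event(flow_id: str, tampered: bool) -> str:
    """Accumulate probe reports per flow and decide whether to quarantine it.

    Returns the action the PN controller would request from the effectors:
    'forward' while the flow is below the threshold, 'quarantine' afterwards.
    """
    if tampered:
        suspicion_counters[flow_id] += 1
    if suspicion_counters[flow_id] >= SUSPICION_THRESHOLD:
        return "quarantine"    # e.g., redirect via an alternative (quarantine) path
    return "forward"

# Illustrative usage: five tampered reports push the flow over the threshold.
for _ in range(5):
    action = report_event("10.0.0.7->10.0.0.9:modbus", tampered=True)
print(action)                  # -> "quarantine"
```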
The aforementioned prototype validates the architecture proposed in this letter. It promotes
cooperation between controllers located at both the management and control domains of a cyber-physical system. Nevertheless, further analysis is required w.r.t. the duality of goals of each controller paradigm represented in the architecture depicted in Figure 1(a). In some cases, the two combined paradigms may be driven by different objectives. A more thorough study shall be conducted to identify how the controllers can achieve a common goal (e.g., guaranteeing the cyber-resilience of the underlying system) without interfering with their primary objectives (i.e., the networking and physical control objectives). As a concrete example, the Feedback controller can report an anomalous deviation of the sensor readings from the nominal behavior as feedback to the PN controller. This issue could be due to sensors being faulty or due to an attack whose surface intersects with the control paths. Unfortunately, the Feedback controller alone is not able to distinguish between the two events since it has no view of the underlying network (information hiding). In response to such a situation, the Feedback controller sends an alert signal to the programmable networking controller, which verifies whether the control path (which is unknown to the Feedback controller) is likely to be under attack. If an attack is detected by the programmable networking controller, it puts in place the corrective measures (i.e., it segregates malicious traffic programmatically) and sends a signal to the Feedback controller to report that a corrective action has been taken. Notice that sophisticated attacks against the system could escape detectors running solely in the cyber layer. When detection is also handled by the Feedback controller, stealthy attacks disguised as failures and faults can be identified. A more precise evaluation of this type of scenario will be provided in a forthcoming publication.
4 Conclusion
This letter shows that programmable networking and feedback control can be combined in order to build more resilient cyber-physical systems. We argued that the construction of a programmable-networking-assisted cyber-physical architecture improves the detection and mitigation of cyber-physical attacks. It allows cooperation between traditional Feedback controllers and programmable networking devices, enabling more efficient mitigation of threats (e.g., by providing evidence about stealthy cyber-physical attacks disguised as failures and faults). Next steps include a more thorough analysis of the cooperation of the controllers, as well as an investigation of the effective activation of cyber-physical resilience by combining novel detection and mitigation strategies.
Acknowledgment
The research in this paper has received partial funding from the Cyber-CNI Chair of the Institut
Mines-Telecom (http://chaire-cyber-cni.fr/).
References
1. M. Al-Fares, S. Radhakrishnan, B. Raghavan, N. Huang, and A. Vahdat. Hedera: Dynamic Flow Scheduling for Data Center Networks. In Proceedings of the 7th USENIX Conference on Networked Systems Design and Implementation, NSDI'10, pages 19-19, Berkeley, CA, USA, 2010. USENIX Association.
2. R. J. Ellison, D. A. Fisher, R. C. Linger, H. F. Lipson, and T. Longstaff. Survivable network systems: An emerging discipline. Technical report, Carnegie-Mellon Univ Pittsburgh PA Software Engineering Inst, 1997.
3. S. K. Fayaz, Y. Tobioka, V. Sekar, and M. Bailey. Bohatei: Flexible and Elastic DDoS Defense. In 24th USENIX Security Symposium (USENIX Security 15), pages 817-832. USENIX Association, Aug 2015.
4. O. N. Foundation. Software-Defined Networking: The New Norm for Networks. Technical report, Open Networking Foundation, April 2012.
5. N. Hachem, H. Debar, and J. Garcia-Alfaro. HADEGA: A novel MPLS-based mitigation solution to handle network attacks. In 31st IEEE International Conference on Performance Computing and Communications (IPCCC 2012), pages 171-180. IEEE, December 2012.
6. D. Kreutz, F. M. V. Ramos, P. E. Verissimo, C. E. Rothenberg, S. Azodolmolky, and S. Uhlig. Software-Defined Networking: A Comprehensive Survey. Proceedings of the IEEE, 103(1):14-76, Jan 2015.
7. B. Lantz, B. Heller, and N. McKeown. A Network in a Laptop: Rapid Prototyping for Software-defined Networks. In 9th ACM SIGCOMM Workshop on Hot Topics in Networks, pages 19:1-19:6, New York, NY, USA, 2010. ACM.
8. J.-C. Laprie. From dependability to resilience. In 38th IEEE/IFIP Int. Conf. On Dependable Systems and Networks, pages G8-G9, 2008.
9. J. Li, S. Berg, M. Zhang, P. Reiher, and T. Wei. Drawbridge: Software-defined DDoS-resistant Traffic Engineering. In Proceedings of the 2014 ACM Conference on SIGCOMM, pages 591-592, New York, NY, USA, 2014. ACM.
10. M. Lindberg and K.-E. Arzen. Feedback Control of Cyber-physical Systems with Multi Resource Dependencies and Model Uncertainties. In 2010 31st IEEE Real-Time Systems Symposium, pages 85-94, Nov 2010.
11. R. C. Linger, N. R. Mead, and H. F. Lipson. Requirements definition for survivable network systems. In Proceedings of the Third International Conference on Requirements Engineering, pages 14-23. IEEE, 1998.
12. N. Matni, A. Tang, and J. C. Doyle. Technical report: A case study in network architecture tradeoffs. Technical report, 2015.
13. Y. Mo, S. Weerakkody, and B. Sinopoli. Physical Authentication of Control Systems: Designing Watermarked Control Inputs to Detect Counterfeit Sensor Outputs. IEEE Control Systems, 35(1):93-109, February 2015.
14. Modbus Organization. Official Modbus Specifications, 2016. http://www.modbus.org/specs.php, last access: January 2018.
15. F. Pasqualetti, F. Dorfler, and F. Bullo. Control-Theoretic Methods for Cyberphysical Security: Geometric Principles for Optimal Cross-Layer Resilient Control Systems. IEEE Control Systems, 35(1):110-127, Feb 2015.
16. C. Queiroz Batista Da Silva. A holistic approach for measuring the survivability of SCADA systems. PhD thesis, RMIT University, 2012.
17. J. Rubio-Hernan, L. De Cicco, and J. Garcia-Alfaro. Event-Triggered Watermarking Control to Handle Cyber-Physical Integrity Attacks. In 21st Nordic Conference on Secure IT Systems (NordSec 2016), Oulu (Finland), pages 3-19. Springer, Nov 2016.
18. J. Rubio-Hernan, L. De Cicco, and J. Garcia-Alfaro. Adaptive Control-Theoretic Detection of Integrity Attacks against Cyber-Physical Industrial Systems. Transactions on Emerging Telecommunications Technologies, 2017.
19. J. Rubio-Hernan, L. De Cicco, and J. Garcia-Alfaro. On the use of Watermark-based Schemes to Detect Cyber-Physical Attacks. EURASIP Journal on Information Security, 2017(1):8, Jun 2017.
20. R. Sahay, G. Blanc, Z. Zhang, and H. Debar. ArOMA: An SDN based autonomic DDoS mitigation framework. Computers and Security, 70(Supplement C):482-499, 2017.
21. S. Sharma. Programmable Ethernet Switches and Their Applications. PhD thesis, Stony Brook, NY, USA, 2008. AAI3406709.
22. S. Shin, P. A. Porras, V. Yegneswaran, M. W. Fong, G. Gu, and M. Tyson. FRESCO: Modular
Composable Security Services for Software-Defined Networks. In Proceedings of the ISOC Network
and Distributed System Security Symposium (NDSS), 2013.
23. D. I. Urbina, J. Giraldo, A. A. Cardenas, J. Valente, M. Faisal, N. O. Tippenhauer, J. Ruths,
R. Candell, and H. Sandberg. Survey and New Directions for Physics-Based Attack Detection
in Control Systems. In Grant/Contract Reports (NISTGCR), pages 1–37. National Institute of
Standards and Technology (NIST), Nov 2016.
| 3 |
A Simple Exponential Family Framework
for Zero-Shot Learning
arXiv:1707.08040v3 [cs.LG] 25 Jan 2018
Vinay Kumar Verma♯ and Piyush Rai♯
♯ Dept. of Computer Science & Engineering, IIT Kanpur, India
{vkverma,piyush}@cse.iitk.ac.in
Abstract. We present a simple generative framework for learning to predict previously unseen classes, based on estimating class-attribute-gated class-conditional
distributions. We model each class-conditional distribution as an exponential family distribution and the parameters of the distribution of each seen/unseen class
are defined as functions of the respective observed class attributes. These functions can be learned using only the seen class data and can be used to predict
the parameters of the class-conditional distribution of each unseen class. Unlike
most existing methods for zero-shot learning that represent classes as fixed embeddings in some vector space, our generative model naturally represents each
class as a probability distribution. It is simple to implement and also allows leveraging additional unlabeled data from unseen classes to improve the estimates of
their class-conditional distributions using transductive/semi-supervised learning.
Moreover, it extends seamlessly to few-shot learning by easily updating these
distributions when provided with a small number of additional labelled examples
from unseen classes. Through a comprehensive set of experiments on several
benchmark data sets, we demonstrate the efficacy of our framework1 .
1 Introduction
The problem of learning to predict unseen classes, popularly known as Zero-Shot Learning (ZSL), is an important learning paradigm which refers to the problem of recognizing objects from classes that were not seen at training time [13,26].
ZSL is especially relevant for learning “in-the-wild” scenarios, where new concepts
need to be discovered on-the-fly, without having access to labelled data from the novel
classes/concepts. This has led to a tremendous amount of interest in developing ZSL
methods that can learn in a robust and scalable manner, even when the amount of supervision for the classes of interest is relatively scarce.
A large body of existing prior work for ZSL is based on embedding the data into a
semantic vector space, where distance based methods can be applied to find the most
likely class which itself is represented as a point in the same semantic space [26,20,33].
However, a limitation of these methods is that each class is represented as a fixed point
in the embedding space which does not adequately account for intra-class variability [2,18]. We provide a more detailed overview of existing work on ZSL in the Related
Work section.
1 Code & data are available: https://github.com/vkverma01/Zero-Shot/
Another key limitation of most of the existing methods is that they usually lack
a proper generative model of the data. Having a generative model has several advantages [19]. For example, (1) data of different types can be modeled in a principled way
using appropriately chosen class-conditional distributions; (2) unlabeled data can be
seamlessly integrated (for both seen as well as unseen classes) during parameter estimation, leading to a transductive/semi-supervised estimation procedure, which may be
useful when the amount of labeled data for the seen classes is small, or if the distributions of seen and unseen classes are different from each other [11]; and (3) a rich
body of work, both frequentist and Bayesian, on learning generative models [19] can be
brought to bear during the ZSL parameter estimation process.
Motivated by these desiderata, we present a generative framework for zero-shot
learning. Our framework is based on modelling the class-conditional distributions of
seen as well as unseen classes using exponential family distributions [3], and further
conditioning the parameters of these distributions on the respective class-attribute vectors via a linear/nonlinear regression model of one’s choice. The regression model allows us to predict the parameters of the class-conditional distributions of unseen classes
using only their class attributes, enabling us to perform zero-shot learning.
In addition to the generality and modelling flexibility of our framework, another
of its appealing aspects is its simplicity. In contrast with various other state-of-the-art
methods, our framework is very simple to implement and easy to extend. In particular,
as we will show, parameter estimation in our framework simply reduces to solving a
linear/nonlinear regression problem, for which a closed-form solution exists. Moreover,
extending our framework to incorporate unlabeled data from the unseen classes, or a
small number of labelled examples from the unseen classes, i.e., performing few-shot
learning [23,17] is also remarkably easy under our framework, which models class-conditional distributions using exponential family distributions with conjugate priors.
2 A Generative Framework For ZSL
In zero-shot learning (ZSL) we assume there is a total of S seen classes and U unseen
classes. Labelled training examples are only available for the seen classes. The test data
is usually assumed to come only from the unseen classes, although in our experiments,
we will also evaluate our model for the setting where the test data could come from
both seen and unseen classes, a setting known as generalised zero-shot learning [6].
We take a generative modeling approach to the ZSL problem and model the class-conditional distribution for an observation x from a seen/unseen class c (c = 1, . . . , S + U) using an exponential family distribution [3] with natural parameters θ_c
p(x | θ_c) = h(x) exp(θ_c^⊤ φ(x) − A(θ_c))    (1)
where φ(x) denotes the sufficient statistics and A(θ_c) denotes the log-partition function. We also assume that the distribution parameters θ_c are given conjugate priors
p(θ_c | τ_0, ν_0) ∝ exp(θ_c^⊤ τ_0 − ν_0 A(θ_c))    (2)
Given a test example x∗ , its class y∗ can be predicted by finding the class under
which x∗ is most likely (i.e., y∗ = arg maxc p(x∗ |θc )), or finding the class that has the
largest posterior probability given x∗ (i.e., y∗ = arg maxc p(θc |x∗ )). However, doing
this requires first estimating the parameters {θ_c}_{c=S+1}^{S+U} of all the unseen classes.
Given labelled training data from any class modelled as an exponential family distribution, it is straightforward to estimate the model parameters θ_c using maximum likelihood estimation (MLE), maximum-a-posteriori (MAP) estimation, or fully Bayesian inference [19]. However, since there are no labelled training examples from the unseen classes, we cannot estimate the parameters {θ_c}_{c=S+1}^{S+U} of the class-conditional distributions of the unseen classes.
To address this issue, we learn a model that allows us to predict the parameters θc
for any class c using the attribute vector of that class via a gating scheme, which is
basically defined as a linear/nonlinear regression model from the attribute vector to the
parameters. As is the common practice in ZSL, the attribute vector of each class may
be derived from a human-provided description of the class or may be obtained from an external source such as Wikipedia, in the form of a word embedding of each class. We assume that the class-attribute of each class is a vector of size K. The class-attributes of all the classes are denoted as {a_c}_{c=1}^{S+U}, a_c ∈ R^K.
2.1 Gating via Class-Attributes
We assume a regression model from the class-attribute vector ac to the parameters θc
of each class c. In particular, we assume that the class-attribute vector ac is mapped via
a function f to generate the parameters θc of the class-conditional distribution of class
c, as follows
θ_c = f_θ(a_c)    (3)
Note that the function f_θ itself could consist of multiple functions if θ_c consists of multiple parameters. For concreteness, and also to simplify the rest of the exposition, we will focus on the case when the class-conditional distribution is a D-dimensional Gaussian, for which θ_c is defined by the mean vector µ_c ∈ R^D and a p.s.d. covariance matrix Σ_c ∈ S_+^{D×D}. Further, we will assume Σ_c to be a diagonal matrix defined as Σ_c = diag(σ_c²), where σ_c² = [σ_{c1}², . . . , σ_{cD}²]. Note that one can also assume a full
covariance matrix but it will significantly increase the number of parameters to be estimated. We model µc and σ 2c as functions of the attribute vector ac
µ_c = f_µ(a_c)    (4)
σ_c² = f_{σ²}(a_c)    (5)
Note that the above equations define two regression models. The first regression
model defined by the function fµ has ac as the input and µc as the output. The second
regression model defined by fσ2 has ac as the input and σ 2 as the output. The goal is
to learn the functions fµ and fσ2 from the available training data. Note that the form of
these functions is a modelling choice and can be chosen appropriately. We will consider
both linear as well as nonlinear functions.
2.2 Learning The Regression Functions
Using the available training data from all the seen classes c = 1, . . . , S, we can form empirical estimates of the parameters {µ̂_c, σ̂_c²}_{c=1}^{S} of the respective class-conditional distributions using MLE/MAP estimation. Note that, since our framework is generative, both labeled as well as unlabeled data from the seen classes can be used to form the empirical estimates {µ̂_c, σ̂_c²}_{c=1}^{S}. This makes our estimates of {µ̂_c, σ̂_c²}_{c=1}^{S} reliable even if each seen class has a very small number of labeled examples. Given these estimates for the seen classes
µ̂_c = f_µ(a_c),    c = 1, . . . , S    (6)
σ̂_c² = f_{σ²}(a_c),    c = 1, . . . , S    (7)
We can now learn f_µ using "training" data {a_c, µ̂_c}_{c=1}^{S} and learn f_{σ²} using training data {a_c, σ̂_c²}_{c=1}^{S}. We consider both linear and nonlinear regression models for learning these.
The Linear Model For the linear model, we assume µ̂c and σ̂ 2c to be linear functions
of the class-attribute vector ac , defined as
µ̂_c = W_µ a_c,    c = 1, . . . , S    (8)
ρ̂_c = log σ̂_c² = W_{σ²} a_c,    c = 1, . . . , S    (9)
where the regression weights W_µ ∈ R^{D×K}, W_{σ²} ∈ R^{D×K}, and we have re-parameterized σ̂_c² ∈ R_+^D to ρ̂_c ∈ R^D as ρ̂_c = log σ̂_c².
We use this re-parameterization to map the output space of the second regression model f_{σ²} (defined by W_{σ²}) to real-valued vectors, so that a standard regression model can be applied (note that σ̂_c² is a positive-valued vector).
Estimating Regression Weights of Linear Model: We will denote M = [µ̂1 , . . . , µ̂S ] ∈
RD×S , R = [ρ̂1 , . . . , ρ̂S ] ∈ RD×S , and A = [a1 , . . . , aS ] ∈ RK×S . We can then
write the estimation of the regression weights Wµ as the following problem
Ŵ_µ = arg min_{W_µ} ||M − W_µ A||²_2 + λ_µ ||W_µ||²_2    (10)
This is essentially a multi-output regression [7] problem W_µ : a_s ↦ µ̂_s with least squares loss and an ℓ2 regularizer. The solution to this problem is given by
Ŵ_µ = M A^⊤ (A A^⊤ + λ_µ I_K)^{−1}    (11)
Likewise, we can then write the estimation of the regression weights Wσ2 as the
following problem
Ŵ_{σ²} = arg min_{W_{σ²}} ||R − W_{σ²} A||²_2 + λ_{σ²} ||W_{σ²}||²_2    (12)
The solution of the above problem is given by
Ŵ_{σ²} = R A^⊤ (A A^⊤ + λ_{σ²} I_K)^{−1}    (13)
Given Ŵµ and Ŵσ2 , parameters of the class-conditional distribution of each unseen class c = S + 1, . . . , S + U can be easily computed as follows
µ̂_c = Ŵ_µ a_c    (14)
σ̂_c² = exp(ρ̂_c) = exp(Ŵ_{σ²} a_c)    (15)
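Since Eqs. 10-15 are in closed form, the whole linear variant fits in a few lines of NumPy. The sketch below is ours rather than the released code; the matrix shapes follow the notation above (M and R are D × S, A is K × S), and the random stand-in data, dimensions, and regularization values are assumptions used purely for illustration.

```python
import numpy as np

def fit_linear_gating(M, R, A, lam_mu=1.0, lam_sigma=1.0):
    """Closed-form ridge solutions W_mu = M A^T (A A^T + lam I)^-1 (Eq. 11) and
    W_sigma2 = R A^T (A A^T + lam I)^-1 (Eq. 13)."""
    K = A.shape[0]
    G = A @ A.T
    W_mu = M @ A.T @ np.linalg.inv(G + lam_mu * np.eye(K))
    W_sigma2 = R @ A.T @ np.linalg.inv(G + lam_sigma * np.eye(K))
    return W_mu, W_sigma2

def predict_unseen_params(W_mu, W_sigma2, A_unseen):
    """Eqs. 14-15: predicted mean and (exponentiated) variance for each unseen class."""
    mu = W_mu @ A_unseen                 # D x U
    sigma2 = np.exp(W_sigma2 @ A_unseen) # D x U
    return mu, sigma2

# Illustrative shapes (assumed): D features, K attributes, S seen and U unseen classes.
D, K, S, U = 4096, 85, 40, 10
M = np.random.randn(D, S)    # empirical seen-class means (stand-ins)
R = np.random.randn(D, S)    # log of empirical seen-class variances (stand-ins)
A = np.random.randn(K, S)    # seen class-attribute matrix
A_u = np.random.randn(K, U)  # unseen class-attribute matrix
W_mu, W_s2 = fit_linear_gating(M, R, A)
mu_u, s2_u = predict_unseen_params(W_mu, W_s2, A_u)
```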
The Nonlinear Model For the nonlinear case, we assume that the inputs {ac }Sc=1 are
mapped to a kernel induced space via a kernel function k with an associated nonlinear
mapping φ. In this case, using the representer theorem [24], the solution for the two regression models fµ and fσ2 can be written as the spans of the inputs {φ(ac )}Sc=1 . Note
that mappings φ(ac ) do not have to be computed explicitly since learning the nonlinear
regression model only requires dot products φ(ac )⊤ φ(ac′ ) = k(ac , ac′ ) between the
nonlinear mappings of two classes c and c′ .
Estimating Regression Weights of Nonlinear Model: Denoting K to be the S ×S kernel matrix of the pairwise similarities of the attributes of the seen classes, the nonlinear
model fµ is obtained by
α̂_µ = arg min_{α_µ} ||M − α_µ K||²_2 + λ_µ ||α_µ||²_2    (16)
where α̂_µ is a D × S matrix consisting of the coefficients of the span of {φ(a_c)}_{c=1}^{S} defining the nonlinear function f_µ.
Note that the problem in Equation 16 is essentially a multi-output kernel ridge regression [7] problem, which has a closed-form solution. The solution for α̂_µ is given by
α̂_µ = M (K + λ_µ I_S)^{−1}    (17)
Likewise, the nonlinear model fσ2 is obtained by solving
α̂_{σ²} = arg min_{α_{σ²}} ||R − α_{σ²} K||²_2 + λ_{σ²} ||α_{σ²}||²_2    (18)
where α̂_{σ²} is a D × S matrix consisting of the coefficients of the span of {φ(a_c)}_{c=1}^{S} defining the nonlinear function f_{σ²}. The solution for α̂_{σ²} is given by
α̂_{σ²} = R (K + λ_{σ²} I_S)^{−1}    (19)
Given α̂µ , α̂σ2 , parameters of class-conditional distribution of each unseen class
c = S + 1, . . . , S + U will be
µ̂_c = α̂_µ k_c    (20)
σ̂_c² = exp(ρ̂_c) = exp(α̂_{σ²} k_c)    (21)
where kc = [k(ac , a1 ), . . . , k(ac , aS )]⊤ denotes an S × 1 vector of kernel-based similarities of the class-attribute of unseen class c with the class-attributes of all the seen
classes.
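The kernel variant admits an equally short sketch. The code below assumes a degree-2 polynomial ("quadratic") kernel as one plausible instantiation of the quadratic kernel mentioned in the experiments, and is an illustrative re-implementation of Eqs. 16-21 rather than the authors' code; all names and shapes are assumptions.

```python
import numpy as np

def quadratic_kernel(A1, A2):
    """One common degree-2 polynomial kernel between attribute vectors (columns of A1, A2)."""
    return (A1.T @ A2 + 1.0) ** 2

def fit_kernel_gating(M, R, A_seen, lam_mu=1.0, lam_sigma=1.0):
    """Eqs. 17 and 19: alpha_mu = M (K + lam I)^-1 and alpha_sigma2 = R (K + lam I)^-1."""
    S = A_seen.shape[1]
    K = quadratic_kernel(A_seen, A_seen)                  # S x S kernel matrix
    alpha_mu = M @ np.linalg.inv(K + lam_mu * np.eye(S))
    alpha_s2 = R @ np.linalg.inv(K + lam_sigma * np.eye(S))
    return alpha_mu, alpha_s2

def predict_unseen_params_kernel(alpha_mu, alpha_s2, A_seen, A_unseen):
    """Eqs. 20-21: each column of k holds the kernel similarities of one unseen class
    with all seen classes."""
    k = quadratic_kernel(A_seen, A_unseen)                # S x U
    return alpha_mu @ k, np.exp(alpha_s2 @ k)
```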
Other Exponential Family Distributions Although we illustrated our framework taking the example of Gaussian class-conditional distributions, our framework readily generalizes to the case when these distributions are modelled using any exponential family
distribution. The estimation problems can be solved in a similar way as the Gaussian
case with the basic recipe remaining the same: Form empirical estimates of the parameters Θ = {θ̂c }Sc=1 for the seen classes using all the available seen class data and then
learn a linear/nonlinear regression model from the class-attributes A (or their kernel
representation K in the nonlinear case) to Θ.
In addition to its modeling flexibility, an especially remarkable aspect of our generative framework is that it is very easy to implement, since both the linear model as
well as the nonlinear model have closed-form solutions given by Eq. 11 and Eq. 13, and
Eq. 17 and Eq. 19, respectively (the solutions will be available in similar closed-forms
in the case of other exponential family distributions). A block-diagram describing our
framework is shown in Figure 1. Note that another appealing aspect of our framework is
its modular architecture where each of the blocks in Figure 1 can make use of a suitable
method of one’s choice.
Fig. 1. Block-diagram of our framework. Ds denotes the seen class data (labeled, and optionally also unlabeled); As denotes seen class attributes; Au denotes unseen class attributes; Θ̂ s
denotes the estimated seen class parameters; Θ̂ u denotes the estimated unseen class parameters.
The last stage - transductive/few-shot refinement - is optional (Section 2.3 and 4.2)
2.3 Transductive/Semi-Supervised Setting
The procedure described in Section 2.2 relies only on the seen class data (labeled and,
optionally, also unlabeled). As we saw for the Gaussian case, the seen class data is used
to form empirical estimates of the parameters {µ̂c , σ̂ 2c }Sc=1 of the class-conditional distributions of seen classes, and then these estimates are used to learn the linear/nonlinear
regression functions f_µ and f_{σ²}. These functions are finally used to compute the parameters {µ̂_c, σ̂_c²}_{c=S+1}^{S+U} of the class-conditionals of the unseen classes. We call this setting
the inductive setting. Note that this procedure does not make use of any data from the
unseen classes. Sometimes, we may have access to unlabeled data from the unseen
classes.
Our generative framework makes it easy to leverage such unlabeled data from the
unseen classes to further improve upon the estimates {µ̂c , σ̂2c }S+U
c=S+1 of their classconditional distributions. In our framework, this can be done in two settings, transductive and semi-supervised, both of which leverage unlabeled data from unseen classes,
but in slightly different ways. If the unlabeled data is the unseen class test data itself, we
call it the transductive setting. If this unlabeled data from the unseen classes is different
from the actual unseen class test data, we call it the semi-supervised setting.
In either setting, we can use an Expectation-Maximization (EM) based procedure
that alternates between inferring the labels of unlabeled examples of unseen classes and
using the inferred labels to update the estimates of the parameters {µ̂_c, σ̂_c²}_{c=S+1}^{S+U} of
the distributions of unseen classes.
For the case when each class-conditional distribution is a Gaussian, this procedure
is equivalent to estimating a Gaussian Mixture Model (GMM) using the unlabeled data {x_n}_{n=1}^{N_u} from the unseen classes. The GMM is initialized using the estimates {µ̂_c, σ̂_c²}_{c=S+1}^{S+U} obtained from the inductive procedure of Section 2.2. Note that each of the U mixture components of this GMM corresponds to an unseen class.
The EM algorithm for the Gaussian case is summarized next
1. Initialize the mixing proportions π = [π_1, . . . , π_U] uniformly and set the mixture parameters as Θ = {µ̂_c, σ̂_c²}_{c=S+1}^{S+U}.
2. E Step: Infer the probability of each x_n belonging to each of the unseen classes c = S + 1, . . . , S + U as
p(y_n = c | x_n, π, Θ) = π_c N(x_n | µ̂_c, σ̂_c²) / Σ_{c′} π_{c′} N(x_n | µ̂_{c′}, σ̂_{c′}²)
3. M Step: Use the inferred class labels to re-estimate π and Θ = {µ̂_c, σ̂_c²}_{c=S+1}^{S+U}.
4. Go to step 2 if not converged.
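A minimal NumPy sketch of this EM refinement for diagonal Gaussians is given below. It is an illustrative implementation of the steps above (initialized from the inductive estimates), not the released code; the iteration count and numerical-stability constants are assumptions.

```python
import numpy as np

def log_gaussian_diag(X, mu, sigma2):
    """Log N(x | mu, diag(sigma2)) for every row of X. X: N x D, mu and sigma2: length D."""
    return -0.5 * (np.sum(np.log(2 * np.pi * sigma2))
                   + np.sum((X - mu) ** 2 / sigma2, axis=1))

def transductive_refine(X, mu, sigma2, n_iter=50, eps=1e-6):
    """EM refinement of the unseen-class Gaussians (diagonal covariance).
    X: N x D unlabeled unseen-class data; mu, sigma2: U x D inductive estimates."""
    U = mu.shape[0]
    pi = np.full(U, 1.0 / U)                              # step 1: uniform mixing proportions
    for _ in range(n_iter):
        # E step: responsibilities p(y_n = c | x_n, pi, Theta)
        log_r = np.stack([np.log(pi[c] + eps) + log_gaussian_diag(X, mu[c], sigma2[c])
                          for c in range(U)], axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M step: re-estimate pi, means, and diagonal variances
        Nc = r.sum(axis=0) + eps
        pi = Nc / Nc.sum()
        mu = (r.T @ X) / Nc[:, None]
        sigma2 = (r.T @ (X ** 2)) / Nc[:, None] - mu ** 2 + eps
    return pi, mu, sigma2
```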
Note that the same procedure can be applied even when each class-conditional distribution is some exponential family distribution other than Gaussian. The E and M steps
in the resulting mixture model are straightforward in that case as well. The E step will
simply require the Gaussian likelihood to be replaced by the corresponding exponential
family distribution’s likelihood. The M step will require doing MLE of the exponential
family distribution’s parameters, which has closed-form solutions.
2.4 Extension for Few-Shot Learning
In few-shot learning, we assume that a very small number of labeled examples may
also be available for the unseen classes [23,17]. The generative aspect of our framework, along with the fact that the data distribution is an exponential family distribution with a conjugate prior on its parameters, makes it very convenient for our model to be extended to this setting. The outputs {µ̂_c, σ̂_c²}_{c=S+1}^{S+U} of our generative zero-shot learning model can naturally serve as the hyper-parameters of a conjugate prior on the parameters of the class-conditional distributions of unseen classes, which can then be updated given a small number of labeled examples from the unseen classes. For example, in the Gaussian case, due to conjugacy, we are able to update the estimates {µ̂_c, σ̂_c²}_{c=S+1}^{S+U} in
a straightforward manner when provided with such labeled data. In particular, given a small number of labeled examples {x_n}_{n=1}^{N_c} from an unseen class c, µ̂_c and σ̂_c² can be easily updated as
µ_c^{(FS)} = (µ̂_c + Σ_{n=1}^{N_c} x_n) / (1 + N_c)    (22)
σ_c^{2 (FS)} = (N_c / σ² + 1 / σ̂_c²)^{−1}    (23)
where σ² = (1/N_c) Σ_{n=1}^{N_c} (x_n − µ̂_c)² denotes the empirical variance of the N_c observations from the unseen class c.
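Eqs. 22-23 translate directly into a few lines of NumPy. The sketch below is an illustrative, element-wise implementation of the update for one unseen class at a time, not the authors' code; it assumes N_c ≥ 1 and non-degenerate examples so that the empirical variance is positive.

```python
import numpy as np

def few_shot_update(mu_hat, sigma2_hat, X_c):
    """Eqs. 22-23: refine an unseen class's inductive estimates (mu_hat, sigma2_hat)
    using N_c labeled examples X_c of shape (N_c, D). All operations are element-wise
    over the D feature dimensions."""
    N_c = X_c.shape[0]
    mu_fs = (mu_hat + X_c.sum(axis=0)) / (1.0 + N_c)
    emp_var = np.mean((X_c - mu_hat) ** 2, axis=0)        # empirical variance around mu_hat
    sigma2_fs = 1.0 / (N_c / emp_var + 1.0 / sigma2_hat)
    return mu_fs, sigma2_fs
```

Because the update only accumulates sums and counts, it can also be applied repeatedly as new labeled examples arrive, which is what makes the online extension mentioned below straightforward.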
A particularly appealing aspect of our few-shot learning model outlined above is
that it can also be updated in an online manner as more and more labelled examples
become available from the unseen classes, without having to re-train the model from
scratch using all the data.
3 Related Work
Some of the earliest works on ZSL are based on predicting attributes for each example [13]. This was followed by a related line of work based on models that assume that
the data from each class can be mapped to the class-attribute space (a shared semantic
space) in which each seen/unseen class is also represented as a point [26,1,33]. The
mapping can be learned in various ways, such as via linear models, feed-forward neural networks, or convolutional neural networks. Predicting the label for a novel unseen
class example then involves mapping it to this space and finding the “closest” unseen
class. Some of the work on ZSL is aimed at improving the semantic embeddings of
concepts/classes. For example, [29] proposed a ZSL model to incorporate relational
information about concepts. In another recent work, [4] proposed a model to improve
the semantic embeddings using a metric learning formulation. A complementary line of
work to the semantic embedding methods is based on a “reverse” mapping, i.e., mapping the class-attribute to the observed feature space [32,37].
In contrast to such semantic embedding methods that assume that the classes are
collapsed onto a single point, our framework offers considerably more flexibility by
modelling each class using its own distribution. This makes our model more suitable for
capturing the intra-class variability, which the simple point-based embedding models
are incapable of handling.
Another popular approach for ZSL is based on modelling each unseen class as
a linear/convex combination of seen classes [20] or of a set of “abstract” or “basis”
classes [22,5]. The latter class of methods, in particular, can be seen as a special case of
our framework since, for our linear model, we can view the columns of the D × K regression weights as representing a set of K basis classes. Note however that our model
has such regression weights for each parameter of the class-conditional distribution,
allowing it to be considerably more flexible. Moreover, our framework is also significantly different in other ways, owing to its fully generative formulation, its ability to incorporate unlabeled data, its support for few-shot learning, and its ability to model different types of data using an appropriate exponential family distribution.
A very important issue in ZSL is the domain shift problem which may arise if the
seen and unseen classes come from very different domains. In these situations, standard
ZSL models tend to perform badly. This can be somewhat alleviated using some additional unlabeled data from the unseen classes. To this end, [11] provide a dictionary
learning based approach for learning unseen class classifiers in which the dictionary is
adapted to the unseen class domain. The dictionary adaptation is facilitated using unlabeled data from the unseen classes. In another related work, [8] leverage unlabeled
data in a transductive ZSL framework to handle the domain shift problem. Note that
our framework is robust to the domain shift problem due to its ability to incorporate unlabeled data from the unseen classes (the transductive setting). Our experimental results
corroborate this.
Semi-supervised learning for ZSL can also be used to improve the semantic embedding based methods. [16] provide a semi-supervised method that leverages prior
knowledge for improving the learned embeddings. In another recent work, [37] present
a model to incorporate unlabeled unseen class data in a setting where each unseen class
is represented as a linear combination of seen classes. [34] provide another approach,
motivated by applications in computer vision, that jointly facilitates the domain adaptation of attribute space and the visual space. Another semi-supervised approach presented in [15] combines a semisupervised classification model over the observed classes
with an unsupervised clustering model over unseen classes together to address the zeroshot multi-class classification.
In contrast to these models for which the mechanism for incorporating unlabeled
data is model-specific, our framework offers a general approach for doing this, while
also being simple to implement. Moreover, for large-scale problems, it can also leverage
more efficient solvers (e.g., gradient methods) for estimating the regression coefficients
associated with class-conditional distributions.
4 Experiments
We evaluate our generative framework for zero-shot learning (hereafter referred to as
GFZSL) on several benchmark data sets and compare it with a number of state-of-the-art baselines. We conduct our experiments on various problem settings, including
standard inductive zero-shot learning (only using seen class labeled examples), transductive zero-shot learning (using seen class labeled examples and unseen class unlabeled examples), and few-shot learning (using seen class labeled examples and a very
small number of unseen class labeled examples). We report our experimental results on
the following benchmark data sets:
– Animal with Attribute (AwA): The AwA data set contains 30475 images with 40 seen classes (training set) and 10 unseen classes (test set). Each class has a human-provided binary/continuous 85-dimensional class-attribute vector [12]. We use continuous class-attributes since prior works have found these to have more discriminative power.
– Caltech-UCSD Birds-200-2011 (CUB-200): The CUB-200 data set contains 11788
images with 150 seen classes (training set) and 50 unseen classes (test set). Each image has a binary 312-dimensional class-attribute vector, specifying the presence or absence of various attributes of that image [28]. The attribute vectors for all images
in a class are averaged to construct its continuous class-attribute vector [2]. We use
the same train/test split for this data set as used in [2].
– SUN attribute (SUN): The SUN data set contains 14340 images with 707 seen
classes (training set) and 10 unseen classes (test set). Each image is described by
a 102-dimensional binary class-attribute vector. Just like the CUB-200 data set,
we average the attribute vectors of all images in each class to get its continuous
attribute vector [10]. We use the same train/test split for this data set as used in
[10].
For image features, we considered both GoogleNet features [27] and VGG-19(4096)
fc7 features [25] and found that our approach works better with VGG-19. All of the
state-of-the-art baselines we compare with in our experiments use VGG-19 fc7 features
or GoogleNet features [27]. For the nonlinear (kernel) variant of our model, we use a
quadratic kernel. Our set of experiments include:
– Zero-Shot Learning: We consider both inductive ZSL as well as transductive ZSL.
• Inductive ZSL: This is the standard ZSL setting where the unseen class parameters are learned using only seen class data.
• Transductive ZSL: In this setting [34], we also use the unlabeled test data
while learning the unseen class parameters. Note that this setting has access to
more information about the unseen class; however, it is only through unlabeled
data.
– Few-Shot Learning: In this setting [23,17], we also use a small number of labelled
examples from each unseen class.
– Generalized ZSL: Whereas standard ZSL (as well as few-shot learning) assumes
that the test data can only be from the unseen classes, generalized ZSL assumes
that the test data can be from unseen as well as seen classes. This is usually a more
challenging setting [6] and most of the existing methods are known to be biased
towards predicting the seen classes.
We use the standard train/test split as given in the data description section. For selecting the hyperparameters, we further divide the training set into training and validation sets. In our model, we have two hyper-parameters, λ_µ and λ_{σ²}, which we tune using the validation set. For AwA, from the 40 seen classes, a random selection of 30 classes is used for the training set and 10 classes are used for the validation set. For CUB-200, from the 150 seen classes, 100 are used for the training set and the remaining 50 are used for the validation set. Similarly, for the SUN dataset, from the 707 seen classes, 697 are used for the training set and the remaining 10 are used for the validation set. We use cross-validation on the validation set to choose the best hyperparameters [λ_µ, λ_{σ²}] for each data set and use these for testing on the unseen classes.
4.1 Zero-Shot Learning
In our first set of experiments, we evaluate our model for zero-shot learning and compare with a number of state-of-the-art methods, for the inductive setting (which uses
only the seen class labelled data) as well as the transductive setting (which uses the
seen class data and the unseen class unlabeled data).
Inductive ZSL Table-1 shows our results for the inductive ZSL setting. The results of
the various baselines are taken from the corresponding papers. As shown in Table-1, on CUB-200 and SUN, both of our models (linear and nonlinear) perform better than all of the other state-of-the-art methods. On AwA, our model has only a marginally lower test accuracy as compared to the best performing baseline [34]. However, we also have an average improvement of 5.67% across all 3 data sets as compared to the overall best
baseline [34]. Among baselines using VGG-19 features (bottom half of Table-1), our
model achieves a 21.05% relative improvement over the best baseline on the CUB-200
data, which is considered to be a difficult data set with many fine-grained classes.
Method                    AwA           CUB-200       SUN           Average
Akata et al. [2]          66.70         50.1          –             –
Qiao et al. [21]          66.46±0.42    29±0.28       –             –
Xian et al. [31]          71.9          45.5          –             –
Changpimyo et al. [5]     72.9          54.7          62.7          63.43
Wang et al. [29]          75.99         33.48         –             –
Lampert et al. [14]       57.23         –             72.00         –
Romera and Torr [22]      75.32±2.28    –             82.10±0.32    –
Bucher et al. [4]         77.32±1.03    43.29±0.38    84.41±0.71    68.34
Z. Zhang et al. [35]      79.12±0.53    41.78±0.52    83.83±0.29    68.24
Wang et al. [30]          79.2±0.0      46.7±0.0      –             –
Z. Zhang et al. [34]      81.03±0.88    46.48±1.67    84.10±1.51    70.53
GFZSL: Linear             79.90         52.09         86.50         72.23
GFZSL: Nonlinear          80.83         56.53         86.50         74.59
Table 1. Accuracy (%) using different types of image features. Top: methods using deep features such as AlexNet and GoogleNet. Bottom: methods using deep VGG-19 features. The '–' indicates that the result was not reported.
In contrast to other models that embed the test examples in the semantic space and
then find the most similar class by doing a Euclidean distance based nearest neighbor
search, or models that are based on computing the similarity scores between seen and
unseen classes [33], for our models, finding the “most probable class” corresponds to
computing the distance of each test example from a distribution. This naturally takes
into account the shape and spread of the class-conditional distribution. This explains
the favourable performance of our model as compared to the other methods.
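Concretely, under the Gaussian instantiation this prediction rule amounts to an argmax over diagonal-Gaussian log-likelihoods, as in the following illustrative sketch (ours, with assumed array shapes):

```python
import numpy as np

def classify(X_test, mu, sigma2):
    """Assign each test example to the unseen class with the highest diagonal-Gaussian
    log-likelihood, y* = argmax_c log N(x | mu_c, diag(sigma2_c)).
    X_test: N x D, mu and sigma2: U x D."""
    ll = np.stack([
        -0.5 * (np.sum(np.log(2 * np.pi * sigma2[c]))
                + np.sum((X_test - mu[c]) ** 2 / sigma2[c], axis=1))
        for c in range(mu.shape[0])
    ], axis=1)                        # N x U log-likelihood matrix
    return np.argmax(ll, axis=1)      # index of the most probable unseen class
```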
Transductive Setting For the transductive ZSL setting [9,35,36], we follow the procedure
described in Section 2.3 to estimate parameters of the class-conditional distribution of
each unseen class. After learning the parameters, we find the most probable class for
each test example by evaluating its probability under each unseen class distribution and
assign it to the class under which it has the largest probability. Table-2 and 3 compare
our results from the transductive setting with other state-of-the-art baselines designed
for the transductive setting. In addition to accuracy, we also report precision and recall
results of our model and the other baselines (wherever available). As we can see from
Table-2 and 3, both of our models (linear and kernel) outperform the other baselines
on all the 3 data sets. Also comparing with the inductive setting results presented in
Table-1, we observe that our generative framework is able to very effectively leverage
unlabeled data and significantly improve upon the results of a purely inductive setting.
Method                                      AwA           CUB-200       SUN           Average
Guo et al. [9]                              78.47         –             82.00         –
Romera et al. [22] + Zhang et al. [36]      84.30         –             37.50         –
Zhang et al. [35] + Zhang et al. [36]       92.08±0.14    55.34±0.77    86.12±0.99    77.85
Zhang et al. [34] + Zhang et al. [36]       88.04±0.69    55.81±1.37    85.35±1.56    76.40
GFZSL: Linear                               94.20         57.14         87.00         79.45
GFZSL: Kernel                               94.25         63.66         87.00         80.63
Table 2. ZSL accuracy (%) obtained in the transductive setting, using VGG-19 features. The '–' indicates that the result was not reported in the original paper.
                                            Average Precision                             Average Recall
Method                                      AwA            CUB-200        SUN             AwA            CUB-200        SUN
Zhang et al. [35] + Zhang et al. [36]       91.37±14.75    57.09±27.91    85.96±10.15     90.28±8.08     55.73±31.80    86.00±13.19
Zhang et al. [34] + Zhang et al. [36]       89.19±15.09    57.20±25.96    86.06±12.36     86.04±9.82     55.77±26.54    85.50±13.68
GFZSL: Linear                               93.70          57.90          87.40           92.20          57.40          87.00
GFZSL: Kernel                               93.80          64.09          87.40           92.30          63.96          87.00
Table 3. ZSL precision and recall obtained in the transductive setting, using VGG-19 features: average precision and recall for each data set, with standard deviation over 100 iterations. Note: precision and recall scores are not available for Guo et al. [9] and Romera et al. [22] + Zhang et al. [36].
Dataset     Method    2             5             10            15            20
AwA         GFZSL     87.96±1.47    91.64±0.81    93.31±0.50    94.01±0.36    94.30±0.33
AwA         SVM       74.81         83.19         90.44         91.22         92.04
CUB-200     GFZSL     60.84±1.39    64.81±1.14    68.44±1.21    70.11±0.93    71.23±0.87
CUB-200     SVM       46.19         59.33         68.75         73.87         75.42
SUN         GFZSL     75.57±4.79    83.05±3.60    82.09±3.30    –             –
SUN         SVM       56.00         77.00         78.00         –             –
Table 4. Accuracy (%) in the few-shot learning setting: for each data set, the accuracies are reported using 2, 5, 10, 15, 20 labeled examples for each unseen class.
4.2 Few-shot Learning (FSL)
We next perform an experiment with the few-shot learning setting [23,17] where we
provide each model with a small number of labelled examples from each of the unseen
classes. For this experiment, we follow the procedure described in Section 2.4 to learn the parameters of the class-conditional distributions of the unseen classes. In particular, we train the inductive ZSL model (using only the seen class training data) and then refine the learned model further using a very small number of labelled examples from
the unseen classes (i.e., the few-shot learning setting).
To see the effect of knowledge transfer from the seen classes, we use a multiclass
SVM as a baseline that is provided with the same number of labelled examples from
each unseen class. In this experiment, we vary the number of labelled examples of
unseen classes from 2 to 20 (for SUN we only use 2, 5, and 10 due to the small number
of labelled examples). In Figure-2, we also compare with standard (inductive) ZSL,
which does not have access to the labelled examples from the unseen classes. Our results
are shown in Table-4 and Figure-2.
As shown in Table-4 (all data sets) and Figure-2, the classification accuracy on the
unseen classes shows a significant improvement over the standard inductive ZSL, even
with as few as 2 or 5 additional labelled examples per class. We also observe that the
few-shot learning method outperforms the multiclass SVM, which relies only on the labelled
data from the unseen classes. This demonstrates the advantage of the knowledge transfer
from the seen class data.
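For concreteness, the refinement step can be pictured as blending the attribute-predicted (inductive) estimate of an unseen class with its few labelled examples. The sketch below is a generic, hypothetical illustration of such an update for a Gaussian class mean (a simple precision-weighted shrinkage); the exact estimator used in the paper is the one defined earlier in the text.

```python
import numpy as np

def refine_class_mean(prior_mean, X_few, prior_weight=1.0):
    """Shrink the attribute-predicted class mean towards the empirical mean
    of the k labelled examples; prior_weight acts as a pseudo-count."""
    k = X_few.shape[0]
    empirical_mean = X_few.mean(axis=0)
    return (prior_weight * prior_mean + k * empirical_mean) / (prior_weight + k)
```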
[Figure 2 here: classification accuracy (%) on AwA plotted against the number of labeled data points (2-20) for FSL, IZSL, and SVM-FSL.]
Fig. 2. (On AwA data): A comparison of the classification accuracies of the few-shot learning variant of our model with multi-class SVM (training on labeled examples from seen classes) and the inductive ZSL.
4.3 Generalized Few-Shot Learning (GFSL)
We finally perform an experiment on the more challenging generalized few-shot learning setting [6], which assumes that test examples can come from seen as well as
unseen classes and is known to be notoriously hard [6]. In this setting, although
the ZSL models tend to do well on predicting test examples from seen classes, their performance on correctly predicting the unseen class examples is poor [6], since the trained
models are heavily biased towards predicting the seen classes.
One way to mitigate this issue could be to use some labelled examples from the
unseen classes (akin to what is done in few-shot learning). We, therefore, perform a
similar experiment as in Section 4.2. In Table-5, we show the results of our model on
classifying the unseen class test examples in this setting.
As shown in Table-5, our model’s accuracies on the generalized FSL task improve as
it gets to see labelled examples from unseen classes. However, it is still outperformed
by a standard multiclass SVM. The better performance of SVM can be attributed to
the fact that it is not biased towards the seen classes since the classifier for each class
(seen/unseen) is learned independently.
Our findings are also corroborated by other recent work on generalized FSL [6]
and suggest the need for more robust ways of handling this setting. We leave this
direction of investigation as possible future work.
Dataset    Method    2            5            10           15           20
AwA        GFZSL     25.32±2.43   37.42±1.60   43.20±1.39   45.09±1.17   45.96±1.09
AwA        SVM       40.84        60.81        75.36        77.00        77.10
CUB-200    GFZSL     6.64±0.87    15.12±1.17   22.02±0.76   25.03±0.71   26.47±0.83
CUB-200    SVM       25.97        37.98        47.10        53.87        54.42
SUN        GFZSL     1.17±1.16    4.20±1.77    9.48±2.22    -            -
SUN        SVM       9.94         20.00        27.00        -            -
Table 5. Accuracies (%) in the generalized few-shot learning setting.
5 Conclusion
We have presented a flexible generative framework for zero-shot learning, which is
based on modelling each seen/unseen class using an exponential family class-conditional
distribution. In contrast to the semantic embedding based methods for zero-shot learning which model each class as a point in a latent space, our approach models each
class as a distribution, where the parameters of each class-conditional distribution are
functions of the respective class-attribute vectors. Our generative framework allows
learning these functions easily using seen class training data (and optionally leveraging
additional unlabeled data from seen/unseen classes).
An especially appealing aspect of our framework is its simplicity and modular architecture (cf., Figure 1) which allows using a variety of algorithms for each of its
building blocks. As we showed, our generative framework admits natural extensions
to other related problems, such as transductive zero-shot learning and few-shot learning. It is particularly easy to implement and scale to a large number of classes, using
advances in large-scale regression. Our generative framework can also be extended to
jointly learn the class attributes from an external source of data (e.g., by learning an additional embedding model with our original model). This can be an interesting direction
of future work. Finally, although we considered a point estimation of the parameters of
class-conditional distributions, it is also possible to take a fully Bayesian approach for
learning these distributions. We leave this possibility as a direction for future work.
Acknowledgements: This work is supported by a grant from Tower Research CSR,
Dr. Deep Singh and Daljeet Kaur Fellowship, and Research-I Foundation, IIT Kanpur.
Vinay Verma acknowledges support from Visvesvaraya Ph.D. fellowship.
References
1. Akata, Z., Perronnin, F., Harchaoui, Z., and Schmid, C. Label-embedding for attribute-based classification. In CVPR (2013).
2. Akata, Z., Reed, S., Walter, D., Lee, H., and Schiele, B. Evaluation of output embeddings for fine-grained image classification. In CVPR (2015).
3. Brown, L. D. Fundamentals of statistical exponential families. Institute of Mathematical Statistics (1986).
4. Bucher, M., Herbin, S., and Jurie, F. Improving semantic embedding consistency by metric learning for zero-shot classification. arXiv preprint arXiv:1607.08085 (2016).
5. Changpinyo, S., Chao, W.-L., Gong, B., and Sha, F. Synthesized classifiers for zero-shot learning. In CVPR (2016).
6. Chao, W.-L., Changpinyo, S., Gong, B., and Sha, F. An empirical study and analysis of generalized zero-shot learning for object recognition in the wild. In ECCV (2016).
7. Friedman, J., Hastie, T., and Tibshirani, R. The elements of statistical learning, vol. 1. Springer series in statistics, Springer, Berlin, 2001.
8. Fu, Y., Hospedales, T. M., Xiang, T., and Gong, S. Transductive multi-view zero-shot learning. PAMI (2015).
9. Guo, Y., Ding, G., Jin, X., and Wang, J. Transductive zero-shot recognition via shared model space learning. In AAAI (2016).
10. Jayaraman, D., and Grauman, K. Zero-shot recognition with unreliable attributes. In NIPS (2014).
11. Kodirov, E., Xiang, T., Fu, Z., and Gong, S. Unsupervised domain adaptation for zero-shot learning. In ICCV (2015).
12. Krizhevsky, A., and Hinton, G. Learning multiple layers of features from tiny images.
13. Lampert, C. H., Nickisch, H., and Harmeling, S. Learning to detect unseen object classes by between-class attribute transfer. In CVPR (2009).
14. Lampert, C. H., Nickisch, H., and Harmeling, S. Attribute-based classification for zero-shot visual object categorization. PAMI (2014).
15. Li, X., and Guo, Y. Max-margin zero-shot learning for multi-class classification. In AISTATS (2015).
16. Li, X., Guo, Y., and Schuurmans, D. Semi-supervised zero-shot classification with label representation learning. In CVPR (2015).
17. Mensink, T., Gavves, E., and Snoek, C. G. Costa: Co-occurrence statistics for zero-shot classification. In CVPR (2014).
18. Mukherjee, T., and Hospedales, T. Gaussian visual-linguistic embedding for zero-shot recognition. In EMNLP (2016).
19. Murphy, K. P. Machine learning: a probabilistic perspective. MIT Press, 2012.
20. Norouzi, M., Mikolov, T., Bengio, S., Singer, Y., Shlens, J., Frome, A., Corrado, G. S., and Dean, J. Zero-shot learning by convex combination of semantic embeddings. ICLR (2014).
21. Qiao, R., Liu, L., Shen, C., and van den Hengel, A. Less is more: zero-shot learning from online textual documents with noise suppression. In CVPR (2016).
22. Romera-Paredes, B. An embarrassingly simple approach to zero-shot learning. In ICML (2015).
23. Salakhutdinov, R., Tenenbaum, J. B., and Torralba, A. Learning with hierarchical-deep models. PAMI (2013).
24. Schölkopf, B., and Smola, A. J. Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press, 2001.
25. Simonyan, K., and Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014).
26. Socher, R., Ganjoo, M., Manning, C. D., and Ng, A. Zero-shot learning through cross-modal transfer. In NIPS (2013).
27. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going deeper with convolutions. In CVPR (2015).
28. Wah, C., Branson, S., Welinder, P., Perona, P., and Belongie, S. The Caltech-UCSD Birds-200-2011 dataset.
29. Wang, D., Li, Y., Lin, Y., and Zhuang, Y. Relational knowledge transfer for zero-shot learning. In AAAI (2016).
30. Wang, Q., and Chen, K. Zero-shot visual recognition via bidirectional latent embedding. arXiv preprint arXiv:1607.02104 (2016).
31. Xian, Y., Akata, Z., Sharma, G., Nguyen, Q., Hein, M., and Schiele, B. Latent embeddings for zero-shot classification. In CVPR (2016).
32. Zhang, L., Xiang, T., and Gong, S. Learning a deep embedding model for zero-shot learning. arXiv preprint arXiv:1611.05088 (2016).
33. Zhang, Z., and Saligrama, V. Zero-shot learning via semantic similarity embedding. In ICCV (2015).
34. Zhang, Z., and Saligrama, V. Learning joint feature adaptation for zero-shot recognition. arXiv preprint arXiv:1611.07593 (2016).
35. Zhang, Z., and Saligrama, V. Zero-shot learning via joint latent similarity embedding. In CVPR (2016).
36. Zhang, Z., and Saligrama, V. Zero-shot recognition via structured prediction. In ECCV (2016).
37. Zhao, B., Wu, B., Wu, T., and Wang, Y. Zero-shot learning via revealing data distribution. arXiv preprint arXiv:1612.00560 (2016).
| 2 |
Rate-Interference Tradeoff in OFDM-based
Cognitive Radio Networks
arXiv:1801.07562v1 [eess.SP] 10 Jan 2018
Ebrahim Bedeer, Octavia A. Dobre, Mohamed H. Ahmed, and Kareem E. Baddour †
Faculty of Engineering and Applied Science, Memorial University, St. John’s, NL, Canada
† Communications Research Centre, Ottawa, ON, Canada
Email: {e.bedeer, odobre, mhahmed}@mun.ca, [email protected]
Abstract—In cognitive radio (CR) networks, secondary users
(SUs) are allowed to opportunistically access the primary users
(PUs) spectrum to improve the spectrum utilization; however, this
increases the interference levels at the PUs. In this paper, we consider an orthogonal frequency division multiplexing (OFDM)-based
CR network and investigate the tradeoff between increasing the
SU transmission rate (hence improving the spectrum utilization)
and reducing the interference levels at the PUs. We formulate
a new multiobjective optimization (MOOP) problem that jointly
maximizes the SU transmission rate and minimizes its transmit
power, while imposing interference thresholds to the PUs. Further,
we propose an algorithm to strike a balance between the SU
transmission rate and the interference levels to the PUs. The
proposed algorithm considers the practical scenario of knowing
partial channel state information (CSI) of the links between the
SU transmitter and the PUs receivers. Simulation results illustrate
the performance of the proposed algorithm and its superiority
when compared to the work in the literature.
I. I NTRODUCTION
The current spectrum underutilization problem is a result
of the traditional inefficient spectrum allocation policies rather
than the scarcity of the wireless radio spectrum [1]. To improve
the spectrum utilization, the concept of dynamic spectrum
access was proposed in recent years [1]. Cognitive radio (CR)
promoted this concept by allowing secondary (or unlicensed)
users (SUs) to opportunistically access the spectrum holes in
primary (or licensed) users (PUs) frequency spectrum, subject
to constrained degradations to the PUs performance [1]. Orthogonal frequency division multiplexing (OFDM) is widely
recognized as an attractive candidate for the SUs transmission
due to its capabilities in analyzing the spectral activities of
PUs [2].
The CR is capable of adapting its transmission to the
surrounding environment conditions, with two target objectives
[3]: 1) improving the spectrum utilization by maximizing the
transmission rate of SUs for a given bandwidth and 2) controlling the amount of interference leaked to the PUs receivers
due to the SUs transmission. Considering both objectives is
a challenging task, as they are conflicting, i.e., increasing
the transmission rate of SUs is accompanied by excessive
interference levels to PUs and vice versa. Therefore, there is a
tradeoff between the two objectives and it should be carefully
investigated in order to have a flexible design that improves the
overall performance of the CR networks. In the literature, this
design flexibility was not fully exploited, as all the proposed
algorithms focused on maximizing the SUs transmission rate,
with predefined thresholds for the leaked interference to PUs
(i.e., without minimizing the interference to PUs) [4]–[7].
In this paper, we provide a mathematical framework for the
rate-interference tradeoff in the OFDM-based CR networks.
This is achieved by formulating a multiobjective optimization
(MOOP) problem that jointly maximizes the SU transmission
rate and minimizes its transmit power. We additionally set
predefined interference thresholds for each PU as constraints.
We consider partial channel-state information (CSI) knowledge
on the links between the SU transmitter and the PUs receivers
and full CSI knowledge between the SU transmitter and
receiver pair. More specifically, for the SU transmitter and PUs
receivers links, we consider the following practical scenarios:
1) knowledge of the path loss and 2) knowledge of the path
loss and channel statistics (i.e., the fading distribution and
its parameters). For comparison purposes, we additionally
consider knowledge of the path loss and full CSI, providing an
upper bound on the SU achievable performance. We propose
a low complexity algorithm to solve the MOOP problem.
Simulation results show the performance of the proposed
algorithm and illustrate the SU performance degradation due
to partial CSI knowledge. Additionally, the results show the
advantages of using the proposed algorithm (in terms of the
energy efficiency and the leaked interference to PUs) when
compared to other algorithms proposed in the literature.
The remainder of the paper is organized as follows. Section
II introduces the system model. The MOOP problem is formulated and solved and the proposed algorithm is summarized
in Section III. Simulation results are presented in Section IV,
while conclusions are drawn in Section V.
II. S YSTEM M ODEL
A. System Description
The available spectrum is divided into L channels that
are licensed to L PUs. PUs do not necessarily fully occupy
their licensed spectrum temporally and/or spatially; hence, an
SU may access such spectrum holes as long as no harmful
interference occurs to frequency-adjacent PUs due to adjacent
channel interference (ACI) or to other PUs operating in the
same frequency band at distant location due to co-channel
interference (CCI) [3]. Without loss of generality, we assume
that the SU decides to use subchannel m of bandwidth Bm ;
this decision can be reached by visiting a database administered by a third party (e.g., telecomm. authorities), or by
optionally sensing the PUs radio spectrum. We assume that the
SU accesses subchannel m using OFDM with N subcarriers.
Unlike most of the work done in the literature [4]–[6],
we assume partial CSI knowledge on the links between the
SU transmitter and PUs receivers (this is because estimating
the instantaneous channel gains of such links is practically
challenging without the PUs cooperation). More specifically,
we assume: 1) knowledge of the path loss, which is practically
possible especially in applications with stationary nodes. In
such a case, the path loss exponent and the node locations can
be estimated with high accuracy [8]; and 2) knowledge of the
path loss and channel statistics (i.e., the fading distribution and
its parameters), which is a reasonable assumption for certain
wireless environments. For example, in non-line-of-sight urban
environments, a Rayleigh distribution is usually assumed for
the magnitude of the fading channel coefficients [7]. The case
of full CSI knowledge on the links between the SU transmitter
and PUs receivers represents an upper bound on the achievable
SU performance and is additionally provided in the numerical
results section to characterize the performance loss due to
the partial CSI knowledge. We should note that following
the common practice in the literature, we assume that the
instantaneous channel gains between the SU transmitter and
receiver pair are available through a delay- and error-free
feedback channel [4]–[7].
B. Modeling of the CCI and ACI constraints with partial CSI knowledge
1) Case 1—Knowledge of the path loss: The transmit power of the SU on subchannel m should be limited to a certain threshold P_th^(m) to protect the mth distant PU receiver, i.e., 10^(-0.1 PL(d_m)) Σ_{i=1}^{N} p_i ≤ P_th^(m), where PL(d_m) is the distance-based path loss in dB at distance d_m from the SU and p_i is the allocated power per subcarrier i, i = 1, ..., N. To reflect the SU transmitter's power amplifier limitations and/or to satisfy regulatory maximum power limits, the SU transmit power should be limited to a certain threshold P_th, i.e., Σ_{i=1}^{N} p_i ≤ P_th. Hence, the constraint on the total transmit power is formulated as Σ_{i=1}^{N} p_i ≤ [P_th, P_th^(m)/10^(-0.1 PL(d_m))]^-, where [x, y]^- represents min(x, y). To simplify the notation and without loss of generality, we assume that P_th^(m)/10^(-0.1 PL(d_m)) < P_th. Hence, the CCI constraint is written as

  Σ_{i=1}^{N} p_i ≤ P_th^(m) X_Case1^(m),    (1)

where X_Case1^(m) = 1/10^(-0.1 PL(d_m)) represents the channel knowledge coefficient from the SU transmitter to the mth PU receiver for the case of only knowing the path loss.
The ACI is mainly due to the power spectral leakage of the SU subcarriers to the PUs receivers. This depends on the power allocated to each SU subcarrier and the spectral distance between the SU subcarriers and the PUs receivers [2]. The ACI to the ℓth PU receiver should be limited to a certain threshold P_th^(ℓ) as 10^(-0.1 PL(d_ℓ)) Σ_{i=1}^{N} p_i ϖ_i^(ℓ) ≤ P_th^(ℓ), ℓ = 1, ..., L, where ϖ_i^(ℓ) = T_s ∫_{f_{i,ℓ}-B_ℓ/2}^{f_{i,ℓ}+B_ℓ/2} sinc²(T_s f) df, T_s is the SU OFDM symbol duration, f_{i,ℓ} is the spectral distance between the SU subcarrier i and the ℓth PU frequency band, B_ℓ is the bandwidth of the ℓth PU, and sinc(x) = sin(πx)/(πx). The ACI constraint can be further written as

  Σ_{i=1}^{N} p_i ϖ_i^(ℓ) ≤ P_th^(ℓ) X_Case1^(ℓ),    ℓ = 1, ..., L,    (2)

where X_Case1^(ℓ) = 1/10^(-0.1 PL(d_ℓ)) is the channel knowledge coefficient from the SU transmitter to the ℓth PU receiver for the case of only knowing the path loss.
2) Case 2—Knowledge of the path loss and channel statistics: The CCI constraint is written as |H_sp^(m)|² 10^(-0.1 PL(d_m)) Σ_{i=1}^{N} p_i ≤ P_th^(m), where H_sp^(m) is the channel gain to the distant mth PU receiver. Since H_sp^(m) is not perfectly known at the SU transmitter, the CCI constraint is limited below the threshold P_th^(m) with at least a probability of Ψ_th^(m). This is formulated as

  Pr( |H_sp^(m)|² 10^(-0.1 PL(d_m)) Σ_{i=1}^{N} p_i ≤ P_th^(m) ) ≥ Ψ_th^(m).    (3)

A non-line-of-sight propagation environment is assumed; therefore, the channel gain H_sp^(m) can be modeled as a zero-mean complex Gaussian random variable, and, hence, |H_sp^(m)|² follows an exponential distribution [9]. Accordingly, the statistical constraint in (3) can be evaluated as

  1 - exp( -ν^(m) P_th^(m) / (10^(-0.1 PL(d_m)) Σ_{i=1}^{N} p_i) ) ≥ Ψ_th^(m),    (4)

where 1/ν^(m) is the mean of the exponential distribution. Equation (4) can be further simplified as

  Σ_{i=1}^{N} p_i ≤ P_th^(m) X_Case2^(m),    (5)

where X_Case2^(m) = ν^(m) / ( -ln(1 - Ψ_th^(m)) 10^(-0.1 PL(d_m)) ) is the channel knowledge coefficient from the SU transmitter to the mth PU receiver for the case of knowing the path loss and the channel statistics. Similarly, the ACI constraint can be written as

  Σ_{i=1}^{N} p_i ϖ_i^(ℓ) ≤ P_th^(ℓ) X_Case2^(ℓ),    ℓ = 1, ..., L,    (6)

where X_Case2^(ℓ) = ν^(ℓ) / ( -ln(1 - Ψ_th^(ℓ)) 10^(-0.1 PL(d_ℓ)) ) is the channel knowledge coefficient to the ℓth PU for the case of knowing
the path loss and the channel statistics.
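The quantities entering constraints (1)-(6) are straightforward to compute numerically. The sketch below is our own illustration (not the authors' code) of the spectral leakage factor ϖ_i^(ℓ) and the channel knowledge coefficients for the two partial-CSI cases; all function and variable names are ours, and PL_dB denotes the distance-based path loss in dB.

```python
import numpy as np
from scipy.integrate import quad

def varpi(Ts, f_il, Bl):
    """ϖ_i^(ℓ) = Ts ∫ sinc²(Ts f) df over [f_iℓ - Bℓ/2, f_iℓ + Bℓ/2];
    note np.sinc(x) = sin(πx)/(πx), matching the definition in the text."""
    val, _ = quad(lambda f: np.sinc(Ts * f) ** 2, f_il - Bl / 2, f_il + Bl / 2)
    return Ts * val

def X_case1(PL_dB):
    # Case 1: only the path loss is known.
    return 1.0 / 10 ** (-0.1 * PL_dB)

def X_case2(PL_dB, nu, Psi_th):
    # Case 2: path loss plus Rayleigh-fading statistics (|H|² exponential with mean 1/ν).
    return nu / (-np.log(1.0 - Psi_th) * 10 ** (-0.1 * PL_dB))
```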
III. OPTIMIZATION PROBLEM AND PROPOSED ALGORITHM
Recently, MOOP has attracted researchers’ attention due
to its flexible and superior performance over single objective
optimization approaches [10]. For most of the MOOP problems, due to the contradiction and incommensurability of the
competing objective functions, it is not possible to find a single
solution that optimizes all the objective functions simultaneously. In other words, there is no solution that improves one of
the objective functions without deteriorating other objectives.
However, a set of non-dominated, Pareto optimal solutions
exists and it is the decision maker’s (the SU in our case)
responsibility to choose its preferred optimal solution [11]. We
solve the MOOP problem by linearly combining the normalized competing rate and transmit power objectives into a single
objective function. For that, positive weighting coefficients are
used [11]. These coefficients reflect the SU preferences regarding the
surrounding environment, the wireless application, and/or the
target performance.
We formulate a MOOP problem that jointly minimizes the
SU transmit power and maximizes its transmission rate, while
guaranteeing acceptable levels of CCI and ACI to the existing
PUs receivers, as
  min_{p_i} Σ_{i=1}^{N} p_i   and   max_{p_i} Σ_{i=1}^{N} log₂( 1 + p_i |H_i|² / (σ_n² + J_i) ),
  subject to  C1: Σ_{i=1}^{N} p_i ≤ P_th^(m) X^(m),
              C2: Σ_{i=1}^{N} p_i ϖ_i^(ℓ) ≤ P_th^(ℓ) X^(ℓ),  ℓ = 1, ..., L,
              C3: p_i ≥ 0,  i = 1, ..., N,    (7)

where X^(m) ∈ {X_Case1^(m), X_Case2^(m)} and X^(ℓ) ∈ {X_Case1^(ℓ), X_Case2^(ℓ)} represent the channel knowledge coefficients from the SU transmitter to the mth and ℓth PUs receivers, respectively, H_i is the channel gain of subcarrier i, i = 1, ..., N, between the SU transmitter and receiver pair, σ_n² is the variance of the additive white Gaussian noise (AWGN), and J_i is the interference from all PUs to the SU subcarrier i, i = 1, ..., N (it depends on the SU receiver windowing function and power spectral density of the PUs [4]-[7]). The MOOP problem in (7) can be written as a linear combination of the multiple normalized transmit power and rate objectives as

  min_{p_i}  α Σ_{i=1}^{N} p_i - (1 - α) Σ_{i=1}^{N} log₂(1 + γ_i p_i),   subject to C1-C3,    (8)

where γ_i = |H_i|² / (σ_n² + J_i) is the channel gain to noise plus interference ratio and α (0 ≤ α ≤ 1) is the weighting coefficient that represents the relative importance of the competing objectives, i.e., higher values of α favor minimizing the transmit power, while lower values of α favor maximizing the rate. It is worth mentioning that for α = 0 the MOOP problem in (8) reduces to the rate maximization problem in [4], [5], [7], while for α = 1, the optimal solution is zero as the objective is solely to minimize the transmit power. We assume that the SU chooses the proper value of α depending on the application and/or the surrounding environment. For example, if the transmission rate/time is crucial, then the SU chooses lower values of α. On the other hand, if reducing the transmit power/interference to existing PUs (as the sensing process is not fully reliable and/or the channel to the PUs is not perfectly known), protecting the environment, and, hence, the energy efficiency is important, then higher values of α are selected.
Proposition 1: The optimization problem in (8) is convex and the optimal solution is of the form

  p_i* = [ (1 - α) / (ln(2) α) - γ_i^(-1) ]^+,   i = 1, ..., N,    (9)

if Σ_{i=1}^{N} p_i* < P_th^(m) X^(m) and Σ_{i=1}^{N} p_i* ϖ_i^(ℓ) < P_th^(ℓ) X^(ℓ), ℓ = 1, ..., L, where [x]^+ represents max(0, x); and is of the form

  p_i* = [ (1 - α) / (ln(2) (α + λ_{N+1})) - γ_i^(-1) ]^+,   i = 1, ..., N,    (10)

if Σ_{i=1}^{N} p_i* ≥ P_th^(m) X^(m) and Σ_{i=1}^{N} p_i* ϖ_i^(ℓ) < P_th^(ℓ) X^(ℓ), ℓ = 1, ..., L, where λ_{N+1} is calculated to satisfy Σ_{i=1}^{N} p_i* = P_th^(m) X^(m); and is of the form

  p_i* = [ (1 - α) / (ln(2) (α + Σ_{ℓ=1}^{L} λ_{N+2}^(ℓ) ϖ_i^(ℓ))) - γ_i^(-1) ]^+,   i = 1, ..., N,    (11)

if Σ_{i=1}^{N} p_i* < P_th^(m) X^(m) and Σ_{i=1}^{N} p_i* ϖ_i^(ℓ) ≥ P_th^(ℓ) X^(ℓ), ℓ = 1, ..., L, where λ_{N+2}^(ℓ) is calculated to satisfy Σ_{i=1}^{N} p_i* ϖ_i^(ℓ) = P_th^(ℓ) X^(ℓ), ℓ = 1, ..., L; and is of the form

  p_i* = [ (1 - α) / (ln(2) (α + λ_{N+1} + Σ_{ℓ=1}^{L} λ_{N+2}^(ℓ) ϖ_i^(ℓ))) - γ_i^(-1) ]^+,   i = 1, ..., N,    (12)

if Σ_{i=1}^{N} p_i* ≥ P_th^(m) X^(m) and Σ_{i=1}^{N} p_i* ϖ_i^(ℓ) ≥ P_th^(ℓ) X^(ℓ), ℓ = 1, ..., L, where λ_{N+1} and λ_{N+2}^(ℓ) are calculated to satisfy Σ_{i=1}^{N} p_i* = P_th^(m) X^(m) and Σ_{i=1}^{N} p_i* ϖ_i^(ℓ) = P_th^(ℓ) X^(ℓ), ℓ = 1, ..., L, respectively.
Proof: See Appendix.
The proposed algorithm can be formally stated as follows:
Proposed Algorithm
1:  INPUT σ_n², H_i, α, P_th^(m), P_th^(ℓ), X^(m), and X^(ℓ), ℓ = 1, ..., L.
2:  for i = 1, ..., N do
3:      p_i* is given by (9).
4:  end for
5:  if Σ_{i=1}^{N} p_i* ≥ P_th^(m) X^(m) and Σ_{i=1}^{N} p_i* ϖ_i^(ℓ) < P_th^(ℓ) X^(ℓ) then
6:      p_i* is given by (10).
7:      λ_{N+1} is calculated to satisfy Σ_{i=1}^{N} p_i* = P_th^(m) X^(m).
8:  else if Σ_{i=1}^{N} p_i* < P_th^(m) X^(m) and Σ_{i=1}^{N} p_i* ϖ_i^(ℓ) ≥ P_th^(ℓ) X^(ℓ) then
9:      p_i* is given by (11).
10:     λ_{N+2}^(ℓ) is calculated to satisfy Σ_{i=1}^{N} p_i* ϖ_i^(ℓ) = P_th^(ℓ) X^(ℓ), ℓ = 1, ..., L.
11: else if Σ_{i=1}^{N} p_i* ≥ P_th^(m) X^(m) and Σ_{i=1}^{N} p_i* ϖ_i^(ℓ) ≥ P_th^(ℓ) X^(ℓ) then
12:     p_i* is given by (12).
13:     λ_{N+1} and λ_{N+2}^(ℓ) are calculated to satisfy Σ_{i=1}^{N} p_i* = P_th^(m) X^(m) and Σ_{i=1}^{N} p_i* ϖ_i^(ℓ) = P_th^(ℓ) X^(ℓ), ℓ = 1, ..., L, respectively.
14: end if
15: OUTPUT p_i*, i = 1, ..., N.
The proposed algorithm is briefly explained as follows. Steps 2 to 4 find the optimal solution assuming that both the CCI and ACI constraints are inactive. Based on this assumption, if the CCI constraint is active while the ACI constraints are inactive, the optimal solution is given by steps 5 to 7. Otherwise, if the CCI constraint is inactive and the ACI constraints are active, the optimal solution is given by steps 8 to 10. Finally, if both the CCI and ACI constraints are active, the solution is given by steps 11 to 13.
The complexity of the proposed algorithm can be analyzed as follows. The authors in [12] showed that the Lagrange multipliers λ_{N+1} and λ_{N+2}^(ℓ), ℓ = 1, ..., L, that satisfy the CCI and ACI constraints, respectively, can be obtained with linear complexity in the number of subcarriers N, i.e., O(N). Steps 2 to 4 therefore require a complexity of O(N); steps 5 to 7, 8 to 10, and 11 to 13 each require a complexity of O(N²). Thus, the worst-case computational complexity of the proposed algorithm is O(N) + O(N²) + O(N²) + O(N²) = O(N²).
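To make the structure of the solution concrete, the following sketch (our own illustration, with hypothetical names, not the authors' code) implements the simplest branch of the algorithm: it evaluates the closed form (9) and, if the CCI constraint is violated, bisects on λ_{N+1} as in (10) until the total power meets P_th^(m) X^(m). The ACI multipliers λ_{N+2}^(ℓ) of (11)-(12) can be found with the same bisection idea and are omitted here for brevity.

```python
import numpy as np

def allocate_power(gamma, alpha, P_cci, tol=1e-12):
    """gamma: (N,) gain-to-noise-plus-interference ratios; 0 < alpha < 1;
    P_cci = P_th^(m) * X^(m), the CCI budget on the total transmit power."""
    def p_of(lam):
        # Water-filling form of (9)/(10) for a given multiplier lam = λ_{N+1}.
        return np.maximum((1 - alpha) / (np.log(2) * (alpha + lam)) - 1.0 / gamma, 0.0)

    p = p_of(0.0)                       # unconstrained solution, eq. (9)
    if p.sum() <= P_cci:
        return p
    lo, hi = 0.0, 1.0                   # eq. (10): bisect λ_{N+1} on the CCI equality
    while p_of(hi).sum() > P_cci:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p_of(mid).sum() > P_cci:
            lo = mid
        else:
            hi = mid
    return p_of(hi)
```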
IV. NUMERICAL RESULTS
Without loss of generality, we assume that the OFDM SU coexists with a frequency-adjacent PU and a co-channel PU. The SU parameters are: number of subcarriers N = 128 and subcarrier spacing ∆f = 1.25/N MHz. The propagation path loss parameters are: exponent = 4, wavelength = 3×10⁸/(900×10⁶) = 0.33 meters, distance between the SU transmitter and receiver pair equal to 1 km, distance to the ℓth PU d_ℓ = 1.2 km, distance to the mth PU d_m = 5 km, and reference distance d₀ = 100 m. A Rayleigh fading environment is considered, where the average channel power gains between the SU transmitter and receiver pair E{|H_i|²}, between the SU transmitter and the receiver of the ℓth PU E{|H_sp^(ℓ)|²}, and between the SU transmitter and the receiver of the mth PU E{|H_sp^(m)|²} are set to 0 dB. The PU bandwidth is set to 312.5 kHz. The variance of the AWGN σ_n² is assumed to be 10⁻¹⁵ W and the PU signal is assumed to be an elliptically filtered white noise-like process [4], [7] of variance σ_n². Representative results are presented in this section, which were obtained through Monte Carlo simulations over 10⁴ channel realizations. Unless otherwise mentioned, the value of the probabilities Ψ_th^(m) and Ψ_th^(ℓ) is set to 0.9, P_th^(m) = 10⁻¹¹ W, and P_th^(ℓ) = 10⁻¹¹ W. The transmit power and transmission rate objectives are scaled during simulations so that they are approximately within the same range [11]. For convenience, presented numerical results are displayed in the original scales.
Fig. 1 shows the interference leaked to the mth PU receiver as a function of P_th^(m) for different values of α and for different degrees of CSI knowledge. As can be seen, increasing the value of α reduces the leaked interference to the mth PU for all the cases of CSI knowledge. This can be easily explained, as increasing α gives more weight to minimizing the transmit power objective function and less weight to maximizing the transmission rate objective function in (8). Accordingly, increasing α reduces the CCI to the mth PU receiver, but also the SU achievable rate. The interference leaked to the mth PU receiver increases linearly with increasing P_th^(m) for lower values of P_th^(m) and saturates for higher values of P_th^(m). This can be explained as follows. For lower values of P_th^(m), the interference leaked to the mth PU receiver is higher than the value of P_th^(m) and, hence, it is limited by the threshold value. On the other hand, for higher values of P_th^(m), the interference leaked to the mth PU receiver is below the threshold value as it is minimized by the proposed algorithm, and, hence, it is kept constant. As expected, knowing the full CSI allows the SU to exploit this knowledge and to transmit with higher power (without violating the interference constraints at the PUs) and higher rate (as shown in the discussion of Fig. 2); this represents an upper bound on the achievable performance. On the other hand, the partial CSI knowledge reduces the transmission opportunities of the SU in order not to violate the interference constraints. It is worth mentioning that the case of knowing only the path loss generates higher interference levels to the existing PUs when compared to the case of knowing the path loss and the channel statistics. This is because the latter case imposes predefined probabilities Ψ_th^(m) and Ψ_th^(ℓ) on violating the CCI and ACI constraints, respectively, while for the former case the CCI and ACI can be violated uncontrollably. Reducing the values of Ψ_th^(m) and Ψ_th^(ℓ) produces higher interference levels to the PUs; results are not provided here due to space limitations.
Fig. 1: Interference leaked to the mth PU (×10⁻¹² W) as a function of P_th^(m) for different values of α and for different degrees of CSI knowledge.
Fig. 2 depicts the SU achievable rate as a function of P_th^(m) for different values of α and for different degrees of CSI knowledge. Similar to the discussion of Fig. 1, the SU achievable rate saturates for higher values of P_th^(m). This is because the SU transmit power saturates in such a case. As expected, increasing the value of α decreases the SU achievable rate. Further, knowing the full CSI results in a higher transmission rate when compared to the partial CSI knowledge.
Fig. 2: SU rate (×∆f bps) as a function of P_th^(m) for different values of α and for different degrees of CSI knowledge.
Fig. 3 shows the interference leaked to the ℓth PU receiver as a function of P_th^(ℓ) for different values of α and for different degrees of CSI knowledge. As can be seen, increasing the value of P_th^(ℓ) increases the interference leaked to the ℓth PU. As expected, increasing the value of α reduces the interference leaked to the ℓth PU receiver, and knowing the full CSI enables the SU to transmit with higher power and higher transmission rates without violating the interference constraints. The interference leaked to the ℓth PU receiver does not saturate for higher values of P_th^(ℓ) as it is not included in the objective function.
Fig. 3: Interference leaked to the ℓth PU (×10⁻¹² W) as a function of P_th^(ℓ) for different values of α and for different degrees of CSI knowledge.
Fig. 4 compares the SU transmit power of the proposed algorithm with that of the work in [7]. It is worth mentioning that the optimization problem in [7] can be obtained by setting α = 0 in the MOOP problem in (8). After matching the operating conditions, one can see that the proposed algorithm produces lower SU transmit power; hence, lower interference levels to the mth and ℓth PU receivers are generated. On the other hand, the work in [7] achieves a higher SU transmission rate. However, in Fig. 5, the energy efficiency (in bits/joule) of the work in [7] and that of the proposed work are compared for the same operating conditions. As can be noticed, the proposed algorithm is more energy efficient when compared to the work in [7], with even less complexity (the complexity of the algorithm in [7] is O(N³)). The energy efficiency saturates over the same range over which the SU transmit power saturates, as seen in Fig. 4.
Fig. 4: SU transmit power (×10⁻³ W) as a function of P_th^(m): comparison between the proposed algorithm and the algorithm in [7], with the latter corresponding to α = 0.
Fig. 5: SU energy efficiency (×10⁶ bits/joule) as a function of P_th^(m): comparison between the proposed algorithm and the algorithm in [7], with the latter corresponding to α = 0.
V. CONCLUSIONS
In this paper, we considered an OFDM-based CR network and adopted a multiobjective optimization approach to investigate the tradeoff between increasing the SU transmission rate and reducing the SU transmit power (hence the interference to the PUs). This formulation is considered as a generalization of the work in the literature that focused only on maximizing the SU transmission rate. Simulation results showed the flexibility of the proposed algorithm, which can provide different SU rates and interference levels to the PUs. Further, results showed that the obtained solution is more energy efficient when compared with that of other works in the literature, at the cost of no additional complexity. In future work, we plan to extend the MOOP approach to the case of multiple SUs.
APPENDIX: PROOF OF PROPOSITION 1
The MOOP problem in (8) is convex and it can be solved by applying the Karush-Kuhn-Tucker (KKT) conditions (i.e., transforming the inequality constraints into equality constraints by adding non-negative slack variables) [13]. The Lagrangian function L(p, y, λ) is expressed as
  L(p, y, λ) = α Σ_{i=1}^{N} p_i - (1 - α) Σ_{i=1}^{N} log₂(1 + γ_i p_i) + Σ_{i=1}^{N} λ_i (-p_i + y_i²)
               + λ_{N+1} [ Σ_{i=1}^{N} p_i - P_th^(m) X^(m) + y_{N+1}² ]
               + Σ_{ℓ=1}^{L} λ_{N+2}^(ℓ) [ Σ_{i=1}^{N} p_i ϖ_i^(ℓ) - P_th^(ℓ) X^(ℓ) + (y_{N+2}^(ℓ))² ],    (13)

where y = [y_1², ..., y_N², y_{N+1}², (y_{N+2}^(ℓ))²]^T and λ = [λ_1, ..., λ_{N+1}, λ_{N+2}^(ℓ)]^T, ℓ = 1, ..., L, are the vectors of the slack variables and Lagrange multipliers of length N + L + 1. The optimal solution is found when ∇L(p, y, λ) = 0, i.e.,

  ∂L/∂p_i = α - (1 - α) / (ln(2)(p_i + γ_i^(-1))) - λ_i + λ_{N+1} + Σ_{ℓ=1}^{L} λ_{N+2}^(ℓ) ϖ_i^(ℓ) = 0,    (14a)
  ∂L/∂λ_i = -p_i + y_i² = 0,    (14b)
  ∂L/∂λ_{N+1} = Σ_{i=1}^{N} p_i - P_th^(m) X^(m) + y_{N+1}² = 0,    (14c)
  ∂L/∂λ_{N+2}^(ℓ) = Σ_{i=1}^{N} p_i ϖ_i^(ℓ) - P_th^(ℓ) X^(ℓ) + (y_{N+2}^(ℓ))² = 0,    (14d)
  ∂L/∂y_i = 2 λ_i y_i = 0,    (14e)
  ∂L/∂y_{N+1} = 2 λ_{N+1} y_{N+1} = 0,    (14f)
  ∂L/∂y_{N+2}^(ℓ) = 2 λ_{N+2}^(ℓ) y_{N+2}^(ℓ) = 0.    (14g)

It can be seen that (14a)-(14g) represent 3N + 2L + 2 equations in the 3N + 2L + 2 unknown components of the vectors p, y, and λ. Equation (14e) implies that either λ_i = 0 or y_i = 0, (14f) implies that either λ_{N+1} = 0 or y_{N+1} = 0, and (14g) implies that either λ_{N+2}^(ℓ) = 0 or y_{N+2}^(ℓ) = 0, ℓ = 1, ..., L.
Hence, eight possible cases exist, as follows:
—Case 1: Setting λ_i = 0 (i.e., p_i > 0), λ_{N+1} = 0 (i.e., Σ_{i=1}^{N} p_i < P_th^(m) X^(m)), and λ_{N+2}^(ℓ) = 0 (i.e., Σ_{i=1}^{N} p_i ϖ_i^(ℓ) < P_th^(ℓ) X^(ℓ)) results in the optimal solution of the form

  p_i* = [ (1 - α) / (ln(2) α) - γ_i^(-1) ]^+,   i = 1, ..., N.    (15)

—Case 2: Setting λ_i = 0 (i.e., p_i > 0), y_{N+1} = 0 (i.e., Σ_{i=1}^{N} p_i = P_th^(m) X^(m)), and λ_{N+2}^(ℓ) = 0 (i.e., Σ_{i=1}^{N} p_i ϖ_i^(ℓ) < P_th^(ℓ) X^(ℓ)) results in the optimal solution of the form

  p_i* = [ (1 - α) / (ln(2) (α + λ_{N+1})) - γ_i^(-1) ]^+,   i = 1, ..., N,    (16)

where λ_{N+1} is calculated to satisfy Σ_{i=1}^{N} p_i* = P_th^(m) X^(m).
—Case 3: Setting λ_i = 0 (i.e., p_i > 0), λ_{N+1} = 0 (i.e., Σ_{i=1}^{N} p_i < P_th^(m) X^(m)), and y_{N+2}^(ℓ) = 0 (i.e., Σ_{i=1}^{N} p_i ϖ_i^(ℓ) = P_th^(ℓ) X^(ℓ)) results in the optimal solution of the form

  p_i* = [ (1 - α) / (ln(2) (α + Σ_{ℓ=1}^{L} λ_{N+2}^(ℓ) ϖ_i^(ℓ))) - γ_i^(-1) ]^+,   i = 1, ..., N,    (17)

where λ_{N+2}^(ℓ) are calculated to satisfy Σ_{i=1}^{N} p_i ϖ_i^(ℓ) = P_th^(ℓ) X^(ℓ), ℓ = 1, ..., L.
—Case 4: Setting λ_i = 0 (i.e., p_i > 0), y_{N+1} = 0 (i.e., Σ_{i=1}^{N} p_i = P_th^(m) X^(m)), and y_{N+2}^(ℓ) = 0 (i.e., Σ_{i=1}^{N} p_i ϖ_i^(ℓ) = P_th^(ℓ) X^(ℓ)) results in the optimal solution of the form

  p_i* = [ (1 - α) / (ln(2) (α + λ_{N+1} + Σ_{ℓ=1}^{L} λ_{N+2}^(ℓ) ϖ_i^(ℓ))) - γ_i^(-1) ]^+,   i = 1, ..., N,    (18)

where λ_{N+1} and λ_{N+2}^(ℓ) are calculated to satisfy Σ_{i=1}^{N} p_i* = P_th^(m) X^(m) and Σ_{i=1}^{N} p_i ϖ_i^(ℓ) = P_th^(ℓ) X^(ℓ), respectively.
—Case 5: Setting y_i = 0 (i.e., p_i = 0), λ_{N+1} = 0 (i.e., Σ_{i=1}^{N} p_i < P_th^(m) X^(m)), and λ_{N+2}^(ℓ) = 0 (i.e., Σ_{i=1}^{N} p_i ϖ_i^(ℓ) < P_th^(ℓ) X^(ℓ)) results in the optimal solution p_i* = 0.
—Case 6: Setting y_i = 0 (i.e., p_i = 0), y_{N+1} = 0 (i.e., Σ_{i=1}^{N} p_i = P_th^(m) X^(m)), and λ_{N+2}^(ℓ) = 0 (i.e., Σ_{i=1}^{N} p_i ϖ_i^(ℓ) < P_th^(ℓ) X^(ℓ)) is invalid, as it implies that p_i* = 0, which violates Σ_{i=1}^{N} p_i* = P_th^(m) X^(m), P_th^(m) ≠ 0.
—Case 7: Setting y_i = 0 (i.e., p_i = 0), λ_{N+1} = 0 (i.e., Σ_{i=1}^{N} p_i < P_th^(m) X^(m)), and y_{N+2}^(ℓ) = 0 (i.e., Σ_{i=1}^{N} p_i ϖ_i^(ℓ) = P_th^(ℓ) X^(ℓ)) is invalid, as it implies that p_i* = 0, which violates Σ_{i=1}^{N} p_i ϖ_i^(ℓ) = P_th^(ℓ) X^(ℓ), P_th^(ℓ) ≠ 0, ℓ = 1, ..., L.
—Case 8: Setting y_i = 0 (i.e., p_i = 0), y_{N+1} = 0 (i.e., Σ_{i=1}^{N} p_i = P_th^(m) X^(m)), and y_{N+2}^(ℓ) = 0 (i.e., Σ_{i=1}^{N} p_i ϖ_i^(ℓ) = P_th^(ℓ) X^(ℓ)) is invalid, as it implies that p_i* = 0, which violates Σ_{i=1}^{N} p_i* = P_th^(m) X^(m), P_th^(m) ≠ 0, and Σ_{i=1}^{N} p_i ϖ_i^(ℓ) = P_th^(ℓ) X^(ℓ), P_th^(ℓ) ≠ 0, ℓ = 1, ..., L.
The solution p∗i satisfies the KKT conditions [13], and,
hence, it is an optimal solution (the proof is not provided due
to space limitations).
REFERENCES
[1] E. Bedeer, M. Marey, O. Dobre, and K. Baddour, “On partially overlapping coexistence for dynamic spectrum access in cognitive radio,” in
Proc. IEEE CAMAD, Jun. 2011, pp. 143–147.
[2] T. A. Weiss and F. K. Jondral, “Spectrum pooling: an innovative strategy
for the enhancement of spectrum efficiency,” IEEE Commun. Mag.,
vol. 42, no. 3, pp. S8–14, Mar. 2004.
[3] S. Haykin, “Cognitive radio: brain-empowered wireless communications,” IEEE J. Sel. Areas Commun., vol. 23, no. 2, pp. 201–220, Feb.
2005.
[4] G. Bansal, M. Hossain, and V. Bhargava, “Optimal and suboptimal power
allocation schemes for OFDM-based cognitive radio systems,” IEEE
Trans. Wireless Commun., vol. 7, no. 11, pp. 4710–4718, Nov. 2008.
[5] Y. Zhang and C. Leung, “An efficient power-loading scheme for OFDMbased cognitive radio systems,” IEEE Trans. Veh. Technol., vol. 59, no. 4,
pp. 1858–1864, May 2010.
[6] A. Attar, O. Holland, M. Nakhai, and A. Aghvami, “Interference-limited
resource allocation for cognitive radio in orthogonal frequency-division
multiplexing networks,” IET Commun., vol. 2, no. 6, pp. 806–814, Jul.
2008.
[7] G. Bansal, M. Hossain, and V. Bhargava, “Adaptive power loading
for OFDM-based cognitive radio systems with statistical interference
constraint,” IEEE Trans. Wireless Commun., no. 99, pp. 1–6, Sep. 2011.
[8] N. Salman, A. Kemp, and M. Ghogho, “Low complexity joint estimation
of location and path-loss exponent,” IEEE Wireless Commun. Lett.,
vol. 1, no. 4, pp. 364–367, Aug. 2012.
[9] J. Proakis, Digital Communications. McGraw-Hill, New York NY, 2001.
[10] E. Bedeer, O. Dobre, M. H. Ahmed, and K. E. Baddour, “Joint optimization of bit and power loading for multicarrier systems,” IEEE Wireless
Commun. Lett., vol. 2, no. 4, pp. 447–450, Aug. 2013.
[11] K. Miettinen, Nonlinear Multiobjective Optimization. Springer, 1999.
[12] D. P. Palomar and J. R. Fonollosa, “Practical algorithms for a family of
waterfilling solutions,” IEEE Trans. Signal Process., vol. 53, no. 2, pp.
686–695, Feb. 2005.
[13] S. Boyd and L. Vandenberghe, Convex Optimization.
Cambridge
University Press, 2004.
| 3 |
Automated Synthesis of Safe and Robust PID
Controllers for Stochastic Hybrid Systems
Fedor Shmarov1 , Nicola Paoletti2 , Ezio Bartocci3 , Shan Lin4 , Scott A. Smolka2 , and
Paolo Zuliani1
arXiv:1707.05229v2 [cs.SY] 7 Sep 2017
1 School of Computing, Newcastle University, UK
{f.shmarov,paolo.zuliani}@ncl.ac.uk
2 Department of Computer Science, Stony Brook University, NY, USA
{nicola.paoletti,sas}@cs.stonybrook.edu
3 Faculty of Informatics, TU Wien, Austria
[email protected]
4 Department of Electrical and Computer Engineering, Stony Brook University, NY, USA
[email protected]
Abstract. We present a new method for the automated synthesis of safe and robust Proportional-Integral-Derivative (PID) controllers for stochastic hybrid systems. Despite their widespread use in industry, no automated method currently
exists for deriving a PID controller (or any other type of controller, for that matter) with safety and performance guarantees for such a general class of systems.
In particular, we consider hybrid systems with nonlinear dynamics (Lipschitz-continuous ordinary differential equations) and random parameters, and we synthesize PID controllers such that the resulting closed-loop systems satisfy safety
and performance constraints given as probabilistic bounded reachability properties. Our technique leverages SMT solvers over the reals and nonlinear differential equations to provide formal guarantees that the synthesized controllers
satisfy such properties. These controllers are also robust by design since they
minimize the probability of reaching an unsafe state in the presence of random
disturbances. We apply our approach to the problem of insulin regulation for
type 1 diabetes, synthesizing controllers with robust responses to large random
meal disturbances, thereby enabling them to maintain blood glucose levels within
healthy, safe ranges.
1 Introduction
Proportional-Integral-Derivative (PID) controllers are among the most widely deployed and well-established feedback-control techniques. Application areas are diverse
and include industrial control systems, flight controllers, robotic manipulators, and
medical devices. The PID controller synthesis problem entails finding the values of
its control parameters (proportional, integral and derivative gains) that are optimal in
terms of providing stable feedback control to the target system (the plant) with desired
response behavior. Despite the limited number of parameters, this problem is far from
trivial, due to the presence of multiple (and often conflicting) performance criteria that
a controller is required to meet (e.g., normal transient response, stability).
Developing PID controllers for cyber-physical systems is even more challenging
because their dynamics are typically hybrid, nonlinear, and stochastic in nature. Moreover, it is imperative that the closed-loop controller-plus-plant system is safe (i.e., does
not reach a bad state) and robust (i.e., exhibits desired behavior under a given range
of disturbances). To the best of our knowledge, however, the current techniques for
synthesizing PID controllers (see e.g., [33,9,13]) simply ignore these issues and do not
provide any formal guarantees about the resulting closed-loop system.
In this paper, we present a new framework for the automated synthesis of PID controllers for stochastic hybrid systems such that the resulting closed-loop system provably satisfies a given (probabilistic) safety property in a robust way with respect to
random disturbances. Specifically, we formulate and tackle two different, yet complementary, problems: controller synthesis, i.e., find a PID controller that minimizes the
probability of violating the property, thus ensuring robustness against random perturbations; and maximum disturbance synthesis, i.e., find, for a given controller, the largest
disturbance that the resulting control system can sustain without violating the property.
To the best of our knowledge, we are the first to present a solution to these problems
(see also the related work in Section 6) with formal guarantees.
It is well known that safety verification is an inherently difficult problem for nonlinear hybrid systems — it is in general undecidable, hence it must be solved using
approximation methods. Our technique builds on the frameworks of delta-satisfiability
[16] and probabilistic delta-reachability [31] to reason formally about nonlinear and
stochastic dynamics. This enables us to circumvent undecidability issues by returning
solutions with numerical guarantees up to an arbitrary user-defined precision.
We express safety and performance constraints as probabilistic bounded reachability properties, and encode the synthesis problems as SMT formulae over ordinary
differential equations. This theory adequately captures, besides the reachability properties, the hybrid nonlinear dynamics that we need to reproduce, and leverages appropriate
SMT solvers [17,30] that can solve the delta-satisfiability problem for such formulae.
We demonstrate the utility of our approach on an artificial pancreas case study,
i.e. the closed-loop insulin regulation for type 1 diabetes. In particular, we synthesize
controllers that can provide robust responses to large random meal disturbances, while
keeping the blood glucose level within healthy, safe ranges.
To summarize, in this paper, we make the following main contributions:
– We provide a solution to the PID controller synthesis and maximum disturbance
synthesis problems using an SMT-based framework that supports hybrid plants with
nonlinear ODEs and random parameters.
– We encode in the framework safety and performance requirements, and state the
corresponding formal guarantees for the automatically synthesized PID controllers.
– We demonstrate the practical utility of our approach by synthesizing provably safe
and robust controllers for an artificial pancreas model.
2 Background
Hybrid systems extend finite-state automata by introducing continuous state spaces and
continuous-time dynamics [2]. They are especially useful when modeling systems that
combine discrete and continuous behavior such as cyber-physical systems, including
biomedical devices (e.g., infusion pumps and pacemakers). In particular, continuous
dynamics is usually expressed via (solutions of) ordinary differential equations (ODEs).
To capture a wider and more realistic family of systems, in this work we consider hybrid
systems whose behavior depends on both random and nondeterministic parameters,
dubbed stochastic parametric hybrid systems (SPHS) [31]. In particular, our synthesis
approach models both the target system and its controller as a single SPHS. It is thus
important to adopt a formalism that allows random and nondeterministic parameters:
the former are used to model system disturbances and plant uncertainties, while the
latter are used to constrain the search space for the controller synthesis.
Definition 1 (SPHS) [31]. A Stochastic Parametric Hybrid System is a tuple H = ⟨Q, ϒ, X, P, Y, R, jump, goal⟩, where
– Q = {q0, · · · , qm} is the set of modes (discrete states) of the system;
– ϒ ⊆ {(q, q′) : q, q′ ∈ Q} is the set of possible mode transitions (discrete dynamics);
– X = [u1, v1] × · · · × [un, vn] × [0, T] ⊂ R^(n+1) is the continuous system state space;
– P = [a1, b1] × · · · × [ak, bk] ⊂ R^k is the parameter space of the system, which is represented as P = PR × PN, where PR is the domain of random parameters and PN is the domain of nondeterministic parameters (and either domain may be empty);
– Y = {yq(p) : q ∈ Q, p ∈ X × P} is the continuous dynamics, where yq : X × P → X;
– R = {g(q,q′)(p) : (q, q′) ∈ ϒ, p ∈ X × P} is the set of 'reset' functions g(q,q′) : X × P → X × P defining the continuous state at time t = 0 in mode q′ after taking the transition from mode q;
and predicates (or relations)
– jump(q,q0 ) (p) is true iff the discrete transition (q, q0 ) ∈ ϒ may occur upon reaching
state (p, q) ∈ X × P × Q,
– goalq (p) is true iff p ∈ X × P is a goal state for mode q.
The goal predicate is the same for all modes and is used to define the safety requirements for the controller synthesis (see (4.6) in Section 4). We assume that the SPHS has
an initial state (x0 , q0 ) ∈ X × Q. The continuous dynamics Y is given as an initial-value
problem with Lipschitz-continuous ODEs over a bounded time domain [0, T ], which
have a unique solution for any given initial condition p ∈ X × P (by the Picard-Lindelöf
theorem). System parameters are treated as variables with zero derivative, and thus are
part of the initial conditions. Finally, parameters may be random discrete/continuous
(capturing system disturbances and uncertainties) with an associated probability measure, and/or nondeterministic (i.e. the parameters to synthesize), in which case only
their bounded domain is known.
Probabilistic Delta-Reachability: For our purposes we need to consider probabilistic
bounded reachability: what is the probability that a SPHS (which models system and
controller) reaches a goal state in a finite number of discrete transitions? Reasoning
about reachability in nonlinear hybrid systems entails deciding first-order formulae over
the reals. It is well known that such formulae are undecidable when they include, e.g.,
trigonometric functions. A relaxed notion of satisfiability (δ-satisfiability [16]) can be
utilized to overcome this hurdle, and SMT solvers such as dReal [17] and iSAT-ODE
[10] can “δ-decide” a wide variety of real functions, including transcendental functions
and solutions of nonlinear ODEs. (Essentially, those tools implement solving procedures that are sound and complete up to a given arbitrary precision.)
A probabilistic extension of bounded reachability in SPHSs was presented in [31],
which basically boils down to measuring the goal set, i.e. the set of parameter points
for which the system satisfies the reachability property. Recall that the set of goal states
for a SPHS is described by its goal predicate. When nondeterministic parameters are
present, the system may exhibit a range of reachability probabilities, depending on the
value of the nondeterministic parameters. That is, the reachability probability is given by a function Pr(ν) = ∫_{G(ν)} dP, defined for any ν ∈ PN, where G(ν) is the goal set and
P is the probability measure of the random parameters. The ProbReach tool utilizes the
notion of δ-satisfiability when computing the goal set, thereby computing probabilistic
δ-reachability [30]. In particular, ProbReach computes probability enclosures for the
range of function Pr over parameter sets N ⊆ PN , i.e., intervals [a, b] such that
  ∀ν ∈ N : Pr(ν) ∈ [a, b],    (2.1)
where 0 ≤ a ≤ b ≤ 1 (but a = b can only be achieved in very special cases, of course). To
solve our synthesis problems we leverage ProbReach’s formal approach and statistical
approach for the computation of probability enclosures.
Formal Approach: ProbReach guarantees that the returned enclosures satisfy (2.1) formally and numerically [30]. In particular, any enclosure either has a desired width
ε ∈ Q+ , or the size of the corresponding parameter box N ⊆ PN is smaller than a given
lower limit. The computational complexity of this approach increases exponentially
with the number of parameters, so it might not be feasible for large systems.
Statistical Approach: It trades computational complexity with correctness guarantees
[31], by solving approximately the problem of finding a value ν∗ for the nondeterministic parameters that minimizes (maximizes) the reachability probability Pr:
  ν∗ ∈ arg min_{ν∈PN} Pr(ν)    or    ν∗ ∈ arg max_{ν∈PN} Pr(ν).    (2.2)
ProbReach returns an estimate ν̂ for ν∗ and a probability enclosure [a, b] that are statistically and numerically guaranteed to satisfy:
  Prob(Pr(ν̂) ∈ [a, b]) > c,    (2.3)
where 0 < c < 1 is an arbitrary confidence parameter. In general, the size of the enclosure [a, b] cannot be arbitrarily chosen due to undecidability reasons, although it
may be possible to get tighter enclosures by increasing the numerical precision of
δ-reachability. Also, the statistical approach utilizes a Monte Carlo (Cross Entropy)
method, so it cannot guarantee that ν̂ is a global optimum, i.e., that satisfies (2.2).
PID control: A PID control law is the sum of three kinds of control actions, Proportional, Integral and Derivative actions, each of which depends on the error value, e,
i.e. the difference between a target trajectory, or setpoint sp, and the measured output
of the system y. At time t, the resulting control law u(t) and error e(t) are given by:
  u(t) = K_p e(t) + K_i ∫_0^t e(τ) dτ + K_d ė(t)  (the P, I, and D actions, respectively),   e(t) = sp(t) − y(t),    (2.4)
where constants K p , Ki and Kd are called gains and fully characterize the PID controller.
The above control law assumes a continuous time domain, which is quite common
in the design stage of a PID controller. Alternatively, PID control can be studied over
discrete time, where the integral term is replaced by a sum and the derivative by a
finite difference. However, the analysis of discrete-time PID controllers is impractical
for non-trivial time bounds because they induce a discrete transition for each time step,
and thus, they directly affect the unrolling/reachability depth required for the bounded
reachability analysis, which is at the core of our synthesis method.
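To make the discrete-time variant concrete, a minimal fixed-step update corresponding to (2.4) is sketched below (our own illustration; the gains and the sampling period dt are placeholders, not values from the paper). The integral is accumulated as a running sum and the derivative is a backward finite difference, exactly the substitutions mentioned above.

```python
class PID:
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement            # e(t) = sp(t) - y(t)
        self.integral += error * self.dt          # running sum approximating ∫ e dτ
        derivative = (error - self.prev_error) / self.dt  # finite-difference ė(t)
        self.prev_error = error
        return self.Kp * error + self.Ki * self.integral + self.Kd * derivative
```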
3 PID Control of Hybrid Plants
[Fig. 1. PID control loop: the plant SPHS in feedback with the P, I, and D actions of the controller.]
We formally characterize the system given by the feedback loop between a plant SPHS H and a PID controller, the so-called closed-loop system (see Figure 1). We would like to stress that we support plants specified as hybrid systems, given that a variety of systems naturally exhibit hybrid dynamics (regardless of the controller). For instance, in the artificial pancreas case study of Section 5,
discrete modes are used to describe different meals, while the glucose metabolism is
captured by a set of ODEs.
We assume that the controller is an additive input and can manipulate only one of
the state variables of H, xu , and that for each mode q of H, there is a measurement
function hq that provides the output of the system at q. To enable synthesis, we further
assume that the PID controller gains k = (K p , Ki , Kd ) are (unknown) nondeterministic
parameters with domain K. To stress this dependency, below we use the notation u(k,t)
to denote the PID control law of Equation 2.4.
Definition 2 (PID-SPHS control system). Let H = ⟨Q, ϒ, X, P, Y, R, jump, goal⟩ be a plant SPHS, and let u be a PID controller (2.4) with gain parameters k ∈ K ⊂ R³. For q ∈ Q, let hq : X → R be the corresponding measurement function. Let xu be the manipulated state variable, iu ∈ {1, . . . , n} be the corresponding index in the state vector, and sp : [0, t] → R be the desired setpoint. The PID-SPHS control system with plant H is the SPHS H ∥ u = ⟨Q, ϒ, X, P × K, Y′, R′, jump, goal⟩, where
– Y′ = {y′q(p, k, t) : q ∈ Q, p ∈ X × P, k ∈ K, t ∈ [0, T]}, where the continuous dynamics of each state variable with index i = 1, . . . , n is given by
    y′_{q,i}(p, k, t) = y_{q,i}(p, t) + u(k, t)  if i = iu,   and   y′_{q,i}(p, k, t) = y_{q,i}(p, t)  otherwise,
  where y_{q,i} is the corresponding continuous dynamics in the plant SPHS H, and u(k, t) is the PID law described in (2.4), with error e(t) = sp(t) − hq(y′q(p, k, t)); and
– R′ = {g′_{(q,q′)}(p, k, t) : (q, q′) ∈ ϒ, p ∈ X × P, k ∈ K, t ∈ [0, T]}, where g′_{(q,q′)}(p, k, t) = g_{(q,q′)}(p, t), i.e. the reset g′_{(q,q′)} is not affected by the controller parameters k and is equal to the corresponding reset of the plant H, g_{(q,q′)}.
In other words, the PID-SPHS control system is obtained by applying the same
PID controller to the continuous dynamics of each discrete mode of the hybrid plant,
meaning that the PID-SPHS composition produces the same number of modes of the
plant SPHS. We remark that external disturbances as well as plant uncertainties can be
encoded through appropriate random variables in the plant SPHS.
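To illustrate Definition 2 operationally, the sketch below simulates one mode of the closed loop with a simple fixed-step Euler scheme, assuming the PID output enters as an additive term in the dynamics of the manipulated state variable x_iu. This is our own illustrative reading, not the paper's SMT encoding; the plant right-hand side, the measurement function h_q, and the setpoint sp are user-supplied placeholders, and `pid` is any object exposing a step(setpoint, measurement) method such as a discrete-time PID update.

```python
import numpy as np

def simulate_closed_loop(plant_rhs, h_q, sp, pid, x0, iu, T, dt):
    """Fixed-step Euler simulation of the PID-SPHS loop within a single mode."""
    xs = [np.asarray(x0, dtype=float)]
    t = 0.0
    while t < T:
        x = xs[-1]
        u = pid.step(sp(t), h_q(x))            # e(t) = sp(t) - h_q(x), then PID law
        dx = np.asarray(plant_rhs(t, x), dtype=float)
        dx[iu] += u                            # additive control input on x_iu
        xs.append(x + dt * dx)
        t += dt
    return np.array(xs)
```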
4 Safe and Robust PID Controller Synthesis
In this section we first illustrate the class of synthesis properties of interest, able to capture relevant safety and performance objectives. Second, we formulate the PID control
synthesis problem and the related problem of maximum disturbance synthesis.
We remark that our main objective is designing PID controllers with formal safety
guarantees, i.e. a given set of bad states should never be reached by the system, or
reached with very small probability. Similarly, we aim to synthesize controllers able to
guarantee, by design, prescribed performance levels. For instance, the designer might
need to keep the settling time within strict bounds, or avoid large overshoot.
To this purpose, we consider two well-established performance measures, the fundamental index (FI) and the weighted fundamental index (FIw) [24,25]^5, defined by:

  FI(t) = ∫_0^t (e(τ))^2 dτ,    FIw(t) = ∫_0^t τ^2 · (e(τ))^2 dτ.    (4.5)
FI and FIw quantify the cumulative error between output and set-point, thus providing a measure of how much the system deviates from the desired behavior. Crucially, they also indirectly capture key transient response measures such as the steady-state error, i.e. the value of e(t) when t → ∞, or the maximum overshoot, i.e. the highest deviation from the setpoint^6. In fact, small FI values typically indicate good transient response (e.g. small overshoot or short rise-time), while FIw weighs errors with the corresponding time, in this way stressing steady-state errors.

^5 FI and FIw are also known as "integral of square error" and "integral of square time weighted square error", respectively.
^6 In PID theory, transient response measures are often evaluated after applying a step function to the set-point. However, we do not restrict ourselves to this scenario.
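As an aside, the indices in (4.5) are straightforward to evaluate numerically on a sampled error trace; the following helper (our own sketch, using trapezoidal quadrature, not the tool's implementation) is one possible realization.

import numpy as np

def fundamental_indices(t, e):
    # FI(t) = integral of e(tau)^2 and FIw(t) = integral of tau^2 * e(tau)^2,
    # both evaluated on a sampled error trace via trapezoidal quadrature.
    t = np.asarray(t, dtype=float)
    e2 = np.asarray(e, dtype=float) ** 2
    return np.trapz(e2, t), np.trapz(t ** 2 * e2, t)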
We now formulate the main reachability property for the synthesis of safe and robust controllers, which is expressed by predicate goal. The property captures the set of bad states that the controller should avoid (predicate bad), as well as performance constraints through upper bounds FI^max, FIw^max ∈ R+ on the allowed values of FI and FIw, respectively, and is given by:

  goal = bad ∨ (FI > FI^max) ∨ (FIw > FIw^max).    (4.6)

In the case that the designer is not interested in constraining FI or FIw, we allow FI^max and FIw^max to be set to +∞.
We now introduce the PID controller synthesis problem that aims at synthesizing
the control parameters yielding the minimal probability of reaching the goal (i.e. the
undesired states). Importantly, this corresponds to minimizing the effects on the plant
of random disturbances, that is, to maximizing the robustness of the resulting system.
We remark that the unrolling depth and the goal predicate are implicit in the reachability probability function Pr (see Section 2).
Problem 1 (PID controller synthesis). Given a PID-SPHS control system H ∥ u with unknown control parameters k ∈ K, find the parameters k* that minimize the probability of reaching the goal:

  k* ∈ arg min_{k ∈ K} Pr(k).
By the duality between safety and reachability, Problem 1 is equivalent to synthesizing controllers that maximize the probability that ¬goal always holds. If H ∥ u has no random parameters (but only nondeterministic parameters), then Problem 1 is equivalent to synthesizing, if it exists, a controller that makes goal unsatisfiable.
As previously explained, the control parameters k that we aim to synthesize must be defined as nondeterministic parameters in the SPHS H ∥ u. Crucially, we can employ both the formal and the statistical approach to solve this problem.
In general, it is not possible to know the exact minimizing parameter because of the
inherent undecidability. However, using the formal approach one could select the synthesized controller parameter k∗ as the midpoint of the parameter box whose enclosure
has the least midpoint. Through the following proposition, we show that this solution
can be made arbitrarily precise when all of the returned enclosures have length ≤ ε, the
user-defined parameter that determines the desired length of the enclosure as explained
in Section 2 (however, this cannot be always guaranteed).
Proposition 1. Suppose that the enclosures returned by the formal approach all have length ≤ ε. Let P* be the actual minimal probability, and let k* be the solution of the formal approach for Problem 1. Then, it holds that

  Pr(k*) < P* + (3/2) ε.
Proof. See Appendix A.
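In outline, the selection rule described above (take the midpoint of the parameter box whose probability enclosure has the least midpoint) can be sketched as follows; representing boxes and enclosures as plain interval tuples is an assumption of this sketch, not ProbReach's actual interface.

def synthesize_controller(boxes):
    # boxes: list of (k_box, (p_lo, p_hi)) pairs, where k_box is a list of
    # per-dimension intervals for (Kp, Ki, Kd) and (p_lo, p_hi) is the
    # probability enclosure computed for that box. Return the midpoint of the
    # box whose enclosure has the least midpoint, as described above.
    def mid(iv):
        return 0.5 * (iv[0] + iv[1])
    best_box, _ = min(boxes, key=lambda item: mid(item[1]))
    return [mid(dim) for dim in best_box]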
On the other hand, the statistical algorithm returns an over-approximation P̂ of the
minimum probability, c-confidence interval [P̂] such that P̂ ∈ [P̂], and synthesized parameters k∗ whose reachability probability is included in [P̂] with probability at least c,
as per Equations 2.2 and 2.3.
Below, we define the maximum disturbance synthesis problem, aimed at finding,
given a concrete controller, the maximum disturbance value that the resulting control
system can support without violating a given property. This problem is complementary
to the PID synthesis problem, since it allows the designer to formally evaluate the robustness of a known controller, possibly synthesized in a previous step. Specifically, we
assume that the disturbance is represented by a vector of nondeterministic parameters
d in the plant SPHS, and that d ranges over some bounded domain D.
Problem 2 (Maximum disturbance synthesis). Given a PID-SPHS control system H ∥ u with known control parameters k* ∈ K and unknown disturbance d ∈ D, and a probability threshold p, find the highest disturbance d* for which the probability of reaching the goal does not exceed p, i.e. such that:

  d* = max {d ∈ D | Pr(d) ≤ p}.
By the duality between safety and reachability, the probability of reaching goal is below p if and only if the probability that ¬goal always holds is above 1 − p. If H ∥ u has no random parameters (but only nondeterministic parameters), then Problem 2 reduces to finding the largest disturbance for which the PID-SPHS system either reaches or does not reach the goal.
Note that the maximum disturbance synthesis problem is fundamentally different
from the controller synthesis problem, because the kind of parameters that we seek
to synthesize represent external factors that cannot be controlled. That is why we are
interested in knowing the maximum (worst-case) value they can attain such that the
requirements are met with given probability constraints. In particular, we restrict to
upper-bound constraints because we want to limit the probability of reaching a given
goal (undesired) state, even though lower bound constraints can be equally supported
by the synthesis method.
Problem 2 is solved through the formal approach, which allows identifying the parameter boxes whose probability enclosures are guaranteed to be below the threshold p, i.e., they are intervals of the form [Pmin, Pmax] with Pmax ≤ p. Then, the synthesized parameter d* is selected as the highest value among all such parameter boxes.
It follows that the returned d* is guaranteed to meet the probability constraint (Pr(d*) ≤ p), but, due to the iterative refinement, d* under-estimates the actual maximum disturbance. In this sense, d* is a safe under-approximation. The reason is that there might exist some "spurious" parameter boxes [d] (not returned by the algorithm), i.e. such that p lies within the corresponding probability enclosure [P] and [d] contains a disturbance value d′ that is higher than the synthesized d* and that, at the same time, meets the constraint Pr(d′) ≤ p.
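Schematically, and again assuming boxes and enclosures are represented as plain interval pairs rather than through ProbReach's actual data structures, the selection of d* could look as follows.

def max_safe_disturbance(boxes, p):
    # boxes: list of (d_interval, (p_lo, p_hi)) pairs over the disturbance
    # domain D. Keep only the boxes whose enclosure is guaranteed to lie below
    # the threshold p, and return the largest disturbance value they contain
    # (a safe under-approximation of the true maximum, as discussed above).
    safe = [d for d, (p_lo, p_hi) in boxes if p_hi <= p]
    return max(hi for (lo, hi) in safe) if safe else None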
The statistical approach cannot be applied in this case, because it relies on the Cross
Entropy method, which is designed for estimation and optimization purposes and is not
suitable for decision problems. Note indeed that the probability bound ≤ p induces a
Boolean (and not quantitative) property.
5 Case Study: Artificial Pancreas
We evaluate our method on the closed-loop control of insulin treatment for Type 1
diabetes (T1D), also known as the artificial pancreas (AP) [20]. Together with model
predictive control, PID is the main control technique for the AP [32,22], and is found
as well in commercial devices [23].
The main requirement for the AP is to keep blood glucose (BG) levels within tight,
healthy ranges, typically between 70-180 mg/dL, in order to avoid hyperglycemia (BG
above the healthy range) and hypoglycemia (BG below the healthy range). While some
temporary, postprandial hyperglycemia is typically admissible, hypoglycemia leads to
severe health consequences, and thus, it should be avoided as much as possible. This is
a crucial safety requirement, which we will incorporate in our synthesis properties.
The AP consists of a continuous glucose monitor that provides glucose measurements to a control algorithm regulating the amount of insulin injected by the insulin
pump. The pump administers both basal insulin, a low and continuous dose that covers
insulin needs outside meals, and bolus insulin, a single high dose for covering meals.
Meals represent indeed the major disturbance in insulin control, which is why state-of-the-art commercial systems^7 can only regulate basal insulin and still require explicit meal announcements by the patient for bolus insulin. To this purpose, robust control
methods have been investigated [28,34,27], since they are able to minimize the impact
of input disturbances (in our case, meals) on the plant (the patient). Thus, they have the
potential to provide full closed-loop control of bolus insulin without manual dosing by
the patient, which is inherently error-prone and hence, dangerous. Our method for the
synthesis of safe and robust controllers is therefore particularly meaningful in this case.
5.1 Plant Model
To model the continuous system’s dynamics (e.g., glucose and insulin concentrations),
we consider the well-established nonlinear model of Hovorka et al. [21].
At time t, the input to the system is the infusion rate of bolus insulin, u(t), which
is computed by the PID controller. The system output y(t) is given by state variable
Q1 (t) (mmol), describing the amount of BG in the accessible compartment, i.e. where
measurements are taken, for instance using finger-stick blood samples. For simplicity,
we did not include a model of the continuous glucose monitor (see e.g. [35]) that instead
measures glucose in the tissue fluid, but we assume continuous access to blood sugar
values. The state-space representation of the system is as follows:
  ẋ(t) = F(x(t), u(t), DG),    y(t) = Q1(t)    (5.7)
where x is the 8-dimensional state vector that evolves according to the nonlinear ODE
system F (see Appendix B for the full set of equations and parameters). The model
assumes a single meal starting at time 0 and consisting of an amount DG of ingested
carbohydrates. Therefore, parameter DG represents our input disturbance.
^7 MINIMED 670G by Medtronic, https://www.medtronicdiabetes.com/products/minimed-670g-insulin-pump-system
Instead of the BG mass Q1 (t), in the discussion of the results we will mainly evaluate the BG concentration G(t) = Q1 (t)/VG , where VG is the BG distribution volume.
The error function of the PID controller is defined as e(t) = sp − Q1 (t) with the
constant set point sp corresponding to a BG concentration of 110 mg/dL. Multiple
meals can be modeled through a stochastic parametric hybrid system with one mode
for each meal. In particular, we consider a one-day scenario consisting of three random
meals (breakfast, lunch and dinner), resulting in the SPHS of Figure 2.
[Fig. 2. Stochastic parametric hybrid system modelling a scenario of 3 meals over 24 hours. Above each edge, we report the corresponding jump conditions, below, the resets. Diagram: Meal 1 (initial reset DG := DG1) transitions to Meal 2 on jump condition t = T1 with resets (DG := DG2) ∧ (t := 0), and Meal 2 transitions to Meal 3 on jump condition t = T2 with resets (DG := DG3) ∧ (t := 0).]
The model features five random, normally-distributed parameters: the amount of
carbohydrates of each meal, DG1 ∼ N (40, 10), DG2 ∼ N (90, 10) and DG3 ∼ N (60, 10),
and the waiting times between meals, T1 ∼ N (300, 10) and T2 ∼ N (300, 10).
A meal containing DG1 grams of carbohydrates is consumed at time 0. When the
time in the first mode reaches T1 minutes the system makes a transition to the next
mode Meal 2 where the value of the variable DG is set to DG2 and the time is reset to
0. Similarly, the system transitions from mode Meal 2 to Meal 3, resetting variables
DG and t to DG3 and 0, respectively. All remaining variables are not reset at discrete
transitions.
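For simulation purposes, the random scenario of Figure 2 can be instantiated by sampling its parameters directly; the sketch below mirrors the distributions stated above (we read the second argument of N(·, ·) as a standard deviation, which is an assumption of the sketch, and the helper name is our own).

import numpy as np

def sample_meal_scenario(rng=None):
    # Sample one instantiation of the three-meal scenario of Figure 2:
    # carbohydrate amounts DG1, DG2, DG3 (g) and waiting times T1, T2 (min).
    # The second argument of each normal is interpreted as a standard deviation.
    if rng is None:
        rng = np.random.default_rng()
    dg1, dg2, dg3 = rng.normal(40, 10), rng.normal(90, 10), rng.normal(60, 10)
    t1, t2 = rng.normal(300, 10), rng.normal(300, 10)
    # Mode sequence (name, DG in force, dwell time); the last mode runs until
    # the end of the simulated day.
    return [("Meal 1", dg1, t1), ("Meal 2", dg2, t2), ("Meal 3", dg3, None)]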
Basal insulin and initial state: The total insulin infusion rate is given by u(t) + ub where
u(t) is the dose computed by the PID controller, and ub is the basal insulin. As typically
done, the value of ub is chosen in order to guarantee a steady-state BG value of Q1 = sp,
and the steady state thus obtained is used as the initial state of the system.
We denote by C0 the basal controller that switches off the PID controller and applies only ub (i.e., Kp, Ki and Kd are equal to 0).
5.2 Experiments
We apply the formal and statistical techniques of ProbReach to synthesize the controller parameters Kp, Kd and Ki (Problem 1) and the maximum safe disturbance DG (Problem 2), considering the probabilistic reachability property of Section 4. All experiments in this section were conducted on a 32-core (Intel Xeon 2.90GHz) Ubuntu 16.04 machine, and the obtained results for the synthesized controllers are summarized in Table 1. We also validate and assess the performance of the controllers over multiple random instantiations of the meals, as reported in Figure 3.
PID controller synthesis. Typical healthy glucose levels vary between 4 and 10 mmol/L.
Since avoiding hypoglycemia (G(t) < 4 mmol/L) is the main safety requirement of the
artificial pancreas, while (temporary) hyperglycemia can be tolerated and is inescapable
after meals, we will consider a BG range of [4, 16] for our safety properties. In this way
we protect against both hypoglycemia and very severe levels of hyperglycemia.
Given that the basal insulin level is insufficient to cover meal disturbances, the basal
controller C0 prevents hypoglycemia but causes severe hyperglycemia when a large
meal is consumed (DG > 80) or when the BG level is not low enough by the time the
next meal is consumed (see Figure 3).
We used the statistical engine of ProbReach to synthesize several controllers (see Table 1), over domains Kd ∈ [−10^−1, 0], Ki ∈ [−10^−5, 0] and Kp ∈ [−10^−3, 0], which
minimize the probability of reaching a bad state at any time instant in the modes Meal 1,
Meal 2 and Meal 3 (reachability depth of 0, 1 or 2, respectively).
The set of unsafe glucose ranges is captured by predicate bad = G(t) ∉ [4, 16]. Controller C1 was synthesized considering only safety requirements, corresponding to the reachability specification goal = bad (see Equation 4.6). On the other hand, controllers C2, C3 and C4 were obtained taking into account also performance constraints, by using the default specification (4.6): goal = bad ∨ (FI > FI^max) ∨ (FIw > FIw^max). Thresholds FI^max and FIw^max have been set to gradually stricter values, respectively to 3.5 × 10^6 and 70 × 10^9 for C2, 3 × 10^6 and 50 × 10^9 for C3, and 2.7 × 10^6 and 30 × 10^9 for C4.
#    Kd (×10^2)   Ki (×10^7)   Kp (×10^4)   CPUsyn    P                    CPUP     Dmax_G1   CPUmax
C0   0            0            0            0         [0.97322, 1]         176      69.4      2,327
C1   -6.02        -3.53        -6.17        92,999    [0.19645, 0.24645]   4,937    88.07     3,682
C2   -5.73        -3.00        -6.39        156,635   [0.31307, 0.36307]   64,254   87.62     3,664
C3   -6.002       -1.17        -6.76        98,647    [0.65141, 0.70141]   59,215   88.23     3,881
C4   -6.24        -7.55        -5.42        123,726   [0.97149, 1]         11,336   88.24     3,867

Table 1. Results of PID controller synthesis, where: # – name of the synthesized controller; Kd, Ki and Kp – synthesized values of the gain constants characterizing the corresponding controller (Problem 1); CPUsyn – CPU time in seconds for synthesizing the controller parameters; P – 99%-confidence interval for the reachability probability; CPUP – CPU time in seconds for computing P for the synthesized controller; Dmax_G1 – synthesized maximum meal disturbance for which the system never reaches the unsafe state; CPUmax – CPU time in seconds for obtaining Dmax_G1.
Due to the high computational complexity of the artificial pancreas model, the controller synthesis was performed in two steps. First, the values of Kp, Ki and Kd were
synthesized using a coarse precision (i.e., desired width for confidence intervals P) for
computing the probability estimates during the nondeterministic parameter search. Second, the confidence intervals for the obtained controllers were computed with a higher
precision. The values of CPUsyn and CPUP in Table 1 represent CPU times used for
solving these two steps. The high computation times are due to the fact that the solvers
incorporated by ProbReach solve ODEs in a guaranteed manner, which is, for general Lipschitz-continuous ODEs, a PSPACE-complete problem and thus the main bottleneck of the implemented algorithms.
Besides C0, which unsurprisingly yields the highest probability of safety violation (highest P for the reachability probability), the results in Table 1 evidence that controllers C1, . . . , C4 fail to maintain the safe state with increasingly higher probability. As we shall
see in more detail later, this behaviour is mostly due to the performance constraints that
become harder and harder to satisfy.
Maximum disturbance synthesis. We solve Problem 2 for each of the obtained controllers in Table 1. We consider a domain of [0, 120] for the maximum meal disturbance, and apply the formal approach of ProbReach for synthesizing the maximum size Dmax_G1 of the first meal, such that, given any disturbance DG1 ∈ [0, Dmax_G1], the system does not reach the unsafe state within 12 hours. Note that this corresponds to setting the probability threshold p of Problem 2 to 0. Since we are interested in just one meal, we consider a reachability depth of 0 (path length of 1) for the bounded reachability property.
The results in Table 1 indicate that applying a PID controller increases the size of
the allowed meal from approximately 69g of the basal controller to about 88g, and at
the same time, the difference between the synthesized controllers is negligibly small.
Although introducing a controller does not increase the maximum disturbance dramatically with respect to the basal case, PID control decreases the BG level sufficiently so that a subsequent meal of similar size can be consumed without the
risk of experiencing severe hyperglycemia. In contrast, C0 does not bring the glucose
level low enough before the following meal.
Note that, being normally distributed with mean 90 g, the second random meal exceeds the obtained maximum disturbances, which explains why the synthesized controllers fail with some probability to avoid unsafe states.
Performance and safety evaluation. In this experiment, we evaluate safety and performance of the controllers by simulating 1,000 instantiations of the random meals. The resulting glucose profiles and statistics are reported in Figure 3. No hypoglycemia episode (G < 4) was registered.
Plots evidence that all four synthesized controllers (C1 , . . . ,C4 ) perform dramatically
better than the basal controller C0 , which stays, on the average, 23.59% of the time
in severe hyperglycemia (see index tbad ). In particular, all the traces simulated for C0
violate the safe BG constraints G ∈ [4, 16] (100% value of %bad ).
On the other hand, controllers C1 , . . . ,C4 violate safe BG constraints for 17-22% of
their traces, but this happens only for a very short while (no more than 0.45% of the
time) after the second (the largest) meal. This comes as no surprise since we have already formally proved that the second meal exceeds the allowed maximum meal disturbance.
C0 has the worst performance in terms of FI and FIw , with mean FI and FIw values
(indices FI and FIw , resp.) significantly larger than those of C1 , . . . ,C4 . Among the
synthesized controllers, C3 has the best steady-state behavior (as visible in Figure 3,
plot d), keeping the glucose level very close to the set point towards the end of the
simulation. C3 indeed yields the best mean FIw value (index FIw), while the worst steady-state behavior is observed for C4. On the other hand, mean FI values are very
similar, meaning that C1 , . . . ,C4 maintain the BG levels equally far from the set point
on the average.
One would expect C4 to have the best performance in terms of FIw, since it was synthesized with the strictest FIw constraint (FIw^max = 30 × 10^9). This constraint is, however, too strong to be satisfied, as demonstrated by the 100% value of index %FIw>FIw^max (see Figure 3), implying that all traces fail to satisfy FIw ≤ FIw^max. In general, we observe that strengthening the performance constraints leads to higher chances of violating them (see the last three indices of Figure 3). We conclude that performance constraints (and their violation) largely contribute to the reachability probabilities computed by ProbReach (see Table 1) for C2, C3 and C4, whose traces violate the FI or FIw constraints 28%, 67%, and 100% of the time, respectively.
[Fig. 3 panels: (a) C0, (b) C1, (c) C2, (d) C3, (e) C4 — simulated BG profiles for each controller.]

      tbad     %bad    mean FI (×10^-6)   mean FIw (×10^-9)   %FI>FI^max   %FIw>FIw^max   %FI>FI^max ∨ FIw>FIw^max
C0    23.59%   100%    20.27              653.89              NA           NA             NA
C1    0.45%    22%     3.21               66.32               NA           NA             NA
C2    0.45%    21.4%   3.21               60.91               28.5%        14%            28.5%
C3    0.51%    24.2%   3.24               44.93               67.2%        21.7%          67.2%
C4    0.35%    17.3%   3.21               129.05              86.5%        100%           100%

Fig. 3. BG profiles simulated for 1,000 random meals (shaded blue lines). Grey areas indicate healthy BG ranges (G ∈ [4, 16]). Dashed black lines indicate the ideal setpoint. tbad: mean proportion of time where G ∉ [4, 16] (all traces yielded G > 4, i.e. no hypoglycemia). %bad: proportion of traces violating G ∈ [4, 16]. FI and FIw columns: mean FI and FIw, respectively. %FI>FI^max, %FIw>FIw^max and %FI>FI^max ∨ FIw>FIw^max: proportion of traces violating, respectively, the FI constraint, the FIw constraint, and at least one of the two. In the original figure, the best value for each index is highlighted in bold.
6 Related Work
A number of approaches have been proposed for the PID control of nonlinear and
stochastic systems. Among these, nonlinear PID control [33] defines the controller gains
as nonlinear functions of the system state, even though performance guarantees have
been established only for subclasses of nonlinear systems. Adaptive PID (APID) control [13] supports nonlinear plants with partly unknown dynamics, but no requirements
can be guaranteed by design since the unknown dynamics is estimated via sampling the
plant output. In contrast, we can synthesize controllers with guaranteed performance for
a large class of nonlinear systems (Lipschitz-continuous) while retaining the complete
system dynamics. This allows for a fully model-based approach to controller synthesis, which is key in safety-critical applications, where, on the contrary, the model-free
online tuning of APID is potentially dangerous.
PID control for Markov jump systems, i.e. where the plant is a linear system with
stochastic coefficients, is solved as a convex optimization problem in [18,19], while
in [9], robust PID control for stochastic systems is reduced to a constrained nonlinear optimization problem. Compared to these approaches, we support models where
stochasticity is restricted to random (both discrete and continuous) parameters, with
nondeterministic (i.e., arbitrary) parameters and much richer nonlinear dynamics. Another key strength of our method with respect to the above techniques is that design
specifications are given in terms of probabilistic reachability properties. These provide
rigor and superior expressiveness and can encode common performance indices for PID
controllers [25], as shown in Section 4.
Other related work includes the Simplex architecture [29] where, whenever the plant
is at risk of entering an unsafe state, the system switches from a high-performance advanced controller to a pre-certified (safe) baseline controller (with worse performance),
leading to a potential trade-off between safety and performance. In our approach, performance and safety are instead equal cohorts in the synthesis process. Unlike Simplex,
in the Control Barrier Function (CBF) approach [3], there is no baseline controller to
fall back on: a CBF minimally perturbs a (possibly erroneous) control input to the plant
so the plant remains in the safe region. As far as we know, neither Simplex nor CBFs
have been designed with a stochastic plant model in mind.
The controller synthesis problem under safety constraints (bounded STL properties
in this case) is also considered in [12]. The main differences between this approach
and ours are that they focus on Model Predictive rather than PID control, and that their system model does not support stochastic parameters. There are a number of formal approaches (e.g., [1]) to control synthesis that consider the sample-and-hold schema typical of discrete-time controllers, but they do not yield PID controllers and cannot handle
stochastic hybrid systems. Verification of hybrid control systems with non-deterministic
disturbances is considered in [26] and solved through a combination of explicit model
checking and simulation. However, unlike our method, it does not support controller
synthesis and arbitrary probability distributions for the disturbances.
There has been a sizable amount of work on tools for formal analysis of probabilistic reachability, although they all have limitations that make them unsuitable for our
approach. SiSAT [15] uses an SMT approach for probabilistic hybrid systems with discrete nondeterminism, while continuous nondeterminism is handled via Monte Carlo
techniques only [11]; UPPAAL [7] uses statistical model checking to analyze nonlinear
stochastic hybrid automata; ProHVer [36] computes upper bounds for maximal reachability probabilities, but continuous random parameters are analyzed via discrete overapproximations [14]; U-Check [5] enables parameter synthesis and statistical model
checking of stochastic hybrid systems [4]. However, this approach is based on Gaussian process emulation and optimisation, provides only statistical guarantees, and
requires certain smoothness conditions on the satisfaction probability function.
Other approaches to solving SMT problems over nonlinear real arithmetic include
the complete (over polynomials), yet computationally expensive, cylindrical algebraic
decomposition method implemented in solvers like Z3 [8], as well as a recent method [6]
based on the incremental linearization of nonlinear functions. However, none of these
support ODEs and transcendental functions.
7 Conclusions and Future Work
The design of PID controllers for complex, safety-critical cyber-physical systems is
challenging due to the hybrid, stochastic, and nonlinear dynamics they exhibit. Motivated by the need for high-assurance design techniques in this context, in this paper we presented a new method for the automated synthesis of PID controllers for
stochastic hybrid systems from probabilistic reachability specifications. In particular,
our approach can provide rigorous guarantees of safety and robustness for the resulting
closed-loop system, while ensuring prescribed performance levels for the controller. We
demonstrated the effectiveness of our approach on an artificial pancreas case study, for
which safety and robustness guarantees are paramount.
As future work, we plan to study more advanced variants of the PID design such as
nonlinear PID controllers, as well as investigate how common PID tuning heuristics can
be integrated in our automated approach to speed up the search for suitable controllers.
Acknowledgements: Research supported in part by EPSRC (UK) grant EP/N031962/1,
FWF (Austria) S 11405-N23 (RiSE/SHiNE), AFOSR Grant FA9550-14-1-0261 and
NSF Grants IIS-1447549, CNS-1446832, CNS-1445770, CNS-1445770, CNS-1553273,
CNS-1536086, CNS 1463722, and IIS-1460370.
References
1. V. Alimguzhin, F. Mari, I. Melatti, I. Salvo, and E. Tronci. Linearising discrete time hybrid
systems. IEEE Transactions on Automatic Control, PP(99):1–1, 2017.
2. R. Alur, C. Courcoubetis, T. A. Henzinger, and P.-H. Ho. Hybrid automata: An algorithmic
approach to the specification and verification of hybrid systems. In Hybrid Systems, volume
736 of LNCS, pages 209–229, 1992.
3. A. D. Ames and J. Holley. Quadratic program based nonlinear embedded control of series
elastic actuators. In CDC, pages 6291–6298. IEEE, 2014.
4. E. Bartocci, L. Bortolussi, L. Nenzi, and G. Sanguinetti. System design of stochastic models
using robustness of temporal properties. Theor. Comput. Sci., 587:3–25, 2015.
5. L. Bortolussi, D. Milios, and G. Sanguinetti. U-check: Model checking and parameter synthesis under uncertainty. In QEST, volume 9259 of LNCS, pages 89–104, 2015.
6. A. Cimatti, A. Griggio, A. Irfan, M. Roveri, and R. Sebastiani. Invariant checking of NRA
transition systems via incremental reduction to LRA with EUF. In TACAS, volume 10205 of
LNCS, pages 58–75, 2017.
7. A. David, K. Larsen, A. Legay, M. Mikučionis, and D. B. Poulsen. UPPAAL SMC tutorial.
International Journal on Software Tools for Technology Transfer, 17(4):397–415, 2015.
8. L. De Moura and N. Bjørner. Z3: An efficient SMT solver. In TACAS, volume 4963 of LNCS,
pages 337–340, 2008.
9. P. L. T. Duong and M. Lee. Robust PID controller design for processes with stochastic
parametric uncertainties. Journal of Process Control, 22(9):1559–1566, 2012.
10. A. Eggers, M. Fränzle, and C. Herde. SAT modulo ODE: A direct SAT approach to hybrid
systems. In ATVA, pages 171–185, 2008.
11. C. Ellen, S. Gerwinn, and M. Fränzle. Statistical model checking for stochastic hybrid systems involving nondeterminism over continuous domains. International Journal on Software
Tools for Technology Transfer, 17(4):485–504, 2015.
12. S. S. Farahani, V. Raman, and R. M. Murray. Robust model predictive control for signal
temporal logic synthesis. In ADHS, 2015.
13. M. Fliess and C. Join. Model-free control. International Journal of Control, 86(12):2228–
2252, 2013.
14. M. Fränzle, E. M. Hahn, H. Hermanns, N. Wolovick, and L. Zhang. Measurability and safety
verification for stochastic hybrid systems. In HSCC, pages 43–52, 2011.
15. M. Fränzle, T. Teige, and A. Eggers. Engineering constraint solvers for automatic analysis
of probabilistic hybrid automata. J. Log. Algebr. Program., 79(7):436–466, 2010.
16. S. Gao, J. Avigad, and E. M. Clarke. Delta-decidability over the reals. In LICS, pages
305–314, 2012.
17. S. Gao, S. Kong, and E. M. Clarke. dReal: An SMT solver for nonlinear theories over the
reals. In CADE-24, volume 7898 of LNCS, pages 208–214, 2013.
18. L. Guo and H. Wang. PID controller design for output PDFs of stochastic systems using
linear matrix inequalities. IEEE T. Sys, Man, and Cyb., Part B (Cyb.), 35(1):65–71, 2005.
19. S. He and F. Liu. Robust stabilization of stochastic markovian jumping systems via
proportional-integral control. Signal Processing, 91(11):2478–2486, 2011.
20. R. Hovorka. Closed-loop insulin delivery: from bench to clinical practice. Nature Reviews
Endocrinology, 7(7):385–395, 2011.
21. R. Hovorka et al. Nonlinear model predictive control of glucose concentration in subjects
with type 1 diabetes. Physiological Measurement, 25(4):905, 2004.
22. L. M. Huyett et al. Design and evaluation of a robust PID controller for a fully implantable artificial pancreas. Industrial & Engineering Chemistry Research, 54(42):10311–10321, 2015.
23. S. S. Kanderian Jr and G. M. Steil. Apparatus and method for controlling insulin infusion
with state variable feedback, July 15 2014. US Patent 8,777,924.
24. W. S. Levine. The control handbook. CRC Press, 1996.
25. Y. Li, K. H. Ang, G. C. Chong, W. Feng, K. C. Tan, and H. Kashiwagi. CAutoCSD – evolutionary search and optimisation enabled computer automated control system design.
International Journal of Automation and Computing, 1(1):76–88, 2004.
26. T. Mancini, F. Mari, A. Massini, I. Melatti, F. Merli, and E. Tronci. System level formal
verification via model checking driven simulation. In CAV, volume 8044 of LNCS, pages
296–312, 2013.
27. N. Paoletti, K. S. Liu, S. A. Smolka, and S. Lin. Data-driven robust control for type 1 diabetes
under meal and exercise uncertainties. In CMSB, accepted, 2017.
28. R. S. Parker, F. J. Doyle, J. H. Ward, and N. A. Peppas. Robust H∞ glucose control in diabetes
using a physiological model. AIChE Journal, 46(12):2537–2549, 2000.
29. L. Sha. Using simplicity to control complexity. IEEE Software, 18(4):20–28, 2001.
30. F. Shmarov and P. Zuliani. ProbReach: Verified probabilistic δ-reachability for stochastic
hybrid systems. In HSCC, pages 134–139. ACM, 2015.
31. F. Shmarov and P. Zuliani. Probabilistic hybrid systems verification via SMT and Monte
Carlo techniques. In HVC, volume 10028 of LNCS, pages 152–168, 2016.
32. G. M. Steil et al. The effect of insulin feedback on closed loop glucose control. The Journal
of Clinical Endocrinology & Metabolism, 96(5):1402–1408, 2011.
33. Y. Su, D. Sun, and B. Duan. Design of an enhanced nonlinear PID controller. Mechatronics,
15(8):1005–1024, 2005.
34. P. Szalay, G. Eigner, and L. A. Kovács. Linear matrix inequality-based robust controller
design for type-1 diabetes model. IFAC Proceedings Volumes, 47(3):9247–9252, 2014.
35. M. E. Wilinska et al. Simulation environment to evaluate closed-loop insulin delivery systems in type 1 diabetes. Journal of diabetes science and technology, 4(1):132–144, 2010.
36. L. Zhang, Z. She, S. Ratschan, H. Hermanns, and E. M. Hahn. Safety verification for probabilistic hybrid systems. In CAV, volume 6174 of LNCS, pages 196–211, 2010.
A Proof of Proposition 1
Proof. Let [km] be the parameter box from which k* was selected, and let [Pm] = [Pm^⊥, Pm^⊤] be the corresponding probability enclosure with minimal midpoint. In the best case, [Pm] also has the least lower bound, implying that P* ∈ [Pm] and, in turn, that Pr(k*) ≤ P* + ε. In the worst case, there exists another enclosure, [PM] = [PM^⊥, PM^⊤], with a better lower bound than [Pm], i.e., with PM^⊥ < Pm^⊥. This implies that the actual minimal probability might be in [PM] and not in [Pm], which induces a worst-case probability error of Pm^⊥ − PM^⊥, leading to Pr(k*) ≤ P* + ε + Pm^⊥ − PM^⊥. Now note that Pm^⊥ − PM^⊥ cannot exceed half the length of [PM], because otherwise [PM] would be the enclosure with the lowest midpoint. It follows that Pr(k*) < P* + ε + ε/2.
B Gluco-regulatory ODE model
  Q̇1(t) = −F01 − x1 Q1 + k12 Q2 − FR + EGP0 (1 − x3) + 0.18 UG(t);
  Q̇2(t) = x1 Q1 − (k12 + x2) Q2;    UG(t) = DG AG t e^(−t/tmaxG) / (0.18 tmaxG^2);
  G(t) = Q1(t)/VG;    Ṡ1(t) = u(t) + ub − S1/tmaxI;    Ṡ2(t) = (S1 − S2)/tmaxI;
  İ(t) = S2/(tmaxI VI) − ke I;    ẋi(t) = −kai xi + kbi I,  (i = 1, 2, 3)
    (B.8)
The model consists of three subsystems:
– Glucose Subsystem: it tracks the masses of glucose (in mmol) in the accessible
(Q1 (t)) and non-accessible (Q2 (t)) compartments, G(t) (mmol/L) represents the
glucose concentration in plasma, EGP0 (mmol/min) is the endogenous glucose production rate and UG (t) (mmol/min) defines the glucose absorption rate after consuming DG grams of carbohydrates. DG represents the main external disturbance
of the system.
– Insulin Subsystem: it represents absorption of subcutaneously administered insulin.
It is defined by a two-compartment chain, S1 (t) and S2 (t) measured in U (units of
insulin), where u(t) (U/min) is the administration of insulin computed by the PID
controller, ub (U/min) is the basal insulin infusion rate and I(t) (U/L) indicates the
insulin concentration in plasma.
– Insulin Action Subsystem: it models the action of insulin on glucose distribution/transport,
x1 (t), glucose disposal, x2 (t), and endogenous glucose production, x3 (t) (unitless).
The model parameters are given in Table 2.
par     value         par      value       par    value
w       100           ke       0.138       k12    0.066
ka1     0.006         ka2      0.06        ka3    0.03
kb1     0.0034        kb2      0.056       kb3    0.024
tmaxI   55            VI       0.12 · w    VG     0.16 · w
F01     0.0097 · w    tmaxG    40          FR     0
EGP0    0.0161 · w    AG       0.8

Table 2. Parameter values for the glucose-insulin regulatory model. w (kg) is the body weight.
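A direct transcription of (B.8) with the parameter values of Table 2 might look as follows (a sketch only: the state ordering, the function name and the convention that the controller input u and basal rate ub are supplied by the caller are our own choices).

import numpy as np

w = 100.0  # body weight (kg), as in Table 2
P = dict(F01=0.0097 * w, EGP0=0.0161 * w, k12=0.066, FR=0.0,
         ka1=0.006, ka2=0.06, ka3=0.03, kb1=0.0034, kb2=0.056, kb3=0.024,
         tmaxI=55.0, ke=0.138, VI=0.12 * w, tmaxG=40.0, AG=0.8, VG=0.16 * w)

def gluco_regulatory_rhs(t, x, u, ub, DG, p=P):
    # State ordering (our choice): [Q1, Q2, S1, S2, I, x1, x2, x3], cf. (B.8).
    Q1, Q2, S1, S2, I, x1, x2, x3 = x
    UG = DG * p["AG"] * t * np.exp(-t / p["tmaxG"]) / (0.18 * p["tmaxG"] ** 2)
    dQ1 = -p["F01"] - x1 * Q1 + p["k12"] * Q2 - p["FR"] + p["EGP0"] * (1 - x3) + 0.18 * UG
    dQ2 = x1 * Q1 - (p["k12"] + x2) * Q2
    dS1 = u + ub - S1 / p["tmaxI"]
    dS2 = (S1 - S2) / p["tmaxI"]
    dI = S2 / (p["tmaxI"] * p["VI"]) - p["ke"] * I
    dx1 = -p["ka1"] * x1 + p["kb1"] * I
    dx2 = -p["ka2"] * x2 + p["kb2"] * I
    dx3 = -p["ka3"] * x3 + p["kb3"] * I
    return np.array([dQ1, dQ2, dS1, dS2, dI, dx1, dx2, dx3])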
Cepstral Analysis of Random Variables: Muculants
Christian Knoll, Student Member, IEEE, Bernhard C. Geiger, Member, IEEE, and Gernot Kubin, Member, IEEE
Abstract—An alternative parametric description for discrete
random variables, called muculants, is proposed. In contrast to
cumulants, muculants are based on the Fourier series expansion,
rather than on the Taylor series expansion, of the logarithm
of the characteristic function. We utilize results from cepstral
theory to derive elementary properties of muculants, some of
which demonstrate behavior superior to those of cumulants. For
example, muculants and cumulants are both additive. While the
existence of cumulants is linked to how often the characteristic
function is differentiable, all muculants exist if the characteristic function satisfies a Paley-Wiener condition. Moreover,
the muculant sequence and, if the random variable has finite
expectation, the reconstruction of the characteristic function from
its muculants converge. We furthermore develop a connection
between muculants and cumulants and present the muculants of
selected discrete random variables. Specifically, it is shown that
the Poisson distribution is the only distribution where only the
first two muculants are nonzero.
Index Terms—Higher-order statistics, cepstrum, cumulants,
discrete random variables, Fourier series expansion
INTRODUCTION

CUMULANTS are parametric descriptors of random variables (RVs) and are commonly used to analyze non-Gaussian processes [1] or the effects of nonlinear systems [2].
Unlike moments, cumulants are orthogonal descriptors and
satisfy a homomorphism property [3].
As attractive as these properties are, there are several
drawbacks: First, since cumulants are the Taylor series coefficients of the logarithm of the distribution’s characteristic
function [1], they constitute a complete description only for
infinitely differentiable characteristic functions. Second, there
are no general results regarding the behavior of the sequence
of cumulants; the sequence may even diverge (this problem,
though, can be mitigated by the definition of q-moments and
q-cumulants [4], [5]). Hence, a reconstruction of a distribution
function in terms of its cumulants [6], [7], [8] may not
converge. Third, the Marcinkiewicz theorem [9] states that any
RV either has infinitely many cumulants or up to second order
only; hence, every cumulant-based approximation is either
Gaussian or does not correspond to a RV at all. It follows that
cumulants are not well suited for hypothesis testing, except
in the important case of testing Gaussianity. Fourth, if the
RV takes values in the set of integers, then the characteristic
function is periodic, and the Taylor series expansion fails to
remain the most natural approach.
(The authors are with the Signal Processing and Speech Communication Laboratory, Graz University of Technology, 8010 Graz, Austria; e-mail: [email protected], [email protected], [email protected].)

In this paper, motivated by the latter shortcoming, we replace the Taylor series expansion by a Fourier transform. While in general the resulting description is a functional, for
integer RVs, the Fourier transform degenerates to a Fourier series expansion. The resulting Fourier coefficients – henceforth
called muculants – are a parametric (i.e., finite or countable)
description, retaining several properties of cumulants while
removing several of their shortcomings. For example, muculants are orthogonal and additive, but truncating the series
leads to a bounded approximation error that converges to
zero with increasing order. Also the existence of muculants
is less problematic than the one of cumulants, as the former is
ensured if the characteristic function satisfies a Paley-Wiener
condition. Finally, even though the Poisson distribution is the
only distribution for which only the first two muculants are
nonzero (see Section IV for the muculants of selected discrete
distributions), there exist distributions with more than two (but
still only finitely many) nonzero muculants.
The sequence of operations – Fourier transform, (complex)
logarithm, and inverse Fourier transform – is also essential
in cepstral analysis, originally introduced to investigate the
influence of echo [10] and to represent nonlinear systems [11].
Today the cepstrum is widely used in speech processing [12].
Our analysis of RVs is thus deeply rooted in signal processing.
Existence and properties of muculants, stated in Section II,
are based on properties of the cepstrum, or more generally,
properties of the Fourier transform. Moreover, a connection
between muculants and cumulants, presented in Section III
also finds counterparts in cepstral analysis [13], [14].
About terminology: The name cepstrum is derived from
reversing the first syllable of the word spectrum: While the
cepstrum is situated in the original (e.g., time) domain, this
terminology was introduced to emphasize that this sequence of
operations can provide fundamentally different insights [10].
Following this approach, we call our parametric descriptors
muculants, a reversal of the first syllable of cumulants.
I. PRELIMINARIES
Let X be a real-valued RV and let ΦX(µ) denote its characteristic function

  ΦX(µ) := E(e^{µX}),    (1)
where µ ∈ R. The characteristic function always exists since
it is an expectation of a bounded function (see [9], [15], [16]
for the theory, properties, and applications of characteristic
functions). Two RVs are identically distributed iff (if and only
if) their characteristic functions are equal. Moreover, every
characteristic function satisfies the following properties [17,
p. 13]: (i) ΦX (µ) is uniformly continuous everywhere, (ii)
ΦX (µ)|µ=0 = 1, (iii) |ΦX (µ)| ≤ 1, and (iv) ΦX (µ) is
Hermitian. Finally, if X and Y are independent, then the characteristic function of Z = X + Y is ΦZ (µ) = ΦX (µ) · ΦY (µ).
Let X be a discrete RV taking values from the set of integers Z. It can be described by its probability mass function (PMF) fX : Z → [0, 1], where

  ∀ξ ∈ Z:  fX[ξ] := P(X = ξ).    (2)

In this case, (1) equates to ΦX(µ) = Σ_{ξ=−∞}^{∞} fX[ξ] e^{µξ}, i.e., ΦX(µ) is the inverse Fourier transform of fX and periodic.
We call a PMF causal if fX[ξ] = 0 for ξ < 0. We call a PMF minimum-phase if it is causal and its z-transform

  ΨX(z) := Σ_{ξ=0}^{∞} fX[ξ] z^{−ξ}    (3)
has all its zeros inside the unit circle.1 The characteristic
function is related to this z-transform via ΦX (µ) = ΨX (eµ ).
For a complex function z(t) without zeros on the unit circle, the complex logarithm is uniquely defined as

  log z(t) = ln |z(t)| +  · arg*(z(t)),    (4)

where ln is the natural logarithm and arg*(z(t)) = Arg(z(t)) + 2kπ is continuous for k ∈ Z [18]; i.e., one formally represents log(z(t)) as a single-valued analytic function on a Riemann surface [19]. The computation of such a continuous phase function is essential for the estimation of the complex cepstrum in practical cases [20], [18], [21], [22]. In contrast to the principal value of the complex logarithm (cf. [20, p. 1009]), the complex logarithm defined in (4) satisfies, for complex functions w(t) and z(t),

  log(w(t)z(t)) = log(w(t)) + log(z(t)).    (5)
Finally, the cumulants {κX[n]}_{n∈Z} are the coefficients of the Taylor series expansion of log ΦX(µ), i.e.,

  log ΦX(µ) = Σ_{n=1}^{∞} κX[n] (µ)^n / n!    (6)

provided the sum on the r.h.s. exists. Specifically, if E(X^n) < ∞, then ΦX(µ) is n times continuously differentiable [17, p. 48], and we obtain the n-th cumulant as

  κX[n] = (1/^n) d^n log ΦX(µ)/dµ^n |_{µ=0}.    (7)
If X and Y are independent, then κX+Y [n] = κX [n]+ κY [n],
i.e., cumulants are additive. For elementary results on cumulants the reader is referred to [23] and [24].
II. MUCULANTS: DEFINITION AND PROPERTIES
The definition of the muculants follows the definitions of
the cepstrum [20, Ch. 13], with the main difference that the
roles of Fourier transform and inverse Fourier transform are
reversed. While some properties of the muculants are directly
transferred from cepstral analysis, several results are based on
the fact that we operate on a probability space.
Definition II.1 (Complex Muculants). The complex muculants {ĉX[n]}_{n∈Z} are the coefficients of the Fourier series expansion (if it exists) of log ΦX(µ), i.e.,

  log ΦX(µ) = Σ_{n=−∞}^{∞} ĉX[n] e^{µn}.    (8)

^1 Possible poles are inside the unit circle by construction, since every PMF is absolutely summable.
Note that (8) is a one-to-one correspondence only if log is
uniquely defined, as in (4). If ΦX (µ) has zeros on the unit
circle, phase jumps by π may be introduced and one cannot
determine an unambiguous phase curve [18]. Since (8) must
be one-to-one to establish the desirable property of additivity,
we consider only RVs for which ΦX (µ) has no zeros on the
unit circle. We note in passing that these phase ambiguities
are unproblematic for power muculants, i.e., the Fourier
coefficients of ln |ΦX(µ)|^2, which are complete parametric
descriptors of minimum-phase PMFs (see [25], [26, p. 215]).
With log z(t) defined as in (4), the complex muculants are:

  ĉX[n] = (1/2π) ∫_{−π}^{π} log ΦX(µ) e^{−µn} dµ.    (9)

This integral is well-defined iff the integrand is absolutely integrable, i.e., iff ∫_{−π}^{π} |log ΦX(µ)| dµ < ∞. Using (4) and applying the triangle inequality, we get

  |ln |ΦX(µ)|| ≤ |log(ΦX(µ))| ≤ |ln |ΦX(µ)|| + |arg*(ΦX(µ))|.    (10)

The phase of ΦX(µ) is continuous, thus its integral over the compact set [−π, π] is finite. Since moreover |ΦX(µ)| ≤ 1, the integral in (9) is well-defined iff

  (1/2π) ∫_{−π}^{π} ln |ΦX(µ)| dµ > −∞.    (11)
Note that this condition rules out the existence of muculants
for RVs whose characteristic function vanishes on an interval.
The condition in (11) is reminiscent of the Paley-Wiener condition [27, p. 423]. Loosely speaking, and translated to the language of probabilities, this condition states that iff (11) holds, a unique phase can be associated with ln |ΦX(µ)| which guarantees that the corresponding PMF is minimum-phase. Causality of fX is thus a sufficient (but not necessary) condition for (11).
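For concreteness, (9) can be evaluated numerically for a PMF with finite support by sampling ΦX(µ) on a grid, taking the complex logarithm with an unwrapped phase as in (4), and averaging over the grid. The following sketch uses our own helper names and NumPy's phase unwrapping as a stand-in for arg*; it is not part of the paper.

import numpy as np

def muculants(pmf, support, n_max, grid=4096):
    # Numerical evaluation of (9) for a PMF given on a finite support:
    # sample Phi_X on a uniform grid over [-pi, pi), take the complex log with
    # a continuous (unwrapped) phase as in (4), and approximate the integral
    # by the average over the grid (rectangle rule on one full period).
    mu = np.linspace(-np.pi, np.pi, grid, endpoint=False)
    phi = sum(p * np.exp(1j * mu * xi) for xi, p in zip(support, pmf))
    phase = np.unwrap(np.angle(phi))
    phase -= phase[grid // 2]          # anchor the branch so that arg*(Phi(0)) = 0
    log_phi = np.log(np.abs(phi)) + 1j * phase
    ns = np.arange(-n_max, n_max + 1)
    c = np.array([np.mean(log_phi * np.exp(-1j * mu * n)) for n in ns])
    return ns, c.real                  # imaginary parts are numerical noise

For a Poisson PMF truncated at a sufficiently large ξ, this reproduces, up to truncation and quadrature error, ĉX[0] ≈ −λ, ĉX[1] ≈ λ and ĉX[n] ≈ 0 otherwise (cf. Section IV-A).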
The following theorem, proved in Appendix A, summarizes
the main properties of complex muculants:
Theorem 1 (Properties of Complex Muculants). Let X be
an RV with PMF fX supported on Z and let {ĉX [n]}n∈Z be
the complex muculants defined in Definition II.1. Then, the
following properties hold:
1) ĉX[n] ∈ R.
2) ĉX[0] ≤ 0.
3) If fX[ξ] = fX[−ξ], then ĉX[n] = ĉX[−n].
4) If E(X) < ∞, then Σ_n ĉX[n] = 0 and the series in (8) converges pointwise.
5) lim_{n→±∞} ĉX[n] = 0. If E(X) < ∞, then ĉX[n] = O(1/n).
6) If X and Y are independent, then ĉX+Y[n] = ĉX[n] + ĉY[n].
Note that finite expectation is required in properties 4 and 5, because one can construct characteristic functions which are nowhere differentiable. E.g., the Weierstrass function Σ_{n=1}^{∞} (1/2)^n cos(3^n µ) is nowhere differentiable, but continuous everywhere and a valid characteristic function [17, p. 47].
That muculants of an RV with finite expectation must sum
to zero makes truncating the Fourier series as problematic as
truncating the cumulant expansion, i.e., the truncated series
need not correspond to a valid PMF. However, by Parseval’s
theorem and the fact that ĉX [n] = O(1/n), the squared
error between the true and the approximated characteristic
function stays bounded. Hence, muculants may behave better
than cumulants when used in functionals of distributions, such
as entropy or informational divergence. Moreover, while there
exists no distribution with finitely many (but more than two)
nonzero cumulants, distributions with, e.g., only three nonzero
complex muculants exist.
Finally, property 6 states that muculants are, just as cumulants, additive descriptors. This retains the benefits of cumulants while eliminating some of their drawbacks particularly
problematic with discrete RVs.
III. LINKING CUMULANTS AND MUCULANTS
The z-transform points at a close connection between cumulants and the cepstrum [13], [14], thus suggesting a connection between cumulants and muculants. Suppose that log ΦX(µ) is continuous, that its first (k−1) derivatives are continuous, and that its k-th and (k+1)-th derivatives are piecewise continuous. We then obtain with [28, Th 3.22] that

  d^k/dµ^k log ΦX(µ) = Σ_{n=−∞}^{∞} (n)^k ĉX[n] e^{µn}.    (12)

Evaluating the l.h.s. at µ = 0 yields ^k κX[k] (if the k-th cumulant exists, cf. (7)); evaluating the r.h.s. at µ = 0 then yields

  κX[k] = Σ_{n=−∞}^{∞} n^k ĉX[n],    (13)
where, abusing terminology by ignoring the fact that the
muculants need not be non-negative, we call the r.h.s. the k-th
non-central moment of the complex muculants.
In [14], the moments of the cepstrum are connected to the
moments of the original sequence, yielding a recursive formula
to compute the former from the latter. An equation similar
to (13), called Good’s formula [29], expresses the cumulants
in terms of the moments of a random vector.
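As a small illustration of (13), and assuming a muculant sequence is available (for instance from a numerical evaluation of (9)), the k-th cumulant is the k-th non-central moment of that sequence. The helper below is our own sketch; whether a truncated sum is meaningful has to be checked case by case, as the degenerate example in Section IV-B shows.

import numpy as np

def cumulant_from_muculants(ns, c, k):
    # Equation (13): kappa_X[k] = sum over n of n^k * c_X[n], truncated to the
    # indices in ns. The muculants decay only like 1/n, so convergence of the
    # truncated sum must be checked for each distribution and each k.
    ns = np.asarray(ns, dtype=float)
    return float(np.sum(ns ** k * np.asarray(c)))

# For the Poisson muculants c = [-lam, lam] at ns = [0, 1] (Section IV-A),
# this returns lam for every k >= 1, in line with kappa_X[k] = lam.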
IV. MUCULANTS OF SELECTED RVS
In this section, we present the complex muculants for a
selection of discrete distributions, summarized in Table I.
A. Poisson Distribution
With λ > 0, the characteristic function of the Poisson distribution is ΦX(µ) = e^{λ(e^{µ}−1)}. All cumulants exist and equal λ. A fortiori, E(X) = κX[1] = λ, and with (8) we can write

  log(e^{λ(e^{µ}−1)}) = λe^{µ} − λ = Σ_{n=−∞}^{∞} ĉX[n] · e^{µn}.    (14)

Equating the coefficients yields ĉX[0] = −λ, ĉX[1] = λ, and ĉX[n] = 0 for n ≠ 0, 1. With property 4 of Theorem 1, the
Poisson distribution is thus the only distribution with all but
the first two muculants being zero.
B. Degenerate Distribution
The PMF for X ≡ M is the Kronecker delta, i.e., fX[ξ] = δ[ξ − M] and ΦX(µ) = e^{µM}. While ĉX[0] = 0, for n ≠ 0 the complex muculants are given as

  ĉX[n] = (1/2π) ∫_{−π}^{π} µM e^{−µn} dµ = (M/n)(−1)^{n+1}.    (15)
Since in this case log ΦX(µ) = µM is not periodic, one cannot expect (13) to hold: indeed, while κ3 = 0, the third
non-central moment of the complex muculants diverges.
C. Minimum-Phase Distribution
Suppose that fX is minimum-phase. Its z-transform

  ΨX(z) = A · ∏_{k=1}^{∞} (1 − o_k z^{−1}) / ∏_{k=1}^{∞} (1 − p_k z^{−1})    (16)

has all poles p_k and zeros o_k inside the unit circle, i.e., |o_k| < 1 and |p_k| < 1 for all k. This applies, for example, to the geometric distribution, the Bernoulli distribution and the binomial distribution with p < 0.5, and the negative binomial distribution. Exploiting ΦX(µ) = ΨX(e^{µ}), (5), and the Mercator series, which for |z| ≤ 1 and z ≠ −1 reads

  log(1 + z) = Σ_{m=1}^{∞} ((−1)^{m+1}/m) z^m,    (17)

the complex muculants are obtained by, cf. [20, p. 1011],

  log ΦX(µ) = log A + Σ_{n=1}^{∞} Σ_{k=1}^{∞} ((p_k^n − o_k^n)/n) e^{−µn},    (18)

where the first term log A equals ĉX[0] and, for n ≥ 1, the coefficient Σ_{k=1}^{∞} (p_k^n − o_k^n)/n equals ĉX[n].
Specifically, for minimum-phase PMFs, we obtain ĉX [n] =
0 for n < 0. Note further that (17) can be used to derive
a recursive relation among the complex muculants which, at
least under special conditions, admits computing the complex
muculants from the PMF without requiring a Fourier transform
or a complex logarithm [20, p. 1022].
V. DISCUSSION AND FUTURE WORK
We have argued that truncating the muculant series (8) results in a bounded error that decreases with increasing number
of summands, a property that does not hold for the cumulant
series (6). Although this is an advantage of muculants over
cumulants, the question under which circumstances a truncated
muculant series represents a PMF remains open. One may
investigate, for example, the approximation of a distribution
by one with a given number of nonzero muculants. A second
line of research could investigate expressions for functionals
of discrete distributions (such as entropy and informational divergence) based on muculants, thus complementing cumulant-based expressions for continuous distributions, cf. [30].
TABLE I
MUCULANTS ĉX[n] OF SELECTED DISTRIBUTIONS. WE ALSO PRESENT THE CUMULANTS κX[n] AND THE CHARACTERISTIC FUNCTIONS ΦX(µ).

Poisson:
  fX[ξ] = e^{−λ} λ^ξ/ξ! for ξ ≥ 0, 0 else;  ΦX(µ) = e^{λ(e^{µ}−1)};
  κX[n] = λ;  ĉX[n] = −λ if n = 0, λ if n = 1, 0 else.

Degenerate:
  fX[ξ] = δ[ξ − M];  ΦX(µ) = e^{µM};
  κX[n] = M if n = 1, 0 else;  ĉX[n] = 0 if n = 0, (M/n)(−1)^{n+1} else.

Bernoulli (p < 0.5):
  fX[ξ] = 1 − p if ξ = 0, p if ξ = 1, 0 else;  ΦX(µ) = 1 − p + pe^{µ};
  κX[n+1] = p(1 − p) dκX[n]/dp, with κX[1] = p;
  ĉX[n] = log(1 − p) if n = 0, ((−1)^{n+1}/n)(p/(1 − p))^n if n > 0, 0 else.

Bernoulli (p > 0.5):
  fX[ξ] = 1 − p if ξ = 0, p if ξ = 1, 0 else;  ΦX(µ) = 1 − p + pe^{µ};
  κX[n+1] = p(1 − p) dκX[n]/dp, with κX[1] = p;
  ĉX[n] = log p if n = 0, ((−1)^{n+1}/n)(1 − ((1 − p)/p)^{−n}) if n < 0, (−1)^{n+1}/n if n > 0.

Geometric:
  fX[ξ] = (1 − p)p^ξ for ξ ≥ 0, 0 else;  ΦX(µ) = (1 − p)/(1 − pe^{µ});
  κX[n+1] = ρ(1 + ρ) dκX[n]/dρ, with ρ = p/(1 − p) and κX[1] = ρ;
  ĉX[n] = log(1 − p) if n = 0, p^n/n if n > 0, 0 else.

Negative Binomial:
  fX[ξ] = C(ξ+N−1, ξ) (1 − p)^N p^ξ for ξ ≥ 0, 0 else;  ΦX(µ) = ((1 − p)/(1 − pe^{µ}))^N;
  κX[n]: N times the cumulant of the geometric distribution;
  ĉX[n]: N times the muculant of the geometric distribution.

Binomial:
  fX[ξ] = C(N, ξ) p^ξ (1 − p)^{N−ξ} for 0 ≤ ξ ≤ N, 0 else;  ΦX(µ) = (1 − p + pe^{µ})^N;
  κX[n]: N times the cumulant of the Bernoulli distribution;
  ĉX[n]: N times the muculant of the Bernoulli distribution.
That the Poisson distribution has only two nonzero muculants (cf. Section IV-A) makes the presented theory attractive
for hypothesis testing. Future work shall thus develop numerical methods to estimate muculants from data, together with
estimator variances and confidence bounds. Based on that, a
test whether a distribution is Poisson or not shall be formalized
and compared to existing hypothesis tests.
We presented a condition for the existence of muculants
in (11); however, it is not clear whether there exist distributions
violating this condition. The search for a concrete example, or
preferably, for a more general statement about existence is
within the scope of future work. Similar investigations shall
be devoted to the convergence of (8), which for the moment
is guaranteed only for distributions with finite expectation
(cf. property 4 of Theorem 1).
Finally, the theory of muculants shall be extended to continuous RVs, even though this requires a muculant function rather
than a muculant sequence. In this context, the connection between the muculant function and/or cumulants of a continuous
RV and the muculants of a discrete RV obtained by uniformly
quantizing the former might be of interest. A fundamental
step towards these results lies in the fact that quantization
periodically repeats the characteristic function [31].
A PPENDIX A
P ROOF OF T HEOREM 1
1) The Fourier transform of a Hermitian function is real-valued [20, p. 83]; ΦX(µ) is Hermitian, and so is log ΦX(µ).
2) We have:

  ĉX[0] = (1/2π) ∫_{−π}^{π} log ΦX(µ) dµ  =(a)  (1/2π) ∫_{−π}^{π} ln |ΦX(µ)| dµ  ≤(b)  0,

where (a) follows from (4) and the fact that the phase of ΦX(µ) has odd symmetry; (b) then follows from |ΦX(µ)| ≤ 1.
3) If the PMF is an even function, then ΦX(µ) and log ΦX(µ) are real; the muculants have even symmetry by Fourier transform properties.
4) If E(X) < ∞, then ΦX(µ) is uniformly continuous and continuously differentiable; since log is piecewise continuous, we have that log ΦX(µ) is piecewise continuous and piecewise differentiable; indeed,

  d/dµ log ΦX(µ) = ΦX′(µ)/ΦX(µ),    (19)
where ΦX (µ) vanishes on an at most countable set (otherwise
the muculants would not exist, cf. (11)). Since the Fourier
transform of a periodic, piecewise continuous and piecewise differentiable function converges pointwise [28, p. 105],
pointwise convergence of (8) follows. That, in this case, the
muculants sum to zero, follows from evaluating (8) at µ = 0
and the fact that ΦX (µ)|µ=0 = 1.
5) That lim_{n→±∞} ĉX[n] = 0 is a direct consequence of the Riemann-Lebesgue lemma and the fact that log ΦX(µ) is absolutely integrable [28, p. 94]. If E(X) < ∞, note that
pointwise convergence implies bounded variation [32, p.70];
the result about the order of convergence follows from [33].
6) If X and Y are independent RVs, then ΦX+Y (µ) =
ΦX (µ) · ΦY (µ). The desired result follows from (5) and the
linearity of the Fourier series expansion.
ACKNOWLEDGMENT
The authors would like to thank Franz Lehner, Institute
of Mathematical Structure Theory, Graz University of Technology, Gennadiy Chystyakov, Faculty of Mathematics, Universität Bielefeld, and Paul Meissner for fruitful discussions
during the preparation of this manuscript.
REFERENCES
[1] J. M. Mendel, “Tutorial on higher-order statistics (spectra) in signal
processing and system theory: theoretical results and some applications.”
Proceedings of the IEEE, vol. 79, no. 3, pp. 278–305, 1991.
[2] C. L. Nikias and A. P. Petropulu, Higher-Order Spectra Analysis: A
Nonlinear Signal Processing Framework, Englewood Cliffs, NJ, 1993.
[3] R. A. Fisher and E. A. Cornish, “Moments and cumulants in the specification of distributions,” Revue de l’Institut international de Statistique,
pp. 307–320, 1938.
[4] C. Tsallis, A. R. Plastino, and R. F. Alvarez-Estrada, “Escort mean
values and the characterization of power-law-decaying probability densities.” Journal of Mathematical Physics, vol. 50, no. 4, 2009.
[5] A. Rodriguez and C. Tsallis, “A generalization of the cumulant expansion. application to a scale-invariant probabilistic model.” Journal of
Mathematical Physics, vol. 51, no. 7, 2010.
[6] H. Cramer, Mathematical Methods of Statistics. Princeton University
Press, 1957.
[7] V. V. Petrov, Sums of Independent Random Variables. Springer Verlag,
1975.
[8] S. Blinnikov and R. Moessner, “Expansions for nearly Gaussian distributions,” Astronomy and Astrophysics Supplement Series, vol. 130, pp.
193–205, 1998.
[9] E. Lukacs, Characteristic Functions. 2nd rev. Griffin, 1970.
[10] B. P. Bogert, M. J. R. Healy, and J. W. Tukey, “The quefrency
analysis of time series for echoes: cepstrum, pseudo-autocovariance,
cross-cepstrum, and saphe cracking,” in Proceedings Symp. Time Series
Analysis, M. Rosenblatt, Ed. Wiley, 1963, pp. 209–243.
[11] A. V. Oppenheim, “Superposition in a class of nonlinear systems.” Ph.D.
dissertation, Massachusetts Inst of Tech Cambridge Research Lab of
Electronics, 1965.
[12] A. V. Oppenheim and R. Schafer, “From frequency to quefrency: A
history of the cepstrum,” IEEE Signal Processing Magazine, vol. 21,
no. 5, pp. 95–106, 2004.
[13] M. R. Schroeder, “Direct (nonrecursive) relations between cepstrum and
predictor coefficients,” IEEE Transactions on Acoustics, Speech and
Signal Processing, vol. 29, pp. 297–301, 1981.
[14] A. Khare and T. Yoshikawa, “Moment of cepstrum and its applications,”
IEEE Transactions on Signal Processing,, vol. 40, no. 11, pp. 2692–
2702, 1992.
[15] E. Lukacs, “A survey of the theory of characteristic functions,” Advances
in Applied Probability, vol. 4, pp. 1–38, 1972.
[16] E. Lukacs and R. G. Laha, Applications of Characteristic Functions.
Griffin, 1964.
[17] T. M. Bisgaard and Z. Sasvári, Characteristic Functions and Moment
Sequences: Positive Definiteness in Probability. Nova Publishers, 2000.
[18] J. M. Tribolet, “A new phase unwrapping algorithm,” IEEE Transactions
on Acoustics, Speech and Signal Processing, vol. 2, pp. 170–177, 1977.
[19] J. W. Brown and R. V. Churchill, Complex Variables and Applications.
McGraw Hill, 2009.
[20] A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing,
3rd ed. Pearson Education, Limited, 2010.
[21] R. McGowan and R. Kuc, “A direct relation between a signal time series
and its unwrapped phase.” IEEE Transactions on Acoustics, Speech and
Signal Processing, vol. 30, no. 5, pp. 719–726, 1982.
[22] Z. N. Karam, “Computation of the one-dimensional unwrapped phase,”
Master’s thesis, Massachusetts Institute of Technology, 2006.
[23] L. Mattner, “What are cumulants?” Documenta Mathematica, vol. 4, pp.
601–622, 1999.
[24] G. C. Rota and J. Shen, “On the combinatorics of cumulants.” Journal
of Combinatorial Theory, vol. Series A 91.1, pp. 283–304, 2000.
[25] C. Knoll, B. C. Geiger, and G. Kubin, “The muculants–a higher-order
statistical approach,” arXiv preprint arXiv:1506.04518v1, 2015.
[26] A. Papoulis, The Fourier Integral and its Applications. McGraw Hill,
1962.
[27] A. Papoulis and S. U. Pillai, Probability, Random Variables, and
Stochastic Processes, ser. McGraw-Hill electrical and electronic engineering series. McGraw-Hill, 2002, vol. 4th edition.
[28] S. B. Damelin and W. Miller, Jr., The Mathematics of Signal Processing.
Cambridge University Press, 2011.
[29] I. J. Good, “A new formula for cumulants,” Mathematical Proceedings
of the Cambridge Philosophical Society, vol. 78, pp. 333–337, 1975.
[30] C. S. Withers and S. Nadarajah, “Negentropy as a function of cumulants,” Tech. Rep., 2011, research Report No. 15, Probability and
Statistics Group, School of Mathematics, The University of Manchester.
[31] B. Widrow, I. Kollar, and M. C. Liu, “Statistical theory of quantization,”
IEEE Transactions on Instrumentation and Measurement, vol. 45, pp.
353–361, 1996.
[32] M. C. Pereyra and L. A. Ward, Harmonic analysis: from Fourier to
wavelets. American Mathematical Soc., 2012, vol. 63.
[33] M. Taibleson, “Fourier coefficients of functions of bounded variation,”
Proceedings of the American Mathematical Society, vol. 18, no. 4, p.
766, 1967.
| 1 |
Analysis of gradient descent methods with
non-diminishing, bounded errors
arXiv:1604.00151v3 [cs.SY] 18 Sep 2017
Arunselvan Ramaswamy
∗
Dept. of Computer Science and Automation
Indian Institute of Science
Bengaluru - 560012, India.
Shalabh Bhatnagar
†
Dept. of Computer Science and Automation
Indian Institute of Science
Bengaluru - 560012, India.
September 19, 2017
Abstract
The main aim of this paper is to provide an analysis of gradient descent (GD) algorithms with gradient errors that do not necessarily vanish,
asymptotically. In particular, sufficient conditions are presented for both
stability (almost sure boundedness of the iterates) and convergence of
GD with bounded, (possibly) non-diminishing gradient errors. In addition to ensuring stability, such an algorithm is shown to converge to a
small neighborhood of the minimum set, which depends on the gradient
errors. It is worth noting that the main result of this paper can be used to
show that GD with asymptotically vanishing errors indeed converges to
the minimum set. The results presented herein are not only more general
when compared to previous results, but our analysis of GD with errors is
new to the literature to the best of our knowledge. Our work extends the
contributions of Mangasarian & Solodov, Bertsekas & Tsitsiklis and Tadić
& Doucet. Using our framework, a simple yet effective implementation of
GD using simultaneous perturbation stochastic approximations (SP SA),
with constant sensitivity parameters, is presented. Another important
improvement over many previous results is that there are no ‘additional’
restrictions imposed on the step-sizes. In machine learning applications
where step-sizes are related to learning rates, our assumptions, unlike
those of other papers, do not affect these learning rates. Finally, we
present experimental results to validate our theory.
∗ email:[email protected]
† email:[email protected]
1 Introduction
Let us suppose that we are interested in finding a minimum (local/global) of a
continuously differentiable function f : Rd → R. The following gradient descent
method (GD) is often employed to find such a minimum:
xn+1 = xn − γ(n)∇f (xn ).
(1)
In the above equation, {γ(n)}n≥0 is the given step-size sequence and ∇f : Rd →
Rd is a continuous map such that k∇f (x)k ≤ K(1 + kxk), K > 0 and x ∈ Rd .
GD is a popular tool to implement many machine learning algorithms. For
example, the backpropagation algorithm for training neural networks employs
GD due to its effectiveness and ease of implementation.
When implementing (1), one often uses gradient estimators such as the Kiefer-Wolfowitz estimator [8], simultaneous perturbation stochastic approximation
(SP SA) [10], etc., to obtain estimates of the true gradient at each stage which
in turn results in estimation errors (n in (2)). This is particularly true when
the form of f or ∇f is unknown. Previously in the literature, convergence of GD
with errors was studied in [5]. However, their analysis required the errors to go
to zero at the rate of the step-size (vanish asymptotically at a prescribed rate).
Such assumptions are difficult to enforce and may adversely affect the learning
rate when employed to implement machine learning algorithms, see Chapter 4.4
of [6]. In this paper, we present sufficient conditions for both stability (almost
sure boundedness) and convergence (to a small neighborhood of the minimum
set) of GD with bounded errors, for which the recursion is given by
xn+1 = xn − γn (∇f (xn ) + εn ).    (2)
In the above equation εn is the estimation error at stage n such that ∀n kεn k ≤ ε
(a.s. in the case of stochastic errors) for a fixed ε > 0 (positive real). As an
example, consider the problem of estimating the average waiting time of a customer in a queue.
The objective function J, for this problem, has the following
form: J(x) = ∫ w dF (w | x) = E[W (x)], where W (x) is the “waiting time”
random variable with distribution F (· | x), with x being the underlying parameter (say the arrival or the service rate). In order to define J at every x, one will
need to know the entire family of distributions, {F (· | x) | x ∈ Rd }, exactly. In
such scenarios, one often works with approximate definitions of F which in turn
lead to approximate gradients, i.e, gradients with errors. More generally, the
gradient errors could be inherent to the problem at hand or due to extraneous
noise. In such cases, there is no reason to believe that these errors will vanish
asymptotically. To the best of our knowledge, this is the first time an analysis
is done for GD with biased/unbiased stochastic/deterministic errors that are
not necessarily diminishing, and without imposing ‘additional’ restrictions on
step-sizes over the usual standard assumptions, see (A2) in Section 3.1.
Our assumptions, see Section 3.1, not only guarantee stability but also guarantee convergence of the algorithm to a small neighborhood of the minimum
set, where the neighborhood is a function of the gradient errors. If kεn k → 0 as
n → ∞, then it follows from our main result (Theorem 2) that the algorithm
converges to an arbitrarily small neighborhood of the minimum set. In other
words, the algorithm indeed converges to the minimum set. It may be noted that
we do not impose any restrictions on the noise-sequence {εn }n≥0 , except that
almost surely, for all n, kεn k ≤ ε for some fixed ε > 0. Our analysis uses techniques developed in the field of viability theory by [1], [2] and [3]. Experimental
results supporting the analyses in this paper are presented in Section 5.
1.1 Our contributions
(1) Previous literature such as [5] requires kεn k → 0 as n → ∞ for its analysis
to work. Further, both [5] and [9] provide conditions that guarantee one of
two things (a) GD diverges almost surely or (b) converges to the minimum set
almost surely. On the other hand, we only require kεn k ≤ ε ∀ n, where ε > 0
is fixed a priori. Also, we present conditions under which GD with bounded
errors is stable (bounded almost surely) and converges to an arbitrarily small
neighborhood of the minimum set almost surely. Note that our analysis works
regardless of whether or not kεn k tends to zero. For more detailed comparisons
with [5] and [9] see Section 3.2.
(2) The analyses presented herein will go through even when the gradient errors
are “asymptotically bounded” almost surely. In other words, kεn k ≤ ε for all
n ≥ N almost surely. Here N may be sample path dependent.
(3) Previously, convergence analysis of GD required severe restrictions on the
step-size, see [5], [10]. However, in our paper we do not impose any such restrictions on the step-size. See Section 3.2 (specifically points 1 and 3) for more
details.
(4) Informally, the main result of our paper, Theorem 2, states the following.
One wishes to simulate GD with gradient errors that are not guaranteed to vanish over time. As a consequence of allowing non-diminishing errors, we show
the following: There exists ε(δ) > 0 such that the iterates are stable and converge
to the δ-neighborhood of the minimum set (δ being chosen by the simulator) as
long as kεn k ≤ ε(δ) ∀ n.
(5) In Section 4.2 we discuss how our framework can be exploited to undertake
convenient yet effective implementations of GD. Specifically, we present an
implementation using SP SA, although other implementations can be similarly
undertaken.
2 Definitions used in this paper
[Minimum set of a function] This set consists of all global and local minima
of the given function.
[Upper-semicontinuous map] We say that H is upper-semicontinuous, if
given sequences {xn }n≥1 (in Rn ) and {yn }n≥1 (in Rm ) with xn → x, yn → y
and yn ∈ H(xn ), n ≥ 1, then y ∈ H(x).
[Marchaud Map] A set-valued map H : Rn → {subsets of Rm } is called
Marchaud if it satisfies the following properties: (i) for each x ∈ Rn , H(x) is
convex and compact; (ii) (point-wise boundedness) for each x ∈ Rn , sup_{w∈H(x)} kwk < K (1 + kxk) for some K > 0; (iii) H is upper-semicontinuous.
Let H be a Marchaud map on Rd . The differential inclusion (DI) given by
ẋ ∈ H(x)
(3)
is guaranteed to have at least one solution that is absolutely continuous. The
reader is referred to [1] for more details. We say that x ∈ Σ if x is an absolutely
continuous map that satisfies (3). The set-valued semiflow Φ associated with
(3) is defined on [0, +∞) × Rd as:
Φt (x) = {x(t) | x ∈ Σ, x(0) = x}. Let B × M ⊂ [0, +∞) × Rd and define
ΦB (M ) = ∪_{t∈B, x∈M} Φt (x).
[Limit set of a solution] The limit set of a solution x with x(0) = x is given
by L(x) = ∩_{t≥0} x([t, +∞)).
[Invariant set] M ⊆ Rd is invariant if for every x ∈ M there exists a trajectory, x ∈ Σ, entirely in M with x(0) = x, x(t) ∈ M , for all t ≥ 0.
[Open and closed neighborhoods of a set] Let x ∈ Rd and A ⊆ Rd , then
d(x, A) := inf{ka − yk | y ∈ A}. We define the δ-open neighborhood of A by
N δ (A) := {x | d(x, A) < δ}. The δ-closed neighborhood of A is defined by
N δ (A) := {x | d(x, A) ≤ δ}.
[Br (0) and B r (0)] The open ball of radius r around the origin is represented
by Br (0), while the closed ball is represented by B r (0). In other words, Br (0) =
{x | kxk < r} and B r (0) = {x | kxk ≤ r}.
[Internally chain transitive set] M ⊂ Rd is said to be internally chain transitive if M is compact and for every x, y ∈ M , ε > 0 and T > 0 we have the
following: There exists n and Φ1 , . . . , Φn that are n solutions to the differential
inclusion ẋ(t) ∈ h(x(t)), points x1 (= x), . . . , xn+1 (= y) ∈ M and n real numbers
t1 , t2 , . . . , tn greater than T such that: Φi_{ti} (xi ) ∈ N ε (xi+1 ) and Φi_{[0,ti ]} (xi ) ⊂ M
for 1 ≤ i ≤ n. The sequence (x1 (= x), . . . , xn+1 (= y)) is called an (ε, T ) chain
in M from x to y. If the above property only holds for all x = y, then M is
called chain recurrent.
[Attracting set & fundamental neighborhood] A ⊆ Rd is attracting if it is
compact and there exists a neighborhood U such that for any ε > 0, ∃ T (ε) ≥ 0
with Φ[T (ε),+∞) (U ) ⊂ N ε (A). Such a U is called the fundamental neighborhood
of A.
[Attractor set] An attracting set that is also invariant is called an attractor
set. The basin of attraction of A is given by B(A) = {x | ωΦ (x) ⊂ A}.
[Lyapunov stable] The above set A is Lyapunov stable if for all δ > 0, ∃ ε > 0
such that Φ[0,+∞) (N ε (A)) ⊆ N δ (A).
[Upper-limit of a sequence of sets, Limsup] Let {Kn }n≥1 be a sequence
of sets in Rd . The upper-limit of {Kn }n≥1 is given by, Limsupn→∞ Kn :=
{y | lim_{n→∞} d(y, Kn ) = 0}.
We may interpret that the lower-limit collects the limit points of {Kn }n≥1 while
the upper-limit collects its accumulation points.
3 Assumptions and comparison to previous literature
3.1 Assumptions
Recall that GD with bounded errors is given by the following recursion:
xn+1 = xn − γ(n)g(xn ),
(4)
where g(xn ) ∈ G(xn ) ∀ n and G(x) := ∇f (x) + Bε (0), x ∈ Rd . In other words,
the gradient estimate at stage n, g(xn ), belongs to an ε-ball around the true
gradient ∇f (xn ) at stage n. Note that (4) is consistent with (2) of Section 1.
Our assumptions, (A1)-(A4) are listed below.
(A1) G(x) := ∇f (x) + Bε (0) for some fixed ε > 0. ∇f is a continuous function
such that k∇f (x)k ≤ K(1 + kxk) for all x ∈ Rd , for some K > 0.
(A2) {γ(n)}n≥0 is the step-size (learning rate) sequence such that: γ(n) > 0 ∀n,
Σ_{n≥0} γ(n) = ∞ and Σ_{n≥0} γ(n)^2 < ∞. Without loss of generality we let
sup_n γ(n) ≤ 1.
Note that G is an upper-semicontinuous map since ∇f is continuous and pointwise bounded. For each c ≥ 1, we define Gc (x) := {y/c | y ∈ G(cx)}. Define
G∞ (x) := co hLimsupc→∞ Gc (x)i, see Section 2 for the definition of Limsup.
Given S ⊆ Rd , the convex closure of S, denoted by cohSi, is the closure of the
convex hull of S. It is worth noting that Limsupc→∞ Gc (x) is non-empty for every x ∈ Rd . Further, we show that G∞ is a Marchaud map in Lemma 1. In other
words, ẋ(t) ∈ −G∞ (x(t)) has at least one solution that is absolutely continuous,
see [1]. Here −G∞ (x(t)) is used to denote the set {−g | g ∈ G∞ (x(t))}.
(A3) ẋ(t) ∈ −G∞ (x(t)) has an attractor set A such that A ⊆ Ba (0) for some
a > 0 and B a (0) is a fundamental neighborhood of A.
Since A ⊆ Ba (0) is compact, we have that sup_{x∈A} kxk < a. Let us fix the following
sequence of real numbers: sup_{x∈A} kxk = δ1 < δ2 < δ3 < δ4 < a.
(A4) Let cn ≥ 1 be an increasing sequence of integers such that cn ↑ ∞ as
n → ∞. Further, let xn → x and yn → y as n → ∞, such that
yn ∈ Gcn (xn ), ∀n, then y ∈ G∞ (x).
It is worth noting that the existence of a global Lyapunov function for
ẋ(t) ∈ −G∞ (x(t)) is sufficient to guarantee that (A3) holds. Further,
(A4) is satisfied when ∇f is Lipschitz continuous.
Lemma 1. G∞ is a Marchaud map.
Proof. From the definition of G∞ and G we have that G∞ (x) is convex, compact
and sup_{y∈G(x)} kyk ≤ K(1 + kxk) for every x ∈ Rd . It is left to show that G∞ is
an upper-semicontinuous map. Let xn → x, yn → y and yn ∈ G∞ (xn ), for all
n ≥ 1. We need to show that y ∈ G∞ (x). We present a proof by contradiction.
Since G∞ (x) is convex and compact, y ∈/ G∞ (x) implies that there exists a
linear functional on Rd , say f , such that sup_{z∈G∞ (x)} f (z) ≤ α − ε and f (y) ≥ α + ε,
for some α ∈ R and ε > 0. Since yn → y, there exists N > 0 such that for
all n ≥ N , f (yn ) ≥ α + ε/2. In other words, G∞ (x) ∩ [f ≥ α + ε/2] 6= φ for all
n ≥ N . We use the notation [f ≥ a] to denote the set {x | f (x) ≥ a}. For
the sake of convenience let us denote the set Limsupc→∞ Gc (x) by A(x), where
x ∈ Rd . We claim that A(xn ) ∩ [f ≥ α + ε/2] 6= φ for all n ≥ N . We prove
this claim later, for now we assume that the claim is true and proceed. Pick
zn ∈ A(xn ) ∩ [f ≥ α + ε/2] for each n ≥ N . It can be shown that {zn }n≥N is norm
bounded and hence contains a convergent subsequence, {zn(k) }k≥1 ⊆ {zn }n≥N .
Let lim_{k→∞} zn(k) = z. Since zn(k) ∈ Limsupc→∞ (Gc (xn(k) )), ∃ cn(k) ∈ N such that
kwn(k) − zn(k) k < 1/n(k), where wn(k) ∈ Gcn(k) (xn(k) ). We choose the sequence
{cn(k) }k≥1 such that cn(k+1) > cn(k) for each k ≥ 1.
We have the following: cn(k) ↑ ∞, xn(k) → x, wn(k) → z and wn(k) ∈
Gcn(k) (xn(k) ), for all k ≥ 1. It follows from assumption (A4) that z ∈ G∞ (x).
Since zn(k) → z and f (zn(k) ) ≥ α + ε/2 for each k ≥ 1, we have that f (z) ≥ α + ε/2.
This contradicts the earlier conclusion that sup_{z∈G∞ (x)} f (z) ≤ α − ε.
It remains to prove that A(xn ) ∩ [f ≥ α + ε/2] 6= φ for all n ≥ N . If this were
not true, then ∃{m(k)}k≥1 ⊆ {n ≥ N } such that A(xm(k) ) ⊆ [f < α + ε/2] for
all k. It follows that G∞ (xm(k) ) = co(A(xm(k) )) ⊆ [f ≤ α + ε/2] for each k ≥ 1.
Since yn(k) → y, ∃N1 such that for all n(k) ≥ N1 , f (yn(k) ) ≥ α + 3ε/4. This is a
contradiction.
3.2 Relevance of our results
(1) Gradient algorithms with errors have been previously studied by Bertsekas
and Tsitsiklis [5]. They impose the following restriction on the estimation errors: kεn k ≤ γ(n)(q + pk∇f (xn )k) ∀ n, where p, q > 0. If the iterates are stable
then kεn k → 0. In order to satisfy the aforementioned assumption the choice
of step-size may be restricted, thereby affecting the learning rate (when used
within the framework of a learning algorithm). In this paper we analyze the
more general and practical case of bounded kεn k which does not necessarily go
to zero. Further none of the assumptions used in our paper impose further
restrictions on the step-size, other than standard requirements, see (A2).
(2) The main result of Bertsekas and Tsitsiklis [5] states that the GD with
errors either diverges almost surely or converges to the minimum set almost
surely. An older study by Mangasarian and Solodov [9] shows the exact same
result as [5] but for GD without estimation errors (εn = 0 ∀ n). The main
results of our paper, Theorems 1 & 2 show that if the GD under consideration
satisfies (A1)-(A4) then the iterates are stable (bounded almost surely). Further, the algorithm is guaranteed to converge to a given small neighborhood of
the minimum set provided the estimation errors are bounded by a constant that
is a function of the neighborhood size. To summarize, under the more restrictive
setting of [5] and [9] the GD is not guaranteed to be stable, see the aforementioned references, while the assumptions used in our paper are less restrictive
and guarantee stability under the more general setting of bounded error GD. It
may also be noted that ∇f is assumed to be Lipschitz continuous by [5]. This
turns out to be sufficient (but not necessary) for (A1) & (A4) to be satisfied.
(3) The analysis of Spall [10] can be used to analyze a variant of GD that uses
SPSA as the gradient estimator. Spall introduces a gradient sensitivity parameter cn in order to control the estimation error εn at stage n. It is assumed that
cn → 0 and Σ_{n≥0} (γ(n)/cn )^2 < ∞, see A1, Section III, [10]. Again, this restricts
the choice of step-size and affects the learning rate. In this setting our analysis
works for the more practical scenario where cn = c for all n i.e., a constant, see
Section 4.2.
(4) The important advancements of this paper are the following: (i) Our framework is more general and practical since the errors are not required to go to
zero; (ii) We provide easily verifiable, non-restrictive set of assumptions that
ensure almost sure boundedness and convergence of GD and (iii) Our assumptions (A1)-(A4) do not affect the choice of step-size.
(5) Tadić and Doucet [11] showed that GD with bounded non-diminishing
errors converges to a small neighborhood of the minimum set. They make
the following key assumption: (A) There exists p ∈ (0, 1], such that for every compact set Q ⊂ Rd and every ∈ [0, ∞), m(AQ, ) ≤ MQ p , where
AQ, = {f (x) | x ∈ Q, kf (x)k ≤ } and MQ ∈ [1, ∞).
Note that m(A) is the Lebesgue measure of the set A ⊂ Rd . The above
assumption holds if f is d0 times differentiable, where d < d0 < ∞, see [11]
for details. In comparison, we only require that the chain recurrent set of f
be a subset of it’s minimum set. One sufficient condition for this is given in
Proposition 4 of Hurley [7].
Remark 1. Suppose the minimum set M of f , contains the chain recurrent set
of ẋ(t) = −∇f (x(t)), then it can be shown that GD without errors (ε = 0 in (4))
will converge to M almost surely, see [4]. On the other hand suppose there are
chain recurrent points outside M, it may converge to this subset (of the chain
recurrent set) outside M. In Theorem 2, we will use the upper-semicontinuity
of chain recurrent sets (Theorem 3.1 of Benaı̈m, Hofbauer and Sorin [3]), to
show that GD with errors will converge to a small neighborhood of the limiting
set of the “corresponding GD without errors”. In other words, GD with errors
converges to a small neighborhood of the minimum set provided the corresponding
GD without errors converges to the minimum set. This will trivially happen if
the chain recurrent set of ẋ(t) = −∇f (x(t)) is a subset of the minimum set
of f , which we implicitly assume is true. Suppose GD without errors does not
converge to the minimum set, then it is reasonable to expect that GD with errors
may not converge to a small neighborhood of the minimum set.
Suppose f is continuously differentiable and it’s regular values (i.e., x for
which ∇f (x) 6= 0) are dense in Rd , then the chain recurrent set of f is a subset
of it’s minimum set, see Proposition 4 of Hurley [7]. We implicitly assume that
an assumption of this kind is satisfied.
4 Proof of stability and convergence
We use (4) to construct the linearly interpolated trajectory, x(t) for t ∈ [0, ∞).
First, define t(0) := 0 and t(n) := Σ_{i=0}^{n−1} γ(i) for n ≥ 1. Then, define x(t(n)) :=
xn and for t ∈ [t(n), t(n+1)], x(t) is the continuous linear interpolation of x(tn )
and x(tn+1 ). We also construct the following piece-wise constant trajectory g(t),
t ≥ 0 as follows: g(t) := g(xn ) for t ∈ [t(n), t(n + 1)), n ≥ 0.
We need to divide time, [0, ∞), into intervals of length T , where T = T (δ2 −
δ1 ) + 1. Note that T (δ2 − δ1 ) is such that Φt (x0 ) ∈ N δ2 −δ1 (A) for t ≥ T (δ2 − δ1 ),
where Φt (x0 ) denotes solution to ẋ(t) ∈ G∞ (x(t)) at time t with initial condition
x0 and x0 ∈ B a (0). Note that T (δ2 − δ1 ) is independent of the initial condtion
x0 , see Section 2 for more details. Dividing time is done as follows: define
T0 := 0 and Tn := min{t(m) : t(m) ≥ Tn−1 + T }, n ≥ 1. Clearly,
there exists a subsequence {t(m(n))}n≥0 of {t(n)}n≥0 such that Tn = t(m(n))
∀ n ≥ 0. In what follows we use t(m(n)) and Tn interchangeably.
To show stability, we use a projective scheme where the iterates are projected
periodically, with period T , onto the closed ball of radius a around the origin,
B a (0). Here, the radius a is given by (A3). This projective scheme gives rise to
the following rescaled trajectories x̂(· ) and ĝ(· ). First, we construct x̂(t), t ≥ 0:
Let t ∈ [Tn , Tn+1 ) for some n ≥ 0, then x̂(t) := x(t)/r(n), where r(n) = kx(Tn )k/a ∨ 1
(a is defined in (A3)). Also, let x̂(Tn+1 −) := lim_{t↑Tn+1} x̂(t), t ∈ [Tn , Tn+1 ). The
‘rescaled g iterates’ are given by ĝ(t) := g(t)/r(n).
Let xn (t), t ∈ [0, T ] be the solution (upto time T ) to ẋn (t) = −ĝ(Tn + t),
with the initial condition xn (0) = x̂(Tn ), recall the definition of ĝ(· ) from the
beginning of Section 4. Clearly, we have
xn (t) = x̂(Tn ) − ∫_0^t ĝ(Tn + z) dz.    (5)
We begin with a simple lemma which essentially claims that {xn (t), 0 ≤ t ≤ T |
n ≥ 0} = {x̂(Tn + t), 0 ≤ t ≤ T | n ≥ 0}. The proof is a direct consequence of
the definition of ĝ and is hence omitted.
Lemma 2. For all n ≥ 0, we have xn (t) = x̂(Tn + t), where t ∈ [0, T ].
It directly follows from Lemma 2 that {xn (t), t ∈ [0, T ] | n ≥ 0} = {x̂(Tn +
t), t ∈ [0, T ] | n ≥ 0}. In other words, the two families of T -length trajectories,
{xn (t), t ∈ [0, T ] | n ≥ 0} and {x̂(Tn + t), t ∈ [0, T ] | n ≥ 0}, are really one and
the same. When viewed as a subset of C([0, T ], Rd ), {xn (t), t ∈ [0, T ] | n ≥ 0}
is equi-continuous and point-wise bounded. Further, from the Arzela-Ascoli
theorem we conclude that it is relatively compact. In other words, {x̂(Tn +t), t ∈
[0, T ] | n ≥ 0} is relatively compact in C([0, T ], Rd ).
Lemma 3. Let r(n) ↑ ∞, then any limit point of {x̂(Tn + t), t ∈ [0, T ] : n ≥ 0}
is of the form x(t) = x(0) − ∫_0^t g∞ (s) ds, where g∞ : [0, T ] → Rd is a measurable
function and g∞ (t) ∈ G∞ (x(t)), t ∈ [0, T ].
Proof. For t ≥ 0, define [t] := max{t(k) | t(k) ≤ t}. Observe that for any
t ∈ [Tn , Tn+1 ), we have ĝ(t) ∈ Gr(n) (x̂([t])) and kĝ(t)k ≤ K (1 + kx̂([t])k), since
Gr(n) is a Marchaud map. Since x̂(· ) is the rescaled trajectory obtained by
periodically projecting the original iterates onto a compact set, it follows that
x̂(· ) is bounded a.s. i.e., sup_{t∈[0,∞)} kx̂(t)k < ∞ a.s. It now follows from the
observation made earlier that sup_{t∈[0,∞)} kĝ(t)k < ∞ a.s.
Thus, we may deduce that there exists a sub-sequence of N, say {l} ⊆ {n},
such that x̂(Tl +· ) → x(· ) in C [0, T ], Rd and ĝ(m(l)+· ) → g∞ (· ) weakly in
L2 [0, T ], Rd . From Lemma 2 it follows that xl (· ) → x(· ) in C [0, T ], Rd .
Letting r(l) ↑ ∞ in
xl (t) = xl (0) − ∫_0^t ĝ(t(m(l) + z)) dz, t ∈ [0, T ],
we get x(t) = x(0) − ∫_0^t g∞ (z) dz for t ∈ [0, T ]. Since kx̂(Tn )k ≤ 1 we have
kx(0)k ≤ 1.
Since ĝ(Tl + · ) → g∞ (· ) weakly in L2 [0, T ], Rd , there exists {l(k)} ⊆ {l}
such that
(1/N ) Σ_{k=1}^{N} ĝ(Tl(k) + · ) → g∞ (· ) strongly in L2 ([0, T ], Rd ).
Further, there exists {N (m)} ⊆ {N } such that
(1/N (m)) Σ_{k=1}^{N (m)} ĝ(Tl(k) + · ) → g∞ (· ) a.e. on [0, T ].
Let us fix t0 ∈ {t | (1/N (m)) Σ_{k=1}^{N (m)} ĝ(Tl(k) + t) → g∞ (t), t ∈ [0, T ]}, then
lim_{N (m)→∞} (1/N (m)) Σ_{k=1}^{N (m)} ĝ(Tl(k) + t0 ) = g∞ (t0 ).
Since G∞ (x(t0 )) is convex and compact (Proposition 1), to show that g∞ (t0 ) ∈
G∞ (x(t0 )) it is enough to show lim_{l(k)→∞} d(ĝ(Tl(k) + t0 ), G∞ (x(t0 ))) = 0. Suppose
this is not true and ∃ ε > 0 and {n(k)} ⊆ {l(k)} such that d(ĝ(Tn(k) + t0 ), G∞ (x(t0 ))) >
ε. Since {ĝ(Tn(k) + t0 )}k≥1 is norm bounded, it follows that there is a convergent sub-sequence. For convenience, assume lim_{k→∞} ĝ(Tn(k) + t0 ) = g0 , for some
g0 ∈ Rd . Since ĝ(Tn(k) +t0 ) ∈ Gr(n(k)) (x̂([Tn(k) +t0 ])) and lim_{k→∞} x̂([Tn(k) +t0 ]) =
x(t0 ), it follows from assumption (A4) that g0 ∈ G∞ (x(t0 )). This leads to a
contradiction.
Note that in the statement of Lemma 3 we can replace ‘r(n) ↑ ∞’ by ‘r(k) ↑
∞’, where {r(k)} is a subsequence of {r(n)}. Specifically we can conclude that
any limit point of {x̂(Tk + t), t ∈ [0, T ]}{k}⊆{n} in C([0, T ], Rd ), conditioned on
r(k) ↑ ∞, is of the form x(t) = x(0) − ∫_0^t g∞ (z) dz, where g∞ (t) ∈ G∞ (x(t))
for t ∈ [0, T ]. It should be noted that g∞ (· ) may be sample path dependent
(if εn is stochastic then g∞ (· ) is a random variable). Recall that sup_{x∈A} kxk =
δ1 < δ2 < δ3 < δ4 < a (see the sentence following (A3) in Section 3.1). The
following is an immediate corollary of Lemma 3.
Corollary 1. ∃ 1 < R0 < ∞ such that ∀ r(l) > R0 , kx̂(Tl +· ) − x(· )k < δ3 − δ2 ,
where {l} ⊆ N and x(· ) is a solution (up to time T ) of ẋ(t) ∈ −G∞ (x(t)) such
that kx(0)k ≤ 1. The form of x(· ) is as given by Lemma 3.
Proof. Assume to the contrary that ∃ r(l) ↑ ∞ such that x̂(Tl +· ) is at least
δ3 − δ2 away from any solution to the DI. It follows from Lemma 3 that there
exists a subsequence of {x̂(Tl + t), 0 ≤ t ≤ T : l ⊆ N} guaranteed to converge,
in C([0, T ], Rd ), to a solution of ẋ(t) ∈ −G∞ (x(t)) such that kx(0)k ≤ 1. This
is a contradiction.
Remark 2. It is worth noting that R0 may be sample path dependent. Since
T = T (δ2 − δ1 ) + 1 we get kx̂([Tl + T ])k < δ3 for all Tl such that kx(Tl )k(=
r(l)) > R0 .
4.1 Main Results
We are now ready to prove the two main results of this paper. We begin
by showing that (4) is stable (bounded a.s.). In other words, we show that
sup_n kr(n)k < ∞ a.s. Once we show that the iterates are stable we use the main
results of Benaı̈m, Hofbauer and Sorin to conclude that the iterates converge to a
closed, connected, internally chain transitive and invariant set of ẋ(t) ∈ G(x(t)).
Theorem 1. Under assumptions (A1) − (A4), the iterates given by (4) are
stable i.e., sup_n kxn k < ∞ a.s. Further, they converge to a closed, connected,
internally chain transitive and invariant set of ẋ(t) ∈ G(x(t)).
Proof. First, we show that the iterates are stable. To do this we start by assuming the negation i.e., P (sup_n r(n) = ∞) > 0. Clearly, there exists {l} ⊆ {n} such
that r(l) ↑ ∞. Recall that Tl = t(m(l)) and that [Tl + T ] = max{t(k) | t(k) ≤
Tl + T }.
We have kx(T )k < δ2 since x(· ) is a solution, up to time T , to the DI given
by ẋ(t) ∈ G∞ (x(t)) and T = T (δ2 − δ1 ) + 1. Since the rescaled trajectory
is obtained by projecting onto a compact set, it follows that the trajectory is
bounded. In other words, sup_{t≥0} kx̂(t)k ≤ Kw < ∞, where Kw could be sample
path dependent. Now, we observe that there exists N such that all of the
following happen:
(i) m(l) ≥ N =⇒ r(l) > R0 . [since r(l) ↑ ∞]
(ii) m(l) ≥ N =⇒ kx̂([Tl + T ])k < δ3 . [since r(l) > R0 and Remark 2]
(iii) n ≥ N =⇒ γ(n) < (δ4 − δ3 )/(K(1 + Kω )). [since γ(n) → 0]
We have sup_{x∈A} kxk = δ1 < δ2 < δ3 < δ4 < a (see the sentence following (A3)
in Section 3.1 for more details). Let m(l) ≥ N and Tl+1 = t(m(l + 1)) =
t(m(l) + k + 1) for some k > 0. If Tl + T 6= Tl+1 then t(m(l) + k) = [Tl + T ],
else if Tl + T = Tl+1 then t(m(l) + k + 1) = [Tl + T ]. We proceed assuming
that Tl + T 6= Tl+1 since the other case can be identically analyzed. Recall that
x̂(Tn+1 −) = limt↑t(m(n+1)) x̂(t), t ∈ [Tn , Tn+1 ) and n ≥ 0. Then,
x̂(Tl+1 −) = x̂(t(m(l) + k)) − γ(m(l) + k)ĝ(t(m(l) + k)).
Taking norms on both sides we get,
kx̂(Tl+1 −)k ≤ kx̂(t(m(l) + k))k + γ(m(l) + k)kĝ(t(m(l) + k))k.
As a consequence of the choice of N we get:
kĝ(t(m(l) + k))k ≤ K (1 + kx̂(t(m(l) + k))k) ≤ K (1 + Kω ).    (6)
Hence,
kx̂(Tl+1 −)k ≤ kx̂(t(m(l) + k))k + γ(m(l) + k)K(1 + Kω ).
In other words, kx̂(Tl+1 −)k < δ4 . Further,
kx(Tl+1 )k / kx(Tl )k = kx̂(Tl+1 −)k / kx̂(Tl )k < δ4 /a < 1.    (7)
It follows from (7) that kx(Tn+1 )k < (δ4 /a) kx(Tn )k if kx(Tn )k > R0 . From
Corollary 1 and the aforementioned we get that the trajectory falls at an exponential rate till it enters B R0 (0). Let t ≤ Tl , t ∈ [Tn , Tn+1 ) and n + 1 ≤ l,
be the last time that x(t) jumps from within B R0 (0) to the outside of the ball.
It follows that kx(Tn+1 )k ≥ kx(Tl )k. Since r(l) ↑ ∞, x(t) would be forced to
make larger and larger jumps within an interval of length T + 1. This leads to a
contradiction since the maximum jump size within any fixed time interval can
be bounded using the Gronwall inequality. Thus, the iterates are shown to be
stable.
It now follows from Theorem 3.6 & Lemma 3.8 of Benaı̈m, Hofbauer
and Sorin [2] that the iterates converge almost surely to a closed, connected,
internally chain transitive and invariant set of ẋ(t) ∈ G(x(t)).
Now that the GD with non-diminishing, bounded errors, given by (4), is
shown to be stable (bounded a.s.), we proceed to show that these iterates in
fact converge to an arbitrarily small neighborhood of the minimum set. The
proof uses Theorem 3.1 of Benaı̈m, Hofbauer and Sorin [3] that we state below.
First, we make a minor comment on the limiting set of GD with errors.
Recall from Remark 1 that the chain recurrent set of ẋ(t) = −∇f (x(t)) is a
subset of M, where M is the minimum set of f . We consider two cases: (a) M
is the unique global attractor of ẋ(t) = −∇f (x(t)); (b) M comprises of multiple
local attractors. Suppose we are in case (a), it can be shown that any compact
neighborhood, M ⊆ K ⊂ Rd , is a fundamental neighborhood of M. It follows
from Theorem 1 that the iterates are bounded almost surely. In other words,
x(t) ∈ K0 , ∀ t ≥ 0, for some compact set K0 , that could be sample path dependent, such that M ⊆ K0 . In this case, GD with errors is expected to converge
to a small neighborhood of M. Suppose we are in case (b), we need to consider
M0 ⊆ M such that the aforementioned K0 is a fundamental neighborhood of it.
In this case, GD with errors is expected to converge to a small neighborhood of
M0 .
We are now ready to present Theorem 3.1, [3]. The statement has been interpreted to the setting of this chapter for the sake of convenience.
[Theorem 3.1, [3]] Given δ > 0, there exists ε(δ) > 0 such that the chain
recurrent set of ẋ(t) = −∇f (x(t)) + B r (0) is within the δ-open neighborhood
of the chain recurrent set of ẋ(t) = −∇f (x(t)) for all r ≤ ε(δ).
Theorem 2. Given δ > 0, there exists ε(δ) > 0 such that the GD with bounded
errors given by (4) converges to N δ (M), the δ-neighborhood of the minimum
set of f , provided ε < ε(δ). Here ε is the bound for estimation errors from
assumption (A1).
Proof. As stated in Remark 1, the chain recurrent set of ẋ(t) = −∇f (x(t)) is
assumed to be a subset of the minimum set of f . Note
that the iterates given
by (4) track a solution to ẋ(t) ∈ −(∇f (x(t)) + Bε (0)). It follows from Theorem
3.1, [3] that (4) converges to a δ-neighborhood of the chain recurrent set provided
ε < ε(δ). In other words, GD with errors converges to a small neighborhood of
the minimum set provided GD without errors is guaranteed to converge to the
minimum set.
4.2 Implementing GD methods using SPSA
Gradient estimators are often used in the implementation of GD methods such
as SP SA, [10]. When using SP SA the update rule for the ith coordinate is
given by
x^i_{n+1} = x^i_n − γ(n) (f (xn + cn ∆n ) − f (xn − cn ∆n )) / (2cn ∆^i_n ),    (8)
where xn = (x^1_n , . . . , x^d_n ) is the underlying parameter, ∆n = (∆^1_n , . . . , ∆^d_n ) is a
sequence of perturbation random vectors such that ∆^i_n , 1 ≤ i ≤ d, n ≥ 0 are
i.i.d. It is common to assume ∆^i_n to be symmetric, Bernoulli distributed, taking
values ±1 w.p. 1/2. The sensitivity parameter cn is such that the following are
assumed: cn → 0 as n → ∞; Σ_{n≥0} (γ(n)/cn )^2 < ∞, see A1 of [10]. Further,
cn needs to be chosen such that the estimation errors go to zero. This, in
particular, could be difficult since the form of the function f is often unknown.
One may need to run experiments to find each cn . Also, smaller values of
cn in the initial iterates tends to blow up the variance which in turn affects
convergence. For these reasons, in practice, one often lets cn := c (a small
constant) for all n. If we assume additionally that the second derivative of f
is bounded, then it is easy to see that the estimation errors are bounded by
ε(c) such that ε(c) → 0 as c → 0. Thus, keeping cn fixed to c forces the
estimation errors to be bounded at each stage. In other words, SPSA
with a constant sensitivity parameter falls under the purview of the
framework presented in this paper. Also, it is worth noting that the iterates
are assumed to be stable (bounded a.s.) in [10]. However in our framework,
stability is shown under verifiable conditions even when cn = c, n ≥ 0.
We arrive at the important question of how to choose this constant c in
practice such that fixing cn := c we still get the following: (a) the iterates
are stable and (b) GD implemented in this manner converges to the minimum
set. Suppose the simulator wants to ensure that the iterates converge to a δ-neighborhood of the minimum set i.e., N δ (M), then it follows from Theorem 2
that there exists ε(δ) > 0 such that the GD converges to N δ (M) provided the
estimation error at each stage is bounded by ε(δ). Now, c is chosen such that
ε(c) ≤ ε(δ). The simulation is carried out by fixing the sensitivity parameters
to this c. As stated earlier one may need to carry out experiments to find such
a c. However, the advantage is that we only need to do this once before starting
the simulation. Also, the iterates are guaranteed to be stable and converge to
the δ-neighborhood of the minimum set provided (A1)-(A4) are satisfied.
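To make the preceding discussion concrete, the sketch below (Python/NumPy; it is not taken from this paper, and the quadratic objective, the step-size rule, the value of c and the random seeds are illustrative assumptions) implements GD with SPSA as the gradient estimator and a constant sensitivity parameter, i.e., (8) with cn = c.

```python
import numpy as np

def spsa_c(f, x0, c=0.1, n_iter=8000, seed=0):
    """GD driven by two-sided SPSA with a constant sensitivity parameter c."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    for n in range(n_iter):
        gamma = 1.0 / (n + 100)                  # illustrative step-size satisfying (A2)
        delta = rng.choice([-1.0, 1.0], size=d)  # symmetric Bernoulli perturbations
        # SPSA gradient estimate; with c fixed, its error stays bounded at every stage.
        g_hat = (f(x + c * delta) - f(x - c * delta)) / (2.0 * c * delta)
        x -= gamma * g_hat
    return x

# Example: f(x) = x^T Q x with a positive definite Q (illustrative construction).
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 10))
Q = A @ A.T + 10 * np.eye(10)          # positive definite by construction
x_final = spsa_c(lambda x: x @ Q @ x, x0=rng.standard_normal(10), c=0.1)
print(np.linalg.norm(x_final))          # distance from the unique minimizer (the origin)
```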
5 Experimental results
The experiments presented in this section consider a quadratic objective function f : Rd → R with f (x) := xT Qx, where Q is a positive definite matrix.
The origin is the unique global minimizer of f . On the other hand, if one were
to conduct these experiments using f with multiple local minima, then their
results are expected to be similar.
5.1 Exp. 1: SPSA with constant sensitivity parameters (SPSA-C)
First we consider SP SA with constant sensitivity parameters to find the minimum set of f . This scheme is given by (8) but with cn = c for all n, and we
refer to it as SPSA-C.
Parameter settings:
(1) The positive definite matrix Q and the starting point x0 were randomly
chosen. (2) The dimension d = 10. The number of iterations of SPSA-C
was 8000. (3) c was varied from 0.1 to 10. For each value of c, SPSA-C was
run for 8000 iterations and kx8000 k was recorded. Since the origin is the unique
global minimizer of f , kx8000 k records the distance of the iterate after 8000
iterations from the origin. (4) For 0 ≤ n ≤ 7999, we chose the following step-size
sequence: a(n) = 1/((n mod 1800) + 100), n ≥ 1. This step-size sequence seems
to expedite the convergence of the iterates to the minimum set. We were able
to use this sequence since our framework does not impose extra restrictions on
step-sizes, unlike [10].
[Figure 1: Average performance variation of 20 independent simulation runs as a function of the sensitivity parameter c; the y-axis is log(kx8000 k) and the x-axis is the value of c.]
[Figure 2: Two sample runs.]
Since we keep the sensitivity parameters fixed the implementation was greatly
simplified. Based on the theory presented in this paper, for larger values of c
one expects the iterates to be farther from the origin than for smaller values of
c. This theory is corroborated by the experiment illustrated in Fig. 1.
Note that to generate Q we first randomly generate a column-orthonormal
matrix U and let Q := U ΣU T , where Σ is a diagonal matrix with strictly positive
entries. To generate U , we sample its entries independently from a Gaussian
distribution and then apply Gram-Schmidt orthogonalization to the columns.
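A minimal sketch of this construction (Python/NumPy; not from the paper — the dimension and the range of the diagonal of Σ are arbitrary choices, and QR factorization is used as the numerical stand-in for Gram-Schmidt) is:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10

# Column-orthonormal U: Gaussian entries, then orthonormalize the columns (QR).
G = rng.standard_normal((d, d))
U, _ = np.linalg.qr(G)

# Q = U * Sigma * U^T with strictly positive diagonal entries in Sigma.
sigma = rng.uniform(0.5, 5.0, size=d)
Q = U @ np.diag(sigma) @ U.T            # symmetric positive definite
```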
Fig. 1 shows the average performance of 20 independent simulation runs (for
each c) of the experiment, where Q and x0 were randomly chosen for each run;
Fig. 2 shows two sample runs. In Fig. 1 and 2 the x-axis represents the values
of c ranging from 0.1 to 10 in steps of 0.01. The y-axis in Fig. 1 represents
the logarithm of the average of the corresponding distances from the origin after
8000 iterations, i.e., log((1/20) Σ_{i=1}^{20} kxi8000 k), where xi8000 is the iterate-value
after 8000 runs from the ith simulation. The y-axis in Fig. 2 represents the
logarithm of the corresponding distances from the origin after 8000 iterations
i.e., log(kx8000 k). Note that for c close to 0, x8000 ∈ Be−38 (0) while for c close
to 10, x8000 ∈ Be−32 (0) only. Also note that the graph has a series of “steep
rises” followed by “plateaus”. These indicate that for values of c within the
same plateau the iterate converges to the same neighborhood of the origin. As
stated earlier for larger values of c the iterates are farther from the origin than
for smaller values of c.
5.2 Exp. 2: GD with constant gradient errors
For the second experiment we ran the following recursion for 1000 iterations:
xn+1 = xn − (1/n)(Qxn + ε),    (9)
where (a) the starting point x0 was randomly chosen and the dimension d = 10; (b) the
matrix Q was a randomly generated positive definite matrix (Q is generated as
explained before); (c) the constant noise-vector added at each stage is (ε/√d, . . . , ε/√d), with ε ∈ R.
Since Q is positive definite, we expect (9) to converge to the origin when ε = 0
in the noise-vector. A natural question to ask is the following: If a “small” noise-vector is added at each stage, does the iterate sequence still converge to a small
neighborhood of the origin or do the iterates diverge? It can be verified that (9)
satisfies (A1)-(A4) of Section 3.1 for any ε ∈ R. Hence it follows from Theorem 1
that the iterates are stable and do not diverge. In other words, the addition of
such a noise does not accumulate and force the iterates to diverge. As in the first
experiment we expect the iterates to be farther from the origin for larger values
[Figure 3: Average performance variation of 20 independent simulation runs as a function of the neighborhood parameter ε.]
of ε. This is evidenced by the plots in Fig. 3 and 4. As before, Fig. 3 shows the
average performance of 20 independent simulation runs (for each ε) and Fig. 4
shows three of these sample runs. The x-axis in Fig. 3 and 4 represents values
of the parameter ε in (9), which varies from 0.1 to 2, i.e., kεk varies from 0.1 to
2 in steps of 0.01. The y-axis in Fig. 3 represents the average distance of the
iterate from the origin after 1000 iterations, i.e., (1/20) Σ_{i=1}^{20} kxi1000 k, where xi1000
is the iterate-value after 1000 iterations from the ith run. The y-axis in Fig. 4
represents kxi1000 k. For ε close to 0 the iterate (after 1000 iterations) is within
B0.0003 (0) while for ε close to 2 the iterate (after 1000 iterations) is only within
B0.1 (0).
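A compact sketch reproducing this experiment (Python/NumPy; it is not part of the paper — Q is generated as in Section 5.1, and the seed and the values of ε tried are arbitrary) is:

```python
import numpy as np

def gd_constant_error(Q, x0, eps, n_iter=1000):
    """Recursion (9): gradient steps on x^T Q x with a constant error vector."""
    d = Q.shape[0]
    noise = np.full(d, eps / np.sqrt(d))   # constant noise-vector (eps/sqrt(d), ..., eps/sqrt(d))
    x = x0.copy()
    for n in range(1, n_iter + 1):
        x = x - (1.0 / n) * (Q @ x + noise)
    return x

rng = np.random.default_rng(2)
d = 10
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
Q = U @ np.diag(rng.uniform(0.5, 5.0, size=d)) @ U.T
x0 = rng.standard_normal(d)

for eps in (0.1, 1.0, 2.0):
    print(eps, np.linalg.norm(gd_constant_error(Q, x0, eps)))
# Larger eps should leave the final iterate in a larger ball around the origin.
```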
[Figure 4: Three sample runs.]
6 Extensions and conclusions
In this paper we have provided sufficient conditions for stability and convergence (to a small neighborhood of the minimum set) of GD with bounded and
(possibly) non-diminishing errors. To the best of our knowledge our analysis
of GD with errors is new to the literature. In addition to being easily verifiable, the assumptions presented herein do not affect the choice of step-size.
Finally, experimental results presented in Section 5 are seen to validate the
theory. An important step in the analysis of ‘GD with errors’ is to show stability (almost sure boundedness) of the iterates. It is worth noting that this step
is not straightforward even in the case of asymptotically vanishing errors, i.e.,
n → 0 as n → ∞. An extension to our main results is the introduction of
an additional martingale
noise term Mn+1 at stage n. Our results will continue
P
to hold provided n≥0 γ(n)Mn+1 < ∞ a.s. Another extension is to analyze
implementations of GD using Newton’s method with bounded, (possibly) nondiminishing errors. To see this, define G(x) := H(x)−1 ∇f (x) + B (0) in (A1);
G∞ changes accordingly. Here H(x) (assumed positive definite) denotes the
Hessian evaluated at x. Theorems 1 & 2 hold under this new definition of G
and appropriate modifications of (A1) − (A4). Our analysis is valid in situations
where the function f is not differentiable at some points, however, the error in
the gradient estimate at any stage is bounded. An interesting future direction
will be to derive convergence rates of gradient schemes with non-diminishing
errors. More generally, it would be interesting to derive convergence rates of
stochastic approximation algorithms with set-valued mean-fields.
References
[1] J. Aubin and A. Cellina. Differential Inclusions: Set-Valued Maps and
Viability Theory. Springer, 1984.
[2] M. Benaı̈m, J. Hofbauer, and S. Sorin. Stochastic approximations and
differential inclusions. SIAM Journal on Control and Optimization, pages
328–348, 2005.
[3] M. Benaı̈m, J. Hofbauer, and S. Sorin. Perturbations of set-valued dynamical systems, with applications to game theory. Dynamic Games and
Applications, 2(2):195–205, 2012.
[4] M. Benaı̈m. A dynamical system approach to stochastic approximations.
SIAM J. Control Optim., 34(2):437–472, 1996.
[5] D.P. Bertsekas and J.N. Tsitsiklis. Gradient convergence in gradient methods with errors. SIAM Journal on Optimization, 10(3):627–642, 2000.
[6] S.S. Haykin. Neural networks and learning machines, volume 3. Pearson
Education Upper Saddle River, 2009.
[7] M. Hurley. Chain recurrence, semiflows, and gradients. Journal of Dynamics and Differential Equations, 7(3):437–456, 1995.
[8] J. Kiefer and J. Wolfowitz. Stochastic estimation of the maximum of a
regression function. The Annals of Mathematical Statistics, 23(3):462–466,
1952.
[9] O.L. Mangasarian and M.V. Solodov. Serial and parallel backpropagation convergence via nonmonotone perturbed minimization. Optimization
Methods and Software, 4(2):103–116, 1994.
[10] J.C. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. Automatic Control, IEEE Transactions
on, 37(3):332–341, 1992.
[11] V.B. Tadić and A. Doucet. Asymptotic bias of stochastic gradient search.
In Decision and Control and European Control Conference (CDC-ECC),
2011 50th IEEE Conference on, pages 722–727. IEEE, 2011.
| 3 |
arXiv:1703.05725v1 [math.GR] 16 Mar 2017
IMMUTABILITY IS NOT UNIFORMLY DECIDABLE IN
HYPERBOLIC GROUPS
DANIEL GROVES AND HENRY WILTON
Abstract. A finitely generated subgroup H of a torsion-free hyperbolic group G is called immutable if there are only finitely many
conjugacy classes of injections of H into G. We show that there
is no uniform algorithm to recognize immutability, answering a
uniform version of a question asked by the authors.
In [4] we introduced the following notion which is important for the
study of conjugacy classes of solutions to equations and inequations
over torsion-free hyperbolic groups, and also for the study of limit
groups over (torsion-free) hyperbolic groups.
Definition 1. [4, Definition 7.1] Let G be a group. A finitely generated subgroup H of G is called immutable if there are finitely many
injective homomorphisms φ1 , . . . , φN : H → G so that any injective
homomorphism φ : H → G is conjugate to one of the φi .
We gave the following characterization of immutable subgroups.
Lemma 2. [4, Lemma 7.2] Let Γ be a torsion-free hyperbolic group. A
finitely generated subgroup of Γ is immutable if and only if it does not
admit a nontrivial free splitting or an essential splitting over Z.
The following corollary is immediate.
Corollary 3. Let Γ be a torsion-free hyperbolic group and suppose
that H is a finitely generated subgroup. If for every action of H on a
simplicial tree with trivial or cyclic edge stabilizers H has a global fixed
point then H is immutable.
If Γ is a torsion-free hyperbolic group then the immutable subgroups
of Γ form some of the essential building blocks of the structure of Γ–
limit groups. See [4] and [5] for more information.
Date: March 16, 2017.
The work of the first author was supported by the National Science Foundation
and by a grant from the Simons Foundation (#342049 to Daniel Groves).
The third author is partially funded by EPSRC Standard Grant number
EP/L026481/1. This paper was completed while the third author was participating in the Non-positive curvature, group actions and cohomology programme at
the Isaac Newton Institute, funded by EPSRC Grant number EP/K032208/1.
In [4, Theorem 1.4] we proved that given a torsion-free hyperbolic
group Γ it is possible to recursively enumerate the finite tuples of Γ
which generate immutable subgroups. This naturally led us to ask
the following
Question 4. [4, Question 7.12] Let Γ be a torsion-free hyperbolic group.
Is there an algorithm that takes as input a finite subset S of Γ and
decides whether or not the subgroup hSi is immutable?
We are not able to answer this question, but we can answer the
uniform version of this question in the negative, as witnessed by the
following result. It is worth remarking that the algorithm from [4,
Theorem 1.4] is uniform, in the sense that one can enumerate pairs
(Γ, S) where Γ is a torsion-free hyperbolic group (given by a finite
presentation) and S is a finite subset of words in the generators of Γ
so that hSi is immutable in Γ.
Theorem 5. There is no algorithm which takes as input a presentation
of a (torsion-free) hyperbolic group and a finite tuple of elements, and
determines whether or not the tuple generates an immutable subgroup.
Proof. Let Γ0 be a non-elementary, torsion-free, hyperbolic group with
Property (T) and let {a, b} ⊂ Γ0 be such that ha, bi is a nonabelian
free, malnormal and quasi-convex subgroup of Γ0 . There are many
hyperbolic groups with Property (T) (see, for example, [9]). The existence of such a pair {a, b} follows immediately from [6, Theorem C].
Throughout our proof, Γ0 and {a, b} are fixed.
Consider a finitely presented group Q with unsolvable word problem
(see [7]), and let G be a hyperbolic group that fits into a short exact
sequence
1 → N → G → Q ∗ Z → 1,
where N is finitely generated and has Kazhdan’s Property (T). Such a
G can be constructed using [2, Corollary 1.2], by taking H from that
result to be a non-elementary hyperbolic group with Property (T), and
recalling that having Property (T) is closed under taking quotients.
Let t be the generator for the second free factor in Q ∗ Z. Given a
word u in the generators of Q, define words
cu = tut−2 ut,
and
du = utut−1 u.
Claim 1. If u =Q 1 then hcu , du i = {1} in Q ∗ Z. If u 6=Q 1 then
hcu , du i is free of rank 2 in Q ∗ Z.
Proof of Claim 1. The first assertion of the claim is obvious, and the
second follows from the fact that if u is nontrivial in Q then any reduced
word in {cu , du }± yields a word in {t, u}± which is in normal form in
the free product Q ∗ Z, and hence is nontrivial in Q ∗ Z.
We lift the elements cu , du ∈ Q ∗ Z to elements c̄u , d¯u ∈ G.
Claim 2. Given words cu and du , it is possible to algorithmically find
¯ u zu i is quasi-convex and
words wu , xu , yu , zu ∈ N so that hwu c̄u xu , yu dd
free of rank 2.
Proof of Claim 2. It is well known (see, for example, [1, Lemma 4.9])
that in a δ-hyperbolic space a path which is made from concatenating
geodesics whose length is much greater than the Gromov product at
the concatenation points is a uniform-quality quasi-geodesic, and in
particular not a loop.
By considering geodesic words representing c̄u and d¯u , it is possible
to find long words in the generators of N as in the statement of the
claim so that any concatenation of (wu c̄u xu )± and (yu d¯u zu )± is such a
quasigeodesic. From this, it follows immediately that the free group
hwu c̄u xu , yu d¯u zu i is quasi-isometrically embedded and has free image in
G. This can be done algorithmically because the word problem in G
is (uniformly) solvable, so we can compute geodesic representatives for
words and calculate Gromov products.
Let gu = wu c̄u xu and hu = yu d¯u zu , and let Ju = hgu , hu i. Note
that the image of Ju in Q is either trivial (if u =Q 1) or free of rank
2 (otherwise). Therefore, if u =Q 1 then Ju ∩ N = Ju and otherwise
Ju ∩ N = {1}.
Now consider the group
Γu = G ∗{gu =a,hu =b} Γ0 .
Since ha, bi is malnormal and quasiconvex in Γ0 and hgu , hu i is quasiconvex in G, the group Γu is hyperbolic by the Bestvina–Feighn Combination Theorem [3].
Let Ku = hN, Γ0 i ≤ Γu . We remark that a presentation for Γu and
generators for Ku as a subgroup of Γu can be algorithmically computed
from the presentations of G and Γ0 and the word u.
Claim 3. If u =Q 1 then Ku is immutable. If u 6=Q 1 then Ku splits
nontrivially over {1} and so is not immutable.
Proof of Claim 3. Let Nu = N ∩ Ju . We observed above that if u =Q 1
then Nu = Ju , and that if u 6=Q 1 then Nu = {1}. By considering the
induced action of Ku on the Bass-Serre tree of the splitting of Γu given
by the defining amalgam, we see that in case u =Q 1 we have
Ku ∼= N ∗{gu =a,hu =b} Γ0 ,
whereas in case u 6=Q 1 we have
Ku ∼= N ∗ Γ0 .
Thus, if u 6=Q 1 then Ku splits nontrivially as a free product, as required.
On the other hand, suppose that u =Q 1, and suppose that Ku
acts on a tree T with trivial or cyclic edge stabilizers. Since Property
(T) groups have Property (FA) [8], N and Γ0 must act elliptically on
T . However, if they do not have a common fixed vertex, then their
intersection (which is free of rank 2) must fix the edge-path between
the fixed point sets for N and for Γ0 , contradicting the assumption
that edge stabilizers are trivial or cyclic. Thus, there is a common
fixed point for N and Γ0 , and so Ku acts on T with global fixed point.
It follows from Corollary 3 that Ku is immutable, as required.
An algorithm as described in the statement of the theorem would
(when given the explicit presentation of Γu and the explicit generators
for Ku ) be able to determine whether or not Ku is immutable. In turn,
this would decide the word problem for Q, by Claim 3. Since this is
impossible, there is no such algorithm, and the proof of Theorem 5 is
complete.
Remark 6. By taking only a cyclic subgroup to amalgamate in the
definition of Γu , instead of a free group of rank 2, it is straightforward
to see that one cannot decide whether non-immutable subgroups split
over {1} or over {Z}.
References
[1] Ian Agol, Daniel Groves, and Jason Fox Manning. An alternate proof of Wise’s
malnormal special quotient theorem. Forum Math. Pi, 4:e1, 54, 2016.
[2] Igor Belegradek and Denis Osin. Rips construction and Kazhdan property (T).
Groups Geom. Dyn., 2(1):1–12, 2008.
[3] M. Bestvina and M. Feighn. A combination theorem for negatively curved
groups. J. Differential Geom., 35(1):85–101, 1992.
[4] Daniel Groves and Henry Wilton. Conjugacy classes of solutions to equations
and inequations over hyperbolic groups. J. Topol., 3(2):311–332, 2010.
[5] Daniel Groves and Henry Wilton. The structure of limit groups over relatively
hyperbolic groups. arxiv.org/abs/1603.07187, 2016.
[6] Ilya Kapovich. A non-quasiconvexity embedding theorem for hyperbolic groups.
Math. Proc. Cambridge Philos. Soc., 127(3):461–486, 1999.
[7] P. S. Novikov. Ob algoritmičeskoı̆ nerazrešimosti problemy toždestva slov v teorii
grupp. Trudy Mat. Inst. im. Steklov. no. 44. Izdat. Akad. Nauk SSSR, Moscow,
1955.
[8] Yasuo Watatani. Property T of Kazhdan implies property FA of Serre. Math.
Japon., 27(1):97–103, 1982.
[9] A. Żuk. Property (T) and Kazhdan constants for discrete groups. Geom. Funct.
Anal., 13(3):643–670, 2003.
Daniel Groves, Department of Mathematics, Statistics and Computer Science, University of Illinois at Chicago, 322 SEO, M/C 249,
851 S. Morgan St., Chicago, IL 60607-7045, USA
E-mail address: [email protected]
Henry Wilton, Centre for Mathematical Sciences, Wilberforce Road,
Cambridge, CB3 0WB, UNITED KINGDOM
E-mail address: [email protected]
| 4 |
Distributed Complexity of Large-Scale Graph Computations
Gopal Pandurangan∗
Peter Robinson†
Michele Scquizzato‡
arXiv:1602.08481v5 [cs.DC] 25 Nov 2017
Abstract
Motivated by the increasing need to understand the distributed algorithmic foundations
of large-scale graph computations, we study some fundamental graph problems in a message-passing model for distributed computing where k > 2 machines jointly perform computations
on graphs with n nodes (typically, n ≫ k). The input graph is assumed to be initially randomly
partitioned among the k machines, a common implementation in many real-world systems.
Communication is point-to-point, and the goal is to minimize the number of communication
rounds of the computation.
Our main contribution is the General Lower Bound Theorem, a theorem that can be used to
show non-trivial lower bounds on the round complexity of distributed large-scale data computations. The General Lower Bound Theorem is established via an information-theoretic approach
that relates the round complexity to the minimal amount of information required by machines
for solving a problem. Our approach is generic and this theorem can be used in a “cookbook”
fashion to show distributed lower bounds in the context of other problems (including non-graph problems). We present two applications by showing (almost) tight bounds for the round
complexity of two fundamental graph problems, namely PageRank computation and triangle
enumeration. Our approach, as demonstrated in the case of PageRank, can yield tight lower
bounds for problems (including, and especially, under a stochastic partition of the input) where
communication complexity techniques are not obvious. Our approach, as demonstrated in the
case of triangle enumeration, can yield stronger round lower bounds as well as message-round
tradeoffs compared to approaches that use communication complexity techniques. To demonstrate that these lower bounds are tight in general, we then present algorithms that are close to
the lower bounds; these algorithms exhibit a round complexity which scales superlinearly in k,
improving significantly over previous results. Specifically, we show the following results:
• PageRank: We show a lower bound of Ω̃(n/k^2) rounds, and present a distributed algorithm that computes the PageRank of all the nodes of a graph in Õ(n/k^2) rounds.1
• Triangle enumeration: We show that there exist graphs with m edges where any distributed algorithm requires Ω̃(m/k^{5/3}) rounds. This result implies the first non-trivial lower bound of Ω̃(n^{1/3}) rounds for the congested clique model, which is tight up to logarithmic factors. We also present a distributed algorithm that enumerates all the triangles of a graph in Õ(m/k^{5/3} + n/k^{4/3}) rounds.
∗ Department of Computer Science, University of Houston, Houston, TX 77204, USA. E-mail: [email protected]. Supported, in part, by US-Israel Binational Science Foundation grant 2008348, and by NSF grants CCF-1527867, CCF-1540512, IIS-1633720, and CCF-1717075.
† Department of Computer Science, Royal Holloway, University of London, UK. E-mail: [email protected].
‡ School of Computer Science and Communication, KTH Royal Institute of Technology, Sweden. E-mail: [email protected]. Supported, in part, by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 715672.
1 Notation Ω̃ hides a 1/polylog(n) factor; Õ hides a polylog(n) factor and an additive polylog(n) term.
1 Introduction
The focus of this paper is on the distributed processing of large-scale data, in particular, graph
data, which is becoming increasingly important with the rise of massive graphs such as the Web
graph, social networks, biological networks, and other graph-structured data and the consequent
need for fast distributed algorithms to process such graphs. Several large-scale graph processing
systems such as Pregel [49] and Giraph [1] have been recently designed based on the message-passing
distributed computing model [48, 58]. In these systems, the input graph, which is simply too large
to fit into a single machine, is distributed across a group of machines that are connected via a
communication network and the machines jointly perform computation in a distributed fashion
by sending/receiving messages. A key goal in distributed large-scale computation is to minimize
the amount of communication across machines, as this typically dominates the overall cost of
the computation. Indeed, Reed and Dongarra in a recent CACM article [61] on distributed Big
Data computing emphasize: “It is important for all of computer science to design algorithms that
communicate as little as possible, ideally attaining lower bounds on the amount of communication
required.”
We study fundamental graph problems in a message-passing distributed computing model and
present almost tight bounds on the number of communication rounds needed to solve these problems. In the model, called the k-machine model [37] (explained in detail in Section 1.1), the input
graph (or more generally, any other type of data) is distributed across a group of k machines that
are pairwise interconnected via a communication network. The k machines jointly perform computations on an arbitrary n-vertex input graph (where typically n ≫ k) distributed among the
machines. The communication is point-to-point via message passing. The goal is to minimize the
round complexity, i.e., the number of communication rounds, given some (bandwidth) constraint
on the amount of data that each link of the network can deliver in one round. (As explained in
Section 1.1, the communication cost is assumed to be the dominant cost – which is typically the
case in Big Data computations — and hence the goal of minimizing the number of communication
rounds.) We address a fundamental issue in distributed computing of large-scale data: What is the
distributed (round) complexity of solving problems when each machine can see only a portion of
the input and there is a limited bandwidth for communication? We would like to quantify the round
complexity of solving problems as a function of the size of the input and the number of machines
used in the computation. In particular, we would like to quantify how the round complexity scales
with the number of machines used: more precisely, does the number of rounds scale linearly (or
even super-linearly) in k? And what is the best possible round complexity for various problems?
A main contribution of this paper is a technique that can be used to show non-trivial (almost
tight) lower bounds on the distributed complexity (number of communication rounds) of large-scale
data computations and show its applications to graph problems.
1.1
The Model
We now describe the adopted model of distributed computation, the k-machine model (a.k.a. the
Big Data model ), introduced in [37] and further investigated in [60, 19, 56, 7]. The model consists
of a set of k ≥ 2 machines {M1, M2, . . . , Mk} that are pairwise interconnected by bidirectional
point-to-point communication links. Each machine executes an instance of a distributed algorithm.
The computation advances in synchronous rounds where, in each round, machines can exchange
messages over their communication links and perform some local computation. Each link is assumed
to have a bandwidth of B bits per round, i.e., B bits can be transmitted over each link in each
round; unless otherwise stated, we assume B = Θ(polylog n).2 Machines do not share any memory
and have no other means of communication. We assume that each machine has access to a private
source of true random bits. We say that algorithm A has ε-error if, in any run of A, the output of the machines corresponds to a correct solution with probability at least 1 − ε. To quantify the
performance of a randomized (Monte Carlo) algorithm A, we define the round complexity of A to
be3 the worst-case number of rounds required by any machine when executing A ([39, 37]).
Local computation within a machine is considered to happen instantaneously at zero cost, while
the exchange of messages between machines is the costly operation.4 However, we note that in all
the algorithms of this paper, every machine in every round performs lightweight computations; in
particular, these computations are bounded by a polynomial (typically, even linear) in the size of
the input assigned to that machine.
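To make the round structure concrete, the following minimal Python sketch (our illustration, not code from the paper or from any of the cited systems) simulates the synchronous k-machine model: k machines, a link between every pair, and at most one message of at most B bits per link per round; the round complexity of an algorithm is simply the number of calls to round().

from collections import defaultdict

class KMachineModel:
    """Synchronous k-machine model sketch: pairwise links, B bits per link per round."""
    def __init__(self, k, B):
        self.k, self.B = k, B
        self.pending = defaultdict(dict)   # pending[i][j] = bit string sent by machine j to machine i
        self.rounds = 0

    def round(self, step):
        # step(i, received) returns {j: bit_string}: the messages machine i sends
        # to machine j in this round, based on what it received in the last round.
        delivered, self.pending = self.pending, defaultdict(dict)
        for i in range(self.k):
            for j, msg in step(i, delivered.get(i, {})).items():
                assert i != j and len(msg) <= self.B, "at most B bits per link per round"
                self.pending[j][i] = msg
        self.rounds += 1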
Although the k-machine model is a general model of distributed computation that can be applied
to study any (large-scale data) problem, in this paper we focus on investigating graph problems
in it. Specifically, we are given an input graph G with n vertices, each associated with a unique
integer ID from [n], and m edges. To avoid trivialities, we will assume that n > k (typically, n ≫ k). Initially, the entire graph G is not known by any single machine, but rather partitioned
among the k machines in a “balanced” fashion, i.e., the nodes and/or edges of G must be partitioned
approximately evenly among the machines. We assume a vertex-partition model, whereby vertices
(and their incident edges) are partitioned across machines. Specifically, the type of partition that
we will assume throughout is the random vertex partition (RVP), i.e., vertices (and their incident
edges) of the input graph are assigned randomly to machines. (This is the typical way used by
many real graph processing systems, such as Pregel [49] and Giraph [1, 17], to partition the input
graph among the machines; it is easy to accomplish, e.g., via hashing.)
More formally, in the random vertex partition model each vertex of G is assigned independently
and uniformly at random to one of the k machines.5 If a vertex v is assigned to machine Mi we say
that Mi is the home machine of v and, with a slight abuse of notation, write v ∈ Mi . When a vertex
is assigned to a machine, all its incident edges are known to that machine as well, i.e., the home
machine initially knows the IDs of the neighbors of that vertex as well as the identities of their
home machines (and the weights of the corresponding edges in case G is weighted). For directed
graphs, we assume that out-edges of vertices are known to the assigned machine. (However, we note
that our lower bounds hold even if both in- and out- edges are known to the home machine.) An
immediate property of the RVP model is that the number of vertices at each machine is balanced,
2 There is an alternate (but equivalent) way to view this communication restriction: instead of putting a bandwidth restriction on the links, we can put a restriction on the amount of information that each machine can communicate (i.e., send/receive) in each round. The results that we obtain in the bandwidth-restricted model will also apply to the latter model [37]. Also, our bounds can be easily rewritten in terms of the B parameter.
3 The round complexity not only captures the (potential) speed up possible, but it also implicitly captures the communication cost of the algorithm as well, since links can transmit only a limited amount of bits per round.
4 This assumption is reasonable in the context of large-scale data, e.g., it has been made in the context of theoretical analysis of MapReduce, see e.g., [45] for a justification. Indeed, typically in practice, even when assuming that links have a bandwidth of order of gigabytes of data per second, the amount of data that has to be communicated can be in order of tera- or peta-bytes which generally dominates the overall computation cost [45].
5 An alternate partitioning model, called the random edge partition (REP) model, has also been studied [74, 56]: here, each edge of G is assigned independently and randomly to one of the k machines. One can extend our results to get bounds for the REP model since it is easy to show that one can transform the input partition from one model to the other in Õ(m/k^2 + n/k) rounds.
i.e., each machine is the home machine of Θ̃(n/k) vertices with high probability (see [37]); we shall
assume this throughout the paper. A convenient way to implement the RVP model is through
hashing: each vertex (ID) is hashed to one of the k machines. Hence, if a machine knows a vertex
ID, it also knows where it is hashed to.
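As an illustration of the hashing implementation just described (a sketch, not the paper's code), the home machine of a vertex can be recomputed from its ID alone, so any machine can locate any vertex without communication:

import hashlib

def home_machine(vertex_id, k):
    # Hash the vertex ID to one of the k machines; any machine that knows the
    # ID can recompute this locally.
    digest = hashlib.sha256(str(vertex_id).encode()).hexdigest()
    return int(digest, 16) % k

def random_vertex_partition(adj, k):
    # adj: dict mapping vertex ID -> list of neighbour IDs.  Each machine
    # receives its vertices together with all their incident edges.
    parts = [dict() for _ in range(k)]
    for v, neighbours in adj.items():
        parts[home_machine(v, k)][v] = list(neighbours)
    return parts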
Eventually, in a computation each machine Mi, for each 1 ≤ i ≤ k, must set a designated local
output variable oi (which need not depend on the set of vertices assigned to machine Mi ), and the
output configuration o = ho1 , . . . , ok i must satisfy certain feasibility conditions w.r.t. problem P.
For example, for the minimum spanning tree problem each oi corresponds to a set of edges, and the
edges in the union of such sets must form an MST of the input graph. Similarly, when considering
the PageRank problem, each oi corresponds to the PageRank values of one or more nodes, such that
the PageRank value of each node of the graph should be output by at least one machine.
1.2 Our Results
We present a general information-theoretic approach which is useful for showing non-trivial round
complexity lower bounds for certain graph problems in the k-machine model. This approach can be
useful in the context of showing time lower bounds for many other problems (including non-graph
problems) in a distributed setting where the input is partitioned across many machines and the
output size is large. Using our approach we show almost tight (up to logarithmic factors) lower
bounds for two fundamental, seemingly unrelated, graph problems, namely PageRank computation
and triangle enumeration (see Section 1.5 for a background and applications of these problems). To
demonstrate the near-tightness of our lower bound, we also present (almost) optimal distributed
randomized algorithms (which are optimal up to a polylog(n) factor) for these problems. All these
algorithms exhibit a round complexity that scales superlinearly in k, improving significantly over
previous results.
1. PageRank Computation. In Section 2.3 we show an almost tight lower bound of Ω̃(n/k^2) rounds. In Section 3.1 we present an algorithm that computes the PageRank of all nodes of a graph in Õ(n/k^2) rounds, thus improving over the previously known bound of Õ(n/k) [37].
2. Triangle Enumeration. In Section 2.4 we show that there exist graphs with m edges where any distributed algorithm requires Ω̃(m/k^{5/3}) rounds.6 We show that this bound is tight up to a polylog(n) factor by presenting a matching upper bound. In Section 3.2 we present an algorithm that enumerates all the triangles of a graph in Õ(m/k^{5/3} + n/k^{4/3}) rounds, where m is the number of edges. Our bound improves over the previous best bound of Õ(n^{7/3}/k^2) rounds [37].7
We also show that our results for triangle enumeration imply new bounds for the well-studied
congested clique model [47]. Specifically, our lower bound for the k-machine model implies an Ω̃(n^{1/3}) lower bound for the congested clique, which holds for any algorithm, including randomized ones
(see Corollary 1). Notice that this does not contradict the impossibility result of [25], which
states that super-constant lower bounds for the congested clique would give new lower bounds in
circuit complexity: because of the size required by any solution for triangle enumeration, Remark 3
in [25] does not apply. To the best of our knowledge, this is the first super-constant lower bound
known for the congested clique model. (Previous bounds were known for weaker versions of the
model, e.g., which allowed only broadcast or applied only to deterministic algorithms [25], or for
implementations of specific algorithms [14].)
6 Notation Ω̃ hides a 1/polylog(n) factor.
7 Õ hides a polylog(n) factor and an additive polylog(n) term.
Our bounds for triangle enumeration also apply to the problem of enumerating all the missing
edges that would close a triangle (also called triads). Our techniques and results can be generalized
to enumerating other small subgraphs such as cycles and cliques.
1.3 Overview of Techniques
Lower Bounds. In Theorem 1 we prove a general result, the General Lower Bound Theorem,
which relates the round complexity in the k-machine model to the minimal amount of information
required by machines for correctly solving a problem. While PageRank and triangle enumeration are
fundamentally different problems, we derive lower bounds for both problems via the “information
to running time” relationship of Theorem 1. The General Lower Bound Theorem gives two probabilistic bounds that should be satisfied in order to obtain a lower bound on the round complexity
of any problem. The two bounds together capture the decrease in uncertainty (also called surprisal
— cf. Section 2) that happens to some machine as a result of outputting the solution. We can
show that this “surprisal change” represents the maximum expected “Information Cost” over all
machines which can be used to lower bound the run time. The proof of the General Lower Bound
Theorem makes use of information-theoretic machinery, yet its application requires no use of information theory; the machinery is hidden in the proof which makes it easier to apply it to derive lower
bounds for various problems. We conjecture that Theorem 1 can be used to obtain lower bounds
for various problems (including non-graph problems) that have a relatively large output size (e.g.,
minimum spanning tree, shortest paths, sorting, matrix multiplication etc.) thus complementing
the approach based on communication complexity (see, e.g., [59, 22, 51, 26, 53, 25, 37, 56, 55] and
references therein). In fact, our approach, as demonstrated in the case of triangle enumeration, can
yield stronger round lower bounds as well as message-round tradeoffs compared to approaches that
use communication complexity techniques (more on this in the next paragraph). Our approach,
as demonstrated in the case of PageRank, can yield tight lower bounds for problems (including,
and especially, under a stochastic/random partition of the input) where communication complexity
techniques are not obvious. In fact, for many problems, applying the General Lower Bound Theorem gives non-trivial lower bounds in a fairly straightforward way that are not (at least easily)
obtainable by communication complexity techniques. To give an example, consider the problem
of distributed sorting where n elements are distributed randomly across the k machines (each machine gets n/k elements on average) and the requirement is that, at the end, the i-th machine must hold the ((i − 1)n/k + 1)-th, ((i − 1)n/k + 2)-th, . . . , (i · n/k)-th order statistics. One can use the General Lower Bound Theorem to show a lower bound of Ω̃(n/k^2) for this problem (this is tight, as there exists an Õ(n/k^2)-round sorting algorithm). Note that the same lower bound (under a random partition) is harder to show using communication complexity techniques.8
We also note that tight round complexity lower bounds do not always directly follow from exploiting message (bit) complexity lower bounds which can be obtained by leveraging communication
complexity lower bounds. For example, for the problem of triangle enumeration, even assuming
the highest possible message lower bound of Ω(m),9 this would directly imply a round lower bound
of Ω̃(m/k^2) (since Θ(k^2) messages can be exchanged in one round) and not the tight Ω̃(m/k^{5/3})
8 Assuming an adversarial (worst-case) balanced partition (i.e., each machine gets n/k elements), using multi-party communication complexity techniques one can show the same lower bound [55]; but this is harder to show under random partition.
9 Note that the problem can be trivially solved in O(m) messages; see also [74] where an Ω̃(m) message lower bound is shown for triangle detection in the edge partition model using communication complexity techniques.
shown in this paper. Furthermore, our approach can show round-message tradeoffs giving stronger
message lower bounds for algorithms constrained to run in a prescribed round bound compared
to what one can obtain using communication complexity approaches. In particular, for triangle
enumeration, we show that any round-optimal algorithm that enumerates all triangles with high
probability in the k-machine model needs to exchange a total of Ω̃(mk^{1/3}) messages in the worst
case.
We emphasize that our General Lower Bound Theorem gives non-trivial lower bounds only when the output size is large enough, but it still works seamlessly across all output sizes. To illustrate this, we note that the triangle enumeration lower bound of Ω̃(m/k^{5/3}) is true only for dense graphs, i.e., m = Θ(n^2). In fact the "true" lower bound shown by our theorem is Ω̃((t/k)^{2/3}), where t is the number of triangles (the output size); this bound can be shown to apply even for sparse (random) graphs by extending our analysis.
We point out that an entropy-based information-theoretic argument has been used in prior
work [37]. However, there is a crucial difference, as explained next. In [37], it was shown that
Ω̃(n/k) is a lower bound for computing a spanning tree (ST) of a graph. (This also implies the
same lower bound for other fundamental problems such as computing an MST.) However, this lower
bound holds under the criterion that the machine which hosts the vertex (i.e., its home machine)
must know at the end of the computation the status of all of its incident edges (whether they belong
to a ST or not) and output their respective status. The lower bound proof exploits this criterion
to show that any algorithm will require some machine to receive Ω(n) bits of information, and since any machine has k − 1 links, this gives an Ω̃(n/k) lower bound. This lower bound proof fails if we require the final status of each edge to be known by some machine (different machines might know the status of different edges); indeed, under this output criterion, it can be shown that MST can be solved in Õ(n/k^2) rounds [56]. On the other hand, the lower bound proof technique of this
paper applies to the less restrictive (and more natural) criterion that any machine can output any
part of the solution. This calls for a more nuanced approach which has to explicitly exploit the
distribution of the input across all the k machines; this issue was also raised in the recent work
of [15, 16] that showed communication complexity lower bounds in arbitrary network topologies.
In [8], a direct sum theorem is shown that yields a communication complexity lower bound for
set disjointness. The method of [8] can be applied to obtain lower bounds for functions F that
can be “decomposed” as F (x, y) = f (g(x1 , y1 ), . . . , g(xn , yn )), by reduction from the information
complexity of the function g. These methods do not seem applicable to our setting as we are
considering problems where the output size is large.
We note that our approach can be used to show lower bounds in other distributed computing
models as well. For example, our approach gives a non-trivial lower bound of Ω̃(n^{1/3})
for the congested clique. Also, after the publication of an earlier version of this paper in arXiv
[57], a subsequent work by Izumi and Le Gall [34], using our information-theoretic approach, shows
a lower bound for triangle enumeration in the standard CONGEST distributed computing model
(where the input graph is also the communication graph). Indeed, our lower bound result on
k-machine model directly implies a lower bound for the CONGEST model.10
Upper Bounds. The Conversion Theorem of [37] directly translates algorithms implemented in a
CONGEST message passing model to the k-machine model and almost all the previous algorithms
[37, 19, 60] were derived using this theorem. In contrast, the present paper does not use the
10 In our arXiv version, our lower bound was weaker by a logarithmic factor; this has been improved in the current version.
Conversion Theorem; instead, it gives direct solutions for the problems at hand in the k-machine
model, leading to improved algorithms with significantly better round complexity.
While our algorithms for the two different problems use techniques that are specific to each
problem, we point out a simple, but key, unifying technique that proves very useful in designing
fast algorithms called randomized proxy computation.11 The randomized proxy computation is
crucially used to distribute the communication and computation across machines (in a randomized
fashion) to avoid congestion at any particular machine. This helps in load-balancing congestion
at any given machine by redistributing it evenly across the k machines. This is achieved, roughly
speaking, by re-assigning the executions of individual nodes uniformly at random among the machines. For example, a simple use of the strategy in the triangle enumeration algorithm (see
Section 3.2) is as follows: each edge in the graph is assigned a random machine as its proxy; the
proxy does computation “associated” with the edge. This helps in alleviating the congestion associated with machines having high degree nodes. A slightly more sophisticated use of proxies is used
in PageRank (see Section 3.1) as well. The proxy computation allows one to move away from the
communication pattern imposed by the topology of the input graph (which can cause congestion
at a particular machine) to a more balanced communication overall.
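The following sketch (our illustration of the idea, not the paper's implementation) shows the basic proxy assignment for edges described above: the machine responsible for an edge is chosen by hashing the edge itself, so the per-edge work of a high-degree vertex is spread roughly evenly over the k machines.

import hashlib

def edge_proxy(u, v, k):
    # The proxy of edge {u, v} depends only on the (unordered) edge, not on the
    # home machines of u and v, so congestion at high-degree vertices is avoided.
    a, b = (u, v) if u <= v else (v, u)
    digest = hashlib.sha256(f"{a},{b}".encode()).hexdigest()
    return int(digest, 16) % k

Each home machine would then forward its incident edges to their proxies, and each proxy performs the computation "associated" with its edges.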
1.4 Related Work
The theoretical study of large-scale graph computation in distributed systems is relatively new.
Several works have been devoted to developing MapReduce graph algorithms (e.g., see [46, 42, 45,
36, 3] and references therein). We note that the flavor of theory developed for MapReduce is quite
different compared to this paper. Minimizing communication is also a key motivation in MapReduce
algorithms (e.g., see [45]); however this is generally achieved by making sure that the data is made
small enough quickly (in a small number of MapReduce rounds) to fit into the memory of a single
machine.12 For example, in the PageRank algorithm, see e.g., the MST MapReduce algorithm
of [42]. The k-machine model was proposed in [37] motivated by distributed large-scale graph
processing systems such as Pregel, and has been further investigated in [60, 19, 56, 7]; a similar
model was also considered by Woodruff and Zhang [74]. For a comparison of the k-machine model
with other parallel and distributed models proposed for large-scale data processing, including Bulk
Synchronous Parallel (BSP) model [69], MapReduce [36], and the congested clique, we refer to [70].
In particular, according to [70], “Among all models with restricted communication the “big data”
[k-machine] model is the one most similar to the MapReduce model”.
The k-machine model is closely related to the BSP model [69]; it can be considered to be a
simplified version of BSP, where local computation is ignored and synchronization happens at the
end of every round (the synchronization cost is ignored). Unlike BSP which has a lot of different
parameters (which typically makes it harder to prove rigorous theoretical bounds [70]), the k-machine model is characterized by one parameter (the number of machines) which allows one to
develop and prove clean bounds and serves as a basis for comparing various distributed algorithms.
11 Similar techniques have been used in parallel and distributed computation in different applications and contexts; see, e.g., [68, 66].
12 We note that in the k-machine model, the memory usage is also implicitly captured. For example, consider the PageRank algorithm of this paper. Each machine starts with a 1/k fraction of the input size (i.e., Õ((m + n)/k + ∆)) and since the algorithm takes Õ(n/k^2) rounds, the total number of messages received by a machine during the entire execution of the algorithm is Õ(n/k). Furthermore, since the local computation uses only Õ(n/k) space (i.e., essentially linear in the size of the input restricted to that machine), the overall memory used remains the same as the initial input to the machine.
The k-machine model is also closely related to the classical CONGEST model [58], and in
particular to the congested clique model, which recently has received considerable attention (see,
e.g., [47, 44, 43, 25, 50, 14, 33, 30]). The main difference is that the k-machine model is aimed
at the study of large-scale computations, where the size n of the input is significantly bigger than
the number of available machines k, and thus many vertices of the input graph are mapped to
the same machine, whereas the two aforementioned models are aimed at the study of distributed
network algorithms, where n = k and each vertex corresponds to a dedicated machine. More “local
knowledge” is available per vertex (since it can access for free information about other vertices in
the same machine) in the k-machine model compared to the other two models. On the other hand,
all vertices assigned to a machine have to communicate through the links incident on this machine,
which can limit the bandwidth (unlike the other two models where each vertex has a dedicated
processor). These differences manifest in how the best algorithms are designed for the congested
clique versus the k-machine model. In particular, the best known distributed algorithm in the
congested clique model may not directly yield the fastest algorithm in the k-machine model (see
Section 4.1). There have been several works in recent database literature investigating distributed
computation of graph problems in particular, connected components, subgraph enumeration, and
PageRank in a message passing framework, e.g., see [27, 40, 31] and references therein. We point
out that a lower bound for triangle enumeration has been recently obtained in the Massively Parallel
Computation (MPC) model [9] (a model with some similarities with the k-machine model) under
a worst-case input partition [38]; but this does not directly translate to our lower bound in the
k-machine model which holds even under a random partition.
There are several works on triangle enumeration and counting in centralized, distributed, and
streaming settings (see, e.g., [67] and references therein), but none of these seem to yield the sought
complexity in the k-machine model.
1.5 Preliminaries
PageRank. PageRank is one of the most important measures to rank the importance of nodes in
a graph, and was first proposed to rank Web pages [13]. The PageRank of a graph G = (V, E) is
defined as follows. Let ε be a small constant which is fixed (ε is called the reset probability, i.e., with probability ε the random walk restarts from a node chosen uniformly at random among all nodes in the network). The PageRank (vector) of a graph (e.g., see [4, 6, 10, 21]) is the stationary distribution vector π of the following special type of random walk: at each step of the random walk, with probability ε the walk restarts from a randomly chosen node, and with probability 1 − ε the walk follows a randomly chosen outgoing (neighbor) edge from the current node and moves to
that neighbor. Computing the PageRank and its variants efficiently in various computation models
has been of tremendous research interest in both academia and industry. For a detailed survey of
PageRank see, e.g., [10, 41].
There are mainly two broad approaches to the PageRank computation (see, e.g., [5]). One is
the use of linear algebraic techniques (e.g., the Power Iteration [54]), and the other approach is
Monte Carlo [4]. In the Monte Carlo method, the basic idea is to approximate PageRank by directly
simulating the corresponding random walk and then estimating the stationary distribution with
the performed walk’s distribution [23, 4].
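The Monte Carlo idea can be summarized by the following sequential sketch (ours, for illustration only; the distributed algorithm of Section 3.1 differs): start several walks from every node, stop each walk with probability ε at every step, and estimate the PageRank of a node by the fraction of all visits it receives. Terminating a walk and starting fresh ones from every node plays the role of the restart step for estimation purposes.

import random
from collections import Counter

def monte_carlo_pagerank(out_edges, eps, walks_per_node=100):
    # out_edges: dict node -> list of out-neighbours (every node must have an entry);
    # eps: reset probability.
    visits, total = Counter(), 0
    for start in out_edges:
        for _ in range(walks_per_node):
            u = start
            while True:
                visits[u] += 1
                total += 1
                if random.random() < eps or not out_edges[u]:
                    break                       # walk stops (a restart would begin elsewhere)
                u = random.choice(out_edges[u])
    return {v: visits[v] / total for v in out_edges}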
Triangle enumeration. The triangle enumeration problem (sometimes referred to as triangle listing) is to enumerate all the triangles in a graph, where a triangle is a set of three vertices connected
to each other. This problem has attracted much interest because of its numerous practical applications, including the analysis of social processes in networks [73, 28], community detection [12],
dense subgraph mining [71], joins in databases [52], and the solution of systems of geometric constraints [29]. The interested reader may refer to [18, 11] for additional applications.
Triangle detection and triangle counting are also well-studied problems, and potentially significantly easier than triangle enumeration; however, we emphasize that for many applications,
including all the aforementioned ones, triangle detection and triangle counting are not enough, and
a complete enumeration of all the triangles is required.
The problem of finding triplets of vertices (called triads in social network analysis [72]) that
consists of two edges and one missing edge has obvious applications, e.g., in social networks, where
a missing triangle can be used to recommend friends, which are represented by nodes, to people
who have a mutual friend, where the friendship relation is represented by edges [32]. The problem
of enumerating small subgraphs and cliques has numerous applications [72, 71, 18, 11].
2 Lower Bounds
2.1 A General Lower Bound Theorem
We state a theorem, called the General Lower Bound Theorem, which provides us with a general
way to obtain round complexity lower bounds in the k-machine model. We will apply this theorem
to derive lower bounds for two graph problems, namely, PageRank computation (Section 2.3) and
triangle enumeration (Section 2.4). The reader can skip the proof of the Lower Bound Theorem on
first reading and go to the above respective sections to see how the theorem is used to derive
lower bounds (knowledge of the proof is not needed for its application).
Consider an n-vertex input graph G partitioned across the machines via the random-vertex
partition in the k-machine model. Note that the input graph G is sampled from a probability
distribution on a (suitably chosen) set of graphs G. (For example, in the case of PageRank, G is
the set of all possible instantiations of the lower bound graph H shown in Figure 1.) Consider
a partitioning p = (p1 , . . . , pk ) of an input graph G. We use boldface p to denote a vector and
pi to denote the i-th entry of p, which corresponds to the subgraph assigned to machine Mi . In
our analysis, we frequently condition on the event that a subgraph pi ⊆ G is assigned to a certain
machine Mi . To simplify the notation, we also use pi to denote the event that this happens, e.g.,
Pr[E | pi ] is the probability of event E conditioned on the assignment of pi to machine Mi .
Let Πi be the random variable representing the transcript of the messages received by machine
Mi across its k − 1 links when executing algorithm A for (at most) T rounds, and let GP be the
set of all possible partitions of the graphs in G among the k machines. The execution of A is fully
determined by the given input partitioning p ∈ GP and the public random bit string R ∈ RS,
where RS is the set of all possible strings that are used as random bit string by the algorithm.
Note that R is itself a random variable. Similarly to above, we write Pr[E | pi , r] when conditioning
event E on the events that the public random string is r and machine Mi obtains subgraph pi as its
input, where p = (p1 , . . . , pi , . . . , pk ) and (p, r) ∈ GP × RS. We use Ai (p, r) to denote the output
of the algorithm for a given (p, r) by a machine Mi .
Theorem 1 (General Lower Bound Theorem). Let IC = IC(n, k) be a positive function that we call information cost and let Z be a random variable depending only on the input graph. Consider a T-round ε-error algorithm A, for some ε = o(IC/H[Z]), where H[Z] is the entropy of Z. Let Good ⊆ GP × RS be a set of pairs (p, r) where p = (p1, . . . , pk) ∈ GP is an input partition and r ∈ RS is a public random string, and |Good| ≥ (1 − ε − n^{−Ω(1)})|GP × RS|. Suppose that, for every (p, r) ∈ Good, there exists a machine Mi receiving input graph pi and outputting Ai(p, r), such that
Pr[Z = z | pi, r] ≤ (1/2)^{H[Z] − o(IC)},    (1)
Pr[Z = z | Ai(p, r), pi, r] ≥ (1/2)^{H[Z] − IC},    (2)
for every z that has nonzero probability conditioned on events {Ai(p, r), pi, r}. Then, if B denotes the per-round communication link bandwidth, it holds that
T = Ω(IC/(Bk)).    (3)
Intuition. We can think of Premise (1) as bounding the initial knowledge of the machines about
the random variable Z. On the other hand, Premise (2) shows that at least one machine is able
to increase its knowledge about the value of Z eventually, which we formalize by conditioning
on its output in addition to the initial knowledge. Then, if there is a large set (called Good) of
inputs where these premises hold, then our theorem says that the worst case time of the algorithm
must be sufficiently large. These insights are formally captured by the self-information or surprisal
of an event E, which is defined as log2 (1/Pr[E]) [62] and measures the “amount of surprise” or
information contained in observing an event. Premises (1) and (2) imply that, from some machine
Mi ’s point of view, the occurrence of {Z = z} is “Ω(IC) more surprising” given its initial knowledge,
compared to having this observation after computing the output. We can show that this “surprisal
change” IC bounds from below the maximum communication cost over all machines. In this light,
1
(3) tells us that the run time of the algorithm is roughly a kB
-fraction of the maximum expected
information cost.
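Spelled out, Premise (1) says that the surprisal log_2(1/Pr[Z = z | pi, r]) is at least H[Z] − o(IC) bits given only Mi's initial knowledge, while Premise (2) says that, once Mi's output is also conditioned on, the surprisal is at most H[Z] − IC bits. The difference,
(H[Z] − o(IC)) − (H[Z] − IC) = IC − o(IC),
is information about Z that must have reached Mi over its k − 1 links of bandwidth B, and dividing by the roughly (k − 1)(B + 1) ≈ Bk bits a machine can receive per round gives the form of (3).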
2.2 Proof of the General Lower Bound Theorem
Our proof of Theorem 1 makes use of some standard definitions in information theory (see, e.g., [20]). For random variables X, Y, and W, the entropy is defined as H[X] = − Σ_x Pr[X = x] log_2 Pr[X = x] and the conditional entropy is given by
H[X | Y] = Σ_y Pr[Y = y] H[X | Y = y].    (4)
We also recall the definition of mutual information between X and Y given some event {W = w} as I[X; Y | W = w] = H[X | W = w] − H[X | Y, W = w] and the definition of conditional mutual information between X and Y conditioned on W as
I[X; Y | W] = H[X | W] − H[X | W, Y].    (5)
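For readers who prefer a computational view, the following small Python sketch (ours; it only illustrates the notation and is not used in any proof) evaluates these quantities for a finite joint distribution given as a dictionary {(x, y): probability}:

import math
from collections import defaultdict

def entropy(dist):
    # H[X] = - sum_x Pr[X = x] log2 Pr[X = x]
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def conditional_entropy(joint):
    # H[X | Y] = sum_y Pr[Y = y] * H[X | Y = y], cf. (4)
    marg_y = defaultdict(float)
    for (_, y), p in joint.items():
        marg_y[y] += p
    h = 0.0
    for y, py in marg_y.items():
        cond = {x: joint[(x, yy)] / py for (x, yy) in joint if yy == y}
        h += py * entropy(cond)
    return h

def mutual_information(joint):
    # I[X; Y] = H[X] - H[X | Y], cf. (5) with trivial conditioning
    marg_x = defaultdict(float)
    for (x, _), p in joint.items():
        marg_x[x] += p
    return entropy(marg_x) - conditional_entropy(joint)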
Critical Index. For a given input graph partition p and a random string r, we are interested in identifying the machine that has the maximum expected value of the amount of information that its output reveals about the random variable Z. This motivates us to define the critical index function as
ℓ(p, r) := arg max_{1 ≤ i ≤ k} I[Outi; Z | pi, r]    (6)
and define random variables
Π∗(p, r) = Π_{ℓ(p,r)}(p, r) and Out∗(p, r) = Out_{ℓ(p,r)}(p, r).    (7)
Intuitively speaking, for each (p, r) ∈ GP × RS, the random variable Out∗ is the output of the machine Mi (where i depends on p, r) that attains the maximum mutual information between its output and the random variable Z when executing algorithm A with (p, r). For a given (p, r), we use
p∗ = p_{ℓ(p,r)}    (8)
to denote the input partition of machine M_{ℓ(p,r)}. Note that Z depends only on the input graph whereas Π∗, P∗, and Out∗ depend on the input graph and, in addition, also on the chosen partition p and random string r. From (6), we immediately obtain the following property of the critical index.
Observation 1. For all (p, r) ∈ GP × RS, it holds that I[Out∗; Z | p∗, r] ≥ I[Outi; Z | pi, r], where p∗ = p_{ℓ(p,r)} and p = (p1, . . . , p_{ℓ(p,r)}, . . . , pk).
Lemma 1. For every (p, r) ∈ GP × RS where p = (p1, . . . , p∗, . . . , pk), it holds that
I[Π∗; Z | p∗, r] ≥ I[Out∗; Z | p∗, r].    (9)
Proof. Consider some (p, r) ∈ GP × RS, where p = (p1, . . . , pi, . . . , pk). It holds that
I[Π∗; Z | p∗, r] ≥ max_{1 ≤ i ≤ k} I[Πi; Z | pi, r]    (by Obs. 1)
               = max_{1 ≤ i ≤ k} (H[Z | pi, r] − H[Z | Πi, pi, r]).
The random variable Outi which represents the output of machine Mi is fully determined by the transcript Πi, the choice of the input graph, its partition (i.e., the random variable Pi), and the random bits R. Therefore, we can use the bound H[Z | Πi, pi, r] ≤ H[Z | Outi, pi, r] in the right-hand side of the above inequality to obtain
I[Π∗; Z | p∗, r] ≥ max_{1 ≤ i ≤ k} (H[Z | pi, r] − H[Z | Outi, pi, r])
               = max_{1 ≤ i ≤ k} I[Outi; Z | pi, r]
               = I[Out_{ℓ(p,r)}; Z | p_{ℓ(p,r)}, r]    (by definition of critical index, cf. (6))
               = I[Out∗; Z | p∗, r],    (by (7) and (8))
and the lemma follows.
Lemma 2. For all (p, r) ∈ Good where p = (p1, . . . , pk), there is an i ∈ [k] (which satisfies (2) in the premise of the theorem) such that I[Outi; Z | pi, r] ≥ IC − o(IC).
Proof. For a given (p, r) ∈ Good, let Mi be the machine satisfying Premise (2). By definition,
I[Outi; Z | pi, r] = H[Z | pi, r] − H[Z | Outi, pi, r].    (10)
In the remainder of the proof, we will bound the entropy terms on the right-hand side. Using the definition of entropy, leveraging (1), and since Σ_z Pr[Z = z | pi, r] = 1, we obtain
H[Z | pi, r] = − Σ_z Pr[Z = z | pi, r] log_2 Pr[Z = z | pi, r]
            ≥ (H[Z] − o(IC)) Σ_z Pr[Z = z | pi, r]
            ≥ H[Z] − o(IC).    (11)
To simplify the notation, we use "Ai(p, r)" as a shorthand for the event "Outi = Ai(p, r)". By definition, we have
H[Z | Outi, pi, r] = Σ_{(p,r)} Pr[Ai(p, r)] H[Z | Ai(p, r), pi, r]
                  = Σ_{(p,r)∈Good} Pr[Ai(p, r)] H[Z | Ai(p, r), pi, r] + Σ_{(p,r)∉Good} Pr[Ai(p, r)] H[Z | Ai(p, r), pi, r]
                  ≤ Σ_{(p,r)∈Good} Pr[Ai(p, r)] H[Z | Ai(p, r), pi, r] + H[Z] Σ_{(p,r)∉Good} Pr[Ai(p, r)],    (12)
where we have used the fact that H[Z] ≥ H[Z | Ai(p, r), pi, r]. We can bound the entropy term in the left sum where (p, r) is restricted to the set Good, by recalling that Mi satisfies Premise (2). By the definition of entropy, and since Σ_z Pr[Z = z | Ai(p, r), pi, r] = 1, we obtain
H[Z | Ai(p, r), pi, r] = − Σ_z Pr[Z = z | Ai(p, r), pi, r] log_2 Pr[Z = z | Ai(p, r), pi, r]
                      ≤ (H[Z] − IC) Σ_z Pr[Z = z | Ai(p, r), pi, r]
                      ≤ H[Z] − IC.    (13)
According to our model, the input graph and its partitioning among the machines correspond to
choosing, uniformly at random, an element from GP, whereas the random string r is uniformly
selected from RS. We obtain
Pr[Good] = |Good| / |GP × RS| ≥ 1 − ε − n^{−Ω(1)};  Pr[GP × RS \ Good] ≤ ε + n^{−Ω(1)}.    (14)
Since the output of machine Mi is fully determined by (p, r), we have
Σ_{(p,r)∈S} Pr[Ai(p, r)] = Σ_{(p,r)∈S} Pr[(p, r)] = Pr[S],    (15)
for any set S ⊆ GP × RS; in particular, for S = Good and S = (GP × RS) \ Good. Therefore,
applying the bound derived in (13), we can deduce from (12) that
H[Z | Outi, pi, r] ≤ (H[Z] − IC) Σ_{(p,r)∈Good} Pr[Ai(p, r)] + H[Z] Σ_{(p,r)∉Good} Pr[Ai(p, r)]
                  ≤ (H[Z] − IC)(1 − ε − n^{−Ω(1)}) + H[Z](ε + n^{−Ω(1)}).    (by (14) and (15))
Observe that H[Z] · n^{−Ω(1)} = o(1) since Z depends only on the input graph, assuming a sufficiently large constant in the exponent of n. By the premise of Theorem 1, we have ε = o(IC/H[Z]) and IC ≤ H[Z], hence ε · H[Z] = o(IC) and also ε · o(IC) = o(IC). From this we conclude
H[Z | Outi, pi, r] ≤ H[Z] − IC + o(IC).
Plugging this bound and (11) into the right-hand side of (10) completes the proof.
Recall that Lemma 1 holds for any (p, r) ∈ GP × RS; in particular, even if we restrict our
choice to the set Good. Thus, for (p, r) ∈ Good, where p = (p1 , . . . , pk ), let i ∈ [k] be the index for
which Lemma 2 holds (which is the index of the machine satisfying Premises 1 and 2). This yields
I[Π∗; Z | p∗, r] ≥ I[Out∗; Z | p∗, r]    (by Lemma 1)
               ≥ I[Outi; Z | pi, r]    (by Obs. 1)
               ≥ IC − o(IC).    (by Lemma 2)    (16)
It follows that the transcript of machine M_{ℓ(p,r)} reduces the entropy of (Z | p∗, r) by at least
IC − o(IC) bits in expectation. To complete the proof of Theorem 1, we will argue that the run
time needs to be large, as otherwise M`(p,r) cannot have received sufficiently many bits. Recall
that the run time T is the maximum time required by any machine Mi , over all random strings
and input assignments, i.e., T = max(p,r) T (p, r).
Lemma 3. Suppose that a machine M1 can receive a message of at most B bits on each of its k − 1 links in a single round. Let Γ be the bits received by M1 over its k − 1 links during T rounds. Then, Γ can take at most 2^{(k−1)(B+1)T} distinct values.
Proof. Since the absence of a message can be recognized in the synchronous model, there are at most 2^B + 1 < 2^{B+1} distinct possibilities for the communication received over a single link of bandwidth B in any given round. Thus, we can view the communication received over M1's k − 1 links as a word ω1 of length k − 1, where each character of ω1 is chosen from an alphabet of size (at most) 2^{B+1}, resulting in 2^{(B+1)(k−1)} possible choices for ω1. Finally, we view Γ, i.e., the communication received over the T rounds, as a word of length T, where the alphabet size of each character is ≤ 2^{(B+1)(k−1)}, yielding 2^{(B+1)(k−1)T} many choices in total.
From Lemma 3, we know that the maximum number of bits of information that machine M_{ℓ(p,r)} can receive during T rounds is O(BkT), and hence
T = max_{(p,r)} T(p, r) ≥ Ω(IC/(Bk)).    (17)
This completes the proof of Theorem 1.
2.3 A Lower Bound for PageRank Computation
Theorem 2. Let A be an algorithm that computes a δ-approximation of the PageRank vector of an n-node graph for a small constant δ > 0 (depending on the reset probability), and suppose that A succeeds with probability ≥ 1 − o(1/k). Then, the run time of A is Ω(n/(B·k^2)), assuming a communication link bandwidth of B bits per round and k = Ω(log n) machines. This holds even when the input graph is assigned to the machines via random vertex partitioning.
We first give a high-level overview of the proof. As input graph G, we construct a weakly connected directed graph where the direction of certain “important” edges is determined by a random
bit vector, and assign random IDs to all the vertices. Flipping the direction of an important edge
changes the PageRank of connected vertices by a constant factor and hence any (correct) algorithm
needs to know about these edge directions. It is crucial that the vertex IDs are chosen randomly,
to ensure that knowing just the direction of important edges is not sufficient for computing the
PageRank of the adjacent nodes, as these random vertex IDs “obfuscate the position” of a vertex
in the graph. This means that a machine needs to know both, the direction of an important edge
and the IDs of the connected vertices to be able to output a correct result. By using a Chernoff
bound, we can show that the random vertex partitioning of the input graph does not reveal too
many edge-directions together with the matching vertex IDs to a single machine. This sets the
stage for applying our generic lower bound theorem (Theorem 1) to obtain a lower bound on the
run time.
2.3.1 The Lower Bound Graph
In the remainder of this section we will prove Theorem 2. We consider the following directed
graph H (see Figure 1) of n vertices and m = n − 1 edges; for simplicity, we assume that m/4
is an integer. Let X = {x1 , x2 , . . . , xm/4 }, U = {u1 , u2 , . . . , um/4 }, T = {t1 , t2 , . . . , tm/4 }, and
V = {v1 , v2 , . . . , vm/4 }, and let V (G) = {X ∪ U ∪ T ∪ V ∪ {w}}. The edges between these vertices
are given as follows: For 1 ≤ i ≤ m/4, there is a directed edge ui → ti, a directed edge ti → vi, and a directed edge vi → w. The edges between ui and xi (these are the "important" edges mentioned above) are determined by a bit vector b of length m/4 where each entry bi of b is determined by a fair coin flip: If bi = 0 then there is an edge ui → xi, otherwise there is an edge xi → ui. Lemma 4 shows that, for any 1 ≤ i ≤ m/4 and for any ε < 1, there is a constant factor separation between the PageRank of any node vi if we switch the direction of the edge between xi and ui.
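For concreteness, the construction can be written down directly (a sketch following the description above; the vertex-naming scheme is ours):

import random

def build_lower_bound_graph(n, bits=None, seed=None):
    # n vertices, m = n - 1 edges; m/4 is assumed to be an integer as in the text.
    m = n - 1
    q = m // 4
    rng = random.Random(seed)
    if bits is None:
        bits = [rng.randint(0, 1) for _ in range(q)]          # the bit vector b
    edges = []
    for i in range(q):
        edges += [(f"u{i}", f"t{i}"), (f"t{i}", f"v{i}"), (f"v{i}", "w")]
        # The "important" edge between x_i and u_i, oriented by b_i.
        edges.append((f"u{i}", f"x{i}") if bits[i] == 0 else (f"x{i}", f"u{i}"))
    vertices = [f"{c}{i}" for c in "xutv" for i in range(q)] + ["w"]
    return vertices, edges, bits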
Lemma 4. The following holds for the PageRank value of vertices vi of G, for 1 ≤ i ≤ n/4: If bi = 0, then PageRank(vi) = ε(2.5 − 2ε + ε^2/2)/n. Otherwise, if bi = 1, then PageRank(vi) ≥ ε(3 − 3ε + ε^2)/n. For any ε < 1, there is a constant factor (where the constant depends on ε) separation between the two cases.
Proof. We will determine an estimate of PageRank(vi) using the distributed random walk approach described at the beginning of Section 3.1, whereby the expected number of random walk tokens addressed to one node, multiplied by ε/(cn log n), gives a high-probability estimate of the PageRank value of the node. The expected value of ψvi is
E[ψvi | bi = 0] = c log n (1 + (1 − ε) + (1 − ε)^2/2)
[Figure 1: The graph H used to derive a lower bound on the round complexity of PageRank computations.]
and
E[ψvi | bi = 1] = c log n (1 + (1 − ε) + (1 − ε)^2 + (1 − ε)^3).
Therefore,
PageRank(vi) = ε(2.5 − 2ε + ε^2/2)/n if bi = 0, and PageRank(vi) ≥ ε(3 − 3ε + ε^2)/n if bi = 1.
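As a consistency check on the reconstructed expressions (the reset probability ε is restored here from the surrounding derivation), multiplying the expected token count by ε/(cn log n) in the bi = 0 case gives
ε(1 + (1 − ε) + (1 − ε)^2/2)/n = ε(2.5 − 2ε + ε^2/2)/n,
since 1 + (1 − ε) + (1 − 2ε + ε^2)/2 = 2.5 − 2ε + ε^2/2; dropping the nonnegative (1 − ε)^3 term in the bi = 1 case yields the stated lower bound ε(3 − 3ε + ε^2)/n.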
The Input Graph Distribution. We now build our input graph G as follows. Let m = n − 1,
and let ID be the random variable representing a set of n unique integers chosen uniformly at
random from {S ⊂ [1, poly(n)] : |S| = n}. Assigning each vertex of H a unique integer from ID (in
an arbitrary predetermined way) yields a graph G. Let G denote the set of graphs G determined
by all possible (different) ID assignments to all possible instances of H considering all possible edge
directions. Let GP be the set of all input graph partitions (i.e., consists of all graphs in G and their
all possible input partitions) among the k machines and let RS be the set of all random strings
used by a given PageRank algorithm A. Let Bal ⊆ GP be the set of all input partitions where each
machine receives Θ̃(n/k) vertices of the input graph. Note that (p, r) ∈ GP × RS fully determines
the run of A. We assume that each machine Mi outputs a set {(π1 , id1 ), . . . , (π` , id` )}, where πj
refers to the PageRank value of the vertex with ID idj . Note that we neither make any assumptions
on which machine being the one that outputs the PageRank of a specific vertex v (which could be
a machine that holds no initial knowledge about v and its ID) nor on the individual sizes of these
output sets.
Discovering Weakly Connected Paths of Vertices. By the random vertex partitioning, each
machine Mi initially holds Θ̃(n/k) vertices in total. More specifically, Mi receives random sets
Xi ⊆ X, Ui ⊆ U , Ti ⊆ T , and Vi ⊆ V , each containing O(n log(n)/k) vertices (and their IDs). As
machine Mi also gets to know the incident edges of these vertices, Mi can locally check if a path
induced by some (xj1 , uj2 , tj3 , vj4 ) ∈ Xi × Ui × Ti × Vi is weakly connected, i.e., j1 = · · · = j4 . Since
Mi learns the output pair (PageRank(v), idv ) at zero cost, we upper bound the number of such
paths that the machines learn initially by using a Chernoff bound.
Lemma 5. With probability at least 1 − n^{−4}, the initial graph partition reveals at most O(n log n / k^2) weakly connected paths between vertices in X and V to every machine.
Proof. Throughout this proof, we consider a machine Mi . If a vertex is assigned to Mi , then
machine Mi knows its incident edges and the IDs of their endpoints. Therefore, Mi can discover a
weakly connected path (between X and V ) in one of the following ways: (1) Mi obtains xj ∈ X
and tj ∈ T; (2) Mi obtains uj ∈ U and vj ∈ V. The argument is similar in both cases and hence we focus on (1) for the rest of this proof. By the random vertex partition process, the probability that xj and tj both are assigned to machine Mi is 1/k^2. Since all vertices are assigned independently at random, a standard Chernoff bound shows that with high probability O(n log n/k^2) matching vertex pairs (xj, tj) are assigned to machine Mi. Taking a union bound over the k machines completes the proof.
Good Inputs. We define Good ⊆ Bal × RS to be the set of all (balanced) inputs and random
strings where (1) A correctly outputs the PageRank of each vertex, (2) partition p is “balanced”,
i.e., each machine is assigned O(n log n/k) vertices (and hence O(n log n/k) edges since m = O(n)),
and (3) the partitioning is such that each machine knows at most O((n log n)/k 2 ) weakly connected
paths initially; we define Bad = GP × RS \ Good.
Lemma 6. (A) For any (p, r) ∈ Good, algorithm A is correct and there must be at least one machine Mi whose output list contains at least Ω(n/k) vertices in V. (B) |Good| ≥ (1 − o(1/k) − n^{−Ω(1)})|GP × RS|.
Proof. Part (A) follows directly from the definition of Good. For (B), note that A succeeds with probability ≥ 1 − o(1/k). Moreover, the random vertex partitioning ensures that each machine receives Θ̃(n log(n)/k) vertices with probability ≥ 1 − n^{−4}. Hence, the above is true for at least a (1 − o(1/k) − n^{−4})-fraction of the possible graph partition and random string pairs in GP × RS.
For instantiating Theorem 1, we show in Lemma 7 and Lemma 8 that we can satisfy Premises (1) and (2) by setting IC = m/(4k) = Θ(n/k). Plugging the above value of IC into Equation (3) gives the claimed lower bound result.
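Explicitly, with IC = m/(4k) and m = n − 1, inequality (3) gives
T = Ω(IC/(Bk)) = Ω(m/(4Bk^2)) = Ω(n/(Bk^2)),
which, for B = Θ(polylog n), is the Ω̃(n/k^2) bound of Theorem 2.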
Lemma 7. Let Z be the random variable representing the set of pairs {(b1, v1), . . . , (b_{m/4}, v_{m/4})}, where bj refers to the direction of the edge (xj, uj) in the weakly connected path (xj, uj, tj, vj) of the input graph (cf. Figure 1). Then, for each (p, r) ∈ Good, where p = (p1, . . . , pk), and every possible choice z, it holds that Pr[Z = z | pi, r] ≤ 2^{−(m/4 − O(n log(n)/k^2))}.
Proof. Consider a (p, r) ∈ Good where p = (p1 , . . . , pi , . . . , pk ). We know by Lemma 6.(1), that
algorithm A correctly computes the PageRank and some machine (without loss of generality) Mi
outputs at least Ω(n/k) PageRank values.
By Lemma 4, we know that algorithm A can only correctly output PageRank(vj ) at machine
Mi if Mi knows the direction of the edge between uj and xj (from Lemma 4, since the direction
of the corresponding edge can be derived from the PageRank value). This means that if machine
Mi outputs the PageRank for vj as a pair (πj , vj ), then it can reconstruct the pair (bj , vj ), for any
1 6 j 6 m/4.
Since (p, r) ∈ Good, it follows by Lemma 5 that each machine Mi learns at most η =
O(n log(n)/k^2) output entries of V for free by inspecting its assigned input. In addition to these η
entries, Mi might know partial information about the remaining Ω(n) − η pairs.
It follows that, for each of the other weakly connected paths that are not concerned with its
η already known PageRank values, Mi either has initial knowledge of the index ` of the respective
vertex v` ∈ Vi , or it knows the edge direction b` between x` and u` , but not both. Notice that
knowledge of the vertex ID of v` reveals no additional information about the index ` since we choose
vertex IDs uniformly at random. We refer to these paths as being partially known to Mi .
It follows that, for each index j for which the path is partially known to Mi , there are two
possibilities (0, vj ) and (1, vj ), each of which is equally likely, according to the input distribution.
Therefore, taking into account the initial input assignment, we still have at least 2^{m/4 − O(n log(n)/k^2)} possible choices for z, i.e., the output of Mi concerning vertices in V, each of which is equally likely without conditioning on further knowledge. It follows that Pr[Z = z | pi, r] ≤ 2^{−(m/4 − O(n log(n)/k^2))}.
Lemma 8. For each (p, r) ∈ Good, where p = (p1, . . . , pk), there exists a machine Mi with output Ai(p, r) such that, for every choice of z of Z (defined in Lemma 7) that has nonzero probability conditioned on Ai(p, r), pi, r, it holds that Pr[Z = z | Ai(p, r), pi, r] ≥ 1/2^{m/4 − m/(4k)}.
Proof. From Lemma 6, we know that there is a machine Mi that outputs at least m/4k PageRank
values of vertices in V . Let λ be the total number of pairs (bj , vj ), where bj is the direction of the
edge (xj , uj ) in the weakly connected path (xj , uj , tj , vj ) (cf. Lemma 7) that remain unknown to
machine Mi conditioned on its input pi , random string r, and its output oi .
Observing that the size of its output oi is ≥ m/4k and the fact that we can recover the pair (bj, vj) if Mi outputs the PageRank of vj (see proof of Lemma 7), it follows that λ ≤ m/4 − m/(4k) and hence there are 2^{m/4 − m/(4k)} distinct choices for z. Notice the probability bound is minimized if each remaining possible choice for z is equally likely. This implies that Pr[Z = z | oi, pi, r] ≥ 1/2^{m/4 − m/(4k)}, as required.
2.4 A Lower Bound for Triangle Enumeration
We first give a high-level overview of the proof. The input graphs that we use for our lower bounds
are sampled according to the Gn,1/2 Erdös-Renyi random graph model. We will argue that
enumerating triangles implies a large reduction of the entropy of the characteristic vector of edges
Z, i.e., Z is a bit vector whose entries reflect the presence/absence of an edge in the input graph.
We prove that initially, the machines do not have significant knowledge of Z, which is equivalent
to having a small probability for the event {Z = z}, for any z. Moreover, we show that eventually
any machine that outputs t/k triangles, for a parameter t, must have reduced its uncertainty about
Z by ≈ (t/k)^{2/3} bits. In other words, the information obtained by such a machine throughout the
course of the algorithm is high. We apply Theorem 1 to obtain a lower bound on the run time of
the algorithm. This yields the following result.
Theorem 3. There is a class of graphs G of n nodes such that every distributed algorithm that solves triangle enumeration in the k-machine model has a time complexity of Ω(n^2/(B·k^{5/3})), assuming a link bandwidth of B bits per round, k = Ω(log n) machines, and an error probability of ε < o(k^{−2/3}). This holds even when the input graph is assigned to the machines via random vertex partitioning.
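To see where the exponent 5/3 comes from, Lemma 11 below shows that outputting t/k triangles forces an information cost of order (t/k)^{2/3}; with t = Θ(n^3) this is IC = Θ((n^3/k)^{2/3}) = Θ(n^2/k^{2/3}), and Theorem 1 then gives T = Ω(IC/(Bk)) = Ω(n^2/(Bk^{5/3})).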
The Input Graph Distribution. We choose our input graphs according to the Erdös-Renyi
random graph model Gn,1/2 , which samples an n-node graph where each possible edge is included
independently with probability 1/2. We use GP to denote the set of all possible partitions of all
possible sampled n-node graphs and, similarly to before, denote the set of all random strings used
by the algorithm by RS.
Let Z be the characteristic vector of the edges13 of the input graph G. Note that the execution
of A is fully determined by the given graph input partitioning p = (p1 , . . . , pk ) ∈ GP and the shared
(among all machines) random bit string r ∈ RS, where RS is the set of all possible strings that
are used as random bit string by the algorithm. Hence we have |GP × RS| possible outcomes when
running A on a graph sampled from G.
Good Inputs. We define Good ⊆ GP × RS to be the set of input pairs (p, r) such that (1) A
performs correctly for the graph partitioning p of graph G and the random string r, (2) partition
p is “balanced”, i.e., each machine is assigned O(n log(n)/k) vertices (and hence O(n² log(n)/k) edges), and (3) G has ≥ t triangles, for some fixed t = Θ(\binom{n}{3}).
Lemma 9 (Good Inputs). (A) For every (p, r) ∈ Good, at least one machine outputs ≥ t/k triangles when executing algorithm A with (p, r), and (B) |Good| ≥ (1 − ε₀)|GP × RS|, where ε₀ = ε + n^{−Ω(1)}.
Proof. Part (A) is immediate from the definition of Good. For (B), note that A succeeds with probability ≥ 1 − ε and the random vertex partitioning guarantees a balanced partition with probability ≥ 1 − n^{−4}. We know from Equation 4.10 in [35] that the number of triangles in an input graph G sampled from Gn,1/2 is Θ(\binom{n}{3}) with probability ≥ 1 − e^{−Ω(1)}, and hence the size of Good is at least a (1 − ε − n^{−Ω(1)})-fraction of |GP × RS|.
Lemma 10. Let random variable Z denote the characteristic vector of the edges of the sampled input graph G. For every (p, r) ∈ Good, where p = (p1, . . . , pk), and every characteristic edge vector z, it holds that Pr[Z = z | pi, r] ≤ 1/2^{\binom{n}{2} − O(n² log(n)/k)}, for every i ∈ [1, k].
Proof. For any (p, r) ∈ Good, every machine has knowledge of at most O(|E(G)| log n/k) = O(n² log n/k) edges initially (see Property (2) above). Consider any machine Mi. Since the random vertex partitioning (among machines) and the sampling of the input graph are independent, there are at least 2^{\binom{n}{2} − O(n² log n/k)} choices for the remaining edges, all of which are equally likely according to the random graph model, i.e., Pr[Z = z | pi, r] ≤ 2^{−(\binom{n}{2} − O(n² log(n)/k))}.
Lemma 11. Let (p, r) ∈ Good, where p = (p1, . . . , pk). There exists a machine Mi with output Ai(p, r) such that, for every edge vector z that has non-zero probability conditioned on Ai(p, r), pi, r, it holds that Pr[Z = z | Ai(p, r), pi, r] ≥ 1/2^{\binom{n}{2} − O(n² log(n)/k) − Ω((t/k)^{2/3})}.
Proof. By assumption (p, r) ∈ Good, which means that the machines output all t = Θ(\binom{n}{3}) triangles. Thus there is some machine Mi that outputs at least t/k triangles. We will bound from below
the number of edges known by machine Mi conditioned on its output and its input assignment.
13. The characteristic vector specifies the graph G. Order the \binom{n}{2} possible edges in some fixed ordering; if the jth edge in this ordering appears in G, then Zj = 1, otherwise Zj = 0.
Initially, Mi discovers t3 = t3(Pi) “local” triangles (for which it knows all 3 edges) by inspecting its assigned portion of the input graph given by Pi. Since we are restricting the inputs to be in Good, we know that the edges known to Mi are bounded by O(n² log n/k), and hence the number of triangles formed using these edges is

    t3 = O((n² log n/k)^{3/2}) = O(n³ log^{3/2}(n)/k^{3/2}).
We call a triangle λ undetermined w.r.t. Mi , if Mi is unaware of at least 1 edge of λ initially.
Formally, λ is undetermined if there are two input graphs G, G′ where λ exists in G but not in G′, and both graphs are compatible with the input pi assigned to machine Mi.
By the above, we have at least t/k − t3 undetermined triangles that are output by Mi. From Equation (10) in [63], we know that the number of distinct edges necessary for representing ℓ triangles is Ω(ℓ^{2/3}). This means that at least (t/k − t3)^{2/3} edges are required for representing the undetermined triangles of Mi. We can divide the undetermined triangles into two sets: one set T1 contains triangles that have a vertex allocated to Mi, and the other set T2 contains triangles that have no vertex allocated to Mi. Set T1 contributes |T1|/(n log n/k) unknown edges, since the number of vertices allocated to this machine is O(n log n/k), whereas T2 contributes (1/3)·|T2|^{2/3} unknown edges. These two sets of unknown edges might overlap, hence we need to consider the maximum over them, which can be shown to be Ω((t/k − t3)^{2/3}). Hence it is possible to recover Ω((t/k − t3)^{2/3})
edges from Mi ’s output that were unknown to Mi initially. Let η denote the number of unknown
edges of Z when Mi outputs its solution. Taking into account the initially known edges, we have
    η ≤ \binom{n}{2} − Ω((t/k − t3)^{2/3}) − O(n² log n/k) = \binom{n}{2} − Ω((t/k)^{2/3}) − O(n² log n/k)    (18)

possible edges that are unknown to Mi, since t3 = o(t/k). Since we have sampled the edges of the input graph following the Gn,1/2 random graph model, it follows that, for any z that has nonzero probability given Mi's output and initial assignment, Pr[Z = z | oi, pi, r] = 2^{−η}. The lemma follows by replacing η with the upper bound of (18).
Proof of Theorem 3. We are now ready to instantiate Theorem 1, where Z is the characteristic vector of edges as defined above. Note that Lemma 10 and Lemma 11 satisfy Premises (1) and (2). Note that Ω((t/k)^{2/3}) = Ω(n²/k^{2/3}). Setting IC = Θ(n²/k^{2/3}) completes the proof of Theorem 3.
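For readability, the arithmetic behind the final bound can be written out; this is a sketch that assumes Theorem 1 (stated earlier in the paper and not reproduced here) converts an information-cost bound IC into a round lower bound of order IC/(B·k):

    \Omega\!\left(\left(\tfrac{t}{k}\right)^{2/3}\right) = \Omega\!\left(\left(\tfrac{n^3}{k}\right)^{2/3}\right) = \Omega\!\left(\tfrac{n^2}{k^{2/3}}\right),
    \qquad
    T = \Omega\!\left(\tfrac{\mathrm{IC}}{B\,k}\right) = \Omega\!\left(\tfrac{n^2}{B\,k^{5/3}}\right).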
A tight lower bound in the congested clique. Our analysis extends in a straightforward
way to the congested clique model where, in a synchronous complete network of n machines, every
machine u receives exactly 1 input vertex of the input graph and gets to know all its incident edges.
Together with the deterministic upper bound of O(n^{1/3}) shown in [24], this implies the following:

Corollary 1. The round complexity of enumerating all triangles in the congested clique of n nodes with high probability of success is Ω(n^{1/3}/B), assuming a link bandwidth of B bits. This bound is tight up to logarithmic factors.
Message complexity lower bounds. We point out that it is possible to extend Theorem 1 to yield new message complexity bounds for algorithms that attain an efficient time complexity. We outline the high-level argument for triangle enumeration. Consider an algorithm with a run time bound of Theorem 3, i.e., T = Õ(n²/k^{5/3}), assuming a bandwidth of B = O(log n) bits. According to our model each machine can receive at most µ = Õ(n²/k^{2/3}) bits in total in T rounds. Lemma 10 tells us that every machine has very little initial knowledge about the t triangles in the graph given its initial graph assignment, when considering inputs chosen from Good. On the other hand, inspecting the proof of Lemma 11, we can observe that a machine Mj that outputs tj triangles needs to receive Ω̃(tj^{2/3}) bits of information. If we restrict the algorithm to terminate within T rounds, this means that each machine can output at most O(n³/k) triangles, as this requires µ = O((n³/k)^{2/3}) bits of information. This implies that the output per machine must be roughly balanced and every machine needs to receive Ω(µ) bits of information, yielding a message complexity of Ω̃(k · n²/k^{2/3}) = Ω̃(n² k^{1/3}). In particular, this rules out algorithms that aggregate all input information at a single machine (which would only require O(m) messages in total). From the above, we have the following.

Corollary 2 (Message Complexity Lower Bound). Consider an algorithm A that enumerates all triangles with high probability and terminates in Õ(n²/k^{5/3}) rounds. Then, the total message complexity in the k-machine model of A is Ω̃(n² k^{1/3}). For Õ(n^{1/3})-round algorithms in the congested clique, the message complexity is Ω̃(n^{7/3}).
3 Upper Bounds

3.1 An Almost Optimal Algorithm for PageRank Computation
In this section we present a simple distributed algorithm to compute the PageRank vector of an input graph in the k-machine model. This algorithm has a round complexity of Õ(n/k²), which significantly improves over the previously best known solution, an Õ(n/k)-round algorithm presented in [37].
We first recall the distributed random walk-based Monte-Carlo algorithm for computing PageRank, for a given reset probability ε, as described in [23]. This algorithm is designed and analyzed in the standard CONGEST distributed model, where each vertex of the graph executes the algorithm. The algorithm is as follows. Initially, each vertex creates c log n random walk tokens, where c = c(ε) is a parameter defined in [23] (c(ε) is inversely proportional to ε), which are then forwarded according to the following process: when a node u receives some random walk token ρ, it terminates the token with probability ε and, with probability 1 − ε, forwards it to a neighbor of u chosen uniformly at random. Each node v keeps a variable ψv, which counts the number of random walk tokens that were addressed to v (i.e., the total number of all random walks that visit v). Each node v then estimates its PageRank by computing εψv/(cn log n). It can be shown that this estimate gives a δ-approximation, for any constant δ > 0, to the PageRank value of each node v with high probability, and that this algorithm terminates in O(log n/ε) rounds with high probability [23].
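For intuition, the following short simulation sketches the sequential version of this token process and the resulting estimate. This is not the k-machine implementation; the normalization follows the estimate stated above, and the example graph and constants are arbitrary choices.

import math
import random
from collections import defaultdict

def pagerank_by_tokens(adj, eps=0.2, c=10):
    # adj: dict mapping each node to a non-empty list of neighbors.
    # Each node starts c*log(n) tokens; a token stops with probability eps,
    # otherwise it moves to a uniformly random neighbor.
    n = len(adj)
    tokens_per_node = int(c * math.log(n))
    visits = defaultdict(int)
    for v in adj:
        for _ in range(tokens_per_node):
            u = v
            while True:
                visits[u] += 1          # one more token addressed to u
                if random.random() < eps:
                    break               # token terminated
                u = random.choice(adj[u])
    # Estimate for v: eps * visits[v] / (c * n * log n), as in the text above.
    total = c * n * math.log(n)
    return {v: eps * visits[v] / total for v in adj}

# Example: an 8-node ring; the estimates should all be close to 1/8.
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(pagerank_by_tokens(ring))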
The key idea to obtain such a fast runtime is to send only the counts of the random walks, instead
of keeping track of the random walks from different sources. Clearly, only the number (i.e., count)
of the random walks visiting a node at any step is required to estimate the PageRank. Note that
a straightforward implementation of the above random walk-based algorithm might yield a suboptimal running time in the k-machine model. (In fact, applying the Conversion Theorem of [37]
to implement the above algorithm gives only Õ(n/k) time.) The main issue is that some machine
might receive too many random walks destined for the nodes in that machine. For example, during
some step of the random walk it might happen that n different walks are destined to different nodes
in the same machine, causing Ω(n) congestion at some machine leading to a Ω(n/k) bound. For
example, in a star-like topology, the center vertex c which resides at some machine M1 might need
to receive n random walks from its neighbors, hence causing a round complexity of Ω̃(n/k). In
the above example, since there is only one high degree vertex, we can get around this problem by
sending only the counts. However, the situation is less clear if Ω(n) tokens are destined for different
nodes in the same machine.
To avoid the above pitfalls, we describe an approach that directly exploits the k-machine model. On the one hand, our goal is to reduce the total amount of communication while, on the other hand, we need to ensure that the incurred message complexity is balanced across the available machines. This motivates us to treat vertices differently depending on how many tokens they hold. We say that a vertex u has low-load in iteration r if the machine that hosts u holds fewer than k tokens at u; otherwise, we say that u has high-load in iteration r. Note that, throughout the course of our algorithm, the value of tokens[v] depends on the topology of the input graph, and hence a vertex can change its status w.r.t. being a high-load or low-load vertex.
In our algorithm (Algorithm 1), each machine M stores an array tokens[u], which has an entry
for each vertex u hosted at M . Initially, we generate Θ(log n) tokens for each vertex which we
use as the initialization value of tokens. Then, we mimic the (parallel) random walk steps of [23]
by performing Θ(log(n)/ε) iterations where, in each iteration, each machine M first considers the
tokens stored for its low-load vertices. For each such token held at one of its vertices u, M uniformly
at random selects a neighboring vertex v and keeps track of how many tokens have chosen v in
a separate array α[v]. In particular, M also increments the same entry α[v] if v is chosen as
the destination for some token of a distinct low-load vertex w at M. Then, M sends a message ⟨α[v], dest: v⟩ for each v where α[v] is nonzero, which is then delivered to the destination machine using the random routing technique (cf. Lemma 15). This ensures that these messages can be delivered in Õ(n/k²) rounds in parallel.
We now describe how high-load vertices are processed, each of which can hold up to O(n log n)
tokens. To avoid potentially sending a large number of messages for a single high-load vertex u,
machine M considers the index set I of machines that host at least 1 neighbor of u. Then, for each
token of u, machine M samples an index from I and keeps track of these counts in an array β,
which has an entry for each machine in I. Finally, M generates one message of type ⟨β[j], src: u⟩, for each entry j where β[j] > 0, and sends this count message directly to the respective destination machine. We show that these messages can be delivered in Õ(n/k²) rounds by proving that, with high probability, each machine holds at most Õ(n/k²) high-load vertices in any given iteration of the algorithm.
Algorithm 1: Computing the PageRank with reset probability ε > 0. Code for machine Mi.
 1: Let Vi denote the vertices hosted by machine Mi.
 2: Initialize array tokens[u] ← ⌈c log n⌉, for u ∈ Vi, where c > 0 is a suitable constant.   ▷ tokens[u] represents the current number of tokens at vertex u.
 3: for Θ(log(n)/ε) iterations do
 4:     for u ∈ Vi do
 5:         sample t from distribution Binomial(tokens[u], ε)
 6:         tokens[u] ← tokens[u] − t   ▷ Terminate each token with probability ε
 7:
 8:     Initialize array α[v] ← 0, for each v ∈ V   ▷ Process the low-load vertices
 9:     for each vertex u ∈ Vi where tokens[u] < k do
10:         let Nu ⊆ V be the set of neighbors of vertex u
11:         while tokens[u] > 0 do
12:             sample v uniformly at random from Nu
13:             α[v] ← α[v] + 1
14:             tokens[u] ← tokens[u] − 1
15:     for each v ∈ V where α[v] > 0 do
16:         send message ⟨α[v], dest: v⟩ to the machine hosting vertex v using random routing
17:
18:     for each vertex u ∈ Vi where tokens[u] ≥ k do   ▷ Process the high-load vertices
19:         let I ⊆ [k] be the index set of the machines that host a neighbor of u
20:         initialize array β[j] ← 0, for each j ∈ I
21:         while tokens[u] > 0 do
22:             let n_{j,u} be the number of neighbors of u hosted at machine Mj, and let du be u's degree
23:             sample index j from distribution (n_{1,u}/du, . . . , n_{k,u}/du)   ▷ Note Σ_{j=1}^{k} n_{j,u} = du
24:             β[j] ← β[j] + 1
25:             tokens[u] ← tokens[u] − 1
26:         for each j ∈ I where β[j] > 0 do
27:             send message ⟨β[j], src: u⟩ to machine Mj
28:
29:     for each received message of type ⟨c_w, dest: w⟩ do
30:         tokens[w] ← tokens[w] + c_w
31:     for each received message of type ⟨c_v, src: v⟩ do
32:         while c_v > 0 do
33:             let N_v ⊆ V be the set of neighbors of v hosted at Mi
34:             sample w uniformly at random from N_v
35:             tokens[w] ← tokens[w] + 1
36:             c_v ← c_v − 1

Lemma 12. Algorithm 1 correctly computes the PageRank with high probability.

Proof. For the purpose of showing correctness, we ignore the potential issues caused by congestion, which we will focus on in the subsequent lemmas. [23] show that the random walk process, where each token is either terminated with probability ε or forwarded with probability 1 − ε to a neighbor chosen uniformly at random, approximates the PageRank of the graph. Thus it is sufficient to show that Algorithm 1 adheres to this random walk process.
Consider a node u and suppose that u holds ℓ tokens. If ℓ < k, then according to Lines 8-16, we increment the corresponding entry of array α[v], for some uniformly at random chosen neighbor v of u, and send a message ⟨c_v, dest: v⟩ to the machine M′ hosting v. Upon receiving the message, M′ increases its token count of v, as required.
Now, suppose that ℓ ≥ k and consider an arbitrary neighbor v of u, hosted on machine M′, and assume that M′ hosts n_u ≥ 1 neighbors of u in total. For any token of u, it follows from Line 23 that we choose machine M′ with probability n_u/d_u, where d_u is the degree of u in the graph. The algorithm then sends a message of type ⟨c_u, src: u⟩ to machine M′, where c_u is the number of tokens of u for which M′ was sampled as the destination machine. Upon processing this message in Lines 31-36, M′ delivers each token to its locally hosted neighbors of u uniformly at random, and hence a specific neighbor v receives a token with probability 1/n_u.
Combining these observations, we conclude that v receives a token with probability (n_u/d_u)·(1/n_u) = 1/d_u, conditioned on the token not having been terminated in Line 6 (which happens with probability ε), which corresponds to the random walk process of [23].
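As an illustration of the bookkeeping performed by each machine, the following sketch simulates the sending side of one iteration of Algorithm 1 from the point of view of a single machine; the random routing of the resulting messages (Lemma 15) and the receive phase are omitted, and all helper names are ours.

import random
from collections import defaultdict

def one_iteration(hosted, tokens, neighbors, host_of, k, eps):
    # hosted    : list of vertices hosted by this machine
    # tokens    : dict vertex -> current token count at this machine
    # neighbors : dict vertex -> list of neighbors (global adjacency)
    # host_of   : dict vertex -> index of the machine hosting that vertex
    # Returns (dest_msgs, src_msgs) mirroring the <alpha[v], dest: v> and
    # <beta[j], src: u> messages of Lines 16 and 27.

    # Lines 4-6: terminate each token independently with probability eps.
    for u in hosted:
        tokens[u] = sum(1 for _ in range(tokens[u]) if random.random() >= eps)

    # Lines 8-16: low-load vertices (< k tokens) address individual destination vertices.
    alpha = defaultdict(int)
    for u in hosted:
        if 0 < tokens[u] < k:
            while tokens[u] > 0:
                alpha[random.choice(neighbors[u])] += 1
                tokens[u] -= 1
    dest_msgs = [(cnt, "dest", v) for v, cnt in alpha.items()]

    # Lines 18-27: high-load vertices (>= k tokens) address whole machines; sampling a
    # uniform neighbor and keeping its hosting machine is the same as sampling machine j
    # with probability n_{j,u}/d_u.
    src_msgs = []
    for u in hosted:
        if tokens[u] >= k:
            beta = defaultdict(int)
            while tokens[u] > 0:
                beta[host_of[random.choice(neighbors[u])]] += 1
                tokens[u] -= 1
            src_msgs += [(cnt, "src", u, j) for j, cnt in beta.items()]
    return dest_msgs, src_msgs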
Lemma 13. Every machine Mi sends at most Õ(n/k) messages in any iteration r with high
probability.
Proof. First, we consider messages that Mi needs to send on behalf of its hosted high-load vertices.
Recall that each message generated in Line 27 due to a high-load vertex u, contains all tokens
addressed to its neighbors on a specific machine. Thus, we classify the high-load vertices into send
bins S0, S1, . . . , S_{⌈log k⌉}, according to the number of distinct messages that they require to be sent and, for each j, 0 ≤ j ≤ log₂ k, we define the bin

    S_j = { v ∈ V(G) : k/2^{j+1} ≤ tokens[v] ≤ k/2^j }.    (19)
As the algorithm combines the token messages sent by a high-load vertex v to its neighbors located on some machine M′ into a single message, the total number of messages generated for any high-load vertex in iteration r is at most k − 1, and hence every v is in some bin Sj.
Since Θ(log n) tokens are generated initially for each vertex, we have Θ(n log n) tokens in total, which implies that |Sj| ≤ 2^{j+1} n log n / k, for all j. By the random vertex partitioning, we know that a machine Mi receives Õ(|Sj|/k) vertices from Sj with probability ≥ 1 − n^{−4}; we denote this set by S_{i,j}. Taking a union bound over the iterations of the algorithm (assuming a constant reset probability ε), the log₂ k distinct bins, and over the k machines, it follows that
    ∀ Mi, ∀ j, 0 ≤ j ≤ log₂ k :  |S_{i,j}| ≤ 2^{j+1} n log n / k²,    (20)

with probability ≥ 1 − n^{−2}. According to (19), each vertex in bin Sj can hold up to k/2^j tokens, and hence (20) tells us that the total number of messages produced by vertices in Sj on machine Mi is

    O(|S_{i,j}| · k/2^j) = Õ((2^{j+1} n log n / k²) · (k/2^j)) = Õ(n log n / k),

for any 0 ≤ j ≤ log k. Since we have Θ(log k) bins, the total number of messages generated on machine Mi for its high-load vertices is Õ(n/k) · Θ(log k) = Õ(n/k).
Now, consider the low-load vertices at Mi. For the sake of our analysis, we make the pessimistic assumption that all destinations chosen by the sampling process of the low-load vertices (cf. Lines 8-16) are hosted on distinct machines. Since, by definition, low-load vertices hold at most k − 1 tokens each, we can apply the same bin-classification argument as for the high-load vertices to bound the total number of messages that Mi needs to generate by Õ(n/k). The lemma follows by taking a union bound over the O(log(n)/ε) rounds of the algorithm.
Lemma 14. Consider any iteration r of Algorithm 1. Then, with high probability all messages generated can be delivered in Õ(n/k²) rounds.
Proof. We first argue that each machine needs to receive at most Õ(n/k) messages that were generated due to low-load vertices in Line 16, which, according to the random routing result, can be delivered in Õ(n/k²) rounds. To this end, we proceed similarly to the analysis in Lemma 13. That is, we define receive bins R0, R1, . . . , R_{⌈log k⌉} for iteration r, where R_j = { v ∈ V(G) : k/2^{j+1} ≤ λ_v ≤ k/2^j } and λ_v is the random variable that counts the number of tokens from low-load vertices received by v in iteration r. Using the properties of the random vertex partitioning, we can show that each machine holds Õ(|R_j|/k) vertices from R_j with probability ≥ 1 − n^{−2}, and hence the total number of messages that each machine needs to receive (over all receive bins) is Õ(n/k); all of these messages can therefore be delivered in Õ(n/k²) rounds due to Lemma 15. Finally, it is shown in [23] that all tokens are terminated in O(log n/ε) steps and thus, assuming that ε > 0 is a small constant, the claim follows by a union bound over the iterations of the algorithm.
We now focus on high-load vertices. Given that there are O(n log n) tokens in total, the number of high-load vertices in iteration r is at most O(n log(n)/k), since by definition each of them needs to hold at least k tokens. Thus, by the random vertex partitioning, we can show that each machine hosts at most O(n log(n)/k²) of these nodes with high probability, and hence sending these messages directly to the respective destination machines requires Õ(n/k²) rounds.
As for the low-load vertices, since the intermediate machines (which then forward the messages to the actual receivers) are chosen uniformly at random, by Lemma 15 all the messages are delivered in Õ(n/k²) rounds.
From Lemma 14 we conclude that all messages generated in a single iteration of Algorithm 1 can be delivered in Õ(n/k²) rounds with high probability. A union bound implies the following result.

Theorem 4. There is a distributed algorithm for the k-machine model that computes a δ-approximation of the PageRanks of an n-node graph with high probability in Õ(n/k²) rounds, for any constant δ.
3.2 An Almost Optimal Algorithm for Triangle Enumeration

In this section we present a randomized algorithm that enumerates all the triangles of an input graph G = (V, E), and that terminates in Õ(m/k^{5/3} + n/k^{4/3}) rounds w.h.p. It is important to notice that this bound fails to match the (existential) Ω̃(m/k^{5/3}) lower bound provided in Section 2 only for very sparse graphs.
Our algorithm is a generalization of the algorithm TriPartition of Dolev et al. for the congested clique model [24], with some crucial differences explained below. The key idea, which in its generality can be traced back to [2], is to partition the set V of nodes of G into k^{1/3} subsets of n/k^{1/3} nodes each, and to have each of the k machines examine the edges between pairs of subsets of one of the (k^{1/3})³ = k possible triplets of subsets (repetitions are allowed).
Our algorithm is as follows. Each node picks independently and uniformly at random one color
from a set C of k^{1/3} distinct colors through a hash function h : V → C initially known by all the machines. This gives rise to a color-based partition of the vertex set V into k^{1/3} subsets of Õ(n/k^{1/3})
nodes each, w.h.p. A deterministic assignment of triplets of colors, hard-coded into the algorithm,
logically assigns each of the k possible triplets of such subsets to one distinct machine. Each
machine then collects all the edges between pairs of subsets in its triplet. This is accomplished in
two steps: (1) For each of the edges it holds, each machine designates one random machine (among
the k machines) as the edge proxy for that edge, and sends all its edges to the respective edge
proxies. The designation of an edge itself is done by the following proxy assignment rule (this is
necessary to avoid congestion at any one machine): A machine that has a node v whose degree is
at least 2k log n requests all other machines to designate the respective edge proxies for each of the
incident edges of node v. If two machines request each other to designate the same edge (since their
endpoints are hosted by the respective machines), then such a tie is broken randomly. (2) In the
second step, all the machines collect their required edges from the respective proxies: since each
edge proxy machine knows the hash function h as well as the deterministic assignment of triplets,
it can send each edge to the machines where it is needed. Then, each machine simply enumerates
all the triangles in its local subgraph.
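The color-based partition and the resulting edge replication can be sketched as follows. The hash function and the enumeration of triplets are illustrative choices (any deterministic assignment known to all machines would do), not the ones prescribed by the algorithm.

import hashlib
from itertools import product

def color_of(v, num_colors):
    # Illustrative hash h : V -> C shared by all machines.
    return int(hashlib.sha256(str(v).encode()).hexdigest(), 16) % num_colors

def build_triplet_table(k):
    # (k^{1/3})^3 = k color triplets, one per machine, hard-coded and known to everyone.
    num_colors = round(k ** (1 / 3))
    return list(product(range(num_colors), repeat=3)), num_colors

def machines_needing_edge(a, b, k):
    # Machines that must receive edge (a, b): those whose triplet contains
    # the colors of both endpoints.
    triplets, num_colors = build_triplet_table(k)
    ca, cb = color_of(a, num_colors), color_of(b, num_colors)
    return [i for i, trip in enumerate(triplets) if ca in trip and cb in trip]

print(machines_needing_edge("u17", "v42", k=27))   # k = 27 machines, 3 colors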
Our algorithm differs from the one in [24] in the way the k^{1/3} subsets of vertices are constructed, in the use of proxy computation, and in the routing of messages, which in our algorithm is randomized and hence requires a more involved analysis, allowing for a better time complexity for graphs where the number of edges m is o(n²).
We now argue that the above algorithm correctly enumerates all the triangles of an input graph G (initially stored according to the RVP), and analyze its round complexity. To this end, we need to bound the number of edges in the subgraph induced in G by a triplet (which consists of three disjoint k^{1/3}-size subsets as partitioned earlier). Since the number of edges between pairs of subsets in the triplet is less than or equal to the number of edges of the subgraph induced by the triplet, the claimed bound also applies to the former quantity. We point out that this is roughly equivalent to bounding the number of edges in the subgraph induced by a set of random nodes of a graph, a general quantity interesting in its own right. First notice that in order to show concentration on the number of edges of the subgraph induced by each triplet, we cannot simply apply a Chernoff bound, since edges are not independently distributed. Also, mimicking the argument for the proof of Lemma 4.1 in [37] would give us only an Õ(m/k^{1/3}) upper bound, since we would be overcounting edges (as we would also be counting those edges with just one endpoint in the given machine) and the application of Bernstein's inequality would require independence of the random variables. Instead, we shall use the following result due to Rödl and Ruciński [64].14
Proposition 1 ([64, Proposition 1]). Let, for a graph G = (V, E), m < ηn², and let R be a random subset of V of size |R| = t such that t ≥ 1/(3η). Let e(G[R]) denote the number of edges in the subgraph induced by R. Then,

    Pr[ e(G[R]) > 3ηt² ] < t · e^{−ct}

for some c > 0.
We will also need one general property of the k-machine model that will be a key ingredient for
the analysis of our algorithms. Specifically, we show how fast some specific routing can be done in
the k-machine model.
Lemma 15. Consider a complete network of k machines, where each link can carry one message of
O(polylog n) bits at each round. If each machine is source of O(x) messages whose destinations are
distributed independently and uniformly at random, or each machine is destination of O(x) messages
whose sources are distributed independently and uniformly at random, then all the messages can be
routed in O((x log x)/k) rounds w.h.p.
14. A careful inspection of the argument used by Rödl and Ruciński to establish this result reveals that the additional condition t ≥ 1/(3η), missing from their statement, is necessary for the result to hold. In fact, as stated, their result is implicitly assuming that both n and t grow to infinity [65].
Proof. We shall prove the statement for the case in which each machine is source of O(x) messages.
The other case and its analysis are symmetric.
Since destinations of messages are chosen randomly, we choose to route each message to its
(random) destination machine through the link that directly connects the source to the destination
machine (which always exists because the network is complete). By a classic balls-into-bins result,
each of the k − 1 links of each machine is responsible for carrying O((x log x)/k) messages w.h.p.,
and the result follows.
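The load bound of Lemma 15 is easy to check empirically; the following throwaway simulation (with arbitrary parameters) reports the worst per-link load when x messages from one machine pick uniformly random destinations among the other k − 1 machines.

import random
from collections import Counter

def max_link_load(x, k, trials=20):
    # Each of x messages picks one of the k-1 outgoing links uniformly at random;
    # return the largest per-link load seen over a few trials.
    worst = 0
    for _ in range(trials):
        loads = Counter(random.randrange(k - 1) for _ in range(x))
        worst = max(worst, max(loads.values()))
    return worst

# With x >> k the maximum load per link stays close to x/(k-1), i.e. within the
# O((x log x)/k) bound of Lemma 15.
print(max_link_load(x=100_000, k=100))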
Theorem 5. There is a distributed algorithm for the k-machine model that enumerates all the triangles of an m-edge graph in Õ(m/k^{5/3} + n/k^{4/3}) rounds with high probability.
Proof. Since there are (k^{1/3})³ = k possible triplets of non-intersecting subsets of n/k^{1/3} nodes, all possible triangles are actually examined, and this proves the correctness of the algorithm.
We now argue that the algorithm terminates in Õ(m/k^{5/3} + n/k^{4/3}) rounds w.h.p. As part of the argument used to prove Lemma 4.1 of [37] it is shown that every machine initially stores Õ(m/k + ∆) edges, where ∆ is the maximum degree of the graph. If we apply Lemma 15 directly, the communication phase that assigns the edges to their random proxies takes Õ(m/k² + ∆/k) rounds w.h.p. We now show that the proxy assignment rule allows us to show an Õ(m/k^{5/3}) bound for this phase for every non-sparse graph.
Clearly, by the random proxy assignment, each machine receives only Õ(m/k) messages. We next argue that each machine is responsible for designating only Õ(m/k) edges w.h.p. Then, by Lemma 15, the time to send all the designation messages is Õ(m/k²) rounds.
For the sake of the analysis, we partition the non-isolated nodes of the input graph into log n sets, based on their degree: the i-th set contains all the nodes whose degree is in [∆/2^{i+1}, ∆/2^i), 0 ≤ i ≤ log n − 1. We now focus on the number of messages sent by some machine M. By a standard Chernoff bound, a node v_i with degree d_i in the i-th set has Õ(d_i/k) neighbors in M w.h.p. If n_i is the number of nodes in the i-th set, then the total number of neighbors (and hence messages) that M will send with respect to nodes in this set is Õ(n_i d_i/k) w.h.p. Summing over all the log n sets, we have that the total number of messages sent by M is Σ_{i=0}^{log n − 1} Õ(n_i d_i/k) = Õ(m/k) w.h.p. (via the union bound). Applying a union bound over all the machines, we have that the same bound holds for every machine.
The above argument does not take into account the messages sent by a machine initially to
request designation of an edge. A machine needs one round (to broadcast to all the other machines)
to request such a designation. If any machine M sends f > k polylog n requests, that means that
it has f nodes with degree at least 2k log n. By the RVP, this implies that with high probability
the total number of nodes with degree at least 2k log n is at least Ω(f k). Hence the number of
edges in the graph is m = Ω̃(f k²). Therefore the number of rounds needed for broadcast, Õ(f), is subsumed by Õ(m/k^{5/3}).
Next we analyze the re-routing of each edge e from its edge proxy to all the machines that are assigned a copy of both of the endpoints of e. Observe that any two nodes, and therefore any edge, can be held by at most k^{1/3} different machines: consider an edge (a, b), and pick one machine M that has to receive it because among its three subsets of nodes, one (call it A) contains a and one (call it B) contains b. Edge (a, b) can be assigned only to those machines which contain both subsets A and B, and there are only k^{1/3} − 1 such machines in addition to M. Hence, re-routing the edges entails mk^{1/3} messages to be traveling across the network.15

15. Notice that each node is replicated k^{2/3} times in the system, and therefore each edge is replicated k^{4/3} times; however, we only need to re-route copies of edges that are internal to the triplets, and therefore copies of edges that have one endpoint in one triplet and the other endpoint in a different triplet need not be communicated. Hence, the total number of edges to be communicated is mk^{1/3} and not mk^{2/3}.

We first bound the
number of edges received by each machine. Fix one machine M. We shall apply Proposition 1 with t = dn log n/k^{1/3} for some positive constant d. We have two cases. If m ≥ nk^{1/3}/(6d log n) then m ≥ n²/(6t), which in turn implies 2m/n² ≥ 1/(3t), and thus we can apply Proposition 1 with η = 2m/n², obtaining, for machine M,

    Pr[ e(G[R]) > 3 · (2m/n²) · (dn log n/k^{1/3})² ] < t · e^{−cdn log n/k^{1/3}},

that is, since k ≤ n,

    Pr[ e(G[R]) ≤ 6d² m log² n / k^{2/3} ] ≥ 1 − e^{−Ω(log n)}.

Hence we can apply Lemma 15 with x = Õ(m/k^{2/3}), which yields a round complexity of Õ(m/k^{5/3}) w.h.p. Now observe that each proxy has to send Õ(m/k^{2/3}) edges. We can apply Lemma 15 with x = Õ(m/k^{2/3}), which implies that the number of rounds needed for the proxies to send their edges is Õ(m/k^{5/3}) w.h.p., completing the analysis for the case m ≥ nk^{1/3}/(6d log n).
On the other hand, if m < nk^{1/3}/(6d log n) we shall apply Proposition 1 with η = 1/(3t) = k^{1/3}/(3dn log n), obtaining

    Pr[ e(G[R]) > 3 · (k^{1/3}/(3dn log n)) · (dn log n/k^{1/3})² ] < t · e^{−cdn log n/k^{1/3}},

that is, since k ≤ n,

    Pr[ e(G[R]) ≤ dn log n / k^{1/3} ] ≥ 1 − e^{−Ω(log n)}.

Hence in this case we can apply Lemma 15 as before, but now with x = Õ(n/k^{1/3}). The theorem follows.
4 Conclusions

We presented a general technique for showing lower bounds on the round complexity in the distributed message-passing computation model, and showed its application to two graph problems — PageRank and triangle enumeration. We also presented almost optimal algorithms for these problems which
and triangle enumeration. We also presented almost optimal algorithms for these problems which
can be efficiently implemented in practice. Our lower bound technique works by relating the size of
the output to the number of communication rounds needed and can be used to show lower bounds
for other problems where the output size is large (significantly more than the number of machines)
such as sorting, matrix multiplication, shortest paths, maximum matching, clustering, and densest
subgraph.
References
[1] Giraph, http://giraph.apache.org/.
[2] F. N. Afrati and J. D. Ullman. Optimizing multiway joins in a Map-Reduce environment.
IEEE Trans. Knowl. Data Eng., 23(9):1282–1298, 2011.
[3] A. Andoni, A. Nikolov, K. Onak, and G. Yaroslavtsev. Parallel algorithms for geometric graph
problems. In Proceedings of the 46th ACM Symposium on Theory of Computing (STOC),
pages 574–583, 2014.
[4] K. Avrachenkov, N. Litvak, D. Nemirovsky, and N. Osipova. Monte carlo methods in pagerank
computation: When one iteration is sufficient. SIAM J. Numer. Anal., 45(2):890–904, 2007.
[5] B. Bahmani, K. Chakrabarti, and D. Xin. Fast personalized pagerank on mapreduce. In Proc.
of ACM SIGMOD Conference, pages 973–984, 2011.
[6] B. Bahmani, A. Chowdhury, and A. Goel. Fast incremental and personalized pagerank.
PVLDB, 4:173–184, 2010.
[7] S. Bandyapadhyay, T. Inamdar, S. Pai, and S. V. Pemmaraju. Near-optimal clustering in
the k-machine model. In Proceedings of the 19th International Conference on Distributed
Computing and Networking (ICDCN), 2018. To appear.
[8] Z. Bar-Yossef, T. S. Jayram, R. Kumar, and D. Sivakumar. An information statistics approach
to data stream and communication complexity. J. Comput. Syst. Sci., 68(4):702–732, 2004.
[9] P. Beame, P. Koutris, and D. Suciu. Skew in parallel query processing. In Proceedings of
the 33rd ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems
(PODS), pages 212–223, 2014.
[10] P. Berkhin. A survey on pagerank computing. Internet Mathematics, 2(1):73–120, 2005.
[11] J. W. Berry, L. A. Fostvedt, D. J. Nordman, C. A. Phillips, C. Seshadhri, and A. G. Wilson. Why do simple algorithms for triangle enumeration work in the real world? Internet
Mathematics, 11(6):555–571, 2015.
[12] J. W. Berry, B. Hendrickson, R. A. LaViolette, and C. A. Phillips. Tolerating the community
detection resolution limit with edge weighting. Physical Review E, 83(5), 2011.
[13] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. In Proceedings of the 7th International World-Wide Web Conference (WWW), pages 107–117, 1998.
[14] K. Censor-Hillel, P. Kaski, J. H. Korhonen, C. Lenzen, A. Paz, and J. Suomela. Algebraic
methods in the congested clique. In Proceedings of the 34th ACM Symposium on Principles of
Distributed Computing (PODC), pages 143–152, 2015.
[15] A. Chattopadhyay, J. Radhakrishnan, and A. Rudra. Topology matters in communication.
Electronic Colloquium on Computational Complexity (ECCC), 21:74, 2014.
[16] A. Chattopadhyay, J. Radhakrishnan, and A. Rudra. Topology matters in communication.
In Proceedings of the 55th IEEE Annual Symposium on Foundations of Computer Science
(FOCS), pages 631–640, 2014.
[17] A. Ching, S. Edunov, M. Kabiljo, D. Logothetis, and S. Muthukrishnan. One trillion edges:
Graph processing at facebook-scale. PVLDB, 8(12):1804–1815, 2015.
[18] S. Chu and J. Cheng. Triangle listing in massive networks. ACM Trans. Knowl. Discov. Data,
6(4):17, 2012.
[19] F. Chung and O. Simpson. Distributed algorithms for finding local clusters using heat kernel
pagerank. In Proceedings of the 12th Workshop on Algorithms and Models for the Web-graph
(WAW), pages 77–189, 2015.
[20] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, 2006.
[21] A. Das Sarma, S. Gollapudi, and R. Panigrahy. Estimating pagerank on graph streams. J.
ACM, 58(3):13, 2011.
[22] A. Das Sarma, S. Holzer, L. Kor, A. Korman, D. Nanongkai, G. Pandurangan, D. Peleg, and
R. Wattenhofer. Distributed verification and hardness of distributed approximation. SIAM J.
Comput., 41(5):1235–1265, 2012.
[23] A. Das Sarma, A. R. Molla, G. Pandurangan, and E. Upfal. Fast distributed PageRank
computation. Theor. Comput. Sci., 561:113–121, 2015.
[24] D. Dolev, C. Lenzen, and S. Peled. “Tri, tri again”: Finding triangles and small subgraphs
in a distributed setting. In Proceedings of the 26th International Symposium on Distributed
Computing (DISC), pages 195–209, 2012.
[25] A. Drucker, F. Kuhn, and R. Oshman. On the power of the congested clique model. In
Proceedings of the 33rd ACM Symposium on Principles of Distributed Computing (PODC),
pages 367–376, 2014.
[26] M. Elkin, H. Klauck, D. Nanongkai, and G. Pandurangan. Can quantum communication speed
up distributed computation? In Proceedings of the 33rd ACM Symposium on Principles of
Distributed Computing (PODC), pages 166–175, 2014.
[27] X. Feng, L. Chang, X. Lin, L. Qin, and W. Zhang. Computing connected components with linear communication cost in pregel-like systems. In Proceedings of the 32nd IEEE International
Conference on Data Engineering (ICDE), pages 85–96, 2016.
[28] B. Foucault Welles, A. Van Devender, and N. Contractor. Is a friend a friend?: Investigating
the structure of friendship networks in virtual worlds. In Proceedings of the 28th International
Conference on Human Factors in Computing Systems, pages 4027–4032, 2010.
[29] I. Fudos and C. M. Hoffmann. A graph-constructive approach to solving systems of geometric
constraints. ACM Trans. Graph., 16(2):179–216, 1997.
[30] M. Ghaffari and M. Parter. MST in log-star rounds of congested clique. In Proceedings of the
2016 ACM Symposium on Principles of Distributed Computing (PODC), pages 19–28, 2016.
[31] T. Guo, X. Cao, G. Cong, J. Lu, and X. Lin. Distributed algorithms on exact personalized
pagerank. In Proceedings of the 2017 ACM International Conference on Management of Data
(SIGMOD), pages 479–494, New York, NY, USA, 2017. ACM.
[32] B. Haeupler, G. Pandurangan, D. Peleg, R. Rajaraman, and Z. Sun. Discovery through gossip.
Random Struct. Algorithms, 48(3):565–587, 2016.
[33] J. W. Hegeman, G. Pandurangan, S. V. Pemmaraju, V. B. Sardeshmukh, and M. Scquizzato.
Toward optimal bounds in the congested clique: Graph connectivity and MST. In Proceedings
of the 34th ACM Symposium on Principles of Distributed Computing (PODC), pages 91–100,
2015.
[34] T. Izumi and F. L. Gall. Triangle finding and listing in CONGEST networks. In Proceedings of
the ACM Symposium on Principles of Distributed Computing (PODC), pages 381–389, 2017.
[35] S. Janson. Large deviations for sums of partly dependent random variables. Random Struct.
Algorithms, 24(3):234–248, 2004.
[36] H. J. Karloff, S. Suri, and S. Vassilvitskii. A model of computation for MapReduce. In
Proceedings of the 21st annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages
938–948, 2010.
[37] H. Klauck, D. Nanongkai, G. Pandurangan, and P. Robinson. Distributed computation of
large-scale graph problems. In Proceedings of the 26th Annual ACM-SIAM Symposium on
Discrete Algorithms (SODA), pages 391–410, 2015.
[38] P. Koutris, P. Beame, and D. Suciu. Worst-case optimal algorithms for parallel query processing. In Proceedings of the 19th International Conference on Database Theory (ICDT), pages
8:1–8:18, 2016.
[39] E. Kushilevitz and N. Nisan. Communication Complexity. Cambridge University Press, 1997.
[40] L. Lai, L. Qin, X. Lin, Y. Zhang, and L. Chang. Scalable distributed subgraph enumeration.
PVLDB, 10(3):217–228, 2016.
[41] A. N. Langville and C. D. Meyer. Survey: Deeper inside pagerank. Internet Mathematics,
1(3):335–380, 2003.
[42] S. Lattanzi, B. Moseley, S. Suri, and S. Vassilvitskii. Filtering: a method for solving graph
problems in MapReduce. In Proceedings of the 23rd ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pages 85–94, 2011.
[43] C. Lenzen. Optimal deterministic routing and sorting on the congested clique. In Proceedings
of the 32nd ACM Symposium on Principles of Distributed Computing (PODC), pages 42–50,
2013.
[44] C. Lenzen and R. Wattenhofer. Tight bounds for parallel randomized load balancing. In
Proceedings of the 43rd ACM Symposium on Theory of Computing (STOC), pages 11–20,
2011.
[45] J. Leskovec, A. Rajaraman, and J. D. Ullman. Mining of Massive Datasets. Cambridge
University Press, 2014.
[46] J. Lin and C. Dyer. Data-Intensive Text Processing with MapReduce. Morgan and Claypool
Publishers, 2010.
[47] Z. Lotker, B. Patt-Shamir, E. Pavlov, and D. Peleg. Minimum-weight spanning tree construction in O(log log n) communication rounds. SIAM J. Comput., 35(1):120–131, 2005.
[48] N. A. Lynch. Distributed Algorithms. Morgan Kaufmann Publishers Inc., 1996.
[49] G. Malewicz, M. H. Austern, A. J. C. Bik, J. C. Dehnert, I. Horn, N. Leiser, and G. Czajkowski. Pregel: a system for large-scale graph processing. In Proceedings of the 2010 ACM
International Conference on Management of Data (SIGMOD), pages 135–146, 2010.
[50] D. Nanongkai. Distributed approximation algorithms for weighted shortest paths. In Proceedings of the 46th ACM Symposium on Theory of Computing (STOC), pages 565–573, 2014.
[51] D. Nanongkai, A. D. Sarma, and G. Pandurangan. A tight unconditional lower bound on
distributed random walk computation. In Proceedings of the 30th Annual ACM Symposium on
Principles of Distributed Computing (PODC), pages 257–266, 2011.
[52] H. Q. Ngo, C. Ré, and A. Rudra. Skew strikes back: new developments in the theory of join
algorithms. SIGMOD Record, 42(4):5–16, 2013.
[53] R. Oshman. Communication complexity lower bounds in distributed message-passing. In Proceedings of the 21st International Colloquium on Structural Information and Communication
Complexity (SIROCCO), pages 14–17, 2014.
[54] L. Page, S. Brin, R. Motwani, and T. Winograd. The pagerank citation ranking: Bringing
order to the web. Technical report, Stanford InfoLab, 1999.
[55] G. Pandurangan, D. Peleg, and M. Scquizzato. Message lower bounds via efficient network
synchronization. In Proceedings of the 23rd International Colloquium on Structural Information
and Communication Complexity (SIROCCO), pages 75–91, 2016.
[56] G. Pandurangan, P. Robinson, and M. Scquizzato. Fast distributed algorithms for connectivity
and MST in large graphs. In Proceedings of the 28th ACM Symposium on Parallelism in
Algorithms and Architectures (SPAA), pages 429–438, 2016.
[57] G. Pandurangan, P. Robinson, and M. Scquizzato. Tight bounds for distributed graph computations. CoRR, abs/1602.08481, 2016.
[58] D. Peleg. Distributed Computing: A Locality-Sensitive Approach. Society for Industrial and
Applied Mathematics, 2000.
[59] D. Peleg and V. Rubinovich. A near-tight lower bound on the time complexity of distributed
minimum-weight spanning tree construction. SIAM J. Comput., 30(5):1427–1442, 2000.
[60] J. Qiu, S. Jha, A. Luckow, and G. C. Fox. Towards hpc-abds: An initial high-performance big
data stack, in building robust big data ecosystem. 2014.
[61] D. A. Reed and J. Dongarra. Exascale computing and big data. Commun. ACM, 58(7):56–68,
2015.
[62] F. M. Reza. An introduction to information theory. Courier Corporation, 1961.
[63] I. Rivin. Counting cycles and finite dimensional Lp norms. arXiv:math/0111106, 2001.
[64] V. Rödl and A. Ruciński. Random graphs with monochromatic triangles in every edge coloring.
Random Struct. Algorithms, 5(2):253–270, 1994.
[65] A. Ruciński. Personal communication, 2017.
[66] C. Scheideler. Universal Routing Strategies for Interconnection Networks, volume 1390 of
Lecture Notes in Computer Science. Springer, 1998.
[67] S. Suri and S. Vassilvitskii. Counting triangles and the curse of the last reducer. In Proceedings
of the 20th International Conference on World Wide Web (WWW), pages 607–614, 2011.
[68] L. G. Valiant. A scheme for fast parallel communication. SIAM J. Comput., 11(2):350–361,
1982.
[69] L. G. Valiant. A bridging model for parallel computation. Commun. ACM, 33(8):103–111,
1990.
[70] S. Vassilvitskii. Models for parallel computation (a hitchhikers’ guide to massively parallel
universes), http://grigory.us/blog/massively-parallel-universes/, 2015.
[71] N. Wang, J. Zhang, K.-L. Tan, and A. K. H. Tung. On triangulation-based dense neighborhood
graph discovery. Proc. VLDB Endow., 4(2):58–68, 2010.
[72] S. Wasserman and K. Faust. Social Network Analysis: Methods and Applications. Cambridge
University Press, 1994.
[73] D. J. Watts and S. H. Strogatz. Collective dynamics of ‘small-world’ networks. Nature,
393:440–442, 1998.
[74] D. P. Woodruff and Q. Zhang. When distributed computation is communication expensive.
Distrib. Comput., 30(5):309–323, 2017.
| 3 |
Ordinary Differential Equation Methods
For Markov Decision Processes
and Application to Kullback–Leibler Control Cost∗
arXiv:1605.04591v2 [math.OC] 22 Oct 2016
Ana Bušić†
Sean Meyn‡
January 15, 2018
Abstract
A new approach to computation of optimal policies for MDP (Markov decision process) models
is introduced. The main idea is to solve not one, but an entire family of MDPs, parameterized
by a scalar ζ that appears in the one-step reward function. For an MDP with d states, the family
of value functions {h∗ζ : ζ ∈ R} is the solution to an ODE,
d ∗
dζ hζ
= V(h∗ζ )
where the vector field V : Rd → Rd has a simple form, based on a matrix inverse.
This general methodology is applied to a family of average-cost optimal control models in
which the one-step reward function is defined by Kullback-Leibler divergence. The motivation
for this reward function in prior work is computation: The solution to the MDP can be expressed
in terms of the Perron-Frobenius eigenvector for an associated positive matrix. The drawback
with this approach is that no hard constraints on the control are permitted. It is shown here that
it is possible to extend this framework to model randomness from nature that cannot be modified by the controller. Perron-Frobenius theory is no longer applicable – the resulting dynamic
programming equations appear as complex as a completely unstructured MDP model. Despite
this apparent complexity, it is shown that this class of MDPs admits a solution via this ODE
technique. This approach is new and practical even for the simpler problem in which randomness
from nature is absent.
Keywords: Markov decision processes, Computational methods, Distributed control.
AMS Subject Classifications: 90C40, 93E20, 60J22, 93E35, 60J20, 90C46
∗ Research supported by French National Research Agency grant ANR-12-MONU-0019, and NSF grants 1609131 and CPS-1259040.
† Inria and the Computer Science Department of École Normale Supérieure, Paris, France ([email protected], http://www.di.ens.fr/~busic/).
‡ Department of Electrical and Computer Engineering at the University of Florida, Gainesville ([email protected], http://www.meyn.ece.ufl.edu/).

1 Introduction

This paper concerns average-cost optimal control for Markovian models. It is assumed that there is a one-step reward w that is a function of state-input pairs. For a given policy that defines the input
as a function of present and past state values, the resulting average reward is the limit infimum,
    η = lim inf_{T→∞} (1/T) Σ_{t=1}^{T} w(X(t), U(t))    (1)
where X = {X(t) : t ≥ 0}, U = {U (t) : t ≥ 0} are the state and input sequences. Under general
conditions, the maximum over all policies is deterministic and independent of the initial condition,
and the optimal policy is state-feedback — obtained as the solution to the average-reward optimality
equations (AROE) [2, 18].
1.1 Background
In this paper the state space X on which X evolves is taken to be finite, but possibly large. It is
well known that computation of a solution to the AROE may be difficult in such cases. This is one
motivation for the introduction of approximation techniques such as reinforcement learning [3, 23].
An interesting alternative is to change the problem so that it is easily solved. In all of the
prior work surveyed here, the setting is average-cost optimal control, so that the reward function is
replaced by a cost function.
Brockett in [6] introduces a class of controlled Markov models in continuous time. The model
and cost function are formulated so that the optimal control problem is easily solved numerically.
The ODE technique is applied to this special class of MDPs in Section 2.4.
The theory developed in Section 3 was inspired by the work of Todorov [24], the similar earlier
work of Kárný [11], and the more recent work [10, 16]. The state space X is finite, the action
space U consists of probability mass functions on X, and the controlled transition matrix is entirely
determined by the input as follows:
    P{X(t + 1) = x′ | X(t) = x, U(t) = µ} = µ(x′),    x, x′ ∈ X, µ ∈ U.    (2)
The MDP has a simple solution only under special conditions on the cost function. It is assumed
in [24, 10, 16] that it is the sum of two terms: The first is a cost function on X, which is completely
unstructured. The second term is a “control cost”, defined using Kullback–Leibler (K-L) divergence
(also known as relative entropy).
The control cost is based on deviation from control-free behavior (modeled by a nominal transition matrix P0 ). In most applications, P0 captures randomness from nature. For example, in a
queueing model there is uncertainty in inter-arrival times or service times. An optimal solution in
this framework would allow modification of arrival statistics and service statistics, which may be
entirely infeasible. In this paper the K-L cost framework is broadened to include constraints on the
pmf µ appearing in (2).
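To fix ideas, the one-step cost in this framework takes a form like the following, where the state-cost term (denoted c̄ here; the name is ours) and the exact normalization are specified later in Section 3 and in [24]:

    c(x,\mu) \;=\; \bar c(x) \;+\; D\bigl(\mu \,\|\, P_0(x,\cdot)\bigr)
            \;=\; \bar c(x) \;+\; \sum_{x'} \mu(x') \log\frac{\mu(x')}{P_0(x,x')} .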
1.2 Contributions
The new approach to computation proposed in this paper is based on the solution of an entire
family of MDP problems. Section 2 begins with a general MDP model in which the one-step reward
function is a smooth function of a real parameter ζ ∈ R.
For each ζ, the solution to the average-reward optimization problem is based on a relative value
function h∗ζ : X → R. Under general conditions it is shown that these functions are obtained as the
solution to an ordinary differential equation
    (d/dζ) h∗ζ = V(h∗ζ)
Consequently, the solution to an entire family of MDPs can be obtained through the solution of a
single ordinary differential equation (ODE).
Following the presentation of these general results, the paper focuses on the class of MDPs with
transition dynamics given in (2): the input space is a subset of the simplex in Rd , and the cost
function c is defined with respect to K-L divergence (see (23) and surrounding discussion). The
optimal control formulation is far more general than in the aforementioned work [24, 10, 16], as it
allows for inclusion of exogenous randomness in the MDP model.
The dynamic programming equations become significantly more complex in this generality, so
that in particular, the Perron-Frobenious computational approach used in prior work is no longer
applicable. Nevertheless, the ODE approach can be applied to solve the family of MDP optimal control problems. The vector field V : Rd → Rd has special structure that further simplifies computation
of the relative value functions.
Simultaneous computation of the optimal policies is essential in applications to “demand dispatch” for providing virtual energy storage from a large collection of flexible loads [1, 16, 7]. In these
papers, randomized policies are designed for each of many thousands of electric loads in a distributed
control architecture. In this context it is necessary to compute the optimal transition matrix P̌ζ for
each ζ. Prior to the present work it was not possible to include any exogenous uncertainty in the
load model.
In the companion paper [8], the results of the present paper are applied to distributed control
of flexible loads, including thermostatically controlled devices such as refrigerators and heating
systems. This paper also contains extensions of the ODE method to generate transition matrices
with desirable properties, without consideration of optimality.
The remainder of the paper is organized as follows. Section 2 sets the notation for the MDP
models in a general setting, and presents an ODE approach to solving the AROE under minimal
conditions on the model. Section 3 describes the Kullback–Leibler cost criterion. Special structure of
optimal policies obtained in Theorem 3.4 leads to a simple representation of the ODE in Theorem 3.5.
Conclusions and topics for future research are contained in Section 4.
2 ODE for MDPs

2.1 MDP model
Consider an MDP with finite state space X = {x1 , . . . , xd }; the action space U is an open subset of Rm . The state process is denoted X = (X(0), X(1), . . . ), and the input process U =
(U (0), U (1), . . . ). The dynamics of the model are defined by a controlled transition matrix : for
x, x0 ∈ X, and u ∈ U, this is defined by
Pu (x, x0 ) = P{X(t + 1) = x0 | X(t) = x, U (t) = u}
where the right hand side is assumed independent of t = 0, 1, 2, . . . .
The one-step reward function is parameterized by a scalar ζ ∈ R. It is assumed to be continuously
differentiable in this parameter, with derivative denoted
    Wζ(x, u) = (d/dζ) wζ(x, u).    (3)
Unless there is risk of confusion, dependency on ζ will be suppressed; in particular, we write w
rather than wζ .
There may be hard constraints: For each x ∈ X, there is an open set U(x) ⊂ U consisting of
feasible inputs U (t) when X(t) = x.
The optimal reward η ∗ is defined to be the maximum of η in (1) over all policies. Under general
conditions on the model, η ∗ is deterministic, and is independent of x. Under further conditions, this
value and the optimal policy are characterized by the AROE:
    max_{u∈U(x)} { w(x, u) + Σ_{x′} Pu(x, x′) h∗(x′) } = h∗(x) + η∗    (4)
in which the function h∗ : X → R is called the relative value function. The stationary policy φ∗ is
obtained from the AROE: φ∗ (x) ∈ U is a maximizing value of u in (4) for each x [2, 18].
Structure for the optimal average reward is obtained under minimal assumptions:
Proposition 2.1. Suppose that the following hold:
(i) The welfare function is affine in its parameter: wζ (x, u) = w0 (x, u) + ζW (x, u) for some
function W and all x, u.
(ii) For each ζ, the optimal reward ηζ∗ exists, is deterministic, and is independent of the initial
condition.
(iii) For each ζ, the optimal reward ηζ∗ is achieved with a stationary policy φ∗ζ , and under this
policy, the following ergodic limits exist for each initial condition:
    ηζ∗ = lim_{T→∞} (1/T) Σ_{t=1}^{T} wζ(X(t), U(t)),       wζ = lim_{T→∞} (1/T) Σ_{t=1}^{T} W(X(t), U(t))

Then, ηζ∗ is convex as a function of ζ, with sub-derivative wζ:

    ηζ∗ ≥ η∗_{ζ0} + (ζ − ζ0) w_{ζ0},   for all ζ, ζ0 ∈ R.
Proof. Convexity of ηζ∗ will follow from the lower bound. Alternatively, convexity is implied by
the linear programming representation of average-cost optimal control, where ηζ∗ is defined as the
maximum of linear functions of ζ [15, 4].
To obtain the lower bound, choose any ζ, ζ0 ∈ R, and consider the average reward based on wζ ,
obtained using U (t) = φ∗ζ0 (X(t)) for all t ≥ 0. We then have,
    ηζ∗ ≥ lim inf_{T→∞} (1/T) Σ_{t=1}^{T} wζ(X(t), U(t))
        = lim_{T→∞} (1/T) Σ_{t=1}^{T} w_{ζ0}(X(t), U(t)) + lim_{T→∞} (1/T) Σ_{t=1}^{T} [ wζ(X(t), U(t)) − w_{ζ0}(X(t), U(t)) ]

The first summation on the right hand side is equal to η∗_{ζ0}. The second reduces to (ζ − ζ0) w_{ζ0} on substituting wζ − w_{ζ0} = (ζ − ζ0)W. □
We next introduce an ODE that solves the AROE for each ζ.
2.2 ODE solution
To construct an ordinary differential equation for h∗ζ requires several assumptions. The first is a
normalization: The relative value function is not unique, since we can add a constant to obtain a
new solution. We resolve this by fixing a state x◦ ∈ X, and assume that h∗ζ (x◦ ) = 0 for each ζ.
For any function h : X → R, we define a new function on X × U via,
    Pu h(x) = Σ_{x′} Pu(x, x′) h(x′).
Similar notation is used for an uncontrolled transition matrix.
Assumptions
(i) For each ζ, a solution to the AROE (h∗ζ, ηζ∗) exists, with h∗ζ(x◦) = 0, and the pair is continuously differentiable in ζ. Moreover, the function of (x, u, ζ) defined by

    qζ∗(x, u) = wζ(x, u) + Pu h∗ζ (x)

is jointly continuously differentiable in (ζ, u), with the representation

    (d/dζ) qζ∗(x, u) = Wζ(x, u) + Pu Hζ∗ (x),  in which  Hζ∗(x) = (d/dζ) h∗ζ(x).    (5)

(ii) The stationary policy exists as the maximum

    φ∗ζ(x) = arg max_{u∈U(x)} qζ∗(x, u),   x ∈ X,

and is continuously differentiable in ζ for each x.

(iii) The optimal transition matrix P̌ζ is irreducible, with unique invariant pmf denoted πζ, where

    P̌ζ(x, x′) = P_{u∗}(x, x′),   u∗ = φ∗ζ(x),  x, x′ ∈ X.
All of these assumptions hold for the class of MDP models considered in Section 3.
These assumptions imply that for each ζ there is a solution Hζ to Poisson’s equation,
    W̌ζ + P̌ζ Hζ = Hζ + wζ    (6)
in which W̌ζ(x) = Wζ(x, φ∗ζ(x)), and wζ = Σ_{x} πζ(x) W̌ζ(x). It is assumed throughout that the solution is normalized, with Hζ(x◦) = 0; there is a unique solution to (6) with this normalization [17, Thm. 17.7.2].
The function qζ∗ is the “Q-function” that appears in Q-learning [3]. Under (i) and (ii), it follows
from the AROE that for each x and ζ,
    q∗ζ(x, φ∗ζ(x)) = max_{u∈U(x)} { wζ(x, u) + Pu h∗ζ(x) } = h∗ζ(x) + η∗ζ    (7)
MDP vector field
In Theorem 2.2 it is shown that the family of relative value functions solves an ODE. A function
h : X → R is regarded as a vector in Rd . The vector field V is not homogeneous, so it is regarded as
a mapping V : Rd+1 → Rd . For a given function h : X → R and ζ ∈ R, the function V(h, ζ) is defined
through the following steps:
1. Obtain a policy: φ(x) = arg maxu {wζ (x, u) + Pu h (x)}.
2. Obtain a transition matrix P̌ (x, x0 ) = Pφ(x) (x, x0 ), x, x0 ∈ X.
3. Obtain the solution to Poisson’s equation, W̌ + P̌ H = H + w, in which W̌ (x) = Wζ (x, φ(x)),
x ∈ X, and w is the steady-state mean of W̌ under this policy. The solution is normalized so that
H(x◦ ) = 0.
4. Set V(h, ζ) = H.
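The four steps above are straightforward to implement when X is finite. The following sketch is our own illustration and not part of the paper: it assumes a finite grid of candidate actions in place of the open set U(x), stores the transition law as a list P of d × d matrices indexed by action, and stores w0 and W as d × A arrays; all names are ours.

    # Minimal numerical sketch of the vector field V(h, zeta), steps 1-4, for a finite MDP.
    # Assumption: U(x) is approximated by a finite set of candidate actions indexed 0..A-1.
    import numpy as np

    def vector_field(h, zeta, P, w0, W, x_ref=0):
        """P[a] is a d x d transition matrix for action a; w0, W are (d, A) arrays.
        Returns H = V(h, zeta), normalized so that H[x_ref] = 0."""
        d, A = w0.shape
        w_zeta = w0 + zeta * W                            # affine welfare w_zeta = w0 + zeta*W

        # Step 1: greedy policy phi(x) = argmax_u { w_zeta(x,u) + (P_u h)(x) }
        q = np.array([[w_zeta[x, a] + P[a][x, :] @ h for a in range(A)] for x in range(d)])
        phi = q.argmax(axis=1)

        # Step 2: closed-loop transition matrix P_check(x, x') = P_{phi(x)}(x, x')
        P_check = np.array([P[phi[x]][x, :] for x in range(d)])

        # Invariant pmf of P_check (assumed irreducible), as a left eigenvector
        evals, evecs = np.linalg.eig(P_check.T)
        pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
        pi = pi / pi.sum()

        # Step 3: Poisson's equation W_check + P_check H = H + w_bar, H[x_ref] = 0,
        # solved through the matrix inverse [I - P_check + 1 (x) pi]^{-1} (the fundamental
        # matrix used again in Section 3.1)
        W_check = np.array([W[x, phi[x]] for x in range(d)])
        w_bar = pi @ W_check
        Z = np.linalg.inv(np.eye(d) - P_check + np.outer(np.ones(d), pi))
        H = Z @ (W_check - w_bar)
        H = H - H[x_ref]

        # Step 4: V(h, zeta) = H
        return H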
Theorem 2.2. Under the assumptions of this section:
(i) The family of relative value functions {h∗ζ } is a solution to the ordinary differential equation,
    d/dζ h∗ζ = V(h∗ζ, ζ)    (8)
(ii) d/dζ η∗ζ = wζ, with wζ = Σ_{x} πζ(x) Wζ(x, φ∗ζ(x)), where πζ is the invariant pmf for P̌ζ.
(iii) If the derivative Wζ (x, u) is independent of ζ and u for each x, then the ODE is homogeneous. That is, for each h, the function V(h, ζ) does not depend on ζ.
Proof. The domain of V is defined to be any h for which the solution to (1)–(3) is possible. The
domain may not include all functions h, but it is defined for any of the relative value functions {h∗ζ };
this is true by the assumptions imposed in the theorem.
If Wζ is independent of ζ and u, then wζ (x, u) = w0 (x, u) + ζW (x) for each x, u, ζ. It follows
that φ is independent of ζ in step 1, and W̌ is independent of ζ in step 3. Hence the vector field is
independent of ζ.
To complete the proof it remains to establish (i), which will lead to the representation for d/dζ η∗ζ in part (ii) of the theorem.
The assumption that U is open and that q∗ζ is continuously differentiable is used to apply the first-order condition for optimality of φ∗ζ(x):
    0 = ∂/∂u q∗ζ(x, u) |_{u=φ∗ζ(x)}
On differentiating each side of the AROE in the form (7), we obtain from the chain rule
    d/dζ { h∗ζ(x) + η∗ζ } = d/dζ { q∗ζ(x, φ∗ζ(x)) }
        = ∂/∂ζ q∗ζ(x, u) |_{u=φ∗ζ(x)} + ∂/∂u q∗ζ(x, u) |_{u=φ∗ζ(x)} · (d/dζ) φ∗ζ(x)
        = ∂/∂ζ q∗ζ(x, u) |_{u=φ∗ζ(x)}
        = W̌ζ(x) + Pu H∗ζ(x) |_{u=φ∗ζ(x)}
where in the last equation we have applied (5). Rearranging terms leads to the fixed point equation
    W̌ζ + P̌ζ H∗ζ = H∗ζ + d/dζ η∗ζ
Taking the mean of each side with respect to πζ implies that d/dζ η∗ζ = wζ. This establishes (ii), and completes the proof that (8) holds.   □
Extensions
An ODE can be constructed for the discounted-cost problem with discount factor β ∈ (0, 1). The
DROE (discounted-reward optimality equation) is another fixed point equation, similar to (4):
    max_{u∈U(x)} { w(x, u) + β Σ_{x′} Pu(x, x′) h∗(x′) } = h∗(x)
Step 3 in the construction of the vector field for the ODE is modified as follows: Obtain the solution
to W̌ + β P̌ H = H. The solution is unique, so no normalization is possible (or needed).
For both average- and discounted-reward settings, an ODE can be constructed when U is a finite
set rather than an open subset of Rm . In this case, under general conditions, the vector field V is
continuous and piecewise smooth, and the optimal policy is piecewise constant as a function of ζ.
We next consider two simple examples to illustrate the conclusions in the average cost setting.
The ODE is homogeneous in all of the examples that follow.
2.3 Example 1: Linear-quadratic model
Consider first the simple scalar linear system,
    X(t + 1) = αX(t) + U(t) + N(t + 1)
in which 0 < α < 1. The disturbance N is i.i.d. with zero mean and finite variance σ²N. The state space and action space are the real line, X = U = R. The reward function is taken to be quadratic in the state variable, w(x, u) = −ζx² − c(u), so that W(x, u) = −x² is independent of both ζ and u. The cost function c : R → R is continuously differentiable and convex, its derivative c′ is globally Lipschitz continuous, and c(0) = c′(0) = 0.
It is assumed in this example that ζ ≥ 0. It can be shown that the relative value function h∗ζ is
a concave function of x under these assumptions; it is normalized so that h∗ζ (0) = 0 for each ζ ≥ 0
(that is, x◦ = 0).
The ODE can be developed even in this infinite state-space setting.
Notation is simplified if this is converted to an average-cost optimization problem, with one-step
cost function cζ (x, u) = ζx2 + c(u). We let gζ∗ = −h∗ζ , which is a convex function on R. The AROE
becomes the average-cost optimality equation,
    min_u { cζ(x, u) + Pu g∗ζ(x) } = g∗ζ(x) + γ∗ζ    (9)
with γ∗ζ = −η∗ζ. The optimal policy is the minimizer,
    φ∗ζ(x) = arg min_u { c(u) + Pu g∗ζ(x) } = arg min_u { c(u) + E[g∗ζ(αx + u + N1)] }
The ODE is modified as follows. Let K denote the set of non-negative convex functions g : R → R,
and construct the vector field so that V : K → K. For given g ∈ K, we must define V(g). Since we
are minimizing cost, step 1 in the construction of the vector field becomes:
    Obtain a policy: φ(x) = arg min_u { c(u) + E[g(αx + u + N1)] }.
This is a convex optimization problem whose solution can be obtained numerically.
Step 2 is obtained as follows:
    P̌(x, A) = P{αx + φ(x) + N1 ∈ A},   x ∈ X, A ∈ B(X).
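For concreteness, the following sketch (ours, not from the paper) carries out step 1 for a given convex g by estimating the expectation with Monte Carlo samples of the disturbance; the sample size, the search interval, and the use of scipy's bounded scalar minimizer are all assumptions made here.

    # Step 1 for Example 1: phi(x) = argmin_u { c(u) + E[g(alpha*x + u + N)] }, by Monte Carlo.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def policy(x, g, c, alpha=0.95, sigma=1.0, n_samples=10_000,
               rng=np.random.default_rng(1)):
        noise = rng.normal(scale=sigma, size=n_samples)
        objective = lambda u: c(u) + np.mean(g(alpha * x + u + noise))
        return minimize_scalar(objective, bounds=(-50.0, 50.0), method="bounded").x

    # With c(u) = u^2 and g(y) = b*y^2 this recovers the linear gain u = -k*x, k = b*alpha/(1+b):
    print(policy(1.0, g=lambda y: 2.0 * y**2, c=lambda u: u**2))   # close to -0.95*2/3 ≈ -0.633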
The solution to Poisson’s equation in step 3 of the ODE construction is more challenging. This
might be approximated using a basis, as in TD-learning [3, 23].
Figure 1: Solution to the ODE (10) for the linear-quadratic example (the coefficient b∗ζ plotted against ζ).
If the control cost is quadratic, c(u) = u2 , then the relative value function is also quadratic, so
that gζ∗ (x) = −h∗ζ (x) = bx2 , with b ≥ 0, and b = 0 only if ζ = 0. The optimal policy is linear,
u = −kx for some gain k. The vector field in the ODE can be restricted to functions of this form:
For any b ≥ 0,
(i) Obtain a policy:
    φ(x) = arg min_u { u² + b E[(αx + u + N1)²] } = arg min_u { u² + b[(αx + u)² + σ²N] }
This gives u = −kx, with k = bα/(1 + b).
(ii) With a linear policy we obtain P̌(x, A) = P{(α − k)x + N1 ∈ A}.
(iii) Obtain the solution to Poisson's equation, W + P̌H = H + w, in which w is the steady-state mean of W under this policy. Since W(x, u) = −x² is quadratic, it follows that H is also quadratic, H(x) = −Bx², with
    B = 1/(1 − (α − k)²),   and k given in (i).
(iv) Set V(g) = −H.
That is, d/dζ g∗ζ(x) = Bx², x ∈ R, ζ ≥ 0.
The ODE reduces to a differential equation for the coefficient b = b∗ζ:
    d/dζ b∗ζ = 1/(1 − (α − kζ)²),   kζ = b∗ζ α/(1 + b∗ζ)
On substitution this simplifies to
    d/dζ b∗ζ = 1/(1 − (α/(1 + b∗ζ))²)    (10)
with boundary condition b∗ζ |ζ=0 = 0. Fig. 1 shows the solution to this ODE for ζ ∈ [0, 1] with
α = 0.95.
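The curve in Fig. 1 can be reproduced in a few lines. The sketch below is our own illustration: it integrates (10) by forward Euler with α = 0.95; the step size is an arbitrary choice.

    # Forward Euler integration of the scalar ODE (10) over zeta in [0, 1].
    alpha, dz, zmax = 0.95, 1e-4, 1.0
    b, zeta, trace = 0.0, 0.0, [(0.0, 0.0)]
    while zeta < zmax:
        db = 1.0 / (1.0 - (alpha / (1.0 + b)) ** 2)   # right-hand side of (10)
        b += dz * db
        zeta += dz
        trace.append((zeta, b))
    print(trace[-1])   # b*_zeta at zeta = 1; plotting trace recovers the curve of Fig. 1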
2.4 Example 2: Brockett’s MDP
The theory in this paper is restricted to models in discrete time, but the main results carry over to
the continuous time setting. An illustration is provided using a class of MDP models introduced in
[6].
As assumed in the present paper, the state space X = {x1, . . . , xd} is finite, and the input evolves on an open subset of Rm. The dynamics are defined by a controlled rate matrix (or generator),
    Au(x, x′) = A(x, x′) + Σ_{k=1}^{m} uk Bk(x, x′)    (11)
where A and {B 1 , . . . , B m } are d × d matrices. It is assumed that the input is defined by state
feedback U (t) = φ(X(t)). The associated controlled rate matrix
    A(x, x′) = Aφ(x)(x, x′),   x, x′ ∈ X,
defines the transition rates for X under this policy:
    A(x, x′) = lim_{t↓0} (1/t) [ P{X(t) = x′ | X(0) = x} − I{x = x′} ]
Adapting the notation to the present paper, the cost function is taken of the form
    cζ(x, u) = ζκ(x) + ½‖u‖²,
in which κ : X → R. For the continuous time model, the average-cost optimality equation becomes
    min_u { cζ(x, u) + Au g∗ζ(x) } = γ∗ζ    (12)
in which γ∗ζ is the optimal average cost, g∗ζ is the relative value function, and
    Au g∗ζ(x) = Σ_{x′} Au(x, x′) g∗ζ(x′),   x ∈ X.
It is assumed that gζ∗ (x◦ ) = 0 for some state x◦ and all ζ.
The minimizer in (12) defines the optimal policy φ∗ (x). For this model and cost function, the
minimizer can be obtained by taking the gradient with respect to u and setting this equal to zero
to obtain:
    φ∗k(x) = − Σ_{x′} Bk(x, x′) g∗ζ(x′),   x ∈ X.    (13)
The ODE to solve (12) takes the following steps. First, if ζ = 0 then obviously φ∗ ≡ 0 and g∗0 ≡ 0. This sets the initial condition for the ODE. For other ζ we have as before that G∗ζ = d/dζ g∗ζ solves Poisson's equation for the generator Ǎζ obtained with policy φ∗ζ:
    κ(x) + Σ_{x′} Ǎζ(x, x′) G∗ζ(x′) = κζ,   x ∈ X,    (14)
where κζ is the steady-state mean of κ under this policy, and
    Ǎζ(x, x′) = Aφ∗ζ(x)(x, x′)
ODEs for MDPs with application to K-L Cost — January 15, 2018
10
The following numerical example from [6] will clarify the construction of the ODE. In this
example m = 1 so that u is scalar-valued, and X = {1, 2, 3}. Denote B = B 1 , where in this example
    A = [ −1  1  0 ;  1  −2  1 ;  0  1  −1 ],    B = [ −1  1  0 ;  0  0  0 ;  0  1  −1 ]
The input is restricted to {u ∈ R : u > −1}. For ζ > 0, the cost function is designed to penalize the first and third states: c(1) = c(3) = 3, and c(x2) = 0. In [6] the case ζ = 1/2 is considered, for which it is shown that φ∗(1) = φ∗(3) = √12 − 3, and φ∗(x2) = 0.
Written in vector form, Poisson’s equation (14) becomes
    3b + Aζ vζ = κζ e    (15)
in which bᵀ = (1, 0, 1), eᵀ = (1, 1, 1), vζ(i) = G∗ζ(xi), and
    Aζ(i, j) = A(i, j) + φ∗ζ(xi) B(i, j),   1 ≤ i, j ≤ 3.
This example is designed to have simple structure. From the form of the optimal policy (13),
it follows that φ∗ζ (2) = 0 for any ζ. Moreover, from symmetry of the model it can be shown
that φ∗ζ (1) = φ∗ζ (3), and gζ∗ (1) = gζ∗ (3). We take x◦ = 2 so that gζ∗ (2) = 0. Consequently, the
3-dimensional ODE for gζ∗ will reduce to a one-dimensional ODE for ξζ := gζ∗ (1).
The expression for the optimal policy (13) also gives
    φ∗ζ(1) = −ξζ Σ_{j} B(1, j) b(j) = ξζ
And, since the second row of B is zero, it follows that Aζ = A + ξζ B. We have d/dζ ξζ = G∗ζ(1) = vζ(1), and Poisson's equation (15) becomes
    3b + vζ(1) Aζ b = κζ e
The first two rows of this vector equation give
    3 + [−1 + ξζ(−1)] vζ(1) = κζ
    0 + 2 vζ(1) = κζ
Substituting the second equation into the first gives
    d/dζ ξζ = vζ(1) = 3/(3 + ξζ)
On making the change of variables fζ = 3 + ξζ we obtain
    ½ d/dζ f²ζ = fζ (d/dζ) fζ = 3,
whose solution is given by f²ζ = f²0 + 6ζ, with f²0 = 9.
In summary, φ∗ζ (1) = ξζ = −3 + fζ , giving
    φ∗ζ(1) = φ∗ζ(3) = −3 + √(9 + 6ζ),   φ∗ζ(2) = 0.
It is necessary to restrict ζ to the interval (−5/6, ∞) to respect the constraint that φ∗ζ (x) > −1 for
all x.
Based on the formula d/dζ γ∗ζ = κζ and the preceding formula κζ = 2vζ(1) = 2 d/dζ ξζ, it follows that
    γ∗ζ = 2ξζ = −6 + 2√(9 + 6ζ),   ζ > −5/6;
a concave function of ζ, as predicted by Prop. 2.1.
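As a sanity check (ours, not from [6]), the reduced one-dimensional ODE can be integrated numerically and compared against the closed-form expression for ξζ; the step size is an arbitrary choice.

    # Integrate d(xi)/d(zeta) = 3/(3 + xi) and compare with xi = -3 + sqrt(9 + 6*zeta).
    import math

    xi, zeta, dz = 0.0, 0.0, 1e-5
    while zeta < 0.5:                      # stop at zeta = 1/2, the case treated in [6]
        xi += dz * 3.0 / (3.0 + xi)
        zeta += dz
    print(xi, -3.0 + math.sqrt(9.0 + 6.0 * zeta))   # both close to sqrt(12) - 3 ≈ 0.464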
3 MDPs with Kullback–Leibler Cost
The general results of the previous section are now applied to a particular class of MDP models.
3.1 Assumptions and Notation
The dynamics of the MDP are assumed of the form (2), where the action space consists of a convex
subset of probability mass functions on X. The welfare function is assumed to be affine in ζ, as
assumed in Prop. 2.1. To maintain notational consistency with prior work [7, 16, 8] we denote
    wζ = w0 + ζU,   ζ ∈ R,    (16)
and assume that U : X → R is a function of the state only. In the notation of Prop. 2.1, we have
W (x, u) = U(x) for all x, u. Under these conditions, it was shown in Theorem 2.2 that the ODE (8)
is homogeneous.
The first term w0 in (16) is the negative of a control cost. Its definition begins with the specification of a transition matrix P0 that describes nominal (control-free) behavior. It is assumed to be
irreducible and aperiodic. Equivalently, there is n0 ≥ 1 such that for each x, x0 ∈ X,
    P0ⁿ(x, x′) > 0,   n ≥ n0.    (17)
It follows that P0 admits a unique invariant pmf, denoted π0 .
In the MDP model we deviate from this nominal behavior, but restrict to transition matrices
satisfying P (x, · ) ≺ P0 (x, · ) for each x. In fact, the optimal solutions will be equivalent:
    P(x, x′) > 0 ⟺ P0(x, x′) > 0,   for all x, x′ ∈ X    (18)
Under this condition it follows that P is also irreducible and aperiodic.
The following representation will be used in different contexts throughout the paper. Any function h : X × X → R is regarded as an unnormalized log-likelihood ratio: Denote for x, x0 ∈ X,
    Ph(x, x′) := P0(x, x′) exp( h(x′ | x) − Λh(x) ),    (19)
in which h(x′ | x) is the value of h at (x, x′) ∈ X × X, and Λh(x) is the normalization constant,
    Λh(x) := log Σ_{x′} P0(x, x′) exp( h(x′ | x) )    (20)
For any transition matrix P, an invariant pmf is interpreted as a row vector, so that invariance can be expressed πP = π. Any function f : X → R is interpreted as a d-dimensional column vector, and we use the standard notation Pf(x) = Σ_{x′} P(x, x′) f(x′), x ∈ X.
The fundamental matrix is the inverse,
    Z = [I − P + 1 ⊗ π]⁻¹    (21)
where 1 ⊗ π is a matrix in which each row is identical, and equal to π. If P is irreducible and aperiodic, then it can be expressed as a power series:
    Z = Σ_{n=0}^{∞} [P − 1 ⊗ π]ⁿ    (22)
with [P − 1 ⊗ π]⁰ := I (the d × d identity matrix), and [P − 1 ⊗ π]ⁿ = Pⁿ − 1 ⊗ π for n ≥ 1.
ODEs for MDPs with application to K-L Cost — January 15, 2018
12
The Donsker-Varadhan rate function is denoted,
    K(P‖P0) = Σ_{x,x′} π(x) P(x, x′) log [ P(x, x′) / P0(x, x′) ]    (23)
Letting Π(x, x0 ) = π(x)P (x, x0 ) and Π0 (x, x0 ) = π(x)P0 (x, x0 ), we have
    K(P‖P0) = D(Π‖Π0)    (24)
where D denotes K-L divergence. It is called a “rate function” because it defines the relative entropy
rate between two stationary Markov chains, and appears in the theory of large deviations for Markov
chains [12].
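For finite models, (23) is a one-line computation. The sketch below uses our own naming, assumes P(x, ·) ≺ P0(x, ·) for each x, and applies the convention 0 log 0 = 0:

    # Donsker-Varadhan rate function (23) for finite transition matrices P, P0 and pmf pi.
    import numpy as np

    def kl_rate(P, P0, pi):
        mask = P > 0                                   # entries with P = 0 contribute nothing
        ratio = np.where(mask, P / np.where(P0 > 0, P0, 1.0), 1.0)
        return float(np.sum(pi[:, None] * np.where(mask, P * np.log(ratio), 0.0)))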
For the transition matrix Ph defined in (19), the rate function can be expressed in terms of
its invariant pmf πh , the bivariate pmf Πh (x, x0 ) = πh (x)Ph (x, x0 ), and the log moment generating
function (20):
    K(Ph‖P0) = Σ_{x,x′} Πh(x, x′) [ h(x′ | x) − Λh(x) ]
             = Σ_{x,x′} Πh(x, x′) h(x′ | x) − Σ_{x} πh(x) Λh(x)    (25)
As in [24, 10, 16], the rate function is used here to model the cost of deviation from the nominal
transition matrix P0 : the control objective in this prior work can be cast as the solution to the
convex optimization problem,
    η∗ζ = max_{π,P} { ζπ(U) − K(P‖P0) : πP = π }    (26)
where U : X → R, and the maximum is over all transition matrices.
Nature & nurture In many applications it is necessary to include a model of randomness from
nature along with the randomness introduced by the local control algorithm (nurture). This imposes
additional constraints in the optimization problem (26).
Consider a Markov model in which the full state space is the cartesian product of two finite
state spaces: X = Xu × Xn , where Xu are components of the state that can be directly manipulated
through control. The “nature” components Xn are not subject to direct control. For example, these
variables may be used to model service and arrival statistics in a queueing model, or uncertainty in
terrain in an application to robotics.
Elements of X are denoted x = (xu , xn ). Any state transition matrix under consideration is
assumed to have the following conditional-independence structure,
    P(x, x′) = R(x, x′u) Q0(x, x′n),   x ∈ X, x′u ∈ Xu, x′n ∈ Xn    (27)
where Σ_{x′u} R(x, x′u) = Σ_{x′n} Q0(x, x′n) = 1 for each x. The matrix Q0 is out of our control – this models dynamics such as the weather.
To underscore the generality of this model, consider a standard MDP model with finite state
space S, finite action space A, and controlled transition law ϱ. Letting Φ denote the state process and U the input process, we have for any two states s, s′, and any action a,
    P{Φ(t + 1) = s′ | Φ(t) = s, U(t) = a} = ϱ(s′ | s, a)
A randomized policy is defined by a function φ : A × S → [0, 1] for which φ( · | s) is a probability
law on A for each s ∈ S.
Proposition 3.1. Consider the MDP model with transition law ϱ and randomized policy φ. For each t ≥ 0 denote Xn(t) = Φ(t) and Xu(t) = U(t − 1), where X(0) = (U(−1), Φ(0)) is the initial condition. Then X = (Xu, Xn) is a Markov chain on X = A × S, with transition matrix of the form (27), where for x, x′ ∈ X,
    Q0(x, x′n) = ϱ(x′n | xn, xu),   R(x, x′u) = φ(x′u | xn).
Proof. From the definitions and Bayes' rule,
    P{Xu(t + 1) = x′u, Xn(t + 1) = x′n | X(t) = x}
        = P{Xn(t + 1) = x′n | Xu(t + 1) = x′u, X(t) = x} P{Xu(t + 1) = x′u | X(t) = x}
        = P{Φ(t + 1) = x′n | U(t) = x′u, X(t) = x} P{U(t) = x′u | X(t) = x}
Recall that X(t) = (Φ(t), U(t − 1)). The pair (U(t − 1), Φ(t + 1)) are conditionally independent given (Φ(t), U(t)), so that the right hand side becomes
    P{Φ(t + 1) = x′n | U(t) = x′u, Φ(t) = xn} P{U(t) = x′u | Φ(t) = xn}
This establishes the desired result:
    P{Xu(t + 1) = x′u, Xn(t + 1) = x′n | X(t) = x} = ϱ(x′n | xn, xu) φ(x′u | xn)   □
3.2 Optimal control with Kullback–Leibler cost
We consider now the optimization problem (26), subject to the structural constraint (27), with Q0
fixed. The maximizer defines a transition matrix that is denoted,
    P̌ζ = arg max_{P} { ζπ(U) − K(P‖P0) : πP = π }    (28)
It is shown in Prop. 3.3 that this can be cast as a convex program, even when subject to the
structural constraint (27). The optimization variable in the convex program will be taken to be
pmfs Π on the product space X × Xu .
Define for each π and R the pmf on X × Xu ,
    Ππ,R(x, x′u) = π(x) R(x, x′u),   x ∈ X, x′u ∈ Xu.    (29)
The pmf π can be recovered from Ππ,R via π(x) = Σ_{x′u} Ππ,R(x, x′u), x ∈ X, and the matrix R can also be recovered via R(x, x′u) = Ππ,R(x, x′u)/π(x), provided π(x) > 0.
The following result shows that we can restrict to R for which Ππ,R ≺ Ππ0 ,R0 .
Lemma 3.2. For any transition matrix P ,
K(P kP0 ) < ∞ ⇐⇒ Ππ,R ≺ Ππ0 ,R0
Proof. If K(P kP0 ) < ∞ then P (x, · ) ≺ P0 (x, · ). This implies that R(x, · ) ≺ R0 (x, · ) for each
x ∈ X satisfying π(x) > 0, and also that π ≺ π0 .
Hence, if K(P kP0 ) < ∞, then for each x and x0u ,
π0 (x)R0 (x, x0u ) = 0 ⇒ π(x)R0 (x, x0u ) = 0
⇒ π(x)R(x, x0u ) = 0 ,
which establishes one implication: Ππ,R ≺ Ππ0 ,R0 whenever K(P kP0 ) < ∞.
Conversely, if Ππ,R ≺ Ππ0 ,R0 then K(P kP0 ) < ∞ by the definition of K, and the convention
s log(s) = 0 when s = 0.
□
Lemma 3.2 is one step towards the proof of the following convex program representation of (26):
Proposition 3.3. The objective function in (26) is concave in the variable Π = Ππ,R , subject to
the convex constraints,
    Π is a pmf on X × Xu    (30a)
    Π ≺ Π0, with Π0(x, x′u) = π0(x) R0(x, x′u)    (30b)
    Σ_{x} Q0(x, x′n) Π(x, x′u) = Σ_{xu} Π(x′, xu),   for x′ = (x′u, x′n) ∈ X    (30c)
It admits an optimizer Π∗ (x, x0u ) = π̌ζ (x)Řζ (x, x0u ), in which π̌ζ (x) > 0 for each x. Consequently,
there exists an optimizer P̌ζ for (28), with invariant pmf π̌ζ .
Proof. We first consider the constraints: (a) is by definition, and (c) is the invariance constraint for
(π, P ). Constraint (b) is without loss of generality, given Lemma 3.2.
Next we turn to the objective function: The function to be maximized in (26) can be expressed
    ζπ(U) − K(P‖P0) = Σ_{x} π(x) w(x, R)
in which
    w(x, R) = ζU(x) − Σ_{x′} P(x, x′) log [ P(x, x′) / P0(x, x′) ]
            = ζU(x) − Σ_{x′u} R(x, x′u) log [ R(x, x′u) / R0(x, x′u) ]    (31)
The second equation follows from the assumption that P depends on R through (27). Multiplying each side by π(x) and summing over x we obtain a representation in terms of the variable Ππ,R, with
    ζπ(U) − K(P‖P0) = ζ Σ_{x,x′u} Ππ,R(x, x′u) U(x) − D(Ππ,R ‖ Ππ,R0)
The K-L divergence D is known to be jointly convex in its two arguments [9]. Since Ππ,R0 is a linear
function of Ππ,R , this establishes concavity.
The existence of an optimizer follows from the fact that the function to be optimized is continuous
as a function of Ππ,R , and the domain of optimization (30a–30c) is compact.
□

It is shown in Theorem 3.4 that the optimal value η∗ζ together with a relative value function h∗ζ solve the average reward optimality equation (AROE):
    max_R { w(x, R) + Σ_{x′} P(x, x′) h∗ζ(x′) } = h∗ζ(x) + η∗ζ    (32)
Recall that the relative value function is not unique, since a new solution is obtained by adding a
non-zero constant; the normalization h∗ζ (x◦ ) = 0 is imposed, where x◦ ∈ X is a fixed state.
The proof of Theorem 3.4 is given in the Appendix.
Theorem 3.4. There exist optimizers {π̌ζ , P̌ζ : ζ ∈ R}, and solutions to the AROE {h∗ζ , ηζ∗ : ζ ∈ R}
with the following properties:
(i) The optimizer P̌ζ can be obtained from the relative value function h∗ζ as follows:
    P̌ζ(x, x′) := P0(x, x′) exp( hζ(x′u | x) − Λhζ(x) )    (33)
where for x ∈ X, x′u ∈ Xu,
    hζ(x′u | x) = Σ_{x′n} Q0(x, x′n) h∗ζ(x′u, x′n),    (34)
and Λhζ(x) is the normalizing constant (20) with h = hζ.
(ii) {π̌ζ , P̌ζ , h∗ζ , ηζ∗ : ζ ∈ R} are continuously differentiable in the parameter ζ.
□
The fact that the domain of optimization (30a–30c) is compact was helpful in establishing the
existence of an optimizer. However, the results in Section 2 require that the action space be an open
set. To apply the results of Section 2 we can apply Theorem 3.4 (i), which justifies the restriction
(18). The restricted action space is an open subset of Rm for some m < d.
Representations for the derivatives in Theorem 3.4 (ii), in particular the derivative of Λh∗ζ with
respect to ζ, lead to a representation for the ODE used to compute the optimal transition matrices
{P̌ζ }.
3.3 ODE Solution
It is shown here that the assumptions of Theorem 2.2 hold, and hence the relative value functions
{h∗ζ : ζ ∈ R} can be obtained as the solution to an ODE.
At the start of Section 2 it is assumed that the action space is an open subset of Euclidean space,
and this assumption is required in Theorem 2.2. This can be imposed without loss of generality
since any optimizer satisfies (18).
It is convenient to generalize the problem slightly here. Let {h◦ζ : ζ ∈ R} denote a family
of functions on X, continuously differentiable in the parameter ζ. They are not necessarily relative
value functions, but we maintain the structure established in Theorem 3.4 for the family of transition
matrices. Denote,
    hζ(x′u | x) = Σ_{x′n} Q0(x, x′n) h◦ζ(x′u, x′n),   x ∈ X, x′u ∈ Xu    (35)
and then define as in (19),
    Pζ(x, x′) := P0(x, x′) exp( hζ(x′u | x) − Λhζ(x) )    (36)
The function Λhζ : X → R is a normalizing constant, exactly as in (20):
    Λhζ(x) := log Σ_{x′} P0(x, x′) exp( hζ(x′u | x) )
We begin with a general method to construct a family of functions {h◦ζ : ζ ∈ R} based on an
ODE. Using notation similar to Theorem 2.2, the ODE is expressed,
    d/dζ h◦ζ = V(h◦ζ),   ζ ∈ R,    (37)
with boundary condition h◦0 ≡ 0. A particular instance of the method will result in h◦ζ = h∗ζ for each
ζ.
Assumed given is a mapping H◦ from transition matrices to functions on X. Following this, the
vector field V is obtained through the following two steps: For a function h : X → R,
(i) Define a new transition matrix via (19),
    Ph(x, x′) := P0(x, x′) exp( h(x′u | x) − Λh(x) ),   x, x′ ∈ X,    (38)
in which h(x′u | x) = Σ_{x′n} Q0(x, x′n) h(x′u, x′n), and Λh(x) is a normalizing constant.
(ii) Compute H ◦ = H◦ (Ph ), and define V(h) = H ◦ . It is assumed that the functional H◦ is
constructed so that H ◦ (x◦ ) = 0 for any P .
In [8] the functional H◦ is designed to ensure desirable properties in the “demand dispatch”
application that is the focus of that paper. It is shown here that a particular choice of the function
H◦ will provide the solution to the collection of MDPs (26). Its domain will include only transition matrices that are irreducible and aperiodic. For any transition matrix P in this domain, the
fundamental matrix Z is obtained using (21), and then H ◦ = H◦ (P ) is defined as
    H◦(x) = Σ_{x′} [Z(x, x′) − Z(x◦, x′)] U(x′),   x ∈ X    (39)
The function H◦ is a solution to Poisson's equation,
    P H◦ = H◦ − U + Ū    (40)
where Ū (also written π(U)) is the steady-state mean of U:
    Ū := Σ_{x} π(x) U(x)    (41)
The proof of Theorem 3.5 is given in the Appendix.
Theorem 3.5. Consider the ODE (37) with boundary condition h◦0 ≡ 0, and with H ◦ = H◦ (P )
defined using (39) for each transition matrix P that is irreducible and aperiodic.
The solution to this ODE exists, and the resulting functions {h◦ζ : ζ ∈ R} coincide with the
relative value functions {h∗ζ : ζ ∈ R}. Consequently, P̌ζ = Phζ for each ζ.
□
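Theorem 3.5 suggests a simple numerical scheme: starting from h◦0 ≡ 0, alternate the tilting (19)–(20) and the Poisson solve (39), and advance ζ by an Euler step of (37). The sketch below is ours, not the paper's: it treats the simplest case in which the nature component Xn is trivial, so that h(x′ | x) = h(x′), and the Euler step size is an assumption.

    # ODE solution of the K-L control problem (26) in the fully controllable case.
    import numpy as np

    def tilted(P0, h):
        Lam = np.log((P0 * np.exp(h)[None, :]).sum(axis=1))       # (20)
        return P0 * np.exp(h[None, :] - Lam[:, None])              # (19)

    def H_circ(P, Ufun, x_ref):
        d = P.shape[0]
        evals, evecs = np.linalg.eig(P.T)
        pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))]); pi /= pi.sum()
        Z = np.linalg.inv(np.eye(d) - P + np.outer(np.ones(d), pi))   # fundamental matrix (21)
        return (Z - Z[x_ref, :]) @ Ufun                                # (39)

    def solve_kl_odes(P0, Ufun, zeta_max=1.0, dz=1e-3, x_ref=0):
        h = np.zeros(P0.shape[0])            # boundary condition h°_0 = 0
        zeta = 0.0
        while zeta < zeta_max:
            h = h + dz * H_circ(tilted(P0, h), Ufun, x_ref)   # Euler step of (37)
            zeta += dz
        return h, tilted(P0, h)              # h*_zeta and the optimal transition matrix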
We sketch here the main ideas of the proof of Theorem 3.5.
The Implicit Function Theorem is used to establish differentiability of the relative value functions and average reward as a function of ζ. The ODE representation can then be obtained from
Theorem 2.2.
The next step is to establish the particular form for the ODE. The statement of the theorem is
equivalent to the representation Hζ∗ = H◦ (P̌ζ ) for each ζ, where h∗ζ is the relative value function, P̌ζ
is defined in (28), and
    H∗ζ = d/dζ h∗ζ    (42)
The first step in the proof of (42) is a fixed point equation that follows from the AROE. The following identity is given in Prop. A.2:
    ζU + Λh∗ζ = h∗ζ + η∗ζ.    (43)
A representation for the derivative of the log moment generating function is obtained in Lemma B.4,
    d/dζ Λh∗ζ(x) = Σ_{x′} P̌ζ(x, x′) H∗ζ(x′).
Differentiating each side of (43) then gives
    U + P̌ζ H∗ζ = H∗ζ + d/dζ η∗ζ.    (44)
This is Poisson's equation, and it follows that π̌ζ(U) = d/dζ η∗ζ. Moreover, since h∗ζ(x◦) = 0 for every ζ, we must have H∗ζ(x◦) = 0 as well. Since the solution to Poisson's equation with this normalization is unique, we conclude that (42) holds, and hence H∗ζ = H◦(P̌ζ) as claimed.
4 Conclusions
It is surprising that an MDP can be solved using an ODE under general conditions, and fortunate
that this ODE admits simple structure in the K-L cost framework that is a focus of the paper.
It is likely that the ODE has special structure for other classes of MDPs, such as the “rational
inattention” framework of [21, 22, 19, 20]. The computational efficiency of this approach will depend
in part on numerical properties of the ODE, such as its sensitivity for high dimensional models.
Finally, it is hoped that this approach will lead to new approaches to approximate dynamic
programming or reinforcement learning.
Appendices
A AROE and Duality
Based on the linear programming (LP) approach to dynamic programming [15, 4], it can be shown
that the AROE is the dual of the primal problem (26). The relative value function h∗ is the dual
variable associated with the invariance constraint π = πP [4]. To prove Theorem 3.4 we require
properties of the primal and dual.
The primal (26) is equivalently expressed,
    η∗ζ = max_{π,R} { Σ_{x} π(x) w(x, R) : (27) holds, and π = πP }    (45)
The AROE becomes,
    max_R { w(x, R) + Σ_{x′} P(x, x′) h∗ζ(x′) } = h∗ζ(x) + η∗ζ    (46)
It will be shown that (46) can be interpreted as a dual of the convex program (45). We first
characterize its optimizer, denoted Řζ . This representation is based on the convex duality between
K-L divergence and the log-moment generating function recalled in Lemma A.1.
Fix a pmf µ0 on X. For any function F : X → R, the log-moment generating function is denoted
    Λ(F) = log { Σ_{x} µ0(x) exp(F(x)) }
The mean of a function F under an arbitrary pmf µ is denoted µ(F) = Σ_{x} µ(x) F(x). The following
lemma can be regarded as a consequence of Kullback’s inequality (see eqn (4.5) of [13]); see also
Theorem 3.1.2 of [9].
Lemma A.1. The log-moment generating function is the convex dual of relative entropy,
    Λ(F) = max_µ { µ(F) − D(µ‖µ0) }
where the maximum is over all pmfs, and is achieved uniquely with
    µ∗(x) = µ0(x) exp{F(x) − Λ(F)},   x ∈ X.   □
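Lemma A.1 is easy to verify numerically; the following sketch (ours, with arbitrary data) checks that the tilted pmf µ∗ attains the value Λ(F):

    import numpy as np

    rng = np.random.default_rng(0)
    mu0 = rng.dirichlet(np.ones(5))            # an arbitrary reference pmf
    F = rng.normal(size=5)                     # an arbitrary function F : X -> R
    Lam = np.log(np.sum(mu0 * np.exp(F)))      # log-moment generating function
    mu_star = mu0 * np.exp(F - Lam)            # the maximizer from Lemma A.1
    value = mu_star @ F - np.sum(mu_star * np.log(mu_star / mu0))
    print(Lam, value)                          # the two numbers agree up to rounding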
The following representation for Řζ easily follows. The fixed point equation appearing in Prop. A.2 was previously stated in (43).

Proposition A.2. The control matrix maximizing the left hand side of (46) is given by
    Řζ(x, x′u) = R0(x, x′u) exp( h∗ζ(x′u | x) − Λh∗ζ(x) ).    (47)
Consequently, the AROE is equivalent to the fixed point equation ζU + Λh∗ζ = h∗ζ + η∗ζ.
Proof. Using (31), the AROE becomes
    h∗ζ(x) + η∗ = max_R { w(x, R) + Σ_{x′u, x′n} R(x, x′u) Q0(x, x′n) h∗ζ(x′u, x′n) }.
Recalling the notation h∗ζ(x′u | x) in (34), we obtain
    h∗ζ(x) + η∗ = ζU(x) + max_R { Σ_{x′u} R(x, x′u) h∗ζ(x′u | x) − Σ_{x′u} R(x, x′u) log [ R(x, x′u) / R0(x, x′u) ] }    (48)
For fixed x denote F(x′u) = h∗ζ(x′u | x), µ0(x′u) = R0(x, x′u) and µ(x′u) = R(x, x′u), x′u ∈ Xu. The maximization variable in (48) is µ, and the maximization problem we must solve is
    max_µ { µ(F) − D(µ‖µ0) }
The formula for the optimizer µ∗ in Lemma A.1 gives the expression for Řζ in (47). The fact that the optimal value is Λ(F) implies the fixed point equation (43).   □
It is established next that the AROE does indeed hold, by constructing a dual of (45) obtained through a relaxation of the invariance constraint. A dual functional ϕ∗ζ is defined for any function h : X → R via
    ϕ∗ζ(h) = max_{π,R} Σ_{x} π(x) [ w(x, R) + (P − I)h(x) ]
where (π, R) are now independent variables, and P is obtained from R via (27). We have ϕ∗ζ (h) ≥ ηζ∗
for any h, and there is no duality gap:
Proposition A.3. There exists h∗ζ such that ϕ∗ζ (h∗ζ ) = ηζ∗ . The pair (h∗ζ , ηζ∗ ) is a solution to the
AROE (32).
Proof. To show that there is no duality gap we apply Prop. 3.3, which establishes that the primal
is a convex program, and hence a sufficient condition is Slater’s condition [5, Section 5.3.2]. This
condition holds because Ππ0 ,R0 is in the relative interior of the constraint-set for the primal.
Since there is no duality gap, it then follows that there exists a maximizer for ϕ∗ζ , denoted h∗ζ ,
which satisfies η∗ζ = ϕ∗ζ(h∗ζ). To obtain the AROE, consider this representation:
    η∗ζ = max_π Σ_{x} π(x) max_R { w(x, R) + P h∗ζ(x) − h∗ζ(x) }
The maximum over pmfs π is the same as the maximum over x:
    η∗ζ = max_x max_R { w(x, R) + P h∗ζ(x) − h∗ζ(x) }
To complete the proof we must remove the maximum over x. For this, recall that π0 and hence π̌ζ have full support (they are strictly positive on all of X).
Prop. A.2 implies that the maximum over R is uniquely given by Řζ in (47), so that
    η∗ζ = max_x { w(x, Řζ) + P̌ζ h∗ζ(x) − h∗ζ(x) }
Averaging over the optimizing pmf π̌ζ gives, by invariance,
    η∗ζ = Σ_{x} π̌ζ(x) w(x, Řζ) = Σ_{x} π̌ζ(x) [ w(x, Řζ) + P̌ζ h∗ζ(x) − h∗ζ(x) ].
Because π̌ζ(x) > 0 for every x, it follows that the AROE (46) holds:
    η∗ζ = w(x, Řζ) + P̌ζ h∗ζ(x) − h∗ζ(x) = max_R { w(x, R) + P h∗ζ(x) − h∗ζ(x) }   □
B Derivatives
The proof of Part (ii) of Theorem 3.4 is obtained through a sequence of lemmas. We first obtain
an alternative representation for the fixed point equation (43). Evaluating this equation at x◦ , and
recalling that h∗ζ (x◦ ) = 0 gives,
    η∗ζ = ζU(x◦) + Λh∗ζ(x◦)    (49)
Let I denote the function on X that is identically equal to 1, and for any function h and ζ ∈ R define
a new function on X via
    F(ζ, h) := h − ζU − Λh + [ζU(x◦) + Λh(x◦)] I    (50)
The fixed point equation (43) becomes
    F(ζ, h∗ζ) = 0.    (51)
The proof of Theorem 3.4 will require the Implicit Function Theorem. The following version of
this result is taken from [14, Theorem 11.5].
Proposition B.1 (Implicit Function Theorem). Suppose that A ⊂ Rn and B ⊂ Rm are open, and
that F : A × B → Rm is continuously differentiable. Suppose moreover that there exists (x0 , y 0 ) ∈
A × B for which the following hold: F (x0 , y 0 ) = 0, and the matrix ∂/∂yF (x0 , y 0 ) has rank m.
Then, there is a ball O ⊂ A about x0 and a continuously differentiable function g : O → B such that, for each x ∈ O, y = g(x) is the unique solution of the equation F(x, y) = 0.   □
To apply Prop. B.1, we take F to be the mapping F defined in (50) and (x, y) = (ζ, h), so that n = 1 and m = d. We apply the result to any (ζ0, h∗ζ0) to establish that the mapping ζ → h∗ζ is C¹.
For this we require a representation for the derivative of F with respect to the variable h. The derivative is represented as a d × d matrix, defined so that for any function g : X → R,
    F(ζ, h + εg)(x) = F(ζ, h)(x) + ε Σ_{x′∈X} [∂/∂h F(ζ, h)]_{x,x′} g(x′) + o(ε)
The following follows from (50) and elementary calculus:
Lemma B.2. The function F is continuously differentiable in (ζ, h). The partial derivative with
respect to the second variable is,
    ∂/∂h F(ζ, h) = I − Ph + I ⊗ ν,
in which Ph is the transition matrix defined in (38), and I ⊗ ν represents a d × d matrix with each row equal to ν, and with ν(x) = Ph(x◦, x), x ∈ X.   □
Invertibility of the derivative with respect to h is obtained in the following:
Lemma B.3. The following inverse exists as a power series,
    Zh = [I − Ph + I ⊗ ν]⁻¹ = Σ_{n=0}^{∞} (Ph − I ⊗ ν)ⁿ
in which ν is defined in Lemma B.2. Moreover, νZh is the unique invariant pmf for Ph .
Proof. It is easily established by induction that for each n ≥ 1,
(Ph − I ⊗ ν)n = Phn − I ⊗ νn ,
where νn = νPhn−1 . Recall that Ph is irreducible and aperiodic since this is true for P0 . Consequently,
as n → ∞ we have νn → πh and Phn → I ⊗ πh , where πh is invariant for Ph . The convergence is
geometrically fast, which establishes the desired inverse formula.
From the foregoing we have (Ph − I ⊗ ν)n I = Phn I − I = 0 for n ≥ 1, which implies that Zh I = I.
From this we obtain,
Zh Ph − I ⊗ ν = Zh (Ph − I ⊗ ν) = Zh − I
Multiplying each side by ν gives νZh Ph = νZh , so that µh := νZh is invariant. We also have
µh (X) = νZh I = ν(X) = 1, where we have used again the identity Zh I = I. Hence µh = πh as
claimed.
□

Since h∗ζ is continuously differentiable in ζ, it follows from (43) that the same is true for η∗ζ. The following result provides a representation. The formula for d/dζ η∗ζ could be anticipated from Prop. 2.1.

Lemma B.4. For each ζ we have
    d/dζ Λh∗ζ(x) = P̌ζ H∗ζ(x)   and   d/dζ η∗ζ = π̌ζ(U),
where H∗ζ = d/dζ h∗ζ.

Proof. The first result holds by the definition of Λh and H∗ζ. To obtain the second identity, we differentiate each side of (43) to obtain Poisson's equation (44). On taking the mean of each side of (44) with respect to π̌ζ, and using invariance π̌ζ P̌ζ = π̌ζ, we obtain
    π̌ζ(U) + π̌ζ(H∗ζ) = π̌ζ(H∗ζ) + d/dζ η∗ζ.   □
Proof of Theorem 3.4. Part (i) is contained in Prop. A.2.
Part (ii): Combining Lemma B.2 and Lemma B.3 we see that the conclusions of Prop. B.1
hold for each pair (ζ0 , h∗ζ0 ). This shows that h∗ζ is a continuously differentiable function of ζ, and
hence P̌ζ is also continuously differentiable. To see that π̌ζ is continuously differentiable, apply the
representation in Lemma B.3.
□
C Optimal ODE solution
We now prove Theorem 3.5.
The boundary condition is immediate from the assumptions: h∗0 is a constant, since P̌0 = P0. Under the assumption that h∗ζ(x◦) = 0 for each ζ, it follows that h∗0(x) = 0 for each x. It remains to show that the relative value function solves the ODE
    d/dζ h∗ζ = H◦(Pζ),
with H◦ defined in (39).
On differentiating each side of (43) we obtain
    U(x) + d/dζ Λh∗ζ(x) = d/dζ h∗ζ(x) + d/dζ η∗ζ
Based on the definition (34),
    Λh∗ζ(x) = log Σ_{x′u} R0(x, x′u) exp( h∗ζ(x′u | x) ),
it follows that
    d/dζ Λh∗ζ(x) = [ Σ_{x′u} R0(x, x′u) exp( h∗ζ(x′u | x) ) ]⁻¹ Σ_{x′u} R0(x, x′u) exp( h∗ζ(x′u | x) ) (d/dζ) h∗ζ(x′u | x)
The equation simplifies as follows:
    d/dζ Λh∗ζ(x) = Σ_{x′u} Řζ(x, x′u) (d/dζ) h∗ζ(x′u | x)
                = Σ_{x′u} Řζ(x, x′u) (d/dζ) Σ_{x′n} Q0(x, x′n) h∗ζ(x′u, x′n)
                = Σ_{x′} P̌ζ(x, x′) (d/dζ) h∗ζ(x′)
Let H = d/dζ h∗ζ and γ = d/dζ η∗ζ. From the foregoing we obtain
    U(x) + Σ_{x′} P̌ζ(x, x′) H(x′) = H(x) + γ
This is Poisson's equation, with γ = π̌ζ(U). This shows that h∗ζ is the solution to the ODE defined in Theorem 3.5, establishing (i) and (ii) of the theorem.
References
[1] P. Barooah, A. Bušić, and S. Meyn, Spectral decomposition of demand-side flexibility for reliable ancillary services in a smart grid, in Proc. 48th Annual Hawaii International Conference on System Sciences (HICSS), Kauai, Hawaii, 2015, pp. 2700–2709,
doi:10.1109/HICSS.2015.325.
[2] D. Bertsekas and S. Shreve, Stochastic Optimal Control: The Discrete-Time Case, Athena
Scientific, 1996.
[3] D. Bertsekas and J. N. Tsitsiklis, Neuro-Dynamic Programming, Athena Scientific, Cambridge, Mass, 1996.
[4] V. S. Borkar, Convex analytic methods in Markov decision processes, in Handbook of Markov
decision processes, vol. 40 of Internat. Ser. Oper. Res. Management Sci., Kluwer Acad. Publ.,
Boston, MA, 2002, pp. 347–375.
[5] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, New
York, 1st ed., 2004.
[6] R. Brockett, Optimal control of observable continuous time Markov chains, in IEEE Conference on Decision and Control, Dec 2008, pp. 4269–4274, doi:10.1109/CDC.2008.4738725.
[7] A. Bušić and S. Meyn, Passive dynamics in mean field control, in Proc. 53rd IEEE Conference
on Decision and Control, Dec 2014, pp. 2716–2721, doi:10.1109/CDC.2014.7039805.
[8] A. Bušić and S. Meyn, Distributed randomized control for demand dispatch.
arXiv:1603.05966v1 – and to appear, IEEE Conference on Decision and Control, March 2016.
[9] A. Dembo and O. Zeitouni, Large Deviations Techniques And Applications, Springer-Verlag,
New York, second ed., 1998.
[10] P. Guan, M. Raginsky, and R. Willett, Online Markov decision processes with
Kullback-Leibler control cost, IEEE Trans. Automat. Control, 59 (2014), pp. 1423–1438,
doi:10.1109/TAC.2014.2301558.
[11] M. Kárný, Towards fully probabilistic control design, Automatica, 32 (1996), pp. 1719 –1722.
[12] I. Kontoyiannis and S. P. Meyn, Large deviations asymptotics and the spectral theory of
multiplicatively regular Markov processes, Electron. J. Probab., 10 (2005), pp. 61–123 (electronic).
[13] S. Kullback, Certain inequalities in information theory and the Cramer-Rao inequality, Ann.
Math. Statist., 25 (1954), pp. 745–751.
[14] L. H. Loomis and S. Sternberg, Advanced calculus, Addison-Wesley Reading, MA, 1968.
[15] A. S. Manne, Linear programming and sequential decisions, Management Sci., 6 (1960),
pp. 259–267.
[16] S. Meyn, P. Barooah, A. Bušić, Y. Chen, and J. Ehren, Ancillary service to the grid
using intelligent deferrable loads, IEEE Trans. Automat. Control, 60 (2015), pp. 2847–2862,
doi:10.1109/TAC.2015.2414772.
[17] S. P. Meyn and R. L. Tweedie, Markov chains and stochastic stability, Cambridge University Press, Cambridge, second ed., 2009. Published in the Cambridge Mathematical Library.
1993 edition online.
[18] M. L. Puterman, Markov decision processes: discrete stochastic dynamic programming, John
Wiley & Sons, 2014.
[19] E. Shafieepoorfard, M. Raginsky, and S. Meyn, Rational inattention in controlled
Markov processes, in American Control Conference (ACC), 2013, June 2013, pp. 6790–6797,
doi:10.1109/ACC.2013.6580906.
[20] E. Shafieepoorfard, M. Raginsky, and S. P. Meyn, Rationally inattentive control of Markov processes, SIAM J. Control Optim., 54 (2016), pp. 987–1016, doi:10.1137/15M1008476.
[21] C. A. Sims, Implications of rational inattention, Journal of monetary Economics, 50 (2003),
pp. 665–690.
[22] C. A. Sims, Rational inattention: Beyond the linear-quadratic case, The American economic
review, (2006), pp. 158–163.
[23] R. Sutton and A. Barto, Reinforcement Learning: An Introduction, MIT Press. On-line edition at http://www.cs.ualberta.ca/~sutton/book/the-book.html, Cambridge, MA, 1998.
[24] E. Todorov, Linearly-solvable Markov decision problems, in Advances in Neural Information
Processing Systems 19, B. Schölkopf, J. Platt, and T. Hoffman, eds., MIT Press, Cambridge,
MA, 2007, pp. 1369–1376.
arXiv:1610.07791v1 [math.GR] 25 Oct 2016

Contracting isometries of CAT(0) cube complexes and acylindrical hyperbolicity of diagram groups

Anthony Genevois

February 22, 2018

Abstract
The main technical result of this paper is to characterize the contracting isometries
of a CAT(0) cube complex without any assumption on its local finiteness. Afterwards,
we introduce the combinatorial boundary of a CAT(0) cube complex, and we show that
contracting isometries are strongly related to isolated points at infinity, when the complex
is locally finite. This boundary turns out to appear naturally in the context of Guba and
Sapir’s diagram groups, and we apply our main criterion to determine precisely when an
element of a diagram group induces a contracting isometry on the associated Farley cube
complex. As a consequence, in some specific case, we are able to deduce a criterion to
determine precisely when a diagram group is acylindrically hyperbolic.
Contents
1 Introduction
2 Preliminaries
3 Contracting isometries from hyperplane configurations
   3.1 Quasiconvex geodesics I
   3.2 Contracting convex subcomplexes I
   3.3 Well-separated hyperplanes
   3.4 Contracting isometries I
4 Combinatorial boundary
   4.1 Generalities
   4.2 Quasiconvex geodesics II
   4.3 Contracting convex subcomplexes II
   4.4 Contracting isometries II
5 Acylindrical hyperbolicity of diagram groups
   5.1 Diagram groups and their cube complexes
   5.2 Combinatorial boundaries of Farley complexes
   5.3 Contracting isometries
   5.4 Some examples
   5.5 Cocompact case
A Combinatorial boundary vs. other boundaries
   A.1 Simplicial boundary
   A.2 Roller boundary
   A.3 Guralnik boundary
References

1 Introduction
Given a metric space X, an isometry g of X is contracting if
• g is loxodromic, ie., there exists x0 ∈ X such that n 7→ g n · x0 defines a quasiisometric embedding Z → X;
• if Cg = {g n · x0 | n ∈ Z}, then the diameter of the nearest-point projection of any
ball disjoint from Cg onto Cg is uniformly bounded.
For instance, any loxodromic isometry of a Gromov-hyperbolic space is contracting. In
fact, if a group G acts by isometries on a metric space, the existence of a contracting
isometry seems to confer to G a “hyperbolic behaviour”. To make this statement more
precise, one possibility is to introduce acylindrically hyperbolic groups as defined in
[Osi13].
Definition 1.1. Let G be a group acting on a metric space X. The action G y X is
acylindrical if, for every d ≥ 0, there exist some constants R, N ≥ 0 such that, for every
x, y ∈ X,
d(x, y) ≥ R ⇒ #{g ∈ G | d(x, gx), d(y, gy) ≤ d} ≤ N .
A group is acylindrically hyperbolic if it admits a non-elementary (ie., with an infinite
limit set) acylindrical action on a Gromov-hyperbolic space.
Acylindrically hyperbolic groups may be thought of as a generalisation of relatively
hyperbolic groups. See for example [Osi13, Appendix] and references therein for more
details. The link between contracting isometries and acylindrically hyperbolic groups is
made explicit by the following result, which is a consequence of [BBF14, Theorem H]
and [Osi13, Theorem 1.2]:
Theorem 1.2. If a group acts by isometries on a geodesic space with a WPD contracting
isometry, then it is either virtually cyclic or acylindrically hyperbolic.
We do not give the precise definition of a WPD isometry. The only thing to know in our
paper is that any isometry turns out to be WPD if the action of our group is properly
discontinuous.
In this paper, we focus on a specific class of geodesic spaces: the CAT(0) cube complexes,
ie., simply connected cellular complexes obtained by gluing cubes of different dimensions
by isometries along their faces so that the link of every vertex is a simplicial flag complex.
See Section 2 for more details. The first question we are interested in is: when is an
isometry of a CAT(0) cube complex contracting? The answer we give to this question is
the main technical result of our paper. Our criterion essentially deals with hyperplane
configurations in the set H(γ) of the hyperlanes intersecting a combinatorial axis γ of a
given loxodromic isometry g. We refer to Section 3 for precise definitions.
Theorem 1.3. Let X be a CAT(0) cube complex and g ∈ Isom(X) an isometry with a
combinatorial axis γ. The following statements are equivalent:
(i) g is a contracting isometry;
(ii) joins of hyperplanes (H, V) with H ⊂ H(γ) are uniformly thin;
(iii) there exists C ≥ 1 such that:
– H(γ) does not contain C pairwise transverse hyperplanes,
– any grid of hyperplanes (H, V) with H ⊂ H(γ) is C-thin;
2
(iv) g skewers a pair of well-separated hyperplanes.
Remark 1.4. The equivalence (i) ⇔ (iv) generalises a similar criterion established in
[CS15, Theorem 4.2], where the cube complex is supposed uniformly locally finite.
In view of our application to diagram groups, it turned out to be natural to introduce a
new boundary of a CAT(0) cube complex; in the definition below, if r is a combinatorial
ray, H(r) denotes the set of the hyperplanes which intersect r.
Definition 1.5. Let X be a CAT(0) cube complex. Its combinatorial boundary is the
poset (∂ c X, ≺), where ∂ c X is the set of the combinatorial rays modulo the relation:
r1 ∼ r2 if H(r1 ) = H(r2 ); and where the partial order ≺ is defined by: r1 ≺ r2 whenever
a
H(r1 ) ⊂ H(r2 ).
a
Here, we used the following notation: given two sets X, Y , we say that X and Y are
almost-equal, denoted by X = Y , if the symmetric difference between X and Y is finite;
a
we say that X is almost-included into Y , denoted by X ⊂ Y , if X is almost-equal to
a
a subset of Y . In fact, this boundary is not completely new, since it admits strong
relations with other boundaries; see Appendix A for more details.
A point in the combinatorial boundary is isolated if it is not comparable with any other
point. Now, thanks to Theorem 1.3, it is possible to read at infinity when an isometry
is contracting:
Theorem 1.6. Let X be a locally finite CAT(0) cube complex and g ∈ Isom(X) an
isometry with a combinatorial axis γ. Then g is a contracting isometry if and only if
γ(+∞) is isolated in ∂ c X.
Thus, the existence of a contracting isometry implies the existence of an isolated point in
the combinatorial boundary. Of course, the converse cannot hold without an additional
hypothesis on the action, but in some specific cases we are able to prove partial converses.
For instance:
Theorem 1.7. Let G be a group acting on a locally finite CAT(0) cube complex X with
finitely many orbits of hyperplanes. Then G y X contains a contracting isometry if
and only if ∂ c X has an isolated vertex.
Theorem 1.8. Let G be a countable group acting on a locally finite CAT(0) cube complex
X. Suppose that the action G y X is minimal (ie., X does not contain a proper Ginvariant combinatorially convex subcomplex) and G does not fix a point of ∂ c X. Then
G contains a contracting isometry if and only if ∂ c X contains an isolated point.
As mentioned above, the main reason we introduce combinatorial boundaries of CAT(0)
cube complexes is to apply our criteria to Guba and Sapir’s diagram groups. Loosely
speaking, diagram groups are “two-dimensional free groups”: in the same way that free
groups are defined by concatenating and reducing words, diagram groups are defined by
concatenating and reducing some two-dimensional objects, called semigroup diagrams.
See Section 5.1 for precise definitions. Although these two classes of groups turn out to
be quite different, the previous analogy can be pushed further. On the one hand, free
groups act on their canonical Cayley graphs, which are simplicial trees; on the other
hand, diagram groups act on the natural Cayley graphs of their associated groupoids,
which are CAT(0) cube complexes, called Farley complexes. Moreover, in the same way
that the boundary of a free group may be thought of as a set of infinite reduced words,
the combinatorial boundary of a Farley cube complex may be thought of as a set of
infinite reduced diagrams. See Section 5.2 for a precise description.
3
If g is an element of a diagram group, which is absolutely reduced (ie., the product g n is
reduced for every n ≥ 1), let g ∞ denote the infinite diagram obtained by concatenating
infinitely many copies of g. Then, thanks to Theorem 1.6 and a precise description of the
combinatorial boundaries of Farley cube complexes, we are able to deduce the following
criterion:
Theorem 1.9. Let G be a diagram group and g ∈ G\{1} be absolutely reduced. Then
g is a contracting isometry of the associated Farley complex if and only if the following
two conditions are satisfied:
• g ∞ does not contain any infinite proper prefix;
• for any infinite reduced diagram ∆ containing g ∞ as a prefix, all but finitely many
cells of ∆ belong to g ∞ .
Of course, although it is sufficient for elements with few cells, this criterion may be
difficult to apply in practice, because we cannot draw the whole g ∞ . This is why we give
a more “algorithmic” criterion in Section 5.3.
Thus, we are able to determine precisely when a given element of a diagram group is a
contracting isometry. In general, if there exist such isometries, it is not too difficult to
find one of them. Otherwise, it is possible to apply Theorem 1.7 or Theorem 1.8; we
emphasize that Theorem 1.7 may be particularly useful since diagram groups often act
on their cube complexes with finitely many orbits of hyperplanes. Conversely, we are
able to state that no contracting isometry exists if the combinatorial boundary does not
contain isolated points, see for instance Example 5.23.
Therefore, we get powerful tools to pursue the cubical study of negatively-curved properties of diagram groups we initialized in [Gen15a]. To be precise, the question we are
interested in is:
Question 1.10. When is a diagram group acylindrically hyperbolic?
Thanks to Theorem 1.2, we are able to deduce that a diagram group is acylindrically
hyperbolic if it contains a contracting isometry and if it is not cyclic. Section 5.4
provides families of acylindrically hyperbolic diagram groups which we think to be of
interest. On the other hand, a given group may be represented as a diagram group in
different ways, and an acylindrically hyperbolic group (eg., a non-abelian free group)
may be represented as a diagram group so that there is no contracting isometry on the
associated cube complex. We do not know if it is always possible to a find a “good”
representation.
Nevertheless, a good class of diagram groups, where the problem mentioned above
does not occur, corresponds to the case where the action on the associated complex
is cocompact. Using the notation introduced below, they correspond exactly to the
diagram groups D(P, w) where the class of w modulo the semigroup P is finite. We
call them cocompact diagram groups. Focusing on this class of groups, we prove (see
Theorem 5.45 for a precise statement):
Theorem 1.11. A cocompact diagram group decomposes naturally as a direct product of
a finitely generated free abelian group and finitely many acylindrically hyperbolic diagram
groups.
On the other hand, acylindrically hyperbolic groups cannot split as a direct product
of two infinite groups (see [Osi13, Corollary 7.3]), so we deduce a complete answer of
Question 1.10 in the cocompact case:
4
Corollary 1.12. A cocompact diagram group is acylindrically hyperbolic if and only if
it is not cyclic and it does not split as a non-trivial direct product.
Notice that this statement is no longer true without the cocompact assumption. Indeed,
Thompson’s group F and the lamplighter group Z ≀ Z are diagram groups which are
not acylindrically hyperbolic (since they do not contain a non-abelian free group) and
they do not split non-trivially as a direct product. Another consequence of Theorem
1.11 is that a cocompact diagram group either is free abelian or has a quotient which
is acylindrically hyperbolic. Because acylindrically hyperbolic groups are SQ-universal
(ie., any countable group can be embedded into a quotient of the given group; see [Osi13,
Theorem 8.1]), we get the following dichotomy:
Corollary 1.13. A cocompact diagram group is either free abelian or SQ-universal.
In our opinion, cocompact diagram groups turn out to be quite similar to (finitely
generated) right-angled Artin groups. In this context, Theorem 1.11 should be compared
to the similar statement, but more general, [BC12, Theorem 5.2] (it must be noticed
that, although the statement is correct, the proof contains a mistake; see [MO13, Remark
6.21]). Compare also [BC12, Lemma 5.1] with Proposition 5.50. Notice that there
exist right-angled Artin groups which are not (cocompact) diagram groups (see [GS99,
Theorem 30]), and conversely there exist cocompact diagram groups which are not rightangled Artin groups (see Example 5.43).
The paper is organized as follows. In Section 2, we give the prerequisites on CAT(0)
cube complexes needed in the rest of the paper. Section 3 is essentially dedicated to
the proof of Theorem 1.3, and in Section 4, we introduce combinatorial boundaries of
CAT(0) cube complexes and we prove Theorem 1.6, as well as Theorem 1.7 and Theorem
1.8. Finally, in Section 5, we introduce diagram groups and we apply the results of the
previous sections to deduce the various statements mentioned above. We added an
appendix at the end of this paper to compare the combinatorial boundary with other
known boundaries of CAT(0) cube complexes.
Acknowledgement. I am grateful to Jean Lécureux for having suggested to me a link
between the combinatorial boundary and the Roller boundary, and to my advisor, Peter
Haïssinsky, for all our discussions.
2 Preliminaries
A cube complex is a CW complex constructed by gluing together cubes of arbitrary
(finite) dimension by isometries along their faces. Furthermore, it is nonpositively curved
if the link of any of its vertices is a simplicial flag complex (ie., n + 1 vertices span an
n-simplex if and only if they are pairwise adjacent), and CAT(0) if it is nonpositively
curved and simply-connected. See [BH99, page 111] for more information.
Alternatively, CAT(0) cube complexes may be described by their 1-skeletons. Indeed,
Chepoi notices in [Che00] that the class of graphs appearing as 1-skeletons of CAT(0)
cube complexes coincides with the class of median graphs, which we now define.
Let Γ be a graph. If x, y, z ∈ Γ are three vertices, a vertex m is called a median point
of x, y, z whenever
d(x, y) = d(x, m) + d(m, y), d(x, z) = d(x, m) + d(m, z), d(y, z) = d(y, m) + d(m, z).
Notice that, for every geodesics [x, m], [y, m] and [z, m], the concatenations [x, m]∪[m, y],
[x, m] ∪ [m, z] and [y, m] ∪ [m, z] are also geodesics; furthermore, if [x, y], [y, z] and [x, z]
are geodesics, then any vertex of [x, y] ∩ [y, z] ∩ [x, z] is a median point of x, y, z.
5
The graph Γ is median if every triple (x, y, z) of pairwise distinct vertices admits a
unique median point, denoted by m(x, y, z).
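For illustration only (this sketch is ours and not part of the paper), the median of three vertices can be computed by brute force from BFS distances in any connected graph given as an adjacency list; in a median graph the vertex returned is the unique m(x, y, z).

    from collections import deque

    def bfs_dist(adj, s):
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        return dist

    def median(adj, x, y, z):
        dx, dy, dz = bfs_dist(adj, x), bfs_dist(adj, y), bfs_dist(adj, z)
        for m in adj:                      # check the three median equalities at each vertex
            if (dx[y] == dx[m] + dy[m] and
                dx[z] == dx[m] + dz[m] and
                dy[z] == dy[m] + dz[m]):
                return m
        return None

    # Example: the 1-skeleton of a square (a 4-cycle) is median.
    square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(median(square, 0, 1, 2))   # prints 1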
Theorem 2.1. [Che00, Theorem 6.1] A graph is median if and only if it is the 1-skeleton
of a CAT(0) cube complex.
A fundamental feature of cube complexes is the notion of hyperplane. Let X be a
nonpositively curved cube complex. Formally, a hyperplane J is an equivalence class of
edges, where two edges e and f are equivalent whenever there exists a sequence of edges
e = e0 , e1 , . . . , en−1 , en = f where ei and ei+1 are parallel sides of some square in X.
Notice that a hyperplane is uniquely determined by one of its edges, so if e ∈ J we say
that J is the hyperplane dual to e. Geometrically, a hyperplane J is rather thought of
as the union of the midcubes transverse to the edges belonging to J.
The neighborhood N (J) of a hyperplane J is the smallest subcomplex of X containing
J, i.e., the union of the cubes intersecting J. In the following, ∂N (J) will denote
the union of the cubes of X contained in N (J) but not intersecting J, and X\\J =
(X\N (J)) ∪ ∂N (J). Notice that N (J) and X\\J are subcomplexes of X.
Theorem 2.2. [Sag95, Theorem 4.10] Let X be a CAT(0) cube complex and J a hyperplane. Then X\\J has exactly two connected components.
The two connected components of X\\J will be referred to as the halfspaces associated
to the hyperplane J.
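For instance, in the square tiling of R2 introduced above, the hyperplanes are the vertical lines {x = k + 1/2} and the horizontal lines {y = k + 1/2}, k ∈ Z, and the two halfspaces associated to {x = k + 1/2} are spanned by the vertices satisfying x ≤ k and x ≥ k + 1 respectively.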
Distances ℓp . There exist several natural metrics on a CAT(0) cube complex. For example, for any p ∈ (0, +∞), the ℓp -norm defined on each cube can be extended to a distance defined on the whole complex, the ℓp -metric. Usually, the ℓ1 -metric is referred to as the combinatorial distance and the ℓ2 -metric as the CAT(0) distance. Indeed, a CAT(0) cube complex endowed with its CAT(0) distance turns out to be a CAT(0) space [Lea13, Theorem C.9], and the combinatorial distance between two vertices corresponds to the graph metric associated to the 1-skeleton X (1) . In particular, combinatorial geodesics are edge-paths of minimal length, and a subcomplex is combinatorially convex if it contains any combinatorial geodesic between two of its points.
In fact, the combinatorial metric and the hyperplanes are strongly linked together:
the combinatorial distance between two vertices corresponds exactly to the number of
hyperplanes separating them [Hag08, Theorem 2.7], and
Theorem 2.3. [Hag08, Corollary 2.16] Let X be a CAT(0) cube complex and J a
hyperplane. The two components of X\\J are combinatorially convex, as well as the
components of ∂N (J).
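In the square tiling of R2 , for instance, the combinatorial distance between (0, 0) and (a, b) is |a| + |b|, which is precisely the number of separating hyperplanes: |a| vertical lines and |b| horizontal lines.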
This result is particularly useful when it is combined with the following well-known Helly
property; see for example [Rol98, Theorem 2.2].
Theorem 2.4. If C is a finite collection of pairwise intersecting combinatorially convex subcomplexes, then the intersection ⋂_{C∈C} C is non-empty.
Combinatorial projection. In CAT(0) spaces, and so in particular in CAT(0) cube
complexes with respect to the CAT(0) distance, the existence of a well-defined projection
onto a given convex subspace provides a useful tool. Similarly, with respect to the
combinatorial distance, it is possible to introduce a combinatorial projection onto a
combinatorially convex subcomplex, defined by the following result.
Proposition 2.5. [Gen15a, Lemma 1.2.3] Let X be a CAT(0) cube complex, C ⊂ X be
a combinatorially convex subspace and x ∈ X\C be a vertex. Then there exists a unique
vertex y ∈ C minimizing the distance to x. Moreover, for any vertex of C, there exists
a combinatorial geodesic from it to x passing through y.
The following result makes precise how the combinatorial projection behaves with respect
to the hyperplanes.
Proposition 2.6. Let X be a CAT(0) cube complex, C a combinatorially convex subspace, p : X → C the combinatorial projection onto C and x, y ∈ X two vertices. The
hyperplanes separating p(x) and p(y) are precisely the hyperplanes separating x and y
which intersect C. In particular, d(p(x), p(y)) ≤ d(x, y).
Proof of Proposition 2.6. A hyperplane separating p(x) and p(y) separates x and y
according to [Gen16, Lemma 2.10]. Conversely, let J be a hyperplane separating x and
y and intersecting C. Notice that, according to Lemma 2.7 below, if J separates x and
p(x), or y and p(y), necessarily J must be disjoint from C. Therefore, J has to separate
p(x) and p(y).
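As an elementary illustration, take again the square tiling of R2 and let C be the halfspace delimited by the hyperplane {x = −1/2} containing the vertices with x ≥ 0, which is combinatorially convex by Theorem 2.3. The combinatorial projection is p(x, y) = (max(x, 0), y), and one checks directly that the hyperplanes separating p(x, y) and p(x′ , y ′ ) are exactly the hyperplanes separating (x, y) and (x′ , y ′ ) which intersect C, as predicted by Proposition 2.6.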
Lemma 2.7. [Gen16, Lemma 2.8] Let X be a CAT(0) cube complex and N ⊂ X a
combinatorially convex subspace. Let p : X → N denote the combinatorial projection
onto N . Then every hyperplane separating x and p(x) separates x and N .
The following lemma will be particularly useful in this paper.
Lemma 2.8. Let X be a CAT(0) cube complex and C1 ⊂ C2 two subcomplexes with C2
combinatorially convex. Let p2 : X → C2 denote the combinatorial projection onto C2 ,
and p1 the nearest-point projection onto C1 , which associates to any vertex the set of
the vertices of C1 which minimize the distance from it. Then p1 ◦ p2 = p1 .
Proof. Let x ∈ X. If x ∈ C2 , clearly p1 (p2 (x)) = p1 (x), so we suppose that x ∉ C2 . If
z ∈ C1 , then according to Proposition 2.5, there exists a combinatorial geodesic between
x and z passing through p2 (x). Thus,
d(x, z) = d(x, p2 (x)) + d(p2 (x), z).
In particular, if y ∈ p1 (x),
d(x, C1 ) = d(x, y) = d(x, p2 (x)) + d(p2 (x), y).
The previous two equalities give:
d(p2 (x), z) − d(p2 (x), y) = d(x, z) − d(x, C1 ) ≥ 0.
Therefore, d(p2 (x), y) = d(p2 (x), C1 ), ie., y ∈ p1 (p2 (x)). We have proved p1 (x) ⊂
p1 (p2 (x)).
It is worth noticing that we have also proved that d(x, C1 ) = d(x, p2 (x)) + d(p2 (x), C1 ).
Thus, for every y ∈ p1 (p2 (x)), once again because there exists a combinatorial geodesic
between x and y passing through p2 (x) according to Proposition 2.5,
d(x, y) = d(x, p2 (x)) + d(p2 (x), y) = d(x, p2 (x)) + d(p2 (x), C1 ) = d(x, C1 ),
ie., y ∈ p1 (x). We have proved that p1 (p2 (x)) ⊂ p1 (x), concluding the proof.
We conclude with a purely technical lemma which will be used in Section 3.3.
Lemma 2.9. Let X be a CAT(0) cube complex, N ⊂ X a combinatorially convex
subspace and x, y ∉ N two vertices. Fix a combinatorial geodesic [x, y] and choose a
vertex z ∈ [x, y]. Then d(z, N ) ≤ d(x, N ) + d(y, N ).
Proof. For convenience, if A, B ⊂ X are two sets of vertices, let ∆(A, B) denote
the number of hyperplanes separating A and B. According to Lemma 2.7, ∆({a}, N ) =
d(a, N ) for every vertex a ∈ X. Notice that, because z ∈ [x, y] implies that no hyperplane
can separate z and {x, y} ∪ N , we have
d(z, N ) = ∆(z, N ) = ∆({x, y, z}, N ) + ∆({x, z}, N ∪ {y}) + ∆({y, z}, N ∪ {x}),
and
d(x, N ) = ∆(x, N ) = ∆(x, N ∪ {y, z}) + ∆({x, z}, N ∪ {y}) + ∆({x, y, z}, N ),
and
d(y, N ) = ∆(y, N ) = ∆(y, N ∪ {x, z}) + ∆({y, z}, N ∪ {x}) + ∆({x, y, z}, N ).
Now, it is clear that d(z, N ) ≤ d(x, N ) + d(y, N ).
Disc diagrams. A fundamental tool to study CAT(0) cube complexes is the theory
of disc diagrams. For example, they were extensively used by Sageev in [Sag95] and by
Wise in [Wis12]. The rest of this section is dedicated to basic definitions and properties
of disc diagrams.
Definition 2.10. Let X be a nonpositively curved cube complex. A disc diagram is a
continuous combinatorial map D → X, where D is a finite contractible square complex
with a fixed topological embedding into S2 ; notice that D may be non-degenerate, ie.,
homeomorphic to a disc, or may be degenerate. In particular, the complement of D in
S2 is a 2-cell, whose attaching map will be referred to as the boundary path ∂D → X
of D → X; it is a combinatorial path. The area of D → X, denoted by Area(D),
corresponds to the number of squares of D.
Given a combinatorial closed path P → X, we say that a disc diagram D → X is
bounded by P → X if there exists an isomorphism P → ∂D such that the composition P → ∂D → X coincides with the path P → X.
According to a classical argument due to van Kampen [VK33] (see also [MW02, Lemma
2.17]), there exists a disc diagram bounded by a given combinatorial closed path if and
only if this path is null-homotopic. Thus, if X is a CAT(0) cube complex, then any
combinatorial closed path bounds a disc diagram.
As a square complex, a disc diagram contains hyperplanes: they are called dual
curves. Equivalently, they correspond to the connected components of the reciprocal
images of the hyperplanes of X. Given a disc diagram D → X, a nogon is a dual curve
homeomorphic to a circle; a monogon is a subpath, of a self-intersecting dual curve,
homeomorphic to a circle; an oscugon is a subpath of a dual curve whose endpoints are
the midpoints of two adjacent edges; a bigon is a pair of dual curves intersecting into
two different squares.
Figure 1: From left to right: a nogon, a monogon, an oscugon and a bigon.
Theorem 2.11. [Wis12, Lemma 2.2] Let X be a nonpositively curved cube complex and
D → X a disc diagram. If D contains a nogon, a monogon, a bigon or an oscugon,
then there exists a new disc diagram D′ → X such that:
(i) D′ is bounded by ∂D,
(ii) Area(D′ ) ≤ Area(D) − 2.
Let X be a CAT(0) cube complex. A cycle of subcomplexes C is a sequence of
subcomplexes C1 , . . . , Cr such that C1 ∩ Cr ≠ ∅ and Ci ∩ Ci+1 ≠ ∅ for every 1 ≤ i ≤ r − 1.
A disc diagram D → X is bounded by C if ∂D → X can be written as the concatenation
of r combinatorial geodesics P1 , . . . , Pr → X such that Pi ⊂ Ci for every 1 ≤ i ≤ r.
The complexity of such a disc diagram is defined by the couple (Area(D), length(∂D)),
and a disc diagram bounded by C will be of minimal complexity if its complexity is
minimal with respect to the lexicographic order among all the possible disc diagrams
bounded by C (allowing modifications of the paths Pi ). It is worth noticing that such a
disc diagram does not exist if our subcomplexes contain no combinatorial geodesics. On
the other hand, if our subcomplexes are combinatorially geodesic, then a disc diagram
always exists.
Our main technical result on disc diagrams is the following, which we proved in
[Gen16, Theorem 2.13].
Theorem 2.12. Let X be a CAT(0) cube complex, C = (C1 , . . . , Cr ) a cycle of subcomplexes, and D → X a disc diagram bounded by C. For convenience, write ∂D as
the concatenation of r combinatorial geodesics P1 , . . . , Pr → X with Pi ⊂ Ci for every
1 ≤ i ≤ r. If the complexity of D → X is minimal, then:
(i) if Ci is combinatorially convex, two dual curves intersecting Pi are disjoint;
(ii) if Ci and Ci+1 are combinatorially convex, no dual curve intersects both Pi and
Pi+1 .
In general, a disc diagram D → X is not an embedding. However, the proposition
below, which we proved in [Gen16, Proposition 2.15], determines precisely when it is an
isometric embedding.
Proposition 2.13. Let X be a CAT(0) cube complex and D → X a disc diagram which
does not contain any bigon. With respect to the combinatorial metrics, ϕ : D → X is
an isometric embedding if and only if every hyperplane of X induces at most one dual
curve of D.
We mention two particular cases below.
Corollary 2.14. Let C = (C1 , C2 , C3 , C4 ) be a cycle of four subcomplexes. Suppose that
C2 , C3 , C4 are combinatorially convex subcomplexes. If D → X is a disc diagram of
minimal complexity bounded by C, then D → X is an isometric embedding.
Proof. First, we write ∂D → X as the concatenation of four combinatorial geodesics
P1 , P2 , P3 , P4 → X, where Pi ⊂ Ci for every 1 ≤ i ≤ 4. Suppose that there exist two
dual curves c1 , c2 of D induced by the same hyperplane J. Because a combinatorial
geodesic cannot intersect a hyperplane twice, the four endpoints of c1 and c2 belong to
four different sides of ∂D. On the other hand, because C2 , C3 , C4 are combinatorially
convex, it follows from Theorem 2.12 that any dual curve intersecting P3 must intersect
P1 ; say that c1 intersects P1 and P3 . Therefore, c2 must intersect P2 and P4 , and we
deduce that c1 and c2 are transverse. But this implies that J self-intersects in X, which
is impossible.
Corollary 2.15. [Gen16, Corollary 2.17] Let X be a CAT(0) cube complex and C a
cycle of four combinatorially convex subcomplexes. If D → X is a disc diagram of
minimal complexity bounded by C, then D is combinatorially isometric to a rectangle
[0, a] × [0, b] ⊂ R2 and D → X is an isometric embedding.
Contracting isometries. We gave in the introduction the definition of a contracting
isometry. It turns out that an isometry is contracting if and only if its axis is contracting,
in the following sense:
Definition 2.16. Let (S, d) be a metric space, Y ⊂ S a subspace and B ≥ 0 a constant.
We say that Y is B-contracting if, for every ball disjoint from Y , the diameter of its
nearest-point projection onto Y is at most B. A subspace will be contracting if it is
B-contracting for some B ≥ 0.
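Two elementary examples may be kept in mind. In a simplicial tree, every geodesic is 0-contracting: a ball disjoint from the geodesic is contained in a single complementary component, so its nearest-point projection is a single vertex. On the other hand, in the square tiling of R2 equipped with its combinatorial metric, the x-axis is not contracting: the ball of radius n − 1 centered at (n, n) is disjoint from the axis, while its nearest-point projection onto the axis is the set of vertices (x, 0) with 1 ≤ x ≤ 2n − 1, whose diameter is 2n − 2.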
In fact, in the context of CAT(0) cube complexes, we may slightly modify this definition
in the following way:
Lemma 2.17. Let X be a CAT(0) cube complex, S ⊂ X a combinatorially convex
subspace and L ≥ 0 a constant. Then S is contracting if and only if there exists C ≥ 0
such that, for all vertices x, y ∈ X satisfying d(x, y) < d(x, S) − L, the projection of
{x, y} onto S has diameter at most C.
Proof. The implication is clear: if S is B-contracting, {x, y} is included into the ball
B(x, d(x, S)) whose projection onto S has diameter at most B.
Conversely, suppose that S is not contracting, ie., for all n ≥ 0, there exists a ball
B(xn , rn ), with rn < d(xn , S), whose projection onto S has diameter at least n. Let
p : X → S denote the combinatorial projection onto S. If for all y ∈ B(xn , rn ) we
had d(p(y), p(xn )) < n/2, then the projection of B(xn , rn ) onto S would have diameter
at most n. Therefore, there exists yn ∈ B(xn , rn ) satisfying d(p(xn ), p(yn )) ≥ n/2. In
particular, the projection of {xn , yn } onto S has diameter at least n/2, with d(xn , yn ) <
d(xn , S).
If d(xn , yn ) < d(xn , S) − L, there is nothing to prove. Otherwise, two cases may happen. If d(xn , S) ≤ L, then, because p is 1-Lipschitz according to Proposition 2.6,
the projection of B(xn , rn ) onto S has diameter at most 2L. Without loss of generality, we may suppose n > 2L, so that this case cannot occur. From now on, suppose
d(xn , yn ) ≥ d(xn , S) − L. Then, along a combinatorial geodesic [xn , yn ] between xn and
yn , there exists a vertex zn such that d(zn , yn ) ≤ L and d(xn , zn ) ≤ d(xn , S) − L because
we saw that d(xn , yn ) < d(xn , S). Now,
d(p(xn ), p(zn )) ≥ d(p(xn ), p(yn )) − d(p(yn ), p(zn )) ≥ n/2 − L,
so that the projection of {xn , zn } onto S has diameter at least n/2 − L. Because this
diameter may be chosen arbitrarily large, this concludes the proof.
However, the previous criterion only holds for combinatorially convex subspaces. Of
course, it is always possible to look at the combinatorial convex hull of the subspace
which is considered, and the following lemma essentially states when it is possible to
deduce that the initial subspace is contracting.
Lemma 2.18. Let X be a CAT(0) cube complex, S ⊂ X a subspace and N ⊃ S a
combinatorially convex subspace. Suppose that the Hausdorff distance dH (S, N ) between
S and N is finite. Then S is contracting if and only if N is contracting as well.
Proof. We know from Lemma 2.8 that the projection onto S is the composition of the
projection of X onto N with the projection of N onto S. Consequently, the Hausdorff
distance between the projections onto N and onto S is at most dH (N, S). Our lemma
follows immediately.
Typically, the previous lemma applies to quasiconvex combinatorial geodesics:
Lemma 2.19. Let X be a CAT(0) cube complex and γ an infinite combinatorial geodesic.
Let N (γ) denote the combinatorial convex hull of γ. Then γ is quasiconvex if and only
if the Hausdorff distance between γ and N (γ) is finite.
Proof. We claim that (the 1-skeleton of) N (γ) is the union L of the combinatorial
geodesics whose endpoints are on γ. First, it is clear that any such geodesic must be
included into N (γ), whence L ⊂ N (γ). Conversely, given some x ∈ N (γ), we want
to prove that there exists a geodesic between two vertices of γ passing through x. Fix
some y ∈ γ. Notice that, because x belongs to N (γ), no hyperplane separates x from
γ, so that any hyperplane separating x and y must intersect γ. As a consequence, for
some n ≥ 1 sufficiently large, every hyperplane separating x and y separates γ(−n)
and γ(n); furthermore, we may suppose without loss of generality that y belongs to the
subsegment of γ between γ(−n) and γ(n). We claim that the concatenation of a geodesic
from γ(−n) to x with a geodesic from x to γ(n) defines a geodesic between γ(−n) and
γ(n), which proves that x ∈ L. Indeed, if this concatenation defines a path which is not
a geodesic, then there must exist a hyperplane J intersecting it twice. Notice that J
must separate x from {γ(n), γ(−n)}, so that we deduce from the convexity of half-spaces
that J must separate x and y. By our choice of n, we deduce that J separates γ(−n)
and γ(n). Finally, we obtain a contradiction since we already know that J separates x
from γ(−n) and γ(n). Thus, we have proved that N (γ) ⊂ L.
It follows that γ is quasiconvex if and only if the Hausdorff distance between γ and N (γ)
is finite.
It is worth noticing that any contracting geodesic is quasiconvex. This is a particular
case of [Sul14, Lemma 3.3]; we include a proof for completeness.
Lemma 2.20. Let X be a geodesic metric space. Any contracting geodesic of X is
quasiconvex.
Proof. Suppose that γ is B-contracting for some B ≥ 0, and let ϕ be a geodesic between two points of γ. Let ϕ(t) be a point of ϕ. We want to prove that d(ϕ(t), γ) ≤ 11B.
If d(ϕ(t), γ) < 3B, there is nothing to prove. Otherwise, ϕ(t) belongs to a maximal subsegment [ϕ(r), ϕ(s)] of ϕ outside the 3B-neighborhood of γ, ie. d(ϕ(r), γ), d(ϕ(s), γ) ≤
3B but d(ϕ(p), γ) ≥ 3B for every p ∈ [r, s]. Let
r = t0 < t1 < · · · < tk−1 < tk = s
with ti+1 − ti = 2B for 1 ≤ i ≤ k − 2 and tk − tk−1 ≤ 2B. Observe that
d(ϕ(r), ϕ(s)) = s − r = ∑_{i=0}^{k−1} (ti+1 − ti ) ≥ 2(k − 1)B,        (1)
but on the other hand, if pi belongs to the projection of ϕ(ti ) on γ,
d(ϕ(r), ϕ(s)) ≤ d(ϕ(r), p0 ) + d(p0 , p1 ) + · · · + d(pk−1 , pk ) + d(pk , ϕ(s)).
Noticing that d(ϕ(ti ), ϕ(ti+1 )) = ti+1 − ti ≤ 2B < 3B ≤ d(ϕ(ti ), γ), we deduce from the
fact that γ is B-contracting that
d(ϕ(r), ϕ(s)) ≤ 6B + kB.        (2)
Finally, combining (1) and (2), we get k ≤ 8. Thus,
d(ϕ(r), ϕ(s)) = ∑_{i=0}^{k−1} (ti+1 − ti ) ≤ 2kB ≤ 16B.
Now, since d(ϕ(t), ϕ(r)) or d(ϕ(t), ϕ(s)), say the first one, is bounded above by d(ϕ(r), ϕ(s))/2 ≤
8B, we deduce
d(ϕ(t), γ) ≤ d(ϕ(t), ϕ(r)) + d(ϕ(r), γ) ≤ 8B + 3B = 11B.
This concludes the proof.
3 Contracting isometries from hyperplane configurations
3.1 Quasiconvex geodesics I
Given a subcomplex Y of some CAT(0) cube complex, we denote by H(Y ) the set
of hyperplanes intersecting Y . In this section, the question we are interested in is to
determine when a combinatorial geodesic γ is quasiconvex just from the set H(γ). We
begin by introducing some definitions.
Definition 3.1. A facing triple in a CAT(0) cube complex is the data of three hyperplanes such that no two of them are separated by the third hyperplane.
Definition 3.2. A join of hyperplanes is the data of two families (H = (H1 , . . . , Hr ), V =
(V1 , . . . , Vs )) of hyperplanes which do not contain any facing triple and such that any hyperplane of H is transverse to any hyperplane of V. If Hi (resp. Vj ) separates Hi−1 and
Hi+1 (resp. Vj−1 and Vj+1 ) for every 2 ≤ i ≤ r − 1 (resp. 2 ≤ j ≤ s − 1), we say that
(H, V) is a grid of hyperplanes.
If (H, V) is a join or a grid of hyperplanes satisfying min(#H, #V) ≤ C, we say that
(H, V) is C-thin.
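For instance, in the square tiling of R2 , the vertical hyperplanes {x = i + 1/2}, 0 ≤ i ≤ r − 1, and the horizontal hyperplanes {y = j + 1/2}, 0 ≤ j ≤ s − 1, define an (r, s)-grid of hyperplanes; since r and s may be chosen arbitrarily large, these grids are not uniformly thin. All of them are crossed by a diagonal staircase geodesic starting at the origin, so Proposition 3.3 below confirms that such a geodesic is not quasiconvex (its combinatorial convex hull is a whole quadrant).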
Our main criterion is:
Proposition 3.3. Let X be a CAT(0) cube complex and γ an infinite combinatorial
geodesic. The following statements are equivalent:
(i) γ is quasiconvex;
(ii) there exists C ≥ 1 satisfying:
– H(γ) does not contain C pairwise transverse hyperplanes,
– any grid of hyperplanes (H, V) with H, V ⊂ H(γ) is C-thin;
(iii) there exists some constant C ≥ 1 such that any join of hyperplanes (H, V) satisfying H, V ⊂ H(γ) is C-thin.
It will be a direct consequence of the following lemma and a criterion of hyperbolicity
we proved in [Gen16]:
Lemma 3.4. An infinite combinatorial geodesic γ is quasiconvex if and only if its combinatorial convex hull is a hyperbolic subcomplex.
Proof. According to Lemma 2.19, γ is quasiconvex if and only if the Hausdorff distance
between γ and N (γ) is finite. In particular, if γ is quasiconvex, then N (γ) is quasi-isometric to a line, and a fortiori is hyperbolic. Conversely, if N (γ) is a hyperbolic
subcomplex, its bigons are uniformly thin. This implies the quasiconvexity of γ.
Theorem 3.5. [Gen16, Theorem 3.3] A CAT(0) cube complex is hyperbolic if and only
if it is finite-dimensional and its grids of hyperplanes are uniformly thin.
Proof of Proposition 3.3. From Lemma 3.4 and Theorem 3.5, we deduce the equivalence (i) ⇔ (ii). Then, because a grid of hyperplanes or a collection of pairwise
transverse hyperplanes gives rise to a join of hyperplanes, (iii) clearly implies (ii).
To conclude, we want to prove (ii) ⇒ (iii). Let C denote the constant given by (ii). Let (H, V) be a join of hyperplanes satisfying H, V ⊂ H(γ). Suppose by contradiction that #H, #V ≥ Ram(C); here Ram(C) denotes a Ramsey number, chosen so that any collection of at least Ram(C) distinct hyperplanes contains C pairwise transverse or C pairwise disjoint hyperplanes (two distinct hyperplanes being either transverse or disjoint). Because H(γ) does not contain C pairwise transverse hyperplanes, we deduce that H and V each contain a subfamily of C pairwise disjoint hyperplanes, say H′ and V′ respectively. Notice that any hyperplane of H′ is transverse to any hyperplane of V′. Since the hyperplanes of H′ and V′ are all intersected by γ, we conclude that (H′, V′) defines a (C, C)-grid of hyperplanes in X, contradicting the definition of C. Therefore, min(#H, #V) < Ram(C).
3.2 Contracting convex subcomplexes I
In the previous section, we showed how to recognize the quasiconvexity of a combinatorial geodesic from the set of the hyperplanes it intersects. Therefore, according to
Proposition 3.3, we reduced the problem of determining when a combinatorial geodesic
is contracting to the problem of determining when a combinatorially convex subcomplex
is contracting. This is the main criterion of this section:
Theorem 3.6. Let X be a CAT(0) cube complex and Y ⊂ X a combinatorially convex
subcomplex. Then Y is contracting if and only if there exists C ≥ 0 such that any join
of hyperplanes (H, V) with H ⊂ H(Y ) and V ∩ H(Y ) = ∅ satisfies min(#H, #V) ≤ C.
Proof. Suppose that Y is B-contracting for some B ≥ 0, and let (H, V) be a join of
hyperplanes satisfying H ⊂ H(Y ) and V ∩ H(Y ) = ∅. Because any hyperplane of H
intersects Y and H does not contain facing triples, there exist two vertices x− , x+ ∈ Y
separated by the hyperplanes of H. For convenience, let H ± denote the halfspaces
delimited by some H ∈ H which contains x± ; and, for any V ∈ V, let V + denote the
halfspace delimited by V which does not contain Y . Now, we set
H± = ⋂_{H∈H} H ± and V = ⋂_{V ∈V} V + .
If C is the cycle of subcomplexes (H− , V, H+ , Y ), then Corollary 2.15 states that a disc
diagram of minimal complexity bounded by C defines a flat rectangle F ⊂ X. Let
x, y ∈ F ∩ V be two vertices satisfying d(x, y) = min(#V, #H) − 1. Then, because Y
and {x, y} are separated by the hyperplanes of V, we have d(x, y) < d(x, Y ). Moreover,
if p : X → Y denotes the combinatorial projection onto Y , because x and y are separated
by min(#H, #V) − 1 hyperplanes which intersect Y , it follows from Proposition 2.6 that
d(p(x), p(y)) ≥ min(#H, #V) − 1. Finally, since Y is B-contracting, we conclude that
min(#H, #V) ≤ 1 + d(p(x), p(y)) ≤ 1 + B.
Conversely, suppose that there exists C ≥ 0 such that any join of hyperplanes (H, V)
with H ⊂ H(Y ) and V ∩H(Y ) = ∅ satisfies min(#H, #V) ≤ C. We want to prove that Y
is contracting by applying Lemma 2.17. Let x, y ∈ X be two distinct vertices satisfying
d(x, y) < d(x, Y ) − C and let p : X → Y denote the combinatorial projection onto Y .
If H denotes the set of the hyperplanes separating {x, y} and Y , and V the set of the
hyperplanes separating p(x) and p(y), then (H, V) defines a join of hyperplanes: because
any hyperplane of H separates {x, y} and {p(x), p(y)}, and any hyperplane of V separates
{x, p(x)} and {y, p(y)} according to Proposition 2.6 and Lemma 2.7, we deduce that
any hyperplane of H is transverse to any hyperplane of V. Because the assumption
d(x, y) < d(x, Y ) − C implies #H > C, we conclude that
d(p(x), p(y)) ≤ #V ≤ C.
Therefore, Y is contracting.
Combining Proposition 3.3 and Theorem 3.6, we get:
Corollary 3.7. Let X be a CAT(0) cube complex and γ an infinite combinatorial
geodesic. Then γ is contracting if and only if there exists a constant C ≥ 1 such that
any join of hyperplanes (H, V) satisfying H ⊂ H(γ) is necessarily C-thin.
Proof. Suppose that γ is contracting. In particular, γ is quasiconvex so, according
to Proposition 3.3, there exists a constant C1 such that any join of hyperplanes (H, V)
satisfying H, V ⊂ H(γ) is C1 -thin. Then, because N (γ) is also contracting according to
Lemma 2.19 and Lemma 2.18, we deduce from Theorem 3.6 that there exists a constant
C2 such that any join of hyperplanes (H, V) satisfying H ⊂ H(γ) and V ∩ H(γ) = ∅
is C2 -thin. Now, let (H, V) be any join of hyperplanes satisfying H ⊂ H(γ). If we set
V1 = V ∩ H(γ) and V2 = V\V1 , then (H, V1 ) is C1 -thin and (H, V2 ) is C2 -thin. Thus,
min(#H, #V) ≤ min(#H, #V1 ) + min(#H, #V2 ) ≤ C1 + C2 ,
ie., (H, V) is (C1 + C2 )-thin.
Conversely, suppose that there exists a constant C ≥ 1 such that any join of hyperplanes
(H, V) satisfying H ⊂ H(γ) is necessarily C-thin. According to Proposition 3.3, γ is
quasiconvex, so that it is sufficient to prove that N (γ) is contracting to conclude that
γ is contracting as well according to Lemma 2.18. Finally, it follows from Theorem 3.6
that N (γ) is contracting.
3.3 Well-separated hyperplanes
Separation properties of hyperplanes play a central role in the study of contracting
isometries. Strongly separated hyperplanes were introduced in [BC12] in order to characterize the rank-one isometries of right-angled Artin groups; they were also crucial in
the proof of the Rank Rigidity Theorem for CAT(0) cube complexes in [CS11]. In [CS15],
Charney and Sultan used contracting isometries to distinguish quasi-isometrically some
cubulable groups, and they introduced k-separated hyperplanes in order to characterize
contracting isometries in uniformly locally finite CAT(0) cube complexes [CS15, Theorem 4.2]. In this section, we introduce well-separated hyperplanes in order to generalize
this charaterization to CAT(0) cube complexes without any local finiteness condition.
Definition 3.8. Let J and H be two disjoint hyperplanes in some CAT(0) cube complex
and L ≥ 0. We say that J and H are L-well separated if any family of hyperplanes
transverse to both J and H, which does not contain any facing triple, has cardinality
at most L. Two hyperplanes are well-separated if they are L-well-separated for some
L ≥ 1.
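For instance, in a simplicial tree no hyperplane is transverse to another one, so any two disjoint hyperplanes are 0-well separated. By contrast, in the square tiling of R2 , two disjoint vertical hyperplanes are both transverse to every horizontal hyperplane, and arbitrarily large families of pairwise disjoint horizontal hyperplanes contain no facing triple, so no two vertical hyperplanes are well separated.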
Theorem 3.9. Let γ be an infinite combinatorial geodesic. Then γ is contracting if
and only if there exist constants r, L ≥ 1, hyperplanes {Hi , i ∈ Z}, and vertices {xi ∈
γ ∩ N (Hi ), i ∈ Z} such that, for every i ∈ Z:
• d(xi , xi+1 ) ≤ r,
• the hyperplanes Hi and Hi+1 are L-well separated.
The following lemma is a combinatorial analogue to [CS15, Lemma 4.3], although our
proof is essentially different. This is the key technical lemma needed to prove Proposition
3.11, which is in turn the first step toward the proof of Theorem 3.9.
Lemma 3.10. Let γ be an infinite quasiconvex combinatorial geodesic. There exists a
constant C depending only on γ such that, if two vertices x, y ∈ γ satisfy d(x, y) > C,
then there exists a hyperplane J separating x and y such that the projection of N (J)
onto γ is included into [x, y] ⊂ γ.
Proof. Let J be a hyperplane separating x and y whose projection onto γ is not included
into [x, y] ⊂ γ. Say that this projection contains a vertex at the right of y; the case
where it contains a vertex at the left of x is completely symmetric. Thus, we have the
following configuration:
where b ∉ [x, y] is a projection onto γ of some vertex a ∈ N (J), z ∈ γ is a vertex
adjacent to J, and m = m(a, b, z) the median point of {a, b, z}. Because m belongs
to a combinatorial geodesic between z and b, and that these two points belong to γ,
necessarily m ∈ N (γ). On the other hand, m belongs to a combinatorial geodesic
between a and b, and by definition b is a vertex of γ minimizing the distance to a, so
d(m, b) = d(m, γ); because, by the quasiconvexity of γ, the Hausdorff distance between
γ and N (γ) is finite, say L, we deduce that d(m, b) = d(m, γ) ≤ L since m belongs to a
combinatorial geodesic between b and z (which implies that m ∈ N (γ)). Using Lemma
2.9, we get
d(y, N (J)) ≤ d(z, N (J)) + d(b, N (J)) ≤ d(b, m) ≤ L.
Thus, J intersects the ball B(y, L).
Let H be the set of hyperplanes separating x and y, and intersecting B(y, L). Of
course, because any hyperplane of H intersects γ, the collection H contains no facing
triple. On the other hand, it follows from Proposition 3.3 that dim N (γ) is finite, so
that if #H ≥ Ram(s) for some s ≥ dim N (γ) then the collection H must contain
at least s pairwise disjoint hyperplanes. Since this collection of hyperplanes does not
contain facing triples and intersects the ball B(y, L), we deduce that s ≤ 2L, hence
#H ≤ Ram(max(dim N (γ), 2L)).
Finally, we have proved that there are at most 2Ram(max(dim N (γ), 2L)) hyperplanes
separating x and y such that the projections of their neighborhoods onto γ are not
included into [x, y] ⊂ γ. It clearly follows that d(x, y) > 2Ram(max(dim N (γ), 2L))
implies that there is a hyperplane separating x and y such that the projection of its
neighborhood onto γ is included into [x, y] ⊂ γ.
Proposition 3.11. Let γ be an infinite combinatorial geodesic. If γ is quasiconvex then
there exist constants r, L ≥ 1, hyperplanes {Hi , i ∈ Z}, and vertices {xi ∈ γ ∩ N (Hi ), i ∈
Z} such that, for every i ∈ Z:
• d(xi , xi+1 ) ≤ r,
• the hyperplanes Hi and Hi+1 are disjoint.
Proof. Because γ is quasiconvex, we may apply the previous lemma: let C be the
constant it gives. Let . . . , y−1 , y0 , y1 , . . . ∈ γ be vertices along γ satisfying d(yi , yi+1 ) =
C + 1 for all i ∈ Z. According to the previous lemma, for every k ∈ Z, there exists
a hyperplane Jk separating y2k and y2k+1 whose projection onto γ is included into
[y2k , y2k+1 ] ⊂ γ; let xk be one of the two vertices in γ ∩ N (Jk ). Notice that
d(xi , xi+1 ) ≤ d(y2i , y2i+3 ) ≤ d(y2i , y2i+1 ) + d(y2i+1 , y2i+2 ) + d(y2i+2 , y2i+3 ) = 3(C + 1).
Finally, it is clear that, for every k ∈ Z, the hyperplanes Jk and Jk+1 are disjoint: if it
was not the case, there would exist a vertex of N (Jk ) ∩ N (Jk+1 ) whose projection onto
γ would belong to [y2k , y2k+1 ] ∩ [y2k+2 , y2k+3 ] = ∅.
Proof of Theorem 3.9. Suppose γ contracting. In particular, γ is quasiconvex, so,
by applying Proposition 3.11, we find a constant L ≥ 1, a collection of pairwise disjoint
hyperplanes {Ji , i ∈ Z}, and a collection of vertices {yi ∈ γ ∩ N (Ji ), i ∈ Z}, such that
d(yi , yi+1 ) ≤ L for all i ∈ Z. Let C be the constant given by Corollary 3.7 and set
xi = yiC for all i ∈ Z. Notice that
d(xi , xi+1 ) = d(yiC , yiC+C ) ≤ d(yiC , yiC+1 ) + · · · + d(yiC+C−1 , yiC+C ) ≤ (C + 1)L.
Now, we want to prove that the hyperplanes JnC and J(n+1)C are C-well separated
for every n ∈ Z. So let H be a collection of hyperplanes transverse to both JnC and
J(n+1)C , which contains no facing triple. Because every hyperplane H ∈ H is transverse
to any JnC+k , for 0 ≤ k ≤ C, we obtain a join of hyperplanes ({JnC , . . . , J(n+1)C }, H) whose first family lies in H(γ) and has cardinality C + 1. By definition of C, we deduce #H ≤ C.
Conversely, suppose that there exist constants ℓ, L ≥ 1, hyperplanes {Ji , i ∈ Z}, and
vertices {xi ∈ γ ∩ N (Ji ), i ∈ Z} such that, for every i ∈ Z:
• d(xi , xi+1 ) ≤ ℓ,
• the hyperplanes Ji and Ji+1 are L-well separated.
Let (Vp , Vq ) be a join of hyperplanes with Vp = {V1 , . . . , Vp } ⊂ H(γ). For convenience, we
may suppose without loss of generality that each Vi intersects γ at yi with the property
that yi separates yi−1 and yi+1 along γ for all 2 ≤ i ≤ p − 1. If p > 3ℓ + 2L, there exist
L < r < s < p−L such that yr and ys are separated by Jk , Jk+1 , Jk+2 and Jk+3 for some
k ∈ Z. Because Jk and Jk+1 are L-well separated, the hyperplanes {V1 , . . . , Vr } cannot
be transverse to both Jk and Jk+1 since r > L, so there exists 1 ≤ α ≤ r such that Vα
and Jk+1 are disjoint. Similarly, the hyperplanes {Vs , . . . , Vp } cannot be transverse to
both Jk+2 and Jk+3 since p − s > L, so there exists s ≤ ω ≤ p such that Vω and Jk+2
are disjoint. Notice that the hyperplanes Jk+1 and Jk+2 separate Vα and Vω , so that the
hyperplanes of Vq , which are all transverse to both Vα and Vω , are transverse to both
Jk+1 and Jk+2 . But Jk+1 and Jk+2 are L-well separated, hence q ≤ L.
Thus, we have proved that min(p, q) ≤ 3ℓ + 2L. Finally, Corollary 3.7 implies that γ is
contracting.
3.4 Contracting isometries I
Finally, we apply our criteria of contractivity to the axis of some isometry, providing a
characterization of contracting isometries. We first need the following definition:
Definition 3.12. Let X be a CAT(0) cube complex and g ∈ Isom(X) an isometry. We
say that g skewers a pair of hyperplanes (J1 , J2 ) if g n D1 ⊊ D2 ⊊ D1 for some n ∈ Z\{0}
and for some half-spaces D1 , D2 delimited by J1 , J2 respectively.
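As a basic example, cubulate the real line R with vertex set Z and let g be the translation x ↦ x + 1. Taking for J1 and J2 the hyperplanes dual to the edges [0, 1] and [1, 2], with halfspaces D1 = {x ≥ 1} and D2 = {x ≥ 2}, we get g 2 D1 = {x ≥ 3} ⊊ D2 ⊊ D1 , so g skewers the pair (J1 , J2 ); moreover J1 and J2 are 0-well separated since no hyperplane of R is transverse to another one. This is consistent with the fact that g is a contracting isometry, its axis being the whole line.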
Our main criterion is:
Theorem 3.13. Let X be a CAT(0) cube complex and g ∈ Isom(X) an isometry with
a combinatorial axis γ. The following statements are equivalent:
(i) g is a contracting isometry;
(ii) there exists C ≥ 1 such that any join of hyperplanes (H, V) with H ⊂ H(γ) is
C-thin;
(iii) there exists C ≥ 1 such that:
– H(γ) does not contain C pairwise transverse hyperplanes,
– any grid of hyperplanes (H, V) with H ⊂ H(γ) is C-thin;
(iv) g skewers a pair of well-separated hyperplanes.
Proof. The equivalence (i) ⇔ (ii) is a direct consequence of Corollary 3.7. Then,
because a grid of hyperplanes or a collection of pairwise transverse hyperplanes gives
rise to a join of hyperplanes, (ii) clearly implies (iii).
Now, we want to prove (iii) ⇒ (ii). Let C denote the constant given by (iii) and let
(H, V) be a join of hyperplanes satisfying H ⊂ H(γ). If #H, #V ≥ Ram(C), then H
and V each contain a subfamily of at least C pairwise disjoint hyperplanes, say H′ and V′ respectively. But now (H′ , V′ ) defines a grid of hyperplanes satisfying #H′ , #V′ ≥ C,
contradicting the definition of C. Thus, the join (H, V) is necessarily Ram(C)-thin.
Now, we want to prove (iv) ⇒ (i). So suppose that there exist two half-spaces D1 , D2
respectively delimited by two well separated hyperplanes J1 , J2 such that g n D1 ⊊ D2 ⊊ D1 for some n ∈ Z. Notice that
· · · ⊊ g 2n D1 ⊊ g n D2 ⊊ g n D1 ⊊ D2 ⊊ D1 ⊊ g −n D2 ⊊ g −n D1 ⊊ g −2n D2 ⊊ · · · .
We claim that J1 intersects γ. Suppose by contradiction that it is not the case.
Then, because γ is ⟨g⟩-invariant, g k J1 does not intersect γ for every k ∈ Z. As a consequence, there exists some m ∈ Z such that γ ⊂ g mn D1 \g (m+1)n D1 . A fortiori, g n γ ⊂ g (m+1)n D1 \g (m+2)n D1 , and because γ is ⟨g⟩-invariant, we deduce that
γ ⊂ (g mn D1 \g (m+1)n D1 ) ∩ (g (m+1)n D1 \g (m+2)n D1 ) = ∅,
a contradiction. So J1 intersects γ, and a fortiori, g kn J1 intersects γ for every k ∈ Z.
For every k ∈ Z, the intersection between g kn N (J1 ) and γ contains exactly two vertices;
fix an orientation along γ and denote by yk the first vertex along γ which belongs to
g kn N (J1 ). Clearly,
d(yk , yk+1 ) = d(g kn y0 , g (k+1)n y0 ) = d(y0 , g n y0 ) = ‖g‖n
since y0 belongs to the combinatorial axis of g, where ‖g‖ denotes the combinatorial translation length of g (see [Hag07]).
indeed, because J2 separates J1 and g n J1 , we deduce that any collection of hyperplanes
transverse to both J1 and g n J1 must also be transverse to both J1 and J2 , so that
we conclude that J1 and g n J1 are well separated since J1 and J2 are themselves well
separated. Therefore, {g kn J1 | k ∈ N} defines a family of pairwise well separated
hyperplanes intersecting γ at uniformly separated points. Theorem 3.9 implies that γ
is contracting.
Conversely, suppose that g is contracting. According to Theorem 3.9, there exist three
pairwise well separated hyperplanes J1 , J2 , J3 intersecting γ. Say that they respectively
delimit three half-spaces D1 , D2 , D3 satisfying D3 ( D2 ( D1 , where D3 contains
γ(+∞). We claim that g skewers the pair (J1 , J2 ). Fix two vertices x ∈ N (J1 ) ∩ γ
and y ∈ D3 ∩ γ. Notice that, if I denotes the set of n ∈ N such that g n J1 and J1 are
transverse and g n J1 does not separate x and y, then {g n J1 | n ∈ I} defines a set of
hyperplanes (without facing triples, since they all intersect γ) transverse to both J2 and
J3 . Because J2 and J3 are well separated, necessarily I has to be finite. A fortiori, there
must exist at most #I + d(x, y) integers n ∈ N such that g n J1 and J1 are transverse.
As a consequence, there exists an integer n ∈ N such that g n J1 ⊂ D2 . Finally, we have
proved (i) ⇒ (iv).
4 Combinatorial boundary
4.1 Generalities
In this section, we will use the following notation: given two subsets of hyperplanes H1 , H2 , we define the almost inclusion H1 ⊂a H2 to mean that all but finitely many hyperplanes of H1 belong to H2 . In particular, H1 and H2 are commensurable provided that H1 ⊂a H2 and H2 ⊂a H1 . Notice that commensurability defines an equivalence relation on the set of all the collections of hyperplanes, and the almost-inclusion induces a partial order on the associated quotient set. This allows the following definition.
Definition 4.1. Let X be a CAT(0) cube complex. Its combinatorial boundary is the poset (∂ c X, ≺), where ∂ c X is the set of the combinatorial rays modulo the relation: r1 ∼ r2 if H(r1 ) and H(r2 ) are commensurable; and where the partial order ≺ is defined by: r1 ≺ r2 whenever H(r1 ) ⊂a H(r2 ).
Notice that this construction is equivariant, ie., if a group G acts by isometries on X,
then G acts on ∂ c X by ≺-isomorphisms.
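Before going further, let us describe an elementary example. In the square tiling of R2 , a combinatorial ray is a monotone staircase, and its class in ∂ c X only depends on which of the four families of hyperplanes (vertical lines crossed towards +∞ or towards −∞, horizontal lines crossed towards +∞ or towards −∞) it meets infinitely often. Therefore ∂ c R2 contains exactly eight points: four minimal points, represented by the rays following the coordinate axes, and four points represented by the diagonal rays of the four quadrants, each diagonal point lying above the two adjacent axis points. In particular, every point of this boundary is comparable with some other point.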
The following two lemmas essentially state that we will be able to choose a “nice”
combinatorial ray representing a given point in the combinatorial boundary. They will
be useful in the next sections.
Lemma 4.2. Let X be a CAT(0) cube complex, x0 ∈ X a base vertex and ξ ∈ ∂ c X.
There exists a combinatorial ray r with r(0) = x0 and r = ξ in ∂ c X.
Proof. Let rn be a combinatorial path which is the concatenation of a combinatorial
geodesic [x0 , ξ(n)] between x0 and ξ(n) with the subray ξn of ξ starting at ξ(n). If rn
is not geodesic, then there exists a hyperplane J intersecting both [x0 , ξ(n)] and ξn .
Thus, J separates x0 and ξ(n) but cannot separate ξ(0) and ξ(n) since otherwise J
would intersect the ray ξ twice. We deduce that J necessarily separates x0 and ξ(0).
Therefore, if we choose n large enough so that the hyperplanes separating x0 and ξ(0)
do not intersect the subray ξn , then rn is a combinatorial ray. By construction, we have
rn (0) = x0 , and H(rn ) and H(ξ) are clearly commensurable so rn = ξ in ∂ c X.
Lemma 4.3. Let r, ρ be two combinatorial rays satisfying r ≺ ρ. There exists a combinatorial ray p equivalent to r satisfying p(0) = ρ(0) and H(p) ⊂ H(ρ).
Proof. According to the previous lemma, we may suppose that r(0) = ρ(0). By
assumption, H(r) ⊂a H(ρ). Let J be the last hyperplane of H(r)\H(ρ) intersected by r, ie., there exists some k ≥ 1 such that J is dual to the edge [r(k), r(k + 1)] and the
hyperplanes intersecting the subray rk ⊂ r starting at r(k + 1) all belong to H(ρ). We
claim that rk ⊂ ∂N (J). Indeed, if there exists some j ≥ k +1 such that d(rk (j), N (J)) ≥
1, then some hyperplane H would separate rk (j) and N (J) according to Lemma 2.7;
a fortiori, H would intersect r but not ρ, contradicting the choice of J. Let r′ denote the combinatorial ray obtained from r by replacing the subray rk−1 by the symmetric of rk with respect to J in N (J) ≃ J × [0, 1]. The set of hyperplanes intersecting r′ is precisely H(r)\{J}. Because |H(r′ )\H(ρ)| < |H(r)\H(ρ)|, iterating the process will
produce after finitely many steps a combinatorial ray p satisfying p(0) = r(0) = ρ(0)
and H(p) ⊂ H(ρ).
Definition 4.4. Let X be a CAT(0) cube complex and Y ⊂ X a subcomplex. We
define the relative combinatorial boundary ∂ c Y of Y as the subset of ∂ c X corresponding
the set of the combinatorial rays included into Y .
Lemma 4.5. Let r be a combinatorial ray. Then r ∈ ∂ c Y if and only if H(r) ⊂a H(Y ).
Proof. If r ∈ ∂ c Y , then by definition there exists a combinatorial ray ρ ⊂ Y equivalent to r. Then
H(r) ⊂a H(ρ) ⊂ H(Y ).
Conversely, suppose that H(r) ⊂a H(Y ). According to Lemma 4.2, we may suppose that
r(0) ∈ Y . Using the same argument as in the previous proof, we find an equivalent combinatorial ray ρ with H(ρ) ⊂ H(Y ) and ρ(0) = r(0) ∈ Y . Because Y is combinatorially
convex, it follows that ρ ⊂ Y , hence r ∈ ∂ c Y .
4.2 Quasiconvex geodesics II
The goal of this section is to prove the following criterion, determining when a combinatorial axis is quasiconvex just from its endpoints in the combinatorial boundary.
Proposition 4.6. Let X be a locally finite CAT(0) cube complex and g ∈ Isom(X) an
isometry with a combinatorial axis γ. A subray r ⊂ γ is quasiconvex if and only if r is
minimal in ∂ c X.
Proof. Suppose that r is not quasiconvex. According to Proposition 3.3, for every n ≥ 1,
there exists a join of hyperplanes (Hn , Vn ) with Hn , Vn ⊂ H(r) and #Hn , #Vn = n. The
hyperplanes of Hn ∪Vn are naturally ordered depending on the order of their intersections
along r. Without loss of generality, we may suppose that the hyperplane of Hn ∪ Vn
which is closest to r(0) belongs to Hn ; because ⟨g⟩ acts on H(γ) with finitely many
orbits, up to translating by powers of g and taking a subsequence, we may suppose that
this hyperplane J ∈ Hn does not depend on n. Finally, let Vn denote the hyperplane of
Vn which is farthest from r(0).
Thus, if J − (resp. Vn+ ) denotes the halfspace delimited by J (resp. Vn ) which does not
contain r(+∞) (resp. which contains r(+∞)), then Cn = (J − , Vn+ , r) defines a cycle of
three subcomplexes. Let Dn → X be a disc diagram of minimal complexity bounded by
Cn . Because no hyperplane can intersect J, Vn or r twice, it follows from Proposition
2.13 that Dn → X is an isometric embedding, so we will identify Dn with its image in
X. Let us write the boundary ∂Dn as a concatenation of three combinatorial geodesics
∂Dn = µn ∪ νn ∪ ρn ,
where µn ⊂ J − , νn ⊂ Vn+ and ρn ⊂ r. Because X is locally finite, up to taking a subsequence, we may suppose that the sequence of combinatorial geodesics (µn ) converges
to a combinatorial ray µ∞ ⊂ J − . Notice that, if H ∈ H(µ∞ ), then H ∈ H(µk ) for
some k ≥ 1; it then follows from Theorem 2.12 that H does not intersect νk in Dk , hence
H ∈ H(ρk ). Therefore, H(µ∞ ) ⊂ H(r).
According to Lemma 4.8 below, there exists an infinite collection of pairwise disjoint
hyperplanes J1 , J2 , . . . ∈ H(r). Because ⟨g⟩ acts on H(γ) with finitely many orbits, necessarily there exist some r, s ≥ 1 and some m ≥ 1 such that g m Jr = Js . Thus,
{Jr , g m Jr , g 2m Jr , . . .} defines a collection of infinitely many pairwise disjoint hyperplanes
in H(r) stable by the semigroup generated by g m . For convenience, let {H0 , H1 , . . .}
denote this collection.
If J is disjoint from some Hk , then Hk , Hk+1 , . . . ∉ H(µ∞ ) since µ∞ ⊂ J − . On the other hand, Hk , Hk+1 , . . . ∈ H(r), so we deduce that µ∞ ≺ r with µ∞ ≠ r in ∂ c X: it precisely
means that r is not minimal in ∂ c X.
From now on, suppose that J is transverse to all the Hk ’s. As a consequence, g km J is
transverse to Hn for every n ≥ k. For every k ≥ 1, let Hk+ denote the halfspace delimited
by Hk which contains r(+∞) and let pk : X → Hk+ be the combinatorial projection onto
Hk+ . We define a sequence of vertices (xn ) by:
x0 = r(0) and xn+1 = pn+1 (xn ) for every n ≥ 0.
Because of Lemma 2.8, we have xn = pn (x0 ) for every n ≥ 1. Therefore, it follows from
Proposition 2.5 that, for every n ≥ 1, there exists a combinatorial geodesic between x0
and xn passing through xk for k ≤ n. Thus, there exists a combinatorial ray ρ passing
through all the xk ’s.
Let H ∈ H(ρ). Then H separates xk and xk+1 for some k ≥ 0, and it follows from
Lemma 2.7 that H separates xk and Hk . As a first consequence, we deduce that, for
every k ≥ 1, g km J cannot belong to H(ρ) since g km J is transverse to Hn for every n ≥ k,
hence r ≠ ρ in ∂ c X. On the other hand, H separates x0 = r(0) and Hk , and r(n) ∈ Hk+
for n large enough, so H ∈ H(r). We have proved that H(ρ) ⊂ H(r), ie., ρ ≺ r. Thus,
r is not minimal in ∂ c X.
Conversely, suppose that r is not minimal in ∂ c X. Thus, there exists a combinatorial
ray ρ with ρ(0) = r(0) such that H(ρ) ⊂ H(r) and |H(r)\H(ρ)| = +∞. Let J1 , J2 , . . . ∈
H(r)\H(ρ) be an infinite collection. Without loss of generality, suppose that, for every
i > j ≥ 1, the edge Jj ∩ r is closer to r(0) than the edge Ji ∩ r. Let N ≥ 1, and let d(N )
denote the distance d(r(0), ω) where ω is the endpoint of the edge JN ∩r which is farthest
from r(0). Choose any collection of hyperplanes V ⊂ H(ρ) with #V ≥ d(N ) + N . Then
V contains a subcollection {V1 , . . . , VN } such that the edge Vi ∩ r is farther from r(0)
than the edge JN ∩ r. We know that each Vj separates {r(0), ω} and {r(k), ρ(k)} for k
large enough, and each Ji separates {r(k), ω} and {r(0), ρ(k)} for k large enough (since
Ji and ρ are disjoint); we deduce that Ji and Vj are transverse for any 1 ≤ i, j ≤ N .
Therefore, ({J1 , . . . , JN }, {V1 , . . . , VN }) defines a join of hyperplanes in H(r). In fact,
we have proved
Fact 4.7. Let r ≺ ρ be two combinatorial rays with the same origin. If J ⊂ H(r)\H(ρ)
is an infinite collection, then for every N ≥ 1 there exists a join of hyperplanes (H, V)
with H ⊂ J , V ⊂ H(ρ) and #H, #V ≥ N .
Since N can be chosen arbitrarily large, it follows from Proposition 3.3 that r is not
quasiconvex.
Lemma 4.8. Let X be a complete CAT(0) cube complex and r a combinatorial ray.
Then H(r) contains an infinite sequence of pairwise disjoint hyperplanes.
Proof. We begin our proof with a completely general argument, without assuming that
X is complete. First, we decompose H(r) as the disjoint union H0 ⊔ H1 ⊔ · · · where
we define Hi = {J ∈ H(r) | d∞ (r(0), J) = i} for all i ≥ 0. A priori, some Hi might
be empty or infinite; moreover, Hi could be infinite and Hi+1 non-empty. We want to
understand the behaviour of this sequence of collections of hyperplanes. Our first two
claims state that each Hi is a collection of pairwise transverse hyperplanes and that Hi
non-empty implies Hj non-empty for every j < i.
Claim 4.9. For all i ≥ 0 and J1 , J2 ∈ Hi , J1 and J2 are transverse.
Suppose by contradiction that J1 and J2 are disjoint, and, for convenience, say that J1
separates r(0) and J2 . Because d∞ (r(0), J1 ) = i, there exist i pairwise disjoint hyperplanes V1 , . . . , Vi separating r(0) and J1 . Then V1 , . . . , Vi , J1 are i + 1 pairwise disjoint hyperplanes separating r(0) and J2 , hence d∞ (r(0), J2 ) ≥ i + 1, a contradiction.
Claim 4.10. Let J ∈ Hi , ie., there exists a collection of i pairwise disjoint hyperplanes
V0 , . . . , Vi−1 separating r(0) and J. Then Vj ∈ Hj for every 0 ≤ j ≤ i − 1.
First, because V0 , . . . , Vj−1 separate r(0) and Vj , necessarily we have d∞ (r(0), Vj ) ≥
j. Let H1 . . . , Hk be k pairwise disjoint hyperplanes separating r(0) and Vj . Then
H1 , . . . , Hk , Vj , . . . , Vi−1 define k + i − j pairwise disjoint hyperplanes separating r(0)
and J, hence
k + i − j ≤ d∞ (r(0), J) = i,
ie., k ≤ j. We deduce that d∞ (r(0), Vj ) ≤ j. Finally, we conclude that d∞ (r(0), Vj ) = j,
that is Vj ∈ Hj .
Our third and last claim states that, if the first collections H0 , . . . , Hp are finite, then
there exists a sequence of cubes Q0 , . . . , Qp such that the intersection between two
successive cubes is a single vertex and the hyperplanes dual to the cube Qi are precisely
the hyperplanes of Hi . For this purpose, we define by induction the sequence of vertices
x0 , x1 , . . . by:
• x0 = r(0);
• if xj is defined and Hj is finite, xj+1 is the projection of xj onto Cj := ⋂_{J∈Hj} J + ,
where J + denotes the halfspace delimited by J which does not contain r(0).
Notice that the intersection Cj is non-empty precisely because of Claim 4.9 and because
we assumed that Hj is finite.
Claim 4.11. For every i ≥ 0, if Hi−1 is finite then any hyperplane of Hi is adjacent to
xi ; and, if Hi is finite, there exists a cube Qi containing xi and xi+1 as opposite vertices
such that Hi is the set of hyperplanes intersecting Qi .
We argue by induction.
First, we notice that any hyperplane J ∈ H0 is adjacent to x0 . Suppose by contradiction
that this not the case, and let p denote the combinatorial projection of x0 onto J + . By
assumption, d(x0 , p) ≥ 2 so there exists a hyperplane H separating x0 and p which is
different from J. According to Lemma 2.7, H separates x0 and J. This contradicts J ∈
H0 . Therefore, any hyperplane J ∈ H0 is dual to an edge e(J) with x0 as an endpoint.
If H0 is infinite, there is nothing to prove. Otherwise, {e(J) | J ∈ H0 } is a finite
set of edges with a common endpoint and whose associated hyperplanes are pairwise
transverse. Thus, because a CAT(0) cube complex does not contain inter-osculating
hyperplanes, these edges span a cube Q0 . By construction, the set of hyperplanes
intersecting Q0 is H0 . Furthermore, because the vertex of Q0 opposite to x0 belongs to
C0 and is at distance #H0 from x0 , we deduce that it is precisely x1 .
Now suppose that x0 , . . . , xi are well-defined and that there exist some cubes Q0 , . . . , Qi−1
satisfying our claim. We first want to prove that any hyperplane J ∈ Hi is adjacent to
xi . Suppose by contradiction that this is not the case, ie., there exists some J ∈ Hi which
is not adjacent to xi . As a consequence, there must exist a hyperplane H separating xi
and J.
Case 1: H does not separate x0 and xi . So H separates x0 and J, and H does not belong
to Hk for k ≤ i − 1. Noticing that r(k) ∈ J + for sufficiently large k ≥ 0, we deduce
that H necessarily intersects r, hence H ∈ Hj for some j ≥ i. Therefore, there exist i
pairwise disjoint hyperplanes V1 , . . . , Vi separating r(0) and H. A fortiori, V1 , . . . , Vi , H
define i + 1 pairwise disjoint hyperplanes separating r(0) and J, contradicting J ∈ Hi .
Case 2: H separates x0 and xi . In particular, H intersects a cube Qj for some 0 ≤ j ≤
i − 1, ie., H ∈ Hj . Let p0 (resp. pi ) denote the combinatorial projection of x0 (resp.
xi ) onto J + . Notice that, because H is disjoint from J, H does not separate p0 and
pi : these two vertices belong to the same half-space delimited by H, say H + . Because
H separates xi and J, it separates xi and pi , so xi belongs to the second half-space
delimited by H, say H − . Then, since H separates x0 and xi by hypothesis, we deduce
that x0 belongs to H + ; in particular, H does not separate x0 and p0 . On the other
hand, if k ≥ 0 is sufficiently large so that r(k) ∈ J + , H must separate r(0) and r(k)
since H ∈ H(r). According to Proposition 2.5, there exists a combinatorial geodesic
between x0 and r(k) passing through p0 . Because H is disjoint from J + , we conclude
that H must separate x0 and p0 , a contradiction.
We have proved that any hyperplane of Hi is adjacent to xi . Thus, if Hi is finite, we
can construct our cube Qi as above. This concludes the proof of our claim.
From now on, we assume that X is a complete CAT(0) cube complex. First, as a
consequence of the previous claim, we deduce that each Hi is finite. Indeed, suppose by
contradiction that some Hi is infinite. Without loss of generality, we may suppose that
Hj is finite for every j < i, so that the points x0 , . . . , xi and the cubes Q0 , . . . , Qi−1 are
well-defined. According to our previous claim, any hyperplane of Hi is adjacent to xi :
thus, the infinite set of edges adjacent to xi and dual to the hyperplanes of Hi define an
infinite cube, contradicting the completeness of X.
Therefore, we have defined an infinite sequence of vertices x0 , x1 , . . . and an infinite
sequence of cubes Q0 , Q1 , . . .. Thanks to Claim 4.10, for every i ≥ 0, there exists a
sequence of pairwise disjoint hyperplanes V0i , . . . , Vii with Vji ∈ Hj . Because each Hk is
finite, up to taking a subsequence, we may suppose that, for every k ≥ 0, the sequence
(Vki ) is eventually constant, equal to some hyperplane Vk ∈ Hk . By construction, our sequence
V0 , V1 , . . . defines a collection of pairwise disjoint hyperplanes in H(r), concluding the
proof of our lemma.
4.3 Contracting convex subcomplexes II
Once again, according to Proposition 3.3, the previous section reduces the problem of
determining when a combinatorial axis is contracting to the problem of determining
when a (hyperbolic, cocompact) combinatorially convex subcomplex is contracting. To
do so, we need the following definition:
Definition 4.12. A subset S ⊂ ∂ c X is full if any element of ∂ c X which is comparable
with an element of S necessarily belongs to S.
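As an illustration, consider the square tiling of R2 and let Y be the x-axis, a combinatorially convex subcomplex. Its relative boundary ∂ c Y consists of the two axis points of ∂ c R2 described in Section 4.1, and it is not full: the two adjacent quadrant points are comparable with them but do not belong to ∂ c Y . This is consistent with Theorem 4.13 below, since we observed after Definition 2.16 that the x-axis of R2 is not contracting.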
Our main criterion is:
Theorem 4.13. Let X be a locally finite CAT(0) cube complex and Y ⊂ X a hyperbolic
combinatorially convex Aut(X)-cocompact subcomplex. Then Y is contracting if and
only if ∂ c Y is full in ∂ c X.
Proof. Suppose that Y is not contracting. Notice that Y is a cocompact subcomplex,
so its dimension, say d, must be finite. According to Theorem 3.6, for every n ≥ d, there
exists a join of hyperplanes (Hn , Vn ) with Hn ⊂ H(Y ), Vn ∩ H(Y ) = ∅ and #Hn =
#Vn ≥ Ram(n). Next, up to taking subcollections of Hn and Vn , we may suppose thanks
to Ramsey’s theorem that Hn and Vn are collections of pairwise disjoint hyperplanes
of cardinalities exactly n, ie., (Hn , Vn ) is a grid of hyperplanes. For convenience, write Hn = (H1n , . . . , Hnn ) (resp. Vn = (V1n , . . . , Vnn )), where, for every 2 ≤ i ≤ n − 1, the hyperplane Hin (resp. Vin ) separates Hi−1n and Hi+1n (resp. Vi−1n and Vi+1n ); we also suppose that V1n separates Y and Vnn .
Let Cn be the cycle of subcomplexes (N (H1n ), N (Vnn ), N (Hnn ), Y ). According to Corollary
2.15, a disc diagram of minimal complexity bounded by Cn defines a flat rectangle
Dn . Say that a hyperplane intersecting the Hin ’s in Dn is vertical, and a hyperplane
intersecting the Vin ’s in Dn is horizontal.
Claim 4.14. If the grids of hyperplanes of Y are all C-thin and n > C, then at most
Ram(C) horizontal hyperplanes intersect Y .
Let V be a collection of horizontal hyperplanes which intersect Y ; notice that V does not
contain any facing triple since there exists a combinatorial geodesic in Dn (which is a
combinatorial geodesic in X) intersecting all the hyperplanes of V. If #V ≥ Ram(s) for
some s ≥ dim Y , then V contains a subcollection V 0 with s pairwise disjoint hyperplanes,
so that (Hn , V 0 ) defines a (n, s)-grid of hyperplanes. If n > C, this implies s ≤ C.
Therefore, #V ≤ Ram(C). This proves the claim.
Now, because Y is a cocompact subcomplex, up to translating the Dn ’s, we may suppose
that a corner of each Dn belongs to a fixed compact fundamental domain C; and because
X is locally finite, up to taking a subsequence, we may suppose that, for every ball B
centered in C, (Dn ∩ B) is eventually constant, so that (Dn ) converges naturally to a
subcomplex D∞ . Notice that D∞ is isomorphic to the square complex [0, +∞)×[0, +∞)
with [0, +∞) × {0} ⊂ Y ; let ρ denote this combinatorial ray. Clearly, if r ⊂ D∞ is a
diagonal combinatorial ray starting from (0, 0), then H(ρ) ⊂ H(r), ie. r and ρ are
comparable in ∂ c X. Furthermore, the hyperplanes intersecting [0, +∞) × {0} ⊂ D∞ are
horizontal hyperplanes in some Dn , and so infinitely many of them are disjoint from Y
according to our previous claim: this implies that r ∉ ∂ c Y . We conclude that ∂ c Y is not full in ∂ c X.
Conversely, suppose that ∂ c Y is not full in ∂ c X, ie., there exist two ≺-comparable
combinatorial rays r, ρ satisfying ρ ⊂ Y and r ∉ ∂ c Y .
Suppose that r ≺ ρ. According to Lemma 4.3, we may suppose that r(0) = ρ(0)
and H(r) ⊂ H(ρ). But r(0) ∈ Y and H(r) ⊂ H(Y ) imply r ⊂ Y , contradicting the
assumption r ∉ ∂ c Y .
Suppose that ρ ≺ r. According to Lemma 4.2, we may suppose that r(0) = ρ(0). Because
r ∉ ∂ c Y , there exists an infinite collection J ⊂ H(r)\H(Y ); a fortiori, J ⊂ H(r)\H(ρ)
since ρ ⊂ Y . It follows from Fact 4.7 that, for every N ≥ 1, there exists a join of
hyperplanes (H, V) with H ⊂ J , V ⊂ H(ρ) and #H, #V ≥ N . Therefore, Theorem 3.6
implies that Y is not contracting.
Remark 4.15. Notice that neither the hyperbolicity of the subcomplex Y nor its cocompactness was necessary to prove the converse of the previous theorem. Thus, the
relative combinatorial boundary of a combinatorially convex subcomplex is always full.
4.4 Contracting isometries II
Finally, we want to apply the results we have found in the previous sections to characterize contracting isometries from the combinatorial boundary. We begin by introducing
the following definition:
Definition 4.16. A point ξ ∈ ∂ c X is isolated if it is not comparable with any other
element of ∂ c X.
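In the square tiling of R2 described in Section 4.1, no point of the combinatorial boundary is isolated, since each axis point is comparable with the two adjacent quadrant points. On the other hand, if X is the cubulated real line, then ∂ c X consists of two incomparable points, both of which are isolated; this is consistent with Theorem 4.17 below, since the translation x ↦ x + 1 is a contracting isometry.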
Our main criterion is the following:
Theorem 4.17. Let X be a locally finite CAT(0) cube complex and g ∈ Isom(X) an
isometry with a combinatorial axis γ. Then g is a contracting isometry if and only if
γ(+∞) is isolated in ∂ c X.
Proof. If g is a contracting isometry then a subray r ⊂ γ representing γ(+∞) is contracting. Because N (r) is a combinatorially convex subcomplex quasi-isometric to a line, it
follows that ∂ c N (r) = {r}. On the other hand, since N (r) is contracting, it follows from
Theorem 4.13 that ∂ c N (r) is full in ∂ c X. This precisely means that r is isolated.
Conversely, suppose that r is isolated in ∂ c X. It follows from Proposition 4.6 that r is
quasiconvex. Thus, r is contracting if and only if N (r) is contracting. The quasiconvexity of r also implies ∂ c N (r) = {r}. Because r is isolated in ∂ c X, ∂ c N (r) is full in ∂ c X,
and so N (r) is contracting according to Theorem 4.13.
Thus, containing an isolated point in the combinatorial boundary is a necessary condition
in order to admit a contracting isometry. The converse is also true if we strengthen our
assumptions on the action of our group. The first result in this direction is the following
theorem, where we assume that there exist only finitely many orbits of hyperplanes:
Theorem 4.18. Let G be a group acting on a locally finite CAT(0) cube complex X
with finitely many orbits of hyperplanes. Then G y X contains a contracting isometry
if and only if ∂ c X has an isolated vertex.
This result will be essentially a direct consequence of our main technical lemma:
Lemma 4.19. Let X be a locally finite CAT(0) cube complex. If ∂ c X has an isolated
point then, for every N ≥ 2, there exists a collection of N pairwise well-separated
hyperplanes of X which does not contain any facing triple.
Proof. Let r ∈ ∂ c X be an isolated point. According to Lemma 4.8, there exists an
infinite collection V0 , V1 , . . . ∈ H(r) of pairwise disjoint hyperplanes.
Claim 4.20. For every k ≥ 0, H(Vk ) ∩ H(r) is finite.
Suppose by contradiction that H(Vk ) ∩ H(r) is infinite for some k ≥ 0. Fix a vertex
x0 ∈ r adjacent to Vk and H1, H2, . . . ∈ H(Vk) ∩ H(r). Now, let xi be the combinatorial projection of x0 onto N(Hi) for every i ≥ 1; notice that d(x0, xi) → +∞ as i → +∞, since X is locally finite so that only finitely many hyperplanes intersect a given ball centered at
x0 . Then, once again because X is locally finite, if we choose a combinatorial geodesic
[x0 , xi ] between x0 and xi for every i ≥ 1, up to taking a subsequence, we may suppose
that the sequence ([x0 , xi ]) converges to some combinatorial ray ρ.
Let J ∈ H(ρ). Then J separates x0 and xi for some i ≥ 1. From Lemma 2.7, we deduce
that J separates x0 and N (Hi ). On the other hand, we know that Hi intersects the
combinatorial ray r which starts from x0 , so necessarily J ∈ H(r). We have proved that
H(ρ) ⊂ H(r) holds.
Next, notice that xi ∈ N (Vk ) for every i ≥ 1. Indeed, if pi denotes the combinatorial
projection of x0 onto N (Vk ) ∩ N (Hi ) and m the median vertex of {x0 , xi , pi }, then
x0 , pi ∈ N (Vk ) implies m ∈ N (Vk ) and xi , pi ∈ N (Hi ) implies m ∈ N (Hi ), hence
m ∈ N (Hi ) ∩ N (Vk ). Since xi minimizes the distance to x0 in N (Hi ) and m belongs to a geodesic between x0 and xi , we conclude that xi = m ∈ N (Vk ). A
fortiori, ρ ⊂ N (Vk ). As a consequence, Vk+1 , Vk+2 , . . . ∉ H(ρ), so ρ defines a point of
∂ c X, different from r, which is ≺-comparable with r: this contradicts the fact that r is
isolated in ∂ c X. Our claim is proved.
Claim 4.21. For every k ≥ 0, there exists some i ≥ 1 such that Vk and Vk+i are
(2i)-well-separated.
Suppose by contradiction that, for every i ≥ 1, Vk and Vk+i are not (2i)-well-separated, so that there exists a collection of hyperplanes Hi ⊂ H(Vk) ∩ H(Vk+i) which does not contain any facing triple and satisfies #Hi ≥ 2i. Let x0 ∈ r be a vertex adjacent to Vk. Because Hi does not contain any facing triple and Hi ⊂ H(Vk), there exists a
vertex xi ∈ N (Vk ) such that x0 and xi are separated by at least i hyperplanes of Hi ; let
Ki denote this set of hyperplanes. For every i ≥ 1, let
Ci = ⋂_{J∈Ki} {halfspace delimited by J containing xi}.
Let Ci be the cycle of subcomplexes (N (Vk ), Ci , N (Vk+i ), r), and let Di → X be a disc
diagram of minimal complexity bounded by Ci . According to Corollary 2.14, Di → X
is an isometric embedding so that we will identify Di with its image in X. Because
H(Vk ) ∩ H(r) is finite according to our previous claim, we know that, for sufficiently
large i ≥ 1, Ki will contain a hyperplane disjoint from r, so that Ci ∩ r = ∅. Up to
taking a subsequence, we will suppose that this is always the case. In particular, ∂Di
can be decomposed as the concatenation of four non trivial combinatorial geodesics
∂Di = αi ∪ βi ∪ γi ∪ ri ,
where αi ⊂ N (Vk ), βi ⊂ Ci , γi ⊂ N (Vk+i ) and ri ⊂ r. Notice that the intersection
between αi and ri is a vertex adjacent to x0 , so, if we use the local finiteness of X to
define, up to taking a subsequence, a subcomplex D∞ as the limit of (Di ), then D∞ is
naturally bounded by two combinatorial rays α∞ and r∞ which are the limits of (αi )
and (ri ) respectively; in particular, α∞ ⊂ N (Vk ) and r∞ is a subray of r.
For each i ≥ 1, we will say that a hyperplane of Di intersecting βi is vertical, and a
hyperplane of Di intersecting αi is horizontal; a horizontal hyperplane J ∈ H(Di ) is
high if d(x0 , J ∩ αi ) > #H(Vk ) ∩ H(r).
Fact 4.22. Let i ≥ 1. The vertical hyperplanes of Di are pairwise disjoint and intersect ri . The horizontal hyperplanes of Di are pairwise disjoint. The high horizontal
hyperplanes of Di intersect γi .
It follows directly from Theorem 2.12 that the vertical hyperplanes of Di are pairwise
disjoint and intersect ri ; and the horizontal hyperplanes of Di are pairwise disjoint.
Suppose by contradiction that a high horizontal hyperplane H does not intersect γi .
According to Theorem 2.12, necessarily H intersects r. On the other hand, because
d(x0 , H∩αi ) > #H(Vk )∩H(r), there must exist a horizontal hyperplane J intersecting αi
between H and x0 which does not intersect ri ; once again, it follows from Theorem 2.12
that J has to intersect γi . A fortiori, J and H are necessarily transverse, contradicting
the fact that two horizontal hyperplanes are disjoint. This proves the fact.
Now, we want to define a particular subdiagram D′i ⊂ Di. Let Hi be the nearest high horizontal hyperplane of Di from αi ∩ ri. According to the previous fact, Hi intersects γi. Let D′i be the subdiagram delimited by Hi which contains βi. Notice that the hyperplanes intersecting D′i are either vertical or high horizontal. Thus, from the decomposition of H(D′i) as horizontal and vertical hyperplanes given by Fact 4.22, we deduce that D′i is isometric to some square complex [0, ai] × [0, bi]. Notice that bi → +∞ as i → +∞, since Vk+1, . . . , Vk+i−1 separate αi and γi, and similarly ai → +∞ as i → +∞, since ai = length(αi) − |H(Vk) ∩ H(r)| and #Ki ≥ i. Therefore, if D′∞ ⊂ D∞ denotes the limit of (D′i), D′∞ is naturally isomorphic to the square complex [0, +∞) × [0, +∞). Let ρ∞ ⊂ D′∞ be the combinatorial ray associated to [0, +∞) × {0}, and µ ⊂ D′∞ be a diagonal combinatorial ray, i.e., a ray passing through the vertices {(i, i) | i ≥ 0}.
Notice that ρ∞ is naturally the limit of (ρi) := (∂D′i ∩ N(Hi)). Thus, any hyperplane intersecting ρ∞ is a vertical hyperplane in some Di. It follows from Fact 4.22 that H(ρ∞) ⊂ H(r). Because r is isolated in ∂ c X, we deduce that r = ρ∞ in ∂ c X. Now, we clearly have H(ρ∞) ⊂ H(µ), hence H(r) ⊂ H(µ). On the other hand, µ intersects infinitely many high horizontal hyperplanes, which are disjoint from r according
to Fact 4.22. Therefore, µ is a point of ∂ c X which is ≺-comparable with r, but different
from it, contradicting the assumption that r is isolated in ∂ c X. This concludes the proof
of our claim.
Now, we are able to conclude the proof of our lemma. Applying Claim 4.21 N − 1
times, we find a sequence of hyperplanes Vi(1) , . . . , Vi(N ) such that Vi(k) and Vi(k+1) are
L(k)-well-separated for some L(k) ≥ 1. Thus, {Vi(1) , . . . , Vi(N ) } defines a collection of N
pairwise L-well-separated hyperplanes, with L = max(L(1), . . . , L(N − 1)), which does
not contain any facing triple.
Proof of Theorem 4.18. If G contains a contracting isometry, then ∂ c X contains
an isolated point according to Theorem 4.17. Conversely, suppose that ∂ c X contains
an isolated point. Let N denote the number of orbits of hyperplanes for the action
G y X. According to Lemma 4.19, X contains a family of 3N pairwise well-separated
hyperplanes which does not contain any facing triple. By our choice of N , we deduce that
there exist a hyperplane J and g, h ∈ G such that J, gJ, hgJ are pairwise well-separated.
Without loss of generality, suppose that hgJ separates J and gJ, and let J + be the
halfspace delimited by J which contains gJ and hgJ. If gJ + ⊂ J + or hgJ + ⊂ J + , we
deduce that g or hg skewers a pair of well-separated hyperplanes, and we deduce from
Theorem 3.13 that G contains a contracting isometry. Otherwise, hgJ + ⊂ gJ + since
hgJ separates J and gJ. Therefore, h skewers a pair of well-separated hyperplanes, and
we deduce from Theorem 3.13 that G contains a contracting isometry.
Our second result stating that the existence of an isolated point in the combinatorial
boundary assures the existence of a contracting isometry is the following:
Theorem 4.23. Let G be a countable group acting on a locally finite CAT(0) cube
complex X. Suppose that the action G y X is minimal (ie., X does not contain a
proper G-invariant combinatorially convex subcomplex) and G does not fix a point of
∂ c X. Then G contains a contracting isometry if and only if ∂ c X contains an isolated
point.
Given a hyperplane J, delimiting two halfspaces J − and J + , an orientation of J is
the choice of an ordered couple (J − , J + ). The following terminology was introduced in
[CS11]:
Definition 4.24. Let G be a group acting on a CAT(0) cube complex. A hyperplane J
is G-flippable if, for every orientation (J − , J + ) of J, there exists some g ∈ G such that
gJ − ⊊ J + .
Our main tool to prove Theorem 4.23 is:
Lemma 4.25. Let G be a countable group acting minimally on a CAT(0) cube complex
X without fixing any point of ∂ c X. Then any hyperplane of X is G-flippable.
Proof. Suppose that there exists a hyperplane J which is not G-flippable, i.e., J admits an orientation (J−, J+) such that gJ+ ∩ J+ ≠ ∅ for every g ∈ G. Thus, {gJ+ | g ∈ G} defines a family of pairwise intersecting halfspaces. If the intersection ⋂_{g∈G} gJ+ is non-empty, it defines a proper G-invariant combinatorially convex subcomplex, so that the action G y X is not minimal.
From now on, suppose that ⋂_{g∈G} gJ+ = ∅. Fix an enumeration G = {g1, g2, . . .} and let In denote the non-empty intersection ⋂_{i=1}^{n} gi J+. In particular, we have I1 ⊃ I2 ⊃ · · · and ⋂_{n≥1} In = ∅. Define a sequence of vertices (xn) by:
x0 ∉ I1 and xn+1 = pn+1(xn) for every n ≥ 0,
where pk : X → Ik denotes the combinatorial projection onto Ik . According to Proposition 2.5, for every n ≥ 1, there exists a combinatorial geodesic between x0 and xn
passing through xk for any 1 ≤ k ≤ n, so there exists a combinatorial ray r passing
through all the xn ’s. Notice that, because we also have xn = pn (x0 ) for every n ≥ 1
according to Lemma 2.8, H(r) is precisely the set of the hyperplanes separating x0 from
some In . In particular, if ρ is any combinatorial ray starting from x0 and intersecting
all the In ’s, then H(r) ⊂ H(ρ).
Therefore, ⋂_{k≥1} ∂ c Ik is a non-empty (since it contains r) subset of ∂ c X with r as a minimal element, i.e., for any ρ ∈ ⋂_{k≥1} ∂ c Ik necessarily r ≺ ρ.
Let us prove that ⋂_{k≥1} ∂ c Ik is G-invariant. Fix some g ∈ G, and, for every k ≥ 1, let L(k, g) be a sufficiently large integer so that {gg1, . . . , ggk} ⊂ {g1, . . . , gL(k,g)}. Of course, L(k, g) ≥ k and gIk = ⋂_{i=1}^{k} ggi J+ ⊃ IL(k,g). Consequently,
g ⋂_{k≥1} ∂ c Ik = ⋂_{k≥1} ∂ c (gIk) ⊃ ⋂_{k≥1} ∂ c IL(k,g) = ⋂_{k≥1} ∂ c Ik,
because L(k, g) → +∞ as k → ∞. Thus, we have proved that g ⋂_{k≥1} ∂ c Ik ⊃ ⋂_{k≥1} ∂ c Ik for every g ∈ G. It follows that our intersection is G-invariant.
Finally, because G acts on ∂ c X by order-automorphisms and our intersection is G-invariant with r as its least element, we conclude that G fixes r ∈ ∂ c X, contradicting the assumption that G does not fix any point of ∂ c X.
Corollary 4.26. Let G be a countable group acting minimally on a CAT(0) cube complex X without fixing any point of ∂ c X. Then any pair of disjoint hyperplanes is skewered
by an element of G.
Proof. Let (J, H) be a pair of disjoint hyperplanes. Fix two orientations J = (J−, J+) and H = (H−, H+) so that J+ ⊊ H+. Now, by applying the previous lemma twice, we find two elements g, h ∈ G such that gJ− ⊊ J+ and hgH+ ⊊ gH−. Thus,
hgH+ ⊊ gH− = X\gH+ ⊊ X\gJ+ = gJ− ⊊ J+.
We deduce that hg skewers the pair (J, H).
Remark 4.27. Lemma 4.25 may be thought of as a generalisation of the Flipping
Lemma proved in [CS11], where the Tits boundary is replaced with the combinatorial
boundary in possibly infinite dimension. Corollary 4.26 corresponds to the Double
Skewering Lemma [CS11].
Proof of Theorem 4.23. If G contains a contracting isometry, then ∂ c X contains an
isolated point according to Theorem 4.17. Conversely, suppose that ∂ c X contains an
isolated point. It follows from Lemma 4.19 that X contains two well-separated hyperplanes, and it suffices to invoke Corollary 4.26 to find an element g ∈ G which skewers
this pair of hyperplanes: according to Theorem 3.13, this is a contracting isometry.
5 Acylindrical hyperbolicity of diagram groups
5.1 Diagram groups and their cube complexes
We refer to [GS97, §3 and §5] for a detailed introduction to semigroup diagrams and
diagram groups.
For an alphabet Σ, let Σ+ denote the free semigroup over Σ. If P = ⟨Σ | R⟩ is a semigroup presentation, where R is a set of pairs of words in Σ+, the semigroup associated to P is the one given by the factor-semigroup Σ+/∼, where ∼ is the smallest equivalence relation on Σ+ containing R. We will always assume that if u = v ∈ R then v = u ∉ R; in particular, u = u ∉ R.
A semigroup diagram over P is the analogue for semigroups of van Kampen diagrams
for group presentations. Formally, it is a finite connected planar graph ∆ whose edges
are oriented and labelled by the alphabet Σ, satisfying the following properties:
• ∆ has exactly one vertex-source ι (which has no incoming edges) and exactly one
vertex-sink τ (which has no outgoing edges);
• the boundary of each cell has the form pq −1 where p = q or q = p ∈ R;
• every vertex belongs to a positive path connecting ι and τ ;
• every positive path in ∆ is simple.
In particular, ∆ is bounded by two positive paths: the top path, denoted top(∆), and
the bottom path, denoted bot(∆). By extension, we also define top(Γ) and bot(Γ) for
every subdiagram Γ. In the following, the notations top(·) and bot(·) will refer to the
paths and to their labels. Also, a (u, v)-cell (resp. a (u, v)-diagram) will refer to a cell
(resp. a semigroup diagram) whose top path is labelled by u and whose bottom path is
labelled by v.
Two words w1 , w2 in Σ+ are equal modulo P if their images in the semigroup associated to P coincide. In particular, there exists a derivation from w1 to w2 , i.e., a
sequence of relations of R allowing us to transform w1 into w2 . To any such derivation
is associated a semigroup diagram, or more precisely a (w1 , w2 )-diagram, whose construction is clear from the example below. As in the case for groups, the words w1 , w2
are equal modulo P if and only if there exists a (w1 , w2 )-diagram.
Example 5.1. Let P = ⟨a, b, c | ab = ba, ac = ca, bc = cb⟩ be a presentation of the free Abelian semigroup of rank three. In particular, the words a²bc and caba are equal modulo P, with the following possible derivation:
aabc → abac → abca → acba → caba,
where the successive steps apply the relations (a, ab → ba, c), (ab, ac → ca, ∅), (a, bc → cb, a) and (∅, ac → ca, ba).
Then, the associated (a²bc, caba)-diagram ∆ is:
On such a graph, the edges are supposed oriented from left to right. Here, the diagram
∆ has nine vertices, twelve edges and four cells; notice that the number of cells of a
diagram corresponds to the length of the associated derivation. The paths top(∆) and
bot(∆) are labelled respectively by a2 bc and caba.
Since we are only interested in the combinatorics of semigroup diagrams, we will not
distinguish isotopic diagrams. For example, the two diagrams below will be considered
as equal.
If w ∈ Σ+ , we define the trivial diagram (w) as the semigroup diagram without cells
whose top and bottom paths, labelled by w, coincide. Any diagram without cells is
trivial. A diagram with exactly one cell is atomic.
If ∆1 is a (w1 , w2 )-diagram and ∆2 a (w2 , w3 )-diagram, we define the concatenation
∆1 ◦ ∆2 as the semigroup diagram obtained by identifying the bottom path of ∆1 with
the top path of ∆2 . In particular, ∆1 ◦∆2 is a (w1 , w3 )-diagram. Thus, ◦ defines a partial
operation on the set of semigroup diagrams over P. However, restricted to the subset of
(w, w)-diagrams for some w ∈ Σ+ , it defines a semigroup operation; such diagrams are
called spherical with base w. We also define the sum ∆1 + ∆2 of two diagrams ∆1 , ∆2 as
the diagram obtained by identifying the rightmost vertex of ∆1 with the leftmost vertex
of ∆2 .
Notice that any semigroup diagram can be viewed as a concatenation of atomic diagrams.
In the following, if ∆1 , ∆2 are two diagrams, we will say that ∆1 is a prefix (resp. a
suffix) of ∆2 if there exists a diagram ∆3 satisfying ∆2 = ∆1 ◦ ∆3 (resp. ∆2 = ∆3 ◦ ∆1 ).
Throughout this paper, the fact that ∆ is a prefix of Γ will be denoted by ∆ ≤ Γ.
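For instance, with the diagram of Example 5.1: writing the four derivation steps as atomic diagrams A1, A2, A3, A4, the associated (a²bc, caba)-diagram ∆ decomposes as
∆ = A1 ◦ A2 ◦ A3 ◦ A4,
and each partial concatenation A1 ◦ · · · ◦ Ai (1 ≤ i ≤ 4) is a prefix of ∆ in the sense just defined, i.e., A1 ◦ · · · ◦ Ai ≤ ∆.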
Suppose that a diagram ∆ contains a (u, v)-cell and a (v, u)-cell such that the top
path of the first cell is the bottom path of the second cell. Then, we say that these
two cells form a dipole. In this case, we can remove these two cells by first removing
their common path, and then identifying the bottom path of the first cell with the top
path of the second cell; thus, we reduce the dipole. A diagram is called reduced if it does
not contain dipoles. By reducing dipoles, a diagram can be transformed into a reduced
diagram, and a result of Kilibarda [Kil94] proves that this reduced form is unique. If
∆1 , ∆2 are two diagrams for which ∆1 ◦ ∆2 is well defined, let us denote by ∆1 · ∆2 the
reduced form of ∆1 ◦ ∆2 .
Thus, the set of reduced semigroup diagrams, endowed with the product · we defined above, naturally defines a groupoid G(P), ie., loosely speaking, a “group” where
the product is only partially defined. The neutral elements correspond to the trivial
diagrams, and the inverse of a diagram is constructed in the following way: if ∆ is a
(w1 , w2 )-diagram, its inverse ∆−1 is the (w2 , w1 )-diagram obtained from ∆ by a mirror
reflection with respect to top(∆). A natural way to deduce a group from this groupoid
is to fix a base word w ∈ Σ+ , and to define the diagram group D(P, w) as the set of
reduced (w, w)-diagrams endowed with the product · we defined above.
Farley cube complexes. The groupoid G(P) has a natural generating set, namely
the set of atomic diagrams, and the group D(P, w) acts by left-multiplication on the
connected component of the associated Cayley graph which contains (w). Surprisingly,
this Cayley graph turns out to be naturally the 1-skeleton of a CAT(0) cube complex.
Below, we detail the construction of this cube complex as explained in [Far03].
A semigroup diagram is thin whenever it can be written as a sum of atomic diagrams.
We define the Farley complex X(P, w) as the cube complex whose vertices are the
reduced semigroup diagrams ∆ over P satisfying top(∆) = w, and whose n-cubes are
spanned by the vertices {∆ · P | P ≤ Γ} for some vertex ∆ and some thin diagram Γ
with n cells. In particular, two diagrams ∆1 and ∆2 are linked by an edge if and only
if there exists an atomic diagram A such that ∆1 = ∆2 · A.
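To give a concrete instance of this construction: over the presentation P of Example 5.1, the trivial diagram (a²bc) and the atomic (a²bc, abac)-diagram A given by the first derivation step are both reduced diagrams with top path labelled by a²bc, hence vertices of X(P, a²bc); since A = (a²bc) · A with A atomic, these two vertices span an edge.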
Theorem 5.2. [Far03, Theorem 3.13] X(P, w) is a CAT(0) cube complex. Moreover it
is complete, i.e., there are no infinite increasing sequences of cubes in X(P, w).
There is a natural action of D(P, w) on X(P, w), namely (g, ∆) ↦ g · ∆. Then
Proposition 5.3. [Far03, Theorem 3.13] The action D(P, w) y X(P, w) is free. Moreover, it is cocompact if and only if the class [w]P of words equal to w modulo P is finite.
In this paper, we will always suppose that the cube complex X(P, w) is locally finite; in
particular, this implies that the action D(P, w) y X(P, w) is properly discontinuous.
For instance, this is the case if P is a finite presentation, and because we may always suppose
that P is a finite presentation if our diagram group is finitely generated, this assumption
is not really restrictive. However, notice that X(P, w) may be locally finite even if P is
an infinite presentation. For example, the derived subgroup [F, F ] of Thompson’s group
F is isomorphic to the diagram group D(P, a0b0), where
P = ⟨x, ai, bi, i ≥ 0 | x = x², ai = ai+1 x, bi = x bi+1, i ≥ 0⟩.
See [GS99, Theorem 26] for more details. In this situation, X(P, a0b0) is nevertheless
locally finite.
Because X(P, w) is essentially a Cayley graph, the combinatorial geodesics behave as
expected:
Lemma 5.4. [Gen15a, Lemma 2.3] Let ∆ be a reduced diagram. If ∆ = A1 ◦ · · · ◦ An
where each Ai is an atomic diagram, then the path
(w), A1 , A1 ◦ A2 , . . . , A1 ◦ · · · ◦ An = ∆
defines a combinatorial geodesic from (w) to ∆. Furthermore, every combinatorial
geodesic from (w) to ∆ has this form.
Corollary 5.5. [Gen15a, Corollary 2.4] Let A, B ∈ X(P, w) be two reduced diagrams.
Then we have
d(A, B) = #(A−1 · B),
where #(·) denotes the number of cells of a diagram. In particular, d((w), A) = #(A).
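As a small worked instance of this formula, take for ∆ the (a²bc, caba)-diagram of Example 5.1, which is reduced (no two of its cells form a dipole, since no relation is applied in both directions); then
d((a²bc), ∆) = #(∆) = 4,
one edge of X(P, a²bc) for each of the four cells, i.e., for each step of the derivation.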
In [Gen15a], we describe the hyperplanes of X(P, w). We recall this description below.
For any hyperplane J of X(P, w), we introduce the following notation:
• J + is the halfspace associated to J not containing (w),
• J − is the halfspace associated to J containing (w),
• ∂± J is the intersection ∂N (J) ∩ J ± .
Definition 5.6. A diagram ∆ is minimal if its maximal thin suffix F has exactly one
cell. (The existence and uniqueness of the maximal thin suffix is given by [Far03, Lemma
2.3].)
In the following, ∆̄ will denote the diagram ∆ · F^{−1}, obtained from ∆ by removing the suffix F. The following result uses minimal diagrams to describe hyperplanes in Farley
complexes.
Proposition 5.7. [Gen15a, Proposition 2.2] Let J be a hyperplane of X(P, w). Then
there exists a unique minimal diagram ∆ such that J + = {D diagram | ∆ ≤ D}.
Conversely, if ∆ is a minimal diagram and J is the hyperplane dual to the edge [∆̄, ∆],
then J + = {D diagram | ∆ ≤ D}.
Thanks to the geometry of CAT(0) cube complexes, we will use this description of the
hyperplanes of X(P, w) in order to define the supremum, when it exists, of a collection
of minimal diagrams. If ∆ is a diagram, let M(∆) denote the set of its minimal prefixes.
A set of minimal diagrams D is consistent if, for every D1 , D2 ∈ D, there exists some
D3 ∈ D satisfying D1 , D2 ≤ D3 .
Proposition 5.8. Let D be a finite consistent collection of reduced minimal (w, ∗)-diagrams. Then there exists a unique ≤-minimal reduced (w, ∗)-diagram admitting any element of D as a prefix; let sup(D) denote this diagram. Furthermore, M(sup(D)) = ⋃_{D∈D} M(D).
Proof. Let D = {D1 , . . . , Dr }. For every 1 ≤ i ≤ r, let Ji denote the hyperplane
associated to Di . Because D is consistent, for every 1 ≤ i, j ≤ r the intersection Ji+ ∩ Jj+
is non-empty, hence C = ⋂_{i=1}^{r} Ji+ ≠ ∅. Let ∆ denote the combinatorial projection of (w)
onto C. For every 1 ≤ i ≤ r, ∆ belongs to Ji+ , hence Di ≤ ∆. Then, if D is another
diagram admitting any element of D as a prefix, necessarily D ∈ C, and it follows from
Proposition 2.5 that there exists a combinatorial geodesic between (w) and D passing
through ∆, hence ∆ ≤ D. This proves the ≤-minimality and the uniqueness of ∆.
Now, let D ∈ M(∆). If J denotes the hyperplane associated to D, then J separates (w)
and ∆, and according to Lemma 2.7 it must separate (w) and C. Therefore, either J
belongs to {J1 , . . . , Jr } or it separates (w) and some Jk ; equivalently, either D belongs
to {D1, . . . , Dr} or it is a prefix of some Dk. Hence D ∈ ⋃_{i=1}^{r} M(Di). Thus, we have proved that M(∆) ⊂ ⋃_{D∈D} M(D); the inclusion ⋃_{D∈D} M(D) ⊂ M(∆) is clear since any element of D is a prefix of ∆.
As a consequence, we deduce that a diagram is determined by the set of its minimal
prefixes.
Lemma 5.9. Let ∆1 and ∆2 be two diagrams. If M(∆1 ) = M(∆2 ) then ∆1 = ∆2 .
Proof. We begin with some general results. Let ∆ be a diagram.
Fact 5.10. #M(∆) = #∆.
Indeed, there exists a bijection between M(∆) and the hyperplanes separating (w) and
∆, hence #M(∆) = d((w), ∆) = #∆. We deduce that
Fact 5.11. sup(M(∆)) = ∆.
Indeed, by definition any element of M(∆) is a prefix of ∆, so that sup(M(∆)) exists
(because M(∆) is consistent) and sup(M(∆)) ≤ ∆ by the ≤-minimality of sup(M(∆)).
On the other hand, we deduce from the previous fact that
#sup(M(∆)) = |M(sup(M(∆)))| = |⋃_{D∈M(∆)} M(D)| = #M(∆) = #∆.
Therefore, we necessarily have sup(M(∆)) = ∆.
Finally, our lemma follows easily, since ∆1 = sup(M(∆1 )) = sup(M(∆2 )) = ∆2 .
Squier cube complexes. Because the action D(P, w) y X(P, w) is free and properly
discontinuous, we know that D(P, w) is isomorphic to the fundamental group of the
quotient space X(P, w)/D(P, w). We give now a description of this quotient. We define
the Squier complex S(P) as the cube complex whose vertices are the words in Σ+ ; whose
(oriented) edges can be written as (a, u → v, b), where u = v or v = u belongs to R,
linking the vertices aub and avb; and whose n-cubes similarly can be written as
(a1 , u1 → v1 , . . . , an , un → vn , an+1 ),
spanned by the set of vertices {a1 w1 · · · an wn an+1 | wi = vi or ui }.
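For instance, over the presentation of Example 5.1, the 2-cube
(∅, ab → ba, ∅, bc → cb, ∅)
of S(P) is spanned by the four vertices abbc, abcb, babc and bacb: it records that the two relations, applied to disjoint subwords of abbc, can be performed in either order.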
Then, there is a natural morphism from the fundamental group of S(P) based at
w to the diagram group D(P, w). Indeed, a loop in S(P) based at w is just a series
of relations of R applied to the word w so that the final word is again w, and such
a derivation may be encoded into a semigroup diagram. The figure below shows an
example, where the semigroup presentation is P = ⟨a, b, c | ab = ba, bc = cb, ac = ca⟩:
Thus, this defines a map from the set of loops of S(P) based at w to the set of spherical
semigroup diagrams. In fact, the map extends to a morphism which turns out to be an
isomorphism:
Theorem 5.12. [GS97, Theorem 6.1] D(P, w) ' π1 (S(P), w).
For convenience, S(P, w) will denote the connected component of S(P) containing
w. Notice that two words w1 , w2 ∈ Σ+ are equal modulo P if and only if they belong to
the same connected component of S(P). Therefore, a consequence of Theorem 5.12 is:
Corollary 5.13. If w1, w2 ∈ Σ+ are equal modulo P, then there exists a (w2, w1)-diagram Γ and the map
∆ ↦ Γ · ∆ · Γ^{−1}
induces an isomorphism from D(P, w1 ) to D(P, w2 ).
As claimed, we notice that the map ∆ ↦ bot(∆) induces a universal covering X(P, w) →
S(P, w) so that the action of π1 (S(P, w)) on X(P, w) coincides with the natural action
of D(P, w).
Lemma 5.14. [Gen15a, Lemma 1.3.5] The map ∆ ↦ bot(∆) induces a cellular isomorphism from the quotient X(P, w)/D(P, w) to S(P, w).
Changing the base word. When we work on the Cayley graph of a group, because
the action of the group is vertex-transitive, we may always suppose that a fixed base
point is the identity element up to a conjugation. In our situation, where X(P, w) is
the Cayley graph of a groupoid, the situation is slightly different, but it is nevertheless possible to do something similar. Indeed, if ∆ ∈ X(P, w) is a base point, then conjugating by ∆ sends ∆ to a trivial diagram, which does not necessarily belong to X(P, w): in general, it will belong to X(P, bot(∆)). At the same time, D(P, w) becomes D(P, bot(∆)), which is naturally isomorphic to D(P, w) according to Corollary 5.13. Finally, we get a commutative diagram in which the horizontal maps D(P, w) → D(P, bot(∆)) and X(P, w) → X(P, bot(∆)) are, respectively, an isomorphism and an isometry.
Thus, we may always suppose that a fixed base point is a trivial diagram up to changing the base word, which does not disturb either the group or the cube complex. In
particular, a contracting isometry stays a contracting isometry after the process.
Infinite diagrams. Basically, an infinite diagram is just a diagram with infinitely
many cells. To be more precise, if finite diagrams are thought of as specific planar
graphs, then an infinite diagram is the union of an increasing sequence of finite diagrams
with the same top path. Formally,
Definition 5.15. An infinite diagram is a formal concatenation ∆1 ◦∆2 ◦· · · of infinitely
many diagrams ∆1 , ∆2 , . . . up to the following identification: two formal concatenations
A1 ◦ A2 ◦ · · · and B1 ◦ B2 ◦ · · · define the same infinite diagram if, for every i ≥ 1, there
exists some j ≥ 1 such that A1 ◦ · · · ◦ Ai is a prefix of B1 ◦ · · · ◦ Bj , and conversely, for
every i ≥ 1, there exists some j ≥ 1 such that B1 ◦ · · · ◦ Bi is a prefix of A1 ◦ · · · ◦ Aj .
An infinite diagram ∆ = ∆1 ◦ ∆2 ◦ · · · is reduced if ∆1 ◦ · · · ◦ ∆n is reduced for every
n ≥ 1. If top(∆1 ) = w, we say that ∆ is an infinite w-diagram.
Notice that, according to Lemma 5.4, a combinatorial ray is naturally labelled by a
reduced infinite diagram. This explains why infinite diagrams will play a central role in
the description of the combinatorial boundaries of Farley cube complexes.
Definition 5.16. Let ∆ = ∆1 ◦ ∆2 ◦ · · · and Ξ = Ξ1 ◦ Ξ2 ◦ · · · be two infinite diagrams.
We say that ∆ is a prefix of Ξ if, for every i ≥ 1, there exists some j ≥ 1 such that
∆1 ◦ · · · ◦ ∆i is a prefix of Ξ1 ◦ · · · ◦ Ξj .
5.2 Combinatorial boundaries of Farley complexes
The goal of this section is to describe the combinatorial boundaries of Farley cube
complexes as a set of infinite reduced diagrams. First, we need to introduce a partial
order on the set of infinite diagrams. Basically, we will say that a diagram ∆1 is almost
a prefix of ∆2 if there exists a third diagram ∆0 which is a prefix of ∆1 and ∆2 such that
all but finitely many cells of ∆1 belong to ∆0 . Notice that, if we fix a decomposition of
a diagram ∆ as a concatenation of atomic diagrams, then to any cell π of ∆ corresponds
naturally a minimal prefix of ∆, namely the smallest prefix containing π (which is
precisely the union of all the cells of ∆ which are “above” π; more formally, using the
relation introduced by Definition 5.32, this is the union of all the cells π 0 of ∆ satisfying
π 0 ≺≺ π). Therefore, alternatively we can describe our order in terms of minimal
prefixes.
Definition 5.17. Let ∆1 and ∆2 be two infinite reduced diagrams. Then,
• ∆1 is almost a prefix of ∆2 if M(∆1) ⊂_a M(∆2);
• ∆1 and ∆2 are commensurable if M(∆1) =_a M(∆2).
Notice that two infinite reduced diagrams are commensurable if and only if each one is
almost a prefix of the other. Our model for the combinatorial boundaries of Farley cube
complexes will be the following:
Definition 5.18. If P = ⟨Σ | R⟩ is a semigroup presentation and w ∈ Σ+ a base word, let (∂(P, w), <_a) denote the poset of the infinite reduced (w, ∗)-diagrams, up to commensurability, ordered by the relation <_a of being almost a prefix.
Recall that to any combinatorial ray r ⊂ X(P, w) starting from (w) is naturally associated an infinite reduced (w, ∗)-diagram Ψ(r).
Theorem 5.19. The map Ψ induces a poset-isomorphism ∂ c X(P, w) → ∂(P, w).
The main tool to prove our theorem is provided by the following proposition, which is
an infinite analogue of Proposition 5.8. Recall that a collection of (finite) diagrams D is
consistent if, for every ∆1 , ∆2 ∈ D, there exists a third diagram ∆ satisfying ∆1 , ∆2 ≤ ∆.
Proposition 5.20. Let D be a countable consistent collection of finite reduced minimal (w, ∗)-diagrams. Then there exists a unique ≤-minimal reduced (w, ∗)-diagram
admitting any element of D as a prefix; let sup(D) denote this diagram. Furthermore, M(sup(D)) = ⋃_{D∈D} M(D).
Proof. Let D = {D1, D2, . . .}, with Di ≠ Dj if i ≠ j. For each i ≥ 1, let Hi denote the hyperplane of X(P, w) associated to Di. Because D is consistent, the halfspaces Hi+ and Hj+ intersect for all i, j ≥ 1. Therefore, for every k ≥ 1, the intersection Ik = ⋂_{i=1}^{k} Hi+ is non-empty; let ∆k denote the combinatorial projection of (w) onto Ik.
As a consequence of Proposition 2.5, there exists a combinatorial ray starting from (w)
and passing through each ∆k . In particular, for every k ≥ 1, ∆k is a prefix of ∆k+1 , so
that it makes sense to define the infinite diagram ∆ as the limit of the sequence (∆k ).
Notice that, for every k ≥ 1, ∆k ∈ Hk+ so Dk ≤ ∆k and a fortiori Dk ≤ ∆.
Now, let Ξ be another reduced w-diagram admitting each Dk as a prefix. Fixing a
decomposition of Ξ as an infinite concatenation of atomic diagrams, say A1 ◦ A2 ◦ · · · ,
we associate to Ξ a combinatorial ray r starting from (w) defined as the path
(w), A1 , A1 ◦ A2 , A1 ◦ A2 ◦ A3 . . .
Let n ≥ 1. Because D1 , . . . , Dn are prefixes of Ξ, there exists some k ≥ 1 such that
r(k) ∈ In . On the other hand, according to Proposition 2.5, there exists a combinatorial
geodesic between (w) and r(k) passing through ∆n , hence ∆n ≤ r(k). Therefore, we
have proved that, for every n ≥ 1, ∆n is a prefix of Ξ: this precisely means that ∆ is a
prefix of Ξ. The ≤-minimality and the uniqueness of ∆ is thus proved.
Because the Dk's are prefixes of ∆, the inclusion ⋃_{D∈D} M(D) ⊂ M(∆) is clear. Conversely, let D ∈ M(∆); let J denote the associated hyperplane. Because D is a prefix of ∆, it is a prefix of ∆k for some k ≥ 1; and because ∆k is the combinatorial projection of (w) onto Ik, we deduce that J separates (w) and Ik. On the other hand, Ik = ⋂_{i=1}^{k} Hi+, so there exists some 1 ≤ i ≤ k such that either J = Hi or J separates (w) and Hi; equivalently, D is a prefix of Di. We have proved M(∆) ⊂ ⋃_{D∈D} M(D).
The following lemma states how the operation sup behaves with respect to the inclusion,
the almost-inclusion and the almost-equality.
Lemma 5.21. Let D1 and D2 be two countable consistent collections of reduced (w, ∗)-diagrams.
(i) If D1 ⊂ D2 then sup(D1) ≤ sup(D2);
(ii) if D1 ⊂_a D2 then sup(D1) is almost a prefix of sup(D2);
(iii) if D1 =_a D2 then sup(D1) and sup(D2) are commensurable.
Proof. If D1 ⊂ D2 then sup(D2) admits any element of D1 as a prefix, hence sup(D1) ≤ sup(D2) by the ≤-minimality of sup(D1).
If D1 ⊂_a D2 then we can write
M(sup(D1)) = ⋃_{D∈D1} M(D) = ( ⋃_{D∈D1∩D2} M(D) ) ⊔ ( ⋃_{D∈D1\D2} M(D) ).
On the other hand, D1\D2 is finite, because D1 ⊂_a D2, and M(D) is finite for any finite diagram D, so ⋃_{D∈D1\D2} M(D) is finite. This proves that M(sup(D1)) ⊂_a M(sup(D2)), i.e., sup(D1) is almost a prefix of sup(D2).
If D1 =_a D2, we deduce by applying the previous point twice that sup(D1) is almost a prefix of sup(D2) and vice-versa. Therefore, sup(D1) and sup(D2) are commensurable.
Finally, Theorem 5.19 will be essentially a consequence of the previous lemma and the
following observation. If r is a combinatorial ray in a Farley complex, let D(r) denote
the set of the minimal diagrams which are associated to the hyperplanes of H(r). Then
Lemma 5.22. Ψ(r) = sup(D(r)).
Proof. Let A1 , A2 , A3 . . . be the sequence of atomic diagrams such that
r = ((w), A1 , A1 ◦ A2 , A1 ◦ A2 ◦ A3 , . . .).
In particular, Ψ(r) = A1 ◦ A2 ◦ · · · and M(Ψ(r)) = ⋃_{n≥1} M(A1 ◦ · · · ◦ An). On the other hand, any D ∈ D(r) must be a prefix of A1 ◦ · · · ◦ An for some n ≥ 1, and conversely, any minimal prefix of some A1 ◦ · · · ◦ An must belong to D(r). Therefore,
M(Ψ(r)) = ⋃_{n≥1} M(A1 ◦ · · · ◦ An) = M(sup(D(r))).
If ∆ is a finite prefix of Ψ(r), then M(∆) ⊂ M(Ψ(r)) = M(sup(D(r))), so ∆ = sup(M(∆)) must be a prefix of sup(D(r)) as well. Therefore, Ψ(r) ≤ sup(D(r)). Similarly, we prove that sup(D(r)) ≤ Ψ(r), hence Ψ(r) = sup(D(r)).
Proof of Theorem 5.19. First, we have to verify that, if r1 and r2 are two equivalent combinatorial rays starting from (w), then Ψ(r1) =_a Ψ(r2). Because r1 and r2 are equivalent, i.e., H(r1) =_a H(r2), we know that D(r1) =_a D(r2), which implies
Ψ(r1) = sup(D(r1)) =_a sup(D(r2)) = Ψ(r2)
according to Lemma 5.21 and Lemma 5.22. Therefore, Ψ induces a map ∂ c X(P, w) → ∂(P, w). For convenience, this map will also be denoted by Ψ.
If r1 ≺ r2, then H(r1) ⊂_a H(r2), hence
Ψ(r1) = sup(D(r1)) <_a sup(D(r2)) = Ψ(r2).
Therefore, Ψ is a poset-morphism.
Let ∆ be an infinite reduced (w, ∗)-diagram. Let A1 , A2 , . . . be a sequence of atomic
diagrams such that ∆ = A1 ◦A2 ◦· · · . Let ρ denote the combinatorial ray ((w), A1 , A1 ◦
A2 , . . .). Now it is clear that Ψ(ρ) = A1 ◦ A2 ◦ · · · = ∆, so Ψ is surjective.
Let r1 and r2 be two combinatorial rays starting from (w) and satisfying Ψ(r1) =_a Ψ(r2). It follows from Lemma 5.22 that sup(D(r1)) =_a sup(D(r2)). This means
⋃_{D∈D(r1)} M(D) = M(sup(D(r1))) =_a M(sup(D(r2))) = ⋃_{D∈D(r2)} M(D).
Notice that D(r2 ) is stable under taking a prefix. Indeed, if P is a prefix of some
∆ ∈ D(r2 ) then the hyperplane J associated to P separates (w) and the hyperplane
H associated to ∆; because H ∈ H(r2 ) and r2 (0) = (w), necessarily J ∈ H(r2 ), hence
P ∈ D(r2 ). As a consequence, we deduce that
D(r1)\D(r2) ⊂ ( ⋃_{D∈D(r1)} M(D) ) \ ( ⋃_{D∈D(r2)} M(D) ).
We conclude that D(r1 )\D(r2 ) is finite. We prove similarly that D(r2 )\D(r1 ) is finite.
Therefore, we have D(r1) =_a D(r2), which implies H(r1) =_a H(r2), so that r1 and r2 are
equivalent. We have proved that Ψ is injective.
Example 5.23. Let P = ⟨x | x = x²⟩ be the semigroup presentation usually associated to Thompson’s group F. An atomic diagram is said to be positive if the associated relation is x → x², and negative if the associated relation is x² → x; by extension, a diagram is positive (resp. negative) if it is a concatenation of positive (resp. negative) atomic diagrams. Notice that any reduced infinite (x, ∗)-diagram has an infinite positive prefix. On the other hand, there exists a reduced infinite positive (x, ∗)-diagram containing all the possible reduced infinite positive (x, ∗)-diagrams as prefixes: it corresponds to the derivation obtained by replacing at each step each x by x².
not contain any isolated vertex. We deduce from Theorem 4.17 that the automorphism
group Aut(X(P, x)) does not contain any contracting isometry.
Example 5.24. Let P = ⟨a, b, p1, p2, p3 | a = ap1, b = p1b, p1 = p2, p2 = p3, p3 = p1⟩ be the semigroup presentation usually associated to the lamplighter group Z ≀ Z. We leave it as an exercise to check that ∂(P, ab) has no isolated point. Therefore, Aut(X(P, ab))
does not contain any contracting isometry according to Theorem 4.17.
5.3 Contracting isometries
In this section, we fix a semigroup presentation P = ⟨Σ | R⟩ and a base word w ∈ Σ+. Our goal is to determine precisely when a spherical diagram g ∈ D(P, w) induces a contracting isometry on the Farley complex X(P, w). For convenience, we will suppose that g is absolutely reduced, i.e., for every n ≥ 1 the concatenation of n copies of g is reduced; this assumption is not really restrictive since, according to [GS97, Lemma 15.10], any spherical diagram is conjugated to an absolutely reduced spherical diagram (possibly for a different base word). In particular, we may define the infinite reduced
(w, ∗)-diagram g ∞ as g ◦ g ◦ · · · . Our main criterion is:
Theorem 5.25. Let g ∈ D(P, w)−{(w)} be absolutely reduced. Then g is a contracting
isometry of X(P, w) if and only if the following two conditions are satisfied:
(i) g ∞ does not contain any infinite proper prefix;
(ii) if ∆ is a reduced diagram with g ∞ as a prefix, then ∆ is commensurable to g ∞ .
This theorem will be essentially a consequence of the following two lemmas.
Lemma 5.26. Let g ∈ D(P, w) − {(w)} be absolutely reduced. Choose a combinatorial geodesic [(w), g] between (w) and g, and set γ = ⋃_{n∈Z} g^n · [(w), g]. Then γ is a bi-infinite combinatorial geodesic on which ⟨g⟩ acts by translation. Moreover, γ(+∞) = g∞.
Proof. Notice that γ is a bi-infinite combinatorial path passing through g n for every
n ∈ Z. Therefore, in order to deduce that γ is a geodesic, it is sufficient to show that,
for every n, m ∈ Z, the length of γ between g n and g n+m is equal to d(g n , g n+m ); but
this length is precisely
m · length([(w), g]) = m · d((w), g) = m · #(g),
and, on the other hand, because g is absolutely reduced,
d(g n , g n+m ) = #(g −n · g n+m ) = #(g m ) = m · #(g).
We conclude that γ is a combinatorial geodesic. The fact that ⟨g⟩ acts on γ by translation
is clear. Finally, we deduce that γ(+∞) = g ∞ from the fact that, for any k ≥ 1, we
have γ(k · #(g)) = g k .
Definition 5.27. Let ∆ be a possibly infinite (w, ∗)-diagram. The support of ∆ is the
set of maximal subpaths of w whose edges belong to the top path of some cell of ∆.
Lemma 5.28. Let ∆1 , . . . , ∆n be copies of an absolutely reduced spherical (w, w)-diagram
∆, where n > |w|. If e is an edge of ∆1 ◦ · · · ◦ ∆n which belongs to the bottom path of some cell of
∆1 , then e belongs to the top path of some cell in ∆1 ◦ · · · ◦ ∆n .
Proof. Say that an edge of ∆i is a cutting edge if it belongs to the bottom path of some
cell of ∆i but does not belong to the top path of some cell in ∆1 ◦ · · · ◦ ∆n . Suppose
that there exists a cutting edge e1 in ∆1 .
In particular, e1 is an edge of the top path of ∆2 , so e1 does not belong to the support
of ∆2. We deduce that ∆2 decomposes as a sum Φ2 + ε(e1) + Ψ2, where ε(e1) is the trivial diagram with top and bottom paths equal to e1. Let Φ1 + ε(e1) + Ψ1 denote the same decomposition of ∆1. Because e1 belongs to the bottom path of a cell of ∆1, two cases may happen: either e1 belongs to bot(Φ1), so that top(Φ2) ⊊ bot(Φ1), or e1 belongs to bot(Ψ1), so that top(Ψ2) ⊊ bot(Ψ1). Say we are in the former case, the latter case being similar.
Now, because ∆2 and ∆1 are two copies of the same diagram, to the edge e1 of ∆1
corresponds an edge e2 of ∆2 . Notice that, because e1 belongs to bot(Φ1 ), necessarily
e2 ≠ e1. Moreover, the product ∆^{−1} ◦ ∆1 ◦ · · · ◦ ∆n naturally reduces to a copy of
∆1 ◦ · · · ◦ ∆n−1 , and this process sends the edge e2 to the edge e1 . Therefore, because
e1 is a cutting edge in ∆1 , we deduce that e2 is a cutting edge in ∆2 .
By iterating this construction, we find a cutting edge ei in ∆i for every 1 ≤ i ≤ n, where
ei ≠ ej provided that i ≠ j. On the other hand, the path bot(∆1 ◦ · · · ◦ ∆n) necessarily
contains all these edges, hence n ≤ |w|. Consequently, ∆1 cannot contain a cutting edge
if n > |w|.
Proof of Theorem 5.25. Let γ be the combinatorial axis of g given by Lemma 5.26.
According to Theorem 4.17, g is a contracting isometry if and only if γ(+∞) is isolated
in ∂ c X(P, w). Thus, using the isomorphism given by Theorem 5.19, g is a contracting
isometry if and only if g ∞ is isolated in ∂(P, w), ie.,
• every infinite prefix of g ∞ is commensurable to g ∞ ;
• if ∆ is a reduced diagram with g ∞ as a prefix, then ∆ is commensurable to g ∞ .
Therefore, to conclude it is sufficient to prove that a proper infinite prefix ∆ of g ∞
cannot be commensurable to g ∞ . Because ∆ is a proper prefix, there exists a cell π1 of
g ∞ which does not belong to ∆. Now, by applying Lemma 5.28 to a given edge of the
bottom path of π1 , we find a cell π2 whose top path intersects the bottom path of π1
along at least one edge; by applying Lemma 5.28 to a given edge of the bottom path of
π2 , we find a cell π3 whose top path intersects the bottom path of π2 along at least one
edge; and so on. Finally, we find an infinite sequence of cells π2 , π3 , . . . such that, for
every i ≥ 2, the top path of πi has a common edge with the bottom path of πi−1 . For
every i ≥ 1, let Ξi denote the smallest (finite) prefix of g ∞ which contains the cell πi .
This is a minimal diagram, and by construction Ξ1, Ξ2, . . . ∉ M(∆), which proves that
∆ and g ∞ are not commensurable.
Example 5.29. Let P = ⟨a, b, p | a = ap, b = pb⟩ and let g ∈ D(P, ab) be the
spherical diagram:
Then g ∞ clearly contains a proper infinite prefix. Therefore, g is not a contracting
isometry of X(P, ab).
Example 5.30. Let P = ⟨a, b, c | a = b, b = c, c = a⟩ and let g ∈ D(P, a²) be the following spherical diagram:
Then g∞ is a prefix of the diagram ∆ below, but g∞ and ∆ are not commensurable. Therefore, g is not a contracting isometry of X(P, a²).
Example 5.31. Let P = ⟨a, b, c, d | ab = ac, cd = bd⟩ and let g ∈ D(P, ab) be the following spherical diagram:
So the infinite diagram g∞ looks like
We can notice that g∞ does not contain a proper infinite prefix and that any diagram containing g∞ as a prefix is necessarily equal to g∞.
The criterion provided by Theorem 5.25 may be considered as unsatisfactory, because we cannot draw an infinite diagram in practice (although it seems to be sufficient for spherical diagrams with only a few cells, as suggested by the above examples). We conclude this section by stating and proving conditions equivalent to the assertions (i) and (ii) of Theorem 5.25. For convenience, we will suppose our spherical diagram absolutely reduced and normal, i.e., the factors of every decomposition as a sum are spherical; this assumption is not really restrictive since, according to [GS97, Lemma 15.13], every spherical diagram is conjugated to a normal absolutely reduced spherical diagram (possibly for a different base word), and this new diagram can be found “effectively” [GS97, Lemma 15.14]. We begin with the condition (ii). The following definition will be needed.
Definition 5.32. Given a diagram ∆, we introduce a partial relation ≺ on the set of its
cells, defined by: if π, π′ are two cells of ∆, then π ≺ π′ holds if the intersection between the bottom path of π and the top path of π′ contains at least one edge. Let ≺≺ denote
the transitive closure of ≺.
Lemma 5.33. Let g ∈ D(P, w) − {(w)} be absolutely reduced. Suppose that we can
write w = xyz where supp(g) = {y}. The following two conditions are equivalent:
(i) ∂(P, x) and ∂(P, z) are empty;
(ii) if ∆ is a reduced diagram with g ∞ as a prefix, then ∆ is commensurable to g ∞ .
Proof. First, notice that g = (x) + g0 + (z) for some (y, y)-diagram g0 . This assertion
clearly holds for some (y, ∗)-diagram g0 , but, because g is a (w, w)-diagram, the equality
xyz = w = top(g) = bot(g) = x · bot(g0 ) · z
holds in Σ+ . Therefore, bot(g0 ) = y, and we conclude that g0 is indeed a (y, y)-diagram.
Now, we prove (i) ⇒ (ii). Let ∆ be a reduced diagram containing g ∞ as a prefix.
Suppose that π is a cell of ∆ such that there exists a cell π0 of g ∞ satisfying π0 ≺≺ π.
So there exists a chain of cells π0 ≺ π1 ≺ · · · ≺ πn = π. By definition, π1 shares an edge
with π0 . On the other hand, it follows from Lemma 5.28 that this edge belongs to the
top path of a cell of g ∞ , hence π1 ⊂ g ∞ . By iterating this argument, we conclude that
π belongs to g ∞ . Therefore, if Ξ denotes the smallest prefix of ∆ containing a given cell
which does not belong to g ∞ , then supp(Ξ) is included into x or z.
As a consequence, there exist an (x, ∗)-diagram Φ and a (z, ∗)-diagram Ψ such that ∆ = Φ + g0∞ + Ψ. On the other hand, because ∂(P, x) and ∂(P, z) are empty, necessarily #Φ, #Ψ < +∞. Therefore,
|M(∆)\M(g∞)| = |M(Φ + (yz)) ⊔ M((xy) + Ψ)| = #Φ + #Ψ < +∞,
where we used Fact 5.10. Therefore, ∆ is commensurable to g ∞ .
Conversely, we prove (ii) ⇒ (i). Let Φ be a reduced (x, ∗)-diagram and Ψ a reduced
(z, ∗)-diagram. Set ∆ = Φ + g0∞ + Ψ. This is a reduced diagram containing g ∞ as a
prefix. As above, we have
|M(∆)\M(g ∞ )| = #Φ + #Ψ.
Because ∆ must be commensurable to g ∞ , we deduce that Φ and Ψ are necessarily
finite. We conclude that ∂(P, x) and ∂(P, z) are empty.
Now, we focus on the condition (i) of Theorem 5.25. We first introduce the following
definition; explicit examples are given at the end of this section.
Definition 5.34. Let g ∈ D(P, w) − {(w)} be absolutely reduced. A subpath u of w is
admissible if there exists a prefix ∆ ≤ g such that supp(∆) = {u} and bot(∆) ∩ bot(g) is
a non-empty connected subpath of bot(g); the subpath of w corresponding to bot(∆) ∩
bot(g) is a final subpath associated to u; it is proper if this is not the whole bot(g).
Given a subpath u, we define its admissible tree Tg (u) as the rooted tree constructed
inductively as follows:
• the root of Tg (u) is labeled by u;
• if v is a non-admissible subpath of w labelling a vertex of Tg (u), then v has only
one descendant, labeled by the symbol ∅;
• if v is an admissible subpath of w labelling a vertex of Tg (u), then the descendants
of v correspond to the proper final subpaths associated to v.
If w has length n, we say that Tg (u) is deep if it contains at least α(w) + 1 generations,
where α(w) = n(n+1)/2 is chosen so that w contains at most α(w) subpaths.
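For concreteness, consider the base word of Example 5.39 below: for w = ab⁴ we have n = |w| = 5, hence
α(w) = 5 · 6 / 2 = 15,
and indeed ab⁴ has exactly 5 + 4 + 3 + 2 + 1 = 15 non-empty subpaths, so Tg(u) is deep as soon as it contains at least 16 generations.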
Lemma 5.35. Let g ∈ D(P, w) − {(w)} be absolutely reduced. The following two
conditions are equivalent:
(i) g∞ contains an infinite proper prefix;
(ii) there exists a subpath u of w such that Tg (u) is deep.
Proof. For convenience, let g ∞ = g1 ◦ g2 ◦ · · · where each gi is a copy of g.
We first prove (i) ⇒ (ii). Suppose that g∞ contains an infinite proper prefix Ξ. For
every i ≥ 1, Ξ induces a prefix Ξi of gi . Up to taking an infinite prefix of Ξ, we can
suppose that supp(Ξi ) and bot(Ξi ) ∩ bot(gi ) are connected subpaths of top(gi ) and
bot(gi ) respectively. Thus, if ξi denotes the word labeling supp(Ξi ) for every i ≥ 1, then
the sequence (ξi ) defines an infinite descending ray in the rooted tree Tg (ξ1 ). A fortiori,
Tg (ξ1 ) is deep.
Conversely, we prove (ii) ⇒ (i). Suppose that there exists a subword u of w such that
Tg (u) is deep. Because w contains at most α(w) subwords, necessarily Tg (u) contains
twice a proper subword ξ of supp(g) in a same lineage. Let ξ = ξ0 , . . . , ξn = ξ be the
geodesic between these two vertices in Tg (u). For every 0 ≤ i ≤ n − 1, there exists
a prefix Ξi of g such that supp(Ξi) = {ξi} and bot(Ξi) ∩ bot(g) is a connected path labeled by ξi+1. In particular, Ξi = (xi) + Ξ̃i + (yi) for some words xi, yi ∈ Σ+ and some (ξi, pi ξi+1 qi)-diagram Ξ̃i. Now, define
Ξ′i+1 = (x0 p0 · · · pi) + Ξ̃i+1 + (qi · · · q0 y0) for i ≥ 0.
Notice that Ξ′i+1 is a (x0 p0 · · · pi ξi qi · · · q0 y0, x0 p0 · · · pi+1 ξi+1 qi+1 · · · q0 y0)-diagram, so that the concatenation Ξ = Ξ0 ◦ Ξ′1 ◦ · · · ◦ Ξ′n−1 is well-defined.
We have proved that g n contains a prefix Ξ such that supp(Ξ) = {ξ} and bot(Ξ)∩bot(g n )
is the same subpath ξ.
Say that Ξ is a (xξy, xpξqy)-diagram, for some words x, y, p, q ∈ Σ+. In particular, Ξ decomposes as a sum (x) + Ξ̃ + (y) where Ξ̃ is a (ξ, pξq)-diagram. For every i ≥ 0, set
∆i = (xp^i) + Ξ̃ + (q^i y).
Finally, the concatenation ∆ = ∆0 ◦ ∆1 ◦ · · · defines an infinite prefix of g∞. Moreover,
supp(∆) = supp(Ξ) = {ξ} ⊊ supp(g) ⊂ supp(g∞),
so ∆ is a proper prefix of g ∞ .
By combining Lemma 5.33 and Lemma 5.35 with Theorem 5.25, we obtain the following
new criterion.
Proposition 5.36. Let g ∈ D(P, w) − {(w)} be normal and absolutely reduced. Then
g is a contracting isometry of X(P, w) if and only if the following two conditions are
satisfied:
• the support of g is reduced to {w1 } for some subword w1 of w, and, if we write
w = xw1 y, then ∂(P, x) and ∂(P, y) are empty;
• for every subpath u of w, Tg (u) is not deep.
Proof. According to Theorem 5.25, g is contracting if and only if
(i) g ∞ does not contain any infinite proper prefix;
(ii) if ∆ is a reduced diagram with g ∞ as a prefix, then ∆ is commensurable to g ∞ .
Now, Lemma 5.35 states that the condition (i) is equivalent to
(iii) for every subpath u of w, Tg (u) is not deep;
and Lemma 5.33 implies that the condition (ii) is a consequence of
(iv) the support of g is reduced to {w1 } for some subword w1 of w, and, if we write
w = xw1 y, then ∂(P, x) and ∂(P, y) are empty.
To conclude the proof, it is sufficient to show that (iv) is a consequence of (i) and (ii).
So suppose that (i) and (ii) hold. If supp(g) is not reduced to a single subpath, then g
decomposes as a sum of non-trivial diagrams, and because g is normal, it decomposes
as a sum of non-trivial spherical diagrams, say g = g1 + g2 . Now, since g1 and g2 are
spherical, g ∞ = g1∞ + g2∞ , so g ∞ clearly contains an infinite proper prefix, contradicting
(i). Therefore, supp(g) reduces to a single subpath, and Lemma 5.33 implies that (iv)
holds.
Remark 5.37. The first condition of Proposition 5.36 is obviously satisfied if supp(g) =
{w}. Because this case happens often, we will say that spherical diagrams satisfying this
property are full.
Example 5.38. The group Z • Z = ⟨a, b, t | [a, b^{t^n}] = 1, n ≥ 0⟩, introduced in [GS97, Section 8], is isomorphic to the diagram group D(P, ab), where
P = ⟨a1, a2, a3, p, b1, b2, b3 | a1 = a1p, a1 = a2, a2 = a3, a3 = a1, b1 = pb1, b1 = b2, b2 = b3, b3 = b1⟩.
Let g ∈ D(P, ab) be the following spherical diagram:
This is a full, normal, and absolutely reduced diagram. Moreover, a1b1 is the only admissible subpath, and the admissible tree Tg(a1b1) has root a1b1 with a descendant labeled b1; in particular, it is not deep.
Therefore, g is a contracting isometry of X(P, ab). In particular, Z • Z is acylindrically
hyperbolic.
Example 5.39. Let P = ⟨a, b | ab = ba, ab² = b²a⟩ and let g ∈ D(P, ab⁴) be the following spherical diagram:
This is a full, normal, and absolutely reduced diagram. Moreover, ab⁴ is the only admissible subpath, and the admissible tree Tg(ab⁴) has vertices labeled ab⁴ (the root), b⁴, b and two leaves labeled ∅; in particular, it is not deep. We conclude that g is a contracting isometry of X(P, ab⁴).
5.4 Some examples
In this section, we exhibit some interesting classes of acylindrically hyperbolic diagram
groups by applying the results proved in the previous section. In Example 5.40, we
consider a family of groups U (m1 , . . . , mn ) already studied in [GS97] and prove that
they are acylindrically hyperbolic. Morever, we notice that each Ln = U (1, . . . , 1) turns
out to be naturally a subgroup of a right-angled Coxeter group. In Example 5.41, we
show that the •-product, as defined in [GS97], of two non trivial diagram groups is
acylindrically hyperbolic but not relatively hyperbolic. Finally, in Example 5.43, we
prove that some diagram products, as defined in [GS99], of diagram groups turn out
to be acylindrically hyperbolic. As a by-product, we also exhibit a cocompact diagram
group which is not isomorphic to a right-angled Artin group.
Example 5.40. Let Pn = ⟨x1, . . . , xn | xi xj = xj xi, 1 ≤ i < j ≤ n⟩ and let U(m1, . . . , mn) denote the diagram group D(Pn, x1^{m1} · · · xn^{mn}), where mi ≥ 1 for every 1 ≤ i ≤ n. Notice
that, for every permutation σ ∈ Sn , U (mσ(1) , . . . , mσ(n) ) is isomorphic to U (m1 , . . . , mn );
therefore, we will always suppose that m1 ≤ · · · ≤ mn . For example, the groups U (p, q, r)
were considered in [GS97, Example 10.2] (see also [Gen15a, Example 5.3] and [Gen15b,
Example 5.11]). In particular, we know that
• U (m1 , . . . , mn ) is trivial if and only if n ≤ 2;
• U (m1 , . . . , mn ) is infinite cyclic if and only if n = 3 and m1 = m2 = m3 = 1;
• U (m1 , . . . , mn ) is a non-abelian free group if and only if n = 3 and m1 = 1, or
n = 4 and m1 = m2 = m3 = 1, or n = 5 and m1 = m2 = m3 = m4 = m5 = 1.
We claim that, whenever it is not cyclic, the group U (m1 , . . . , mn ) is acylindrically
hyperbolic. For instance, if g ∈ U (1, 1, 2) denotes the spherical diagram associated to
the following derivation
x1 x2 x3 x3 → x2 x1 x3 x3 → x2 x3 x1 x3 → x3 x2 x1 x3 → x3 x1 x2 x3 → x1 x3 x2 x3 → x1 x2 x3 x3 ,
then g is a contracting isometry of the associated Farley cube complex. (The example
generalizes easily to other cases, although writing an argument in full generality is laborious.) In particular, we deduce that U(m1, . . . , mn) does not split as a non-trivial direct product.
An interesting particular case is the group Ln , corresponding to U (1, . . . , 1) with n ones.
A first observation is that Ln is naturally a subgroup of U (m1 , . . . , mn ), and conversely,
U (m1 , . . . , mn ) is naturally a subgroup of Lm1 +···+mn . Secondly, Ln can be described as
a group of pseudo-braids; this description is made explicit by the following example of
a diagram with its associated pseudo-braid:
Of course, we look at our pseudo-braids up to the following move, corresponding to the
reduction of a dipole in the associated diagram:
Finally, if Cn denotes the group of all pictures of pseudo-braids, endowed with the
obvious concatenation, then Ln corresponds to the “pure subgroup” of Cn . It is clear that,
if σi corresponds to the element of Cn switching the i-th and (i + 1)-th braids, then a
presentation of Cn is
⟨σ1, . . . , σn−1 | σi² = 1, [σi, σj] = 1 if |i − j| ≥ 2⟩.
Alternatively, Cn is isomorphic to the right-angled Coxeter group C(P̄n−1), where P̄n−1 is the complement of the graph Pn−1, which is a segment with n − 1 vertices, i.e., P̄n−1 is the graph with the same vertices as Pn−1 but whose edges link two vertices precisely when they are not adjacent in Pn−1. In particular, Ln is naturally a finite-index subgroup of C(P̄n−1).
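As a quick sanity check on this presentation, take n = 3: there is no commutation relation since |1 − 2| < 2, so
C3 = ⟨σ1, σ2 | σ1² = σ2² = 1⟩ ≅ Z/2 ∗ Z/2,
the infinite dihedral group; this matches the description above, since the complement of a segment with two vertices has no edges, and the right-angled Coxeter group of an edgeless graph on two vertices is precisely Z/2 ∗ Z/2.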
With this interpretation, the group U (m1 , . . . , mn ) corresponds to the subgroup of Lm ,
where m = m1 + · · · + mn , defined as follows: Color the m braids with n colors, such
that, for every 1 ≤ i ≤ n, there are mi braids with the color i. Now, U (m1 , . . . , mn )
is the subgroup of the pure pseudo-braids where two braids of the same color are never
switched.
Therefore, the groups U (m1 , . . . , mn ) turn out to be natural subgroups of right-angled
Coxeter groups.
Example 5.41. Let G and H be two groups. We define the product G • H by the
relative presentation
⟨G, H, t | [g, h^t ] = 1, g ∈ G, h ∈ H⟩.
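For instance (a direct specialization of the definition), if G = ⟨a⟩ and H = ⟨b⟩ are both infinite cyclic, then every defining relation [a^m , (b^n )^t ] = 1 already follows from the single relation [a, b^t ] = 1, since an element commuting with b^t commutes with all of its powers; hence Z • Z ≅ ⟨a, b, t | [a, b^t ] = 1⟩.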
As proved in [GS97, Theorem 8.6], the class of diagram groups is stable under the
•-product. More precisely, let P1 = hΣ1 | R1 i, P2 = hΣ2 | R2 i be two semigroup
presentations and w1 , w2 two base words. For i = 1, 2, suppose that there do not exist two words x and y such that xy is non-empty and wi = xwi y; as explained during the proof of [GS97, Theorem 8.6], we may suppose that this condition holds up to a small
modification of Pi which does not modify the diagram group. Then, if
P = hΣ1 t Σ2 t {p} | R1 t R2 t {w1 = w1 p, w2 = pw2 }i,
then D(P, w1 w2 ) is isomorphic to D(P1 , w1 ) • D(P2 , w2 ) [GS97, Lemma 8.5]. Now, we
claim that, whenever D(P1 , w1 ) and D(P2 , w2 ) are non-trivial, the product D(P1 , w1 ) •
D(P2 , w2 ) is acylindrically hyperbolic. Indeed, if Γ ∈ D(P1 , w1 ) and ∆ ∈ D(P2 , w2 ) are
non-trivial, then the spherical diagram
of D(P, w1 w2 ) is a contracting isometry. On the other hand, D(P1 , w1 ) • D(P2 , w2 )
contains a copy of Z • Z, so it cannot be cyclic. We conclude that D(P1 , w1 ) • D(P2 , w2 )
is indeed acylindrically hyperbolic.
In fact, the product G • H can be described as an HNN extension: if H ∞ denotes the free product of infinitely many copies of H, then G • H is isomorphic to the HNN extension of G × H ∞ over H ∞ with respect to two monomorphisms H ∞ ↪ G × H ∞ associated to Hi ↦ Hi and Hi ↦ Hi+1 . Now, if t denotes the stable letter of the HNN extension and if h ∈ H ∞ is a non-trivial element of the first copy of H in H ∞ , then it follows from Britton’s lemma that H ∞ ∩ (H ∞ )^{t−1 ht} = {1}. Therefore, H ∞ is a weakly
malnormal subgroup, and it follows from [MO13, Corollary 2.3] that, if G and H are
both non-trivial, the product G • H is acylindrically hyperbolic. It is interesting that:
Fact 5.42. If G and H are torsion-free, then any proper malnormal subgroup of G • H
is a free group.
Indeed, let K be a malnormal subgroup of G • H. Suppose that K contains a non-trivial
element of G × H ∞ , say gh. Then h ∈ K and g ∈ K since
⟨gh⟩ = ⟨gh⟩ ∩ ⟨gh⟩^h ⊂ K ∩ K^h and ⟨gh⟩ = ⟨gh⟩ ∩ ⟨gh⟩^g ⊂ K ∩ K^g .
If h is not trivial, then G ⊂ K since
⟨h⟩ = ⟨h⟩ ∩ ⟨h⟩^G ⊂ K ∩ K^G ;
then, we deduce that H ∞ ⊂ K since
G = G ∩ G^{H∞} ⊂ K ∩ K^{H∞} .
Therefore, G × H ∞ ⊂ K. Similarly, if h is trivial then g is non-trivial, and we deduce
that H ∞ ⊂ K from
⟨g⟩ = ⟨g⟩ ∩ ⟨g⟩^{H∞} ⊂ K ∩ K^{H∞} .
We also notice that G ⊂ K since
H ∞ = H ∞ ∩ (H ∞ )^G ⊂ K ∩ K^G .
Consequently, G × H ∞ ⊂ K. Thus, we have proved that a malnormal subgroup of G • H either intersects G × H ∞ trivially or contains G × H ∞ . If K intersects G × H ∞ trivially, the action of K on the Bass-Serre tree of G • H, associated to the decomposition as an HNN extension mentioned above, is free, so K is necessarily free. Otherwise, K
contains G × H ∞ . If t denotes the stable letter of our HNN extension, then
H2 ∗ H3 ∗ · · · = H ∞ ∩ (H ∞ )t ⊂ K ∩ K t ,
where we used the notation H ∞ = H1 ∗ H2 ∗ H3 ∗ · · · , such that each Hi is a copy of H.
Therefore, t ∈ K, and we conclude that K is not a proper subgroup. The fact is proved.
As a consequence, •-products of two non-trivial diagram groups are good examples of acylindrically hyperbolic groups in the sense that they are not relatively hyperbolic. Indeed, if a group is hyperbolic relative to a collection of subgroups, then this collection must be malnormal (see for instance [Osi06, Theorem 1.4]), so that a relatively hyperbolic •-product of two torsion-free groups must be hyperbolic relative to a collection of free groups according to the previous fact. On the other hand, such a group must be hyperbolic (see for instance [Osi06, Corollary 2.41]), which is impossible since we know that a •-product of two non-trivial torsion-free groups contains a subgroup isomorphic to Z2 .
Example 5.43. Let P = hΣ | Ri be a semigroup presentation and w ∈ Σ+ a base word.
Fixing a family of groups GΣ = {Gs | s ∈ Σ}, we define the diagram product D(GΣ , P, w)
as the fundamental group of the following 2-complex of groups:
• the underlying 2-complex is the 2-skeleton of the Squier complex S(P, w);
• to any vertex u = s1 · · · sr ∈ Σ+ is associated the group Gu = Gs1 × · · · × Gsr ;
• to any edge e = (a, u → v, b) is associated the group Ge = Ga × Gb ;
• to any square is associated the trivial group;
• for every edge e = (a, u → v, b), the monomorphisms Ge → Gaub and Ge → Gavb
are the canonical maps Ga × Gb → Ga × Gu × Gb and Ga × Gb → Ga × Gv × Gb .
Guba and Sapir proved that the class of diagram groups is stable under diagram products [GS99, Theorem 4]. Explicitly,
Theorem 5.44. If Gs = D(Ps , ws ) where Ps = hΣs | Rs i is a semigroup presentation
and ws ∈ Σs+ a base word, then the diagram product D(GΣ , P, w) is the diagram group D(P̄, w), where P̄ is the semigroup presentation
⟨Σ ⊔ Σ̄ ⊔ Σ0 | R ⊔ R̄ ⊔ S⟩
where Σ0 = {as | s ∈ Σ} is a copy of Σ, Σ̄ = ⊔s∈Σ Σs , R̄ = ⊔s∈Σ Rs and finally S = {s = as ws as | s ∈ Σ}.
Of course, any semigroup diagram with respect to P is a semigroup diagram with respect
to P̄. Thus, if D(P, w) contains a normal, absolutely reduced, spherical diagram which
does not contain a proper infinite prefix and whose support is {w}, then any diagram
product over (P, w) will be acylindrically hyperbolic or cyclic. (Notice that such a
diagram product contains D(P, w) as a subgroup, so it cannot be cyclic if D(P, w) is
not cyclic itself.)
For instance, let P = hx, y, z, a, b | yz = az, xa = xb, b = yi. Then D(P, xyz) contains
the following spherical diagram, which is a contracting isometry of X(P, xyz):
By the above observation, if G denotes the diagram product over (P, xyz) with Gx = Gz = Z and Gy = Ga = Gb = {1}, then G is acylindrically hyperbolic. In this case, the
Squier complex S(P, xyz) is just a cycle of length three. By computing a presentation
of the associated graph of groups, we find
G = hx, y, t | [x, y] = [x, y t ] = 1i.
This example is interesting because this is a cocompact diagram group which is not a
right-angled Artin group. Let us prove this assertion. From the presentation of G, it
is clear that its abelianization is Z3 and we deduce from [Bro82, Exercise II.5.5] that
the rank of H2 (G) is at most two (the number of relations of our presentation). On the
other hand, if A(Γ) denotes the right-angled Artin group associated to a given graph
Γ, then H1 (A(Γ)) = Z#V (Γ) and H2 (A(Γ)) = Z#E(Γ) , where V (Γ) and E(Γ) denote
the set of vertices and edges of Γ respectively. Therefore, the only right-angled Artin
groups which might be isomorphic to G must correspond to a graph with three vertices
and at most two edges. We distinguish two cases: either this graph is not connected
or it is a segment of length two. In the former case, the right-angled Artin group
decomposes as a free product, and in the latter, it is isomorphic to F2 × Z. Thus, it is
sufficient to show that G is freely irreducible and is not isomorphic to F2 × Z in order to
deduce that G cannot be isomorphic to a right-angled Artin group. First, notice that,
because G is acylindrically hyperbolic, its center must be finite (see [Osi13, Corollary
7.3]), so G cannot be isomorphic to F2 × Z. Next, suppose that G decomposes as a free
product. Because the centralizer of x is not virtually cyclic, we deduce that x belongs
to a conjugate A of some free factor; in fact, A must contain the whole centralizer of x,
hence y, y t ∈ A. As a consequence, y t ∈ A ∩ At , so that A ∩ At must be infinite. Since
a free factor is a malnormal subgroup, we deduce that t ∈ A. Finally, x, y, t ∈ A hence
A = G. Therefore, G does not contain any proper free factor, i.e., G is freely irreducible.
This concludes the proof of our assertion.
To conclude, it is worth noticing that the whole diagram product may contain contracting isometries even if the underlying diagram group (i.e., the diagram group obtained from the semigroup presentation and the word base, forgetting the factor-groups)
does not contain such isometries on its own Farley cube complex. For instance, if
P = ha, b, p | a = ap, b = pbi, w = ab, Gp = {1} and Ga = D(P1 , w1 ), Gb = D(P2 , w2 )
are non-trivial, then our diagram product is isomorphic to Ga • Gb (see [GS99, Example
7]), and the description of Ga • Gb as a diagram group is precisely the one we mentioned
in the previous example, where we saw that the action on the associated Farley cube
complex admits contracting isometries. On the other hand, X(P, ab) is a “diagonal
half-plane”, so that its combinatorial boundary does not contain any isolated point: a
fortiori, X(P, ab) cannot admit a contracting isometry.
5.5 Cocompact case
In this section, we focus on cocompact diagram groups, i.e., we will consider a semigroup
presentation P = hΣ | Ri and a word base w ∈ Σ+ whose class modulo P is finite.
Roughly speaking, we want to prove that D(P, w) contains a contracting isometry if
and only if it does not split as a direct product in a natural way, which we describe now.
If there exists a (w, u1 v1 · · · un vn un+1 )-diagram Γ, for some words u1 , v1 , . . . , un , vn , un+1 ∈
Σ+ , then we have the natural map
D(P, v1 ) × · · · × D(P, vn ) → D(P, w), (V1 , . . . , Vn ) ↦ Γ · ((u1 ) + V1 + · · · + (un ) + Vn + (un+1 )) · Γ−1 .
This is a monomorphism, and we denote by Γ · D(P, v1 ) × · · · × D(P, vn ) · Γ−1 its image.
A subgroup of this form will be referred to as a canonical subgroup; if furthermore at least two factors are non-trivial, then it will be called a large canonical subgroup.
The main result of this section is the following decomposition theorem (cf. Theorem
1.11 in the introduction).
Theorem 5.45. Let P = hΣ | Ri be a semigroup presentation and w ∈ Σ+ a base word
whose class modulo P is finite. Then there exist some words u0 , u1 , . . . , um ∈ Σ+ and a
(w, u0 u1 · · · um )-diagram Γ such that
D(P, w) = Γ · (D(P, u0 ) × D(P, u1 ) × · · · × D(P, um )) · Γ−1 ,
where D(P, ui ) is either infinite cyclic or acylindrically hyperbolic for every 1 ≤ i ≤ m,
and m ≤ dimalg D(P, w) ≤ dim X(P, w).
The acylindrical hyperbolicity will follow from the existence of a contracting isometry
in the corresponding Farley cube complex. In order to understand what happens when
such an isometry does not exist, we will use the following dichotomy, which is a slight
modification, in the cocompact case, of the Rank Rigidity Theorem proved in [CS11].
Theorem 5.46. Let G be a group acting geometrically on an unbounded CAT(0) cube
complex X. Either G contains a contracting isometry of X or there exists a G-invariant
convex subcomplex Y ⊂ X which splits as the cartesian product of two unbounded subcomplexes.
Proof. Recall that a hyperplane of X is essential if no G-orbit lies in a neighborhood
of some halfspace delimited by this hyperplane; we will denote by E(X) the set of the
essential hyperplanes of X. Let Y ⊂ X denote the essential core associated to the
action G y X, which is defined as the restriction quotient of X with respect to E(X).
See [CS11, Section 3.3] for more details. According to [CS11, Proposition 3.5], Y is a
G-invariant convex subcomplex of X.
It is worth noticing that, because the action G y X is cocompact, a hyperplane J of
X does not belong to E(X) if and only if one of the halfspaces it delimits lies in the
R(J)-neighborhood of N (J) for some R(J) ≥ 1. Set R = max{R(J) | J ∈ H(X)}. Because there exist only finitely many orbits of hyperplanes, R is finite.
We claim that two hyperplanes of Y are well-separated in Y if and only if they are
well-separated in X.
Of course, two hyperplanes of Y which are well-separated in X are well-separated in
Y . Conversely, let J1 , J2 ∈ H(Y ) be two hyperplanes and fix some finite collection
H ⊂ H(X) of hyperplanes intersecting both J1 , J2 which does not contain any facing
triple. We write H = {H0 , . . . , Hn } and we fix a halfspace Hi+ delimited by Hi for every
0 ≤ i ≤ n, so that H0+ ⊃ H1+ ⊃ · · · ⊃ Hn+ . If n ≤ 2R, there is nothing to prove.
Otherwise, by our choice of R, it is clear that the hyperplanes HR , . . . , Hn−R belong to
E(X), and a fortiori to H(Y ). Consequently, if J1 and J2 are L-well-separated in Y ,
then they are (L + 2R)-well-separated in X.
Now we are ready to prove our theorem. Because the action G y Y is geometric
and essential (i.e., every hyperplane of Y is essential), it follows from [CS11, Theorem 6.3] that either Y splits non-trivially as a cartesian product of two subcomplexes or G
contains a contracting isometry of Y . In the former case, notice that the factors cannot
be bounded because the action G y Y is essential; in the latter case, we deduce that
the contracting isometry of Y induces a contracting isometry of X by combining our
previous claim with Theorem 3.13. This concludes the proof.
Therefore, the problem reduces to understanding the subproducts of X(P, w). This is achieved by the next proposition.
Proposition 5.47. Let P = hΣ | Ri be a semigroup presentation and w ∈ Σ+ a base
word. Let Y ⊂ X(P, w) be a convex subcomplex which splits as the cartesian product of
two subcomplexes A, B ⊂ Y . Then there exist a word w0 ∈ Σ+ , a (w, w0 )-diagram Γ and
words u1 , v1 , . . . , uk , vk ∈ Σ+ for some k ≥ 1 such that w0 = u1 v1 · · · uk vk in Σ+ and
A ⊂ Γ · X(P, u1 ) × · · · × X(P, uk ) · Γ−1 and B ⊂ Γ · X(P, v1 ) × · · · × X(P, vk ) · Γ−1 .
Furthermore,
Y ⊂ Γ · X(P, u1 ) × X(P, v1 ) × · · · × X(P, uk ) × X(P, vk ) · Γ−1 .
In the previous statement, Y is a Cartesian product of the subcomplexes A and B in the sense that there exists an isomorphism ϕ : A × B → Y which commutes with the inclusions A, B, Y ↪ X. More explicitly, we have a commutative diagram in which A and B embed into A × B, the isomorphism ϕ sends A × B onto Y , and the inclusions of A, B, and Y into X are compatible with these maps.
Proof of Proposition 5.47. Up to conjugating by a diagram Γ of A ∩ B, we may
assume without loss of generality that (w) ∈ A ∩ B. First, we claim that, for every
∆1 ∈ A and ∆2 ∈ B, supp(∆1 ) ∩ supp(∆2 ) = ∅.
Suppose by contradiction that there exist some ∆1 ∈ A and ∆2 ∈ B satisfying supp(∆1 ) ∩ supp(∆2 ) ≠ ∅. Without loss of generality, we may suppose that we chose a counterexample minimizing #∆1 + #∆2 . Because supp(∆1 ) ∩ supp(∆2 ) ≠ ∅, there exist two cells
π1 ⊂ ∆1 , π2 ⊂ ∆2 and an edge e ⊂ w, where w is thought of as a segment representing
both top(∆1 ) and top(∆2 ), such that e ⊂ top(π1 ) ∩ top(π2 ). Notice that πi is the only
cell of the maximal thin suffix of ∆i for i = 1, 2. Indeed, if this was not the case, taking
the minimal prefixes P1 , P2 of ∆1 , ∆2 containing π1 , π2 respectively, we would produce
two diagrams satisfying supp(P1 ) ∩ supp(P2 ) ≠ ∅ and #P1 + #P2 < #∆1 + #∆2 . Moreover, because Y is convex, necessarily the factors A, B have to be convex as well, so that A and B, as sets of diagrams, have to be closed under taking prefixes since (w) ∈ A ∩ B
and according to the description of the combinatorial geodesics given by Lemma 5.4;
this implies that P1 ∈ A and P2 ∈ B, contradicting the choice of our counterexample
∆1 , ∆2 . For i = 1, 2, let ∆̄i denote the diagram obtained from ∆i by removing the cell πi . We claim that
supp(∆̄1 ) ∩ supp(∆2 ) = supp(∆1 ) ∩ supp(∆̄2 ) = ∅.
Indeed, since we know that A and B are stable under taking prefixes, ∆̄1 belongs to A and ∆̄2 belongs to B. Therefore, because
#∆̄1 + #∆2 = #∆1 + #∆̄2 < #∆1 + #∆2 ,
having supp(∆̄1 ) ∩ supp(∆2 ) ≠ ∅ or supp(∆1 ) ∩ supp(∆̄2 ) ≠ ∅ would contradict our choice of ∆1 and ∆2 .
Because supp(∆̄1 ) ∩ supp(∆̄2 ) = ∅, it is possible to write w = a1 b1 · · · ap bp in Σ+ so
that
∆̄1 = A1 + (b1 ) + · · · + Ap + (bp ) and ∆̄2 = (a1 ) + B1 + · · · + (ap ) + Bp
for some (ai , ∗)-diagram Ai and (bi , ∗)-diagram Bi . Set ∆0 = A1 + B1 + · · · + Ap + Bp .
Notice that ∆̄1 and ∆̄2 are prefixes of ∆0 .
For i = 1, 2, let Ei denote the atomic diagram such that the concatenation ∆0 ◦ Ei
corresponds to gluing πi on ∆i as a prefix of ∆0 . Notice that the diagrams E1 , E2 exist
precisely because
supp(∆̄1 ) ∩ supp(∆2 ) = supp(∆1 ) ∩ supp(∆̄2 ) = ∅.
As top(π1 ) ∩ top(π2 ) ≠ ∅, since the intersection contains e, the two edges (∆0 , ∆0 ◦ E1 )
and (∆0 , ∆0 ◦ E2 ) of X(P, w) do not span a square. Therefore, the hyperplanes J1 , J2
dual to these two edges respectively are tangent. For i = 1, 2, noticing that ∆i is a prefix
of ∆0 ◦ Ei whereas it is not a prefix of ∆0 , we deduce from Proposition 5.7 that the
minimal diagram associated to Ji is precisely ∆i . On the other hand, the hyperplanes
corresponding to ∆1 , ∆2 are clearly dual to edges of A, B respectively. A fortiori, J1 and
J2 must be transverse (in Y = A × B). Therefore, we have found an inter-osculation in
the CAT(0) cube complex X(P, w), which is impossible.
Thus, we have proved that, for every ∆1 ∈ A and ∆2 ∈ B, supp(∆1 ) ∩ supp(∆2 ) = ∅.
In particular, we can write w = u1 v1 · · · uk vk in Σ+ , for some u1 , v1 , . . . , uk , vk ∈ Σ+ ,
so that supp(∆1 ) ⊂ u1 ∪ · · · ∪ uk and supp(∆2 ) ⊂ v1 ∪ · · · ∪ vk for every ∆1 ∈ A and
∆2 ∈ B. By construction, it is clear that
A ⊂ X(P, u1 ) × · · · × X(P, uk ) and B ⊂ X(P, v1 ) × · · · × X(P, vk ).
Now, let ∆ ∈ Y = A × B be a vertex and let ∆1 ∈ A, ∆2 ∈ B denote its coordinates;
they are also the projections of ∆ onto A, B respectively. By the previous remark, we
can write
∆1 = U1 + (v1 ) + · · · + Uk + (vk ) and ∆2 = (u1 ) + V1 + · · · + (uk ) + Vk
for some (ui , ∗)-diagram Ui and some (vi , ∗)-diagram Vi . Set Ξ = U1 + V1 + · · · + Uk + Vk .
Noticing that d(Ξ, A) ≥ #V1 + · · · + #Vk and d(Ξ, ∆1 ) = #V1 + · · · + #Vk , we deduce
that ∆1 is the projection of Ξ onto A. Similarly, we show that ∆2 is the projection of Ξ
onto B. Consequently,
∆ = Ξ ∈ X(P, u1 ) × X(P, v1 ) × · · · × X(P, uk ) × X(P, vk ),
which concludes the proof.
By combining Theorem 5.46 and Proposition 5.47, we are finally able to prove:
Corollary 5.48. Let P = hΣ | Ri be a semigroup presentation and w ∈ Σ+ a base word
whose class modulo P is finite. Then either D(P, w) contains a contracting isometry of
X(P, w) or there exists a (w, uv)-diagram Γ such that
D(P, w) = Γ · D(P, u) × D(P, v) · Γ−1 ,
where D(P, u) and D(P, v) are non trivial.
Proof. According to Theorem 5.46, if D(P, w) does not contain a contracting isometry
of X(P, w), then there exists a convex D(P, w)-invariant subcomplex Y ⊂ X(P, w)
which splits as the cartesian product of two unbounded subcomplexes A, B. Up to
conjugating by the smallest diagram of Y , we may assume without loss of generality
that Y contains (w). Let Γ be the (w, u1 v1 · · · uk vk )-diagram given by Proposition
5.47. Notice that, because (w) ∈ Y and Y is D(P, w)-invariant, necessarily any
spherical diagram belongs to Y . Thus, we deduce from Lemma 5.49 below that
D(P, w) = Γ · D(P, u1 ) × D(P, v1 ) × · · · × D(P, uk ) × D(P, vk ) · Γ−1 .
Because A ⊂ Γ · X(P, u1 ) × · · · × X(P, uk ) · Γ−1 , we deduce from the unboundedness of
A that X(P, u1 ) × · · · × X(P, uk ) is infinite; since the class of w modulo P is finite, this
implies that there exists some 1 ≤ i ≤ k such that D(P, ui ) is non trivial. Similarly,
we show that there exists some 1 ≤ j ≤ k such that D(P, vj ) is non trivial. Now, write
u1 v1 · · · uk vk = uv in Σ+ where u, v ∈ Σ+ are two subwords such that one contains ui
and the other vj . Then
D(P, w) = Γ · D(P, u) × D(P, v) · Γ−1 ,
where D(P, u) and D(P, v) are non trivial.
Lemma 5.49. Let P = hΣ | Ri be a semigroup presentation and w ∈ Σ+ a base
word whose class modulo P is finite. If a spherical diagram ∆ decomposes as a sum
∆1 + · · · + ∆n , then each ∆i is a spherical diagram.
Proof. Suppose by contradiction that one factor is not spherical. Let ∆k be the leftmost
factor which is not spherical, i.e., ∆` is spherical, say a (x` , x` )-diagram, for every ` < k.
For ` ≥ k, say that ∆` is a (x` , y` )-diagram. In Σ+ , we have the equality
x1 · · · xk−1 xk xk+1 · · · xn = top(∆) = bot(∆) = x1 · · · xk−1 yk yk+1 · · · yn ,
hence xk · · · xn = yk · · · yn . Because the equality holds in Σ+ , notice that lg(xk ) =
lg(yk ) implies xk = yk , which is impossible since we supposed that ∆k is not spherical.
Therefore, two cases may happen: either lg(xk ) > lg(yk ) or lg(xk ) < lg(yk ). Replacing
∆ with ∆−1 if necessary, we may suppose without loss of generality that we are in the
latter case. Thus, xk is a prefix of yk : there exists some y ∈ Σ+ such that yk = xk y. On
the other hand, because ∆k is a (xk , yk )-diagram, the equality xk = yk holds modulo P.
A fortiori, xk = xk y modulo P. We deduce that
x1 · · · xk−1 xk y n xk+1 · · · xn ∈ [w]P
for every n ≥ 1. This implies that [w]P is infinite, contradicting our hypothesis.
Proof of Theorem 5.45. We argue by induction on the algebraic dimension of the
considered diagram group, i.e., the maximal rank of a free abelian subgroup. If the
algebraic dimension is zero, there is nothing to prove since the diagram group is trivial
in this case. From now on, suppose that our diagram group D(P, w) has algebraic
dimension at least one. By applying Corollary 5.48, we deduce that either D(P, w)
contains a contracting isometry of X(P, w), and we conclude that D(P, w) is either
cyclic or acylindrically hyperbolic, or there exist two words u, v ∈ Σ+ and a (w, uv)-diagram Γ such that D(P, w) = Γ · (D(P, u) × D(P, v)) · Γ−1 where D(P, u) and D(P, v)
are non-trivial. In the first case, we are done; in the latter case, because D(P, u)
and D(P, v) are non-trivial (and torsion-free, as any diagram group), their algebraic
dimensions are strictly less than the one of D(P, w), so that we can apply our induction
hypothesis to find a (u, u1 · · · ur )-diagram Φ and a (v, v1 · · · vs )-diagram Ψ such that
D(P, u) = Φ · D(P, u1 ) × · · · × D(P, ur ) · Φ−1
and similarly
D(P, v) = Ψ · D(P, v1 ) × · · · × D(P, vs ) · Ψ−1 ,
where, for every 1 ≤ i ≤ r and 1 ≤ j ≤ s, D(P, ui ) and D(P, vj ) are either infinite cyclic
or acylindrically hyperbolic. Now, if we set Ξ = Γ · (Φ + Ψ), then
D(P, w) = Ξ · D(P, u1 ) × · · · × D(P, ur ) × D(P, v1 ) × · · · × D(P, vs ) · Ξ−1
is the decomposition we are looking for.
Finally, the inequality m ≤ dimalg D(P, w) is clear since diagram groups are torsion-free,
and the inequality dimalg D(P, w) ≤ dim X(P, w) is a direct consequence of [Gen15a,
Corollary 5.2].
We conclude this section by showing that, in the cocompact case, being a contracting
isometry can be characterized algebraically. For convenience, we record this characterization in the following proposition.
Proposition 5.50. Let P = hΣ | Ri be a semigroup presentation and w ∈ Σ+ a base word whose class modulo P is finite. If g ∈ D(P, w) − {(w)}, the following statements
are equivalent:
(i) g is a contracting isometry of X(P, w);
(ii) the centraliser of g is infinite cyclic;
(iii) g does not belong to a large canonical subgroup.
Proof. It is well-known that the centraliser of a contracting isometry in a group acting properly is virtually cyclic. Because diagram groups are torsion-free, we deduce the
implication (i) ⇒ (ii).
Suppose that there exists a (w, uv)-diagram Γ such that g ∈ Γ · D(P, u) × D(P, v) · Γ−1 ,
where D(P, u) and D(P, v) are non trivial. Let g1 ∈ D(P, u) and g2 ∈ D(P, v) be two
spherical diagrams such that g = Γ · (g1 + g2 ) · Γ−1 . In particular, the centraliser of g
contains: Γ · (⟨g1 ⟩ × ⟨g2 ⟩) · Γ−1 if g1 and g2 are non-trivial; Γ · (D(P, u) × ⟨g2 ⟩) · Γ−1 if g1 is trivial and g2 is not; Γ · (⟨g1 ⟩ × D(P, v)) · Γ−1 if g2 is trivial and g1 is not. We deduce
that (ii) ⇒ (iii).
Finally, suppose that g is not a contracting isometry. Up to taking a conjugate of g,
we may suppose without loss of generality that g is absolutely reduced. According to
Theorem 5.25 and Lemma 5.33, three cases may happen:
(a) supp(g) has cardinality at least two;
(b) w = xyz in Σ+ with supp(g) = {y} and X(P, x) or X(P, z) infinite;
(c) g ∞ contains a proper infinite prefix.
In case (a), g decomposes as a sum with at least two non-trivial factors, and we deduce
from Lemma 5.49 that g belongs to a large canonical subgroup. In case (b), if X(P, x) is
infinite then D(P, x) is non-trivial since [x]P is finite, and we deduce that g belongs to
the large canonical subgroup D(P, x)×D(P, yz); we argue similarly if X(P, z) is infinite.
In case (c), suppose that g ∞ contains a proper infinite prefix Ξ. For convenience, write
g ∞ = g1 ◦ g2 ◦ · · · , where each gi is a copy of g. Suppose by contradiction that g cannot
be decomposed as a sum with at least two non-trivial factors. As a consequence, for
each i ≥ 1, the subdiagram gi either is contained in Ξ or it contains a cell which does
not belong to Ξ but whose top path intersects Ξ. Because Ξ is a proper prefix of g ∞ ,
there exists some index j ≥ 1 such that gj is not included in Ξ, and it follows from
Lemma 5.28 that, for every i ≥ j, gi satisfies the same property. Let Ξj+r denote the
prefix of g1 ◦ · · · ◦ gj+r induced by Ξ. We know that, for every 0 ≤ s ≤ r, the subdiagram
gj+s contains a cell which does not belong to Ξ but whose top path intersects Ξ. This
implies that bot(Ξj+r ) has length at least r. On the other hand, the finiteness of [w]P
implies that the cardinality of [bot(Ξj+r )]P is bounded by a constant which does not
depend on r. Thus, we get a contradiction if r is chosen sufficiently large. We have
proved that g decomposes as a sum with at least two non-trivial factors. As in case (a),
we deduce from Lemma 5.49 that g belongs to a large canonical subgroup.
Remark 5.51. The implication (iii) ⇒ (ii) also follows from the description of centralisers in diagram groups [GS97, Theorem 15.35], since they are canonical subgroups.
A Combinatorial boundary vs. other boundaries
In this appendix, we compare the combinatorial boundary of a CAT(0) cube complex
with three other boundaries. Namely, the simplicial boundary, introduced in [Hag13];
the Roller boundary, introduced in [Rol98]; and its variation studied in [Gur08]. In what
follows, we will always assume that our CAT(0) cube complexes have countably many
hyperplanes. For instance, this happens when they are locally finite.
A.1 Simplicial boundary
For convenience, we will say that a CAT(0) cube complex is ω-dimensional if it does
not contain an infinite collection of pairwise transverse hyperplanes.
Let X be an ω-dimensional CAT(0) cube complex. A Unidirectional Boundary Set (or
UBS for short) is an infinite collection of hyperplanes U not containing any facing triple
which is:
• inseparable, i.e., each hyperplane separating two elements of U belongs to U;
• unidirectional, i.e., any hyperplane of U bounds a halfspace containing only finitely
many elements of U.
According to [Hag13, Theorem 3.10], any UBS is commensurable to a disjoint union
of minimal UBS, where a UBS U is minimal if any UBS V satisfying V ⊂ U must be
commensurable to U; the number of these minimal UBS is the dimension of the UBS
we are considering. Then, to any commensurability class of a k-dimensional UBS is
associated a k-simplex at infinity whose faces correspond to its sub-UBS. Altogether, these simplices at infinity define a simplicial complex, called the simplicial boundary ∂△ X of X. See [Hag13] for more details.
The typical example of a UBS is the set of the hyperplanes intersecting a given combinatorial ray. A simplex arising in this way is said to be visible. Therefore, to any point in
the combinatorial boundary naturally corresponds a simplex of the simplicial boundary.
The following proposition is clear from the definitions:
Proposition A.1. Let X be an ω-dimensional CAT(0) cube complex. Then (∂ c X, ≤)
is isomorphic to the face-poset of the visible part of ∂△ X.
In particular, when X is fully visible, i.e., when every simplex of the simplicial boundary is visible, the two boundaries essentially define the same object. Notice however that
Farley cube complexes are not necessarily fully visible. For example, if P = ha, b, p | a =
ap, b = pbi, then X(P, ab) is a “diagonal half-plane” and is not fully visible. Moreover,
although Farley cube complexes are complete, they are not necessarily ω-dimensional,
so that the simplicial boundary may not be well-defined. This is the case for the cube
complex associated to Thompson’s group F , namely X(P, x) where P = hx | x = x2 i;
or the cube complex associated to the lamplighter group Z ≀ Z, namely X(P, ab) where
P = ha, b, p, q, r | a = ap, b = pb, p = q, q = r, r = pi.
A.2 Roller boundary
A pocset (Σ, ≤,∗ ) is a partially-ordered set (Σ, ≤) with an order-reversing involution ∗
such that a and a∗ are not ≤-comparable for every a ∈ Σ. A subset α ⊂ Σ is an
ultrafilter if:
• for every a ∈ Σ, exactly one element of {a, a∗ } belongs to α;
• if a, b ∈ Σ satisfy a ≤ b and a ∈ α, then b ∈ α.
Naturally, to every CAT(0) cube complex X is associated a pocset: the set of half-spaces
U(X) with the inclusion and the complementary operation. In this way, every vertex can
be thought of as an ultrafilter, called a principal ultrafilter; see for example [Sag14] and
references therein. For convenience, let X ◦ denote the set of ultrafilters of the pocset
associated to our cube complex. Notice that X ◦ is naturally a subset of 2U (X) , and can
be endowed with the Tykhonov topology.
Now, the Roller boundary RX of X is defined as the space of non principal ultrafilters
of X ◦ endowed with the Tykhonov topology. Below, we give an alternative description
of RX.
Given a CAT(0) cube complex X, fix a base vertex x ∈ X, and let Sx X denote the
set of combinatorial rays starting from x up to the following equivalence relation: two
rays r1 , r2 are equivalent if H(r1 ) = H(r2 ). Now, if ξ1 , ξ2 ∈ Sx X, we define the distance
between ξ1 and ξ2 in Sx X by
d(ξ1 , ξ2 ) = 2^{− min{d(x,J) | J ∈ H(r1 ) ⊕ H(r2 )}} ,
where ⊕ denotes the symmetric difference. In fact, as a consequence of the inclusion A ⊕ C ⊂ (A ⊕ B) ∪ (B ⊕ C) for all sets A, B, C, the distance d turns out to be
ultrametric, i.e.,
d(ξ1 , ξ3 ) ≤ max (d(ξ1 , ξ2 ), d(ξ2 , ξ3 ))
for every ξ1 , ξ2 , ξ3 ∈ Sx X. Also, notice that this distance is well-defined since its
expression does not depend on the representatives we chose.
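Let us spell out this elementary verification for the reader’s convenience. If J ∈ H(r1 ) ⊕ H(r3 ), then J belongs to exactly one of H(r1 ), H(r3 ), hence J ∈ H(r1 ) ⊕ H(r2 ) or J ∈ H(r2 ) ⊕ H(r3 ). Consequently
min{d(x, J) | J ∈ H(r1 ) ⊕ H(r3 )} ≥ min ( min{d(x, J) | J ∈ H(r1 ) ⊕ H(r2 )}, min{d(x, J) | J ∈ H(r2 ) ⊕ H(r3 )} ),
and applying 2^{−(·)} to both sides yields d(ξ1 , ξ3 ) ≤ max (d(ξ1 , ξ2 ), d(ξ2 , ξ3 )).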
Given a combinatorial ray r, let α(r) denote the set of half-spaces U such that r lies
eventually in U .
Proposition A.2. The map r 7→ α(r) induces a homeomorphism Sx X → RX.
Proof. If r is a combinatorial ray starting from x, it is clear that α(r) is a non-principal
ultrafilter, i.e., α(r) ∈ RX. Now, if for every hyperplane J we denote by J + (resp. J − )
the halfspace delimited by J which does not contain x (resp. which contains x), then
we have
α(r) = {J + | J ∈ H(r)} ⊔ {J − | J ∉ H(r)}.
We first deduce that α(r1 ) = α(r2 ) for any pair of equivalent combinatorial rays r1 , r2
starting from x, so that we get a well-defined induced map α : Sx X → RX; secondly,
the injectivity of α follows, since α(r1 ) = α(r2 ) clearly implies H(r1 ) = H(r2 ).
Now, let η ∈ RX be a non-principal ultrafilter, and let H+ (η) denote the set of the
hyperplanes J satisfying J + ∈ η. Notice that, if H+ (η) is finite, then all but finitely
many halfspaces of η contain x, which implies that η is principal; therefore, H+ (η)
is infinite, say H+ (η) = {J1 , J2 , . . .}. On the other hand, Ji+ ∩ Jj+ is non-empty for
every i, j ≥ 1, so, for every k ≥ 1, the intersection Ck = J1+ ∩ · · · ∩ Jk+ is non-empty; let xk
denote the combinatorial projection of x onto Ck . According to Lemma 2.8, xk+1 is
the combinatorial projection of xk onto Ck+1 for every k ≥ 1. Therefore, by applying
Proposition 2.5, we know that there exists a combinatorial ray ρ starting from x and
passing through all the xk ’s. Now, if J ∈ H(ρ), there exists some k ≥ 1 such that J
separates x and xk , and a fortiori x and Ck according to Lemma 2.7. We deduce that
either J belongs to {J1 , . . . , Jk } or J separates x and some hyperplane of {J1 , . . . , Jk };
equivalently, there exists some 1 ≤ ` ≤ k such that J`+ ⊂ J + . Because J`+ ∈ η, we deduce
that J + ∈ η. Thus, we have proved that H(ρ) = H+ (η). It follows that α(ρ) = η. The
surjectivity of α is proved.
Finally, it is an easy exercise of general topology to verify that α is indeed a homeomorphism. The details are left to the reader.
Therefore, the combinatorial boundary ∂ c X may be thought of as a quotient of the
Roller boundary. This is the point of view explained in the next section.
It is worth noticing that this description of the Roller boundary allows us to give a
precise description of the Roller boundary of Farley cube complexes:
Proposition A.3. Let P = hΣ | Ri be a semigroup presentation and w ∈ Σ+ a base
word. The Roller boundary of X(P, w) is homeomorphic to the space of the infinite
reduced w-diagrams endowed with the metric
(∆1 , ∆2 ) ↦ 2^{− min{#∆ | ∆ is not a prefix of both ∆1 and ∆2 }} ,
where #(·) denotes the number of cells of a diagram.
A.3 Guralnik boundary
Given a pocset (Σ, ≤,∗ ), Guralnik defines the boundary R∗ Σ as the set of almost-equality
classes of ultrafilters, partially ordered by the relation: Σ1 ≤ Σ2 if Σ2 ∩ Σ̄1 ≠ ∅, where the bar denotes the closure in RX with respect to the Tykhonov topology. See [Gur08] for
more details.
In particular, if X is a CAT(0) cube complex, it makes sense to define the boundary
R∗ X as the previous boundary of the associated pocset. Notice that R∗ X contains a
particular point, corresponding to the almost-equality class of the principal ultrafilters;
let p denote this point. Then R∗ X\{p} is naturally the quotient of the Roller boundary
RX by the almost-equality relation.
Remark A.4. Although Guralnik considers only ω-dimensional pocsets in [Gur08], the
definition makes sense for every pocset, and in particular is well-defined for every CAT(0)
cube complex. Notice also that the boundary R∗ is called the Roller boundary there,
whereas the terminology now usually refers to the boundary defined in the previous
section.
Proposition A.5. The posets (R∗ X\{p}, ≤) and (∂ c X, ≤) are isomorphic.
Proof. If r is an arbitrary combinatorial ray, choose a second ray ρ ∈ Sx X equivalent
to it, and set ϕ(r) ∈ R∗ X as the almost-equality class of α(ρ), where α : Sx X → RX is the homeomorphism given by Proposition A.2. Notice that, if ρ0 ∈ Sx X is equivalent to
r, then α(ρ) and α(ρ0 ) are almost-equal, so we deduce that ϕ(r) does not depend on our
choice of ρ. As another consequence, ϕ must be constant on the equivalence classes of
rays, so that we get a well-defined map
ϕ : ∂ c X → R∗ X\{p}.
Clearly, ϕ is surjective. Now, if r1 , r2 ∈ ∂ c X satisfy ϕ(r1 ) = ϕ(r2 ), then α(r1 ) =a α(r2 ); using the notation of the proof of Proposition A.2, this implies that
H(r1 ) = H+ (α(r1 )) =a H+ (α(r2 )) = H(r2 ),
hence r1 = r2 in ∂ c X. Thus, ϕ is injective.
Let r1 , r2 ∈ ∂ c X be two combinatorial rays satisfying r1 ≺ r2 , i.e., H(r1 ) ⊂a H(r2 ).
According to Lemma 4.2 and Lemma 4.3, we may suppose without loss of generality
that r1 (0) = r2 (0) = x and H(r1 ) ⊂ H(r2 ). Let H(r2 ) = {J1 , J2 , . . .}. For every
k ≥ 1, let Ck = J1+ ∩ · · · ∩ Jk+ and let xk denote the combinatorial projection of x onto Ck . Let H(r1 ) = {H1 , H2 , . . .}, and, for every i ≥ 1, let Ki = H1 ∩ · · · ∩ Hi ∩ Ck ; notice that, because
H(r1 ) ⊂ H(r2 ), Ki is non-empty. Fix some k ≥ 1. Finally, for every i ≥ 1, let yi denote
the combinatorial projection of xk onto Ki . By combining Lemma 2.8 and Proposition
2.5, we know that there exists a combinatorial ray starting from xk and passing through
all the yi ’s; let ρk denote the concatenation of a combinatorial geodesic between x and
xk with the previous ray.
If ρk is not a combinatorial ray, then there exists a hyperplane J separating x and xk ,
and xk and some yj . On the other hand, it follows from Lemma 2.7 that J must separate
x and Ck , so that J cannot separate xk and yj since xk , yj ∈ Ck . Therefore, ρk is a
combinatorial ray.
If Hk denotes the set of the hyperplanes separating x and xk , then
H(ρk ) = Hk ∪ H(r1 ).
Indeed, if J is a hyperplane intersecting ρk which does not separate x and xk , then
J must separate xk and some yj ; a fortiori, J must be disjoint from Kj according to
Lemma 2.7. Thus, if z is any vertex of r1 ∩ Kj , we know that J separates xk and yj ,
but cannot separate x and xk , or yj and z; therefore, necessarily J separates x and z,
hence J ∈ H(r1 ). Conversely, let J be a hyperplane of H(r1 ). In particular, J ∈ H(r2 ),
so ρk eventually lies in J + . Because ρk (0) = x ∉ J + , we conclude that J ∈ H(ρk ).
In particular, because Hk is finite, we deduce that H(ρk ) =a H(r1 ), i.e., r1 = ρk in ∂ c X.
On the other hand, it is clear that, for any finite subset H ⊂ H(r2 ), there exists some
k ≥ 1 sufficiently large so that H ⊂ Hk . Therefore, the sequence of ultrafilters (α(ρk ))
converges to α(r2 ) in the Roller boundary.
Consequently, α(r2 ) belongs to the closure of the almost-equality class of α(r1 ). This
precisely means that ϕ(r1 ) ≤ ϕ(r2 ) in R∗ X. We have proved that ϕ is a poset-morphism.
Now, let r1 , r2 ∈ ∂ c X such that ϕ(r1 ) ≤ ϕ(r2 ) in R∗ X. This means that there exists
a combinatorial ray ρ such that α(ρ) belongs to the intersection between the almost-equality class of α(r2 ) and the closure of the almost-equality class of α(r1 ) in the Roller
boundary. So there exists a sequence of combinatorial rays (ρk ) such that ρk = r1 in
∂ c X for every k ≥ 1 and α(ρk ) → α(ρ). Let r0 be the minimal element of the class of
r1 in Sx X which is given by Lemma A.6 below. Then H(r0 ) ⊂ H(ρk ) for every k ≥ 1,
hence H(r0 ) ⊂ H(ρ). Therefore,
H(r1 ) ⊂a H(r0 ) ⊂ H(ρ) =a H(r2 ),
hence r1 ≺ r2 in ∂ c X. We have proved that ϕ−1 is a poset-morphism as well.
Lemma A.6. Let r ∈ Sx X be a combinatorial ray. There exists r0 ∈ Sx X equivalent to
r such that, for any combinatorial ray ρ ∈ Sx X equivalent to r, we have H(r0 ) ⊂ H(ρ).
Proof. Let H ⊂ H(r) be the set of the hyperplanes J ∈ H(r) such that r does not stay
in a neighborhood of J; say H = {J1 , J2 , . . .}. For every k ≥ 1, let Ck = J1+ ∩ · · · ∩ Jk+ and
let xk denote the combinatorial projection of x onto Ck . By combining Lemma 2.8 and
Proposition 2.5, we know that there exists a combinatorial ray r0 starting from x and
passing through all the xk ’s. We claim that r0 is the ray we are looking for.
Let ρ ∈ Sx X be a combinatorial ray equivalent to r, and let J ∈ H(r0 ) be a hyperplane.
By construction, J separates x and some xk ; a fortiori, it follows from Lemma 2.7 that
J separates x and Ck . On the other hand, given some 1 ≤ ` ≤ k, because r does not
stay in a neighborhood of J` , there exist infinitely many hyperplanes of H(r) which are included in J`+ ; therefore, since ρ is equivalent to r, we deduce that ρ eventually lies
in J`+ . A fortiori, ρ eventually lies in Ck . Consequently, because J separates x = ρ(0)
and Ck , we conclude that J ∈ H(ρ).
References
[BBF14] M. Bestvina, K. Bromberg, and K. Fujiwara. Constructing group actions on quasi-trees and applications to mapping class groups. arXiv:1006.1939, 2014.
[BC12] J. Behrstock and R. Charney. Divergence and quasimorphisms of right-angled Artin groups. Mathematische Annalen, 352(2):339–356, 2012.
[BH99] Martin R. Bridson and André Haefliger. Metric spaces of non-positive curvature, volume 319 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, Berlin, 1999.
[Bro82] K. Brown. Cohomology of groups, volume 87 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1982.
[Che00] V. Chepoi. Graphs of some CAT(0) complexes. Adv. in Appl. Math., 24(2):125–179, 2000.
[CS11] Pierre-Emmanuel Caprace and Michah Sageev. Rank rigidity for CAT(0) cube complexes. Geom. Funct. Anal., 21(4):851–891, 2011.
[CS15] Ruth Charney and Harold Sultan. Contracting boundaries of CAT(0) spaces. J. Topol., 8(1):93–117, 2015.
[Far03] Daniel S. Farley. Finiteness and CAT(0) properties of diagram groups. Topology, 42(5):1065–1082, 2003.
[Gen15a] A. Genevois. Hyperbolic diagram groups are free. arXiv:1505.02053, 2015.
[Gen15b] A. Genevois. Hyperplanes of Squier's cube complexes. arXiv:1507.01667, 2015.
[Gen16] A. Genevois. Coning-off CAT(0) cube complexes. arXiv:1603.06513, 2016.
[GS97] Victor Guba and Mark Sapir. Diagram groups. Mem. Amer. Math. Soc., 130(620):viii+117, 1997.
[GS99] V. S. Guba and M. V. Sapir. On subgroups of the R. Thompson group F and other diagram groups. Mat. Sb., 190(8):3–60, 1999.
[Gur08] D. Guralnik. Coarse decompositions for boundaries of CAT(0) groups. arXiv:0611006, 2008.
[Hag07] F. Haglund. Isometries of CAT(0) cube complexes are semi-simple. arXiv:0705.3386, 2007.
[Hag08] Frédéric Haglund. Finite index subgroups of graph products. Geom. Dedicata, 135:167–209, 2008.
[Hag13] Mark F. Hagen. The simplicial boundary of a CAT(0) cube complex. Algebraic and Geometric Topology, 13(3):1299–1367, 2013.
[Lea13] Ian J. Leary. A metric Kan–Thurston theorem. Journal of Topology, 6(1):251–284, 2013.
[MO13] A. Minasyan and D. Osin. Acylindrical hyperbolicity of groups acting on trees. arXiv:1310.6289, 2013.
[MW02] J. McCammond and D. Wise. Fans and ladders in small cancellation theory. Proc. London Math. Soc., 84(3):599–644, 2002.
[Osi06] D. Osin. Relatively hyperbolic groups: intrinsic geometry, algebraic properties, and algorithmic problems. Mem. Amer. Math. Soc., 179(843), 2006.
[Osi13] D. Osin. Acylindrically hyperbolic groups. arXiv:1304.1246, 2013.
[Rol98] M. Roller. Pocsets, median algebras and group actions; an extended study of Dunwoody's construction and Sageev's theorem. Dissertation, 1998.
[Sag95] Michah Sageev. Ends of group pairs and non-positively curved cube complexes. Proc. London Math. Soc. (3), 71(3):585–617, 1995.
[Sag14] Michah Sageev. CAT(0) cube complexes and groups. In Geometric Group Theory, volume 21 of IAS/Park City Math. Ser., pages 7–54. 2014.
[Sul14] Harold Sultan. Hyperbolic quasi-geodesics in CAT(0) spaces. Geom. Dedicata, 169:209–224, 2014.
[VK33] Egbert Van Kampen. On some lemmas in the theory of groups. American Journal of Mathematics, 55(1):268–273, 1933.
[Wis12] Daniel T. Wise. The structure of groups with a quasiconvex hierarchy. Preprint, 2012.
arXiv:1711.06976v1 [cs.CY] 19 Nov 2017
MIT Autonomous Vehicle Technology Study:
Large-Scale Deep Learning Based Analysis of
Driver Behavior and Interaction with Automation
Lex Fridman∗ , Daniel E. Brown, Michael Glazer, William Angell, Spencer Dodd, Benedikt Jenik,
Jack Terwilliger, Julia Kindelsberger, Li Ding, Sean Seaman, Hillary Abraham, Alea Mehler,
Andrew Sipperley, Anthony Pettinato, Bobbie Seppelt, Linda Angell, Bruce Mehler, Bryan Reimer∗
Abstract—Today, and possibly for a long time to come, the
full driving task is too complex an activity to be fully formalized
as a sensing-acting robotics system that can be explicitly solved
through model-based and learning-based approaches in order to
achieve full unconstrained vehicle autonomy. Localization, mapping, scene perception, vehicle control, trajectory optimization,
and higher-level planning decisions associated with autonomous
vehicle development remain full of open challenges. This is
especially true for unconstrained, real-world operation where the
margin of allowable error is extremely small and the number of
edge-cases is extremely large. Until these problems are solved,
human beings will remain an integral part of the driving task,
monitoring the AI system as it performs anywhere from just over
0% to just under 100% of the driving. The governing objectives of
the MIT Autonomous Vehicle Technology (MIT-AVT) study are
to (1) undertake large-scale real-world driving data collection
that includes high-definition video to fuel the development of
deep learning based internal and external perception systems,
(2) gain a holistic understanding of how human beings interact
with vehicle automation technology by integrating video data
with vehicle state data, driver characteristics, mental models,
and self-reported experiences with technology, and (3) identify
how technology and other factors related to automation adoption
and use can be improved in ways that save lives. In pursuing
these objectives, we have instrumented 21 Tesla Model S and
Model X vehicles, 2 Volvo S90 vehicles, and 2 Range Rover
Evoque vehicles for both long-term (over a year per driver)
and medium term (one month per driver) naturalistic driving
data collection. Furthermore, we are continually developing new
methods for analysis of the massive-scale dataset collected from
the instrumented vehicle fleet. The recorded data streams include
IMU, GPS, CAN messages, and high-definition video streams
of the driver face, the driver cabin, the forward roadway, and
the instrument cluster (on select vehicles). The study is ongoing and growing. To date, we have 78 participants, 7,146 days
of participation, 275,589 miles, and 3.5 billion video frames.
This paper presents the design of the study, the data collection
hardware, the processing of the data, and the computer vision
algorithms currently being used to extract actionable knowledge
from the data.
Vehicle | Miles driven | Days in study
Tesla Model S | 24,657 | 588
Tesla Model X | 22,001 | 421
Tesla Model S | 18,896 | 435
Tesla Model S | 18,666 | 353
Range Rover Evoque | 18,130 | 483
Tesla Model S | 15,735 | 322
Tesla Model X | 15,074 | 276
Range Rover Evoque | 14,499 | 440
Tesla Model S | 14,410 | 371
Tesla Model S | 14,117 | 248
Volvo S90 | 13,970 | 325
Tesla Model S | 12,353 | 321
Volvo S90 | 11,072 | 412
Tesla Model X | 10,271 | 366
Tesla Model S | 9,188 | 183
Tesla Model S | 8,319 | 374
Tesla Model S | 6,720 | 194
Tesla Model S | 5,186 | 91
Tesla Model X | 5,111 | 232
Tesla Model S | 4,596 | 132
Tesla Model X | 4,587 | 233
Tesla Model X | 3,719 | 133
Tesla Model S | 3,006 | 144
Tesla Model X | 1,306 | 69
Tesla Model S | (data offload pending) | (data offload pending)
Study months to-date: 21. Participant days: 7,146. Drivers: 78. Vehicles: 25. Miles driven: 275,589. Video frames: 3.48 billion. Study data collection is ongoing. Statistics updated on: Oct 23, 2017.
Fig. 1: Dataset statistics for the MIT-AVT study as a whole and for the individual vehicles in the study.
* Lex Fridman ([email protected]) and Bryan Reimer ([email protected]) are primary contacts. Linda Angell and Sean Seaman are affiliated with
Touchstone Evaluations Inc. All other authors are affiliated with Massachusetts Institute of Technology (MIT).
I. INTRODUCTION
The idea that human beings are poor drivers is well-documented in popular culture [1], [2]. While this idea is
often over-dramatized, there is some truth to it in that we’re
at times distracted, drowsy, drunk, drugged, and irrational
decision makers [3]. However, this does not mean it is easy
to design and build an autonomous perception-control system
that drives better than the average human driver. The 2007
DARPA Urban Challenge [4] was a landmark achievement
in robotics, when 6 of the 11 autonomous vehicles in the
finals successfully navigated an urban environment to reach
the finish line, with the first place finisher traveling at an
average speed of 15 mph. The success of this competition
led many to declare the fully autonomous driving task a
“solved problem”, one with only a few remaining messy
details to be resolved by automakers as part of delivering a
commercial product. Today, over ten years later, the problems
of localization, mapping, scene perception, vehicle control,
trajectory optimization, and higher-level planning decisions
associated with autonomous vehicle development remain full
of open challenges that have yet to be fully solved by systems
incorporated into production platforms (e.g., offered for
sale) for even a restricted operational space. The testing of
prototype vehicles with a human supervisor responsible for
taking control during periods where the AI system is “unsure”
or unable to safely proceed remains the norm [5], [6].
The belief underlying the MIT Autonomous Vehicle Technology (MIT-AVT) study is that the DARPA Urban Challenge
was only a first step down a long road toward developing
autonomous vehicle systems. The Urban Challenge had no
people participating in the scenario except the professional
drivers controlling the other 30 cars on the road that day. The
authors believe that the current real-world challenge is one
that has the human being as an integral part of every aspect
of the system. This challenge is made especially difficult due
to the immense variability inherent to the driving task due to
the following factors:
Fig. 2: Video frames from MIT-AVT cameras and visualization of computer vision tasks performed for each: (a) face camera for driver state; (b) driver cabin camera for driver body position; (c) forward-facing camera for driving scene perception; (d) instrument cluster camera for vehicle state.
• The underlying uncertainty of human behavior as represented by every type of social interaction and conflict resolution between vehicles, pedestrians, and cyclists.
• The variability between driver styles, experience, and other characteristics that contribute to their understanding, trust, and use of automation.
• The complexities and edge-cases of the scene perception and understanding problem.
• The underactuated nature of the control problem [7] for every human-in-the-loop mechanical system in the car: from the driver interaction with the steering wheel to the tires contacting the road surface.
• The expected and unexpected limitations of and imperfections in the sensors.
• The reliance on software with all the challenges inherent to software-based systems: bugs, vulnerabilities, and the constantly changing feature set from minor and major version updates.
Fig. 3: Visualization of GPS points for trips in the MIT-AVT dataset local to the New England area. The full dataset contains
trips that span over the entire continental United States.
• The need for a human driver to recognize, acknowledge,
and be prepared to take control and adapt when system
failure necessitates human control of the vehicle in order
to resolve a potentially dangerous situation.
• The environmental conditions (i.e., weather, light conditions) that have a major impact on both the low-level
perception and control tasks, as well as the high-level
interaction dynamics among the people that take part in
the interaction.
• Societal and individual tolerances to human and machine
error.
As human beings, we naturally take for granted how much
intelligence, in the robotics sense of the word, is required to
successfully attain enough situation awareness and understanding [8] to navigate through a world full of predictably irrational
human beings moving about in cars, on bikes, and on foot. It
may be decades before the majority of cars on the road are
fully autonomous. During this time, the human is likely to
remain the critical decision maker either as the driver or as
the supervisor of the AI system doing the driving.
In this context, Human-Centered Artificial Intelligence
(HCAI) is an area of computer science, robotics, and experience design that aims to achieve a deeper integration
between human and artificial intelligence. It is likely that
HCAI will play a critical role in the formation of technologies
(algorithms, sensors, interfaces, and interaction paradigms)
that support the driver’s role in monitoring the AI system as it
performs anywhere from just over 0% to just under 100% of
the basic driving and higher order object and event detection
tasks.
The MIT Autonomous Vehicle Technology (MIT-AVT)
study seeks to collect and analyze large-scale naturalistic data
of semi-autonomous driving in order to better characterize
the state of current technology use, to extract insight on
how automation-enabled technologies impact human-machine
interaction across a range of environments, and to understand
how we design shared autonomy systems that save lives as
we transition from manual control to full autonomy in the
coming decades. The effort is motivated by the need to better
characterize and understand how drivers are engaging with
advanced vehicle technology [9]. The goal is to propose,
design, and build systems grounded in this understanding, so
that shared autonomy between human and vehicle AI does not
lead to a series of unintended consequences [10].
“Naturalistic driving” refers to driving that is not constrained by strict experimental design and a “naturalistic driving study” (NDS) is generally a type of study that systematically collects video, audio, vehicle telemetry, and other sensor
data that captures various aspects of driving for long periods of
time, ranging from multiple days to multiple months and even
years. The term NDS is applied to studies in which data are
acquired under conditions that closely align with the natural
conditions under which drivers typically drive “in the wild.”
Often, a driver’s own vehicle is instrumented (as unobtrusively
as possible) and each driver is asked to continue using their
vehicle as they ordinarily would. Data is collected throughout
periods of use. Further, use is unconstrained by any structured
experimental design. The purpose is to provide a record of
natural behavior that is as unaffected by the measurement
process as possible. This contrasts with on-road experiments
that are conducted in similarly instrumented vehicles, but in
which experimenters are present in the vehicle, and ask drivers
to carry out specific tasks at specific times on specific roads
using specific technology systems in the vehicle.
The MIT-AVT study is a new generation of NDS that
aims to discover insights and understanding of real-world
interaction between human drivers and autonomous driving
technology. Our goal is to derive insight from large-scale
naturalistic data being collected through the project to aid in
the design, development and delivery of new vehicle systems,
inform insurance providers of the changing market for safety,
and educate governmental and non-governmental stakeholders on how automation is being used in the wild.
This paper outlines the methodology and underlying principles governing the design and operation of the MIT-AVT study
vehicle instrumentation, data collection, and the use of deep
learning methods for automated analysis of human behavior.
These guiding principles can be summarized as follows:
• Autonomy at All Levels: We seek to study and understand human behavior and interaction with every form
of advanced vehicle technology that assists the driver
through first sensing the external environment and the
driver cabin, and then either controlling the vehicle or
communicating with the driver based on the perceived
state of the world. These technologies include everything
from automated emergency braking systems that can take
control in rare moments of imminent danger to semi-autonomous driving technology (e.g., Autopilot) that can
help control the lateral and longitudinal movements of
the vehicle continuously for long periods of driving on
well-marked roadways (e.g., highways).
• Beyond Epochs and Manual Annotation: Successful
large-scale naturalistic driving studies of the past in the
United States [11], [12], [13], [14], [15] and in Europe
[16] focused analysis on crash and near-crash epochs.
Epochs were detected using traditional signal processing
of vehicle kinematics. The extraction of driver state from
video was done primarily with manual annotation. These
approaches, by their nature, left the vast remainder of
driving data unprocessed and un-analyzed. In contrast to
this, the MIT-AVT study seeks to analyze the “long-tail”
of shared-autonomy from both the human and machine
perspectives. The “long-tail” is the part of data that is
outside of short, easily-detectable epochs. It is, for example, the data capturing moment-to-moment allocation of
glance over long stretches of driving (hundreds of hours
in MIT-AVT) when the vehicle is driving itself. Analyzing
the long-tail data requires processing billions of high-definition video frames with state-of-the-art computer
vision algorithms multiple times as we learn both what to
look for and how to interpret what we find. At the same
time, despite the focus on deep learning based analysis
of large-scale data, the more traditional NDS analytic approaches remain valuable, including manual annotation,
expert review of data, insight integration from technology suppliers, and contextualizing observed naturalistic
behavior with driver characteristics, understanding, and
perceptions of vehicle technology.
• Multiple Study Duration: We seek to understand human behavior in semi-autonomous systems both from the
long-term perspective of over 1 year in subject-owned
vehicles and from a medium-term perspective of 1 month
in MIT-owned vehicles. The former provides insights
into use of autonomous vehicle technology over time
and the latter provides insights about initial interactions
that involve learning the limitations and capabilities of
each subsystem in a fashion more closely aligned with
a driver’s experience after purchasing a new vehicle
equipped with a suite of technology that the driver may
or may not be familiar with.
• Multiple Analysis Modalities: We use computer vision
to extract knowledge from cameras that look at the driver
face, driver body, and the external driving scene, but we
also use GPS, IMU, and CAN bus data to add rich details
about the context and frequency of technology use. This
data is further complemented by detailed questionnaire
and interview data that comprise driver history, exposure
to various automated and non-automated technologies,
mental model evaluation, perceptions of safety, trust, selfreported use, and enjoyment. With this interdisciplinary
approach, the dataset allows for a holistic view of realworld advanced technology use, and identifies potential
areas for design, policy, and educational improvements.
The key statistics about the MIT-AVT study as a whole
and about the individual vehicles in the study are shown in
Fig. 1. The key measures of the data with explanations of the
measures are as follows:
• Study months to-date: 21 (Number of months the study has been actively running with vehicles on the road.)
• Participant days: 7,146 (Number of days of active data logger recording across all vehicles in the study.)
• Drivers: 78 (Number of consented drivers across all vehicles in the study.)
• Vehicles: 25 (Number of vehicles in the study.)
• Miles driven: 275,589 (Number of miles driven.)
• Video frames: 3.5 billion (Number of video frames recorded and processed across all cameras and vehicles in the study.)
Latest dataset statistics can be obtained at http://hcai.mit.edu/avt (see §VI).
(a) Visualization of daily miles driven by vehicles in the MIT-AVT study over the first 14 months of the study. This visualization does not show the most recent few months because the high-capacity storage within each vehicle supports extended recording between data offloads, and additional time is then required to process the data through the pipeline, as discussed in §IV. Light green-yellow color marks the early days of the study when less than 200 miles were collected daily. Dark blue color marks the current state of the study where more than 1000 miles of driving are often collected, with an average (per-month) daily mileage of over 500.
(b) Cumulative distance traveled by vehicles instrumented as part of the study. The plot shows miles traveled in the first 450 days. The study is ongoing, and hundreds of miles of driving data are collected every day.
Fig. 4: Statistics on miles driven daily by vehicles that are part of the MIT-AVT study.
A. Naturalistic Driving Studies
The focus of the MIT-AVT study is to gather naturalistic
driving data and to build on the work and lessons-learned
of the earlier generation of successful NDS studies carried
out over the first decade of the 21st century [11], [12], [13],
[14], [15]. These previous studies aimed to understand human
behavior right before and right after moments of crashes and
near-crashes as marked by periods of sudden deceleration. The
second Strategic Highway Research Program (SHRP2) is the
best known and largest scale of these studies [14].
In contrast to SHRP-2 and other first-generation NDS
efforts, the MIT-AVT study aims to be the standard for the next
generation of NDS programs where the focus is on large-scale
computer vision based analysis of human behavior. Manually
annotating specific epochs of driving, as the prior studies have
done, is no longer sufficient for understanding the complexities
of human behavior in the context of autonomous vehicle
technology (i.e., driver glance or body position over thousands
of miles of Autopilot use). For example, one of many metrics
that are important to understanding driver behavior is moment-by-moment detection of glance region [17], [18] (see §I-C).
Accurately extracting this metric from the 1.1 billion
frames of face video without the use of computer vision would
require an investment in manual annotation of $1,833,000
[19]. This number assumes the availability of an efficient
annotation tool that is specifically designed for the manual
glance region annotation task and can leverage distributed,
online, crowdsourcing of the annotation task. The development
of such a tool is a technical challenge that may take several years of continuous research and development [20], the cost of which may eclipse that of the human annotation hours. If this were the only
metric of interest, perhaps such a significant investment would
be justifiable and feasible. However, glance region is only one
of many metrics of interest, and in terms of manual annotation
cost is one of the least expensive. Another example is driving
scene segmentation, which for 1.1 billion frames would require
an incredible investment of $16.5 billion [21], [19]. For this
reason, automatic or semi-automatic extraction of information
from raw video is of paramount importance and is at the core
of the motivation, design, research, and operation of MIT-AVT.
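For a rough sense of scale (our own arithmetic based on the figures above, not a number reported in [19] or [21]), these estimates correspond to per-frame annotation rates of approximately

\[
\frac{\$1{,}833{,}000}{1.1\times 10^{9}\ \text{frames}} \approx \$0.0017\ \text{per frame (glance region)},\qquad
\frac{\$16{,}500{,}000{,}000}{1.1\times 10^{9}\ \text{frames}} = \$15\ \text{per frame (scene segmentation)}.
\]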
The fundamental belief underlying our approach to NDS is
that only by looking at the entirety of the data (with algorithms
that reveal human behavior and situation characteristics) can
we begin to learn which parts to “zoom in” on: which triggers
and markers will lead to analysis that is representative of
system performance and human behavior in the data [22],
[23], [24], [25], [26]. Furthermore, each new insight extracted
from the data may completely change our understanding of
where in the data we should look. For this reason, we believe
understanding how humans and autonomous vehicles interact
requires a much larger temporal window than an epoch of
a few seconds or even minutes around a particular event. It
requires looking at the long-tail of naturalistic driving that
has up until now been largely ignored. It requires looking
at entire trips and the strategies through which humans engage the automation: when, where, and for how long it is turned on, when and where it is turned off, when control is exchanged, and many other questions. Processing this huge volume of data necessitates an entirely different approach to data analysis. We accomplish the automated aspect of the knowledge extraction process by using deep learning based computer vision approaches for driver state detection, driver body pose estimation, driving scene segmentation, and vehicle state detection from the instrument cluster video as shown in Fig. 2 and discussed in §IV. The result of using deep learning based automated annotation is that MIT-AVT can analyze the long-tail of driving in the context of shared autonomy, which, in turn, permits the integration of complex observed interactions with the human's perception of their experience. This innovative interdisciplinary approach to analysis of NDS datasets in their entirety offers a unique opportunity to evaluate situation understanding of human-computer interaction in the context of automated driving.
B. Datasets for Application of Deep Learning
Deep learning [27] can be defined in two ways: (1) a branch
of machine learning that uses neural networks that have many
layers or (2) a branch of machine learning that seeks to form
hierarchies of data representation with minimum input from
a human being on the actual composition of the hierarchy.
The latter definition is one that reveals the key characteristic
of deep learning that is important for our work, which is the
ability of automated representation learning to use large-scale
data to generalize robustly over real-world edge cases that arise
in any in-the-wild application of machine learning: occlusion,
lighting, perspective, scale, inter-class variation, intra-class
variation, etc. [28].
In order to leverage the power of deep learning for extracting human behavior from raw video, large-scale annotated
datasets are required. Deep neural networks trained on these
datasets can then be used for their learned representation and
then fine-tuned for the particular application in the driving
context. ImageNet [29] is an image dataset based on WordNet
[30] where 100,000 synonym sets (or “synsets”) each define
a unique meaningful concept. The goal for ImageNet is to
have 1000 annotated images for each of the 100,000 synsets.
Currently it has 21,841 synsets with images and a total of
14,197,122 images. This dataset is commonly used to train neural networks for image classification and object detection
tasks [31]. The best performing networks are highlighted as
part of the annual ImageNet Large Scale Visual Recognition
Competition (ILSVRC) [32]. In this work, the terms “machine
learning,” “deep learning,” “neural networks,” and “computer
vision” are often used interchangeably. This is due to the fact
that the current state-of-the-art for most automated knowledge
extraction tasks are dominated by learning-based approaches
that rely on one of many variants of deep neural network
architectures. Examples of other popular datasets leveraged
in the development of algorithms for large-scale analysis of
driver behavior in our dataset include:
• COCO [33]: Microsoft Common Objects in Context
(COCO) dataset is a large-scale dataset that addresses
the object detection task in scene understanding under
two perspectives: detecting non-iconic views of objects,
and the precise 2D localization of objects. The first task
usually refers to object localization, which uses bounding
boxes to denote the presence of objects. The second task
refers to instance segmentation, for which the precise
masks of objects are also needed. The whole dataset
features over 200,000 images labeled within 80 object
categories. Successful methods [31], [34], [35] jointly
model the two tasks together and simultaneously output
bounding boxes and masks of objects.
• KITTI [36], [37]: KITTI driving dataset develops challenging benchmarks for stereo vision, optical flow, visual
odometry / SLAM and 3D object detection, captured
by driving around in both rural areas and highways of
Karlsruhe (a mid-size city in Germany). In total, there are
6 hours of traffic scenarios recorded at 10-100 Hz using
a variety of sensor modalities such as high-resolution
color and grayscale stereo cameras, a Velodyne 3D laser
scanner and a high-precision GPS/IMU inertial navigation
system. In addition, [38] also proposes ground truth for 3D
scene flow estimation by collecting 400 highly dynamic
scenes from the raw dataset and augmenting them with
semi-dense scene flow ground truth.
• Cityscapes [39]: The Cityscapes dataset focuses on semantic understanding of urban street scenes. It offers
a large, diverse set of stereo video sequences recorded
in streets from 50 different cities with pixel-level and
instance-level semantic labeling. There are 5,000 fully
segmented images with pixel-level annotations and an
additional 20,000 partially segmented images with coarse
annotations. Its two benchmark challenges have led to the
development of many successful approaches for semantic
segmentation [40], [41] and instance segmentation [34],
[42].
• CamVid [43]: Cambridge-driving Labeled Video
Database (CamVid) is the first dataset with frame-wise
semantic labels in videos captured from the perspective
of a driving automobile. The dataset provides ground
truth labels that associate each pixel with one of 32
semantic classes. Manually specified per-pixel semantic
segmentation of over 700 images total enables research
on topics such as pedestrian detection [44], and label
propagation [45].
As shown in Fig. 2, specific tasks have been defined such as
fine-grained face recognition, body pose estimation, semantic
scene perception, and driving state prediction. Current efforts
are briefly summarized as follows:
• Fine-grained Face Recognition: Beyond classic face
recognition studies, fine-grained face recognition focuses
on understanding human behavior toward face perception,
such as facial expression recognition [46], [47], eye gaze
detection [48], [49]. In the driving context, [50], [51]
explore the predictive power of driver glances. [52], [53]
use facial expression to detect emotional stress for driving
safety and the driving experience.
• Body Pose Estimation: Work on human body pose
expands the performance, capabilities, and experience
of many real-world applications in robotics and action
recognition. Successful approaches vary from using depth
images [54], via deep neural networks [55], or with
both convolutional networks and graphical models [56].
Specifically for driving, [57] use driver pose, which
is represented by skeleton data including positions of
wrist, elbow, and shoulder joints, to model human driving
behavior. [58] cast visual analysis of eye state and head
pose for driver alertness monitoring.
• Semantic Scene Perception: Understanding the scene
from 2D images has long been a challenging task in
computer vision, which often refers to semantic image
segmentation. By taking advantage of large scale datasets
like Places [59], Cityscapes [39], many approaches [40],
[41] manage to get state-of-the-art results with powerful
deep learning techniques. As a result, precise driving
scene perception [60], [61] for self-driving cars is now
actively studied in both academia and industry.
• Driving State Prediction: Vehicle state is usually considered as a direct illustration of human decision in driving,
which is also the goal for autonomous driving. In terms
of machine learning, it serves as the ground truth for
various tasks from different perspectives such as driving
behavior [57] and steering commands [60], [61].
Many aspects of driver assistance, driver experience, and
vehicle performance are increasingly being automated with
learning-based approaches as representative datasets for these
tasks are released to the broad research community. The MIT-AVT study aims to be the source of many such datasets that
help train neural network architectures that provide current
and future robust solutions for many modular and integrated
subtasks of semi-autonomous and fully-autonomous driving.
C. Automotive Applications of Deep Learning
Design of perception and control systems in the driving domain has benefited significantly from learning-based approaches that leverage large-scale data collection and annotation in order to construct models that generalize over the edge cases of real-world operation. Leveraging the release of large-scale annotated driving datasets [36], [39], automotive deep learning research aims to address detection, estimation, prediction, labeling, generation, control, and planning tasks.
II. MIT-AVT STUDY STRUCTURE AND GOALS
The governing principle underlying the design of all hardware, low-level software, and higher-level data processing
performed in the MIT-AVT study is: continual, relentless
innovation, while maintaining backward compatibility. From
the beginning, we chose to operate at the cutting-edge of
data collection, processing, and analysis approaches. This
meant trying a lot of different approaches and developing
completely new ones: from sensor selection and hardware
design described in §III to the robust time-critical recording
system and the highly sophisticated data pipeline described in
§IV. It’s a philosophy that allowed us to scale quickly and find
new solutions at every level of the system stack.
A. Participation Considerations and Recruitment
As previously noted, the medium duration (one month long)
NDS is conducted using MIT-owned vehicles, while the long
duration (over 1 year) NDS is conducted in subject-owned
vehicles. Participants are divided into primary and secondary
drivers, all of which must sign an institutional review board
(IRB) approved informed consent form. Primary drivers in the
long NDS (usually the most frequent driver of the vehicle or
the car owner) must be willing to provide permission to install
the data acquisition equipment in the vehicle, warning labels
on windows to advise non-consented passengers and drivers
of the ongoing data collection, and coordinate with project
staff for system maintenance and data retrieval. Recruitment
is conducted through flyers, social networks, forums, online
referrals, and word of mouth. Primary drivers are compensated
for their time involvement in vehicle instrumentation, system maintenance appointments, data retrieval, and completing
questionnaires.
To be accepted as a primary driver in an MIT-owned vehicle
fleet requires that potential subjects’ daily commutes include
time on specific highways leading in and out of the Greater
Boston Area (to support comparison across similar roadway
conditions), a willingness to use a study vehicle for a period of
approximately four weeks as the subject’s primary commuting
vehicle, signing an IRB approved informed consent form, passing a Criminal Offender Record Information (CORI) check
and driving record review by MIT’s Security and Emergency
Management Office, participating in a training protocol that
covers both basic and advanced vehicle features, and completing a series of questionnaires and interviews prior to and after
their naturalistic driving experience. High-level overviews of
the training protocol, questionnaire, and interview strategies
can be found in §II-B and §II-C, respectively. Recruitment
is conducted through various online communities, flyers, a
database of volunteers in prior MIT studies, online referrals,
and word of mouth. Drivers are compensated for their time
involvement in the study with the use of a vehicle, one tank
of gas, coverage of roadway tolls for the duration of their
use of the vehicle, and a small monetary compensation for
finalizing interviews.
B. Training Conditions for One Month NDS
Participants in the medium duration (one month long) NDS are provided with introductions to the fleet vehicles in the form of a 1.5 hour long training session. This session is intended to introduce drivers to the physical characteristics of the vehicle, and provide a sufficient understanding of vehicle features in order to support safe use of advanced technologies. Participants are provided with a study overview by a researcher and presented with manufacturer produced videos or information packets on one or more of the basic and advanced features available in the vehicle. After this initial introduction to systems outside of the vehicle, participants are seated in the vehicle and given a guided overview of the vehicle layout and settings (e.g. seat / mirror adjustments, touchscreen menu layout). Participants' phones are paired with the vehicle, and they are given the opportunity to practice several voice commands (e.g. placing a phone call, entering a destination). Next, more detailed overviews are provided on the function, activation, and use of the following features:
• Adaptive Cruise Control (ACC)
• Pilot Assist (in the Volvo)
• Forward Alert Warning / City Safety (in the Volvo)
• Automatic Emergency Braking
• Lane Departure Warning (LDW)
• Lane Keep Assist (LKA)
• Blind Spot Monitor
Following this stationary in-vehicle training, participants
are provided with an on-road training drive on a multi-lane
highway. This highway driving session lasts approximately
30 minutes to allow for practical exposure to the systems
in a real-world setting. During the training drive, participants
are encouraged to utilize the researcher and ask questions
when testing out the systems. Participants are encouraged to
customize vehicle settings to their preferences and to develop
sufficient familiarity to support the ability to choose to use or
not use certain systems for the duration of their one month
period of vehicle use.
C. Qualitative Approaches for One Month NDS
Self-report data collection methods are kept as unobtrusive
to participation in the study as possible, while still capturing
the richness of driver’s experience with the vehicle and various
systems, their thoughts on the technology after participating,
and barriers toward their intentions to adopt or discard automation moving forward. Self-report data in the medium duration
(one month long) NDS is captured using three questionnaire
batteries and two semi-structured interviews. Self-report data
is collected prior to and after the naturalistic portion of the
experiment; at no point are participants asked to complete
questionnaires or interviews while they are in possession of
the vehicle.
The questionnaire batteries are deployed in three stages.
The first occurs when a subject signs the consent form
and completes the background check paperwork. The first
questionnaire collects basic demographics and information on
driving history, driving style, exposure to various advanced
and established in-vehicle technologies, and general trust in
technology. A second questionnaire is completed immediately
following the training protocol outlined in §II-B, and captures
participants’ high level mental models, initial impressions, and
reported trust in select vehicle technologies. The third and
final questionnaire is completed at the end of the driver's one-month naturalistic driving period. This questionnaire assesses
reported trust in select technologies, perceptions of safety,
high- and detailed-level understanding of systems, and desire
for having such systems, as experienced during the NDS period and with hypothetical improvements, in their own future vehicles. Many questions in the second and third questionnaires
are identical, allowing analysis to explore how exposure to
systems and experiential learning impact concepts such as trust
and understanding of technologies.
The semi-structured interviews are conducted in person
between a research associate and the study participant, and
take place on two occasions. The first interview occurs at
the end of the one-month naturalistic driving period, and
lasts approximately 30-60 minutes. It consists of 13 predefined questions focusing on initial reactions to the vehicle,
experience during the training drive, how training affected
their understanding of the technologies, and driver perceptions
of the technologies. The second interview is conducted over
the phone approximately two weeks after the vehicle has been
returned to MIT. The second interview lasts approximately 10
minutes, and consists of 6 questions focusing on the driver’s
adaptation back to their original, typically lower-technology
vehicle.
D. Competitors Collaborate: Consortium Model
Naturalistic driving data and automated deep learning based interpretation of that data give insights, suggestions, and well-grounded scenarios as to the path forward for safe and effective integration of artificial intelligence into modern and future vehicle systems. The raw data and the high-level understanding of human behavior and system performance in such autonomous vehicle technology are of interest to:
• Car companies (both established and newly formed)
• Automotive parts suppliers
• Insurance companies
• Technology companies
• Government regulatory agencies
• Academic and research organizations with stakeholder interest in applied artificial intelligence and vehicle safety.
When the path forward is full of uncertainty, risks, costly misaligned investments, and potential paradigm shifts, open innovation provides more value than closed competition. At this moment in time, autonomous vehicle technology is a space where competitors win by collaborating, sharing high-level insights and large-scale, real-world data.
High-level measures such as system use and system performance can be used to enhance the design, development and validation of future vehicle systems. Basic driver behavior with and without technology use can fuel basic research on driver understanding, use characteristics, and decision models while aiding in the actuation of risk in the insurance market. Video recording inside and out of the vehicle can be used to develop perception, control, planning, driver sensing, and driver assistance systems. As such, the data collected in the MIT-AVT study can be leveraged for a range of quantitative and qualitative efforts. Members of the Advanced Vehicle Technology consortium [62] are collaborating to support the acquisition of data through the MIT-AVT study, development of new data processing approaches, and selected analysis.
Full members of the consortium have rights to data access for
proprietary or other internal use purposes. Several members of
the effort are actively involved in independent research (with
and without MIT involvement) using MIT-AVT study data.
III. HARDWARE: DATA LOGGING AND REAL-TIME PROCESSING
The backbone of a successful naturalistic driving study
is the hardware and low-level software that performs the
data collection. In the MIT-AVT study, that role is served
by a system named RIDER (Real-time Intelligent Driving
Environment Recording system). RIDER was designed and
continuously developed to satisfy the following goals and
requirements:
1) Timestamped Asynchronous Sensor Recording:
Record all sensors and data streams in a way that each
sample of data (no matter its frequency or data source)
is timestamped using a centralized, reliable time-keeper.
In other words, data has to be timestamped in a way that
allows perfect synchronization of multiple data streams
in post-processing [63].
2) High-Definition Video: Capture and record 3 to 6 cameras at 720p (1280x720) resolution. The selection
of camera positions, resolution, and compression was
one of the most essential design decisions of the entire
study. See §III-C for discussion of how this selection
was made.
3) CAN Bus: Collect vehicle telemetry from the Controller
Area Network (CAN) bus(es) of the vehicle [64]. Each
vehicle has different ports and bus utilization policies,
with little information made publicly available about the
mapping of message ID’s and the message content. Raw
CAN messages must be recorded such that the essential
information is contained within those messages even if at
the time of collection those messages cannot be decoded.
4) Remote Cellular Connectivity: Low-bandwidth, infrequent communication of system status via a cellular
connection in order to detect when RIDER system
malfunction occurs.
5) Discrete and Elegant Appearance: Parts of the system
that are visible from inside or outside the car should
have a small form-factor and have visual design characteristics that do not detract from the overall appearance
of the vehicle or have an impact on the overall driving
experience.
6) Camera Mounting is Robust but Removable: Mounting must be consistent, reliable, and removable, and designed specifically for each vehicle's interior physical characteristics.
RIDER components include a real-time clock, GPS, IMU, remote cellular connectivity, and the ability to record up to 6 cameras at 720p resolution. The system employs common components tailored to its needs, achieving a scalable, low-cost, accurate, extendable, and robust data recording platform.
Fig. 5: Knights of CANelot, CAN controlled power board. Power board mid-assembly showing populated CAN controller, transceiver, and power regulation. Also shown, unpopulated positions for the power relay, microcontroller, oscillator and connectors.
Fig. 6: Fully assembled Knights of CANelot board, showing populated microcontroller, power relay, CAN and power connections.
A. Power Management System
The power system for RIDER has several constraints: it must be flexible enough to transfer between different vehicles and must draw minimal power when not in use so as not to drain the primary vehicle battery. The power system consists of a main smart CAN monitoring section and a buck converter. When
active and logging data, RIDER draws less than 8 watts of
power. When in standby, RIDER’s quiescent current draw is
less than 1/10th of a watt.
The Knights of CANelot is a CAN controlled power
board that contains a microchip MCP2515 CAN controller
and MCP2551 CAN transceiver, along with an Atmega328p
microcontroller to monitor CANbus traffic. By default when
powered this microcontroller places itself into sleep and does
not allow power to enter the system by way of a switching
relay. When the CAN controller detects a specific predefined CAN message indicating the vehicle CANbus is active, the microcontroller is sent an interrupt by the CAN controller, waking up the microcontroller from sleep and triggering the relay to power the primary buck converter. This begins the booting sequence for the rest of the system. When the vehicle shuts off and the CANbus within the car enters a sleep state, a signal is sent via the Knights of CANelot microcontroller to gracefully stop all video and data recording, shut down the compute system, and disconnect main power, then enter sleep mode once again.
To keep the electronics and stored data secure, RIDER is placed within the trunk away from the elements and possible disturbances from passengers. Power and CAN data cables are run from the OBD-II or diagnostic port to the trunk into RIDER. USB cables for cameras are also run from each camera location into the trunk. All data and power cables are secured and hidden beneath interior trim panels.
B. Computing Platform and Sensors
A single board computer was chosen for this application for
its wide variety of I/O options, small form factor and ease of
development. We chose to work with the Banana Pi Pro with
the following sensors and specifications:
• 1GHz ARM Cortex-A7 processor, 1GB of RAM
• Expandable GPIO ports for IMU/GPS/CAN
• Native onboard SATA
• Professionally manufactured daughter board for sensor
integration
• ARM processor features onboard CAN controller for
vehicle telemetry data collection
• Maxim Integrated DS3231 real-time clock for accurate
timekeeping/time-stamping +/-2 ppm accuracy
• Texas Instruments SN65HVD230 CAN transceiver
• 9 degrees-of-freedom inertial measurement unit (STMicro L3GD20H (gyro), LSM303D (accelerometer/compass))
• GlobalTop MTK3339 GPS unit, 6 channel, DGPS capability, accurate within 5 meters
• Huawei E397Bu-501 4G LTE USB module
• USB 3.0 4-port hub, powered
• 1TB/2TB solid state hard drive
C. Cameras
Three or four Logitech C920 webcams record at a resolution of 1280x720 at 30 frames per second within the car. Two of these cameras have been modified to accept standard CS type lens mount for adaptability within the car for either face or body pose orientation. The third camera is the standard webcam that is mounted on the windshield for a forward road perspective. Occasionally a fourth camera is placed within the instrument cluster to capture information unavailable on the CANbus. These cameras also contain microphones for audio capture and recording. Custom mounts were designed for specialty placement within the vehicle.
Most single board computers like our Banana Pi lack the required computational ability to encode and compress more than one raw HD video stream. The Logitech C920 camera provides the ability to off-load compression from the compute platform; compression instead takes place directly on the camera. This configuration allows for the possibility of up to 6 cameras in a single RIDER installation.
D. Ongoing Hardware Development and Innovation
RIDER is a strong and proven instrumentation platform
with adequate data collection abilities for naturalistic driving
research. During the research, development, and testing process we met some limitations of the system. While a single
board computer is sufficient for most collection processes,
limitations of minimal system memory could create issues
when expanding the system. Similarly, a Dual-Core ARM
processor is very capable when interfacing with sensors and
writing data out to files, but performance can fluctuate if any
preprocessing of the data is required onboard. From our work
we have proposed the following improvements to some of
these common issues.
The largest enhancement for the entire RIDER system
would be to upgrade the single board computing platform. Use
of the Nvidia Jetson TX2 would provide more expandability
both for I/O and processing. With greater processing and GPU
bandwidth available, real-time systems could be implemented
using both video and sensor data simultaneously for detection
and driver warning systems, internal annotation of data and
more. With greater I/O capability, upgraded sensors packages
with higher data bandwidths can be implemented. Much like
the Banana Pi Pro the Jetson TX2 has not one, but two
fully supported CAN controllers to interface with a secondary
CANbus system on the vehicle. Jetson TX2 has expandability
not only for SATA but also PCIe and mSATA, allowing for
even greater expansion of third party modules. The enhanced
processing via CPU and GPU with 8 times the onboard RAM
allows the potential for preprocessing and integration of real-time driver monitoring systems. The Jetson also has the major advantage of being supported for use in multiple configurations for in-vehicle applications. Below are the specifications
and added improvements of the Jetson TX2 over the Banana
Pi.
1) Jetson TX2 - Capabilities:
• Quad-Core ARM Cortex-A57 @ 2GHz + Dual-Core
NVIDIA Denver 2 @ 2GHz + NVIDIA Pascal GPU with
8GB of RAM
• Industrial cameras; GigE vision, USB3 Vision
• LIDAR
• Onboard video encoding
• Real-time
• 802.11a/b/g/n/ac WiFi
• Bluetooth 4.1
• USB3
• 10/100/1000 BASE-T Ethernet
• 12 lanes MIPI CSI 2.0, 2.5 Gb/sec per lane
• PCIe gen 2.0
• mSATA
• SDcard
• 2x CAN bus interface
Fig. 7: Final prototype version of RIDER enclosed by 3D
printed case. From top to bottom, clockwise, attached to the
top of the case is external storage in the form of a 1 terabyte
solid state hard drive. The USB cameras connect via a USB
hub shown in the center. To the right of the USB hub, Banana
Pi covered by the black SensorHAT with CAN transceiver,
GPS, IMU, and real time clock. Bottom center, buck converter
for stepping down vehicle battery voltage from 12-13.8 volts
to 5 volts for all compute systems. Lower left, Knights of
CANelot CAN controlled power board.
Much like the Jetson TX2, Nvidia's Drive PX2 was specifically designed for the automotive environment to operate semi- and fully autonomous vehicles. It is well equipped for simple
tasks such as data logging, up to advanced situations where
interfacing with many cameras, high bandwidth sensors and
the ability to control both longitudinal and lateral movement
of the vehicle are required. While the Drive PX2 would be
under utilized in a data-collection-only scheme, it is a great
platform to begin developing systems for final integration into
automobiles as a production system. Below are some of the
Drive PX2 compute and sensor capabilities.
2) Drive PX2 - Capabilities:
• 2x Quad-Core ARM Cortex-A57 @ 2GHz + 2x Dual-Core NVIDIA Denver 2 @ 2GHz with 8GB of LPDDR4 RAM + 2x integrated Pascal + 2x dedicated Pascal MXM modules with 4GB of GDDR5 VRAM
• Two computers, connected over ethernet
• 8 TFLOPS, 24 DL TOPS (using int8)
• 6x CANbus interfaces
• 4x LINbus interfaces
• 2x FlexRay network interfaces
• Ability to control car with real time system
• 12x GSML camera (industrial cameras)
• 2x 1GbE + 1x 10GbE (ethernet)
• USB3
IV. SOFTWARE: DATA PIPELINE AND DEEP LEARNING MODEL TRAINING
Building on the robust, reliable, and flexible hardware architecture of RIDER is a vast software framework that handles the recording of raw sensory data and takes that data through many steps across thousands of GPU-enabled compute cores to the extracted knowledge and insights about human behavior in the context of autonomous vehicle technologies. Fig. 8 shows the journey from raw timestamped sensor data to actionable knowledge. The high-level steps are (1) data cleaning and synchronization, (2) automated or semi-automated data annotation, context interpretation, and knowledge extraction, and (3) aggregate analysis and visualization.
This section will discuss the data pipeline (Fig. 8), which includes software implemented on RIDER boxes that enables data streaming and recording. In addition, the software that is used to offload and process the data on a central server will be discussed. The operational requirements of software operating on RIDER boxes are as follows:
1) Power on whenever the vehicle is turned on
2) Create a trip directory on an external solid state drive
3) Redirect all data streams into timestamped trip files
4) Log and transmit metadata to the lab in real time
5) Power down after the vehicle is turned off
A. Microcontroller
The microcontroller on the Knights of CANelot power
management board runs a small C program that is responsible
for powering the RIDER system in sync with the vehicle.
By default, this microcontroller is in a sleep state, awaiting a
specific CAN message. By listening to the vehicle's CANbus, this program can recognize when the CAN message for a specific
signal begins, which signifies the car has turned on. If this
signal is observed, the C program then connects the vehicle’s
power to the rest of the system, starting the data collection.
When the specified message ends, meaning the car is off, the
microcontroller sends a signal to the Banana Pi to close all files
and shutdown gracefully. It then waits 60 seconds to finally
disconnect power from the rest of the system and enters its
original sleep state.
B. Single Board Computer
Our single board computer, the Banana Pi, contains a
32GB SD card that stores the RIDER filesystem, software
and configuration files. The Banana Pi runs a modified Linux
kernel using custom kernel modules and a tweaked Bananian
operating system with performance and security enhancements. Performance was improved by disabling unnecessary
kernel modules and removing extraneous Linux services. Security enhancements included disabling all CAN transmission,
thereby prohibiting malicious or unintentional transmission
of actuating messages to a vehicle’s systems. Additional
security improvements included altering the network settings
to prevent any remote connection from logging in. Specific
MIT machines were whitelisted to allow configuration files
to be altered through a physical connection. The default system
services were also altered to run a series of locally installed
programs that manage data collection whenever the system
boots.
A customized Linux kernel was developed specifically for
RIDER based on the hardware and system requirements listed
above. The filesystem is stored on a replaceable micro SD
card on-board the Banana Pi. The next section outlines and
describes the software running within RIDER and the management of the data after collection.
C. Startup Scripts
The Banana Pi runs a series of data recording initialization
bash startup scripts whenever the system boots. First, the onboard clock on the Pi is synchronized with a real-time clock
that maintains high resolution timing information. Modules for
device communication such as UART, I2C, SPI, UVC, and
CAN are then loaded to allow interaction with incoming data
streams. A monitoring script is started that shuts down the
system if a specified signal is received from the Knights of
CANelot microcontroller, and an additional GSM monitoring
script helps reconnect to the cellular network after losing
connection. The last initialization steps are to start the python
scripts Dacman and Lighthouse.
D. Dacman
Dacman represents the central data handler script that
manages all data streams. It uses a configuration file called
trip_dacman.json that contains unique device IDs for
all cameras. In addition, it contains a unique RIDER ID
associated with the RIDER box it is stored in. This configuration file also contains unique ID values for the subject,
vehicle and study this driver is associated with. Once started,
Dacman creates a trip directory on the external solid state
drive named according to the date it was created using a
unique naming convention: rider-id_date_timestamp
(e.g. 20_20160726_1469546998634990). This trip directory contains a copy of trip_dacman.json, any data
related CSV files reflecting included subsystems, as well as
a specifications file called trip_specs.json that contains
microsecond timestamps denoting the beginning and end of
every subsystem and the trip itself.
Fig. 8: The MIT-AVT data pipeline, showing the process of offloading, cleaning, synchronizing, and extracting knowledge from data. On the left is the dependency-constrained, asynchronous, distributed computing framework. In the middle is the sequence of high level procedures that perform several levels of knowledge extraction. On the right are broad categories of data produced by the pipeline, organized by size.
Dacman calls a manager python script for every subsystem
(e.g. audio_manager.py or can_manager.py), which
makes the relevant system calls to record data. Throughout
the course of the current vehicle trip, all data is written to
CSV files with timestamping information included in each
row. Dacman calls two other programs written in C in order
to help generate these files: cam2hd for managing cameras
and dump_can for creating CAN files. Audio or camera data
is recorded to RAW and H264 formats respectively, with an
accompanying CSV denoting the microsecond timestamp at
which each frame was recorded. If any errors are encountered
while Dacman is running, the system restarts up to two times
in an attempt to resolve them, and shuts down if unable to
resolve them.
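The following is a minimal sketch of the trip-directory creation step described above, assuming the rider-id_date_timestamp naming convention and the trip_dacman.json / trip_specs.json files named in this section; the field names inside the configuration (e.g. "rider_id") and the helper layout are illustrative only, not the actual Dacman implementation.

```python
import json
import os
import time

def start_trip(config_path="trip_dacman.json", drive_root="/mnt/ssd"):
    """Create a trip directory and seed its configuration and specs files.

    Simplified illustration only; the real Dacman also launches the
    per-subsystem manager scripts (e.g. can_manager.py, audio_manager.py).
    """
    with open(config_path) as f:
        config = json.load(f)

    ts_micro = int(time.time() * 1e6)            # microsecond timestamp
    date = time.strftime("%Y%m%d")               # e.g. 20160726
    rider_id = config["rider_id"]                # assumed field name
    trip_name = f"{rider_id}_{date}_{ts_micro}"  # e.g. 20_20160726_1469546998634990
    trip_dir = os.path.join(drive_root, trip_name)
    os.makedirs(trip_dir, exist_ok=True)

    # Keep a copy of the configuration with the recorded data.
    with open(os.path.join(trip_dir, "trip_dacman.json"), "w") as f:
        json.dump(config, f, indent=2)

    # trip_specs.json collects start/end timestamps for every subsystem.
    specs = {"trip": {"start_ts_micro": ts_micro, "end_ts_micro": None}}
    with open(os.path.join(trip_dir, "trip_specs.json"), "w") as f:
        json.dump(specs, f, indent=2)

    return trip_dir
```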
E. Cam2HD
Cam2hd is a program written in C that opens and records
all camera data. It relies on V4L (Video4Linux), which is an
open source project containing a collection of camera drivers
in Linux. V4L enables low level access to cameras connected
to RIDER by setting the incoming image resolution to 720p
and allows the writing of raw H264 frames.
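The real cam2hd is written in C directly against V4L; the following Python/OpenCV sketch only illustrates the same recording idea of pairing every captured frame with a microsecond timestamp in a CSV (the columns match camera-name.csv described in §V, while the device index, codec, and file names are assumptions).

```python
import csv
import time

import cv2  # OpenCV camera interface used here for illustration, not V4L directly

def record(device=0, out_video="camera.mp4", out_csv="camera.csv", max_frames=900):
    cap = cv2.VideoCapture(device)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(out_video, fourcc, 30.0, (1280, 720))

    with open(out_csv, "w", newline="") as f:
        log = csv.writer(f)
        log.writerow(["frame", "ts_micro"])  # same columns as camera-name.csv
        for frame_id in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            writer.write(frame)
            log.writerow([frame_id, int(time.time() * 1e6)])

    cap.release()
    writer.release()
```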
J. RIDER Database
A PostgreSQL database is used to store all incoming trip
information, as well as to house information about all trips
offloaded to a storage server. After additional processing, useful information about each trip can be added to the database.
Queries can then be structured to obtain specific trips or times
in which specific events or conditions occurred. The following
tables are fundamental to the trip processing pipeline:
• instrumentations: dates and vehicle IDs for the installation of RIDER boxes
• participations: unique subject and study IDs are combined to identify primary and secondary drivers
• riders: rider IDs paired with notes and IP addresses
• vehicles: vehicle information is paired with vehicle IDs
such as the make and model, the manufacture date, color,
and availability of specific technologies
• trips: provides a unique ID for each centrally offloaded
trip as well as the study, vehicle, subject and rider IDs.
Also provides information about synchronization state,
available camera types and subsystem data. Metadata
about the content of the trip itself is included, such as
the presence of sun, gps frequency and the presence of
certain technology uses or acceleration events.
• epochs_epoch-label: tables for each epoch type are labeled and used to identify trips and video frame ranges for which they occur (e.g. autopilot use in Teslas would be in epochs_autopilot)
• homebase_log: contains streamed log information from
the homebase script that keeps track of RIDER system
health and state
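As an illustration of how such queries can be structured, the sketch below selects synchronized trips that contain Autopilot epochs. The table names follow the descriptions above, but the join keys and column names (id, trip_id, synchronized) are assumptions about the schema rather than its documented form.

```python
import psycopg2

def autopilot_trip_ids():
    """Return IDs of synchronized trips with at least one Autopilot epoch (schema assumed)."""
    conn = psycopg2.connect(dbname="rider", host="localhost")
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT DISTINCT t.id
            FROM trips t
            JOIN epochs_autopilot e ON e.trip_id = t.id
            WHERE t.synchronized = TRUE
            """
        )
        rows = [r[0] for r in cur.fetchall()]
    conn.close()
    return rows
```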
F. DumpCAN
Dump_can is another program written in C that configures
and receives data from the Allwinner A20 CAN controller.
This program uses the can4linux module to produce a CSV
containing all CAN data received from the connected CANbus.
In addition, it offers low level manipulation of the CAN
controller. This allows dump_can to set listen-only mode on the CAN controller, which enables a heightened degree of
security. By removing the need to send acknowledgements
when listening to messages on the CAN network, any possible interference with existing systems on the CAN bus is
minimized.
G. Lighthouse
Lighthouse is a python script that sends information about
each trip to Homebase. Information sent includes timing information for the trip, GPS data, power consumption, temperature and available external drive space. The interval between
communications is specified in the dacman configuration
file. All communications are sent in JSON format and are
encrypted using public-key cryptography based on elliptic
curve Curve25519 due to its speed. This means that each
RIDER uses the public key of the server, as well as a unique public/private key pair, to encrypt and transmit data. Lighthouse is
written in Python and depends on libzmq/libsodium.
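A minimal sketch of this encrypted status transmission using pyzmq's CURVE support (which wraps libzmq/libsodium and Curve25519) is shown below. The endpoint, key handling, and message fields are illustrative; in the real system each RIDER box would hold a persistent keypair rather than generating one per message.

```python
import time

import zmq

def send_heartbeat(server_public_key, endpoint="tcp://homebase.example.org:5555"):
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.PUSH)

    # Client-side CURVE keys plus the server's public key enable encryption.
    client_public, client_secret = zmq.curve_keypair()  # persistent keys in practice
    sock.curve_secretkey = client_secret
    sock.curve_publickey = client_public
    sock.curve_serverkey = server_public_key

    sock.connect(endpoint)
    status = {
        "rider_id": 20,                      # illustrative values
        "ts_micro": int(time.time() * 1e6),
        "free_disk_gb": 412.7,
        "power_w": 7.4,
        "temperature_c": 38.2,
        "gps": {"lat": 42.36, "lon": -71.09},
    }
    sock.send_json(status)                   # JSON payload, as in the real system
    sock.close()
```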
K. Cleaning
After raw trip data is offloaded to a storage server, all trips
must be inspected for any inconsistencies. Some trips may
have inconsistencies that can be fixed, as in the case where
timestamping information could be obtained from multiple
files, or when a nonessential subsystem failed during a trip
(e.g. IMU or audio). In unrecoverable cases, like the event
where a camera was unplugged during a trip, that trip is
removed from the dataset. Trips that have valid data files may
also be removed from the dataset if that trip meets some set
of filtering constraints, like when a vehicle is turned on, but
does not move before turning off again.
H. Homebase
Homebase is a script that receives, decrypts and records
all information received from Lighthouse and stores it in
the RIDER database. This allows remote monitoring of drive
space and system health. All RIDER key management is done
here in order to decrypt messages from each unique box.
I. Heartbeat
Heartbeat is an engineer-facing interface that displays RIDER system status information in order to validate successful operation or gain insights as to potential system malfunction. Heartbeat uses the information committed to the database from Homebase to keep track of various RIDER logs. This is useful for analyzing the current state of the vehicle fleet, and assists in determining which instrumented vehicles are in need of drive swaps (due to the hard drive running out of space) or system repairs. It is also useful for verifying that any repairs made were successful.
L. Synchronization
After completing cleaning and filtration, valid trips undergo
a series of synchronization steps. First, the timestamps of every
frame gathered from every camera are aligned in a single video
CSV file at 30 frames per second using the latest camera
start timestamp and the earliest camera end timestamp. In
low lighting conditions the cameras may drop to recording
at 15 frames per second. In these cases, some frames may be
repeated to achieve 30 frames per second in the synced video.
After all raw videos have been aligned, new synchronized
video files can then be created at 30 frames per second. CAN
data is then decoded by creating a CSV with all relevant CAN
messages as columns and synced frame IDs as rows. CAN
message values are then inserted frame-by-frame based on
the closest timestamp to each decoded CAN message. A final
synchronized visualization can then be generated that shows
all video streams and CAN info in separate panels in the same
video. The data is then ready to be processed by any algorithm
running statistics, detection tasks, or manual annotation tasks.
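A condensed sketch of this alignment step is given below: frame timestamps (from a per-camera CSV) are resampled onto a common 30 Hz timeline and the nearest CAN values are attached to each synced frame. It uses pandas and assumes the CAN CSV has already been decoded into one column per signal; file and column names follow §V, everything else is illustrative and simplified to a single camera.

```python
import pandas as pd

def synchronize(camera_csv, can_csv, fps=30):
    """Align camera frames and decoded CAN values on a 30 Hz timeline by nearest timestamp."""
    frames = pd.read_csv(camera_csv)  # columns: frame, ts_micro
    can = pd.read_csv(can_csv)        # assumed decoded: ts_micro plus one column per CAN signal

    # Common timeline at 30 Hz between the first and last recorded frame.
    step = int(1e6 / fps)
    start, end = int(frames.ts_micro.min()), int(frames.ts_micro.max())
    timeline = pd.DataFrame({"ts_micro": range(start, end, step)})

    # Attach the nearest recorded frame and the nearest CAN row to each slot.
    synced = pd.merge_asof(timeline, frames.sort_values("ts_micro"),
                           on="ts_micro", direction="nearest")
    synced = pd.merge_asof(synced, can.sort_values("ts_micro"),
                           on="ts_micro", direction="nearest")
    return synced  # one row per synced frame, analogous to synced_can.csv

# Example usage:
# synchronize("face.csv", "decoded_can.csv").to_csv("synced_can.csv", index=False)
```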
V. TRIPS AND FILES
This section will define how trip data files may be stored
in a trip directory. A trip directory represents a trip that a
driver took with their vehicle from start to finish. These are
the files that are offloaded from the external storage drive in
a RIDER box onto a central server, where the data can be
cleaned, synchronized, or processed in some other way.
A. Trip Configuration Files
Trip configuration files store specifications and information about available subsystems, and are included to manage the data logging process.
• trip_dacman.json: a configuration file containing subject and systems information used to record the trip
• trip_diagnostics.log: a text file containing diagnostics information recorded during the trip; includes external temperature, PMU temperature, HDD temperature, power usage and free disk space
• trip_specs.json: a json file containing start and end timestamps for all subsystems
B. Trip Data Files
•
Trip data files are the end point of all recorded RIDER
data streams. They include numerous CSV (comma separated
values) files that provide timestamping information, as well as
raw video files in H264 and audio files in RAW formats.
• camera-directory: a directory named by camera type (all
contained files are also named by that camera type)
– camera-name.h264: a raw H264 file
– camera-name.error: contains camera-specific errors
– camera-name.csv: matches recorded frames with
system timestamps for later synchronization
∗ frame,ts_micro
• data_can.csv: contains CAN data
– ts_micro, arbitration_id, data_length, packet_data
• data_gps.csv: contains GPS data
– ts_micro, latitude, longitude,
altitude, speed, track, climb
• data_imu.csv: contains IMU data
– ts_micro, x_accel, y_accel,
z_accel, roll, pitch, yaw
• audio.raw: contains raw output from a specified camera
• can.error, gps.error, imu.error, audio.error: text-based
error files for CAN, GPS, IMU and audio recordings
C. Cleaning Criteria
The following cases represent recoverable errors that a trip may contain, as well as their implemented solutions:
• Invalid permissions: UNIX permissions of the trip directory must allow group-only read/write access
• Missing backup: Raw essential files are backed up to allow a rollback to previous versions
• Missing trip_specs.json: The trip_specs.json file can sometimes be reconstructed using recorded timestamps
• Missing or invalid ID: Vehicle, camera or subject IDs may be corrected based on trip context
• Invalid Nonessential Files: If IMU or audio have failed, they can be removed and the trip can be preserved
• Invalid last CSV line: Interrupted subsystems may write incomplete lines to their data file, which can be removed
D. Filtering Criteria
The following cases represent unrecoverable errors or chosen criteria that result in the removal of a trip from the dataset (a sketch of the file-level checks follows the list):
• Nonconsenting driver: When the driver is not a consented participant in the study
• Requested removal: When the subject requests certain trips, dates or times be removed
• Vehicle doesn't move: When the kinematics of the vehicle indicate no change in speed
• Trip data files < 15MB: When the total size of a trip's files is less than 15MB (faster than duration checks)
• Trip duration < 30 seconds: When the shortest camera recording is less than 30 seconds in duration
• Missing essential files: When camera files, trip_dacman.json or data_can.csv are missing
• Outside volunteer participation range: Indicative of MIT staff driving the vehicle to be maintained or washed
• Large essential subsystem error files: When there are many errors for a camera or for CAN
• Mismatches in subsystem timestamps: When one subsystem ends at least one minute earlier than another
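The sketch below shows how the size- and duration-based checks above can be applied to a trip directory. The 15MB and 30-second thresholds and the essential file names come from the list; the per-camera CSV naming and helper layout are assumptions made for illustration.

```python
import os

import pandas as pd

ESSENTIAL = ["trip_dacman.json", "data_can.csv"]

def should_remove(trip_dir, min_bytes=15 * 1024 * 1024, min_seconds=30):
    """Return a reason string if the trip should be dropped, else None."""
    # Missing essential files.
    for name in ESSENTIAL:
        if not os.path.exists(os.path.join(trip_dir, name)):
            return f"missing essential file: {name}"

    # Total trip size below 15 MB (cheaper to test than duration).
    total = sum(os.path.getsize(os.path.join(root, f))
                for root, _, files in os.walk(trip_dir) for f in files)
    if total < min_bytes:
        return "trip data files < 15MB"

    # Shortest camera recording below 30 seconds, using the frame/ts_micro logs.
    for root, _, files in os.walk(trip_dir):
        for f in files:
            if f.endswith(".csv") and "camera" in f:  # assumed per-camera log naming
                ts = pd.read_csv(os.path.join(root, f))["ts_micro"]
                if (ts.max() - ts.min()) / 1e6 < min_seconds:
                    return "trip duration < 30 seconds"
    return None
```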
E. Synchronized Files
Synchronized files are created by synchronization scripts
that run after cleaning and filtering has taken place. These
scripts align video frames and CAN messages at a rate of
30 frames per second. They are created using the same trip
naming convention in a separate, processed directory.
• synced_video.csv: every row contains a video frame ID and timestamp from every camera at 30 frames per second
• synced_video_camera-name.mp4: synchronized with all other videos at 30 FPS using H264 encoding
• synced_can.csv: each row represents a synced video frame and the closest CAN values associated with that timestamp for every CAN message
• synced_vis_panels.mp4: an optional visualization video file that displays all synced videos in separate panels where CAN data may be also displayed
VI. CONCLUSION AND ON-GOING FUTURE WORK
The application of state-of-the-art embedded system programming, software engineering, data processing, distributed computing, computer vision and deep learning techniques to the collection and analysis of large-scale naturalistic driving data in the MIT-AVT study seeks to break new ground in offering insights into how humans and autonomous vehicles interact in the rapidly changing transportation system. This work presents the methodology behind the MIT-AVT study, which aims to define and inspire the next generation of naturalistic driving studies. The governing design principle of this study is that, in addition to prior successful NDS approaches, we leverage the power of computer vision and deep learning to automatically extract patterns of human interaction with various levels of autonomous vehicle technology. We both (1) use AI to analyze the entirety of the driving experience in large-scale data and (2) use human expertise and qualitative analysis to dive deep into the data to gain case-specific understanding. To date, the dataset includes 78 participants, 7,146 days of participation, 275,589 miles, and 3.5 billion video frames. Statistics about the size and scope of the MIT-AVT dataset are updated regularly on hcai.mit.edu/avt.
ACKNOWLEDGMENT
The authors would like to thank MIT colleagues and the broader driving and artificial intelligence research community for their valuable feedback and discussions throughout the development and on-going operation of this study, especially Joseph F. Coughlin, Sertac Karaman, William T. Freeman, John Leonard, Ruth Rosenholtz, Karl Iagnemma, and all the members of the AVT consortium.
The authors would also like to thank the many vehicle owners who provided valuable insights (via email or in-person discussion) about their experiences interacting with these systems. Lastly, the authors would like to thank the annotation teams at MIT and Touchstone Evaluations for their help in continually evolving a state-of-the-art framework for annotation and discovering new essential elements necessary for understanding human behavior in the context of advanced vehicle technologies.
Support for this work was provided by the Advanced Vehicle Technology (AVT) consortium at MIT (agelab.mit.edu/avt). The views and conclusions being expressed are those of the authors, and have not been sponsored, approved, or necessarily endorsed by members of the consortium.
REFERENCES
[1] A. Davies, "Oh look, more evidence humans shouldn't be driving," May 2015. [Online]. Available: https://www.wired.com/2015/05/oh-look-evidence-humans-shouldnt-driving/
[2] T. Vanderbilt and B. Brenner, "Traffic: Why we drive the way we do (and what it says about us), Alfred A. Knopf, New York, 2008; 978-0-307-26478-7," 2009.
[3] W. H. Organization, Global status report on road safety 2015. World Health Organization, 2015.
[4] M. Buehler, K. Iagnemma, and S. Singh, The DARPA urban challenge:
[5] V. V. Dixit, S. Chand, and D. J. Nair, “Autonomous vehicles: disengagements, accidents and reaction times,” PLoS one, vol. 11, no. 12, p.
e0168054, 2016.
[6] F. M. Favarò, N. Nader, S. O. Eurich, M. Tripp, and N. Varadaraju, “Examining accident reports involving autonomous vehicles in california,”
The application of state-of-the-art embedded system programming, software engineering, data processing, distributed
computing, computer vision and deep learning techniques to
the collection and analysis of large-scale naturalistic driving
data in the MIT-AVT study seeks to break new ground in
offering insights into how human and autonomous vehicles
interact in the rapidly changing transportation system. This
work presents the methodology behind the MIT-AVT study
which aims to define and inspire the next generation of naturalistic driving studies. The governing design principle of this
study is that, in addition to prior successful NDS approaches,
we leverage the power of computer vision and deep learning
to automatically extract patterns of human interaction with
various levels of autonomous vehicle technology. We both (1)
use AI to analyze the entirety of the driving experience in
large-scale data and (2) use human expertise and qualitative
analysis to dive deep into the data to gain case-specific
understanding. To date, the dataset includes 78 participants,
7,146 days of participation, 275,589 miles, and 3.5 billion
video frames. Statistics about the size and scope of the MIT-AVT dataset are updated regularly on hcai.mit.edu/avt.
ACKNOWLEDGMENT
The authors would like to thank MIT colleagues and the
broader driving and artificial intelligence research community
for their valuable feedback and discussions throughout the
development and on-going operation of this study, especially
Joseph F. Coughlin, Sertac Karaman, William T. Freeman,
John Leonard, Ruth Rosenholtz, Karl Iagnemma, and all the
members of the AVT consortium.
The authors would also like to thank the many vehicle
owners who provided valuable insights (via email or in-person discussion) about their experiences interacting with
these systems. Lastly, the authors would like to thank the
annotation teams at MIT and Touchstone Evaluations for their
help in continually evolving a state-of-the-art framework for
annotation and discovering new essential elements necessary
for understanding human behavior in the context of advanced
vehicle technologies.
Support for this work was provided by the Advanced Vehicle Technology (AVT) consortium at MIT (agelab.mit.edu/avt).
The views and conclusions being expressed are those of the
authors, and have not been sponsored, approved, or necessarily
endorsed by members of the consortium.
arXiv:1601.04448v3 [cs.DS] 27 Oct 2016
On Competitive Algorithms for Approximations of Top-k-Position
Monitoring of Distributed Streams
Alexander Mäcker
Manuel Malatyali
Friedhelm Meyer auf der Heide
Heinz Nixdorf Institute & Computer Science Department
Paderborn University, Germany
{amaecker, malatya, fmadh}@hni.upb.de
Abstract
Consider the continuous distributed monitoring model in which n distributed nodes, receiving individual data
streams, are connected to a designated server. The server is asked to continuously monitor a function defined over
the values observed across all streams while minimizing the communication. We study a variant in which the server
is equipped with a broadcast channel and is supposed to keep track of an approximation of the set of nodes currently
observing the k largest values. Such an approximate set is exact except for some imprecision in an ε-neighborhood
of the k-th largest value. This approximation of the Top-k-Position Monitoring Problem is of interest in cases where
marginal changes (e.g. due to noise) in observed values can be ignored so that monitoring an approximation is sufficient and can reduce communication.
This paper extends our results from [6], where we have developed a filter-based online algorithm for the (exact)
Top-k-Position Monitoring Problem. There we have presented a competitive analysis of our algorithm against an
offline adversary that also is restricted to filter-based algorithms. Our new algorithms as well as their analyses use
new methods. We analyze their competitiveness against adversaries that use both exact and approximate filter-based
algorithms, and observe severe differences between the respective powers of these adversaries.
1 Introduction
We consider a setting in which n distributed nodes are connected to a central server. Each node continuously observes
a data stream and the server is asked to keep track of the value of some function defined over all streams. In order
to fulfill this task, nodes can communicate to the server, while the server can employ a broadcast channel to send a
message to all nodes.
In an earlier paper [6], we introduced and studied a problem called Top-k-Position Monitoring in which, at any
time t, the server is interested in monitoring the k nodes that are observing the largest values at this particular time
t. As a motivating example, picture a scenario in which a central load balancer within a local cluster of webservers
is interested in keeping track of those nodes which are facing the highest loads. We proposed an algorithm based on
the notion of filters and analyzed its competitiveness with respect to an optimal filter-based offline algorithm. Filters
are assigned by the server and serve to indicate to the nodes when they can refrain from sending updates; this
particularly reduces communication when observed values are “similar” to the values observed in the previous time
steps.
In this paper, we broaden the problem and investigate the monitoring of an approximation of the Top-k-Positions.
We study the problem of ε-Top-k-Position Monitoring, in which the server is supposed to maintain a subset of k nodes
such that all nodes observing “clearly larger” values than the node which observed the k-th largest value are within this
set and no node observing a “clearly smaller” value belongs to this set. Here, smaller/larger is meant to be understood
with respect to ε and the k-th largest value observed. A detailed definition is given in Sect. 2. Relaxing the problem
in this direction can reduce communication while, in many cases, marginal or insignificant changes (e.g. due to noise)
in observed values can be ignored and justify the sufficiency of an approximation. Examples are situations where lots
of nodes observe values oscillating around the k-th largest value and where this observation is not of any qualitative
relevance for the server. We design and analyze algorithms for ε-Top-k-Position Monitoring and, although we use
these very tools of filters and competitive analysis [6], the imprecision/approximation requires fundamentally different
online strategies for defining filters in order to obtain efficient solutions.
This work was partially supported by the German Research Foundation (DFG) within the Priority Program "Algorithms for Big Data" (SPP 1736) and by the EU within FET project MULTIPLEX under contract no. 317532.
1.1 Our Contribution
In this paper we investigate a class of algorithms that are based on using filters and study their efficiency in terms of
competitive analysis.
As a first technical contribution we analyze an algorithm (Sect. 3) which allows the server to decide the logical
disjunction of the (binary) values observed by the distributed nodes. It uses a logarithmic number of rounds and a
constant number of messages on expectation. As a by-product, using this algorithm, the result on the competitiveness
of the filter-based online algorithm in [6] can be reduced from O(k log n + log ∆ log n) to O(k log n + log ∆), for
observed values from {0, 1, . . . , ∆}.
Second, we also propose an online algorithm (Sect. 4) that is allowed to introduce an error of ε ∈ (0, 1/2] in the
output and compare it to an offline algorithm that solves the exact Top-k-Position Monitoring problem. We show that
this algorithm is O(k log n + log log ∆ + log(1/ε))-competitive. Note that this imprecision allows us to bring the log ∆ in
the upper bound down to log log ∆ for any constant ε.
We also investigate the setting in which also the offline algorithm is allowed to have an error in the output (Sect. 5).
We first show that these results are not comparable to previous results; we prove a lower bound on the competitiveness
of Ω(n/k). Our third and main technical contribution is an algorithm with a competitiveness of O(n^2 log(ε∆) +
n log^2(ε∆) + log log ∆ + log(1/ε)) if the online and the offline algorithm may use an error of ε.
However, if we slightly decrease the allowed error for the offline algorithm, the lower bound on the competitiveness
of Ω(n/k) still holds, while the upper bound is reduced to O(n + k log n + log log ∆ + log(1/ε)).
1.2 Related Work
Efficient computation of functions on big datasets in terms of streams has turned out to be an important topic of
research with applications in network traffic analysis, text mining or databases (e.g. [9] and [7]).
The Continuous Monitoring Model, which we consider in this paper, was introduced by Cormode et al. [2] to
model systems comprised of a server and n nodes observing distributed data streams. The primary goal addressed
within this model is the continuous computation of a function depending on the information available across all n data
streams up to the current time at a dedicated server. Subject to this main concern, the minimization of the overall
number of messages exchanged between the nodes and the server usually determines the efficiency of a streaming
algorithm. We refer to this model and enhance it by a broadcast channel as proposed by Cormode et al. in [3].
An important class of problems investigated in literature are threshold computations where the server is supposed
to decide whether the current function value has reached some given threshold τ . For monotone functions such as
monitoring the number of distinct values or the sum over all values, exact characterizations in the deterministic case
are known [2, 3]. However, non-monotone functions, e.g., the entropy [1], turned out to be much more complex to
handle.
A general approach to reduce the communication when monitoring distributed streams is proposed in [12]. Zhang
et al. introduce the notion of filters, which are also an integral part of our algorithms. They consider the problem
of continuous skyline maintenance, in which a server is supposed to continuously maintain the skyline of dynamic
objects. As they aim at minimizing the communication overhead between the server and the objects, they use a filter
method that helps in avoiding the transmission of updates in case these updates cannot influence the skyline. More
precisely, the objects are points of a d-dimensional space and filters are hyper-rectangles assigned by the server to the
objects such that as long as these points are within the assigned hyper-rectangle, updates need not be communicated
to the server.
Despite their online nature, streaming algorithms have so far barely been studied in terms of competitiveness. In their work
[11], Yi and Zhang were the first to study streaming algorithms with respect to their competitiveness and recently this
approach was also applied in a few papers ([5, 10, 6, 4]). In their model [11], there is one node and one server and the
goal is to keep the server informed about the current value of a function f : Z^+ → Z^d that is observed by the node
and changes its value over time, while minimizing the number of messages. Yi and Zhang present an algorithm that
is O(d^2 log(d · δ))-competitive if the last value received by the server may deviate by δ from the current value of f.
Recently, Tang et al. [10] extended this work of Yi and Zhang from the two-party setting to the distributed case. They
consider a model in which the server is supposed to track the current value of a (one-dimensional) function that is
defined over a set of n functions observed at the distributed nodes. Among other things, they propose an algorithm for
the case of a tree-topology in which the distributed nodes are the leaves of a tree connecting them to the server. They
show that on any instance I their algorithm incurs communication cost that is larger by a factor of O(h_max log δ),
where h_max denotes the maximum length of a path in the tree, than that of the best solution obtained by an
online algorithm on I.
Following the idea of studying competitive algorithms for monitoring streams and the notion of filters, Lam et
al. [5] present an algorithm for online dominance tracking of distributed streams. In this problem a server always
has to be informed about the dominance relationship between n distributed nodes each observing an online stream of
d-dimensional values. Their algorithm is based on the idea of filters and they show that a mid-point strategy, which
sets filters to be the mid-point between neighboring nodes, is O(d log U )-competitive with respect to the number of
messages sent in comparison to an offline algorithm that sets filters optimally.
While we loosely motivated our search for approximate solutions by noise in the introduction, in other problems
noise is a major concern and explicitly addressed. For example, consider streaming algorithms for estimating statistical
parameters like frequency moments [13]. In such problems, certain elements from the universe may appear in different
forms due to noise and thus, should actually be treated as the same element.
2 Preliminaries
In our setting there are n distributed nodes {1, . . . , n}. Each node i receives a continuous data stream (v_i^1, v_i^2, v_i^3, . . .),
which can be exclusively observed by node i. At time t, v_i^t ∈ N is observed and no v_i^{t′}, t′ > t, is known. We omit the
index t if it is clear from the context.
Following the model in [3], we allow that between any two consecutive time steps, a communication protocol
exchanging messages between the server and the nodes may take place. The communication protocol is allowed to use
an amount of rounds which is polylogarithmic in n and max_{1≤i≤n} v_i^t. The nodes can communicate to the server while
the server can communicate to single nodes or utilize a broadcast channel to communicate a message that is received
by all nodes at the same time. These communication methods incur unit communication cost per message, we assume
instant delivery, and a message at time t is allowed to have a size at most logarithmic in n and max_{1≤i≤n} v_i^t.
Problem Description Consider the Top-k-Position Monitoring problem [6], in which the server is asked to keep
track of the set of nodes currently holding the k largest values. We relax this definition and study an approximate
variant of the problem in which this set is exact except for nodes in a small neighborhood around the k-th largest
value. We denote by π(k, t) the node which observes the k-th largest value at time t and denote by top-k := {π(i, t) :
i ∈ {1, . . . , k}} the nodes observing the k largest values. Given an error 0 < ε < 1, for a time t we denote by
E(t) := (1/(1−ε) · v_{π(k,t)}^t, ∞] the range of values that are clearly larger than the k-th largest value and by
A(t) := [(1 − ε) · v_{π(k,t)}^t, 1/(1−ε) · v_{π(k,t)}^t] the ε-neighborhood around the k-th largest value. Furthermore, we denote by
K(t) := {i : v_i^t ∈ A(t)} the nodes in the ε-neighborhood around the k-th largest value. Then, at any time t, the server is supposed
to know the nodes F(t) = F_E(t) ∪ F_A(t) = {i_1, . . . , i_k} according to the following properties:
1. F_E(t) = {i : v_i^t ∈ E(t)} and
2. F_A(t) ⊆ K(t) = {i : v_i^t ∈ A(t)}, such that |F_A(t)| = k − |F_E(t)| holds.
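For concreteness, the following small Python sketch (our own illustration; the function name approximate_top_k and all variable names are not from the paper) assembles one valid output F(t) from the observed values according to these two properties: all nodes with values in E(t) are taken, and the remaining positions are filled with nodes from the ε-neighborhood K(t), breaking ties by node identifiers.

def approximate_top_k(values, k, eps):
    # Sketch only: values maps node id -> observed value at time t, 0 < eps < 1, k <= len(values).
    order = sorted(values, key=lambda i: (values[i], i), reverse=True)  # descending, ties by id
    v_k = values[order[k - 1]]                                          # k-th largest value
    e_threshold = v_k / (1 - eps)                                       # values above this are in E(t)
    a_low, a_high = (1 - eps) * v_k, v_k / (1 - eps)                    # the eps-neighborhood A(t)
    f_e = [i for i in order if values[i] > e_threshold]                 # property 1: F_E(t)
    k_set = [i for i in order if a_low <= values[i] <= a_high]          # K(t)
    f_a = [i for i in k_set if i not in f_e][: k - len(f_e)]            # property 2: fill up from K(t)
    return f_e + f_a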
Denote by ∆ the maximal value observed by some node (which may not be known beforehand). We use F_1 = F(t) if
t is clear from the context, F_2 = {1, . . . , n} \ F(t), and call F* the output of an optimal offline algorithm. If the k-th
and the (k + 1)-st largest value differ by more than ε·v_{π(k,t)}^t, F(t) coincides with the set in the (exact) Top-k-Position
Monitoring problem and hence, F(t) is unique. We denote by σ(t) := |K(t)| the number of nodes at time t which are
in the ε-neighborhood of the k-th largest value and σ := max_t σ(t). Note that |K(t)| = 1 implies that F(t) is unique.
Furthermore for solving the exact Top-k-Position Monitoring problem we assume that the values are distinct (at least
by using the nodes’ identifiers to break ties in case the same value is observed by several nodes).
2.1 Filter-Based Algorithms & Competitive Analysis
A set of filters is a collection of intervals, one assigned to each node, such that as long as the observed values at each
node are within its respective interval, the output F (t) need not change. For the problem at hand, this general idea of
filters translates to the following definition.
Definition 2.1. [6] For a fixed time t, a set of filters is defined as an n-tuple of intervals (F_1^t, . . . , F_n^t), F_i ⊆ N ∪ {∞}
and v_i ∈ F_i, such that as long as the value of node i only changes within its interval (i.e. v_i ∈ F_i), the value of the
output F need not change.
Observe that each pair of filters (F_i, F_j) of nodes i ∈ F(t) and j ∉ F(t) must be disjoint except for a small
overlap. This observation can be stated formally as follows.
Observation 2.2. For a fixed time t, an n-tuple of intervals is a set of filters if and only if for all pairs i ∈ F(t) and
j ∉ F(t) the following holds: v_i ∈ F_i = [ℓ_i, u_i], v_j ∈ F_j = [ℓ_j, u_j] and ℓ_i ≥ (1 − ε)·u_j.
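The following short Python sketch (an illustration of Observation 2.2, not an algorithm from the paper; all names are ours) checks this overlap condition for a given assignment of intervals; it assumes that each node's current value already lies inside its own interval.

def is_valid_filter_set(filters, output, eps):
    # Sketch only: filters maps node id -> (lower, upper), output is the set of nodes forming F(t).
    others = set(filters) - set(output)
    for i in output:
        l_i, _ = filters[i]
        for j in others:
            _, u_j = filters[j]
            if l_i < (1 - eps) * u_j:       # the pair (i, j) violates Observation 2.2
                return False
    return True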
In our model, we assume that nodes are assigned such filters by the server. If a node observes a value that is larger
than the upper bound of its filter, we say the node violates its filter from below. A violation from above is defined
analogously. If such a violation occurs, the node may report it and its current value to the server. In contrast to [6], we
allow the server to assign “invalid” filters, i.e., there are affected nodes that directly observe a filter-violation. However,
for such an algorithm to be correct, we demand that the intervals assigned to the nodes at the end of the protocol at time
t and thus, before observations at time t + 1, constitute a (valid) set of filters. We call such an algorithm filter-based.
Note that the fact that we allow invalid filters (in contrast to [6]) simplifies the presentation of the algorithms in the
following. However, with constant overhead the protocols can be changed such that only (valid) filters are sent to
the nodes.
Competitiveness To analyze the quality of our online algorithms, we use analysis based on competitiveness and
compare the communication induced by the algorithms to that of an adversary’s offline algorithm.
Similar to [5] and [6], we consider adversaries that are restricted to use filter-based offline algorithms and hence,
OPT is lower bounded by the number of filter updates. However, we compare our algorithms against several adversaries which differ in terms of whether their offline algorithm solves the exact Top-k-Position Monitoring Problem or
ε-Top-k-Position Monitoring. The adversaries are assumed to be adaptive, i.e., values observed by a node are given by
an adversary who knows the algorithm’s code, the current state of each node and the server and the results of random
experiments.
An online algorithm is said to have a competitiveness of c if the number of messages is at most by a factor of c
larger than that of the adversary’s offline algorithm.
2.2 Observations and Lemmas
Define for some fixed set S ⊆ {1, . . . , n} the minimum of the values observed by nodes in S during a time period
[t, t′] as MIN_S(t, t′) and the maximum of the values observed during the same period as MAX_S(t, t′).
Definition 2.3. Let t, t′ be given times with t′ ≥ t. For a subset of nodes S ⊆ {1, . . . , n} the value MAX_S(t, t′) :=
max_{t ≤ t* ≤ t′} max_{i∈S} v_i^{t*}; MIN_S(t, t′) is defined analogously.
Observe that it is sufficient for an optimal offline algorithm to only make use of two different filters F_1 and F_2.
Proposition 2.4. Without loss of generality, we may assume that an optimal offline algorithm only uses two different
filters at any time.
Proof. Let [t, t′] be an interval during which OPT does not communicate. We fix its output F_1^* and define F_2^* :=
{1, . . . , n} \ F_1^*. If OPT only uses two different filters throughout the interval, we are done. Otherwise, using F_1^*
as output throughout the interval [t, t′] and filters F_1 = [MIN_{F_1^*}(t, t′), ∞] and F_2 = [0, MAX_{F_2^*}(t, t′)], which must
be feasible due to the assumption that OPT originally assigned filters that lead to no communication, leads to no
communication within the considered interval.
The following lemma generalizes a lemma in [6] to ε-Top-k-Position Monitoring. Assuming the optimal offline
algorithm did not change the set of filters during a time period [t, t′], the minimum value observed by nodes in F_1^* can
only be slightly smaller than the maximum value observed by nodes in F_2^*.
Lemma 2.5. If OPT uses the same set of filters F_1, F_2 during [t, t′], then it holds MIN_{F_1^*}(t, t′) ≥ (1−ε)·MAX_{F_2^*}(t, t′).
Proof. Assume to the contrary that OPT uses the same set of filters throughout the interval [t, t′] and outputs F_1^*,
but MIN_{F_1^*}(t, t′) < (1 − ε)·MAX_{F_2^*}(t, t′) holds. Then there are two nodes, i ∈ F_1^* and j ∉ F_1^*, and two times
t_1, t_2 ∈ [t, t′], such that v_i^{t_1} = MIN_{F_1^*}(t, t′) and v_j^{t_2} = MAX_{F_2^*}(t, t′). Due to the definition of a set of filters and the
fact that OPT has not communicated during [t, t′], OPT must have set the filter for node i to [s_1, ∞], s_1 ≤ v_i^{t_1}, and
for node j to [−∞, s_2], s_2 ≥ v_j^{t_2}. This is a contradiction to the definition of a set of filters and Observation 2.2.
Finally, a result from [6] is restated that can be used to calculate the (exact) top-k set for one time step.
Lemma 2.6. [6] There is an algorithm that computes the node holding the largest value using O(log n) messages on
expectation.
3 Auxiliary Problem: Existence
In our competitive algorithms designed and analyzed in the following, we will frequently make use of a protocol for
a subproblem which we call Existence: Assume all nodes observe only binary values, i.e. ∀i ∈ {1, . . . , n} : v_i ∈
{0, 1}. The server is asked to decide the logical disjunction for one fixed time step t.
It is known that for n nodes each holding a bit vector of length m the communication complexity to decide the
bit-wise disjunction is Ω(nm) in the server model [8]. Observe that in our model 1 message is sufficient to decide the
problem assuming the nodes have a unique identifier between 1 and n and the protocol uses n rounds.
We prove that it is sufficient to use a constant number of messages on expectation and a logarithmic number of
rounds. Note that the algorithm in the following lemma is a Las Vegas algorithm, i.e. the algorithm is always correct
and the number of messages needed is based on a random process.
Lemma 3.1. There is an algorithm ExistenceProtocol that uses O(1) messages on expectation to solve the
problem Existence.
Proof. Initially all nodes are active. All nodes i deactivate themselves, if v_i = 0 holds, that is, these nodes do not
take part in the following process. In each round r = 0, 1, . . . , log n the active nodes send messages independently
at random with probability p_r := 2^r/n. Consequently, if the last round γ = log n is reached, all active nodes i with
v_i = 1 send a message with probability 1. As soon as at least one message was sent or the γ-th round ends, the
protocol is terminated and the server can decide Existence.
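As an illustration, the following Python sketch simulates one execution of the protocol for a single time step, assuming synchronous rounds; the function name existence_protocol and the seeded random generator are ours and not part of the model.

import math
import random

def existence_protocol(bits, seed=None):
    # Sketch only: bits[i] is the binary value observed by node i (assumes at least one node).
    # Returns the disjunction decided by the server and the number of messages sent.
    rng = random.Random(seed)
    n = len(bits)
    active = [i for i in range(n) if bits[i] == 1]   # nodes with v_i = 0 deactivate themselves
    messages = 0
    for r in range(int(math.ceil(math.log2(n))) + 1):
        p_r = min(1.0, 2 ** r / n)                   # sending probability in round r
        senders = [i for i in active if rng.random() < p_r]
        messages += len(senders)
        if senders:                                  # at least one message: the server decides 1
            return True, messages
    return False, messages                           # no active node exists: the disjunction is 0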
Next, we analyze the above protocol and show that the bound on the expected number of messages is fulfilled.
Let X be the random variable for the number of messages used by the protocol and b be the number of nodes i with
v_i = 1. Note that the expected number of messages sent in round r is b·p_r and the probability that no node has sent a
message before is ∏_{k=0}^{r−1} (1 − p_k)^b.
Observing that the function f(r) = b·p_r·(1 − p_{r−1})^b has only one extreme point and 0 ≤ f(r) < 2 for r ∈
[0, log n], it is easy to verify that the series can be upper bounded by simple integration:

E[X] ≤ b/n + Σ_{r=1}^{log n} (b·2^r/n) · ∏_{k=0}^{r−1} (1 − 2^k/n)^b
     ≤ 1 + Σ_{r=1}^{log n} (b·2^r/n) · (1 − 2^{r−1}/n)^b
     ≤ 1 + ∫_0^{log n} (b·2^r/n) · (1 − 2^{r−1}/n)^b dr + 2
     ≤ 3 + [ (b/((b+1)·n·ln 2)) · (2^r − 2n) · (1 − 2^{r−1}/n)^b ]_0^{log n}
     ≤ 3 + (1/(n·ln 2)) · ( (2^{log n} − 2n)·(1 − 2^{log n − 1}/n)^b + 2n·(1 − 2^{−1}/n)^b )
     ≤ 3 + (1/(n·ln 2)) · ( (n − 2n)·(1 − 1/2)^b + 2n·(1 − 1/(2n))^b )
     ≤ 3 + (1/(n·ln 2)) · ( −n·(1/2)^b + 2n·(1 − 1/(2n))^b )
     ≤ 3 + (1/ln 2) · ( 2·(1 − 1/(2n))^b − (1/2)^b )
     ≤ 3 + (1/ln 2) · ( 2 − (1/2)^b ) ≤ 3 + 2/ln 2 ≤ 6 .
This protocol can be used for a variety of subtasks, e.g. validating that all nodes are within their filters, identifying
that there is some filter-violation or whether there are nodes that have a higher value than a certain threshold.
Corollary 3.2. Given a time t. There is an algorithm which decides whether there are nodes which observed a
filter-violation using O(1) messages on expectation.
Proof. For the distributed nodes to report filter-violations we use an approach based on the ExistenceProtocol to
reduce the number of messages sent in case several nodes observe filter-violations at the same time. The nodes apply
the ExistenceProtocol as follows: Each node that is still within its filter applies the protocol using a 0 as its value
and each node that observes a filter-violation uses a 1. Note that by this approach the server definitely gets informed if
there is some filter-violation and otherwise no communication takes place.
The ExistenceProtocol can be used in combination with the relaxed definition of filters to strengthen the
result for Top-k-Position Monitoring from O(k log n + log ∆ log n) to O(k log n + log ∆). We first introduce a
generic framework and then show how to achieve this bound.
A generic approach Throughout the paper, several of our algorithms feature similar structural properties in the
sense that they can be defined within a common framework. Hence, we now define a generic approach to describe the
calculation and communication of filters, which we then refine later. The general idea is to only use two different filters
that are basically defined by one value separating nodes in F (t) from the remaining nodes. Whenever a filter-violation
is reported, this value is recalculated and used to set filters properly.
The approach proceeds in rounds. In the first round we define an initial interval L_0. In the r-th round, based on
interval L_r, we compute a value m that is broadcasted and is used to set the filters to [0, m] and [m, ∞]. As soon as
node i reports a filter-violation observing the value v_i, the coordinator redefines the interval L_{r+1} := L_r ∩ [0, v_i] if
the violation is from above and L_{r+1} := L_r ∩ [v_i, ∞] otherwise. The approach finishes as soon as some (predefined)
condition is satisfied.
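The following Python sketch captures one phase of this generic approach; the callbacks choose_m, broadcast_filters, await_violation and is_done are placeholders for the phase-specific ingredients defined later and are not part of the paper's notation.

def generic_framework(L0, choose_m, broadcast_filters, await_violation, is_done):
    # Sketch only.  L0 is the initial interval (lo, hi); choose_m picks the separating value m;
    # broadcast_filters announces the filters [0, m] and [m, inf); await_violation blocks until some
    # node reports a violation and returns (value, from_above); is_done is the termination condition.
    L = L0
    while not is_done(L):
        m = choose_m(L)
        broadcast_filters((0, m), (m, float("inf")))
        v, from_above = await_violation()
        lo, hi = L
        L = (lo, min(hi, v)) if from_above else (max(lo, v), hi)  # L_{r+1} := L_r ∩ [0, v] or L_r ∩ [v, ∞]
    return L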
Corollary 3.3. There is an algorithm that is O(k log n + log ∆)-competitive for (exact) Top-k-Position Monitoring.
Proof. Our algorithm proceeds in phases that are designed such that we can show that an optimal algorithm needs to
communicate at least once during a phase and additionally, we can upper bound the number of messages sent by the
online algorithm according to the bound on the competitiveness.
We apply the generic approach with parameters described as follows. The initial interval is defined as L_0 := [ℓ, u],
where ℓ = v_{π(k+1,t)}^t and u = v_{π(k,t)}^t. This can be done by determining the values of the nodes holding the k + 1 largest
values using O(k log n) messages on expectation. In the r-th round, based on interval L_r, we compute the midpoint of
L_r as the value m which is broadcasted and used to set the filters. As soon as a filter-violation is reported, the generic
framework is applied. In case L_r is empty the phase ends.
Note that the distance between u and ℓ gets halved every time a node violates its filter, leading to O(log(u_0 − ℓ_0)) =
O(log ∆) messages on expectation per phase. Also, it is not hard to see that during a phase OPT has communicated
at least once and hence, we obtain the claimed bound on the competitiveness.
4 Competing against an Exact Adversary
In this section, we propose an algorithm based on the strategy to choose the nodes holding the k largest values as an
output and use this set as long as it is feasible. It will turn out that this algorithm is suitable in two scenarios: First,
it performs well against an adversary who solves the Top-k-Position Monitoring problem (cf. Theorem 4.5); second,
we can use it in situations in which an adversary who is allowed to introduce some error cannot exploit this error
because the observed data leads to a unique output (cf. Sect. 5).
In particular, we develop an algorithm started at t that computes the output set F1 := F (t) using the protocol from
Lemma 2.6 and for all consecutive times witnesses whether F1 is correct or not. Recall that while computing the set
F (t) from scratch (cf. Lemma 2.6) is expensive in terms of communication, witnessing its correctness in consecutive
rounds is cheap since it suffices to observe filter-violations (cf. Definition 2.1 and Corollary 3.2).
The algorithm tries to find a value m which partitions F1 from F2 according to the generic framework, such that
for all nodes i ∈ F_1 it holds v_i ≥ m and for all nodes i ∈ F_2 it holds v_i ≤ m. We call such a value m a certificate.
Guessing OPT’s Filters In the following we consider a time period [t, t′′] during which the output F(t) need not
change. Consider a time t′ ∈ [t, t′′]. The online strategy to choose a certificate at this time depends on the size of
some interval L* from which an offline algorithm must have chosen the lower bound ℓ* of the upper filter at time t
such that the filters are valid throughout [t, t′]. The algorithm Top-k-Protocol keeps track of (an approximation
of) L* at time t′, denoted by L = [ℓ, u], for which L* ⊆ L holds. The online algorithm tries to improve the guess
where OPT must have set filters by gradually reducing the size of interval L (while maintaining the invariant L* ⊆ L)
at times it observes filter-violations.
Initially u and ℓ are defined as follows: u := v_{π(k,t)}^t = MIN_{F_1}(t, t) and ℓ := v_{π(k+1,t)}^t = MAX_{F_2}(t, t), and they are
redefined over time. Although defining the certificate as the midpoint of L = [ℓ, u] intuitively seems to be the best
way to choose m, the algorithm is based on four consecutive phases, each defining a different strategy.
In detail, the first phase is executed as long as the property
log log u > log log ℓ + 1    (P1)
holds. In this phase, m is defined as ℓ + 2^{2^r} after r filter-violations observed. If the property
log log u ≤ log log ℓ + 1 ∧ u > 4ℓ    (P2)
holds, the value m is chosen to be 2^{mid} where mid is the midpoint of [log ℓ, log u]. Observe that 2^{mid} ∈ L = [ℓ, u]
holds.
The third phase is executed if property
u ≤ 4ℓ ∧ u > (1/(1−ε))·ℓ    (P3)
holds and employs the intuitive approach of choosing m as the midpoint of L. The last phase contains the remaining
case of
u ≤ (1/(1−ε))·ℓ    (P4)
and is simply executed until the next filter-violation is observed using the filters F_1 = [ℓ, ∞] and F_2 = [0, u].
In the following we propose three algorithms A_1, A_2, and A_3, which are executed if the respective property holds,
and analyze the correctness and the amount of messages needed.
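As an overview, the following Python sketch summarizes how the certificate m could be chosen depending on which of the properties (P1)-(P4) above holds; it is our own illustration, assumes ℓ, u ≥ 2 so that the double logarithms are defined, and in phase one the argument lo plays the role of the initial lower endpoint ℓ_0.

import math

def choose_certificate(lo, up, eps, r):
    # Sketch only: lo and up correspond to ℓ and u, r counts the filter-violations observed in phase one.
    # Returns the certificate m, or None in phase (P4), where the algorithm just waits for a violation.
    if math.log2(math.log2(up)) > math.log2(math.log2(lo)) + 1:   # (P1)
        return lo + 2 ** (2 ** r)                                 # doubly exponential probing
    if up > 4 * lo:                                               # (P2)
        mid = (math.log2(lo) + math.log2(up)) / 2
        return 2 ** mid                                           # midpoint on the logarithmic scale
    if up > lo / (1 - eps):                                       # (P3)
        return (lo + up) / 2                                      # plain midpoint of L
    return None                                                   # (P4): keep filters [lo, inf) and [0, up]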
Lemma 4.1. Given time t, an output F (t), and an interval L = [ℓ, u] for which (P1) holds, there is an algorithm A1
that witnesses the correctness of F (t) until a time t′ at which it outputs L′ = [ℓ′ , u′ ] for which (P1) does not hold. The
algorithm uses O(log log ∆) messages on expectation.
Proof. The algorithm A_1 applies the generic framework and defines the value m the server broadcasts as m :=
ℓ_0 + 2^{2^r}, where ℓ_0 is the initial value of ℓ. If log log u′ − log log ℓ′ ≤ 1 holds, the algorithm terminates and outputs
L′ = [ℓ′, u′] with ℓ′ and u′ defined as the redefinitions of ℓ and u, respectively.
To analyze the amount of messages needed and express it in terms of ∆, observe that in the worst case the server
only observes filter-violations from nodes i ∈ F_2. In case there is a filter-violation from above, i.e. a node i ∈ F_1
reports a filter-violation, the condition log log u′ − log log ℓ′ ≤ 1 holds. At the latest in round r = log log(u − ℓ), which is
by definition upper bounded by log log ∆, the algorithm terminates.
If F(t) is not valid at time t′, there are nodes i_1 ∈ F_1, i_2 ∈ F_2 and time points t_1, t_2 (t_1 = t′ ∨ t_2 = t′) for which
v_{i_1}^{t_1} < v_{i_2}^{t_2} holds. Thus, A_1 observed a filter-violation by either i_1 or i_2 followed by a sequence alternating between
filter-violations and filter-updates. At some point (but still at time t′) log log u′ − log log ℓ′ ≤ 1 holds and the algorithm
outputs (ℓ′, u′), proving A_1's correctness for time t′.
Lemma 4.2. For a given F (t) and a given interval L = [ℓ, u] for which (P2) holds, there is an algorithm A2 that
witnesses the correctness of F (t) until a time t′ at which it outputs L′ = [ℓ′ , u′ ] for which (P2) does not hold. The
algorithm uses O(1) messages on expectation.
Proof. We apply the generic approach and choose the value m to be broadcasted by 2^{mid}, where mid is the midpoint
of [log ℓ, log u].
To analyze the amount of messages needed, bound L = [ℓ, u] in terms of values that are double exponential in
2. To this end, let a ∈ N be the largest number such that ℓ ≥ 2^{2^a} holds. Now observe that since (P2) holds, u ≤ 2^{2^{a+2}}
follows. Since the algorithm chooses the midpoint of the interval [log ℓ, log u] in order to get m and halves this interval
after every filter-violation, one can upper bound the number of rounds by analyzing how often the interval [log ℓ, log u]
gets halved. That is, [log ℓ, log u] ⊆ [log 2^{2^a}, log 2^{2^{a+3}}] = [2^a, 8 · 2^a] can be halved at most a constant number
of times, until it contains only one value, which implies that 4 · ℓ > u holds.
Lemma 4.3. For a given F (t) and a given interval L = [ℓ, u] for which (P3) holds, there is an algorithm A3 that
witnesses the correctness of F (t) until a time t′ at which it outputs L′ = [ℓ′ , u′ ] for which (P3) does not hold. The
algorithm uses O(log 1/ε) messages on expectation.
Proof. The algorithm applies the generic framework and uses the midpoint strategy starting with the interval L_0 :=
[ℓ, u]. Observe that it takes at most O(log(1/ε)) redefinitions of L to reach its final size, no matter whether the algorithm
observes only filter-violations from nodes i ∈ F(t) or i ∉ F(t). This together with the use of the ExistenceProtocol
for handling filter-violations yields the needed number of messages on expectation. The correctness follows
similarly as shown for Lemma 4.1.
Now we propose an algorithm started at a time t which computes the output F(t) and witnesses its correctness
until some (not predefined) time t′ at which the Top-k-Protocol terminates using a combination of the algorithms
stated above. Precisely the Top-k-Protocol is defined as follows:
Algorithm Top-k-Protocol
1. Compute the nodes holding the (k + 1) largest values and define ℓ := v_{k+1}^t, u := v_k^t and F(t).
2. If (P1) holds, call A_1 with the arguments F(t) and L = [ℓ, u]. At the time t′ at which A_1 outputs L′ = [ℓ′, u′]
set ℓ := ℓ′ and u := u′.
3. If (P2) holds, call A_2 with the arguments F(t) and L = [ℓ, u]. At the time t′ at which A_2 outputs L′ = [ℓ′, u′]
set ℓ := ℓ′ and u := u′.
4. If (P3) holds, call A_3 with the arguments F(t) and L = [ℓ, u]. At the time t′ at which A_3 outputs L′ = [ℓ′, u′]
set ℓ := ℓ′ and u := u′.
5. If u ≥ ℓ and u ≤ (1/(1−ε))·ℓ holds, set the filters to F_1 := [ℓ, ∞], F_2 := [0, u]. At the time t′ at which node i ∈ F_2
reports a filter-violation from below define ℓ := v_i^{t′}. In case node i ∈ F_1 reports a filter-violation from above,
define u := v_i^{t′}.
6. Terminate and output (ℓ, u).
Lemma 4.4. Consider a time t. The algorithm Top-k-Protocol computes the top-k set and witnesses its correctness
until a time t′ at which it outputs L = [ℓ, u], where ℓ ≤ MAX_{F_2}(t, t′), MIN_{F_1}(t, t′) ≤ u, and ℓ > u holds (i.e. L
is empty). The algorithm uses O(k log n + log log ∆ + log(1/ε)) messages on expectation.
Proof. We first argue on the correctness of Top-k-Protocol and afterwards shortly analyze the number of messages
used.
The algorithm computes in step 1. a correct output F_1 at time t by using the algorithm from Lemma 2.6 k times.
In consecutive time steps t′ > t the correctness of Top-k-Protocol follows from the correctness of algorithms
A_1, A_2, and A_3 in steps 2. - 4. For the correctness of step 5. observe that by setting the filters to F_1 = [ℓ, ∞] and
F_2 = [0, u] and the fact that u ≤ (1/(1−ε))·ℓ holds, the filters are valid. Thus, as long as all nodes observe values which are
inside their respective filters the output need not change.
At the time step t′ at which the protocol terminates and outputs L = [ℓ, u], it holds u < ℓ. Thus, there are nodes i_1 ∈ F_1
and i_2 ∈ F_2 and time steps t_1, t_2 ∈ [t, t′] with v_{i_1}^{t_1} ≤ u and v_{i_2}^{t_2} ≥ ℓ, and thus, v_{i_1}^{t_1} < v_{i_2}^{t_2}.
To argue on the number of messages, observe that the first step can be executed using O(k log n) messages.
At the times the conditions of steps 2. - 5. are checked, these steps can be performed using O(k log n) messages
by computing the nodes holding the k + 1 largest values. The algorithms A_1, A_2, and A_3 are called at
most once each, thus the conditions are also checked at most once. After executing step 5. the algorithm terminates,
which leads to the result on the number of messages as stated above.
Theorem 4.5. The algorithm Top-k-Protocol has a competitiveness of O(k log n + log log ∆ + log(1/ε)) allowing
an error of ε compared to an optimal offline algorithm that solves the exact Top-k-Position Monitoring problem.
Proof. The correctness of Top-k-Protocol and the number of messages follow from Lemma 4.4. Now we argue
that OPT had to communicate at least once in the interval [t, t′] during which Top-k-Protocol was applied. If OPT
communicated, the bound on the competitiveness directly follows. Now assume that OPT did not communicate in
the interval [t, t′]. We claim that the interval L maintained during Top-k-Protocol always satisfies the invariant
L* ⊆ L. If this claim is true, we directly obtain a contradiction to the fact that OPT did not communicate because
of the following reasons. On the one hand, because OPT has to monitor the exact Top-k-Positions, OPT chooses the
same set of nodes F* = F_1 which was chosen by the online algorithm. On the other hand, at the time t′ the algorithm
Top-k-Protocol terminates, u′ < ℓ′ holds. Thus, the interval L′ is empty and since L* ⊆ L′ holds, it follows that
L* is empty and hence, OPT must have communicated.
We now prove the claim. Recall that Top-k-Protocol is started with an interval L that fulfills L* ⊆ L by
definition. To show that L* ⊆ L holds during the entire interval [t, t′], it suffices to argue that each of the previous
algorithms makes sure that when started with an interval L such that L* ⊆ L, it outputs L′ with L* ⊆ L′. Our
following reasoning is generic and can be applied to the previous algorithms. Consider the cases in which filter-violations
are observed and hence the interval L is modified: If a filter-violation from below happened at a time
t_1 > t, there is a node i ∈ F_2 with a value v_i^{t_1} > ℓ′ and thus, ℓ* > ℓ′ holds. If a filter-violation from above happened
at a time t′, there is a node i ∈ F_1 with a value v_i^{t′} < u′ and thus, u* < u′ holds. This case-distinction leads to the
result, that L* has to be a subset of [ℓ′, u′].
5 Competing against an Approximate Adversary
In this section, we study the case in which the adversary is allowed to use an approximate filter-based offline algorithm,
i.e. one that solves ε-Top-k-Position Monitoring. Not surprisingly, it turns out that it is much more challenging for
online than for offline algorithms to cope with or exploit the allowed error in the output. This fact is formalized in the
lower bound in Theorem 5.1, which is larger than previous upper bounds for the exact problem. However, we also
propose two online algorithms that are competitive against offline algorithms that are allowed to have the same error ε
and a smaller error ε′ ≤ ε/2, respectively.
5.1 Lower Bound for Competitive Algorithms
We show a lower bound on the competitiveness proving any online algorithm has to communicate at least (σ − k)
times in contrast to an offline algorithm which only uses k + 1 messages. Recall that the adversary generates the data
streams and can see the filters communicated by the server. Note that as long as the online and the offline algorithm
are allowed to make use of an error ε ∈ (0, 1) the lower bound holds, even if the errors are different.
Theorem 5.1. Any filter-based online algorithm which solves the ε-Top-k-Position Monitoring problem and is allowed
to make use of an error of ε ∈ (0, 1) has a competitiveness of Ω(σ/k) compared to an optimal offline algorithm which
is allowed to use a (potentially different) error of ε′ ∈ (0, 1).
Proof. Consider an instance in which the observed values of σ ∈ [k + 1, n] nodes are equal to some value y0 (the
remaining n−σ nodes observe smaller values) at time t = 0 and the following adversary: In time step r = 0, 1, . . . , n−
k, the adversary decides to change the value of one node i with v_i^r = y0 to be v_i^{r+1} = y1 < (1 − ε) · y0 such that a
filter-violation occurs. Observe that such a value y1 exists if ε < 1 holds and a node i always exists since otherwise
the filters assigned by the online algorithm cannot be feasible. Hence, the number of messages sent by the online
algorithm until time step n − k is at least n − k. In contrast, the offline algorithm knows the n − k nodes whose
values change over time and hence, can set the filters such that no filter-violation happens. The offline algorithm sets
two different filters: One filter F1 = [y0 , ∞] for those k nodes which have a value of y0 at time step n − k using
k messages and one filter F2 = [0, y0 ] for the remaining n − k nodes using one broadcast message. By essentially
repeating these ideas, the input stream can be extended to an arbitrary length, obtaining the lower bound as stated.
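For illustration, the following Python sketch generates value sequences in the spirit of this construction; it is a simplified, non-adaptive variant (it simply drops the nodes observing y0 beyond the k nodes that keep it, one per step), whereas the actual adversary reacts to the filters set by the online algorithm, and the constants y0 and low are arbitrary.

def adversarial_stream(n, k, sigma, eps, y0=1000, low=1):
    # Sketch only: sigma nodes start at y0, the remaining n - sigma nodes at the smaller value low.
    # In every step one further node still observing y0 is dropped to y1 < (1 - eps) * y0,
    # which forces a filter-violation for a filter-based online algorithm.
    assert k + 1 <= sigma <= n and 0 < eps < 1
    y1 = int((1 - eps) * y0) - 1
    values = [y0] * sigma + [low] * (n - sigma)
    yield list(values)                         # observations at time t = 0
    for i in range(k, sigma):                  # k nodes keep y0, the others are dropped one per step
        values[i] = y1
        yield list(values)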
5.2 Upper Bounds for Competitive Algorithms
Now we propose an algorithm DenseProtocol and analyze the competitiveness against an optimal offline algorithm
in the setting that both algorithms are allowed to use an error of ε.
The algorithm DenseProtocol is started at a time t. For the sake of simplicity we assume that the k-th and the
(k + 1)-st node observe the same value z, that is z := v_{π(k,t)}^t = v_{π(k+1,t)}^t. However, if this does not hold we can
define the filters to be F_1 = [v_{π(k+1,t)}^t, ∞] and F_2 = [0, v_{π(k,t)}^t] until a filter-violation is observed at some time t′,
using O(k log n) messages on expectation. If the filter-violation occurred from below define z := v_{π(k,t)}^t and if a
filter-violation from above is observed define z := v_{π(k+1,t)}^t.
The high-level idea of DenseProtocol is similar to that of the Top-k-Protocol: compute a guess L on the lower
endpoint of the filter of the output F* of OPT (assuming OPT did not communicate during [t, t′]) for which the
invariant ℓ* ∈ L* ⊆ L_r holds. The goal of DenseProtocol is to halve the interval L while maintaining ℓ* ∈ L
until L = ∅ and thus show that no value exists which could be used by OPT.
To this end, the algorithm partitions the nodes into three sets. Intuitively speaking, the first set, which we call V_1,
contains those nodes which have to be part of the optimal output, V_3 those nodes that cannot be part of any optimal
output, and V_2 the remaining nodes. The sets change over time as follows. Initially V_1^t contains those nodes that
observe a value v_i^t > (1/(1−ε))·z. The algorithm may discover at a time t′ > t that some node i has to be moved to
V_1^{t′+1}, which also contains all nodes from previous rounds, i.e. V_1^{t′} ⊆ V_1^{t′+1}. On the other hand, V_3^t initially contains
the nodes which observed a value v_i^t < (1 − ε)·z. Here also the algorithm may discover at a time t′ > t that some
node i has to be moved to V_3^{t′+1}, which (similar to V_1) contains nodes from previous rounds. At the time t the set V_2^t
simply contains the remaining nodes {1, . . . , n} \ (V_1^t ∪ V_3^t) and its cardinality will only decrease over time.
In the following we make use of sets S_1 and S_2 to indicate that nodes in V_2 may be moved to V_1 or V_3 depending
on the values observed by the remaining nodes in V_2. Nodes in S_1 observed a value larger than z but still not large
enough to decide to move them to V_1, and similarly nodes in S_2 observed values smaller than z but not small enough to
move them to V_3.
Next we propose the algorithm DenseProtocol in which we make use of an algorithm SubProtocol for the
scenario in which some node i exists that is in S_1 and in S_2. At a time at which the SubProtocol terminates it
outputs that ℓ* has to be in the lower half of L or in the upper half of L; thus, the interval L gets halved (which initiates
the next round) or one node is moved from V_2 to V_1 or V_3. Intuitively speaking, SubProtocol is designed such that, if
OPT did not communicate during [t, t′], where t is the time the DenseProtocol is started and t′ is the current time
step, the movement of one node i ∈ V_2 to V_1 or V_3 implies that i necessarily has to be part of F* or not. For now we
assume the algorithm SubProtocol to work correctly as a black box using SUB(n, |L|) messages.
Note that in case L_r contains one value and gets halved, the interval L_{r+1} is defined to be empty. In case the
algorithm observes multiple nodes reporting a filter-violation, the server processes one violation at a time in an arbitrary
order. Since the server may define new filters after processing a violation, one of the multiple filter-violations may no
longer be relevant; in that case the server simply ignores it.
Algorithm: DenseProtocol
1. Define z := v_k^t = v_{k+1}^t and the following sets:
V_1 := {i ∈ {1, . . . , n} | v_i^t > (1/(1−ε))·z},
V_3 := {i ∈ {1, . . . , n} | v_i^t < (1 − ε)·z},
V_2 := {1, . . . , n} \ (V_1 ∪ V_3).
Define an interval L_0 := [(1 − ε)z, z] and define sets S_1, S_2 of nodes which are initially empty and use S to
denote S_1 ∪ S_2. Set r := 0 indicating the round.
2. The following rules are applied for (some) round r:
Let ℓ_r be the midpoint of L_r and u_r := (1/(1−ε))·ℓ_r.
For a node i the filter is defined as follows:
If i ∈ V_1, F_i := [ℓ_r, ∞];
If i ∈ V_2 ∩ S_1, F_i := [ℓ_r, (1/(1−ε))·z].
if i ∈ V_2 \ S, F_i := [ℓ_r, u_r];
If i ∈ V_2 ∩ S_2, F_i := [(1 − ε)z, u_r].
if i ∈ V_3, F_i := [0, u_r].
The output F(t) is defined as V_1 ∪ (S_1 \ S_2) and k − |V_1 ∪ (S_1 \ S_2)| many nodes from V_2 \ S_2.
If i ∈ V2 ∩ S2 , Fi := [(1 − ε)z, ur ].
if i ∈ V3 , Fi := [0, ur ].
The output F (t) is defined as V1 ∪ (S1 \ S2 ) and k − |V1 ∪ (S1 \ S2 )| many nodes from V2 \ S2 .
3. Wait until time t′ , at which some node i reports a filter-violation:
a. If i ∈ V1 , then set Lr+1 to be the lower half of Lr and define S2 := ∅.
b. If i ∈ (V2 \ S) violates its filter from below then
b.1. If the server observed strictly more than k nodes with larger values than ur then set Lr+1 to be the
upper half of Lr and define S1 := ∅.
b.2. else add i to S1 and update i’s filter.
c. If i ∈ S1 \ S2 violates its filter then
c.1. If i violates its filter from below then move i from S1 and V2 to V1 and update i’s filter.
c.2. else add i to S2 and call S UB P ROTOCOL.
d. If the server observed k nodes with values vi > ur and n − k nodes with values vi < ℓr then call
T OP -K-P ROTOCOL
e. If Lr+1 was set if is empty, end the protocol, otherwise increment r, update ur , ℓr , all filters using the
rules in 2., and goto step 3.
— And their symmetric cases —
a’. If i ∈ V3 then set Lr+1 to be the upper half of Lr and define S1 := ∅.
b’. If i ∈ (V2 \ S) violates its filter from above then
b’.1. If the server observed strictly more than n − k nodes with smaller values than ℓr then set Lr+1 to the
lower half of Lr and define S2 := ∅.
b’.2. else add i to S2 .
c’. If i ∈ S2 \ S1 violates its filter then
c’.1. If i violates its filter from above
then delete i from S2 , delete i from V2 , and add i to V3 .
c’.2. else add i to S1 and call S UB P ROTOCOL.
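The following Python sketch (our own illustration, not part of the protocol's formal description) restates the filter-assignment rules of step 2; a node that is contained in both S_1 and S_2 is left out here, since such a node is handled by the SubProtocol.

def assign_filters(V1, V2, V3, S1, S2, l_r, u_r, z, eps):
    # Sketch only: the arguments are sets of node ids and the round-r values; returns node -> (lower, upper).
    INF = float("inf")
    filters = {}
    for i in V1:
        filters[i] = (l_r, INF)
    for i in V3:
        filters[i] = (0, u_r)
    for i in V2:
        if i in S1 and i not in S2:
            filters[i] = (l_r, z / (1 - eps))
        elif i in S2 and i not in S1:
            filters[i] = ((1 - eps) * z, u_r)
        elif i not in S1 and i not in S2:
            filters[i] = (l_r, u_r)
        # a node in both S1 and S2 triggers the SubProtocol and receives its filter there
    return filters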
We analyze the correctness of the protocol in the following lemma and the number of messages used in Lemma 5.3.
We prove that OPT communicated at least once in Lemma 5.7.
Lemma 5.2. The protocol DenseProtocol computes a correct output F(t′) at any time t′.
Proof. By definition the output consists of nodes from V1 , S1 and (arbitrary) nodes from V2 \ S2 (cf. step 2.). Observe
that by definition of the filters of the nodes in these subsets, the minimum of all lower endpoints of the filters is ℓr
following the rules in step 2. Also observe that the maximum of all upper endpoints of the filters of the remaining nodes is ur. Since by definition ur = ℓr/(1 − ε) holds, the values observed by nodes i ∈ F1 are (lower) bounded by ℓr and nodes i ∈ F2 are (upper) bounded by ur; thus, the overlap of the filters is valid.
Now we argue that there are at least k nodes in the set V1 ∪ S1 ∪ (V2 \ S2). To this end, assume to the contrary that t′ is the first time step at which strictly less than k nodes are in the union of these sets. Now observe that the cases in the DenseProtocol in which nodes are deleted from one of V1, S1 or V2 \ S2 are 3.c.1., 3.c.2., and 3.b’.2.
Observe that in step 3.c.1. the algorithm moves i from S1 and V2 to V1 and thus i is again part of the output and does not change the cardinality. In step 3.c.2. the node i is added to S2 and SubProtocol is called afterwards. At this time t′ node i is (again) part of the output of SubProtocol and thus there are sufficiently many nodes to choose as an output, which is a contradiction to the assumption. In the remaining case 3.b’.2. DenseProtocol adds i to S2. However, since at time t′ strictly less than k nodes are in V1 ∪ S1 ∪ (V2 \ S2), there are strictly more than n − k nodes in S2 ∪ V3 and thus the algorithm would execute step 3.b’.1. instead. This leads to a contradiction to the assumption.
By these arguments the correctness follows.
Lemma 5.3. The protocol DenseProtocol uses at most O(k log n + σ log(εvk) + (σ + log(εvk)) · SUB(σ, |L|)) messages in expectation.
Proof. Initially the algorithm computes the top-k set and probes all nodes which are in the ε-neighborhood of the node
observing the k-th largest value, using O(k log n + σ) messages in expectation.
During each round r each node can only violate its filter at most a constant number of times without starting the next round r + 1 or leading to a call of SubProtocol, based on the following simple arguments: All nodes i in V1 or V3 directly start the next round r + 1 after a filter-violation. Now fix a node i ∈ V2 and observe that if it is not contained in S1 and S2, it is added to S1 if a filter-violation from below or to S2 if a filter-violation from above is observed. At the time this node i observes a filter-violation in the same direction (i.e. from below if it is in S1 and from above if it is in S2), it is added to V1 or V3. In these cases the next filter-violation will start the next round. The last case that remains is that it is added to both sets, S1 and S2. Observe that the SubProtocol is called and starts the next round or decides on one node (which may be different from the fixed node i) to be moved to V1 or V3.
Observe that at most σ + 1 nodes can perform filter-violations without starting the next round since each node from V1 or V3 directly starts the next round and the number of nodes in V2 is bounded by σ. Furthermore, observe that after each round the interval L is halved; thus, after at most log |L0| + 1 rounds the set Lr is empty.
Now focus on the SubProtocol, which also halves L after termination or decides on one node i ∈ V2 (at time t′) to be moved to V1 or V3 (at time t′ + 1). Thus, it can be called at most σ + log(εvk) times, leading to the result as stated above.
The SubProtocol. We propose an algorithm which is dedicated to the case in the execution of DenseProtocol in which one node i was added to S1 and to S2. Such a node i ∈ V2 \ S reported a filter-violation from below and from above and thus gets added to S1 and to S2 (in an arbitrary order). In detail, it has observed a value which is larger than ur and a value which is smaller than ℓr. As a short remark, if i ∈ F∗ would hold, then ℓ∗ ≤ ℓr follows, and on the other hand, if i ∉ F∗ holds, then ℓ∗ ≥ ℓr follows; but DenseProtocol cannot decide whether i ∈ F∗ in steps 3.c.2. or 3.c’.2.
Algorithm: SubProtocol

1. Define an interval L′0 := Lr ∩ [(1 − ε)z, ℓr], S1′ := S1, and S2′ := ∅. Set r′ := 0 indicating the round.

2. The following rules are applied for (some) round r′:
   Let ℓ′r′ be the midpoint of L′r′ and u′r′ := ℓ′r′/(1 − ε).
   For a node i the filter is defined as follows:
   If i ∈ V1, Fi′ := Fi;
   If i ∈ V2 \ S′, Fi′ := [ℓr, u′r′];
   If i ∈ V2 ∩ (S1′ \ S2′), Fi′ := [ℓr, z/(1 − ε)];
   If i ∈ V2 ∩ S1′ ∩ S2′, Fi′ := [ℓ′r′, z/(1 − ε)];
   If i ∈ V2 ∩ (S2′ \ S1′), Fi′ := [(1 − ε)z, u′r′];
   If i ∈ V3, Fi′ := [0, u′r′].
   The output F(t) is defined as V1 ∪ (S1′ \ S2′) ∪ (S1′ ∩ S2′) and sufficiently many nodes from V2 \ S2′.
3. Wait until time t′, at which node i reports a filter-violation:
   a. If i ∈ V1, then terminate SubProtocol and set Lr+1 to be the lower half of Lr.
   b. If i ∈ (V2 \ S′) violates its filter from below
      b.1. If the server observed strictly more than k nodes with larger values than ur then
         – set L′r′+1 to be the upper half of L′r′ and redefine S1′ := S1.
         – If L′r′+1 is defined to be the empty set then terminate SubProtocol and define the last node i which was in S1′ ∩ S2′ and observed a filter-violation from above to be moved to V3. If such a node does not exist, the node i ∈ S1 ∩ S2 moves to V3.
      b.2. Else add i to S1′.
   c. If i ∈ S1′ \ S2′ violates its filter
      c.1. If i violates its filter from below then move i from V2 and S1′ to V1.
      c.2. Else add i to S2′ and update i's filter.
   d. If i ∈ S1′ ∩ S2′ violates its filter
      d.1. If i violates from below then move i to V1 and terminate the SubProtocol.
      d.2. Else
         – define L′r′+1 to be the lower half of L′r′ and redefine S2′ := ∅.
         – If L′r′+1 is defined to be the empty set then terminate SubProtocol and move i to V3.
   e. If the server observed k nodes with values vi > ur and n − k nodes with values vi < ℓr then call Top-K-Protocol.
   f. If L′r′+1 was set, increment r′, update u′r′, ℓ′r′, and all filters using the rules in 2., and goto step 3.
   — And their symmetric cases —
   a’. If i ∈ V3, then
      – set L′r′+1 to be the upper half of L′r′ and redefine S1′ := S1.
      – If L′r′+1 is defined to be the empty set then terminate SubProtocol and define the last node i which was in S1′ ∩ S2′ and observed a filter-violation from above to be moved to V3. If such a node does not exist, the node i ∈ S1 ∩ S2 moves to V3.
   b’. If i ∈ (V2 \ S′) violates its filter from above
      b’.1. If the server observed strictly more than n − k nodes with a value less than ℓr, then terminate SubProtocol and set Lr+1 to be the lower half of Lr.
      b’.2. Else add i to S2′.
   c’. If i ∈ S2′ \ S1′ violates its filter
      c’.1. If i violates its filter from above then move i from V2 and S2′ to V3.
      c’.2. Else add i to S1′ and update i's filter.
Lemma 5.4. The protocol SubProtocol computes a correct output F(t′) at any time t′ at which a node i ∈ S1 ∩ S2 exists.
Proof. By definition the output consists of nodes from V1 , S1′ \ S2′ , S1′ ∩ S2′ and (arbitrary) nodes from V2 \ S2′ (cf.
step 2.). Observe that by definition of the filters of the nodes in these subsets, the minimum of all lower endpoints of
the filters is ℓ′r′ (in case the node is in S1 and in S2 ) following the rules in step 2. Also observe that the maximum of
all upper endpoints of the filters of the remaining nodes (in subsets V2 \ S′, S2′ \ S1′ or V3) is u′r′. Since by definition u′r′ = ℓ′r′/(1 − ε) holds, the values observed by nodes i ∈ F1 are (lower) bounded by ℓ′r′ and nodes i ∈ F2 are (upper) bounded by u′r′; thus, the overlap of the filters is valid.
Now we argue that there are at least k nodes in the sets V1 , S1 \ S2 , S1 ∩ S2 , and V2 \ S2 . To this end, simply
assume to the contrary that at a time t′ there are strictly less than k nodes in the union of the sets. It follows that at
this time t′ , the algorithm has observed that there are strictly more than n − k nodes with a value smaller than ℓ′r′ .
Thus, the algorithm would continue (compare case b’.1.) with a lower value of ℓr or, in case the interval Lr is empty,
terminate (which is a contradiction).
By these arguments the correctness follows.
Lemma 5.5. The protocol SubProtocol uses at most O(σ log |L|) messages in expectation.
Proof. During each round r′ each node can only violate its filter at most constant times without starting the next round
r′ + 1 based on the following simple arguments: All nodes i in V1 or V3 directly start the next round r′ + 1 after
a filter-violation. Now fix a node i ∈ V2 and observe that if it is not contained in S1′ and S2′ it is added to S1′ if a
filter-violation from below or to S2′ if a filter-violation from above is observed. At the time this node i observes a
filter-violation in the same direction (i.e. from below if it is in S1′ and from above if it is in S2′ ) it is added to V1 or V3 .
In these cases the next filter-violation will start the next round. The last case that remains is that it is added to both
sets, S1′ and S2′. Observe that SubProtocol terminates if i ∈ S1′ ∩ S2′ violates its filter from below (and moves it to V1). Otherwise, i violates its filter from above and SubProtocol starts the next round r′ + 1.
Observe that at most σ + 1 nodes can perform filter-violations without starting the next round since each node from
V1 or V3 directly starts the next round (r + 1 from the DenseProtocol or r′ + 1 of this protocol) and the number of
nodes in V2 is bounded by σ.
Furthermore, observe that after each round the interval L′, the guess of OPT's lower endpoint of the upper filter, is halved. The range of L′ is upper bounded by the range of L; thus, after at most log |L| + 1 rounds the interval L′ is empty.
Lemma 5.6. Given a time point t at which SubProtocol is started. At the time t′ at which SubProtocol terminates, there is one node i that is moved from V2 to V1 or V3, or the interval Lr (from DenseProtocol) is halved correctly.
Proof. First, focus on the steps in which L is halved and observe that steps 3.a. and 3.b’.1. are the same cases as in the DenseProtocol. Now focus on the cases in which L′ is halved or there is a decision on a node i to move to V1 or V3 (cf. cases 3.b.1., 3.d.1., 3.d.2., 3.a’., and 3.c’.1.).
In step 3.b.1. the server observed at the time t′ a filter-violation from i ∈ V2 \ S′ and there are (strictly) more than k nodes observed with a larger value than u′r′. Observe that in this case, for all subsets S with k elements there exists one node i ∉ S which observed a value vi ≥ u′r′; thus, no matter which set is chosen by OPT, for the upper bound u∗ for nodes i ∉ F∗ it holds: u∗ ≥ u′r′, and since u′r′ = ℓ′r′/(1 − ε) holds, it follows ℓ∗ ≥ ℓ′r′. Furthermore, if L′r′+1 was defined as the empty set, and a node i ∈ S1′ ∩ S2′ exists, observe that i got a value vi ≤ ℓ′r′ and since in this case u∗ ≥ u′r′ holds, i ∉ F∗ follows. If such a node i does not exist during the execution of SubProtocol, the node i ∈ S1 ∩ S2 which initiated the SubProtocol can be decided to move to V3, since during the execution of SubProtocol the interval L′ is only halved to the upper half; thus i ∈ S1 ∩ S2 observed a value vi < ℓr = ℓ′r′ and since u∗ ≥ u′r′ holds, i ∉ F∗ follows.
In step 3.d.1. the node i observed a value vi which is larger than z/(1 − ε) and thus has to be part of F∗.
In step 3.d.2. the node i observed a value vi < ℓ′r′. If during the execution of SubProtocol the set L′ was defined as the upper half at least once, then there was a node j ∈ V3 or strictly more than k nodes which observed a larger value than u′r′. It follows that this i cannot be part of F∗. In case during the execution of SubProtocol the set L′ is always defined to the lower half, then ℓ′r′ is the lower end of L and since node i observed a value strictly smaller than ℓ′r′, it cannot be part of F∗.
The arguments for case 3.a’. are similar to 3.b.1.
For the remaining case 3.c’.1. simply observe that i observed a smaller value than (1 − ε)z; thus i cannot be part of F∗.
Lemma 5.7. Given a time point t at which DenseProtocol is started, let t′ be the time point at which DenseProtocol terminates. During the time interval [t, t′] OPT communicated at least once.
Proof. We prove that OPT communicated by arguing that ℓ∗, the lower endpoint of the upper filter, i.e. the filter for the output F∗, is in the guess Lr at each round r (ℓ∗ ∈ L∗ ⊆ Lr). Hence we show that, even when we halve the interval Lr, the invariant ℓ∗ ∈ L∗ ⊆ Lr is maintained throughout the execution of DenseProtocol and possible calls of SubProtocol.
In the following we assume to the contrary that OPT did not communicate throughout the interval [t, t′]. We first argue for the execution of DenseProtocol; for calls of SubProtocol the invariant holds by Lemma 5.6.
First focus on the DenseProtocol, which halves the interval Lr in steps 3.a., 3.b.1., 3.a’., and 3.b’.1.:
In step 3.a. in which a node i ∈ V1 violates its filter from above and observes a value vi < ℓr , it holds: i ∈ F ∗
thus, ℓ∗ < ℓr follows.
In step 3.b.1. there are (strictly) more than k nodes with a larger value than ur. It follows that for all subsets S (with k elements) there is one node i ∉ S observing a value larger than ur and thus, ℓ∗ ≥ (1 − ε)ur = ℓr holds.
The case 3.a’. (which is symmetric to 3.a.) is executed if a node i ∈ V3 observed a filter-violation (vi > ur), which implies that the upper endpoint u∗ of filter F2 is larger than vi and thus, ℓ∗ ≥ (1 − ε)ur = ℓr.
In step 3.b’.1. (which is symmetric to 3.b.1.) there are (strictly) more than n − k nodes with a smaller value than ℓr. It follows that for all subsets S (with k elements) there is one node i ∈ S observing a value smaller than ℓr and thus, ℓ∗ ≤ ℓr holds.
Theorem 5.8. There is an online algorithm for ε-Top-k-Position Monitoring which is O(σ² log(εvk) + σ log²(εvk) + log log ∆ + log(1/ε))-competitive against an optimal offline algorithm which may use an error of ε.
Proof. The algorithm works as follows. At time t at which the algorithm is started, the algorithm probes the nodes holding the k + 1 largest values. If vπ(k+1,t)^t < (1 − ε) vπ(k,t)^t holds, the algorithm Top-K-Protocol is called. Otherwise the algorithm DenseProtocol is executed. After termination of the respective call, the procedure starts over again.
Observe that if the condition holds, there is only one unique output and thus the Top-K-Protocol monitors the Top-k-Positions, satisfying the bound on the competitiveness as stated in Theorem 4.5. If the condition does not hold, there is at least one value in the ε-neighborhood of vπ(k,t)^t and thus the DenseProtocol monitors the approximated Top-k-Positions as analyzed in this section.
The number of messages used is simply obtained by adding the number of messages used by the respective algorithms as stated above.
To obtain the upper bounds stated at the beginning, we upper bound σ by n and vk by ∆: O(n² log(ε∆) + n log²(ε∆) + log log ∆ + log(1/ε)). Note that for constant ε we obtain a slightly simpler bound of O(n² log ∆ + n log² ∆) on the competitiveness.
Corollary 5.9. There is an online algorithm for ε-Top-k-Position Monitoring which is O(σ + k log n + log log ∆ + log(1/ε))-competitive against an optimal offline algorithm which may use an error of ε′ ≤ ε/2.
Proof. The algorithm works as follows. At the initial time step t the algorithm probes the nodes holding the k + 1 largest values. If vπ(k+1,t)^t < (1 − ε) vπ(k,t)^t holds, the algorithm Top-K-Protocol is called.
Otherwise the online algorithm simulates the first round of the DenseProtocol, that is, nodes are partitioned into V1, V2, and V3 and the filters are defined as proposed (cf. step 2. of DenseProtocol). Here all nodes with values larger than (1 − ε/2)z/(1 − ε) are directly added to V1 instead of being added to S1, and nodes observing values smaller than (1 − ε/2)z are added to V3. Furthermore, if a filter-violation from some node i ∈ V2 is observed, it is directly moved (deleted from V2 and added) to V1 in case it violates from below, and added to V3 if it violates from above.
Whenever a node from V1 (or from V3) violates its filter, the algorithm terminates. Additionally, the algorithm is terminated if (strictly) more than k nodes are in V1 or if (strictly) less than k nodes are in V1 ∪ V2. If exactly k nodes are in V1 and n − k nodes are in V3, the Top-K-Protocol is executed.
For the following argumentation on the competitiveness we focus on the case that the Top-K-Protocol was not called, since the analysis of Top-K-Protocol holds here. Observe that OPT (with an error of ε′) had to communicate based on the following observation: Let t′ be the time at which the algorithm terminates. Assume to the contrary that OPT did not communicate during [t, t′]. In case node i ∈ V1 observes a filter-violation from above, vi < (1 − ε/2)z and ε′ ≤ ε/2, OPT had to set ℓ∗ ≤ vi and u∗ ≥ z, which leads to a contradiction to the definition of filters. In case node i ∈ V3 observes a filter-violation from below, vi > (1 − ε/2)z/(1 − ε) and ε′ ≤ ε/2, OPT had to set u∗ ≥ vi and ℓ∗ ≤ z, which leads to a contradiction to the definition of filters. The fact that OPT had to communicate in the remaining cases follows by the same arguments. Since all cases lead to a contradiction, the bound on the competitiveness as stated above follows.
| 8 |
Graphulo: Linear Algebra Graph Kernels
for NoSQL Databases
Vijay Gadepally* , Jake Bolewski† , Dan Hook* , Dylan Hutchison‡ , Ben Miller* , Jeremy Kepner*
arXiv:1508.07372v2 [cs.DS] 6 Oct 2015
* MIT Lincoln Laboratory
† MIT Computer Science and Artificial Intelligence Laboratory
‡ Stevens Institute of Technology
Abstract—Big data and the Internet of Things era
continue to challenge computational systems. Several
technology solutions such as NoSQL databases have
been developed to deal with this challenge. In order to
generate meaningful results from large datasets, analysts
often use a graph representation which provides an
intuitive way to work with the data. Graph vertices
can represent users and events, and edges can represent
the relationship between vertices. Graph algorithms
are used to extract meaningful information from these
very large graphs. At MIT, the Graphulo initiative
is an effort to perform graph algorithms directly in
NoSQL databases such as Apache Accumulo or SciDB,
which have an inherently sparse data storage scheme.
Sparse matrix operations have a history of efficient
implementations and the Graph Basic Linear Algebra
Subprogram (GraphBLAS) community has developed a
set of key kernels that can be used to develop efficient
linear algebra operations. However, in order to use the
GraphBLAS kernels, it is important that common graph
algorithms be recast using the linear algebra building
blocks. In this article, we look at common classes of
graph algorithms and recast them into linear algebra
operations using the GraphBLAS building blocks.
I. INTRODUCTION
The volume, velocity and variety [1] of data being
collected by today’s systems far outpace the ability to
provide meaningful results or analytics. A common
way to represent such large unstructured datasets is
through a graph representation as they provide an
intuitive representation of large data sets. In such
a representation, graph vertices can represent users
or events and edges can represent the relationship
between vertices. Many recent efforts have looked
at the mapping between graphs and linear algebra.
Vijay Gadepally is the corresponding author and can be reached
at vijayg [at] mit.edu
This material is based upon work supported by the National
Science Foundation under Grant No. DMS-1312831. Any opinions,
findings, and conclusions or recommendations expressed in this
material are those of the author(s) and do not necessarily reflect
the views of the National Science Foundation.
In such a mapping, graphs are often represented as
sparse arrays such as associative arrays or sparse
matrices using a graph schema. One such effort is
the Graph Basic Linear Algebra Subprogram (GraphBLAS) group which looks to provide a set of kernels
that can be used to cast graph algorithms as sparse
linear algebraic operations [2]. The ability to represent
graph algorithms as linear algebraic operations can be
greatly beneficial for algorithms scaled for large data
volume such as those in [3], [4]. However, for such
an initiative to be successful, it is important that the
proposed linear algebra kernels cover a wide variety of
graph algorithms that are often used by analysts. This
article looks at common classes of graph algorithms
and provides an initial set of graph algorithms recast
as linear algebraic operations.
The purpose of our present research effort is to
enable graph algorithms directly on NoSQL (Not Only
SQL) databases. Databases such as Apache Accumulo
or SciDB have become a popular alternative to traditional relational databases due to their high availability, partition tolerance and performance. NoSQL
databases often make use of a key value store or
store information in triples which are similar to the
way sparse matrices are stored [5]. We see a large
similarity between our work on performing graph
algorithms directly on NoSQL databases and research
on the GraphBLAS specification. The GraphBLAS
community has proposed an initial set of building
blocks:
• SpGEMM: Sparse Generalized Matrix Multiply
• SpM{Sp}V: Sparse Matrix (Sparse) Vector Multiply
• SpEWiseX: Sparse Element-wise Multiplication
• SpRef: Sparse Reference to a subset
• SpAsgn: Sparse Assignment to a subset
• Scale: SpEWiseX with a scalar
• Apply: Apply a function to each element
Further, these kernels have been described to work
on alternate semiring structures such as the tropical
semiring, which replaces traditional addition with the min operator and traditional multiplication with the + operator. This flexibility allows a wide variety of
graph analytics to be represented using the aforementioned building blocks. Table I summarizes classes of
graph algorithms that are widely used by the graph
analytics community.
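To illustrate how replacing the usual (+, ×) operations by an alternate semiring changes the meaning of a matrix product, the short Python/NumPy sketch below implements a dense (min, +) tropical product; it is a toy example for intuition, not GraphBLAS code.

import numpy as np

def minplus_matmul(A, B):
    """C[i, j] = min_k (A[i, k] + B[k, j]); entries equal to inf mean 'no edge'."""
    C = np.full((A.shape[0], B.shape[1]), np.inf)
    for k in range(A.shape[1]):
        C = np.minimum(C, A[:, [k]] + B[[k], :])
    return C

INF = np.inf
W = np.array([[0.0, 3.0, INF],
              [3.0, 0.0, 1.0],
              [INF, 1.0, 0.0]])
# one tropical product of the weighted adjacency matrix with itself gives
# shortest-path distances using at most two hops
two_hop = minplus_matmul(W, W)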
With the popularity of NoSQL databases and the
inherent parallels between the representation of data
in such databases and sparse arrays, our research
effort looks at determining how kernels from the
GraphBLAS specification can be evaluated on NoSQL
databases. However, in order to ensure that these
kernels will be able to perform common NoSQL
database tasks, such as exploration and community
detection, it is important that the proposed kernels
are able to express a wide variety of common graph
analytics.
A. The Graphulo Initiative
Graphulo [6] is an ongoing initiative at the Massachusetts Institute of Technology that looks at using
the GraphBLAS kernels on the Apache Accumulo
database. Accumulo is used for a variety of applications and has some of the highest published
performance [7]. A goal of the Graphulo initiative is
to use Accumulo server components such as iterators
to perform graph analytics. In order to provide end
users with a specification to which they can write their
algorithms, Graphulo is being written to conform to
the GraphBLAS specifications.
B. Paper Outline
In this article, we present an initial set of common
graph algorithms recast in the language of sparse
linear algebra and expressed using the proposed
GraphBLAS kernels. In Section II we introduce the
base datatype of NoSQL databases - associative arrays
- and discuss common schemas used to represent large
graphs in associative arrays. In Section III, we recast
popular graph algorithms from the Exploration &
Traversal, Subgraph Detection, Centrality and Community Detection classes of graph algorithms using
GraphBLAS kernels. In Section IV we discuss the results, limitations and future work and provide readers
with an understanding of how these algorithms can
be implemented on NoSQL databases such as Apache
Accumulo. We conclude the article in Section V.
II. ASSOCIATIVE ARRAYS AND GRAPH SCHEMAS
The Graphulo project looks at how graph algorithms can be performed on NoSQL databases. Associative arrays are used as the data type for storing
and manipulating a large variety of complex datasets.
In order to represent a dataset using associative arrays,
we look at a few common schemas that can be used.
A. Associative Arrays
Associative arrays are used to describe the relationship between multidimensional entities using numeric/string keys and numeric/string values. Associative arrays provide a generalization of sparse matrices.
Formally, an associative array A is a map from d sets
of keys K1 × K2 × ... × Kd to a value set V with a
semi-ring structure
A : K1 × K2 × ... × Kd → V,
where (V, ⊕, ⊗, 0, 1) is a semi-ring with
addition operator ⊕, multiplication operator ⊗,
additive-identity/multiplicative-annihilator 0, and
multiplicative-identity 1. Furthermore, associative
arrays have a finite number of non-zero values which
means their support supp(A) = A−1 (V \{0}) is
finite.
As a data structure, an associative array returns a value given some number of keys and constitutes a
function between a set of tuples and a value space.
In practice, every associative array can be created
from an empty associative array by simply adding and
subtracting values. With this definition, it is assumed
that only a finite number of tuples will have values,
and all other tuples will have a default value of the
additive-identity/multiplicative-annihilator 0. Further,
the associative array mapping should support operations that resemble operations on ordinary vectors and
matrices such as matrix multiplication. In practice,
associative arrays support a variety of linear algebraic
operations such as summation, union, intersection, and
multiplication. For example, the summation of two associative arrays that do not have any common row or column key performs a union of their underlying nonzero keys.
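A minimal Python sketch of a two-dimensional associative array is given below for illustration only (it is not the D4M or Graphulo implementation); it shows how addition naturally performs a union of the underlying nonzero keys.

class AssocArray:
    """(row key, column key) -> value; missing keys act as the additive identity 0."""
    def __init__(self, entries=None):
        self.data = dict(entries or {})

    def __getitem__(self, rc):
        return self.data.get(rc, 0)

    def __add__(self, other):
        out = dict(self.data)
        for rc, v in other.data.items():
            out[rc] = out.get(rc, 0) + v     # union of keys, sum of overlapping values
        return AssocArray(out)

A = AssocArray({("alice", "likes|graphs"): 1})
B = AssocArray({("bob", "likes|matrices"): 1})
C = A + B   # keys of C are the union of the keys of A and B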
Graphulo database tables are exactly described using the mathematics of associative arrays [5]. In the
D4M schema, a table in the Accumulo database is an
associative array. In this context, the primary differences between associative arrays and sparse matrices
are: associative array entries always carry their global
row and column labels while sparse matrices do not.
Another difference is that sparse matrices can have empty rows or columns while associative arrays do not. For the purposes of this algorithmic work, associative arrays are encoded as sparse matrices.

TABLE I: Classes of Graph Algorithms

Exploration & Traversal: Algorithms to traverse or search vertices. Examples: Depth First Search, Breadth First Search.
Subgraph Detection & Vertex Nomination: Finding subgraphs or components within a graph. Examples: K-Truss subgraph detection, Clique detection.
Centrality: Finding important vertices within a graph. Examples: Betweenness Centrality, Eigen Centrality.
Similarity: Finding parts of a graph which are similar in terms of vertices or edges. Examples: Graph Isomorphism, Jaccard Index, Neighbor Matching.
Community Detection: Looking for communities (areas of high connectedness or similarity) within a graph. Examples: Topic Modeling, Non-negative Matrix Factorization (NMF), Principal Component Analysis, Singular Value Decomposition.
Prediction: Predicting new or missing edges. Examples: Link Prediction, Emerging community detection.
Shortest Path: Finding the shortest distance between vertices or sets of vertices. Examples: Floyd Warshall, Bellman Ford, A* Algorithm, Johnson's Algorithm.
B. Graph Schemas
The promise of big data is the ability to correlate
diverse and heterogeneous data sources to reduce
the time to insight. Correlating this data requires
putting data into a common frame of reference so
that similar entities can be compared. The associative
arrays described in the previous subsection can be
used with a variety of NoSQL databases such as
Accumulo and require a schema to convert the dense
arbitrary data into a sparse associative representation.
Given the variety of data, there are a few commonly
used graph schemas [5] which we discuss below.
1) Adjacency Matrix: In this schema, data is
organized as a graph adjacency matrix which can
represent directed or undirected weighted graphs.
In this schema, rows and columns of the adjacency
matrix represents vertices, and values represent
weighted edges between vertices. Adjacency matrices
provide a great deal of functionality and are one
of the more common ways to express graphs
through matrices. For graph G = (V, E) where
V = {v1 , v2 , ..., vn } and E = {e1 , e2 , ..., em }, the
adjacency matrix A is a n × n matrix where:
A(i, j) = { # edges from vi to vj,   if i ≠ j
          { number of self loops,    if i = j
2) Incidence Matrix: The incidence matrix representation of a graph can represent multi-hyperweighted as well as directed and multi-partite graphs
(multiple edges between vertices, multiple vertices per
edge and multiple partitions). The incidence matrix
representation is capable of representing complex
graphs when compared to the adjacency matrix representation. In the incidence matrix representation,
matrix rows correspond to edges, and matrix columns
represent vertices, with nonzero values in a row indicating vertices associated with the edge. The value at a
particular row-column pair represents the edge weight
and sign is often used to represent direction. There are
many representations for the incidence matrix, and a
common format is described below.
For graph G = (V, E) where V = {v1, v2, ..., vn} and E = {e1, e2, ..., em}, the incidence matrix E is an m × n matrix where:

E(i, j) = { +|ei|   if ei goes into vj
          { −|ei|   if ei leaves vj
          { 0       otherwise
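The following Python sketch builds both representations for a small, assumed edge list using scipy.sparse; the particular edge list, orientation convention and 0-based vertex ids are illustrative choices rather than part of the schema definitions above.

import numpy as np
from scipy.sparse import csr_matrix

# hypothetical directed edge list (tail, head) on 5 vertices
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 4), (2, 3)]
n, m = 5, len(edges)

# adjacency matrix: A[i, j] = number of edges from v_i to v_j
rows = [t for t, h in edges]
cols = [h for t, h in edges]
A = csr_matrix((np.ones(m), (rows, cols)), shape=(n, n))

# incidence matrix: one row per edge, -1 where the edge leaves, +1 where it enters
r = [k for k in range(m) for _ in range(2)]
c = [v for t, h in edges for v in (t, h)]
vals = [-1, +1] * m
E = csr_matrix((vals, (r, c)), shape=(m, n))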
3) D4M Schema: The D4M 2.0 Schema [8] provides a four associative array solution (T edge, T edgeT, T deg, and T raw) that can be used to
represent complex data. The edge tables, T edge and
T edgeT , contain the full semantic information of the
data set in the rows and columns of the associative
arrays. From the schema described in [8], a dense
database can be converted to a sparse representation
by exploding each data entry into an associative array
where each unique column-value pair is a column. The
T deg array maintains a count of the degrees of each of
the columns of T edge, and T raw is used to store the
raw data. A more thorough description of the schema
is provided in [8]. Once in sparse matrix form, the
full machinery of linear algebraic graph processing
and detection theory can be applied. Linear algebraic
operations applied on associative arrays organized
using the D4M schema can have useful results. For
example, addition of two arrays represents a union,
and the multiplication of two arrays represents a
correlation.
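A rough Python sketch of the "explode" step is shown below; the separator character and key naming are assumptions made for illustration and do not reproduce the exact D4M 2.0 conventions.

def explode(row_key, record, sep="|"):
    """Turn one dense record into sparse (row, column) -> 1 entries,
    one column per unique field-value pair, in the spirit of the D4M schema."""
    return {(row_key, f"{field}{sep}{value}"): 1 for field, value in record.items()}

entries = explode("tweet-0001", {"user": "alice", "lang": "tr", "word": "merhaba"})
# {("tweet-0001", "user|alice"): 1, ("tweet-0001", "lang|tr"): 1, ("tweet-0001", "word|merhaba"): 1}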
III. ALGORITHMS
There are many different graph algorithms that can
be analyzed. In this section, we present an overview
of our work in representing the classes of graph
algorithms presented in Table I using kernels from
the GraphBLAS specification. For the work presented
in this section, we encode associative arrays as sparse
matrices.
A. Centrality
Of the many centrality metrics, there are a few that
are particularly well-suited to the GraphBLAS framework. Degree centrality, for example, assumes that
a vertex’s importance is proportional to the number
of connections it shares. Given an adjacency matrix,
A, this can easily be computed via a row or column
reduction, depending on whether in- or out-degree is
of interest.
Other centrality metrics are explicitly linear algebraic in their formulation. For example, eigenvector
centrality assumes that each vertex’s centrality is
proportional to the sum of its neighbors’ centrality
scores. This is equivalent to scoring each vertex based
on its corresponding entry in the principal eigenvector, which can be computed via the power method.
Starting with a random positive vector x0 with entries between zero and 1, we iteratively compute

xk+1 = A xk

until |xk+1^T xk| / (‖xk+1‖2 ‖xk‖2) is close to 1.
Related metrics are Katz centrality and PageRank.
Katz centrality considers the number of k-hop paths
to a vertex, for all k, penalizing those with higher
distances. This is also computed via an iterative
procedure in which the kth-order degree vector is
computed, and added to an accumulator as follows:
dk+1 = Adk
xk+1 = xk + αk dk+1 ,
where d0 is a vector of 1s and we use the same
stopping criterion as eigenvector centrality. PageRank
simulates a random walk on a graph, with the possibility of jumping to an arbitrary vertex. Each vertex
is then ranked according to the probability of landing
on it at an arbitrary point in an infinite random walk.
If the probability of jumping to an arbitrary vertex
is 0, then this is simply the principal eigenvector of
A^T D^{−1}, where D is a diagonal matrix of vertex out-degrees. If the probability of a jump is α, then we compute the principal eigenvector of

(α/N) 1_{N×N} + (1 − α) A^T D^{−1}.
As with eigenvector centrality, this can be done using
the power method, where multiplication by a matrix
of 1s can be emulated by summing the vector entries
and creating a new vector where each entry is equal
to the resulting value. All of these centrality measures
rely on doing iterative matrix-vector multiplications,
which fits nicely within the scope of GraphBLAS.
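The two power iterations described above can be sketched in Python with scipy.sparse matrix-vector products as follows; this is an illustrative reading of the equations, not a GraphBLAS or Graphulo implementation, and the stopping tests are simplifications.

import numpy as np
from scipy.sparse import csr_matrix

def eigenvector_centrality(A, tol=1e-9, max_iter=1000):
    """Power method for the principal eigenvector of a sparse adjacency matrix A."""
    x = np.random.rand(A.shape[0])
    for _ in range(max_iter):
        y = A @ x
        cos = abs(y @ x) / (np.linalg.norm(y) * np.linalg.norm(x))
        x = y / np.linalg.norm(y)
        if 1.0 - cos < tol:        # |x_{k+1}^T x_k| / (||x_{k+1}|| ||x_k||) close to 1
            break
    return x

def pagerank(A, alpha=0.15, tol=1e-9, max_iter=1000):
    """Power iteration for (alpha/N) 1_{NxN} + (1 - alpha) A^T D^{-1}; the rank-one
    term is applied by summing the vector entries instead of forming 1_{NxN}."""
    N = A.shape[0]
    deg = np.asarray(A.sum(axis=1)).ravel()
    deg[deg == 0] = 1.0            # avoid dividing by zero on sink vertices
    x = np.full(N, 1.0 / N)
    for _ in range(max_iter):
        y = (1 - alpha) * (A.T @ (x / deg)) + alpha * x.sum() / N
        if np.linalg.norm(y - x, 1) < tol:
            return y
        x = y
    return x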
There has also been work on casting betweenness
centrality—where a vertex’s importance is based on
the number of shortest paths that contain it—in linear-algebraic operations [9]. Other metrics, such as closeness centrality, will be the subject of future work.
B. Subgraph detection and vertex nomination
Detection of interesting and anomalous subgraphs
has been a problem of interest for the computer
science community for many years. Examples of this
problem space include vertex nomination (ranking
vertices based on how likely they are to be associated
with a subset of “cue” vertices) [10], planted clique
detection [11], and planted cluster detection [12].
A problem related to planted clique and planted
cluster detection is computing the truss decomposition. A k-truss is a graph in which every edge is part
of at least k − 2 triangles. Any graph is a 2-truss, and
any k-truss in a graph is part of a (k − 1)-truss in
the same graph. Computing the truss decomposition
of a graph involves finding the maximal k-truss for
all k ≥ 2. A recent technique for computing the
truss decomposition [13] can be easily converted into
linear-algebraic operations. Define the support of an
edge to be the number of triangles of which the edge
is a member. The algorithm can be summarized as
follows:
1) Compute the support for every edge.
2) If there is no edge with support less than k − 2,
stop.
3) Otherwise, remove an edge with support less
than k − 2, update the supports of its associated
vertices, and go to 2.
In [13], a more efficient algorithm is proposed that
considers the edges in order of increasing support. In
the linear-algebraic form, all edges are considered at
once, and the appropriate edges are removed simultaneously.
To see the linear-algebraic algorithm, first consider
the unoriented incidence matrix E. Each row of E
has a 1 in the column of each associated vertex. To
get the support for this edge, we need the overlap
of the neighborhoods of these vertices. If the rows
of the adjacency matrix A associated with the two
vertices are summed, this corresponds to the entries
that are equal to 2. Summing these rows is equivalent
to multiplying A on the left by the edge’s row in E.
Therefore, to get the support for each edge, we can
compute EA, apply to each entry a function that maps
2 to 1 and all other values to 0, and sum each row of
the resulting matrix. Note also that
A = E^T E − diag(E^T E),
which allows us to recompute EA after edge removal
without performing the full matrix multiplication. We
take advantage of this fact in Algorithm 1. Within
the pseudocode, xc refers to the complement of x in
the set of row indices. This algorithm can return the
full truss decomposition by computing the truss with
k = 3 on the full graph, then passing the resulting
incidence matrix to the algorithm with an incremented
k. This procedure will continue until the resulting
incidence matrix is empty. This algorithm can be
realized using the GraphBLAS kernels SpGEMM,
SpMV, and Apply.
Data: The unoriented incidence matrix E, integer k
Result: Incidence matrix of k-truss subgraph Ek
initialization;
d = sum(E)
A = E^T E − diag(d)
R = E A
s = (R == 2) 1
x = find(s < k − 2)
while x is not empty do
    Ex = E(x, :)
    E = E(xc, :)
    dx = sum(Ex)
    R = R(xc, :)
    R = R − E [Ex^T Ex − diag(dx)]
    s = (R == 2) 1
    x = find(s < k − 2)
end
return E
Algorithm 1: Algorithm to compute the k-truss using linear algebra. 1 refers to an array of 1s.
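A direct transcription of Algorithm 1 into Python with scipy.sparse is sketched below; it follows the pseudocode (removing every low-support edge in each pass) and is meant as a readable reference rather than an optimized or database-side implementation.

import numpy as np
from scipy.sparse import csr_matrix, diags

def ktruss(E, k):
    """k-truss of a graph given its unoriented 0/1 incidence matrix E (one row per edge);
    returns the incidence matrix of the k-truss subgraph, as in Algorithm 1."""
    E = csr_matrix(E, dtype=float)
    d = np.asarray(E.sum(axis=0)).ravel()
    A = E.T @ E - diags(d)
    R = E @ A
    s = np.asarray((R == 2).sum(axis=1)).ravel()        # support of each edge
    x = np.flatnonzero(s < k - 2)
    while x.size > 0:
        keep = np.setdiff1d(np.arange(E.shape[0]), x)   # x^c, the surviving edges
        Ex = E[x, :]
        E = E[keep, :]
        dx = np.asarray(Ex.sum(axis=0)).ravel()
        R = R[keep, :]
        R = R - E @ (Ex.T @ Ex - diags(dx))
        s = np.asarray((R == 2).sum(axis=1)).ravel()
        x = np.flatnonzero(s < k - 2)
    return E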
As an example of computing the k-truss using the
algorithm described, consider the task of finding the
3-truss of the graph in Fig. 1.
Fig. 1: Example 5-vertex graph

The incidence matrix for the graph shown in Figure 1 is

E =
[ 1 1 0 0 0 ]
[ 0 1 1 0 0 ]
[ 1 0 0 1 0 ]
[ 0 0 1 1 0 ]
[ 1 0 1 0 0 ]
[ 0 1 0 0 1 ]

From E, we can compute A using the relation A = E^T E − diag(d) to be:

A =
[ 3 1 1 1 0 ]   [ 3 0 0 0 0 ]   [ 0 1 1 1 0 ]
[ 1 3 1 0 1 ]   [ 0 3 0 0 0 ]   [ 1 0 1 0 1 ]
[ 1 1 3 1 0 ] − [ 0 0 3 0 0 ] = [ 1 1 0 1 0 ]
[ 1 0 1 2 0 ]   [ 0 0 0 2 0 ]   [ 1 0 1 0 0 ]
[ 0 1 0 0 1 ]   [ 0 0 0 0 1 ]   [ 0 1 0 0 0 ]

To get the support, we first compute

R = E A =
[ 1 1 2 1 1 ]
[ 2 1 1 1 1 ]
[ 1 1 2 1 0 ]
[ 2 1 1 1 0 ]
[ 1 2 1 2 0 ]
[ 1 1 1 0 1 ]

The support is then given by

s = (R == 2) 1 = [ 1 1 1 1 2 0 ]^T.

Since k = 3, x will be the set of edges where the support is less than 1, i.e., x = {6} and xc = {1, 2, 3, 4, 5}. Thus, R and E will be set to their first 5 rows, and the update will be computed as follows:

R =
[ 1 1 2 1 1 ]   [ 0 0 0 0 1 ]   [ 1 1 2 1 0 ]
[ 2 1 1 1 1 ]   [ 0 0 0 0 1 ]   [ 2 1 1 1 0 ]
[ 1 1 2 1 0 ] − [ 0 0 0 0 0 ] = [ 1 1 2 1 0 ]
[ 2 1 1 1 0 ]   [ 0 0 0 0 0 ]   [ 2 1 1 1 0 ]
[ 1 2 1 2 0 ]   [ 0 0 0 0 0 ]   [ 1 2 1 2 0 ]

where the subtracted term is E [Ex^T Ex − diag(dx)] for the removed edge. The pattern of 2s in R did not change with the removal of edge 6, so the support will not change. Therefore, the graph represented by the new incidence matrix is a 3-truss.
C. Similarity
Computing vertex similarity is important in applications such as link prediction [14]. One common
method for determining the similarity of two vertices
is to compute the Jaccard coefficient. This quantity
measures the overlap of the neighborhoods of two vertices in an unweighted, undirected graph. For vertices
vi and vj where N (v) denotes the neighbors of vertex
v, the Jaccard coefficient is defined as
Jij = |N(vi) ∩ N(vj)| / |N(vi) ∪ N(vj)|.    (1)
Given the connection vectors (a column or row in the
adjacency matrix A) for vertices vi and vj (denoted
as ai and aj ) the numerator and denominator of
Equation 1 can be expressed as ai^T aj, where we replace multiplication with the AND operator in the numerator and the OR operator in the denominator. This gives us

Jij = (ai^T ∧ aj) ./ (ai^T ∨ aj)
Jij = A²_AND ./ A²_OR.
This, however, would involve computing a dense
matrix, and we are primarily interested in cases where
this is impossible. Two phenomena can be exploited
that will help provide an efficient implementation: the
symmetry of J and sparseness of A2AND . Since J is
symmetric, we can compute only the upper triangular
part and then add the transpose. First we compute the
upper triangular part of the numerator in the entry
wise division. The numerator is A²_AND, which in an
unweighted graph is the same as computing a standard
matrix multiplication. We can represent A as L + U ,
where L is strictly lower triangular and U is strictly
upper triangular. Since A is symmetric, L = U T .
Thus, we have
A² = (L + U)²
   = L² + LU + UL + U²
   = (U²)^T + U² + U^T U + U U^T
It can be verified that U² is strictly upper triangular and, therefore, (U²)^T is strictly lower triangular. After we compute the upper triangular part of A², we can
divide each nonzero value by the number of total
neighbors of the associated vertices. Exploiting these
properties, we can compute the Jaccard coefficient as
described in Algorithm 2. The triu operation extracts
the upper triangular part of the graph, as in MATLAB.
Algorithm 2 can be computed using the GraphBLAS
kernels SpGEMM, SpMV, and SpEWiseX. Computing the upper triangular part of a graph can be done
through a user-defined function that implements the
Hadamard product. For example, if ⊗ = f (i, j),
triu(A) = A ⊗ 1 where f (i, j) = {A(i, j) : i ≤
j, 0 otherwise}. An example of the computation on
the graph in Fig. 1 is provided in Fig. 2.
Data: Adjacency matrix A
Result: Matrix of Jaccard indices J
initialization;
d = sum(A)
U = triu(A)
X = U U^T
Y = U^T U
J = U² + triu(X) + triu(Y)
J = J − diag(J)
for each nonzero entry Jij in J do
    Jij = Jij / (di + dj − Jij)
end
J = J + J^T
Algorithm 2: Algorithm to compute Jaccard index
using linear algebra.
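The following Python sketch computes the Jaccard matrix along the lines of Algorithm 2, using strictly upper-triangular products so that no diagonal ever has to be removed; it is an illustrative implementation, not the Graphulo one.

import numpy as np
from scipy.sparse import csr_matrix, triu

def jaccard(A):
    """Jaccard coefficients of an unweighted, undirected graph (no self loops)
    given its sparse adjacency matrix A, in the spirit of Algorithm 2."""
    A = csr_matrix(A, dtype=float)
    d = np.asarray(A.sum(axis=1)).ravel()
    U = triu(A, k=1).tocsr()
    # strict upper triangle of A^2 = common-neighbor counts for each vertex pair
    J = (U @ U) + triu(U @ U.T, k=1) + triu(U.T @ U, k=1)
    J = J.tocoo()
    vals = [v / (d[i] + d[j] - v) for i, j, v in zip(J.row, J.col, J.data)]
    J = csr_matrix((vals, (J.row, J.col)), shape=A.shape)
    return J + J.T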
Fig. 2: Computing Jaccard coefficients of the graph in Fig. 1. In line 2, J = U² + triu(U U^T) + triu(U^T U). In line 3, J = J ./ (di + dj − J). Computing J = J + J^T removes the order dependence. Computation is on non-zero entries in each matrix.

D. Community Detection

Community detection is a class of graph algorithms designed to find community structures within a graph. Graph communities often contain dense internal connections and may possibly overlap with
other communities. Real graphs such as social media
have been shown to exhibit such community structure based on geography, language, age group, etc. [15].
The communities may then be used to suggest or
recommend new information, connections, or even
products as recommender systems do for popular
online marketplaces such as Amazon and Google [16].
One common method used as a basis for such systems
is topic modeling. Topic modeling is a very popular
class of algorithms that provides an intuitive look
into the topics that make up data. As an example,
consider a set of documents made up of various terms.
Application of topic modeling can automatically determine a set of topics, the terms that make up a
topic and the documents that strongly align with these
topics. Techniques such as topic modeling have gained
wide usage for automatic summarization, document
modeling and can provide users with simple and quick
insight into a dataset. Topic modeling is a general
field, and a popular technique for topic modeling is
non-negative matrix factorization [17], [18].
Non-negative matrix factorization (NMF) is a class
of tools used to factorize a given matrix into two
matrices. Multiplying these two matrices produces
an approximation of the original matrix. Consider a
matrix Am×n to be factored into matrices Wm×k and
Hk×n where m corresponds to the number of rows of
A, n corresponds to the number of columns in A, and
k corresponds to the number of topics. Further, NMF
enforces the constraint that none of these matrices
contain any negative elements.
By definition,
A = W ∗ H.
(2)
In the above factorization, the columns of W can be
considered a basis for the matrix A with the rows of
H being the associated weights needed to reconstruct
A. The property that W and H are nonnegative is
useful because it can have physical significance (as
opposed to negative weights or basis elements). One
way to find the matrices W, H such that A ≈ W ∗ H
is through an iterative technique such as the algorithm
presented in Algorithm 3.
In order to solve the equations in Algorithm 3,
it is necessary to find a least squares solution to a
system of linear equations for W and H. One way
of doing this is by finding the matrix inverse of
W T ∗ W and H ∗ H T (both are square matrices) and
multiplying with the right hand side of the equations.
One method to find the matrix inverse is typically
done by techniques such as the Singular Value Decomposition (SVD). However, in order to make use of
the GraphBLAS kernels, we present an technique used
by iterative eigenvalue solvers. In such systems, for
iteration k: Xk+1 = Xk ∗ (2I − AXk ). The algorithm
used to find the matrix inverse for a square matrix A
is given in Algorithm 4.
Data: Matrix A to invert
Result: X = A^{−1}
initialization;
‖Arow‖ = max_i (Σ_j Aij)
‖Acol‖ = max_j (Σ_i Aij)
X1 = A^T / (‖Arow‖ ∗ ‖Acol‖)
while ‖Xt+1 − Xt‖F > ε do
    Xt+1 = Xt ∗ (2 ∗ In×n − A ∗ Xt)
end
Algorithm 4: Matrix inverse through iteration. At each iteration, we check whether the value of Xt+1 is close to the previous iteration's estimate of X.
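A direct Python transcription of Algorithm 4 is given below for illustration; the absolute values in the norm computations are a small robustness addition (the pseudocode sums the entries directly, which coincides for the nonnegative matrices arising here), and the iteration cap is an assumption.

import numpy as np

def approximate_inverse(A, eps=1e-10, max_iter=100):
    """Iterative inverse of a square matrix A, following Algorithm 4
    (a Newton-Schulz style iteration X_{t+1} = X_t (2I - A X_t))."""
    n = A.shape[0]
    row_norm = np.max(np.sum(np.abs(A), axis=1))
    col_norm = np.max(np.sum(np.abs(A), axis=0))
    X = A.T / (row_norm * col_norm)
    I = np.eye(n)
    for _ in range(max_iter):
        X_next = X @ (2 * I - A @ X)
        if np.linalg.norm(X_next - X, "fro") < eps:
            return X_next
        X = X_next
    return X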
Using this formulation, computing the inverse of a
matrix can be done purely using GraphBLAS kernels.
Combining Algorithms 3 and 4, we can compute the NMF of a matrix A using only GraphBLAS kernels, where (W^T ∗ W)^{−1} and (H ∗ H^T)^{−1} are determined using the relation developed in Algorithm 4.
In fact, computing the NMF of a matrix using Algorithm 5 will require the GraphBLAS SpRef/SpAsgn,
SpGEMM, Scale, SpEWiseX, and Reduce kernels.
The outlined algorithm has been tested against a social media dataset and provides intuitive results.
Data: Incidence Matrix A (size m × n), number of topics k
Result: W and H
initialization;
W = random m × k matrix
while ‖A − W ∗ H‖F > ε do
    Solve W^T ∗ W ∗ H = W^T ∗ A for H
    Set elements in H < 0 to 0
    Solve H ∗ H^T ∗ W^T = H ∗ A^T for W
    Set elements in W < 0 to 0
end
Algorithm 3: NMF through iteration. At each step of the iteration, we check whether the Frobenius norm of the difference between A and W ∗ H is less than the acceptable error ε.
Data: Incidence Matrix A (size m × n), number of topics k
Result: W and H
W = random m × k matrix
while ‖A − W ∗ H‖F > ε do
    Solve H = (W^T ∗ W)^{−1} ∗ W^T ∗ A for H
    Set elements in H < 0 to 0
    Solve W^T = (H ∗ H^T)^{−1} ∗ H ∗ A^T for W
    Set elements in W < 0 to 0
end
Algorithm 5: NMF and inverse through iteration.
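For illustration, the sketch below follows the alternating structure of Algorithm 5 in Python but uses numpy's least-squares solver in place of the explicit inverse of Algorithm 4; in a GraphBLAS setting the iterative inverse above would be substituted. The parameter names and defaults are assumptions.

import numpy as np

def nmf(A, k, tol=1e-4, max_iter=200, seed=0):
    """Alternating nonnegative factorization A ~ W @ H in the spirit of Algorithm 5:
    solve the two least-squares problems in turn and clip negative entries to zero."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = np.zeros((k, n))
    for _ in range(max_iter):
        H = np.linalg.lstsq(W, A, rcond=None)[0]        # solve W H = A for H
        H[H < 0] = 0
        W = np.linalg.lstsq(H.T, A.T, rcond=None)[0].T  # solve H^T W^T = A^T for W
        W[W < 0] = 0
        if np.linalg.norm(A - W @ H, "fro") < tol:
            break
    return W, H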
For example, Algorithm 5 was applied to a set of words collected from the popular social media website Twitter. The algorithm was used to determine common
themes from approximately 20,000 tweets. By setting
the number of topics to 5, we were able to determine
words/tweets that fell into 5 different topics. The
results from this experiment are shown in Fig. 3.
From a graph perspective, this implies that the tweets corresponding to each topic form a community. For
topic 1, as an example, this community represents
users who tweet in the Turkish language.
IV. DISCUSSION
The algorithms presented in this paper demonstrate
several algorithmic capabilities using the initial set of
GraphBLAS operations, but there are a few inefficiencies that could be improved upon with some additional
functions. In Algorithm 1, for example, when EA is
computed, it would be more efficient to only consider
the additions that yield a 2 in the resulting matrix. This
could be achieved by replacing the + operator with
a logical AND, but this would violate the semiring
axioms. Enabling the ability to use linear-algebraic
Fig. 3: Application of algorithm 5 to 20k tweets and modeling with 5 topics. Topic 1 represents tweets with
Turkish words; topic 2 represents tweets related to dating; topic 3 relates to an acoustic guitar competition in
Atlanta, GA; topic 4 relates to tweets with Spanish words; and topic 5 represents tweets with English words.
machinery with data operations that do not conform
to the rules for semirings may provide substantial
speedups.
Algorithm 2 leverages the symmetry of the graph
to save some of the unnecessary operations, but some
values under the main diagonal must still be computed in the process. Since it is fairly common to
work with undirected graphs, providing a version of
matrix multiplication that exploits the symmetry, only
stores the upper-triangular part, and only computes
the upper-triangular part of pairwise statistics, would
be a welcome contribution to this effort.
Algorithm 5 computes the NMF of a matrix A
which can represent the adjacency matrix of a graph.
However, calculation of the matrix inverse using this
method can result in dense matrix operations. Since
the aim of this step is to solve a least squares problem,
it would be more efficient to implement this using
a sparse QR factorization or iterative method that
preserves the sparsity of the problem as much as possible. We would welcome community involvement in
building these methods using the GraphBLAS kernels.
As a next step in the Graphulo effort, we will extend
the sparse matrix implementations of the algorithms
discussed in this article to associative arrays. The
ability to perform the graph algorithms described
directly on associative arrays will allow us to implement efficient GraphBLAS operations directly on
Accumulo data structures. In order to make efficient
implementations, we will use various Accumulo features, such as the Accumulo iterator framework, to
quickly scan Accumulo tables over servers in parallel
and perform batch operations such as scaling.
V. CONCLUSIONS
There are a large variety of graph algorithms that
can be used to solve a diverse set of problems.
The Graphulo initiative at the Massachusetts Institute
of Technology is interested in applying the sparse
linear algebra kernels of the GraphBLAS specification
to associative arrays which exactly describe NoSQL
database tables such as those found in the open source
Apache Accumulo. Current ongoing work includes
defining efficient implementations of the algorithms
discussed in this article, extending the classes of supported algorithms and providing a library that can perform basic operations directly in NoSQL databases.
ACKNOWLEDGMENT
The authors wish to thank the Graphulo team at
MIT CSAIL and Lincoln Laboratory. We also thank
the reviewers, GraphBLAS contributors and National
Science Foundation for their generous ongoing support of this program.
REFERENCES
[1] D. Laney, “3D data management: Controlling data volume,
velocity and variety,” META Group Research Note, vol. 6,
2001.
[2] http://istc bigdata.org/GraphBlas/, “The graph BLAS forum.”
[3] D. Bader, A. Buluç, J. G. UCB, J. Kepner, and T. Mattson,
“The graph BLAS effort and its implications for exascale,”
2014.
[4] V. Gadepally and J. Kepner, “Big data dimensional analysis,”
IEEE High Performance Extreme Computing, 2014.
[5] J. Kepner and V. Gadepally, “Adjacency matrices, incidence
matrices, database schemas, and associative arrays,” in International Parallel & Distributed Processing Symposium
Workshops (IPDPSW), 2014 IEEE International.
IEEE,
2014.
[6] “Graphulo: Graph processing on Accumulo.” [Online].
Available: http://graphulo.mit.edu
[7] J. Kepner, W. Arcand, D. Bestor, B. Bergeron, C. Byun,
V. Gadepally, M. Hubbell, P. Michaleas, J. Mullen, A. Prout
et al., “Achieving 100,000,000 database inserts per second
using Accumulo and D4M,” IEEE High Performance Extreme
Computing, 2014.
[8] J. Kepner, C. Anderson, W. Arcand, D. Bestor, B. Bergeron,
C. Byun, M. Hubbell, P. Michaleas, J. Mullen, D. O’Gwynn
et al., “D4M 2.0 schema: A general purpose high performance
schema for the accumulo database,” in High Performance
Extreme Computing Conference (HPEC), 2013 IEEE. IEEE,
2013, pp. 1–6.
[9] J. Kepner and J. Gilbert, Graph algorithms in the language
of linear algebra. SIAM, 2011, vol. 22.
[10] G. A. Coppersmith and C. E. Priebe, “Vertex nomination via
content and context,” 2012, preprint: arXiv.org:1201.4118v1.
[11] R. R. Nadakuditi, “On hard limits of eigen-analysis based
planted clique detection,” in Proc. IEEE Statistical Signal
Process. Workshop, 2012, pp. 129–132.
[12] E. Arias-Castro and N. Verzelen, “Community detection in
random networks,” 2013, preprint: arXiv.org:1302.7099.
[13] J. Wang and J. Cheng, “Truss decomposition in massive
networks,” Proc. VLDB Endowment, vol. 5, no. 9, pp. 812–
823, May 2012.
[14] D. Liben-Nowell and J. Kleinberg, “The link-prediction problem for social networks,” J. Amer. Soc. Inform. Sci. and
Technology, vol. 58, no. 7, pp. 1019–1031, May 2007.
[15] S. Fortunato, “Community detection in graphs,” Physics Re-
ports, vol. 486, no. 3, pp. 75–174, 2010.
[16] J. B. Schafer, J. Konstan, and J. Riedl, “Recommender
systems in e-commerce,” in Proceedings of the 1st ACM
conference on Electronic commerce. ACM, 1999, pp. 158–
166.
[17] F. Wang, T. Li, X. Wang, S. Zhu, and C. Ding, “Community
discovery using nonnegative matrix factorization,” Data Mining and Knowledge Discovery, vol. 22, no. 3, pp. 493–521,
2011.
[18] J. Kim and H. Park, “Toward faster nonnegative matrix factorization: A new algorithm and comparisons,” in Data Mining,
2008. ICDM’08. Eighth IEEE International Conference on.
IEEE, 2008, pp. 353–362.
| 8 |
arXiv:1502.06077v2 [math.AC] 17 Aug 2015
ON THE RELATIONSHIP BETWEEN DEPTH
AND COHOMOLOGICAL DIMENSION
HAILONG DAO AND SHUNSUKE TAKAGI
Dedicated to Professor Yuji Yoshino on the occasion of his sixtieth birthday.
Abstract. Let (S, m) be an n-dimensional regular local ring essentially of finite type over
a field and let a be an ideal of S. We prove that if depth S/a ≥ 3, then the cohomological
dimension cd(S, a) of a is less than or equal to n − 3. This settles a conjecture of Varbaro for
such an S. We also show, under the assumption that S has an algebraically closed residue
field of characteristic zero, that if depth S/a ≥ 4, then cd(S, a) ≤ n − 4 if and only if the local Picard group of the completion of S/a is torsion. We give a number of applications, including a vanishing result on Lyubeznik's numbers, and sharp bounds on cohomological dimension of ideals whose quotients satisfy good depth conditions such as Serre's conditions (Si).
1. Introduction
Local cohomology was introduced by Grothendieck in the early 1960s and has since become a fundamental tool in commutative algebra and algebraic geometry. It is important to
know when they vanish. Let S be a Noetherian local ring and a be an ideal of S. Then the
cohomological dimension cd(S, a) of a in S is defined by
cd(S, a) = sup{ i ∈ Z≥0 | H_a^i(M) ≠ 0 for some S-module M }.
This invariant has been studied by many authors including Hartshorne [11], Ogus [24],
Peskine-Szpiro [25], Huneke-Lyubeznik [16], Lyubeznik [20], Varbaro [26], etc. It follows
from a classical vanishing theorem of Grothendieck that cd(S, a) is less than or equal to the
dimension of S. A natural question to ask is under what conditions one can obtain a better
upper bound on cd(S, a). In this paper, we assume that S is an n-dimensional regular local
ring containing a field and investigate the relationship between cd(S, a) and the depth of S/a.
The first result of this type is that if depth S/a ≥ 1, then cd(S, a) ≤ n − 1, which is
an immediate consequence of the Hartshorne-Lichtenbaum vanishing theorem [11]. It also
follows from results of Ogus [24] and Peskine-Szpiro [25] (see also [16]) that if depth S/a ≥ 2,
then cd(S, a) ≤ n − 2. In fact, Peskine-Szpiro proved a more general result in positive
characteristic: if S is a regular local ring of characteristic p > 0 and if depth S/a ≥ i, then
cd(S, a) ≤ n − i. One might expect that an analogous result holds in equal characteristic
zero, but there exists a class of examples where S is a localization of a polynomial ring over
a field of characteristic zero, depth S/a ≥ 4 and cd(S, a) = n − 3 (see Example 2.11). On
the other hand, Varbaro [26] proved that if b is a homogeneous ideal of a polynomial ring
T = k[x1 , . . . , xn ] over a field k and if depth T /b ≥ 3, then cd(T, b) ≤ n − 3. He conjectured
that a similar statement holds in a local setting.
2010 Mathematics Subject Classification. 13D45, 14B15, 32C36.
Key words and phrases. Local cohomology, Cohomological dimension, Local Picard groups.
Conjecture 1.1. Let S be an n-dimensional regular local ring containing a field and let a be
an ideal of S. If depth S/a ≥ 3, then cd(S, a) ≤ n − 3.
Motivated by the above conjecture, we consider a necessary and sufficient condition for
cd(S, a) to be less than n − 2 when S is a regular local ring essentially of finite type over a
field of characteristic zero. The following is our main result.
Theorem 1.2 (cf. Theorem 2.4). Let (S, m, k) be an n-dimensional regular local ring essentially of finite type over a field of characteristic zero with residue field k algebraically closed. Let a be an ideal of S and set R = S/a. Suppose that depth R ≥ 2 and $H^2_m(R)$ is a k-vector space. Then cd(S, a) ≤ n − 3 if and only if the torsion subgroup of $\mathrm{Pic}(\mathrm{Spec}\,\widehat{R} \setminus \{m\widehat{R}\})$ is finitely generated, where $\widehat{R}$ is the mR-adic completion of R.
We note that an analogous statement does not hold in positive characteristic (see Remark
2.5). Also, the assumption on $H^2_m(R)$ cannot be removed (see Example 2.6). As a corollary
of Theorem 1.2, we give an affirmative answer to Conjecture 1.1 when S is essentially of finite
type over a field (Corollary 2.8).
We also study the case where depth S/a ≥ 4. In this case, as we have mentioned above,
cd(S, a) is not necessarily less than n − 3. We give a necessary and sufficient condition for
cd(S, a) to be less than n − 3 in a form similar to that of Theorem 1.2.
Theorem 1.3 (cf. Theorem 2.9). Let the notation be the same as that used in Theorem 1.2, and suppose that depth R ≥ 4. Then cd(S, a) ≤ n − 4 if and only if $\mathrm{Pic}(\mathrm{Spec}\,\widehat{R} \setminus \{m\widehat{R}\})$ is torsion.
We have several applications of our results. We obtain vanishing results of the Lyubeznik
numbers λi,j (R), numerical invariants of a Noetherian local ring R containing a field introduced by Lyubeznik [20] (see Definition 3.1 for their definition). In particular, we prove that
if R is a d-dimensional local ring essentially of finite type over a field satisfying Serre’s condition (S3 ), then λd−1,d (R) = 0 (Proposition 3.3). We also have an extension of a recent result
by Katzman-Lyubeznik-Zhang [17] which in turn extended the classical result of Hartshorne
on connectedness of the punctured spectrum (Proposition 3.6). Finally, we give sharp bounds
on cohomological dimension of ideals whose quotients satisfy good depth conditions such as
Serre’s conditions (Si ).
Theorem 1.4 (=Theorem 3.8). Let S be an n-dimensional regular local ring containing a
field and let a ⊂ S be an ideal of height c.
(1) If S/a satisfies Serre's condition (S_2) and dim S/a ≥ 1, then
$$\mathrm{cd}(S, a) \le n - 1 - \left\lfloor \frac{n-2}{c} \right\rfloor.$$
(2) Suppose that S is essentially of finite type over a field. If S/a satisfies Serre's condition (S_3) and dim S/a ≥ 2, then
$$\mathrm{cd}(S, a) \le n - 2 - \left\lfloor \frac{n-3}{c} \right\rfloor.$$
Such results partly answer a question raised in [15, Question 2.5].
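To make the shape of these bounds concrete, here is an illustrative numerical reading (our own arithmetic, not an example from the paper):
$$n = 10,\; c = 2:\quad \text{(1) gives } \mathrm{cd}(S,a) \le 10 - 1 - \lfloor 8/2 \rfloor = 5, \qquad \text{(2) gives } \mathrm{cd}(S,a) \le 10 - 2 - \lfloor 7/2 \rfloor = 5,$$
both already strictly sharper than the bound cd(S, a) ≤ n − 1 = 9 coming from the Hartshorne–Lichtenbaum vanishing theorem alone.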
In this paper, all rings are Noetherian commutative rings with unity and $\mathbb{C}$ is the field of complex numbers. When discussing the completion of a local ring (R, m), we will mean the m-adic completion of R. We denote the completion of R by $\widehat{R}$ and the completion of the strict henselization of the completion of R by $\widehat{(\widehat{R})^{sh}}$.
Acknowledgments. The authors would like to thank Matteo Varbaro for inspirations on this work and
generously sharing his unpublished notes and many useful comments. We are also indebted to Bhargav
Bhatt for helpful discussions and pointing out a mistake in a previous version of this work. We are
grateful to Brian Conrad, Craig Huneke, Atsushi Ito and Yusuke Nakamura for valuable conversations.
The first author was partially supported by NSF grant DMS 1104017. The second author was partially
supported by Grant-in-Aid for Scientific Research (C) No.26400039 from JSPS.
2. Main results on cohomological dimension
First we recall the notion of local Picard groups.
Definition 2.1. Let (R, m) be a Noetherian local ring. The local Picard group Picloc (R) of
R is the Picard group Pic(Spec R \ {m}) of the punctured spectrum of R. The étale-local
Picard group Picet−loc (R) of R is the Picard group Pic(Spec Rh \ {mRh }) of the punctured
spectrum of the henselization Rh of R.
Remark 2.2. If (R, m) is a Noetherian local k-algebra of depth ≥ 2 and with residue field k, then $\mathrm{Pic}^{\text{ét-loc}}(R)$ is isomorphic to $\mathrm{Pic}(\mathrm{Spec}\,\widehat{R} \setminus \{m\widehat{R}\})$ by [3, Chapter II Corollaire 7.2].
Next we show that “Lefschetz principle” holds for étale-local Picard groups.
Lemma 2.3. Let L/K be an extension of algebraically closed fields of characteristic zero.
Pick polynomials g1 , . . . , gs ∈ (x1 , . . . , xn )K[x1 , . . . , xn ], and denote
(Am , mAm ) = K[x1 , . . . , xn ](x1 ,...,xn ) /(g1 , . . . , gs ),
(Bn , nBn ) = L[x1 , . . . , xn ](x1 ,...,xn ) /(g1 , . . . , gs ).
We suppose that depth $A_m$ ≥ 2 and $H^2_{mA_m}(A_m)$ is a finite-dimensional K-vector space.
(1) The torsion subgroup of $\mathrm{Pic}^{\text{ét-loc}}(A_m)$ is isomorphic to that of $\mathrm{Pic}^{\text{ét-loc}}(B_n)$.
(2) If depth $A_m$ ≥ 3, then $\mathrm{Pic}^{\text{ét-loc}}(A_m) \cong \mathrm{Pic}^{\text{ét-loc}}(B_n)$.
Proof. (2) immediately follows from [3, Chapter III Proposition 2.10], so we will only prove
(1). Let A = K[x1 , . . . , xn ]/(g1 , . . . , gs ) and B = L[x1 , . . . , xn ]/(g1 , . . . , gs ). Let xA ∈ Spec A
(resp. xB ∈ Spec B) be the closed point corresponding to the maximal ideal m := (x1 , . . . , xn )
(resp. n := (x1 , . . . , xn )), and denote UA = Spec A \ {xA } (resp. UB = Spec B \ {xB }).
Claim. For all integers j and n ≥ 1, there exists a natural isomorphism
$$H^j(U_A \otimes_A A^h_m, \mu_n) \cong H^j(U_B \otimes_B B^h_n, \mu_n),$$
where $A^h_m$ (resp. $B^h_n$) is the (strict) henselization of $A_m$ (resp. $B_n$).
Proof of Claim. We may reduce to the case where L is the algebraic closure of a rational function field K(t_1, ..., t_m) over K. Let C = K[t_1, ..., t_m, x_1, ..., x_n]/(g_1, ..., g_s) and $U_C = \mathbb{A}^m_K \times_{\mathrm{Spec}\,K} U_A = \mathrm{Spec}\,C \setminus \mathrm{Spec}\,K[t_1, ..., t_m]$. We denote by $x_C \in \mathrm{Spec}\,C$ the point corresponding to the prime ideal (x_1, ..., x_n) and by $\overline{x}_C$ the geometric point over $x_C$ corresponding to L. By the smooth base change theorem for étale cohomology, one has
$$(R^j i_{C*} f^* \mu_n)_{\overline{x}_C} \cong (f^* R^j i_{A*} \mu_n)_{\overline{x}_C},$$
where f : Spec C → Spec A is a natural map and $i_A : U_A \hookrightarrow \mathrm{Spec}\,A$ (resp. $i_C : U_C \hookrightarrow \mathrm{Spec}\,C$) is a natural inclusion. Note that the strict henselization of Spec C at $\overline{x}_C$ is isomorphic to Spec $B^h_n$ and that of Spec A at $\overline{x}_C \to \mathrm{Spec}\,C \xrightarrow{f} \mathrm{Spec}\,A$ is isomorphic to Spec $A^h_m$. Thus,
$$(R^j i_{C*} f^* \mu_n)_{\overline{x}_C} \cong H^j(U_C \otimes_C B^h_n, \mu_n) \cong H^j(U_B \otimes_B B^h_n, \mu_n),$$
$$(f^* R^j i_{A*} \mu_n)_{\overline{x}_C} \cong H^j(U_A \otimes_A A^h_m, \mu_n),$$
so we obtain the assertion.
By virtue of [3, Chapter II Théorème 7.8], the local Picard scheme $\mathrm{Pic}^{loc}_{A_m/K}$ of $A_m$ over K and the local Picard scheme $\mathrm{Pic}^{loc}_{B_n/L}$ of $B_n$ over L exist (see [3, Chapter II] for the definition of local Picard schemes). It then follows from the proof of [3, Chapter III Proposition 2.7] that $\mathrm{Pic}^{loc}_{B_n/L} \cong \mathrm{Pic}^{loc}_{A_m/K} \otimes_K L$. Since L and K are algebraically closed fields, one has a natural inclusion
$$\mathrm{Pic}^{\text{ét-loc}}(A_m) \cong \mathrm{Pic}^{loc}_{A_m/K}(K) \hookrightarrow \mathrm{Pic}^{loc}_{B_n/L}(L) \cong \mathrm{Pic}^{\text{ét-loc}}(B_n).$$
Applying the above claim to the Kummer sequence, we see that this inclusion of local Picard groups induces an isomorphism of n-torsion subgroups
$${}_n\mathrm{Pic}^{\text{ét-loc}}(A_m) \cong {}_n\mathrm{Pic}^{\text{ét-loc}}(B_n)$$
for all integers n ≥ 1, because $\mathrm{Pic}^{\text{ét-loc}}(A_m) = \mathrm{Pic}(U_A \otimes_A A^h_m)$ (resp. $\mathrm{Pic}^{\text{ét-loc}}(B_n) = \mathrm{Pic}(U_B \otimes_B B^h_n)$). This means that the torsion subgroup of $\mathrm{Pic}^{\text{ét-loc}}(A_m)$ is isomorphic to that of $\mathrm{Pic}^{\text{ét-loc}}(B_n)$.
Now we state one of the main results of this paper. Example 2.6 demonstrates that the
assumptions of Theorem 2.4 are optimal.
Theorem 2.4. Let (S, m) be an n-dimensional regular local ring essentially of finite type
over a field of characteristic zero and let a be an ideal of S. Suppose that depth S/a ≥ 2 and
$H^2_m(S/a)$ is an S/m-vector space. Then cd(S, a) ≤ n − 3 if and only if the torsion subgroup of $\mathrm{Pic}^{loc}(\widehat{(\widehat{S/a})^{sh}})$ is finitely generated.
Proof. Let k be the residue field of S, and denote by $\overline{k}$ the algebraic closure of k. Fix a nonzero element f ∈ a ∩ m². By [8, Theorem 1.3], we can find a regular local ring $(T_0, m_{T_0})$ with residue field k and an element g ∈ $T_0$ such that: (1) $T_0$ is the localization of a polynomial ring at a maximal ideal, and (2) $T_0 \subset S$ is a faithfully flat extension which induces an isomorphism $T_0/(g) \cong S/(f)$. Let $\varphi : T_0 \to T_0/(g) \to S/(f)$ be a ring homomorphism induced by the isomorphism in (2) and let $a_{T_0}$ be the ideal $\varphi^{-1}(aS/(f)) \subset T_0$. Then $T_0/a_{T_0} \cong S/a$. Let $(T, m_T)$ be a faithfully flat extension of $T_0$ obtained by localizing the polynomial ring $\overline{k}[x_1, ..., x_n]$ at a maximal ideal lying over $m_{T_0}$, and set $a_T = a_{T_0}T$. Then cd(S, a) = cd($T_0$, $a_{T_0}$) = cd(T, $a_T$). Since the completion of the strict henselization of the completion of S/a is isomorphic to the completion of T/$a_T$, one has $\mathrm{Pic}^{loc}(\widehat{(\widehat{S/a})^{sh}}) \cong \mathrm{Pic}^{\text{ét-loc}}(T/a_T)$. Therefore, it is enough to show that cd(T, $a_T$) ≤ n − 3 if and only if the torsion subgroup of $\mathrm{Pic}^{\text{ét-loc}}(T/a_T)$ is finitely generated. It is easy to see that depth T/$a_T$ ≥ 2 and $H^2_{m_T}(T/a_T)$ is a T/$m_T$-vector space, so we may assume that S is the localization of the polynomial ring $\overline{k}[x_1, ..., x_n]$ at the maximal ideal (x_1, ..., x_n). We remark that $\mathrm{Pic}^{loc}(\widehat{(\widehat{S/a})^{sh}}) \cong \mathrm{Pic}^{\text{ét-loc}}(S/a)$ in this case.
Consider the subfield k′ of $\overline{k}$ obtained by adjoining to $\mathbb{Q}$ all the coefficients of a set of generators f_1, ..., f_r of a. By a standard argument, there exists a subfield of $\mathbb{C}$ isomorphic to k′, so the algebraic closure $\overline{k'}$ of k′ can be embedded in $\mathbb{C}$. Since the f_i are defined over k′, set $(S_{k'}, m_{k'}) = k'[x_1, ..., x_n]_{(x_1,...,x_n)}$ and $a_{k'} = (f_1, ..., f_r)S_{k'}$. Similarly, set $(S_{\mathbb{C}}, m_{\mathbb{C}}) = \mathbb{C}[x_1, ..., x_n]_{(x_1,...,x_n)}$ and $a_{\mathbb{C}} = (f_1, ..., f_r)S_{\mathbb{C}}$. Then cd($S_{\mathbb{C}}$, $a_{\mathbb{C}}$) = cd($S_{k'}$, $a_{k'}$) = cd(S, a) and depth $S_{\mathbb{C}}/a_{\mathbb{C}}$ = depth $S_{k'}/a_{k'}$ = depth S/a ≥ 2. Also, it is easy to check that $H^2_{m_{\mathbb{C}}}(S_{\mathbb{C}}/a_{\mathbb{C}})$ is
a C-vector space. Thus, we can reduce the problem to the case where k = C with the aid of
Lemma 2.3.
From now on, we consider the case where S = C[x1 , . . . , xn ](x1 ,...,xn ) . Since depth S/a ≥ 2,
we know from [24, Corollary 2.11] that cd(S, a) ≤ n − 2. Therefore, cd(S, a) ≤ n − 3 if and
only if $H^{n-2}_a(S) = 0$.
Claim. $H^{n-2}_a(S)$ is supported only at the maximal ideal m.
Proof of Claim. We may assume that S is a complete regular local ring for the proof of Claim. We will show that $H^{n-2}_{aS_p}(S_p) = 0$ for all prime ideals p of height n − 1. By the second vanishing theorem [24, Corollary 2.11], it is enough to show that depth $S_p/aS_p \ge 2$ for all prime ideals p of height n − 1 containing a. Since $H^2_m(S/a)$ is an S/m-vector space, $\mathrm{Ext}^{n-2}_S(S/a, S)$ is also an S/m-vector space by local duality. Similarly, since depth S/a ≥ 2, we see by local duality that $\mathrm{Ext}^{n-1}_S(S/a, S) = 0$. Thus, $\mathrm{Ext}^{n-2}_{S_p}(S_p/aS_p, S_p) = \mathrm{Ext}^{n-1}_{S_p}(S_p/aS_p, S_p) = 0$ for all prime ideals p of height n − 1. Applying local duality again, one has that $H^1_{pS_p}(S_p/aS_p) = H^0_{pS_p}(S_p/aS_p) = 0$.
Let f_1, ..., f_r ∈ (x_1, ..., x_n)$\mathbb{C}$[x_1, ..., x_n] be a set of generators of a and X be the closed subscheme of $\mathbb{C}^n$ defined by the f_i. If x ∈ X denotes the origin of $\mathbb{C}^n$, then $\mathcal{O}_{X,x} \cong S/a$. It follows from [24, Theorem 2.8] and the above claim that the vanishing of $H^{n-2}_a(S)$ is equivalent to the vanishing of the local de Rham cohomology $H^2_{\{x\},dR}(\mathrm{Spec}\,\widehat{\mathcal{O}}_{X,x})$, where $\widehat{\mathcal{O}}_{X,x}$ is the completion of $\mathcal{O}_{X,x}$. This local de Rham cohomology is isomorphic to the relative singular cohomology $H^2(X_{an}, (X \setminus \{x\})_{an}; \mathbb{C})$ by [12, Chapter III Proposition 3.1, Chapter IV Theorem 1.1]. Since the homology groups of a complex quasi-projective scheme (with coefficients in $\mathbb{Z}$) are all finitely generated by [7, Chapter 1 (6.10)], the relative homology $H_i(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z})$ is also finitely generated for all i. It then follows from the universal coefficient theorem that
$$H^2(X_{an}, (X \setminus \{x\})_{an}; \mathbb{C}) \cong H^2(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z}) \otimes \mathbb{C}.$$
Thus, $H^{n-2}_a(S) = 0$ if and only if $H^2(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z})$ is torsion. We will show that $H^2(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z})$ is torsion if and only if the torsion subgroup of $\mathrm{Pic}^{\text{ét-loc}}(S/a)$ is finitely generated.
Let U ⊂ X be a contractible Stein open neighborhood of x. It follows from the excision theorem [2, Chapter II Lemma 1.1] that for each i ∈ $\mathbb{N}$,
$$H^{i+1}(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z}) \cong H^{i+1}(U, U \setminus \{x\}; \mathbb{Z}) \cong H^i(U \setminus \{x\}, \mathbb{Z}),$$
$$H^{i+1}_{\{x\}}(X_{an}, \mathcal{O}_{X_{an}}) \cong H^{i+1}_{\{x\}}(U, \mathcal{O}_U) \cong H^i(U \setminus \{x\}, \mathcal{O}_U).$$
First we consider the following commutative diagram:
$$\begin{array}{ccc}
H^0(U, \mathcal{O}_U^\times) & \longrightarrow & H^0(U, \mathcal{O}_U) \\
\downarrow \rho^\times_{U,U\setminus\{x\}} & & \downarrow \rho_{U,U\setminus\{x\}} \\
H^0(U \setminus \{x\}, \mathcal{O}_U^\times) & \longrightarrow & H^0(U \setminus \{x\}, \mathcal{O}_U),
\end{array}$$
where the vertical maps are the restriction maps of sections and the horizontal maps are injections induced by the inclusion of sheaves $\mathcal{O}_U^\times \hookrightarrow \mathcal{O}_U$. Note that the restriction map $\rho_{U,U\setminus\{x\}}$ is surjective by [2, Chapter II Theorem 3.6], because depth $\mathcal{O}_{U,x} \ge 2$. Let $s \in H^0(U \setminus \{x\}, \mathcal{O}_U^\times) \subseteq H^0(U \setminus \{x\}, \mathcal{O}_U)$. Since $\rho_{U,U\setminus\{x\}}$ is surjective, there exists an extension $\widehat{s} \in H^0(U, \mathcal{O}_U)$ of s that does not vanish on $U \setminus \{x\}$. If $\widehat{s}$ is not in $H^0(U, \mathcal{O}_U^\times)$, then $\widehat{s}(x) = 0$,
which implies that $\dim_x U \le 1$. This contradicts the assumption that depth $\mathcal{O}_{U,x} \ge 2$. Thus, $\widehat{s} \in H^0(X, \mathcal{O}_X^\times)$, that is, $\rho^\times_{U,U\setminus\{x\}}$ is surjective.
Next we consider the following commutative diagram with exact rows, induced by the exponential sequence¹:
$$\begin{array}{ccccc}
H^0(U, \mathcal{O}_U) & \longrightarrow & H^0(U, \mathcal{O}_U^\times) & \longrightarrow & H^1(U, \mathbb{Z}) \\
\downarrow \rho_{U,U\setminus\{x\}} & & \downarrow \rho^\times_{U,U\setminus\{x\}} & & \downarrow \\
H^0(U \setminus \{x\}, \mathcal{O}_U) & \longrightarrow & H^0(U \setminus \{x\}, \mathcal{O}_U^\times) & \longrightarrow & H^1(U \setminus \{x\}, \mathbb{Z}) \longrightarrow H^1(U \setminus \{x\}, \mathcal{O}_U).
\end{array}$$
Since U is contractible, the map $H^0(U, \mathcal{O}_U) \to H^0(U, \mathcal{O}_U^\times)$ is surjective. Combining with the surjectivity of $\rho^\times_{U,U\setminus\{x\}}$, we see that $H^0(U \setminus \{x\}, \mathcal{O}_U) \to H^0(U \setminus \{x\}, \mathcal{O}_U^\times)$ is also surjective, which is equivalent to saying that $H^1(U \setminus \{x\}, \mathbb{Z}) \to H^1(U \setminus \{x\}, \mathcal{O}_U)$ is injective. Therefore, we obtain the following exact sequence from the exponential sequence:
$$0 \to H^1(U \setminus \{x\}, \mathbb{Z}) \to H^1(U \setminus \{x\}, \mathcal{O}_U) \to H^1(U \setminus \{x\}, \mathcal{O}_U^\times) \to H^2(U \setminus \{x\}, \mathbb{Z}).$$
We then use the fact that the étale-local Picard group $\mathrm{Pic}^{\text{ét-loc}}(S/a)$ is isomorphic to the direct limit of $\mathrm{Pic}(U \setminus \{x\}, \mathcal{O}_U^\times)$ as U ⊂ X runs through all analytic open neighborhoods of x, which follows from [3, Chapter III Proposition 3.2] and the proof of Proposition 3.6 in loc. cit. Taking the direct limit of the above exact sequence, we have the following exact sequence:
$$0 \to H^2(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z}) \to H^2_{\{x\}}(X_{an}, \mathcal{O}_{X_{an}}) \to \mathrm{Pic}^{\text{ét-loc}}(S/a) \to H^3(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z}).$$
Note that $H^2_{\{x\}}(X_{an}, \mathcal{O}_{X_{an}})$ is a $\mathbb{C}$-vector space. If $H^2(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z})$ is not torsion, then $\mathrm{Pic}^{\text{ét-loc}}(S/a)$ contains $\mathbb{Q}/\mathbb{Z}$, an infinitely generated torsion group. Conversely, if $H^2(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z})$ is torsion, then it has to be zero, and the torsion subgroup of $\mathrm{Pic}^{\text{ét-loc}}(S/a)$ is isomorphic to a subgroup of $H^3(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z})$, which is finitely generated. Thus, we complete the proof of Theorem 2.4.
Remark 2.5. An analogous statement to Theorem 2.4 does not hold in positive characteristic. For example, let E be a supersingular elliptic curve in the projective plane $\mathbb{P}^2$ over an algebraically closed field k of characteristic p > 0 and let (R, m) be the localization of the affine cone over E × E at the unique homogeneous maximal ideal. We easily see that depth R = 2 and the natural Frobenius action on $H^2_m(R)$ is nilpotent, because E is a supersingular elliptic curve. R has embedding dimension 9, so let S = k[x_1, ..., x_9]_{(x_1,...,x_9)} and a be the kernel of the natural surjection S → R. Since $H^0_m(R) = H^1_m(R) = 0$ and the Frobenius action on $H^2_m(R)$ is nilpotent, it follows from [22, Corollary 3.2] that $H^9_a(S) = H^8_a(S) = H^7_a(S) = 0$, that is, cd(S, a) = 6, as the height of a is equal to 6.
On the other hand, since R is a normal isolated singularity, one has an inclusion
$$\mathrm{Pic}(E \times E)/\mathbb{Z} \cong \mathrm{Cl}(R) \hookrightarrow \mathrm{Cl}(\widehat{R}) \cong \mathrm{Pic}^{loc}(\widehat{R}),$$
where the last isomorphism follows from [9, Proposition 18.10]. Thus, the torsion subgroup of $\mathrm{Pic}^{loc}(\widehat{(\widehat{S/a})^{sh}}) = \mathrm{Pic}^{loc}(\widehat{R})$ is not finitely generated, because the torsion subgroup of the Picard group of the abelian variety E × E is not finitely generated.
¹The exponential sequence $0 \to \mathbb{Z} \xrightarrow{\times 2\pi i} \mathcal{O}_X \xrightarrow{e} \mathcal{O}_X^\times \to 1$ exists on any (even non-reduced) complex analytic space X, where e is defined as follows: if $f \in \mathcal{O}_X$ is locally represented as the restriction of a holomorphic function $\widetilde{f}$ on a complex manifold M, then $e(f) = e^{2\pi i \widetilde{f}}|_X$. It is easy to check its well-definedness and the exactness of the exponential sequence.
Example 2.6. In Theorem 2.4, the assumption on $H^2_m(R)$ cannot be removed. Let S = $\mathbb{C}$[x, y, u, v, w]_{(x,y,u,v,w)} and (R, m) = S/a where a = (x, y) ∩ (u, v). Then depth R = 2 but $H^2_m(R)$ does not have finite length. For suppose it does, then local duality would imply that depth $R_p \ge 2$, where p = (x, y, u, v). Since $R_p$ has disconnected punctured spectrum, this is a contradiction. On the other hand, $\mathrm{Pic}^{loc}(\widehat{(\widehat{S/a})^{sh}}) = \mathrm{Pic}^{loc}(\widehat{R}) = \mathbb{Z}$ (the proof is the same as that of [19, Example 28, 29], or see [14, Example 5.3]). However, since a is a square-free monomial ideal, cd(S, a) = $\mathrm{pd}_S R$ = 5 − 2 = 3. Thus, the conclusion of Theorem 2.4 does not hold if we remove the condition that $H^2_m(R)$ has finite length.
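For readers who want to see directly why cd(S, a) = 3 here, the following Mayer–Vietoris computation (standard, and supplied by us only as an illustration) gives the same answer as the square-free monomial ideal formula. Write b = (x, y) and c = (u, v), so a = b ∩ c and b + c = (x, y, u, v):
$$\cdots \to H^3_b(S) \oplus H^3_c(S) \to H^3_a(S) \to H^4_{b+c}(S) \to H^4_b(S) \oplus H^4_c(S) \to \cdots$$
Since b and c are complete intersections of height 2, $H^i_b(S)$ and $H^i_c(S)$ vanish for i ≠ 2, while $H^4_{(x,y,u,v)}(S) \ne 0$; hence $H^3_a(S) \cong H^4_{(x,y,u,v)}(S) \ne 0$ and cd(S, a) = 3 = n − 2.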
Proposition 2.7 (cf. [18, Lemma 8]). Let (S, m) be an n-dimensional regular local ring essentially of finite type over a field of characteristic zero and let a be an ideal of S. If depth S/a ≥ 3, then $\mathrm{Pic}^{loc}(\widehat{(\widehat{S/a})^{sh}})$ is finitely generated.
Proof. We use the same strategy as the proof of Theorem 2.4. We may assume that S = $\mathbb{C}$[x_1, ..., x_n]_{(x_1,...,x_n)}. Let x ∈ X be a closed point of an affine scheme X of finite type over $\mathbb{C}$ such that $\mathcal{O}_{X,x} \cong S/a$. The exponential sequence then induces the following exact sequence:
$$H^2(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z}) \to H^2_{\{x\}}(X_{an}, \mathcal{O}_{X_{an}}) \to \mathrm{Pic}^{loc}(\widehat{S/a}) \to H^3(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z}).$$
If depth S/a ≥ 3, then we see that $H^2_{\{x\}}(X_{an}, \mathcal{O}_{X_{an}})$ vanishes and then $\mathrm{Pic}^{loc}(\widehat{S/a})$ is isomorphic to a subgroup of $H^3(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z})$. Thus, $\mathrm{Pic}^{loc}(\widehat{S/a})$ is finitely generated.
Corollary 2.8. Let (S, m) be an n-dimensional regular local ring essentially of finite type
over a field. If a is an ideal of S such that depth S/a ≥ 3, then cd(S, a) ≤ n − 3.
Proof. The positive characteristic case follows from a result of Peskine and Szpiro [25] and
the characteristic zero case follows from Theorem 2.4 and Proposition 2.7.
When depth S/a ≥ 4, the cohomological dimension cd(S, a) is not necessarily less than
n − 3 (see Example 2.11). We give a necessary and sufficient condition for cd(S, a) to be less
than n − 3 in terms of the local Picard group of the completion of the strict henselization of
the completion of S/a.
Theorem 2.9. Let (S, m) be an n-dimensional regular local ring essentially of finite type over a field of characteristic zero and let a be an ideal of S such that depth S/a ≥ 4. Then cd(S, a) ≤ n − 4 if and only if $\mathrm{Pic}^{loc}(\widehat{(\widehat{S/a})^{sh}})$ is torsion.
Proof. We use the same strategy as the proof of Theorem 2.4 again. We may assume that S = $\mathbb{C}$[x_1, ..., x_n]_{(x_1,...,x_n)}. Note that $H^i_a(S) = 0$ for all i ≥ n − 2 by Corollary 2.8. We also see from an argument analogous to Claim in the proof of Theorem 2.4 that $H^{n-3}_a(S)$ is supported only at the maximal ideal m. Let x ∈ X be a closed point of an affine scheme X of finite type over $\mathbb{C}$ such that $\mathcal{O}_{X,x} \cong S/a$. The vanishing of $H^{n-3}_a(S)$ is equivalent to saying that the relative singular cohomology $H^3(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z})$ is torsion. We will show that $H^3(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z}) \cong \mathrm{Pic}^{\text{ét-loc}}(S/a)$.
Let U ⊂ X be a contractible Stein open neighborhood of x. It follows from the excision theorem and the contractibility of U that
$$H^3(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z}) \cong H^3(U, U \setminus \{x\}; \mathbb{Z}) \cong H^2(U \setminus \{x\}, \mathbb{Z}).$$
Also, since depth $\mathcal{O}_{U,x} \ge 4$ and U is Stein, one has $H^i(U \setminus \{x\}, \mathcal{O}_U) \cong H^{i+1}_{\{x\}}(U, \mathcal{O}_U) = 0$ for i = 1, 2. Then the exponential sequence induces the following exact sequence:
$$0 = H^1(U \setminus \{x\}, \mathcal{O}_U) \to H^1(U \setminus \{x\}, \mathcal{O}_U^\times) \to H^2(U \setminus \{x\}, \mathbb{Z}) \to H^2(U \setminus \{x\}, \mathcal{O}_U) = 0.$$
In other words, $H^1(U \setminus \{x\}, \mathcal{O}_U^\times) \cong H^2(U \setminus \{x\}, \mathbb{Z})$. Thus, we can conclude that
$$\mathrm{Pic}^{\text{ét-loc}}(S/a) \cong \varinjlim_{U \ni x} H^1(U \setminus \{x\}, \mathcal{O}_U^\times) \cong \varinjlim_{U \ni x} H^2(U \setminus \{x\}, \mathbb{Z}) \cong H^3(X_{an}, (X \setminus \{x\})_{an}; \mathbb{Z}).$$
The following corollary is immediate from Corollary 2.8 and Theorem 2.9.
Corollary 2.10. Let (S, m) be an n-dimensional regular local ring essentially of finite type
over a field of characteristic zero and let a be an ideal of S such that depth S/a ≥ 4. Then
cd(S, a) = n − 3 if and only if $\mathrm{Pic}^{loc}(\widehat{(\widehat{S/a})^{sh}})$ is not torsion.
Example 2.11. Let A and B be standard graded Cohen-Macaulay normal domains over an
algebraically closed field of characteristic zero. Suppose that dim A ≥ 3, dim B ≥ 2, and the
a-invariants of A and B are both negative. Let R = A#B be the Segre product of A and B.
We write R = T /a using the standard embedding, where T is a standard graded polynomial
ring with unique homogeneous maximal ideal m. Let S = Tm . Then cd(S, a) = dim S − 3.
To prove this, we just need to verify the conditions of Corollary 2.10. We know that
dim R = dim A+dim B −1 ≥ 4 and R is a Cohen-Macaulay normal domain (see [10, Theorem
4.2.3]). Let U = Proj A, V = Proj B and X = Proj R. Then Pic(X) = Pic(U ) × Pic(V ) has
rank at least 2. For this we need the assumption that A has depth at least 3 ([13, Exercise
III.12.6]). Since R is normal, there exists the following exact sequence (see [13, Exercise
II.6.3])
0 → Z → Cl(X) → Cl(Spec R \ {mR}) → 0,
which induces an exact sequence of Picard groups
0 → Z → Pic(X) → Pic(Spec R \ {mR}) → 0.
It then follows that $\mathrm{Pic}^{loc}(\widehat{(\widehat{S/a})^{sh}}) = \mathrm{Pic}^{loc}(\widehat{S/a})$ has positive rank. Thus cd(S, a) = dim S − 3.
For instance, when A and B are polynomial rings of dimension m, n, respectively, we see
that a is generated by 2×2 minors of the m×n matrix of indeterminates, and it is well-known
that in such a case cd(S, a) = mn − 3 (see for example [4, Remark 7.12]).
3. Applications
In this section, we give a number of applications of the results in §2. First, we recall the
definition of Lyubeznik numbers.
Definition 3.1 ([20, Definition 4.1]). Let R be a Noetherian local ring that admits a surjection from an n-dimensional regular local ring (S, m) of equal characteristic. Let a ⊂ S be the
kernel of the surjection and let k = S/m be the residue field of S. Then for each 0 ≤ i, j ≤ n,
the Lyubeznik number λi,j (R) is defined by
$$\lambda_{i,j}(R) = \dim_k \left( \mathrm{Ext}^i_S(k, H^{n-j}_a(S)) \right).$$
It is known that the λi,j (R) are all finite and independent of the choice of the surjection
S → R.
As an application of Corollary 2.8, we obtain vanishing results of Lyubeznik numbers.
Proposition 3.2. Let R be a local ring essentially of finite type over a field. Then for all
j < depth R, one has
λj−2,j (R) = λj−1,j (R) = λj,j (R) = 0.
Proof. Let S be an n-dimensional regular local ring essentially of finite type over a field and a be an ideal of S such that S/a ≅ R. Since the injective dimension of $H^{n-j}_a(S)$ is less than or equal to the dimension of the support of $H^{n-j}_a(S)$ by [20, Corollary 3.6 (b)], we will show that
$$\dim \mathrm{Supp}(H^{n-j}_a(S)) \le j - 3$$
for all 0 ≤ j < depth S/a (here we use the convention that the dimension of the empty set is −∞). To check this, it is enough to show that $H^{n-j}_{aS_p}(S_p) = 0$ for all 0 ≤ j < depth S/a and for all prime ideals p of height h ≤ n − j + 2. If j = 0, then $H^n_a(S) = 0$ by the Lichtenbaum-Hartshorne vanishing theorem [11]. If j = 1, then $H^{n-1}_a(S) = 0$ by the second vanishing theorem [24], [25]. Thus, we may assume that j ≥ 2 and h = n − j + 2.
Since depth S/a > j ≥ 2, we have $H^j_m(S/a) = H^{j-1}_m(S/a) = H^{j-2}_m(S/a) = 0$. Local duality over S yields that $\mathrm{Ext}^{n-j}_S(S/a, S) = \mathrm{Ext}^{n-j+1}_S(S/a, S) = \mathrm{Ext}^{n-j+2}_S(S/a, S) = 0$. In particular, $\mathrm{Ext}^{n-j}_{S_p}(S_p/aS_p, S_p) = \mathrm{Ext}^{n-j+1}_{S_p}(S_p/aS_p, S_p) = \mathrm{Ext}^{n-j+2}_{S_p}(S_p/aS_p, S_p) = 0$. Then local duality over the (n − j + 2)-dimensional regular local ring $S_p$ yields that
$$H^2_{pS_p}(S_p/aS_p) = H^1_{pS_p}(S_p/aS_p) = H^0_{pS_p}(S_p/aS_p) = 0,$$
that is, depth $S_p/aS_p \ge 3$. We, therefore, conclude from Corollary 2.8 that $H^{n-j}_{aS_p}(S_p) = 0$.
The following proposition comes from a discussion with Matteo Varbaro, whom we thank.
Proposition 3.3. Let R be a d-dimensional local ring essentially of finite type over a field.
If R satisfies Serre’s condition (S3 ), then λd−1,d (R) = 0.
Proof. Let (S, m) be an n-dimensional regular local ring essentially of finite type over a field and a be an ideal of S such that S/a ≅ R. We use the Grothendieck spectral sequence
$$E_2^{p,q} = H^p_m(H^q_a(S)) \Longrightarrow E^{p+q} = H^{p+q}_m(S).$$
Since $E_2^{d-1,n-d}$ is an injective S-module by [20, Corollary 3.6 (a)], it is isomorphic to the direct sum of $\lambda_{d-1,d}(R)$ copies of the injective hull $E_S(S/m)$ of the residue field of S. In particular, the vanishing of $\lambda_{d-1,d}(R)$ is equivalent to saying that $E_2^{d-1,n-d} = 0$.
Claim. $E_2^{d-r-1,n-d+r-1} = 0$ for all r ≥ 2.
Proof of Claim. We may assume that d ≥ 3 and r ≤ d + 1. Since the injective dimension of $H^{n-d+r-1}_a(S)$ is less than or equal to the dimension of the support of $H^{n-d+r-1}_a(S)$ by [20, Corollary 3.6 (b)], it suffices to show that
$$\dim \mathrm{Supp}(H^{n-d+r-1}_a(S)) \le d - r - 2.$$
In other words, we will show that $H^{n-d+r-1}_{aS_p}(S_p) = 0$ for all prime ideals p of height h ≤ n − d + r + 1. Note that $H^n_a(S) = H^{n-1}_a(S) = 0$ by the Lichtenbaum-Hartshorne vanishing theorem [11] and the second vanishing theorem [24], [25]. Therefore, we only consider the case where r ≤ d − 1 and h = n − d + r + 1. In this case, depth $S_p/aS_p \ge 3$ by assumption. Applying Corollary 2.8 to the (n − d + r + 1)-dimensional local ring $S_p$, we see that $H^{n-d+r-1}_{aS_p}(S_p) = 0$.
On the other hand, it is easy to see that $E_2^{d+r-1,n-d-r+1} = 0$ for all r ≥ 2. Combining this with the above claim, we conclude that
$$E_2^{d-1,n-d} \cong E_3^{d-1,n-d} \cong \cdots \cong E_\infty^{d-1,n-d} = 0,$$
where the last equality follows from the fact that $E^{n-1} = H^{n-1}_m(S) = 0$.
Remark 3.4. Let R be an excellent local ring containing a field (but not necessarily essentially
of finite type over a field). If R is an isolated singularity, then making use of Artin’s algebraization theorem [1, Theorem 3.8], we still have the same vanishing results as Propositions
3.2 and 3.3.
Next, we prove an extension of a recent result by Katzman-Lyubeznik-Zhang [17] which in
turn extended the classical result of Hartshorne on connectedness of the punctured spectrum.
Definition 3.5 ([17, Definition 1.1]). Let (R, m) be a Noetherian local ring and let p1 , . . . , pm
be the minimal primes of R. The simplicial complex ∆(R) associated to R is a simplicial
complex on the vertices 1, . . . , m such that a simplex {i0 , . . . , is } is included in ∆(R) if and
only if pi0 + · · · + pis is not m-primary.
Proposition 3.6. Let (R, m, k) be a local ring essentially of finite type over a field with
residue field k separably closed. If depth R ≥ 3, then
$$\widetilde{H}_0(\Delta(\widehat{R}); k) = \widetilde{H}_1(\Delta(\widehat{R}); k) = 0,$$
where the $\widetilde{H}_i(\Delta(\widehat{R}); k)$ are the reduced singular homology of $\Delta(\widehat{R})$.
Proof. It follows from a very similar argument to the proof of [17, Theorem 1.2], together
with Corollary 2.8.
Finally, we note some consequences of our results combined with the key induction theorem
[16, Theorem 2.5]. We start with a reinterpretation of this theorem which is more convenient
for our use:
Theorem 3.7 (Huneke-Lyubeznik). Let (S, m) be a d-dimensional regular local ring containing a field and let a ⊂ S be an ideal of pure height c. Let f : N → N be a non-decreasing
function. Assume there exist integers l′ ≥ l ≥ c such that
(1) f (l) ≥ c,
(2) cd(Sp , ap ) ≤ f (l′ + 1) − c + 1 for all prime ideals p ⊃ a with ht p ≤ l − 1,
(3) cd(Sp , ap ) ≤ f (ht p) for all prime ideals p ⊃ a with l ≤ ht p ≤ l′ ,
(4) f (r − s − 1) ≤ f (r) − s for every r ≥ l′ + 1 and every c − 1 ≥ s ≥ 1.
Then cd(S, a) ≤ f (d) if d ≥ l.
Proof. We apply [16, Theorem 2.5] with M = A = B = S, I = a and n = f (d) + 1. We check
all the conditions of [16, Theorem 2.5]. First, we need to show n > c. However, since f is
non-decreasing and d ≥ l, it follows from (1) that n − 1 = f (d) ≥ f (l) ≥ c. The condition
(3) allows us to assume d ≥ l′ + 1 (otherwise we take p = m to conclude).
Next, let s be an integer such that 1 ≤ s ≤ c − 1. In order to verify the two conditions (i)
and (ii) in [16, Theorem 2.5], by [16, Lemma 2.4], it is enough to show the following claim:
Claim. cd(Sp , ap ) ≤ n − s − 1 for all prime ideals p ⊃ a with dim S/p ≥ s + 1.
Since d ≥ l′ + 1, we have that n − s − 1 = f (d) − s ≥ f (l′ + 1) − s ≥ f (l′ + 1) − c + 1, so
(2) proves the claim if ht p ≤ l − 1. If ht p ≥ l, then by (3) and induction on d, we know that
cd(Sp , ap ) ≤ f (ht p). However, since ht p ≤ d − s − 1, it follows from (4) that
f (ht p) ≤ f (d − s − 1) ≤ f (d) − s = n − s − 1.
Theorem 3.8. Let S be an n-dimensional regular local ring containing a field and let a ⊂ S
be an ideal of height c.
(1) If S/a satisfies Serre's condition (S_2) and dim S/a ≥ 1, then
$$\mathrm{cd}(S, a) \le n - 1 - \left\lfloor \frac{n-2}{c} \right\rfloor.$$
(2) Suppose that S is essentially of finite type over a field. If S/a satisfies Serre's condition (S_3) and dim S/a ≥ 2, then
$$\mathrm{cd}(S, a) \le n - 2 - \left\lfloor \frac{n-3}{c} \right\rfloor.$$
Proof. First note that a is of pure height c because S/a satisfies (S_2). For the statement (1), we use Theorem 3.7 with $f(m) = m - 1 - \lfloor \frac{m-2}{c} \rfloor$, l = c + 1 and l′ = 2c + 1. For (2), we use $f(m) = m - 2 - \lfloor \frac{m-3}{c} \rfloor$, l = c + 2 and l′ = 2c + 2. To verify the condition (3) of Theorem 3.7, we need to invoke Corollary 2.8 as follows. If ht p = l = c + 2, then it follows from (1) that cd($S_p$, $a_p$) ≤ c = f(ht p). If c + 3 ≤ ht p ≤ l′ = 2c + 2, then f(ht p) = ht p − 3. However, since depth $S_p/aS_p \ge 3$ in this case, it follows from Corollary 2.8 that cd($S_p$, $a_p$) ≤ ht p − 3.
Remark 3.9. There have been many results in the literature which give similar bounds when
the strict henselization of R is a domain or has a small number of minimal primes. For
example, [23, Corollary 1.2] gives the same bound as that of Theorem 3.8 (1) under the
assumption that the number of minimal primes is less than n/c. Of course, there are a lot of
examples of ideals with good depth and many minimal primes. A very elementary example
is the complete intersection I = (x1 . . . xn , y1 . . . yn ) ⊂ k[[x1 , · · · , xn , y1 , · · · , yn ]], which has
n² minimal primes {(x_i, y_j)}.
There are many examples where the bounds in Theorem 3.8 are sharp. For example, one
can use the Segre product of two polynomial rings of dimension 2 and d with d ≥ 3, as
explained in Example 2.11. However, those examples have c relatively big compared to n.
We suspect that in general one can do a little better, for instance, as follows:
Question 3.10. Let S be an excellent regular local ring that contains a field. Let n = dim S,
and a ⊂ S an ideal of height c. If S/a satisfies Serre’s condition (S2 ), then is it always true
that
$$\mathrm{cd}(S, a) \le n - \left\lfloor \frac{n}{c+1} \right\rfloor - \left\lfloor \frac{n-1}{c+1} \right\rfloor\,?$$
Under some extra assumption, for example if S/a is normal, the answer is yes by [16,
Theorem 3.9]. If the answer to the above question is affirmative, then one can show that the
bound is sharp for most n and c. We give a class of examples in the case when c is odd using
square-free monomial ideals. More details will be explained in [5].
Example 3.11. Let c = 2l − 1, l > 1 and suppose n = ql. Let S = k[x1 , · · · , xn ], m =
(x1 , . . . , xn ) and J1 be the monomial ideal (x1 · · · xl , xl+1 · · · x2l , . . . , x(q−1)l+1 · · · xql ). Let J2
be the square-free part of J1 ml−1 . Then J2 is an ideal with linear presentation and all the
generators are in degree 2l − 1 = c. The regularity reg J2 of J2 is equal to
$$n - q + 1 = n - \left\lfloor \frac{n}{2l} \right\rfloor - \left\lfloor \frac{n-1}{2l} \right\rfloor = n - \left\lfloor \frac{n}{c+1} \right\rfloor - \left\lfloor \frac{n-1}{c+1} \right\rfloor.$$
On the other hand, if a is the Alexander dual of J2 , then S/a satisfies (S2 ), ht a = c and
cd(S, a) = pd S/a = reg J2
(see for example [6]).
References
[1] M. Artin, Algebraic approximation of structures over complete local rings, Inst. Hautes Études Sci. Publ.
Math. (1969), no. 36, 23–58.
[2] C. Bănică and O. Stănăşilă, Algebraic methods in the global theory of complex spaces, Editura Academiei,
Bucharest, John Wiley & Sons, London-New York-Sydney, 1976.
[3] J.-F. Boutot, Schéma de Picard local, Lecture Notes in Mathematics, vol. 632, Springer, Berlin, 1978.
[4] W. Bruns and U. Vetter, Determinantal rings, Lecture Notes in Mathematics, vol. 1327, Springer, Berlin,
1988.
[5] H. Dao and D. Eisenbud, On ideals with partial linear resolution, in preparation.
[6] H. Dao, C. Huneke, and J. Schweig, Bounds on the regularity and projective dimension of ideals associated
to graphs, J. Algebraic Combin. 38 (2013), no.1, 37–55.
[7] A. Dimca, Singularities and topology of hypersurfaces, Universitext, Springer-Verlag, New York, 1992.
[8] S. P. Dutta, A theorem on smoothness–Bass-Quillen, Chow groups and intersection multiplicity of Serre,
Trans. Amer. Math. Soc. 352 (2000), 1635–1645.
[9] R. M. Fossum, The divisor class group of a Krull domain, Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 74, Springer-Verlag, 1973.
[10] S. Goto and K. Watanabe, On graded rings I, J. Math. Soc. Japan 30 (1978), 179-213.
[11] R. Hartshorne, Cohomological dimension of algebraic varieties, Ann. of Math. 88 (1968), 403–450.
[12] R. Hartshorne, On the de Rham cohomology of algebraic varieties, Inst. Hautes Études Sci. Publ. Math.
No. 45 (1975), 5–99.
[13] R. Hartshorne, Algebraic Geometry, Graduate Text in Mathematics 52, Springer-Verlag, New York, Heidelberg, Berlin, 1977.
[14] R. Hartshorne, Generalized divisors on Gorenstein schemes, K-Theory 8 (1994), 287–339.
[15] C. Huneke, Problems in local cohomology, Free Resolutions in Commutative Algebra and Algebraic
Geometry Sundance 90, Res. Notes in Math., Jones and Barlett 2 (1992), 93-108.
[16] C. Huneke and G. Lyubeznik, On the vanishing of local cohomology modules, Invent. Math. 102 (1990),
no. 1, 73–93.
[17] M. Katzman, G. Lyubeznik and W. Zhang, An extension of a theorem of Hartshorne, arXiv:1408.0858.
[18] J. Kollár, Grothendieck–Lefschetz type theorems for the local Picard group, arXiv:1211.0317.
[19] J. Kollár, Maps between local Picard groups, arXiv:1407.5108.
[20] G. Lyubeznik, Finiteness properties of local cohomology modules, Invent. Math. 113 (1993), 41–55.
[21] G. Lyubeznik, F -modules: applications to local cohomology and D-modules in characteristic p > 0, J.
reine angew. Math. 491 (1997), 65–130.
[22] G. Lyubeznik, On the vanishing of local cohomology in characteristic p > 0, Compositio Math. 142
(2006), 207–221.
[23] G. Lyubeznik, On some local cohomology modules, Advances in Math. 213 (2007), 621–643.
[24] A. Ogus, Local cohomological dimension, Ann. of Math. (2) 98 (1973), 327–365.
[25] C. Peskine and L. Szpiro, Dimension projective finie et cohomologie locale, Publ. Math. Inst. Hautes
Études Sci. 42 (1973), 47–119.
[26] M. Varbaro, Cohomological and projective dimensions, Compositio Math. 149 (2013), 1203–1210.
Department of Mathematics, University of Kansas, Lawrence, KS 66045-7523, USA
E-mail address: [email protected]
Graduate School of Mathematical Sciences, University of Tokyo, 3-8-1 Komaba, Meguro-ku,
Tokyo 153-8914, Japan
E-mail address: [email protected]
Flight Trajectory Planning for Fixed-Wing Aircraft
in Loss of Thrust Emergencies.
Saswata Paul∗ , Frederick Hole† , Alexandra Zytek∗ and Carlos A. Varela∗
Department of Computer Science∗ ; Department of Mechanical, Aerospace, and Nuclear Engineering†
Rensselaer Polytechnic Institute, Troy, New York, 12180
{pauls4, holef, zyteka}@rpi.edu, [email protected]
arXiv:1711.00716v1 [cs.SY] 31 Oct 2017
Abstract
Loss of thrust emergencies—e.g., induced by bird/drone strikes or fuel exhaustion—create the need for dynamic data-driven
flight trajectory planning to advise pilots or control UAVs. While total loss of thrust (gliding) trajectories to nearby airports can
be pre-computed for all initial points in a 3D flight plan, dynamic aspects such as partial power, wind, and airplane surface
damage must be considered for accuracy. In this paper, we propose a new Dynamic Data-Driven Avionics Software (DDDAS)
approach which during flight updates a damaged aircraft performance model, used in turn to generate plausible flight trajectories
to a safe landing site. Our damaged aircraft model is parameterized on a baseline glide ratio for a clean aircraft configuration
assuming best gliding airspeed on straight flight. The model predicts purely geometric criteria for flight trajectory generation,
namely, glide ratio and turn radius for different bank angles and drag configurations. Given actual aircraft flight performance data,
we dynamically infer the baseline glide ratio to update the damaged aircraft model. Our new flight trajectory generation algorithm
thus can significantly improve upon prior Dubins based trajectory generation work by considering these data-driven geometric
criteria. We further introduce a trajectory utility function to rank trajectories for safety, in particular, to prevent steep turns close
to the ground and to remain as close to the airport or landing zone as possible. As a use case, we consider the Hudson River
ditching of US Airways 1549 in January 2009 using a flight simulator to evaluate our trajectories and to get sensor data (airspeed,
GPS location, barometric altitude). In this example, a baseline glide ratio of 17.25:1 enabled us to generate trajectories up to
28 seconds after the birds strike, whereas, a 19:1 baseline glide ratio enabled us to generate trajectories up to 36 seconds after
the birds strike. DDDAS can significantly improve the accuracy of generated flight trajectories thereby enabling better decision
support systems for pilots in total and partial loss of thrust emergency conditions.
I. INTRODUCTION
Dynamic Data-Driven Applications and Systems (DDDAS) use data from sensor measurements to dynamically update a
system’s model, thereby, improving the model’s accuracy and its effectiveness in prediction-based applications [1]. Flight
systems are quite complex, rendering pure model-based approaches insufficient to accurately predict an aircraft’s performance
upon system failures. In this paper, we investigate Dynamic Data-Driven Avionics Software (DDDAS), in decision support
systems for unexpected loss of thrust (LOT) emergencies. LOT emergencies can result from fuel exhaustion as in Tuninter
1153’s accident, but also from bird strikes as in the Hudson River ditching of US Airways 1549. Aerodynamic models can
be used to plan flyable trajectories from the airplane’s LOT initial location to an airport runway or to an off-airport landing
site. We propose a DDDAS approach to flight trajectory generation that distills a complex aerodynamics model into purely
geometric constraints: glide ratio and radius of turns for different bank angles and drag configurations. Our damaged aircraft
model can predict the different glide ratios and radii of turns using a single parameter: the aircraft’s baseline glide ratio, g0 . g0
represents the glide ratio for a clean aircraft configuration assuming best gliding airspeed on straight flight. If we can infer the
actual g0 from sensor data on the fly, we can re-compute flight trajectories that will automatically consider dynamic factors,
such as partial power, wind aloft, and airplane surface damages, since these factors will impact the inferred baseline glide
ratio. By considering different bank angles and drag configurations in the generated plans, we can rank different plausible
trajectories according to the risk they impose on flights. We develop safety metrics to rank flyable trajectories considering
maximum distance from landing site, total length or trajectory, total time of trajectory, average altitude, length of extended
runway segment and average bank angle over height. We use a flight simulator to evaluate generated trajectories, and to gather
sensor data (e.g., airspeed, GPS location, and barometric altitude) to dynamically update the damaged aircraft performance
model.
Our dynamic data driven trajectory generation approach generates a trajectory in the clean configuration with no thrust and
gathers data from aircraft sensors for comparison. If there is an inconsistency between the two data, the observed data is sent
to a model refinement component, which infers a new glide ratio and sends it to the damaged aircraft model. The damaged
aircraft model uses this refined baseline glide ratio to create a new function to compute glide ratios for different bank angles
and sends it to the trajectory generation algorithm to generate trajectories which are consistent with the actual capabilities
of the aircraft. The damaged aircraft model takes as input the glide ratio for the clean configuration and a drag multiplier
table for generating the glide ratios in dirty configurations and generates glide ratios and radii of turn for different values of
This work was accepted as a full paper and presented in the Second International Conference on InfoSymbiotics / DDDAS (Dynamic Data Driven
Applications Systems) held at MIT, Cambridge, Massachusetts in August, 2017.
TABLE I: Glide ratio and radius of turn for various bank angles at best glide speed (225 kts) for Airbus A320.
Bank angle:             0°        10°       20°       30°       45°       60°
Glide ratio:            17.25:1   16.98:1   16.21:1   14.92:1   12.19:1   8.62:1
Radius of turn (feet):  ∞         25430     12319     7766      4484      2588
TABLE II: Glide ratio and radius of turn for various bank angles at best glide speed (65 kts) for Cessna 172.
Bank angle:             0°      10°      20°      30°      45°      60°
Glide ratio:            9:1     8.86:1   8.45:1   7.79:1   6.36:1   4.5:1
Radius of turn (feet):  ∞       2122     1028     648      374      216
bank angle, airspeed and drag configuration. Our trajectory planning algorithm generates trajectories from an initial point to a
given runway or a set of runways in case of a LOT emergency situation. After the possible trajectories are generated, they are
evaluated on the basis of several safety metrics and ranked according to their safety level. This is important because in case
of a real emergency, the pilots have to choose a course of action in a matter of seconds and if trajectories are already ranked,
it becomes easier for the pilots to make a fast and educated choice.
(a) 2D view.
(b) 3D view
Fig. 1: Effect of bank angle on trajectories.
Bank angle of turns has a major impact on the glide ratio and radius of turn of an aircraft (Table I, Table II). This difference
in glide ratio and radius of turn in turn, has a major impact on the type of trajectories (Fig. 1). So, in our trajectory planning
algorithm, we use three discrete values of bank angles: 20°, 30° and 45°. All the scenarios and experiments in this paper have
been done for an Airbus A320.
The rest of the paper is structured as follows: Section II discusses prior work done by us on avionics systems and related
work done on trajectory generation along with novel aspects of the work presented in this paper; Section III describes the
aerodynamic model used in this paper; Section IV describes our novel trajectory planning algorithm; Section V describes the
dynamic data driven model for trajectory generation; Section VI contains details of the experiments done along with the results
observed; Section VII contains conclusions and future work and Section VIII contains acknowledgements.
II. PREVIOUS WORK
In prior work, we have developed a ProgrammIng Language for spatiO-Temporal data Streaming applications (PILOTS) to
detect sensor failures from data and to estimate values for quantities of interest (e.g., airspeed, or fuel quantity) upon fault
detection and isolation. We have successfully applied PILOTS to data from actual aircraft accidents, including Air France 447
when pitot tubes icing resulted in wrong airspeed measurements, and Tuninter 1153, when a wrong fuel quantity indicator
installation resulted in complete fuel exhaustion and engine failure [2]. We have worked on data streaming application for
avionics systems including development of failure models that can detect errors in sensor data, simulation of error detection
and correction using real data from flights, development of error signatures to detect and classify abnormal conditions from
sensor data, programming model to reason about spatio-temporal data streams, and design and implementation of a language
for spatio-temporal data streaming applications [3]–[8].
Dubins curves [9] are used to find the shortest paths, in two dimensions, between two configurations for a robot or a car
that can move only in one direction (Fig. 2). Dubins 2D shortest car paths have been previously applied for generating shortest
paths for aircraft in three dimensional space. Atkins et al use Dubins paths to generate 3D trajectories for aircraft in the
event of a loss of thrust emergency [10], [11]. They define worst case trajectory, direct trajectory and high altitude trajectory.
Fig. 2: Four types of Dubins paths with a straight line segment: RSR, RSL, LSL and LSR. The shortest one (RSR in this case)
is chosen.
Their high altitude trajectory has intermediate ’S’ turns to lose excess altitude which might take the aircraft far away from the
runway. Owen et al propose Dubins airplane paths for fixed wing UAVs with power for maneuverability [12]. They introduce
three types of trajectories for low altitude, middle altitude and high altitude scenarios. The low altitude trajectory consists of a
simple 3D Dubins airplane path, the high altitude trajectory consists of spiral turns preceded by a Dubins airplane path and
the middle altitude trajectory consists of an intermediate arc that allows the aircraft to lose excess altitude.
We try to improve upon prior work by proposing a trajectory generation algorithm that removes the need for ’S’ turns and
arcs and minimizes the number of different maneuvers in the trajectory. We do this by introducing an extended runway in the
final approach before landing that allows the aircraft to lose excess altitude. We also remove the need for a middle altitude
scenario and incorporate it into the high altitude scenario. We evaluate generated trajectories using a set of trajectory safety
metrics that can be used to rank the trajectories depending on how safe they are. We use sensor data from the aircraft to
recompute the baseline glide ratio after discrete intervals of time to regenerate new and more accurate trajectories that take
into account the capabilities of the aircraft at that time.
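Since the model refinement step only needs a baseline glide ratio, the data-driven estimation can be sketched in a few lines. The sketch below is our own illustration rather than the paper's implementation (the function name, the bank-angle correction and the averaging window are assumptions): it estimates the straight-flight, clean-configuration glide ratio g0 from consecutive position/altitude samples by dividing horizontal distance travelled by altitude lost, and divides out the cos(bank angle) penalty of banked segments.

import math

def estimate_baseline_glide_ratio(samples, bank_angles_deg):
    # samples: list of (x_ft, y_ft, altitude_ft) positions from GPS + barometric
    #          altitude, ordered in time
    # bank_angles_deg: recorded bank angle for each sample (same length as samples)
    # Hedged sketch: assumes clean configuration; averages (horizontal distance) /
    # (altitude lost) per segment, corrected by 1/cos(bank) so banked segments do
    # not bias the straight-flight estimate.
    ratios = []
    for (x0, y0, h0), (x1, y1, h1), theta in zip(samples, samples[1:], bank_angles_deg):
        horizontal = math.hypot(x1 - x0, y1 - y0)
        descent = h0 - h1
        if descent > 0:  # ignore level or climbing segments
            ratios.append((horizontal / descent) / math.cos(math.radians(theta)))
    if not ratios:
        raise ValueError("no descending segments in the observation window")
    return sum(ratios) / len(ratios)

The returned value can then be fed back to the damaged aircraft model, which regenerates the glide-ratio and turn-radius tables used by the trajectory planner.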
III. AIRCRAFT MODEL
(a) Position of a particle with respect to a 3D inertial frame [13].
(b) Inertial velocity expressed in polar coordinates.
Fig. 3: Position and inertial velocity of a point mass in a 3D frame.
Stengel [13] defines the position of a point with respect to a 3-dimensional inertial frame as follows:
$$ r = \begin{bmatrix} x \\ y \\ z \end{bmatrix} \qquad (1) $$
Therefore, the velocity (v) and linear momentum (p) of a particle are given by:
$$ v = \frac{dr}{dt} = \dot{r} = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{bmatrix} = \begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix} \qquad (2) $$
$$ p = m v = m \begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix} \qquad (3) $$
where m = mass of the particle. Fig 3b shows the inertial velocity of a point mass in polar coordinates, where γ is the vertical
flight path angle and ξ is the horizontal flight path angle. Considering the motion to be a straight line motion in a particular
direction, we can use vx to denote motion in the xy horizontal plane. 2-dimensional equations for motion of a point mass,
which coincides with center of mass of an aircraft, restricted to the vertical plane are given below:
$$ \begin{bmatrix} \dot{x} \\ \dot{z} \\ \dot{v}_x \\ \dot{v}_z \end{bmatrix} = \begin{bmatrix} v_x \\ v_z \\ f_x/m \\ f_z/m \end{bmatrix} \qquad (4) $$
Transforming velocity to polar coordinates:
$$ \begin{bmatrix} \dot{x} \\ \dot{z} \end{bmatrix} = \begin{bmatrix} v_x \\ v_z \end{bmatrix} = \begin{bmatrix} v\cos\gamma \\ -v\sin\gamma \end{bmatrix} \;\Longrightarrow\; \begin{bmatrix} v \\ \gamma \end{bmatrix} = \begin{bmatrix} \sqrt{v_x^2 + v_z^2} \\ -\sin^{-1}(v_z/v) \end{bmatrix} \qquad (5) $$
Therefore, rates of change of velocity and flight path angle are given by:
$$ \begin{bmatrix} \dot{v} \\ \dot{\gamma} \end{bmatrix} = \begin{bmatrix} \frac{d}{dt}\sqrt{v_x^2 + v_z^2} \\ -\frac{d}{dt}\sin^{-1}(v_z/v) \end{bmatrix} \qquad (6) $$
Longitudinal equations of motion for a point mass are given by:
$$ \dot{x}(t) = v_x = v(t)\cos\gamma(t) \qquad (7) $$
$$ \dot{z}(t) = v_z = -v(t)\sin\gamma(t) \qquad (8) $$
$$ \dot{v}(t) = \frac{(C_T\cos\alpha - C_D)\,\tfrac{1}{2}\rho(z)v^2(t)S - m g_0 \sin\gamma(t)}{m} \qquad (9) $$
$$ \dot{\gamma}(t) = \frac{(C_T\sin\alpha + C_L)\,\tfrac{1}{2}\rho(z)v^2(t)S - m g_0 \cos\gamma(t)}{m\,v(t)} \qquad (10) $$
where x is position projected on the horizontal plane, z is −height (altitude), v is airspeed, γ is flight path angle, $C_T$ is side force coefficient, $C_D$ is drag coefficient, ρ is density of air, α is the angle of attack, and $C_L$ is lift coefficient.
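Equations (7)-(10) can be integrated numerically to sanity-check a glide profile. The sketch below is our own forward-Euler illustration with thrust removed ($C_T$ = 0); the mass, wing area, aerodynamic coefficients and constant air density are placeholder assumptions, not values from the paper.

import math

def simulate_glide(v0, gamma0, h0, m=60000.0, S=122.6, CL=0.60, CD=0.035,
                   rho=1.0, g=9.81, dt=0.1, t_end=120.0):
    # Forward-Euler integration of Eqs. (7)-(10) with C_T = 0 (no thrust).
    # v0: initial airspeed (m/s); gamma0: initial flight path angle (rad),
    # negative while descending; h0: initial altitude (m). Note the paper's z
    # is -altitude, so Eq. (8) gives altitude_dot = v*sin(gamma).
    # All default parameter values are illustrative assumptions.
    x, h, v, gamma = 0.0, h0, v0, gamma0
    profile = [(x, h)]
    for _ in range(int(t_end / dt)):
        q = 0.5 * rho * v * v * S                                  # (1/2) rho v^2 S
        v_dot = (-CD * q - m * g * math.sin(gamma)) / m            # Eq. (9), C_T = 0
        gamma_dot = (CL * q - m * g * math.cos(gamma)) / (m * v)   # Eq. (10), C_T = 0
        x += v * math.cos(gamma) * dt                              # Eq. (7)
        h += v * math.sin(gamma) * dt                              # altitude = -z
        v += v_dot * dt
        gamma += gamma_dot * dt
        profile.append((x, h))
        if h <= 0:
            break
    return profile

In a steady glide this integration settles near the equilibrium of Eqs. (13)-(15), so the ratio of horizontal distance to altitude lost approaches L/D.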
Lift and Drag are given by:
$$ L = C_L \tfrac{1}{2}\rho v_{air}^2 S \qquad (11) $$
$$ D = C_D \tfrac{1}{2}\rho v_{air}^2 S \qquad (12) $$
For a gliding flight, the condition of equilibrium is defined by the following equations:
$$ L = C_L \tfrac{1}{2}\rho v^2 S = w\cos\gamma \qquad (13) $$
$$ D = C_D \tfrac{1}{2}\rho v^2 S = w\sin\gamma \qquad (14) $$
where h is the altitude vector, S is the surface area of the wing and w is the weight of the aircraft.
Therefore, the gliding flight path angle (Fig. 5a) can be found:
$$ \cot\gamma = \frac{L}{D} \qquad (15) $$
From geometry, we have:
$$ \dot{x} = v\cos\gamma \qquad (16) $$
$$ \dot{h} = v\sin\gamma \qquad (17) $$
Therefore,
$$ \cot\gamma = \frac{\dot{x}}{\dot{h}} = \frac{\Delta x}{\Delta h} = g_0 \qquad (18) $$
where $g_0$ is the glide ratio for straight line glide at a constant airspeed.
Hence, from 15 and 18, we can conclude that:
$$ g_0 = \frac{L}{D} \qquad (19) $$
Glide range is maximum when (L/D) is maximum (Fig. 4).
$$ (\Delta x = \Delta h \cot\gamma) \;\rightarrow\; \Delta x_{max} \;\text{ when }\; \frac{L}{D} \rightarrow \left(\frac{L}{D}\right)_{max} \qquad (20) $$
Corresponding airspeed is given by:
$$ v_{glide} = \sqrt{\frac{2w}{\rho S \sqrt{C_D^2 + C_L^2}}} \qquad (21) $$
Fig. 4: Glide ratio for a glider in straight line flight.
(a) Forces on a gliding flight.
(b) Weight vs lift in banked turns.
Fig. 5: Forces on a glider in straight line motion and banked turns.
For banked turns, if the bank angle is θ, the vertical component of lift is L′ = L cos θ (Fig. 5b). Hence the glide ratio $g_\theta$ is given by Eq. 22, which forms the basis of our geometrical model of a gliding flight.
$$ g_\theta = \frac{L'}{D} = \frac{L}{D}\cos\theta = g_0\cos\theta \qquad (22) $$
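Eq. 22, together with the standard coordinated-turn radius relation R = v²/(g tan θ) (which the paper does not state explicitly but which reproduces the radii tabulated in Tables I and II), is enough to regenerate the purely geometric quantities the trajectory planner consumes. The short sketch below is our own illustration; only the baseline glide ratio and best-glide airspeed are taken from Table I, everything else is an assumption of the sketch.

import math

KNOTS_TO_FPS = 1.68781   # knots -> feet per second
G_FPS2 = 32.174          # gravitational acceleration, ft/s^2

def glide_ratio(g0, bank_deg):
    # Eq. 22: the glide ratio shrinks by cos(theta) in a banked turn.
    return g0 * math.cos(math.radians(bank_deg))

def turn_radius_ft(airspeed_kts, bank_deg):
    # Coordinated-turn radius R = v^2 / (g * tan(theta)); an assumption of this
    # sketch, since the paper only tabulates the resulting radii.
    v = airspeed_kts * KNOTS_TO_FPS
    return v * v / (G_FPS2 * math.tan(math.radians(bank_deg)))

if __name__ == "__main__":
    g0, best_glide_kts = 17.25, 225          # Airbus A320 values from Table I
    for bank in (10, 20, 30, 45, 60):
        print(bank, round(glide_ratio(g0, bank), 2), round(turn_radius_ft(best_glide_kts, bank)))
    # Output is close to Table I, e.g. 30 deg gives roughly 14.94:1 and 7764 ft.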
IV. TRAJECTORY PLANNING ALGORITHM
LOT emergencies can occur at any altitude. Depending on the altitude of the emergency, the type of possible trajectories
to reachable runways varies. Owen et al [12] describe three types of scenarios depending on the altitude of the aircraft with
respect to the runway, namely, low altitude, middle altitude and high altitude trajectories. We introduce the concept of an
extended runway when the altitude of the aircraft is too high for a simple Dubins airplane path to bring the aircraft to the
runway and at the same time, the altitude of the aircraft is too low for spiralling down to the runway. Therefore, our trajectory
planning algorithm (Fig. 6, Pseudocode 1) reduces the problem of finding trajectories to two scenarios: low altitude scenario
and high altitude scenario.
Trajectory_Generator(starting_configuration, runway_configuration, runway_altitude, bank_angle)
    dubins = Generate_Dubins(starting_configuration, runway_configuration, bank_angle)
    dubins.end_altitude = altitude of last point of dubins
    If dubins.end_altitude < runway_altitude then
        PATH = FAILURE
    Else If dubins.end_altitude == runway_altitude then
        PATH = dubins
    Else If dubins.end_altitude > runway_altitude then
        spiral = Generate_Spiral(dubins, end_altitude, runway_altitude)
        spiral.end_altitude = altitude of last point of spiral
        If spiral.end_altitude > runway_altitude then
            end_altitude = spiral.end_altitude
            distance = search_distance_parameter
            While end_altitude > runway_altitude do
                new_point = Find_Point(distance)
                loss = loss of altitude in gliding from new_point to runway
                extended_runway = new_point to runway
                dubins_new = Generate_Dubins(starting_configuration, new_point, bank_angle)
                spiral_new = Generate_Spiral(dubins_new, end_altitude, runway_altitude)
                spiral_new.end_altitude = altitude of last point of spiral_new
                end_altitude = spiral_new.end_altitude
                If (end_altitude - loss) == runway_altitude then
                    PATH = dubins_new + spiral_new + extended_runway
                    break
                Else
                    distance += search_distance_parameter
                End If
            End While
        Else
            PATH = dubins + spiral
        End If
    End If
    Return PATH
End Trajectory_Generator
Pseudocode 1: Trajectory Planning Algorithm
A. Low Altitude Scenario
We define a low altitude scenario as a case in which a simple 3D Dubins airplane path can bring the aircraft to the runway
at the correct altitude. In this type of scenario, the generated trajectory consists of a Dubins airplane path from the starting
point to the runway (Fig. 7).
B. High Altitude Scenario
Our model defines a high altitude scenario as any case in which a simple Dubins airplane path cannot bring the aircraft to
the runway at the correct altitude, but brings it over the runway with an excess altitude. In such a scenario, the excess altitude
needs to be lost in the trajectory so that the pilot can safely land the aircraft. The value of the excess altitude has an impact on
the type of trajectories that can be generated. There are two distinct types of cases that fall under the high altitude scenario:
• The excess altitude is enough for generating an integral number of spiral turns to bring the aircraft to the runway.
• The excess altitude is not enough for generating an integral number of spiral turns to bring the aircraft to the runway.
In the first case, when the excess altitude allows for generating an integral number of spiral turns, the trajectory consists of
a Dubins airplane path, followed by an integral number of spiral turns (Fig. 8). In the second case, when the excess altitude
does not allow for generating an integral number of spirals, our algorithm extends the final approach along the runway heading
by adding an extended runway of length e, where e ∈ [0, 2πR·g_{d,0}/g_{c,θ}), where R is the radius of turn for the bank angle θ being used, g_{d,0} is the dirty configuration glide ratio for straight line gliding and g_{c,θ} is the clean configuration glide ratio for banked turns. The addition of this extended straight line approach in the final path allows the aircraft to lose the excess altitude before reaching the runway (Fig. 9). In our experiments, we used a search_distance_parameter of 50 feet (Pseudocode 1) while calculating the extended final segment.
Fig. 6: Flowchart of trajectory planning algorithm.
(a) 2D view. (b) 3D view
Fig. 7: A low altitude trajectory.
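The upper limit on e is simply one full spiral turn's altitude loss (circumference 2πR divided by the clean banked glide ratio) translated into dirty-configuration straight-glide ground distance. A small helper capturing that bound (our own sketch; the function and parameter names are not from the paper):

import math

def max_extended_runway_ft(turn_radius_ft, glide_dirty_straight, glide_clean_banked):
    # Upper bound of the interval [0, 2*pi*R*g_d0/g_c_theta) for the extended
    # runway length e.
    # turn_radius_ft       -- R, radius of turn for the bank angle in use
    # glide_dirty_straight -- g_{d,0}, dirty-configuration straight-glide ratio
    # glide_clean_banked   -- g_{c,theta}, clean-configuration glide ratio in the banked turn
    return 2.0 * math.pi * turn_radius_ft * glide_dirty_straight / glide_clean_banked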
In certain cases, the value of the excess altitude might allow generation of integral number of spirals but not enough spirals
to lose the excess altitude in entirety. Such special cases call for the need of using a combination of both spiral turns and
an extended final straight line approach. Thus, in such scenarios, our trajectory planning algorithm generates trajectories that
consist of a Dubins airplane path, followed by an integral number of spiral turns, followed by an extended final approach
(Fig. 10).
(a) 2D view. (b) 3D view
Fig. 8: A high altitude trajectory.
(a) 2D view. (b) 3D view
Fig. 9: A middle altitude trajectory.
(a) 2D view. (b) 3D view
Fig. 10: A high altitude trajectory with spiral segments and an extended final approach.
We use the dirty configuration glide ratio in the extended final segments in order to ensure that the starting point of the extended final segment is at an altitude h_d from which the flight can safely make it to the runway. The aircraft can reach the start of the final segment at an altitude h_a ≥ h_d and in case the aircraft reaches the starting point of the final segment at an altitude h_a > h_d, then the pilot will have the flexibility to lose the excess altitude h_a − h_d by increasing drag by performing
’S’ turns, or by increasing sideslip and forward slip, and still making it to the runway (Fig. 11a). On the other hand, if the
aircraft reaches the starting point of the final segment at an altitude ha0 < hd , then the pilot can keep using clean configuration
until it is safe to increase drag (Fig. 11a) and make a successful landing. However, if we generate trajectories by using the
clean configuration glide ratio in the final segment, then the final segment will start at an altitude hc that is too optimistic and
in case the aircraft reaches the start of the final segment at an altitude ha < hc , then the aircraft will crash before reaching
the runway (Fig. 11b).
9
However, it should be noted that our current algorithm fails to return trajectories in certain cases, for example, when the
aircraft is directly above the runway. It can generate trajectories only for those cases where the 2D Dubins path from initial to
final configurations consists of a curve, followed by a straight line segment, followed by a curve.
Fig. 11: Significance of using the dirty configuration glide ratio in the final segment. (a) Using a dirty configuration glide ratio in the final segment; (b) using a clean configuration glide ratio in the final segment.
Our trajectory generation software generates space-discretized trajectories of the form T : [r0, r1, r2, r3, ..., rN], where ri = (xi, yi, zi)ᵀ and xi, yi and zi define the position of a point i with respect to a 3-dimensional inertial frame.
V. TRAJECTORY SAFETY METRICS
We introduce a set of trajectory safety metrics to evaluate our trajectories. Each trajectory T generates a value for each metric, and each of these values is then normalized relative to the minimum (Eq. 23) or maximum (Eq. 24) value (whichever is desired).
‖x‖ = (x − x_max)/(x_min − x_max)   (23)
‖x‖ = (x − x_min)/(x_max − x_min)   (24)
The safety metrics that have been considered for this paper are:
• Average Altitude (z̄) - This metric computes the average of the altitude (z) for all N points in T. This metric is normalized against the maximum value because, for a given trajectory, a higher average altitude from the ground is desirable.
z̄ = (Σ_{i=0}^{N} z_i)/N   (25)
• Average Distance From Runway (d̄) - This metric computes the average of the distance (d) from the runway for all N points in T. This metric is normalized against the minimum value because, for a given trajectory, minimum distance from the runway at all times is desirable.
d̄ = (Σ_{i=0}^{N} d(r_i, r_R))/N   (26)
where r_R is the position vector of the runway and d(r_1, r_2) is given by Eq. 27.
d(r_1, r_2) = √((x_1 − x_2)² + (y_1 − y_2)² + (z_1 − z_2)²)   (27)
• Average Bank Angle Over Height (θ̄/h) - This metric measures the occurrence of steep turns near the ground. It is computed by taking an average of the ratio between bank angle (θ) and altitude (z) for all N points in T. Since it is not desirable to have steep turns very close to the ground, this metric is normalized against the minimum value.
θ̄/h = (Σ_{i=0}^{N} θ_i/z_i)/N   (28)
• Number of Turns (n) - This metric counts the number of turns (n) in T. It is desirable to have as few turns as possible, hence it is normalized against the minimum value.
n = number of turns in Dubins airplane path + number of 360° turns in spiral segment   (29)
• Total Length (l) - This metric measures the total length (l) of the trajectory T. It is normalized against the minimum as shorter trajectories are more desirable than longer ones.
l = Σ_{i=1}^{N} d(r_i, r_{i−1})   (30)
• Extended Final Runway Distance (e) - This metric measures the length (e) of the extended final straight line approach. Trajectories with longer final straight line approaches are desirable as a longer final approach allows the pilot to make adjustments to drag and speed easily just before landing. Thus, this metric is normalized against the maximum value.
e = d(r_e, r_R)   (31)
where r_e is the position vector of the starting point of the extended final segment and r_R is the position vector of the runway.
We introduce a utility function (u), computed by taking the average of the normalized values of the above metrics, which is used to rank the trajectories. The higher the value of the utility function, the better the rank of the trajectory.
u = (‖z̄‖ + ‖d̄‖ + ‖θ̄/h‖ + ‖n‖ + ‖l‖ + ‖e‖)/6   (32)
This utility function can be easily modified in future work to account for other safety factors such as wind and terrain.
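A compact sketch of the normalization and ranking step (Eqs. 23-32) is given below. The dictionary keys and function names are illustrative assumptions, and degenerate cases (all candidate trajectories sharing the same raw value) are ignored for brevity.

import numpy as np

def normalize_min(x):
    """Eq. 23: best (value 1) when x equals the minimum across candidate trajectories."""
    return (x - x.max()) / (x.min() - x.max())

def normalize_max(x):
    """Eq. 24: best (value 1) when x equals the maximum across candidate trajectories."""
    return (x - x.min()) / (x.max() - x.min())

def rank_trajectories(raw):
    """raw: dict of metric name -> array of per-trajectory raw values (Eqs. 25-31)."""
    norm = {
        "z_bar": normalize_max(raw["z_bar"]),              # higher average altitude is better
        "d_bar": normalize_min(raw["d_bar"]),               # smaller distance from runway is better
        "theta_over_h": normalize_min(raw["theta_over_h"]), # fewer steep turns near the ground
        "n_turns": normalize_min(raw["n_turns"]),            # fewer turns is better
        "length": normalize_min(raw["length"]),              # shorter trajectory is better
        "e_final": normalize_max(raw["e_final"]),            # longer extended final approach is better
    }
    u = sum(norm.values()) / 6.0        # Eq. 32: utility = mean of the normalized metrics
    order = np.argsort(-u)              # rank 1 = highest utility
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(u) + 1)
    return u, ranks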
VI. DYNAMIC DATA DRIVEN FLIGHT TRAJECTORY GENERATION
In order to generate trajectories that are as faithful to the current performance of the aircraft as possible, after discrete
time intervals, we take data from the aircraft sensors and estimate the correct baseline glide ratio (the glide ratio for a clean
configuration with best gliding airspeed in a straight flight), g0 . A flowchart of this approach is given in Fig. 12. The DDDAS
approach has four components: the damaged aircraft model, the flight trajectory generator, aircraft/sensors and the model
refinement component.
Fig. 12: Dynamic data driven flight trajectory generation.
The damaged aircraft model (Fig. 13) takes as inputs the new baseline glide ratio (g0 ) and a drag multiplier table to compute
the glide ratio (g(φ)) for every bank angle (φ) and the corresponding radius of turn (r(φ, v)) for a given bank angle (φ) and
airspeed (v). The drag multiplier table contains the ratios (δ) that can be used to obtain the baseline glide ratio (g0 (c)) for a
drag configuration c from the baseline glide ratio (g0 ) of a clean configuration.
g0(c) = δ × g0   (33)
Given the baseline glide ratio (g0) and drag configuration ratio (δ), the glide ratio for a bank angle (φ) can be obtained from Eq. 34.
g(φ) = g0 × δ × cos φ   (34)
Fig. 13: Damaged aircraft model.
Fig. 14: Estimation of glide ratio from sensors (FDR data of US Airways 1549).
Given the airspeed (v) and gravitational constant (G = 11.29 kn² ft⁻¹), the radius of turn (r(φ, v)) for a bank angle (φ) and airspeed (v) can be obtained from Eq. 35.
r(φ, v) = v²/(G × tan φ)   (35)
The functions for calculating the glide ratio (g(φ)) for every bank angle (φ) and the corresponding radius of turn (r(φ, v))
are sent from the damaged aircraft model to the flight trajectory generator to be used in the trajectory planning algorithm for
computing trajectories.
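For illustration, the damaged aircraft model of Eqs. 34-35 reduces to two small functions; the example call at the end is hypothetical and only shows the intended usage.

import math

G = 11.29  # gravitational constant in kn^2/ft, as used in Eq. 35

def glide_ratio(g0, delta, phi_deg):
    """Eq. 34: glide ratio for bank angle phi, baseline glide ratio g0 and drag ratio delta."""
    return g0 * delta * math.cos(math.radians(phi_deg))

def radius_of_turn(v_kts, phi_deg):
    """Eq. 35: radius of turn (feet) for airspeed v (knots) and bank angle phi (degrees)."""
    return v_kts ** 2 / (G * math.tan(math.radians(phi_deg)))

# Example: a clean-configuration baseline of 17.25:1 with a 45 degree bank at 185 kts.
print(glide_ratio(17.25, 1.0, 45.0), radius_of_turn(185.0, 45.0))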
We constantly read data from the aircraft sensors to estimate the observed glide ratio gˆθ . This is done every second by using
the pressure altitude (Ap ) and the airspeed (v) that are available from the sensor data. For a given instant ti , we can compute
the observed glide ratio gˆθ ti by taking a ratio of the horizontal distance travelled to the altitude lost in the preceding η seconds
(Eq. 36). This is followed by two steps: first, it is checked if this instant ti was preceded by a window ω of steady descent
with no rise in altitude; then it is checked if the distribution of gˆθ ti in the period ω had a standard deviation within a threshold
στ . This is done to detect stable windows of flight in order to make sure that aberrations and large deviations in the glide ratio
are not taken into account while calculating new trajectories.
ĝ_θ,ti = (Horizontal distance covered in preceding η seconds)/(Loss in altitude in preceding η seconds) = (Σ_{j=i−3}^{i} v_{tj})/(A_{p,ti−3} − A_{p,ti})   (36)
If the window ω preceding an instant ti is a stable window, we compute the observed glide ratio ĝθ for the observed bank angle θ by taking a mean of the glide ratios in ω.
ĝθ = (Σ_{i=1}^{n} ĝ_θ,ti)/n   for all ti ∈ ω   (37)
Fig. 15: Effect of too short size of ω.
Fig. 16: Effect of too large size of ω.
In our experiments, we take η = 4, ω of duration 10 seconds and στ = 5 (Fig. 14). The choice of these values has a major impact on the estimation of ĝθ: too short an ω does not have enough data to give a proper estimate of ĝθ, while too long an ω makes it difficult to find a proper estimate of ĝθ (Fig. 15, Fig. 16); similarly, too small a value of στ makes the conditions too strict to find suitable values of ĝθ, while too large a value of στ makes it difficult to filter out noisy values of ĝθ (Fig. 17, Fig. 18).
The observed glide ratio for airspeed, bank angle and drag configuration is then sent to the model refinement component along
with the bank angle and drag configuration.
When it receives the data from the sensors, the model refinement component calculates the new baseline glide ratio g0′ using the observed bank angle θ, drag configuration δ from the sensor data and the observed glide ratio ĝθ for that bank angle.
ĝθ = g0′ δ cos θ  =⇒  g0′ = ĝθ/(δ cos θ)   (38)
This is done assuming that the aircraft is maintaining the best gliding airspeed for clean configuration.
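A minimal sketch of this estimation and refinement loop (Eqs. 36-38) is shown below, assuming 1 Hz sensor samples of airspeed and pressure altitude; the array names, the stable-window test, and the omission of unit conversions are simplifications rather than the actual implementation.

import numpy as np

ETA = 4          # seconds of look-back for Eq. 36
OMEGA = 10       # stable-window duration in seconds
SIGMA_TAU = 5.0  # maximum allowed standard deviation within the window

def observed_glide_ratio(airspeed, pressure_alt, i):
    """Eq. 36: horizontal distance over altitude lost in the preceding ETA seconds (unit conversion omitted)."""
    horizontal = np.sum(airspeed[i - ETA + 1 : i + 1])
    altitude_lost = pressure_alt[i - (ETA - 1)] - pressure_alt[i]
    return horizontal / altitude_lost

def refine_baseline(airspeed, pressure_alt, i, bank_angle, delta):
    """Eqs. 37-38: mean glide ratio over a stable window, mapped back to a clean baseline g0'."""
    # Require a steady descent (no rise in altitude) over the window.
    if np.any(np.diff(pressure_alt[i - OMEGA : i + 1]) > 0):
        return None
    window = range(i - OMEGA + 1, i + 1)
    g_samples = np.array([observed_glide_ratio(airspeed, pressure_alt, j) for j in window])
    if np.std(g_samples) > SIGMA_TAU:
        return None                                  # window not stable enough
    g_hat = np.mean(g_samples)                       # Eq. 37
    return g_hat / (delta * np.cos(np.radians(bank_angle)))   # Eq. 38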
Fig. 19 shows the dynamic data driven approach being used to generate trajectories to LGA4 from an altitude of 4451 feet. The
image clearly shows that correcting the baseline glide ratio from flight data can have major impact on new trajectories that are
generated. It allows us to account for dynamic factors such as partial power, wind and effects of surface damage and compute
a trajectory corresponding to the current performance capabilities.
Fig. 17: Effect of too small value of στ.
Fig. 18: Effect of too large value of στ.
Fig. 19: Trajectories to LGA4 from an altitude of 4551 feet. New trajectories computed using data from flight simulator. (a) 2D view; (b) 3D view.
VII. EXPERIMENTATION AND RESULTS
A. Simulations With La Guardia Airport
We ran simulations with several hypothetical scenarios from different altitudes. Figures 20 to 31 show the 2D and 3D views
of the trajectories generated for LGA4, LGA13 and LGA31 from altitudes of 4000, 6000, 8000 and 10000 feet. The starting
point had the configuration: {longitude:-73.88000°, latitude:40.86500°, heading: 12.9°}. In all cases, a clean configuration
glide ratio of 17.25:1 was used for generating the trajectories with a dirty configuration glide ratio of 9:1 in the extended final
segments.
Fig. 20: Trajectory to LGA4 from 4000 feet. (a) 2D view; (b) 3D view.
Fig. 21: Trajectory to LGA13 from 4000 feet. (a) 2D view; (b) 3D view.
Fig. 22: Trajectory to LGA31 from 4000 feet. (a) 2D view; (b) 3D view.
Fig. 23: Trajectory to LGA4 from 6000 feet. (a) 2D view; (b) 3D view.
Fig. 24: Trajectory to LGA13 from 6000 feet. (a) 2D view; (b) 3D view.
Fig. 25: Trajectory to LGA31 from 6000 feet. (a) 2D view; (b) 3D view.
Fig. 26: Trajectory to LGA4 from 8000 feet. (a) 2D view; (b) 3D view.
Fig. 27: Trajectory to LGA13 from 8000 feet. (a) 2D view; (b) 3D view.
Fig. 28: Trajectory to LGA31 from 8000 feet. (a) 2D view; (b) 3D view.
Fig. 29: Trajectory to LGA4 from 10000 feet. (a) 2D view; (b) 3D view.
Fig. 30: Trajectory to LGA13 from 10000 feet. (a) 2D view; (b) 3D view.
Fig. 31: Trajectory to LGA31 from 10000 feet. (a) 2D view; (b) 3D view.
B. US Airways Flight 1549
US Airways Flight 1549 took off from New York City's La Guardia Airport on January 15, 2009 and lost
power in both engines when it struck a flock of Canada geese a couple of minutes after takeoff. The pilots managed to land
the Airbus A320-214 successfully in the Hudson river and save the lives of everyone on board the flight. In order to analyze
the other possible options that may have been available to the pilot instead of landing the aircraft in the Hudson, we used our
trajectory planning algorithm to recreate the conditions of the particular incident of flight 1549.
Fig. 32: Flight 1549 Time vs Altitude graph.
We collected data from Flight Data Recorder data as published in the National Transportation Safety Board report (Table III)
and simulated scenarios for t+4 through t+40 seconds, t being the time when the pilot said "birds" (15:27:10 UTC) as determined
from the sound recording in the Flight Data Recorder. From the data (Fig. 32), it is clearly visible that the flight kept gaining
altitude until t+16 seconds and attained a true altitude of 3352 feet before beginning to descend. We simulated two cases. For
the first case, we used a glide ratio of 17.25:1 as predicted by [14] for an Airbus A320 in no thrust conditions. For the second
TABLE III: US Airways 1549 Flight Data Recorder data.
Time Delay | Latitude (decimal) | Longitude (decimal) | Pressure Altitude (feet) | True altitude (feet) | Magnetic heading (degrees) | Airspeed (kts)
t    | 40.8477 | -73.8758 | 2792 | 3056 | 0     | 218
t+4  | 40.8513 | -73.8767 | 2888 | 3152 | 0.7   | 207.125
t+8  | 40.8547 | -73.8781 | 2968 | 3232 | 0     | 200.25
t+12 | 40.8581 | -73.8786 | 3048 | 3312 | 0.4   | 193
t+16 | 40.8617 | -73.8794 | 3088 | 3352 | 358.9 | 185.25
t+20 | 40.865  | -73.88   | 3040 | 3304 | 357.2 | 185.25
t+24 | 40.8678 | -73.8806 | 2916 | 3180 | 352.6 | 185.375
t+28 | 40.8711 | -73.8819 | 2760 | 3024 | 344.5 | 187
t+32 | 40.8739 | -73.8842 | 2580 | 2844 | 333.3 | 190.625
t+36 | 40.761  | -73.8861 | 2368 | 2632 | 320.6 | 198.75
t+40 | 40.8789 | -73.8897 | 2156 | 2420 | 305.5 | 202.875
TABLE IV: Rank of trajectories generated for US Airways 1549 for t+4 seconds with glide ratio 17.25:1.
Runway | Bank angle | ‖d̄‖ | ‖z̄‖ | ‖l‖ | ‖n‖ | ‖θ̄/h‖ | ‖e‖ | u | Rank
LGA22 | 30 | 0.55 | 0.00 | 0.00 | 1.00 | 1.00 | 0.00 | 0.43 | 3
LGA22 | 45 | 1.00 | 0.78 | 1.00 | 0.00 | 0.00 | 1.00 | 0.63 | 1
LGA13 | 45 | 0.00 | 1.00 | 0.66 | 0.00 | 0.77 | 0.67 | 0.52 | 2
TABLE V: Rank of trajectories generated for US Airways 1549 for t+8 seconds with glide ratio 17.25:1.
Runway | Bank angle | ‖d̄‖ | ‖z̄‖ | ‖l‖ | ‖n‖ | ‖θ̄/h‖ | ‖e‖ | u | Rank
LGA22 | 30 | 0.61 | 0.00 | 0.00 | 1.00 | 1.00 | 0.00 | 0.43 | 3
LGA22 | 45 | 1.00 | 0.76 | 1.00 | 0.00 | 0.00 | 1.00 | 0.63 | 1
LGA13 | 45 | 0.00 | 1.00 | 0.66 | 0.00 | 0.50 | 0.68 | 0.47 | 2
TABLE VI: Rank of trajectories generated for US Airways 1549 for t+12 seconds with glide ratio 17.25:1.
Runway | Bank angle | ‖d̄‖ | ‖z̄‖ | ‖l‖ | ‖n‖ | ‖θ̄/h‖ | ‖e‖ | u | Rank
LGA22 | 30 | 0.66 | 0.00 | 0.00 | 1.00 | 1.00 | 0.00 | 0.44 | 3
LGA22 | 45 | 1.00 | 0.77 | 1.00 | 0.00 | 0.00 | 1.00 | 0.63 | 1
LGA13 | 45 | 0.00 | 1.00 | 0.63 | 0.00 | 0.75 | 0.69 | 0.52 | 2
TABLE VII: Rank of trajectories generated for US Airways 1549 for t+16 seconds with glide ratio 17.25:1.
Runway | Bank angle | ‖d̄‖ | ‖z̄‖ | ‖l‖ | ‖n‖ | ‖θ̄/h‖ | ‖e‖ | u | Rank
LGA22 | 30 | 0.55 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.26 | 3
LGA22 | 45 | 1.00 | 0.70 | 1.00 | 0.00 | 0.56 | 1.00 | 0.71 | 1
LGA13 | 45 | 0.00 | 1.00 | 0.69 | 0.00 | 1.00 | 0.72 | 0.56 | 2
TABLE VIII: Rank of trajectories generated for US Airways 1549 for t+20 seconds with glide ratio 17.25:1.
Runway | Bank angle | ‖d̄‖ | ‖z̄‖ | ‖l‖ | ‖n‖ | ‖θ̄/h‖ | ‖e‖ | u | Rank
LGA22 | 45 | 0.00 | 1.00 | 1.00 | 1.00 | 0.00 | 1.00 | 0.67 | 1
LGA13 | 45 | 1.00 | 0.00 | 0.00 | 1.00 | 1.00 | 0.00 | 0.50 | 2
TABLE IX: Rank of trajectories generated for US Airways 1549 for t+24 seconds with glide ratio 17.25:1.
Runway | Bank angle | ‖d̄‖ | ‖z̄‖ | ‖l‖ | ‖n‖ | ‖θ̄/h‖ | ‖e‖ | u | Rank
LGA22 | 45 | 1.00 | 0.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.50 | 2
LGA13 | 45 | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.83 | 1
TABLE X: Rank of trajectories generated for US Airways 1549 for t+4 seconds with glide ratio 19:1.
Runway | Bank angle | ‖d̄‖ | ‖z̄‖ | ‖l‖ | ‖n‖ | ‖θ̄/h‖ | ‖e‖ | u | Rank
LGA22 | 30 | 0.20 | 0.30 | 0.16 | 1.00 | 0.88 | 0.33 | 0.47 | 3
LGA22 | 45 | 0.38 | 1.00 | 1.00 | 1.00 | 0.71 | 1.00 | 0.85 | 1
LGA13 | 30 | 0.00 | 0.12 | 0.00 | 1.00 | 1.00 | 0.16 | 0.38 | 4
LGA13 | 45 | 0.35 | 0.81 | 0.91 | 1.00 | 0.70 | 0.86 | 0.77 | 2
LGA31 | 45 | 1.00 | 0.00 | 0.16 | 1.00 | 0.00 | 0.00 | 0.36 | 5
TABLE XI: Rank of trajectories generated for US Airways 1549 for t+8 seconds with glide ratio 19:1.
Runway | Bank angle | ‖d̄‖ | ‖z̄‖ | ‖l‖ | ‖n‖ | ‖θ̄/h‖ | ‖e‖ | u | Rank
LGA22 | 30 | 0.12 | 0.34 | 0.16 | 1.00 | 0.87 | 0.41 | 0.48 | 3
LGA22 | 45 | 0.35 | 1.00 | 1.00 | 1.00 | 0.68 | 1.00 | 0.81 | 1
LGA13 | 30 | 0.00 | 0.14 | 0.00 | 1.00 | 1.00 | 0.17 | 0.38 | 4
LGA13 | 45 | 0.33 | 0.82 | 0.93 | 1.00 | 0.80 | 0.87 | 0.74 | 2
LGA31 | 45 | 1.00 | 0.00 | 0.13 | 1.00 | 0.00 | 0.00 | 0.36 | 5
TABLE XII: Rank of trajectories generated for US Airways 1549 for t+12 seconds with glide ratio 19:1.
Runway | Bank angle | ‖d̄‖ | ‖z̄‖ | ‖l‖ | ‖n‖ | ‖θ̄/h‖ | ‖e‖ | u | Rank
LGA22 | 30 | 0.09 | 0.36 | 0.20 | 1.00 | 0.81 | 0.41 | 0.47 | 3
LGA22 | 45 | 0.33 | 1.00 | 1.00 | 1.00 | 0.54 | 1.00 | 0.81 | 1
LGA13 | 30 | 0.00 | 0.11 | 0.00 | 1.00 | 1.00 | 0.17 | 0.38 | 4
LGA13 | 45 | 0.28 | 0.81 | 0.92 | 1.00 | 0.56 | 0.87 | 0.74 | 2
LGA31 | 45 | 1.00 | 0.00 | 0.17 | 1.00 | 0.00 | 0.00 | 0.36 | 5
TABLE XIII: Rank of trajectories generated for US Airways 1549 for t+16 seconds with glide ratio 19:1.
Runway | Bank angle | ‖d̄‖ | ‖z̄‖ | ‖l‖ | ‖n‖ | ‖θ̄/h‖ | ‖e‖ | u | Rank
LGA22 | 30 | 0.43 | 0.20 | 0.14 | 1.00 | 0.62 | 0.22 | 0.44 | 3
LGA22 | 45 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 1.00 | 0.83 | 2
LGA13 | 30 | 0.00 | 0.00 | 0.00 | 1.00 | 1.00 | 0.00 | 0.33 | 4
LGA13 | 45 | 0.96 | 0.80 | 0.95 | 1.00 | 0.51 | 0.83 | 0.84 | 1
Fig. 33: Trajectory to LGA22 with a glide ratio of 17.25:1 at time t+4. (a) 2D view; (b) 3D view.
Fig. 34: Trajectory to LGA13 with a glide ratio of 17.25:1 at time t+4. (a) 2D view; (b) 3D view.
Fig. 35: Trajectory to LGA22 with a glide ratio of 17.25:1 at time t+8. (a) 2D view; (b) 3D view.
case, we used a glide ratio of 19:1 as calculated from the simulator data. We used a dirty configuration glide ratio of 9:1 for
the extended runway segments.
1) Using a glide ratio of 17.25:1:
At t+4 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.8513, -73.8767,
3152, 0.7} and our algorithm was able to generate trajectories to LGA22 using 30° and 45° bank angles (Fig. 33) and LGA13
using 45° bank angle (Fig. 34). LGA4 and LGA31 were unreachable according to our results. The trajectories were ranked according to our metrics (Table IV).
Fig. 36: Trajectory to LGA13 with a glide ratio of 17.25:1 at time t+8. (a) 2D view; (b) 3D view.
Fig. 37: Trajectory to LGA22 with a glide ratio of 17.25:1 at time t+12. (a) 2D view; (b) 3D view.
Fig. 38: Trajectory to LGA13 with a glide ratio of 17.25:1 at time t+12. (a) 2D view; (b) 3D view.
At t+8 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.8547, -73.8781,
3232, 0} and LGA22 was reachable using 30° and 45° bank angles (Fig. 35) and LGA13 was reachable using 45° bank angle
(Fig. 36). The trajectories were ranked according to our metrics (Table V).
At t+12 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.8581, -73.8786, 3312, 0.4} and LGA22 was reachable using 30° and 45° bank angles (Fig. 37) and LGA13 was reachable using 45° bank angle (Fig. 38). The trajectories were ranked according to our metrics (Table VI).
Fig. 39: Trajectory to LGA22 with a glide ratio of 17.25:1 at time t+16. (a) 2D view; (b) 3D view.
Fig. 40: Trajectory to LGA13 with a glide ratio of 17.25:1 at time t+16. (a) 2D view; (b) 3D view.
Fig. 41: Trajectory to LGA22 with a glide ratio of 17.25:1 at time t+20. (a) 2D view; (b) 3D view.
At t+16 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was { 40.8617, -73.8794,
3352, 358.9} and LGA22 was reachable using 30° and 45° bank angles (Fig. 39) and LGA13 was reachable using 45° bank
angle (Fig. 40). The trajectories were ranked according to our metrics (Table VII).
At t+20 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.865, -73.88, 3304, 357.2} and LGA22 was reachable using 45° bank angle (Fig. 41) and LGA13 was reachable using 45° bank angle (Fig. 42). The trajectories were ranked according to our metrics (Table VIII).
Fig. 42: Trajectory to LGA13 with a glide ratio of 17.25:1 at time t+20. (a) 2D view; (b) 3D view.
Fig. 43: Trajectory to LGA22 with a glide ratio of 17.25:1 at time t+24. (a) 2D view; (b) 3D view.
Fig. 44: Trajectory to LGA13 with a glide ratio of 17.25:1 at time t+24. (a) 2D view; (b) 3D view.
At t+24 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.8678, -73.8806,
3180, 352.6} and LGA22 was reachable using 45° bank angle (Fig. 43) and LGA13 was reachable using 45° bank angle (Fig.
44). The trajectories were ranked according to our metrics (Table IX).
Fig. 45: Trajectory to LGA13 with a glide ratio of 17.25:1 at time t+28. (a) 2D view; (b) 3D view.
TABLE XIV: Rank of trajectories generated for US Airways 1549 for t+20 seconds with glide ratio 19:1.
Runway | Bank angle | ‖d̄‖ | ‖z̄‖ | ‖l‖ | ‖n‖ | ‖θ̄/h‖ | ‖e‖ | u | Rank
LGA22 | 30 | 0.65 | 0.03 | 0.09 | 1.00 | 0.26 | 0.06 | 0.35 | 3
LGA22 | 45 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 1.00 | 0.83 | 1
LGA13 | 30 | 0.00 | 0.00 | 0.00 | 1.00 | 1.00 | 0.00 | 0.33 | 4
LGA13 | 45 | 0.94 | 0.84 | 0.97 | 1.00 | 0.12 | 0.86 | 0.79 | 2
Fig. 46: Trajectory to LGA22 with a glide ratio of 19:1 at time t+4. (a) 2D view; (b) 3D view.
At t+28 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.8711, -73.8819, 3024, 344.5} and LGA22 was no longer reachable while LGA13 was reachable using 45° bank angle (Fig. 45). The trajectories were not ranked since there was only one trajectory.
At t+32 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.8789, -73.8897,
2420, 305.5} and none of the runways at La Guardia Airport were reachable using any bank angle for turns.
TABLE XV: Rank of trajectories generated for US Airways 1549 for t+24 seconds with glide ratio 19:1.
Runway | Bank angle | ‖d̄‖ | ‖z̄‖ | ‖l‖ | ‖n‖ | ‖θ̄/h‖ | ‖e‖ | u | Rank
LGA22 | 45 | 0.99 | 0.97 | 0.94 | 0.00 | 0.44 | 0.98 | 0.72 | 2
LGA13 | 30 | 0.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.17 | 3
LGA13 | 45 | 1.00 | 1.00 | 1.00 | 0.00 | 1.00 | 1.00 | 0.83 | 1
TABLE XVI: Rank of trajectories generated for US Airways 1549 for t+28 seconds with glide ratio 19:1.
Runway | Bank angle | ‖d̄‖ | ‖z̄‖ | ‖l‖ | ‖n‖ | ‖θ̄/h‖ | ‖e‖ | u | Rank
LGA22 | 45 | 1.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.33 | 2
LGA13 | 45 | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.83 | 1
TABLE XVII: Rank of trajectories generated for US Airways 1549 for t+32 seconds with glide ratio 19:1.
Runway | Bank angle | ‖d̄‖ | ‖z̄‖ | ‖l‖ | ‖n‖ | ‖θ̄/h‖ | ‖e‖ | u | Rank
LGA22 | 45 | 1.00 | 0.00 | 0.00 | 1.00 | 0.00 | 1.00 | 0.50 | 2
LGA13 | 45 | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.83 | 1
Fig. 47: Trajectory to LGA13 with a glide ratio of 19:1 at time t+4. (a) 2D view; (b) 3D view.
Fig. 48: Trajectory to LGA31 with a glide ratio of 19:1 at time t+4. (a) 2D view; (b) 3D view.
Fig. 49: Trajectory to LGA22 with a glide ratio of 19:1 at time t+8. (a) 2D view; (b) 3D view.
Fig. 50: Trajectory to LGA13 with a glide ratio of 19:1 at time t+8. (a) 2D view; (b) 3D view.
Fig. 51: Trajectory to LGA31 with a glide ratio of 19:1 at time t+8. (a) 2D view; (b) 3D view.
Fig. 52: Trajectory to LGA22 with a glide ratio of 19:1 at time t+12. (a) 2D view; (b) 3D view.
2) Using a glide ratio of 19:1:
At t+4 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.8513, -73.8767,
3152, 0.7} and our algorithm was able to generate trajectories to LGA31 using 45° bank angle (Fig. 48), LGA22 using 30°
and 45° bank angles (Fig. 46) and LGA13 using 30° and 45° bank angles (Fig. 47). LGA4 was unreachable according to our
results. The trajectories were ranked according to our metrics (Table X).
Fig. 53: Trajectory to LGA13 with a glide ratio of 19:1 at time t+12. (a) 2D view; (b) 3D view.
Fig. 54: Trajectory to LGA31 with a glide ratio of 19:1 at time t+12. (a) 2D view; (b) 3D view.
Fig. 55: Trajectory to LGA22 with a glide ratio of 19:1 at time t+16. (a) 2D view; (b) 3D view.
At t+8 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.8547, -73.8781,
3232, 0} and LGA31 was reachable using 45° bank angle (Fig. 51), LGA22 was reachable using 30° and 45° bank angles
(Fig. 49) and LGA13 was reachable using 30° and 45° bank angles (Fig. 50). The trajectories were ranked according to our
metrics (Table XI).
At t+12 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.8581, -73.8786,
3312, 0.4} and LGA31 was reachable using 45° bank angle (Fig. 54), LGA22 was reachable using 30° and 45° bank angles
(Fig. 52) and LGA13 was reachable using 30° and 45° bank angles (Fig. 53). The trajectories were ranked according to our metrics (Table XII).
Fig. 56: Trajectory to LGA13 with a glide ratio of 19:1 at time t+16. (a) 2D view; (b) 3D view.
Fig. 57: Trajectory to LGA22 with a glide ratio of 19:1 at time t+20. (a) 2D view; (b) 3D view.
Fig. 58: Trajectory to LGA13 with a glide ratio of 19:1 at time t+20. (a) 2D view; (b) 3D view.
At t+16 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.8617, -73.8794,
3352, 358.9} and LGA31 was no longer reachable, LGA22 was reachable using 30° and 45° bank angles (Fig. 55) and LGA13
was reachable using 30° and 45° bank angles (Fig. 56). The trajectories were ranked according to our metrics (Table XIII).
At t+20 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.865, -73.88, 3304, 357.2} and LGA22 was reachable using 30° and 45° bank angles (Fig. 57) and LGA13 was reachable using 30° and 45° bank angles (Fig. 58). The trajectories were ranked according to our metrics (Table XIV).
At t+24 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.8678, -73.8806, 3180, 352.6} and LGA22 was reachable using 30° and 45° bank angles (Fig. 59) and LGA13 was reachable using 30° and 45° bank angles (Fig. 60). The trajectories were ranked according to our metrics (Table XV).
At t+28 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.8711, -73.8819, 3024, 344.5} and LGA22 was reachable using 45° bank angle (Fig. 61) and LGA13 was reachable using only 45° bank angle (Fig. 62). The trajectories were ranked according to our metrics (Table XVI).
Fig. 59: Trajectory to LGA22 with a glide ratio of 19:1 at time t+24. (a) 2D view; (b) 3D view.
Fig. 60: Trajectory to LGA13 with a glide ratio of 19:1 at time t+24. (a) 2D view; (b) 3D view.
Fig. 61: Trajectory to LGA22 with a glide ratio of 19:1 at time t+28. (a) 2D view; (b) 3D view.
Fig. 62: Trajectory to LGA13 with a glide ratio of 19:1 at time t+28. (a) 2D view; (b) 3D view.
Fig. 63: Trajectory to LGA22 with a glide ratio of 19:1 at time t+32. (a) 2D view; (b) 3D view.
Fig. 64: Trajectory to LGA13 with a glide ratio of 19:1 at time t+32. (a) 2D view; (b) 3D view.
At t+32 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.8739, -73.8842,
2844, 333.3} and LGA22 was reachable using 45° bank (Fig. 63) angle and LGA13 was reachable using only 45° bank angle
(Fig. 64). The trajectories were ranked according to our metrics (Table XVII).
At t+36 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.761, -73.8861, 2632, 320.6} and LGA22 was no longer reachable and LGA13 was reachable using only 45° bank angle (Fig. 65).
Fig. 65: Trajectory to LGA13 with a glide ratio of 19:1 at time t+36. (a) 2D view; (b) 3D view.
At t+40 seconds, the latitude, longitude, true altitude, magnetic heading configuration of the aircraft was {40.8789, -73.8897,
2420, 305.5} and none of the runways at La Guardia Airport were reachable using any bank angle for turns.
In order to check the viability and accuracy of the generated trajectories, we simulated the trajectories generated by our
trajectory planning algorithm in a Precision Flight Controls CAT III Flight Simulator running X-Plane software with an Airbus
A320 (Fig. 66, Fig. 67).
Fig. 66: Generated vs simulated trajectories to LGA22 at t+8. (a) 2D view; (b) 3D view.
VIII. CONCLUSION AND FUTURE WORK
Augmenting flight management architectures with a system to generate and rank feasible trajectories in case of LOT scenarios
will help in reducing the response time of pilots for choosing a course of action to make a successful landing in case of such
emergency situations. Our evaluation of US Airways flight 1549 shows that using a baseline glide ratio of 17.25:1 for straight
line flight, runways were reachable up to 28 seconds after the emergency was detected and a water landing could have been
avoided if this information was made available to the pilots during the actual emergency. Our algorithm was unable to generate
trajectories to Teterboro airport even though we had successfully managed to land the flight at TEB24 during simulator trials.
In the case of US Airways 1549, the left engine still had 30% power. While it is not clear how much of this power turned into
thrust for a damaged engine, a conservative baseline glide ratio of 19:1 illustrates that trajectories can be generated for up to
36 seconds after the bird strike. Using a data-driven feedback loop, DDDAS-based trajectory generation systems can determine
the actual capabilities of the affected aircraft at the time in question and generate trajectories that are practical and also reject
previously calculated trajectories that might no longer be feasible.
Source code available at: http://wcl.cs.rpi.edu/pilots
Fig. 67: Comparison of generated and simulated spiral segments. (a) 2D view; (b) 3D view.
Future directions of work include generating terrain-aware trajectory planning algorithms that take into account the features of the terrain by analyzing and processing data from terrain databases. Terrain-aware algorithms may detect obstacles in the
path and generate trajectories that avoid those obstacles and also detect alternative landing sites such as rivers, lakes, highways,
etc, if conventional runways are absent in the vicinity of an emergency. For example, in the case of US Airways 1549, the
Hudson river was a very reasonable alternative landing zone (the pilots had concluded that no runway in the nearby airports
were reachable) considering the fact that it provided the pilots with a practically unlimited runway to land the flight without
having to worry about running short or overshooting the runway. In this paper, we have used Dubins airplane paths that consider
only a single radius of turn for all turns in a particular trajectory. An interesting future direction of work would be to use
small radii of turns (steep bank angles) in the initial curves and larger radii of turns (shallower bank angles) in the final curves
near the ground. Finally, uncertainty quantification in the trajectory planning process is another potential future direction. This
would allow us to compute the probability of success of a trajectory by considering possible variations in variables such as
wind, pilot error, and the actual amount of thrust being generated by the damaged engines.
IX. ACKNOWLEDGEMENTS
This research is partially supported by the DDDAS program of the Air Force Office of Scientific Research, Grant No.
FA9550-15-1-0214 and NSF Grant No. 1462342.
REFERENCES
[1] Frederica Darema. Dynamic data driven applications systems: New capabilities for application simulations and measurements. Computational Science
– ICCS 2005: 5th International Conference, Atlanta, GA, USA, May 22-25, 2005. Proceedings, Part II, pages 610–615, 2005.
[2] Shigeru Imai, Erik Blasch, Alessandro Galli, Wennan Zhu, Frederick Lee, and Carlos A. Varela. Airplane flight safety using error-tolerant data stream
processing. IEEE Aerospace and Electronics Systems Magazine, 32(4):4–17, 2017.
[3] Sida Chen, Shigeru Imai, Wennan Zhu, and Carlos A. Varela. Towards learning spatio-temporal data stream relationships for failure detection in avionics.
In Dynamic Data-Driven Application Systems (DDDAS 2016), Hartford, CT, Aug 2016. To appear.
[4] Shigeru Imai, Alessandro Galli, and Carlos A. Varela. Dynamic data-driven avionics systems: Inferring failure modes from data streams. In Dynamic
Data-Driven Application Systems (DDDAS 2015), Reykjavik, Iceland, June 2015.
[5] Shigeru Imai, Richard Klockowski, and Carlos A. Varela. Self-healing spatio-temporal data streams using error signatures. In 2nd International Conference
on Big Data Science and Engineering (BDSE 2013), Sydney, Australia, December 2013.
[6] Shigeru Imai and Carlos A. Varela. A programming model for spatio-temporal data streaming applications. In Dynamic Data-Driven Application Systems
(DDDAS 2012), pages 1139–1148, Omaha, Nebraska, June 2012.
[7] Richard S. Klockowski, Shigeru Imai, Colin Rice, and Carlos A. Varela. Autonomous data error detection and recovery in streaming applications.
In Proceedings of the International Conference on Computational Science (ICCS 2013). Dynamic Data-Driven Application Systems (DDDAS 2013)
Workshop, pages 2036–2045, May 2013.
[8] Shigeru Imai and Carlos A. Varela. Programming spatio-temporal data streaming applications with high-level specifications. In 3rd ACM SIGSPATIAL
International Workshop on Querying and Mining Uncertain Spatio-Temporal Data (QUeST) 2012, Redondo Beach, California, USA, November 2012.
[9] L. E. Dubins. On curves of minimal length with a constraint on average curvature, and with prescribed initial and terminal positions and tangents.
American Journal of Mathematics, 79(3):497–516, 1957.
[10] Ella M Atkins. Emergency landing automation aids: an evaluation inspired by US airways flight 1549. In AIAA Infotech@ Aerospace Conference,
Atlanta, Georgia, 2010.
[11] Ella M Atkins, Igor Alonso Portillo, and Matthew J Strube. Emergency flight planning applied to total loss of thrust. Journal of Aircraft, 43(4):1205–1216,
2006.
[12] Mark Owen, Randal W. Beard, and Timothy W. McLain. Implementing Dubins Airplane Paths on Fixed-Wing UAVs*, pages 1677–1701. Springer
Netherlands, Dordrecht, 2015.
[13] Robert F Stengel. Flight dynamics. Princeton University Press, 2015.
[14] Kivanc A Avrenli and Barry J Dempsey. Is "Green Dot" always the optimum engines-out glide speed on the Airbus A320 aircraft? Journal of Aviation/Aerospace Education & Research, 24(3):33, 2015.
A simulation technique for slurries interacting with moving parts and deformable solids with applications
Patrick Mutabaruka · Ken Kamrin
arXiv:1703.05158v1 [cs.CE] 15 Mar 2017
Abstract A numerical method for particle-laden fluids interacting with a deformable solid domain
and mobile rigid parts is proposed and implemented in a full engineering system. The fluid domain
is modeled with a lattice Boltzmann representation, the particles and rigid parts are modeled with
a discrete element representation, and the deformable solid domain is modeled using a Lagrangian
mesh. The main issue of this work, since separately each of these methods is a mature tool, is to
develop coupling and model-reduction approaches in order to efficiently simulate coupled problems
of this nature, as occur in various geological and engineering applications. The lattice Boltzmann
method incorporates a large-eddy simulation technique using the Smagorinsky turbulence model.
The discrete element method incorporates spherical and polyhedral particles for stiff contact interactions. A neo-Hookean hyperelastic model is used for the deformable solid. We provide a detailed
description of how to couple the three solvers within a unified algorithm. The technique we propose
for rubber modeling/coupling exploits a simplification that prevents having to solve a finite-element
problem each time step. We also develop a technique to reduce the domain size of the full system by
replacing certain zones with quasi-analytic solutions, which act as effective boundary conditions for
the lattice Boltzmann method. The major ingredients of the routine are separately validated.
To demonstrate the coupled method in full, we simulate slurry flows in two kinds of piston-valve
geometries. The dynamics of the valve and slurry are studied and reported over a large range of
input parameters.
Keywords Discrete elements method · Lattice Boltzmann · Fluid-particle interaction · Smagorinsky
turbulence model · Hyperelastic model · Neo-Hookean elastic rubber model
1 Introduction
For systems that involve grains, fluids, and deformable solids, a key challenge is to determine reasonable methodologies to couple very distinct numerical techniques. On their own, systems of dry grains
are commonly simulated using the discrete element method (DEM), wherein each grain’s position
is evolved by Newton’s laws applied by way of contact interactions with other grains. For fluids, a
variety of approaches exist including finite volume methods, finite difference methods, and the Lattice Boltzmann Method (LBM), which are based on updating fluid data on an Eulerian background
set. While the former two methods directly simulate Navier-Stokes, the latter utilizes a lattice discretization of the Boltzmann equation, which approaches Navier-Stokes under the proper refinement.
As for solids, in the large deformation limit, finite-element methods are commonly used, typically
based on a moving Lagrangian node set. Systems that mix particles, fluids, and deformable solids
require development of methods that allow proper momentum exchange between these disparate
P. Mutabaruka
Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
E-mail: [email protected]
K. Kamrin
Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
E-mail: [email protected]
representations, which can be computationally quite complex if not reduced. However, because the
particles can enter a dense-packed state, we do not wish to reduce the particle-fluid mixture to a
simplified dilute-suspension continuum model.
The purpose of this paper is three-fold:
1. We introduce a reduced-order method that permits continuum deformable solid models, represented with finite-elements, to interact with both grains and fluid in a dynamic environment.
The fluid-particle implementation is based on a joint LBM-DEM method similar to those used in
[1, 2, 3, 4, 5, 6]. LBM is well-suited to this problem because of its ease dealing with many moving
boundaries. The solid interaction method introduced uses data interpolation to map deformed
solid configurations from separate, individual solid deformation tests to the in-situ solid as it
interacts with particles and fluid.
2. Because of the inherent complexity in multi-material modeling, the ability to remove the need
to simulate large zones of the computational domain can be advantageous, as long as the macrophysics in those zones can be properly represented otherwise. Herein, we introduce an LBM
sub-routine that allows us to remove a large zone of the computational fluid domain and replace
it with a global analytical form, which handshakes back to the LBM simulation domain appropriately.
3. As a key example of where these methods may come to use, we demonstrate their usage in
two different piston-valve geometries. In both piston-valve geometries, a large piston pushes a
particle-laden fluid through a passive valve. The valve is spring-loaded, and opens when the
slurry pressure beneath is large enough. The deformable solid aspect comes into play because
the valve has a rubber component along its bottom, which is intended to make a seal with
the valve seat. We conduct a systematic parameter study of valve behavior under variations in
particle size, input packing fraction, and polydispersity as well as variations in fluid viscosity and
piston speed. We consider two types of valve setups: (1) A ‘pressure valve’, in which the valve
separates a zone of pressurized slurry above it from a zone of low pressure below it. Slurry pushed
through the valve is hence pressurized as it passes through. (2) A ‘safety valve’, whose goal is to
ensure the pressure in a flowing conduit does not exceed a critical limit. Here, the valve is placed
adjacent to a flowing conduit and remains closed unless the pressure is high enough. Figure 1
shows mid-simulation snapshots of both valve setups, showing particles, fluid stream lines, rubber
deformation, and the geometry of the valve and frame. Note that we exploit symmetry about
the zy-plane and simulate only half the domain.
Fig. 1 Pressure valve (left) and safety valve (right) setups with particles (silver), fluid stream lines (colored according
to fluid speed |v| (m/s)), deformable solid (colored according to equivalent shear strain εq ), valve (dark gray), and
frame (light gray). A spring (not shown) applies a downward force on the valve.
In testing our method, we provide numerical validations of the new techniques introduced. We
also perform validations of the LBM approach in the simulated valve geometry. In analyzing valve
simulation results, we provide physical commentary where possible to justify various observations.
2 Numerical method
The discrete-element method (DEM) is already a mature tool that is applied in conjunction with
experiments both for a better understanding of the micromechanics of granular materials and as a
means of virtual experimentation when laboratory experiments are unavailable. In a similar vein,
the inclusion of a fluid at the subgranular scale in DEM simulations provides a powerful tool in the
broad field of fluid-grain mixtures. Obviously, the available computational power and research time
restrict considerably the number of particles or the size of a physical system.
In the case of dry granular materials, statistically representative samples are obtained and simulated with O(10^4) particles in 2D [7]. Despite enhanced kinematic constraints, 2D simulations
often lead to novel physical insights and realistic behaviors that can be easily generalized to 3D
configurations. However, with fluid in the pore space, 2D simulations are much less reliable in the
dense regime since the pore space is discontinuous with zero permeability. This two-dimensional
flaw can be partially repaired by adding artificially a permeable layer on the particles. But only
3D simulations may account for a realistic behavior of particle-fluid mixtures with their natural
permeability. Moreover, complex geometries/boundaries relating to realistic engineering problems
cannot be fully captured in 2D simulations or symmetric 2D extensions (e.g. axis-symmetry); only
3D approaches can handle such problems in full generality.
We developed a 3D fluid dynamics algorithm based on the lattice Boltzmann method (LBM).
This algorithm was interfaced with a DEM algorithm with a standard linear spring-dashpot-friction
model of contact between particles. The combined LBM-DEM method for particle-laden fluid is then
further coupled to a deformable solid domain using finite elements to model a rubber-like behavior.
The rubber coupling is intentionally simplified.
With present-day computing power, it is still a significant challenge to model the entirety of most engineering systems and problems. Certain sub-scale details and complex interactions are unnecessary to
capture the macroscale system response for a given loading. We utilize symmetric boundaries (where
possible) and a variety of techniques to shrink the system size and average-up sub-scale phenomena.
Specifically in this work: to handle sub-scale behavior in the fluid we use a Large-Eddy-Simulation
(LES) technique (see Sect. 2.2), to mimic a large fluid domain outside the focus region we have
created a technique we denote Zoom-in with Effective Boundaries (ZIEB) (see Sect. 2.6), and to
reduce simulation time we developed a weak coupling to the rubber domain based on a Neo-Hookean
model developed in Abaqus. The last part is computed separately and only the result is imported
into LBM-DEM simulation; the coupling and description of this part is expounded in Sec. 2.4.
2.1 Discrete-element method
The DEM is based on the assumption of elastic solids with damping and frictional contact behavior
[8, 9, 10, 11, 12, 13, 14, 15]. Newton’s equations of motion are integrated for all degrees of freedom
with simple force laws expressing the normal and friction forces as explicit functions of the elastic
deflection defined from the relative positions and displacements at contact points. We treat all quasirigid solids in the domain using this DEM description, including grains, the valve, and solid system
boundaries. Correspondingly, all solid-on-solid contact forces (e.g. grain on grain, grains on valve,
grain on solid wall) are obtained using DEM contact laws. The valve and system walls are discretized
as a kinematically constrained connected mesh of polyhedral solid ‘particles’.
To simplify contact interactions, we assume linear elastic normal and tangential contact forces
characterized by a normal stiffness kn and tangential stiffness kt . This is applied to all contact
interactions, e.g. between spheres, polyhedra, or sphere-polyhedra, though the stiffnesses can vary
depending on the two objects in contact. In addition to the elastic part, a dissipative part of the
contact force is necessary [11, 13, 16, 17]. In our model, we use a linear visco-elastic law for normal
damping and a linear visco-elasto-plastic law for tangential damping and friction forces where the
plastic part uses a Coulomb law. The visco-elastic law is modeled by a parallel spring-dashpot model.
The contact normal force is defined as:
f^n = k_n δ_n − γ_n v^r_n · n   if δ_n ≤ 0,   and   f^n = 0   otherwise   (1)
where n is the contact normal vector and vrn is the relative velocity along the contact normal. γn
represents a viscosity parameter with a value that depends on the normal restitution coefficient
between grains. According to Coulomb’s law the friction force is given by:
f^t = k_t δ_t − γ_t v^r_t   if |f^t| ≤ µ_s f^n,   and   f^t = −µ_s f^n v^r_t/|v^r_t|   otherwise   (2)
and
δ_t = ∫ v^r_t dt   if |f^t| ≤ µ_s f^n,   and   δ_t = (1/k_t) f^t   otherwise   (3)
where µ_s is the friction coefficient, v^r_t = v^r − v^r_n is the tangential relative velocity, and γ_t is a
viscosity parameter, which depends on the tangential restitution coefficient.
The equations of motion (both linear and angular momentum balance) are integrated according
to a Velocity Verlet scheme [9].
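A condensed sketch of the contact law of Eqs. 1-3 is given below. It is illustrative only: the per-contact bookkeeping of the tangential spring δ_t, the sign conventions, and the function signature are assumptions rather than the actual implementation.

import numpy as np

def contact_forces(delta_n, v_rel, n, delta_t, kn, kt, gamma_n, gamma_t, mu_s, dt):
    """Linear visco-elastic normal force (Eq. 1) and Coulomb-capped tangential force (Eqs. 2-3)."""
    v_n = np.dot(v_rel, n) * n          # normal component of relative velocity
    v_t = v_rel - v_n                   # tangential component
    if delta_n > 0:                     # no overlap: no contact force
        return np.zeros(3), np.zeros(3), delta_t
    f_n = kn * delta_n * n - gamma_n * v_n           # Eq. 1 (vector form along n)
    f_n_mag = np.linalg.norm(f_n)
    delta_t = delta_t + v_t * dt                      # accumulate tangential stretch (Eq. 3, stick)
    f_t = kt * delta_t - gamma_t * v_t                # Eq. 2, stick branch
    if np.linalg.norm(f_t) > mu_s * f_n_mag:          # Coulomb sliding
        f_t = -mu_s * f_n_mag * v_t / (np.linalg.norm(v_t) + 1e-12)
        delta_t = f_t / kt                            # Eq. 3, slip branch
    return f_n, f_t, delta_t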
2.2 Lattice Boltzmann method
The LBM is based on a material representation of fluids as consisting of particle distributions moving
and colliding on a lattice [1, 3, 18, 19]. The partial distribution functions fi (r, t) are introduced to
represent the probability density of a fluid particle at the position r with a velocity u = ci at time
t along discrete direction i. The three components of ci are given in Tab. 1.
Table 1 The ci components for a D3Q19 scheme (see Fig. 2).
i :  0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18
x :  0  -1   0   0  -1  -1  -1  -1   0   0   1   0   0   1   1   1   1   0   0
y :  0   0  -1   0  -1   1   0   0  -1  -1   0   1   0   1  -1   0   0   1   1
z :  0   0   0  -1   0   0  -1   1  -1   1   0   0   1   0   0   1  -1   1  -1
The Lattice Boltzmann method is often non-dimensionalized when applied to physical problems.
The governing macroscopic equations are given in terms of lattice units:
Characteristic length scale: ∆x;  characteristic velocity: c;  characteristic density: ρ_f   (4)
where ∆x is the lattice spacing, c = ∆x/∆t is the lattice speed with ∆t the time step, and ρf is the
fluid density at zero pressure. For the following, we will describe the method in lattice units.
Figure 2 shows a cartesian grid where the meshing scheme D3Q19, corresponding to 18 space
directions in 3D used in our simulations, is represented. In LBM, the scheme D3Q19 is defined for
each node where the distribution functions evolve according to a set of rules, which are constructed
so as to ensure the conservation equations of mass, momentum and energy (with dissipation), so
as to recover the Navier-Stokes equations [20]. This holds only when the wave lengths are small
compared to the lattice spacing [21].
At each lattice node, the fluid density ρ and momentum density ρu are defined as
ρ = Σ_i f_i   (5)
ρu = Σ_i f_i c_i   (6)
and the temperature is given by
(D/2) kT = Σ_i (1/2) m (c_i − u)² f_i/ρ   (7)
Fig. 2 3D lattice discretization with 18 directions (D3Q19).
where D is the number of space dimensions, m is particle mass, and k is the Boltzmann constant.
The equilibrium state is assumed to be governed by the Maxwell distribution:
f^eq(c) = ρ (m/(2πkT))^{D/2} exp[−m (c − u)²/(2kT)]   (8)
where u is the mean velocity. By expanding (Eq. 8) to order 2 as a function of u/cs , which is the local
Mach number with cs being the LBM sound velocity, a discretized form of the Maxwell distribution
is obtained and used in the LBM:
f_i^eq = ρ w_i [1 + (c_i · u)/c_s² − u²/(2c_s²) + (c_i · u)²/(2c_s⁴)]   (9)
where the factor w_0 = 1/3, w_{1,2,3,10,11,12} = 1/18 and the rest of the w_i = 1/36. The w_i depend on the scheme, with the requirement of rotational invariance [22]. The LBM sound speed is then given by c_s = √(Σ_i w_i c_i²) = 1/√3.
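As an illustration, the discrete equilibrium of Eq. 9 over the D3Q19 stencil of Table 1 can be evaluated as in the sketch below; the array layout and names are assumptions of this sketch.

import numpy as np

# D3Q19 velocities (Table 1) and weights: w0 = 1/3, face directions 1/18, edge directions 1/36.
C = np.array([[0,0,0],[-1,0,0],[0,-1,0],[0,0,-1],[-1,-1,0],[-1,1,0],[-1,0,-1],[-1,0,1],
              [0,-1,-1],[0,-1,1],[1,0,0],[0,1,0],[0,0,1],[1,1,0],[1,-1,0],[1,0,1],
              [1,0,-1],[0,1,1],[0,1,-1]], dtype=float)
W = np.array([1/3] + [1/18]*3 + [1/36]*6 + [1/18]*3 + [1/36]*6)
CS2 = 1.0 / 3.0   # lattice sound speed squared, cs^2 = 1/3

def equilibrium(rho, u):
    """Eq. 9: second-order equilibrium distributions for density rho and velocity u (3-vector)."""
    cu = C @ u                      # c_i . u for each direction
    usq = np.dot(u, u)
    return rho * W * (1.0 + cu/CS2 + cu**2/(2.0*CS2**2) - usq/(2.0*CS2))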
The velocities evolve according to the Boltzmann equation. In its discretized form, it requires
an explicit expression of the collision term. We used the Bhatnagar-Gross-Krook (BGK) model in
which the collision term for each direction i is simply proportional to the distance from the Maxwell
distribution [23]:
(∂f_i/∂t)_coll = (1/τ) (f_i^eq(r, t) − f_i(r, t))   (10)
where τ is a characteristic time. Hence, for the D3Q19 scheme, we have a system of 18 discrete
equations governing the distribution functions:
f_i(r + c_i ∆t, t + ∆t) = f_i(r, t) + (1/τ) (f_i^eq(r, t) − f_i(r, t))   (11)
These equations are solved in two steps. In the collision step, the variations of the distribution
functions are calculated from the collisions:
f̃_i(r, t + ∆t) = f_i(r, t) + (1/τ) (f_i^eq(r, t) − f_i(r, t))   (12)
where the functions f˜i designate the post-collision functions. In the streaming step, the new distributions are advected in the directions of their propagation velocities:
f_i(r + c_i ∆t, t + ∆t) = f̃_i(r, t + ∆t)   (13)
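The collide-and-stream update of Eqs. 12-13 then takes the following schematic form; the field shapes and the use of periodic wrapping in the streaming step are simplifying assumptions of this sketch (real boundaries are treated in Sect. 2.3).

import numpy as np

def collide_and_stream(f, rho, u, tau, C, equilibrium):
    """One BGK update: collision (Eq. 12) followed by streaming (Eq. 13).

    f has shape (19, nx, ny, nz); C and equilibrium() are as in the previous sketch,
    with equilibrium broadcast over the lattice in practice."""
    feq = equilibrium(rho, u)
    f_post = f + (feq - f) / tau                 # collision: relax toward local equilibrium
    for i, c in enumerate(C.astype(int)):        # streaming: advect along each lattice velocity
        f_post[i] = np.roll(f_post[i], shift=tuple(c), axis=(0, 1, 2))
    return f_post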
The above equations imply an equation of state [21, 24, 25]:
P(ρ) = ρ c_s²   (14)
The kinematic viscosity is then given by [2, 25]
η = c_s² (τ − 1/2)   (15)
with the requirement τ > 1/2.
As discussed in [21], the Lattice Boltzmann method holds only when the pressure wave lengths
are small compared to the lattice spacing unit. This imposes a limitation on the Mach number, Ma = u/c_s ≪ 1, and therefore fluid speeds higher than the sound speed cannot be simulated.
In nature, for a given fluid we have: sound speed c·c_s, density ρ_f and viscosity η_f. From Eq. 12, we need the relaxation time τ. This is related to c, η_f and ∆x by:
τ = 1/2 + η_f/(c_s² c ∆x)   (16)
Equation 16 shows that since c and η_f are fixed from fluid properties, only ∆x can be used to ensure the stability of LBM, which becomes unstable when τ → 1/2. Numerically, there is a limitation in computer capability regarding the smallest value of ∆x. To handle this, a sub-grid turbulent model based on LES with a Smagorinsky turbulence model is used [26, 27, 28, 29]. The viscosity formulation is:
η* = η + η_t = c_s² (τ* − 1/2) = c_s² (τ + τ_t − 1/2)   (17)
where η_t = c_s² τ_t is the sub-grid LES viscosity and τ_t is the sub-grid LES LBM relaxation time. The LES viscosity is calculated from the filtered strain rate tensor S̄_αβ = (1/2)(∂_α u_β + ∂_β u_α) and a filter length scale l_x through the relation η_t = (C l_x)² S̄, where C is the Smagorinsky constant. In LBM, S̄ is obtained from the second moment [30] Π_αβ = Σ_{i≠0} c_iα c_iβ (f_i − f_i^eq) as S̄ = Π/(2 ρ c_s² τ*), where Π = √(2 Π_αβ Π_αβ). From S̄ and Π, τ_t is expressed as:
τ_t = (1/2) (√(τ² + 2√2 (C l_x)² (ρ c_s⁴ δt)⁻¹ Π) − τ)   (18)
where the filter length scale l_x = ∆x is the spatial lattice discretization.
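A sketch of the sub-grid relaxation-time computation (Eqs. 17-18) for a single node is shown below; the Smagorinsky constant value and the per-node data layout are assumptions of this sketch.

import numpy as np

CS2 = 1.0 / 3.0
C_SMAG = 0.16   # illustrative Smagorinsky constant; the actual value is a tuning choice

def total_relaxation_time(tau, f, feq, C, rho, lx=1.0, dt=1.0):
    """Eq. 18: add a Smagorinsky sub-grid relaxation time tau_t to the molecular tau."""
    # Second moment of the non-equilibrium part; the i = 0 term contributes zero since c_0 = 0.
    df = f - feq                                   # shape (19,)
    Pi = np.einsum('ia,ib,i->ab', C, C, df)
    Pi_norm = np.sqrt(2.0 * np.sum(Pi * Pi))
    tau_t = 0.5 * (np.sqrt(tau**2 + 2.0*np.sqrt(2.0)*(C_SMAG*lx)**2 * Pi_norm
                           / (rho * CS2**2 * dt)) - tau)
    return tau + tau_t                             # tau* used in Eq. 17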
2.3 LBM-DEM coupling
There exist different techniques to model fluid-structure interaction. The most used in CFD is stress
integration, however, in LBM the preferred approach is based on momentum exchange [1, 3, 5, 31, 32,
33, 34, 35]. This approach is simple in LBM, since in LBM each node already contains information
about the derivatives of the hydrodynamic variables in each distribution function fi [24]. Due to
the presence of a solid boundary, after the collision step but before the streaming step, we know
the population of fi except those which reflect off the solid wall as shown in Fig. 3 (fi =?). In
our simulations, we use the method developed by Bouzidi [1], which mimics a macroscopic no-slip
condition. For clarity, we describe the interaction rules using lattice units. This means that the
time step is one, the space discretization is one, etc. Out of the characteristic scales, we will denote,
around the fluid-solid boundary, x as position and the subscripts f , s, and b respectively will indicate
either the fluid or solid domain and the fluid-solid boundary. We present the method in simplified
1D form. For more clarity in the way momenta are exchanged between fluid and solid domains, let
us introduce q = |xb − xf |/∆x. According to the LBM scheme (collide and stream Sect. 2.2) in the
presence of a solid wall, we have two scenarios depending on the wall position xb (see Fig. 3):
1. q < 1/2, where fluid leaves node x_f, reflects off the wall, and reaches x_f again in time less than ∆t.
2. q ≥ 1/2, where fluid leaves node x_f, reflects off the wall, and reaches x_f again in time greater than ∆t.
To handle these scenarios, we introduce a fictitious node x′_f (see Fig. 3) such that after streaming, a fluid particle leaving x′_f arrives at x_f exactly in time ∆t.
Fig. 3 A 2D illustration of the fluid-structure interaction scheme. Black squares are fluid nodes, triangles are fictitious
nodes, empty squares are solid nodes and circles are boundary nodes.
As shown in Fig. 3, if x_f is the last fluid node before we reach the solid boundary, x_s = x_f + c_i should be a solid node. Let f_{i′} be the distribution function such that i′ is the opposite direction of i, where i is the direction oriented from the fluid node to the solid boundary. Using a linear interpolation, f_{i′} is expressed as follows:
f_{i′}(x_f, t + ∆t) = 2q f_i^c(x_f, t) + (1 − 2q) f_i^c(x_f − c_i, t) + ∂f_{i′}^w   for q < 1/2
f_{i′}(x_f, t + ∆t) = (1/(2q)) f_i^c(x_f, t) + ((2q − 1)/(2q)) f_i^c(x_f + c_i, t) + ∂f_{i′}^w   for q ≥ 1/2   (19)
where fic corresponds to the distribution function of fi after the collision step but before streaming
and fic (xf + ci , t) = fic0 (xf , t). The term ∂fiw0 is calculated from the boundary velocity and is zero if
the boundary is stationary. ∂fiw0 is calculated by considering that the fluid velocity u evolves linearly
between xf and xs . If u0 is the boundary velocity, u is then defined by
u = u_0 + (x_f − q) ∂u/∂x   (20)
at first order in u. The equilibrium value of fi is given by fi = fi0 + 3ωi u · ci where fi0 is constant
and depends on the lattice discretization scheme [1, 36]. Using a linear interpolation, ∂fiw0 is given
by:
∂f_{i′}^w = 6 ω_i u_0 · c_i   for q < 1/2
∂f_{i′}^w = (3/q) ω_i u_0 · c_i   for q ≥ 1/2   (21)
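For one boundary link, the interpolated bounce-back of Eqs. 19 and 21 can be sketched as follows; the reading of the second wall-term coefficient as 3ω_i/q is our interpretation of Eq. 21, and the argument names are placeholders.

def bounce_back_link(f_post, q, w_i, c_i, u_wall, f_prev_neighbor, f_post_neighbor):
    """Bouzidi-type interpolated bounce-back for one boundary link (Eqs. 19 and 21).

    f_post: post-collision f_i^c at the fluid node x_f
    f_prev_neighbor: f_i^c at x_f - c_i (used when q < 1/2)
    f_post_neighbor: f_i^c at x_f + c_i, i.e. f_{i'}^c at x_f (used when q >= 1/2)
    Returns the reflected population f_{i'}(x_f, t + dt)."""
    cu = sum(ci * ui for ci, ui in zip(c_i, u_wall))       # c_i . u_wall
    if q < 0.5:
        return 2.0*q*f_post + (1.0 - 2.0*q)*f_prev_neighbor + 6.0*w_i*cu
    return f_post/(2.0*q) + (2.0*q - 1.0)/(2.0*q)*f_post_neighbor + 3.0*w_i*cu/q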
Hydrodynamic forces acting on the structure are calculated by using momentum exchange [1, 5, 31,
37]. Let Qf and Qs be fluid and solid momentum calculated near the boundary. The exchanged
momentum is given by:
∆Q = Q_s − Q_f.   (22)
Q_f and Q_s are calculated as follows:
Q_f = ∆x^D Σ_all Σ_i f_i^f c_i   (23)
Q_s = ∆x^D Σ_all Σ_i f_{i′}^s c_{i′}   (24)
where D is the space dimension, and fif and fis0 are respectively the fluid and the solid distribution functions. To be clear, fis0 is constructed at a lattice point occupied by solid by taking the
solid velocity us and density ρs and assigning a Maxwell equilibrium distribution, per Eq. 9. The
hydrodynamic force F and torque T are then given by:
F = ∆Q_fs/∆t = (∆x^D/∆t) Σ_all Σ_i (f_i^f + f_{i′}^s) c_{i′}   (25)
T = (∆x^D/∆t) Σ_all Σ_i l × (f_i^f + f_{i′}^s) c_{i′}   (26)
where l is the distance between the center-of-mass of the solid domain and the boundary node xb .
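The momentum-exchange summations of Eqs. 25-26 amount to the following accumulation over boundary links; the data structure holding the links is an assumption of this sketch.

import numpy as np

def hydrodynamic_force_torque(boundary_links, dx, dt, D=3):
    """Momentum-exchange force and torque on a solid body (Eqs. 25-26).

    boundary_links: iterable of tuples (f_fluid_i, f_solid_iopp, c_iopp, lever_arm),
    one per boundary link, where lever_arm is the vector from the body's center
    of mass to the boundary node x_b."""
    force = np.zeros(3)
    torque = np.zeros(3)
    for f_fluid, f_solid, c_opp, lever in boundary_links:
        dq = (f_fluid + f_solid) * np.asarray(c_opp, dtype=float)   # momentum exchanged on this link
        force += dq
        torque += np.cross(np.asarray(lever, dtype=float), dq)
    scale = dx**D / dt
    return scale * force, scale * torque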
2.4 LBM-DEM-Rubber coupling
Unlike the coupling between LBM and DEM, the LBM-Rubber and DEM-Rubber coupling is indirect. We focus our explanation below on the case of a rubber ring component of a valve, but the
idea can be generalized to other cases.
A first simulation is performed using Abaqus from which the deformed rubber shape and reaction force of the valve seat on the rubber are saved for many states of rubber compression. This
simulation uses no fluid or particles. The Abaqus simulation consists of compressing the rubber
ring geometry against the bare valve seat (see inset of Fig. 5). The rubber is simulated as a nearly
incompressible neo-Hookean elastic solid with a strain energy Ψ (per unit reference volume) given by Ψ = (G_r/2)(Ī_1 − 3) + (K_r/2)(J − 1)², where Ī_1 is the first deviatoric strain invariant defined as Ī_1 = λ̄_1² + λ̄_2² + λ̄_3², the deviatoric stretches are given by λ̄_i = J^{−1/3} λ_i, J is the total volume ratio, and λ_i are the principal stretches. The (small-strain) shear modulus is G_r and bulk modulus is K_r.
A frictionless contact between rubber and valve seat is used for simplicity. Figure 4 shows several
snapshots during the Abaqus simulation and Fig. 5 gives the seat net-force on rubber as a function of
(downward) displacement δ of the rubber ring, h(δ). We index each deformed rubber configuration
by the value of δ it corresponds to. The Abaqus tests are performed under quasi-static conditions
but we also assume damping can exist such that the upward force on the rubber satisfies the relation
F = h(δ) + ν δ̇   (27)
Fig. 4 Snapshots of rubber deformation during the Abaqus simulation at 0 %, 33 %, 66 % and 100 % of the simulation
duration where the color map shows the Mises stress in Pa.
Then, the data from the Abaqus simulation is used in a weak coupling routine to describe rubber
configurations when the ring takes part in a slurry simulation. In short, the method determines which
Fig. 5 Net-force of valve seat on rubber as a function of displacement. The inset shows the configuration of the Abaqus simulation, where the dotted line represents the imposed velocity boundary U.
of the deformed rubber configurations from the stand-alone Abaqus tests is the best representation
of the actual deformed rubber state at that moment in the slurry problem. Hence, the utility of this
method lies in the fact that the rubber deformation that occurs in the actual slurry problem largely
resembles the modes of deformation seen in a purely solid compression experiment. Situations where
the rubber surface becomes heavily locally deformed could be problematic for this approach.
From the LBM-DEM point of view, the rubber is composed of tetrahedra; this allows us to
compute contact forces for DEM and the exchanged momentum for LBM as if it were a simple
collection of polyhedral objects. Since the Abaqus simulation is performed without fluid or particles,
to use its solutions we need to deduce an effective upward force from LBM-DEM acting on the
bottom rubber surface, which can then be referenced against the Abaqus data to infer a deformed
rubber shape. The effective force is needed because the rubber in the Abaqus simulations has contact
only with the valve seat, whereas in the slurry case, there can be additional forces from fluid and
particles extending to the lateral surfaces of the rubber.
Key to our routine is to identify two subsets of the exposed rubber surface, denoted surface A and
surface B. Surface A is the part that makes contact with the valve seat and surface B remains free
in the Abaqus simulations (see Fig. 6, left). In particular, surfaces A and B are geometrically defined using the last frame of the Abaqus simulation where the rubber is fully compressed. In the slurry
case, we add uniform hydrostatic stress to the observed rubber loading distribution until the mean
normal stress acting on surface B vanishes. This leaves us with a loading state that resembles one from
Abaqus. Because the rubber is essentially incompressible, changing the hydrostatic stress uniformly
along the surface does not affect the deformed rubber configuration. To be specific, we compute the normal stress on surface A using PA = (1/A) ∫_A n·f dS and on surface B using PB = (1/B) ∫_B n·f dS, where f
is the stress from hydrodynamic, particle, and valve-seat forces and n is the normal vector on section
dS. Since the rubber deformation is caused by the shear part of the stress, we uniformly subtract
the traction PB from all normal stresses acting on the rubber. This modified loading now resembles
the Abaqus loading (inset Fig. 5) in which only an upward force on surface A exists. Therefore, we
define the effective upward force on surface A as F = AreaA · (PA − PB ).
The rubber shape is updated using a four step loop, which is performed after particle positions
and fluid data are updated. The goal of the iteration routine is to implicitly solve Eq. 27 for δ so
that the effective force from particles, fluid, and the valve seat on the final rubber state matches the
force from Eq. 27.
– Step 1: Compute effective upward force F from fluid, particle, and valve-seat interactions on
rubber given the current guess for δ
– Step 2: Use this force and Eq. 27 to update to a new guess for δ.
– Step 3: Check if new and old δ differ by less than a tolerance. If so break, if not update the
rubber shape based on the new δ.
– Step 4: Update applied force on surfaces A and B according to the rubber shape given by the
new guess for δ.
We assume that fluid forces do not change throughout the iteration procedure. This is true by
assuming a small incremental rubber shape change between iterates, so only particle and valve-seat
forces on the rubber are updated during Step 4.
In Step 2 we utilize the following update rule
δ ← [ (∆t/η)(F − h(δ)) + (ν/η) δ0 + δ ] / (1 + ν/η)   (28)
where δ0 is the actual displacement of the rubber at the beginning of the time-step. The coefficient
η is numerically selected to aid convergence. Note that if the updated value of δ matches the value
inputted, then Eq. 27 is solved. The update rule above attempts to move δ toward such a solution
with each pass through the iteration loop. We check convergence of δ (Step 3) using a tolerance that
is scaled by the error of the first iterate, δ1 − δ0 , where δ1 is the value obtained for δ after the first
pass of the loop.
We use simple linear interpolation to compute values of h(δ) when δ is between two neighboring
values from the Abaqus output.
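A compact sketch of the four-step iteration described above might look as follows. The functions effective_force, h_of_delta and update_shape are placeholders standing for the LBM-DEM force evaluation, the interpolation of the Abaqus data, and the mesh update; the update rule follows Eq. 28 as written above, so that a converged fixed point satisfies Eq. 27.

    def solve_rubber_displacement(delta0, effective_force, h_of_delta, update_shape,
                                  nu, eta, dt, tol_factor=1e-3, max_iter=50):
        """Implicitly solve F = h(delta) + nu*(delta - delta0)/dt for the rubber displacement delta."""
        delta = delta0
        tol = None
        for _ in range(max_iter):
            F = effective_force(delta)                          # Step 1: force from fluid, particles, seat
            new_delta = ((dt / eta) * (F - h_of_delta(delta))
                         + (nu / eta) * delta0 + delta) / (1.0 + nu / eta)   # Step 2 (Eq. 28)
            if tol is None:
                tol = tol_factor * abs(new_delta - delta0)      # tolerance scaled by the first iterate
            if abs(new_delta - delta) <= tol:                   # Step 3: converged?
                return new_delta
            delta = new_delta
            update_shape(delta)                                 # Steps 3-4: new rubber shape, updated contacts
        return delta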
Fig. 6 Configuration of rubber-LBM-DEM coupling. The left figure gives the geometrical definitions of area A and
B, and the initial and final shape of the rubber as it proceeds through the iteration loop to solve Eq. 27, with a
representation of particles in contact shown. The right figure gives a 1D analog of the model to compute the rubber
deformation.
The third step consists of updating the rubber shape using the new guess for δ obtained in Step
2. Again, a linear interpolation is applied as is done for h(δ). For the shape, we export the Abaqus
mesh, which allows the interpolation. For example, if the new δ lies directly between two frames
of the Abaqus data, the rubber shape is updated by moving the nodes of the rubber to positions
halfway between those of the neighboring frames.
The fourth step consists of recomputing the contact force between the tetrahedra representing
the rubber, and the particles/valve-seat. The contact force is computed using Eq. 1 for the normal
part and Eq. 2 for the tangential part. One specificity here is that we do not update δt during the coupling loop routine unless |f_t| > µs fn, where δt = (1/kt) f_t. A schematic model of one update
through the iteration loop is presented in Fig. 6 (left) and a 1D mechanical model of the treatment
of the rubber interaction is visualized in Fig. 6 (right).
2.5 Numerical algorithm
The numerical method described in previous sections is implemented using the algorithm displayed
in Fig. 7. Note that we compute the valve acceleration by enjoining the applied force and mass of the
rubber and valve components together in the Verlet update; rubber deformations are not included
in the calculation of net acceleration of the valve/rubber composite as they are small compared to
the movement of the valve overall. As shown in Fig. 7, the LBM step is computed in parallel with
DEM force calculation and Rubber force calculation.
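A schematic driver for one coupled time step, mirroring the structure of Fig. 7, is sketched below. The `model` object and its methods are hypothetical placeholders for the LBM, DEM, rubber and Verlet sub-steps described in the previous sections; the parallel LBM/DEM/rubber evaluation is shown here as plain sequential calls for readability.

    def coupled_time_step(model, dt):
        """One LBM-DEM-Rubber time step (schematic only; all methods are placeholders)."""
        # Fluid update: collision, boundary treatment (including the ZIEB inlet) and streaming
        model.lbm_collide()
        model.apply_boundaries()
        model.lbm_stream()
        # Contact forces: DEM particle contacts and DEM-Rubber contacts (Eqs. 1-2)
        f_contact = model.dem_contact_forces()
        # Hydrodynamic forces on valve, rubber and particles by momentum exchange (Eqs. 22-26)
        f_hydro = model.momentum_exchange_forces()
        # Rubber deformation: implicit solve of Eqs. 27-28 and shape update
        model.update_rubber_shape(f_contact, f_hydro, dt)
        # Valve/rubber composite and particles: Verlet update with the summed forces
        model.verlet_update(f_contact, f_hydro, dt)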
2.6 Zoom-in with Effective Boundaries (ZIEB)
The Zoom-In with Effective Boundaries (ZIEB) technique replaces a fluid reservoir/domain with an
analytical solution that interacts dynamically with the remainder of the domain. The challenge is
to determine the correct effective dynamics at the fictitious interface, and to transfer the analytical
result to LBM distribution functions. In this study we model valves, which can be positioned far
from the pump that leads the slurry to the valve. The goal is to avoid having to calculate flow in
the expansive pump region. From a computational point of view, one might assume a simple input
velocity boundary condition should solve the problem; however, for a compressible fluid, the imposed flow and pressure may depend on the total removed volume and feed back on the dynamics within
the valve system. In this section, we first detail how to obtain the analytical solution then explain
how to implement this solution as an LBM boundary condition.
ZIEB analytical solution
Per Fig. 8, we assume the virtual (i.e. removed) domain is a cylinder and piston. The cylinder is
initially full of fluid and has total volume V0 . As the piston moves, fluid is pushed into the simulated
domain. Let the movement of the piston be given by some prescribed Hp (t), where Hp measures
the piston displacement. The cross-sectional area of the piston (and cylindrical domain) is Ap . The
piston velocity, vp(t), is simply defined from the time-derivative of Hp. Define vf as the mean inflowing fluid velocity component on the interface between the domains. Let ρ be the average density
of fluid in the virtual cylinder between the piston head and the interface. Further, we make the
simplifying assumption that in the cylinder region the fluid density is in fact uniform, such that it
is equal to ρ throughout.
Conservation of fluid mass in the cylinder domain can be expressed by balancing the mass rate
within the cylinder against the mass flux into the simulated domain:
d/dt [ρ (V0 − Ap Hp)] = −ρ vf Ap   ⇒   dρ/dt = ρ (vp Ap − vf Ap)/(V0 − Ap Hp)   (29)
In a fully continuum framework, the above equation would need to be augmented with momentum
balance in order to provide the in-flowing velocity, vf , at the fictitious interface. However, using the
LBM description, we can update vf in another way, which is consistent with momentum balance on
the small scale.
Implementation of ZIEB analytical solution in LBM
At a given time tn , we assume ρn is given in the cylinder domain and is equal to the density at xS ,
where, per Fig. 8, xS is the lattice point in the simulated domain that is adjacent to the interface
with the virtual domain. We suppose the velocity at xS is the interfacial velocity vfn . Both density
and velocity at time tn at xS are defined by Eq. 5 and 6 through distribution functions fin .
The distribution functions are updated to tn+1 under the following procedure, which is applied
after the collision step but before streaming. First we update and store the density ρn+1 at xS using
explicit integration of Eq. 29:
Fig. 7 LBM-DEM-RUBBER implementation algorithm.
ρ^(n+1) = ρ^n exp[ (vp^n Ap − vf^n Ap)/(V0 − Ap Hp^n) ∆t ]   (30)
Next, a partial LBM streaming step is performed at xS using the distributions at time tn . During
this step xS streams to and from its neighboring ‘real’ lattice points within the simulated domain.
Fig. 8 Top: Full geometry of problem to be solved — valve region connected to a cylinder and piston region. ZIEB
technique allows removal of the piston/cylinder domain from the simulation, by replacing it with effective boundary
conditions at the fictitious interface. Bottom: Zoom-in near the fictitious interface showing the first simulated lattice
point (at xS ) and the closest virtual lattice point (at xV ).
However, it only streams out of the interface with the virtual domain and does not receive any
distributions from the virtual domain. Define ρ∗ as the density at xS after this partial streaming
step.
The next step is to back-solve the needed distributions to be streamed in from the virtual domain
in order to guarantee the final density at xS equals ρn+1 . For example, consider a setup as shown in
Fig. 8 and suppose the fictitious interface is normal to the ẑ direction (see Fig. 2). After the partial
streaming step, updated (though not finalized) distribution values exist for all the fi except for the
values associated to i = 7, 9, 12, 15, and 17. These five distribution values are all unknown after
the partial streaming step. To compute them, first we modify only the value of the f12 distribution,
which is the distribution that streams into the simulated domain normal to the fictitious boundary:
f12 = ρ^(n+1) − (ρ∗ − f12^n)   (31)
This advects all the missing density at xS from a fictitious node xV (see bottom of Fig. 8). With these distributions, the velocity at time tn+1 is computed at xS according to u = (1/ρ^(n+1)) Σi fi ci. Because
the distributions in the i = 7, 9, 15, and 17 directions are still unknown at this point, the Maxwell
equilibrium function Eq. 9 is then used to redistribute all the distributions at xS to a more natural,
equilibrium state. This updates all the distributions to their final values, at tn+1 .
We note here that the initial value of ρ at the beginning of the simulation should be a normalized value (in lattice units); otherwise an additional step of normalizing by the physical fluid reference density will be necessary before using it in Eq. 29 and Eq. 31.
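The ZIEB boundary update of Eqs. 29-31 can be summarized by the following sketch. It assumes the collision and the partial streaming step at xS have already been performed; the index convention (i = 12 streaming in normal to the interface, i = 7, 9, 15, 17 unknown) follows the lattice of Fig. 2, and all other names (the equilibrium callable, the array shapes) are illustrative placeholders.

    import numpy as np

    def zieb_update(f_xS, c, equilibrium, rho_n, vf_n, vp_n, Hp_n, V0, Ap, dt, i_in=12):
        """Effective-boundary update at the first simulated node xS (Eqs. 29-31).

        f_xS: (Q,) distributions at xS after partial streaming; direction i_in still holds
              its pre-streaming value. c: (Q, 3) lattice velocities. equilibrium(rho, u): Eq. 9.
        """
        # Eq. 30: explicit exponential update of the density in the virtual cylinder
        rho_np1 = rho_n * np.exp((vp_n * Ap - vf_n * Ap) / (V0 - Ap * Hp_n) * dt)

        # Eq. 31: back-solve the normal incoming distribution so the node density equals rho_np1
        f_xS = f_xS.copy()
        rho_star = f_xS.sum()                       # density after the partial streaming step
        f_xS[i_in] = rho_np1 - (rho_star - f_xS[i_in])

        # Velocity from the available distributions, then re-equilibrate all directions (Eq. 9)
        u = (f_xS[:, None] * c).sum(axis=0) / rho_np1
        return rho_np1, equilibrium(rho_np1, u)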
2.7 Tests and validations
We first test some of the individual components of the routine. In this section, we provide separate
numerical validations of the ZIEB technique, the rubber deformation model, and the LBM.
To validate the ZIEB method, we performed an analysis of fluid flow in a geometry comprised of
a piston driving fluid passing through a narrow restriction. This flow field is then compared to that
obtained using a “virtual piston” in which the domain containing the moving piston is removed and
in its place an effective boundary condition from ZIEB is used, see Fig. 9. The real piston begins
positioned such that the fluid volume between the piston and input section is V0 ; the same V0 is used
in Eq. 30 for the virtual piston. We use V0 = 1.77e-06 m3 and Ap = 7.09 m2 . As input parameters,
we use a (pressure-free) fluid density ρf = 1000 kg/m3 , dynamic viscosity η = 0.001 P a.s and
a Smagorinsky constant of C = 0.4 for the sub-grid turbulence model [3, 6, 38]. Figure 10 shows
the comparison between the two simulations regarding fluid velocity and the normalized input fluid
density computed in the same domain (see Fig. 9). The agreement is strong, even for the time-dependent fluctuations, confirming the correctness of the ZIEB method.
Fig. 9 Configuration of tests for the Zoom-in with Effective Boundaries (ZIEB) technique. The real piston geometry
is displayed in the top figure (mid-simulation) and the virtual piston geometry is displayed beneath, where ZIEB has
been applied on the left boundary to mimic the removed piston domain.
Fig. 10 Velocity and normalized input fluid density as functions of time for the real and virtual piston setups. The
velocity and density are calculated within the boxed subdomain in Fig. 9.
The test of the rubber coupling and rubber deformation is performed by running a loading/unloading test without fluid. A force (loading/unloading) Fload is directly applied on the valve, which presses the rubber into contact with twelve frozen spheres (see Fig. 11). Two phases are considered: a loading phase with Fload = 40 N, then an unloading phase with Fload = 10 N. We use a frictionless contact type between the rubber and spheres, where the normal stiffness is set to 1e+05 N/m and no normal damping is used, to ensure that all dissipation comes from internal rubber damping. The rubber
coupling parameters (Eq. 28) are set to ν = 80 N · s/m and η = 10 N · s/m. The valve density is
set to 7850 kg/m3 and the rubber density to 1200 kg/m3 . The time step is set to 1e-07 s. Fig. 11
shows the loading/unloading force, the reaction force F of spheres on the rubber, the displacement
δ (right inset) and the corresponding force h(δ). The agreement of Fload , h, and F after a relaxation
time verifies the coupling.
The last test is focused on verifying the fluid LBM simulation by comparing flow of fluid in the
rubber channel of a pressure-valve assembly against an analytical flow solution (recall Fig. 1). We
Fig. 11 A dry numerical test of the simplified DEM-Rubber coupling. The left figure shows the configuration of the
test where the loading/unloading force Fload is applied directly to the valve. Color corresponds to equivalent shear
strain magnitude εq in the rubber. Spheres are held fixed on the valve seat. The right figure shows loading/unloading
force Fload , h(δ) and the net reaction force F of spheres on rubber. The inset shows the rubber deformation δ as a
function of time.
can run a simulation where the fluid viscosity is large, such that the flow in the channel will be in
the Stokes limit. To aid in calculating an analytical solution, we treat the flow as radially directed
and assert the lubrication approximation. In view of Fig. 1 for the definition of the y direction, we
obtain the following system of equations, which includes momentum and mass balance under the
lubrication limit:
(1/r) ∂(r vr)/∂r = 0,   ∂p/∂y = 0,   ηf ∂²vr/∂y² = ∂p/∂r.   (32)
This is solved by v(r, y) = (A/r) y(h − y), where A is an undetermined constant, and p(r) = −2Aηf ln(r) + C where C is a constant and A is the same as in the velocity field equation. From the p(r) equation, the pressure difference ∆p between r = Rin and r = Rout (see Fig. 12) is given by:
∆p = −2Aηf ln(Rout /Rin ). Using y = h/2 = 0.0026 m, Rin = 0.0065 m and Rout = 0.0110 m, we
find v(Rin , h/2) ' 4.16 m/s (see Fig. 12 left) from our numerical data, giving A ∼ 1.59e+04 s−1 .
Hence, the predicted pressure difference is ∆p ' −5.3e+05 P a which is quite close to the obtained
pressure difference from the simulation (−5.7e+05 P a, see Fig. 12 right). Fig. 13 shows the analytical
solution as a function of r in comparison with the numerical data and agreement is found. In this
test, the fluid density is ρf = 1000 kg/m3 and dynamic viscosity is ηf = 3.16 P a · s. The valve
density is set to 7850 kg/m3 and the rubber density to 1200 kg/m3 . We use ZIEB on the input
section (see Fig. 14) with a virtual piston velocity of vp = 6 m/s, and we apply a constant pressure
of Pout = 1.04e+04 P a at the output section (see Fig. 14).
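The analytical lubrication solution used in this check can be evaluated directly; the short sketch below uses the constants quoted in the text and in the caption of Fig. 13 purely for illustration, and the variable names are placeholders.

    import numpy as np

    def lubrication_profile(r, A, C, eta_f, h):
        """Radial lubrication solution in the rubber channel (from Eq. 32):
        mid-gap velocity v(r, h/2) = (A/r)(h/2)**2 and pressure p(r) = -2*A*eta_f*ln(r) + C."""
        v_mid = (A / r) * (h / 2.0) ** 2
        p = -2.0 * A * eta_f * np.log(r) + C
        return v_mid, p

    # Illustrative evaluation with the constants reported in the text and Fig. 13
    eta_f, A, C = 3.16, 1.59e4, -2.33e6
    Rin, Rout, h = 0.0065, 0.0110, 0.0026            # so that h/2 is about 0.0013 m as in Fig. 13
    dp = -2.0 * A * eta_f * np.log(Rout / Rin)       # predicted pressure difference between Rin and Rout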
3 Examples
We present numerical examples utilizing the valve geometries presented in Fig. 14. The two systems
are used to mimic a safety and pressure valve. The different parts and their corresponding names
are presented in Fig. 14. Their dimensions are given in Appendix A.
The valve is spring-loaded from above. Initially, the valve is closed due to force from the spring
(and prescribed overpressure above in the pressure-valve geometry). The spring’s force-displacement
relation is chosen to be non-linear and is expressed as follows: Fs = kv δ0 + kv (2a/π) tan(π δ/(2a)); kv is
the stiffness, δ0 = 0.092 m is the preload spring displacement, δ is the load displacement and a =
0.0045 m is the spring maximum compression. The fluid region above the valve begins unpressurized
in the safety-valve case and pressurized by the selected Pout in the pressure-valve case. For each
simulation, both for safety and pressure valve, we start by introducing fluid through the input
section beneath valve domain using the ZIEB technique at constant virtual piston velocity vp . When
the beneath valve domain reaches a large enough pressure, it will overcome the spring-load (and
possible top pressure) on the valve to open it. We continue displacing the virtual piston and then
turn off vp when we assume that the flow has reached a steady state. We then wait until the valve
Fig. 12 Fluid flow and pressure in the rubber channel. On the left we show the fluid pressure and on the right, the
velocity magnitude.
Fig. 13 Numerical versus theoretical comparison for pressure (left) and velocity (right) in the rubber channel. A ∼
1.59e+04 s−1 , y = h/2 ' 0.0013 m, C ∼ −2.33e+06 P a.
is closed. We check the behavior of the valve systems with and without particles, but the closure phenomenon is only investigated for the case where we have particles.
In the presence of particles, we start with a constant packing fraction in the domain beneath the
valve corresponding to the imposed packing fraction φ at the input section. During the simulation,
each particle that exits the simulated domain is removed and each new particle introduced at the
input section starts with the fluid input velocity as its initial velocity. To control the input particle
flow, we insert grains to ensure that the input particle packing fraction is constant within a width
of 0.00425 m from the interface with the virtual domain. We introduce/remove particles in the
simulated system every 50 steps.
Physical parameters involved in the valve problem are displayed in Tab. 2, which include the geometry of the valve system (e.g. safety-valve or pressure-valve), each of which has fixed size dimensions (see Appendix). For all tests, we fix the solid density of particles ρs = 2500 kg/m3, (pressure-free) fluid density ρf = 1000 kg/m3, (small-strain) rubber shear modulus Gr = 3.0e+05 Pa and bulk modulus Kr = 8.0e+06 Pa, rubber damping ν = 80 N·s/m, rubber+valve mass Mvr = 9.2e-03 kg, and valve spring stiffness kv = 625 N/m.
Fig. 14 Illustration of different parts of the safety and pressure valve.
Since mono-disperse particles may induce a crystallisation phenomenon at higher packing, a random size distribution is used, uniform between dmin and dmax. The distribution may be described by a mean particle size d = (dmin + dmax)/2 and polydispersity ∆d = dmax/dmin.
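These two relations can be inverted to generate the particle size distribution from (d, ∆d); a minimal sketch is given below, with the control values d = 0.8e-3 m and ∆d = 1.2 used only as an example and the function name chosen for illustration.

    import numpy as np

    def sample_diameters(d_mean, poly, n, rng=None):
        """Draw n particle diameters uniformly between dmin and dmax, given
        d = (dmin + dmax)/2 and polydispersity Delta_d = dmax/dmin."""
        rng = rng or np.random.default_rng()
        d_min = 2.0 * d_mean / (1.0 + poly)   # from d = (dmin + dmax)/2 with dmax = poly*dmin
        d_max = poly * d_min
        return rng.uniform(d_min, d_max, size=n)

    # Example: the control set d = 0.8e-3 m, Delta_d = 1.2 used in most runs
    diams = sample_diameters(0.8e-3, 1.2, 1000)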
Table 2 Parameters.
Particles: mean diameter [d], solid density [ρs], polydispersity [∆d], input packing fraction [φ]
Valve + Rubber: mass [Mvr], rubber shear modulus [Gr], rubber bulk modulus [Kr], valve spring stiffness [kv], rubber damping [ν]
Fluid: dynamic viscosity [ηf], pressure-free density [ρf], output pressure [Pout]
System: system geometry [geo], system size [r], piston speed [vp]
To generalize the valve dynamics and flow behavior, we choose the natural units of our system to be the input section radius [L] = r for length, time [T] = √(ρf r³/kv), and mass [M] = ρf r³. From these units, a dimensionless parametric space is represented by:
{ geo, Pout/kv, vp √(ρf r/kv), d/r, ∆d, φ, ρs/ρf, ηf/√(kv ρf r), Gr r/kv, Kr r/kv, Mvr/(ρf r³) }
where geo is the system geometry. Taking into account the fixed parameters, the dimensionless parametric space we will explore is described by the following groups:
{ geo, Pout/kv, vp √(ρf r/kv), d/r, ∆d, φ, ηf/√(kv ρf r) }
The second group is only relevant to pressure valves and the latter five can be independently controlled through the selection of vp , d, ∆d, φ, and ηf .
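For reference, these groups can be tabulated for a given run as in the sketch below. The input-section radius r is fixed by the geometry (Appendix A) and is not quoted in this section, so the value used in the example is only an assumption for illustration; all names are placeholders.

    import numpy as np

    def dimensionless_groups(Pout, vp, d, poly, phi, eta_f, rho_f, kv, r):
        """Dimensionless parametric groups characterizing a run (r must come from the geometry)."""
        return {
            "Pout/kv": Pout / kv,
            "vp*sqrt(rho_f*r/kv)": vp * np.sqrt(rho_f * r / kv),
            "d/r": d / r,
            "polydispersity": poly,
            "packing fraction": phi,
            "eta_f/sqrt(kv*rho_f*r)": eta_f / np.sqrt(kv * rho_f * r),
        }

    # Example with the pressure-valve control set; r = 5e-3 m is only an assumed value
    groups = dimensionless_groups(Pout=10416, vp=6.0, d=0.8e-3, poly=1.2, phi=0.053,
                                  eta_f=1e-3, rho_f=1000.0, kv=625.0, r=5e-3)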
The parameters for all tests are summarized in Tab. 3 and Tab. 4. As indicated in the Tables, the
tests are conducted in order to observe the dependence of the valve behavior on each of ηf , vp , d, ∆d,
and φ independently; for each variable, a sequence of tests is performed where it is varied over a
range while the others are held fixed.
The contact model (DEM solver and DEM-Rubber coupling), fluid turbulence model (LES) and
numerical parameters are displayed in Tab. 5.
Table 3 Range of parameters investigated for safety and pressure valve simulations without particles. All units are in [kg], [m], [s].
geo = safety valve:
  ηf varied from 1e-03 to 3.16e+01 (fixed: Pout = 0, vp = 12.5, d = 0, ∆d = 0, φ = 0)
  vp varied from 1 to 12.5 (fixed: Pout = 0, d = 0, ∆d = 0, φ = 0, ηf = 1e-03)
geo = pressure valve:
  ηf varied from 1e-03 to 3.16e+01 (fixed: Pout = 10416, vp = 6, d = 0, ∆d = 0, φ = 0)
  vp varied from 1 to 12 (fixed: Pout = 10416, d = 0, ∆d = 0, φ = 0, ηf = 1e-03)
Table 4 Range of parameters investigated for safety and pressure valve for simulations with particles. All units are in [kg], [m], [s].
geo = safety valve:
  vp varied from 1 to 12.5 (fixed: Pout = 0, d = 0.8e-03, ∆d = 1.2, φ = 0.084, ηf = 1e-03)
  d varied from 0.8e-03 to 1.5e-03 (fixed: Pout = 0, vp = 12.5, ∆d = 1.2, φ = 0.053, ηf = 1e-03)
  ∆d varied from 1.1 to 1.5 (fixed: Pout = 0, vp = 12.5, d = 0.8e-03, φ = 0.053, ηf = 1e-03)
  φ varied from 0.026 to 0.128 (fixed: Pout = 0, vp = 12.5, ∆d = 1.2, d = 0.8e-03, ηf = 1e-03)
geo = pressure valve:
  vp varied from 1 to 12 (fixed: Pout = 10416, d = 0.8e-03, ∆d = 1.2, φ = 0.067, ηf = 1e-03)
  d varied from 0.8e-03 to 1.4e-03 (fixed: Pout = 10416, vp = 6, ∆d = 1.2, φ = 0.053, ηf = 1e-03)
  ∆d varied from 1.1 to 1.5 (fixed: Pout = 10416, vp = 6, d = 0.8e-03, φ = 0.053, ηf = 1e-03)
  φ varied from 0.026 to 0.117 (fixed: Pout = 10416, vp = 6, ∆d = 1.2, d = 0.8e-03, ηf = 1e-03)
3.1 Pressure valve lift behavior
In this section, we discuss the effect of fluid viscosity, piston velocity, and input packing fraction on
the opening, steady flow, and closure behavior of the valve for a pressure valve configuration. As
shown in Tab. 3 and Tab. 4, when varying a parameter of interest, we fix the others to a control
set taken from vp = 6 m/s, ηf = 0.001 P a · s, φ = 0.053 and ∆d = 1.2. We will focus our analysis
on the pressure valve and will give a brief analysis of the results for the safety valve in Sec. 3.2.
Valve opening phase
During the opening phase, we observe a delay between the initiation of the piston and the initiation
of valve opening. The effect is not due to packing fraction (Fig. 20, left), polydispersity (Fig. 19, left) or mean particle diameter (Fig. 19, right); rather, the delay increases with fluid viscosity (Fig. 15, left) and decreases as piston velocity increases (Fig. 15, right, simulation without particles; Fig. 20, right, simulation with particles). The lack of dependence of the delay time on particle
inputs is because the mean particle diameter is bigger than the initial valve lift so it does not modify
Table 5 The contact parameters (DEM solver and DEM-Rubber coupling), fluid turbulence model (LES) and numerical parameters. All units are in [kg], [m], [s].
Contact stiffness (normal / tangential): seat-valve 1e+11 / 0; seat-rubber 1e+06 / 0; seat-particle 1e+07 / 0.8e+07; valve-particle 1e+07 / 0.8e+07; rubber-particle 1e+05 / 0.8e+05; particle-particle 1e+07 / 0.8e+07
Friction coefficient: seat-valve 0; seat-rubber 0; seat-particle 0.4; valve-particle 0.4; rubber-particle 0.4; particle-particle 0.4
Contact damping (normal / tangential): seat-valve 1e+03 / 0; seat-rubber 0 / 0; seat-particle 3 / 0; valve-particle 3 / 0; rubber-particle 0 / 0; particle-particle 3 / 0
Rubber and numerical parameters: Smagorinsky constant 0.4; lattice speed 2.5e+03; fluid space discretization 3.0e-04; DEM time step 5.0e-08; numerical rubber convergence (η) 1.0e+01
fluid behavior in the valve-rubber channel (see schematic in Fig. 16). The more dominant effect is
negative suction pressure, which develops in the valve-rubber channel as the valve initially displaces
upward, as shown in Fig. 17. The delay increases with increasing viscosity because this increases
the suction force due to lubrication effects in the narrow valve-rubber channel. At the same time, in
the beneath valve region where the fluid domain is not thin, the pressure is mostly independent of
viscosity as we observe in Fig. 18 (left) where before the first peak of the valve lift (Fig. 15 (left))
the pressure evolution is the same.
Fig. 15 Valve lift as function of time for different fluid viscosity (left) and different virtual piston velocity (right)
(without particles).
Increasing the piston velocity reduces the delay because the suction pressure in the valve-rubber
channel is balanced by faster growth of the beneath valve pressure. Figure 18 (right) shows that
during the pressurization phase, the pressure slope increases with piston velocity, as expected.
Quasi-steady open valve phase
The valve displacement begins to approach a steady value (with or without oscillations) after the
first peak in valve lift (t ' 0.010 s) and ends when the piston motion is stopped at t = 0.025 s.
As shown in Fig. 15, the steady lift position, for the simulations without particles, increases with
fluid viscosity and piston velocity. For lower viscosity, the valve has an ‘underdamped’ response
characterized by decaying oscillations, whereas for larger viscosities, an ‘overdamped’ response can
be seen (see Fig. 15 (left)). The presence of particles can modify the valve lift behavior. Figure 19
Fig. 16 Rubber channel, valve channel, and beneath valve domain configuration.
Fig. 17 Several snapshots showing valve and rubber channel pressure evolution for ηf = 31.6 Pa·s and vp = 6 m/s
(without particles).
Fig. 18 Beneath valve pressure as a function of time for different fluid viscosity (left) and different piston velocity
(right) (without particles).
(left) shows that virtually no effect on valve lift is observed for different polydispersity in the range
we tested. Increasing the mean diameter of particles, Fig. 19 (right), increases the steady lift position
but this appears to be primarily a particle size effect; after the initial upward motion of the valve, the
valve descends downward and is stopped by a monolayer of mobile particles in the rubber channel,
which holds the valve position at roughly 1d high. Further tests would be needed at higher fixed
piston speeds to determine if the valve positioning depends more robustly on d.
Tests involving variation of the packing fraction or the piston velocity, Fig. 20, show a non-trivial
valve lift behavior in which three approximate regimes can be identified:
1. A lower input particle flux behavior: ϕ < ϕl .
2. A transition input particle flux behavior: ϕl ≤ ϕ ≤ ϕu .
3. A higher input particle flux behavior: ϕu < ϕ.
where the input flux is defined by ϕ = φ vp .
Fig. 19 Valve lift as function of time for different polydispersity (left) and different mean grain diameter (right).
Fig. 20 Valve lift as function of time for different packing fraction (left) and different piston velocity (right).
In the first regime, we observe a simple particle suspension flow through the valve and rubber
channel with a quasi-constant lift as shown in Fig. 20. The beneath valve packing fraction also shows a quasi-constant value (Fig. 21). This regime is observed for ϕ < ϕl ' 0.405 (from Fig. 20).
The second regime is characterized by unsteady motion of the valve that oscillates between
notably disparate high and low positions. This regime, for the range of parameters we tested, appears
to be limited by ϕ > ϕl ' 0.405 and ϕ < ϕu ' 0.611. To better understand the valve lift behavior
in this regime let us analyze the lift for φ = 0.084. Figure 22 and Fig. 23 show the time dependence
of the lift, beneath valve packing fraction, and beneath valve pressure. Notice that the peaks and
valleys of the beneath valve pressure and packing fraction are relatively in sync. The peaks in the lift
plot are delayed with respect to those of the pressure and packing fraction. This can be understood
as follows: when the valve position is low, particles aggregate under the valve and as they do so, they
Fig. 21 Beneath valve packing fraction as function of time for different input packing fraction.
form something of a ‘plug’ that causes the pressure beneath the valve to build up. When the pressure
is sufficiently high, the valve will open up to release the pressure, which causes the backed-up grains
beneath the valve to escape through the open valve. When this happens it causes the beneath valve
packing fraction and the pressure to decrease, which immediately allows the valve to recover to its
initial lift (Fig. 22, Fig. 23). This phenomena can be distinguished from the lower input particle flux
regime, in that in the lower flux case, the flux of grains into the system is not enough to back-up
sufficiently under the valve and induce a pressure build-up.
Fig. 22 Valve lift (left) and beneath valve packing fraction (right) for φ = 0.084 as function of time.
Figure 24 shows several snapshots of the beneath valve region for φ = 0.084 (from Fig. 20 (left)).
According to Fig. 22 (right), the first two packing fraction peaks correspond to t = 0.0095 s and
t = 0.0132 s, and the first two valleys to t = 0.0112 s and t = 0.0145 s. The rest of the snapshots
correspond to the time between the peaks and the valley. Comparing the first packing fraction peak
and the first valve lift peak in Fig. 24, we observe a delay; the first peak in packing fraction occurs at
t = 0.0095 s, and the peak for valve lift occurs between t = 0.0105 s and t = 0.0112 s. Using Fig. 22,
we find that the delay is ∼ 0.0014 s. The same delay is observed for all peaks and valleys. Contrary
to the valve lift, between the pressure and packing fraction peak/valleys, no delay is observed. This
is in agreement with the lift behavior Fig. 20 where the valve lift is a consequence of the packing
fraction/pressure evolution.
Fig. 23 Beneath valve pressure as function of time for φ = 0.084.
Fig. 24 Several snapshots showing particles beneath valve for φ = 0.084. According to Fig. 22 (right), the first two
packing fraction peaks correspond to t = 0.0095 s and t = 0.0132 s, and the first two valleys correspond to t = 0.0112 s
and t = 0.0145 s.
The third regime of valve behavior corresponds to a high particle flux such that the beneath
valve slurry develops a sustainably high pressure able to push and hold the valve at a maximal lift.
This is observed for ϕ > ϕu ' 0.611 on Fig. 21 (left) from φ ' 0.101.
Out of the three phases, one outlier phenomenon is observed for vp = 1 m/s. In fact, Fig. 20
(right) shows that when the valve is opened, from t = 0.013 s to t = 0.025 s, we have a constant but
small lift. During this phase, it turns out that a portion of the particles are stuck at the entrance of
the valve channel but without entering fully into the channel. The force from these stuck particles
and fluid pressure is enough to hold open the valve at a constant lift.
Valve closing mechanisms
In this section, we focus on the closure phase of the valve simulations with particles in order to
investigate the effect particles have on the final lift of the valve and to study the degree to which
particles become stuck in the valve-rubber channel, which could have detrimental effects on valve
performance in practice. Since the rubber plays the role of a seal between valve and seat, preventing
grain trapping during closure could be a key design goal in such systems.
The closure phase starts when the piston velocity vp is turned off. In the range of the parametric
study we investigated, the final lift is mostly affected by the mean particle diameter d as shown in
Fig. 26 (for d and ∆d) and Fig. 27 (for φ and vp ). This behavior is simply explained by the fact
that the closing valve squeezes out all but a monolayer of particles, which become stuck in the valve
channel as illustrated in Fig. 25 where a zoom-in on the valve channel shows how geometrically the
lift depends on the stuck particle diameter d. Using the size of particles and the geometry of the valve
channel Fig. 25, we calculated the envelope giving the maximum lift (upper bound) and minimum
lift (lower bound) which should be obtained if only big particles (with dmax ) or small particles (with
dmin ) were stuck. The two bounds are given by:
lift = d∗ / cos(θ)   (33)
where d∗ is either dmax or dmin and θ is the valve channel inclination as shown in Fig. 25.
θ = 29.53o is obtained from Fig. 36 (DETAIL B). As expected, the final lift is always between these
two limits.
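These bounds are straightforward to evaluate; the sketch below computes them from the mean diameter and polydispersity (using dmin = 2d/(1 + ∆d), dmax = ∆d dmin) with θ = 29.53°. The function name and the example values are illustrative only.

    import numpy as np

    def lift_bounds(d_mean, poly, theta_deg=29.53):
        """Envelope of the final valve lift from a stuck monolayer (Eq. 33): lift = d*/cos(theta)."""
        d_min = 2.0 * d_mean / (1.0 + poly)
        d_max = poly * d_min
        c = np.cos(np.radians(theta_deg))
        return d_min / c, d_max / c

    # Example for the control particle size d = 0.8e-3 m and polydispersity 1.2
    lift_lo, lift_hi = lift_bounds(0.8e-3, 1.2)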
Fig. 25 Final lift configuration; relation to stuck particle size.
Figure 27 (right) for vp = 1 m/s shows a lift of ∼ 0.00048 ± 0.00001 m which is less than the
minimum lift (liftmin = 0.00155±0.000001 m) for the smallest particle diameter in the polydispersity
to travel through the valve. In fact here, as discussed previously in Section Quasi-steady open valve
phase for the effect of vp on the lift, no particle flow is observed in the valve/rubber channel since
the lift is less than one particle diameter. Therefore, when vp is turned off, the rubber descent is
unimpeded by grains. Once the rubber makes contact with the valve seat, the fluid beneath the valve
cannot escape and therefore a pressure and residual lift of ∼ 0.00048 ± 0.00001 m remains, which is
the lift when the rubber is in contact with the seat with zero deformation.
Fig. 26 Final lift for different d (left) and ∆d (right).
Fig. 27 Final lift for different φ (left) and vp (right).
We give here a first quantitative view on which variables — among dmax , ∆d, φ and vp — matter
most in affecting the quantity of particles that get stuck beneath the valve during closure. Figure 28
and Fig. 29 show the projected area of particles on the seat (beneath valve-rubber area) normalized
by the beneath valve-rubber area, i.e. the ‘normalized area of stuck particles’ (NASP). On these four
figures, we find that φ matters most whereas there is little variation due to changes in dmax , vp and
∆d.
We can assume the packing fraction in the open valve channel, during the quasi-steady valve
phase, is bounded below by φ. As the valve descends during closure, particles are squeezed out and,
as a further lower bound, we can approximate that a single monolayer at the same packing fraction φ
remains. This implies the total number of stuck particles in the rubber-valve channel is approximated
by N_stuck ≈ lift · A · φ / (π d³/6), where the final lift is given by lift ≈ d/cos(θ), and A is the projected rubber-valve area. The projected total particle area is S_stuck ≈ [d/cos(θ)] · A · φ/(π d³/6) · (π d²/4) ≈ (3/(2 cos(θ))) A φ. Normalizing S_stuck by A, we obtain:
NASP ≈ (3/(2 cos(θ))) φ = 1.72 φ   (34)
The above lower bound formula assumes that the final packing fraction of grains stuck in the
valve is greater than the input value, φ. In our tests we have observed that this is always true except
for the one outlier case mentioned previously (vp = 1 m/s, Fig. 20 (right)) where no particles travel
through the channel because the beneath valve fluid pressure is less than the necessary pressure to
open the valve to a lift greater than dmin / cos(θ). This case is observed in Fig. 29 (left) (vp = 1 m/s)
where the normalized stuck area is zero.
Fig. 28 NASP for different d (left) and ∆d (right).
Fig. 29 NASP for different φ (left) and vp (right).
If particles get stuck, they can also potentially break depending on the force of contact with the
valve and seat, and grain properties such as the particle size and strength. A loose approximation of
the possible volume of debris created can be made by assuming stuck particles all break. This may
be expressed in terms of the final lift and the NASP, and then normalized by d3 , giving
Debris = lift · A · NASP/d3
(35)
where A is the valve-rubber projected area.
Supposing the lift obeys Eq. 33 where d∗ = d and NASP obeys Eq. 34, we suggest the Debris
variable is approximated by:
Debris = (3/(2 cos²(θ))) (A/d²) φ   (36)
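The two lower-bound estimates, Eq. 34 for the NASP and Eq. 36 for the Debris variable, can be evaluated together as in the short sketch below. The projected rubber-valve area A is a geometry-dependent input that is not quoted in this section, so it is left as a free argument; the function name is illustrative only.

    import numpy as np

    def stuck_particle_estimates(phi, d, A, theta_deg=29.53):
        """Lower-bound closure estimates: NASP (Eq. 34) and the Debris variable (Eq. 36)."""
        c = np.cos(np.radians(theta_deg))
        nasp = 1.5 / c * phi                    # Eq. 34, about 1.72*phi for theta = 29.53 deg
        debris = 1.5 / c**2 * A / d**2 * phi    # Eq. 36
        return nasp, debris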
Figure 30 (left) shows the Debris as a function of d and Fig. 30 (right) as a function of φ. We show
comparisons to our approximate formula (Eq. 36).
Fig. 30 Debris variation as a function of d (left) and φ (right). The red dashed line is the lower bound analytical solution (Eq. 36).
3.2 Safety valve lift behavior
Many of the behaviors in the safety valve configuration mimic those of the pressure valve. Here
we summarize the safety valve data. The input parameters used are summarized in Tab. 3 for the simulations without particles and Tab. 4 for the simulations with particles.
Figure 31 shows the time evolution of the valve lift for different fluid viscosity (left) and for
different piston velocity vp ; both simulations are without particles. The delay at the beginning of
the simulation is less marked because of the absence of the above valve pressure. The steady lift
shows many of the same behaviors as the pressure valve, except that for some non-zero vp (1 m/s and 2.5 m/s) the valve may never open, since the open end beneath the valve can prevent the beneath-valve pressure from building up enough to overcome the valve's spring force and open it.
Fig. 31 Valve lift as a function of time for different fluid viscosity (left) and different piston velocity (right) (without
particles).
The three observed regimes for the pressure valve during the steady lift are also evidenced here
as shown in Fig. 32 for different φ and vp . The ϕl and ϕu need to be calculated here taking into
account the outgoing flux through the modular closure of the safety valve system. This then becomes:
ϕ = ϕin − ϕout where ϕin is calculated from the input region and ϕout is calculated from the flux
of outgoing particles through both outlet sections. Figure 33 (left) shows the time evolution of the
valve lift for different ∆d where a small deviation in different values of ∆d is observed.
Fig. 32 Valve lift as function of time for different packing fraction (left) and different piston velocity (right).
As shown in Fig. 25, the envelope of the final lift can be predicted by lift = d∗/cos(θ) with dmin ≤ d∗ ≤ dmax; however, if the stuck particles are not entirely lodged in the valve channel, this prediction
is wrong. This is the case in Fig. 33 (right) for d = 0.0014 m where the final lift is less than the
observed lift for d = 0.0011 m. Figure 34 shows that in fact no particles are fully stuck in the valve
channel but there are some stuck in the rubber channel, as indicated by a non-zero normal force fn .
We see in Fig. 34 that even though contact exists between a partially stuck particle and the valve
channel region, the total force coming from the valve channel is close to ∼ 0.1 N whereas the total
force observed in the rubber channel is close to ∼ 10 N ; this means that the main force balancing
the valve spring force comes from the rubber channel and therefore, the final lift is overpredicted by
the previous formula, lift = d∗ / cos(θ).
Fig. 33 Valve lift as a function of time for different polydispersity (left) and different mean diameter (right).
Fig. 34 Final valve lift configuration for d = 0.0014 m showing a (partially) stuck particle in the valve entry and the normal forces, fn, supported by all particles.
The obtained results for both valves suggest that further investigation with a broader range of particle input parameters may be needed to better compare the safety and the pressure valve geometries.
4 Conclusion
In this paper, we have presented a detailed implementation of a 3D DEM-LBM-Rubber coupling
in two complex valve geometries. With different focused tests, we have validated the implemented
methods. The coupling of the three types of materials shows a good agreement with our physical
predictions. We also have demonstrated the validity of the ZIEB technique, which allowed us to run
simulations without having to simulate the entire domain.
Simulations performed without particles give realistic behaviors. We observe a lubrication effect
causing suction that delays the opening of the valve after piston motion commences. We find that
increasing fluid viscosity increasingly overdamps the valve lift, reducing or removing the valve oscillations in the quasi-steady regime. We have validated a Stokesian pressure drop across the valve
channel when the fluid being driven through is sufficiently viscous.
In the simulations performed with particles in the pressure valve geometry, we found, for the
steady lift portion, three qualitative lift behaviors. The different regimes appear to be governed by
the total particle flux, which combines the imposed piston velocity vp and packing fraction φ through
a flux ϕ = vp φ. The flux variable appears to indicate when the open-valve dynamics transition from
a steady value of small lift, to an oscillating value that traverses between high and low positions, to
a steady high lift value. Further investigation may be needed to calibrate the robustness of the ϕ
variable in determining qualitative valve dynamics.
The valve closure was also investigated, which occurs when the (virtual) piston motion is stopped.
The pressure valve shows a dependency of the final lift on particle size and we give a prediction of
the lift envelope based on the minimum and maximum particle sizes in the polydispersity. We show
that if the maximum lift during the open phase does not exceed dmax / cos(θ), the final lift at closure
can be less than dmin / cos(θ) because the particles are not entirely stuck in the valve channel.
Lastly we demonstrate the robustness of the approach by switching to a safety valve configuration,
in which the above-valve region is not pressurized and the below-valve region has another exit.
Similar qualitative behaviors are observed as compared to the pressure valve, both with and without
particles, albeit at different specific values of the piston speed and input particle packing fraction.
Acknowledgements This work was supported by ARO grant W911 NF-15-1-0598 and Schlumberger Technology
Corporation. PM and KK would like to thank J.-Y. Delenne (INRA, UMR IATE Montpellier) for his helpful and
useful discussions on DEM-LBM coupling and Sachith Dunatunga for his help in streamlining the numerics. Conflict
of Interest: The authors declare that they have no conflict of interest.
A Safety valve, pressure valve, and their dimensions
In this appendix, we give the safety and pressure valve configurations and their dimensions. Units are millimeters (mm) and degrees (°).
Fig. 35 Safety and pressure valve configuration.
Fig. 36 Frame dimension.
Fig. 37 Valve dimension.
Fig. 38 Rubber dimension.
Fig. 39 Pressure Closure dimension.
Fig. 40 Safety Closure dimension.
References
1. M. Bouzidi, M. Firdaouss, P. Lallemand, Physics of Fluids 13, 3452 (2001)
2. Z. Guo, B. Shi, N. Wang, Journal of Computational Physics 165, 288 (2000)
3. Y.T. Feng, K. Han, D.R.J. Owen, International Journal For Numerical Methods In Engineering 72, 1111 (2007)
4. K. Han, Y.T. Feng, D.R.J. Owen, Computers & Structures 85, 1080 (2007)
5. K. Iglberger, N. Thürey, U. Rüde, Computers & Mathematics 55, 1461 (2008)
6. S. Hou, J. Sterling, S. Chen, G. Doolen, Pattern Formation and Lattice Gas Automata 6, 149 (1996)
7. C. Voivret, F. Radjaï, J.Y. Delenne, M.S. El Youssoufi, Physical Review E 76, 021301 (2007)
8. P.A. Cundall, The measurement and analysis of accelerations in rock slopes. Ph.D. thesis, Imperial College London (University of London) (1971)
9. M.P. Allen, D.J. Tildesley, Computer Simulation of Liquids (Oxford University Press, Oxford, 1987)
10. M. Jean, Computer Methods in Applied Mechanic and Engineering 177, 235 (1999)
11. J.J. Moreau, in Powders & Grains 93 (A. A. Balkema, Rotterdam, 1993), p. 227
12. S. Luding, J. Duran, E. Clément, J. Rajchenbach, in Proc. of the 5th Chemical Engineering World Congress (AIChE, San Diego, 1996)
13. S. Luding, Granular Matter 10, 235 (2008)
14. F. Radjai, V. Richefeu, Mechanics of Materials 41, 715 (2009)
15. N.V. Brilliantov, T. Pöschel, Philos Transact A Math Phys Eng Sci 360, 415 (2002)
16. R. Hart, P. Cundall, J. Lemos, International Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts 25, 117 (1988)
17. W. O. R., Mechanics of Materials 16, 239 (1993)
18. Z.G. Feng, E.E. Michaelides, Journal of Computational Physics 195, 602 (2004)
19. Z. Yu, A. Wachs, Journal of Non-Newtonian Fluid Mechanics 145, 78 (2007)
20. X. He, L.S. Luo, Physical Review E 55, R6333 (1997)
21. S. Chapman, T. Cowling, The mathematical theory of nonuniform gases (Cambridge University Press, 1970)
22. A. Satoh, Introduction to the practice of molecular simulation (Elsevier Insights, 2011)
23. P. Bathnagar, E. Gross, M. Krook, Physical Review E 94, 511 (1954)
24. S. Chen, G. Doolen, Annual Review of Fluid Mechanics 30, 329 (1998)
25. X. He, L.S. Luo, J. Stat. Phys. 88, 927 (1997)
26. H. Yu, S.S. Girimaji, L.S. Luo, Journal of Computational Physics 209, 599 (2005)
27. R.H. Kraichnan, Journal of the Atmospheric Sciences 33, 1521 (1976)
28. J. Smaqorinsky, Mon. Wea. Rev., USWB 91, 99 (1963)
29. P. Moin, J. Kim, Journal of Fluid Mechanics 118, 341 (1982)
30. S. Hou, J. Sterling, S. Chen, G. Doolen, Pattern Formation and Lattice Gas Automata 6, 151 (1996)
31. D.Z. Yu, R.W. Mei, L.S. Luo, W. Shyy, Progress In Aerospace Sciences 39, 329 (2003)
32. L. Huabing, L. Xiaoyan, F. Haiping, Q. Yuehong, Physical Review E 70, 026701 (2004)
33. C. Peskin, Journal of Computational Physics 10(2), 252 (1972)
34. D. Wan, S. Turek, International Journal for Numerical Methods in Fluids 51, 531 (2006)
35. D. Wan, S. Turek, Journal of Computational and Applied Mathematics 203, 561 (2007)
36. R.V. A. J. C. Ladd, Journal of Statistical Physics 104(516) (2001)
37. P. Lallemand, L.S. Luo, Journal of Computational Physics 184, 1191 (2003)
38. M. Germano, U. Piomelli, P. Moin, W.H. Cabot, Physics of Fluids A: Fluid Dynamics 3(7), 1760 (1991)
A Predictive Account of Café Wall Illusions
Using a Quantitative Model
Nasim Nematzadeh1,2¶, David M.W. Powers1¶
1 College of Science and Engineering, Flinders University, Adelaide, Australia
2 Faculty of Mechatronics, Dept of Science and Engineering, Karaj Branch, Islamic Azad University (KIAU), Karaj-Alborz, Iran
nasim.nematzadeh, david.powers @flinders.edu.au
Abstract
This paper explores the tilt illusion effect in the Café Wall pattern using a classical Gaussian Receptive Field model. In this illusion, the mortar lines are misperceived as diverging or converging rather
than horizontal. We examine the capability of a simple bioplausible filtering model to recognize different degrees of tilt in the Café Wall illusion based on different characteristics of the pattern. Our
study employed a Difference of Gaussians model of retinal to cortical “ON” center and/or “OFF” center receptive fields. A wide range of parameters of the stimulus, for example mortar thickness, luminance, tiles contrast, phase of the tile displacement, have been studied for their effects on the inducing
tilt in the Café Wall illusion. Our model constructs an edge map representation at multiple scales that
reveals tilt cues and clues involved in the illusory perception of the Café Wall pattern. We show that our model can not only detect the tilt in this pattern, but also predict the strength of the illusion and quantify the degree of tilt. For the first time, quantitative predictions of a model are reported for this stimulus considering different characteristics of the pattern. The results of our simulations are consistent with previous psychophysical findings across the full range of Café Wall variations tested. Our results also suggest that the Difference of Gaussians mechanism is at the heart of the effects explained by, and the mechanisms proposed by, the Irradiation, Brightness Induction, and
Bandpass Filtering models.
Keywords: Visual perception; Biological neural networks; Geometrical illusions; Tilt effects; Café
Wall illusion; Difference of Gaussians; Perceptual grouping; Classical Receptive Fields; Retinal
processing models; Retinal Ganglion Cells; Lateral Inhibition
1 Introduction
Visual illusions have the potential to give insight into the biology of vision [1, 2, 3]. They further
open windows for solving engineering and application-specific problems relevant to image processing tasks, including edge detection and feature selection, as we seek to attain human-level performance. Many illusions have been used to evaluate such effects, notably Café Wall [4, 5, 6] and Mach
Bands [7, 8].
Café Wall is a member of the Twisted Cord family of Geometrical Illusions, and has received some fairly convincing explanations [4, 9, 10]. The Munsterberg version of this pattern is a chessboard with a very thin dark separator between shifted rows of black and white tiles, giving the illusion of tilted contours dividing the rows. The Café Wall pattern (Fig 1-center) has grey stripes interpreted as mortar lines dividing shifted rows of black and white tiles, inducing a perception of diverging and converging mortar lines. Morgan and Moulden suggest the mortar lines are critical for the strength of the illusion and that the illusion has its highest strength when the luminance of the mortar is intermediate relative to the luminance of the tiles [6]. We consider the whole range of luminance for mortar lines from Black to White as different variations of the Café Wall illusion. Other variations include Hollow Square [9, 11], Fraser Pattern, Spiral Café Wall (Fig 1-left), Spiral Fraser; and even variations of the Zollner Illusion (Fig 1-right) [5, 11, 12] where small parallel line segments (inducing horizontal/vertical lines) on parallel lines result in perceiving these lines as being convergent or divergent. The Café Wall illusion seems to originate from the inducing effect of Twisted Cord [13] elements and then integration of these into an extended continuous contour along the whole mortar line [14, 15].
Fig 1. Left: Spiral Café Wall [16], Center: Café Wall, Right: Zollner [17].
Over the last few decades, many low-level and high-level models have been proposed with the aim
of explaining various Geometrical/Tilt Illusions. However, there are many issues unaddressed across
the range of the Geometrical Illusions and their underlying percepts. Especially, modeling the illusion
effect as it is perceived is a challenging task in Computer Vision models. For the Café Wall illusion
and its variations, some of the explanations are based on “high-level” models [4, 16], but many rely
on “low-level bioplausible” models of simple filtering techniques [6, 10].
A low-level explanatory approach [18-22] employed in this study is a common bioderived model of retinal ganglion cell (RGC) responses to the stimulus. The model represents the simple cell responses using an edge map representation at multiple scales derived from Differences of Gaussians (DoG). This is an effective filtering technique for identifying edges [23, 24] in the early stages of visual processing. Symmetrical DoGs at multiple scales have been used for spatial filtering to generate edge maps, modelling ON-center OFF-surround receptive fields (RFs). Our explanation of tilt in Tile Illusions [18, 19, 20-22] connects to Marr’s theory of the primal sketch in general and his speculation of 3D structure above the edge map [25]. Further details about the underlying mechanism of our model, which arises from known or theorized mechanisms of early visual processing such as the retinal Point Spread Function (PSF) and the lateral inhibition effect of the RFs [26], are presented in Section 2.1. We now concentrate on different explanatory models for the Café Wall illusion.
One of the first explanations for the Café Wall illusion was the Irradiation Hypothesis [14], first introduced by Helmholtz [27], proposing that a compressive transform causes a shift of a high-contrast boundary towards the dark side of its inflection [6]. The limitation of Irradiation as a sole explanation of the Café Wall illusion is that the hypothesis does not explain why the illusion is enhanced by the mortar lines, nor why the pattern does not need to be of a very high contrast as explicitly required by Irradiation Theory. So this explanation is incomplete on its own.
Border Locking is another hypothesis used in high-level explanations for tilt illusions [4]. The appearance of small wedges with local asymmetry is based on the luminance contrast of the light/dark half tiles and their integration along the rows, so that they form long wedges. Gregory and Heard state that the effect depends critically on luminance and that it disappears if the mortar line is either brighter than the light tiles or dimmer than the dark ones [4]. We will show this in our experimental results.
Brightness Induction (BI) describes a change in the direction of perceived brightness, and is another explanation for the Café Wall illusion. The change of direction can be towards the surround (Brightness Assimilation) or away from it (Brightness Contrast) [28]. McCourt has shown that a periodic field of sine- or square-wave varying brightness induces brightness on a homogeneous narrow grey field positioned on top of it [5]. He generated a new version of the Café Wall based on replacing mortar lines with patches of light, dark and grey, and explained the tilt effect as Brightness Induction, noting that alternately-reversing diagonal line elements can produce the characteristic illusory convergence in the whole image.
Fraser [13] proposed another hypothesis, connecting the Twisted Cord to the Café Wall illusion without using filtering, but his hypothesis is not a complete explanation of the illusion. His idea was based on the fact that the tilt is perceived when two identically colored tiles joined by the mortar lines at their opposite corners create a Twisted Cord element. There are some other proposed explanations such as Fourier-based models, but it seems that the effect arises from small local wedge shapes rather than global features. Therefore, what is needed is local frequency analysis, which might bring some information out, rather than a global transformation [6].
Bandpass Spatial Filtering is argued to be performed by the retinal ganglion cells of ON-center and OFF-center receptive fields, being modeled in [6] by a simple discrete Laplacian filter mask viewed as an approximation to the Difference of Gaussians (noting that DoG gives the same results), and in [10] by a Difference of Gaussians viewed as “a very close approximation” to a Laplacian of Gaussian (LoG).
There are some more recent explanations for the Café Wall illusion. A psychological approach similar to [4] was proposed by Kitaoka as the Phenomenal Model [16, 29], which is an elemental composition approach to explain a wide variety of Tilt Illusions. Kitaoka and his team created many of these patterns using a heuristic tool [29] for graphical construction of illusory stimuli from elementary units. Their psychological approach is based on the “contrast polarities of a solid square and its adjacent line segment” to explain the Café Wall illusion. They have suggested that when dark/light squares are adjacent to a dark/light line segment, the direction of tilt acts to counteract the square angle; otherwise, the square angle appears to expand.
Fermuller and Malm proposed a Point and Line Estimation Model [30] for finding the displacement of points and lines as the elementary units in images and used their techniques to explain certain Geometrical Illusions. As applied to the Café Wall, this theory is based on categorizing the edges: if a mortar line borders both a dark tile and a bright tile, then the two edges move towards each other. On the other hand, mortar lines between two bright regions or two dark regions cause the edges to move away from each other. Their explanation is similar to [4, 29].
Arai proposed a computational nonlinear model for Contrast Enhancement in early visual processing based on a discrete maximal-overlap biorthogonal wavelet [31]. It is a multiscale model: after decomposition of the input signal into four sub-bands of approximations and horizontal, vertical, and diagonal details, a mathematical operator is applied which “enhances small values of detail if the energy of detail is small, and inhibits large values of detail if its energy is large” [31:pp.178]. They have investigated the output of their system on Brightness/Lightness Illusions such as the Hermann Grid, Chevreul, Mach Bands, Todorovic, and Café Wall.
The Irradiation Effect, involving enlargement of bright regions to encroach on their dark sides, was identified for the shifted chessboard pattern by Westheimer [12], who proposed a model including a combination of “light spread”, “compressive nonlinearity”, “center surround organization” and “border locations” to explain the Café Wall illusion. For the retinal spatial distribution of excitation, he first calculated the convolution of the image with the retinal point spread function, which was then passed through a nonlinearity (the Naka-Rushton equation), and finally through a spatial center-surround transformation with excitatory and inhibitory zones to get the final result.
Elimination of the tilt effect in the Café Wall illusion is possible by enlargement of the black tiles [12: Fig.11], which compensates for the border shifts and corner distortions. Similar elimination techniques can be used with additional superimposed dots positioned in the corners (black/white dots on white/black tiles), eliminating the corner effect [30: Fig 5; 29: Figs 5C, D]. The elimination of the illusion is also possible by replacing the black and white tiles with equiluminant but highly contrasting colour tiles [5, 32].
There are some previous explanations connecting ‘Brightness Induction’ illusions and ‘Geometrical’ illusions. The Café Wall pattern and its variations are regarded as ‘second-order’ tilt patterns [18, 19] involving ‘Brightness Assimilation and Contrast’ [33] as well as ‘Border shifts’ [12, 6]. In a thorough review of lightness, brightness, and transparency (LBT) [34] it has been noted that one of the most promising approaches for modelling brightness coding is multiscale filtering [35] in conjunction with contrast normalization. Lateral inhibition is a common feature of most of the above models, such as Irradiation [14], Brightness Induction [5], and Bandpass Filtering [6].
In Section 2 we provide a detailed examination of the role of multiscale representation in computational models of vision, with a main focus on evidence for multiscale filtering within the retina (Section 2.1). We then describe the characteristics of a simple classical model based on Differences and Laplacian of Gaussians (DoG, LoG), and utilize this bioplausible model to explain the Café Wall illusion qualitatively and quantitatively (Sections 2.2 and 2.3), followed by the details of the patterns investigated (Section 2.4). The experimental results on variations of the Café Wall illusion are then presented in Section 3. These patterns are created by modifying different characteristics of the Café Wall pattern, such as mortar luminance and thickness (width), tile contrast, and phase of tile displacement, for further investigation of the tilt effect. We then outline a testable framework for the design of psychophysical experiments capable of testing our predictions (Section 3.4). We conclude by highlighting the advantages and disadvantages of the model and outlining a roadmap of our ongoing and future work (Section 4).
2 Material and Method
2.1 Multiscale Representation in Vision and Vision Models
Psychophysical and physiological findings have suggested a multiscale transform model of the processing in the mammalian visual cortex as well as the early-stage processing within the retina [36-38]. Kuffler was the pioneer who recorded the activity of the retinal ganglion cells (RGCs) whose axons exit as the optic nerve, and found two types of center-surround cells [39]. Hubel and Wiesel performed many pioneering experiments that increased our understanding of cortical visual processing [40]. Daugman showed an approximation of these cortical cells by using Gaussian windows modulated by a sinusoidal wave for their impulse response. The specific spatial orientation tuning of these cells arises from dilation of modulated Gaussian-Gabor functions [41].
The need to extract multiscale information when modelling the visual mechanism in Computer Vision (CV) applications has been established by many researchers in the field [25, 42-44], and some of these early ideas have later been subsumed by the wavelet paradigm. In a multiresolution algorithm [44], the search typically moves from coarse to fine scale, processing low-resolution images first and then zooming selectively into fine scales of the visual data. Mallat showed the impact of wavelets on low-level vision in multiresolution search, multiscale edge detection and texture discrimination [45].
Pyramidal image representations and scale invariant transforms [46] are well matched to human
visual encoding and do not need image partitioning like JPEG-DCT [47]. A scale-space analysis is an
emergent result of image decomposition by finding the differences between pairs of scaled filters with
different parameterizations, notably the Laplacian or Difference of Gaussian filters (LoG/DoG) [48,
49], giving rise to Ricker and Marr wavelets.
Note further that self-organization models of repeated patterns of edge detectors at particular angles
are well-established [50]. Higher-level spatial aggregation of regularly spaced spots or edges in turn
automatically gives rise to analogues of DCT and DWT type bases, the latter with localization determined by the higher-level lateral interaction functions or the constraints of an underlying probabilistic
connectivity model [51].
In our visual system, light is focused onto receptors which transduce it to neural impulses that are further processed in the retina [52]. The middle layer of the retina, which is the focus of our study, then enhances the neural signals through the process of lateral inhibition [26], whereby an activated nerve cell in the middle layer decreases the ability of its nearby neighbors to become active. This biological convolution with its specific Point Spread Function (PSF) enables feature/texture encoding of the visual scene but also, as we show, leads to optical illusions. The effect of center-surround processing and lateral inhibition on an indistinctly defined edge with a gradual change from dark to light is that the transition between the light and dark areas appears more abrupt due to the appearance of overshoots and undershoots [52]. These sharpen the edges and facilitate visual tasks. The first layer of the retina has a nonlinear mechanism for retinal gain control, flattening the illumination component and making it possible for the eye to see under poor lighting conditions [52]. The lateral inhibition in the middle layer of the retina thus evidences both a bandpass filtering property and an edge enhancement capability. In the final layer, we find ganglion cells whose axons exit the eye and carry the visual signals to the cortex (inter alia).
The contrast sensitivity of the retinal ganglion cells can be modeled based on Classical Receptive Fields (CRFs), involving circular center and surround antagonism, in which the edge information is revealed using the differences and second differences of Gaussians [23, 24] or the Laplacian of Gaussian (LoG) [53]. Marr and Hildreth [25] proposed an approximation of LoG with DoG based on a specified ratio of the standard deviations (σ) of the Gaussians [23, 24]. Powers showed that DoG-like models can themselves result from a simple biophysical model of ontogenesis and can usefully approximate the interaction functions proposed in a variety of neural models [51].
We hypothesized that visual perception of a scene starts by extracting the multiscale edge map, and that a bioplausible implementation of the contrast sensitivity of retinal RFs using DoG filtering produces a stack of multiscale outputs [54]. In retinal encoding, what is sent to the brain is a stack of images or a scale-space, not a single image. One of the first models of foveal retinal vision was proposed by Lindeberg and Florack [55], and our model is most inspired by it. Their model is based on simultaneous sampling of the image at all scales, and the edge map in our model is generated in a similar way. Edge map extraction is an essential and primitive task in most image processing applications, relating most directly to Marr’s primal sketch for perception of a 3D view of the world [25], of which it is one of the two main components. Higher-order derivatives of Gaussians may also be involved, as seen in retinal-to-cortical visual processing models such as [25, 53, 56, 57], but there is no biological evidence for them.
Our model output has some similarities with the Brightness Assimilation and Contrast theory of the early model developed by Jameson and Hurvich [33] based on DoG filters with multiple spatial scales. They noted the existence of parallel processing in our visual system, evidenced by the simultaneous appearance of sharp edges and mixed colours that define delimited regions. They proposed that the contrast effect happens when the stimulus components are relatively large compared to the center of the filter, and the assimilation effect when components of the stimulus are small compared to the filter center.
Recent physiological findings on retinal ganglion cells (RGCs) have dramatically extended our understanding of retinal processing. Previously, it was believed that retinal lateral inhibition could not be a major contributor to the Café Wall illusion because the effect is highly directional and arises from orientation as well as spatial frequency. Neuro-computational eye models [54-56] have been proposed based on biological findings, considering the size variation of RGCs due to variations of eccentricity and dendritic field size [58]. Field and Chichilnisky [36] published a sophisticated report about the circuitry and coding of visual processing inside the retina, noting the existence of at least 17 distinct retinal ganglion cell types and their specific roles in visual encoding.
As mentioned before, some RGCs have been found to have orientation selectivity similar to the cortical cells [59, 60]. There is also evidence for the existence of other types of retinal cells, such as horizontal and amacrine cells, which have elongated surrounds beyond the CRF size. This has led to orientation-selective models of the eye, implemented as retinal non-CRFs (nCRFs) [61-63]. All of this evidence from the diversity of intra-retinal circuits and the different types of RGCs [36, 37] gives a clear indication of the underlying mechanics of retinal multiscale processing that encodes visual data from fine to coarse scales. There are also variations in the size of each individual RGC due to retinal eccentricity [57]. This indicates a high likelihood of the involvement of retinal/cortical simple cells and their early stages of processing in revealing the tilt cues inside Tile Illusion patterns, and in the Café Wall in particular.
2.2 Model
The features of our bioplausible approach are intended to model the characteristics of human early visual processing. Based on numerous physiological studies, e.g. [36-38], there is a diversity of receptive field types and sizes inside the retina, resulting in multiscale encoding of the visual scene. This retinal representation is believed to be “scale invariant” in general, and there is an adaptation mechanism of the Receptive Field sizes to some textural elements inside our field of view [54, 64].
Applying a Gaussian filter to an image produces a blurred version of the image. The difference of two Gaussian convolutions generates one scale of the edge map representation in our DoG model. For a 2D signal such as an image I, the DoG output modelling the retinal GC responses with center-surround organization is given by:
DoGσ,sσ(x,y) = I × 1/(2πσ²) exp[−(x²+y²)/(2σ²)] − I × 1/(2π(sσ)²) exp[−(x²+y²)/(2(sσ)²)]    (1)
where x and y are the distances from the origin along the horizontal and vertical axes respectively, and σ is the standard deviation/scale of the center Gaussian (σc). As shown in Eq (2), sσ indicates the standard deviation (or scale) of the surround Gaussian (σs = sσ). s is referred to as the Surround ratio in our model.
s = σsurround / σcenter = σs / σc    (2)
The Laplacian of Gaussian (LoG) can be estimated by the Difference of Gaussians (DoG), approximating the second derivative of the Gaussian. For modelling the receptive field of retinal Ganglion Cells, DoG filtering [23, 24] is a good approximation of LoG if the ratio of dispersion of center to surround is close to 1.6 [10, 25] (s ≈ 1.6 ≈ φ, the ubiquitous Golden Ratio).
As indicated in Eq (1), by increasing the value of s we reach a wider area of surround suppression, although the height of the surround Gaussian declines. We also tested a broader range of Surround ratios from 1.4 to 8.0, but this made little difference to our results. In addition to the s factor, the filter size is another parameter to be considered in the model. The DoG is applied only within a window outside of which the values of both Gaussians are insignificant (less than 5% for the surround Gaussian). A parameter called the Window ratio (h) is defined to control the window size, which is determined by this parameter (h) and the scale of the center Gaussian (σc) as given in Eq (3):
WindowSize = h × σc + 1    (3)
The parameter h determines how much of each Gaussian (center and surround) is included inside the DoG filter (the +1 in Eq (3) guarantees a symmetric filter). For the experimental results, in order to capture both excitation and inhibition effects, the Window ratio (h) has been set to 8 in this paper.
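As an illustration, a minimal sketch of the DoG filter of Eqs (1)-(3) is given below, assuming MATLAB with the Image Processing Toolbox; the function name dog_filter and the sample call are our own illustrative choices, not the published implementation.

function dog = dog_filter(sigma_c, s, h)
% DOG_FILTER  Difference of Gaussians kernel following Eqs (1)-(3).
%   sigma_c : scale of the center Gaussian
%   s       : Surround ratio, so sigma_s = s * sigma_c (Eq (2))
%   h       : Window ratio; window size = h * sigma_c + 1 (Eq (3))
    win        = round(h * sigma_c) + 1;          % symmetric window, Eq (3)
    g_center   = fspecial('gaussian', win, sigma_c);
    g_surround = fspecial('gaussian', win, s * sigma_c);
    dog        = g_center - g_surround;           % Eq (1), normalized Gaussians
end

One scale of the edge map is then the convolution of the input pattern with this kernel, for example E4 = imfilter(I, dog_filter(4, 2, 8), 'conv', 'replicate') for σc = 4, s = 2 and h = 8.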
A sample DoG edge map of a Tile Illusion, which is simply the DoG of the image at multiple scales, is shown in Fig 2. The DoG is highly sensitive to spots of matching size, but is also responsive to lines of appropriate thickness and to contrast edges. A crop section (84×84px) of the Trampoline pattern [65] was selected as the input image. The edge map is shown at five different scales, σc = 0.5, 1.0, 1.5, 2.0 and 2.5, to capture the important information in the image (this is related to the texture/object sizes in the pattern; by applying Eq (3) we can determine a proper range for σc in the model for any arbitrary pattern). The DoG filters in the figure are shown in the jetwhite color map [66].
Fig 2. DoG edge map of a crop section of Trampoline pattern [65] as a sample of tile illusion with the size of 84×84px. The
scales of DoG filters are σc= 0.5, 1.0, 1.5, 2.0, 2.5 for detecting the important information in the pattern. Other parameters of
the model are: s=2, and h=8 (Surround and Window ratios respectively - Reproduced with permission from [72]).
Scale-invariant processing in general is not sensitive to the exact parameter setting. Ideally, the model’s parameters should be set such that at fine scales they capture high-frequency texture details, and at coarse scales the kernel has an appropriate size relative to the objects within the scene.
The DoG edge map at multiple scales for a crop section of a Café Wall pattern is shown in Fig 3 as the output of the model, revealing the Twisted Cord elements along the mortar lines. The appearance of the Twisted Cord elements has been shown as the filtering output of applying either DoG or LoG on a Café Wall image at specific filter sizes (scales) in previous literature [6, 10]. We use a model [18-22] that highlights the perception of divergence and convergence of mortar lines in the “Café Wall” illusion. We will show how the model is capable of revealing the illusory cues in the pattern qualitatively, as well as measuring the strength and orientation of the tilt effect quantitatively in a wide range of Café Wall patterns. We will also show how our explanation of the tilt effect can be connected to the Brightness Induction theory.
2.3 Processing Pipeline for Tilt Analysis
The DoG transformation, modelling RGC responses, creates an edge map representation at multiple scales for the pattern, consisting of a set of tilted line segments for the Café Wall stimulus [18-22] at its fine scales (noted as Twisted Cord elements in the literature [6, 11, 13]). Then, for quantitative measurement of tilt angles in the edge map to compare with the tilt perceived by a human observer, we embed the DoG model in a processing pipeline using Hough space. We use the houghlines representation of the edges to determine the tilt angles at different scales of the DoG edge map, as shown in Fig 3.
Fig 3. Flowchart of the model and analytical tilt processing (Reproduced with permission from [72]).
2.3.1 Multiple scale edge map
The most fundamental parameter in the model is the scale of the center Gaussian (σc). Defining a proper range of scales in our model is highly correlated with the characteristics of the pattern elements, in particular the mortar lines and tile sizes in the Café Wall pattern. The edge map representation at multiple scales, as the output of the MODEL, is shown in Fig 3 for a cropped section of a Café Wall pattern with 200×200px Tiles (T) and 8px Mortar (M), and in Fig 7 for three variations of the Café Wall pattern, presented in the jetwhite color map [66]. Based on the fixed parameters of the Surround and Window ratios relative to σc (s=2 and h=8) and the pattern characteristics, an illustrative range for σc to detect both mortar information and tiles runs from 0.5M = 4px to 3.5M = 28px (refer to Eq (3): at scale 28 the DoG filter has a size of 8×σc = 8×28 = 224, nearly the same as the Tile size of 200px; increasing the scale beyond this point results in a very distorted edge map, due to the DoG averaging and the filter size). We have selected incremental steps of 0.5M between scales. This allows for the extraction of both mortar lines (at fine scales) and the Café Wall tiles (at coarse scales) in the edge map, as well as revealing the tilt cues and different perceptual groupings [19] at multiple scales through the gradual increase of the DoG scales. So the output of the model is a DoG edge map at multiple scales for any arbitrary pattern, and we may refer to it as a multiple scale edge map in the text. Now we will explain how to measure the slope of the detected tilted lines in the DoG edge maps.
Since we have used normalized Gaussians in our model, the curve between the center and surround Gaussians in the DoG filter shows the activation response, and the surround Gaussian intersects the center Gaussian at its inflection points (for the 2.0 ratio). Therefore, σc = 4 (Eq. 3) in our model corresponds to a filter in which the mortar size of 8px lies within one standard deviation (σc) of the mean of the filter, so we are able to detect the mortar lines with high accuracy in the filtered response. Therefore, the tilts detected by the model at σc = 4 in our results (H) show the prediction of the tilt angle in foveal vision, with high acuity in the center of our current gaze direction. We discuss this in detail in Section 3.4.
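A minimal sketch of constructing this multiple scale edge map for a Café Wall 3×8-T200-M8 stimulus is given below, reusing the dog_filter helper sketched in Section 2.2; the file name and variable names are illustrative assumptions.

I       = im2double(imread('cafe_wall_3x8_T200_M8.png'));   % hypothetical stimulus image
scales  = 4:4:28;                                 % 0.5M to 3.5M in steps of 0.5M, M = 8px
edgeMap = cell(1, numel(scales));
for k = 1:numel(scales)
    % one DoG output per scale, with s = 2 and h = 8 as in the experiments
    edgeMap{k} = imfilter(I, dog_filter(scales(k), 2, 8), 'conv', 'replicate');
end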
2.3.2 Analysis with Hough
The quantitative tilt measurement (Fig 3) includes three stages, EDGES, HOUGH and ANALYSIS, implemented in MATLAB.
EDGES: At each scale, the edge map is first binarized and then the Hough Transform (HT) [67] is applied, allowing us to measure the tilt angles of detected line segments in the binary edge map as described in the following stages. HT uses a two-dimensional array called the accumulator (HA) to store line information with quantized values of ρ and θ in its cells. ρ represents the perpendicular distance from the origin to the line passing through the edge point, and θ is the counter-clockwise angle between the normal vector (ρ) and the x-axis, with the range [0, π). So, based on the new parameters (ρ, θ), every edge pixel (x, y) in the image space corresponds to a sinusoidal curve in the Hough space, given by ρ = x·cos θ + y·sin θ.
HOUGH: All possible lines that could pass through every edge point in the edge map have been accumulated inside the HA matrix. We are most interested in the detection of tilt-inducing line segments inside the Café Wall pattern. Two MATLAB functions, houghpeaks and houghlines, have been employed for this purpose. The ‘houghpeaks’ function finds the peaks in the HA matrix, which correspond to the dominant line segments. It has parameters NumPeaks (maximum number of lines to be detected), Threshold (threshold value for searching the HA for peaks), and NHoodSize (neighborhood suppression size, which is set to zero after the peak is identified). The ‘houghlines’ function extracts the line segments associated with a particular bin in the HA. It has parameters FillGap (maximum gap allowed between two line segments associated with the same Hough bin) and MinLength (minimum length for merged lines to be kept). Sample outputs of the HOUGH analysis stage are presented in Fig 3 (for a crop section of a Café Wall pattern), as well as in Figs 5, 8, 9, and Figs 12 to 14 for the different variations of the Café Wall pattern investigated. Detected houghlines are shown in green, displayed on the edge maps, with DoG scales ranging from 4 to 28.
ANALYSIS: For categorizing detected line segments, we have considered four reference orientations: horizontal (H), vertical (V), positive diagonal (+45º, D1), and negative diagonal (-45º, D2). An interval of [-22.5º, 22.5º) around each reference orientation has been chosen to cover the whole space. The information from HOUGH is saved in four orientation matrices at this stage, based on how close each line is to one of these reference orientations, for further tilt analysis. The statistical tilt measurements of the detected houghlines in the neighborhood of each reference orientation are the output of this final stage.
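A minimal sketch of these three stages for a single scale of the edge map is shown below, using the MATLAB Image Processing Toolbox functions named above and the Hough parameters reported in Section 3 (NumPeaks=1000, FillGap=40px, MinLength=450px); the binarization step, the peak threshold and the orientation bookkeeping are our own illustrative choices rather than the exact published implementation.

% EDGES: binarize one scale of the DoG edge map (edgeMap from the sketch in Section 2.3.1)
BW = imbinarize(mat2gray(edgeMap{1}));
% HOUGH: accumulate, pick dominant peaks and extract line segments
[HA, theta, rho] = hough(BW);
peaks = houghpeaks(HA, 1000, 'Threshold', 0.3 * max(HA(:)));
segs  = houghlines(BW, theta, rho, peaks, 'FillGap', 40, 'MinLength', 450);
% ANALYSIS: assign each segment to the nearest reference orientation
% (H = 0, D1 = 45, V = 90, D2 = 135 degrees) within a +/-22.5 degree bin
p1  = vertcat(segs.point1);
p2  = vertcat(segs.point2);
ang = mod(atan2d(p2(:,2) - p1(:,2), p2(:,1) - p1(:,1)), 180);   % segment orientations
for ref = [0 45 90 135]
    dev = abs(mod(ang - ref + 90, 180) - 90);    % angular distance to the reference
    sel = dev < 22.5;
    if any(sel)
        fprintf('ref %3d deg: %3d lines, mean tilt %.1f deg\n', ref, nnz(sel), mean(dev(sel)));
    end
end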
2.4 Patterns Investigated
To evaluate the tilts predicted by the model, and to find out how close these predictions are to the psychophysical experiments reported in the literature, we have generated different variations of the Café Wall pattern similar to the previously tested ones. All the generated patterns have the same configuration (number of rows and columns of tiles): Café Walls of 3×8 tiles with 200×200px tiles.
The patterns investigated include Mortar-Luminance (ML) variations, in which the mortar lines have a relative luminance in the range of Black (ML=0) to White (ML=1), plus three shades of Grey in between (0.25, 0.5 and 0.75); these are displayed in Fig 4-top (five patterns).
We also investigated Mortar-Width (MW, thickness) variations, ranging from no mortar lines (MW=0px) through MW = 4, 8, 16, 32 and 64px; these are presented in Fig 4-middle (six patterns).
Fig 4. Patterns investigated – All the variations are Café Walls of 3×8 tiles with 200×200px tiles. Top: Mortar-Luminance (ML) variations from Black (ML=0) to White (ML=1) mortar, and three shades of Grey in between Black and White (ML=0.25, 0.5, and 0.75). Middle: Mortar thickness (Width, MW) variations, from no mortar lines (MW=0) to MW= 4, 8, 16, 32, 64px (*: the original Café Wall with Black and White tiles, mortar of intermediate luminance relative to the luminance of the tiles (ML=0.5), and mortar width of 8px in our samples; MW=8px). Bottom: Other variations investigated, from Grey-Tiles with three luminance levels for the mortar lines (below, between, or above the luminance of the tiles), then phases of tile displacement with shifts of 1/3 and 1/5 of a tile between consecutive rows of the Café Wall, and finally a mirrored image inducing the opposite direction of tilt, as well as a Hollow Square pattern (Reproduced with permission from [72]).
The bottom of Fig 4 shows three patterns involving tiles with two shades of Grey (0.25 and 0.75) separated by mortar with one of three levels of luminance (0, 0.5, 1); below that, two variations showing different degrees of displacement (phase shifts of 1/3 and 1/5); and finally a mirrored pattern used to demonstrate that the predicted tilts of the pattern reverse as expected, and a hollow square version used as an extreme case (seven variations). These examples are repeated in later figures along with the processed versions with detected Hough lines (eighteen stimuli in total).
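As an illustration of how such stimuli can be constructed, the following is a hypothetical MATLAB generator (not the authors' code) for the basic black-and-white variations: nRows×nCols tiles of T×T px, mortar of width MW px and luminance ML, with consecutive rows shifted by the given phase of a tile width; the Grey-Tiles, mirrored and Hollow Square variations would need small modifications.

function img = cafe_wall(nRows, nCols, T, MW, ML, phase)
% CAFE_WALL  Generate a simple Cafe Wall stimulus with grey levels in [0,1].
    W   = nCols * T;                               % width of one row of tiles
    img = ML * ones(nRows * (T + MW) + MW, W);     % mortar luminance everywhere
    for r = 1:nRows
        rowTop = MW + (r - 1) * (T + MW);
        shift  = round(mod((r - 1) * phase, 1) * T);   % row displacement in pixels
        for c = 1:nCols
            tile = mod(c + r, 2);                  % alternate black (0) / white (1) tiles
            x0   = (c - 1) * T + shift;
            cols = mod(x0 : x0 + T - 1, W) + 1;    % wrap shifted tiles around the row
            img(rowTop + 1 : rowTop + T, cols) = tile;
        end
    end
end

For example, cafe_wall(3, 8, 200, 8, 0.5, 0.5) gives the standard Café Wall 3×8-T200-M8 stimulus with mid-grey mortar, while setting MW=0, or varying ML, MW and phase, reproduces the other black-and-white rows of Fig 4.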
3 Experimental Results
Tilt perception in the Café Wall pattern is affected not only by foveal versus peripheral viewing of the pattern, which has been investigated in our previous studies [20-22], but also by the pattern's characteristics, such as mortar luminance, size, phase of tile displacement, tile contrast and so forth. We have analyzed the effect of these parameters on the magnitude and orientation of the tilt effect in eighteen different variations (Fig 4) of the Café Wall pattern.
3.1 Mortar Luminance variations
3.1.1 Patterns Characteristics
The patterns under investigation are the five variations given in Fig 4-top. These patterns are Café Walls of 3×8 tiles with 200×200px tiles (T) and 8px mortar (M) (Café Wall 3×8-T200-M8). To generate the samples we have used a value that can be interpreted as ‘reflectance’ (viewed printed) or ‘relative luminance’ (viewed on screen) to represent grey levels in the range [0,1]. Together with ambient direct and indirect illumination, these combine to give a subjective effect of relative brightness. Kingdom [34] defines ‘brightness’ as the perceptual correlate of the perceived luminance. Blakeslee and McCourt [68] explain that ‘luminance’ is the physical intensity of a stimulus and that in achromatic vision we see patterns of varying luminance, in which case the judgements of ‘brightness’, ‘lightness’ and ‘contrast’ would be identical. In this paper we refer to the grey level of the mortar lines simply as ‘luminance’, and denote Mortar Luminance by ML for easier reference to this pattern characteristic in the rest of this paper.
Looking at the Mortar-Luminance variations, we see a very minor tilt effect for the Black (ML=0.0) and White (ML=1.0) mortar patterns, compared to the Grey mortar variations (with three luminance levels: ML=0.25, 0.5, 0.75). Psychophysical experiments reported in the literature have shown that the strongest tilt effect in the Café Wall pattern occurs when the mortar luminance is in the intermediate range between the Black and White tiles [5, 6].
3.1.2 DoG Edge Maps and Quantitative Tilt Results
The binary DoG edge maps at seven different scales for the Mortar-Luminance variations (Fig 4-top) are presented in Fig 5, in which the edge maps are overlaid by the detected houghlines displayed in green. Let us concentrate on the binary edge maps first. The range of DoG scales (Section 2.3.1) starts from 0.5M = 4px and continues to 3.5M = 28px with incremental steps of 0.5M (σc = 4, 8, 12, 16, 20, 24, 28). Note that for the detection of near-horizontal tilted line segments, corresponding to the perceived mortar lines in the Café Wall illusion, the DoG scale should be in the intermediate range, close to the mortar size, σc ~ M = 8px [20, 22]. We suggest that σc = 4 is appropriate for predicting the foveal tilt effect and σc = 8 for predicting the tilt at the edge of the image in the periphery of the retina. Comparing the edge maps at coarse scales (σc = 20, 24, 28) shows nearly similar DoG outputs. This is because at the coarse scales, the scale of the DoG is large enough to capture the tile information. The differences between the DoG edge maps are mainly at fine to medium scales of the DoGs (σc = 4, 8, 12, 16). At scale 16 (σc = 16), we see a transient state between detecting nearly horizontal tilted line segments connecting tiles with the Twisted Cord elements along the mortar lines, and the zigzag vertical grouping of tiles at the coarse scales, in a completely opposite direction. At this scale, we still see some mortar cues left in the edge maps of some of these variations, while in others those mortar cues have completely disappeared. We will explain later (Sections 3.2.2 and 3.2.3) how the strength of the illusion (which is different from the magnitude of detected tilt angles) is predictable based on the persistence of the mortar cues at the multiple scales of the DoG edge maps. Now we need to quantify the mean tilts in the edge maps for these patterns using the Hough analysis pipeline in our model (as described in Section 2.3.2).
For quantitative analysis of tilt, the same Hough parameters have been applied to all of these variations, and to every scale of the edge maps, to attain reliable tilt results that are comparable between these variations. The fixed Hough parameters are: NumPeaks=1000, FillGap=40px, and MinLength=450px; if the algorithm cannot detect any lines at a specific scale, we simply report no detection of lines at that scale. Fig 5 shows the detected houghlines displayed in green on the DoG edge maps for the Mortar-Luminance variations. Blue lines indicate the longest lines detected. The absolute mean tilts and the standard errors of the detected tilt ranges are provided in Fig 6.
Fig 5. DoG edge maps at multiple scales (σc=4, 8, 12, 16, 20, 24, 28) for five Mortar-Luminance variations, from Black (Left – ML 0.0) to White (Right – ML 1.0) mortar, in which the edge maps are overlaid by the detected houghlines displayed in green (Blue lines indicate the longest lines detected) – Hough parameters are kept the same in all experiments for detecting near Horizontal, Vertical and Diagonal tilted lines in the edge maps: NumPeaks=1000, FillGap=40px, MinLength=450px for every DoG scale. The last row shows each variation of the pattern investigated (Reproduced with permission from [72]).
As the left column in Fig 5 shows, for the Black mortar variation (ML=0.0, the Munsterberg pattern), there are no detected lines at the finest scale (σc=4; first row, represented by NaN – no detected lines). As the mean tilt results in Fig 6 show, there are no near-horizontal lines detected at any of the DoG scales in the Munsterberg pattern. The detected houghlines at the medium scale (σc=12) are near-vertical lines, with near-diagonal lines at larger scales (σc=20 to 28). As the DoG edge map of the pattern shows, the apparent grouping of tiles is a zigzag vertical pattern. So the edge map leads to the same conclusion as the houghlines analysis, with no near-horizontal tilts in this pattern. There is no confusion of visual cues in our results that contribute to a tilt effect in this pattern, so we state that there is no tilt illusion in the Munsterberg pattern (ML=0.0 – Black mortar).
Fig 6. Mean tilts and the standard error of detected tilt angles of houghlines for the five variations of Mortar-Luminance, from Black (ML 0.0) and White (ML 1.0) mortar at the Top, to Grey mortar lines (ML=0.25, 0.5, 0.75) at the Bottom, for DoG edge maps at seven scales (σc=4, 8, 12, 16, 20, 24, 28) – Hough parameters are kept the same in all experiments for detecting near horizontal (H), vertical (V) and diagonal (D1, D2) tilted lines in the edge maps: NumPeaks=1000, FillGap=40px, and MinLength=450px for all DoG scales of the edge maps. NaN (not a number) means no lines detected (Reproduced with permission from [72]).
As the right column in Fig 5 shows, for the White mortar variation (ML=1.0), there are no detected lines at the finest scale (σc=4), similar to the Munsterberg pattern. But a few lines are detected around 1º of horizontal deviation at the next scale (σc=8), with the details given in Fig 6. The detected tilt range is much below that of the Grey mortar variations, which is more than approximately 6.5º at scale 8. Even at scale 12, the mean tilt is roughly 3º less than that of the Grey mortar variations. For the White mortar variation, we see some cues of mortar lines in the edge map at fine scales, but these are different from those of the Grey mortar variations. If there is any illusion in the White mortar pattern, the tilts predicted by the model are quite negligible (none at σc=4 and ~1° at σc=8). Relying on these quantitative results, we can state that, as with the Black mortar, there is no illusion in the White mortar variation either.
The houghlines of the Grey mortar patterns with three levels of luminance, Dark-, Mid-, and Light-Grey (0.25, 0.5, 0.75), are shown in the center columns of Fig 5. The absolute mean tilts and the standard errors of the detected tilt ranges for these patterns are shown at the bottom of Fig 6. For the Dark-Grey (ML=0.25) and Mid-Grey (ML=0.5) mortar, the mean tilts at the finest scale (σc=4) are ~3.5º compared to ~4.5º for ML=0.75 (Light-Grey). As Fig 6 shows, we are still able to detect horizontal lines at scale σc=16 for the two patterns ML=0.5 and ML=0.75, but not for ML=0.25. This is clear from the DoG edge maps (Fig 5), because the mortar cues are still influential at this scale in the two variations of Mid- and Light-Grey, but not in the Dark-Grey mortar pattern. The near-horizontal tilts are ~12º at this scale in both of these patterns. Based on these results we conclude that the range of detected mean tilts spans up to 1º more for the ML=0.5 variation (~3.5º to 12º) than for the ML=0.75 pattern (~4.5º to 12º). This could be an indication of a stronger tilt effect in the Mid-Grey pattern compared to the Light-Grey variation. This supports previous psychophysical findings that the tilt effect in the Café Wall illusion is strongest when the luminance of the mortar is intermediate relative to the luminance of the tiles [5, 6].
At coarse scales after σc=16, we have the zigzag vertical groupings of tiles. Fig 5 shows that at scales 16 and 20, the vertical lines are fitted in the edge maps with a reliable tilt range. The deviations from the vertical axis are ~6º to 9º in the different variations at coarse scales (σc=16 to 28). Detected houghlines around the diagonal axes (D1, D2) start at the coarsest scales (σc=24, 28). The deviations from the diagonal axes are in the range of ~13º for the Grey mortar variations, slightly higher (~13.5º) for the White mortar pattern, and a bit lower (~11.5º) for the Black mortar (Munsterberg pattern).
3.1.3 Discussion
The perceptual grouping of the Café Wall pattern with White mortar seems a bit different from the Munsterberg pattern with Black mortar. In the literature, it has been reported that these variations of the Café Wall do not have any illusory tilts [4, 6], or, better put, that their tilts are not as strong as those with Grey mortar lines of intermediate luminance. It has also been said that the mortar should be very thin for the Munsterberg version to induce the tilt. We have tested one sample of the Munsterberg pattern here with the same mortar size as the others (M=8px). The predicted tilts for the Black and White mortar variations show no to very weak tilts (at σc=4, 8; the finest scales in the model), which supports the previously reported findings [4, 6].
It has also been suggested that the illusion is strongest when the luminance of the mortar is intermediate relative to the luminance of the tiles [5, 6]. Our results on the Grey mortar variations are consistent with this suggestion. We have shown that although both the ML=0.5 and ML=0.75 patterns (Mid- and Light-Grey mortar) have highly persistent mortar cues from the finest to medium scale (σc=4 to 16), the mean tilt results for the ML=0.5 pattern show a wider range of tilt angles than the ML=0.75 variation (1º more, starting from a lower angle at the finest scale with a more rapid tilt increase towards the medium scales). This indicates a stronger tilt effect for the Café Wall illusion when the mortar lines have an intermediate luminance. We discuss further in Section 3.4 how the tilts predicted by the model at multiple scales are related to the variation of resolution with eccentricity.
3.2 Mortar Thickness variations
3.2.1 Patterns Characteristics
The six different variations of Mortar-Width (MW) are shown in Fig 4-middle. It has been reported that by increasing the height (width) of the mortar lines, an inverse of the Café Wall illusion occurs [10], which is discussed here as well as in Section 3.2.3. The Mortar Width (Size) is denoted by MW for easier reference to this pattern characteristic. Note that for these samples, we have used MW to specify patterns that belong to the Mortar-Width category. When we talk about the characteristics of a Café Wall pattern in general, we refer to the mortar size simply by using M (Mortar size, thickness, and width all refer to the size of the mortar lines).
3.2.2 DoG Edge Maps and Quantitative Tilt Results
Since the mortar sizes differ in these variations while the tile sizes are the same, we have selected the pattern with a mid-size mortar thickness, MW=8px, as a base to define the scales of the DoG edge maps for these variations. So, considering a mortar size of 8px (the DoG scales vary from 0.5M to 3.5M with incremental steps of 0.5M), the appropriate scales are σc = 4, 8, 12, 16, 20, 24, 28, which we use for all the Mortar-Width variations (Fig 4-middle). Fig 7 shows the DoG edge maps at seven scales in the jetwhite colour map for the Mortar-Width variations of MW = 16, 32 and 64px (thick mortar lines). The binary DoG edge maps for all these patterns are presented in Figs 8 and 9, in which the edge maps are overlaid by the detected houghlines displayed in green. Let us concentrate on the binary edge maps first.
Since the tile sizes are the same, by comparing the DoG edge maps in Figs 8 and 9 we see that at the coarsest scale (σc=28), the DoG filter size is large enough to fully capture the tiles (Window size = 8×σc = 224 ~ Tile size = 200px; Section 2.3.1). So for all of the Mortar-Width variations, the coarsest scales (σc=24, 28) show nearly similar DoG outputs. However, we see some substantial changes in the last two variations with very thick mortar lines (MW=32, 64px) at these scales.
The cues of the mortar lines are still present for the MW=64px pattern even at scale σc=28. For the two variations with very thick mortar (MW=32 and 64px), we see Brightness Induction on the mortar lines, which can be seen more clearly in the jetwhite colour map representations of the edge maps in Fig 7. These brightness artifacts can be seen at scales 8 and 12 for the MW=32px pattern and at scales 12, 16 and 20 for the MW=64px pattern.
Fig 7. DoG edge maps at multiple scales (σc=4, 8, 12, 16, 20, 24, 28) for the thick mortar variations displayed in jetwhite
colormap. The original patterns are Café Walls of 3×8 tiles with 200×200px tiles and mortar size variations of MW=16, 32,
and 64px. The other parameters of the model are s=2 and h=8 for the Surround and Window ratios respectively. The last row
shows each variation of the pattern investigated (Reproduced with permission from [72]).
In addition, there is another difference in these two variations versus the rest of the thinner mortar patterns. At the finest scales (σc=4, 8) for the MW=32px pattern (Fig 7-center column), and at scales 4, 8 and 12 for the MW=64px pattern (Fig 7-right column), the directions of groupings of identically coloured tiles are not as clear as they were in the other patterns with thinner mortar lines. The mortar in these two patterns is quite thick, wide enough to separate the tiles completely, and our only perception when viewing these patterns is the change of brightness along the mortar lines. We refer to this effect as the Enlarged Twisted Cord. The directions of these brightness changes are similar to what has been detected at their fine DoG scales. However, in these variations, we do not see the tilt effect as clearly as we perceive it in the other, thinner mortar variations. These variations have a strong brightness induction, but not a strong tilt effect. The only thing that bridges the tilt effect and the brightness induction effect in these patterns is the thick mortar size. The brightness induction and the direction of the Enlarged Twisted Cord elements seem to be more subject dependent in these very thick mortar variations.
In the literature it has been shown that when the diameter of the DoG operator (implementing center-surround operators) is larger than the mortar width, an opposite-phase brightness induction appears [69]. This has been reported as a Reverse of the Café Wall illusion by Earle and Maskell [10]. Lulich and Stevens also reported “a reversal of the traditional Café Wall effect that is dependent upon mortar width” [70:pp.428]. The Reversed Twisted Cord in thick mortar variations of the pattern is also called the Twisted Rope [10], with an opposite direction to the Twisted Cord elements along the mortar lines, to distinguish the two. The term Enlarged Twisted Cord emphasizes the continuity with the Twisted Cord, unlike the usage of Twisted Rope in [10]. They note that the effect of the Reversed Twisted Cord occurs for a limited range of spatial frequencies, acting as a bandpass filter. Outside this limit, the Twisted Cord elements break into two parallel segments aligned with the mortar lines [10], and thus no Twisted Cord elements are present. This breakage of Twisted Cords, explained in [10], can be observed clearly in the DoG edge maps presented in Fig 9, for example at scale 8 for the MW=64px pattern and at scale 4 for the MW=32px variation. Lulich and Stevens [70] note that by increasing the mortar width the induced tilt along the mortar is further diminished, and disappears when the mortar width is about twice the size of the DoG operator. In our results, the diminishing of the tilt cues can be seen for the MW=8px pattern at scale 16 of the edge map, but varies in other samples, with a higher range for MW=4px and a much lower range for the thicker mortar lines (from MW=16px upwards). The brightness induction observed and explained here is also consistent with Morgan and Moulden’s conclusion [6] that the effect is a consequence of Bandpass Filtering. We have shown that this configuration allows us to explore the connection between the edge map representation at multiple scales and the strength of the illusion. We come back to this later, in Section 3.4.
Fig 8. DoG edge maps at seven scales (σc=4, 8, 12, 16, 20, 24, 28) for three variations of Mortar-Width, from no mortar lines (MW 0px) to 8px mortar, in which the edge maps are overlaid by the detected houghlines displayed in green (Blue lines indicate the longest lines detected) – Hough parameters are kept the same in all experiments for detecting near horizontal, vertical and diagonal tilted lines in the edge maps: NumPeaks=1000, FillGap=40px, and MinLength=450px for all DoG scales. In all experiments the parameters of the DoG model are kept constant as s=2 (Surround ratio), h=8 (Window ratio). The last row shows each variation of the pattern investigated (Reproduced with permission from [72]).
For the MW=0px pattern (Fig 8-left column), the only grouping of pattern elements that can be seen in the edge map at multiple scales is the zigzag vertical grouping of tiles. We can state based on the DoG edge map that there is no tilt illusion in this pattern. Morgan and Moulden suggest that the mortar line is critical to the strength of the illusion [6], and our result supports this suggestion. For MW=4, 8 and 16px (Figs 8 and 9), based on the defined DoG scales, we can see the mortar cues from the finest scale up to the maximum scale of σc=20 for the MW=16px pattern. The edge maps show that the mortar cues do not disappear soon after the DoG filter reaches the mortar size. For instance, in the MW=8px pattern, the near-horizontal tilt cues along the mortar lines are quite persistent from the finest scale (σc=4) up to scale 16, which is twice the mortar size. From the edge maps in Figs 8 and 9 we see that there is a definite correlation between the width of the mortar lines and the strength of the tilt illusion in these variations. The results show that increasing the mortar width decreases the strength of the illusion. For the illusion to appear, the mortar size reported in the literature is between 1 and 10 min of arc [4], based on our eye's sensitivity and the mechanism of bandpass filtering [6]. We can use the predicted tilts to find the mortar thickness with the strongest inducing tilt effect in order to generate tilt illusion patterns with the strongest effects.
The tilts detected in the DoG edge maps of the Mortar-Width variations (Fig 4-middle) have been measured using the same Hough parameters for all of the patterns and for every DoG scale of the edge maps (NumPeaks=1000, FillGap=40px, and MinLength=450px). Figs 8 and 9 show the detected houghlines displayed in green on the DoG edge maps at multiple scales for the Mortar-Width variations. The absolute mean tilts and the standard errors of the detected tilt ranges have been tabulated for easier comparison in Fig 10.
Fig 9. DoG edge maps at seven scales (σc=4, 8, 12, 16, 20, 24, 28) for three different variations of Mortar-Width, from a mortar size of 16px to 64px, in which the edge maps are overlaid by the detected houghlines displayed in green (Blue lines indicate the longest lines detected) – Hough parameters are kept the same in all experiments for detecting near horizontal, vertical and diagonal tilted lines in the edge maps: NumPeaks=1000, FillGap=40px, and MinLength=450px for all DoG scales (σc=4, 8, 12, 16, 20, 24, 28). In all experiments the parameters of the DoG model are kept constant as s=2 (Surround ratio), h=8 (Window ratio). The last row shows each variation of the pattern investigated (Reproduced with permission from [72]).
In the MW=0px pattern (Fig 8-left column), there are no detected houghlines around the horizontal orientation at any scale. The mean tilt results in Fig 10 confirm this. The only grouping of pattern elements that can be seen in the edge map at multiple scales, and also in the detected houghlines, is the zigzag vertical grouping of tiles, with a vertical deviation of ~7.5º-8º and diagonal mean tilts between ~9.4º-10.4º. Therefore, based on the edge map and Hough analysis results, we can conclude that there is no tilt illusion in this pattern.
Fig 10. Mean tilts and the standard errors of detected tilt angles of houghlines for six variations of the Mortar-Width, from no mortar (MW 0px) to 64px mortar, for the multiple scale (σc=4, 8, 12, 16, 20, 24, 28) edge maps – Hough parameters are kept the same in all experiments for detecting near horizontal, vertical and diagonal tilted lines in the edge maps (H, V, D1, D2): NumPeaks=1000, FillGap=40px, and MinLength=450px for all DoG scales. NaN means no detected lines (Reproduced with permission from [72]).
The mean tilts near horizontal at scales 4, 8 and 12 in the three patterns MW=4, 8 and 16px show a range of ~3.5º-9.5º. Fig 10 also indicates that by increasing the mortar width from 4px to 32px, there is an increase in the detected tilt angles, from approximately 3.3º in the MW=4px pattern to roughly 5.8º in the MW=32px variation. This supports previous findings that in Café Wall patterns with thick mortar, the inducing bands of Twisted Cords appear at a steeper angle to the horizontal compared to thinner mortar [70]. There are no horizontal lines detected at the medium to coarse scales; for the MW=4px pattern there are none after σc=16, for MW=8px none after σc=20, and for MW=16px none after σc=24. The maximum mean tilts of ~14º in the MW=32 and 64px patterns are not seen in the thinner mortar variations (MW≤16px). The vertical and diagonal deviations of the detected houghlines are nearly the same in these variations. The near-vertical tilts are about ~7º-8.5º, while the range of diagonal tilts is between ~11.8º-13.8º for MW=4, 8 and 16px at coarse scales (σc=20, 24, 28).
Now let us concentrate on the thickest mortar variation in our samples (MW=64px). Since the mortar size is huge compared to the other variations, the mortar cues still exist at the specified coarsest scale (σc=28) of the DoG edge map. At scale 28, the DoG captures a complete tile, and the houghlines results show that near-horizontal lines can be detected up to the coarsest scale. There are no horizontal lines detected at the finest scale (σc=4), and the range of horizontal mean tilts is between ~10.5º-17º across the different scales. The mean tilts in this pattern (MW=64px) are more than ~5º larger than the range of horizontal mean tilts in the thinner mortar variations (which is ~3.5º-9.5º in the MW=4 and 8px patterns and ~5º-14º in the MW=16 and 32px patterns). When we compare the tilt effect in this pattern with the thinner mortar variations such as the MW=8px pattern, we see that the tilt effect is very weak here (due to a weak persistence of the mortar cues in relation to the width of the mortar; Fig 11), but the predicted tilts show a strong tilt effect. We will investigate these very thick mortar variations further in the next Section to explain our quantitative tilt results. The diagonal tilts in this pattern stabilize around 12.8º at scale σc=28. The only scale with detected houghlines around the vertical orientation is σc=12, with a mean tilt of ~20º, which is misleading and again larger than the deviations of the other variations. As Fig 9 shows, the zigzag vertical grouping of tiles did not occur, not even at scale 28, in this pattern.
3.2.3 Very Thick mortar variations
In the experiments reported so far in our study, we have assumed the common hypothesis that for the detection of near-horizontal tilted line segments along mortar lines, or the appearance of Twisted Cord elements in the literature, the DoG scale (σc) should be close to the mortar size (σc ~ M) [6, 20-22, 70]. We show now that this is not precisely true when the mortar size exceeds 16px in our samples (the Café Wall stimuli have tiles of 200×200px). For the two patterns MW=32 and 64px this hypothesis is not valid. We show here that the mortar cues have completely disappeared from the DoG edge maps of these patterns at scales much smaller than the mortar size.
In the previous experiment, we used a fixed set of scales for the DoGs (seven scales) for all of the Mortar-Width variations (Section 3.2.2). The MW=8px pattern was selected as a base to specify the range of scales as σc = 4, 8, 12, 16, 20, 24, 28 for all these variations, to detect both mortar and tiles.
For the very thick mortar variations (MW=32 and 64px), we found that within the defined range of scales for the DoGs, the mortar cues still exist at the coarsest scale (σc=28). In addition, we have detected brightness induction in the DoG edge maps of these variations, much stronger than in the patterns with thinner mortar lines, although the perception of tilt in these variations is very weak. The mean tilts presented in Fig 10 show overestimated tilt angles, with strong tilts in these variations despite our weakly perceived tilts.
Since the tile sizes are the same for all samples (200×200px), then the coarse scales for detecting
them should be similar in size. The DoG edge maps of patterns MW=16, 32 and 64px at seven scales
have been shown in the jetwhite colour map in Fig 7 and in binary form in Fig 9. As indicated in Fig 9
for the MW=64px pattern, we have mortar cues at the coarsest scale (σc =28) and a large deviation
from the horizontal at this scale along the mortar lines. The zigzag vertical grouping of tiles which
appeared clearly at coarse scales of the DoG edge maps for the MW=16 and 32px patterns, are not
shown for the MW=64px pattern in the predefined range of DoG scales. So we have examined a few
scales above 28 for these patterns, and gathered some of the important results in Fig 11. The figure
shows the DoG edge maps at eight different scales (σc =8, 12, 16, 20, 24, 28, 32, 48) for thick mortar
patterns. The step from scale 32 to 48 differs from the incremental steps of 4 between the other scales, because we deliberately wanted to show the DoG outputs at scale 48 specifically for
the MW=64px pattern. Fig 11 shows that the edge maps of the thick mortar patterns (MW=16, 32 and
64px) are very similar after scale 32, when the tiles are extracted completely. We see a change of
groupings of tiles from the near horizontal to the zigzag vertical at scale 24 for the MW=16px pattern,
and at scale 32 for the MW=32px pattern.
Fig 11. Binary DoG edge maps at multiple scales for three thick-mortar variations with mortar lines of width 16px (MW
16px) to 64px for the Café Walls of 3×8 tiles with 200×200px tiles. The edge maps include 8 scales of σc=8, 12, 16, 20, 24,
28, 32, 48 presented in the figure. The other parameters of the DoG model are s=2 (Surround ratio) and h=8 (Window ratio).
The last row shows each variation of the pattern investigated (Reproduced with permission from [72]).
It is also worth noting that the mortar lines are detectable at scale 16 (σc=M) for the MW=16px pattern, but not at scale 32 (σc=M) for the MW=32px pattern (there they are detected at the previous scale, σc=24). For MW=64px there are no mortar cues at scale σc=64=M; in this edge map the mortar cues persist only up to scale 32 (σc=32) and then disappear at coarser scales, that is, at a filter size of roughly half the width of the mortar lines. This may be an indication of a very weak tilt effect, if there is any, for the MW=32 and 64px patterns compared to the MW=16px variation.
If we look back at the MW=8px pattern (Fig 8-right column), we see that the mortar cues are detectable not only at scale 8 (σc=M), but also at the following two scales (σc=12, 16). Comparing the MW=8px variation with the MW=16px variation (Fig 9-left column), we see that for the MW=16px pattern the mortar cues nearly disappear just one scale after the mortar size (at σc=20), except for very small dots in their place. The persistence of the mortar cues in the edge map of this pattern is not as strong as in the MW=8px pattern, where they last for a few consecutive scales larger than the mortar size (σc=12, 16). For the very thin mortar (MW=4px), the edge map (Fig 8-center column) shows persistent mortar cues from the finest scale (σc=4=M) up to scale 12, similar to the MW=8px pattern. So the tilts predicted by the model for the very thin mortar variations (MW=4px and 8px) show the strongest tilt effects.
We argue here that the persistence of mortar cues plays a major role in determining how strongly we perceive the induced tilt in the Café Wall illusion. Our model seems to be a good fit to our biological understanding of the early stages of vision, providing quantitative predictions of tilt at multiple scales for a wide range of conditions, and specifically predictions of the strength of the illusion across multiple scales as conditions vary. These quantified results have been shown here for the first time by our predictive model and its DoG edge map representations at multiple scales.
3.2.4 Discussion
The DoG edge maps at multiple scales for the variation with no mortar (MW=0px) indicate the same non-illusory percept as reported previously by others [5, 14]. Our DoG edge map representation supports the previous findings that the strongest tilt effect in the Café Wall illusion occurs with thin mortar lines (MW=4 and 8px in our samples). The multiple scale edge map also nicely unveils the underlying cues involved in thick mortar variations of the Café Wall illusion, and indicates how the tilt effect degrades while the brightness induction increases in these patterns. The brightness induction that we refer to as the Enlarged Twisted Cord in this work has previously been reported under different names such as the Reversed Twisted Cord and the Twisted Rope [10, 70].
Fig 12. DoG edge maps at seven scales (σc =4, 8, 12, 16, 20, 24, 28) for three Grey-Tiles variations (Black-Left, Mid-Grey-Center and White-Right mortar lines for tiles with relative luminance of 0.25-Dark Grey, and 0.75-Light Grey), in which the
edge maps are overlayed by the detected houghlines displayed in green (Blue lines indicate the longest lines detected) –
Hough parameters are kept the same in all experiments for detecting near horizontal, vertical and diagonal tilted lines in the
edge maps. NumPeaks=1000, Threshold=3, FillGap=40, and MinLength=450 for all DoG scales. Other parameters of the
DoG model are s=2 (Surround ratio), and h=8 (Window ratio). The last row shows each variation of the pattern investigated
(Reproduced with permission from [72]).
3.3 Other Variations of the Pattern
3.3.1 Grey Tiles variations
3.3.1.1 Patterns Characteristics
Grey-Tiles are variations of Café Wall patterns with lower contrasted tiles, in which instead of the
Black and White tiles with maximum contrast, the tiles here are two shades of Grey (with the relative
luminance of tiles equal to 0.25 for Dark-Grey and 0.75 for Light-Grey). So in these variations the luminance contrast between the tiles is half of that of the original Café Wall pattern with the Black and White tiles. The mortar lines have one of three levels of luminance here,
either below, between, or above the luminance of both of the Grey tiles selected as ML=0 (Black), 0.5
(Mid-Grey), and 1.0 (White), which have been presented in Fig 4-bottom.
3.3.1.2 DoG Edge Maps and Quantitative Tilt Results
The binary edge maps at seven scales for the three variations of low contrasted tiles (Grey-Tiles)
have been presented in Fig 12, in which the edge maps have been overlayed by the detected houghlines displayed in green. The scales of the DoGs are the same as in our previous investigations in Sections
3.1 and 3.2. For easier comparison of the detected line segments in the edge maps of these variations,
the edge map of the original Café Wall with Black and White tiles and mortar luminance of 0.5 has
been provided in the left column of Fig 13.
Fig 13. DoG edge maps at seven scales (σc =4, 8, 12, 16, 20, 24, 28) for three patterns of the original Café Wall-Left column,
and two variations of phase of tile displacement with the shifts of 1/3 and 1/5 of a tile in the Center and Right columns, in
which the edge maps are overlayed by the detected houghlines displayed in green (Blue lines indicate the longest lines detected) – Hough parameters are kept the same in all experiments for detecting near horizontal, vertical and diagonal tilted
lines in the edge maps. NumPeaks=1000, Threshold=3, FillGap=40, and MinLength=450 for all DoG scales. Other parameters of the DoG model are s=2 (Surround ratio) and h=8 (Window ratio). The last row shows each variation of the pattern
investigated (Reproduced with permission from [72]).
By comparing the edge map of the intermediate mortar luminance (ML=0.5) in the Grey-Tiles variations (Fig 12-center column) with the original Café Wall pattern (Fig 13-left column), we see very similar tilt cues along the mortar lines at multiple scales of the DoGs (although some border effects are revealed around the low contrasted tile variations that are not present in the Black and White variations of the pattern investigated so far). Some edge effects also appear in the DoG edge maps of the Black and White mortar patterns (Fig 12). In the Black mortar (ML=0.0) variation, we see two detected edges at fine scales along the mortar lines (for the inner and outer regions of tiles), and the blurring of these small edges as the DoG scale increases shapes the resulting edge map. In the White mortar (ML=1.0) variation, by contrast, we see just one detected edge along the mortar at fine scales. This is due to the shape of the filter we have used in our model for the implementation of the ON-Center OFF-surround RFs and the RGC responses.
Fig 14. DoG edge maps at seven scales (σc=4, 8, 12, 16, 20, 24, 28) for three patterns of the original Café Wall pattern-Left
column, mirrored image (Direction Change) in the Center, and Hollow Square in the Right-column, in which the edge maps
are overlayed by the detected houghlines displayed in green (Blue lines indicate the longest lines detected) – Hough parameters are kept the same in all experiments for detecting near horizontal, vertical and diagonal tilted lines. NumPeaks=1000,
Threshold=3, FillGap=40, and MinLength=450 for all DoG scales. Other parameters of the DoG model are s=2 (Surround
ratio), and h=8 (Window ratio). The last row shows each variation of the pattern investigated (Reproduced with permission
from [72]).
Detected houghlines have been shown in Fig 12 displayed in green on the DoG edge maps at multiple scales (σc =4, 8, 12, 16, 20, 24, 28) for the variations of Grey-Tiles. The hough parameters are
kept the same as the previous experiments (NumPeaks=1000, Threshold=3, FillGap=40, and
MinLength=450 in Sections 3.1 and 3.2). The absolute mean tilts and the standard errors of the low
contrasted tile patterns (Grey-Tiles) have been summarized in Fig 15-top. The absolute mean tilts and the standard errors for the original Café Wall are also provided at the bottom-left corner of the figure, for easier comparison of the tilt results of the other variations with this pattern (this is the result from Fig 6 for ML=0.5, or Fig 10 for MW=8px; the original Café Wall shows the maximum detected tilt range among the patterns investigated in this study).
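For reference, this line-detection stage can be reproduced in MATLAB roughly as follows; the mapping of NumPeaks, Threshold, FillGap and MinLength onto the hough/houghpeaks/houghlines arguments is our assumption about how the pipeline is wired, not the exact code used in this study.

% Sketch of the Hough stage applied to one binary DoG edge map (e.g. the sigma_c = 8 map
% from the DoG sketch above).
BW = edgeMaps{2};
[H, theta, rho] = hough(BW);                         % standard Hough transform
peaks = houghpeaks(H, 1000, 'Threshold', 3);         % NumPeaks = 1000, Threshold = 3
segs  = houghlines(BW, theta, rho, peaks, ...
                   'FillGap', 40, 'MinLength', 450); % FillGap = 40, MinLength = 450
fprintf('%d line segments detected\n', numel(segs));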
Comparing the mean tilt results of the Black mortar pattern (ML=0.0) from the variations of Grey-Tiles with the original Café Wall pattern in Fig 15 shows that the near horizontal mean tilts are much
lower than the original Café Wall, and the rest of the low contrasted tile patterns (ML=0.5, and
ML=1.0). As noted in Section 2.3.1 (p. 9) the most appropriate filter size for investigation of the horizontal inducing tilts for mortar size of 8px is the finest scale (σc=4). It is less than 1° for ML=0, 1
(mortar of Black and White) and ~2° for Mid-Grey mortar. The tilts predicted by the model for the
Grey-Tiles are weaker than the other variations shown in Fig 15 with high contrasted tiles of Black
and White tiles at scales σc =4, 8. At scale 8, it is ~2º for the Black (ML=0.0) and White (ML=1.0)
mortar patterns and ~4.5º for the Mid-Grey (ML=0.5) variation. Even for the ML=0.5 version, the
mean tilts are nearly 2º smaller than the detected mean tilts in the original Café Wall pattern at scales
8 and 12 (~4.5º to 7.5º compared to ~6.6º to 9.5º). The predicted results indicate a weak tilt effect
in the variations of Grey-Tiles. Gregory and Heard note that the effect depends critically on the luminance and that it disappears if the mortar line is either brighter than the light tiles, or dimmer than the
dark ones [4], and our results support these psychophysical findings.
Also at coarse scale (σc =24), our results show the near horizontal tilts with ~2º in the variations of
Grey-Tiles compared to none in the original Café Wall pattern. Checking the houghlines in Fig 12
shows that these detected lines are artefacts of the border lines and are not of interest here.
The diagonal mean tilts in the Grey-Tiles are approximately 2º smaller in the ML=0.5 (Mid-Grey)
pattern, which is ~11.5º-12º compared to ~12.5º-13.5º in the original Café Wall. So both the horizontal mean tilts at fine scales and the diagonal mean tilts at coarse scales indicate a higher range of
detected tilts in the original Café Wall pattern compared to the variations of low contrasted tiles
(Grey-Tiles).
3.3.1.3 Discussion
We have shown here that although the illusion persists in the Grey-Tiles variation with a mortar luminance intermediate between the luminances of the two Grey tiles (the ML=0.5 pattern), the illusion strength is not as high as in the original Café Wall pattern, which is consistent with previous reports in the literature. It has been reported that the strength of the illusion in low contrasted tile variations of the pattern is less than in the original Café Wall with high contrasted tiles [4, 6]. Also, Café Walls with lower contrasted tiles need thinner mortar lines to reach the same degree of apparent convergence/divergence in the illusion percept [4], which has not yet been tested by our model (although for the Mortar-Width variations with a tile contrast of one unit we have illustrated that patterns with thinner mortar lines produce a wider range of tilt angles, indicating a stronger tilt effect-Section 3.2).
3.3.2 Phase displacement effect on detected tilt
3.3.2.1 Patterns Characteristics
Two more patterns are generated based on two different phases of tile displacement, that is, the amount of tile shift between consecutive rows in the Café Wall pattern. One of these patterns has a phase shift of 1/3 (a shift of 1/3rd of a tile) and the other a phase shift of 1/5; both are displayed in Fig 4-bottom.
3.3.2.2 DoG Edge Maps and Quantitative Tilt Results
The results of the binary DoG edge maps at seven scales (σc =4, 8, 12, 16, 20, 24, 28) for the two
variations of phase shifts of 1/3 and 1/5 (Fig 4-bottom) have been presented in Fig 13, in which the
edge maps have been overlayed by the detected houghlines displayed in green. Comparing the edge
maps of these patterns with the edge map of the original Café Wall pattern on the left column of Fig
13 shows that the tilt cues are much weaker across multiple scales of the edge maps in the tile shift
variations. For the pattern with a phase shift of 1/5 (shift=1/5 of a tile), we see the weakest tilt effect, and the near horizontal tilts (Fraser Cord elements) appear only at the finest scales (σc =4, 8). The inducing near horizontal tilts along the mortar lines exist up to scale 12 (σc =4, 8, 12) for the phase shift of
1/3 (shift=1/3 of a tile). But, as indicated for the original Café Wall pattern with the shift=1/2 of a tile
between the consecutive rows, the mortar cues last up to scale 16. The results show again that the persistence of mortar cues in the edge map representation at multiple scales is an indication of the strength
of the tilt effect in the Café Wall illusion.
Detected houghlines have been shown in Fig 13 for the DoG edge maps at seven scales for the two
variations of phase shift displacement. The hough parameters are the same as the previous experiments (NumPeaks=1000, Threshold=3, FillGap=40, and MinLength=450).
The absolute mean tilts and the standard errors of detected tilts for the two patterns of phase of tile
displacement are shown at the bottom of Fig 15. Comparing these results with the mean tilt results of the original Café Wall at the bottom-left corner reveals that for the small shift of 1/5 of a tile, there are no near horizontal lines detected along the mortar at the fine scales. A few near horizontal lines are detected at scales 12 and 16, with tilts much larger than in the original Café Wall pattern (~15.5°-18.9° compared to ~9.5°-12°). The scales of 12 and 16 are much larger than the mortar size (M=8px), and the predicted results are not reliable at these scales for a mortar size of 8px (see Section 2.3.1). Fig 13 with the details of detected houghlines also supports this. When we compare the phase shift of
1/3 with the original Café Wall pattern, we still see a lower range of mean tilts which is ~4º-9.5º compared to ~3.5º-12º at the range of fine scales (σc =4, 8, 12).
The vertical mean tilts are much smaller in the pattern with a phase shift of 1/5, around 2.5°-3.5° across scales. They are slightly higher for the phase shift of 1/3 (~4°-6°; recall that for the near vertical and diagonal tilts we should consider coarse scale DoGs in our model, σc >16). Checking the edge maps in Fig 13 shows that the vertical grouping of tiles appears at scale 8 in the two variations of tile shift displacement, compared to scale 12 and even 16 in the original Café Wall pattern. This is a good indication of a weaker tilt effect along the mortar lines in these variations compared to the original Café Wall, because the tilt cues are more persistent along the mortar lines at fine to medium scales in the original Café Wall pattern. In addition, we see sharper vertical lines with less deviation from the vertical in the phase shift of 1/5, highlighting the vertical grouping of tiles that emerges at smaller scales. The near diagonal mean tilts for these two patterns (phase shifts of 1/3 and 1/5) are more than 3° lower than the predicted tilt ranges along the diagonal axes in the original Café Wall (~9.5° compared to ~12.5°-13.5°).
3.3.2.3 Discussion
The tilts predicted by the model show a higher range of detected mean tilts near the horizontal for the
original Café Wall pattern compared to these variations of phase of tile displacement (phase shifts of
1/3 and 1/5). Our results are consistent with previous reports that the tilt effect is maximal with phase
shift of half cycle between consecutive rows and minimal or no distortion when it is in a checkerboard
phase [5].
3.3.3 Tilt effect direction and Hollow Square pattern
3.3.3.1 Patterns Characteristics
The mirrored image of the original Café Wall pattern, shown at the bottom of Fig 4 (the mirror of either ML=0.5 in the Mortar-Luminance variations or MW=8px in the Mortar-Width variations, marked with * in the figure), results in an opposite direction of inducing tilt. This version is referred to as the Direction Change variation of the Café Wall pattern in the text. The final pattern we consider is the Hollow Square pattern [9, 11], which consists of hollow tiles of the same size as the Café Wall tiles. The outlines of the tiles are thinner than the mortar, but since two hollow tiles are adjacent to each other, their combined border thickness is roughly similar to the 8px mortar of the Café Wall pattern. If the outlines of the hollow squares are thickened in this version, we ultimately arrive at a pattern similar to a Café Wall without any mortar lines [11]. These two patterns are presented at the bottom of Fig 4.
3.3.3.2 DoG Edge Maps and Quantitative Tilt Results
The results of the binary DoG edge maps at seven scales for the two patterns of the mirrored image (inducing an opposite direction of tilt) and the Hollow Square pattern have been displayed in Fig 14, in
which the edge maps have been overlayed by the detected houghlines displayed in green. The scales
of the DoGs are similar to our previous investigations in Sections 3.1 and 3.2. For easier comparison,
we have provided again the DoG edge map of the original Café Wall pattern in the figure. The DoG
edge map of the Direction Change variation shows exactly the same tilt cues at multiple scales of the
DoGs, but with the opposite direction for convergence/divergence along the mortar lines, supporting the previous finding that the tilt effect reverses “when alternate rows of tiles are pushed across half a cycle” [4].
The edge map of the Hollow Square pattern shows two detected edges for each side of the hollow tiles in the pattern (inner and outer regions). The edge map is completely different from the rest of the Café
Wall variations, but some similarities can be found with the MW=0px pattern in Fig 8 and especially
at the finest scales (σc =4, 8) which are the most appropriate scales in the model for predicting the tilts
around the horizontal (Section 2.3.1). What we see in the edge map at fine scales is a grouping of tiles in a vertical zigzag direction driven by the high frequency details of the edges. We have seen this vertical grouping of tiles at coarse scales in most of the Café Wall variations investigated. There are no near horizontal tilts at the mortar positions (the connecting sections of the rows of hollow tiles) in this
pattern at fine scales. Based on the quantitative tilt results we can claim that there is no illusion in this
pattern.
Detected houghlines have been shown in Fig 14 for the DoG edge maps at multiple scales for the
mirrored image of the Café Wall (Direction Change) and the Hollow Square pattern. The hough parameters are the same as the previous experiments (NumPeaks=1000, Threshold=3, FillGap=40, and
MinLength=450).
The mean tilts and the standard errors of detected tilt angles of houghlines have been presented in
Fig 16. In the Direction Change pattern, the near horizontal tilt range is quite similar to the original
Café Wall pattern ranging from ~3.5º to 12º. The mean tilts along the vertical and diagonal orientations are again very similar to the original Café Wall pattern. Slight changes less than a degree (˂1º)
are within the acceptable mean tilt range given the standard errors of around 0.6°-0.7° in the original Café Wall (Fig 15), and this indicates that the results of these two variations are statistically very close to each other. This is what we expected, and we have shown that the tilt effect has the opposite direction of divergence/convergence in the detected houghlines (Fig 14).
Fig 15. Top: Mean tilts and the standard errors of detected tilt angles of houghlines for three variations of Grey-Tiles (with
relative mortar luminance of Black-Left, Mid-Grey-Center and White-Right mortar lines for tiles with luminance of 0.25 for
Dark Grey, and 0.75 for Light Grey), Bottom: Mean tilts and the standard errors of detected tilt range for three patterns of the
original Café Wall pattern on Left, and two variations of phase of tile displacement with the shifts of 1/3 tile in the Center,
and 1/5 tile in the Right. The calculations are done for the edge maps at multiple scales (σc=4, 8, 12, 16, 20, 24, 28) – Hough
parameters are kept the same in all experiments for detecting near horizontal, vertical and diagonal tilted lines (H, V, D1,
D2) in the edge maps. NumPeaks=1000, Threshold=3, FillGap=40, and MinLength=450 for all DoG scales. NaN means no
detected lines (Reproduced with permission from [72]).
Fig 16. Left: Mean tilts and the standard errors of detected tilt angles of houghlines for the mirrored image (Direction
Change), and Right for the Hollow Square pattern. The calculations are done for the edge maps at multiple scales (σc=4, 8,
12, 16, 20, 24, 28) – Hough parameters are kept the same in all experiments for detecting near horizontal, vertical and diagonal tilted lines (H, V, D1, D2) in the edge maps. NumPeaks=1000, Threshold=3, FillGap=40, and MinLength=450 for all
DoG scales. NaN means no detected lines (Reproduced with permission from [72]).
The last pattern we consider is the Hollow Square. The mean tilts and the standard errors of detected tilt angles of houghlines have been shown in Fig 16-right. Comparing the near horizontal mean
tilts of this pattern with the original Café Wall pattern at the finest scale (σc =4) shows that the tilt
angles detected are negligible (<1º) here compared to 3.5º in the original Café Wall pattern. There is a
similar mean tilt of ~6° (compared to ~6.6°) at scale 8. What is important to consider, however, is that although the detected lines have a mean tilt deviation of around 6°, at σc=8 the lines take both positive and negative orientations along each mortar position (the connections of the hollow tiles in rows), compared to a single tilt orientation (either positive or negative) for the detected lines along each mortar in the original Café Wall pattern (based on the detected houghlines in Fig 13). In real vision, at this transition resolution, these contradictory tilts tend to cancel each other and result in a lower tilt range than the predicted tilt results (~6°). With the Hough parameters kept the same for the tilt analysis of the patterns investigated, we found a much smaller tilt deviation from the diagonal axes, ~3.5°-6°, compared to 13° in the original Café Wall pattern. Similarly, we see a smaller deviation from the vertical axis for the detected lines, 2°-5° compared to 6.5°-9.5° in
the original Café Wall pattern. So the model has not detected any considerable tilt angle in the Hollow Square pattern.
3.3.3.3 Discussion
We have shown here that lower contrasted tile (Grey-Tiles) patterns, compared to the contrast of unity in the original Café Wall pattern, have a lower range of detected mean tilts, indicating a weaker tilt illusion than with the Black and White tiles of the original Café Wall pattern. Among the three different variations of Grey-Tiles we have shown that the mortar luminance should be in the intermediate range for a strong tilt effect. It has been suggested that decreasing the mortar width in low contrasted tile variations of the Café Wall pattern enhances the tilt effect [4]; this has not yet been tested by our model. Also, our quantitative mean tilt results showed the maximum near horizontal mean tilts with a half tile shift (for the original Café Wall) and that the effect is diminished close to a chessboard phase. Our results also show an opposite direction for the illusory tilt in the mirrored image of the Café Wall variation (a half cycle shift of the Café Wall pattern), as well as a very weak tilt effect in the Hollow Square pattern (the variation tested here). It has been reported that a decrease in contrast reduces the apparent tilt of the Hollow Square illusion [9]; this has also not yet been tested by our model. In the next section we explain how to verify the tilts predicted by the model in a testable framework connecting the scale of the edge maps to differences of resolution, eccentricity, and visual angle.
3.4 Outlines of Psychophysical Experiments
In this section we aim to make clear what testable predictions the model makes. We show that the model's tilt prediction for a given stimulus corresponds to individual foveal and peripheral acuity/eccentricity (at very fine scales) as well as to the relative viewing distance/visual angle (for medium to coarse scales). So the model yields a single prediction for each stimulus given its acuity/eccentricity/distance setting. We also outline here the critical factors and essential parameters that should be considered in our later psychophysical experiments to validate the predicted results.
The eyes process the scene at different scales at different times, due to the intentional and unintentional eye movements (such as overt saccades and gaze shifts) while we look at the scene/pattern.
These result in a rapid scan of the field of view by the fovea for encoding of the high-resolution information. Our perception of illusory tilt in the Café Wall is affected by our fixation on the pattern
and the tilt effect is weakened in a region around the fixation point, while the peripheral tilts stay unaffected, inducing stronger tilts. So in psychophysical experiments we should consider perceptual factors such as the foveal/peripheral view of the stimulus to measure the induced tilt in the Café Wall illusion.
In the psychophysical experiments we thus need to identify and formalize the effective pixel resolution (how many pixels within a subtended visual angle and the corresponding physical area on a
viewer’s retina). Issues related to the design of effective visual experiments include the resolution,
visual angle, viewing parameters, the speed of presenting stimuli, techniques for mapping data values to their corresponding visual representations, and stimulus features and data properties.
The subtended visual angle (ν) can be calculated from the physical size of an element/object in our field of view (S) and its viewing distance (D) as ν = 2·tan⁻¹(S/2D), or for small angles tan ν ≈ ν ≈ S/D. Considering the mechanics of the eye, the visual angle can also be calculated from the size of the retinal image (R) and the distance from the nodal points to the retina (n ≈ 17mm) as tan ν = S/D = R/n, which gives a retinal image of size R = 17×S/D mm. Therefore, when a 1cm tile is viewed at a 0.5m distance, the subtended visual angle of a tile is ν = 0.02 rad = 0.02×57° = 1.14° ≈ 1°.
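A quick numerical check of these relations (a sketch; the variable names are ours):

% Worked check of the visual-angle relations above for a 1cm tile viewed at 0.5m.
S = 0.01;  D = 0.5;  n = 0.017;          % object size, viewing distance, nodal distance (m)
nu = 2*atan(S/(2*D));                    % nu = 2*tan^-1(S/2D), in radians
R  = n*S/D;                              % approximate retinal image size, in metres
fprintf('nu = %.4f rad = %.2f deg, R = %.2f mm\n', nu, nu*180/pi, R*1e3);
% prints nu ~ 0.0200 rad ~ 1.15 deg and R ~ 0.34 mm, consistent with the ~1.14 deg above
% (which uses the rougher conversion of 57 deg per radian)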
All the tilt analysis in our simulations has been done at the resolutions of the MATLAB generated stimuli/patterns, so we should make sure that the pixel sizes in MATLAB, that is, the image matrix dimensions, are displayed at their exact size on the display (monitor) by an equivalent sized pixel representation. This is guaranteed by using the truesize function in MATLAB. Note that this is not in general the same as the pixel size of the monitor; it is the basis for the sizing of image features, and thus the definition of a pixel that needs to be related to retinal cone density and ocular acuity [73-75].
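A minimal usage sketch (the stimulus file name is hypothetical):

% Display a stimulus so that one image pixel maps onto exactly one screen pixel.
img = imread('cafewall_11x11.png');   % hypothetical stimulus file
figure; imshow(img);
truesize;                             % resize the current figure to a 1:1 pixel mapping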
The limits on visualizing information have been hypothesised by Healey and Sawant, noting that “a minimum number of pixels ‘resolution’ and subtended physical area on the retina ‘visual angle’ are required to perceive different values of a visual feature” ([76]: pp.2). They have identified the boundaries for distinguishing different values of luminance, hue, size and orientation. Similar investigations have been reported by other researchers; for instance, Legge et al. [77, 78] investigated document thresholds and text legibility for readers, considering visual features such as contrast, size, color and luminance.
The limit of visual acuity for ‘point acuity’ (resolving two point targets) is noted to be 1', and for ‘grating acuity’ (distinguishing bright and dark bars in a uniform grey patch) it is measured in the range of 1-2', while, for instance, the ‘letter acuity’ needed to resolve letters in 20/20 vision (5 arc min letters to be seen with 90% accuracy) is 5'; most visual acuities fall within the 1-2' range corresponding to the center of the fovea [76]. They conclude that “resolution and visual angle limits depends on feature values being distinguishable from one another, and not on the specific values being displayed” ([76]: pp.14). Besides the parameters related to the measurement of the visual angle, we also need to consider the visual condition of the subjects (for instance, subjects with 20/20 visual acuity and above, in a certain age group such as 20-40 years) as well as viewing conditions such as free eye movements for the experiments.
A convincing way to demonstrate how our model prediction is going to be tested on real subjects in our future psychophysical experiments is to present the stimulus and the tilts predicted by the model next to each other, with the viewers' task being to select the tilted lines closest to the perceived
tilt in the stimulus. Certainly for a reliable measurement, the presentation framework plays an important role, and in our design we need to make sure we have eliminated any possible side effect of
the presentation on the strength of the illusion.
We should note here that in our tilt analysis we have estimated the mean tilt angles in the DoG edge maps of the stimuli/patterns at seven different scales around four reference orientations of Horizontal, Vertical, and Diagonals (H, V, D; from σc = 0.5M to 3.5M with incremental steps of 0.5M; M: mortar size). The edge map at multiple scales, which consists of the DoG filtered outputs from fine to coarse scales, indicates how our distance from the pattern may affect the tilt, or in other words how the perceived tilt is related to the visual angle of the pattern as a whole, and also of its elementary parts/figures (mortar and tiles in the Café Wall pattern). We believe that the relationship between σc in the model and the mortar size is more important than the actual pixel size.
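A sketch of how such mean tilts can be computed from the detected segments is given below; the ±22.5° window used to assign each segment to its nearest reference orientation, and the use of the segment endpoints rather than the Hough normal angle, are our assumptions rather than the exact procedure used here.

% Mean absolute tilt of detected segments (segs, from the Hough sketch above) around the
% four reference orientations H, V, D1, D2.
segAng = zeros(1, numel(segs));
for i = 1:numel(segs)
    d = segs(i).point2 - segs(i).point1;          % [dx dy] of the detected segment
    segAng(i) = atan2d(d(2), d(1));               % segment orientation in degrees
end
refs = [0 90 45 135];  names = {'H', 'V', 'D1', 'D2'};
for r = 1:numel(refs)
    dev = mod(segAng - refs(r) + 90, 180) - 90;   % signed deviation in (-90, 90] degrees
    sel = abs(dev) <= 22.5;                       % segments closest to this reference
    if any(sel)
        fprintf('%s: mean |tilt| = %.2f deg over %d segments\n', ...
                names{r}, mean(abs(dev(sel))), nnz(sel));
    else
        fprintf('%s: NaN (no detected lines)\n', names{r});
    end
end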
The model uses normalized Gaussians and the curve between the center and surround Gaussians in
the DoG filter shows the activation response. The surround Gaussian intersects the center Gaussian at
its inflection points (for the 2.0 ratio). Therefore, σc =4 in our model corresponds to a filter in which
the mortar size of 8px lies within one standard deviation (σc) away from the mean of the filter, so we
are able to detect the mortar lines with high accuracy in the filtered response as noted previously in
Section 2.3.1. The tilts predicted by the model at σc =4 in our results thus correspond to tilt angles in foveal vision, with high acuity at the center of our current gaze direction. For a fixed
size stimulus, the results reported for each scale indicate how the distance affects our tilt perception.
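For concreteness, the filter assumed in this discussion can be written as the difference of two normalised two-dimensional Gaussians (our reconstruction from the stated parameters, not a formula quoted from the model itself): DoG_σc(x, y) = G(x, y; σc) − G(x, y; s·σc), with G(x, y; σ) = 1/(2πσ²)·exp(−(x²+y²)/(2σ²)), where s = 2 is the Surround ratio and the filter support is limited to a window of h·σc pixels (h = 8).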
At σc=8 the results correspond to a particular distance and some degrees of eccentricity in a local neighbourhood around our fixation points. The mean tilts calculated for scales larger than 8 (σc > 8; 8 is the mortar size) should not be considered as predicted tilt angles near the horizontal for the illusion.
Therefore, relying on the tilts predicted by the model presented in Figs 6, 10, 15 and 16, the near horizontal tilts at scale 4, and at most scale 8 (σc =4, 8), are the predicted tilt angles corresponding to foveal vision of the pattern. These are the appropriate scales to detect the tilt cues in the Café
Walls explored. We encounter the disappearance of mortar lines in the pattern as the viewing distance
from the stimulus increases (equivalent to decreasing its visual angle). The near vertical and diagonal
(V, D) tilts in the predicted results will then be considered for further analysis of tilts (at coarse
scales).
All the stimuli used in this study have the same size of 3×8 tiles (with tiles of 200×200px) and an
overall size of ~1600×616px (note that for Mortar Width variations the height of the image will have
a moderate change). Here we concentrate on the Café Wall of 3×8 with tiles of 200×200px, mortar
size of 8px and the intermediate mortar luminance between the luminance of the black and white tiles
(ML=0.5; Fig 4-top section with *; the resolutions of MATLAB generated pattern) with five copies
presented in the left of Fig 17. As indicated before, σc=4 in our model corresponds to the prediction of
tilt angles in the foveal vision with high acuity. As indicated in Fig 6, the predicted tilts near the horizontal at scale 4 (σc =4) is 3.5°±0.5 which covers a tilt range between 3-4°.
To let readers compare the predicted tilt results with their own subjective experience of tilt in this stimulus, we have presented tilted line segments with actual angles in the range of ±2.5° to ±4.5°, in 0.5° steps, and positioned them adjacent to the mortar lines of the stimulus in Fig 17. These tilted lines have very similar characteristics to the mortar lines (a similar size and luminance), and we have overlayed them on black backgrounds with blurry outlines at the edges to make the appearance of these line segments as close as possible to the inducing tilt along the mortar lines. The length of the line segments is close to a tile size, to prevent any misjudgement from long tilted intersecting lines that may eliminate the inducing tilts in the illusion. In the middle of the figure we have presented the convergent/divergent line segments with magnitudes of 3°, 3.5° and 4°, which are the range of tilt angles detected by the model for the stimulus. If you test your perceived tilt on the stimulus presented in Fig 17 against the detected tilt range on the left, you will find that the model predictions closely match our perception of tilt. Psychophysical assessments are required to show this in more detail as the future priority of our study.
Fig 17. Left: five copies of Café Wall of 3×8 tiles with the intermediate mortar luminance between the luminance of the
black and white tiles. The resolutions of MATLAB generated pattern is the tile sizes of 200×200px and mortar size of 8px
(ML=0.5; Fig 4-top section with *). Right: Convergent/divergent line segments in the range of ±2.5° to ±4.5° with
0.5°difference in between. In the middle of the figure the tilted lines indicate the detected range of tilt angles by the model
for the stimulus at scale 4 (σc =4; 3.5°±0.5) with magnitudes of 3°, 3.5° and 4° around the horizontal (mean tilt table for
ML=0.5 in Fig. 6).
We may perceive the illusory tilt of the mortar lines with some moderate variation among individuals; for instance, some people may see the mortar lines as more bent/curved than tilted. The position of the tilted line segments at one end of the mortar lines facilitates the integration of the Fraser Cord elements that appear in our local view into a long continuous contour in our global view.
Note that we have not done any local sampling of the stimuli in this work for tilt investigations.
The configurations used as the stimuli in this research (Fig 4) are small Café Wall configurations (Café Walls of 3×8 tiles). The tilt predictions of the model for variations of different sampling sizes for
simulating foveal and peripheral vision and investigating the inducing tilt range in these samples have
been reported in another research article [71]. We observe that the model predicts tilt angles close to our
judgement of tilt across the different configurations of the pattern.
Fig 18. Left: Café Wall of 11×11 tiles (investigated in [71]) with the intermediate mortar luminance between the luminance
of the black and white tiles. The resolutions of MATLAB generated pattern is the tile sizes of 200×200px and mortar size of
8px (the same as the stimulus in Fig 17). Right: Convergent/divergent line segments at ±4° showing the tilt predicted by the
model for the stimulus at scale 4 (σc =4; 4.05°±0.09) around the horizontal (mean tilt tables in [71]: Fig. 14).
For larger configurations of Café Wall with the same tile size and mortar as the stimulus in Fig 17,
the tilt effect appears to be a bit stronger when comparing Fig 17 with Fig 18. A Café Wall of 11×11
tiles (tiles of 200×200px, mortar of 8px, and the same mortar luminance as the stimulus in Fig 17)
has been presented in Fig 18. The tilts predicted by the model for different variations of the Café Wall pattern, with different configurations of tiles in the rows and columns, can be found in [71]. The predicted tilt for this stimulus at scale 4 (σc =4) around the horizontal is 4.05°±0.09 ([71]: Fig. 14), which corresponds to tilt angles between 3.96° and 4.14°; we have rounded these to ~4.0° (rounding to the nearest unit) and have shown ±4° line segments next to the stimulus, at the right end points of the mortar lines and adjacent to them, so that our/readers' perceived tilts can be evaluated against the predicted results (the predicted mean tilt is 3.5°±0.5 at σc =4 for the stimulus in Fig 17; from Fig 6). In psychophysical experiments, to validate the results with high accuracy, we might present only one pair of these lines (convergent/divergent lines), or even a single line, instead of presenting them all next to the stimulus as shown in Figs 17 and 18. Again, if you test your perceived tilt in Fig 18, you will find that the predicted mean tilt closely matches the perceived illusory tilt in the stimulus. Now let us check the effect of the visual angle on the perceived tilt.
Fig 19. The effect of the visual angle on the perceived tilt in Café Wall illusion (Café Wall of 11×11 tiles; the resolutions of
MATLAB generated pattern is the tile sizes of 200×200px and mortar size of 8px the same as the stimulus in Fig. 18). The
size of the stimulus is adjusted in these images in a way that the image in the Center has a size reduction of 25% from the
image in the Left, and the image in the Right has a size reduction of 50% from the image in the Left.
The effect of the visual angle on the perceived illusory tilt can be checked in Fig 19 for the Café Wall of 11×11 tiles. Although the pattern is generated with tiles of 200×200px and mortar of 8px as its original size in MATLAB, the size of the pattern (with a fixed aspect ratio of width to height) is adjusted to cover total widths of 236px, 177px and 118px, as shown in Fig 19 from left to right. The center image has a width reduction of 1/4th from the width (W) of the image on the left (the larger image) (W'=3/4×W=3/4×236=177px), and a second width reduction of 1/4th of W has been applied to adjust the size (resolution) of the image on the right (W''=2/4×W=2/4×236=118px). We have used Visio software to generate these copies of the Café Wall stimulus with the exact relative sizes noted above. These individual images may appear with different resolutions on different monitors/printers, or when we change the scale of the displayed image (by resizing it), but the size ratios between these images stay the same, and this is what we want to test here (the relative visual angle stays constant). The aim is to show that there is a threshold value relative to our eyes' sensitivity to the spatial frequency of the image/pattern, and that by reducing the size of the Café Wall stimulus and approaching that threshold, we encounter a stronger inducing tilt. Fig 20 demonstrates this with two sample sizes.
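Although these copies were produced in Visio, an equivalent set of size-reduced copies could be generated in MATLAB, for example (a sketch; the file name is hypothetical):

% Generate 75% and 50% width copies of the stimulus to vary its visual angle.
img   = imread('cafewall_11x11.png');   % hypothetical stimulus file
img75 = imresize(img, 0.75);            % W'  = 3/4 x W (25% reduction)
img50 = imresize(img, 0.50);            % W'' = 1/2 x W (50% reduction)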
In Fig 20 we have shown tilt angles of ±4° as the tilts predicted by the model for this pattern at σc
=4 [71: Fig. 14] presented next to the stimulus on the right of the figure (with wider visual angle), and
one degree more than the predicted tilts, equal to ±5°, for a smaller sized image on the left (with a narrower visual angle). This can be tested on a digital copy of the stimulus on the screen by changing the size of the displayed stimulus to a smaller size. This noted threshold can be further investigated in our later psychophysical experiments in order to find the relationship between the strength of inducing tilt in the illusion, the visual angle of the stimulus as a whole, and the visual angle of its elementary
parts/figures (tiles and mortar in Café Wall).
Fig 20. The Café Wall of 11×11 tiles (the resolutions of MATLAB generated pattern is the tile sizes of 200×200px and mortar size of 8px the same as the stimulus in Figs 18, 19). The image on the Left has a 25% size reduction from the image in the
Right. The tilts predicted by the model for the stimulus is reported to be 4.05°±0.09 at scale 4 (σc =4; 4.05°±0.09) around the
horizontal ([71]: Fig. 14). Left: The Convergent/divergent line segments are shown at ±5° that is one degree more than the
tilt predicted by the model for the stimulus. Right: The Convergent/divergent line segments are shown at ±4° indicating the
tilt predicted by the model for the stimulus.
The estimation noted above is consistent with our understanding of tilt based on our foveal/peripheral vision. For larger patterns, when we foveate at the center of the image (facilitated with
a red cross in Fig 18), we see a very weak tilt in this vicinity, but in the periphery the induced tilt is
much stronger. When the visual angle decreases (such as for the smaller pattern on the left of Fig 20), the illusory tilt is again weak close to the focal point, but around the fovea we are able to capture more tiles in the periphery, with more inducing tilt cues on the same retinal area. The strength of our
perceived tilt may be affected by cues related to the illusion strength such as activations of more retinal GCs, evident in our model as the persistence of mortar cues across multiple scales in the edge
map, or from detecting a larger inducing tilt angle in smaller sized patterns. This should be tested in
detail by psychophysical experiments.
To explain how this can be addressed by our model it should be noted that we have considered σc
=4 to evaluate the tilts predicted by the model in this section so far. Based on the predicted results and
our understanding of foveal/peripheral vision we expect that for the range of scales tested in our model (related to the pattern characteristics), the mean tilts reported for one scale larger than the finest
scale, at scale 8 (σc =8), correspond to some neighbourhood region around the foveal area. This seems to approximate the maximum tilt angle around the horizontal, as detected across variations of visual angle and different configurations, for the diverse eccentricities of retinal GCs that provide us with peripheral vision. For this stimulus (Fig 18; the resolution of the MATLAB generated pattern corresponds to a specific visual angle) we have measured the mean tilts at scale 8 to be 7.5°±0.33 around the horizontal ([71]: Fig. 14).
4 Conclusion
It is increasingly clear that information in the visual system is processed at multiple levels of resolution, perhaps simultaneously, perhaps sequentially in some sense. We examined the capability of a
bioplausible vision model, simulating retinal/cortical simple cells to address the illusory percept in
variations of Café Wall pattern. Exploring the tilt effect in the Café Wall illusion, we have shown that
a simple DoG model of lateral inhibition in retinal/cortical simple cells leads to the emergence of tilt
in the pattern. Our model generates an intermediate representation at multiple scales that we refer to
as an edge map. For the recognition of a line at a particular angle of tilt, further processing by orientation selective cells in the retina and/or cortex is assumed [14, 15], but we have exploited an image processing pipeline for quantitative measurement of tilt angle using the Hough transform in our study.
We have shown in this paper that the DoG edge map not only shows the emergence of tilt in Café
Wall illusion, but also can explain different degrees of tilt effect in variations of Café Wall illusion
based on their characteristics. The qualitative and quantitative tilt results of the Café Wall variations
investigated support previous psychophysical and experimental findings [4, 5, 6, 9, 10, 14, 15, 69, 70]
on these stimuli.
The edge map has the ability to show some elementary factors that are involved in our local and
global view of the pattern (at least in part represented by foveal and peripheral vision). We believe
that the tilt effect in the Café Wall illusion arises from two incompatible groupings of pattern elements (tiles and mortar) that are present simultaneously as a result of multiscale retinal visual encoding of the pattern. At fine scales, there is a grouping of two identically coloured tiles with the mortar
line in a near horizontal direction (appearance of Fraser’s Twisted Cord elements in focal view). At
coarse scales, when the mortar cues disappear from the edge map, another grouping starts to appear in
an orthogonal direction, grouping tiles in a zigzag with a broadly vertical orientation. At medium to
coarse scales a swapping of the local groupings of the adjacent identically coloured tiles occurs, which contradicts the grouping of the Twisted Cord elements at fine scales. These two incompatible
groupings, along with systematic differences relating to the relative size of Gaussian and pattern
scales, result in illusory tilt effects that reflect changes in size and density with eccentricity, predicting
the change in illusion effects according to distances from the focal point in the pattern versus distance
in the retinal image from the fovea into the periphery. For variations on Café Wall involving very
thick mortar, the Enlarged Twisted Cord elements result in weak tilt effects plus a different effect,
traditionally explained as brightness induction, where an illusory rope-like ‘Reversed Twisted Cord’/‘Twisted Rope’ construct along the mortar is perceived, and our DoG model picks up a reversed angle
of twist.
We have shown that explanatory models and hypotheses for the Café Wall illusion, such as Irradiation, Brightness Induction, and Bandpass filtering, appear to share the central mechanism of lateral inhibition that ultimately underlies the tilt effect in this illusory pattern. We further expect that these retinal filter models will prove to play an important role in higher-level models simulating depth and motion processing. This supports the use of Gaussian filters and their differences or derivatives in Computer Vision. We have also shown empirically that this model has a high potential for revealing the underlying mechanism connecting low-level filtering approaches to mid- and high-level explanations such as ‘Anchoring theory’ and ‘Perceptual grouping’ [19].
Although we have covered many of the aspects involved in the illusory tilt perceived in variations of the Café Wall pattern with our model in this work (relying on the available psychophysical reports in the literature), and have shown examples of how the tilt predicted by the model yields a single prediction for each stimulus considering its acuity/eccentricity/distance setting as a testable framework, many things are left to be explored. These include psychophysical experiments, a priority in our future study, to confirm the predictions implicit in our results; they are expected to lead us to a more precise multiple scale filtering which is adaptable to the pattern's characteristics.
Acknowledgements
Nasim Nematzadeh was supported by an Australian Research Training Program (RTP) award.
References
[1] Grossberg S, Todorovic D. Neural dynamics of 1-D and 2-D brightness perception: A unified model of classical and recent phenomena. Perception & psychophysics. 1988;43(3):241-277.
[2] Eagleman DM. Visual illusions and neurobiology. Nature Reviews Neuroscience. 2001;2(12):920-926.
[3] Changizi MA, Hsieh A, Nijhawan R, Kanai R, Shimojo S. Perceiving the present and a systematization of illusions. Cognitive Science. 2008;32(3):459-503.
[4] Gregory RL, Heard P. Border locking and the Café Wall illusion. Perception. 1979;8(4):365-380.
[5] McCourt ME. Brightness induction and the Cafe Wall illusion. Perception. 1983;12(2):131-142.
[6] Morgan M, Moulden B. The Münsterberg figure and twisted cords. Vision Research. 1986;26(11):1793-1800.
[7] Kingdom F, Moulden B. A multi-channel approach to brightness coding. Vision research. 1992;32(8):1565-1582.
[8] Pessoa L. Mach bands: How many models are possible? Recent experimental findings and modeling attempts. Vision Research. 1996;36(19):3205-3227.
[9] Woodhouse JM, Taylor S. Further studies of the Café Wall and Hollow Squares illusions. Perception. 1987;16(4):467-471.
[10] Earle DC, Maskell SJ. Fraser cords and reversal of the café wall illusion. PERCEPTION-LONDON. 1993;22:383-383.
[11] Bressan P. Revisitation of the family tie between Münsterberg and Taylor-Woodhouse illusions. Perception. 1985;14(5):579-585.
[12] Westheimer G. Irradiation, border location, and the shifted-chessboard pattern. Perception. 2007;36(4):483.
[13] Fraser J. A new visual illusion of direction. British Journal of Psychology, 1904-1920. 1908;2(3):307-320.
[14] Moulden B, Renshaw J. The Munsterberg illusion and ‘irradiation’. Perception. 1979;8:275-301.
[15] Grossberg S, Mingolla E. Neural dynamics of form perception: Boundary completion, illusory figures, and neon color spreading.
Psychological review. 1985;92(2):173.
[16] Kitaoka A. Tilt illusions after Oyama (1960): A review. Japanese Psychological Research. 2007;49(1):7-19.
[17] Wikipedia. Zollner illusion. https://upload.wikimedia.org/wikipedia/commons/2/2d/Zollner_illusion.svg. 2016.
[18] Nematzadeh N, Lewis TW, Powers DMW. Bioplausible multiscale filtering in retinal to cortical processing as a model of computer
vision. ICAART2015-International Conference on Agents and Artificial Intelligence, Lisbon, Portugal. SCITEPRESS. 2015.
[19] Nematzadeh N, Powers DMW, Lewis TW. Bioplausible multiscale filtering in retino-cortical processing as a mechanism in perceptual
grouping. Brain Informatics, DOI 10.1007/s40708-017-0072-8.
[20] Nematzadeh N, Powers DMW. A quantitative analysis of tilt in the Café Wall illusion: a bioplausible model for foveal and peripheral
vision. In Digital Image Computing: Techniques and Applications (DICTA). 2016;1-8. IEEE.
[21] Nematzadeh N, Powers DMW, Trent LW. Quantitative analysis of a bioplausible model of misperception of slope in the Café Wall
illusion. In Workshop on Interpretation and Visualization of Deep Neural Nets (WINVIZNN). ACCV. 2016.
[22] Nematzadeh N, Powers DMW. A Bioplausible Model for Explaining Café Wall Illusion: Foveal vs. Peripheral Resolution. In
International Symposium on Visual Computing (ISVC). 2016; 426-438. Springer International Publishing.
[23] Rodieck RW, Stone J. Analysis of receptive fields of cat retinal ganglion cells. Journal of Neurophysiology. 1965;28(5):833-849.
[24] Enroth-Cugell C, Robson JG. The contrast sensitivity of retinal ganglion cells of the cat. The Journal of physiology. 1966;187(3):517-552.
[25] Marr D, Hildreth E. Theory of edge detection, Proceedings of the Royal Society of London. Series B. Biological Sciences. 1980;207
(1167):187-217.
[26] Ratliff F, Knight B, Graham N. On tuning and amplification by lateral inhibition. Proceedings of the National Academy of Sciences.
1969;62(3):733-740.
[27] von Helmholtz H. Handbuch der Physiologischen Optik. 1911; Vol II. In Helmholtz's Treatise on Physiological Optics. 1962; Vols I and II (Edited by Southall JPC). Dover, New York.
[28] Penacchio O, Otazu X, Dempere-Marco L. A neurodynamical model of brightness induction in v1. PloS one. 2013;8(5):e64086.
[29] Kitaoka A, Pinna B, Brelstaff G. Contrast polarities determine the direction of Café Wall tilts. PERCEPTION-LONDON.
2004;33(1):11-20.
[30] Fermüller C, Malm H. Uncertainty in visual processes predicts geometrical optical illusions. Vision research. 2004;44(7):727-749.
[31] Arai H. A nonlinear model of visual information processing based on discrete maximal overlap wavelets. Interdisciplinary Information
Sciences. 2005;11(2):177-190.
[32] Westheimer G. Illusions in the spatial sense of the eye: Geometrical–optical illusions and the neural representation of space. Vision
research. 2008;48(20):2128-2142.
[33] Jameson D, Hurvich LM. Essay concerning color constancy. Annual review of psychology. 1989;40(1):1-24.
[34] Kingdom FA. Lightness, brightness and transparency: A quarter century of new ideas, captivating demonstrations and unrelenting
controversy. Vision Research. 2011;51(7):652-673.
[35] Blakeslee B, McCourt ME. A multiscale spatial filtering account of the White effect, simultaneous brightness contrast and grating
induction. Vision research. 1999;39(26):4361-4377.
[36] Field G, Chichilnisky E. Information processing in the primate retina: circuitry and coding. Annu. Rev. Neurosci. 2007;30:1-30.
[37] Gauthier JL, Field D, Sher A, Greschner M, Shlens J, Litke AM, Chichilnisky E. Receptive fields in primate retina are coordinated to
sample visual space more uniformly. PLoS biology. 2009;7(4):747.
[38] Gollisch T, Meister M. Eye smarter than scientists believed: neural computations in circuits of the retina. Neuron. 2010;65(2):150-164.
[39] Kuffler SW. Neurons in the retina: organization, inhibition and excitation problems. Cold Spring Harbor Symposia on Quantitative
Biology. Cold Spring Harbor Laboratory Press. 1952;17:281-292.
[40] Hubel DH, Wiesel TN. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of
physiology. 1962;160(1):106-154.
[41] Daugman JG. Two-dimensional spectral analysis of cortical receptive field profiles. Vision research. 1980;20(10):847-856.
[42] Rosenfeld A, Thurston M. Edge and curve detection for visual scene analysis. Computers, IEEE Transactions on. 1971;100(5):562-569.
[43] Marr D. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. 1982.
[44] Burt PJ, Adelson EH. The Laplacian pyramid as a compact image code. Communications, IEEE Transactions on. 1983;31(4):532-540.
[45] Mallat S. Wavelets for a vision. Proceedings of the IEEE. 1996;84(4):604-614.
[46] Lowe DG. Object recognition from local scale-invariant features. Computer vision. The proceedings of the seventh IEEE international
conference on. 1999;2:1150-1157. IEEE.
[47] Taubman D, Marcellin M. JPEG2000 image compression fundamentals, standards and practice: image compression fundamentals,
standards and practice (Vol. 642). Springer Science & Business Media. 2012.
[48] Jacques L, Duval L, Chaux C, Peyré G. A panorama on multiscale geometric representations, intertwining spatial, directional and
frequency selectivity. Signal Processing. 2011; 91(12):2699-2730.
[49] Lindeberg T. Generalized Gaussian scale-space axiomatics comprising linear scale-space, affine scale-space and spatio-temporal scale-space. Journal of Mathematical Imaging and Vision. 2011;40(1):36-81.
[50] Von der Malsburg C. Self-organization of orientation sensitive cells in the striate cortex. Kybernetik. 1973;14(2):85-100.
[51] Powers DMW. Lateral Interaction Behaviour Derived from Neural Packing Considerations. School of Electrical Engineering and
Computer Science. University of New South Wales. 1983.
[52] Smith SW. Digital signal processing: a practical guide for engineers and scientists, Newnes. 2003.
[53] Ghosh K, Sarkar S, Bhaumik K. Understanding image structure from a new multi-scale representation of higher order derivative
filters. Image and Vision Computing. 2007;25(8):1228-1238.
[54] Romeny BMH. Front-end vision and multi-scale image analysis: multi-scale computer vision theory and applications, written in
Mathematica. Springer Science & Business Media. 2003.
[55] Lindeberg T, Florack L. Foveal scale-space and the linear increase of receptive field size as a function of eccentricity. 1994.
[56] Young RA. The Gaussian derivative model for spatial vision: I. Retinal mechanisms. Spatial vision. 1987;2(4): 273-293.
[57] Lourens T. Modeling retinal high and low contrast sensitivity filters. From Natural to Artificial Neural Computation. Springer.
1995;61-68.
[58] Shapley R, Perry VH. Cat and monkey retinal ganglion cells and their visual functional roles. Trends in Neurosciences. 1986;9:229-235.
[59] Barlow HB, Hill RM. Evidence for a physiological explanation of the waterfall phenomenon and figural after-effects. 1963;1345-1347.
[60] Weng S, Sun W, He S. Identification of ON–OFF direction‐selective ganglion cells in the mouse retina. The Journal of physiology.
2005;562(3):915-923.
[61] Cavanaugh JR, Bair W, Movshon JA. Nature and interaction of signals from the receptive field center and surround in macaque V1
neurons. Journal of neurophysiology. 2002;88(5):2530-2546.
[62] Carandini M. Receptive fields and suppressive fields in the early visual system. The cognitive neurosciences. 2004;3:313-326.
[63] Wei H, Zuo Q, Lang B. Multi-scale image analysis based on non-classical receptive field mechanism. Neural Information Processing.
Springer Berlin Heidelberg. 2011;601-610.
[64] Craft E, Schütze H, Niebur E, Von Der Heydt R. A neural model of figure–ground organization. Journal of neurophysiology.
2007;97(6):4310-4326.
[65] Kitaoka A. Trampoline pattern. http://www.ritsumei.ac.jp/~akitaoka/motion-e.html. 2000.
[66] Powers, DMW. Jetwhite color map. Mathworks – https://au.mathworks.com/matlabcentral/fileexchange/48419-jetwhite-colours/content/jetwhite.m. 2016.
[67] Illingworth J, Kittler J. A survey of the Hough transform. Computer vision, graphics, and image processing. 1988;44(1):87-116.
[68] Blakeslee B, McCourt ME. What visual illusions tell us about underlying neural mechanisms and observer strategies for tackling the
inverse problem of achromatic perception. Frontiers in human neuroscience. 2015;9.
[69] Foley JM, McCourt ME. Visual grating induction. JOSA A. 1985;2(7):1220-1230.
[70] Lulich DP, Stevens KA. Differential contributions of circular and elongated spatial filters to the Café Wall illusion. Biological
cybernetics. 1989;61(6):427-435.
[71] Nematzadeh N, Powers DM. The Café Wall Illusion: Local and Global Perception from multiple scale to multiscale. Journal of
Applied Computational Intelligence and Soft Computing: Special issue of Imaging, Vision, and Pattern Recognition (in press). 2017.
[72] Nematzadeh N. A Neurophysiological Model for Geometric Visual Illusions. PhD Thesis. Flinders University (in preparation).
[73] Lombardo M, Serrao S, Ducoli P, Lombardo G. Eccentricity dependent changes of density, spacing and packing arrangement of
parafoveal cones. Ophthalmic and Physiological Optics, 2013;33(4), 516-526.
[74] Lombardo M, Lombardo G, Lomoriello DS, Ducoli P, Stirpe M, Serrao S. Interocular symmetry of parafoveal photoreceptor cone
density distribution. Retina, 2013;33(8), 1640-1649.
[75] Dabir S, Mangalesh S, Kumar KA, Kummelil MK, Roy AS, Shetty R. Variations in the cone packing density with eccentricity in
emmetropes. Eye, 2014;28(12), 1488.
[76] Healey CG, Sawant AP. On the limits of resolution and visual angle in visualization. ACM Transactions on Applied Perception (TAP),
2012;9(4), 20.
[77] Legge GE, Rubin GS, Luebker A. Psychophysics of reading—V. The role of contrast in normal vision. Vision research, 1987:27(7),
1165-1177.
[78] Legge GE, Parish DH, Luebker A, Wurm LH. Psychophysics of reading. XI. Comparing color contrast and luminance contrast. JOSA
A, 1990;7(10), 2002-2010.
Fig 1. Left: Spiral Café Wall illusion [16], Center: Café Wall pattern, Right: Zollner illusion [17].
Fig 2. DoG edge map of a crop section of Trampoline pattern [REF] as a sample of tile illusion with the size of
84×84px. The scales of DoG filters are σc= 0.5, 1.0, 1.5, 2.0, 2.5 for detecting the important information in
the pattern. Other parameters of the model are s=2 and h=8 (Surround and Window ratios respectively; Reproduced by permission from [72]).
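For readers who want to reproduce the filtering stage described in these captions, the following Python sketch computes a Difference-of-Gaussians (DoG) response at one scale. It assumes the usual reading of the parameters, namely that the surround Gaussian has scale s·σc and that the filter window spans roughly h·σc pixels; the function name and the truncation handling are ours, not the authors' code.

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edge_map(image, sigma_c, s=2.0, h=8.0):
    # Center-minus-surround response: center sigma = sigma_c,
    # surround sigma = s * sigma_c (Surround ratio s, Window ratio h).
    img = image.astype(float)
    # truncate is given in multiples of each sigma; h/2 (and h/(2s) for the
    # surround) keeps the overall window at about h * sigma_c pixels.
    center = gaussian_filter(img, sigma_c, truncate=h / 2.0)
    surround = gaussian_filter(img, s * sigma_c, truncate=h / (2.0 * s))
    return center - surround

# e.g. edge maps at the scales quoted above for the Trampoline crop:
# maps = [dog_edge_map(pattern, sc) for sc in (0.5, 1.0, 1.5, 2.0, 2.5)]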
Fig 3. Flowchart of the model and analytical tilt processing (Reproduced by permission from [72]).
Fig 4. Patterns investigated – All the variations are Café Walls of 3×8 tiles with 200×200px tiles. Top: Mortar-Luminance (ML) variations from Black (ML=0) to White (ML=1) mortars, and three shades of Grey in
between the Black and White (ML=0.25, 0.5, and 0.75) patterns. Middle: Mortar thickness (Width–MW)
variations, from no mortar lines (MW=0) and MW= 4, 8, 16, 32, 64px patterns (*: The original Café Wall
with Black and White tiles, the mortar of intermediate luminance between the luminance of the tiles, and the
mortar width=8px in our samples). Bottom: Other variations investigated from Grey-Tiles with three luminance levels for mortar lines, below, between, or above the luminance of tiles, then phase of tile displacement of shifts of 1/3 and 1/5 of a tile between consecutive rows of the Café Wall, and finally mirrored image
inducing opposite direction of tilt as well as Hollow Square pattern (Reproduced by permission from [72]).
Fig 5. DoG edge maps at multiple scales (σc=4, 8, 12, 16, 20, 24, 28) for five Mortar-Luminance variations, from Black (Left – ML 0.0) to White (Right - ML 1.0)
mortars, in which the edge maps are overlayed by the detected houghlines displayed in green (Blue lines indicate the longest lines detected) – Hough parameters
are kept the same in all experiments for detecting near Horizontal, Vertical and Diagonal tilted lines in the edge maps. NumPeaks=1000, FillGap=40px, and
MinLength=450px for all DoG scales. The last row shows each variation of the pattern investigated (Reproduced by permission from [72]).
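The houghline settings quoted throughout these captions (NumPeaks, FillGap, MinLength) are MATLAB Hough-transform parameters. As an approximate open-source analogue only, the sketch below binarizes a DoG edge map and extracts tilted segments with OpenCV's probabilistic Hough transform; minLineLength and maxLineGap loosely mirror MinLength and FillGap, there is no direct NumPeaks equivalent, and the threshold value is an arbitrary placeholder.

import cv2
import numpy as np

def hough_segments(dog_map, min_length=450, fill_gap=40):
    # Binarize the DoG response into an ON/OFF edge map, then detect segments.
    binary = (dog_map > 0).astype(np.uint8) * 255
    lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=min_length, maxLineGap=fill_gap)
    if lines is None:
        return []
    # Return (angle relative to horizontal in degrees, endpoints) per segment.
    return [(np.degrees(np.arctan2(y2 - y1, x2 - x1)), (x1, y1, x2, y2))
            for x1, y1, x2, y2 in lines[:, 0]]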
Fig 6. Mean tilts and the standard error of detected tilt angles of houghlines for the five variations of Mortar-Luminance, from Black (ML 0.0) and White (ML 1.0) mortars on Top, to Grey mortar lines (ML=0.25, 0.5, 0.75) at the Bottom, for DoG edge maps at seven scales (σc=4, 8, 12, 16, 20, 24, 28) – Hough parameters are kept the same in all experiments for detecting near horizontal (H), vertical (V) and diagonal (D1, D2) tilted lines in the edge maps. NumPeaks=1000, FillGap=40px, and MinLength=450px for all DoG scales of the edge maps. NaN: not a number means no lines detected (Reproduced by permission from [72]).
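The mean-tilt tables reported in this and the later figures can be approximated from the detected segments as follows; the ±10° tolerance for assigning a segment to the horizontal, vertical or diagonal groups is our guess, not a value stated in the captions.

import numpy as np

def tilt_stats(angles_deg, tol=10.0):
    # Deviation of each detected segment from H (0°), D1 (45°), V (90°), D2 (135°);
    # report mean tilt and standard error per group, NaN when no lines fall in a group.
    refs = {"H": 0.0, "D1": 45.0, "V": 90.0, "D2": 135.0}
    a = np.mod(np.asarray(angles_deg, dtype=float), 180.0)
    stats = {}
    for name, ref in refs.items():
        dev = (a - ref + 90.0) % 180.0 - 90.0      # signed deviation from ref
        sel = dev[np.abs(dev) <= tol]
        if sel.size == 0:
            stats[name] = float("nan")
        else:
            se = sel.std(ddof=1) / np.sqrt(sel.size) if sel.size > 1 else 0.0
            stats[name] = (sel.mean(), se)
    return stats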
Fig 7. DoG edge maps at multiple scales (σc=4, 8, 12, 16, 20, 24, 28) for the thick mortar variations displayed in jetwhite colormap. The original patterns are Café Walls of 3×8 tiles with 200×200px tiles and
mortar size variations of MW=16, 32, and 64px. The other parameters of the model are s=2 and h=8 for the
Surround and Window ratios respectively. The last row shows each variation of the pattern investigated
(Reproduced by permission from [72]).
Fig 8. DoG edge maps at seven scales (σc=4, 8, 12, 16, 20, 24, 28) for three variations of Mortar-Width, from
no mortar lines (MW 0px) to 8px mortar, in which the edge maps are overlayed by the detected houghlines
displayed in green (Blue lines indicate the longest lines detected) – Hough parameters are kept the same in
all experiments for detecting near horizontal, vertical and diagonal tilted lines in the edge maps. NumPeaks=1000, FillGap=40px, and MinLength=450px for all DoG scales. In all experiments the parameters of
the DoG model are kept constant as s=2 (Surround ratio), h=8 (Window ratio). The last row shows each
variation of the pattern investigated (Reproduced by permission from [72]).
Fig 9. DoG edge maps at seven scales (σc=4, 8, 12, 16, 20, 24, 28) for three different variations of Mortar-Width, from mortar size of 16px to 64px, in which the edge maps are overlayed by the detected houghlines
displayed in green (Blue lines indicate the longest lines detected) – Hough parameters are kept the same in
all experiments for detecting near horizontal, vertical and diagonal tilted lines in the edge maps. NumPeaks=1000, FillGap=40px, and MinLength=450px for all DoG scales (σc=4, 8, 12, 16, 20, 24, 28). In all experiments the parameters of the DoG model are kept constant as s=2 (Surround ratio), h=8 (Window ratio). The last row shows each variation of the pattern investigated (Reproduced by permission from [72]).
Fig 10. Mean tilts and the standard errors of detected tilt angles of houghlines for six variations of the Mortar-Width, from no mortar (MW 0px) to 64px mortar and for the multiple scale (σc) edge maps – Hough parameters are kept the same in all experiments for detecting near horizontal, vertical and diagonal tilted lines in the edge maps (H, V, D1, D2). NumPeaks=1000, FillGap=40px, and MinLength=450px for all DoG scales. NaN means no detected lines (Reproduced by permission from [72]).
Fig 11. Binary DoG edge maps at multiple scales for three thick-mortar variations with mortar lines of
width 16px (MW 16px) to 64px for the Café Walls of 3×8 tiles with 200×200px tiles. The edge maps
include 8 scales of σc=8, 12, 16, 20, 24, 28, 32, 48 presented in the figure. The other parameters of the
DoG model are s=2 (Surround ratio) and h=8 (Window ratio). The last row shows each variation of the
pattern investigated (Reproduced by permission from [72]).
Fig 12. DoG edge maps at seven scales (σc =4, 8, 12, 16, 20, 24, 28) for three Grey-Tiles variations (Black-Left, Mid-Grey-Center and White-Right mortar lines for tiles with relative luminance of 0.25-Dark Grey, and
0.75-Light Grey), in which the edge maps are overlayed by the detected houghlines displayed in green (Blue
lines indicate the longest lines detected) – Hough parameters are kept the same in all experiments for detecting near horizontal, vertical and diagonal tilted lines in the edge maps. NumPeaks=1000, Threshold=3,
FillGap=40, and MinLength=450 for all DoG scales. Other parameters of the DoG model are s=2 (Surround
ratio), and h=8 (Window ratio). The last row shows each variation of the pattern investigated (Reproduced
by permission from [72]).
Fig 13. DoG edge maps at seven scales (σc =4, 8, 12, 16, 20, 24, 28) for three patterns of the original Café
Wall-Left column, and two variations of phase of tiles displacement with the shifts of 1/3 and 1/5 of a tile in
the Center and Right columns, in which the edge maps are overlayed by the detected houghlines displayed
in green (Blue lines indicate the longest lines detected) – Hough parameters are kept the same in all experiments for detecting near horizontal, vertical and diagonal tilted lines in the edge maps. NumPeaks=1000,
Threshold=3, FillGap=40, and MinLength=450 for all DoG scales. Other parameters of the DoG model are
s=2 (Surround ratio) and h=8 (Window ratio). The last row shows each variation of the pattern investigated
(Reproduced by permission from [72]).
Fig 14. DoG edge maps at seven scales (σc=4, 8, 12, 16, 20, 24, 28) for three patterns of the original Café
Wall pattern-Left column, mirrored image (Direction Change) in the Center, and Hollow Square in the Right column, in which the edge maps are overlayed by the detected houghlines displayed in green (Blue lines
indicate the longest lines detected) – Hough parameters are kept the same in all experiments for detecting
near horizontal, vertical and diagonal tilted lines. NumPeaks=1000, Threshold=3, FillGap=40, and
MinLength=450 for all DoG scales. Other parameters of the DoG model are s=2 (Surround ratio), and h=8
(Window ratio). The last row shows each variation of the pattern investigated (Reproduced by permission
from [72]).
Fig 15. Top: Mean tilts and the standard errors of detected tilt angles of houghlines for three variations of Grey-Tiles (with relative mortar luminance of Black-Left,
Mid-Grey-Center and White-Right mortar lines for tiles with luminance of 0.25 for Dark Grey, and 0.75 for Light Grey), Bottom: Mean tilts and the standard errors
of detected tilt range for three patterns of the original Café Wall pattern on Left, and two variations of phase of tiles displacement with the shifts of 1/3 tile in the
Center, and 1/5 tile in the Right. The calculations are done for the edge maps at multiple
scales (σc=4, 8, 12, 16, 20, 24, 28) – Hough parameters are kept the same
in all experiments for detecting near horizontal, vertical and diagonal tilted lines (H, V, D1, D2) in the edge maps. NumPeaks=1000, Threshold=3, FillGap=40,
and MinLength=450 for all DoG scales. NaN means no detected lines (Reproduced by permission from [72]).
Fig 16. Left: Mean tilts and the standard errors of detected tilt angles of houghlines for the mirrored image
(Direction Change), and Right for the Hollow Square pattern. The calculations are done for the edge maps at
multiple scales (σc=4, 8, 12, 16, 20, 24, 28) – Hough parameters are kept the same in all experiments for
detecting near horizontal, vertical and diagonal tilted lines in the edge maps. NumPeaks=1000, Threshold=3,
FillGap=40, and MinLength=450 for all DoG scales. NaN means no detected lines (Reproduced by permission from [72]).
Fig 1. Left: five copies of Café Wall of 3×8 tiles with the intermediate mortar luminance between the
luminance of the black and white tiles. The resolutions of MATLAB generated pattern is the tile sizes of
200×200px and mortar size of 8px (ML=0.5; Fig 4-top section with *). Right: Convergent/divergent line
segments in the range of ±2.5° to ±4.5° with 0.5° difference in between. In the middle of the figure the
tilted lines indicate the detected range of tilt angles by the model for the stimulus at scale 4 (σc =4;
3.5°±0.5) with magnitudes of 3°, 3.5° and 4° around the horizontal (mean tilt table for ML=0.5 in Fig. 6;
Reproduced by permission from [72]).
Fig 2. Left: Café Wall of 11×11 tiles (investigated in [71]) with the intermediate mortar luminance between the luminance of the black and white tiles. The resolutions of MATLAB generated pattern is the
tile sizes of 200×200px and mortar size of 8px (the same as the stimulus in Fig 17). Right: Convergent/divergent line segments at ±4° showing the tilt predicted by the model for the stimulus at scale 4 (σc
=4; 4.05°±0.09) around the horizontal (mean tilt tables in [71]:Fig. 14).
Fig 3. The effect of the visual angle on the perceived tilt in Café Wall illusion (Café Wall of 11×11 tiles;
the resolutions of MATLAB generated pattern is the tile sizes of 200×200px and mortar size of 8px the
same as the stimulus in Fig. 18). The size of the stimulus is adjusted in these images in a way that the
image in the Center has a size reduction of 25% from the image in the Left, and the image in the Right has
a size reduction of 50% from the image in the Left.
Fig 4. The Café Wall of 11×11 tiles (the resolutions of MATLAB generated pattern is the tile sizes of
200×200px and mortar size of 8px the same as the stimulus in Figs 18, 19). The image on the Left has a
25% size reduction from the image in the Right. The tilts predicted by the model for the stimulus is reported to be 4.05°±0.09 at scale 4 (σc =4; 4.05°±0.09) around the horizontal ([71]:Fig. 14). Left: The Convergent/divergent line segments are shown at ±5° that is one degree more than the tilt predicted by the
model for the stimulus. Right: The Convergent/divergent line segments are shown at ±4° indicating the
tilt predicted by the model for the stimulus.
| 1 |
arXiv:1401.4857v1 [cs.NE] 20 Jan 2014
A Genetic Algorithm to Optimize a Tweet for
Retweetability
Ronald Hochreiter
Christoph Waldhauser
May 2013
Abstract
Twitter is a popular microblogging platform. When users send out
messages, other users have the ability to forward these messages to their
own subgraph. Most research focuses on increasing retweetability from
a node’s perspective. Here, we center on improving message style to increase the chance of a message being forwarded. To this end, we simulate
an artificial Twitter-like network with nodes deciding deterministically on
retweeting a message or not. A genetic algorithm is used to optimize
message composition, so that the reach of a message is increased. When
analyzing the algorithm’s runtime behavior across a set of different node
types, we find that the algorithm consistently succeeds in significantly
improving the retweetability of a message.
Keywords: Twitter, social network, message style, genetic algorithm, deterministic optimization.
1 Introduction
Twitter is a popular microblogging platform that has frequently been at the focal point of research. Of special interest has been the complex network structure
that characterizes Twitter networks and the specifics that govern the propagation of information within Twitter networks. But how can Twitter users style
their messages, so that they reach furthest?
In this paper we aim at making use of that research by building a simulation
framework to enable researchers to investigate more closely the propagation of
information on Twitter. The simulation framework is being put to the test by
tasking a genetic algorithm with composing a tweet that reaches furthest in
different metrics. In this, we differ from the seminal contribution of [6] by optimizing message contents instead of optimizing the target audience. The latter approach is only of limited use in the online scenario, as Twitter authors cannot influence
who will follow them.
This paper is structured as follows. First relevant research regarding Twitter’s networking structure and information diffusion is being reviewed. We then
introduce the simulation framework and describe the algorithm that was used to
obtain optimal tweets. Finally we present the results and offer some conclusions.
2 Message diffusion in Twitter networks
When communicating, an actor seeks to get her message across [10]. A central
aspect of this process is to ensure that a message is not only received by the
original audience, but also that this audience spreads that message further on
their own accounts [8]. This process has been researched rather thoroughly
from very different aspects: medical epidemiology [4] and system dynamics [3]
to name but a few approaches fielded to tackle this complex problem. While
findings and insights differ, a common denominator is that message recipients
will resend a message if it passes a recipient’s filter, i.e. is to her liking [9].
These filters are domain specific but the common principle of message diffusion
remains true for very diverse domains.
The advent of micro-blogging has greatly simplified access to message diffusion data. By looking at e.g. Twitter data, connection structure as well as
message contents and meta data are readily available in a machine readable
format. This has produced a wealth of studies relating to message diffusion on
Twitter. In the following, we will survey recent contributions to the field to
distill key factors that influence message diffusion on Twitter.
In Twitter, users post short messages that are publicly viewable online and
that get pushed to other users following the original author. It is common practice to cite (retweet) messages of other users and thus spread them within one's
own part of the network. Messages can contain up to 140 characters including
free text, URL hyperlinks and marked identifiers (hashtags) that show that a
tweet relates to a certain topic. Metadata associated with each tweet is the
point of origin, i.e. the user that posted the tweet, the time it was posted and
the user agent or interface used to post it. On top of that, the tweet's relation
to other tweets is available. For each user, additional meta data is available like
the age of the account, the number of followers, a description and the location.
Twitter networks are typical for the networks of human communication.
They are more complex (i.e. structured and scale-free) than randomly linked
networks with certain users functioning as hubs with many more connections
than would be expected under uniform or normal distributions. It is useful to
think of Twitter networks as directed graphs with nodes being Twitter users
and the following of a user being mapped to the edges [7]. A tweet then travels
from the original author to all directly connected nodes. If one of the nodes
chooses to retweet the message, it is propagated further down the network.
For average users, Twitter networks’ degree distribution follows a power law
and [7, 5] report the distribution’s exponent to be 2.3 and 2.4 respectively, therefore well within the range of typical human communication networks. However,
there are extremely popular Twitter authors (celebrities, mass media sites) that
have many more followers than would be expected even under a power-law distribution.
A distinguishing feature of Twitter is its small-world property. Most users
are connected to any other user using only a small number of nodes in between.
See [7] for an overview of Twitter’s small world properties. Despite their findings
that for popular users, the power-law distribution is being violated and average
path lengths between users being shorter than expected, they underscore that
homophily (i.e. similar users are more likely to be in contact) can be indeed
observed and that geographically close users are more likely to be connected.
Following the notion of message filtering introduced above, it is clear that
Twitter users are selecting messages for propagating them further according to
specific preferences. Applying these preferences for filtering purposes, they can
make use of the message contents available as listed above. Besides the number
of URLs [13, 12] and hashtags [12] contained, also the free text contents are
of importance. According to [11, 2], a key aspect in filtering free text is the
polarity and the emotionality of the message. [11] also point to the length of
a tweet being an important predictor for its retweetability.
Beside message specific filtering criteria, also author specific filtering can
occur. For instance, a Twitter user that has a past record of being retweeted
often, will be more likely to be retweeted in the future [13, 14]. However, when
styling a single tweet for maximum retweetability, factors like past popularity
or even number of followers [12] cannot be influenced and are therefore not
represented in the model used.
Shifting the focus from the message recipient to the message sender, spreading a message as far as possible is a key goal. The success of a message can
be measured using different metrics. In their seminal work, [13] list three possibilities: One is the (average) speed a tweet achieves in traversing a network.
Another popularity metric is the scale, that is, the total number of retweets achieved.
Finally, range can be considered a popularity metric as well. Here range is the
number of edges it takes to travel from the original author to the furthest
retweeter.
In this section we reviewed the latest research related to message diffusion
on Twitter. Key factors influencing the probability of a tweet being retweeted
are the polarity and emotionality of a tweet, its number of included hyperlinks
and hashtags as well as the time of day it was originally posted. There are other
factors influencing retweet probability, however they are beyond the control of a
message sender and therefore do not apply to the problem at hand. In the next
section we will introduce a simulation framework that can be used to establish
a Twitter-like network to analyze the diffusion principles of messages governing
them.
3 Simulation framework
This paper uses the concept of message filtering to simulate the diffusion of
messages in networks and Twitter serves as an example for this. As detailed
above, Twitter users are considered nodes, their following relationships edges
in the network graph. Messages they send travel from node to node along the
graph’s edges. The topographical features of this network, i.e. the distribution
of edges, follow the specifics of scale-free, small-world networks as described
above. The nodes have individual preferences that govern if a message is being
passed on or ignored. In the following we will describe the simulator used to
simulate this kind of network.
Twitter networks exhibit a number of characteristics that we discussed above.
The simulator uses these properties to generate an artificial network that is very
similar to Twitter networks. To this end, the number of connections a node
has is drawn from a power-law distribution. In accordance with the findings
reported above, the distribution's exponent is fixed at 2.4. From these figures,
an adjacency matrix is constructed. As Twitter’s following relations are not
required to be reciprocal, the resulting graph is directed. As Twitter contains
many isolated nodes, the resulting graph based on a Twitter-like power-law distribution also contains a number of isolated nodes. However, these nodes are
irrelevant for the problem at hand, and are thus removed.
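The sketch below is a simplified Python stand-in for this generator: per-node follower counts are drawn from a discrete power law with exponent 2.4, each node is wired to that many uniformly chosen targets, and isolated nodes are pruned as in the text. The helper name and the exact sampling details are our assumptions rather than the authors' implementation.

import numpy as np

def make_network(n=250, exponent=2.4, seed=0):
    # Directed follower graph: adj[i, j] = 1 means j follows i,
    # so a tweet by i reaches j.
    rng = np.random.default_rng(seed)
    ks = np.arange(1, n)
    probs = ks.astype(float) ** -exponent
    probs /= probs.sum()
    followers = rng.choice(ks, size=n, p=probs)   # power-law follower counts
    adj = np.zeros((n, n), dtype=int)
    for i, k in enumerate(followers):
        targets = rng.choice([j for j in range(n) if j != i], size=k, replace=False)
        adj[i, targets] = 1
    # Prune isolated nodes (kept for parity with the procedure in the text;
    # with this particular sampling few or none arise).
    keep = (adj.sum(axis=0) + adj.sum(axis=1)) > 0
    return adj[np.ix_(keep, keep)]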
Every node is then initialized with a set of uniform random message passing
preferences. The dimensions and their domains are given in Table 1.
Table 1: Message and node preferences.
Parameter      Domain
Polarity       −1; 1
Emotionality   −1; 1
Length         1; 140
Time           morning, afternoon, night
# URLs         0; 10
# Hashtags     0; 10
When a message is sent out from the original authoring node, it is passed on
initially to all first-degree followers of that node. Each follower is then evaluated as to whether she will pass on the message or not. This process is repeated until all nodes
that have received the message have been evaluated.
A node’s decision on passing the message or not is based on the preferences of
that node. In this model, this decision is purely deterministic. A node computes
the differences between its own preferences and the properties of the message in
all six dimensions. If the mean of these differences is lower than some threshold value, the message is forwarded. Otherwise, it is discarded.
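A minimal sketch of this filter and of the resulting cascade is given below. The per-dimension spans used to normalize the six differences, the threshold default, and the breadth-first bookkeeping are our simplifications; the paper itself only specifies that the mean difference is compared against a threshold.

import numpy as np
from collections import deque

# Spans used to bring the six dimensions onto a comparable scale (our choice):
# polarity, emotionality, length, time, #URLs, #hashtags.
SPANS = np.array([2.0, 2.0, 139.0, 2.0, 10.0, 10.0])

def retweets(prefs, message, epsilon=0.3):
    # Forward iff the mean normalized distance between the node's preferences
    # and the message properties is below the threshold epsilon.
    diff = np.abs(np.asarray(prefs, float) - np.asarray(message, float)) / SPANS
    return diff.mean() < epsilon

def propagate(adj, node_prefs, message, author):
    # Every follower of the author, or of any retweeting node, receives the
    # message; a receiver passes it on iff the filter accepts it.
    received = {author: 0}            # node -> hops from the author
    frontier = deque([author])        # nodes that have (re)tweeted the message
    while frontier:
        u = frontier.popleft()
        for v in np.flatnonzero(adj[u]):
            v = int(v)
            if v not in received:
                received[v] = received[u] + 1
                if retweets(node_prefs[v], message):
                    frontier.append(v)
    return received  # len(received) - 1 ~ scale, max(received.values()) ~ range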
The simulation framework described above was used to generate an artificial
Twitter-like network for use in this simulation study. To focus on the principles of message propagation, only a small network with initially 250 nodes was
generated. After removing isolated nodes, 245 nodes with at least 1 connection
remained. The average path length of that network was 4.3. The maximum number of first-degree connections was observed to be 170. This is much larger than the median and mean, observed to be 2 and 3.5, respectively.
In this section we described how an artificial Twitter-like network was built
using a power-law distribution. This network was paired with node preferences
with respect to the passing on of messages. Using a deterministic function, each
node uses its own preferences and a message’s properties to decide on whether
to pass it on or not. In the following we will describe a genetic algorithm that
was used to craft a message that will reach a maximum number of nodes within
that network.
4 Algorithm
In the simulated network, nodes pass on any message they encounter according
to the message properties and their own preferences regarding these properties.
If a sender now wants to maximize the effect a message has, i.e. to maximize
the retweets a tweet will experience, she has to write a message that meets the
expectations of the right, i.e. highly connected nodes. While topical choices
are obviously important as well, the right choices regarding message style also
influence the probability of a message being retweeted. In this section we present
a genetic algorithm that styles messages so that a maximum number of nodes
retweet it.
The algorithm's chromosome comprises the message properties described in Table 1. An initial population of size 50 was initialized with random chromosomes.
Using the standard genetic operators of mutations and crossover, the algorithm
was tasked to maximize the number of nodes that received the message. In the
terms introduced above, this relates to the scale of a message spreading.
To ensure that successful solutions are carried over from one generation
to the next, the top 3 solutions were cloned into the next generation. This
approach of elitism was shown by [1] to positively impact a genetic algorithm’s
runtime behavior. Ten percent of every generation was reseeded with random
values to ensure enough fresh material in the gene pool. The remaining 85
percent of a generation was created by crossing over the chromosomes of two
solutions. To identify solutions eligible for reproduction, tournament selection
using a tournament size of 5 was implemented. Children’s genes were mutated
at random. The probability of a child being mutated was set to be at 0.05.
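A compact Python sketch of the generation loop with these proportions (3 elites, 10% random reseeds, the remainder produced by tournament-of-5 crossover with a 5% per-child mutation chance) follows. The chromosome encoding, including coding the time of day as 0–2, and the fitness callback are our own glue; fitness would typically be the reach returned by a propagation routine such as the one sketched in the previous section.

import random

# Gene domains following Table 1; time is coded 0..2 for morning/afternoon/night.
DOMAINS = {"polarity": (-1.0, 1.0), "emotionality": (-1.0, 1.0),
           "length": (1, 140), "time": (0, 2), "urls": (0, 10), "hashtags": (0, 10)}

def random_tweet():
    return {k: random.uniform(*v) if isinstance(v[0], float) else random.randint(*v)
            for k, v in DOMAINS.items()}

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in DOMAINS}

def mutate(t, p=0.05):
    if random.random() < p:                       # 5% chance a child is mutated
        gene = random.choice(list(DOMAINS))
        t[gene] = random_tweet()[gene]
    return t

def tournament(pop, fitness, k=5):
    return max(random.sample(pop, k), key=fitness)

def optimize(fitness, pop_size=50, generations=250, elites=3, reseed=0.10):
    pop = [random_tweet() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:elites]                                    # elitism
        nxt += [random_tweet() for _ in range(int(reseed * pop_size))]
        while len(nxt) < pop_size:                            # ~85% via crossover
            nxt.append(mutate(crossover(tournament(pop, fitness),
                                        tournament(pop, fitness))))
        pop = nxt
    return max(pop, key=fitness)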
In this section we described a genetic algorithm that can maximize the
retweetability of a tweet. Using state of the art genetic operators and selection mechanisms, a message is being styled so that it will reach a maximum
number of nodes. In the following we describe the success the algorithm had in
fulfilling its task using sender nodes with a high, medium and low number of
first-degree connections.
5 Results
The genetic algorithm as described above was used to find optimal message
composition with respect to retweetability for three different sender nodes. The
sender nodes differed in the number of first degree connections they had. The
genetic algorithm described above was allowed to search for an optimum for 250
generations. Each optimization run was replicated 50 times with random start
values. The reported results are averages and standard errors across those 50
replications.
To evaluate the algorithm’s performance, two factors are key: the number
of generations it takes to arrive at markedly more successful messages and the
stability of the discovered solutions. While the former is important to gauge
the algorithm’s runtime behavior and suitability for real-world deployment, the
latter can reveal insights on how easy findings can be generalized across different
networks. In the following, the results relating to these two factors across all three node types are described.
Irrespective of the number of first-degree connections a node has, optimization quickly succeeds in improving the initially random message styles. Figure
1 depicts the clearly visible trend.
Figure 1: Mean fitness improving over generations for four different kinds of
sender nodes. Shaded area is a 95% confidence interval derived from replicating
the optimization 50 times.
Turning the attention towards stability, the last generation’s best solution
should be similar across all 50 replications. Table 2 gives the means and their
standard errors for all three kinds of nodes.
We will discuss these results in the next section and offer some concluding
remarks.
Table 2: Solution stability. Mean and Standard Error (in parentheses).
Parameter      5 nodes         10 nodes        170 nodes
Polarity       -0.08 (0.14)    0.38 (0.17)     0.32 (0.15)
Emotionality   -0.01 (0.17)    -0.02 (0.22)    0.06 (0.23)
Length         82.26 (15.43)   104.28 (9.80)   72.18 (14.59)
Time           2.00 (0.00)     2.00 (0.00)     1.60 (0.49)
# URLs         5.60 (1.01)     5.76 (0.98)     4.00 (0.97)
# Hashtags     3.14 (1.05)     6.06 (0.98)     6.84 (0.71)
6 Discussion
The evaluation results provided in the previous section exhibit a number of
peculiarities. Perhaps most striking is that for high and medium numbers of first-degree
connections, the algorithm quickly improves the styling of a message so that
many more nodes will retweet it. While the highly connected sender node
has only little room to improve, a sender with a modest number of connections
benefits greatly from applying the algorithm. For both node types, the algorithm
almost reaches its optimum after 50 generations. For a node with very few first-degree connections, the optimization process takes much longer to complete.
However, in a subsequent simulation, the algorithm eventually reached into
similar heights, given enough generations. On average, with only five connected
nodes to start from, the algorithm required 720 generations to reach an optimum
beyond 200 nodes.
Another interesting observation is that the variance in the obtained solutions increases steadily as the number of first-degree connections decreases.
In many applications of genetic algorithms, the stability of identified optimal solutions across replications is a decisive factor. For the problem at hand,
stability is of lesser importance. When styling a message, apparently different
methods lead to nearly equal performance of the message.
7 Conclusion
In this paper we introduced a genetic algorithm to optimize the retweetability of
tweets. To do this, we simulated a Twitter-like network and associated each node
with a set of preferences regarding message retweeting behavior. The node’s
decision is purely deterministic, based on message properties coming close to a
node’s own preferences. The genetic algorithm succeeded in styling messages so
that they were retweeted more widely. Depending on the number of first-degree
connections of the sender node, the fitness of the algorithm’s terminal solution
varied.
This contribution is but a first step in an endeavor to understand the precise
mechanics of message propagation on Twitter. Previous work was focused on
sender node properties. By taking message properties into account when assessing
retweetability, we not only ventured into uncharted territory, we also discovered
new insights regarding the feasibility of message optimization.
Our model has a number of limitations that need to be resolved in future
research. Most prominently, this is the deterministic decision function. While
reasonable for a first model, it would be naive to assume that nodes’ retweet
behavior is purely mechanistic. Rather, it is plausible that the decision to
retweet is being influenced to no small part by chance. Therefore, a stochastic
decision function would be required. We are confident, however, that the genetic
algorithm presented can also optimize a stochastic problem.
Another extension is the calibration of the simulated Twitter network with real-life empirical data. This would make it possible to initialize the nodes not with uniform
random values, but rather with empirically observed ones.
References
[1] D. Bhandari, C. A. Murthy, and S. K. Pal. Genetic algorithm with elitist
model and its convergence. International Journal of Pattern Recognition
and Artificial Intelligence, 10(6):731–747, 1996.
[2] M. Cha, H. Haddadi, F. Benevenuto, and P. K. Gummadi. Measuring user
influence in Twitter: The million follower fallacy. In Proceedings of the
Fourth International Conference on Weblogs and Social Media, ICWSM
2010. The AAAI Press, 2010.
[3] J. Goldenberg, B. Libai, and E. Muller. Talk of the network: A complex systems look at the underlying process of word-of-mouth. Marketing
Letters, 12(3):211–223, 2001.
[4] D. Gruhl, R. Guha, D. Liben-Nowell, and A. Tomkins. Information diffusion through blogspace. In Proceedings of the 13th International Conference
on World Wide Web, WWW 2004, pages 491–501. ACM, 2004.
[5] A. Java, X. Song, T. Finin, and B. L. Tseng. Why we twitter: An analysis
of a microblogging community. In Advances in Web Mining and Web Usage
Analysis, 9th International Workshop on Knowledge Discovery on the Web,
WebKDD 2007, volume 5439 of Lecture Notes in Computer Science, pages
118–138. Springer, 2009.
[6] D. Kempe, J. M. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD
International Conference on Knowledge Discovery and Data Mining, KDD
2003, pages 137–146. ACM, 2003.
[7] H. Kwak, C. Lee, H. Park, and S. B. Moon. What is Twitter, a social network or a news media? In Proceedings of the 19th International Conference
on World Wide Web, WWW 2010, pages 591–600. ACM, 2010.
[8] B. McNair. An Introduction to Political Communication. Routledge, 2011.
[9] E. M. Rogers. Diffusion of Innovations. Free Press, 2010.
[10] C. Shannon and W. Weaver. The Mathematical Theory of Communication.
University of Illinois Press, 2002.
[11] S. Stieglitz and L. Dang-Xuan. Political communication and influence
through microblogging-an empirical analysis of sentiment in Twitter messages and retweet behavior. In 45th Hawaii International International
Conference on Systems Science (HICSS-45 2012), pages 3500–3509. IEEE
Computer Society, 2012.
[12] B. Suh, L. Hong, P. Pirolli, and E. H. Chi. Want to be retweeted? Large
scale analytics on factors impacting retweet in Twitter network. In Proceedings of the 2010 IEEE Second International Conference on Social Computing, SocialCom / IEEE International Conference on Privacy, Security,
Risk and Trust, PASSAT 2010, pages 177–184. IEEE Computer Society,
2010.
[13] J. Yang and S. Counts. Predicting the speed, scale, and range of information
diffusion in Twitter. In Proceedings of the Fourth International Conference
on Weblogs and Social Media, ICWSM 2010. The AAAI Press, 2010.
[14] T. R. Zaman, R. Herbrich, J. Van Gael, and D. Stern. Predicting information spreading in Twitter. In Workshop on Computational Social Science
and the Wisdom of Crowds, NIPS 2010, 2010.
| 9 |
arXiv:1609.02380v1 [math.GR] 8 Sep 2016
AUTOMORPHISMS OF K-GROUPS II
PAUL FLAVELL
Abstract. This work is a continuation of Automorphisms of K-groups I, P.
Flavell, preprint. The main object of study is a finite K-group G that admits
an elementary abelian group A acting coprimely. For certain group theoretic
properties P, we study the ACG (A)-invariant P-subgroups of G. A number
of results of McBride, Near solvable signalizer functors on finite groups, J.
Algebra 78(1) (1982) 181-214 and Nonsolvable signalizer functors on finite
groups, J. Algebra 78(1) (1982) 215-238 are extended.
One purpose of this work is to build a general theory of automorphisms,
one of whose applications will be a new proof of the Nonsolvable Signalizer
Functor Theorem. As an illustration, this work concludes with a new proof of
a special case of that theorem due to Gorenstein and Lyons.
1. Introduction
This work is a continuation of [1]. Namely we consider an elementary abelian
group A acting coprimely on the K-group G. The main focus is on how the ACG (A)-invariant subgroups interact with each other and influence the global structure of
G.
A new theme introduced is to consider a group theoretic property P for which
G possesses a unique maximal normal P-subgroup OP (G) and a unique normal
subgroup OP (G) that is minimal subject to the quotient being a P-group. This
leads to the notions of P-component and (A, P)-component which generalize the
notions of sol-component and (A, sol)-component introduced in [1]. In that paper,
we considered how the A-components of an ACG (A)-invariant subgroup H of G are
related to the (A, sol)-components of G. In §7 we shall develop a partial extension
of that theory to the (A, P)-components of H.
If in addition P is closed under extensions, it will be shown in §5 that G possesses a unique maximal ACG (A)-invariant P-subgroup. This generalizes a result of
McBride [4] who proved it in the case P = “is solvable”. McBride also introduced
the notion of a near A-solvable group. In §6 we shall extend that work, introducing
the notion of a near (A, P)-group.
The results of §5 and §6 have applications to the study of nonsolvable signalizer
functors. In §8 we shall present a result of McBride [5, Theorem 6.5]. We have
taken the liberty of naming this result the McBride Dichotomy since it establishes a
fundamental dichotomy in the proof of the Nonsolvable Signalizer Functor Theorem.
As a further application, this paper concludes with a new proof of a special case of
the Nonsolvable Signalizer Functor Theorem due to Gorenstein and Lyons [3].
2010 Mathematics Subject Classification. Primary 20D45 20D05 20E34 .
A considerable portion of this research was done whilst the author was in receipt of a Leverhulme Research Project grant and during visits to the Mathematisches Seminar, Christian-Albrechts-Universität, Kiel, Germany. The author expresses his thanks to the Leverhulme Trust
for their support and to the Mathematisches Seminar for its hospitality.
2. P-components
Throughout this section we assume the following.
Hypothesis 2.1. P is a group theoretic property that satisfies:
1. P is subgroup and quotient closed.
2. If G/M and G/N are P-groups then so is G/M ∩ N .
3. If M and N are normal P-subgroups of the group G then so is M N .
Some obvious examples being: P = “is soluble”; “is nilpotent”; “is trivial”; “is of
odd order”.
For any group G we define:
OP (G) = h N E G | N is a P-group i
and
OP (G) = ∩ { N E G | G/N is a P-group }.
Then OP (G) is the unique maximal normal P-subgroup of G and OP (G) is the
unique smallest normal subgroup whose quotient is a P-group.
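For a concrete illustration (an example of ours, not taken from the text): let P be the property of being a 2-group and G = S4 . The normal subgroups of S4 are 1, V4 , A4 and S4 , so the largest normal P-subgroup of S4 is the Klein four-group V4 , while the smallest normal subgroup with P-quotient is A4 , since S4 /A4 ≅ C2 is a 2-group whereas S4 /V4 ≅ S3 is not.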
Definition 2.2. A P-component of G is a subgroup K of G that satisfies
K E E G, K/OP (K) is quasisimple and K = OP (K).
The set of P-components of G is denoted by
compP (G)
and we define
EP (G) = h compP (G) i.
Lemma 2.3. Let G be a group.
(a) OP (G) contains every subnormal P-subgroup of G.
(b) If N E E G then OP (N ) = N ∩ OP (G).
(c) If H ≤ G then OP (G) ∩ H ≤ OP (H).
(d) If G = M N with M, N E G then OP (G) = OP (M )OP (N ).
Proof. (a). Let N be a subnormal P-subgroup of G and set H = h N G i. If H = G
then since N E E G we have N = G and the result is clear. Suppose H 6= G then
by induction N ≤ OP (H). Now OP (H) char H E G so OP (H) ≤ OP (G) and then
N ≤ OP (G).
(b). This follows from (a) and the fact that P is subgroup closed.
(c). Because P is subgroup closed.
(d). Note that OP (M ) char M E G. Set G = G/OP (M )OP (N ). Then G = M N .
Now M is a P-group since it is a quotient of the P-group M/OP (M ). Similarly so
is N . Now M , N E G whence G is a P-group and so OP (G) ≤ OP (M )OP (N ).
Set G∗ = G/OP (G). Then M ∗ is a P-group so OP (M ) ≤ OP (G). Similarly
P
O (N ) ≤ OP (G), completing the proof.
Lemma 2.4. Let G be a group and suppose K, L ∈ compP (G).
(a) If N E E G then compP (N ) ⊆ compP (G).
(b) If H ≤ G and K ≤ H then K ∈ compP (H).
(c) K is perfect and possesses a unique maximal normal subgroup, namely
Z(K mod OP (K)).
(d) (Wielandt) Suppose N E E G. Then either K ≤ N or [K, N ] ≤ OP (K).
(e) If K ≤ L then K = L.
(f) Either
K = L or [K, L] ≤ OP (K) ∩ OP (L).
In particular, K and L normalize each other.
(g) If G1 , . . . , Gn E E G and K ≤ h G1 , . . . , Gn i then K ≤ Gi for some i.
(h) If C, D ⊆ compP (G) satisfy h C i = h D i then C = D.
(i) [K, OP (G) sol(G)] ≤ OP (K). In particular OP (G) sol(G) normalizes every
P-component of G.
(j) Set G = G/OP (G). The map
compP (G) −→ comp(G) defined by K 7→ K
is an injection. If every P-group is solvable then it is a bijection.
Proof. (a),(b). These follow immediately from the definition of P-component.
(c). Suppose that N is a proper normal subgroup of K. Now K/OP (K) is
quasisimple so either N ≤ Z(K mod OP (K)) or N maps onto K/OP (K). Assume
the latter. Then K = N OP (K) so K/N ∼
= OP (K)/OP (K) ∩ N , hence K/N is a
P-group. Then K = OP (K) ≤ N and K = N . Thus Z(K mod OP (K)) is the
unique maximal normal subgroup of K. Also K/OP (K) is perfect so K ′ maps onto
K/OP (K) and hence K = K ′ .
(d). Set M = h N G i. If N = G then the conclusion is clear so assume N 6= G.
Now N E E G so M 6= G. If K ≤ M then the conclusion follows by induction, so
we may assume that K 6≤ M .
Suppose that K is not normal in KM . Now K E E KM so there exists g ∈ KM
such that K and K g normalize each other but K 6= K g . Set T = K g K. Now
KM = K g M so T = K g (T ∩ M ). Note that K g E T and T ∩ M E T . Since
K E T and K is perfect, we have K = [K, T ] = [K, K g ][K, T ∩ M ] and then
K = (K ∩ K g )(K ∩ T ∩ M ). But K ∩ K g and K ∩ T ∩ M are proper normal
subgroups of K, contrary to K having a unique maximal normal subgroup. We
deduce that K E KM , hence
[K, M ] ≤ K ∩ M ≤ Z(K mod OP (K))
and so [K, M, K] ≤ OP (K). Similarly [M, K, K] ≤ OP (K). Since OP (K) E KM
and K is perfect, the Three Subgroups Lemma implies that [K, M ] ≤ OP (K). Since
N ≤ M we have [K, N ] ≤ OP (K).
(e). We have K E E L so either K = L or K ≤ Z(L mod OP (L)). Assume the
latter. Now K is perfect, whence K ≤ OP (L) and K is a P-group. This is not
possible since K = OP (K). Hence K = L.
(f). Assume that K 6= L. Then (c) implies K 6≤ L and L 6≤ K. Two applications
of (d) imply [K, L] ≤ OP (K) ∩ OP (L).
(g). Suppose K 6≤ Gi for all i. Then (d) implies that h G1 , . . . , Gn i normalizes K
and centralizes K/OP (K). This is absurd since K ≤ h G1 , . . . , Gn i and K/OP (K)
is perfect.
(h). Let C ∈ C. By (g) there exists D ∈ D with C ≤ D. Then (e) forces C = D,
whence C ⊆ D. Similarly D ⊆ C.
(i). Since K is not a P-group and is perfect, we have K 6≤ OP (G) and K 6≤
sol(G). Apply (d).
(j). Since K E E G, (a) implies that OP (K) = OP (G) ∩ K, whence K ≅ K/OP (K) and so K is quasisimple. Thus K ∈ comp(G). Suppose that K = L.
Then K ≤ LOP (G). As K is not a P-group, (g) implies K ≤ L and then (e) forces
K = L. Hence the map is an injection.
Suppose that every P-group is solvable and that C ∈ comp(G). Choose D
minimal subject to D E E G and D = C. Suppose that OP (D) 6= D. Then
OP (D) ≤ Z(C) whence C/Z(C) is an image of D/OP (D), which is an image of
D/OP (D). Thus C/Z(C) is a P-group. This is a contradiction since every P-group
is solvable. Hence OP (D) = D. As D E E G we have OP (D) = D ∩ OP (G) so
D/OP (D) ∼
= C, which is quasisimple. Thus D ∈ compP (G).
We remark that in (j) the extra condition to ensure that the map is a bijection
is needed. For example, let P be the property defined by G is a P-group if and
only if G = sol(G)E(G) and every component of G is isomorphic to A5 . Now let
G = A5 wr A5 .
3. (A, P)-components
Throughout this section, assume the following.
Hypothesis 3.1.
• Hypothesis 2.1.
• A is a finite group.
Definition 3.2. Let G be a group on which A acts. An (A, P)-component of G is
an A-invariant subgroup K of G that satisfies
K E E G, K/OP (K) is A-quasisimple and K = OP (K).
The set of (A, P)-components of G is denoted by
compA,P (G).
Lemma 3.3. Let G be a group on which A acts. The (A, P)-components of G are
the subgroups generated by the orbits of A on compP (G). Distinct orbits generate
distinct (A, P)-components.
Proof. Suppose { K1 , . . . , Kn } is an orbit of A on compP (G) and define K =
h K1 , . . . , Kn i. Certainly K E E G. Lemma 2.4(f) implies Ki E K for each i, so
K = K1 · · · Kn . By Lemma 2.3(d), OP (K) = OP (K1 ) · · · OP (Kn ) = K1 · · · Kn =
K. Using Lemma 2.4(j), with K in the role of G, we see that K/OP (K) is the
central product of the quasisimple groups Ki OP (K)/OP (K) and that these are
permuted transitively by A. Thus K/OP (K) is A-quasisimple and hence K is an
(A, P)-component.
Conversely suppose that K ∈ compA,P (G). Set K = K/OP (K) so that K =
K1 ∗ · · · ∗ Kn with each Ki quasisimple and A acting transitively on { K1 , . . . , Kn }.
Let Li be the inverse image of Ki in K. Then Li E K and K = L1 · · · Ln . Set
Ki = OP (Li ) E E G. By Lemma 2.3(d) we have K = OP (K) = K1 · · · Kn . Then
Ki maps onto Ki so Li = OP (K)Ki . Again by Lemma 2.3(d), Ki = OP (Li ) =
OP (OP (K))OP (Ki ) = OP (Ki ). Thus Ki ∈ compP (G) and K is the subgroup
generated by an orbit of A on compP (G). Finally, Lemma 2.4(h) implies that
distinct orbits generate distinct (A, P)-components.
Lemma 3.4. Let G be a group on which A acts and suppose K, L ∈ compA,P (G).
(a) If N is an A-invariant subnormal subgroup of G then compA,P (N ) ⊆
compA,P (G).
(b) If H is an A-invariant subgroup of G and K ≤ H then K ∈ compA,P (H).
(c) K is perfect and possesses a unique maximal A-invariant subnormal subgroup, namely Z(K mod OP (K)).
(d) Suppose N is an A-invariant subnormal subgroup of G. Then either K ≤ N
or [K, N ] ≤ OP (K).
(e) If K ≤ L then K = L.
(f) Either
K = L or [K, L] ≤ OP (K) ∩ OP (L).
In particular, K and L normalize each other.
(g) Suppose G1 , . . . , Gn are A-invariant subnormal subgroups of G and K ≤ h G1 , . . . , Gn i. Then K ≤ Gi for some i.
(h) Suppose C, D ⊆ compA,P (G) satisfy h C i = h D i. Then C = D.
(i) [K, OP (G) sol(G)] ≤ OP (K). In particular, OP (G) sol(G) normalizes every
(A, P)-component of G.
(j) Set G = G/OP (G). The map
compA,P (G) −→ compA (G) defined by K 7→ K
is an injection. If every P-group is solvable then it is a bijection.
(k) K E h K G i.
Proof. (a) and (b) are immediate from the definitions.
(c),(e),(f),(g),(h),(i),(j) follow with the same argument as used in the proof of
Lemma 2.4.
(d) follows from Lemmas 3.3 and 2.4(d).
(k). By Lemma 3.3 we have K = h K1 , . . . , Kn i where { K1 , . . . , Kn } ⊆
compP (G) so Lemma 2.4(f) implies Ki E h K G i. Then K E h K G i.
4. Preliminaries
Lemma 4.1. Let r be a prime and A an elementary abelian r-group that acts coprimely on the K-group G. Suppose K ∈ compA (G) and that H is an ACK (A)-invariant subgroup of G with H ∩ K ≤ Z(K). Then [H, K] = 1.
Proof. Set G = G/Z(E(G)). We have [H, CK (A)] ≤ H ∩ E(G) so as K E E(G)
it follows that [H, CK (A), CK (A)] ≤ H ∩ K ≤ Z(K). Note that CK (A) = CK (A)
by Coprime Action. Then [H, CK (A), CK (A)] = 1. The Three subgroups Lemma
implies that [H, CK (A)′ ] = 1. Then H permutes the components of G onto which
CK (A)′ projects nontrivially. By [1, Theorem 4.4(a)], CK (A)′ 6= 1 so these components are precisely the components of K. We deduce that H normalizes K and
then that H normalizes K. Then [H, CK (A)] ≤ H ∩ K ≤ Z(K) so [H, CK (A)] = 1.
[1, Theorem 4.4(c)] implies that [H, K] = 1. Since K is perfect, it follows from the
Three Subgroups Lemma that [H, K] = 1.
Lemma 4.2. Let P be a group theoretic property that satisfies:
(a) P is subgroup and quotient closed.
(b) If G/M and G/N are P-groups then so is G/M ∩ N .
Suppose the group A acts coprimely on the group G, that P is an A-invariant
subgroup of G and that K ∈ compA (G). Assume that
CK (A) ≤ NG (P ) and [P, CK (A)] is a P-group.
Then
P ≤ NG (K) or CK/Z(K) (A) is a P-group.
Proof. Set M = h CK (A)P i = [P, CK (A)]CK (A) ≤ E(G). We have M =
[P, CK (A)](M ∩ K) and M ∩ K E M since K E E(G). Now M/M ∩ K is a
P-group since it is isomorphic to a quotient of [P, CK (A)]. Thus
OP (M ) ≤ M ∩ K.
Since P ≤ NG (M ) we have P ≤ NG (OP (M )).
Set E = E(G) and G = G/Z(E(G)), so E is the direct product of the components
of G. Set N = OP (M ). Suppose that N 6= 1. Now N is A-invariant so P permutes
the components of G onto which N projects nontrivially. Since N ≤ K and both
N and K are A-invariant, these components are precisely the components of K.
Then P normalizes K and hence K.
Suppose that N = 1. Then N ≤ Z(E(G)) and so M/M ∩ Z(E(G)) is a P-group.
As CK (A) ≤ M and Z(K) = K ∩ Z(E(G)) it follows that CK (A)/CK (A) ∩ Z(K) is
a P-group. Since A acts coprimely on G, the quotient is isomorphic to CK/Z(K) (A),
completing the proof.
5. P-subgroups
Definition 5.1. Let A be a group that acts on the group G and let P be a group
theoretic property. Then
OP (G; A) = h P ≤ G | P is an ACG (A)-invariant P-subgroup i.
We are interested in situations where OP (G; A) is a P-group, in other words, when
does G possess a unique maximal ACG (A)-invariant P-subgroup? The goal of this
section is to prove the following.
Theorem 5.2. Let P be a group theoretic property that is closed under subgroups,
quotients and extensions. Let A be an elementary abelian r-group for some prime r
and assume that A acts coprimely on the K-group G. Then OP (G; A) is a P-group.
As an immediate consequence we have the following.
Corollary 5.3. Let r be a prime and A an elementary abelian r-group that acts
on the group G. Suppose that θ is an A-signalizer functor on G and that θ(a) is a
K-group for all a ∈ A# .
Let P be a group theoretic property that is closed under subgroups, quotients and
extensions. Define θP by
θP (a) = OP (θ(a); A)
for all a ∈ A# . Then θP is an A-signalizer functor.
This generalizes a result of McBride [4, Lemma 3.1], who proves it in the case
P = “is solvable” and θ is near solvable.
Throughout the remainder of this section, we assume the hypotheses of Theorem 5.2.
Lemma 5.4. Assume that N is an ACG (A)-invariant subgroup of G.
(a) CG (A) normalizes OP (N ; A).
(b) Suppose that OP (G; A) and OP (N ; A) are P-groups. Then
OP (N ; A) = OP (G; A) ∩ N.
If in addition, N E G then OP (N ; A) E OP (G; A).
(c) Suppose N = N1 × · · · × Nm with each Ni being A-invariant. Then
OP (N ; A) = OP (N1 ; A) × · · · × OP (Nm ; A).
Proof. (a). Since CG (A) normalizes ACN (A), it permutes the ACN (A)-invariant
subgroups of N .
(b). By (a), OP (N ; A) ≤ OP (G; A) ∩ N . Moreover, OP (G; A) ∩ N is an ACN (A)-invariant P-subgroup of N , proving the reverse inclusion.
(c). Trivial.
Lemma 5.5. Suppose that N is an A-invariant normal subgroup of G and that
N = N1 × · · · × Nm
with each Ni being simple. For each i, let πi : N −→ Ni be the projection map.
Suppose also that B is a subgroup of A that normalizes but does not centralize each
Ni .
(a) CN (A)πi = CNi (B) for each i.
(b) OP (N ; A) = OP (N ; B).
(c) If X is an ACN (A)-invariant subgroup of G that normalizes each Ni then
[X, CN (B)] ≤ (X ∩ N )π1 × · · · × (X ∩ N )πm .
Proof. Note that A permutes { N1 , . . . , Nm } since this is the set of components of
N . For each i, set Ai = NA (Ni ). Now Ni is a simple K-group so [1, Theorem 4.1]
implies that the Sylow r-subgroups of Aut(Ki ) are cyclic. Hence Ai = CA (Ni )B.
Using [1, Lemma 3.6] we have CN (A)πi = CNi (Ai ) = CNi (B) so (a) follows.
(b). Choose 1 ≤ i ≤ m, let X be a BCNi (B)-invariant P-subgroup of Ni and
set Y = h X A i. Since Ai = NA (Ni ) it follows that Y is the direct product of
| A : Ai | copies of X. Then Y is an A-invariant P-subgroup. Now CN (B) =
CN1 (B) × · · · × CNm (B). It follows that Y is CN (B)-invariant. Now CN (A) ≤
CN (B) whence Y ≤ OP (N ; A). Using Lemma 5.4(c), with B in place of A, we
deduce that OP (N ; B) ≤ OP (N ; A).
To prove the opposite containment, suppose that Z is an ACN (A)-invariant P-subgroup of N . From (a) it follows that each Zπi is a BCNi (B)-invariant subgroup of Ni ,
whence Z ≤ OP (N1 ; B) × · · · × OP (Nm ; B). Another application of Lemma 5.4(c)
implies Z ≤ OP (N ; B), completing the proof.
(c). Let x ∈ X and c ∈ CN (A). Then c = (cπ1 ) · · · (cπm ) and, as x normalizes
each Ni , it follows that [x, cπi ] ∈ Ni . Hence
[x, c] = [x, cπ1 ] · · · [x, cπm ].
Now [x, c] ∈ X ∩ N so it follows that
[x, c]πi = [x, cπi ].
In particular, [x, cπi ] ∈ (X ∩N )πi . Since CN (A)πi = CNi (B) we have [x, CNi (B)] ≤
(X ∩ N )πi . Now CN (B) = CN1 (B) × · · · × CNm (B) so the result follows.
Lemma 5.6. Suppose that N is an A-invariant normal subgroup of G that is the
direct product of nonabelian simple groups and that X is an ACN (A)-invariant subgroup of G. Assume that X and OP (N ; A) are P-groups. Then X ≤ NG (OP (N ; A)).
Proof. Assume false and consider a counterexample with | A | minimal and then
| G |+| N |+| X | minimal. Then G = XN . By Coprime Action, X = CX (A)[X, A] =
h CX (B) | B ∈ Hyp(A) i. Since CX (A) normalizes OP (N ; A) it follows that
X = [X, A] = CX (B) for some B ∈ Hyp(A). We have
N = N1 × · · · × Nm
where each Ni is nonabelian and simple. Then { N1 , . . . , Nm } is the set of components of N and is hence permuted by AG. Using Lemma 5.4(c) and the minimality
of | N | it follows that AX is transitive on { N1 , . . . , Nm }. If Ni is a P-group
for some i then so is N , whence N = OP (N ; A), contrary to X not normalizing
OP (N ; A). We deduce that no Ni is a P-group.
Claim 1. B acts semiregularly on { N1 , . . . , Nm }.
Proof. Assume false. Set B0 = NB (N1 ). Then without loss, B0 6= 1. As B ≤
Z(AX) and AX is transitive on { N1 , . . . , Nm } it follows that B0 normalizes each
Ni . By the same reasoning, either B0 acts nontrivially on each Ni or trivially on
each Ni . The minimality of | A | rules out the latter case since if [B0 , X] = 1 then
we could replace A by A/B0 . Hence B0 acts nontrivially on each Ni . Lemma 5.5(b)
implies that OP (N ; A) = OP (N ; B0 ) so as [X, B0 ] = 1 it follows from Lemma 5.4(a)
that X normalizes OP (N ; A), a contradiction.
Claim 2. Let 1 ≤ i ≤ m. Then | NA (Ni ) | = r.
Proof. Without loss, i = 1 and { N1 , . . . , Nl } is the orbit of A on { N1 , . . . , Nm }
that contains N1 . By Claim 1, NA (N1 ) ∩ B = 1 so as B ∈ Hyp(A) it follows that
| NA (N1 ) | = 1 or r. Suppose, for a contradiction, that | NA (N1 ) | = 1. Then A
is regular on { N1 , . . . , Nl }. Set K = N1 × · · · × Nl , so that K ∈ compA (G). [1,
Lemma 3.6] implies that CK (A) ≅ N1 and that CK (A) is maximal subject to being
A-invariant. In particular, CK (A) is not a P-group so Lemma 4.2 implies that X
normalizes K. Then [X, CK (A)] ≤ X ∩ K. Note that K is not a P-group since
N1 is not a P-group, whence X ∩ K < K. Since X ∩ K is ACK (A)-invariant, it
follows that X ∩ K E CK (A). Then as CK (A) is not a P-group and CK (A) is
simple, we have X ∩ K = 1. Lemma 4.1 implies that [X, K] = 1. Recall that AX
is transitive on { N1 , . . . , Nm } and that K ∈ compA (G). It follows that K = N ,
whence [X, OP (N ; A)] = 1, a contradiction.
Claim 3. | A | = r and m = 1, so N is simple.
Proof. Consider the permutation action of AX on { N1 , . . . , Nm }. Note that A ∈
Sylr (AX) so it follows from Claim 2 that NA (Ni ) ∈ Sylr (NAX (Ni )) for all i. Set
A∗ = NA (N1 ). Let 1 ≤ i ≤ m. Then NA (Ni ) is conjugate in AX to A∗ , so as A is
abelian, there exists x ∈ X with NA (Ni )x = A∗ . Now [NA (Ni ), x] ≤ A ∩ X = 1 so
we deduce that
NA (Ni ) = A∗
for all i. Recall that B ∈ Hyp(A) so Claims 1 and 2 imply that A = A∗ × B. As
X = [X, A] = CX (B) we have X = [X, A∗ ] and it follows that X normalizes each
Ni . Then B is transitive on { N1 , . . . , Nm } and either A∗ is nontrivial on each Ni
or trivial on each Ni . In the latter case, X centralizes N and hence OP (N ; A), a
contradiction. Thus A∗ is nontrivial on each Ni .
We will apply Lemma 5.5, with A∗ in the role of B. Put Y = (X ∩ N )π1 ×
· · · × (X ∩ N )πm . Lemma 5.5(a) implies that Y is CN (A∗ )-invariant. Note that
XY is a P-group because X normalizes each Ni and hence each (X ∩ N )πi .
Lemma 5.5(c) implies that XY is A∗ CN (A∗ )-invariant. Lemma 5.5(b) implies
that OP (N ; A) = OP (N ; A∗ ) so if A 6= A∗ , then the minimality of | A | supplies
a contradiction. We deduce that A = A∗ . Then | A | = r and B = 1. As B is
transitive on { N1 , . . . , Nm }, we have m = 1.
It is now straightforward to complete the proof. Note that X ∩ N is an ACN (A)-invariant P-subgroup of N and that N is not a P-group. Since N is simple it
follows that (X ∩ N )CN (A) < N .
Suppose that X ∩ N ≤ CN (A). Then [CN (A), X, A] ≤ [X ∩ N, A] = 1. Trivially [A, CN (A), X] = 1 so the Three Subgroups Lemma forces [X, A, CN (A)] =
1. As X = [X, A] it follows from [1, Theorem 4.4(c)] that [X, N ] = 1. Then
[X, OP (N ; A)] = 1, a contradiction. We deduce that X ∩ N 6≤ CN (A).
Now [1, Theorem 4.1] implies that N ≅ L2 (2r ) or Sz(2r ) and that | Out(N ) | = r.
Consequently
G = XN = CG (N ) × N.
Let α and β be the projections G −→ CG (N ) and G −→ N respectively. Then
X ≤ Xα × Xβ and Xβ ≤ OP (N ; A). It follows that X normalizes OP (N ; A), a
contradiction.
Proof of Theorem 5.2. Assume false and let G be a minimal counterexample. Using
Lemma 5.4(a) it follows that G = OP (G; A) and since P is closed under extensions
we have OP (G) = 1. Let N be a minimal A-invariant normal subgroup of G. Since
G = OP (G; A), the minimality of G implies that G/N is a P-group.
Suppose that N is abelian. Then N is an elementary abelian q-group for some
prime q. Now OP (G) = 1 so N is not a P-group. The hypothesis satisfied by
P implies that every P-group is a q ′ -group. In particular, N is a normal Sylow
subgroup of G. [1, Coprime Action(g)] implies there exists an A-invariant complement H to N , so G = HN and H ∩ N = 1. Let P be an ACG (A)-invariant
P-subgroup of G and set G0 = P N . Then G0 = (G0 ∩ H)N and P and G0 ∩ H are
A-invariant complements to N in G0 . [1, Coprime Action(g)] implies P c = G0 ∩ H
for some c ∈ CG0 (A). Since P is ACG (A)-invariant we obtain P ≤ H. Then
G = OP (G; A) ≤ H, a contradiction. We deduce that N is nonabelian. Then N is
a direct product of simple groups.
Suppose that N 6= G. Then OP (N ; A) is a P-group by the minimality of G.
Now G = OP (G; A) so Lemma 5.6 implies OP (N ; A) E G. As OP (G) = 1 this
forces OP (N ; A) = 1. Let P be an ACG (A)-invariant P-subgroup of G. Then
P ∩ N ≤ OP (N ; A) = 1. Since N is A-invariant and the direct product of simple
groups, it is the direct product of the A-components of G. Lemma 4.1 implies
[P, N ] = 1. But G = OP (G; A) whence N ≤ Z(G), a contradiction. We deduce
that N = G. Moreover, since N is a minimal A-invariant normal subgroup of G, it
follows that G is A-simple.
Recall the definitions of underdiagonal and overdiagonal subgroups of G as given
in [1, §6]. Let P be an ACG (A)-invariant P-subgroup of G and suppose that P is
overdiagonal. Then each component of G is a P-group, so G is a P-group, a contradiction. We deduce that every ACG (A)-invariant P-subgroup of G is underdiagonal.
[1, Lemma 6.8(b)] implies that G possesses a unique maximal ACG (A)-invariant
underdiagonal subgroup. Thus G 6= OP (G; A). This final contradiction completes
the proof.
6. Near (A, P)-subgroups
Throughout this section we assume the following.
Hypothesis 6.1.
• r is a prime and A is an elementary abelian r-group.
• A acts coprimely on the K-group G.
• P is a group theoretic property that is closed under subgroups, quotients
and extensions.
• Every solvable group is a P-group.
Definition 6.2.
• G is a near (A, P)-group if CG (A) is a P-group.
• OnP (G) = h N E G | N is A-invariant and a near (A, P)-group i.
• OnP (G; A) = h H ≤ G | H is ACG (A)-invariant and a near (A, P)-group i.
Lemma 6.3.
(a) OnP (G) is a near (A, P)-group.
(b) Suppose N E G is A-invariant and that N and G/N are near (A, P)-groups.
Then so is G.
Proof. (a). Suppose that N, M E G are A-invariant near (A, P)-groups. Coprime
Action implies that CN M (A) = CN (A)CM (A) so as CN (A) and CM (A) normalize
each other, the assumptions on P imply that CN (A)CM (A) is a P-group.
Thus N M is a near (A, P)-group and the result follows.
(b). Because P is closed under extensions.
The main aim of this section is to prove the following.
Theorem 6.4.
(a) OnP (G; A) is a near (A, P)-group.
(b) Suppose that N is an A-invariant normal subgroup of G then OnP (N ; A) =
OnP (G; A) ∩ N .
(c) Suppose that H is an ACG (A)-invariant subgroup of G then OP,E (H) normalizes OnP (G; A).
Lemma 6.5. Suppose that G is A-simple and that X 6= 1 is an ACG (A)-invariant
near (A, P)-subgroup of G. Then G is a near (A, P)-group.
Proof. Suppose first that sol(X) 6= 1. Then G possesses a nontrivial ACG (A)-invariant solvable subgroup. [1, Theorem 4.1(c)] implies that CG (A) is solvable.
By hypothesis, every solvable group is a P-group so G is a near (A, P)-group.
Hence we may assume that sol(X) = 1. Moreover, by considering CX (A) in place
of X, we may assume that sol(CX (A)) = 1.
Since sol(X) = 1, [1, Theorem 4.4(a)] implies CX (A) 6= 1, so as sol(CX (A)) = 1
we have 1 6= E(CX (A)) E CG (A). Now G is an A-simple K-group and CG (A) is nonsolvable so F ∗ (CG (A)) is A-simple and CG (A)/F ∗ (CG (A)) is solvable by [1, Theorem 6.5(a),(b) and Theorem 4.1]. Then E(CX (A)) = F ∗ (CG (A)) so F ∗ (CG (A)) is
a P-group. Since CG (A)/F ∗ (CG (A)) is solvable, the hypothesis on P implies that
CG (A) is a P-group. Then G is a near (A, P)-group.
Corollary 6.6. Suppose that X is an ACG (A)-invariant near (A, P)-subgroup of
G and that L ∈ compA (G) with Z(L) = 1. Then [X, L] = 1 or L is a near (A, P)-group.
Proof. Assume [X, L] 6= 1. Lemma 4.1 implies that X ∩ L 6= 1 so as X ∩ L is an
ACL (A)-invariant near (A, P)-subgroup of L, the result follows.
Proof of Theorem 6.4. (a). Assume false and let G be a minimal counterexample.
Then G = OnP (G; A), OnP (G) = 1, sol(G) = 1 and G/E(G) is a near (A, P)-group.
It follows that E(G) is not a near (A, P)-group and that there exists L ∈ compA (G)
such that L is not a near (A, P)-group. Note that Z(L) ≤ sol(G) = 1. As G =
OnP (G; A), Corollary 6.6 implies L ≤ Z(G), a contradiction.
(b). Since OnP (G; A) ∩ N is a near (A, P)-group we have OnP (G; A) ∩ N ≤
OnP (N ; A). Now CG (A) permutes the ACN (A)-invariant near (A, P)-subgroups of
N so CG (A) normalizes OnP (N ; A). Thus OnP (N ; A) ≤ OnP (G; A) ∩ N , completing
the proof.
(c). Since OnP (G) ≤ OnP (G; A) we may pass to the quotient G/OnP (G) and
assume that OnP (G) = 1. Now every solvable group is a P-group so sol(G) = 1
and hence CG (E(G)) = 1. Let
Cn = { K ∈ compA (G) | K is a near (A, P)-group }
C0 = compA (G) − Cn .
Corollary 6.6 implies that OnP (G; A) centralizes h C0 i. Then OnP (G; A) normalizes
h Cn i = CE(G) (h C0 i). As CG (E(G)) = 1 we also have Cn 6= ∅.
Suppose L ∈ compA,P (H). We claim that L acts trivially on C0 . If L ≤ E(G)
then the claim is trivial so suppose L 6≤ E(G). Now every solvable group is a
P-group so OP (L) is the unique maximal A-invariant normal subgroup of L. Consequently L ∩ E(G) ≤ OP (L). Now E(G) ∩ HCG (A) E HCG (A) so Theorem 3.4(d)
implies that E(G) ∩ HCG (A) normalizes L. Let K ∈ C0 . Then CK (A) ≤ NG (L).
Since K ∩L ≤ L∩E(G) ≤ OP (L), Lemma 6.5 implies K ∩L = 1, whence [K, L] = 1
by Lemma 4.1. This establishes the claim. We deduce that EP (H) normalizes
h C0 i and also normalizes h Cn i = CE(G) (h C0 i). We have previously seen that
OnP (G; A) normalizes h Cn i so as OP (H) ≤ OnP (G; A) we have that
h OP,E (H), OnP (G; A), CG (A) i ≤ NG (h Cn i).
Since OnP (G) = 1, the normalizer is a proper subgroup of G. The conclusion
follows by induction.
7. Local to global results
We shall generalize some of the results of [1, §9] concerning A-components to
(A, P)-components. Consider the following:
Hypothesis 7.1.
• r is a prime and A is an elementary abelian r-group that acts coprimely on
the K-group G.
• P is a group theoretic property that satisfies Hypothesis 2.1.
• H is an ACG (A)-invariant subgroup of G.
The aim is to establish a connection between the (A, P)-components of H and the
structure of G. This is not possible in full generality, but if additional assumptions
are made then it is.
Hypothesis 7.2.
• Hypothesis 7.1.
• Every P-group is solvable.
• a ∈ A# and H is CG (a)-invariant.
Hypothesis 7.3.
• Hypothesis 7.1.
• Whenever A acts coprimely on the K-group X and CX (A) is a P-group
then X is solvable.
Lemma 7.4.
(a) Assume Hypothesis 7.1. If P is any of the properties “is trivial”, “is nilpotent” or “has odd order” then Hypothesis 7.3 is satisfied.
(b) Assume Hypothesis 7.3. Then every P-group is solvable.
Proof. (a). This follows from [1, Theorem 4.4].
(b). Let X be a P-group and let A act trivially on X. Then CX (A) = X is a P-group, so Hypothesis 7.3 implies that X is solvable.
We state the main result of this section.
Theorem 7.5.
(a) Assume Hypothesis 7.2 and that K ∈ compA,P (H) satisfies K = [K, a].
Then K ∈ compA,P (G).
(b) Assume Hypothesis 7.3. Then OP (H)EP (H) acts trivially on compsol (G).
If K ∈ compA,P (H) then there exists a unique K̃ ∈ compA,sol (G) with K ≤ K̃.
At the end of this section, examples will be constructed to show that the additional hypothesis in (b) is needed. It would be interesting to investigate if (a)
holds without the solvability hypothesis. Two lemmas are required for the proof of
Theorem 7.5.
Lemma 7.6. Assume Hypothesis 7.1, that K ∈ compA,P (H) and that
(a) K ≤ E(G), or
(b) sol(G) = 1 and K acts trivially on comp(G).
Then there exists a unique K̃ ∈ compA (G) with K ≤ K̃.
Proof. Uniqueness is clear since distinct elements of compA (G) have solvable intersection. Note that since H is CG (A)-invariant we have K ∈ compA,P (HCG (A)).
Hence we may assume that CG (A) ≤ H.
Suppose (a) holds. Assume the conclusion to be false. Let L ∈ compA (G).
Now L ∩ H E E H and K 6≤ L so Lemma 3.4(d) implies [K, L ∩ H] ≤ OP (K).
As CL (A) ≤ H we have [K, CL (A)] ≤ OP (K) and CL (A) normalizes K. Since
CE(G) (A) is the product of the subgroups CL (A) as L ranges over compA (G), it
follows that [K, CE(G) (A)] ≤ OP (K).
By hypothesis, K ≤ E(G) so [K, CK (A)] ≤ OP (K). Hence OP (K)CK (A) E K
and Coprime Action implies that A is fixed point free on K/OP (K)CK (A). This
quotient is therefore solvable by [1, Theorem 4.4]. Since K is perfect, it follows
that K = OP (K)CK (A). But [K, CK (A)] ≤ OP (K) so K/OP (K) is abelian. This
contradiction completes the proof.
Suppose (b) holds. Let N be the intersection of the normalizers of the components of G. Since sol(G) = 1 we have CG (E(G)) = 1 and then the Schreier Property
implies that N/E(G) is solvable. Now K is perfect and K ≤ N whence K ≤ E(G).
Apply (a).
Lemma 7.7. Assume Hypothesis 7.1 and that L ∈ compA (G). Then:
(a) OP (H)EP (H) normalizes L and every component of L; or
(b) CL/Z(L) (A) is a P-group.
Proof. Suppose that OP (H)EP (H) ≤ NG (L). Then AOP (H)EP (H) permutes
comp(L). Since A is transitive and OP (H)EP (H) is a normal Hall-subgroup of
AOP (H)EP (H), [1, Lemma 3.8] implies that OP (H)EP (H) acts trivially. Then
(a) holds.
Suppose that OP (H) 6≤ NG (L). Now CL (A) ≤ NG (H) so [OP (H), CL (A)] ≤
OP (H). In particular, the commutator is a P-group. Lemma 4.2 implies that
CL/Z(L) (A) is a P-group.
Suppose that EP (H) 6≤ NG (L). Choose K ∈ compA,P (H) with K 6≤ NG (L).
Set H0 = HCG (A) so H E H0 and K ∈ compA,P (H0 ). Now L ∩ H0 E E H0 so
Lemma 2.4(d) implies K ≤ L ∩ H0 or [K, L ∩ H0 ] ≤ OP (K). The first possibility
does not hold since K 6≤ NG (L). Moreover, CL (A) ≤ L ∩ H0 so we deduce that
[K, CL (A)] is a P-group. Lemma 4.2, with K and L in the roles of P and K
respectively, implies that CL/Z(L) (A) is a P-group.
Proof of Theorem 7.5. (a). Now K ∈ compA,P (CG (a)H) because CG (a) normalizes
H. Hence we may assume that CG (a) ≤ H. Now K = K1 ∗ · · · ∗ Kn where
K1 , . . . , Kn are the h a i-components of K. As K = [K, a] it follows that Ki =
[Ki , a] for each i. If Ki E E G for each i then K E E G and then K ∈ compA,P (G).
Hence, as Hypothesis 7.2 remains valid if A is replaced by h a i, we may assume
that A = h a i.
Consider first the case that sol(G) = 1. Assume that K acts nontrivially on
comp(G). Then the set
C = { L0 ∈ comp(G) | K 6≤ NG (L0 ) }
is nonempty. Since A normalizes K it follows that A acts on C. Now K = [K, a]
so it follows also that a acts nontrivially. Choose L0 ∈ C with L0 6= La0 . Set
L = h L0A i ∈ compA (G). Now L0 is simple because sol(G) = 1 so as A = h a i we
see that L is the direct product of r copies of L0 and then that CL (A) ≅ L0 . By
hypothesis, every P-group is solvable, so CL (A) is not a P-group. Lemma 7.7 implies
that K normalizes L and L0 , a contradiction. We deduce that K acts trivially on
comp(G).
Lemma 7.6 implies there exists K̃ with K ≤ K̃ ∈ compA (G). Now CK̃ (a) ≤ H ∩ K̃
and K = [K, a] ≤ H ∩ K̃ so CK̃ (a) < H ∩ K̃. [1, Corollary 6.9] implies
H ∩ K̃ = K̃. Then K E E K̃. Since K̃ is A-simple we obtain K = K̃ and so
K E E G as desired.
Returning now to the general case, set S = sol(G). Applying the previous argument to G/S, we obtain KS E E G. Now CS (a) ≤ S ∩ H ≤ sol(H) so Lemma 2.4(i)
implies CS (a) ≤ NG (K). Then [1, Lemma 8.3] implies S ≤ NG (K) whence K EE G
and K ∈ compA,P (G) as required.
(b). As in (a) we may suppose that CG (A) ≤ H. Consider first the case that
sol(G) = 1. If L ∈ compA (G) then L is nonsolvable so by hypothesis, CL (A) is
not a P-group. Lemma 7.7 implies that OP (H)EP (H) normalizes L and every
component of L. Since every component of G is contained in an A-component of
G, it follows that OP (H)EP (H) acts trivially on comp(G). Lemma 7.6(b) implies
there exists K̃ with K ≤ K̃ ∈ compA (G).
Returning to the general case, set S = sol(G) and Ḡ = G/S. The map X ↦ X̄
is a bijection compsol (G) −→ comp(Ḡ). It follows that OP (H)EP (H) acts trivially
on compsol (G). By the previous paragraph and Lemma 3.4(j) there exists M ∈
compA,sol (G) with K̄ ≤ M̄ . Then K ≤ M sol(G). Lemma 3.4(i) implies M E
M sol(G) whence K = K ∞ ≤ (M sol(G))∞ = M and the proof is complete.
We close this section with a corollary and an example. In what follows, nil is an
abbreviation for the group theoretic property “is nilpotent”.
Corollary 7.8. Let A be an elementary abelian r-group that acts coprimely on the
K-group G. Let a ∈ A# and suppose that H is an ACG (a)-invariant subgroup of
G.
(a) Let K ∈ compA (H). Then there exists K̃ with K ≤ K̃ ∈ compA,nil (G).
(b) Let K ∈ compA,nil (H). Then there exists K̃ with K ≤ K̃ ∈ compA,sol (G).
Proof. (a) follows from [1, Theorem 9.1] and (b) follows from Lemma 7.4 and Theorem 7.5(a).
The following example shows that the corollary cannot be extended further and
that the restriction on P in Theorem 7.5(b) is needed.
Example 7.9. Let r be a prime and J1 be a simple r′ -group that admits an
automorphism a1 of order r with CJ1 (a1 ) solvable. For example L2 (2r ) or Sz(2r )
with r > 5. Let K be a simple group with order n. Then K acts on the direct
product
D = J1 h a1 i × J2 h a2 i × · · · × Jn h an i
permuting the direct factors regularly. Set a = a1 a2 · · · an ∈ CD (K) and put
A = h a i. Let G be the semidirect product
G = (J1 × · · · × Jn ) ⋊ K.
Then a acts on G with CG (a) = (CJ1 (a1 ) × · · · × CJn (an ))K. Observe that K is
contained in an (A, sol)-component of CG (a). However, the (A, sol)-components of
G are J1 , . . . , Jn , none of which contain K.
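To fix ideas, one concrete choice of parameters (our illustration; the example itself leaves them general) is r = 7 and J1 = L2(2^7), with a1 the field automorphism of order 7, so that CJ1 (a1 ) ≅ L2(2) ≅ Sym(3) is solvable, together with K ≅ Alt(5) acting regularly on n = 60 direct factors by right multiplication on itself.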
8. The McBride Dichotomy
In this section we give an application of §6 to the study of Signalizer Functors.
No originality is claimed, the results being a presentation of McBride’s work [4,
Theorem 6.6]. They culminate in a fundamental dichotomy in the proof of the
Nonsolvable Signalizer Functor Theorem.
Throughout this section, we assume the following.
Hypothesis 8.1.
(a) P is a group theoretic property that is closed under subgroups, quotients
and extensions.
(b) Every solvable group is a P-group.
We will be interested in the subgroup OP,E (H) for groups H. Note that
OP,E (H) = OP (H)EP (H).
Lemma 8.2. Let G be a group.
(a) Suppose H ≤ G and OP,E (G) ≤ H. Then OP,E (G) = OP,E (H).
(b) Suppose H, M ≤ G, OP,E (H) ≤ M and OP,E (M ) ≤ H. Then OP,E (H) =
OP,E (M ).
Proof. (a). Set G = G/OP (G). Then
E(G) = OP,E (G) ≤ H.
Since P is closed under extensions we have OP (G) = 1. Then
[OP (H), E(G)] ≤ OP (H) ∩ E(G)
≤ OP (E(G)) ≤ OP (G) = 1.
Now every solvable group is a P-group so sol(G) = 1 and hence CG (E(G)) =
1. Then OP (H) = 1 and OP (H) ≤ OP (G). Since OP (G) ≤ H it follows that
OP (H) = OP (G). As E(G) ≤ H we have E(G) E E(H). Then any component of
H not contained in E(G) would centralize E(G), contrary to CG (E(G)) = 1. We
deduce that E(G) = E(H) and the conclusion follows.
(b). Observe that OP,E (H) = OP,E (H ∩ M ) = OP,E (M ).
We remark that (b) is an elementary version of Bender’s Maximal Subgroup Theorem.
Lemma 8.3. Let r be a prime and A be a noncyclic elementary abelian r-group
that acts coprimely on the K-group G. Then
OP,E (G) = OP,E (h OP,E (CG (B)) | B ∈ Hyp(A) i)
= OP,E (h OP,E (CG (a)) | a ∈ A# i).
Proof. Let H be either right hand side. Coprime Action implies that OP (G) ≤ H.
Passing to the quotient G/OP (G) and applying [1, Lemma 6.12] it follows that
OP,E (G) ≤ H. Apply Lemma 8.2(a).
Throughout the remainder of this section we assume the following.
Hypothesis 8.4.
(a) Hypothesis 8.1.
(b) r is a prime and A is an elementary abelian r-group with rank at least 3.
(c) A acts on the group G.
(d) θ is an A-signalizer functor on G.
(e) θ(a) is a K-group for all a ∈ A# .
(f) G̃ = h θ(a) | a ∈ A# i.
Lemma 8.5. G̃ = h θ(B) | B ∈ Hyp(A) i.
Proof. Let a ∈ A# . Coprime Action applied to the action of A/h a i on θ(a) implies
that
θ(a) = h θ(a) ∩ CG (B) | a ∈ B ∈ Hyp(A) i.
Now θ(a) ∩ CG (B) = θ(B) whenever a ∈ B ∈ Hyp(A) so the conclusion follows.
Lemma 8.6. Let
H1 = h OP,E (θ(a)) | a ∈ A# i
and
H2 = h OP,E (θ(B)) | B ∈ Hyp(A) i.
Let i ∈ { 1, 2 } and suppose that Hi is contained in a θ-subgroup. Then Hi is a
θ-subgroup and OP,E (H1 ) E G̃.
Proof. Since any A-invariant subgroup of a θ-subgroup is a θ-subgroup, the first
assertion holds. Suppose i = 1. Let B ∈ Hyp(A). Lemma 8.3, with B in the role
of A, implies that
OP,E (H1 ) = OP,E (h OP,E (CH1 (b)) | b ∈ B # i).
Let b ∈ B # . Since H1 is a θ-subgroup we have OP,E (θ(b)) ≤ CH1 (b) ≤ θ(b).
Lemma 8.2(a) implies that
OP,E (θ(b)) = OP,E (CH1 (b)).
Note that θ(B) ≤ θ(b) so θ(B) normalizes OP,E (θ(b)). It follows that θ(B) normalizes OP,E (H1 ). Then Lemma 8.5 implies that OP,E (H1 ) E G̃.
Suppose i = 2. Let a ∈ A# . Lemma 8.3 implies that
OP,E (θ(a)) = OP,E (h OP,E (θ(a) ∩ CG (B)) | a ∈ B ∈ Hyp(A) i)
= OP,E (h OP,E (θ(B)) | B ∈ Hyp(A) i)
≤ H2 .
It follows that H1 ≤ H2 , so apply the previous case.
For each a ∈ A# , define
θnP (a) = OnP (θ(a); A).
Theorem 6.4 implies that θnP is an A-signalizer functor on G.
Lemma 8.7. Assume that θnP is complete. Then
OP,E (θ(B)) ≤ NG (θnP (G))
for all B ∈ Hyp(A).
Proof. Set S = θnP (G) and let b ∈ B # . Then
CS (b) = θnP (b) = OnP (θ(b); A).
Now θ(B) ≤ θ(b) so Theorem 6.4(c) implies OP,E (θ(B)) normalizes OnP (θ(b); A).
Since S = h CS (b) | b ∈ B # i, the result follows.
Theorem 8.8 (The McBride Dichotomy). Suppose that θ is a minimal counterexample to the Nonsolvable Signalizer Functor Theorem.
(a) Either θnP = 1 or θ = θnP .
(b) Either
• There exist no nontrivial θ(A)-invariant solvable θ-subgroups; or
• θ(A) is solvable and every nonsolvable composition factor of every θsubgroup belongs to { L2 (2r ), L2 (3r ), U3 (2r ), Sz(2r ) }.
Proof. Note that minimality is with reference to the integer
|θ| = Σa∈A# | θ(a) |.
Since θ is a minimal counterexample, it follows that G = G̃ and that no nontrivial
θ-subgroup is normal in G.
(a). Suppose that θnP is complete and that θnP 6= 1. Set S = θnP (G). Then
S is a θ-subgroup so NG (S) 6= G and hence NG (S) possesses a unique maximal
θ-subgroup, θ(NG (S)). Lemmas 8.7 and 8.6 supply a contradiction. We deduce
that either θnP is not complete, in which case θ = θnP ; or θnP (G) = 1, in which
case θnP = 1.
(b). Let P be the group theoretic property “is solvable”. Suppose that θnP = 1.
Let X be a θ(A)-invariant solvable θ-subgroup. Let a ∈ A# . Then, as CX (A)
is solvable, we have CX (a) ≤ OnP (θ(a); A) = θnP (a) = 1. Since A is noncyclic,
it follows that X = 1, so the first assertion holds. Suppose that θ = θnP . Let
a ∈ A# . Then θ(A) = Cθ(a) (A) = CθnP (a) (A) so θ(A) is solvable. Let X be a
θ-subgroup. Then CX (A) ≤ θ(A) so CX (A) is solvable. The conclusion follows
from [1, Theorem 4.4].
9. A theorem of Gorenstein and Lyons
We will provide an alternate proof of a special case of the Nonsolvable Signalizer
Functor Theorem due to Gorenstein and Lyons [3]. It is an application of the main
result of §5. Throughout this section, we assume the following.
Hypothesis 9.1.
• r is a prime and A is an elementary abelian r-group with rank at least 3.
• A acts on the group G.
• θ is an A-signalizer functor on G.
• θ(a) is a K-group for all a ∈ A# .
We shall prove:
Theorem 9.2 (Gorenstein-Lyons). Assume that A acts trivially on compsol (θ(a))
for all a ∈ A# . Then θ is complete.
First we develop a little general theory.
Definition 9.3. A subfunctor of θ is an A-signalizer functor ψ on G with ψ(a) ≤
θ(a) for all a ∈ A# . We say that ψ is a proper subfunctor if ψ(a) 6= θ(a) for some
a ∈ A# and that ψ is θ(A)-invariant if ψ(a) is normalized by θ(A) for all a ∈ A# .
Lemma 9.4. Let t ∈ A# , set D = h C[θ(a),t] (t) | a ∈ A# i and define ψ by
ψ(a) = [θ(a), t](θ(a) ∩ D)
for all a ∈ A# .
(a) ψ is a θ(A)-invariant subfunctor of θ.
(b) If ψ is complete then so is θ.
Proof. (a). Let a ∈ A# . Then C[θ(a),t] (t) is Aθ(A)-invariant since θ(a) and t are.
Thus D is Aθ(A)-invariant. Now [θ(a), t] E θ(a) so ψ(a) is a subgroup of θ(a).
Again, it is Aθ(A)-invariant. Let b ∈ A# . By [1, Coprime Action(e)],
ψ(a) ∩ CG (b) = C[θ(a),t] (b)Cθ(a)∩D (b).
Set X = C[θ(a),t] (b). Then X ≤ θ(a) ∩ CG (b) ≤ θ(b). By [1, Coprime Action(a)] we
have
X = [X, t]CX (t) ≤ h [θ(b), t], θ(b) ∩ C[θ(a),t] (t) i ≤ ψ(b).
Trivially, Cθ(a)∩D (b) ≤ θ(b) ∩ D ≤ ψ(b). We conclude that ψ(a) ∩ CG (b) ≤ ψ(b), so
ψ is an A-signalizer functor.
(b). This is [2, Corollary 4.3].
Lemma 9.5. Suppose that:
(i) θ is incomplete.
(ii) ψ is complete whenever ψ is a proper θ(A)-invariant subfunctor of θ.
Then the following hold:
(a) For each t ∈ A# ,
θ(t) = h C[θ(a),t] (t) | a ∈ A# i.
(b) Let
S = { S | S is a simple section of C[θ(a),t] (t) for some a, t ∈ A# }
and let P be the group theoretic property defined by:
H is a P-group if and only if every noncyclic composition factor
of H is isomorphic to a member of S.
Then θ(t) is a P-group for all t ∈ A# .
Proof. (a). Adopt the notation defined in the statement of Lemma 9.4. Since θ is
incomplete, it follows from (ii) and Lemma 9.4 that θ = ψ. Then θ(t) = ψ(t) =
[θ(t), t](θ(t) ∩ D) = D.
(b). For each a ∈ A# , C[θ(a),t] (t) is an Aθ(A)-invariant P-subgroup of θ(t). Now
θ(A) = Cθ(t) (A) so Theorem 5.2 implies that h C[θ(a),t] (t) | a ∈ A# i is a P-group.
Then (a) implies that θ(t) is a P-group.
Lemma 9.6. Suppose that A acts on the K-group H, that t ∈ A# and that t acts
trivially on compsol (H). Then t acts trivially on compsol (M ) whenever M is an
ACH (A)-invariant subgroup of H.
Proof. Assume false and choose K0 ∈ compsol (M ) with K0 6= K0t . Set K =
h K0A i ∈ compA,sol (M ). Now [K, t] is an A-invariant nonsolvable normal subgroup
of K so K = [K, t] ≤ [H, t]. Since [H, t] E H we have compsol ([H, t]) ⊆ compsol (H),
hence we may assume that H = [H, t]. Passing to the quotient H/ sol(H) we may
also assume that sol(H) = 1. Then compsol (H) = comp(H) and CH (E(H)) = 1.
Since H = [H, t] and t acts trivially on comp(H) it follows that every component
of H is normal in H. As CH (E(H)) = 1, the Schreier Property implies that
H/E(H) is solvable. Now K is perfect so K ≤ E(H) and then Lemma 7.6 implies
there exists K ∗ with K ≤ K ∗ ∈ compA (H). Hence we may assume that K ∗ = H,
so that H is A-simple.
Without loss, CA (H) = 1. Set
A∞ = ker(A −→ Sym(comp(H))),
so that t ∈ A∞ . Since M is ACH (A)-invariant and nonsolvable, it follows from [1,
Lemmas 6.6 and 6.7] that either M = CH (B) for some B ≤ A with B ∩ A∞ = 1 or
M ≤ CH (A∞ ). The second possibility does not hold since t ∈ A∞ and K = [K, t] ≤
M . Thus the first possibility holds. [1, Lemma 6.5] implies that M is A-simple.
Since K ∈ compA (M ) we have K = M , so K = CH (B). Then the components of K
correspond to the orbits of B on comp(H). Since t acts trivially on comp(H) it normalizes
each orbit of B and hence each component of K. This contradiction completes the
proof.
Lemma 9.7. Suppose that A acts coprimely on the group H. Let t ∈ A and suppose
that t acts trivially on compsol (H). If S is a simple section of C[H,t] (t) then there
exists L ∈ compsol (H) with | S | < | L/ sol(L) |.
Proof. Coprime Action implies that [H, t, t] = [H, t] and as [H, t] E H we have
compsol ([H, t]) ⊆ compsol (H) so we may suppose that H = [H, t]. We may also
pass to the quotient H/ sol(H) to suppose that sol(H) = 1.
Since t acts trivially on comp(H) and H = [H, t] it follows that every component
of H is normal in H. Let L1 , . . . , Ln be the components of H, so E(H) = L1 × · · ·×
Ln . As sol(H) = 1 we have CH (E(H)) = 1 and the Schreier Property implies that
H/E(H) is solvable. In particular, CH (t)/CE(H) (t) is solvable so S is isomorphic
to a simple section of CE(H) (t). Now CE(H) (t) = CL1 (t) × · · · × CLn (t). Thus
S is isomorphic to a simple section of CLi (t) for some i. Note that CLi (t) 6= Li
since otherwise, as Li E Hh t i and H = [H, t], we would have Li ≤ Z(H). Thus
| S | < | Li | and the proof is complete.
Proof of Theorem 9.2. Assume false and let θ be a minimal counterexample. By
Lemma 9.6, the hypotheses of Lemma 9.5 are satisfied. Note that a group X is
nonsolvable if and only if compsol (X) 6= ∅. By the Solvable Signalizer Functor
Theorem, there exists a pair (b, K) with b ∈ A# and K ∈ compsol (θ(b)). Choose
such a pair with | K/ sol(K) | maximal. By Lemma 9.5 there exists a, t ∈ A# such
that K/ sol(K) is isomorphic to a simple section of C[θ(a),t] (t). Lemma 9.7 implies
| K/ sol(K) | < | L/ sol(L) | for some L ∈ compsol (θ(a)). This contradicts the choice
of (b, K) and completes the proof.
References
1. P. Flavell, Automorphisms of K-groups I, Preprint: http://arxiv.org/abs/1609.01969
2. P. Flavell, A new proof of the Solvable Signalizer Functor Theorem, J. Algebra 398 (2014)
350-363
3. D. Gorenstein and R. Lyons, Nonsolvable Signalizer Functors on Finite Groups, Proc. London
Math. Soc. 35 (3) (1977) 1-33
4. P.P. McBride, Near solvable signalizer functors on finite groups, J. Algebra 78(1) (1982) 181-214
5. P.P. McBride, Nonsolvable signalizer functors on finite groups, J. Algebra 78(1) (1982) 215-238
The School of Mathematics, University of Birmingham, Birmingham B15 2TT, Great
Britain
E-mail address: [email protected]
| 4 |
Personalized Machine Learning for Robot Perception
of Affect and Engagement in Autism Therapy
arXiv:1802.01186v1 [cs.RO] 4 Feb 2018
Ognjen (Oggi) Rudovic1∗ , Jaeryoung Lee2 , Miles Dai1 ,
Björn Schuller3 and Rosalind W. Picard1
1
2
3
MIT Media Lab, USA
Chubu University, Japan
Imperial College London, UK
∗
E-mail: [email protected].
Robots have great potential to facilitate future therapies for children on the
autism spectrum. However, existing robots lack the ability to automatically
perceive and respond to human affect, which is necessary for establishing
and maintaining engaging interactions. Moreover, their inference challenge
is made harder by the fact that many individuals with autism have atypical
and unusually diverse styles of expressing their affective-cognitive states. To
tackle the heterogeneity in behavioral cues of children with autism, we use the
latest advances in deep learning to formulate a personalized machine learning
(ML) framework for automatic perception of the children’s affective states and
engagement during robot-assisted autism therapy. The key to our approach is
a novel shift from the traditional ML paradigm; instead of using “one-size-fits-all” ML models, our personalized ML framework is optimized for each child
by leveraging relevant contextual information (demographics and behavioral
assessment scores) and individual characteristics of each child. We designed
and evaluated this framework using a dataset of multi-modal audio, video and
autonomic physiology data of 35 children with autism (age 3-13) and from 2
cultures (Asia and Europe), participating in a 25-minute child-robot interaction (∼ 500k datapoints). Our experiments confirm the feasibility of the robot
perception of affect and engagement, showing clear improvements due to the
model personalization. The proposed approach has potential to improve existing therapies for autism by offering more efficient monitoring and summarization of the therapy progress.
1 Introduction
The past decade has produced an extensive body of research on human-centered robot technologies capable of sensing and perceiving human behavioral cues, leading to more naturalistic
human-robot interaction (1, 2). However, virtually all existing robots are still limited in their
perception of human signals. Until a few years ago, the main focus of research on social robots
has been on their design (3). The next generation of these robots will have to be not only more
appealing to humans, but also more social-emotionally intelligent. Health care is one of the
areas in particular that can substantially benefit from the use of socially assistive robots, which
have the potential to facilitate and improve many aspects of clinical interventions (4, 5). The
most recent advances in machine learning (ML) (6) and, in particular, deep learning (7), have
paved the way for such technology.
Various terms for the use of social robots in clinical therapy settings have emerged, including Socially Assistive Robotics (8, 9), Robot-enhanced Therapy (10, 11), and Robot-augmented
Therapy (12). The main role of social robots in this context is to facilitate a therapy through
story-telling and games, helping the therapist deliver content in a more engaging and interactive
manner. Here we focus on a particular application of social robots to therapy for children
with Autism Spectrum Conditions (ASC) (13). Recently, research on autism has received significant attention due to the increasing number of individuals on the spectrum (1 in 64, with
the prevalence of 4:1 males to females (14)). Children with ASC have persistent challenges
in social communication and interactions, as well as restricted repetitive patterns of behavior,
interests and/or activities (13). To improve their social skills, children with ASC undertake
various types of occupational therapies during which they rehearse social and emotional communication scripts with a therapist. Traditionally, the therapist encourages the child to engage
by means of motivational activities (e.g., using toys). More recently, social robots have been
used to this aim, as many children with ASC find them enjoyable and engaging, perhaps due
to their human-like, yet predictable and nonthreatening nature (15, 16). This, in turn, has been
shown to increase the learning opportunities for these children, for whom early intervention
may enhance long-term outcome (17, 18).
A typical robot-assisted autism therapy for teaching emotion expressions to children with
ASC proceeds as follows: a therapist uses images of facial and body expressions of basic emotions (e.g. sadness, happiness, anger, and fear) as shown by typically developing children.
Then, the robot shows its expressions of these emotions to the child, and the therapists asks
the child to recognize the emotion. This is followed by the mirroring stage, where the child is
encouraged to imitate the robot expressions. If successful, the therapist proceeds to the next
level, telling a story and asking the child to imagine what the robot would feel in a particular
situation. These steps are adopted from the Theory of Mind (ToM) concept (19), designed to
teach perspective taking ("social imagination") – one of the key challenges for many children with ASC. Other therapy designs have also been applied in the field, including Applied
Behavioral Analysis (ABA) (20) and Pivotal Response Treatment (PRT) (21). However, using
humanoid and other types of robotic solutions as part of clinical therapy is still in the "experimental"
stage (22). The progress, in part, has been impeded due to the inability of current robot
solutions to autonomously perceive, interpret, and naturally respond to the children’s behavioral
cues. Today, this has been accomplished by the "person behind the curtain", typically a therapist who controls the robot via a set of keyboard programmed behaviors and different types
of prompts such as the robot waving at or saying something to the child, designed to engage
the child (the so-called "Wizard of Oz (WoZ)" scenario (23)). This makes the interaction less
natural and potentially more distracting for the child and therapist (24). Thus, there is a need for
(semi)autonomous and data-driven robots (25) that can learn and recognize the child’s behavioral cues, and also adapt to them more smoothly (e.g., by changing the exercise, providing
feedback/prompts, etc.) (26). This is discussed in more detail in several pioneering works on the
use of robots in autism therapy (15, 27, 28), where particular attention has been placed on robot
appearance (29, 30), interaction strategy (31) and analysis of behavioral cues of individuals interacting with the robots (30), including their facial expressions (32), body movements (33),
autonomic physiology (34), and vocalizations (35, 36).
Automated analyses of children’s behavioral cues during robot-assisted therapy relies on
ML from sensory inputs capturing different behavioral modalities (37). Typically, these inputs
are transformed into feature representations, which are then used to train supervised ML models,
where human experts provide labels for target states (e.g., engagement levels) of the children
by examining each child’s (video) data. Then, the ML consists of selecting a model (e.g., a
Support Vector Machine (SVM) (6)), which learns a mathematical function to map the input
features onto the target labels. This model is then applied to new data, providing estimates of
target outputs. For instance, in the context of children with ASC, Zheng et al. (38) proposed
the imitation skill training architecture and used a rule-based finite state machine method to
recognize the children’s body gestures from skeleton data. Likewise, Sanghvi et al. (39) used
a dataset of affective postural expressions during chess games between children and the iCat
robot. The authors extracted the upper body silhouette, and trained a set of weak classifiers
for the expression classification. Kim et al. (40) estimated the emotional states of children
with ASC from audio-based data and assessed their social engagement while playing with the
robots, using the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database for real-time emotion classification with SVMs. Other work adopted a similar approach using SVMs
to estimate engagement based on children’s head movements (41). More recently, Esteban
et al. (11) demonstrated the feasibility of robot-assisted autism therapy using the NAO robot,
Kinect cameras, and multimodal-sensory information, including gaze estimation, human action
recognition, facial expression recognition, and voice analysis to classify stereotypical behaviors
of children with ASC using SVMs.
In this work, we use the most recent advances in ML (7, 42) to design a deep learning
framework that can easily personalize the robot’s interpretation of the children’s affective states
and engagement to different cultures and individuals. This is motivated by our previous work
(43), where we found significant cultural and individual differences in the expression of affect
and engagement of the children with ASC. The analyzed data contain ∼ 500k samples from
a highly synchronized multi-modal (visual, audio and physiology) and cross-cultural dataset
of children with ASC interacting with a social robot NAO (43). In our current work, we use
these data to design and implement the first fully personalized deep learning framework that
can automatically estimate the child’s affective states and engagement.
Figure 1: Overview: Data from three modalities (audio, visual and autonomic physiology) are collected
using unobtrusive sensors embedded in the robot (audio-visual), and worn on the child’s wrist (heart
rate, electrodermal activity and temperature). The data are obtained from 35 children with ASC and
with different cultural backgrounds and diverse expressive abilities, while they are engaged in a real-world therapeutic scenario. The context for analysis of the target data is structured through three key
robot stages: (1) Sensing, (2) Perception, and (3) Interaction. The main focus of this study is on the
Perception step, where we demonstrate the feasibility of automatically inferring the child’s affective
states and engagement levels using the proposed personalized perception of affect deep network.
The workflow of the envisioned ML-robot-assisted autism therapy consists of three key
steps. The robot sensing of outward and inward behavioral cues of the child undertaking the
therapy is attained using both open-source and new tools we built for processing the sensory
data from video, audio, and bio-signals of the children. The main focus of this work, which is
built upon deep neural networks (7), is the robot perception step. It takes as input the processed
features and automatically estimates (continuous) levels of the child’s valence, arousal, and
engagement. These are then used to design and implement the child-robot interaction (Fig. 1).
2 Results
We performed four sets of experiments. These were carefully designed to evaluate the role of
the model personalization and the interpretability/utility of the proposed personalized framework. We also provide comparisons of the proposed PPA-net with alternative approaches, and
investigate the role of different behavioral modalities on robot perception of affect and engagement.
2.1 System design
Fig. 2 shows the proposed PPA-net that we designed and implemented to optimally handle: (i)
the multi-modal and missing nature of the children data (feature layer) (44, 45), (ii) highly
heterogeneous children data (a famous adage says: “If you have met one person with autism,
you have met one person with autism.”) (context layer), and (iii) simultaneous and continuous estimation of the children’s affective dimensions: valence, arousal, and engagement — all
three being critical for evaluating the efficacy of the autism therapy (decision layer). For (i),
we perform the fusion of different modalities to obtain optimal input features for each child.
We used a special type of network based on auto-encoders (46) (Sec. 4). The learned feature
representations are further augmented by the expert knowledge within the context layer, quantified using the expert-assessed childhood autism rating scale (CARS) (47). The architecture of
this layer is designed based on the nesting of the children using their demographics (culture and
gender), followed by individual models for each child. Since the target outputs exhibit different
dependence structures for each child (Fig. 3), we used the notion of multi-task learning (48, 49)
to learn the child-specific decision layers.
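To make the layered design concrete, the sketch below illustrates one way such a hierarchy could be set up in PyTorch (our illustration only; the layer sizes, module names, and the exact fusion mechanism are assumptions and do not correspond to the authors' released code). A shared fusion trunk feeds culture- and gender-specific branches, which in turn feed child-specific heads that jointly output valence, arousal, and engagement.

import torch
import torch.nn as nn

class PPANetSketch(nn.Module):
    """Minimal sketch of a hierarchically personalized network:
    shared fusion trunk -> culture branch -> gender branch -> child-specific head."""
    def __init__(self, in_dim=250, hid=128, cultures=("C1", "C2"),
                 genders=("M", "F"), child_ids=()):
        super().__init__()
        # Group-level fusion of the auto-encoded multi-modal features.
        self.fusion = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        # One branch per culture and per (culture, gender) pair.
        self.culture = nn.ModuleDict(
            {c: nn.Sequential(nn.Linear(hid, hid), nn.ReLU()) for c in cultures})
        self.gender = nn.ModuleDict(
            {f"{c}-{g}": nn.Sequential(nn.Linear(hid, hid), nn.ReLU())
             for c in cultures for g in genders})
        # Child-specific decision layers (multi-task): valence, arousal, engagement.
        self.child = nn.ModuleDict({cid: nn.Linear(hid, 3) for cid in child_ids})

    def forward(self, x, culture, gender, child_id):
        h = self.fusion(x)
        h = self.culture[culture](h)
        h = self.gender[f"{culture}-{gender}"](h)
        return torch.tanh(self.child[child_id](h))  # continuous outputs in [-1, 1]

# Example: a batch of 250-D fused features for one hypothetical child.
net = PPANetSketch(child_ids=("child_01",))
out = net(torch.randn(4, 250), culture="C1", gender="M", child_id="child_01")
print(out.shape)  # torch.Size([4, 3])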
Figure 2: Personalized Perception of Affect Network (PPA-net). The feature layer uses (supervised)
autoencoders to filter the noise in the features and reduce the feature size. At the intermediate level
(context layer), behavioural scores of the child’s mental, motor, and verbal ability (also quantified by
CARS) are used to adaptively select the optimal features for each child in the fusion part of the network.
Further personalization of the network to each child is achieved via the demographic variables: culture
and gender. Finally, the estimation of valence, arousal, and engagement is accomplished in the decision
layer using the child-specific network layers, which output continuous estimates of target states.
To train and evaluate the network, we used the dataset of children undergoing occupational
therapy for autism, led by experienced therapists, and assisted by a humanoid robot NAO (43).
The data come from 35 children: 17 from Japan (C1), and 18 from Serbia (C2), all of whom
had a prior diagnosis of autism. This data include: (i) video recordings of facial expressions and
head movements, body movements, pose and gestures, (ii) audio-recordings, and (iii) autonomic
physiology from the child: heart rate (HR), electrodermal activity (EDA), and body temperature
(T) – as measured on the non-dominant wrist of the child. We extracted various features (the
sensing step) from these modalities using state-of-the-art tools for video and audio processing
(OpenFace (50), OpenPose (51), and OpenSmile (52)) - see Appendix/C. We also developed
feature extraction/noise-cleaning tools for processing of physiology data. Fig. 3(A) summarizes
the results. Note that in 48% of the data, at least one of the target modalities was absent (e.g.,
when the child’s face was not visible, and/or when a child refused to wear the wristband).
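The physiology processing tools are described only briefly here; purely as an illustration of the kind of noise cleaning such tools perform (the sampling rate, cut-off frequency, and filter choices below are assumptions, not the authors' settings), a minimal EDA-cleaning step could look as follows.

import numpy as np
from scipy.signal import butter, filtfilt, medfilt

def clean_eda(eda, fs=4.0, cutoff_hz=0.5, spike_kernel=5):
    """Illustrative EDA cleaning: a median filter to suppress short motion
    artifacts, followed by a zero-phase low-pass Butterworth filter."""
    eda = np.asarray(eda, dtype=float)
    eda = medfilt(eda, kernel_size=spike_kernel)            # remove brief spikes
    b, a = butter(N=2, Wn=cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, eda)                               # smooth without phase lag

# Example with a synthetic 60-second signal sampled at 4 Hz.
t = np.arange(0, 60, 1.0 / 4.0)
raw = 2 + 0.3 * np.sin(2 * np.pi * 0.05 * t) + 0.05 * np.random.randn(t.size)
smoothed = clean_eda(raw)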
To obtain the target labels needed to train the network, the dataset was coded in terms of
valence, arousal, and engagement on a continuous scale from −1 to +1 by five trained human
experts, while watching the audio-visual recordings of the sessions. The coders’ agreement was
measured using the intra-class correlation (ICC) (53), type (3,1). The ICC ranges from 0 − 1,
and is commonly used in behavioral sciences to assess the coders’ agreement. The average ICC
per output was: valence (0.53 ± 0.17), arousal (0.52 ± 0.14) and engagement (0.61 ± 0.14).
The codings were pre-processed and averaged across the coders. These were then used as
the ground truth for training ML models, where the data sets of each child were separated in
disjoint training, validation, and testing data subsets (Sec. 4.5). A detailed summary of the data,
features, and coding criteria is provided in Appendix/B,C.
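For completeness, ICC type (3,1) (two-way mixed model, single rater, consistency) can be computed from an n-samples-by-k-coders rating matrix as in the sketch below, following the standard Shrout and Fleiss formulation (a generic illustration, not the authors' own script; the example ratings are synthetic).

import numpy as np

def icc_3_1(ratings):
    """ICC(3,1) for an (n_samples, k_raters) matrix (Shrout & Fleiss, 1979)."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between rated samples
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between coders
    ss_err = ss_total - ss_rows - ss_cols                 # residual
    bms = ss_rows / (n - 1)
    ems = ss_err / ((n - 1) * (k - 1))
    return (bms - ems) / (bms + (k - 1) * ems)

# Example: 100 hypothetical frames coded by 5 coders on a [-1, 1] scale.
rng = np.random.default_rng(0)
truth = rng.uniform(-1, 1, size=(100, 1))
codes = truth + 0.2 * rng.standard_normal((100, 5))
print(round(icc_3_1(codes), 2))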
2.2 Effects of Model Personalization
The main premise of model personalization via the PPA-net is that disentangling different
sources of variation in behavioral modalities of the children with ASC is expected to improve
the individual estimation performance compared to the traditional "one-size-fits-all" approach.
Fig. 3(D) depicts the ICC scores computed at each level in the model hierarchy. Specifically,
the performance scores at the top node in the graph are obtained using the predicted outputs
for children from both cultures using the proposed personalized PPA-net and the group-level
perception of affect network (GPA-net). Overall, the strength of the model personalization
Figure 3: (A) Summary of the fraction of data present across the different modalities both individually
and concurrently. (B) The dependence patterns derived from the manual codings of the valence, arousal,
and engagement. Note the large differences in these patterns at the culture and individual levels. (C)
Clustering of the children from C1&C2 using the t-SNE, an unsupervised dimensionality reduction
technique, applied to the auto-encoded features (Sec. 2.2). (D) ICC scores per child: C1 (17) and
C2 (18) for valence (V), arousal (A) and engagement (E) estimation. Note the effects of the model
personalization: the performance of the proposed personalized PPA-net (in black) improves at all three
levels (culture, gender, and individual) when compared to the GPA-net (in gray). At the individual level,
we depict the difference in their performance, ∆ICC.
can be seen in the performance improvements at culture, gender, and individual levels in all
(sub)groups of the children. A limitation can be seen in the adverse gender-level performance
by the PPA-net on the two females from C1: this is due to the lack of data at this branch in the
model hierarchy, which evidently led to the PPA-net overfitting the data of these two children
when fine-tuning their individual layers, a common bottleneck of ML algorithms when trained
on limited data (54). Since the layers of the GPA-net were tuned using data of all the children,
this resulted in more robust estimation of the network on these individuals.
The individual estimation performance by the two networks is shown at the bottom of
Fig. 3(D). The improvements in ICC due to the network personalization range from 5% − 20%
per child. We also note drops in the PPA-net performance on some children. This is common
in multi-task learning, where the gain in performance on some tasks (here "tasks" are children)
comes at the expense of performance on the others. We also ran the paired t-test with unequal
variances (α = 0.05), using the ICC scores from 10 repetitions of the random splits of the data
per child (see Sec. 4.5), and compared the two models per child. The number of the children,
on which the personalized PPA-net outperformed significantly the GPA-net, in both cultures
are: V = 24/35, A = 23/35, and E = 31/35. More specifically, within C1, we obtained:
V = 10/17, A = 10/17, E = 15/17. Likewise, for C2, we obtained: V = 14/18, A = 13/18,
E = 16/18. Finally, the personalized PPA-net performed significantly better on all three outputs
(V, A, E) on 6/17 children from C1, and 9/18 children from C2. Taken together, these results
demonstrate notable benefits of the model personalization to improving the robot perception of
affect and engagement, at each level in the model hierarchy.
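The per-child significance tests can be outlined as follows (a sketch only: the paper describes a paired t-test with unequal variances over the 10 repetitions, which we approximate here with Welch's two-sample test from SciPy; the ICC values shown are hypothetical).

import numpy as np
from scipy import stats

def ppa_significantly_better(icc_ppa, icc_gpa, alpha=0.05):
    """icc_ppa, icc_gpa: ICC scores of the two models over repeated random
    splits for one child and one output (V, A or E)."""
    t, p = stats.ttest_ind(icc_ppa, icc_gpa, equal_var=False)  # Welch's t-test
    return bool(t > 0 and p < alpha)

ppa = np.array([0.62, 0.60, 0.65, 0.61, 0.63, 0.59, 0.64, 0.62, 0.60, 0.63])
gpa = np.array([0.55, 0.52, 0.57, 0.54, 0.53, 0.56, 0.51, 0.55, 0.54, 0.52])
print(ppa_significantly_better(ppa, gpa))  # True if the improvement is significant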
We also analyzed the auto-encoded features obtained after the fusion of target modalities
(Sec. 4). The original size of the auto-encoded features was 250 dimensions (D). Fig. 3(C)
shows the embeddings of these features into a 2D space obtained using the t-Distributed Stochastic Neighbor Embedding (t-SNE) (55) – a popular ML technique for unsupervised dimensionality
reduction and visualization of high-dimensional data. Note the clustering of the children’s
data in the projected space (the children ID was not used), confirming the high heterogeneity in
behavioral cues of these children. Personalizing the PPA-net to each child allows it to accommodate individual differences at different levels in the model hierarchy, leading to overall better
performance compared to the GPA-net.
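The 2-D embedding in Fig. 3(C) can be reproduced in outline with scikit-learn's t-SNE; the snippet below is a generic sketch in which the feature matrix is a placeholder and the perplexity and initialization are assumptions rather than the settings used in the paper.

import numpy as np
from sklearn.manifold import TSNE

features = np.random.randn(1000, 250)         # placeholder for the 250-D auto-encoded features
embedding = TSNE(n_components=2, perplexity=30.0,
                 init="pca", random_state=0).fit_transform(features)
print(embedding.shape)                         # (1000, 2), one 2-D point per frame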
2.3 Interpretability and Utility
A barrier to the adoption of deep learning arises when interpretability is paramount. Understanding
the features that lead to a particular output builds trust with clinicians and therapists using the
system in their daily practice. To analyze the contribution of each behavioral modality, and the
features within, we used DeepLift (Learning Important FeaTures) (56), an open-source method
for computing importance scores in a neural network. Fig. 4(A) shows the importance scores of
the input features from each modality and from CARS for estimation of engagement, obtained
by applying DeepLift to the PPA-net. We note that the body and face modality are dominant
when the scores are computed for both cultures together. The prior information derived from
CARS is also a big influencer in estimation of engagement. However, at the culture level,
the DeepLift produces opposite scores for the body modality/CARS for the two cultures. This
evidences that the model was capable of disentangling the culture-level differences. By analysis
of CARS (total scores) in our previous work (43), we found statistically significant differences
in the scores for the two cultures (p < 0.05), which, in part, also explains the difference in their
contribution across the two cultures. Similar observations can be made at the gender-level, yet,
these are more difficult to interpret due to the imbalance in males vs. females.
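The importance scores were obtained with the open-source DeepLift method (56). Below is an illustrative way to compute comparable attributions for the engagement output using the Captum library; the model, baseline, and target index are placeholders, and this is our choice of tooling rather than necessarily the implementation used by the authors.

import torch
from captum.attr import DeepLift

# Placeholder model mapping 250-D fused features to [valence, arousal, engagement].
model = torch.nn.Sequential(torch.nn.Linear(250, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 3))
model.eval()

inputs = torch.randn(32, 250)                  # a batch of fused feature vectors
baseline = torch.zeros_like(inputs)            # all-zero reference input

dl = DeepLift(model)
attributions = dl.attribute(inputs, baselines=baseline, target=2)  # index 2 = engagement
per_feature_importance = attributions.mean(dim=0)  # average contribution of each feature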
The PPA-net utility can be demonstrated through visualization of several components of the
robot sensing and perception. Fig. 4 (B) depicts the estimated affect and engagement levels,
along with the key autonomic physiology signals of a child undergoing the therapy (currently,
Figure 4: (A) Interpretability can be enhanced by looking at the influence of the input features on the output target: for example, here the output target is engagement level. The relative
importance scores (y-axis) are shown for the face, body, audio, physiology, and CARS features (x-axis). These are obtained from the DeepLift (56) tool, which provides negative/positive
values when the input feature drives the output toward −1/ + 1. The most evident differences arise from body features, also indicating cultural differences in the two groups. Therapy
monitoring (B) and summarization (C), based on the robot’s sensing of behavioral cues (audio-visual and autonomic physiology), and perception of the current affective states and engagement
levels, of the child. In (B), we depict the engagement (E), arousal (A) and valence (V) levels of the child. The automatically estimated levels using the PPA-net are shown in blue, and the
ground-truth based on the human coding is shown in red. We also plot the corresponding signals measured from the child’s wrist: accelerometer readings (ACC) showing the movement
intensity r = √(x² + y² + z²) along the 3 axes (x, y, z); blood-volume pulse (BVP) and electro-dermal activity (EDA). The bars in plot (B) summarize the therapy in terms of the average
levels of E, V and A (± SD) within each phase of the therapy: (1) pairing, (2) recognition, and (3) imitation (there is no phase (4) - storytelling - because the child left after phase (3)).
these results are obtained by an off-line analysis of the recorded video). We note that the PPA-net was able to accurately detect the changes in the child’s engagement levels (e.g., during the
disengagement segment), while providing estimates that are overall highly consistent with human coders. Since the PPA-net is personalized using data of each child, evidently, it learned
particular expressions of affect and engagement for the interacting child. Fig. 4(C) summarizes
the therapy in terms of average valence, arousal, and engagement levels (along with their variability) within each phase of the therapy. Compared to human coders, the PPA-net produces
these statistics accurately for engagement and arousal levels, while it overestimates the valence
levels. However, as more data of the target child become available, these can be improved by
re-training his/her individual layer.
2.4 Alternative Approaches
How much advantage does the new personalized deep learning approach obtain over more traditional ML? Table 1 (Appendix/A) shows the estimation results obtained by alternative approaches and evaluated using the same experimental protocol (Sec. 4). Here, we compare the
performance of the proposed PPA-net/GPA-net with traditional multi-layer perceptron (MLP)
deep networks with the same hierarchical structure but optimized using standard learning techniques (i.e., without sequential nesting of the layers).1 We also include the traditional ML
models: linear regression (LR) (6), support vector regression (SVR) (6), and gradient boosted
regression trees (GBRTs) (57). In the ML literature, LR is usually considered the baseline
model. SVR is an adaptation of the SVM models (6), used in state-of-the-art works on human-robot interaction (e.g., (11, 40)), for estimation of continuous outputs. On the other hand,
GBRTs are commonly used in clinical decision tasks due to their easy interpretation of input
features (58). For more details about training and evaluation of the models, see Appendix/A.
1 The network layers are optimized jointly in MLP-0, followed by fine-tuning of individual layers (MLP).
From the ICC scores, we note the benefits of the proposed deep learning strategy (PPA-net). The joint learning of all layers in the MLP results in a lack of discriminative power of the
network. Compared to unpersonalized models (GPA-net, MLP-0, LR, SVR, and GBRTs), there
is a gap in performance. While LR fails to account for highly-nonlinear dependencies in the
data, the non-linear kernel method (SVR) achieves it to some extent, but does not reach the full
performance attained by the PPA-net due to the absence of hierarchical structure. On the other
hand, GBRTs are capable of discovering a hierarchy in the features, yet, they lack a principled
way of adapting to each child. Also, the large variance in performance of all the models is
because of high heterogeneity in behavioral expressions of children with ASC. By ranking
the models based on the number of ‘winning’ tasks (TaskRank), the PPA-net outperforms the
compared models on the majority of the tasks (48%), followed by SVR (22%) and GBRT (13%).
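The traditional baselines can be assembled with scikit-learn as in the sketch below (hyperparameters are illustrative defaults, not the values tuned in the paper); each single-output regressor is wrapped so that it predicts valence, arousal, and engagement jointly.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

X_train = np.random.randn(500, 250)               # placeholder fused features
y_train = np.random.uniform(-1, 1, (500, 3))      # valence, arousal, engagement

baselines = {
    "LR": MultiOutputRegressor(LinearRegression()),
    "SVR": MultiOutputRegressor(SVR(kernel="rbf", C=1.0)),
    "GBRT": MultiOutputRegressor(GradientBoostingRegressor(n_estimators=100)),
}
for name, model in baselines.items():
    model.fit(X_train, y_train)
    print(name, model.predict(X_train[:2]).shape)  # (2, 3)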
2.5 Effects of Different Modalities
To assess the contribution of each modality for estimation of target outputs, we evaluated the
PPA-net using visual (face and body), audio and physiological features both independently and
together. Fig. 8 (Appendix/A) shows the average results for both cultures, and for the children
within each culture. As expected, the fusion approach outperforms the individual modalities
across all three outputs (valence, arousal, and engagement). Also, higher performance was
achieved on C1 than on C2 with the multi-modal approach – confirming the complementary and
additive nature of these modalities (59). Furthermore, the body features outperform the other
individual modalities, followed by the face and physiology modality. The low performance
by the audio modality is attributed to a high level of background noise, which is difficult to
control in real-world settings (60). Also, while the physiology features are comparable to the
best performing individual modality (body) in C1, this is not the case in C2.
3 Discussion
The overall objective of this work was to demonstrate the feasibility of an automated system
for robot perception of affect and engagement during autism therapy. This is driven by the societal need for new technologies that can facilitate and improve existing therapies for a growing
number of children with ASC. Recent advances in ML and data collection, using unobtrusive
sensors such as cameras and microphones, and wearable technology for measurement of autonomic physiology, have paved the way for such technology (61); however, little progress has been
made so far (62). To this end, we introduced a novel personalized ML framework that can easily
adapt to a child’s affective states and engagement even across different cultures and individuals.
This framework builds upon state-of-the-art deep learning techniques (7, 42), which we used
to implement the proposed Personalized Perception of Affect Network (PPA-net). While deep
learning has shown great success in a variety of learning tasks (e.g., object and scene recognition (63,64) and sentiment analysis (65)), it has not been explored before in the context of robot
perception for use in autism therapy. One of the reasons is the previous lack of data needed to
take full advantage of deep learning. Using the cross-cultural and multi-modal dataset containing over 500k images of child-robot interactions during autism therapy (43), we were able to
successfully design the robot perception based on the PPA-net.
As shown in our experiments comparing the personalized PPA-net with the group-level
GPA-net and MLP (i.e., the traditional "one-size-fits-all" approaches), deep learning has had
great success in leveraging a vast amount of data. However, realizing the full potential of
our framework on the data of children with ASC requires the network to personalize for each
child. We showed that with the PPA-net, an average intra-class agreement (ICC) of 59% can
be achieved between the model predictions and human (manual) coding of children’s affect and
engagement levels, where the average agreement between the human coders was 55.3%. This
does not imply that humans are not better at estimating affect and engagement, but rather that the proposed framework provides a more consistent and less biased estimation approach. Compared to the standard approach in the field for coding affect (valence and arousal) levels, in the most recent and largest public dataset of human faces (AffectNet (66)), the coders' agreement was
60.7%, and the automatic prediction of valence and arousal (using CNN-AlexNet (67)) was
60.2% and 53.9%, respectively, in terms of Pearson correlation. Note, however, that these
results are obtained from face images of typical individuals, whereas coding and automatic
estimation of the same dimensions from children with ASC is a far more challenging task.
The model personalization is accomplished using three key ingredients that make it particularly suited for the task at hand and different from existing approaches. First, it uses a
novel learning algorithm that allows the deep network to take full advantage of data sharing
at each level in the model hierarchy (i.e., the culture, gender, and individual level). This is
achieved via the newly introduced network operators (learn, nest, and clone) and fine-tuning
strategies (Sec. 4), where the former are based on the notion of network nesting (68) and deeply-supervised nets (69). We showed that, overall, this approach improves the estimation of affect
and engagement at each level in the model hierarchy, obtaining statistically significant improvements on 15/35 children (across all three outputs) when compared to the GPA-net (Fig. 3(D)).
Second, previous deep models (e.g., (70, 71)) that focused on multi-task learning do not leverage contextual information such as demographics (culture and gender), nor do they account for the
expert knowledge. We also showed in our experiments on the network interpretability (Sec. 2.3)
that this is important for disentangling different sources of variance arising from the two cultures
and individuals. This, in turn, allows the PPA-net to focus on individual variation when learning
the network parameters for each child. Third, using the network layers as building blocks in
our framework, we efficiently personalized the network to the target context. Traditional ML
approaches such as SVMs, used in previous attempts to implement the robot perception, and an
ensemble of regression trees (6), do not offer this flexibility. By contrast, the PPA-net brings
together the interpretability, design flexibility and overall improved performance.
Another important aspect of robot perception is its ability to effectively handle the multi-modal
nature of the data, especially in the presence of noisy and missing modalities. We showed in
Sec. 2.5 that the fusion of audio-visual and physiological cues contributes to increasing the network performance. While our experiments revealed that body and face modalities play a central
role in the estimation, the autonomic physiology is also an important facet of affect and engagement (72). This is the first time that both outward and inward expressions of affect and engagement were used together to facilitate the robot perception in autism therapy. We also found that
the autonomic physiology influences the outputs of the two cultures differently. Namely, in C1 the physiology modality alone achieved an average ICC of above 50%, whereas in C2 this score was around 30%. This disparity may be attributed to cultural differences, as children from C2 were moving more during the interactions, which often caused faulty readings from the physiology sensors. Furthermore, the audio modality underperformed in our experiments, despite the evidence from previous works that successfully used it for estimation of affect (73). There are potentially two ways to remedy this: using more advanced techniques for background-noise reduction and speaker diarization, and a richer set of audio descriptors (60). By analyzing the
feature importance, we found that CARS largely influences the estimation of engagement. This
suggests that, in addition to the data-driven approach, the expert knowledge is important for
informing the robot perception in the form of prior knowledge. Lastly, in this work, we adopted
a feature-level fusion of different modalities; however, more advanced approaches can be used
to personalize the feature fusion to each child (e.g., using a mixture-of-experts approach (74)).
One of the important questions we tried to address is how ML algorithms can be of practical use for therapists and clinicians working with children with ASC. The potential utility of our personalized ML framework within autism therapy lies in the visualization of the
estimated affect and engagement levels, and the key autonomic physiology signals of a child
undertaking the therapy (Fig. 4(B)). We note at least two benefits of this: first, the obtained
scores can be used by the robot to automatically adapt its interaction with the child. This can
also assist a therapist to monitor in real time the target behavioral cues of the child, and to
modify the therapy "on the fly". It should also inform the therapist about the idiosyncratic
behavioral patterns of the interacting child. Furthermore, it can assist the therapists in reading
the children’s inward behavioral cues, i.e., their autonomic physiology, which cannot easily be
read from outward cues (e.g., EDA as a proxy of the child’s internal arousal levels, the increase
of which, if not detected promptly, can lead to meltdowns in children with ASC). Second, as we
show in Fig. 4(C), the output of the robot perception can be used to summarize the therapy in
terms of average valence, arousal, and engagement levels (along with their variability) within
each phase of the therapy. This, in turn, would allow for a long-term monitoring of the children’s
progress, also signaling when the robot fails to accurately perceive the child’s signals. This can
be used to improve certain aspects of the child’s behavioral expressions by profiling the child
and designing strategies to optimize his/her engagement through personalized therapy content.
We also note some limitations of the current robot perception that highlight opportunities
for future method enhancement. First, in the current structure of the proposed PPA-net, we assumed that the children are split solely based on their demographics. While the findings in Sec. 2.3
show that the body modality has the opposite influence between the two cultures on estimation
of engagement, thus justifying the current PPA-net architecture, other network structures are
also feasible. For example, an adaptive robot perception would adopt a hybrid approach where
prior knowledge (e.g. demographics) is combined with a data-driven approach to automatically
learn the network structure (75). Also, our current framework is static, while the data we used is
inherently dynamic (the sensed signals are temporally correlated). Incorporating the temporal
context within our framework can be accomplished at multiple levels: different network parameters can be learned for each phase of the therapy. To this end, more advanced models, such as
recurrent neural networks (76), can be used in the individual layers. Furthermore, the network's generalizability, not only to the previously seen children, as currently done by the PPA-net, but also to new children, is another important aspect of robot perception. Extending the current framework so that it can be optimized for previously unseen children would additionally increase
its utility. Due to the hierarchical nature of the PPA-net, a simple way to currently achieve this is
by adding an individual layer for each new child, while re-using the other layers in the network.
There are several other aspects of this work that also need to be addressed in the future. Although we used a rich dataset of child-robot interactions to build the robot perception system,
this dataset contains a single therapy session per child. An ideal system would have a constant
access to the therapy data of a target child, allowing the robot to actively adapt its interpretations
of the child’s affect and engagement, and further personalize the PPA-net as the therapy progresses. For this, ML frameworks such as active learning (77) and reinforcement learning (78)
are a good fit. This would allow the robot to continuously adjust the network parameters using
new data, and also reduce the coding effort by only asking human coders to provide labels for
cases for which it is uncertain. Another constraint of the proposed robot perception solution is
that the video data come from a background camera/microphone. While this allows us to have a
more stable view for the robot sensing of the face-body modality, the view from the robot's perspective would enable more naturalistic interactions. This is also known as active vision (79); however, it poses a number of challenges, including camera stabilization and multi-view
adaptation (80). Finally, one of the important avenues for future research on robot perception
for autism is to focus on its utility and deployment within every-day autism therapies. Only in
this way can the robot perception and the learning of children with ASC be mutually enhanced.
4 Materials and Methods

4.1 Data Representations
We used the feed-forward multi-layer neural network approach (81) to implement the proposed
deep learning architecture (Fig. 2). Each layer receives the output of the layer above as its
input, producing higher-level representations (82) of the features extracted from the behavioral
modalities of the children. We began with the GPA-net, where all layers are shared among
the children. The network personalization was then achieved (i) by replicating the layers to
construct the hierarchical architecture depicted in Fig. 2, and (ii) by applying the proposed fine-tuning strategies to optimize the network performance on each child. The last layers of the
network were then used to make individual estimations of affect and engagement.
4.2 Feature Fusion and Autoencoding
We applied feature-level fusion to the face ($x_f$), body ($x_b$), audio ($x_a$), and physiology ($x_p$) features of each child as $x = [x_f; x_b; x_a; x_p] \in \mathbb{R}^{D_x \times 1}$, where $D_x = 396$ is the overall dimension of the input. The continuous labels for valence ($y_v$), arousal ($y_a$), and engagement ($y_e$) for each child were stored as $y = [y_v; y_a; y_e] \in \mathbb{R}^{D_y \times 1}$, with $D_y = 3$. Furthermore, the data of each child were split into non-overlapping training, validation and test datasets (Sec. 4.5).
To reduce the adverse effects of partially-observed and noisy features in the input x (Fig. 8 (A)
- Appendix/A), we used an autoencoder (AE) (83) in the first layer of the PPA-net. The AE
transforms $x$ into a hidden representation $h_0$ (with an encoder) through a deterministic mapping:

$$h_0 = f_{\theta_0^{(e)}}(x) = W_0^{(e)} x + b_0^{(e)}, \qquad \theta_0^{(e)} = \{W_0^{(e)}, b_0^{(e)}\}, \qquad (1)$$

parametrized by $\theta_0^{(e)}$, where $e$ designates the parameters on the encoder side. We used the linear activation function (LaF), where the parameters $\theta_0^{(e)} = \{W_0^{(e)}, b_0^{(e)}\}$ are a weight coefficient matrix and a bias vector, respectively. This hidden representation is then mapped back to the
input, producing the decoded features:

$$z_0 = f_{\theta_0^{(d)}}(h_0) = W_0^{(d)} h_0 + b_0^{(d)}, \qquad \theta_0^{(d)} = \{W_0^{(d)}, b_0^{(d)}\}, \qquad (2)$$

where $d$ designates the parameters of the decoder, and $W_0^{(d)} = W_0^{(e)\,T}$ are the tied weights used for the inverse mapping of the encoded features (decoder). In this way, the input data were transformed to a lower-dimensional and less-noisy representation ('encoding'). Since the input data are multi-modal, the encoded subspace also integrates the correlations among the modalities, rendering more robust features for learning of the subsequent layers in the network.
We augmented the encoding process by introducing a companion objective function (CoF)
for each hidden layer (69). The CoF acts as a regularizer on the network weights, enabling the
outputs of each layer to pass the most discriminative features to the next layer. Using the CoF,
the AE also reconstructs the target outputs $y_0$ (in addition to $z_0$) as:

$$y_0 = f_{\theta_0^{(c)}}(h_0) = W_0^{(c)} h_0 + b_0^{(c)}, \qquad \theta_0^{(c)} = \{W_0^{(c)}, b_0^{(c)}\}. \qquad (3)$$

The AE parameters $\omega_0 = \{\theta_0^{(e)}, \theta_0^{(d)}, \theta_0^{(c)}\}$ were then optimized over the training dataset to minimize the mean-squared-error (MSE) loss (defined as $\alpha(a, b) = \sum_{i=1}^{d} \|a_i - b_i\|^2$) for both the decoding ($\alpha_d$) and output ($\alpha_c$) estimates:

$$\omega_0^{*} = \arg\min_{\omega_0} \alpha(x, y) = \arg\min_{\omega_0} \frac{1}{N} \sum_{i=1}^{N} \Big[ (1 - \lambda)\,\alpha_d(x^{(i)}, z_0^{(i)}) + \lambda\,\alpha_c(y_0^{(i)}, y^{(i)}) \Big], \qquad (4)$$
where N is the number of training datapoints from all the children. The parameter λ ∈ (0, 1)
was chosen to balance the network’s generative power (the feature decoding) and discriminative
power (the output estimation), and was optimized using validation data (in our case, the optimal
value was $\lambda = 0.8$). The learned $f_{\theta_0^{(e)}}(\cdot)$ was applied to the input features $x$, and the resulting code $h_0$ was then combined with the CARS features ($x_{cars} \in \mathbb{R}^{15 \times 1}$) for each child: $h_1 = [h_0; x_{cars}]$.
This new data representation was used as input to the subsequent layers of the network.
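To make the autoencoding step concrete, the sketch below shows one way the supervised AE with a companion objective (Eqs. 1-4) could be set up in Keras. It is a minimal illustration under stated assumptions, not the authors' released code: the weight tying of Eq. (2) is omitted for brevity, and variable names such as x_train and y_train are placeholders.

```python
# Minimal sketch of the supervised autoencoder (AE) with a companion objective (CoF):
# linear encoder/decoder plus a linear output head, with the reconstruction and output
# losses weighted by (1 - lambda) and lambda, as in Eq. (4).
from tensorflow import keras
from tensorflow.keras import layers

D_x, D_y, D_h = 396, 3, 250   # input, output, and encoding sizes stated in the text
lam = 0.8                     # loss trade-off reported as optimal in the text

inp = keras.Input(shape=(D_x,), name="x")
h0 = layers.Dense(D_h, activation="linear", name="encoder")(inp)   # Eq. (1)
z0 = layers.Dense(D_x, activation="linear", name="decoder")(h0)    # Eq. (2), weight tying omitted
y0 = layers.Dense(D_y, activation="linear", name="cof")(h0)        # Eq. (3)

ae = keras.Model(inp, [z0, y0])
ae.compile(optimizer=keras.optimizers.Adadelta(learning_rate=1.0),
           loss={"decoder": "mse", "cof": "mse"},
           loss_weights={"decoder": 1.0 - lam, "cof": lam})         # Eq. (4)

# x_train: (N, 396) fused features; y_train: (N, 3) valence/arousal/engagement labels
# ae.fit(x_train, {"decoder": x_train, "cof": y_train}, epochs=200, batch_size=100)
# The encoder output h0 is then concatenated with the 15-D CARS vector to form h1.
```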
Figure 5: The learning of the PPA-net. (A) The supervised-AE performs the feature smoothing by
dealing with missing values and noise in the input, while preserving the discriminative information in
the subspace h0 - constrained by the CoF0 . The learning operators in the PPA-net: (B) learn, (C) nest
and (D) clone, are used for the layer-wise supervised learning, learning of the subsequent vertical layers,
and horizontal expansion of the network, respectively. (E) The group level GPA-net is first learned by
sequentially increasing the network depth using learn & nest. The GPA-net is then used to initialize the
personalized PPA-net weights at the culture, gender, and individual level (using clone). (F) The network
personalization is then accomplished via the fine tuning steps I and II (Sec. 4).
4.3 Group-level Network
We first trained the GPA-net, where all network layers are shared among the children (Fig. 5
(E)). The weights of the GPA-net were also used to initialize the PPA-net, followed by the proposed fine-tuning strategies to personalize the network (Sec. 4.4). The former step is important
because each layer below the culture level in the PPA-net uses only a relevant subset of the
data (e.g., in C1, data of two females are present below the gender layer), resulting in less data
to train these layers. This, in turn, could easily lead to overfitting of the PPA-net, especially
of its child-specific layers, if only the data of a single child were used to learn their weights.
To this end, we employed a supervised layer-wise learning strategy, similar to that proposed in
recent deep learning works (68, 69). The central idea is to train the layers sequentially and in a
supervised fashion by optimizing two layers at a time: the target hidden layer and its CoF.
We defined two operators in our learning strategy - learn and nest (Fig. 5). The learn
operator is called when simultaneously learning the hidden and CoF layers. For the hidden
layers, we used the rectified linear unit (ReLU) (7), defined as $h_l = \max(0, W_l h_{l-1} + b_l)$, where $l = 1, \ldots, 4$ and $\theta_l = \{W_l, b_l\}$. ReLU is the most popular activation function that provides a constant derivative, resulting in fast learning and preventing vanishing gradients in deep neural networks (7). The AE output and CARS ($h_1$) were fed into the fusion ($l = 1$) layer, followed by the culture ($l = 2$), gender ($l = 3$), and individual ($l = 4, 5$) layers, as depicted in Fig. 5, where each CoF is a fully connected LaF with parameters $\theta_l^{(c)} = \{W_l^{(c)}, b_l^{(c)}\}$. The optimal parameters of the $l$-th layer, $\omega_l = \{\theta_l, \theta_l^{(c)}\}$, were found by minimizing the loss:

$$\omega_l^{*} = \arg\min_{\omega_l} \alpha_c(h_l, y) = \arg\min_{\omega_l} \frac{1}{N} \sum_{i=1}^{N} \alpha_c(y_{l+1}^{(i)}, y^{(i)}), \qquad (5)$$

where $\alpha_c$ is the MSE loss (Sec. 4.2), computed between the output of the ReLU layer ($h_{l+1}$) passed through the LaF layer of the CoF ($y_{l+1}$), and the true outputs ($y$).
When training the next layer in the network, we used the nest operator (Fig. 5) in a similar fashion as in (68), to initialize the parameters as:

$$\theta_{l+1} = \{W_{l+1} \leftarrow I + \epsilon,\; b_{l+1} \leftarrow 0\}, \qquad \theta_{l+1}^{(c)} = \{W_{l+1}^{(c)} \leftarrow W_l^{(c)},\; b_{l+1}^{(c)} \leftarrow b_l^{(c)}\}, \qquad (6)$$

where the weight matrix $W_{l+1}$ of the ReLU was set to an identity matrix ($I$). To avoid the network being trapped in a local minimum of the previous layer, we added low Gaussian noise ($\epsilon_{i,j} = \mathcal{N}(0, \sigma^2)$, $\sigma = 0.01$) to the elements of $I$. We set the parameters of the supervised
linear layer using the weights of the CoF above, which assures that the network achieves similar
performance after nesting of the new ReLU layer. Before we started training the nested layer,
we ‘froze’ all the layers above by setting the gradients of their weights to zero – a common
approach in a layer-wise training of deep models (84). This allows the network to learn the best
weights for the target layer (at this stage). The steps learn & nest were applied sequentially to
all subsequent layers in the network. Then, the fine-tuning of the network hidden layers and the
last CoF was done jointly. We initially set the number of epochs to 500, with early stopping, i.e., training until the error on the validation set reaches a clear minimum (82) (∼100 epochs). (After this, we also tried fine-tuning the last layer only; however, this did not affect the network performance.)
The network parameters were learned using the standard backpropagation algorithm (7).
Briefly, this algorithm indicates how a model should change its parameters that are used to
compute the representation in each layer from the representation in the previous layer. The loss
of the AE layer and each pair of the ReLU/LaF(CoF) layers was minimized using the Adadelta
gradient descent algorithm with a learning rate of lr = 1, for 200 epochs, and with a batch size of 100. The
optimal network configuration had 396 × 250, and 250 × 3 hidden neurons (h) in the AE and
its CoF layers, respectively. Likewise, the size of the fusion ReLU was 265 (250 + 15) × 200,
and 200 × 200 for all subsequent ReLU layers. The size of their CoF layers was 200 × 3. We
implemented the PPA-net using the Keras API (85) with a Tensorflow backend (86), on a Dell
Precision workstation (T7910), with the support of two GPUs (NVIDIA GF GTX 1080 Ti).
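The sketch below illustrates how the nest operator of Eq. (6) could be realized in Keras: a new ReLU layer initialized as a noisy identity map, with its CoF copied from the CoF above, while previously trained layers are frozen. This is a rough sketch, not the authors' implementation; it assumes square (width x width) hidden layers and that prev_cof is the already-built CoF Dense layer of the layer above.

```python
# Sketch of the nest operator (Eq. 6): the new ReLU layer starts as an identity map
# plus small Gaussian noise, its CoF copies the weights of the CoF above, and earlier
# layers are frozen before training the new (hidden layer, CoF) pair on the loss of Eq. (5).
import numpy as np
from tensorflow.keras import layers

def nest_layer(prev_cof, width=200, sigma=0.01):
    """Create a nested ReLU layer and its CoF, initialized as in Eq. (6)."""
    relu = layers.Dense(width, activation="relu")
    relu.build((None, width))
    W = np.eye(width) + np.random.normal(0.0, sigma, size=(width, width))  # I + eps
    relu.set_weights([W, np.zeros(width)])

    cof = layers.Dense(3, activation="linear")       # LaF companion objective (3 outputs)
    cof.build((None, width))
    cof.set_weights(prev_cof.get_weights())          # copy the CoF weights from above
    return relu, cof

# During layer-wise training, the layers above the nested one are frozen, e.g.:
# for layer in trained_layers:
#     layer.trainable = False
# and the new pair is then optimized with Adadelta on the MSE loss of Eq. (5).
```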
4.4 Network Personalization
To personalize the GPA-net, we devised a learning strategy that consists of three steps: the
network initialization followed by two fine-tuning steps. For the former, we introduced a new
operator, named clone, which widens the network to produce the architecture depicted in Fig. 2.
Specifically, the AE (l = 0) and fusion (l = 1) layers were configured as in the GPA-net (using
the same parameters). The clone operator was then applied to generate the culture, gender, and
individual layers, the parameters of which were initialized as follows:
$$\begin{aligned}
l = 2:&\quad \theta_l^{(q)} \leftarrow \theta_l^{0}, & q &= \{C_1, C_2\}\\
l = 3:&\quad \theta_l^{(g)} \leftarrow \theta_l^{0}, & g &= \{male, female\}\\
l = 4:&\quad \theta_l^{(k)} \leftarrow \theta_l^{0}, & k &= \{1, \ldots, K\}\\
l = 5:&\quad \theta_{l-1}^{(c,k)} \leftarrow \theta_{l-1}^{(c,0)}, & k &= \{1, \ldots, K\}.
\end{aligned} \qquad (7)$$
As part of the clone procedure, the culture and gender layers were shared among the children,
while the individual layers were child-specific.
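As an illustration of the clone step in Eq. (7), the sketch below copies the GPA-net layer weights into culture-, gender-, and child-specific branches. The dictionary-based bookkeeping and the key names are assumptions made for clarity; the actual implementation may organize the branches differently.

```python
# Sketch of the clone operator (Eq. 7): the GPA-net layer weights (theta_l^0) are copied
# into per-culture (l=2), per-gender (l=3), and per-child (l=4,5) branches, each of which
# can later be fine-tuned along its own child path.
import copy

def clone_network(gpa_weights, cultures=("C1", "C2"),
                  genders=("male", "female"), children=range(35)):
    branches = {
        "culture":   {q: copy.deepcopy(gpa_weights[2]) for q in cultures},        # l = 2
        "gender":    {g: copy.deepcopy(gpa_weights[3]) for g in genders},         # l = 3
        "child":     {k: copy.deepcopy(gpa_weights[4]) for k in children},        # l = 4
        "child_cof": {k: copy.deepcopy(gpa_weights["cof_4"]) for k in children},  # l = 5
    }
    return branches

# gpa_weights is assumed to map layer indices (and "cof_4") to their GPA-net weights;
# the AE (l = 0) and fusion (l = 1) layers are reused as-is from the GPA-net.
```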
To adapt the network parameters to each child, we experimented with different fine-tuning
strategies. We report here a two-step fine-tuning strategy that performed the best. First, we
updated the network parameters along the path to a target child, while freezing the layers not
intersecting with that particular path. For instance, for child $k$ and demographics $\{C_1, female\}$, the following updates were made: $\omega_{l=1:5,\,k}^{*} = \{\theta_0, \theta_1, \theta_2^{(C_1)}, \theta_3^{(female)}, \theta_4^{(k)}, \theta_5^{(c,k)}\}$. Practically,
this was achieved by using a batch of 100 random samples of the target child to compute the
network gradients along that child-path. In this way, the network gradients were accumulated
across all the children, and then back-propagated (1 epoch). This was repeated for 50 epochs,
and the Stochastic Gradient Descent (SGD) algorithm (lr = 0.03) was used to update the
network parameters. At this step, SGD produced better parameters than Adadelta. Namely,
due to its adaptive lr, Adadelta quickly altered the initial network parameters, overfitting the
parameters of deeper layers, for the reasons mentioned above. This, in turn, diminished the
shared knowledge provided by the GPA-net. On the other hand, the SGD with the low and fixed
lr made small updates to the network parameters at each epoch, allowing the network to better
fit each child while preserving the shared knowledge. This was followed by the second fine-tuning step, where the child-specific layers (ReLU/LaF(CoF)) were further optimized. For this, we used Adadelta ($lr = 1$) to tune the child-specific layers, $\omega_k^{*} = \{\theta_3^{(k)}, \theta_3^{(c,k)}\}$, one child at a time (200 epochs). Further details of these learning strategies are provided in Appendix/A.
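A Keras-flavored sketch of the path-wise fine-tuning is given below. It only shows how the layers on a single child's path could be marked trainable before an SGD update; the full step I described above additionally accumulates the gradients across all children within each epoch, which is omitted here. Layer names in the example are illustrative, not the authors' naming.

```python
# Sketch of fine-tuning step I for one child path: freeze all branch layers that are not
# on the child's (culture, gender, individual) path, then train the remaining layers with
# plain SGD at a low, fixed learning rate. Step II would re-compile with only the
# child-specific layers trainable and use Adadelta (lr = 1).
from tensorflow import keras

def set_child_path_trainable(model, child_path_layer_names):
    for layer in model.layers:
        layer.trainable = layer.name in child_path_layer_names

# Example (illustrative names): AE, fusion, culture "C1", gender "female", child 12
# set_child_path_trainable(ppa_net, {"encoder", "fusion", "culture_C1",
#                                    "gender_female", "child_12", "cof_child_12"})
# ppa_net.compile(optimizer=keras.optimizers.SGD(learning_rate=0.03), loss="mse")
# ppa_net.fit(x_child, y_child, batch_size=100, epochs=50)
```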
4.5 Evaluation Procedure
We performed a random split of the data of each child into three disjoint sets: we used 40% of a
child’s data as a training set, and 20% as the validation data to select the best model configuration. The remaining 40% were used as the test set to evaluate the model’s generalization to
previously unseen data. This protocol imitates a realistic scenario where a portion of a child’s
data (e.g., annotated by child therapists) is used to train and personalize the model to the child,
and the rest is used to estimate affective states and engagement from new data of that child. To
avoid any bias in the data selection, this process was repeated ten times. The input features were
z-normalized (zero mean, unit variance), and the model’s performance is reported in terms of
ICC (and MSE) computed from the model estimates and ground-truth labels (see Appendix).
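A minimal sketch of this per-child protocol is shown below. Fitting the z-normalization on the training portion of each split is an assumption; the text states only that the input features were z-normalized. The procedure is repeated ten times with different random seeds in the paper.

```python
# Sketch of the per-child evaluation split: a random 40/20/40 train/validation/test split
# of each child's frames, with z-normalization parameters estimated on the training portion.
import numpy as np

def split_child(x, y, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_tr, n_va = int(0.4 * len(x)), int(0.2 * len(x))
    tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
    mu, sd = x[tr].mean(axis=0), x[tr].std(axis=0) + 1e-8
    z = lambda a: (a - mu) / sd
    return (z(x[tr]), y[tr]), (z(x[va]), y[va]), (z(x[te]), y[te])
```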
Acknowledgments
We thank the Serbian Autism Society, and the educational therapist Ms S. Babovic for her invaluable feedback during this study. We would also like to thank the Ethics Committee from
Japan (Chubu IRB), Serbia - MHI, and USA (MIT IRB), for allowing this research to be conducted. We also thank Ms Havannah Tran and Ms Jiayu Zhou for helping us to prepare the
figures in the paper, Dr Javier Hernandez, for his insights into the processing and analysis of
physiological data, Dr Manuel Cebrian for his advice on formatting the paper, and MIT undergraduate students: Mr John Busche, for his support in experiments for alternative approaches,
and Ms Sadhika Malladi, for her help with running the DeepLift code. Our special thanks go to
all the children and their parents who participated in the data collection - without them, this research would not be possible. Funding: This work has been supported by MEXT Grant-in-Aid
for Young Scientists B Grant No. 16763279, and Chubu University Grant I Grant No. 27IS04I
(Japan). The work of O. R. has been funded by EU HORIZON 2020 under grant agreement
no. 701236 (ENGAGEME) - Marie Skłodowska-Curie Individual Fellowship, and the work
of B.S. under grant agreement no. 688835 (DE-ENIGMA). Author contributions: O.R. and
R.P. conceived the personalized machine learning framework. O.R. derived the proposed deep
learning method. M.D. and O.R. implemented the method and conducted the experiments. J.L.
supported the data collection, data processing and analysis of the results. B.S. provided insights
into the method and audio-data processing. All authors contributed to the writing of the paper.
Competing interests: The authors declare that they have no competing interests.
Appendix

A Details on Model Training and Alternative Approaches
Table 1: The mean (±SD) of the ICC scores (in %) for estimation of the children’s valence, arousal,
and engagement. TaskRank quantifies the portion of tasks (35 children × 3 outputs = 105 in total) on
which the target model outperformed the compared models, including standard deep multi-layer perceptron network with last layers adapted to each child (MLP), joint MLP (MLP-0), linear regression (LR),
support-vector regression (SVR) with a Gaussian kernel, and gradient-boosted regression trees (GBRT).
Models    Valence   Arousal   Engagement   Average   TaskRank (in %)
PPA-net   52±21     60±16     65±24        59±21     48.0 (1)
GPA-net   47±28     57±15     60±25        54±24     10.5 (4)
MLP       47±18     54±15     59±22        53±20     3.40 (5)
MLP-0     43±22     52±15     57±23        51±20     2.90 (6)
LR        12±18     21±16     24±21        18±19     1.00 (7)
SVR       45±26     56±14     49±22        50±21     21.9 (2)
GBRT      47±21     51±15     49±22        49±20     12.4 (3)
Figure 6: Empirical Cumulative Distribution Function (CDF) computed from average estimation errors
for valence, arousal, and engagement levels, and in terms of (A) ICC and (B) MSE. We show the performance by three top ranked models (based on TaskRank in Table 1). The individual performance
scores for 35 children are used to compute the CDFs in the plots. From the plots, we see that the improvements due to the network personalization are most pronounced for 40% < F (X) < 75% of the
children. On the other hand, the model personalization exhibits similar performance on the children
for whom the group-level models perform very well (0% < F (X) < 40%), or largely underperform
(75% < F(X) < 100%). This indicates that for the underperforming children, the individual expressions of affect and engagement vary largely across the children; thus, more data from those children are needed to achieve a more effective model personalization.
Figure 7: The networks’ learning: Mean Squared Errors (MSE) during each epoch in the network
optimization are shown for the personalized (PPA-net and MLP) and group-level (GPA-net) models, and
for training (tr) and validation (va) data. Note that the GPA-net learns faster and with a better local
minimum compared to the standard MLP. This is due to the former using the layer-wise supervised learning strategy. This is further enhanced by the fine-tuning steps in the PPA-net, which achieves the lowest MSE during model learning due to its ability to adapt its parameters to each culture, gender and individual.
Figure 8: The contribution of visual (face and body), audio and physiology modalities in the estimation
of the valence, arousal, and engagement levels of the children using PPA-net. The fusion approach
(’ALL’) outperforms the individual modalities, evidencing the additive contribution of each modality to
predicting the target outputs. The large error bars reflect the high level of heterogeneity in the individual
performance of the network on each child, as expected for many children with ASC.
In Table 1, we compare the different methods described in Sec. 2.4 of the paper, and detailed
below. In Fig. 6, we depict the error distributions of the top performing methods, highlighting
the regions in the error space where the proposed PPA-net is most effective (and otherwise).
Fig. 7 shows the convergence rates of the deep models evaluated in the paper, and in terms of
the learning steps and the loss minimization (MSE). Note that the proposed PPA-net is able to
fit the target children significantly better, while still outperforming the compared methods on
the previously unseen data of those children (Table 1). In traditional ML, where the goal is
to be able to generalize to previously unseen subjects, this could be considered as algorithmic
bias (the model overfitting). By contrast, in personalized ML, as proposed here, it is beneficial
as it allows the model to perform best on unseen data of the target subject to whom we aim
to personalize the model. Fig. 8 depicts the contribution of each modality to the estimation
performance (Sec. 2.5). The bars in the graph show the mean (±SD) ICC performance for
each modality obtained by averaging it across the children. The PPA-net configuration used to
make predictions from each modality was the same as in the multi-modal scenario. However,
the size of the auto-encoded space varied in order to accommodate the size of the input features. Specifically, the optimal size of the encoded features was 150 for the face and 50 for the body modality, while the original feature sizes (24 for the audio and 30 for the physiology modality) were used.
In what follows, we provide additional details on the training procedures for the alternative
methods used in our experiments.
• MLP-0/MLP: For the standard multi-layer perceptron (MLP-0) deep network, we used
the same architecture/layer types as in our GPA-net; however, the training of its layers was done in the traditional manner ('at once') using the Adadelta algorithm. The number of epochs was set to 200, with early stopping on the validation set. The personalization of
this network (MLP) was accomplished by cloning the last layer in the network (decision
layer) and fine-tuning it to each child using SGD (lr = 0.03).
• LR: The standard Linear Regression (LR) (6) with L-2 regularization on the design matrix. The regularization parameters were set on the validation data used in our experiments
to obtain the best performance.
• SVR: Support Vector Regression (SVR) (6) is the standard kernel-based method used
for non-linear regression. It defines a kernel matrix computed by applying a pre-defined
kernel function to data pairs. We used the standard isotropic Radial Basis Function (RBF).
Due to the non-parametric nature of this method, it was computationally infeasible to use
all training data to form the kernel matrix. To circumvent this, we trained one SVR
per child (using all training data). Then, we selected the support vectors (SVs), the most discriminative examples, from each child (SVs = 1000), and re-trained the model using these SVs (35k data points in total). To avoid overfitting, the penalty parameter C was selected on the validation data.
• GBRT: Gradient Boosted Regression Trees (GBRT) (57) is a generalization of boosting
to arbitrary differentiable loss functions. We set the number of basis functions (weak
learners) to 35, corresponding to the number of the children. The tree depth was varied from d = 3-10, and we found that d = 5 performed best on the validation set. This configuration was used to produce the reported results. The main advantage of GBRTs is that they naturally handle heterogeneous input data, and have the ability to select the most discriminative features for the target task, also allowing for an easy interpretation. However,
like LR and SVR, their traditional version does not allow for an easy personalization to
each child, in contrast to the proposed PPA-net.
For the methods above, we used existing publicly available implementations. Specifically, for MLP-0/MLP we used the Keras API (85), and for the rest we used sklearn (87), a Python toolbox for ML.
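A minimal scikit-learn sketch of the non-deep baselines is given below, roughly matching the settings stated above (L2-regularized linear regression, RBF-kernel SVR, and GBRT with 35 weak learners of depth 5). Hyperparameters other than those stated in the text are scikit-learn defaults, not necessarily the authors' exact choices.

```python
# Sketch of the LR, SVR, and GBRT baselines; SVR and GBRT are single-output models,
# so they are wrapped to predict valence, arousal, and engagement jointly.
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

lr = Ridge(alpha=1.0)                                   # "LR" with L2 regularization
svr = MultiOutputRegressor(SVR(kernel="rbf", C=1.0))    # one RBF-SVR per output
gbrt = MultiOutputRegressor(
    GradientBoostingRegressor(n_estimators=35, max_depth=5))

# for model in (lr, svr, gbrt):
#     model.fit(x_train, y_train)      # y_train: valence, arousal, engagement
#     y_pred = model.predict(x_test)
```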
B Data
We reported the results obtained on the dataset of children undergoing occupational therapy for
autism (43). The therapy was led by experienced child therapists and assisted by the humanoid robot NAO. The goal of the therapy was to teach the children to recognize and imitate emotive behaviors (using the Theory of Mind concept (88)) as expressed by the NAO robot. During the
Table 2: The summary of participants [taken from (43)]. The average CARS scores of the two groups
are statistically different (t(34) = −3.35, p = 0.002).
                        C1 (Japan)    C2 (Serbia)
Age                     7.59±2.43     9.41±2.46
Age range               3–12          5–13
Gender (male:female)    15:2          15:4
CARS                    31.8±7.1      40.3±8.2
therapy, the robot was driven by a "person behind the curtain" (i.e., the therapist) but the data
were collected for enabling the robot to have a future autonomous perception of the affective
states of a child learner. The data include: (i) video recordings of facial expressions, head and
body movements, pose and gestures, (ii) autonomic physiology (heart rate (HR), electrodermal
activity (EDA) and body temperature (T)) from the children, as measured on their non-dominant
wrist, and (iii) audio-recordings (Fig. 1). The data come from 35 children, with different cultural
backgrounds. Namely, 17 children (16 males / 1 female) are from Japan (C1), and 19 children
(15 males / 4 females) are from Serbia (C2) (43). Note that in this paper we excluded the
data of one male child from C2 due to a low-quality recording. Each child participated in a 25-minute-long child-robot interaction. The children's ages varied from 3 to 13, and all the children had
a prior diagnosis of autism (see Table 2). The protocol for the data acquisition was reviewed and
approved by relevant Institutional Review Boards (IRBs), and informed consent was obtained
in writing from the parents of the children. More details about the data, recording setting and
therapy stages (pairing, recognition, imitation and story-telling) can be found in (43).
C Feature Processing
The raw data of synchronized video, audio and autonomic physiology recordings were processed using state-of-the-art open-source tools. For the analysis of facial behavior, we used the
OpenFace toolkit (50). This toolkit is based on Conditional Local Neural Fields (CLNF) (89),
a ML model for detection and tracking of 68 fiducial facial points, described as 2 dimensional
(2D) coordinates (x, y) in face images (Fig. 1). It also provides 3D estimates of head-pose
and eye-gaze direction (one for each eye), as well as the presence and intensity (on a 6 level
Likert scale) of 18 facial action units (AUs) (90). The latter are usually referred to as the judgment level descriptors of facial activity, in terms of activations of facial muscles. Most human
facial expressions can be described as a combination of these AUs and their intensities, and
they have been the focus of research on automated analysis of facial expressions (91). For
capturing the body movements, we used the OpenPose toolkit (51) for automated detection of
18-keypoint body pose locations, 21-keypoint hand estimation, and 70 fiducial facial landmarks
(all in 2D), along with their detection confidence (0-1). From this set, we used the body pose
and facial landmarks, and disregarded the hand tracking (due to frequent occlusions of the children’s hands). OpenPose is built upon recent advances in convolutional neural networks - CNNs
(specifically, the VGG-19 net (92)), and the part affinity fields for part association (51).
For audio signal processing, we used the openSmile toolkit (52) to automatically extract
acoustic low-level descriptors (LLDs) from the speech waveform on frame level. Specifically,
we used 24 LLDs (pitch, MFCC, LSP, etc.) provided by openSmile, which have already
been used effectively for cross-lingual automatic diagnosis of ASC from children’s voices (73).
These features were computed over sliding windows of length 100 ms with 10 ms shift, and
then aligned with the visual features using time-stamps stored during the data recording.
To measure the biosignals based on autonomic physiology (HR, EDA and T), we used the
commercially available E4 wrist-worn sensor (93, 94). This wristband provides real-time readings of blood volume pulse (BVP) and HR (64Hz), EDA via the measurement of skin conductance (4Hz), skin T (4Hz), and 3-axis accelerometer (ACC) data (32Hz). From these signals, we
also extracted additional commonly used hand-crafted features (34), as listed in Table 3. Note
that since HR is obtained from BVP, we used only the raw BVP. Again, these were temporally
aligned with the visual features using time-stamps stored during the data recording.
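For illustration, the sketch below computes a subset of the hand-crafted statistics listed in Table 3 (first/second differences and their absolute and normalized versions) from a raw physiological signal. The exact definitions follow (34) and may differ slightly from this simplified version; window-based z-normalization and modality-specific extras (peaks, slope, integration) are omitted.

```python
# Simplified hand-crafted statistics from a raw 1-D physiological signal (cf. Table 3).
import numpy as np

def basic_signal_features(s):
    s = np.asarray(s, dtype=float)
    d1, d2 = np.diff(s, n=1), np.diff(s, n=2)
    scale = s.std() + 1e-8          # used here to "normalize" the difference features
    return {
        "mean": s.mean(),
        "first_diff": d1.mean(),
        "abs_first_diff": np.abs(d1).mean(),
        "abs_norm_first_diff": np.abs(d1).mean() / scale,
        "second_diff": d2.mean(),
        "abs_second_diff": np.abs(d2).mean(),
        "abs_norm_second_diff": np.abs(d2).mean() / scale,
    }
```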
Table 3: The summary of the features used from different data modalities.
Feature ID   Modality             Description
1-209        FACE (OpenPose)      Facial landmarks: 70x2 (x, y) and their confidence level (c)
210-215      FACE (OpenFace)      Head pose: 3D location and 3D rotation
216-223      FACE (OpenFace)      Eye gaze: 2x3 - 3D eye-gaze direction vectors in world coordinates for the left and right eye + 2D eye-gaze direction in radians in world coordinates for both eyes
224-240      FACE (OpenFace)      Binary detection of 18 AUs: AU01, AU02, AU04, AU05, AU06, AU07, AU09, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26, AU28, AU45
241-258      FACE (OpenFace)      Intensity estimation (0-5) of 17 AUs: AU01, AU02, AU04, AU05, AU06, AU07, AU09, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26, AU45
259-312      BODY (OpenPose)      Body pose: 18x3 - the pose keypoints containing the body part locations (x, y) and detection confidence (c)
313-327      BODY (E4 ACC)        Accelerometer data: 3D raw signal (x, y, z), z-normalized vector sqrt(x^2 + y^2 + z^2) (mean and SD within a 5 sec window), mean, SD, max, min, 1st diff, abs value of 1st diff, abs value of normalized 1st diff, 2nd diff, abs value of 2nd diff, abs value of normalized 2nd diff, mean amplitude deviation (10 sec window)
328-338      PHYSIOLOGY (EDA)     Raw EDA and its: z-normalized value (30 sec window), mean, SD, max, min, integration, slope, number of peaks, amplitude, number of zero-crossings
339-347      PHYSIOLOGY (HR)      Raw HR signal and its: z-normalized value (4 sec window), mean, 1st diff, abs value of 1st diff, abs value of the normalized 1st diff, 2nd diff, abs value of 2nd diff, abs value of normalized 2nd diff
348-357      PHYSIOLOGY (T)       Raw T signal and its: z-normalized value (4 sec window), mean, 1st diff, abs value of 1st diff, abs value of the normalized 1st diff, 2nd diff, abs value of 2nd diff, abs value of normalized 2nd diff
358-381      AUDIO (openSmile)    LLDs: RMS energy, Spectral flux, Spectral entropy, Spectral variance, Spectral skewness, Spectral kurtosis, Spectral slope, Harmonicity, MFCC 0, MFCC 1-10, MFCC 11-12, MFCC 13-14, Log Mel frequency band 0-7, LSP frequency 0-7, F0 (ACF based), F0 (SHS based), F0 envelope, Probability of voicing, Jitter, Shimmer, Logarithmic HNR, Sum of auditory spectrum (loudness), ZCR
382-396      CARS                 15 ratings (0-4): relating to people, emotional response, imitation, body use, object use, adaptation to change, listening response, taste-smell-touch, visual response, fear or nervous, verbal communication, activity level, nonverbal communication, level and consistency of intellectual response, general impression
The multi-modal learning has been achieved by consolidating these features to act as predictors of target affective states and engagement in our personalized affect perception deep
networks. From the OpenPose output, we used the face and body features with the detection
confidence over each feature set (face&body) above 30%, which we found to be a good threshold by visually inspecting the detection results. The final feature set was formed as follows: (i) Face: we used the facial landmarks from OpenPose, enhanced with the head-pose, eye-gaze and AUs, as provided by OpenFace. (ii) Body: we merged the OpenPose body-pose features and the E4 ACC features encoding the hand movements. (iii) Audio: the original feature set is kept. (iv) Physiology: contains the features derived from the E4 sensor, without the ACC features. Table 3 summarizes these features.
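The sketch below illustrates how a per-frame feature vector could be assembled under this scheme. The array layouts (flattened (x, y, confidence) triplets) and the handling of low-confidence frames (zeroed here, effectively leaving them as partially-observed input for the AE to smooth) are assumptions made for illustration, not a description of the authors' exact pipeline.

```python
# Sketch of assembling the fused per-frame feature vector: OpenPose face/body points
# are kept only when their mean detection confidence exceeds 0.3, and the modality
# blocks are then concatenated (CARS is appended later, after the AE encoding).
import numpy as np

CONF_THRESHOLD = 0.3

def frame_features(face_op, body_op, face_of, audio, physio):
    # face_op/body_op: flattened OpenPose (x, y, confidence) triplets for one frame
    ok_face = face_op[2::3].mean() > CONF_THRESHOLD
    ok_body = body_op[2::3].mean() > CONF_THRESHOLD
    face = np.concatenate([face_op if ok_face else np.zeros_like(face_op), face_of])
    body = body_op if ok_body else np.zeros_like(body_op)
    return np.concatenate([face, body, audio, physio])
```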
We also included an auxiliary feature set provided by the expert knowledge. Namely, the
children’s behavioral severity at the time of the interaction (after the recordings) was scored
on the CARS (47) by the therapists (Table 2). The CARS form is typically completed in less
than 30 minutes, and it asks about 15 areas of behavior defined by a unique rating system (0-4)
developed to assist in identifying individuals with ASC. The rating values given for the 15 areas
are summed to produce a total score for each child. CARS covers the three key behavioral
dimensions pertinent to autism: social-emotional, cognitive, and sensory, and based on the total
scores, the children fall into one of the following categories: (i) no autism (score below 30),
(ii) mild-to-moderate autism (score: 30–36.5), and (iii) moderate-to-severe autism (37–60). We
used this 15-D feature set (the CARS scores for each of the 15 areas) as a unique descriptor for
each child - encoding the expert knowledge about the children’s behavioral traits.
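For reference, the small sketch below derives the CARS total score and severity category from the 15 ratings, following the thresholds quoted above; it is a straightforward restatement of the scoring rule, not code from the paper.

```python
# CARS summary: the 15 area ratings (0-4 each) are summed, and the total determines
# the severity category (no autism < 30; mild-to-moderate 30-36.5; moderate-to-severe 37-60).
def cars_summary(ratings):            # ratings: list of 15 values in [0, 4]
    total = sum(ratings)
    if total < 30:
        category = "no autism"
    elif total <= 36.5:
        category = "mild-to-moderate autism"
    else:
        category = "moderate-to-severe autism"
    return total, category
```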
D Coding
The dataset was labeled by human experts in terms of the two most commonly used affective dimensions (valence and arousal), and engagement, all rated on a continuous scale in the range from
−1 to +1. Specifically, five expert therapists (two from C1 and three from C2) coded the videos
independently while watching the audio-visual recordings of target interactions. As a measure
of the coders’ agreement, we used the intra-class correlation (ICC) score, type (3,1) (53). This
score measures the proportion of variance that is attributable to the objects of measurement relative to the overall variance of the coders. The ICC is commonly used in behavioral
sciences to assess the agreement of judges. Unlike the well-known Pearson correlation (PC),
ICC penalizes the scale differences and offset between the coders, which makes it a more robust measure of coders’ agreement. The codings were aligned using the standard alignment
techniques: we applied time-shifting of ±2 seconds to each coder, and selected the shift which
produced the highest average inter-coder agreement. The ground truth labels that we used to
evaluate the ML models were then obtained by averaging the codings of 3/5 coders, who had the
highest agreement (based on the pair-wise ICC scores). We found empirically that, in this way, outlying codings can be significantly reduced. The obtained coding ("the gold standard") was
then used as the ground truth for training ML models for estimation of valence, arousal, and engagement levels during the child-robot interactions. Finally, note that in our previous work (43),
we used discrete annotations for the three target dimensions. Since these were coded per manually selected engagement episodes, for this work we re-annotated the data to obtain more fine-grained (i.e., continuous) estimates of affect and engagement from the full dataset. The
description of the exemplary behavioral cues used during the coding process is given in Table 4.
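A sketch of the ICC(3,1) agreement measure used above is given below, following the standard two-way ANOVA formulation of Shrout and Fleiss (53); this is a generic implementation, not code taken from the paper.

```python
# ICC(3,1) from a ratings matrix of shape (n_items, n_raters):
# ICC = (BMS - EMS) / (BMS + (k - 1) * EMS), where BMS is the between-items mean square
# and EMS the residual mean square of the two-way model.
import numpy as np

def icc_3_1(ratings):
    n, k = ratings.shape
    mean_items = ratings.mean(axis=1, keepdims=True)
    mean_raters = ratings.mean(axis=0, keepdims=True)
    grand = ratings.mean()
    ss_items = k * ((mean_items - grand) ** 2).sum()
    ss_raters = n * ((mean_raters - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_items - ss_raters
    ms_items = ss_items / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_items - ms_err) / (ms_items + (k - 1) * ms_err)
```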
Table 4: The description of the behaviors and corresponding cues given to the coders as reference
points when coding target affective states and engagement levels. All three dimensions are coded on a
continuous scale based on the perceived intensity of the target dimensions.
Level           Dimension    Description
negative (-1)   Valence      The child shows clear signs of experiencing unpleasant feelings (being unhappy, angry, visibly upset, showing dissatisfaction, frightened), dissatisfaction and disappointment (e.g., when NAO showed an expression that the child did not anticipate)
neutral (0)     Valence      The child seems alert and/or attentive, with no obvious signs of any emotion, pleasure or displeasure
positive (+1)   Valence      The child shows signs of intense happiness (e.g., clapping hands), joy (in most cases followed with episodes of laughter), and delight (e.g., when NAO performed)
negative (-1)   Arousal      The child seems very bored or uninterested (e.g., looking away, not showing interest in the interaction, sleepy, passively observing)
neutral (0)     Arousal      The child shows no obvious signs of a physical activity (face, head, hand, and/or bodily movements); seems calm, thinking, airdrawn
positive (+1)   Arousal      The child performs an intense and/or sudden physical activity followed by constant (sometimes rapid) movements like hand clapping, touching face/head/knees, actively playing with the robot, wiggling in the chair (C2) or on the floor (C1), jumping, walking around the room, being highly excited, shouting
negative (-1)   Engagement   The child is not responding to the therapist and/or NAO's prompts at all and after the prompts, or walks away from NAO, looking for other objects in the room, ignoring the robot and the therapist
neutral (0)     Engagement   The child seems indifferent to the interaction, looking somewhere else, not paying full attention to the interaction; the therapist repeats the question and/or attempts the task a few times, until the child complies with the instructions
positive (+1)   Engagement   The child is paying full attention to the interaction, immediately responds to the questions of the therapist, reacting to NAO spontaneously, executing the tasks; the child seems very interested, with minimal or no incentive from the therapist to participate in the interaction
References
1. T. Kanda, H. Ishiguro, Human-robot interaction in social robotics (CRC Press, 2017).
2. T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots, Robotics
and autonomous systems 42, 143–166 (2003).
3. E. S.-W. Kim, Robots for social skills therapy in autism: Evidence and designs toward
clinical utility, Ph.D. thesis, Yale University (2013).
4. G.-Z. Yang, et al., Medical robotics—regulatory, ethical, and legal considerations for increasing levels of autonomy (2017).
5. L. D. Riek, Healthcare robotics, Communications of the ACM 60, 68–78 (November 2017).
6. C. M. Bishop, Pattern recognition and machine learning (Springer, 2006).
7. Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521, 436–444 (May 2015).
8. M. J. Matarić, Socially assistive robotics: Human augmentation versus automation, Science
Robotics 2 (March 2017).
9. A. Tapus, M. Mataric, B. Scassellati, Socially assistive robotics [Grand Challenges of
Robotics], IEEE Robotics & Automation Magazine 14, 35–42 (2007).
10. A. Peca, Robot enhanced therapy for children with autism disorders: Measuring ethical
acceptability, IEEE Technology and Society Magazine 35, 54–66 (June 2016).
11. P. G. Esteban, et al., How to build a supervised autonomous system for robot-enhanced therapy for children with autism spectrum disorder, Paladyn, Journal of Behavioral Robotics
8, 18–38 (April 2017).
12. D. Freeman, et al., Virtual reality in the assessment, understanding, and treatment of mental
health disorders, Psychol. Med. pp. 1–8 (2017).
13. A. P. Association, Diagnostic and statistical manual of mental disorders (DSM-5®)
(American Psychiatric Pub, 2013).
14. D. L. Christensen, et al., Prevalence and characteristics of autism spectrum disorder among
4-year-old children in the autism and developmental disabilities monitoring network, Journal of Developmental & Behavioral Pediatrics 37, 1–8 (January 2016).
15. D. Feil-Seifer, M. Mataric, Robot-assisted therapy for children with autism spectrum disorders, Proc. of the 7th International Conference on Interaction Design and Children (2008),
pp. 49–52.
16. W. A. Bainbridge, J. W. Hart, E. S. Kim, B. Scassellati, The benefits of interactions with
physically present robots over video-displayed agents, Int. J. Soc. Robot. 3, 41–52 (January
2011).
17. M. Helt, et al., Can children with autism recover? if so, how?, Neuropsychology review 18,
339–366 (2008).
18. C. M. Corsello, Early intervention in autism, Infants & Young Children 18, 74–85 (April
2005).
19. S. Baron-Cohen, A. M. Leslie, U. Frith, Does the autistic child have a "theory of mind"?,
Cognition 21, 37–46 (October 1985).
20. S. Harker, Applied behavior analysis (aba), Encyclopedia of Child Behavior and Development pp. 135–138 (2011).
21. R. L. Koegel, L. Kern Koegel, Pivotal Response Treatments for Autism: Communication,
Social, and Academic Development. (2006).
22. J. J. Diehl, L. M. Schmitt, M. Villano, C. R. Crowell, The clinical use of robots for individuals with Autism Spectrum Disorders: A critical review, Research in Autism Spectrum
Disorders 6, 249–262 (2012).
23. B. Scassellati, H. Admoni, M. Matarić, Robots for use in autism research, Annual review
of biomedical engineering 14, 275–294 (2012).
24. K. Dautenhahn, I. Werry, Towards interactive robots in autism therapy: Background, motivation and challenges, Pragmatics & Cognition 12, 1–35 (2004).
25. P. Liu, D. F. Glas, T. Kanda, H. Ishiguro, Data-driven hri: Learning social behaviors by
example from human–human interaction, IEEE Transactions on Robotics 32, 988–1008
(2016).
26. E. S. Kim, R. Paul, F. Shic, B. Scassellati, Bridging the research gap: Making hri useful to
individuals with autism, Journal of Human-Robot Interaction 1 (2012).
27. B. M. Scassellati, Foundations for a theory of mind for a humanoid robot, Ph.D. thesis,
Massachusetts Institute of Technology (2001).
28. K. Dautenhahn, I. Werry, Towards interactive robots in autism therapy: Background, motivation and challenges, Pragmatics & Cognition 12, 1–35 (2004).
29. C. L. Breazeal, Designing sociable robots (MIT press, 2004).
30. P. Pennisi, et al., Autism and social robotics: A systematic review, Autism Research 9,
165–183 (February 2016).
31. M. A. Goodrich, A. C. Schultz, Human-robot interaction: a survey, Foundations and trends
in human-computer interaction 1, 203–275 (February 2007).
32. S. M. Anzalone, S. Boucenna, S. Ivaldi, M. Chetouani, Evaluating the engagement with
social robots, International Journal of Social Robotics 7, 465–478 (August 2015).
33. M. B. Colton, et al., Toward therapist-in-the-loop assistive robotics for children with autism
and specific language impairment, Autism 24, 25 (2009).
34. J. Hernandez, I. Riobo, A. Rozga, G. D. Abowd, R. W. Picard, Using electrodermal activity
to recognize ease of engagement in children during social interactions, Proceedings of the
ACM International Joint Conference on Pervasive and Ubiquitous Computing (2014), pp.
307–317.
35. M. E. Hoque, Analysis of speech properties of neurotypicals and individuals diagnosed with
autism and down, Proceedings of the 10th International ACM SIGACCESS Conference on
Computers and Accessibility (2008), pp. 311–312.
36. A. Baird, et al., Automatic classification of autistic child vocalisations: A novel database
and results, Interspeech pp. 849–853 (2017).
37. T. Belpaeme, et al., Multimodal child-robot interaction: Building social bonds, Journal of
Human-Robot Interaction 1, 33–53 (December 2012).
38. Z. Zheng, et al., Robot-mediated imitation skill training for children with autism, IEEE
Transactions on Neural Systems and Rehabilitation Engineering 24, 682–691 (June 2016).
39. J. Sanghvi, et al., Automatic analysis of affective postures and body motion to detect
engagement with a game companion, The 6th ACM/IEEE International Conference on
Human-Robot Interaction (HRI) (2011), pp. 305–311.
40. J. C. Kim, P. Azzi, M. Jeon, A. M. Howard, C. H. Park, Audio-based emotion estimation
for interactive robotic therapy for children with autism spectrum disorder, The 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI) (2017), pp.
39–44.
41. S. S. Rajagopalan, O. R. Murthy, R. Goecke, A. Rozga, Play with me – measuring a child’s
engagement in a social interaction, The 11th IEEE International Conference and Workshops
on Automatic Face and Gesture Recognition (2015), vol. 1, pp. 1–8.
42. M. I. Jordan, T. M. Mitchell, Machine learning: Trends, perspectives, and prospects, Science 349, 255–260 (Jul 2015).
43. O. Rudovic, J. Lee, L. Mascarell-Maricic, B. W. Schuller, R. W. Picard, Measuring engagement in robot-assisted autism therapy: a cross-cultural study, Frontiers in Robotics and AI
4, 36 (2017).
44. J. Ngiam, et al., Multimodal deep learning, Proceedings of the 28th International Conference on Machine Learning (ICML) (2011), pp. 689–696.
45. N. Jaques, S. Taylor, A. Sano, R. Picard, Multimodal autoencoder: A deep learning approach to filling in missing sensor data and enabling better mood prediction, International
Conference on Affective Computing and Intelligent Interaction (ACII), 2017 (2015).
46. Y. Bengio, L. Yao, G. Alain, P. Vincent, Generalized denoising auto-encoders as generative
models, Advances in Neural Information Processing Systems (2013), pp. 899–907.
47. E. Schopler, M. E. Van Bourgondien, G. J. Wellman, S. R. Love, The childhood autism
rating scale, (CARS2) (WPS Los Angeles, 2010).
48. Y. Zhang, Q. Yang, An overview of multi-task learning, National Science Review (2017).
49. N. Jaques, O. Rudovic, S. Taylor, A. Sano, R. Picard, Predicting tomorrow’s mood, health,
and stress level using personalized multitask learning and domain adaptation, IJCAI 2017
Workshop on Artificial Intelligence in Affective Computing (2017), pp. 17–33.
50. T. Baltrušaitis, P. Robinson, L.-P. Morency, Openface: an open source facial behavior analysis toolkit, IEEE Winter Conference on Applications of Computer Vision (2016), pp. 1–10.
51. Z. Cao, T. Simon, S.-E. Wei, Y. Sheikh, Realtime multi-person 2d pose estimation using
part affinity fields, IEEE Conference on Computer Vision and Pattern Recognition (2017).
52. F. Eyben, F. Weninger, F. Gross, B. Schuller, Recent developments in opensmile, the munich open-source multimedia feature extractor, Proceedings of the 21st ACM International
Conference on Multimedia (2013), pp. 835–838.
53. P. E. Shrout, J. L. Fleiss, Intraclass correlations: uses in assessing rater reliability, Psychol.
Bull. 86, 420 (March 1979).
54. H. Larochelle, Y. Bengio, J. Louradour, P. Lamblin, Exploring strategies for training deep
neural networks, J. Mach. Learn. Res. 10, 1–40 (January 2009).
55. L. v. d. Maaten, G. Hinton, Visualizing data using t-sne, J. Mach. Learn. Res. 9, 2579–2605
(November 2008).
56. A. Shrikumar, P. Greenside, A. Shcherbina, A. Kundaje, Not just a black box: Learning
important features through propagating activation differences, International Conference on
Computer Vision and Pattern Recognition (2016).
57. J. H. Friedman, Greedy function approximation: a gradient boosting machine, Ann. Stat.
pp. 1189–1232 (October 2001).
58. V. Podgorelec, P. Kokol, B. Stiglic, I. Rozman, Decision trees: an overview and their use in
medicine, Journal of medical systems 26, 445–463 (2002).
59. R. Picard, M. Goodwin, Developing innovative technology for future personalized autism
research and treatment, Autism Advocate 50, 32–39 (2008).
60. B. W. Schuller, Intelligent audio analysis (Springer, 2013).
61. E. Brynjolfsson, T. Mitchell, What can machine learning do? workforce implications, Science 358, 1530–1534 (2017).
62. M. R. Herbert, Treatment-guided research, Autism Advocate 50, 8–16 (2008).
63. A. Kendall, Y. Gal, R. Cipolla, Multi-task learning using uncertainty to weigh losses for
scene geometry and semantics, International Conference on Computer Vision and Pattern
Recognition (2017).
64. R. Salakhutdinov, J. B. Tenenbaum, A. Torralba, Learning with hierarchical-deep models,
IEEE transactions on pattern analysis and machine intelligence 35, 1958–1971 (August
2013).
65. W. Wang, S. J. Pan, D. Dahlmeier, X. Xiao, Recursive neural conditional random fields for
aspect-based sentiment analysis, Computation and Language (2016).
66. A. Mollahosseini, B. Hasani, M. H. Mahoor, Affectnet: A database for facial expression,
valence, and arousal computing in the wild, IEEE Transactions on Affective Computing PP,
1-1 (2017).
67. A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional
neural networks, Advances in neural information processing systems (2012), pp. 1097–
1105.
68. T. Chen, I. Goodfellow, J. Shlens, Net2net: Accelerating learning via knowledge transfer,
International Conference on Learning Representations (ICLR) (2016).
69. C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, Z. Tu, Deeply-supervised nets, Artificial Intelligence and Statistics (2015), pp. 562–570.
70. S. Ruder, An overview of multi-task learning in deep neural networks, arXiv preprint
arXiv:1706.05098 (2017).
71. S. A. Taylor, N. Jaques, E. Nosakhare, A. Sano, R. Picard, Personalized multitask learning for predicting tomorrow’s mood, stress, and health, IEEE Transactions on Affective
Computing PP, 1-1 (2017).
72. R. El Kaliouby, R. Picard, S. Baron-Cohen, Affective computing and autism, Annals of the
New York Academy of Sciences 1093, 228–248 (December 2006).
73. M. Schmitt, E. Marchi, F. Ringeval, B. Schuller, Towards cross-lingual automatic diagnosis
of autism spectrum condition in children’s voices, Proceedings of the 12th Symposium on
Speech Communication (2016), pp. 1–5.
74. N. Shazeer, et al., Outrageously large neural networks: The sparsely-gated mixture-ofexperts layer, International Conference on Learning Representations (ICLR) (2017).
75. Y. Lu, et al., Fully-adaptive feature sharing in multi-task networks with applications in person attribute classification, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017).
76. R. J. Williams, D. Zipser, A learning algorithm for continually running fully recurrent
neural networks, Neural computation 1, 270–280 (1989).
46
77. B. Settles, Active learning, Synthesis Lectures on Artificial Intelligence and Machine
Learning 6, 1–114 (2012).
78. V. Mnih, et al., Human-level control through deep reinforcement learning, Nature 518,
529–533 (2015).
79. S. Chen, Y. Li, N. M. Kwok, Active vision in robotic systems: A survey of recent developments, The International Journal of Robotics Research 30, 1343–1377 (2011).
80. H. I. Christensen, et al., Next generation robotics, A Computing Community Consortium
(CCC) (2016).
81. Y. Bengio, Learning deep architectures for ai, Foundations and trends in Machine Learning
2, 1–127 (November 2009).
82. Y. Bengio, A. Courville, P. Vincent, Representation learning: A review and new perspectives, IEEE Trans. Pattern Anal. Mach. Intell. 35, 1798–1828 (March 2013).
83. G. E. Hinton, R. R. Salakhutdinov, Reducing the dimensionality of data with neural networks, Science 313, 504–507 (July 2006).
84. Y. Bengio, P. Lamblin, D. Popovici, H. Larochelle, Greedy layer-wise training of deep
networks, Advances in neural information processing systems (2007), pp. 153–160.
85. F. Chollet, et al., Keras (2015).
86. M. Abadi, et al., Tensorflow: A system for large-scale machine learning, Proceedings of
the 12th USENIX Conference on Operating Systems Design and Implementation (USENIX
Association, 2016), pp. 265–283.
47
87. F. Pedregosa, et al., Scikit-learn: Machine learning in Python, Journal of Machine Learning
Research 12, 2825–2830 (2011).
88. J. A. Hadwin, P. Howlin, S. Baron-Cohen, Teaching Children with Autism to Mind-Read:
Workbook (John Wiley & Sons, 2015).
89. T. Baltrušaitis, P. Robinson, L.-P. Morency, 3d constrained local model for rigid and nonrigid facial tracking, IEEE Conference on Computer Vision and Pattern Recognition (2012),
pp. 2610–2617.
90. E. Paul, Facial Expressions (John Wiley & Sons, Ltd, 2005).
91. J. F. Cohn, F. De la Torre, Automated face analysis for affective computing, The Oxford
handbook of affective computing (2015).
92. K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, International Conference on Computer Vision and Pattern Recognition (2014).
93. Empatica e4: https://www.empatica.com/en-eu/research/e4/ (2015).
94. J. Hernandez, D. J. McDuff, R. W. Picard, Bioinsights: extracting personal data from “still”
wearable motion sensors, The 12th IEEE International Conference on Wearable and Implantable Body Sensor Networks (2015), pp. 1–6.
48
| 2 |
Application of Graph Coloring to Biological Networks
Susan Khor
Abstract
We explore the application of graph coloring to biological networks, specifically protein-protein
interaction (PPI) networks. First, we find that given similar conditions (i.e. number of nodes,
number of links, degree distribution and clustering), fewer colors are needed to color
disassortative (high degree nodes tend to connect to low degree nodes and vice versa) than
assortative networks. Fewer colors create fewer independent sets which in turn imply higher
concurrency potential for a network. Since PPI networks tend to be disassortative, we suggest
that in addition to functional specificity and stability proposed previously by Maslov and
Sneppen (Science 296, 2002), the disassortative nature of PPI networks may promote the ability
of cells to perform multiple, crucial and functionally diverse tasks concurrently. Second, since
graph coloring is closely related to the presence of cliques in a graph, the significance of node
coloring information to the problem of identifying protein complexes, i.e. dense subgraphs in a
PPI network, is investigated. We find that for PPI networks where 1% to 11% of nodes
participate in at least one identified protein complex, such as H. sapiens (DIP20070219,
DIP20081014 and HPRD070609), DSATUR (a well-known complete graph coloring algorithm)
node coloring information can improve the quality (homogeneity and separation) of initial
candidate complexes. This finding may help to improve existing protein complex detection
methods, and/or suggest new methods.
Keywords: graph coloring, biological networks, degree-degree correlation, concurrency, protein
complexes
Supplementary Material
SM-1 Supplementary Material for Section 2
Network formation
Using a different random number seed each time, two networks with power-law distributed
degree distributions are produced with the preferential attachment algorithm described in [2]. For
both networks, all nodes belong to the same component, the number of nodes N = 1,000, and the
number of links M = 4,960. Let these two networks form a set called D0. The relevant
characteristics of these networks are given in Table SM-1.1 and Fig. SM-1.1.
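For readers who want to reproduce a comparable test bed, the following is a minimal preferential-attachment sketch in Python. It is written in the spirit of, but not necessarily identical to, the algorithm of [2]; the function name and parameter choices are ours, and with n = 1,000 and m = 5 it yields an edge count close to, though not exactly, M = 4,960.

```python
import random

def preferential_attachment(n=1000, m=5, seed=None):
    """Barabasi-Albert-style growth: each new node attaches to m existing nodes
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = set()
    # Seed the growth with a small clique on m + 1 nodes.
    for u in range(m + 1):
        for v in range(u + 1, m + 1):
            edges.add((u, v))
    # 'repeated' lists every node once per incident link, so uniform choices
    # from it are degree-proportional choices.
    repeated = [u for e in edges for u in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(repeated))
        for u in targets:
            edges.add((u, new))
            repeated.extend((u, new))
    return edges
```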
Table SM-1.1 Node degree summary statistics for the networks.
Min   Max   Average   Std. dev.   Mode   Median
 3    116    9.92     10.3947      5       7
 5    102    9.92      9.2080      5       7
Fig. SM-1.1 The reversed cumulative degree distributions of the test networks on a log-log scale.
In the first experiment (E1), assortative and disassortative versions of the networks in D0 are
formed by rewiring randomly chosen pairs of links either to increase or to decrease degree-degree correlation per [20]. These networks have little to no clustering. In E1, the networks in
D0 form the baseline or null model.
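The rewiring scheme of [20] is not reproduced in this supplement, so the sketch below only illustrates one common degree-preserving variant of the idea: endpoints of two randomly chosen links are swapped, and the swap is kept only when it moves degree-degree mixing in the requested direction. The function name and the acceptance heuristic are assumptions for illustration, not the exact procedure used for E1.

```python
import random

def rewire_for_mixing(edges, degree, target="assortative", n_swaps=100_000, seed=None):
    """Degree-preserving rewiring that nudges degree-degree correlation up or down.
    `edges` is an iterable of frozenset({u, v}) links; `degree` maps node -> degree."""
    rng = random.Random(seed)
    edges = set(edges)
    for _ in range(n_swaps):
        e1, e2 = rng.sample(list(edges), 2)
        a, b = tuple(e1)
        c, d = tuple(e2)
        if len({a, b, c, d}) < 4:
            continue                      # need four distinct endpoints
        new1, new2 = frozenset((a, c)), frozenset((b, d))
        if new1 in edges or new2 in edges:
            continue                      # avoid duplicate links
        # Pairing similar degrees raises assortativity; dissimilar degrees lower it.
        old = abs(degree[a] - degree[b]) + abs(degree[c] - degree[d])
        new = abs(degree[a] - degree[c]) + abs(degree[b] - degree[d])
        accept = new < old if target == "assortative" else new > old
        if accept:
            edges -= {e1, e2}
            edges |= {new1, new2}
    return edges
```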
In the second experiment (E2), the node degree lists (which is a list of node degrees in node
label order) of the networks in D0 are fed into the algorithm in [9] to produce networks with high
clustering. Two networks are produced for each node degree list with a different random number
seed each time. Let these four networks form a set called S0. In E2, the networks in S0 form the
baseline or null model. Disassortative and assortative versions of the four networks in S0 are
produced using the algorithm in Appendix A of [9] which essentially controls the links between
the top 5% of high degree nodes. For E2, the link probability between the set of top 50 (5% ×
1000) high degree nodes is set at 0.00 to create networks more disassortative than the null
networks, and 0.25 and 0.75 to create networks more assortative than the null networks. Fig. SM-1.2 compares the clustering [19] and assortativity [12] characteristics of the E1 and E2 networks.
[Plots omitted: assortativity (A), top, and clustering (C), bottom, versus network/test problem, for the E1 networks (assort, null, disassort) and the E2 networks (0.75, 0.25, null, 0.00).]
Fig. SM-1.2 Topological characteristics of the two networks for E1 (left) and the four networks for E2
(right). The degree distributions of these networks are given in Fig. SM-1.1. The E2 networks labeled 15
and 15.5 (16 and 16.5) have the same degree distribution as the E1 network labeled 15 (16).
Graph Coloring Algorithms
The DSATUR (degree saturation) algorithm [3] begins by labeling a highest degree node
with the lowest numbered color and proceeds to color one node at a time, giving preference to
nodes of high saturation or of high degree if there is more than one node with the same amount
of saturation, with the lowest numbered color that does not incur a conflict. Saturation refers to
the number of distinct colors among the already-colored neighbours of an uncolored node. In our implementation, colors
begin at 0 and increase by 1. We do not fix the number of colors c for a network beforehand, but
instead use DSATUR to find c. Thus, the c value found may or may not be the chromatic number
of a network. DSATUR is run once per network.
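For concreteness, here is a minimal Python sketch of DSATUR as described above; the adjacency-dictionary representation and the function name are ours, not part of the paper.

```python
def dsatur(adj):
    """Greedy DSATUR coloring. `adj` maps each node to the set of its neighbours.
    Returns (coloring, number of colors used); the latter is an upper bound on the
    chromatic number, as noted in the text."""
    color = {}
    uncolored = set(adj)
    while uncolored:
        def saturation(v):
            # Number of distinct colors already used by v's neighbours.
            return len({color[u] for u in adj[v] if u in color})
        # Prefer high saturation, break ties by high degree.
        v = max(uncolored, key=lambda x: (saturation(x), len(adj[x])))
        forbidden = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in forbidden:
            c += 1                     # lowest numbered color with no conflict
        color[v] = c
        uncolored.remove(v)
    return color, (max(color.values()) + 1 if color else 0)
```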
The hill climbing (HC) algorithm repeatedly chooses a random number of nodes from a
network to mutate, i.e. change to a randomly chosen color within a given palette, until either the
network is properly colored or the maximum number of tries (max_evals) is reached. In the
experiments, max_evals is set to 2 million. The number of nodes to mutate is controlled by the
mutation rate (Pm), which in the experiments is set to 0.0625, permitting HC to mutate 1 to 62
(0.0625 × N) nodes at a time. In HC the current network is reproduced with some slight random
variation via mutation and the better colored or fitter network of the parent-offspring pair is
selected for reproduction in the next iteration while the less fit network is discarded. HC graph
coloring is done by first using the number of colors required by DSATUR, and then as necessary,
incrementing the number of colors until HC achieves a high (close to 100%) success rate, i.e.
finds a proper coloring within max_evals on every run it does.
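A minimal sketch of this hill climber follows, with the parameters described above (Pm = 0.0625, max_evals = 2 million). The conflict-counting fitness function and the decision to accept offspring of equal fitness are our assumptions, since the text does not fix those details.

```python
import random

def hill_climb_coloring(adj, num_colors, pm=0.0625, max_evals=2_000_000, seed=None):
    """(1+1)-style hill climbing over colorings with a fixed palette.
    Returns a proper coloring, or None if max_evals is exhausted first."""
    rng = random.Random(seed)
    nodes = list(adj)
    coloring = {v: rng.randrange(num_colors) for v in nodes}

    def conflicts(col):
        # Each conflicting edge is seen from both endpoints, hence the division by 2.
        return sum(col[u] == col[v] for v in nodes for u in adj[v]) // 2

    fitness = conflicts(coloring)
    for _ in range(max_evals):
        if fitness == 0:
            return coloring
        offspring = dict(coloring)
        k = rng.randint(1, max(1, int(pm * len(nodes))))   # 1 to Pm*N nodes
        for v in rng.sample(nodes, k):
            offspring[v] = rng.randrange(num_colors)
        f = conflicts(offspring)
        if f <= fitness:               # keep the fitter (or equally fit) coloring
            coloring, fitness = offspring, f
    return coloring if fitness == 0 else None
```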
Method
DSATUR is run once per network and its results are averaged over network type, i.e.
disassortative, null and assortative for E1, and 0.00, null, 0.25 and 0.75 for E2. Due to HC’s
stochastic nature, 10 independent runs (with a different random number seed each time) are
made for each network, and results are averaged over all runs per network type. Unlike
DSATUR, there is no inherent order in HC’s color assignments, i.e. the highest degree node need
not be labeled with color 0, and HC may produce different but proper c-coloring of a network.
This difference between algorithms is considered when evaluating the results. Table SM-1.2
illustrates the result summarization process for Fig. 1.
Table SM-1.2

DSATUR colors, E1 networks:
Network type   15.0   16.0   Avg.
disassort        6      6      6
null             7      7      7
assort          24     22     23

DSATUR colors, E2 networks:
Network type   15.0   15.5   16.0   16.5   Avg.
0.00             7      8      8      9    8.00
null             9     10     11     11   10.25
0.25            13     12     13     12   12.50
0.75            16     16     17     16   16.25

RMHC success rates (proper colorings found per set of runs, at increasing palette sizes):
E1 (palettes of 6, 8, 12, 14, 24 and 26 colors): disassort 20/20; null 0/20, 0/20, 16/20, 20/20; assort 14/20, 20/20.
E2 (palettes of 8, 10, 14, 16, 18 and 20 colors): 0.00 17/40, 40/40; null 0/40, 22/40, 37/40; 0.25 32/40, 39/40; 0.75 36/40, 39/40.
Results
Fig. SM-1.3 examines the coloring of the top 50 high degree nodes. The DSATUR values
are the average (avg) and one standard deviation (sd) of color values for the top 50 high degree
nodes of each network. A low average combined with a small standard deviation indicates little
variability in the coloring of the top 50 high degree nodes. This simple summary is not
applicable to HC because unlike DSATUR, HC does not assign the lowest numbered color to
nodes. Further, permutation of a proper coloring is also a proper coloring. Therefore, for HC, the
one standard deviation value of color values for the top 50 high degree nodes of the 10 random
runs is recorded, and the HC plots report the average of these standard deviations to indicate the
color range of the top 50 high degree nodes. What is important is not the predominant color of
the nodes of a network, but the number or range of colors of the nodes, which tells us the
number of independent sets and thus the groups of tasks that may execute concurrently.
[Plots omitted: average (avg) and standard deviation (sd) of DSATUR color values (top) and HC color range (bottom) for the top 50 high degree nodes, for the E1 networks (left: disassort, null, assort) and the E2 networks (right: 0.00, null, 0.25, 0.75).]
Fig. SM-1.3 Color summary for top 50 high degree nodes of E1 (left) and E2 (right) networks. Error bars
indicate 99% confidence interval. Color range increases significantly as networks become less
disassortative (left to right) denoting that more independent sets are created for the same number of
nodes.
The plots in Fig. SM-1.3 show that high degree nodes are partitioned into fewer independent
sets when a network is less assortative. For both DSATUR and HC, the color range of the top 50
high degree nodes is significantly larger for assortative than disassortative networks. Also, in
the disassortative versions of both the E1 and E2 networks, DSATUR colors all the top 50 high degree nodes with the same color
0. This is expected for E2 since link probability is 0.00 between any pair of nodes belonging to
the top 50 high degree nodes.
Why are disassortative networks more colorable with a smaller palette? Previously, [17]
reported that increases in network clustering increase graph coloring difficulty due to shorter
path lengths and increased network cliquishness. Similarly, we find path length amongst nodes
of high degree to be a distinguishing factor between disassortative and assortative networks and a
determining factor in the number of colors required by DSATUR or by HC. Compared with their
assortative counterparts, disassortative networks have longer median path lengths amongst nodes
of high degree (q1 MPL), although there is no significant difference between the median path lengths
of the networks as a whole (MPL) (Fig. SM-1.4).
Fig. SM-1.4 Median path length (MPL) of nodes by degree quartile and average network diameter (Max
PL) for E1 networks (left) and for E2 networks (right). Error bars indicate one standard deviation. The
quartiles are formed as follows: (i) unique degree values are sorted in ascending order, and (ii) this sorted
list is divided into four (almost) equal parts. Quartile 1 (q1) nodes are those with degree values larger than
or equal to the minimum value in the upper quartile of this sorted list (Quartile 1 nodes are those with
higher degrees). Quartile 2 nodes are those with degree values larger than or equal to the median of this
sorted list. Quartile 3 nodes are those with degree values larger than or equal to the minimum value of the
lower quartile of this sorted list. Quartile 4 comprises all nodes in the network.
The effect of path length amongst nodes of high degree on graph coloring is intuited as
follows: in general, by nature of having more links, nodes with high degree are more constrained
in their color choices than nodes with low degree. By preferring to fix the color of high degree
nodes, which DSATUR does explicitly in its algorithm and HC does implicitly (negative
correlations are recorded between node degree and time of last successful mutation, and between
node degree and number of successful mutations), the color palette expands more slowly and less
unnecessarily. Nodes of low degree have more color choices and their exact color can be
determined later within the existing color range. As such, a network would be colorable with
fewer colors if nodes of high degree were separated from each other but still connected to one
another via nodes of lower degrees which are less constrained in their color choices. Longer path
lengths amongst nodes of high degree reflect networks with such characteristics, as do negative
degree-degree correlation or disassortative node degree mixing pattern. Differences in degree-degree correlation may also explain the large performance variation associated with coloring
scale-free networks reported in [18].
SM-2 Supplementary Material for Section 3
PPI datafiles
The PPI networks in this paper are constructed from the data sources listed in Table SM-2.1.
These data files are freely available on-line for download and the DIP 2008 dataset was the most
recent in the DIP at the time of preparing this paper. Table SM-2.2 lists the organisms in this
study. Mammalian does not refer to a particular organism but is included as an additional test
network.
Table SM-2.1 PPI data sources
Label               Details
DIPYYYYMMDDMIF25    Species specific Full DIP (http://dip.doe-mbi.ucla.edu) files dated YYYYMMDD.MIF25.
DIP HiTHr           High throughput datasets in MIF format from DIP (http://dip.doe-mbi.ucla.edu).
HPRD                The Human Protein Reference Database (http://www.hprd.org). File used: HPRD_SINGLE_PSIMI_070609.xml.
TAP                 The Yeast TAP Project (http://tap.med.utoronto.ca). Files used: TAP_core.txt and MCL_clusters.txt.
                    Krogan et al. Global landscape of protein complexes in the yeast Saccharomyces cerevisiae. Nature 2006; 440:637-643.
Table SM-2.2 Organisms
Short name    Full name                   NCBI TaxId
Celeg         Caenorhabditis elegans
Dmela         Drosophila melanogaster
Ecoli         Escherichia coli
Hpylo         Helicobacter pylori
Hsapi         Homo sapiens                9606*
Scere         Saccharomyces cerevisiae
Mammalian     Mammalian                   40674*
* Used to identify interactors and interactions for different organisms in the HPRD file.
Table SM-2.3 gives the size of the PPI datafiles in terms of number of listed interactors and
interactions. Self interactions are those with only one distinct interactor listed per interaction.
Binary interactions are those with exactly two distinct interactors listed per interaction. Complex
interactions are those with more than two distinct interactors listed per interaction.
Table SM-2.3 Characteristics of PPI data files
Data source              Organism    DID   Interactors   Interactions: Binary / Complex / Self
DIP HiTHr: Gavin2002a    Scere       1S    1,361         3,221 / 0 / 0
DIP HiTHr: Giot2003a     Dmela       1D    7,036         20,732 / 0 / 193
DIP HiTHr: Li2004a       Celeg       1C    2,633         3,966 / 0 / 60
TAP (2006)               Scere       3S    2,708         7,123 / 339 / 0
DIP20070219MIF25         Celeg       4C    2,646         3,976 / 0 / 60
DIP20070219MIF25         Dmela       4D    7,461         22,641 / 1 / 185
DIP20070219MIF25         Ecoli       4E    1,858         5,928 / 445 / 1,041
DIP20070219MIF25         Hpylo       4P    710           1,358 / 0 / 61
DIP20070219MIF25         Hsapi       4H    1,186         1,427 / 13 / 64
DIP20070219MIF25         Scere       4S    4,968         17,240 / 779 / 289
DIP20081014MIF25         Celeg       5C    2,651         3,979 / 0 / 61
DIP20081014MIF25         Dmela       5D    7,505         22,677 / 9 / 186
DIP20081014MIF25         Ecoli       5E    1,879         5,937 / 445 / 1,052
DIP20081014MIF25         Hpylo       5P    713           1,360 / 0 / 61
DIP20081014MIF25         Hsapi       5H    1,645         1,806 / 79 / 138
DIP20081014MIF25         Scere       5S    4,977         17,226 / 801 / 294
HPRD (Release 8, 2009)   Hsapi       7H    3,214         3,555 / 9 / 509
HPRD (Release 8, 2009)   Mammalian   7X    6,148         18,523 / 456 / 1,583
PPI network construction
Interactors and non-self interactions in a PPI datafile become respectively the nodes and
links of a PPI network. Except for the TAP dataset, the topology of complex interactions is
unspecified in the PPI datafiles. As such, we first use a spanning tree (built by adding one node
at a time to the existing tree) to link all nodes participating in a complex interaction, and then use
a parameter Pe which we introduce to specify the probability of adding links to the complex.
Links built in this manner are hypothetical and may coincide with actual interactions or not. The
spoke model is another way to handle the undetermined topological aspect of complex
interactions but this requires knowledge or selection of a central node (the bait) from which links
are made to all other participants of a complex [1]. The choice of Pe affects the number of links
in a PPI network with complexes, and may also affect node degree and other network statistics
such as clustering coefficient, assortativity and path length. As such, three Pe values are used in
our experiments: 0.00, 0.25 and 0.50.
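As an illustration of this construction, the sketch below generates the hypothetical links for a single complex interaction: a random spanning tree first, then extra links with probability Pe. The representation and function name are ours.

```python
import random

def expand_complex(members, p_e, seed=None):
    """Return a set of frozenset({u, v}) links connecting the members of one complex."""
    rng = random.Random(seed)
    members = list(members)
    rng.shuffle(members)
    edges = set()
    # Spanning tree, built by attaching one node at a time to the existing tree.
    for i in range(1, len(members)):
        u = rng.choice(members[:i])
        edges.add(frozenset((u, members[i])))
    # Each remaining pair is linked with probability Pe.
    for i in range(len(members)):
        for j in range(i + 1, len(members)):
            e = frozenset((members[i], members[j]))
            if e not in edges and rng.random() < p_e:
                edges.add(e)
    return edges
```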
Interactions in the TAP datafile (TAP_core.txt) are all binary. The 339 complex interactions
for TAP are derived from the accompanying MCL_cluster.txt file as follows: for each cluster in
MCL_cluster.txt (there are 547 clusters, some with only two members or interactors), retain only
interactors found in TAP_core.txt and then count as a complex, only those clusters with more
than two members.
Table SM-2.4 summarizes the fixed (Pe independent) characteristics of PPI networks
generated from the PPI datafiles in Table SM-2.3. The number of nodes in Table SM-2.4 may
differ from the number of interactors in Table SM-2.3 because we only include in our PPI
networks those interactors listed as participants in an interaction. A complex node is a node
participating in a complex interaction or equivalently belonging to a complex. Complex size
refers to the number of nodes in a complex. Dividing the number of complex nodes by the
number of complexes need not yield average complex size because complexes may overlap, i.e.
a complex node may belong to more than one complex, and average complex size counts a
shared complex node multiple times.
Table SM-2.5 gives a sample of values for the variable (Pe dependent) characteristics of PPI
networks. The values may vary only for PPI networks with unspecified topology for complexes
(these networks are highlighted in gray).
Dealing with inaccuracies in PPI data
To address the possibility of incompleteness and expected high false positive rate in PPI
data, we first use the variation over time in the number of nodes, and number and type of
interactions per organism as observed in Tables SM-2.4 and SM-2.5 as a source of noise that is
more plausible than simply adding and removing nodes and links at random from a network.
Second, links of a network are rewired at random with various proportions Pr. First 2% of the
links are rewired, then another 2%, and finally 6% to make a total of 10%.
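The text does not spell out the rewiring scheme, so the sketch below shows one simple reading: for a proportion Pr of randomly selected links, one endpoint is re-targeted to a uniformly chosen node, avoiding self-links and duplicate links. Names are ours.

```python
import random

def rewire_fraction(edges, nodes, pr, seed=None):
    """Randomly rewire a proportion `pr` of the links of a network."""
    rng = random.Random(seed)
    edges = set(edges)
    nodes = list(nodes)
    for e in rng.sample(list(edges), int(pr * len(edges))):
        u, v = tuple(e)
        keep = u if rng.random() < 0.5 else v        # endpoint that stays fixed
        candidates = [w for w in nodes
                      if w != keep and frozenset((keep, w)) not in edges]
        if not candidates:
            continue
        edges.remove(e)
        edges.add(frozenset((keep, rng.choice(candidates))))
    return edges
```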
Table SM-2.4 Fixed (Pe independent) characteristics of PPI networks
DID   No. of nodes (a)   No. of complex nodes (b)   % (100b/a)   No. of complexes   Complex size: Min / Max / Avg / Stdev
1S    1,361              0                          0.00         0                  0 / 0 / 0 / 0
1D    7,027              0                          0.00         0                  0 / 0 / 0 / 0
1C    2,624              0                          0.00         0                  0 / 0 / 0 / 0
3S    2,708              2,554                      94.31        339                3 / 49 / 6.4 / 5.9
4C    2,637              0                          0.00         0                  0 / 0 / 0 / 0
4D    7,451              3                          0.04         1                  3 / 3 / 3 / 0
4E    1,548              1,233                      79.65        445                3 / 89 / 13.4 / 12.9
4P    701                0                          0.00         0                  0 / 0 / 0 / 0
4H    1,173              23                         1.96         13                 3 / 4 / 3.3 / 0.5
4S    4,964              1,988                      40.05        779                3 / 55 / 9.4 / 8.3
5C    2,640              0                          0.00         0                  0 / 0 / 0 / 0
5D    7,495              27                         0.36         9                  3 / 5 / 3.6 / 0.7
5E    1,561              1,233                      78.99        445                3 / 89 / 13.4 / 12.9
5P    704                0                          0.00         0                  0 / 0 / 0 / 0
5H    1,595              166                        10.41        79                 3 / 5 / 3.3 / 0.6
5S    4,971              1,983                      39.89        801                3 / 55 / 9.4 / 8.2
7H    2,231              26                         1.17         9                  3 / 4 / 3.1 / 0.3
7X    5,716              956                        16.72        456                3 / 12 / 3.9 / 1.5
Table SM-2.5 Variable (Pe dependent) characteristics of PPI networks
      Pe = 0.00                       Pe = 0.25                       Pe = 0.50
DID   Links / Degree (Min, Max, Avg., Stdev.)
1S    3,221 / 1, 53, 4.7, 5.9
1D    20,732 / 1, 178, 5.9, 9.4
1C    3,966 / 1, 187, 3.0, 7.2
3S    7,123 / 1, 141, 5.3, 7.5
4C    3,976 / 1, 187, 3.0, 7.2
4D    22,642 / 1, 178, 6.1, 9.8       22,642 / 1, 178, 6.1, 9.8       22,643 / 1, 178, 6.1, 9.8
4E    9,047 / 1, 248, 11.7, 26.1      16,176 / 1, 412, 20.8, 44.7     21,964 / 1, 523, 28.4, 57.8
4P    1,358 / 1, 54, 3.9, 5.4
4H    1,443 / 1, 37, 2.5, 3.0         1,443 / 1, 37, 2.5, 3.0         1,445 / 1, 37, 2.5, 3.0
4S    22,178 / 1, 283, 8.9, 13.8      31,862 / 1, 283, 12.8, 20.4     40,771 / 1, 321, 16.4, 27.6
5C    3,979 / 1, 187, 3.0, 7.2
5D    22,689 / 1, 178, 6.1, 9.8       22,693 / 1, 178, 6.1, 9.8       22,694 / 1, 178, 6.1, 9.8
5E    9,087 / 1, 252, 11.6, 26.0      16,195 / 1, 428, 20.7, 44.3     21,895 / 1, 524, 28.1, 57.4
5P    1,360 / 1, 54, 3.9, 5.4
5H    1,892 / 1, 37, 2.4, 2.8         1,904 / 1, 37, 2.4, 2.8         1,920 / 1, 37, 2.4, 2.8
5S    22,158 / 1, 283, 8.9, 13.8      31,737 / 1, 283, 12.8, 20.2     40,719 / 1, 309, 16.4, 27.5
7H    3,561 / 1, 97, 3.2, 6.2         3,563 / 1, 97, 3.2, 6.2         3,564 / 1, 97, 3.2, 6.2
7X    19,181 / 1, 191, 6.7, 10.9      19,397 / 1, 190, 6.8, 11.0      19,614 / 1, 188, 6.9, 11.0
Cells are left blank if there is no change in value.
PPI network naming convention
To ease the identification of PPI networks and their variations in the results, we assign
numerical labels (NID) to the PPI networks as follows: NID = ODID + Pe + Pr. For instance, the
NID of a PPI network for S. cerevisiae built from dataset DIP20081014MIF25 with Pe = 0.25
and Pr = 0.04 is 4.29. ODID (Table SM-2.6) arranges the networks by data file chronological
order and by organism. Pe for networks without complex interactions is 0.00.
Table SM-2.6 ODID
ODID 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
DID 1S 3S 4S 5S 1D 4D 5D 1C 4C 5C 4E 5E 4P 5P 4H 5H 7H 7X
SM-3 Supplementary Material for Section 4
Network clusters and Protein complexes
Protein complexes often form network clusters, i.e. densely linked subgraphs, and network
clustering forms the basis of protein complex detection algorithms such as HCS [13], MCODE
[1] and RNSC [10]. Wherever possible, we use the term ‘complex’ for a biologically meaningful
cluster of protein nodes which has been tagged as such, and ‘cluster’ for a group of nodes with
high link density. A cluster need not be a complex.
Complex interactions are considered as protein complexes. However, this does not mean
that there are no protein complexes in PPI networks with no complex interactions specified. The
protein complexes in these networks, e.g. 1S and 1D, are just not explicitly identified as such in
their datafiles, and we exclude them from our work in section 4 of the paper. Information about
protein complexes for 1S and 1D can be derived from other biological databases e.g. MIPS. But
we decided to test more recent PPI networks and these have complex interactions explicitly
defined in their datafiles. No doubt there are other means of creating PPI networks and
discovering their complexes, e.g. combining different data sources, but these are not dealt with in
our current work.
Results
Fig. SM-3.1 compares the corresponding pairs of 'after' clustering quality statistics for the two algorithms (C and D), once the results are summarized by ODID and
Pe.
[Plots omitted, panels (a)-(d): average cov, median cov, acc and sepa versus NID, for the C and D algorithms.]
Fig. SM-3.1 Comparison of the ‘after’ clustering quality statistics
Tables SM-3.1a and SM-3.1b contain data produced for 5S (S. cerevisiae from
DIP20081014MIF25) to illustrate the summarization process described in section 4.3. The ODID
for 5S is 4. Col notes the average values (Avg.) compared in Figs. 9 and SM-3.1.
Table SM-3.1a Basic algorithm (Avg. Color, ‘C’) results for 5S
Columns in each block: Avg. cov / Median cov / Acc / Sepa. (The Avg. rows, labelled by ODID + Pe, are the values compared in Figs. 9 and SM-3.1.)

NID           Before                                  After                                   After - Before
4.00          0.4662 / 0.4366 / 0.0026 / 0.0002       0.4769 / 0.4463 / 0.0028 / 0.0003       0.0107 / 0.0097 / 0.0002 / 0.0001
4.02          0.4401 / 0.4150 / 0.0024 / 0.0002       0.4532 / 0.4281 / 0.0029 / 0.0003       0.0131 / 0.0131 / 0.0005 / 0.0001
4.04          0.4182 / 0.3928 / 0.0021 / 0.0002       0.4301 / 0.3995 / 0.0027 / 0.0003       0.0119 / 0.0067 / 0.0006 / 0.0001
4.10          0.3526 / 0.3212 / 0.0015 / 0.0001       0.3628 / 0.3320 / 0.0016 / 0.0003       0.0102 / 0.0108 / 0.0001 / 0.0002
Avg. (4.00)   0.4193 / 0.3914 / 0.0022 / 0.0002       0.4308 / 0.4015 / 0.0025 / 0.0003       0.0115 / 0.0101 / 0.0004 / 0.0001
4.25          0.4428 / 0.4090 / 0.0022 / 0.0002       0.4502 / 0.4154 / 0.0023 / 0.0002       0.0074 / 0.0064 / 0.0001 / 0.0000
4.27          0.4182 / 0.3796 / 0.0016 / 0.0001       0.4279 / 0.3890 / 0.0018 / 0.0002       0.0097 / 0.0094 / 0.0002 / 0.0001
4.29          0.4000 / 0.3654 / 0.0016 / 0.0001       0.4079 / 0.3659 / 0.0017 / 0.0002       0.0079 / 0.0005 / 0.0001 / 0.0001
4.35          0.3432 / 0.3111 / 0.0011 / 0.0001       0.3534 / 0.3261 / 0.0012 / 0.0001       0.0102 / 0.0150 / 0.0001 / 0.0000
Avg. (4.25)   0.4011 / 0.3663 / 0.0016 / 0.0001       0.4099 / 0.3741 / 0.0018 / 0.0002       0.0088 / 0.0078 / 0.0001 / 0.0001
4.50          0.4635 / 0.4602 / 0.0023 / 0.0001       0.4701 / 0.4642 / 0.0023 / 0.0002       0.0066 / 0.0040 / 0.0000 / 0.0001
4.52          0.4381 / 0.4296 / 0.0019 / 0.0001       0.4448 / 0.4341 / 0.0020 / 0.0002       0.0067 / 0.0045 / 0.0001 / 0.0001
4.54          0.4223 / 0.4116 / 0.0018 / 0.0001       0.4296 / 0.4248 / 0.0019 / 0.0002       0.0073 / 0.0132 / 0.0001 / 0.0001
4.60          0.3662 / 0.3432 / 0.0011 / 0.0001       0.3752 / 0.3512 / 0.0014 / 0.0001       0.0090 / 0.0080 / 0.0003 / 0.0000
Avg. (4.50)   0.4225 / 0.4112 / 0.0018 / 0.0001       0.4299 / 0.4186 / 0.0019 / 0.0002       0.0074 / 0.0074 / 0.0001 / 0.0001
Table SM-3.1b Alternative algorithm (Degree, ‘D’) results for 5S
Columns in each block: Avg. cov / Median cov / Acc / Sepa. (The Avg. rows, labelled by ODID + Pe, are the values compared in Figs. 9 and SM-3.1.)

NID           Before                                  After                                   After - Before
4.00          0.4665 / 0.4425 / 0.0025 / 0.0002       0.4765 / 0.4459 / 0.0026 / 0.0002       0.0100 / 0.0034 / 0.0001 / 0.0000
4.02          0.4411 / 0.4099 / 0.0022 / 0.0002       0.4527 / 0.4175 / 0.0023 / 0.0002       0.0116 / 0.0076 / 0.0001 / 0.0000
4.04          0.4162 / 0.3793 / 0.0020 / 0.0002       0.4295 / 0.3884 / 0.0021 / 0.0002       0.0133 / 0.0091 / 0.0001 / 0.0000
4.10          0.3518 / 0.3111 / 0.0014 / 0.0001       0.3679 / 0.3333 / 0.0015 / 0.0002       0.0161 / 0.0222 / 0.0001 / 0.0001
Avg. (4.00)   0.4189 / 0.3857 / 0.0020 / 0.0002       0.4317 / 0.3963 / 0.0021 / 0.0002       0.0128 / 0.0106 / 0.0001 / 0.0000
4.25          0.4473 / 0.4246 / 0.0022 / 0.0002       0.4578 / 0.4286 / 0.0022 / 0.0002       0.0105 / 0.0040 / 0.0000 / 0.0000
4.27          0.4210 / 0.3788 / 0.0019 / 0.0002       0.4324 / 0.3920 / 0.0020 / 0.0002       0.0114 / 0.0132 / 0.0001 / 0.0000
4.29          0.4003 / 0.3636 / 0.0017 / 0.0001       0.4139 / 0.3694 / 0.0018 / 0.0002       0.0136 / 0.0058 / 0.0001 / 0.0001
4.35          0.3468 / 0.3041 / 0.0012 / 0.0001       0.3648 / 0.3333 / 0.0013 / 0.0001       0.0180 / 0.0292 / 0.0001 / 0.0000
Avg. (4.25)   0.4039 / 0.3678 / 0.0018 / 0.0002       0.4172 / 0.3808 / 0.0018 / 0.0002       0.0134 / 0.0131 / 0.0001 / 0.0000
4.50          0.4687 / 0.4563 / 0.0020 / 0.0002       0.4797 / 0.4712 / 0.0021 / 0.0002       0.0110 / 0.0149 / 0.0001 / 0.0000
4.52          0.4410 / 0.4333 / 0.0018 / 0.0001       0.4542 / 0.4443 / 0.0018 / 0.0002       0.0132 / 0.0110 / 0.0000 / 0.0001
4.54          0.4227 / 0.4070 / 0.0018 / 0.0001       0.4359 / 0.4144 / 0.0018 / 0.0002       0.0132 / 0.0074 / 0.0000 / 0.0001
4.60          0.3591 / 0.3347 / 0.0012 / 0.0001       0.3750 / 0.3399 / 0.0012 / 0.0001       0.0159 / 0.0052 / 0.0000 / 0.0000
Avg. (4.50)   0.4229 / 0.4078 / 0.0017 / 0.0001       0.4362 / 0.4175 / 0.0017 / 0.0002       0.0133 / 0.0096 / 0.0000 / 0.0001
A pseudo-quasi-polynomial algorithm
for mean-payoff parity games
Laure Daviaud1 , Marcin Jurdziński1 , and Ranko Lazić1
arXiv:1803.04756v1 [cs.GT] 13 Mar 2018
1 DIMAP, Department of Computer Science, University of Warwick, UK
Abstract
In a mean-payoff parity game, one of the two players aims both to achieve a
qualitative parity objective and to minimize a quantitative long-term average of payoffs
(aka. mean payoff). The game is zero-sum and hence the aim of the other player
is to either foil the parity objective or to maximize the mean payoff. Our main
technical result is a pseudo-quasi-polynomial algorithm for solving mean-payoff parity
games. All algorithms for the problem that have been developed for over a decade
have a pseudo-polynomial and an exponential factors in their running times; in the
running time of our algorithm the latter is replaced with a quasi-polynomial one.
Our main conceptual contributions are the definitions of strategy decompositions for
both players, and a notion of progress measures for mean-payoff parity games that
generalizes both parity and energy progress measures. The former provides normal
forms for and succinct representations of winning strategies, and the latter enables
the application to mean-payoff parity games of the order-theoretic machinery that
underpins a recent quasi-polynomial algorithm for solving parity games.
1 Introduction
A motivation to study zero-sum two-player games on graphs comes from automata theory and logic, where they have been used as a robust theoretical tool, for example, for
streamlining of the initially notoriously complex proofs of Rabin’s theorems on the complementation of automata on infinite trees and the decidability of the monadic second-order
logic on infinite trees [14, 21], and for the development of the related theory of logics with
fixpoint operators [11]. More practical motivations come from model checking and automated controller synthesis, where they serve as a clean combinatorial model for the study
of the computational complexity and algorithmic techniques for model checking [12], and
for the automated synthesis of correct-by-design controllers [20]. There is a rich literature
on closely related “dynamic games” in the classical game theory and AI literatures reaching back to 1950’s, and games on graphs are also relevant to complexity theory [9] and to
competitive ratio analysis of online algorithms [22].
1.1 Mean-payoff parity games
A mean-payoff parity game is played by two players—Con and Dis—on a directed graph.
From the starting vertex, the players keep following edges of the graph forever, thus
forming an infinite path. The set of vertices is partitioned into those owned by Con and
those owned by Dis, and it is the owner of the current vertex who picks which outgoing
edge to follow to the next current vertex. Who is declared the winner of an infinite path
formed by such interaction is determined by the labels of vertices and edges encountered
on the path. Every vertex is labelled by a positive integer called its priority and every
edge is labelled by an integer called its cost. The former are used to define the parity
condition: the highest priority that occurs infinitely many times is odd; and the latter are
used to define the (zero-threshold) mean-payoff condition: the (lim-sup) long-run average
of the costs is negative. If both the parity and the mean-payoff conditions hold then Con
is declared the winner, and otherwise Dis is.
In the following picture, if Dis owns the vertex in the middle then she wins the game
(with a positional strategy): she can for example always go to the left whenever she is in
the middle vertex and this way achieve the positive mean payoff 1/2. Conversely, if Con
owns the middle vertex then he wins the game. He can choose to go infinitely often to
the left and see priority 1—in order to fulfill the parity condition—and immediately after
each visit to the left, to go to the right a sufficient number of times—so as to make the
mean-payoff negative. Note that a winning strategy for Con is not positional.
−1
1
1
0
0
0
0
Throughout the paper, we write V and E for the sets of vertices and directed edges in a
mean-payoff parity game graph, π(v) for the priority of a vertex v ∈ V , and c(v, u) for the
cost of an edge (v, u) ∈ E. Vertex priorities are positive integers no larger than d, which
we assume throughout the paper to be a positive even integer, edge costs are integers
whose absolute value does not exceed the positive integer C, and we write n and m for
the numbers of vertices and edges in the graph, respectively.
Several variants of the algorithmic problem of solving mean-payoff parity games have
been considered in the literature. The input always includes a game graph as described
above. The value of (a vertex in) a mean-payoff parity game is defined as ∞ if Con does not
have a winning strategy for the parity condition, and otherwise the smallest mean payoff
that Con can secure while playing so as to satisfy the parity condition. (Note that the
paper that introduced mean-payoff parity games [7] defined Con to be the maximizer and
not, as we do, the minimizer of the mean payoff. The two definitions are straightforwardly
inter-reducible; the choice we made allows for a better alignment of our key notion of a
mean-payoff parity progress measure with the literature on energy progress measures [2].)
The value problem is to compute the value of every vertex. The threshold problem is,
given an additional (rational) number θ as a part of the input, to compute the set of
vertices with finite value (strictly) less than θ. (Note that a value of a vertex is not finite,
i.e., it is ∞, if and only if Con does not have a winning strategy for his parity condition,
which can be checked in quasi-polynomial time [3, 17].) In the zero-threshold problem the
threshold number θ is assumed to be 0.
As Chatterjee et al. [6, Theorem 10] have shown, the threshold problem can be used
to solve the value problem at the cost of increasing the running time by the modest
O(n · log(nW )) multiplicative term. Their result, together with a routine linear-time
reduction from the threshold problem to the zero-threshold problem (subtract θ from costs
of all edges), motivate us to focus on solving the zero-threshold problem in this paper.
For brevity, we will henceforth write “mean-payoff condition” instead of “zero-threshold
mean-payoff condition”.
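The reduction mentioned above is a one-line transformation of the cost labelling; here is a minimal sketch (the dictionary representation and function name are ours), using exact rationals so that a rational threshold introduces no rounding.

```python
from fractions import Fraction

def to_zero_threshold(costs, theta):
    """Shift every edge cost by -theta. A play has mean payoff < theta in the
    original game if and only if it has mean payoff < 0 after the shift, so the
    threshold problem reduces to the zero-threshold problem in linear time.
    (To keep costs integral one can first scale all costs by theta's denominator.)"""
    theta = Fraction(theta)
    return {edge: Fraction(cost) - theta for edge, cost in costs.items()}
```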
The roles of the two players in a mean-payoff parity game are not symmetric for
several reasons. One is that Con aims to satisfy a conjunction of the parity condition
and of the mean-payoff condition, while the goal of Dis is to satisfy a disjunction of
the negated conditions. The other one is that negations of the parity condition and of
the mean-payoff condition are not literally the parity and the mean-payoff conditions,
respectively: the negation of the parity condition swaps the roles of even and odd, and the
negation of the strict (“less than”) mean-payoff condition is non-strict (“at least”). The
former asymmetry (conjunction vs disjunction) is material and our treatments of strategy
construction for players Con and Dis differ substantially, but the latter are technically
benign. The discussion above implies that the goal of player Dis is to either satisfy the
parity condition in which the highest priority that occurs infinitely many times is even, or
to satisfy the “at least” zero-threshold mean-payoff condition.
1.2 Related work
Mean-payoff games have been studied since 1960’s and there is a rich body of work on
them in the stochastic games literature. We selectively mention the positional determinacy
result of Ehrenfeucht and Mycielski [10] (i.e., that positional optimal strategies exist for
both players), and the work of Zwick and Paterson [22], who pointed out that positional
determinacy implies that deciding the winner in mean-payoff games is both in NP and
in co-NP, and gave a pseudo-polynomial algorithm for computing values in mean-payoff
games that runs in time O(mn^3 C). Brim et al. [2] introduced energy progress measures as
natural witnesses for winning strategies in closely related energy games, they developed a
lifting algorithm to compute the least energy progress measures, and they observed that
this leads to an algorithm for computing values in mean-payoff games whose running time
is O(mn^2 C · log(nC)), which is better than the algorithm of Zwick and Paterson [22] if
C = 2^{o(n)}. Comin and Rizzi [8] have further refined the usage of the lifting algorithm for
energy games, achieving running time O(mn^2 C).
Parity games have been studied in the theory of automata on infinite trees, fixpoint
logics, and in verification and synthesis since early 1990’s [11, 12]. Very selectively, we mention early and influential recursive algorithms by McNaughton [19] and by Zielonka [21],
the running times of which are O(n^{d+O(1)}). The breakthrough result of Calude et al. [3]
gave the first algorithm that achieved an n^{o(d)} running time. Its running time is polynomial O(n^5) if d ≤ log n and quasipolynomial O(n^{lg d+6}) in general. (Throughout the
paper, we write lg x to denote log_2 x, and we write log x when the base of the logarithm
is moot.) Note that Calude et al.’s polynomial bound for d ≤ log n implies that parity
games are FPT (fixed parameter tractable) when the number d of distinct vertex priorities
is the parameter. Further analysis by Jurdziński and Lazić [17] established that running
times O(mn^{2.38}) for d ≤ lg n, and O(dmn^{lg(d/lg n)+1.45}) for d = ω(lg n), can be achieved
using their succinct progress measures, and Fearnley et al. [13] obtained similar results by
refining the technique and the analysis of Calude et al. [3]. Existence of polynomial-time
algorithms for solving parity games and for solving mean-payoff games are fundamental
long-standing open problems [12, 22, 15].
Mean-payoff parity games have been introduced by Chatterjee et al. [7] as a proof of
concept in developing algorithmic techniques for solving games (and hence for controller
synthesis) which combine qualitative (functional) and quantitative (performance) objectives. Their algorithm for the value problem is inspired by the recursive algorithms of
McNaughton [19] and Zielonka [21] for parity games, from which its running time acquires
the exponential dependence mn^{d+O(1)} C on the number of vertex priorities. Chatterjee and
Doyen [4] have simplified the approach by considering energy parity games first, achieving running time O(dmn^{d+4} C) for the threshold problem, which was further improved
by Bouyer et al. [1] to O(mn^{d+2} C) for the value problem. Finally, Chatterjee et al. [6]
have achieved the running time O(mn^d C log(nC)) for the value problem, but their key
original technical results are for the two special cases of mean-payoff parity games that
allow only two distinct vertex priorities, for which they achieve running time O(mnC) for
the threshold problem, by using amortized analysis techniques from dynamic algorithms.
Note that none of those algorithms escapes the exponential dependence on the number
of distinct vertex priorities, simply because they all follow the recursive structure of the
algorithms by McNaughton [19] and by Zielonka [21].
1.3 Our contributions
Our main technical result is the first pseudo-quasi-polynomial algorithm for solving mean-payoff parity games. More precisely, we prove that the threshold problem for mean-payoff
parity games can be solved in pseudo-polynomial time mn^{2+o(1)} C for d = o(log n), in
pseudo-polynomial time mn^{O(1)} C if d = O(log n) (where the constant in the exponent of n
depends logarithmically on the constant hidden in the big-Oh expression O(log n)), and in
pseudo-quasi-polynomial time O(dmn^{lg(d/lg n)+2.45} C) if d = ω(log n). By [6, Theorem 10],
we obtain running times for solving the value problem that are obtained from the ones
above by multiplying them by the O(n log(nC)) term.
Our key conceptual contributions are the notions of strategy decompositions for both
players in mean-payoff parity games, and of mean-payoff parity progress measures. The
former explicitly reveal the underlying strategy structure of winning sets for both players,
and they provide normal forms and succinct representations for winning strategies. The
latter provide an alternative form of a witness and a normal form of winning strategies for
player Dis, which make explicit the order-theoretic structures that underpin the original
progress measure lifting algorithms for parity [16] and energy games [2], respectively, as
well as the recent quasi-polynomial succinct progress measure lifting algorithm for parity
games [17]. The proofs of existence of strategy decompositions follow the well-beaten
track of using McNaughton-Zielonka-like inductive arguments, and existence of progress
measures that witness winning strategies for Dis is established by extracting them from
strategy decompositions for Dis.
Our notion of mean-payoff parity progress measures combines features of parity and energy progress measures, respectively. Crucially, our mean-payoff progress measures inherit
the ordered tree structure from parity progress measures, and the additional numerical
labels of vertices (that capture the energy progress measure aspects) do not interfere substantially with it. This allows us to directly apply the combinatorial ordered tree coding
result by Jurdziński and Lazić [17], which limits the search space in which the witnesses
are sought by the lifting procedure to a pseudo-quasi-polynomial size, yielding our main
result. The order-theoretic properties that the lifting procedure relies on naturally imply
the existence of the least (in an appropriate order-theoretic sense) progress measure, from
which a positional winning strategy for Dis on her winning set can be easily extracted.
In order to synthesize a strategy decomposition—and hence a winning strategy—for
Con in pseudo-quasi-polynomial time, we take a different approach. Progress measures for
games typically yield positional winning strategies for the relevant player [18, 16, 2], but
optimal strategies for Con in mean-payoff parity games may require infinite memory [7].
That motivates us to forgo attempting to pin a notion of progress measures to witness
winning strategies for Con. We argue, instead, that a McNaughton-Zielonka-style recursive
procedure can be modified to run in pseudo-quasi-polynomial time and produce a strategy
decomposition of Con’s winning set. The key insight is to avoid invoking some of the
recursive calls, and instead to replace them by invocations of the pseudo-quasi-polynomial
lifting procedure for Dis, merely to compute the winning set for Dis—and hence also
for Con, because by determinacy Con has a winning strategy whenever Dis does not.
As a result, each invocation of the recursive procedure only makes recursive calls on
disjoint subgames, which makes it perform only a polynomial number of steps other than
invocations of the lifting procedure, overall yielding a pseudo-quasi-polynomial algorithm.
Organisation of the paper. In Section 2, we define strategy decompositions for Dis
and Con, and we prove that they exist if and only if the respective player has a winning
strategy. In Section 3, we define progress measures for Dis, and we prove that such a
progress measure exists if and only if Dis has a strategy decomposition. In Section 4, we
give a pseudo-quasi-polynomial lifting algorithm for computing the least progress measure,
from which a strategy decomposition for Dis of her winning set, and the winning set for
Con, can be derived. In Section 5, we show how to also compute a strategy decomposition
for Con on his winning set in pseudo-quasi-polynomial time, using the lifting procedure
to speed up a McNaughton-Zielonka-style recursive procedure.
2 Strategy decompositions
In this section we introduce our first key concept of strategy decompositions for each of
the two players. They are hierarchically defined objects, of size polynomial in the number
of vertices in the game graph, that witness existence of winning strategies for each of the
two players on their winning sets. Such decompositions are implicit in earlier literature, in
particular in algorithms for mean-payoff parity games [7, 4, 1, 6] that follow the recursive
logic of McNaughton’s [19] and Zielonka’s [21] algorithms for parity games. We make them
explicit because we believe that it provides conceptual clarity and technical advantages.
Strategy decompositions pinpoint the recursive strategic structure of the winning sets in
mean-payoff parity games (and, by specialization, in parity games too), which may provide
valuable insights for future work on the subject. What they allow us to do in this work
is to streamline the proof that the other key concept we introduce—mean-payoff parity
progress measures—witness existence of winning strategies for Dis.
We define the notions of strategy decompositions for Dis and for Con, then in Lemmas 1 and 2 we prove that the decompositions naturally yield winning strategies for the
corresponding players, and finally in Lemma 3 we establish that in every mean-payoff
game, both players have strategy decompositions of their winning sets. The proofs of all
three lemmas mostly use well-known inductive McNaughton-Zielonka-type arguments that
should be familiar to anyone who is conversant in the existing literature on mean-payoff
parity games. We wish to think that for a curious non-expert, this section offers a streamlined and self-contained exposition of the key algorithmic ideas behind earlier works on
mean-payoff parity games [7, 4, 1].
2.1 Preliminaries
Notions of strategies, positional strategies, plays, plays consistent with a strategy, winning
strategies, winning sets, reachability strategies, traps, mean payoff, etc., are defined in the
usual way. We forgo tediously repeating the definitions of those common and routine
concepts, referring a non-expert but interested reader to consult the (typically one-page)
Preliminaries or Definitions sections of any of the previously published papers on meanpayoff parity games [7, 4, 1, 6]. One notable difference between our set-up and those found
in the above-mentioned papers is that for an infinite
Pnsequence of numbers hc1 , c2 , c3 , . . .i,
we define its mean payoff to be lim supn→∞ (1/n) · i=1 ci , rather than the more common
5
ReachDis
Priority b
ReachDis
Priority b
× Con
R
T
ω′
τ
B 6= ∅
U
T
R 6= ∅
ω ′′
τ
ω′
Case 2. b odd
Case 1. b even
Figure 1: Strategy decompositions for Dis
P
lim inf n→∞ (1/n) · ni=1 ci ; this is because we chose Con to be the minimizer of the mean
payoff, instead of the typical choice of making him the maximizer.
2.2 Strategy decompositions for Dis
Let W ⊆ V be a subgame (i.e., a non-empty induced subgraph of V with no dead end)
in which the biggest vertex priority is b. We define strategy decompositions for Dis by
induction on b and the size of W . We say that ω is a b-decomposition of W for Dis if the
following conditions hold (pictured in Figure 1).
1. If b is even then ω = ((R, ω′), (T, τ), B), such that:
(a) sets R, T, and B ≠ ∅ are a partition of W;
(b) B is the set of vertices of the top priority b in W ;
(c) τ is a positional reachability strategy for Dis from T to B in W ;
(d) ω ′ is a b′ -decomposition of R for Dis, where b′ < b.
2. If b is odd then ω = ((U, ω″), (T, τ), (R, ω′)), such that:
(a) sets U, T, and R ≠ ∅ are a partition of W;
(b) ω ′ is either:
i. a b′ -decomposition of R for Dis, where b′ < b; or
ii. a positional strategy for Dis that is mean-payoff winning for her on R;
(c) τ is a positional reachability strategy for Dis from T to R in W ;
(d) ω ′′ is a b′′ -decomposition of U for Dis, where b′′ ≤ b;
(e) R is a trap for Con in W .
We say that a subgame W has a strategy decomposition for Dis if it has a b-decomposition
for some b. A heuristic, if somewhat non-standard, way to think about sets T and R in
the above definition is that sets denoted by T are transient and sets denoted by R are
recurrent. The meanings of those words here are different than in, say, Markov chains,
and refer to strategic, rather than probabilistic, properties.
Given a strategy decomposition ω for Dis, we inductively define a positional strategy
σ(ω) for Dis in the following way:
σ(ω) = σ(ω′) ∪ τ ∪ β            if ω = ((R, ω′), (T, τ), B),
σ(ω) = σ(ω″) ∪ τ ∪ σ(ω′)        if ω = ((U, ω″), (T, τ), (R, ω′)),

where β is an arbitrary positional strategy for Dis on B, and σ(ω′) = ω′ in case 2(b)ii.
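The recursion above is straightforward to realise on concrete data. The sketch below assumes an encoding of decompositions as nested tuples tagged "even" or "odd", with positional strategies given as dictionaries from Dis's vertices to chosen successors; the encoding and the function name are ours, not the paper's.

```python
def dis_strategy(omega):
    """Assemble the positional strategy sigma(omega) for Dis from a decomposition.
    Encoding (assumed): ("even", sub_R, tau, beta) for case 1, where sub_R is the
    decomposition of R (or None if R is empty) and beta is a positional strategy on B;
    ("odd", sub_U, tau, sub_R) for case 2, where sub_R may instead be a plain dict,
    i.e. a positional mean-payoff winning strategy as in case 2(b)ii."""
    if omega is None:
        return {}
    kind, first, tau, last = omega
    sigma = dict(dis_strategy(first))        # sigma(omega') or sigma(omega'')
    sigma.update(tau)                        # reachability strategy on T
    if kind == "even":
        sigma.update(last)                   # beta: arbitrary positional choice on B
    else:
        sigma.update(last if isinstance(last, dict) else dis_strategy(last))
    return sigma
```

Since R, T and B (respectively U, T and R) partition W, the three partial strategies have disjoint domains and the order of the updates is immaterial.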
Lemma 1. If ω is a strategy decomposition of W for Dis and W is a trap for Con, then
σ(ω) is a positional winning strategy for Dis from every vertex in W .
Proof. We proceed by induction on the number of vertices in W . The reasoning involved
in the base cases (when R = ∅ or U = ∅) is analogous and simpler than in the inductive
cases, hence we immediately proceed to the latter.
We consider two cases based on the parity of the biggest
vertex priority b in W . First,
assume that b is even and let ω = (R, ω ′ ), (T, τ ), B be a b-decomposition of W . We
argue that every infinite play consistent with σ(ω) is winning for Dis. If it visits vertices
in B infinitely many times then the parity condition for Dis is satisfied because b is the
biggest vertex priority and it is even. Otherwise, it must be the case that the play visits
vertices in T ∪ B only finitely many times, because visiting a vertex in T always leads
in finitely many steps to visiting a vertex in B by following the reachability strategy τ .
Therefore, eventually the play never leaves R and is consistent with strategy σ(ω ′ ), which
is winning for Dis by the inductive hypothesis.
Next, assume that b is odd, and let ω = (U, ω ′′ ), (T, τ ), (R, ω ′ ) be a b-decomposition.
We argue that every infinite play consistent with σ(ω) is winning for Dis. If it visits T ∪ R,
then by following strategy τ , it eventually reaches and never leaves R (because R is a trap
for Con), and hence it is winning for Dis because σ(ω ′ ) is a winning strategy for Dis by the
inductive hypothesis, or by condition 2(b)ii. Otherwise, if such a play never visits T ∪ R
then it is winning for Dis because σ(ω ′′ ) is a winning strategy for Dis by the inductive
hypothesis.
2.3 Strategy decompositions for Con
Let W ⊆ V be a subgame in which the biggest vertex priority is b. We define strategy
decompositions for Con by induction on b and the size of W. We say that ω is a b-decomposition of W for Con if the following conditions hold (pictured in Figure 2).
1. If b is odd then ω = ((R, ω′), (T, τ), B, λ), such that:
(a) sets R, T, and B ≠ ∅ are a partition of W;
(b) B is the set of vertices of priority b in W ;
(c) τ is a positional reachability strategy for Con from T to B in W ;
(d) ω ′ is a b′ -decomposition of R for Con, where b′ < b;
(e) λ is a positional strategy for Con that is mean-payoff winning for him on W .
2. If b is even then ω = ((U, ω″), (T, τ), (R, ω′)), such that:
(a) sets U, T, and R ≠ ∅ are a partition of W;
(b) ω ′ is a b′ -decomposition of R for Con, where b′ < b;
(c) τ is a positional reachability strategy for Con from T to R in W ;
(d) ω ′′ is a b′′ -decomposition of U for Con, where b′′ ≤ b;
(e) R is a trap for Dis in W .
We say that a subgame has a strategy decomposition for Con if it has a b-decomposition
for some b. Note that the definition is analogous to that of a strategy decomposition for
Dis in most aspects, with the following differences:
• the roles of Dis and Con, and of even and odd, are swapped;
Figure 2: Strategy decompositions for Con (Case 1: b odd, with λ a positional mean-payoff winning strategy for Con on W; Case 2: b even).
• the condition 2b is simplified;
• an extra component λ, and the condition 1e, are added.
Given a strategy decomposition ω for Con, we inductively define a strategy σ(ω) for
Con in the following way:
• If b is odd and ω = ((R, ω ′ ), (T, τ ), B, λ), then the strategy proceeds in (possibly
infinitely many) rounds. Round i, for i = 1, 2, 3, . . . , involves the following steps:
1. if starting in R, follow σ(ω ′ ) for as long as staying in R;
2. if starting in T , or having arrived there from R, follow τ until B is reached;
3. once B is reached, follow λ for n + (2n + 3^n + 2)nC steps and proceed to
round i + 1.
• If b is even and ω = ((U, ω ′′ ), (T, τ ), (R, ω ′ )), then let:
σ(ω) = σ(ω ′′ ) ∪ τ ∪ σ(ω ′ ).
Lemma 2. If ω is a strategy decomposition of W for Con and W is a trap for Dis, then
σ(ω) is a winning strategy for Con from every vertex in W .
Proof. We proceed by induction on the number of vertices in W , omitting the base cases
(when R = ∅, or U = ∅, respectively), since they are analogous and simpler than the
inductive cases. We strengthen the inductive hypothesis by requiring that: If ω is a
strategy decomposition of W for Con and W is a trap for Dis, then σ(ω) is a winning
strategy for Con from every vertex in W and the sum of the costs of the edges in any
finite play consistent with σ(ω) is bounded by (nW + 3^{nW})C, where nW is the number of
vertices in W (and recall that C is the maximal cost on all the edges).
We consider two cases based on the parity of b. First, assume that b is even and let
ω = (U, ω ′′ ), (T, τ ), (R, ω ′ ) . Observe that a play consistent with σ(ω) = σ(ω ′′ ) ∪ τ ∪ σ(ω ′ )
either never leaves U , or if it does then after a finite number of steps (following the
reachability strategy τ ) it enters R and then never leaves it because R is a trap for Dis.
It follows that the play is winning for Con by the inductive hypothesis, because it is
eventually either consistent with strategy σ(ω ′′ ) or with σ(ω ′ ). Moreover, let us write nU
(resp. nT , nR ) for the number of vertices in U (resp. T , R). Every finite play consistent
with such a strategy can be decomposed into a play consistent with σ(ω ′′ ), a play going
from U to T , consistent with τ and reaching R (thus using at most nT + 1 edges) and a
play consistent with σ(ω ′ ) (any of these plays can be empty). Suppose that none of those
plays is empty (the other cases can be handled similarly). In particular, nU and nR are
smaller than nW . By inductive hypothesis, the sum of the costs of the edges in any of
such finite plays is bounded by (nU + 3^{nU})C + (nT + 1)C + (nR + 3^{nR})C, and:
(nU + 3^{nU})C + (nT + 1)C + (nR + 3^{nR})C ≤ (nU + nT + nR + 3·3^{nW−1})C ≤ (nW + 3^{nW})C
Next, assume that b is odd, and let ω = (R, ω ′ ), (T, τ ), B, λ be a b-decomposition.
Let us first prove that any infinite play consistent with σ(ω) is winning for Con. If after
a finite number of steps, the play reaches R and never leaves it, then σ(ω) is compatible
with σ(ω ′ ) which is winning for Con by induction hypothesis (because B is non-empty).
Otherwise, vertices in B or T are seen infinitely often. In that case, vertices in B are seen
infinitely often (by contradiction, if not, then necessarily we go through point 2. and 3. in
the strategy definition a finite number of times, and so after a finite number of steps, the
play is in point 1. forever). Because B is the set of vertices of highest priority b which is
odd, Con wins the parity game. Let us prove that the play has also negative mean-payoff.
Let us write nR (resp. nT , nB ) for the number of vertices in R (resp. T , B). By hypothesis
the play can be decomposed into (infinitely many) finite plays, each of them decomposed
into three consecutive (possibly empty) plays p1 , p2 and p3 as follows:
• p1 consists of vertices in R and is consistent with σ(ω ′ ) (point 1.),
• p2 goes from R to T , consists of vertices in T and is consistent with the reachability
strategy to reach B. Then it contains at most nT + 1 edges and is of cost at most
(nT + 1)C (point 2.),
• p3 is consistent with λ and uses nW + (2^{nW} + 3^{nW} + 2)nW C edges. A negative cycle
is thus necessarily reached and the sum of the costs of the edges of p3 is at most
nW C − (2^{nW} + 3^{nW} + 2)C (point 3.).
It is sufficient to prove that such a finite play p = p1 p2 p3 has negative mean-payoff.
By inductive hypothesis, the sum of the costs of the edges of such a play is at most
(nR + 3^{nR})C + (nT + 1)C + nW C − (2^{nW} + 3^{nW} + 2)C, which is negative.
It remains to prove that along a finite play the sum of the costs of the edges never exceeds (nW + 3^{nW})C, which is true using the decomposition above, the inductive hypothesis
and the fact that the sum of the costs of the edges on a play consistent with λ will never
exceed nW C. Thus, the maximum cost of such a finite play is (nR + 3^{nR} + nT + 1 + nW)C,
which is smaller than (nR + 3^{nW−1} + nT + 1 + 3^{nW−1})C, or again (nW + 3^{nW})C.
2.4 Existence of strategy decompositions
In the following lemma, we prove that any game can be partitioned into two sets of
vertices: one where Dis has a strategy decomposition and one where Con has one. Those
sets correspond to the winning sets for Dis and Con.
Lemma 3. There is a partition WDis and WCon of V, such that there is a strategy decomposition of WDis for Dis (provided WDis ≠ ∅) and a strategy decomposition of WCon for
Con (provided WCon ≠ ∅).
The proof of Lemma 3 follows the usual template of using a McNaughton-Zielonka
inductive argument, as adapted to mean-payoff parity games by Chatterjee et al. [7], and
then simplified for threshold mean-payoff parity games by Chatterjee et al. [6, Appendix C].
Proof. The proof is by induction on the size of the game graph. We strengthen the
induction hypothesis by also requiring that WDis and WCon are traps in V for respectively
Con and Dis. The base case of one vertex is straightforward. Let b be the highest vertex
priority, and let B be the set of vertices of the highest priority b. We consider two cases
depending on the parity of b.
The first case is when b is even. Let T be the set of vertices (not including vertices
in B) from which Dis has a strategy to reach a vertex in B, and let τ be a corresponding
positional reachability strategy.
Let R = V \ (B ∪ T). By the inductive hypothesis, there is a partition W′Dis and
W′Con of R, such that there is a strategy decomposition ω′Dis of W′Dis for Dis, and there is a
strategy decomposition ω′Con of W′Con for Con. If W′Con = ∅ then ω′Dis is a b′-decomposition
of R for Dis, where b′ < b, and hence ((R, ω′Dis), (T, τ), B) is a b-decomposition of V for
Dis. So WDis = V and WCon = ∅ fulfils the conditions of the lemma.
If W′Con ≠ ∅, then let T′ be the set of vertices (not including vertices in W′Con) from
which Con has a strategy to reach a vertex in W′Con, and let τ′ be a corresponding positional
reachability strategy. Let U = V \ (W′Con ∪ T′). By the inductive hypothesis, there is a
partition W′′Dis and W′′Con of U, such that there is a strategy decomposition ω′′Dis of W′′Dis
for Dis, and a strategy decomposition ω′′Con of W′′Con for Con. Moreover, W′′Dis and W′′Con
are traps for respectively Con and Dis in U.
We claim that W′′Dis and W′′Con ∪ T′ ∪ W′Con is a partition of V, traps for respectively Con
and Dis, such that there is a strategy decomposition of the former for Dis, and there is a
strategy decomposition of the latter for Con. The former is straightforward: W′′Dis is a trap
for Con in U which is itself a trap for Con in V by construction, so W′′Dis is a trap for Con
in V. Moreover, ω′′Dis is a strategy decomposition of W′′Dis for Dis. For the latter, W′′Con ∪
T′ ∪ W′Con is a trap for Dis by construction and we claim that ω is a strategy decomposition
of W′′Con ∪ T′ ∪ W′Con for Con, where ω = ((W′′Con, ω′′Con), (T′, τ′), (W′Con, ω′Con)). Indeed,
W′Con is non-empty, is a trap for Dis by induction hypothesis and does not contain any
vertices of priority b by construction. Thus, ω′Con is a b′-decomposition of W′Con for Con
where b′ < b. Similarly, by induction hypothesis, ω′′Con is a b′′-decomposition of W′′Con for
Con where b′′ ≤ b.
The second case is when b is odd. Let R be the set of vertices winning for Dis for the
mean-payoff game.
First, suppose that R is non-empty, and let U = V \ R. By the inductive hypothesis,
there is a partition W′Dis and W′Con of U, such that there is a strategy decomposition ω′Dis
of W′Dis for Dis, and there is a strategy decomposition ω′Con of W′Con for Con. Moreover,
W′Dis and W′Con are traps in U for respectively Con and Dis. We claim that W′Con and
W′Dis ∪ R is a partition of V, traps for respectively Dis and Con, such that there is a
strategy decomposition of the former for Con, and there is a strategy decomposition of
the latter for Dis. The former is straightforward: W′Con is a trap for Dis in U which is itself
a trap for Dis in V by construction (because R is a winning set for Dis), so W′Con is a trap
for Dis in V. Moreover, ω′Con is a strategy decomposition of W′Con for Con. For the latter,
W′Dis ∪ R is a trap for Con by construction and we claim that ω is a strategy decomposition
of W′Dis ∪ R for Dis, where ω = ((W′Dis, ω′Dis), (∅, ∅), (R, ω′)), with ω′ a positional
strategy for Dis that is mean-payoff winning for her on R. Indeed, R is non-empty, is a
trap for Con by definition and ω′ is a mean-payoff winning positional strategy for Dis on
it. Moreover, by induction hypothesis, ω′Dis is a b′-decomposition of W′Dis for Dis where
b′ ≤ b.
Suppose now that R is empty, that is to say that there exists λ, a positional strategy
for Con that is mean-payoff winning for him on V. Let T be the set of vertices (not
including vertices in B) from which Con has a strategy to reach a vertex in B, and let τ
be a corresponding positional reachability strategy.
Let R′ = V \ (B ∪ T). By the inductive hypothesis, there is a partition W′Dis and
W′Con of R′, such that there is a strategy decomposition ω′Dis of W′Dis for Dis, and there is a
strategy decomposition ω′Con of W′Con for Con.
If W′Dis = ∅ then ω′Con is a b′-decomposition of R′ for Con, where b′ < b, and thus
ω = ((R′, ω′Con), (T, τ), B, λ) is a strategy decomposition of V for Con.
Otherwise (if W′Dis ≠ ∅), let T′ be the set of vertices (not including vertices
in W′Dis) from which Dis has a strategy to reach a vertex in W′Dis, and let τ′ be a corresponding positional reachability strategy. Let U′ = V \ (W′Dis ∪ T′). By the inductive
hypothesis, there is a partition W′′Dis and W′′Con of U′, such that there is a strategy decomposition ω′′Dis of W′′Dis for Dis, and a strategy decomposition ω′′Con of W′′Con for Con.
We claim that W′′Con and W′′Dis ∪ T′ ∪ W′Dis is a partition of V, traps for respectively
Dis and Con, such that there is a strategy decomposition of the former for Con, and there
is a strategy decomposition of the latter for Dis. The former is straightforward: W′′Con
is a trap for Dis in U′ which is itself a trap for Dis in V by construction, so W′′Con is a
trap for Dis in V. Moreover, ω′′Con is a strategy decomposition of W′′Con for Con. For the
latter, W′′Dis ∪ T′ ∪ W′Dis is a trap for Con by construction and we claim that ω is a strategy
decomposition of W′′Dis ∪ T′ ∪ W′Dis for Dis, where ω = ((W′′Dis, ω′′Dis), (T′, τ′), (W′Dis, ω′Dis)).
Indeed, W′Dis is non-empty, is a trap for Con by induction hypothesis and does not contain
any vertices of priority b by construction. Thus, ω′Dis is a b′-decomposition of W′Dis for Dis
where b′ < b. Similarly, by induction hypothesis, ω′′Dis is a b′′-decomposition of W′′Dis for
Dis where b′′ ≤ b.
Observe that Lemmas 1, 2, and 3 form a self-contained argument to establish both
determinacy of threshold mean-payoff parity games (from every vertex, one of the players
has a winning strategy), and membership of the problem of deciding the winner both in
NP and in co-NP. For the latter, it suffices to note that strategy decompositions can be
described in a polynomial number of bits, and it can be routinely checked in small polynomial time whether a proposed strategy decomposition for either of the players satisfies all
the conditions in the corresponding definition. The NP and co-NP membership has been
first established by Chatterjee and Doyen [4]; we merely give an alternative proof.
Corollary 1 (Chatterjee and Doyen [4]). The problem of deciding the winner in mean-payoff parity games is both in NP and in co-NP.
3 Mean-payoff parity progress measures
In this section we introduce the other key concept—mean-payoff parity progress measures—that plays the critical role in achieving our main technical result—the first pseudo-quasi-polynomial algorithm for solving mean-payoff parity games. In Lemmas 4 and 5 we
establish that mean-payoff parity progress measures witness existence of winning strategies for Dis, by providing explicit translations between them and strategy decompositions
for Dis.
We stress that the purpose of introducing yet another concept of witnesses for winning strategies for Dis is to shift technical focus from highlighting the recursive strategic
structure of winning sets in strategy decompositions, to an order-theoretic formalization
that makes the recursive structure be reflected in the concept of ordered trees. The order-theoretic formalization then allows us—in Section 4—to apply the combinatorial result
on succinct coding of ordered trees by Jurdziński and Lazić [17], paving the way to the
pseudo-quasi-polynomial algorithm.
3.1 The definition
A progress measure maps every vertex to an element of a linearly ordered set. Edges
along which those elements decrease (with respect to a further order defined below) are called progressive, and an infinite path consisting only of progressive edges is winning for Dis. Then,
we can derive a winning strategy for Dis if she can always follow a progressive edge and if
Con has no choice other than following a progressive edge.
Recall the assumption that d—the upper bound on the vertex priorities—is even.
A progress measurement is a pair (⟨m_{d−1}, m_{d−3}, . . . , m_ℓ⟩, e), where:
• ℓ is odd and 1 ≤ ℓ ≤ d + 1 (note that if ℓ = d + 1 then ⟨m_{d−1}, m_{d−3}, . . . , m_ℓ⟩ is the
empty sequence ⟨⟩);
• m_i is an element of a linearly ordered set (for simplicity, we write ≤ for the order
relation), for each odd i such that ℓ ≤ i ≤ d − 1;
• e is an integer such that 0 ≤ e ≤ nC, or e = ∞.
A progress labelling (µ, ϕ) maps vertices to progress measurements in such a way that if
vertex v is mapped to
(µ(v), ϕ(v)) = (⟨m_{d−1}, m_{d−3}, . . . , m_ℓ⟩, e)
then
• ℓ ≥ π(v); and
• if e = ∞ then ℓ is the smallest odd number such that ℓ ≥ π(v).
For every priority p, 1 ≤ p ≤ d, we obtain a p-truncation ⟨m_{d−1}, m_{d−3}, . . . , m_ℓ⟩|_p
of the sequence ⟨m_{d−1}, m_{d−3}, . . . , m_ℓ⟩, by removing the components corresponding to all
odd priorities smaller than p. For example, if we fix d = 8 then we have ⟨a, b, c⟩|_8 = ⟨⟩,
⟨a, b, c⟩|_6 = ⟨a⟩, and ⟨a, b, c⟩|_3 = ⟨a, b, c⟩|_2 = ⟨a, b, c⟩. We compare sequences using the
lexicographic order; for simplicity, and overloading notation, we write ≤ to denote it. For
example, ⟨a⟩ < ⟨a, b⟩, and ⟨a, b, c⟩ < ⟨a, d⟩ if b < d.
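As an illustration, here is a small Python sketch (an assumption-level rendering, not part of the paper) that encodes such a sequence as a list ordered by decreasing odd priority and reproduces the d = 8 examples above; the element written d in the text is renamed e below to avoid clashing with the priority bound.

```python
# A small illustrative sketch (not from the paper): sequences are lists whose entries
# correspond to the odd priorities d-1, d-3, ... in decreasing order.

def truncate(seq, p, d):
    """The p-truncation: keep the components whose odd priority is at least p."""
    return [m for m, pr in zip(seq, range(d - 1, 0, -2)) if pr >= p]

# Reproducing the d = 8 example, with components a < b < c < e.
a, b, c, e = 1, 2, 3, 4
assert truncate([a, b, c], 8, 8) == []             # <a,b,c>|_8 = <>
assert truncate([a, b, c], 6, 8) == [a]            # <a,b,c>|_6 = <a>
assert truncate([a, b, c], 3, 8) == [a, b, c]      # <a,b,c>|_3 = <a,b,c>
assert truncate([a, b, c], 2, 8) == [a, b, c]      # <a,b,c>|_2 = <a,b,c>

# Python's list comparison is lexicographic and treats a proper prefix as smaller,
# matching the order used in the text:
assert [a] < [a, b] and [a, b, c] < [a, e]         # <a> < <a,b>; <a,b,c> < <a,e> since b < e
```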
Let (µ, ϕ) be a progress labelling. Observe that—by definition—µ(v)|_{π(v)} = µ(v), for
every vertex v ∈ V. We say that an edge (v, u) ∈ E is progressive in (µ, ϕ) if:
1. µ(v) > µ(u)|_{π(v)}; or
2. µ(v) = µ(u)|_{π(v)}, π(v) is even, and ϕ(v) = ∞; or
3. µ(v) = µ(u), ϕ(v) ≠ ∞, and ϕ(v) + c(v, u) ≥ ϕ(u).
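The three conditions can be tested directly on such a representation. The following hedged sketch (the data layout, with mu, phi, pi and cost as dictionaries, is an assumption introduced here rather than the paper's code) is one way to do it.

```python
# Hedged sketch (assumed data layout, not the paper's code) of the progressiveness test.
# mu maps vertices to lists ordered by decreasing odd priority, phi maps vertices to an
# integer or INF, pi gives vertex priorities, cost gives edge costs, d is the priority bound.

INF = float("inf")

def truncate(seq, p, d):
    """The p-truncation: keep the components whose odd priority is at least p."""
    return [m for m, pr in zip(seq, range(d - 1, 0, -2)) if pr >= p]

def progressive(v, u, mu, phi, pi, cost, d):
    t = truncate(mu[u], pi[v], d)                               # mu(u) truncated at pi(v)
    if mu[v] > t:                                               # condition 1
        return True
    if mu[v] == t and pi[v] % 2 == 0 and phi[v] == INF:         # condition 2
        return True
    return mu[v] == mu[u] and phi[v] != INF and phi[v] + cost[(v, u)] >= phi[u]   # condition 3
```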
We can represent tuples as nodes in a tree where the components of the tuple represent
the branching directions in the tree to go from the root to the node. For example, a tuple
⟨a, b, c⟩ corresponds to the node reached from the root by first moving to the ath child of
the root, then to the bth child of that node, and finally to the cth child of the latter. This way,
the notion of progressive edges can be seen on a tree as in Figure 3.
A progress labelling (µ, ϕ) is a progress measure if:
• for every vertex owned by Dis, there is at least one outgoing edge that is progressive
in (µ, ϕ); and
• for every vertex owned by Con, all outgoing edges are progressive in (µ, ϕ).
[Figure 3: Conditions for an edge (v, u) to be progressive, pictured on the tree of progress
measurements. Siblings are ordered by ≤, the smallest child drawn on the right and the
greatest on the left. Condition 1: µ(u) lies above or to the right of µ(v). Condition 2: π(v)
is even, ϕ(v) = ∞, and µ(u) belongs to the subtree rooted at µ(v). Condition 3: µ(u) = µ(v),
ϕ(u) ∈ Z, and ϕ(v) + c(v, u) ≥ ϕ(u).]
In the next two sections, we prove that there is a strategy decomposition for Dis if and
only if there is a progress measure.
3.2 From progress measures to strategy decompositions
Lemma 4. If there is a progress measure then there is a strategy decomposition of V for
Dis.
In the proof we will use the following fact derived from the result of Brim et al. [2]:
if all the edges in an infinite path are progressive and fulfill condition 3. of the definition,
then the mean payoff of this path is non-negative (and thus winning for Dis).
Proof. We proceed by induction on the number of vertices in the game graph. Let b ≤ d
be the highest priority appearing in the game. The base case is easy: if b is even, then by
setting B to be the unique vertex in the game, we obtain a strategy decomposition of V
for Dis. If b is odd, then an edge from the unique vertex to itself can only be progressive
if it fulfills condition 3. of the definition of progressive edges, which means that a strategy
taking only this edge is winning for Dis for the mean-payoff game. Thus by setting R to
be this unique vertex, we obtain a strategy decomposition of V for Dis.
Let us now consider the general case. First, suppose that b is even. Let B be the set
of the vertices of priority b. Let T be the set of vertices from which Dis has a reachability
strategy to B, τ be this positional strategy and let R = V \ (B ∪ T ). Because, by
construction, there is no edge from a vertex in R owned by Dis to a vertex in B ∪ T , the
progress measure on V gives also a progress measure on R when restricted to its vertices.
Let ω be a b′ -decomposition of R for Dis that exists by the inductive hypothesis. Note that
b′ < b because the biggest priority in R is smaller than b. It follows that (R, ω), (T, τ ), B
is a strategy decomposition for Dis in V .
Suppose now that b is odd. Let R be the set of vertices labelled by the smallest tuple:
R = {v ∈ V : µ(v) ≤ µ(u) for all u ∈ V }. (If we pictured the tuples on a tree as in
Figure 3, those would be the vertices that are mapped to the rightmost-top node in the
tree among the nodes to which at least one vertex is mapped.) Let R′ be the subset of R of those
vertices having a finite ϕ: R′ = {v ∈ R : ϕ(v) ≠ ∞}.
Suppose first that R′ ≠ ∅. An edge going out from a vertex in R′ can only be progressive
if it fulfills condition 3. in the definition. It then has to go to a vertex of R′ too. Thus, R′
is a trap for Con, and Dis has a winning strategy ω ′ in R′ for the mean-payoff game.
Let T be the set of vertices from which Dis has a strategy to reach R′ and let τ be this
positional reachability strategy. Let U = V \ (R′ ∪ T ). Because, by construction, there is
no edge from a vertex in U owned by Dis to a vertex in R′ ∪ T , then the progress measure
on V gives also a progress measure on U when restricted to its vertices. We can then
apply the inductive hypothesis and obtain a strategy decomposition ω of U for Dis. Note
that ((U, ω), (T, τ), (R′, ω′)) is a strategy decomposition of V for Dis.
Suppose now that R′ = ∅. The non-empty set R contains only vertices v such that
ϕ(v) = ∞. Then, by definition and because all those vertices are associated with the same
tuple, they must all have priority b′ or b′ + 1 for some even number b′ .
Any edge going out from a vertex of R is progressive if and only if it fulfills condition 2. of the definition. Thus, the priority of all the vertices in R has to be even and is
consequently b′ with b′ < b.
Let R′′ = {u ∈ V : µ(v) = µ(u)|π(v) for v ∈ R}. (If we picture the tuples on a tree as in
Figure 3, those are the vertices that are mapped to the nodes in the subtree rooted in the
node corresponding to R.) By definition, the priority of all those vertices is also smaller
than b. Moreover, an edge going out from a vertex in R′′ can only be progressive if it goes
to a vertex in R′′ too. So, R′′ is a trap for Con and an edge from a vertex in R′′ owned by
Dis to a vertex not in R′′ cannot be progressive. So the progress measure on V gives also
a progress measure on R′′ when restricted to its vertices. By the inductive hypothesis,
there is a strategy decomposition ω ′′ of R′′ for Dis. Let T be the set of vertices from
which Dis has a strategy to reach R′′ and let τ be a corresponding positional reachability
strategy. Let U = V \ (R′′ ∪ T ). Because, by construction, there is no edge in U from
a vertex owned by Dis to a vertex in R′′ ∪ T , the progress measure on V gives also a
progress measure on U when restricted to its vertices. By the inductive hypothesis,
there
is a strategy decomposition ω of U for Dis. Note that (U, ω), (T, τ ), (R′′ , ω ′′ ) is a strategy
decomposition of V for Dis.
3.3 From strategy decompositions to progress measures
Lemma 5. If there is a strategy decomposition of V for Dis then there is a progress
measure.
Proof. The proof is by induction on the size of the game graph. Let b be the biggest vertex
priority in V . We strengthen the inductive hypothesis by requiring that the progress
measure (µ, ϕ) whose existence is claimed in the lemma is such that all sequences in the
image of µ have the same prefix corresponding to indices k, such that k > b. We need to
consider two cases based on the parity of b.
[Figure 4: Construction of a progress measure when b is even (the common prefix is not pictured).]
Suppose first that b is even. Let ω = (R, ω ′ ), (T, τ ), B be a b-decomposition of V for
Dis. Since B ≠ ∅, by the inductive hypothesis there is a progress measure (µ′, ϕ′) on R.
For every vertex v ∈ T , define its τ -distance to B to be the largest number of edges on a
path starting at v, consistent with τ , and whose only vertex in B is the last one. Let k
be the largest such τ -distance, and we define Ti , 1 ≤ i ≤ k, to be the set of vertices in T
whose τ -distance to B is i.
Let ⟨m_{d−1}, m_{d−3}, . . . , m_{b+1}⟩ be the common prefix of all sequences in the image of µ′.
Let t_1, t_2, . . . , t_k be elements of the linearly ordered set used in progress measurements,
such that for every r that is the component of a sequence in the image of µ′ corresponding
to priority b − 1, we have r > t_k > · · · > t_2 > t_1, and let t be a chosen element of the
linearly ordered set (it does not matter which one). Define the progress labelling (µ, ϕ)
for all vertices v ∈ V as follows:
(µ(v), ϕ(v)) = (µ′(v), ϕ′(v)) if v ∈ R,
(µ(v), ϕ(v)) = (⟨m_{d−1}, m_{d−3}, . . . , m_{b+1}, t_i, m_{b−3}, . . . , m_ℓ⟩, ∞) if v ∈ T_i, 1 ≤ i ≤ k,
(µ(v), ϕ(v)) = (⟨m_{d−1}, m_{d−3}, . . . , m_{b+1}⟩, ∞) if v ∈ B;
where ℓ is the smallest odd number ≥ π(v) and m_{b−3} = · · · = m_ℓ = t.
The progress labelling (µ, ϕ) as defined above is a desired progress measure. It is
illustrated as a tree in Figure 4.
Suppose now that b is odd. Let ω = ((U, ω′′), (T, τ), (R, ω′)) be a b-decomposition of V
for Dis. Define τ-distances, sets T_i, and elements t_i and t for 1 ≤ i ≤ k, in the analogous
way to the “even b” case, replacing set B by set R. By the inductive hypothesis, there is
a progress measure (µ′′, ϕ′′) on U, and let ⟨m_{d−1}, m_{d−3}, . . . , m_{b+2}⟩ be the common prefix
of all sequences in the image of µ′′. We define a progress labelling (µ, ϕ) for all vertices in
U ∪ T as follows:
(µ(v), ϕ(v)) = (µ′′(v), ϕ′′(v)) if v ∈ U,
(µ(v), ϕ(v)) = (⟨m_{d−1}, m_{d−3}, . . . , m_{b+2}, t_i, m_{b−2}, . . . , m_ℓ⟩, ∞) if v ∈ T_i, 1 ≤ i ≤ k,
where ℓ is the smallest odd number ≥ π(v) and m_{b−2} = · · · = m_ℓ = t.
If ω ′ is a b′ -decomposition of R for b′ < b (case 2(b)i), then by the inductive hypothesis,
there is a progress measure (µ′ , ϕ′ ) on R. Without loss of generality, assume that all
sequences in the images of µ′ and of µ′′ have the common prefix ⟨m_{d−1}, m_{d−3}, . . . , m_{b+2}⟩,
and that for all u and r that are the components of a sequence in the images of µ′′ and µ′ ,
respectively, corresponding to priority b, we have u > tk > tk−1 > · · · > t1 > r. Define
the progress labelling (µ, ϕ) for all vertices v ∈ R in the following way:
(µ(v), ϕ(v)) = (µ′(v), ϕ′(v)).
This is illustrated in Figure 5.
[Figure 5: Construction of a progress measure when b is odd, case 2(b)i.]
[Figure 6: Construction of a progress measure, case 2(b)ii.]
If, instead, ω′ is a positional strategy for Dis that is mean-payoff winning for her on R
(case 2(b)ii), then by the result of Brim et al. [2], there is an energy progress measure ϕ̂
for Dis on R. Let r′ be such that r′ < t_1, and define the progress labelling (µ, ϕ) for all
vertices v ∈ R in the following way:
(µ(v), ϕ(v)) = (⟨m_{d−1}, m_{d−3}, . . . , m_{b+2}, r′⟩, ϕ̂(v)).
This is illustrated in Figure 6.
The progress labelling (µ, ϕ) as defined above is a desired progress measure.
4 Computing progress measures by lifting
In this section, we give a so-called lifting algorithm which identifies the winning sets for
Dis and for Con by computing a progress measure on the winning set for Dis.
By the tree of a progress labelling (µ, ϕ), we mean the ordered tree whose nodes are
all prefixes of all sequences µ(v) as v ranges over the vertices of the game graph, and such
that every vertex v labels the node µ(v) of the tree. Let us say that progress labellings
(µ, ϕ) and (µ′ , ϕ′ ) are isomorphic if and only if their (partially labelled ordered) trees are
isomorphic and ϕ = ϕ′ .
We shall work with the following ordering on finite binary strings:
0s < ε,
ε < 1s,
bs < bs′ if and only if s < s′ ,
where ε denotes the empty string, b ranges over binary digits, and s, s′ range over binary
strings.
Recall that n is the number of vertices, and d (assumed even) is the number of priorities.
Let S_{n,d} be all sequences ⟨m_{d−1}, m_{d−3}, . . . , m_ℓ⟩ of binary strings such that:
• ℓ is odd and 1 ≤ ℓ ≤ d + 1;
• Σ_{i=ℓ}^{d−1} |m_i| ≤ ⌈lg n⌉;
and let us call a progress measurement, labelling or measure succinct if and only if all the
sequences ⟨m_{d−1}, m_{d−3}, . . . , m_ℓ⟩ involved are members of S_{n,d}.
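For concreteness, the following is a small sketch (the representation and function names are assumptions introduced here, not the paper's code) of the string order above and of membership in S_{n,d}.

```python
from math import ceil, log2

def str_less(s, t):
    """The order on finite binary strings: 0s < eps, eps < 1s, and bs < bs' iff s < s'."""
    if s == t:
        return False
    if s == "":
        return t[0] == "1"            # eps < 1s; eps is not below 0s
    if t == "":
        return s[0] == "0"            # 0s < eps
    if s[0] == t[0]:
        return str_less(s[1:], t[1:]) # bs < bs' iff s < s'
    return s[0] == "0" and t[0] == "1"

def in_S(seq, n, d):
    """Membership in S_{n,d}; seq lists m_{d-1}, m_{d-3}, ..., m_ell, so ell = d+1-2*len(seq)."""
    ell = d + 1 - 2 * len(seq)
    return 1 <= ell <= d + 1 and sum(len(m) for m in seq) <= ceil(log2(n))

assert str_less("0", "") and str_less("", "1") and str_less("10", "11")
assert in_S(["0", "", "1"], n=8, d=8) and not in_S(["0110"], n=8, d=8)
```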
Lemma 6. For every progress labelling, there exists a succinct isomorphic one.
Proof. This is an immediate consequence of [17, Lemma 1], since for every progress labelling, its tree is of height at most d/2 and has at most n leaves.
Corollary 2. Lemmas 4 and 5 hold when restricted to succinct progress measures.
We now order progress measurements lexicographically:
(⟨m_{d−1}, m_{d−3}, . . . , m_ℓ⟩, e) < (⟨m′_{d−1}, m′_{d−3}, . . . , m′_{ℓ′}⟩, e′)
if and only if
either ⟨m_{d−1}, m_{d−3}, . . . , m_ℓ⟩ < ⟨m′_{d−1}, m′_{d−3}, . . . , m′_{ℓ′}⟩,
or ⟨m_{d−1}, m_{d−3}, . . . , m_ℓ⟩ = ⟨m′_{d−1}, m′_{d−3}, . . . , m′_{ℓ′}⟩ and e < e′,
and we extend them by a new greatest progress measurement (⊤, ∞). We then revise the
set of progress labellings to allow the extended progress measurements, and we (partially)
order it pointwise:
(µ, ϕ) ≤ (µ′, ϕ′) if and only if, for all v, (µ(v), ϕ(v)) ≤ (µ′(v), ϕ′(v)).
We also revise the definition of a progress measure by stipulating that an edge (v, u)
which involves the progress measurement (⊤, ∞) is progressive if and only if the progress
measurement of v is (⊤, ∞).
For any succinct progress labelling (µ, ϕ) and edge (v, u), we set lift(µ, ϕ, v, u) to be
the minimum succinct progress measurement (⟨m_{d−1}, m_{d−3}, . . . , m_ℓ⟩, e) which is at least
(µ(v), ϕ(v)) and such that (v, u) is progressive in the updated succinct progress labelling
(µ[v ↦ ⟨m_{d−1}, m_{d−3}, . . . , m_ℓ⟩], ϕ[v ↦ e]).
For any vertex v, we define an operator Liftv on succinct progress labellings as follows:
Liftv(µ, ϕ)(w) = (µ(w), ϕ(w)) if w ≠ v,
Liftv(µ, ϕ)(w) = min_{(v,u)∈E} lift(µ, ϕ, v, u) if Dis owns w = v,
Liftv(µ, ϕ)(w) = max_{(v,u)∈E} lift(µ, ϕ, v, u) if Con owns w = v.
Theorem 1 (Correctness of lifting algorithm).
1. The set of all succinct progress labellings ordered pointwise is a complete lattice.
2. Each operator Liftv is inflationary and monotone.
3. From every succinct progress labelling (µ, ϕ), every sequence of applications of operators Liftv eventually reaches the least simultaneous fixed point of all Liftv that is
greater than or equal to (µ, ϕ).
1. Initialise (µ, ϕ) to the least succinct progress labelling (v ↦ ⟨⟩, v ↦ 0).
2. While Liftv(µ, ϕ) ≠ (µ, ϕ) for some v, update (µ, ϕ) to become Liftv(µ, ϕ).
3. Return the set WDis = {v : (µ(v), ϕ(v)) ≠ (⊤, ∞)} of winning positions for Dis.
Table 1: The lifting algorithm
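A schematic Python rendering of Table 1 is given below. It is a sketch under the assumption that the operator lift(µ, ϕ, v, u), the top element TOP = (⊤, ∞) and a key function compatible with the order on succinct measurements are supplied externally, since implementing them efficiently is exactly what Sections 3 and 4 develop; all names in the snippet are assumptions introduced for illustration.

```python
def lifting_algorithm(vertices, edges, owner, lift, TOP, key):
    """Sketch of Table 1.  `lift(mu, phi, v, u)` returns the minimum succinct measurement
    that is at least (mu[v], phi[v]) and makes (v, u) progressive; `key` maps a measurement
    to a value whose natural Python order agrees with the order on measurements."""
    mu = {v: () for v in vertices}            # 1. least labelling: every v gets (<>, 0)
    phi = {v: 0 for v in vertices}

    def lift_v(v):
        out = [lift(mu, phi, v, u) for (x, u) in edges if x == v]
        if not out:
            return (mu[v], phi[v])
        pick = min if owner[v] == "Dis" else max
        return pick(out, key=key)

    changed = True                            # 2. iterate the Lift_v operators to a fixed point
    while changed:
        changed = False
        for v in vertices:
            new = lift_v(v)
            if key(new) != key((mu[v], phi[v])):
                mu[v], phi[v] = new
                changed = True

    # 3. vertices never lifted all the way to TOP are winning for Dis
    return {v for v in vertices if (mu[v], phi[v]) != TOP}
```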
4. A succinct progress labelling (µ, ϕ) is a simultaneous fixed point of all operators Liftv
if and only if it is a succinct progress measure.
5. If (µ∗, ϕ∗) is the least succinct progress measure, then {v : (µ∗(v), ϕ∗(v)) ≠ (⊤, ∞)}
is the set of winning positions for Dis.
Proof.
1. The partial order of all succinct progress labellings is the pointwise product
of n copies of the finite linear order of all succinct progress measurements.
2. We have inflation, i.e. Liftv(µ, ϕ)(w) ≥ (µ(w), ϕ(w)), by the definitions of Liftv(µ, ϕ)(w)
and lift(µ, ϕ, v, u).
For monotonicity, supposing (µ, ϕ) ≤ (µ′ , ϕ′ ), it suffices to show that, for every
edge (v, u), we have lift(µ, ϕ, v, u) ≤ lift(µ′ , ϕ′ , v, u), which is in turn implied by
the straightforward observation that, whenever an edge is progressive with respect
to a progress labelling, it remains progressive after any lessening of the progress
measurement of its target vertex.
3. This holds for any family of inflationary monotone operators on a finite complete
lattice. Consider any such maximal sequence from (µ, ϕ). It is an upward chain
from (µ, ϕ) to some (µ∗ , ϕ∗ ) which is a simultaneous fixed point of all the operators.
For any (µ′ , ϕ′ ) ≥ (µ, ϕ) which is also a simultaneous fixed point, a simple induction
confirms that (µ∗ , ϕ∗ ) ≤ (µ′ , ϕ′ ).
4. Here we have a rewording of the definition of a succinct progress measure.
5. Let W = {v : (µ∗(v), ϕ∗(v)) ≠ (⊤, ∞)}. The set of winning positions for Dis is
contained in W by Lemma 3, Lemma 5 and Corollary 2, because (µ∗ , ϕ∗ ) is the least
succinct progress measure.
Since (µ∗, ϕ∗) is a progress measure, we have that, for every progressive edge (v, u),
if (µ∗(v), ϕ∗(v)) ≠ (⊤, ∞) then (µ∗(u), ϕ∗(u)) ≠ (⊤, ∞). In order to show that Dis
has a winning strategy from every vertex in W , it remains to apply Lemmas 4 and
1 to the subgame consisting of the vertices in W .
Lemma 7 (Jurdziński and Lazić [17]). Depending on the asymptotic growth of d as a
function of n, the size of the set Sn,d is as follows:
1. O(n^{1+o(1)}) if d = o(log n);
2. Θ(n^{lg(δ+1)+lg(e_δ)+1} · √(log n)) if d/2 = ⌈δ lg n⌉, for some positive constant δ, and where e_δ = (1 + 1/δ)^δ;
3. O(d · n^{lg(d/lg n)+lg e+o(1)}) if d = ω(log n).
Theorem 2 (Complexity of lifting algorithm). Depending on the asymptotic growth of d
as a function of n, the running time of the algorithm is as follows:
1. O(m n^{2+o(1)} C) if d = o(log n);
2. O(m n^{lg(δ+1)+lg(e_δ)+2} C · log d · √(log n)) if d ≤ 2⌈δ lg n⌉, for some positive constant δ;
3. O(d m n^{lg(d/lg n)+2.45} C) if d = ω(log n).
The algorithm works in space O(n · log n · log d).
Proof. The work space requirement is dominated by the number of bits needed to store a
single succinct progress labelling, which is at most n(⌈lg n⌉⌈lg d⌉ + ⌈lg(nC)⌉).
Since bounded-depth successors of elements of S_{n,d} are computable in time O(log n · log d) (cf. the proof of [17, Theorem 7]), the Liftv operators can be implemented to work in
time O(deg(v) · (log n · log d + log C)). It then follows, observing that the algorithm lifts
each vertex at most |S_{n,d}|(nC + 1) times, that its running time is bounded by
O( Σ_{v∈V} deg(v) · (log n · log d + log C) · |S_{n,d}| · (nC + 1) ) = O( m n C (log n · log d + log C) |S_{n,d}| ).
From there, the various stated bounds are obtained by applying Lemma 7, and by suppressing some of the multiplicative factors that are logarithmic in the bit-size of the input.
Suppressing the log C factor is justified by using the unit-cost RAM model, which is standard in algorithm analysis. The reasons for suppressing the log n and log d
factors are more varied: in case 1, they are absorbed by the o(1) term in the exponent
of n, and in case 3, they are absorbed in the 2.45 term in the exponent of n, because
lg e < 1.4427.
5 From winning sets to strategy decompositions for Con
The pseudo-quasi-polynomial lifting algorithm computes the least progress measure and
hence, by Lemmas 4 and 1, it can be easily adapted to synthesize a winning strategy for
Dis from all vertices in her winning set. In this section we tackle the problem of strategy
synthesis for Con. By (the proof of) Lemma 2, in order to synthesize a winning strategy
for Con, it suffices to compute a strategy decomposition for him. We argue that this can
also be achieved in pseudo-quasi-polynomial time.
Theorem 3 (Complexity of computing strategy decompositions). There is a pseudo-quasi-polynomial algorithm that computes strategy decompositions for both players on their
winning sets.
In order to establish that strategy decompositions for Con can be computed in pseudo-quasi-polynomial time, it suffices to prove the following lemma, because the polynomial-time oracle algorithm becomes a pseudo-quasi-polynomial algorithm, once the oracle for
computing winning strategies in mean-payoff games is replaced by a pseudo-polynomial
algorithm [22, 2, 8], and the oracle for computing the winning sets in mean-payoff parity
games is replaced by the pseudo-quasi-polynomial procedure from Section 4.
Lemma 8. There is a polynomial-time algorithm, with oracles for computing winning
strategies in mean-payoff games and for computing winning sets in mean-payoff parity
games, that computes a strategy decomposition for Con of his winning set.
Proof. Without loss of generality, we may assume that Con has a winning strategy from
every vertex in V , since a single call to the oracle allows us to reduce V to the subgame
corresponding to the winning set for Con.
Below, we describe a recursive procedure for computing a strategy decomposition for
Con of the set of all vertices, that has a similar structure to the inductive proof of Lemma 3.
In parallel with the description of the recursive procedure, we elaborate an inductive proof
that it does indeed compute a strategy decomposition for Con on V .
Note that our procedure avoids incurring the penalty of adding to its running time a
factor that is exponential in the number of distinct vertex priorities, by repeatedly using
the oracle for computing the winning sets in appropriately chosen subgames. We give a
detailed analysis of the worst-case running time at the end of this proof.
Let B be the set of vertices of the highest priority b; let T be the set of vertices (not
including vertices in B) from which Dis has a strategy to reach a vertex in B; let τ be a
corresponding positional reachability strategy; and let R = V \ (B ∪ T ). We consider two
cases, depending on the parity of b.
Even b. Call the oracle to obtain the partition WCon and WDis of R, the winning sets for
Con and for Dis, respectively, in the subgame R. We argue that WCon ≠ ∅. Otherwise, by
Lemma 3, there is a strategy decomposition ω of R for Dis, and hence (R, ω), (T, τ ), B
is a strategy decomposition of V for Dis, which, by Lemma 1, contradicts the assumption
that Con has a winning strategy from every vertex.
Let T ′ be the set of vertices (not including vertices in WCon ) from which Con has a
strategy to reach a vertex in WCon , and let τ ′ be a corresponding positional reachability
strategy, and let U = V \ (WCon ∪ T ′ ). By the inductive hypothesis, a recursive call of
our procedure on WCon will produce a strategy decomposition ω ′ of WCon for Con, and
another recursive call of the procedure on U will produce a strategy decomposition ω′′ of U
for Con. We claim that ((U, ω′′), (T′, τ′), (WCon, ω′)) is a strategy decomposition of V for
Con.
Odd b. Call the oracle for computing positional winning strategies in mean-payoff games
to obtain a positional strategy λ for Con that is mean-payoff winning for him on V ; such a
strategy exists because Con has a mean-payoff parity winning strategy from every vertex.
Since R is a trap for Con, it must be the case that Con has a winning strategy from
every vertex in the subgame R. By the inductive hypothesis, a recursive call of our
procedure on R will produce a strategy decomposition ω ′ of R for Con. We claim that
(R, ω ′ ), (T, τ ), B, λ is a strategy decomposition of V for Con.
It remains to argue that the recursive procedure described above works in polynomial
time in the worst case. Observe that in both cases considered above, a call of the procedure
on a game results in two or one recursive calls, respectively. In both cases, the recursive
calls are applied to subgames with strictly fewer vertices, and—crucially for the complexity
analysis—in the former case, the two recursive calls are applied to subgames on disjoint
sets of vertices. Additional work (other than recursive calls and oracle calls) in both cases
can be bounded by O(m), since the time needed is dominated by the worst case bound on
the computation of reachability strategies. Overall, the running time function T (n) of the
recursive procedure, where n is the number of vertices in the input game graph, satisfies
the following recurrence:
T (n) ≤ T (n′ ) + T (n′′ ) + O(m),
where n′ + n′′ < n,
and hence T (n) = O(nm).
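For completeness, the following added remark (not part of the original text) spells out why the recurrence gives the stated bound, writing N(n) for the number of recursive calls made on a game with n vertices.

```latex
% Added remark (hedged): the recursion tree has fewer than 2n nodes, because
\[
   N(n) \le N(n') + N(n'') + 1 \ \text{with}\ n' + n'' < n
   \;\Longrightarrow\; N(n) \le 2n - 1
   \;\Longrightarrow\; T(n) \le N(n)\cdot O(m) = O(nm).
\]
```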
6 Conclusion
Our main result is the first pseudo-quasi-polynomial algorithm for computing the values
of mean-payoff parity games. The main technical tools that we introduce to achieve the
main result are strategy decompositions and progress measures for the threshold version
of mean-payoff games. We believe that our techniques can be adapted to also produce
optimal strategies for both players (i.e., the strategies that secure the value that we show
how to compute). Another direction for future work is improving the complexity of solving
stochastic mean-payoff parity games [5].
Acknowledgements
This research has been supported by the EPSRC grant EP/P020992/1 (Solving Parity
Games in Theory and Practice).
References
[1] P. Bouyer, N. Markey, J. Olschewski, and M. Ummels. Measuring permissiveness in
parity games: Mean-payoff parity games revisited. In ATVA, pages 135–149, 2011.
[2] L. Brim, J. Chaloupka, L. Doyen, R. Gentilini, and J.-F. Raskin. Faster algorithms
for mean-payoff games. Form. Methods Syst. Des., 38(2):97–118, 2011.
[3] C. S. Calude, S. Jain, B. Khoussainov, W. Li, and F. Stephan. Deciding parity games
in quasipolynomial time. In STOC, pages 252–263, 2017.
[4] K. Chatterjee and L. Doyen. Energy parity games. Theoretical Computer Science,
458:49–60, 2012.
[5] K. Chatterjee, L. Doyen, H. Gimbert, and Y. Oualhadj. Perfect-information stochastic
mean-payoff parity games. In FOSSACS, pages 210–225, 2014.
[6] K. Chatterjee, M. Henzinger, and A. Svozil. Faster algorithms for mean-payoff parity
games. In MFCS, pages 39:1–39:17, 2017.
[7] K. Chatterjee, T. A. Henzinger, and M. Jurdziński. Mean-payoff parity games. In
LICS, pages 178–187, 2005.
[8] C. Comin and R. Rizzi. Improved pseudo-polynomial bound for the value problem
and optimal strategy synthesis in mean payoff games. Algorithmica, 77(4):995–1021,
2017.
[9] A. Condon. The complexity of stochastic games. Information and Computation,
96(2):203–224, 1992.
[10] A. Ehrenfeucht and J. Mycielski. Positional strategies for mean payoff games. International Journal of Game Theory, 8(2):109–113, 1979.
[11] E. A. Emerson and C. Jutla. Tree automata, mu-calculus and determinacy. In FOCS,
pages 368–377, 1991.
[12] E. A. Emerson, C. Jutla, and A. P. Sistla. On model-checking for fragments of µ-calculus. Theoretical Computer Science, 258(1–2):491–522, 2001.
[13] J. Fearnley, S. Jain, S. Schewe, F. Stephan, and D. Wojtczak. An ordered approach
to solving parity games in quasi polynomial time and quasi linear space. In SPIN,
pages 112–121, 2017.
[14] Y. Gurevich and L. Harrington. Trees, automata, and games. In STOC, pages 60–65,
1982.
[15] D. S. Johnson. The NP-completeness column: Finding needles in haystacks. ACM
Transactions on Algorithms, 3(2), 2007.
[16] M. Jurdziński. Small progress measures for solving parity games. In STACS, pages
290–301, 2000.
[17] M. Jurdziński and R. Lazić. Succinct progress measures for solving parity games. In
LICS, pages 1–9, 2017.
[18] N. Klarlund and D. Kozen. Rabin measures. Chicago Journal of Theoretical Computer
Science, 1995. Article 3.
[19] R. McNaughton. Infinite games played on finite graphs. Annals of Pure and Applied
Logic, 65(2):149–184, 1993.
[20] W. Thomas. On the synthesis of strategies in infinite games. In STACS, pages 1–13,
1995.
[21] W. Zielonka. Infinite games on finitely coloured graphs with applications to automata
on infinite trees. Theoretical Computer Science, 200:135–183, 1998.
[22] U. Zwick and M. Paterson. The complexity of mean-payoff games on graphs. Theoretical Computer Science, 158:343–359, 1996.
arXiv:1803.07932v1 [math.GN] 10 Feb 2018
C-IMAGE PARTITION REGULARITY NEAR ZERO
SOURAV KANTI PATRA1
Department of Mathematics, Ramakrishna Mission Vidyamandira,
Belur Math, Howrah-711202, West Bengal, India
SUKRIT CHAKRABORTY
Statistics and Mathematics Unit, Indian Statistical Institute,
Kolkata-700108, West Bengal, India
Abstract. In De and Hindman [2009], the concept of image partition regularity near zero was first introduced. In contrast to the
finite case, infinite image partition regular matrices near zero are
fascinating to analyze. In this regard the abstraction of Centrally image partition regular matrices near zero was introduced
in Biswas et al. [2015]. In this paper we propose the notion of
matrices that are C-image partition regular near zero for dense
subsemigroups of ((0, ∞), +).
AMS subject classification [2010]: Primary: 05D10; Secondary: 22A15
1. Introduction
A finite or infinite matrix A, with entries from Q, is image partition
regular provided that whenever N is finitely colored, there must be
some x⃗ with entries from N such that all entries of Ax⃗ are in the same
color class. Several characterizations of infinite image partition regular
matrices involve the notion of “first entries matrix”, a concept based
on Deuber’s (m, p, c) sets.
Definition 1.1. Let A be a u × v matrix with rational entries. Then
A is a first entries matrix if and only if no row of A is 0⃗ and there exist
d_1, d_2, · · · , d_v ∈ {x ∈ Q : x > 0} such that, whenever i ∈ {1, 2, · · · , u}
and l = min{j ∈ {1, 2, · · · , v} : a_{i,j} ≠ 0}, then a_{i,l} = d_l, and d_l is called a first entry of A.
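As an illustration (a hedged sketch, not taken from the paper), the first entries condition can be tested as follows; the function name and the examples are assumptions introduced here.

```python
from fractions import Fraction

def is_first_entries_matrix(A):
    """Check the first entries condition for a u x v matrix A given as a list of rows:
    no row is zero, and all rows whose first nonzero entry lies in column l share the
    same positive value d_l there."""
    first = {}                                   # column l -> common first entry d_l
    for row in A:
        nz = [j for j, a in enumerate(row) if a != 0]
        if not nz:
            return False                         # a zero row is forbidden
        l, d = nz[0], Fraction(row[nz[0]])
        if d <= 0 or first.setdefault(l, d) != d:
            return False
    return True

# Rows starting in column 0 all begin with 1; rows starting in column 1 all begin with 2.
assert is_first_entries_matrix([[1, 0, 3], [0, 2, 1], [1, 5, 0], [0, 2, 7]])
assert not is_first_entries_matrix([[1, 0], [2, 0]])   # first entries 1 != 2 in column 0
```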
E-mail addresses: [email protected], [email protected].
Key words and phrases. Algebra in the Stone-Čech compactification, central set near zero, quasi-central set near zero, C-set near zero, C*-set near zero.
It is well known that for finite matrices, image partition regularity behaves well with respect to Central subsets of the underlying semigroup.
Central sets were introduced in Furstenberg [1981] and were defined in
terms of the notion of topological dynamics. These sets enjoy very
strong combinatorial properties. (See for further details Proposition
8.21 of Furstenberg [1981], Chapter 14 of Hindman and Strauss [2012].)
They have a nice characterization in terms of the algebraic structure
of βN, the Stone-Čech compactification of N. We shall present this
characterization below, after introducing the necessary background information.
Let (S, +) be an infinite discrete semigroup. We take the points
of βS to be all the ultrafilters on S, the principal ultrafilters being
identified with points of S. Given a set A ⊆ S, Ā = {p ∈ βS : A ∈ p};
the set {Ā : A ⊆ S} is a basis for the open sets (as well as the closed
sets) of βS. There is a natural extension of the operation + on S to
βS making βS a compact right topological semigroup (meaning that
for any p ∈ βS, the function ρ_p : βS → βS defined by ρ_p(q) = q + p is
continuous) with S contained in its topological center (meaning that
for any x ∈ S, the function λ_x : βS → βS defined by λ_x(q) = x + q
is continuous). Given p, q ∈ βS and A ⊆ S, A ∈ p + q if and only if
{x ∈ S : −x + A ∈ q} ∈ p, where −x + A = {y ∈ S : x + y ∈ A}.
A nonempty subset I of a semigroup (T, +) is called a left ideal of T
if T + I ⊆ I, a right ideal if I + T ⊆ I, and a two sided ideal (or simply
an ideal) if it is both a left and a right ideal. A minimal left ideal is a
left ideal that does not contain any proper left ideal. Similarly, we can
define a minimal right ideal and the smallest ideal.
Any compact Hausdorff right topological semigroup (T, +) contains
idempotents and therefore has a smallest two sided ideal
K(T) = ⋃{L : L is a minimal left ideal of T} = ⋃{R : R is a minimal right ideal of T}.
Given a minimal left ideal L and a minimal right ideal R, it easily
follows that L ∩ R is a group and thus in particular contains an idempotent. If p and q are idempotents in T, we write p ≤ q if and only
if p + q = q + p = p. An idempotent is minimal with respect to this
relation if and only if it is a member of the smallest ideal K(T) of T.
See Hindman and Strauss [2012] for an elementary introduction to
the algebra of βS and for any unfamiliar details.
Definition 1.2. Let (S, +) be an infinite discrete semigroup. A set
C ⊆ S is Central if and only if there is some minimal idempotent p in
(βS, +) such that C ∈ p. C is called a Central* set if it belongs to every
minimal idempotent of (βS, +).
We will be considering sets which are dense in ((0, ∞), +). Here
“dense” means with respect to the usual topology on ((0, ∞), +). When
passing through the Stone-Čech compactification of such a semigroup
S, we deal with the set S with the discrete topology.
Definition 1.3. If S is a dense subsemigroup of ((0, ∞), +), one defines O^+(S) = {p ∈ βS : (0, ε) ∩ S ∈ p for all ε > 0}.
It is proved in Lemma 2.5 of Hindman and Leader [1999] that O^+(S)
is a compact right topological subsemigroup of (βS, +). It was also noted
there that O^+(S) is disjoint from K(βS), and hence gives some new
information which is not available from K(βS). Being a compact right
topological semigroup, O^+(S) contains minimal idempotents. We denote
by K(O^+(S)) the smallest ideal of O^+(S). Note that idempotents
of K(O^+(S)) are minimal idempotents of O^+(S).
Definition 1.4. Let S be a dense subsemigroup of ((0, ∞), +). A
set C ⊆ S is Central near zero if and only if there is some minimal
idempotent p in O^+(S) such that C ∈ p. C is a Central* set near zero if
it belongs to every minimal idempotent of O^+(S).
In De and Paul [2012], nice combinatorial and algebraic properties of
Central sets near zero were established. Now we present some
well-known characterizations of image partition regularity of matrices,
following Theorem 2.10 of Hindman et al. [2002].
Theorem 1.5. Let u, v ∈ N and let A be a u × v matrix with entries
from Q. The following statements are equivalent:
(1) A is image partition regular.
(2) For every central subset C of N, there exists x⃗ ∈ N^v such that Ax⃗ ∈ C^u.
(3) For every central subset C of N, {x⃗ ∈ N^v : Ax⃗ ∈ C^u} is central in N^v.
(4) There exist m ∈ N, a v × m matrix G with entries from ω and
no row equal to 0⃗, and a u × m first entries matrix B with entries from ω
such that AG = B.
(5) For each r⃗ ∈ Q^v \ {0⃗} there exists b ∈ Q \ {0} such that the matrix
obtained by adding the row b r⃗ on top of A is image partition regular.
(6) Whenever m ∈ N, φ_1, φ_2, · · · , φ_m are non-zero linear mappings
from Q^v to Q and C is central in N, there exists x⃗ ∈ N^v such
that Ax⃗ ∈ C^u and for each i ∈ {1, 2, · · · , m}, φ_i(x⃗) ≠ 0.
(7) For every Central set C in N, there exists x⃗ ∈ N^v such that
y⃗ = Ax⃗ ∈ C^u, all entries of x⃗ are distinct and for all i, j ∈
{1, 2, · · · , u}, if rows i and j of A are unequal, then y_i ≠ y_j.
In Hindman et al. [2003], the authors presented some contrast between finite and infinite image partition regular matrices and showed
that some of the interesting properties of finite image partition regular matrices could not be generalized to infinite image partition regular
matrices. In this regard the notion of Centrally image partition regular
matrices was introduced in Definition 2.7 of Hindman et al. [2003].
Definition 1.6. Let M be an ω × ω matrix. Then M is centrally image
partition regular if and only if whenever C is a central set in N, there
exists x⃗ ∈ N^ω such that Mx⃗ ∈ C^ω.
Note that Definition 1.6 has a natural generalization for an arbitrary
subsemigroup S of ((0, ∞), +). In Biswas et al. [2015], the authors
introduced another natural candidate to generalize the properties of
finite image partition regularity near zero in case of infinite matrices.
We now recall Definitions 1.7 and 1.8 of Biswas et al. [2015] respectively.
Definition 1.7. Let S be a dense subsemigroup of ((0, ∞), +). Let
u, v ∈ N and M be a u × v matrix with entries from Q. The matrix
M is image partition regular near zero over S if and only if whenever
r ∈ N, ε > 0 and S = ⋃_{i=1}^{r} C_i, there exist i ∈ {1, 2, · · · , r} and x⃗ ∈ S^v
such that Mx⃗ ∈ (C_i ∩ (0, ε))^u.
Definition 1.8. Let M be an ω × ω matrix with entries from Q, and let
S be a dense subsemigroup of ((0, ∞), +). M is Centrally image partition
regular near zero over S if whenever C is a Central set near zero in S,
there exists x⃗ ∈ S^ω such that Mx⃗ ∈ C^ω.
In Section 2, we introduce C-image partition regular matrices near
zero, which form an interesting subclass of Centrally image partition
regular matrices near zero. We see that these two notions of image
partition regularity behave almost the same.
In Section 3, we give some examples of C-image partition regular
matrices.
2. C-image partition regular matrices near zero
In this section we shall define C-image partition regularity near zero
for a dense subsemigroup S of ((0, ∞), +). Let us recall Definitions 3.1
and 3.2 of Furstenberg [1981] respectively.
Definition 2.1. Let S be a dense subsemigroup of (0, ∞). The set of
all sequences in S converging to 0 is denoted by τ_0.
Definition 2.2. Let S be a dense subsemigroup of (0, ∞) and A ⊆ S.
Then A is a J-set near zero if and only if whenever F ∈ P_f(τ_0) and
δ > 0, there exist a ∈ S ∩ (0, δ) and H ∈ P_f(N) such that for all f ∈ F,
a + Σ_{t∈H} f(t) ∈ A.
We now present the central sets theorem near zero.
Theorem 2.3. Let S be a dense subsemigroup of ((0, ∞), +). Let A be
a Central subset of S near zero. Then for each δ ∈ (0, 1), there exist
functions α_δ : P_f(τ_0) → S and H_δ : P_f(τ_0) → P_f(N) such that
(1) α_δ(F) < δ for each F ∈ P_f(τ_0).
(2) If F, G ∈ P_f(τ_0) and F ⊊ G, then max H_δ(F) < min H_δ(G) and
(3) whenever m ∈ N, G_1, G_2, · · · , G_m ∈ P_f(τ_0), G_1 ⊊ G_2 ⊊ · · · ⊊ G_m
and for each i ∈ {1, 2, · · · , m}, f_i ∈ G_i, one has
Σ_{i=1}^{m} ( α_δ(G_i) + Σ_{t∈H_δ(G_i)} f_i(t) ) ∈ A.
Proof. See Theorem 3.5 of [Bayatmanesh and Akbari Tootkaboni, 2016].
For a dense subsemigroup S of ((0, ∞), +), a set A ⊆ S is said to be
a C-set near zero if it satisfies the conclusion of the Central Sets Theorem
near zero.
So we have the following definition, which is Definition 3.6(a) of [Bayatmanesh and Akbari Tootkaboni,
2016].
Definition 2.4. Let S be a dense subsemigroup of ((0, ∞), +) and let
A ⊆ S. We say A is a C-set near zero if and only if for each δ ∈ (0, 1),
there exist functions α_δ : P_f(τ_0) → S and H_δ : P_f(τ_0) → P_f(N) such
that
(1) α_δ(F) < δ for each F ∈ P_f(τ_0).
(2) If F, G ∈ P_f(τ_0) and F ⊊ G, then max H_δ(F) < min H_δ(G) and
(3) whenever m ∈ N, G_1, G_2, · · · , G_m ∈ P_f(τ_0), G_1 ⊊ G_2 ⊊ · · · ⊊ G_m
and for each i ∈ {1, 2, · · · , m}, f_i ∈ G_i, one has
Σ_{i=1}^{m} ( α_δ(G_i) + Σ_{t∈H_δ(G_i)} f_i(t) ) ∈ A.
The following definition is Definition 3.6(b) of [Bayatmanesh and Akbari Tootkaboni,
2016].
Definition 2.5. Let S be a dense subsemigroup of ((0, ∞), +). Define
J_0(S) = {p ∈ O^+(S) : for all A ∈ p, A is a J-set near zero}.
Theorem 2.6. Let S be a dense subsemigroup of ((0, ∞), +). Then
J_0(S) is a compact two-sided ideal of O^+(S).
Proof. See Theorem 3.9 of [Bayatmanesh and Akbari Tootkaboni, 2016].
Theorem 2.7. Let S be a dense subsemigroup of ((0, ∞), +) and A ⊆
S. Then A is a C-set near zero if and only if there is an idempotent
p ∈ Ā ∩ J_0(S).
Proof. See Theorem 3.14 of Furstenberg [1981].
We call a set A ⊆ S a C*-set near zero if and only if it is a
member of every idempotent in J_0(S).
Theorem 2.8. Let S be a dense subsemigroup of ((0, ∞), +), let u, v ∈
N, and let M be a u × v matrix with entries from ω which satisfies
the first entries condition. Let A be a C-set near zero in S. If, for
every first entry c of M, cS is a C*-set near zero, then there exist sequences
⟨x_{1,n}⟩_{n=1}^{∞}, ⟨x_{2,n}⟩_{n=1}^{∞}, · · · , ⟨x_{v,n}⟩_{n=1}^{∞} in S such that for every F ∈ P_f(N),
x⃗_F ∈ (S \ {0})^v and Mx⃗_F ∈ A^u, where
x⃗_F = ( Σ_{n∈F} x_{1,n}, Σ_{n∈F} x_{2,n}, . . . , Σ_{n∈F} x_{v,n} )^T.
Proof. The proof is almost the same as that of Theorem 15.5 of Hindman and Strauss
[2012].
Corollary 2.9. Let S be a dense subsemigroup of ((0, ∞), +) for which
cS is a C*-set near zero for each c ∈ N. Also let M be a u × v matrix
with entries from Q which is image partition regular over N. Then for
each C-set near zero A, there exists x⃗ ∈ S^v such that Mx⃗ ∈ A^u.
We shall now introduce the notion of C-image partition regular matrices near zero.
Definition 2.10. Let A be an ω × ω matrix with entries from Q. The
matrix A is C-image partition regular near zero if and only if for every
C-set near zero C of N there exists x⃗ ∈ N^ω such that Ax⃗ ∈ C^ω.
From Definitions 2.10 and 1.8, it is clear that every C-image partition
regular matrix near zero is Centrally image partition regular. In the
following theorem we shall see that C-image partition regular matrices
are closed under diagonal sum.
Theorem 2.11. Let S be a dense subsemigroup of ((0, ∞), +). For
each n ∈ N, let the matrices M_n be C-image partition regular near zero.
Then the matrix
M = ( M_1  0    0    · · ·
      0    M_2  0    · · ·
      0    0    M_3  · · ·
      ⋮    ⋮    ⋮    ⋱  )
is also C-image partition regular near zero.
Proof. Let A be a C-set near zero. For each n ∈ N choose x⃗^(n) ∈ S^ω
such that y⃗^(n) = M_n x⃗^(n) ∈ A^ω. Let
z⃗ = ( x⃗^(1), x⃗^(2), . . . )^T.
Then all entries of Mz⃗ are in A.
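For finite blocks, the diagonal sum used in the theorem can be formed as in the following illustrative sketch (the function name and the examples are assumptions introduced here, not part of the paper).

```python
def diagonal_sum(blocks):
    """Return the block-diagonal matrix whose diagonal blocks are the given finite matrices;
    entries outside the blocks are 0."""
    total_cols = sum(len(b[0]) for b in blocks)
    rows, offset = [], 0
    for b in blocks:
        for r in b:
            rows.append([0] * offset + list(r) + [0] * (total_cols - offset - len(r)))
        offset += len(b[0])
    return rows

M1 = [[1, 2]]
M2 = [[3], [4]]
assert diagonal_sum([M1, M2]) == [[1, 2, 0], [0, 0, 3], [0, 0, 4]]
```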
3. Some classes of infinite matrices that are C-image
partition regular near zero
We now present a class of image partition regular matrices, called the
segmented image partition regular matrices which were first introduced
in Hindman and Strauss [2000]. There it was shown that segmented
image partition regular matrices are Centrally image partition regular.
Recall the Definition 3.2 of Biswas et al. [2015].
Definition 3.1. Let M be an ω × ω matrix with entries from Q. Then
M is a segmented image partition regular matrix if and only if:
(1) No row of M is 0⃗.
(2) For each i ∈ ω, {j ∈ ω : a_{i,j} ≠ 0} is finite.
(3) There is an increasing sequence ⟨α_n⟩_{n=0}^{∞} in ω such that α_0 = 0
and for each n ∈ ω, {⟨a_{i,α_n}, a_{i,α_n+1}, a_{i,α_n+2}, · · · , a_{i,α_{n+1}−1}⟩ : i ∈
ω} \ {0⃗} is empty or is the set of rows of a finite image partition
regular matrix.
If each of these finite image partition regular matrices is a first entries
matrix, then M is a segmented first entries matrix. If also the first
nonzero entry of each ⟨a_{i,α_n}, a_{i,α_n+1}, a_{i,α_n+2}, · · · , a_{i,α_{n+1}−1}⟩, if any, is 1,
then M is a monic segmented first entries matrix.
The following theorem is Theorem 3.1 in Biswas et al. [2015].
Theorem 3.2. Let S be a dense subsemigroup of ((0, ∞), +) for which
cS is Central* near zero for every c ∈ N and let M be a segmented image
partition regular matrix with entries from ω. Then M is centrally image
partition regular near zero.
The proof of the following theorem is adapted from the proofs of Theorem 3.2 of Hindman and Strauss [2000] and Theorem 3.1 of Biswas et al.
[2015].
Theorem 3.3. Let S be a dense subsemigroup of ((0, ∞), +) for which
cS is C* near zero for every c ∈ N and let M be a segmented image partition regular matrix with entries from ω. Then M is C-image partition
regular near zero.
Proof. Let c⃗_0, c⃗_1, c⃗_2, · · · denote the columns of M. Let ⟨α_n⟩_{n=0}^{∞} be as
in the definition of a segmented image partition regular matrix. For
each n ∈ ω, let M_n be the matrix whose columns are c⃗_{α_n}, c⃗_{α_n+1}, · · · , c⃗_{α_{n+1}−1}.
Then the set of non-zero rows of M_n is finite and, if nonempty, is the
set of rows of a finite image partition regular matrix. Let B_n = (M_0 M_1 . . . M_n).
Now by Lemma 2.5 of Hindman and Strauss [2012], O^+(S)
is a compact right topological semigroup, so that we can choose a
minimal idempotent p ∈ O^+(S). Let C ⊆ S be such that C ∈ p. Let
C* = {x ∈ C : −x + C ∈ p}. Then C* ∈ p and, for every x ∈ C*,
−x + C* ∈ p by Lemma 4.14 of Hindman and Strauss [2012].
Now the set of non-zero rows of M_n is finite and, if nonempty, is
the set of rows of a finite image partition regular matrix over N and
hence, by Theorem 2.3 of De and Hindman [2009], IPR/S_0. Then by
Theorem 2.8, we can choose x⃗^(0) ∈ S^{α_1−α_0} such that, if y⃗^(0) = M_0 x⃗^(0),
then y_i ∈ C* for every i ∈ ω for which the ith row of M_0 is non-zero.
We now make the inductive assumption that, for some m ∈ ω, we
have chosen x⃗^(0), x⃗^(1), . . . , x⃗^(m) such that x⃗^(i) ∈ S^{α_{i+1}−α_i} for every i ∈
{0, 1, 2, · · · , m}, and, if
y⃗ = B_m ( x⃗^(0), x⃗^(1), . . . , x⃗^(m) )^T,
then y_j ∈ C* for every j ∈ ω for which the jth row of B_m is non-zero.
Let D = {j ∈ ω : row j of B_{m+1} is not 0⃗} and note that for each
j ∈ ω, −y_j + C* ∈ p. (Either y_j = 0 or y_j ∈ C*.) By Theorem 2.8 we
can choose x⃗^(m+1) ∈ S^{α_{m+2}−α_{m+1}} such that, if z⃗ = M_{m+1} x⃗^(m+1), then
z_j ∈ ⋂_{t∈D} (−y_t + C*) for every j ∈ D.
Thus we can choose an infinite sequence ⟨x⃗^(i)⟩_{i∈ω} such that, for every
i ∈ ω, x⃗^(i) ∈ S^{α_{i+1}−α_i}, and, if
y⃗ = B_i ( x⃗^(0), x⃗^(1), . . . , x⃗^(i) )^T,
then y_j ∈ C* for every j ∈ ω for which the jth row of B_i is non-zero.
Let
x⃗ = ( x⃗^(0), x⃗^(1), x⃗^(2), . . . )^T
and let y⃗ = Mx⃗. We note that, for every j ∈ ω, there exists m ∈ ω
such that y_j is the jth entry of B_i ( x⃗^(0), x⃗^(1), . . . , x⃗^(i) )^T whenever i > m.
Thus all the entries of y⃗ are in C*.
Now we turn our attention to the methods of constructing C-image
partition regular matrices based on existing ones. The proof of the following theorem is adapted from Theorem 4.7 of Hindman and Strauss
[2000].
Theorem 3.4. Let S be a dense subsemigroup of ((0, ∞), +) for which
cS is a C*-set near zero for each c ∈ N. Let M be a C-image partition regular
matrix near zero over S and let ⟨b_n⟩_{n=0}^{∞} be a sequence in N. Let
N = ( b_0  0    0    · · ·
      0    b_1  0    · · ·
      0    0    b_2  · · ·
      ⋮    ⋮    ⋮    ⋱  ).
Then
( O  N
  M  O
  M  N )
is C-image partition regular near zero over S.
Proof. Let A be a C-set in S. Pick an idempotent p in J_0(S) such
that A ∈ p. Let B = {x ∈ A : −x + A ∈ p}. Then by Lemma 4.14 of
Hindman and Strauss [2012], B ∈ p and thus B is a C-set in S. So pick
x⃗ ∈ S^ω such that Mx⃗ ∈ B^ω.
Given n ∈ ω, let a_n = Σ_{t=0}^{∞} a_{n,t} · x_t. Then A ∩ (−a_n + A) ∈ p, so pick
z_n ∈ A ∩ (−a_n + A) ∩ b_n S and let y_n = z_n/b_n. Then
( O  N
  M  O
  M  N ) ( x⃗
            y⃗ ) ∈ A^{ω+ω+ω}.
Theorem 3.5. Let S be a dense subsemigroup of ((0, ∞), +) for which cS is a Central*
set near zero for all c ∈ N, let M be a Centrally image partition regular matrix near zero
over S, let ⟨b_n⟩_{n=0}^{∞} be a sequence in N, and let N be as in Theorem 3.4. Then
( O  N
  M  O
  M  N )
is Centrally image partition regular near zero over S.
Proof. The proof of this theorem is the same as that of Theorem 3.4.
Let us quickly recall the following definition which is Definition 4.8
in Hindman and Strauss [2000].
Definition 3.6. Let γ, δ ∈ ω ∪ {ω} and let C be a γ × δ matrix with
finitely many nonzero entries in each row. For each t < δ, let B_t be
a finite matrix of dimension u_t × v_t. Let R = {(i, j⃗) : i < γ and j⃗ ∈
∏_{t<δ} {0, 1, · · · , u_t − 1}}. Given t < δ and k ∈ {0, 1, · · · , u_t − 1}, denote
by b⃗^{(t)}_k the k-th row of B_t. Then D is an insertion matrix of ⟨B_t⟩_{t<δ}
into C if and only if the rows of D are all rows of the form
c_{i,0} · b⃗^{(0)}_{j(0)} ⌢ c_{i,1} · b⃗^{(1)}_{j(1)} ⌢ · · ·
where (i, j⃗) ∈ R.
For example, consider the one given in Hindman and Strauss [2000]: if
$$C = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}, \quad B_0 = \begin{pmatrix} 1 & 1 \\ 5 & 7 \end{pmatrix}, \quad \text{and} \quad B_1 = \begin{pmatrix} 0 & 1 \\ 3 & 3 \end{pmatrix},$$
then
$$\begin{pmatrix} 1 & 1 & 0 & 0 \\ 5 & 7 & 0 & 0 \\ 2 & 2 & 0 & 1 \\ 2 & 2 & 3 & 3 \\ 10 & 14 & 0 & 1 \\ 10 & 14 & 3 & 3 \end{pmatrix}$$
is an insertion matrix of $\langle B_t \rangle_{t<2}$ into $C$.
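To make the construction concrete, the short Python sketch below (ours, not part of the original paper; the function name is our own) enumerates the rows $c_{i,0} \cdot \vec{b}^{(0)}_{j(0)} \frown c_{i,1} \cdot \vec{b}^{(1)}_{j(1)}$ for the finite example above and reproduces the six rows displayed.

```python
from itertools import product

def insertion_rows(C, blocks):
    """Enumerate rows c_{i,0}*b^(0)_{j(0)} ^ c_{i,1}*b^(1)_{j(1)} ^ ... of an
    insertion matrix of the finite matrices `blocks` into C (duplicates included)."""
    rows = []
    for c_row in C:
        # j ranges over all choices of one row index from each block B_t
        for j in product(*(range(len(B)) for B in blocks)):
            row = []
            for t, B in enumerate(blocks):
                row.extend(c_row[t] * entry for entry in B[j[t]])
            rows.append(row)
    return rows

C  = [[1, 0], [2, 1]]
B0 = [[1, 1], [5, 7]]
B1 = [[0, 1], [3, 3]]
for r in insertion_rows(C, [B0, B1]):
    print(r)   # every printed row is one of the six rows of the matrix above
```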
Theorem 3.7. Let $C$ be a segmented first entries matrix and, for each
$t < \omega$, let $B_t$ be a $u_t \times v_t$ (finite) image partition regular matrix. Then
any insertion matrix of $\langle B_t \rangle_{t<\omega}$ into $C$ is C-image partition regular
near zero.

Proof. Let $A$ be an insertion matrix of $\langle B_t \rangle_{t<\omega}$ into $C$. For each $t \in \omega$,
pick by Theorem 0.00000 some $m_t \in \mathbb{N}$ and a $u_t \times m_t$ first entries matrix
$D_t$ such that for all $\vec{y} \in \mathbb{N}^{m_t}$ there exists $\vec{x} \in \mathbb{N}^{v_t}$ such that $B_t\vec{x} = D_t\vec{y}$.
Let $E$ be an insertion matrix of $\langle D_t \rangle_{t<\omega}$ into $C$ where the rows occur
in the corresponding position to those of $A$. That is, if $i < \omega$ and
$j \in \prod_{t<\omega}\{0, 1, \cdots, u_t - 1\}$ and
$$c_{i,0} \cdot \vec{b}^{(0)}_{j(0)} \frown c_{i,1} \cdot \vec{b}^{(1)}_{j(1)} \frown \cdots$$
is row $k$ of $A$, then
$$c_{i,0} \cdot \vec{d}^{(0)}_{j(0)} \frown c_{i,1} \cdot \vec{d}^{(1)}_{j(1)} \frown \cdots$$
is row $k$ of $E$.
Let $H$ be a C-set of $\mathbb{N}$. By Lemma 4.9 of Hindman and Strauss
[2000], $E$ is a segmented first entries matrix, so pick $\vec{y} \in \mathbb{N}^{\omega}$ such that all
entries of $E\vec{y}$ are in $H$. Let $\delta_0 = \gamma_0 = 0$ and, for $n \in \mathbb{N}$, let $\delta_n = \sum_{t=0}^{n-1} v_t$
and let $\gamma_n = \sum_{t=0}^{n-1} m_t$. For each $n \in \omega$, pick
$$\begin{pmatrix} x_{\delta_n} \\ x_{\delta_n+1} \\ \vdots \\ x_{\delta_{n+1}-1} \end{pmatrix} \in \mathbb{N}^{v_n} \quad \text{such that} \quad
B_n \begin{pmatrix} x_{\delta_n} \\ x_{\delta_n+1} \\ \vdots \\ x_{\delta_{n+1}-1} \end{pmatrix}
= D_n \begin{pmatrix} y_{\gamma_n} \\ y_{\gamma_n+1} \\ \vdots \\ y_{\gamma_{n+1}-1} \end{pmatrix}.$$
Then $A\vec{x} = E\vec{y}$.
As a consequence of the above theorem we have the following corollary:

Corollary 3.8. Let $C$ be a segmented first entries matrix and, for each
$t < \omega$, let $B_t$ be a $u_t \times v_t$ (finite) image partition regular matrix. Then
any insertion matrix of $\langle B_t \rangle_{t<\omega}$ into $C$ is centrally image partition
regular near zero.
References
E. Bayatmanesh and M. Akbari Tootkaboni. Central sets theorem
near zero. Topology Appl., 210:70–80, 2016. ISSN 0166-8641. URL
https://doi.org/10.1016/j.topol.2016.06.014.
T. Biswas, D. De, and R. K. Paul. Matrices centrally image partition regular near 0. New York J. Math., 21:601–613, 2015. ISSN 1076-9803. URL http://nyjm.albany.edu:8000/j/2015/21_601.html.
D. De and N. Hindman. Image partition regularity near zero. Discrete Math., 309(10):3219–3232, 2009. ISSN 0012-365X. URL https://doi.org/10.1016/j.disc.2008.09.023.
D. De and R. K. Paul. Combined algebraic properties of IP˚ and
central˚ sets near 0. Int. J. Math. Math. Sci., pages Art. ID 830718,
7, 2012. ISSN 0161-1712.
H. Furstenberg. Recurrence in ergodic theory and combinatorial number theory. Princeton University Press, Princeton, N.J., 1981. ISBN 0-691-08269-3. M. B. Porter Lectures.
N. Hindman and I. Leader. The semigroup of ultrafilters near 0.
Semigroup Forum, 59(1):33–55, 1999. ISSN 0037-1912. URL
https://doi.org/10.1007/s002339900031.
N. Hindman and D. Strauss. Infinite partition regular matrices. II.
Extending the finite results. In Proceedings of the 15th Summer
Conference on General Topology and its Applications/1st Turkish
International Conference on Topology and its Applications (Oxford,
OH/Istanbul, 2000), volume 25, pages 217–255 (2002), 2000.
N. Hindman and D. Strauss. Algebra in the Stone-Čech compactification. De Gruyter Textbook. Walter de Gruyter & Co., Berlin, 2012.
ISBN 978-3-11-025623-9. Theory and applications, Second revised
and extended edition [of MR1642231].
N. Hindman, I. Leader, and D. Strauss. Image partition regular
matrices—bounded solutions and preservation of largeness. Discrete Math., 242(1-3):115–144, 2002. ISSN 0012-365X. URL
https://doi.org/10.1016/S0012-365X(01)00276-X.
N. Hindman, I. Leader, and D. Strauss. Infinite partition regular matrices: solutions in central sets. Trans. Amer. Math. Soc., 355(3):1213–1235, 2003. ISSN 0002-9947. URL https://doi.org/10.1090/S0002-9947-02-03191-4.
| 4 |
Coordinated trajectory tracking of multiple vertical take-off
and landing UAVs ⋆
arXiv:1711.03261v1 [cs.SY] 9 Nov 2017
Yao Zou a, Ziyang Meng a,
a
Department of Precision Instrument, Tsinghua University, Beijing 100084, P. R. China.
Abstract
This paper investigates the coordinated trajectory tracking problem of multiple vertical takeoff and landing (VTOL) unmanned
aerial vehicles (UAVs). The case of unidirectional information flow is considered and the objective is to drive all the follower
VTOL UAVs to accurately track the trajectory of the leader. Firstly, a novel distributed estimator is developed for each
VTOL UAV to obtain the leader’s desired information asymptotically. With the outputs of the estimators, the solution to the
coordinated trajectory tracking problem of multiple VTOL UAVs is transformed to individually solving the tracking problem
of each VTOL UAV. Due to the under-actuated nature of the VTOL UAV, a hierarchical framework is introduced for each
VTOL UAV such that a command force and an applied torque are exploited in sequence, then the position tracking to the
estimated desired position and the attitude tracking to the command attitude are achieved. Moreover, an auxiliary system
with proper parameters is implemented to guarantee the singularity-free command attitude extraction and to obviate the
use of the unavailable desired information. The stability analysis and simulations effectively validate the achievement of the
coordinated trajectory tracking of multiple VTOL UAVs with the proposed control approach.
Key words: Unmanned aerial vehicle (UAV); Coordinated trajectory tracking; Directed graph; Distributed estimators
1 Introduction
The past few decades have witnessed a rapid development in the formation control of unmanned aerial vehicles (UAVs). Replacing a single monolithic UAV with a
formation of multiple micro ones can effectively improve
efficiency without costly expense [1]. More recently, the
vertical takeoff and landing (VTOL) UAV, as a representative UAV, has received increasing interest, due
to its capacities of hovering and low-speed/low-altitude
flight [2]. Additionally, the VTOL UAV is a canonical nonlinear system with under-actuation property [3],
which raises a lot of technical problems for control theory research. Therefore, the formation control of multiple VTOL UAVs deserves intensive studies.
⋆ This paper was not presented at any IFAC meeting. This
work has been supported in part by National Natural Science
Foundation of China under Grant 61503249 and Beijing Municipal Natural Science Foundation under Grant 4173075,
and the National Key Research and Development Program
of China under Grant 2016YFB0500900/2.
Corresponding author: Ziyang Meng.
Email addresses: [email protected]
(Yao Zou), [email protected]
(Ziyang Meng).
Generally, the study of formation control problem is categorized into leaderless and leader-follower formation
control problem. The leaderless formation requires its
members to simply reach a prescribed pattern [4]. For
example, a distributed control algorithm is proposed in
[5] such that the formation of VTOL UAVs with an identical velocity was achieved. The special case with communication delay was also studied [6] for the leaderless
formation objective and corresponding control solutions
were proposed. Another formation protocol was developed in [7] to realize a time-varying formation of VTOL
UAVs without a leader, and the obtained theoretical results were verified with practical experiments.
Compared with the leaderless formation, the objective
of the leader-follower formation is that followers reach
an agreement with the desired information associated
with a leader while forming the prescribed pattern [8].
This may lead the formation to complete some intricate
missions, where the leader is responsible for performing the desired trajectory of the formation and it is
delivered via the communication network between the
leader and followers. Although [9,10,11] proposed control approaches to achieve the leader-follower formation
of VTOL UAVs, the leader’s desired information was
required to be available to all the followers. In practice, due to limited information exchange and communication constraints, the leader's desired information
is only accessible to a portion of the followers. To
achieve the leader-follower formation under restricted
communication networks, distributed algorithms were
implemented with a local information exchange mechanism [12,13,14,15,16,17]. Using backstepping and filtering strategies, a distributed control algorithm was
developed in [18] to realize the asymptotically stable
leader-follower formation of VTOL UAVs. A distributed
formation and reconfiguration control approach is designed in [19] to accomplish the leader-follower formation without inter-vehicle collisions. With feedback
linearization technique, [20] proposed a leader-follower
formation protocol for VTOL UAVs, which ensured
their heading synchronization as well. [21] presented
a distributed control strategy over a switched graph
and derived necessary and sufficient conditions on the
time-varying leader-follower formation of VTOL UAVs.
However, the network graphs among the followers used
in [18,19,20,21] are undirected, which means that each
pair of the followers interacts bidirectionally. This undirected graph condition is quite restrictive, which, due
to communication constraints, can hardly be met in
practical applications.
Notations. R^{m×n} denotes the m × n Euclidean space. Given a vector x = [x_1, x_2, · · · , x_n]^T, define sgn(x) = [sgn(x_1), sgn(x_2), · · · , sgn(x_n)]^T; ‖x‖_1 = Σ_{i=1}^n |x_i| and ‖x‖ = √(x^T x) are its 1-norm and 2-norm. Given a square matrix A = [a_ij] ∈ R^{n×n}, define λ_min(A) and λ_max(A) as its minimum and maximum eigenvalues, and ‖A‖ = √(Σ_{i=1}^n Σ_{j=1}^n a_ij^2) is its F-norm. I_n is an n × n identity matrix and 1_n is an n-dimensional vector with all entries being one. Furthermore, given a vector x = [x_1, x_2, x_3]^T, superscript × represents a transformation from x to a skew-symmetric matrix:

x^× = [ 0  −x_3  x_2 ;  x_3  0  −x_1 ;  −x_2  x_1  0 ].
This paper proposes a coordinated trajectory tracking
control approach for multiple VTOL UAVs with local
information exchange, where the desired trajectory information is described by a leader. The network graph
among the follower VTOL UAVs is assumed to be directed. This effectively relaxes the restrictive assumption that the graph is symmetric. By applying a novel
distributed estimator, the leader’s desired information
is accurately estimated for each follower VTOL UAV.
Based on the hierarchical framework, a command force
and an applied torque are synthesized for each VTOL
UAV such that the coordinated trajectory tracking is
achieved for a group of VTOL UAVs. Compared with
the aforementioned work, the main contributions of this
paper are three-fold. First, in contrast to the work in
[5,6,7], where only a prescribed pattern is formed with
a constant velocity, the leader-follower tracking of multiple VTOL UAVs is achieved by introducing a novel
distributed estimator. Second, the coordinated tracking
is achieved with weak connectivity, where the network
graph among the followers is directed, rather than the
limited undirected one used in [18,19,20,21]. Third, instead of solely discussing the position loop [7,21], a complete VTOL UAV system is studied based on a hierarchical framework, where an auxiliary system is proposed
to ensure the non-singular command attitude extraction
and to avoid the use of the unavailable desired information.
The remaining sections are arranged as follows. Section 2 describes the problem to be solved and provides some useful preliminaries. Section 3 states the main results in detail, including the distributed estimator design, the control approach design, and the stability analysis. Section 4 performs some simulations to verify the theoretical results. Section 5 draws final conclusions.

2 Background

2.1 Problem statement
Suppose that there are n follower VTOL UAVs in a team,
which are labeled by V = {1, 2, . . . , n}. Each UAV is
a six-dof (degree of freedom) rigid body and operates
in two reference frames: inertia frame I = {OI xI yI zI }
which is attached to the earth and body frame B =
{OB xB yB zB } which is fixed to the fuselage. To establish
the model of the UAVs, rotation matrix Ri ∈ SO(3) ,
{R ∈ R3×3 | det R = 1, RT R = RRT = I3 } and unit
quaternion Qi = [σi , qiT ]T ∈ Q , {Q ∈ R × R3 | σ 2 +
q T q = 1} are applied to represent the attitude of each
UAV. In terms of Euler formula [22], an explicit relation
between these two attitude representations is derived as
R_i(Q_i) = (σ_i^2 − q_i^T q_i) I_3 + 2 q_i q_i^T − 2 σ_i q_i^×.   (1)
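For readers who want to check the algebra numerically, the following small Python sketch (ours, not from the paper) evaluates (1) for a sample unit quaternion and verifies that the result is a proper rotation matrix.

```python
import numpy as np

def skew(q):
    # maps a 3-vector to the skew-symmetric matrix q^x defined in the Notations
    return np.array([[0.0, -q[2], q[1]],
                     [q[2], 0.0, -q[0]],
                     [-q[1], q[0], 0.0]])

def rotation_from_quaternion(sigma, q):
    # R(Q) = (sigma^2 - q^T q) I_3 + 2 q q^T - 2 sigma q^x, as in (1)
    q = np.asarray(q, dtype=float)
    return (sigma**2 - q @ q) * np.eye(3) + 2.0 * np.outer(q, q) - 2.0 * sigma * skew(q)

# sample unit quaternion (rotation about the z-axis)
sigma, q = np.cos(np.pi / 6), np.array([0.0, 0.0, np.sin(np.pi / 6)])
R = rotation_from_quaternion(sigma, q)
assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
```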
Based on Euler-Newton formulae, the kinematics and
dynamics of the i-th VTOL UAV are given by
ṗ_i = v_i,   (2)
v̇_i = −g ê_3 + (T_i / m_i) R_i(Q_i) ê_3,   (3)
Q̇_i = (1/2) G_i(Q_i) ω_i,   (4)
J_i ω̇_i = −ω_i^× J_i ω_i + τ_i,   (5)
where pi = [pxi , pyi , pzi ]T and vi = [vix , viy , viz ]T denote
the position and velocity of the center of gravity of the
UAV in frame I, respectively, mi is the total mass, g
is the local gravitational acceleration, ê3 , [0, 0, 1]T ,
Ti denotes the applied thrust along ê3 , Qi = [σi , qiT ]T
and Ri (Qi ) are the unit quaternion and rotation matrix, Gi (Qi ) = [−qi , σi I3 − qi× ]T , ωi = [ωix , ωiy , ωiz ]T
denotes the angular velocity of the UAV in frame B,
Ji = diag{Jix , Jiy , Jiz } is the inertial matrix with respect
to frame B, and τi denotes the applied torque in frame B.
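As a quick sanity check of the model (2)–(5), the sketch below (ours; the gravity value 9.81 and all inputs are placeholder assumptions) integrates one explicit-Euler step of a single UAV, reusing rotation_from_quaternion and skew from the previous snippet.

```python
import numpy as np

def uav_step(p, v, sigma, q, omega, T, tau, m, J, dt, g=9.81):
    """One explicit-Euler step of the rigid-body model (2)-(5)."""
    e3 = np.array([0.0, 0.0, 1.0])
    R = rotation_from_quaternion(sigma, q)            # from the previous sketch
    p_next = p + dt * v                               # (2)
    v_next = v + dt * (-g * e3 + (T / m) * R @ e3)    # (3)
    G = np.vstack((-q, sigma * np.eye(3) - skew(q)))  # G_i(Q_i) = [-q_i, sigma_i I_3 - q_i^x]^T
    dQ = 0.5 * G @ omega                              # (4), stacked as [sigma_dot; q_dot]
    sigma_next, q_next = sigma + dt * dQ[0], q + dt * dQ[1:]
    omega_next = omega + dt * np.linalg.solve(J, -skew(omega) @ J @ omega + tau)  # (5)
    # re-normalise the quaternion so it stays on the unit sphere
    norm = np.sqrt(sigma_next**2 + q_next @ q_next)
    return p_next, v_next, sigma_next / norm, q_next / norm, omega_next
```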
In addition to n followers, there is a leader, labeled by 0, to represent the global desired information including the desired position p_r and its derivatives. The control objective is to design applied thrust T_i and torque τ_i for each follower VTOL UAV described by (2)-(5) such that all the followers track the leader while maintaining a prescribed formation pattern. More specifically, given a desired position offset δ_i, the objective is to guarantee that
lim_{t→∞} (p_i(t) − p_r(t)) = δ_i,   lim_{t→∞} (v_i(t) − ṗ_r(t)) = 0,   ∀i ∈ V.   (6)
Due to communication constraints, the leader’s desired
information is only available to a subset of the followers and the followers only have access to their neighbors’ information. To solve such a coordinated tracking
problem via local information exchange, distributed algorithms are implemented. Moreover, it follows from (6)
that limt→∞ (pi (t) − pj (t)) = δij , where δij = δi − δj ,
∀i, j ∈ V. This means that the followers form a pattern determined by δi while tracking the leader. Therefore, a proper position offset δi is essential such that the
proposed algorithm ensures followers’ convergence to a
well-defined and unique formation.
Assumption 2.1 The desired position p_r and its derivatives ṗ_r, p̈_r and p_r^{(3)} are bounded.

2.2 Graph theory

Communication topology among UAVs is described by a graph G_n ≜ (V, E), which is composed of a node set V ≜ {1, 2, · · · , n} and an edge set E ⊆ V × V. For a directed graph, (i, j) ∈ E means that the information of node i is accessible to node j, but not conversely. All neighbours of node i are included in the set N_i = {j ∈ V | (j, i) ∈ E}. A path from node i to node j is a sequence of edges.

For a follower graph G_n, its adjacency matrix D = [d_ij] ∈ R^{n×n} is defined such that d_ij > 0 if (j, i) ∈ E and d_ij = 0 otherwise, and the associated nonsymmetric Laplacian matrix L = [l_ij] ∈ R^{n×n} is defined such that l_ii = Σ_{j=1, j≠i}^n d_ij and l_ij = −d_ij for j ≠ i. For a leader-follower graph G_{n+1} ≜ {V̄, Ē} (the leader is labeled 0) with V̄ = {0, 1, · · · , n} and Ē ⊆ V̄ × V̄, we define D̄ ∈ R^{(n+1)×(n+1)} and L̄ ∈ R^{(n+1)×(n+1)} as its adjacency matrix and nonsymmetric Laplacian matrix. Specifically,

D̄ ≜ [ 0  0_{1×n} ;  d_0  D ],  where d_0 = [d_10, d_20, · · · , d_n0]^T and d_i0 > 0 if node i is accessible to the leader and d_i0 = 0 otherwise; and L̄ ≜ [ 0  0_{1×n} ;  −d_0  M ],  where M = [m_ij] ≜ L + diag{d_10, d_20, · · · , d_n0}.

Assumption 2.2 The leader-follower graph G_{n+1} has a directed spanning tree with the leader being the root.

Some important properties associated with matrix M are given in Lemma 2.1 [23].

Lemma 2.1 Under Assumption 2.2, M is a nonsingular M-matrix with the properties that all its eigenvalues have positive real parts, and there exists a positive definite diagonal matrix Θ = diag{θ_1, θ_2, · · · , θ_n} such that Ξ = M^T Θ + ΘM is strictly diagonally dominant and positive definite, where [1/θ_1, 1/θ_2, · · · , 1/θ_n]^T = M^{−1} 1_n.

2.3 Filippov solution and non-smooth analysis

Consider the vector differential equation

ẋ = f(x, t),   (7)

where f : R^n × R → R^n is measurable and essentially locally bounded. In what follows, the definitions of Filippov solution, generalized gradient and regular function are given according to [24,25,26].

Definition 2.1 (Filippov solution) A vector function x(t) is called a solution of (7) on [t_0, t_1], if x(t) is absolutely continuous on [t_0, t_1] and, for almost all t ∈ [t_0, t_1], ẋ ∈ K[f](x, t), where

K[f](x, t) = ⋂_{ρ>0} ⋂_{µN=0} co f(B(x, ρ) − N, t),

⋂_{µN=0} denotes the intersection over all sets N of Lebesgue measure zero, co(·) denotes the vector convex closure, and B(x, ρ) denotes the open ball of radius ρ centered at x.

Definition 2.2 (Generalized gradient) For a locally Lipschitz function V : R^n × R → R, its generalized gradient at (x, t) is defined as

∂V(x, t) = co{lim ∇V(x, t) | (x_i, t_i) → (x, t), (x_i, t_i) ∉ Ω_V},

where Ω_V is the set of measure zero where the gradient of V is not defined. Furthermore, the generalized derivative of V along system (7) is defined as

Ṽ̇ ≜ ⋂_{φ∈∂V} φ^T [ K[f](x, t) ;  1 ].
3
Definition 2.3 (Regular) f(x, t) : R^n × R is called regular if
(1) for all ν ≥ 0, the usual one-sided directional derivative f′(x; ν) exists;
(2) for all ν ≥ 0, f′(x; ν) = f^o(x; ν), where the generalized directional derivative f^o(x; ν) is defined as

f^o(x; ν) = lim sup_{y→x, t↓0} [f(y + tν) − f(y)] / t.

The Lyapunov stability criterion for non-smooth systems is given in Lemma 2.2 [27].

Lemma 2.2 Let system (7) be essentially locally bounded and 0 ∈ K[f](x, t) in a region R^n × [0, ∞). Suppose that f(0, t) is uniformly bounded for all t ≥ 0. Let V : R^n × [0, ∞) → R be locally Lipschitz in t and regular such that W_1(x) ≤ V(t, x) ≤ W_2(x) and Ṽ̇(x, t) ≤ −W(x), where W_1(x) and W_2(x) are continuous positive definite functions, W(x) is a continuous positive semi-definite function, and Ṽ̇(x, t) ≤ −W(x) means that φ ≤ −W(x), ∀φ ∈ Ṽ̇. Then, all Filippov solutions of system (7) are bounded and satisfy lim_{t→∞} W(x(t)) = 0.

3 Main results

Due to the under-actuated nature of the VTOL UAV, a hierarchical strategy is applied to solve the coordinated trajectory tracking problem of multiple VTOL UAV systems. First, a distributed estimator using local information interaction is designed for each follower UAV to estimate the leader's desired information. Then, the coordinated trajectory tracking problem of multiple VTOL UAVs is transformed into the asymptotic stability problem of each individual error system. Next, a command force and an applied torque are exploited for each UAV to asymptotically stabilize the position and attitude error systems, respectively. Finally, the stability of each error system is analyzed.

3.1 Distributed estimator design

Since the leader's desired information including the desired position p_r and its derivatives is not available to all the followers, a distributed estimator is firstly designed for each VTOL UAV to estimate them.

For i ∈ V, we define p̂_i, v̂_i and â_i = k_γ tanh(γ_i) as the estimations of p_r, ṗ_r and p̈_r, respectively, where γ_i is an auxiliary variable and the parameter k_γ ≥ sup_{t≥0} ‖p̈_r(t)‖. As will be shown subsequently, the definition of â_i using the hyperbolic tangent function enables the control parameters to be chosen explicitly in case of singularity in the command attitude extraction. For i ∈ V, a distributed estimator is proposed as follows:

p̂̇_i = v̂_i − k_p Σ_{j=0}^n d_ij (p̂_i − p̂_j),   (8a)
v̂̇_i = â_i − k_v Σ_{j=0}^n d_ij (v̂_i − v̂_j),   (8b)
γ̈_i = −l_a γ̇_i + 2 Γ_i γ̇_i − (k_a / k_γ) Γ̄_i^{−1} [ Σ_{j=0}^n d_ij (â_i − â_j) + Σ_{j=0}^n d_ij (â̇_i − â̇_j) + sgn( Σ_{j=0}^n d_ij (â_i − â_j) + Σ_{j=0}^n d_ij (â̇_i − â̇_j) ) ],   (8c)

where p̂_0 = p_r, v̂_0 = ṗ_r and â_0 = p̈_r are specified, d_ij is the (i, j)-th entry of the adjacency matrix D associated with the follower graph G_n, k_p, k_v, k_a and l_a are positive parameters, and Γ_i = diag{µ_i^x, µ_i^y, µ_i^z} with µ_i^k = tanh(γ̇_i^k) γ̇_i^k and Γ̄_i = diag{µ̄_i^x, µ̄_i^y, µ̄_i^z} with µ̄_i^k = 1 − tanh^2(γ_i^k) for k = x, y, z. Next, define the estimation errors p̄_i = p̂_i − p_r, v̄_i = v̂_i − ṗ_r and ā_i = â_i − p̈_r for i ∈ V. It then follows from (8) that their dynamics satisfy

p̄̇_i = v̄_i − k_p Σ_{j=1}^n m_ij p̄_j,   (9a)
v̄̇_i = ā_i − k_v Σ_{j=1}^n m_ij v̄_j,   (9b)
ā̈_i = k_γ Γ̄_i γ̈_i − 2 k_γ Γ̄_i Γ_i γ̇_i − p_r^{(4)}
    = −l_a k_γ Γ̄_i γ̇_i − k_a ( Σ_{j=1}^n m_ij ā_j + Σ_{j=1}^n m_ij ā̇_j ) − k_a sgn( Σ_{j=1}^n m_ij ā_j + Σ_{j=1}^n m_ij ā̇_j ) − p_r^{(4)}
    = −l_a ā̇_i − k_a ( Σ_{j=1}^n m_ij ā_j + Σ_{j=1}^n m_ij ā̇_j ) − k_a sgn( Σ_{j=1}^n m_ij ā_j + Σ_{j=1}^n m_ij ā̇_j ) + N_p,   (9c)

where m_ij denotes the (i, j)-th entry of M defined in Section 2, and N_p = l_a p_r^{(3)} − p_r^{(4)} is bounded according to Assumption 2.1. Equivalently, the error dynamics (9) can be rewritten as

p̄̇ = v̄ − k_p (M ⊗ I_3) p̄,   (10a)
v̄̇ = ā − k_v (M ⊗ I_3) v̄,   (10b)
ā̈ = −l_a ā̇ − k_a (M ⊗ I_3)(ā + ā̇) − k_a sgn((M ⊗ I_3)(ā + ā̇)) + 1_n ⊗ N_p,   (10c)
where p̄, v̄ and ā are the column stack vectors of p̄_i, v̄_i and ā_i, respectively, and operator ⊗ denotes the Kronecker product. Moreover, define a sliding surface s_i = l_a Σ_{j=1}^n m_ij ā_j + Σ_{j=1}^n m_ij ā̇_j for i ∈ V, and, correspondingly, its column stack vector s = (M ⊗ I_3)(l_a ā + ā̇). It follows from (10c) that the dynamics of s satisfies

ṡ = (M ⊗ I_3)(−k_a s − k_a sgn(s) + 1_n ⊗ N_p).   (11)

According to Definition 2.2, the generalized derivative of L_s along (11) satisfies

L̃̇_s = ⋂_{φ∈∂L_s} (φ + s)^T (Θ ⊗ I_3) K[(M ⊗ I_3)(−k_a s − k_a sgn(s) + 1_n ⊗ N_p)]
    = ⋂_{φ∈∂L_s} (φ + s)^T (ΘM ⊗ I_3)(−k_a s − k_a ∂‖s‖_1 + 1_n ⊗ N_p),   (15)
Theorem 3.1 indicates that the developed distributed
estimator (8) with appropriate parameters enables the
estimation errors p̄i , v̄i and āi for each VTOL UAV to
converge to zero asymptotically.
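To illustrate the behaviour that Theorem 3.1 guarantees, the Python sketch below (ours, not code from the paper) integrates estimator (8) with explicit Euler steps for a single follower that hears the leader directly and has no follower neighbours, using the gains quoted later in the simulation section and the same reference trajectory; the trajectory derivatives and the time step are our own choices, and a stiffer integrator would be preferable for the signum term.

```python
import numpy as np

kp, kv, ka, la, kgamma = 8.0, 8.0, 4.0, 12.0, 0.5   # gains as in Section 4
dt, T = 1e-3, 40.0

def pr(t):      return np.array([5*np.cos(0.2*t), 5*np.sin(0.2*t), t])
def pr_d(t):    return np.array([-np.sin(0.2*t), np.cos(0.2*t), 1.0])
def pr_dd(t):   return np.array([-0.2*np.cos(0.2*t), -0.2*np.sin(0.2*t), 0.0])
def pr_ddd(t):  return np.array([0.04*np.sin(0.2*t), -0.04*np.cos(0.2*t), 0.0])

p_hat, v_hat = np.zeros(3), np.zeros(3)
gamma, gamma_dot = np.zeros(3), np.zeros(3)
for k in range(int(T / dt)):
    t = k * dt
    a_hat = kgamma * np.tanh(gamma)
    Gamma_bar = 1.0 - np.tanh(gamma)**2                     # diagonal of Γ̄_i
    a_hat_dot = kgamma * Gamma_bar * gamma_dot
    e_a = (a_hat - pr_dd(t)) + (a_hat_dot - pr_ddd(t))      # leader is the only neighbour
    Gamma = np.tanh(gamma_dot) * gamma_dot                  # diagonal of Γ_i, as defined below (8)
    gamma_ddot = -la*gamma_dot + 2*Gamma*gamma_dot \
                 - (ka/kgamma) * (e_a + np.sign(e_a)) / Gamma_bar     # (8c)
    p_hat += dt * (v_hat - kp * (p_hat - pr(t)))            # (8a) with d_i0 = 1
    v_hat += dt * (a_hat - kv * (v_hat - pr_d(t)))          # (8b)
    gamma += dt * gamma_dot
    gamma_dot += dt * gamma_ddot

print(np.linalg.norm(p_hat - pr(T)), np.linalg.norm(v_hat - pr_d(T)))  # estimation errors at t = T
```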
sji ∈ R+
{1}
j
where ∂|si | = {−1} sji ∈ R− , ∀i ∈ V, j = x, y, z,
[−1, 1] sj = 0
i
and the calculation of K is applied using the same argument given in [24].
Theorem 3.1 Under Assumptions 2.1 and 2.2, if the
estimator parameters kp , kv , la and ka are chosen based
on
kΘk2
,
λmin (Ξ)2
kp λmin (Ξ)kΘk2
la >
,
λmin (MT ΘM)(kp kv λmin (Ξ)2 − kΘk2)
√
2 nkΘMkN̄p
ka >
,
λmin (Ξ)
kp kv >
If L̃˙ s 6= ∅, suppose ϕ ∈
/ L̃˙ s , then we know that ∀φ ∈
T
∂ksk1, ϕ = (φ + s) (ΘM ⊗ I3 )(−ka s − ka δ + 1n ⊗ Np )
for some δ ∈ ∂ksk1 . Choose φ = arg minδ∈∂ksk1 (δ +
T
Θ
s)T ( ΘM+M
⊗ I3 )(δ + s). According to [24], for all
2
δ ∈ ∂ksk1 , we have that
(12)
(13)
(14)
(φ + s)T (ΘM ⊗ I3 )(δ + s)
ΘM + MT Θ
T
⊗ I3 (φ + s)
≥(φ + s)
2
1
= (φ + s)T (Ξ ⊗ I3 )(φ + s).
2
where N̄p = supt≥0 kNp (t)k, and Θ and Ξ are given in
Lemma 2.1, the distributed estimator (8) ensures that
limt→∞ p̄i (t) = 0, limt→∞ v̄i (t) = 0 and limt→∞ āi (t) = 0,
∀i ∈ V.
Proof: The proof is divided into two parts: first, the
sliding surface si , i ∈ V is proven in Proposition 3.1 to
converge to zero asymptotically; then, the final result is
shown in Proposition 3.2.
It then follows that L̃˙ s further satisfies
ka
L̃˙ s ≤ − (φ + s)T (Ξ ⊗I3 )(φ + s) +(φ + s)T (1n ⊗Np )
2
√
ka λmin (Ξ)
≤−
kφ + sk2 + nkφ + skkΘMkN̄p
2
√
ka λmin (Ξ)
kφ + sk − nkΘMkN̄p kφ + sk.
=−
2
(16)
Proposition 3.1 Under Assumptions 2.1 and 2.2, if the
estimator parameter ka satisfies (14), the distributed estimator (8) guarantees that limt→∞ si (t) = 0, ∀i ∈ V.
Proof: Obviously, system (11) is non-smooth; thereafter,
the solution of (11) is studied in the sense of Filippov
and the non-smooth framework is applied. The stability
of system (11) is to be proven based on Lemma 2.2.
Note that if φ = 0, then s = 0 and kφ+sk = 0, and if φ 6=
0, then kφ + sk ≥ 1. Hence, if the estimator parameter
ka√ satisfies (14), there exists a constant k̄a satisfying
2 nkΘMkN̄p
(Ξ)
≤ k̄a < ka such that −( k̄a λmin
kφ + sk −
2
√ λmin (Ξ)
nkΘMkN̄p)kφ + sk ≤ 0. Therefore, it follows that
Pn
We first propose a Lyapunov function Ls = i=1 θi ksi k1+
sT (Θ ⊗ I3 )s, where θi is the i-th diagonal entry of Θ.
Note that Ls is non-smooth but regular [24]. It can be
derived that Ls is bounded by
(ka − k̄a )λmin (Ξ)
L̃˙ s ≤ −
kφ + sk2 .
2
s
W1 (s) ≤ L ≤ W2 (s),
(17)
In addition, each entry of s has the same sign as its
counterpart in sgn(s), and thus, it follows that kφ+sk ≥
ksk. We finally have that
√
where W1 (s) = λmin (Θ)ksk2 and W2 (s) = nλmax (Θ)
· (ksk1 + ksk2 ). In terms of Lemma 2.2, the stable result
can be deduced if only the generalized derivative of Ls
satisfies L̃˙ s ≤ W (s), where W (s) is a continuous positive
semi-definite function.
(ka − k̄a )λmin (Ξ)
ksk2 = −W (s).
L̃˙ s ≤ −
2
5
(18)
this case, L̇e further satisfies
Since W (s) ≥ 0 has been ensured, it follows that
Rt
W (s(τ ))dτ is bounded for t ≥ 0, which implies that s
0
is bounded. Hence, it follows from (11) that ṡ is bounded,
which implies that s is uniformly continuous in t. This
means that W (s) is uniformly continuous in t. Based on
Lemma 2.2, we can conclude that limt→∞ W (s(t)) = 0,
which further implies that limt→∞ s(t) = 0.
L̇e ≤ − λmin (Ω)kzk2 + kΘMkkzkksk
√
≤ − ϑ1 Le + ϑ2 ksk Le ,
where ϑ1 =
(19)
1
Le = p̄T (Θ ⊗ I3 )p̄ + v̄ T (Θ⊗I3 )v̄ + āT (MT ΘM ⊗ I3 )ā.
2
λ1 kzk2 ≤ Le ≤ λ2 kzk2,
where λ1 = min{λmin (Θ), 21 λmin (MT ΘM)} and λ2 =
max{λmax (Θ), 12 λmax (MT ΘM)}. The derivative of Le
along (10) and (19) satisfies
L̇e = − p̄T ((MT Θ + ΘM) ⊗ I3 )p̄ + 2p̄T (Θ ⊗ I3 )v̄
3.2
Since the leader’s desired information has been estimated via the distributed estimator (8) for each follower
VTOL UAV, the remaining problem is to transform the
coordinated trajectory tracking problem into the simultaneous tracking problem for the decoupled VTOL UAV
group. This is verified as follows.
+ āT (MT Θ ⊗ I3 )(−la (M ⊗ I3 )ā + s)
= − kp p̄T (Ξ ⊗ I3 )p̄ + 2p̄T (Θ ⊗ I3 )v̄ − kv v̄ T (Ξ ⊗ I3 )v̄
+ 2v̄ T (Θ ⊗ I3 )ā − la āT (MT ΘM ⊗ I3 )ā
+ āT (MT Θ ⊗ I3 )s
≤ − kp λmin (Ξ)kp̄k2 + 2kΘkkp̄kkv̄k − kv λmin (Ξ)kv̄k2
+ 2kΘkkv̄kkāk − la λmin (MT ΘM)kāk2
+ kΘMkkākksk
≤ − z1T Ωz1 + kΘMkkakksk,
Define the position error pei = pi − pr − δi and the velocity error vie = vi − ṗr for i ∈ V. Using the estimations p̂i and v̂i obtained from the distributed estimator
(8), we rewrite pei and vie as pei = pi − p̂i − δi + p̄i and
vie = vi − v̂i + v̄i . Since Theorem 3.1 has shown that
limt→∞ p̄i (t) = 0 and limt→∞ v̄i (t) = 0, ∀i ∈ V, the
coordinated tracking control objective (6) can be transformed into the following simultaneous tracking objective:
(20)
where z1 = [kp̄k, kv̄k, kāk]T and
Ω=
−kΘk
0
Problem transformation
T
− v̄ ((M Θ + ΘM) ⊗ I3 )v̄ + 2v̄ (Θ ⊗ I3 )ā
kp λmin (Ξ)
(22)
Remark 3.1 According to (8), singularity may occur in
the distributed estimator when some diagonal entry of
Γ̄i equals to zero, and this corresponds to the case where
some entry of the auxiliary variable γi tends to infinity. Theorem 3.1 has shown that, with a bounded initial
value, the estimation error āi for each UAV is bounded
all the time. This implies that Γ̄i is always positive definite. Consequently, no singularity is introduced in the
developed distributed estimator (8).
It is bounded by
and kzk = kz1 k has
When V e = 0, it can be shown that D+ V e ≤ ϑ2 ksk.
Thus, it follows that D+ V e satisfies (22) all the time
[28]. For system ẏ = −ϑ1 y + ϑ2 ksk with respect to
y ∈ [0, ∞), it can be proven in terms of input-to-state
stability theory [28] that limt→∞ y(t) = 0 given the fact
that limt→∞ s(t) = 0. According to Comparison Principal [28], it follows that limt→∞ V e (t) = 0, which further implies that limt→∞ p̄(t) = 0, limt→∞ v̄(t) = 0 and
limt→∞ ā(t) = 0.
Define z = [p̄T , v̄ T , āT ]T and assign a Lyapunov function
T
kΘMk
√
,
λ1√
e
V̇ e ≤ −ϑ1 V e + ϑ2 ksk.
Proof: Consider the definition of the sliding surface s,
then the dynamics of the estimator error ā satisfies
T
ϑ2 =
been applied. Next, take V = 2Le . When Le 6= 0, it
follows from (21) that the derivative of V e satisfies
Proposition 3.2 Under Assumptions 2.1 and 2.2, if
the estimator parameters kp , kv and la satisfy (12) and
(13), then limt→∞ si (t) = 0 is sufficient to ensure that
limt→∞ p̄i (t) = 0, limt→∞ v̄i (t) = 0 and limt→∞ āi (t) =
0, ∀i ∈ V.
(M ⊗ I3 )ā˙ = −la (M ⊗ I3 )ā + s.
λmin (Ω)
,
λ2
(21)
−kΘk
kv λmin (Ξ)
−kΘk
0
−kΘk
−la λmin (MT ΘM)
.
lim (pi (t) − p̂i (t)) = δi , lim (vi (t) − v̂i (t)) = 0, ∀i ∈ V.
t→∞
t→∞
We next define p̄ei = pi − p̂i − δi and v̄ie = vi − v̂i for
i ∈ V. It follows from (2), (3) and (8) that their dynamics
If the estimator parameters kp , kv and la are chosen
based on (12) and (13), then Ω is positive definite. In
6
Based on the above discussions, by introducing the distributed estimator (8), the coordinated trajectory tracking problem for multiple VTOL UAV systems (2)-(5) can
be transformed into the simultaneous asymptotic stability problem for each error system (23), (27) and (28).
Lemma 3.2 summarizes this point.
satisfy
p̄˙ ei =v̄ie +
n
X
mij p̄j ,
(23a)
j=1
v̄˙ ie =ui − gê3 − âi +
n
X
mij v̄j +
j=1
Ti
(Ri (Qi )− Ri (Qci ))ê3 ,
mi
Lemma 3.2 Consider the i-th error system (23),
(27) and (28). If a command force ui and an applied
torque τi can be developed such that limt→∞ p̄ei (t) = 0,
limt→∞ v̄ie (t) = 0, limt→∞ qie (t) = 0 and limt→∞ ωie (t) =
0, the coordinated trajectory tracking of multiple VTOL
UAV systems (2)-(5) is achieved in the sense of (6).
(23b)
Ti
Ri (Qci )ê3 is the command force with Qci
where ui = m
i
being the command unit quaternion. Moreover, once
the command force ui can be determined, in view of
kRi (Qci )ê3 k = 1, the applied thrust Ti is derived as
Ti = mi kui k, ∀i ∈ V.
3.3
(24)
In this subsection, a command force ui for each VTOL
UAV will be synthesized. The main difficulties here are
that the command force ui should comply with the nonsingular condition (25) and the desired position pr and
its derivatives are not available in the command force
ui and the subsequent applied torque τi due to limited
communication.
Now that the control strategy is based on a hierarchical
framework, the command unit quaternion Qci , as the attitude tracking objective for each VTOL UAV, should
be extracted from the command force ui . Based on minimal rotation principle, a viable extraction algorithm is
proposed in Lemma 3.1 [5].
To address the above dilemmas, we introduce the virtual
position error p̃ei = p̄ei − ηi and the virtual velocity error
ṽie = v̄ie − η̇i for i ∈ V, where ηi is an auxiliary variable. It
follows from (23) that the dynamics of p̃ei and ṽie satisfy
Lemma 3.1 For i ∈ V, if the command force ui =
[uxi , uyi , uzi ]T satisfies the non-singular condition:
ui ∈
/ U , {u ∈ R3 | u = [0, 0, uz ]T , uz ≤ 0},
the command unit quaternion
tracted as
σic =
s
1 g − uzi
+
, qc =
2
2kui k i
Qci
=
[σic , (qic )T ]T
uyi
1
−ux
i
2kui kσic
0
.
(25)
p̃˙ei = ṽie + ~1 ,
ṽ˙ ie = ui − η̈i − gê3 − âi + ~2 ,
is ex-
1
Gi (Qei )ωie ,
2
(29a)
(29b)
Pn
Pn
where ~1 =
j=1 mij p̄j and ~2 =
j=1 mij v̄j +
Ti
c
mi (Ri (Qi ) − Ri (Qi ))ê3 .
(26)
Lemma 3.3 Consider the i-th virtual position error
system (29). If a command force ui can be synthesized such that limt→∞ p̃ei (t) = 0, limt→∞ ṽie (t) =
0, limt→∞ ηi (t) = 0 and limt→∞ η̇i (t) = 0, then
limt→∞ p̄ei (t) = 0 and limt→∞ v̄ie (t) = 0 are achieved.
Next, define the attitude error Qei = [σie , (qie )T ]T =
(Qci )−1 ⊙ Qi for i ∈ V, where operator ⊙ is the
unit quaternion product. According to [29], Qei =
[±1, 0, 0, 0]T corresponds to the extract attitude tracking. The dynamics of Qei satisfies
Q̇ei =
Command force development
To guarantee the condition in Lemma 3.3, for i ∈ V, we
propose the following command force:
ui = gê3 + âi − kη (tanh(ηi + η̇i ) + tanh(η̇i )),
(30)
(27)
and introduce a dynamic system with respect to the auxiliary variable ηi :
where ωie = ωi − Ri (Qei )T ωic is the angular velocity error
with ωic being the command angular velocity. Please refer
to [29] for the derivations of ωic and its derivative ω̇ic . In
addition, it follows from (5) and Ṙi (Qei ) = Ri (Qei )(ωie )×
that the dynamics of ωie satisfies
η̈i = −kη (tanh(ηi + η̇i ) + tanh(η̇i )) + lp p̃ei + lv ṽie ,
(31)
Ji ω̇ie = − ωi× Ji ωi + τi + Ji [(ωie )× Ri (Qei )T ωic
−
Ri (Qei )T ω̇ic ].
where kη , lp and lv are positive control parameters. Substituting (30) and (31) into (29) yields
p̃˙ei = ṽie + ~1 ,
ṽ˙ e = − lp p̃ei − lv ṽie + ~2 .
(28)
7
(32a)
(32b)
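To see how (30), (31) and (24) fit together, here is a small Python sketch (ours; m_i and the gains are the values quoted in the simulation section, while g = 9.81 and the sample inputs are our assumptions). Note that k_η = 4 < (g − k_γ)/2 ≈ 4.65, consistent with condition (33) below.

```python
import numpy as np

k_eta, l_p, l_v, m_i, g = 4.0, 4.0, 4.0, 0.85, 9.81

def command_force(a_hat_i, eta_i, eta_dot_i):
    # (30): u_i = g*e3 + a_hat_i - k_eta*(tanh(eta_i + eta_dot_i) + tanh(eta_dot_i))
    e3 = np.array([0.0, 0.0, 1.0])
    return g * e3 + a_hat_i - k_eta * (np.tanh(eta_i + eta_dot_i) + np.tanh(eta_dot_i))

def eta_step(eta_i, eta_dot_i, p_tilde_e, v_tilde_e, dt):
    # (31): auxiliary dynamics driven by the virtual position/velocity errors
    eta_ddot = -k_eta * (np.tanh(eta_i + eta_dot_i) + np.tanh(eta_dot_i)) \
               + l_p * p_tilde_e + l_v * v_tilde_e
    return eta_i + dt * eta_dot_i, eta_dot_i + dt * eta_ddot

u_i = command_force(np.array([0.1, -0.05, 0.0]), np.zeros(3), np.zeros(3))
T_i = m_i * np.linalg.norm(u_i)           # (24): applied thrust
sigma_c, q_c = command_attitude(u_i)      # command attitude, previous sketch
```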
A proper control parameter kη should be chosen such
that the non-singular condition (25) is met. Specifically,
kη <
1
(g − kγ ).
2
functions of the derivatives of the command force ui .
Their expressions are presented as follows:
u̇i =kγ Γ̄i γ̇i − kη [Di η̇i + (Di + Si )η̈i ],
üi = − 2kγ Γ̄i Γi γ̇i + kγ Γ̄i γ̈i − kη [D̄i η̇i + (D̄i + Di + S̄i )η̈i
(33)
(3)
In such a case, the third row of the command force ui
satisfies
+ (Di + Si )ηi ],
where Γi and Γ̄i have been specified below (8),
Di = diag{ǫxi , ǫyi , ǫzi } with ǫji = 1 − tanh2 (ηij + η̇ij ),
Si = diag{νix , νiy , νiz } with νij = 1 − tanh2 (η̇ij ),
D̄i = {ǭxi , ǭyi , ǭzi } with ǭji = −2 tanh(ηij + η̇ij )(1 −
tanh2 (ηij + η̇ij ))(ηij + η̇ij ) and S̄i = {ν̄ix , ν̄iy , ν̄iz } with
ν̄ij = −2 tanh(η̇ij )(1 − tanh2 (η̇ij ))η̇ij , for j = x, y, z, and
uzi =g + âzi − kη (tanh(ηiz + η̇iz ) + tanh(η̇iz ))
≥g − kγ − 2kη > 0,
where âzi = kγ tanh(γiz ) and the property that
| tanh(·)| < 1 have been applied. To this end, kη satisfying (33) is sufficient to guarantee that the developed
command force ui in (30) for each UAV strictly satisfies
the non-singular condition (25).
(3)
ηi
Remark 3.2 By defining âi = kγ tanh(γi ) in the distributed estimator (8) and introducing the auxiliary dynamics (31), the developed command force ui for i ∈ V is
equipped with a saturation property. Based on this property, the choice of the control parameter kη is independent on any estimator state.
From the above derivations, it it trivial to see that the
desired information is not used in the developed applied
torque τi for the UAV without accessibility to the leader.
3.5
Remark 3.3 It follows from (24), (30) and kâi k ≤
√
3kγ that the resulting applied thrust Ti is bounded by
√
√
Ti ≤ mi (g + 2 3kη + 3kγ ), ∀i ∈ V,
(34)
Theorem 3.2 Consider n follower VTOL UAV systems
(2)-(5) with Assumptions 2.1 and 2.2. The synthesized
command force ui in (30) and applied torque τi in (36)
guarantee the coordinated trajectory tracking of multiple
VTOL UAVs in the sense of (6).
Applied torque development
Proof: Theorem 3.1 has shown that, for i ∈ V, the
distributed estimator developed in (8) enables the estimation errors p̄i and v̄i to converge to zero asymptotically. Based on this, it follows from Lemmas 3.2 and
3.3 that the coordinated trajectory tracking objective is
achieved, if the following results are guaranteed by the
synthesized command force ui and applied torque τi :
Th2.i) limt→∞ qie (t) = 0 and limt→∞ ωie (t) = 0, ∀i ∈ V,
Th2.ii) limt→∞ p̃ei (t) = 0 and limt→∞ ṽie (t) = 0, ∀i ∈ V,
Th2.iii) limt→∞ ηi (t) = 0 and limt→∞ η̇i (t) = 0, ∀i ∈ V.
They will be proven in Propositions 3.3-3.5, respectively.
Define a sliding surface ri = lq qie + ωie for i ∈ V, where
lq > 0. From (27) and (28), the dynamics of ri satisfies
lq
Ji ṙi = Ji (σie I3 + (qie )× )ωie − ωi× Ji ωi + τi
2
+ Ji [(ωie )× Ri (Qei )T ωic − Ri (Qei )T ω̇ic ].
(35)
We propose an applied torque τi for each UAV as follows:
lq
Ji (σie I3 + (qie )× )ωie + ωi× Ji ωi
2
− Ji [(ωie )× Ri (Qei )T ωic − Ri (Qei )T ω̇ic ],
τi = − kq ri −
(36)
Proposition 3.3 Consider the attitude error system
(27) and (28). The developed applied torque τi in (36)
guarantees that limt→∞ qie (t) = 0 and limt→∞ ωie (t) = 0,
∀i ∈ V.
where kq > 0. Substituting (36) into (35) yields
Ji ṙi = −kq ri .
Stability analysis
Theorem 3.2 summarizes the final stability result associated with the coordinated trajectory tracking of multiple VTOL UAV systems (2)-(5) controlled by the developed applied thrust and torque.
which means that each Ti is upper bounded by a constant
associated with the individual mass mi and the specified
parameters kη and kγ .
3.4
= − kη [Di η̇i + (Di + Si )η¨i ] + lp p̃˙ ei + lv ṽ˙ ie .
(37)
Proof: It follows from (37) that, for i ∈ V, the developed applied torque τi enables the sliding surface ri
to converge to zero asymptotically. Then, assign a nonnegative function yi = 21 [kqie k2 + (1 − σie )2 ] = 1 − σie ≤ 2
It follows from (36) that, the command angular velocity
ωic and its derivative ω̇ic are necessary to determine each
applied torque τi . According to [29], ωic and ω̇ic are the
8
for i ∈ V. With the definition of ri , the derivative of yi
along (27) satisfies
It is trivial to verify that
1
1
(k tanh(ηi + η̇i )k2 + k tanh(η̇i )k2 + kη̇i k2 ).
2
kη
(39)
The derivative of Lηi along (31) satisfies
Lηi ≥
1
lq
1
ẏi = (qie )T ωie = − kqie k2 + (qie )T ri
2
2
2
lq e 2 1
2
≤ − kqi k + kri k
4
4
lq
1
= (−2yi + yi2 ) + kri k2 .
4
4
L̇ηi = tanh(ηi + η̇i )T η̇i + [tanh(ηi + η̇i ) + tanh(η̇i )
1
+ η̇i ]T [−kη (tanh(ηi + η̇i ) + tanh(η̇i )) + εi ]
kη
= − kη k tanh(ηi + η̇i ) + tanh(η̇i )k2 − η̇iT tanh(η̇i )
1
+ [tanh(ηi + η̇i ) + tanh(η̇i ) + η̇i ]T εi
(40)
kη
≤ − kη k tanh(ηi + η̇i ) + tanh(η̇i )k2 − η̇iT tanh(η̇i )
q
+ 2 Lηi kεi k
q
≤2 Lηi ρ(kεi (0)k, 0).
(41)
l
It can be shown that system ẏ = 4q (−2y + y 2 ) with
respect to y ∈ [0, 2] is asymptotically stable. For system
l
ẏ = 4q (−2y + y 2 ) + 41 kri k2 with respect to y ∈ [0, 2], by
using input-to-state stability theory [28], it follows that
limt→∞ y(t) = 0 given the fact limt→∞ ri (t) = 0. According to Comparison Principal [28], limt→∞ yi (t) = 0
is obtained, that is, limt→∞ qie (t) = 0, which, together with limt→∞ ri (t) = 0, further implies that
limt→∞ ωie (t) = 0, ∀i ∈ V.
Proposition 3.4 Consider the virtual position error system (29) with Assumptions 2.1 and 2.2. If
limt→∞ qie (t) = 0, limt→∞ p̄i (t) = 0 and limt→∞ v̄i (t) =
0 are achieved, the developed command force ui in (30)
guarantees that limt→∞ p̃ei (t) = 0 and limt→∞ ṽie (t) = 0,
∀i ∈ V.
Integrating both sides of (41), we obtain that
q
q
Lηi (t) − Lηi (0) ≤ ρ(kεi (0)k, 0)t, ∀t ≥ 0,
which indicates that Lηi cannot escape to infinity in finite
time. In addition, it follows from (40) that L̇ηi satisfies
e
Proof: It follows from (1) that (Ri (Qei ) − I3 )ê3 = ϕ×
i qi ,
ey ex
e T
c
where ϕi = [−qi , qi , σi ] . In terms of kRi (Qi )k = 1,
kQei k = 1 and (34), it follows from limt→∞ qie (t) = 0
Ti
Ti
that each m
(Ri (Qi )−Ri (Qci ))ê3 = m
Ri (Qci )(Ri (Qei )−
i
i
I3 )ê3 converges to zero asymptotically. This, together
with limt→∞ p̄i (t) = 0 and limt→∞ v̄i (t) = 0, guarantees
that the perturbation items ~1 and ~2 in the virtual position error system (32) converge to zero asymptotically.
Furthermore, it can be shown that system
L̇ηi ≤ − tanh(η̄i )T Λ tanh(η̄i )
1
+ kηi + η̇i k + (1 + )kη̇i k ρ(kεi (0)k, t)
kη
≤ − min{1, kη }kD tanh(η̄i )k2 + c2 kη̄i kρ(kεi (0)k, t)
≤ − c1 k tanh(η̄i )k2 + c2 kη̄i kρ(kεi (0)k, t),
(43)
where η̄i =
p̃˙ei = ṽie ,
ṽ˙ e = − lp p̃ei − lv ṽie ,
0
tanh(χ)T dχ+
Z
0
η̇i
tanh(χ)T dχ+
#
, c1 =
Λ = I3 ⊗
"
kη
kη
kη kη + 1
#
,D=
√
3− 5
2
c (χ )2 ∆
tanh(∆ )
L̇ηi ≤ −c1 (χη )2 kη̄i k2 + c2 kη̄i kρ(kεi (t1 )k, t − t1 ). (44)
Proof: Denote εi = lp p̃ei + lv ṽie for i ∈ V. It follows
from limt→∞ p̃ei (t) = 0 and limt→∞ ṽie (t) = 0 that
limt→∞ εi (t) = 0. To this end, there exists a KL-class
function ρ(kεi (0)k, t) such that kεi (t)k ≤ ρ(kεi (0)k, t).
For i ∈ V, the following Lyapunov function is proposed:
ηi +η̇i
11
+ η̇iT , η̇iT ]T ,
η
t1 ) ≤ 1 cη2 η for t ∈ [t1 , ∞), where χη <
is a
∆η
constant. In particular, k tanh(η̄i )k ≤ χη kη̄i k. Thus, for
t ≥ t1 , L̇ηi satisfies
Proposition 3.5 Consider the auxiliary system (31). If
limt→∞ p̃ei (t) = 0 and limt→∞ ṽie (t) = 0 are achieved,
then limt→∞ ηi (t) = 0 and limt→∞ η̇i (t) = 0, ∀i ∈ V.
Z
"
[ηiT
min{1, kη } and c2 = 1+ k1η . Since
12
Lηi cannot escape in finite time, there exist t1 and ∆η
such that kη̄i (t)k ≤ ∆η for t ∈ [0, t1 ] and ρ(kεi (t1 )k, t −
I3 ⊗
is asymptotically stable. Thus, it follows from inputto-state stability theory [28] that limt→∞ p̃ei (t) = 0
and limt→∞ ṽie (t) = 0, ∀i ∈ V given the fact that
limt→∞ ~(t) = 0, where ~ = [~T1 , ~T2 ]T .
Lηi =
(42)
This implies that L̇ηi is negative outside the set
Zi = η̄i ∈ R6 | kη̄i k ≤
1
kη̇i k2 .
2kη
c2
ρ(kεi (t1 )k, t − t1 ) .
c1 (χη )2
Thus, η̄i is bounded and ultimately converges to the
set Zi . In view of limt→∞ ρ(kεi (t1 )k, t − t1 ) = 0, it
9
Fig. 1. Leader-follower graph G_{n+1}.

Fig. 2. Snapshots of coordinated trajectory tracking of VTOL UAVs.
follows that limt→∞ η̄i (t) = 0, which implies that
limt→∞ ηi (t) = 0 and limt→∞ η̇i (t) = 0, ∀i ∈ V.
Since Th2.i)-Th2.iii) have been proven, it can be concluded that the coordinated trajectory tracking of multiple VTOL UAVs is achieved in the sense of (6).

4 Simulations
In this section, simulations are performed to verify the
proposed distributed control approach on a formation
of four VTOL UAVs described by (2)-(5). The inertial
parameters are assumed to be identical: mi = 0.85(kg)
and Ji = diag{4.856, 4.856, 8.801} × 10−2 (kgm2 ), i =
1, 2, 3, 4. The leader-follower graph Gn+1 is illustrated
in Fig. 1, where each arrow denotes the corresponding
information flow. Furthermore, define dij = 1 if follower j is accessible to follower i, and dij = 0, otherwise, for i, j = 0, 1, · · · , 4. The desired trajectory is
described as pr (t) = [5 cos(0.2t), 5 sin(0.2t), t]T (m), and
the desired position offsets of the followers relative to
the leader are δ1 = [2, 2, 0]T (m), δ2 = [2, −2, 0]T (m),
δ3 = [−2, −2, 0]T (m) and δ4 = [−2, 2, 0]T (m), respectively. This indicates that the desired formation pattern is a square. The distributed estimator states of
each follower UAV are initialized as zeros. The follower
UAVs are initially situated at p1 (0) = [5, 3, −1]T (m),
p2 (0) = [9, −4, 1]T (m), p3 (0) = [4, −2, −3]T (m) and
p4 (0) = [−1, 4, −2]T (m) and their initial attitudes are
Qi (0) = [1, 0, 0, 0]T , i = 1, 2, 3, 4. The estimator and
control parameters are chosen as follows: kγ = 0.5, kp =
kv = 8 based on (12), la = 12 based on (13), ka = 4
based on (14), lp = lv = kη = 4 and lq = kq = 16. The
simulation results are illustrated in Figs. 2-4.
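The setup above can be collected into a short Python script (ours). The follower-to-follower edges of Fig. 1 are not recoverable from the extracted text, so the adjacency below, a chain rooted at the leader, is only an illustrative stand-in that satisfies Assumption 2.2.

```python
import numpy as np

m_i = 0.85
J_i = np.diag([4.856e-2, 4.856e-2, 8.801e-2])
k_gamma, k_p, k_v, l_a, k_a = 0.5, 8.0, 8.0, 12.0, 4.0
l_p = l_v = k_eta = 4.0
l_q = k_q = 16.0

def p_r(t):  # desired trajectory
    return np.array([5*np.cos(0.2*t), 5*np.sin(0.2*t), t])

# desired offsets forming a square pattern
deltas = np.array([[2, 2, 0], [2, -2, 0], [-2, -2, 0], [-2, 2, 0]], dtype=float)

# illustrative leader-follower graph: leader -> 1 -> 2 -> 3 -> 4 (not Fig. 1 itself)
D  = np.zeros((4, 4)); D[1, 0] = D[2, 1] = D[3, 2] = 1.0
d0 = np.array([1.0, 0.0, 0.0, 0.0])
L  = np.diag(D.sum(axis=1)) - D
M  = L + np.diag(d0)
assert np.all(np.linalg.eigvals(M).real > 0)   # Lemma 2.1: nonsingular M-matrix
```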
Fig. 3. Position error of follower VTOL UAVs.
Fig. 4. Velocity error of follower VTOL UAVs.
Fig. 2 exhibits the evolution of the VTOL UAV formation with respect to the leader in the three-dimensional space, where the formation is depicted every twenty seconds. It can be seen that the follower UAVs reach the prescribed square pattern while tracking the leader. Figs. 3 and 4 describe the position and velocity errors of the follower UAVs with respect to the leader. It can be observed that each error component converges to zero asymptotically. Consequently, the simulation results validate that the proposed distributed control approach effectively guarantees the coordinated trajectory tracking of multiple VTOL UAVs in the sense of (6).
5 Conclusion
A distributed control strategy is proposed in this paper
to achieve the coordinated trajectory tracking of multiple VTOL UAVs with local information exchange. The
connectivity of the network graph is weak in the sense
that we only require the graph to contain a directed
spanning tree. A novel distributed estimator is firstly
designed for each VTOL UAV to obtain the leader’s desired information asymptotically. Then, under the hierarchical framework, a command force and an applied
torque are exploited for each VTOL UAV to fulfill the
accurate tracking to the desired information asymptotically. In addition, an auxiliary system is introduced in
the control development to avoid the non-singular command attitude extraction and the use of the unavailable
desired information. Simulations are carried out to validate the theoretical results.
[14] Z. Li, X. Liu, W. Ren, and L. Xie, “Distributed tracking
control for linear multiagent systems with a leader of bounded
unknown input,” IEEE Transactions on Automatic Control,
vol. 58, no. 2, pp. 518–523, 2013.
References
[18] J. Ghommam, L. F. Luque-Vega, B. Castillo-Toledo, and
M. Saad, “Three-dimensional distributed tracking control
for multiple quadrotor helicopters,” Journal of The Franklin
Institute, vol. 353, no. 10, pp. 2344–2372, 2016.
[15] W. Yu, G. Chen, and M. Cao, “Distributed leader-follower
flocking control for multi-agent dynamical systems with timevarying velocities,” Systems & Control Letters, vol. 59, no. 9,
pp. 543–552, 2010.
[16] J. Qin, C. Yu, and B. Anderson, “On leaderless and leaderfollowing consensus for interacting clusters of second-order
multi-agent systems,” Automatica, vol. 74, pp. 214–221, 2016.
[17] T. Yang, Z. Meng, G. Shi, Y. Hong, and K. H. Johansson,
“Network synchronization with nonlinear dynamics and
switching interactions,” IEEE Transactions on Automatic
Control, vol. 61, no. 10, pp. 3103–3108, 2016.
[1] F. Giulietti, L. Pollini, and M. Innocenti, “Autonomous
formation flight,” IEEE Control Systems Magazine, vol. 20,
no. 6, pp. 34–44, 2000.
[19] F. Liao, R. Teo, J. Wang, X. Dong, F. Lin, and K. Peng,
“Distributed formation and reconfiguration control of vtol
uavs,” IEEE Transactions on Control Systems Technology,
vol. 25, no. 1, pp. 270–277, 2017.
[2] M. Hua, T. Hamel, P. Morin, and C. Samson, “Introduction
to feedback control of underactuated vtol vehicles: A review
of basic control design ideas and principles,” IEEE Control
Systems Magazine, vol. 22, no. 1, pp. 61–75, 2013.
[20] A. Mahnmood and Y. Kim, “Leader-following formation
control of quadcopters with heading synchronization,”
Aerospace Science and Technology, vol. 47, pp. 68–74, 2015.
[3] Z. Zuo, “Trajectory tracking control design with commandfiltered compensation for a quadrotor,” IET Control Theory
and Applications, vol. 4, no. 11, pp. 2343–2355, 2010.
[21] X. Dong, Y. Zhou, Z. Ren, and Y. Zhong, “Timevarying formation tracking for second-order multi-agent
systems subjected to switching topologies with application
to quadrotor formation flying,” IEEE Transactions on
Industrial Electronics, vol. DOI: 10.1109/TIE.2016.2593656,
2016.
[4] Y. Zhang and Y. Tian, “Consentability and protocol design
of multi-agent systems with stochastic switching topology,”
Automatica, vol. 45, no. 5, pp. 1195–1201, 2009.
[5] A. Abdessameud and A. Tayebi, “Formation control of vtol
uavs,” in 48th IEEE Conference on Decision and Control,
Shanghai, P. R. China, 2009, pp. 3454–3459.
[22] M. Shuster, “A survey of attitude representations,” The
Journal of the Astronautical Sciences, vol. 41, no. 4, pp. 439–
517, 1993.
[6] A. Abdessameud, I. G. Polushin, and A. Tayebi, “Motion
coordination of thrust-propelled underactuated vehicles with
intermittent and delayed communications,” Systems &
Control Letters, vol. 79, pp. 15–22, 2015.
[23] Z. Qu, Cooperative control of dynamical systems: applications
to autonomous vehicles. Berlin: Springer, 2009.
[7] X. Dong, B. Yu, Z. Shi, and Y. Zhong, “Time-varying
formation control for unmanned aerial vehicles: Theories
and experiments,” IEEE Transactions on Control Systems
Technology, vol. 23, no. 1, pp. 340–348, 2015.
[24] B. Paden and S. Sastry, “A calculus for computing
filipovs differential inclusion with application to the variable
structure control of robot manipulators,” IEEE Transactions
on Circuits and Systems, vol. CAS-34, no. 1, pp. 73–82, 1987.
[8] Y. Hong, G. Chen, and L. Bushnell, “Distributed observers
design for leader-following control of multi-agent networks,”
Automatica, vol. 44, no. 3, pp. 846–850, 2008.
[25] D. Shevitz and B. Paden, “Lyapunov stability theory of
nonsmooth systems,” IEEE Transactions on Automatic
Control, vol. 39, no. 9, pp. 1910–1914, 1994.
[9] B. Yun, B. Chen, K. Lum, and T. Lee, “Design and
implementation of a leader-follower cooperative control
system for unmanned helicopters,” Journal of Control Theory
and Applications, vol. 8, no. 1, pp. 61–68, 2010.
[26] F. Clarke, Optimization and nonsmooth analysis. New York:
Wiley, 1983.
[27] N. Fischer, R. Kamalapurkar, and W. Dixon, “Lasalleyoshizawa corollaries for nonsmooth systems,” IEEE
Transactions on Automatic Control, vol. 58, no. 9, pp. 2333–
2338, 2013.
[10] D. Mercado, R. Castro, and R. Lozano, “Quadrotors flight
formation control using a leader-follower,” in European
Control Conference, Zurich, Switzerland, 2013, pp. 3858–
3863.
[28] H. Khalil, Nonlinear Systems, Third Edition.
Prentice-Hall, 2002.
[11] D. Lee, “Distributed backstepping control of multiple thrustpropelled vehicles on a balanced graph,” Automatica, vol. 48,
no. 11, pp. 2971–2977, 2012.
New Jersey:
[29] Y. Zou, “Nonlinear robust adaptive hierarchical sliding mode
control approach for quadrotors,” International Journal of
Robust and Nonlinear Control, vol. DOI: 10.1002/rnc.3607,
2016.
[12] A. Loria, J. Dasdemir, and N. A. Jarquin, “Leader-follower
formation and tracking control of mobile robots along straight
paths,” IEEE Transactions on Control Systems Technology,
vol. 24, no. 2, pp. 727–732, 2016.
[13] G. Wen, Y. Zhao, Z. Duan, and W. Yu, “Containment of
higher-order mmlti-leader multi-agent systems: A dynamic
output approach,” IEEE Transactions on Automatic Control,
vol. 61, no. 4, pp. 1135–1140, 2016.
11
| 3 |
arXiv:1605.05436v1 [cs.DS] 18 May 2016
Parallel Algorithms for Summing Floating-Point Numbers
Michael T. Goodrich
Ahmed Eldawy
Dept. of Computer Science
University of California, Irvine
Irvine, CA 92697 USA
[email protected]
Dept. of Computer Science and Engineering
University of California, Riverside
Riverside, CA 92521 USA
[email protected]
Abstract
The problem of exactly summing n floating-point numbers is a fundamental problem that
has many applications in large-scale simulations and computational geometry. Unfortunately,
due to the round-off error in standard floating-point operations, this problem becomes very
challenging. Moreover, all existing solutions rely on sequential algorithms which cannot scale to
the huge datasets that need to be processed.
In this paper, we provide several efficient parallel algorithms for summing n floating point
numbers, so as to produce a faithfully rounded floating-point representation of the sum. We
present algorithms in PRAM, external-memory, and MapReduce models, and we also provide
an experimental analysis of our MapReduce algorithms, due to their simplicity and practical
efficiency.
1
Introduction
Floating-point numbers are a common data type for representing values that can vary widely. A
(base-2) floating-point representation¹ of a number, x, consists of a sign bit, b, and a pair of integers, M and E, written (b, M, E), such that

x = (−1)^b × (1 + 2^{−t} M) × 2^{E − 2^{l−1} − 1},
where t is the number of bits allocated for M and l is the number of bits allocated for E. The
value, M , is called the mantissa or significand and the value E is called the exponent. For
more background information on floating-point numbers, please see, e.g., the excellent survey by
Goldberg [15].
In fixed-precision representations, specific values for t and l are set according to machine architecture or established standards. For example, in the IEEE 754 standard, single-precision
floating-point numbers use t = 23 and l = 8, double-precision floating-point numbers use t = 52
and l = 11, and quad-precision floating-point numbers use t = 112 and l = 15.
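As a quick illustration (ours, not from the paper), the following Python helper evaluates the representation formula above for a given triple (b, M, E); the exponent bias is copied verbatim from the formula as printed, which may differ from the IEEE 754 bias convention by a constant.

```python
def decode(b: int, M: int, E: int, t: int, l: int) -> float:
    """Value of the triple (b, M, E) with t mantissa bits and l exponent bits,
    following x = (-1)^b * (1 + 2^{-t} M) * 2^{E - 2^{l-1} - 1} as printed above."""
    return (-1.0)**b * (1.0 + M / 2.0**t) * 2.0**(E - 2**(l - 1) - 1)

# e.g. single precision uses t = 23, l = 8
print(decode(0, 0, 2**7 + 1, 23, 8))   # mantissa 0, exponent field 129 -> 2^0 = 1.0
```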
In arbitrary-precision representations, the number of bits, t, for the mantissa, is allowed to
vary, either to be as large as a machine’s memory size or to be an arbitrary value set by a user. The
number, l, of bits for the exponent in such representations is typically set to be a single memory
word, since a single memory word is usually sufficient to store any memory address. Examples
of systems for processing arbitrary-precision floating point representations include Apfloat for
Java [35], the GNU Multiple Precision (GMP) Arithmetic Library [14], the bigfloat type in
LEDA [4], and the GNU Multiple-Precision Floating-Point (MPFR) Library [12, 19].
1
We take the viewpoint in this paper that floating-point numbers are a base-2 representation; nevertheless, our
algorithms can easily be modified to work with other standard floating-point bases, such as 10.
In either fixed-precision or arbitrary-precision representations, we do not assume in this paper
that floating-point numbers are normalized so that the most significant bit of the mantissa is 1.
Given a set X = {x1 , x2 , . . . , xn }, of n floating-point numbers, we are interested in the design and
analysis of parallel algorithms for computing a floating point number, Sn∗ , that best approximates
the sum
n
X
Sn =
xi .
i=1
Moreover, we are interested in methods that are not limited to a specific fixed-precision representation, such as IEEE 754 double-precision. In particular, the specific problem we are interested in is
to compute the floating-point number Sn∗ that is a faithfully rounded representation of Sn , where
we consider the value Sn∗ to be faithfully rounded as long as Sn∗ is either the largest floating-point
number less than or equal to Sn or the smallest floating-point number greater than or equal to Sn .
For other ways of rounding floating-point numbers, please see the IEEE 754 standard.
Although the problem of accurately summing n floating-point numbers might at first seem
simple, it is surprisingly challenging, due to the roundoff error inherent in floating-point arithmetic.
Standard floating-point addition, which we denote as “⊕,” is not exact, so that for two floating-point
numbers, x and y,
x ⊕ y = (x + y)(1 + ε_{x,y});   |ε_{x,y}| ≤ |ε|,

where ε_{x,y} is a summand-specific roundoff error and ε is a machine-specific worst-case error term [23, 25].
As is well-known, a standard way to sum n numbers, given the set X, is to place the numbers
from X in the leaves of a binary summation tree, T , where each internal node, v, is associated with
the sum of the values in v’s two children, which in this case could be the ⊕ operation. Unfortunately,
if the collection, X, of summands contains positive and negative numbers, it is NP-complete to
find the summation tree, T , that utilizes only the ⊕ operation and minimizes the worst-case error
for the sum [23]. Even for the case when all the numbers in X are positive, the optimal ⊕-based
summation tree is a Huffman tree [23], which is not necessarily efficient to compute and evaluate
in parallel. Thus, designing efficient parallel algorithms for accurately summing n floating-point
numbers is a challenge.
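A two-line experiment (ours) makes the roundoff issue concrete: summing the same ill-conditioned list with plain ⊕ in different orders gives different wrong answers, while an exact method returns the true sum.

```python
import math

xs = [1e16, 1.0, -1e16, 1.0]    # true sum is 2.0, but there is heavy cancellation
print(sum(xs))                  # left-to-right ⊕: 1.0 (the first 1.0 is absorbed by 1e16)
print(sum(sorted(xs)))          # another summation order: 0.0, a different wrong answer
print(math.fsum(xs))            # exact, faithfully rounded sum: 2.0
```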
Because of these complexity issues and the uncertainty that comes from roundoff errors, many
recent floating-point summation algorithms are based on exactly computing a rounded sum of two
floating point numbers, x and y, and the exact value of its error, utilizing a function,
AddTwo(x, y) → (s, es ),
where s = x ⊕ y and x + y = s + es , with s and es being floating point numbers in the same precision
as x and y. Example implementations of the AddTwo function include algorithms by Dekker [8]
and Knuth [25], both of which utilize a constant number of floating-point operations. As with these
recent approaches, in this paper we take the approach of summing the numbers in X exactly and
then faithfully rounding this exact sum to an appropriate floating-point representation.
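The paper does not give code for AddTwo; the sketch below (ours) is the standard branch-free TwoSum of Knuth [25], one of the implementations cited, written in Python.

```python
def add_two(x: float, y: float):
    """Knuth's TwoSum: returns (s, e) with s = x ⊕ y and x + y = s + e exactly."""
    s = x + y
    y_virtual = s - x
    x_virtual = s - y_virtual
    e = (x - x_virtual) + (y - y_virtual)
    return s, e

s, e = add_two(1e16, 1.0)
assert s == 1e16 and e == 1.0   # the rounded sum plus its exact error recovers x + y
```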
As mentioned above, because we desire precision-independent algorithms, which can work for
either fixed-precision or arbitrary-precision representations, we do not take the perspective in this
paper that floating-point numbers must be restricted to a specific floating-point representation, such
as IEEE 754 double-precision. As is common in other precision-independent algorithms, however,
we do assume that a floating-point representation is sufficiently precise so that the number, n, could
itself be represented exactly as a sum of a constant number of floating-point numbers. A similar
assumption is common in integer-based RAM and PRAM algorithms,2 and poses no restriction
in practice, since it amounts to saying that a memory address can be stored in O(1) memory
words. For example, even if all the computer storage on earth3 were interpreted as IEEE 754
single-precision numbers and summed, we could represent the input size, n, as the (exact) sum of
at most four single-precision floating-point numbers.
As a way of characterizing difficult problem instances, having the potential for significant
amounts of cancellation, the condition number, C(X), for X, is defined as follows [34, 39, 40]:
C(X) = (Σ_{i=1}^n |x_i|) / |Σ_{i=1}^n x_i|,
which, of course, is defined only for non-zero sums.4 Intuitively, problem instances with large
condition numbers are more difficult than problem instances with condition numbers close to 1.
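As a concrete illustration (ours, not the paper's), the condition number can be estimated directly from this definition in a few lines of Python; note that the two sums computed below are themselves ordinary floating-point sums, so the result is only an estimate when cancellation is severe.

def condition_number(xs):
    # C(X) = sum(|x_i|) / |sum(x_i)|, defined only for non-zero sums.
    total = sum(xs)
    if total == 0.0:
        raise ValueError("condition number undefined for a zero sum")
    return sum(abs(x) for x in xs) / abs(total)

# Example: condition_number([1.0, 1e8, -1e8]) is about 2e8, a difficult
# instance, whereas any all-positive input has condition number 1.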
Our approach for designing efficient parallel algorithms for summing n floating-point numbers, even for difficult problem instances with large condition numbers, is somewhat reminiscent of the classic Fast Fourier Transform (FFT) algorithm (e.g., see [3, 18]).
Namely, to compute the sum of n floating-point numbers, we convert the numbers to an alternative
representation, compute the sum of the numbers exactly in this alternative representation, and then
convert the result back to a (faithfully-rounded) floating-point number. In our case, the important
feature of our alternative representation is that it allows us to compute intermediate sums without
propagating carry bits, which provides for superior parallelization.
1.1 Previous Related Results
The floating-point summation problem poses interesting challenges for parallel computation, because most existing published exact-summation algorithms are inherently sequential. For instance,
we are not aware of any previous parallel methods for this problem that run in worst-case polylogarithmic time.
Neal [30] describes sequential algorithms that use a number representation that he calls a
superaccumulator to exactly sum n floating-point numbers, which he then converts to a faithfully-rounded floating-point number. Unfortunately, while Neal's superaccumulator representation reduces carry-bit propagation, it does not eliminate it, as is needed for highly-parallel algorithms.
A similar idea has been used in ExBLAS [6], an open source library for exact floating point
computations. Shewchuk [33] describes an alternative representation for exactly representing
intermediate results of floating-point arithmetic, but his method also does not eliminate carry-bit
propagation in summations; hence, it also does not lead to efficient parallel algorithms. In addition
to these methods, there are a number of inherently sequential methods for exactly summing n
floating point numbers using various other data structures for representing intermediate results,
including ExBLAS [6] and algorithms by Zhu and Hayes [39, 40], Demmel and Hida [9, 10], Rump
et al. [34], Priest [32], and Malcolm [29]. Although the method of Demmel and Hida [10] can be
carry-free for a limited number of summands, none of these sequential methods utilize a completely
carry-free intermediate representation suitable for efficient parallelization. Nevertheless, Rump
Footnote 2: A PRAM is a synchronized parallel random-access machine model [21], where memory is shared between processors, so that memory accesses are exclusive-read/exclusive-write (EREW), concurrent-read/exclusive-write (CREW), or concurrent-read/concurrent-write (CRCW).
Footnote 3: As of the writing of this paper, the total computer storage on earth is estimated to be less than 8 zettabytes, that is, 8 trillion gigabytes.
Footnote 4: We could alternatively define a condition number that adds to the denominator a very small value to avoid division by zero.
et al. [34] provide an interesting analysis that the running time of their sequential method can
be estimated to depend on n and a logarithm of the condition number and other factors. Also,
Demmel and Hida [10] showed that summing the numbers in a decreasing order by exponent yields
a highly accurate answer, yet the answer is not necessarily faithfully rounded.
There has also been a limited amount of previous work on parallel algorithms for summing
floating-point numbers. Leuprecht and Oberaigner [28], for instance, describe parallel algorithms for summing floating-point numbers, but their methods only parallelize data pipelining and do not translate into efficient parallel methods with polylogarithmic running times or efficient
algorithms in the external-memory5 or MapReduce models.6 Indeed, their methods can involve
as many as O(n) passes over the data. Kadric et al. [22] provide a parallel pipelined method
that takes a similar approach to the algorithm of Leuprecht and Oberaigner, while improving its
convergence in practice, but their method nevertheless depends on inherently sequential pipelining
and iterative refinement operations. Recently, Demmel and Nguyen [11] present a parallel floating-point summation method based on using a superaccumulator, but, like the previous sequential
superaccumulator methods cited above, their method does not utilize a carry-free intermediate
representation; hence, it has an inherently sequential carry-propagation step as a part of its “inner
loop” computation.
To our knowledge, no previous algorithm for summing floating-point numbers utilizes a carry-free intermediate representation, although such representations are known for integer arithmetic
(e.g., see [31]), the most notable being the redundant binary representation (RBR), which is a
positional binary representation where each position is from the set {−1, 0, 1}.
1.2 Our Results
We show in this paper that one can compute a faithfully-rounded sum, Sn∗ , of n floating-point
numbers with the following performance bounds.
• Sn∗ can be computed in O(log n) time using n processors in the EREW PRAM model. This
is the first such PRAM algorithm and, as we show, it is worst-case optimal.
• Sn∗ can be computed in O(log2 n log log log C(X)) time using O(n log C(X)) work in the
EREW PRAM model, where C(X) is the condition number of X. This is the first parallel
summation algorithm whose running time is condition-number sensitive.
• Sn∗ can be computed in external-memory in O(sort(n)) I/Os, where “sort(n)” is the I/O
complexity of sorting.7
• Sn∗ can be computed in external-memory in O(scan(n)) I/Os, where scan(n) is the I/O
complexity of scanning8 , when the size of internal memory, M , is Ω(σ(n)), where σ(n) is
the size of our intermediate representation for the sum of n floating-point numbers. This I/O
performance bound is, of course, optimal for this case. Typically, σ(n) is O(log n) in practice,
due to the concentration of exponent values in real-world data (e.g., see [20, 27]) or because
Footnote 5: The external-memory model [36] is a data-parallel model, where data is transferred between an internal memory of a specified size, M, and external memory in blocks of a specified size, B.
Footnote 6: A MapReduce algorithm is a coarse-grain parallel method that performs rounds that involve mapping elements to keys, assigning elements by their keys to "reducer" processors, and having those processors execute on sets of elements with the same key, possibly with a global sequential finalize phase at the end [7, 16, 24].
Footnote 7: sort(n) = O((n/B) log_{M/B}(n/B)), where M is the size of internal memory and B is the block size.
Footnote 8: scan(n) = O(n/B), where B is the block size.
fixed-precision floating-point numbers have relatively small values for l, the number of bits
used for exponent values, compared to n.
• Sn∗ can be computed with a single-round MapReduce algorithm in O(σ(n/p)p + n/p) time
and O(n) work, using p processors, with high probability, assuming p is o(n).
In addition, because of the simplicity of our MapReduce algorithm, we provide an experimental
analysis of several variations of this algorithm. We show that our MapReduce algorithm can achieve
up to 80X speedup over the state-of-the-art sequential algorithm. It achieves linear scalability with both the input size and the number of cores in the cluster.
2 Our Number Representations
To represent the exact summation of n floating point numbers, we utilize a variety of alternative
number representations. To begin, recall that a (base-2) floating-point number, x, is represented
as a sign bit, b, and a pair of integers, M and E, which we write as the triple (b, M, E), such that

x = (−1)^b × (1 + 2^{−t} M) × 2^{E − 2^{l−1} − 1},
where t is the number of bits allocated for M and l is the number of bits allocated for E. For
example, for double-precision floating-point numbers in the IEEE 754 standard, t = 52 and l = 11.
Thus, for the problem of summing the n floating-point numbers in X, we could alternatively
represent every floating point number, including each partial sum, as a fixed-point binary number
consisting of a sign bit, t + 2^{l−1} + ⌈log n⌉ bits to the left of the binary point, and t + 2^{l−1}
bits to the right of the binary point. Using such a representation, we would have no errors in
summing the numbers in X, assuming there are no unnecessary overflows. In other words, we can
consider such a fixed-point representation as a large binary integer representation, which then has
its binary point shifted. Of course, such a representation would potentially waste a lot of memory,
but it is nevertheless instructive to contemplate this as a possibility, as it motivates our other
representations.
Although this fixed-point representation is potentially wasteful of memory, it might not actually
be so bad for exactly representing partial sums of a reasonable number of summands of single-precision numbers. For example, in the IEEE 754 standard, single-precision 32-bit floating-point
numbers can be represented as 256-bit fixed-point numbers. Thus, the memory needed for an
error-free fixed-point representation of a single-precision floating-point number would occupy the
same amount of space as 8 single-precision numbers. Nevertheless, an additional drawback of this
representation is that, in the worst-case, there can be a lot of carry-bit propagations that occur for
any addition, which negatively impacts parallel performance.
In a superaccumulator representation, as used in some previous summation approaches (e.g.,
see [26, 30, 32, 33, 40]), we instead represent a fixed-point (or floating-point) number as a vector, Y ,
of floating-point numbers, (yk , yk−1 , . . . , y0 ), such that the number, y, represented by Y is
y = ∑_{i=0}^{k} y_i ,
where the numbers in Y have strictly increasing exponents, so that y0 is the least significant number
(and this summation is a true addition, with no roundoff errors).
As mentioned above, there are some interesting recent floating-point exact summation papers
based on the use of superaccumulators, but these are of limited value with respect to parallel
algorithms. For instance, Zhu and Hayes [40] present an inherently sequential algorithm that
essentially involves adding the numbers from X one at a time to an initially-zero superaccumulator
and then propagating the carries to produce a faithfully-rounded sum from the most-significant
entry. Neal [30] presents a similar algorithm and also discusses in passing an alternative approach
based on a bottom-up parallel summation tree computation based on superaccumulators, but
his parallel algorithm involves inefficiently constructing superaccumulators for every leaf, and his
superaccumulators overlap by half their bits and even then are not carry-free.
In our case, we allow each yi to be positive or negative and we add some additional conditions
so that superaccumulator summations are carry-free. We do this by extending the generalized
signed digit (GSD) integer representation [31] to floating-point numbers. This is a redundant
representation, so that there are multiple ways of representing the same fixed-point number.
To simplify our discussion, let us shift the binary point for y so that every y is an integer instead
of a fixed-point number. This shift is made without loss of generality, since we can shift the binary
point back to its proper place after we have computed an exact representation of the sum of the numbers
in X.
Next, following the GSD approach [31], we say that a superaccumulator is (α, β)-regularized
if
y_i = Y_i × R^i ,
for a given radix, R, and each mantissa Yi is an integer in the range [−α, β], for α, β ≥ 2. In
particular, for our algorithms, for any fixed t, we choose R to be a power of two, R = 2^{t−1} > 2, so
that each y_i can be represented using a floating-point exponent storing a multiple of t − 1 (since
we assume floating-point representations are base-2 in this paper). For arbitrary-precision floating-point numbers, we likewise choose R = 2^{t_0 − 1} > 2, for a fixed value t_0 ≥ 2 that is a reasonable
length for the number of bits needed for a mantissa (proportional to our word size). In either case,
we set
α = β = R − 1.
This choice of the parameters, α and β, is done so that we can achieve propagation-free carrying
of components of a summation, as we show next.
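To illustrate the representation, the following Python sketch (our own, not the paper's implementation) splits a finite double exactly into components Y_i · R^i with R = 2^{t−1}; for simplicity, all digits produced here carry the sign of the input, which is one admissible (α, β)-regularized form.

from math import frexp
from fractions import Fraction

T = 53                 # significand bits assumed (IEEE 754 double precision)
R = 1 << (T - 1)       # radix R = 2^(t-1), as chosen in the text

def to_superaccumulator(x):
    # Return a dict {i: Y_i} with x = sum of Y_i * R**i, exactly.
    m, e = frexp(x)                  # x = m * 2**e with 0.5 <= |m| < 1 (or m = 0)
    n = int(m * (1 << T))            # exact integer significand
    q, rem = divmod(e - T, T - 1)    # align on multiples of log2(R) = t - 1
    n <<= rem                        # now x = n * R**q, still exactly
    comps, i = {}, q
    sign, n = (-1 if n < 0 else 1), abs(n)
    while n:
        comps[i] = sign * (n % R)    # each digit lies in [-(R-1), R-1]
        n //= R
        i += 1
    return comps

def value_of(comps):
    # Exact value of a superaccumulator, as a rational, for checking.
    return sum(Fraction(Y) * Fraction(R) ** i for i, Y in comps.items())

# Example: value_of(to_superaccumulator(1.5)) == Fraction(3, 2).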
Our parallel algorithm for summing two superaccumulators, y and z, is as follows. First, we
compute each component-wise sum of the mantissas, Pi = Yi + Zi . This sum is then reduced to an
interim mantissa sum, Wi = Pi − Ci+1 R, where Ci+1 is a signed carry bit, i.e., Ci+1 ∈ {−1, 0, 1},
that is chosen so that Wi is guaranteed to be in the range [−(α − 1), β − 1]. (We show below
that this is always possible.) The final mantissa sum is then computed as Si = Wi + Ci , so
that the resulting collection of Si components is (α, β)-regularized and no carry-bit propagation
is necessary. As the following lemma shows, taking this approach allows us to avoid propagating
carries across an entire superaccumulator after each addition in a summation computation, while
nevertheless representing each partial sum exactly.
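A direct Python rendering of this rule (continuing the sketch above; the sequential per-index loop stands in for the constant-time parallel computation) is:

def add_superaccumulators(Y, Z, R):
    # Carry-free addition of two (alpha, beta)-regularized accumulators,
    # given as dicts {index: mantissa} with mantissas in [-(R-1), R-1].
    indices = set(Y) | set(Z)
    P = {i: Y.get(i, 0) + Z.get(i, 0) for i in indices}   # component-wise sums
    W, C = {}, {}
    for i, p in P.items():                                # independent per index
        if p >= R - 1:
            C[i + 1], W[i] = 1, p - R
        elif p <= -(R - 1):
            C[i + 1], W[i] = -1, p + R
        else:
            C[i + 1], W[i] = 0, p
    S = {i: W.get(i, 0) + C.get(i, 0) for i in set(W) | set(C)}
    # Keep every index that was active before or is non-zero now.
    return {i: v for i, v in S.items() if v != 0 or i in indices}

Note that each carry C_{i+1} reaches only the adjacent index and never farther, which is exactly what the following lemma certifies.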
Lemma 1 It is sufficient to choose α = β = R − 1, for R > 2, for the resulting sum of y and z to
be (α, β)-regularized, i.e., so that each Si is in the range [−α, β].
Proof: Given that y and z are (α, β)-regularized, the mantissa, Pi , is in the range [−2α, 2β]. We
wish to show that the result, s = y + z, is (α, β)-regularized, that is, that each mantissa, Si , is in
the range [−α, β]. Note that if −R + 1 < Pi < R − 1, then we can ensure Si is in the range [−α, β]
for any Ci in {−1, 0, 1} by setting Ci+1 = 0. So let us consider the cases when Pi is too negative
or too positive.
Case 1: Pi ≥ R − 1. Note that
Pi ≤ 2β = 2R − 2.
Hence, we can choose Ci+1 = 1, so that
Wi = Pi − Ci+1 R = Pi − R ≤ R − 2 = β − 1.
Moreover, Wi ≥ −1 ≥ −(α − 1), in this case, since α ≥ 2. Thus,
Si = Wi + Ci ≤ R − 2 + 1 ≤ R − 1 = β,
and Si ≥ −α.
Case 2: Pi ≤ −R + 1. Note that
Pi ≥ −2α = −2R + 2.
Hence, we can choose Ci+1 = −1, so that
Wi = Pi − Ci+1 R = Pi + R ≥ −R + 2.
Moreover, Wi ≤ 1 ≤ β − 1, in this case, since β ≥ 2. Thus,
Si = Wi + Ci ≥ −R + 2 − 1 ≥ −(R − 1) = −α,
and Si ≤ β.
This implies that, by choosing α = β = R − 1, we can guarantee that the sum of two (α, β)-regularized superaccumulators can be done in parallel in constant time, with each carry going to
at most an adjacent component and no further. There is a chance, of course, that the sum of two
superaccumulators indexed from 0 to k could result in a superaccumulator indexed from 0 to k + 1.
But this causes no problems for us, since any mantissa can hold Ω(log n) bits; hence, one additional
superaccumulator component is sufficient for holding all the adjacent-component carries during the
sum of n floating-point numbers.
An additional observation is that, since R is a power of two in our case, we can compute each
Wi and Ci using simple operations involving the addition or subtraction of a power of two to a
number, given Pi . Also, since we reserve an extra bit in each superaccumulator, computing Pi can
be done without overflow in the standard floating-point representation of components.
Intuitively, each pair of consecutive superaccumulator components, yi and yi+1 , “overlap” by
one bit and each component has one additional sign bit. As an alternative to representing each
yi as a floating-point number, then, we could instead represent each number, yi , using an integer
representation, with an explicit overlap of one bit between each consecutive pair of numbers, yi and
yi+1 , being understood. This would allow us to save some space for what amounts to redundant
exponents in the floating-point representations of the yi ’s. For the sake of simplicity, however,
we choose in this discussion to assume that each yi is itself a floating-point number, with the
understanding that our algorithms could be easily modified to work for the case that the numbers
in Y are integers. In either case, the overlap between consecutive numbers in Y allows us to apply a
lazy strategy for accumulating carry bits, without full propagation, which overcomes a shortcoming
in previous representations.
One important comment is in order here. Namely, for the analysis in this paper, we are not
assuming that the number, l, of bits allocated for floating-point exponents is a fixed constant;
hence, our analysis does not assume that, for our floating-point numbers, the size of an equivalent
fixed-point representation or superaccumulator for this number is a fixed constant.
Thus, for small numbers of arbitrary-precision floating-point numbers, it is possible that our
(α, β)-regularized superaccumulators may waste space. To avoid these issues, we can represent
numbers using a format we are calling a “sparse superaccumulator.” Given a superaccumulator,
Y = (y_k, y_{k−1}, . . . , y_0),
the sparse superaccumulator for Y is the vector,
Y′ = (y_{i_j}, y_{i_{j−1}}, . . . , y_{i_0}),
consisting of all the active indices in Y, for i_0 < i_1 < · · · < i_j . We say that an index, i, in a
superaccumulator is active if yi is currently non-zero or has ever been non-zero in the past (when
viewing a superaccumulator as an indexed data structure).
One possible parallel algorithm for summing two sparse superaccumulators, Y′ = (y_{i_{j1}}, . . . , y_{i_0})
and Z′ = (z_{i_{j2}}, . . . , z_{i_0}), is as follows. We first merge the active indices of Y′ and Z′, and we then do
a summation of corresponding terms (possibly with carries into adjacent components, as needed).
Note, though, that we will not propagate carries, when we use (α, β)-regularized superaccumulators.
Thus, an index in the sum is active if it was active in Y′ or Z′ or becomes non-zero as a result
of the sum. This definition is somewhat related to the adaptive floating-point representation of
Shewchuk [33], which introduces sparsity and adaptivity but only for vectors of non-overlapping
floating-point numbers having arbitrary exponents, rather than exponents that are powers of the
radix R, as in our sparse (α, β)-regularized superaccumulator representation.
Furthermore, given a sparse superaccumulator, Y′, and a parameter γ, we define the γ-truncated
sparse superaccumulator for Y′ to be the vector, Y″, consisting of the first (most-significant) γ
entries in Y′.
3 Our Fast PRAM Algorithm
The first PRAM algorithm we present is based on summing numbers represented using sparse
(α, β)-regularized superaccumulators. Our method runs in O(log n) time using O(n log n) work in
the EREW PRAM model, which is worst-case optimal. The details are as follows.
1. Build a binary summation tree, T , with ⌈log n⌉ depth, having each leaf i associated with a
distinct floating-point number, xi , in X. This step can be done in O(log n) time and O(n)
work.
2. In parallel, for each xi , convert xi into an equivalent (α, β)-regularized superaccumulator,
x′_i . This step can be done in O(1) time and O(n) work, just by splitting each floating-point
number into O(1) numbers such that each has an exponent that is a multiple of R.
3. Perform a parallel merge-sort of the x′_i components, using their exponents as keys (not their
mantissas). This creates a sorted list, E(v), for each node in T , consisting of the exponents
found in the subtree in T rooted at v, as well as links for each exponent, e in E(v) to its
predecessors in the lists for v’s children and its parent. This step can be done in O(log n)
time using O(n log n) work [5,17] via the cascading divide-and-conquer technique [1], because
the boundary exponents are known from the beginning.
4. Perform a parallel prefix computation to remove duplicate exponents in each E(v). This step
can be done in O(log n) time using O(n log n) work, since the total size of all the lists is
O(n log n).
5. Using the links created in the previous step to match up corresponding components in
constant-time per level, possibly adding new components that represent a carry bit moving
into a component that was previously not included in the sparse representation, perform a
bottom-up sparse superaccumulator summation in T . This results in a sparse superaccumulator representation of the sum being stored in the root of T . This step can be done in O(log n)
time using O(n log n) work.
6. Convert the sparse (α, β)-regularized superaccumulator for the sum at the root of T into a
non-overlapping superaccumulator that is ((R/2)−1, (R/2)−1)-regularized. This amounts to
a signed carry-bit propagation operation, which can be done by a parallel prefix computation
(based on a simple lookup table based on whether the input carry bit is a −1, 0, or 1). We
leave the details to the interested reader. This step can be done in O(log n) time using O(n)
work.
7. Correctly round the non-overlapping superaccumulator from the previous step into a floating-point number. This step amounts to locating the most significant non-zero component of the
superaccumulator and then combining that, as needed, with its neighboring components to
create a floating-point number of the appropriate size, rounding the result based on the
truncated bits. This step can be done in O(log n) time using O(n) work.
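The final rounding step can be sketched as follows; as a shortcut, this toy version (reusing the helpers sketched in Section 2) rebuilds the exact value with rational arithmetic and lets Python round it, whereas step 7 above reaches the same result by examining only the O(1) most-significant non-zero components.

def to_float(acc):
    # Faithful (indeed, nearest) rounding of the exact accumulated value.
    v = value_of(acc)            # exact rational value (a Fraction, or 0)
    return float(Fraction(v))    # float() of an exact rational rounds to nearest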
This gives us the following.
Theorem 2 Given n floating-point numbers, one can compute a faithfully-rounded representation
of their sum in O(log n) time using O(n log n) work (i.e., n processors) in the EREW PRAM model.
These time and processor bounds are worst-case optimal in the algebraic computation tree model.
Proof: We have already established the upper bounds. For the lower bounds, we show that the
set equality problem can be reduced to the floating-point summation problem in O(1) time using n
processors. Suppose, then, that we are given two sets of n positive numbers, C and D, and wish to
determine if C = D. Let τ be the smallest power of two greater than log n. For each element ci in C,
create the floating-point number, (−1, 1, τ c_i ), which represents the number (−1) × 2^{τ c_i} . Likewise,
for each element d_i in D, create the floating-point number, (1, 1, τ d_i ), which represents the number
1 × 2^{τ d_i} . We claim that C = D if and only if the sum of these two sets of numbers is zero. Clearly,
if C = D, then there is a one-to-one matching between equal elements in C and D; hence, there
is a matching for each floating-point number, (−1, 1, τ ci ), to a floating-point number, (1, 1, τ di ),
such that ci = di . Therefore, the sum of all of these numbers is zero in this case. Suppose, for the
“only if” case, that the sum of all these numbers is zero. Note that the exponent of any pair of
floating-point numbers in our collection is either the same or differs by at least τ > log n. Thus,
if two such numbers are different, they will remain different even if we multiply one of them by n.
Therefore, the only way that the sum of all these numbers is zero is if each floating-point number,
(−1, 1, τ ci ), in this collection, has a distinct associated floating-point number, (1, 1, τ di ), in this
collection, such that ci = di . Therefore, if the sum of all these numbers is zero, then C = D. The
lower-bounds follow, then, from the fact that summation is a binary operator and the set equality
problem has an Ω(n log n) lower bound in the algebraic computation tree model [2].
To our knowledge, this is the first PRAM method that achieves O(log n) time and an amount
of work that is worst-case optimal. We note, however, that the above lower bound holds only for
floating-point representations where exponents are represented with Ω(log n) bits.
4 Our Condition-Number Sensitive PRAM Algorithm
In our fast PRAM algorithm, we showed that it is possible to sum two sparse superaccumulators in
O(1) time using n processors in the EREW PRAM model, given a sorted merge of the exponents
of the components of each superaccumulator. This result also gives us the following.
Lemma 3 Given two truncated (α, β)-regularized sparse superaccumulators, Y1 and Y2 , of combined size m, we can compute the sum of Y1 and Y2 in O(log m) time using O(m/ log m) processors
in the EREW PRAM model.
Proof: The algorithm follows from the method described above combined with an EREW PRAM
method for merging two sorted lists. (E.g., see [21] for details on such computations.)
Given this tool, we now describe our condition-number sensitive parallel algorithm for summing
the floating-point numbers in X in parallel. Our method runs in polylogarithmic time using
O(n log C(X)) work.9
We begin with a simplified version of our algorithm from the previous section. Initialize an
O(log n)-depth summation tree, T , to have an element of X associated with each of its leaves.
Convert each value, xi , stored in a leaf to an equivalent sparse (α, β)-regularized superaccumulator.
Perform a bottom-up parallel summation computation on T using the method of Lemma 3 to perform each pairwise summation of two sparse superaccumulators. We then complete the algorithm
by a parallel prefix computation on the sparse superaccumulator for the root of T to propagate
all the signed carry bits and we then convert this result to a floating-point number, as in the last
two steps of our algorithm from the previous section. This simplified summation algorithm runs in
O(log2 n) time using O(n log n) work in the EREW PRAM model.
Given this template, our condition-number sensitive algorithm is as follows. Begin by setting
r = 2. Then perform the above bottom-up parallel merge-and-sum algorithm, but do so using r-truncated sparse superaccumulators. Unlike our previous method, this one may cause lossy errors
to occur, due to the restrictions to r-truncated sparse superaccumulators. So we utilize a test,
which we call the “stopping condition,” to determine if the result is correct and we can stop. If
the stopping condition is not satisfied, however, then we set r ← r² and repeat the computation.
We continue this iteration until either the stopping condition is satisfied or we have increased r so
high that the final sparse superaccumulator is no longer truncated.
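In outline, the driver loop looks as follows; truncated_tree_sum and passes_stopping_condition are hypothetical helpers standing for, respectively, the tree summation above restricted to r-truncated sparse superaccumulators and the test described next.

def condition_sensitive_sum(xs):
    r = 2
    while True:
        acc, truncated = truncated_tree_sum(xs, r)   # keep only r most-significant components
        if not truncated or passes_stopping_condition(acc, len(xs)):
            return acc                               # result certified faithful
        r = r * r                                    # r <- r^2 and repeat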
Before we analyze this algorithm, let us first provide the details for some sufficient stopping
conditions. Let y denote the summation value computed from our method after a given iteration,
where the final truncated sparse superaccumulator is
Y = (y_{i_1}, y_{i_2}, . . . , y_{i_r}),
so that y_{i_r} is its least-significant component. Also, for the sake of simplicity, let us assume that y
is positive; the method for the case when y is negative is similar. Let E_{i_r} denote the exponent for
y_{i_r} and let

ε_min = ε × 2^{E_{i_r}} ,

where ε is the smallest mantissa that can possibly be represented in our chosen floating-point
representation. Note that ε_min is the smallest value that could be represented by y_{i_r} . A sufficient
stopping condition, then, is to test whether or not

y = y ⊕ n·ε_min = y ⊕ (−n·ε_min),
Footnote 9: Technically, we could have C(X) = 1; hence we could have log C(X) = 0. As a notational convenience, therefore, we assume that the logarithms in our complexity bounds are defined to always have a minimum value of 1.
that is, the summation value, y, would be unchanged even after doing a floating-point addition or
subtraction of n copies of a bound whose magnitude is larger than any value we truncated.
A simplified alternative, which also works in our algorithms as a sufficient stopping condition, is to determine whether the exponent of the least significant bit in y is at least ⌈log n⌉ greater than E_{i_r} .
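In code, the first test reads roughly as follows (a sketch; y is the floating-point value obtained from the truncated result, n the number of summands, and e_min the quantity ε_min defined above):

def unchanged_by_truncation_bound(y, n, e_min):
    # y must be unchanged by adding or subtracting n copies of a bound
    # whose magnitude exceeds anything that may have been truncated.
    return y + n * e_min == y and y - n * e_min == y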
The reason that these tests are sufficient as stopping conditions is that when we are summing
the n floating-point numbers using truncated sparse superaccumulators, the largest magnitude by which
any summand can possibly be truncated is strictly less than ε_min . The reason for this is that, by
the definition of a truncated sparse superaccumulator, if y_{i_r} is included in our final truncated sparse
superaccumulator, then y_{i_r} was not truncated by any partial sum. Thus, the maximum value of
the sum of all the truncated values is at most

n·ε_min ≤ 2^{⌈log n⌉} · ε_min .
Interestingly, this algorithm gives us the following condition-number sensitive result.
Theorem 4 Given a set, X, of n floating-point numbers, we can compute a faithfully-rounded
representation of the sum of the numbers in X in time O(log2 n log log log C(X)) using work that
is O(n log C(X)) in the EREW PRAM model, where C(X) is the condition number for X.
Proof: The correctness for our algorithm follows immediately from the above discussion, since we
terminate when we are assured of a faithfully-rounded sum for the numbers in X. We claim that the
number of iterations (each of which involves squaring the truncation parameter, r) is O(log C(X)).
To see that this claim is true, note that
log C(X) = log( ∑_{i=1}^{n} |x_i| ) − log | ∑_{i=1}^{n} x_i | .
Thus, if we represented the values, X_1 = ∑_i |x_i| and X_2 = |∑_i x_i|, using exact fixed-point representations, then log C(X) is proportional to the difference, δ, between the bit-position of the most
significant 1-bit in the representation of X1 and the bit-position of the most significant 1-bit in the
representation of X2 . Our algorithm must therefore perform a sufficient number of iterations so
that the number of bits in our truncated sparse superaccumulator for the sum is at least δ. This
indeed occurs in our algorithm and, furthermore, note that our algorithm will terminate when the
number of bits in our truncated sparse superaccumulator for the sum is Θ(δ + log n). That is, our
algorithm terminates when r is O(log C(X)), since we assume that floating-point numbers in our
representation contain Ω(log n) bits. Since we square r in each iteration, this implies the claimed
running-time and work bounds for our parallel algorithm, since we require O(log log log C(X))
squarings to get r to be large enough and the total work involved is a geometric summation that
adds up to O(n log C(X)).
Thus, for the vast majority of inputs, which have constant-bounded condition numbers, our
algorithm uses a linear amount of work and runs in O(log2 n) parallel time. That is, implemented
as a sequential algorithm, our method would match the linear running time of the inherently
sequential method of adding n numbers, one at a time to a superaccumulator.
5 External-Memory Algorithms
In this section, we describe our efficient algorithms for summing n floating-point numbers in the
external-memory model [36].
Suppose we are given a set X of n floating-point numbers. Our sorting-based external-memory
algorithm is as follows.
1. Convert each floating-point number to an (α, β)-regularized sparse superaccumulator. This
can be done with a single scan over the input, using O(scan(n)) I/Os.
2. Sort the components of all the sparse superaccumulators constructed in the previous step
independently by their exponents. This step clearly takes O(sort(n)) I/Os.
3. Scan the sorted list of superaccumulator components, while maintaining an (α, β)-regularized
sparse superaccumulator, S, to hold the sum. With each component, yi,j , we add yi,j to S,
using a localized version of the algorithm for summing two superaccumulators. Note that we
do not need to store all of S in internal memory to implement this step, however, since we are
processing the components in order by their exponents. Instead, we just need to keep a hot-swap buffer of S that includes the current exponent, swapping out blocks to external memory
as they become full. Moreover, since summing two (α, β)-regularized superaccumulators is
a carry-free operation, we don’t need to worry about extra I/Os that would have otherwise
been caused by carry-bit propagation. Thus, we can implement this step in O(scan(n)) I/Os.
4. Given the computed superaccumulator, S, which now holds the exact sum of the n floating-point numbers, perform a back-to-front scan of S to propagate signed carry bits, to convert
S into a non-overlapping ((R/2) − 1, (R/2) − 1)-regularized sparse superaccumulator. This
step clearly requires O(scan(n)) I/Os.
5. Given a non-overlapping superaccumulator for the sum, we round the most-significant non-zero components to produce a correctly rounded floating-point number for the sum of the n
floating-point numbers in X.
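Ignoring the blocking into size-B pages and the hot-swap buffer, the data flow of this algorithm is captured by the following sequential sketch, which reuses the hypothetical helpers from Sections 2 and 3:

def exact_sum(xs):
    # Convert, sort components by exponent index, and fold them into one
    # accumulator; each fold touches only a component and its neighbor.
    comps = []
    for x in xs:
        comps.extend(to_superaccumulator(x).items())
    comps.sort(key=lambda item: item[0])
    acc = {}
    for i, m in comps:
        acc = add_superaccumulators(acc, {i: m}, R)
    return acc

# Example: to_float(exact_sum([1e100, 1.0, -1e100])) == 1.0, a sum that naive
# left-to-right floating-point addition gets wrong (it returns 0.0).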
This gives us the following.
Theorem 5 Given n floating-point numbers, we can compute a correctly-rounded floating-point
representation of their sum using O(sort(n)) I/Os in the external-memory model, in a cache-oblivious manner.
Proof: We have already established the performance bounds. To establish that this algorithm can
be performed in a cache-oblivious manner, we note that every step involves simple scans over lists
of numbers, except for a sorting step, which itself can be done cache obliviously [13].
The critical step in the above external-memory algorithm, of course, is the scan to add each
floating-point component to our superaccumulator. Since these components were sorted by their
exponents and our superaccumulator representation is carry-free, we need only keep a “sliding
window” of O(1) blocks of our superaccumulator in memory as we perform this scan. Moreover,
such a scan is cache-oblivious given any reasonable page eviction strategy.
If the size of our superaccumulator, σ(n), is less than the size of internal memory, M , however,
then we can utilize an even simpler algorithm. Namely, we can simply keep our entire superaccumulator stored in internal memory and process the floating-point components in any order to add
each one to the superaccumulator. This observation leads to the following, then.
Theorem 6 Given n floating-point numbers, we can compute a correctly-rounded floating-point
representation of their sum using O(scan(n)) I/Os in the external-memory model, in a cache-oblivious manner, if σ(n) ≤ M .
[Figure 1: Total running time as the input size increases from 1 million to 1 billion numbers. Four panels (C(X)=1, Random, Anderson's, Sum=Zero) plot Time (sec), from 0.001 to 100 on a log scale, against input sizes from 1M to 1B, for iFastSum, Small Superaccumulator (MapReduce), and Sparse Superaccumulator (MapReduce).]
6 Simple MapReduce Algorithms
In this section, we present a simple MapReduce [7, 16, 24] algorithm for summing n floating-point
numbers. We also report on an experimental evaluation, where we assume that the input is already
loaded in a Hadoop Distributed File System (HDFS) where the input is partitioned into 128 MB
blocks which are stored on the local disks of cluster nodes. Our algorithm is based on a single round of MapReduce which takes as input a collection of floating-point numbers and produces a
single floating point number that represents the correctly-rounded sum of all input numbers. In
this section, we first give a high level overview of our MapReduce algorithm. Then, we describe
an implementation of that algorithm using Spark [38]. Finally, we give experimental results of our
implementation using large scale data.
6.1 Overview
Our MapReduce algorithm runs as a single MapReduce job as detailed below.
• Map: for each xi , map xi to (r(xi ), xi ), where r(xi ) returns an integer in the range [1, p] that
represents one of the available p reducers. We can simply use a random function r, which
assigns each input record to a randomly chosen reducer. The function r can also be defined
based on domain knowledge of X with the goal of balancing the load across the p reducers.
For example, if p is o(n), then this function assigns roughly O(n/p) values for each reducer.
• Reduce: In the reduce phase, each reducer ri , i ∈ [1, p], sums up all the assigned numbers using
the sequential algorithm described earlier in Section 3. The output of the reduce function
is one sparse superaccumulator that represents the exact sum of the floating-point numbers
assigned to ri . After that, each reducer writes the resulting sparse superaccumulator to the
output.
• Post-process: In this final step, a single machine reads back the p sparse superaccumulators
produced by the p reducers and performs a final step of the sparse superaccumulator addition
algorithm to add all of them into one final sparse superaccumulator. Finally, the resulting
sparse superaccumulator is converted to a correctly-rounded floating point value which is
written to the output as the final answer.
6.2 Implementation
We implemented the MapReduce algorithm described above using Spark [38], a modern distributed
processing framework. This implementation is open source and available at https://github.com/aseldawy/sumn.
[Figure 2: Total running time as the parameter δ increases from 10 to 2000. Four panels (C(X)=1, Random, Anderson's, Sum=Zero) plot Time (sec), from 1 to 100 on a log scale, against δ values from 10 to 2K, for iFastSum, Small Superaccumulator (MapReduce), and Sparse Superaccumulator (MapReduce).]
[Figure 3: Total running time as the cluster size increases from 1 to 32 cores. Four panels (C(X)=1, Random, Anderson's, Sum=Zero) plot Time (sec), from 1 to 100 on a log scale, against core counts from 1 to 32, for iFastSum, Small Superaccumulator (MapReduce), and Sparse Superaccumulator (MapReduce).]
We begin by loading the input from disk into the distributed memory of the cluster nodes. In this
step, each machine loads the HDFS blocks that are physically stored on its local disk. Then, each
machine applies a combine function on each block, which uses the sequential algorithm described
in Section 3 to sum all numbers in each partition into one sparse superaccumulator. The goal of
the combine step is to reduce the size of the data that need to be shuffled between mappers and
reducers. The output of the combine function is a single key-value pair where the key is a random
integer in the range [1, p], and the value is the sparse superaccumulator. Then, the shuffle phase
groups key-value pairs by the reducer number and assigns each group to one reducer. After that,
each reducer sums up all the assigned sparse superaccumulators into one sparse superaccumulator
and writes it to the intermediate output. Finally, the postprocess phase runs on the driver machine
that issued the MapReduce job and it combines all sparse superaccumulators into one and writes
the correctly-rounded floating point value as the final result.
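For orientation only, the same data flow can be sketched in PySpark as follows; this is our illustrative outline rather than the released implementation, it assumes one number per input line, and exact_sum, add_superaccumulators, R and to_float stand for the hypothetical helpers sketched earlier.

import random
from pyspark import SparkContext

def spark_exact_sum(sc: SparkContext, path: str, p: int = 32):
    merge = lambda a, b: add_superaccumulators(a, b, R)
    partials = (sc.textFile(path)
                  .map(float)                                   # parse one number per line
                  .mapPartitions(lambda it: [exact_sum(it)])    # combine: one accumulator per block
                  .map(lambda acc: (random.randrange(p), acc))  # assign to one of p reducers
                  .reduceByKey(merge)                           # reduce: merge within each group
                  .values()
                  .collect())
    result = {}
    for acc in partials:                                        # post-process on the driver
        result = merge(result, acc)
    return to_float(result)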
6.3 Experimental Results
To test our MapReduce implementations, we carried out an experimental evaluation of the proposed
MapReduce algorithm on large datasets of up to one billion numbers. The input datasets were
randomly generated using four different distributions as described in [39]:
1. A set of randomly generated positive numbers which results in a condition number C(X) = 1.
2. A mix of positive and negative numbers generated uniformly at random.
3. Anderson’s-ill conditioned data where a set of random numbers are generated, and then their
arithmetic mean is subtracted from each number.
4. A set of numbers with real sum equal to zero.
In the four datasets, the parameter δ defines an upper bound for the range of exponents of input
numbers.
We used Apache Spark 1.6.0 running on Ubuntu 14.04 and OpenJDK 1.7.0_91. For experimental
repeatability, we ran all our experiments on Amazon EC2 using an instance of type ‘r3.8xlarge’
which has 32 cores and 244 GB of main memory running on Intel Xeon E5-2670 v2 processors [37].
We used the 32 cores to run a local cluster of 32 worker nodes inside that machine. We ran the
experiments using three algorithms and measured the end-to-end running time of each one:
1. Our proposed MapReduce algorithm that uses sparse superaccumulators.
2. A variation of our MapReduce algorithm that uses small superaccumulators [30] instead of
sparse superaccumulators.
3. The state-of-the-art sequential iFastSum algorithm [40].
For the latter algorithm, we used the original C++ implementation provided by Zhu and Hayes [40],
compiled using gcc 4.8.4. For all the techniques, we first generated a dataset using the random
generator provided in [40] and stored it to disk. Then, we processed the same generated dataset with
each algorithm one after another. We ignored the disk I/O time and focused only on the processing
time. If we take the disk I/O into account, all MapReduce algorithms will be much faster due to
the distributed nature of Hadoop Distributed File System (HDFS) where the machines load data
in parallel from multiple disks.
Figure 1 shows the overall running time of the three algorithms as the input size increases from
1 million to 1 billion numbers, while the value of δ is fixed at 2000. In general, iFastSum is faster
for processing small datasets with less than 10 million records. However, as the size of the data
increases, both MapReduce implementations outperform the sequential iFastSum algorithm with
up to 80x speedup. This shows a great potential for MapReduce algorithms in the problem of
summing a huge array of floating point numbers. We observe that the implementation that uses
small superaccumulator is slightly faster than the one that uses sparse superaccumulator. The
reason for this is that each sparse superaccumulator runs on a single core, which does not allow it
to realize the theoretical limit of doing O(p) work in O(1) time using p processors. In the future,
we could investigate the use of single-instruction multiple-data (SIMD) features to achieve a higher
level of parallelism. Another possibility is to use GPU processing units which can achieve massive
parallelization with thousands of threads on a single chip. We believe that there is a huge potential
in these two options as the design of sparse superaccumulator lends itself to these two types of
parallelization where the same instruction is repeated for every index in the superaccumulator.
Figure 2 shows the running time when the parameter δ increases from 10 to 2000. Notice that the
maximum possible value for δ is 2046 for double-precision floating point numbers. As the input size
is 1 billion, we observe that the two MapReduce algorithms consistently outperform the sequential
iFastSum algorithm. We also observe that the running time of the sparse superaccumulator
algorithm slightly increases as the value of δ increases. This behavior is expected, because the
increased value of δ makes the superaccumulator less sparse as the number of non-zero (active)
indices increases. The only exception is with dataset No. 3, as the subtraction of the mean causes
the range of exponents to decrease to about 15 even if the original δ is very large. Similar to the
behavior in its original paper [39], the running time of iFastSum with dataset No. 4 increases with
the value of δ. Interestingly, the small superaccumulator keeps a consistent performance regardless
of the value of δ.
Figure 3 shows the end-to-end running time as the cluster size increases from 1 to 32 cores.
The performance of iFastSum stays constant as it runs only on a single core. The results of this
experiment show perfect scalability for our MapReduce algorithm, where the performance scales
linearly with the number of cores. The performance starts to saturate as the cluster size increases
from 16 to 32 cores because the underlying processor virtually increases the number of running
threads using the hyper-threading feature. For a small cluster with a few cores, iFastSum is faster,
as it is highly tuned for sequential processing while Spark incurs extra overhead for the other
algorithms. As the number of cores increases, the parallelization of MapReduce algorithms allows
them to outperform iFastSum in all experimented datasets. However, the crossover point changes
from one dataset to another. For example, since dataset No. 4 is the worst case for iFastSum,
sparse superaccumulator proves to be faster even on a single core.
7 Conclusion
In this paper, we have given a number of efficient parallel algorithms for computing a faithfully
rounded floating-point representation of the sum of n floating-point numbers. Our algorithms are
designed for a number of parallel models, including the PRAM, external-memory, and MapReduce
models. The primary design paradigm of our methods is that of converting the input values
to an intermediate representation, called a sparse superaccumulator, summing the values exactly
in this representation, and then converting this exact sum to a faithfully-rounded floating-point
representation. We are able to achieve significant parallelization by utilizing a novel intermediate
floating-point superaccumulator representation that is carry-free. Our experimental evaluation
shows that our MapReduce algorithm can achieve up to 80X performance speedup as compared to
the state-of-the-art sequential algorithm. The MapReduce algorithm yields linear scalability with both the input size and the number of cores in the cluster.
Acknowledgments
This research was supported in part by the National Science Foundation under grant 1228639, and
an AWS in Education Grant. We would like to thank Wayne Hayes for several helpful discussions
concerning the topics of this paper.
References
[1] M. J. Atallah et al. Cascading divide-and-conquer: A technique for designing parallel algorithms.
SICOMP, pages 499–532, 1989.
[2] M. Ben-Or. Lower bounds for algebraic computation trees. In STOC, pages 80–86, 1983.
[3] E. O. Brigham. The Fast Fourier Transform and Its Applications. Prentice-Hall, Inc., 1988.
[4] C. Burnikel et al. Exact Geometric Computation in LEDA. In SoCG, pages 418–419, 1995.
[5] R. Cole. Parallel merge sort. SICOMP, 17(4):770–785, 1988.
[6] S. Collange et al. A Reproducible Accurate Summation Algorithm for High-Performance Computing.
In Proceedings of the SIAM EX14 workshop, 2014.
[7] J. Dean and S. Ghemawat. MapReduce: Simplified data processing on large clusters. Commun. ACM,
51(1):107–113, Jan. 2008.
[8] T. Dekker. A floating-point technique for extending the available precision. Numerische Mathematik,
pages 224–242, 1971.
[9] J. Demmel and Y. Hida. Accurate and efficient floating point summation. SIAM Journal on Scientific
Computing, 25(4):1214–1248, 2004.
[10] J. Demmel and Y. Hida. Fast and accurate floating point summation with application to
computational geometry. Numerical Algorithms, 37(1-4):101–112, 2004.
[11] J. Demmel and H. D. Nguyen. Parallel reproducible summation. IEEE TC, 64(7):2060–2070, July
2015.
[12] L. Fousse et al. MPFR: A multiple-precision binary floating-point library with correct rounding.
TOMS, June 2007.
[13] M. Frigo et al. Cache-oblivious algorithms. In FOCS, pages 285–297, 1999.
[14] GMPLib. GMP: the GNU multiple precision arithmetic library. https://gmplib.org/. Accessed
2015-12-16.
[15] D. Goldberg. What every computer scientist should know about floating-point arithmetic. CSUR,
pages 5–48, Mar. 1991.
[16] M. T. Goodrich et al. Sorting, searching, and simulation in the MapReduce framework. In Int. Symp.
on Algorithms and Computation (ISAAC), volume 7074 of LNCS, pages 374–383, 2011.
[17] M. T. Goodrich and S. R. Kosaraju. Sorting on a parallel pointer machine with applications to set
expression evaluation. J. ACM, 43(2):331–361, Mar. 1996.
[18] M. T. Goodrich and R. Tamassia. Algorithm Design and Applications. Wiley Publishing, 2014.
[19] G. Hanrot, V. Lefévre, P. Pélissier, P. Théveny, and P. Zimmermann. The GNU MPFR library.
http://www.mpfr.org/. Accessed 2015-12-16.
[20] M. Isenburg, P. Lindstrom, and J. Snoeyink. Lossless compression of predicted floating-point
geometry. Computer-Aided Design, pages 869–877, 2005.
[21] J. JáJá. An Introduction to Parallel Algorithms. Addison-Wesley, 1992.
[22] E. Kadric, P. Gurniak, and A. DeHon. Accurate parallel floating-point accumulation. In 21st IEEE
Symp. on Computer Arithmetic (ARITH), pages 153–162, April 2013.
[23] M.-Y. Kao and J. Wang. Linear-time approximation algorithms for computing numerical summation
with provably small errors. SISC, pages 1568–1576, 2000.
[24] H. Karloff, S. Suri, and S. Vassilvitskii. A model of computation for mapreduce. In 21st ACM-SIAM
Symp. on Discrete Algorithms (SODA), pages 938–948, 2010.
[25] D. E. Knuth. The Art of Computer Programming, Volume 2 (3rd Ed.): Seminumerical Algorithms.
Addison-Wesley, Boston, MA, USA, 1997.
[26] U. W. Kulisch and W. L. Miranker. The arithmetic of the digital computer: A new approach. SIAM
Review, 28(1):1–40, 1986.
[27] A. Lacroix and F. Hartwig. Distribution densities of the mantissa and exponent of floating point
numbers. In ISCAS, pages 1792–1795, May 1992.
[28] H. Leuprecht and W. Oberaigner. Parallel algorithms for the rounding exact summation of floating
point numbers. Computing, 28(2):89–104, 1982.
[29] M. A. Malcolm. On accurate floating-point summation. Commun. ACM, 14(11):731–736, Nov. 1971.
[30] R. M. Neal. Fast exact summation using small and large superaccumulators. arXiv ePrint,
abs/1505.05571, 2015.
[31] B. Parhami. Generalized signed-digit number systems: a unifying framework for redundant number
representations. IEEE TC, 39(1):89–98, Jan 1990.
[32] D. Priest. Algorithms for arbitrary precision floating point arithmetic. In 10th IEEE Symp. on
Computer Arithmetic (ARITH), pages 132–143, Jun 1991.
[33] J. Richard Shewchuk. Adaptive precision floating-point arithmetic and fast robust geometric
predicates. Discrete & Computational Geometry, 18(3):305–363, 1997.
[34] S. M. Rump, T. Ogita, and S. Oishi. Accurate floating-point summation part i: Faithful rounding.
SIAM Journal on Scientific Computing, 31(1):189–224, 2008.
[35] M. Tommila. Apfloat for Java. http://www.apfloat.org/apfloat_java/. Accessed 2015-12-16.
[36] J. S. Vitter. Algorithms and data structures for external memory. TnTCS, pages 305–474, Jan. 2008.
[37] Intel Xeon Processor E5-2670 v2. http://ark.intel.com/products/75275.
[38] M. Zaharia et al. Spark: Cluster Computing with Working Sets. In HotCloud, 2010.
[39] Y.-K. Zhu and W. B. Hayes. Correct rounding and a hybrid approach to exact floating-point
summation. SIAM Journal on Scientific Computing, 31(4):2981–3001, 2009.
[40] Y.-K. Zhu and W. B. Hayes. Algorithm 908: Online Exact Summation of Floating-Point Streams.
TOMS, pages 1–13, 2010.
| 8 |
Session Types = Intersection Types + Union Types
Luca Padovani
Dipartimento di Informatica, Università di Torino
Corso Svizzera 185, Torino, Italy
[email protected]
We propose a semantically grounded theory of session types which relies on intersection and union
types. We argue that intersection and union types are natural candidates for modeling branching
points in session types and we show that the resulting theory overcomes some important defects of
related behavioral theories. In particular, intersections and unions provide a native solution to the
problem of computing joins and meets of session types. Also, the subtyping relation turns out to be
a pre-congruence, while this is not always the case in related behavioral theories.
1 Introduction
Session types [10, 11, 12] are protocol descriptions that constrain the use of communication channels
in distributed systems. In these systems, processes engage into a conversation by first establishing a
session on some private channel and then carrying on the conversation within the protected scope of
the session. The session type prescribes, for each process involved in the session, the sequence and the
type of messages the process is allowed to send or expected to receive at each given time. For example,
the session type a.a.b associated with some channel c states that a process can use c for sending two a
messages and then waiting for a b message, in this order. Names a and b may stand for either message
types, labels, method names and so forth, depending on the process language one is considering.
In most session type theories it is possible to specify protocols with branching points indicating
alternative behaviors: for example, the session type a.T @ b.S usually means that a process chooses to
send either an a message or a b message and then behaves according to T or S depending on the message
that it has sent; dually, the session type a.T @ b.S usually means that a process waits for either an a
message or a b message, and then behaves according to the respective continuation. In these examples, as
in the session type theories cited above, one is making the implicit assumption that the process actively
choosing to follow one particular branch is the one that sends messages, while the process passively
waiting for the decision is the one that receives messages. In practice, it is appropriate to devise two
distinct branching operators, instead of a single one @ like in the examples above, to emphasize this fact.
This is the key intuition in [3, 14, 1] where session types are studied as proper terms of a simple process
algebra with action prefixes and two choice operators: the internal choice T ⊕ S denotes that the process
decides which branch, T or S, to take and behaves accordingly; the external choice T + S denotes that
the process offers two possible behaviors, T and S, and leaves the decision as to which one to follow to
the process at the other end of the communication channel.
The approach advocated in [3, 14] recasts session types into well-known formalisms (process algebras) by fully embracing their behavioral nature. This permits the definition of an elegant, semantically
grounded subtyping relation ≼ for session types as an adaptation of the well-known must pre-order for
processes [6, 5]. Nonetheless, the resulting theory of session types suffers from a few shortcomings.
First of all, the semantics of the external choice is a bit involved because in some contexts it is indistinguishable from that of the internal choice: the typical example, which is also one of the pivotal laws
of the must pre-order, is a.T + a.S ≃ a.(T ⊕ S) (we write ≃ for the equivalence relation induced by ≼).
As a direct consequence of this, the subtyping relation ≼ fails to be a pre-congruence. Indeed we have
a.b ≼ a.b + b.c but a.b + b.d ⋠ a.b + b.c + b.d ≃ a.b + b.(c ⊕ d). This poses practical problems (one
has to characterize the contexts in which subtyping is safe) as well as theoretical ones (≼ is harder to
characterize axiomatically). Finally, recent developments of session type theories have shown a growing
interest toward the definition of meet and join operators over session types [13], which must be defined
in an ad hoc manner since these do not always correspond to the internal choice and the external choice.
In this paper we propose a language of session types which uses intersection types and union types
for modeling branching points. The idea is that when some channel is typed by the intersection type
ā.T ∧ b̄.S this means that the channel has both type ā.T and also type b̄.S, namely a process conforming
to this type can choose to send an a message or a b message and then use the channel as respectively
prescribed by T and S. Dually, when some channel is typed by the union type a.T ∨ b.S this means that
the process does not precisely know the type of the channel, which may be either a.T or b.S. Hence it
must be ready to receive both an a message and a b message. It is the message received from the channel
that helps the process disambiguate the type of the channel. If the message does not provide enough
information, the ambiguity is propagated, hence one pivotal law of our theory is a.T ∨ a.S ≃ a.(T ∨ S).
In summary, we argue that intersection and union types are natural, type theoretic alternatives for
internal and external choices, respectively. Furthermore, they allow us to develop a decidable theory of
session types that are natively equipped with join and meet operators, and where the semantically defined
subtyping relation is a pre-congruence.
Structure of the paper. We devote Section 2 to presenting a process algebra, so that we can formalize
processes and correct process interactions in dyadic sessions (i.e., we consider sessions linking exactly
two processes). We introduce session types in Section 3, where we use the formalization of processes
from the previous section for defining their semantics. The section includes the description of an algorithm for deciding the subtyping relation, a type system for checking whether a process conforms to
a given session type, as well as an extended example motivating the need to compute meet and join
of session types. We conclude in Section 4 with a summary of the paper and a few hints at future research directions. For the sake of simplicity, in this paper we restrict ourselves to finite processes and
finite types. Indeed, the relationship between branching operators and intersection and union types is
independent of the fact that processes may or may not be infinite. On the contrary, dealing with infinite behaviors introduces some technical difficulties, briefly touched upon in Section 4, that we plan to
address in a forthcoming and more comprehensive work. For the sake of readability, proofs and other
technical details have been postponed to sections A and B.
2
Processes
Let us fix some notation: we let a, b, . . . range over some set N of action names whose meaning is
left unspecified; we let P, Q, . . . range over processes and α, β, . . . range over actions. We distinguish
input actions of the form a from output actions of the form ā; we write ᾱ for the co-action of α, where the
·̄ involution maps ā back to a. We consider the simple language of sequential processes whose grammar is described in Table 1.
Syntactically speaking the language is a minor variation of CCS without τ’s [6, 9], without relabeling,
restriction, and parallel composition. The terms 0 and 1 denote idle processes that perform no further
action. The former is deadlocked, while the latter represents a successfully terminated interaction (since
we are going to give processes a testing semantics, we prefer denoting success by means of a dedicated
Table 1: Syntax of processes.

  Process  P ::= 0        (deadlock)
               | 1        (termination)
               | α.P      (prefix)
               | P ⊕ P    (internal choice)
               | P + P    (external choice)

  Action   α ::= a        (input)
               | a        (output)
Table 2: Operational semantics of processes (symmetric rules omitted).

  (R 1)  1 −X→ 1
  (R 2)  α.P −α→ P
  (R 3)  P ⊕ Q −→ P
  (R 4)  if P −→ P′ then P + Q −→ P′ + Q
  (R 5)  if P −α→ P′ then P + Q −α→ P′
  (R 6)  if P −a→ P′, with a an output action, then P + Q −→ a.P′
term 1 rather than a special action as in other theories [5]). The term α.P denotes a process that performs
the action α and then continues as P. The term P ⊕ Q denotes a process that internally decides whether to
behave as P or as Q. Finally, the term P + Q is the external choice of P and Q and denotes a process that
externally offers two behaviors, P and Q, and lets the environment decide which one it should follow. As
we will see shortly, the decision of the environment is guided, as usual, by the initial actions performed
by P and Q. In the following we will usually omit trailing 1’s and write, for example, a.b instead of
a.b.1. We will also write P for the set of all processes.
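To fix intuitions, the following small Python sketch (ours, not part of the paper) encodes the syntax of Table 1 as nested tuples; the constructor names, the choice of Python, and the explicit "in"/"out" polarity tags are our own assumptions.

NIL  = ("0",)    # deadlocked process 0
DONE = ("1",)    # successfully terminated process 1

def prefix(action, p):      # α.P
    return ("prefix", action, p)

def ichoice(p, q):          # P ⊕ Q  (internal choice)
    return ("ichoice", p, q)

def echoice(p, q):          # P + Q  (external choice)
    return ("echoice", p, q)

def co(action):             # co-action: swap the input/output polarity of an action
    kind, name = action
    return ("out" if kind == "in" else "in", name)

# One possible reading of the process a.(a + b) discussed later in this section,
# taking the outer prefix as an output and the two summands as inputs (the
# plain-text rendering of the paper does not show which occurrences are outputs).
P_example = prefix(("out", "a"), echoice(prefix(("in", "a"), DONE),
                                         prefix(("in", "b"), DONE)))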
The formal meaning of processes is given by a transition system, defined in Table 2 (symmetric rules
have been omitted). The system consists of two relations, an unlabelled one −→ and a labelled one −µ→,
where the label µ is an element of N ∪ N̄ ∪ {X} and X ∉ N ∪ N̄ is a flag denoting successful
termination. We extend the ·̄ involution to labels so that X̄ = X, and to sets of labels A so that Ā =
{µ̄ | µ ∈ A}. Intuitively −→ represents internal, invisible transitions of a process, while −µ→ represents
external, visible transitions of a process. We briefly describe the meaning of the rules in the following
paragraph: rule ( R 1) signals the fact that the process 1 has terminated successfully; rule ( R 2) states
that a process α.P may execute the action α and reduce to P; rule ( R 3) (and the symmetric one) states
that a process P ⊕ Q internally decides to reduce to either P or Q; rule ( R 4) (and the symmetric one)
states that internal decisions taken in some branch of an external choice do not preempt the other branch
of the external choice. This rule is common in process algebras distinguishing between internal and
external choices, such as CCS without τ’s [6], from which our process language is inspired. Rule ( R 5)
(and the symmetric one) states that an external choice offers any action that is offered by either branch
of the choice. Rule ( R 6) and its symmetric counterpart are possibly the least familiar. They state that a process
performing an output action may preempt other branches of an external choice. This rule has been
originally introduced in [4] where the message sent is detached from its corresponding continuation,
which is thus immediately capable of interacting with the surrounding environment. Here, as in [3],
we keep the message and its continuation attached together, so as to model an asynchronous form of
communication where the order of messages is preserved. This is practically justified in our setting as
we aim at modelling dyadic sessions. In the following we will sometimes use the following notation: we
write =⇒ for the reflexive and transitive closure of −→; we let =µ⇒ be the composition =⇒ −µ→ =⇒; we
write P −̸→ if there is no P′ such that P −→ P′; we write P =µ⇒ if P =µ⇒ P′ for some P′; and we let
init(P) = {µ | P =µ⇒}.
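The two relations of Table 2 are directly executable on the tuple encoding introduced above. The sketch below is ours and only illustrative; in particular, we also let the termination flag X propagate through external choice together with rule (R 5), an assumption on our part that matches the examples of Section 3.2 (e.g. 1 + a being a client of end).

def internal_steps(p):
    """Targets of the unlabelled reduction P --> P' (rules (R 3), (R 4), (R 6) and symmetric)."""
    kind = p[0]
    steps = []
    if kind == "ichoice":                                   # (R 3): P ⊕ Q --> P  and  P ⊕ Q --> Q
        steps += [p[1], p[2]]
    elif kind == "echoice":
        left, right = p[1], p[2]
        steps += [("echoice", l, right) for l in internal_steps(left)]    # (R 4)
        steps += [("echoice", left, r) for r in internal_steps(right)]
        for branch in (left, right):                        # (R 6): an output commits the choice,
            for mu, cont in labelled_steps(branch):         # keeping message and continuation attached
                if mu != "X" and mu[0] == "out":
                    steps.append(("prefix", mu, cont))
    return steps

def labelled_steps(p):
    """Pairs (mu, P') of the labelled relation, mu being an action or the flag "X"."""
    kind = p[0]
    if kind == "1":                                         # (R 1): 1 signals successful termination
        return [("X", p)]
    if kind == "prefix":                                    # (R 2): α.P offers α and continues as P
        return [(p[1], p[2])]
    if kind == "echoice":                                   # (R 5): P + Q offers what either branch offers
        return labelled_steps(p[1]) + labelled_steps(p[2])
    return []                                               # 0 and ⊕ offer no visible action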
The next and final step is to describe how two processes “complete each other”, in the sense that they
interact without errors. Informally, P and Q interact without errors if, regardless of the respective internal
choices, they are always capable of synchronizing by means of complementary actions or they have both
successfully terminated. We formalize this as the following orthogonality relation between processes:
Definition 2.1 (orthogonal processes). Let −→ be the smallest relation between systems P | Q of two
processes such that:

  P | Q −→ P′ | Q    if P −→ P′
  P | Q −→ P | Q′    if Q −→ Q′
  P | Q −→ P′ | Q′   if P −α→ P′ and Q −ᾱ→ Q′

and let =⇒ be the reflexive, transitive closure of −→. We write P | Q −̸→ if there are no P′ and Q′ such
that P | Q −→ P′ | Q′. We say that P and Q are orthogonal, notation P ⊥ Q, if P | Q =⇒ P′ | Q′ −̸→ implies
P′ −X→ and Q′ −X→.
As an example, consider the process P = a.(a + b). Then a.a, a.b, a.(a ⊕ b) are all orthogonal to P.
The processes a and P are not orthogonal because a | P −→ 1 | a + b −̸→ while a + b offers no X action (both processes
must be in a successfully terminated state when they reach a stable configuration). Also a.(a ⊕ c) and P
are not orthogonal because a.(a ⊕ c) | P −→ a ⊕ c | a + b −→ c | a + b −̸→.
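Since the processes considered here are finite, orthogonality is decidable by exhaustively exploring the reductions of P | Q. The sketch below is ours; it reuses internal_steps, labelled_steps and co from the previous sketches and checks that every stable configuration has both components able to emit X.

def system_steps(p, q):
    """Reductions of P | Q: an internal step of either side, or a synchronization on complementary actions."""
    steps = [(p2, q) for p2 in internal_steps(p)] + [(p, q2) for q2 in internal_steps(q)]
    for mu, p2 in labelled_steps(p):
        if mu == "X":
            continue
        for nu, q2 in labelled_steps(q):
            if nu == co(mu):
                steps.append((p2, q2))
    return steps

def orthogonal(p, q):
    """P ⊥ Q: every stable configuration reachable from P | Q has both components offering X."""
    seen, todo = set(), [(p, q)]
    while todo:
        conf = todo.pop()
        if conf in seen:
            continue
        seen.add(conf)
        nxt = system_steps(*conf)
        if not nxt:                      # stable configuration reached
            for side in conf:
                if all(mu != "X" for mu, _ in labelled_steps(side)):
                    return False
        todo.extend(nxt)
    return True

# Under the polarity chosen in the earlier sketch, orthogonal(prefix(("in","a"),
# prefix(("out","a"), DONE)), P_example) holds, while
# orthogonal(prefix(("in","a"), DONE), P_example) does not.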
Orthogonality provides us with a notion of “test” that we can use for discriminating processes, in the
spirit of the testing framework [5]. Informally, when P ⊥ Q we can see Q as a test that P succeeds to
pass (since orthogonality is symmetric, we can also reason the other way around and see P as a test for
Q). Equivalently, we can see Q as a context that completes P. Then, we can say that two processes are
equivalent if they pass the same tests, or if they are completed by the same contexts. In fact, it makes
sense to interpret processes as the set of tests they pass and to define a pre-order between processes,
which we call refinement, as the inclusion of their corresponding interpretations.
Definition 2.2 (process interpretation and refinement). Let JPK = {Q ∈ P | P ⊥ Q}. We say that Q is
a refinement of P, notation P . Q, if and only if JPK ⊆ JQK. We write h for the equivalence relation
induced by ., namely h = . ∩ .⁻¹.
Intuitively, Q is a refinement of P if any test that P passes is also passed by Q. Therefore, it is safe
to replace P with Q as any context in which P operates correctly will continue to do so also with Q. The
equational theory induced by refinement is closely related to the must testing pre-order [5]. In particular,
we have P ⊕ Q . P since JP ⊕ QK = JPK ∩ JQK. This equivalence lets us appreciate the fact that the
internal choice operator does correspond to an intersection when we interpret processes as the sets of
their orthogonals. Alas, under this interpretation the external choice operator does not correspond to a
union, for three reasons:
• There can be processes in JP + QK that are not contained in JPK ∪ JQK. For example, a ⊕ b ∈
Ja + bK \ JaK ∪ JbK. This is fairly common in every framework that accounts for non-deterministic
entities. In our case, a ⊕ b is orthogonal to a + b, but not to a or b alone.
• Sometimes JP + QK = JP ⊕ QK = JPK ∩ JQK, namely the external choice can be internal choice
in disguise. For example, we have a.a + a.b h a.a ⊕ a.b h a.(a ⊕ b). The problem is that both
Table 3: Syntax of session types.

  Session type  T ::= 0        (bottom)
                    | 1        (top)
                    | end      (termination)
                    | α.T      (prefix)
                    | T ∧ T    (intersection)
                    | T ∨ T    (union)
branches of the external choice are guarded by the same action a, and since it is the initial action performed that determines the chosen branch, the process a.a + a.b does not offer an external
choice, but is actually performing an internal one. A different instance of this phenomenon occurs
when both branches of an external choice are guarded by output actions, because of rule ( R 6). For
example, we have a + b h a ⊕ b.
• The fact that output actions can preempt branches of external choices can make such branches
useless. For example a + b h a + 1 h a, since a + P −→ a by rule ( R 6).
A direct consequence of these subtleties related with the external choice is that refinement fails to be
a pre-congruence. In particular, we are now able to justify the (in)equivalences a.b + b.d 6. a.b + b.c +
b.d h a.b + b.(c ⊕ d) that we have anticipated in the introduction.
Observe that there are pathological processes that are intrinsically flawed and cannot interact correctly with any other process. For example, a ⊕ b has no orthogonals since it is not possible to know
which message, a or b, it is ready to receive. As another example, the process P = a ⊕ b̄, which internally chooses between an input and an output, has no orthogonals: no process interacting with it can send an a message, since P −→ b̄; at the same time, a process
waiting for the b message from P may starve forever since P −→ a.
3 Session Types
In this section we introduce our language of session types, we study their semantics, and we provide a
subtyping algorithm and a type system for checking processes against session types.
3.1 Syntax
We let T , S, . . . range over session types, which are defined by the grammar in Table 3. The types 0
and 1 characterize channels which cannot be successfully used for any interaction. We postpone a more
detailed discussion about 0 and 1 when we will formally define their semantics. For the time being, it
suffices to say that 0 and 1 represent the largest and smallest element in the lattice of session types we
are about to define. The type end denotes channels on which no further action is possible. There is a
fundamental distinction between end and the two types 0 and 1: end denotes a successfully terminated
interaction, while 0 and 1 denote the impossibility to carry on any interaction; the type α.T denotes
channels on which it is possible to perform an action α. Actions are the same ones that occur within
processes, but the point of view is slightly different: a process executes an action, while a session type
indicates the possibility or the obligation for a process to execute an action. We will appreciate more
concretely this difference in Section 3.4, where we will see that the same process can be successfully
checked against different session types. The type T ∧ S denotes channels that have both types T and S.
76
Session Types = Intersection Types + Union Types
For example a.end ∧ b.end denotes a channel that has both type a.end and also type b.end, namely it can
be used for sending both messages a and b. Finally, the type T ∨ S denotes channels that either have type
T or S. For instance the type a.end ∨ b.end associated with a channel means that a process using that
channel must be ready to receive both a message a and a message b, since it does not know whether the
type of the channel is a.end or b.end.1 To avoid clutter, in the following we will omit trailing end’s and
write, for instance, a ∧ b instead of a.end ∧ b.end when this generates no ambiguity with the syntax of
processes.
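For later illustration, here is one possible (unofficial) Python encoding of this syntax, mirroring the process encoding used in Section 2; the constructor tags are ours.

# Session types as nested tuples (an illustrative encoding, not the paper's):
#   ("bot",)  0      ("top",)  1      ("end",)  end
#   ("act", action, T)  α.T     ("and", T, S)  T ∧ S     ("or", T, S)  T ∨ S
# Actions are pairs ("in", name) or ("out", name), as for processes.

BOT, TOP, END = ("bot",), ("top",), ("end",)

def act(action, t):  return ("act", action, t)
def meet(t, s):      return ("and", t, s)    # intersection  T ∧ S
def join(t, s):      return ("or", t, s)     # union         T ∨ S

# The type a.end ∨ b.end discussed above: the process does not know whether the
# channel has type a.end or b.end, so it must be ready to receive both messages.
example_type = join(act(("in", "a"), END), act(("in", "b"), END))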
Before giving a formal semantics to session types let us discuss a few examples to highlight similarities and differences between them and processes. It should be pretty obvious that ⊕ and ∧ play
similar roles: the ability for a process P ⊕ Q to autonomously decide which behavior, P or Q, to perform
indicates that the session type associated with the channel it is using allows both alternatives, it has both
types. No such correspondence exists between + and ∨. For instance, consider P = a.b.a + a.c.b and
T = a.b.a ∨ a.c.b. The external choice in P is guarded by the same action a, meaning that after performing action a the process may reduce to either b.a or to c.b, the choice being nondeterministic. As we have
already remarked at the end of Section 2, one can show that P is equivalent to a.(b.a ⊕ c.b), where the
nondeterministic choice between the two residual branches is explicit. The session type T , on the other
hand, tells us something different: we do not know whether the channel we are using has type a.b.a or
a.c.b and receiving message a from it does not help to solve this ambiguity. Therefore, after the message
a has been received, we are left with a channel whose associated session type is b.a ∨ c.b. At this stage,
depending on the message, b or c, that is received, we are able to distinguish the type of the channel, and
to send the appropriate message (either a or b) before terminating. In summary, P and T specify quite
different behaviors, and in fact while T is perfectly reasonable, in the sense that there are processes that
conform to T and that can correctly interact with corresponding orthogonal processes, the reader may
easily verify that P has no orthogonals.
3.2 Semantics
Intuitively we want to define the semantics JT K of a session type T as a set of processes, so that session
types can be related by comparing the corresponding interpretations pretty much as we did for processes
(Definition 2.2). To assist the reader with this intuition, consider the scenario depicted below
  T ` P   ——— c ———   Q ∈ JT K
where the notation T ` P means that P, which we will think of as the “server”, is using the end point of
channel c according to the session type T . We write T ` P instead of c : T ` P since we assume that P
acts on one channel only. The idea is that the interpretation of T is the set of “client” processes Q that
can interact correctly with P when placed at the other end point of the channel c.
Before we address the formal definition of JT K we must realize that not every set of processes makes
sense when interpreted in this way:
• if a server is able to interact correctly with all of the clients in the set X = {a, b}, then it is also
able to interact correctly with a ⊕ b;
• no server is able to interact correctly with all of the clients in the set Y = {a, b} because this server
would have to perform both an input on a and an output on b at the same time.
1 We are making the implicit assumption that “using a channel” means either sending a message on it or waiting a message
from it and that no type-case construct is available for querying the actual type of a channel.
Luca Padovani
77
We conclude that neither X nor Y above are closed sets of processes that can serve as proper denotations of a session type: X and X ∪ {a ⊕ b} are indistinguishable because every server P that includes X in
its interpretation includes also X ∪ {a ⊕ b}; Y and P are indistinguishable because there is no server that
includes Y in its interpretation just as there is no server that includes the whole P in its interpretation.
We therefore need a closure operation over sets of processes, which we define in terms of orthogonal
sets, defined as follows:
Definition 3.1 (orthogonal set). Let X ⊆ P. Then X ⊥ = {P ∈ P | X ⊆ JPK}.
Intuitively, the orthogonal of some set of processes X is the set of those processes that include X in
their interpretation. If we go back to the problematic sets of processes described earlier, we have X ⊥ =
{a + b, a + b + c, a + b + c + d, . . . } and Y ⊥ = ∅. Clearly the orthogonal of a set X flips the perspective,
in the sense that if X is a set of “clients”, then X ⊥ is the set of “servers” of those clients. Therefore,
we define the closure as the bi-orthogonal (·)⊥⊥ . For instance we have X ⊥⊥ = {a, b, a ⊕ b, . . . } and
Y ⊥⊥ = P. We say that a set X of processes is closed if it is equal to its closure, namely if X = X ⊥⊥ .
The fact that (·)⊥⊥ is indeed a closure operator is formalized by the following result:
Proposition 3.1. The bi-orthogonal is a closure, namely it is extensive, monotonic, and idempotent:
1. X ⊆ X ⊥⊥ ;
2. X ⊆ Y implies X ⊥⊥ ⊆ Y ⊥⊥ ;
3. X ⊥⊥ = X ⊥⊥⊥⊥ .
Proof. Observe that X ⊥ = {P ∈ P | ∀Q ∈ X : P ⊥ Q}. Then ((·)⊥ , (·)⊥ ) is a Galois connection (more
precisely, a polarity) between the posets ⟨2^P , ⊆⟩ and ⟨2^P , ⊇⟩. Then it is a known fact that (·)⊥⊥ =
(·)⊥ ◦ (·)⊥ is a closure operator on the poset ⟨2^P , ⊆⟩.
Then we define the interpretation of session types in terms of closures of sets of processes, where we
interpret ∧ and ∨ as set-theoretic intersections and unions.
Definition 3.2 (session type semantics). The semantics of a session type is inductively defined by the
following equations:
  J0K = ∅
  J1K = P
  JendK = {1}⊥⊥
  Jα.T K = {ᾱ.P | P ∈ JT K}⊥⊥
  JT1 ∧ T2 K = JT1 K ∩ JT2 K
  JT1 ∨ T2 K = (JT1 K ∪ JT2 K)⊥⊥
As we comment on the definition of J·K, it is useful to think of JT K as of the set of clients that a
server using a channel with type T must be able to satisfy. Since 0 denotes the empty set of clients, a
channel typed by 0 is the easiest to use for a server, for the server is not required to satisfy any process.
Dually, a channel typed by 1 is the hardest to use, for the server is required to satisfy any process. As
this is impossible to achieve (there is no process that is dual of every process in P), no server can
effectively use a channel typed by 1. From a type-theoretic point of view, 0 and 1 represent two dual
notions of emptiness: 0 means absence of clients, 1 means absence of servers. Later on we will see
that any session type different from 0 and 1 is inhabited, in the sense that it admits at least one client
and at least one server. A channel typed by end represents those clients that are satisfied even if they do
not receive any further message. The process 1 clearly is a client of end, but it’s not the only one: any
process that guarantees the X action is a client of end. Hence we have JendK = {1, 1 + a, 1 + a + b, . . . }.
In particular, no process that is immediately able to emit an output is included in this set. Regarding the
session type α.T , its clients are all those processes that perform the co-action α and whose continuation
after α is in JT K. If α is some input action a then any process in Jα.T K sends a (and only a), whereas
if α is some output action a then any process in Jα.T K guarantees the input action a. For example we
have a ∈ Ja.endK and a + b ∈ Ja.endK but a ⊕ b 6∈ Ja.endK. Therefore, a server using a channel typed by
α.T is required to provide action α and to continue the interaction as specified by T . The intersection
type T1 ∧ T2 denotes those channels that have both type T1 and type T2 . Therefore the servers using these
channels have the freedom to use them according to either T1 or T2 . That is why the clients of T1 ∧ T2
must be clients of both T1 and T2 . The union type T1 ∨ T2 can be explained in a dual way with respect to
the intersection. In this case, the server is unsure whether the channel has type T1 or T2 and consequently
it must be able to satisfy (at least) all the clients of T1 and all the clients of T2 as well. Overall we see that
intersections and unions of session types match in a quite natural way their set-theoretic interpretation.
However, note that JT1 ∧ T2 K = JT1 K ∩ JT2 K whereas in general we have JT1 ∨ T2 K ⊇ JT1 K ∪ JT2 K. For
example, a ⊕ b ∈ Ja.end ∨ b.endK \ (Ja.endK ∪ Jb.endK). There is no need to use the closure operator on
JT1 K ∩ JT2 K since it can be shown that this set is already closed.
We use J·K for comparing session types. In particular we say that T is a subtype of S when T ’s clients
are included in S’s clients:
Definition 3.3 (subtype). We say that T1 is a subtype of T2 , written T1 . T2 , if JT1 K ⊆ JT2 K. We write h
for the equivalence relation induced by ., namely h = . ∩ .−1 .
Unlike the refinement relation, subtyping turns out to be a pre-congruence with respect to all the
operators of the session type language.
Proposition 3.2. . is a pre-congruence.
Proof. Immediate from the definition of . and Proposition 3.1(2).
Equally trivial is the fact that ∧ and ∨ provide us with a native way of respectively computing the
greatest lower bound and the least upper bound of two session types. As regards ∧, this is obvious
since JT1 ∧ T2 K = JT1 K ∩ JT2 K by definition. For ∨, it suffices to observe that T1 . S and T2 . S implies
JT1 K ∪ JT2 K ⊆ JSK. Since (JT1 K ∪ JT2 K)⊥⊥ is the smallest closed set that includes JT1 K ∪ JT2 K and since JSK
is closed, we conclude JT1 ∨ T2 K = (JT1 K ∪ JT2 K)⊥⊥ ⊆ JSK, namely T1 ∨ T2 . S. The following extended
example shows the need to compute meets and joins of session types in some contexts. The availability
of native unions and intersections within the language of session types makes this task trivial.
Example 3.1 (global type projection). Global types [12, 2] are abstract descriptions of interactions
between two or more participants from a neutral point of view. For example, the global type
  A −a→ B; A −b→ B   @   A −a→ B; A −c→ B
specifies a system with two participants, here indicated by the tags A and B, which interact by exchanging
messages ‘a’, ‘b’, and ‘c’. In a global type, an action such as A −a→ B indicates that A sends an ‘a’
message to B. Actions can be composed in sequences (with ;) and in alternative paths (with @). Overall,
the global type describes which sequences of interactions are possible, but not who is responsible for
which choices (hence the use of a single operator @ in branching points). The implementation of a
global type begins by projecting it on each participant, so as to synthesize the session type that each
participant must conform to. In this example we obtain the following projections: the projection on A is
a.b on the l.h.s. and a.c on the r.h.s.; the projection on B is a.b on the l.h.s. and a.c on the r.h.s. Since
A is the only sender, it is natural that its overall projection is a.b ∧ a.c h a.(b ∧ c). Since B is the only
receiver, it must be prepared to receive the messages from A regardless of which messages A decides to
send. Therefore, the correct projection of the global type on B is a.b ∨ a.c h a.(b ∨ c), which is the least
upper bound of the projections on B of the two branches. In a language of session types with behavioral
choices, this upper bound must be computed by an ad hoc operator, since a.b + a.c would be equivalent
to a.(b ⊕ c) which does not correspond to the correct projection for B.
As we have anticipated, for a session type to make sense, its interpretation must be different from both
∅ and P. This condition roughly corresponds to non-emptiness: a standard “value” type is inhabited if
there exists one value of that type; a session type is inhabited if it has at least one server and at least one
client. This explains why there are two distinct “empty” session types.
Definition 3.4 (viable session type). We say that the session type T is viable if T 6h 0, 1.
Viability is a necessary and sufficient condition for T to be implementable: if T 6h 0 take any P ∈ JT K.
From the hypothesis T 6h 1 and the fact that JT K is closed we also know that JT K⊥ ≠ ∅, because JT K⊥ = ∅
implies JT K⊥⊥ = P. Hence there exists Q ∈ JT K⊥ . By definition of orthogonal set we conclude P ⊥ Q.
This discussion about viability emphasizes the importance of the orthogonal operation since the sets JT K
and JT K⊥ contain precisely those processes that interact correctly via a channel typed by T . We conclude
this section by showing that the orthogonal operator over sets of processes corresponds to a syntactic
duality operation over session types.
Theorem 3.1 (dual session type). The dual of a session type T is the session type T̄ obtained from T by
turning every 0 into 1, every 1 into 0, every action α into the corresponding co-action ᾱ, every ∧ into
∨, and every ∨ into ∧. Inductively: the dual of 0 is 1 and the dual of 1 is 0; end is its own dual; the dual
of α.T is ᾱ.T̄ ; the dual of T1 ∧ T2 is T̄1 ∨ T̄2 ; and the dual of T1 ∨ T2 is T̄1 ∧ T̄2 .
Then JT̄ K = JT K⊥ .
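As an illustration (ours, not the paper's), the syntactic dual of Theorem 3.1 is immediate to implement over the tuple encoding of session types introduced in Section 3.1.

def dual(t):
    """Syntactic dual of Theorem 3.1: swap 0/1, input/output, ∧/∨; end is self-dual."""
    kind = t[0]
    if kind == "bot":
        return ("top",)
    if kind == "top":
        return ("bot",)
    if kind == "end":
        return t
    if kind == "act":
        (pol, name), s = t[1], t[2]
        return ("act", ("out" if pol == "in" else "in", name), dual(s))
    if kind == "and":
        return ("or", dual(t[1]), dual(t[2]))
    return ("and", dual(t[1]), dual(t[2]))     # kind == "or"

# The dual of a.end ∨ b.end swaps polarities and turns the union into an intersection;
# taking the dual twice gives back the original type.
t = ("or", ("act", ("in", "a"), ("end",)), ("act", ("in", "b"), ("end",)))
assert dual(dual(t)) == t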
3.3 Subtyping Algorithm
In this section we define an algorithm for deciding the subtyping relation. Since the interpretation of a
session type is usually an infinite set of processes, we cannot hope to derive a brute force algorithm that
is based directly on Definition 3.3. Fortunately, session types admit a particularly simple and intuitive
normal form. Therefore, we split the decision algorithm in two parts: first we provide an effective
procedure for rewriting every session type into an equivalent normal form, which happens to be unique
up to commutativity and associativity of intersections and unions. Then, we provide a syntax-directed
algorithm that decides the subtyping relation between session types in normal form. In what follows we
will use n-ary intersections and unions of the form ⋀_{i∈{1,...,n}} Ti and ⋁_{i∈{1,...,n}} Ti in place of T1 ∧ · · · ∧ Tn
and T1 ∨ · · · ∨ Tn , respectively; as usual, we let ⋀_{i∈∅} Ti = 1 and ⋁_{i∈∅} Ti = 0 by definition. We will also
write T { ∧ S}φ to indicate that the ∧ S part is present only when φ holds; similarly for T { ∨ S}φ .
Definition 3.5 (normal form). We say that a session type T is in normal form if either

  T ≡ ⋀_{a∈A} a.Ta { ∧ end}_{X∈A}    or    T ≡ ⋁_{a∈A} a.Ta { ∨ end}_{X∈A}

and Ta is viable and in normal form for every a ∈ A.
Table 4: Simplification laws (symmetric and dual laws omitted).

  (E-PREFIX)            α.0 = 0
  (E-BOTTOM)            0 ∧ T = 0
  (E-TOP)               1 ∧ T = T
  (E-DIST)              α.T ∧ α.S = α.(T ∧ S)
  (E-INPUT-END)         ⋁_{a∈A} a.Ta ∧ end = 0                    (Ta viable for every a ∈ A)
  (E-INPUT-OUTPUT)      ⋁_{a∈A} a.Ta ∧ b.S = 0                    (Ta viable for every a ∈ A, S viable)
  (E-INPUT-OUTPUT-END)  (⋁_{a∈A} a.Ta ∨ end) ∧ b.S = b.S ∧ end    (Ta viable for every a ∈ A, S viable)
  (E-INPUT-INPUT)       ⋁_{a∈A} a.Ta { ∨ end}_{X∈A} ∧ ⋁_{b∈B} b.Sb { ∨ end}_{X∈B} = ⋁_{a∈A∩B} a.(Ta ∧ Sa ){ ∨ end}_{X∈A∩B}
A process using a channel whose associated session type is ⋀_{a∈A} a.Ta { ∧ end}_{X∈A} may send any
message a ∈ A and it may decide to terminate if X ∈ A. After sending a message a, the process must
continue using the channel as specified by Ta . In a dual fashion, a process using a channel whose
associated session type is ⋁_{a∈A} a.Ta { ∨ end}_{X∈A} must be ready to receive any message a ∈ A and it must
also be ready to terminate immediately if no such message is received and X ∈ A. In case a message a is
received, the process must continue using the channel as specified by Ta .
The simplicity of normal forms is due to the fact that some behaviors (like sending a message and
receiving a message) are incompatible, in the sense that their combination (intersection or union) yields
non-viable session types. Table 4 presents a set of laws that are used (from left to right) as basic simplification steps in the computation of the normal form (symmetric and dual laws are omitted). Laws ( E PREFIX ), ( E - BOTTOM ), and ( E - TOP ) state that non-viable types absorb prefixes and that 0 and 1 are
respectively neutral for ∨ and ∧, as expected. Law ( E - DIST ) shows that common actions can be factored
while preserving the combining operator. In particular, the dual law α.T ∨ α.S h α.(T ∨ S) distinguishes subtyping from refinement and from the must pre-order, where the law α.P + α.Q h α.(P ⊕ Q)
holds. Rules ( E - INPUT- END ) and ( E - INPUT- OUTPUT ) show that no client that sends a message a ∈ A
can be satisfied by a server that may decide to terminate the interaction or to send a message. This is
because the action of sending a message is irrevocable (see rule ( R 6) in the transition system of processes). Rule ( E - INPUT- OUTPUT- END ) shows that among the clients that either send a message a ∈ A
or terminate are those that can also receive message b. Finally, rule ( E - INPUT- INPUT ) shows that the
clients of a server will send only messages that can surely be received by the server. For example,
(a ∨ b ∨ c) ∧ (b ∨ c ∨ d) h b ∨ c. The dual law concerns messages that can be sent by the server. Thus
(a ∧ b ∧ c) ∨ (b ∧ c ∧ d) h b ∧ c: if the server is unsure whether the type of the channel is a ∧ b ∧ c or
b ∧ c ∧ d, then it can only send those messages that can travel along the channel in both cases.
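The four context-free laws of Table 4 are easy to express over the tuple encoding used in our earlier sketches; the remaining laws need the viability check and the n-ary forms, so they are omitted here. This is an illustrative fragment of ours, not the paper's normalization procedure.

def simplify_step(t):
    """One top-level application of (E-PREFIX), (E-BOTTOM), (E-TOP), (E-DIST)
    and their duals; returns None when no such law applies at the root."""
    kind = t[0]
    if kind == "act":
        if t[2] == ("bot",): return ("bot",)                  # (E-PREFIX): α.0 = 0
        if t[2] == ("top",): return ("top",)                  # dual law:   α.1 = 1
    if kind == "and":
        l, r = t[1], t[2]
        if ("bot",) in (l, r): return ("bot",)                # (E-BOTTOM): 0 ∧ T = 0
        if l == ("top",): return r                            # (E-TOP):    1 ∧ T = T
        if r == ("top",): return l
        if l[0] == "act" and r[0] == "act" and l[1] == r[1]:  # (E-DIST): α.T ∧ α.S = α.(T ∧ S)
            return ("act", l[1], ("and", l[2], r[2]))
    if kind == "or":
        l, r = t[1], t[2]
        if ("top",) in (l, r): return ("top",)                # dual of (E-BOTTOM): 1 ∨ T = 1
        if l == ("bot",): return r                            # dual of (E-TOP):    0 ∨ T = T
        if r == ("bot",): return l
        if l[0] == "act" and r[0] == "act" and l[1] == r[1]:  # dual of (E-DIST): α.T ∨ α.S = α.(T ∨ S)
            return ("act", l[1], ("or", l[2], r[2]))
    return None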
Lemma 3.1. The laws in Table 4 are sound.
The simplification laws, and the axiomatization of . that we are about to present, would be simpler
if one could prove that ∧ and ∨ distribute over each other. We conjecture that the lattice of closed sets
of processes ordered by set inclusion is indeed distributive (in the process language, the internal and
external choices distribute over each other), but the proof appears to be non-trivial.
Lemma 3.2 (normal form). For every session type T there exists S in normal form such that T h S.
The proof of the normal form lemma is constructive and provides an effective procedure for rewriting
every session type in its normal form using the laws in Table 4. What remains to do now is to provide
the subtyping algorithm for session types in normal form.
Table 5: Subtyping algorithm.

  (S-BOTTOM)  0 6 ⋀_{a∈A} a.Ta { ∧ end}_{X∈A}

  (S-TOP)     ⋁_{a∈A} a.Ta { ∨ end}_{X∈A} 6 1

  (S-END)     ⋀_{a∈A} a.Ta ∧ end 6 ⋁_{b∈B} b.Sb ∨ end

  (S-INPUT)   if A ⊆ B and Ta 6 Sa for every a ∈ A, then ⋁_{a∈A} a.Ta { ∨ end}_{X∈A} 6 ⋁_{b∈B} b.Sb { ∨ end}_{X∈B}

  (S-OUTPUT)  if B ⊆ A and Ta 6 Sa for every a ∈ B, then ⋀_{a∈A} a.Ta { ∧ end}_{X∈A} 6 ⋀_{b∈B} b.Sb { ∧ end}_{X∈B}
Table 6: Type checking rules.

  (T-NIL)      0 ` 0
  (T-END)      end ` 1
  (T-SEND)     if T ` P then a.T ` a.P
  (T-RECEIVE)  if Tai ` Pi for every i ∈ I, then ⋁_{i∈I} ai .Tai ` ∑_{i∈I} ai .Pi
  (T-CHOICE)   if T ` P and T ` Q then T ` P ⊕ Q
  (T-SUB)      if T ` P and S . T then S ` P
Definition 3.6 (algorithmic subtyping). Let 6 be the least relation defined by axioms and rules in Table 5.
Because of the interpretation of ∧ and ∨ as respectively intersections and unions, the algorithm looks
embarrassingly obvious although it states well-known properties of channel types. In particular, rule ( S - INPUT ) states that it is safe to replace a channel c having some input capabilities (B) with another one d
having fewer input capabilities (A ⊆ B), because any process originally using c will be ready to handle
any message b ∈ B. Dually, rule ( S - OUTPUT ) states that it is safe to replace a channel c having some output
capabilities (B) with another one d having greater output capabilities (A ⊇ B), since the process originally
using c will exercise on d only a subset of the capabilities allowed on it. Observe that ( S - OUTPUT )
and ( S - INPUT ) are just specializations of the well-known laws T ∧ S 6 T and T 6 T ∨ S concerning
intersection and union types. Rules ( S - BOTTOM ) and ( S - TOP ) state obvious facts about 0 and 1 being
the smallest and the largest session types, respectively. Observe that rule ( S - INPUT ) is the counterpart of
rule ( S - BOTTOM ) when A = 0/ and the larger session type is a union. Dually, the rule ( S - OUTPUT ) is the
counterpart of rule ( S - TOP ) when B = 0/ and the smallest session type is an intersection. Rule ( S - END )
is required for the algorithm to be complete: it basically states the reflexivity of 6 on end.
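The rules of Table 5 translate directly into a recursive decision procedure. The sketch below is ours, including the encoding of normal forms: ("union", branches, has_end) stands for ⋁ a.Ta { ∨ end} and ("inter", branches, has_end) for ⋀ a.Ta { ∧ end}, where branches maps action names to normal forms; 0 is the empty union and 1 the empty intersection.

def subtype(t, s):
    """Decides T 6 S for session types in normal form, following Table 5."""
    t_kind, t_br, t_end = t
    s_kind, s_br, s_end = s
    if t_kind == "union" and not t_br and not t_end:      # T is 0: (S-BOTTOM), or (S-INPUT) with A = ∅
        return True
    if s_kind == "inter" and not s_br and not s_end:      # S is 1: (S-TOP), or (S-OUTPUT) with B = ∅
        return True
    if t_kind == "union" and s_kind == "union":           # (S-INPUT): fewer input capabilities on the left
        return (not t_end or s_end) and set(t_br) <= set(s_br) and \
               all(subtype(t_br[a], s_br[a]) for a in t_br)
    if t_kind == "inter" and s_kind == "inter":           # (S-OUTPUT): more output capabilities on the left
        return (not s_end or t_end) and set(s_br) <= set(t_br) and \
               all(subtype(t_br[a], s_br[a]) for a in s_br)
    if t_kind == "inter" and s_kind == "union":           # (S-END): both sides must offer end
        return t_end and s_end
    # union below an intersection only in the degenerate case where both are just end
    return t_end and s_end and not t_br and not s_br

# Example: end 6 a.end ∨ end, but not the converse.
END_NF = ("union", {}, True)
T1 = ("union", {"a": END_NF}, True)
assert subtype(END_NF, T1) and not subtype(T1, END_NF)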
The subtyping algorithm is correct and complete with respect to the set of session types in normal
form:
Theorem 3.2. Let T and S be in normal form. Then T . S if and only if T 6 S.
3.4 Type Checking
We conclude with the definition of a type checker to derive judgments of the form T ` P meaning that P
is a well-typed process using a channel with type T . The type checker is defined by the axioms and rules
in Table 6. We abbreviate a1 .P1 + · · · + an .Pn with ∑i∈{1,...,n} ai .Pi .
Because of the similarities between processes and session types, at first sight the type checker looks
as stating a trivial correspondence between the two languages, but there are some lurking subtleties.
Rules ( T- NIL ), ( T- END ), and ( T- SEND ) are indeed fairly obvious: the deadlocked server 0 can only use
a channel typed by 0 since no client can interact with it; the terminated server 1 can use a channel typed
by end since it has successfully ended any interaction; the server a.P sending a message a can use a
channel typed by a.T if the continuation P uses the channel according to T . Rule ( T- RECEIVE ) concerns
servers waiting for a message from the set {ai | i ∈ I}. Intuitively, these servers can use channels typed
by ⋁_{i∈I} ai .Ti where each continuation Pi is well typed with respect to Ti . However, there is the possibility
that two branches of the server are guarded by the same input action. Namely, it may be the case that
ai = a j for some i, j ∈ I such that i 6= j. As we know, this translates into the server performing an internal
choice on how to handle such a message, nondeterministically choosing between the continuations Pi
and Pj . Had we typed the server with respect to ⋁_{i∈I} ai .Ti , we would be stating that the server is capable
of dealing with all the clients in the sets JTi ∨ T j K, which is not necessarily the case. Therefore, in order
for this typing rule to be sound, we require that the continuations Pi and Pj of different branches guarded
by the same input action ai = a j must be typable with respect to the same type Tai = Ta j . This way,
no matter which continuation is selected, it will be well typed. Rule ( T- CHOICE ) presents a similar
problem, since the server P ⊕ Q may independently reduce to either P or Q. Therefore, we require both
choices to be typable with respect to the same session type T . The attentive reader will have noticed a
close relationship between this typing rule and standard type preservation results stating that (internal)
reductions preserve the type: in this case, from the hypotheses T ` P ⊕ Q and either P ⊕ Q −→ P or
P ⊕ Q −→ Q we easily deduce that the residual process is still well typed with respect to T . The last
rule ( T- SUB ) is a standard subsumption rule, except that it deals with the type of the (implicit) channel
used by the process and not with the type of the process itself. It states that if a process is well typed with
respect to some session type T , then it is also well typed with respect to a smaller session type S. This
is consistent with the intuition that it is safe to replace a value (in this case, a channel) with another one
having a smaller type.
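The syntax-directed part of Table 6 can be phrased as a recursive check over the two tuple encodings of the earlier sketches. The fragment below is ours and deliberately partial: it does not implement the subsumption rule (T-SUB), which Example 3.2 below shows must be interleaved with the other rules, nor does it try to infer where subsumption is needed.

def flatten(op, t):
    """Flatten nested binary ∨ (op="or") or ∧ (op="and") type constructors into a list."""
    if t[0] == op:
        return flatten(op, t[1]) + flatten(op, t[2])
    return [t]

def summands(p):
    """Flatten nested external choices into a list of branches."""
    if p[0] == "echoice":
        return summands(p[1]) + summands(p[2])
    return [p]

def check(t, p):
    """Syntax-directed fragment of 'T ` P' (rules T-NIL, T-END, T-SEND, T-RECEIVE, T-CHOICE)."""
    if p[0] == "ichoice":                                     # (T-CHOICE): both branches against the same T
        return check(t, p[1]) and check(t, p[2])
    if p[0] == "echoice" or (p[0] == "prefix" and t[0] in ("or", "act")):
        # (T-SEND) / (T-RECEIVE): each summand checks against the branch with the same action;
        # duplicated branches automatically share one type, as required by the side condition.
        branches = flatten("or", t)
        if any(b[0] != "act" for b in branches):
            return False
        by_action = {b[1]: b[2] for b in branches}
        return all(q[0] == "prefix" and q[1] in by_action and check(by_action[q[1]], q[2])
                   for q in summands(p))
    if t == ("bot",):                                         # (T-NIL)
        return p == ("0",)
    if t == ("end",):                                         # (T-END)
        return p == ("1",)
    return False

Because (T-SUB) is missing, the derivations of Example 3.2 are out of reach of this fragment: checking a ∧ b ` a requires first weakening a ∧ b to a.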
Example 3.2. In the two derivations that follow, rule ( T- SUB ) is essential for rules ( T- RECEIVE ) and ( T- CHOICE ) to be applicable.
From end ` 1 we derive a ` a and b ` b by ( T- SEND ); since a ∧ b . a and a ∧ b . b, rule ( T- SUB ) gives a ∧ b ` a and a ∧ b ` b.
From these two judgments, rule ( T- RECEIVE ) directly yields a.(a ∧ b) ` a.a + a.b, whereas rule ( T- CHOICE ) first gives a ∧ b ` a ⊕ b and then ( T- RECEIVE ) yields a.(a ∧ b) ` a.(a ⊕ b).
The fact that the two processes a.a + a.b and a.(a ⊕ b) are well typed with respect to the same type
a.(a ∧ b) provides further evidence that they are equivalent, as informally argued in Section 1.
We conclude our study with a soundness result for the type system. If two processes are typed by
dual session types, then they are orthogonal.
Theorem 3.3. If T ` P and T̄ ` Q, then P ⊥ Q.
There is no hypothesis concerning the viability of T , but this is implied. The reader can easily verify
that T ` P implies T 6h 1, coherently with the observation that no process is able to satisfy all processes.
As a consequence the hypotheses T ` P and T̄ ` Q are enough to ensure that T and its dual are viable.
4 Concluding Remarks and Future Work
Previous formalizations of session types [3, 14, 1] are based on the observation that session types are
behavioral types. As such, they are eligible for being studied by means of the numerous and well-developed techniques for process equivalence, and testing equivalence in particular [6, 5]. In this view
the different modalities in which actions are offered coincide with two known behavioral operators, the
internal choice ⊕ and the external choice +. This approach, however natural and elegant, poses a few
problems mostly due to the fact that the external choice is sometimes an internal choice in disguise:
the language of session types may be difficult for the programmer to understand; the resulting subtyping
relation is not a pre-congruence and is thus more difficult to use in practice; also, there are contexts
where the computation of the greatest lower bound and of the least upper bound of session types arises
naturally and these must be computed by means of meta-operators on session types [13].
In this work we propose an alternative language of session types which is not immediately related to
some known process algebra. The basic idea is that the two choices can be naturally modeled by means
of intersection and union types: the session type T ∧ S describes a channel having both type T and type
S and for this reason a process can freely use that channel as having either type; the session type T ∨ S
describes a channel having either type T or type S, therefore a process using that channel cannot make
any assumption on it unless the exchanged messages provide enough information to disambiguate its
type. The intersection and union operators are intuitive alternatives to internal and external choices, they
provide a native mechanism for the computation of greatest lower bounds and least upper bounds, and the
subtyping relation of the resulting theory turns out to be a pre-congruence.
It is worth noting that, in our theory, the semantics of session types solely depends on the process
language, in particular on the adopted communication model and on the orthogonality relation. Any
other concept or result is derived by these two. In this work we have adopted a partially asynchronous
communication model, where output messages must be consumed before the sender can engage into any
other activity, and a symmetric orthogonality relation where both processes involved in a communication
must terminate successfully if the interaction reaches a stable state. These choices led us to rediscover
a familiar theory of session types [8] but it is plausible to expect that different interesting theories can
be developed by varying these two seminal notions. For example, using a truly asynchronous communication model, where an output action does not block subsequent actions, the relation a.b . b.a would
be sound because any “client” of a.b will eventually receive the b message that the “server” of b.a sends
ahead of time. Using an asymmetric orthogonality relation might allow us to draw a closer comparison between our theory and more standard testing theories [5, 4], where the notion of “test” is asymmetric. We
remark here just a few planned developments of our theory: first of all, we want to extend the presented
framework to deal with possibly infinite session types. In principle this would amount to using a fixpoint
operator for determining the semantics of recursive session types as sets of possibly infinite processes.
However, the model presented in this work may need some further technical adjustments. To see why,
consider the infinite session type determined by the equation T = a.T which gives rise to the semantic
equation X = {a.P | P ∈ X}⊥⊥ . Both ∅ and P are solutions of the equation, meaning that the semantics
of a session type may not be uniquely determined. At the same time, neither of ∅ and P is a satisfactory
solution because they denote non-viable session types, while we would expect JT K to contain (recursive)
processes that send an infinite number of a messages. We plan to investigate whether the semantic model
of types described in [15], which shares many properties with ours, can be used to give a proper semantics to infinite session types. The second extension to the presented framework is to consider non-atomic
actions of the form ?t and !t where t is a basic type (such as int, bool, . . . ) and actions of the form ?T
and !T for describing delegations (the input and output of channels of type T ). This will give rise to more
interesting relations such as !int ∨ !real h !int (assuming int is a subtype of real) and will allow us
to compare more thoroughly our subtyping relation with the existing ones [8]. Finally, it looks like the
presented approach can be easily extended to incorporate universal and existential quantifiers in session
types, so as to model polymorphism and data encapsulation. In this way we hope to provide semantic
foundations to polymorphic session types [7].
Acknowledgments. I am grateful to the anonymous referees for the detailed comments and feedback
on an earlier version of this paper. I wish to thank Mariangiola Dezani, Kohei Honda, and Nobuko
Yoshida for the insightful discussions.
References
[1] Franco Barbanera and Ugo de’Liguoro. Two notions of sub-behaviour for session-based client/server systems. In Proceedings of PPDP’10, pages 155–164. ACM, 2010.
[2] Mario Bravetti and Gianluigi Zavattaro. A foundational theory of contracts for multi-party service composition. Fundamenta Informaticae, 89(4):451–478, 2009.
[3] Giuseppe Castagna, Mariangiola Dezani-Ciancaglini, Elena Giachino, and Luca Padovani. Foundations of
session types. In Proceedings of PPDP’09, pages 219–230. ACM, 2009.
[4] Ilaria Castellani and Matthew Hennessy. Testing theories for asynchronous languages. In Proceedings of
FSTTCS’98, pages 90–101. Springer, 1998.
[5] Rocco De Nicola and Matthew Hennessy. Testing equivalences for processes. Theoretical Computer Science,
34:83–133, 1984.
[6] Rocco De Nicola and Matthew Hennessy. CCS without τ’s. In Proceedings of TAPSOFT’87/CAAP’87, LNCS
249, pages 138–152. Springer, 1987.
[7] Simon Gay. Bounded polymorphism in session types. MSCS, 18(5):895–930, 2008.
[8] Simon Gay and Malcolm Hole. Subtyping for session types in the π-calculus. Acta Informatica, 42(2-3):191–
225, 2005.
[9] Matthew Hennessy. Algebraic Theory of Processes. Foundation of Computing. MIT Press, 1988.
[10] Kohei Honda. Types for dyadic interaction. In Proceedings of CONCUR’93, LNCS 715, pages 509–523.
Springer, 1993.
[11] Kohei Honda, Vasco T. Vasconcelos, and Makoto Kubo. Language primitives and type disciplines for
structured communication-based programming. In Proceedings of ESOP’98, LNCS 1381, pages 122–138.
Springer, 1998.
[12] Kohei Honda, Nobuko Yoshida, and Marco Carbone. Multiparty asynchronous session types. In Proceedings
of POPL’08, pages 273–284. ACM, 2008.
[13] Leonardo G. Mezzina. How to infer finite session types in a calculus of services and sessions. In Proceedings
of COORDINATION’08, pages 216–231, 2008.
[14] Luca Padovani. Session types at the mirror. EPTCS, 12:71–86, 2009.
[15] Jerome Vouillon and Paul-André Melliès. Semantic types: a fresh look at the ideal model for types. SIGPLAN
Notices, 39(1):52–63, 2004.
A Supplement to Section 2
In this section we solely introduce some handy notation related to processes that will be useful for the
proofs in Section B. First we define two relations, that we dub “may” and “must”, distinguishing the fact
that a process may output some message or is always capable to (i.e., must) perform some input or output
action, regardless of its internal transitions.
Definition A.1 (may/must). Let µ ∈ N̄ ∪ {X}. We say that P may output µ, notation P ↓ µ, if P =µ⇒.
Let µ ∈ N ∪ N̄ ∪ {X}. We say that P must µ, notation P ⇓ µ, if P =⇒ P′ implies P′ =µ⇒. We say that P
may converge, notation P↓, if P =⇒ P′ implies P′ ↓ µ for some µ; we say that P must converge, notation
P⇓, if there exists µ such that P ⇓ µ.
We will sometimes say that a process P guarantees action µ if P ⇓ µ.
Then, we define the continuation of a process P with respect to an action µ as the combination of all
the possible residuals of P after µ. This differs from the relation −µ→, which relates P with one particular
(not necessarily unique) residual of P after µ.

Definition A.2 (continuation). Let P =µ⇒. The continuation of P with respect to µ is defined as P(µ) = ⊕_{P =⇒−µ→ Q} Q.

For example, consider P = a.P1 + a.P2 . On the one hand we have P −a→ P1 and also P −a→ P2 , namely,
there are two possibly different residuals of P after a due to two different branches of the external choice
that are guarded by the same action. On the other hand, the (unique) continuation of P after a is P1 ⊕ P2 ,
which expresses the fact that both branches are possible.
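For finite processes these predicates and the continuation are directly computable. The sketch below is ours and reuses internal_steps and labelled_steps from the sketch in Section 2; it is only meant to illustrate Definitions A.1 and A.2.

def reachable_by_internal(p):
    """All processes reachable from p via =⇒ (reflexive-transitive internal steps)."""
    seen, todo = set(), [p]
    while todo:
        q = todo.pop()
        if q in seen:
            continue
        seen.add(q)
        todo.extend(internal_steps(q))
    return seen

def weak_steps(p, mu):
    """Residuals Q with p =⇒ −µ→ Q (we stop right after the visible step)."""
    return [q for r in reachable_by_internal(p)
              for lab, q in labelled_steps(r) if lab == mu]

def may(p, mu):          # P ↓ µ
    return bool(weak_steps(p, mu))

def must(p, mu):         # P ⇓ µ: every internal residual can still weakly perform µ
    return all(may(r, mu) for r in reachable_by_internal(p))

def continuation(p, mu): # P(µ): internal choice of all residuals after µ (Definition A.2)
    qs = weak_steps(p, mu)
    assert qs, "P(µ) is only defined when P =µ⇒ holds"
    out = qs[0]
    for q in qs[1:]:
        out = ("ichoice", out, q)
    return out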
B Supplement to Section 3

B.1 Semantics
We begin by gaining some familiarity with the orthogonal and the bi-orthogonal operators and some of
their properties, in particular we provide alternative characterizations for X ⊥ and X ⊥⊥ , we prove that
(·)⊥ is anti-monotonic, and we state some known properties regarding orthogonal set and set-theoretic
operators.
Proposition B.1. The following properties hold:
1. X ⊥ = ⋂_{P∈X} JPK;
2. X ⊥⊥ = ⋂_{X⊆JPK} JPK;
3. X ⊆ Y implies Y ⊥ ⊆ X ⊥ ;
4. X ⊥ is closed;
5. (X ∪Y )⊥ = X ⊥ ∩Y ⊥ .
Proof. We prove the items in order:
1. We have Q ∈ X ⊥ iff X ⊆ JQK iff P ⊥ Q for every P ∈ X iff Q ∈ JPK for every P ∈ X iff Q ∈ ⋂_{P∈X} JPK.
2. By item (1) we have X ⊥⊥ = ⋂_{P∈X ⊥} JPK = ⋂_{X⊆JPK} JPK.
3. By item (1) we have Y ⊥ = ⋂_{P∈Y} JPK ⊆ ⋂_{P∈X} JPK = X ⊥ .
4. From Proposition 3.1(1) we obtain X ⊥ ⊆ X ⊥⊥⊥ by replacing X with X ⊥ . From the same proposition and item (3) we obtain X ⊥⊥⊥ ⊆ X ⊥ . We conclude X ⊥ = X ⊥⊥⊥ .
5. By item (1) we have (X ∪Y )⊥ = ⋂_{P∈X∪Y} JPK = ⋂_{P∈X} JPK ∩ ⋂_{P∈Y} JPK = X ⊥ ∩Y ⊥ .
It should be observed that item (5) of the previous proposition can be generalized to arbitrary unions,
namely that

  (⋃_{i∈I} Xi )⊥ = ⋂_{i∈I} Xi⊥
for arbitrary, possibly infinite family of sets Xi . The reader may also verify that ∧ and ∨ are indeed
commutative and associative operators. These properties will be silently used in some of the proofs that
follow.
We now present an auxiliary operator that is convenient in the definition of the semantics of session
types. We write Gα (X) for the set of processes that guarantee an α action and whose continuation after
α is a process in X. Formally:
  Gα (X) = P if X ⊥ = ∅, and Gα (X) = {P ∈ P | P ⇓ α and P(α) ∈ X} otherwise.

Using G· (·) one can equivalently define the interpretation of α.T as Jα.T K = Gᾱ (JT K). In particular,
the orthogonal of Gα (X) can be computed simply by turning α into the corresponding co-action and by
computing the orthogonal of X:
Proposition B.2. Gα (X)⊥ = Gᾱ (X ⊥ ).
Proof. We distinguish three cases:
• (X = ∅) Then X ⊥ = P and we conclude Gα (X)⊥ = ∅⊥ = P = Gᾱ (P) = Gᾱ (X ⊥ ).
• (X ⊥ = ∅) Then Gα (X)⊥ = P ⊥ = ∅ = Gᾱ (∅) = Gᾱ (X ⊥ ).
• (X ≠ ∅ and X ⊥ ≠ ∅) We have Q ∈ Gα (X)⊥ ⟺ ∀P ∈ Gα (X) : P ⊥ Q ⟺ ∀P ∈ Gα (X) : Q ⇓ ᾱ ∧ Q(ᾱ) ⊥ P(α) ⟺ Q ⇓ ᾱ ∧ ∀P ∈ Gα (X) : Q(ᾱ) ⊥ P(α) ⟺ Q ⇓ ᾱ ∧ Q(ᾱ) ∈ X ⊥ ⟺ Q ∈ Gᾱ (X ⊥ ), where the intermediate equivalences use the hypotheses X ≠ ∅ and X ⊥ ≠ ∅;
namely Gα (X)⊥ = Gᾱ (X ⊥ ).
Corollary B.1. X closed implies Gα (X) closed.
Proof. By Proposition B.2 we have Gα (X)⊥⊥ = Gᾱ (X ⊥ )⊥ = Gα (X ⊥⊥ ) = Gα (X).
We now have all the information for showing that JT K is a closed set of processes, so that we can
rewrite JT K into JT K⊥⊥ and vice versa, whenever useful (Proof of Theorem 3.1).
Proposition B.3. For every T , the set JT K is closed.
Proof. An easy induction on T . The case when T = end follows from the fact that 1 ∈ JendK, hence
JendK⊥ = JendK. The case when T = T1 ∧ T2 is proved using Proposition B.1.
Theorem B.1 (Theorem 3.1). For every T , JT̄ K = JT K⊥ .
Proof. By induction on T and by cases on its shape:
• If T = 0, then JT̄ K = J1K = P = ∅⊥ = J0K⊥ .
• If T = 1, then JT̄ K = J0K = ∅ = P ⊥ = J1K⊥ .
• If T = end, then JT̄ K = JendK = {P ∈ P | P ⇓ X} = JendK⊥ .
• If T = α.S, then JT̄ K = Jᾱ.S̄K = Gα (JS̄K) = Gα (JSK⊥ ) = Gᾱ (JSK)⊥ = Jα.SK⊥ .
• If T = T1 ∧ T2 , then JT̄ K = JT̄1 ∨ T̄2 K = (JT̄1 K ∪ JT̄2 K)⊥⊥ = (JT1 K⊥ ∪ JT2 K⊥ )⊥⊥ = (JT1 K⊥⊥ ∩ JT2 K⊥⊥ )⊥ = (JT1 K ∩ JT2 K)⊥ = JT1 ∧ T2 K⊥ .
• If T = T1 ∨ T2 , then JT̄ K = JT̄1 ∧ T̄2 K = JT̄1 K ∩ JT̄2 K = JT1 K⊥ ∩ JT2 K⊥ = (JT1 K ∪ JT2 K)⊥ = JT1 ∨ T2 K⊥ .
B.2 Subtyping Algorithm
Lemma B.1. Let T ≡ ⋁_{a∈A} a.Ta { ∨ end}_{X∈A} and S ≡ ⋀_{a∈A} a.Sa { ∧ end}_{X∈A} for some A ⊆ N ∪ {X},
where Ta and Sa are viable for every a ∈ A. Then the following properties hold:
1. P ∈ JT K if and only if P↓ and {µ | P ↓ µ} ⊆ A and P ↓ a implies P(a) ∈ JTa K;
2. P ∈ JSK if and only if P⇓ and A ⊆ {µ | P ⇓ µ} and a ∈ A implies P(a) ∈ JSa K.
Proof. We prove the two items in order:
1. Since JT K is closed we have P ∈ JT K if and only if P ∈ JT K⊥⊥ if and only if JT K⊥ ⊆ JPK. Now
  JT K⊥ = ( ⋃_{a∈A} Ga (JTa K) { ∪ JendK}_{X∈A} )⊥ = ⋂_{a∈A} Ga (JTa K)⊥ { ∩ JendK⊥ }_{X∈A} = ⋂_{a∈A} Ga (JTa K⊥ ) { ∩ JendK}_{X∈A} ;
in particular ∑a∈A a.Qa { + 1}X∈A ∈ JT K⊥ for every Qa ∈ JTa K⊥ . We deduce P↓ and P ↓ µ implies
µ ∈ A and P ↓ a implies P(a) ⊥ Qa . Since this holds for every Qa ∈ JTa K⊥ we have JTa K⊥ ⊆ JP(a)K,
which is equivalent to P(a) ∈ JTa K.
2. We have
  JSK = ⋂_{a∈A} Ga (JSa K) { ∩ JendK}_{X∈A}
from which we deduce that P ∈ JSK if and only if P⇓ and µ ∈ A implies P ⇓ µ and a ∈ A implies
P(a) ∈ JSa K.
Lemma B.2 (Lemma 3.1). The laws in Table 4 are sound.
Proof. Laws ( E - PREFIX ), ( E - BOTTOM ), and ( E - TOP ) are left as easy exercises for the reader. Regarding
rule ( E - DIST ) we have Jα.T ∧ α.SK = Jα.T K ∩ Jα.SK = Gα (JT K) ∩ Gα (JSK) = {P ∈ P | P ⇓ α ∧ P(α) ∈
JT K} ∩ {P ∈ P | P ⇓ α ∧ P(α) ∈ JSK} = {P ∈ P | P ⇓ α ∧ P(α) ∈ JT K ∩ JSK} = Gα (JT K ∩ JSK) = Gα (JT ∧
SK) = Jα.(T ∧ S)K. Regarding rule ( E - INPUT- END ), let T = ⋁_{a∈A} a.Ta and suppose by contradiction that
P ∈ JT ∧ endK. Then P ∈ JT K and P ∈ JendK, which implies P↓ and {µ | P ↓ µ} ⊆ A and P ⇓ X. Since
P ↓ a and P ⇓ X are incompatible properties we deduce A = ∅. Then T h 0, which contradicts the
hypothesis P ∈ JT ∧ endK. The proof that rule ( E - INPUT- OUTPUT ) is sound is similar, except that in
this case P ∈ Jb.SK implies P ⇓ b. Regarding rule ( E - INPUT- OUTPUT- END ), let T = ⋁_{a∈A} a.Ta ∨ end.
We only need to prove T ∧ b.S . end, because T ∧ b.S . b.S is obvious and b.S ∧ end . T ∧ b.S follows
immediately from the fact that end . T and the pre-congruence of .. Let P ∈ JT ∧ b.SK. By Lemma B.1
we deduce P ⇓ b and P↓. The only action in N ∪ {X} that may coexist with a guaranteed input action
(b) is X. Since P↓ we have P ↓ µ implies µ = X, hence P ⇓ X. We conclude P ∈ JendK. Regarding
rule ( E - INPUT- INPUT ) let T = ⋁_{a∈A} a.Ta { ∨ end}_{X∈A} and S = ⋁_{b∈B} b.Sb { ∨ end}_{X∈B}. By Lemma B.1 we
have P ∈ JT ∧ SK if and only if P↓ and {µ | P ↓ µ} ⊆ A ∩ B and P ↓ a implies P(a) ∈ JTa K ∩ JSa K, if and
only if P ∈ J⋁_{a∈A∩B} a.(Ta ∧ Sa ){ ∨ end}_{X∈A∩B} K.
Some of the proofs that follow are defined by induction on the depth of session types. By “depth” of
a session type we mean the maximum number of nested actions in it. For example a.b ∧ c has depth 2,
while a.b.c has depth 3. The session types 0, 1, and end all have depth 0.
Lemma B.3 (Lemma 3.2). For every session type T there exists S in normal form such that T h S.
Proof. By induction on the depth of T and by cases on its shape.
88
Session Types = Intersection Types + Union Types
• If T ≡ 1 or T ≡ 0 or T ≡ end, then T is already in normal form.
• If T ≡ α.T ′ , then by induction hypothesis there exists S ′ in normal form such that T ′ h S ′ . We
reason by cases on S ′ for finding S in normal form such that T h S:
  – if S ′ ≡ 1, then T h α.1 h 1;
  – if S ′ ≡ 0, then T h α.0 h 0;
  – in all the other cases we have T h α.S ′ which is in normal form.
• If T ≡ T1 ∧ T2 , then by induction hypothesis there exist S1 and S2 in normal form such that T1 h S1
and T2 h S2 . We reason by cases on S1 and S2 (symmetric cases omitted):
  – if S1 ≡ 0 we have T h 0 ∧ S2 h 0;
  – if S1 ≡ 1 we have T h 1 ∧ S2 h S2 ;
  – if S1 ≡ ⋁_{a∈A} a.S1,a { ∨ end}_{X∈A} and S2 ≡ ⋁_{b∈B} b.S2,b { ∨ end}_{X∈B} , then by rule ( E - INPUT- INPUT ) we have T h S1 ∧ S2 h ⋁_{a∈A∩B} a.(S1,a ∧ S2,a ){ ∨ end}_{X∈A∩B} . By induction hypothesis there exists Sa in normal form such that S1,a ∧ S2,a h Sa for every a ∈ A ∩ B, therefore T h ⋁_{a∈A∩B} a.Sa { ∨ end}_{X∈A∩B} ;
  – if S1 ≡ ⋀_{a∈A} a.S1,a { ∧ end}_{X∈A} and S2 ≡ ⋀_{b∈B} b.S2,b { ∧ end}_{X∈B} , then T h S1 ∧ S2 h ⋀_{a∈A\B} a.S1,a ∧ ⋀_{b∈B\A} b.S2,b ∧ ⋀_{a∈A∩B} a.Sa { ∧ end}_{X∈A∪B} , where Sa is in normal form and S1,a ∧ S2,a h Sa for every a ∈ A ∩ B;
  – if S1 ≡ ⋁_{a∈A} a.S1,a and S2 ≡ ⋀_{b∈B} b.S2,b { ∧ end}_{X∈B} , then by rules ( E - INPUT- END ) and/or ( E - INPUT- OUTPUT ) we conclude T h 0;
  – if S1 ≡ ⋁_{a∈A} a.S1,a ∨ end and S2 ≡ ⋀_{b∈B} b.S2,b { ∧ end}_{X∈B} , then by rule ( E - INPUT- OUTPUT- END ) we conclude T h ⋀_{b∈B} b.S2,b ∧ end.
• If T ≡ T1 ∨ T2 , then we reason in a dual fashion with respect to the previous case.
Theorem B.2 (Theorem 3.2). Let T and S be in normal form. Then T . S if and only if T 6 S.
Proof. The “if” part is trivial since 6 axiomatizes obvious properties of .. Regarding the “only if” part,
we proceed by induction on the depth of T and S and by cases on their (normal) form. We omit dual
cases:
• (T ≡ 0) We conclude with an application of either ( S - BOTTOM ) or ( S - INPUT ) according to the
form of S.
• (S ≡ 1) We conclude with an application of either ( S - TOP ) or ( S - OUTPUT ) according to the form
of T .
• (T ≡ ⋁_{a∈A} a.Ta {∨ end}_{X∈A} and S ≡ ⋁_{b∈B} b.Sb {∨ end}_{X∈B}) From the hypothesis T . S and Lemma B.1
we deduce A ⊆ B and Ta . Sa for every a ∈ A. By induction hypothesis we derive Ta 6 Sa for every
a ∈ A, and we conclude with an application of rule ( S - INPUT ).
• (T ≡ ⋁_{a∈A} a.Ta { ∨ end}_{X∈A} and S ≡ ⋀_{b∈B} b.Sb { ∧ end}_{X∈B}) For every P ∈ JT K we have {µ | P ↓
µ} ⊆ A and ∅ ≠ B ⊆ {µ | P ⇓ µ}, from which we deduce A = B = {X}. We conclude with an
application of rule ( S - END ).
• (T ≡ ⋀_{a∈A} a.Ta { ∧ end}_{X∈A} and S ≡ ⋀_{b∈B} b.Sb { ∧ end}_{X∈B}) For every P ∈ JT K we have that A ⊆ {µ |
P ⇓ µ} implies B ⊆ {µ | P ⇓ µ}, meaning B ⊆ A. Furthermore, Tb . Sb for every b ∈ B. By
induction hypothesis we deduce Tb 6 Sb for every b ∈ B, and we conclude with an application of
rule ( S - OUTPUT ).
• (T ≡ ⋀_{a∈A} a.Ta { ∧ end}_{X∈A} and S ≡ ⋁_{b∈B} b.Sb { ∨ end}_{X∈B}) For every P ∈ JT K we have that A ⊆ {µ |
P ⇓ µ} implies {µ | P ↓ µ} ⊆ B, from which we deduce X ∈ A ∩ B. We conclude with an application
of rule ( S - END ).
B.3 Type Checker
Theorem B.3 (Theorem 3.3). If T ` P and T̄ ` Q, then P ⊥ Q.
Proof. It is sufficient to show that T ` P implies P ∈ JT K⊥ for some generic P and T . Then, by Theorem 3.1 we have Q ∈ JT̄ K⊥ = JT K⊥⊥ = JT K and we conclude P ⊥ Q by definition of orthogonal set. We
prove that T ` P implies P ∈ JT K⊥ by induction on the derivation of T ` P and by cases on the last rule
applied:
• ( T- NIL ) Then P = 0 and T = 0 and we conclude 0 ∈ P = ∅⊥ = J0K⊥ .
• ( T- END ) Then P = 1 and T = end and we conclude 1 ∈ JendK⊥ = JendK = {P ∈ P | P ⇓ X}.
• ( T- SEND ) Then P = a.Q and T = a.S for some Q and S such that S ` Q. By induction hypothesis
we deduce Q ∈ JSK⊥ . We conclude P ∈ JT K⊥ = Ga (JSK)⊥ = Ga (JSK⊥ ) since P ⇓ a and P(a) = Q ∈
JSK⊥ .
• ( T- RECEIVE ) Then P = ∑_{i∈I} ai .Pi and T = ⋁_{i∈I} ai .Tai where Tai ` Pi for every i ∈ I. By induction hypothesis we have Pi ∈ JTai K⊥ for every i ∈ I. We conclude P ∈ JT K⊥ = J⋁_{i∈I} ai .Tai K⊥ =
(⋃_{i∈I} Gai (JTai K))⊥⊥⊥ = (⋃_{i∈I} Gai (JTai K))⊥ = ⋂_{i∈I} Gai (JTai K)⊥ = ⋂_{i∈I} Gai (JTai K⊥ ) because P ⇓ ai
and P(ai ) = ⊕_{ai =a j} Pj ∈ JTai K⊥ for every i ∈ I.
• ( T- CHOICE ) Then P = P1 ⊕ P2 where T ` Pi for i ∈ {1, 2}. By induction hypothesis we deduce
Pi ∈ JT K⊥ for i ∈ {1, 2}, hence we conclude P ∈ JT K⊥ because JT K⊥ is closed.
• ( T- SUB ) Then S ` P for some S such that T . S. By induction hypothesis we have P ∈ JSK⊥ hence
we conclude P ∈ JT K⊥ since T . S implies JSK⊥ ⊆ JT K⊥ by Proposition B.1(3).
| 6 |
Deep Neural Network Approximation using Tensor Sketching
Shiva Prasad Kasiviswanathan∗ †
Nina Narodytska‡ †
Hongxia Jin§
arXiv:1710.07850v1 [stat.ML] 21 Oct 2017
Abstract
Deep neural networks are powerful learning models that achieve state-of-the-art performance on many computer
vision, speech, and language processing tasks. In this paper, we study a fundamental question that arises when designing
deep network architectures: Given a target network architecture can we design a “smaller” network architecture that
“approximates” the operation of the target network? The question is, in part, motivated by the challenge of parameter
reduction (compression) in modern deep neural networks, as the ever increasing storage and memory requirements of
these networks pose a problem in resource constrained environments.
In this work, we focus on deep convolutional neural network architectures, and propose a novel randomized
tensor sketching technique that we utilize to develop a unified framework for approximating the operation of both
the convolutional and fully connected layers. By applying the sketching technique along different tensor dimensions,
we design changes to the convolutional and fully connected layers that substantially reduce the number of effective
parameters in a network. We show that the resulting smaller network can be trained directly, and has a classification
accuracy that is comparable to the original network.
1 Introduction
Deep neural networks have become ubiquitous in machine learning with applications, ranging from computer vision, to
speech recognition, and natural language processing. The recent successes of convolutional neural networks (CNNs)
for computer vision applications have, in part, been enabled by recent advances in scaling up these networks, leading to
networks with millions of parameters. As these networks keep growing in their number of parameters, reducing their
storage and computational costs has become critical for meeting the requirements of practical applications. While it is possible to train and deploy these deep convolutional neural networks on modern clusters, their storage, memory bandwidth, and computational requirements make them prohibitive for embedded mobile applications. On
the other hand, computer vision applications are growing in importance for mobile platforms. This dilemma gives
rise to the following natural question: Given a target network architecture, is it possible to design a new smaller
network architecture (i.e., with fewer parameters), which approximates the original (target) network architecture in its
operations on all inputs? In this paper, we present an approach for answering this network approximation question
using the idea of tensor sketching.
Network approximation is a powerful construct because it allows one to replace the original network with the
smaller one for both training and subsequent deployment [11, 2, 5, 48, 37, 3, 41, 14].1 That is, it completely eliminates
the need for ever realizing the original network, even during the initial training phase, which is a highly desirable
property when working in storage- and computation-constrained environments. While approximating any network
(circuit) using a smaller network (circuit) is computationally a hard problem [43], in this paper, we study the problem
of network approximation on convolutional neural networks. To approximate a convolutional neural network NN,
we focus on its parametrized layers (the convolutional and fully connected layers). Consider any such layer L in the
∗ Amazon ML. Work done while the author was at Samsung Research America, Mountain View, CA, USA. [email protected].
† Equal Contributions.
‡ VMware Research, Palo Alto, CA, USA. [email protected].
§ Samsung Research America, Mountain View, CA, USA. [email protected].
1 For clarity, we distinguish between the terms network and model in this paper: network refers to network architecture that describes the transformation applied on the input, whereas model refers to a trained network with fixed parameters obtained by training a network with some training set.
network NN. Let φ : Γ × Θ → Ω denote the function (transformation) applied by this layer, where Θ represents
the parameter space of the function (generally, a tensor space of some order), Γ and Ω represent the input and
output space respectively. Our general idea is to replace φ by a randomized function φ̂ : Γ × Θ̂ → Ω, such that ∀θ ∈ Θ, ∃θ̂ ∈ Θ̂, such that for every input γ ∈ Γ, E[φ̂(γ; θ̂)] = φ(γ; θ), where the expectation is over randomness
of the function φ̂. In other words, φ̂(γ; θ̂) is an unbiased estimator of φ(γ; θ). Additionally, we establish theoretical
bounds on the variance of this estimator. Ideally, we want the representation length of θ̂ to be much smaller than that of
θ. For the construction of φ̂, we introduce a novel randomized tensor sketching idea. The rough idea here is to create
multiple sketches of the tensor space Θ by performing random linear projections along different dimensions of Θ, and
then perform a simple combination of these sketches. This new operation φ̂ defines a new layer that approximates the
functionality φ of the layer L. Since φ̂ and φ have the same input and output dimensionality, we can replace the layer L
in the network NN with this new (sketch counterpart) layer. Doing so for all the convolutional and fully connected layers in NN, while maintaining the rest of the architecture, leads to a smaller network N̂N, which approximates the network NN. To the best of our knowledge, ours is the first work that uses the idea of sketching of the parameter space
for the task of network approximation.
The next issue is: Can we efficiently train the smaller network N̂N? We show that, with some changes to the standard training procedure, the parameters (which now represent sketches) of the constructed smaller network can be learnt space efficiently on any training set. Compared to the original network, there is also a slight improvement in the running time needed for various operations in this smaller network. This allows us to train N̂N directly on a training set D to get a reduced model N̂N_D.2 Our extensive experimental evaluations, on different datasets and architectures, corroborate the excellent performance of our approach by showing that it increases the limits of achievable parameter number reduction while almost preserving the original model accuracy, compared to several existing approximation techniques.
In fact, our technique succeeds in generating smaller networks that provide good accuracy even on large datasets such
as Places2, which other state-of-the-art network approximation techniques seem not to succeed on.
1.1 Preliminaries
We denote [n] = {1, . . . , n}. Vectors are in column-wise fashion, denoted by boldface letters. For a vector v, v>
denotes its transpose and kvk its Euclidean norm. For a matrix M , kM kF denotes its Frobenius norm. We use random
matrices to create sketches of the matrices/tensors involved in the fully connected/convolutional layers. In this paper,
for simplicity, we use random scaled sign (Rademacher) matrices. We note that other families of distributions such as
subsampled randomized Hadamard transforms can probably lead to additional computational efficiency gains when
used for sketching.
Definition 1. Let Z ∈ R^{k×d} be a random sign matrix with independent entries that are +1 or −1 with probability 1/2. We define a random scaled sign matrix U = Z/√k.
Here, k is a parameter that is adjustable in our algorithm. We generally assume k ≪ d. Note that E[U > U ] = Id
where Id is the d × d identity matrix. Also by linearity of expectation, for any matrix M with d columns, we have
E[M U > U ] = M E[U > U ] = M .
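To make Definition 1 concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper) that draws a random scaled sign matrix and checks empirically that M U⊤U averages to M over independent draws of U, as stated above.

import numpy as np

def random_scaled_sign_matrix(k, d, rng):
    # Entries of Z are +1 or -1, each with probability 1/2 (Definition 1).
    Z = rng.choice([-1.0, 1.0], size=(k, d))
    return Z / np.sqrt(k)

rng = np.random.default_rng(0)
d, k, trials = 64, 16, 2000
M = rng.standard_normal((8, d))
acc = np.zeros_like(M)
for _ in range(trials):
    U = random_scaled_sign_matrix(k, d, rng)
    acc += M @ U.T @ U          # each term is an unbiased estimate of M
print(np.linalg.norm(acc / trials - M) / np.linalg.norm(M))  # small relative error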
Tensor Preliminaries. We denote matrices by uppercase letters and higher dimensional tensors by Euler script letters,
e.g., T . A real pth order tensor T ∈ ⊗pi=1 Rdi is a member of the tensor product of Euclidean spaces Rdi for i ∈ [p].
As is the case for vectors (where p = 1) and matrices (where p = 2), we may identify a pth order tensor with the p-way
array of real numbers. The different dimensions of the tensor are referred to as modes. The (i1 , . . . , ip )th entry of a
tensor T is denoted by Ti1 i2 ...ip .
The mode-n matrix product (for n ∈ [p]) of a tensor T ∈ Rd1 ×···×dp with a matrix M ∈ Rk×dn is denoted by
T ×n M and has dimensions d1 × · · · × dn−1 × k × dn+1 × . . . dp . Elementwise, we have
(T ×n M)_{i1 ... i_{n−1} j i_{n+1} ... i_p} = Σ_{i_n=1}^{d_n} T_{i1 i2 ... i_p} M_{j i_n}.
2 The memory footprint of the reduced model N̂N_D can be further reduced using various careful operations such as pruning, binarization, quantization, low-rank decomposition, etc., [15, 20, 19, 38, 47, 17, 28, 45, 23, 24, 32, 50], which is beyond the scope of this work.
Note that the operation can also be applied simultaneously to multiple modes. In general, given p matrices M1 , . . . , Mp
where Mi ∈ Rki ×di , the resulting tensor T ×1 M1 ×2 M2 · · · ×p Mp is a tensor in Rk1 ×k2 ···×kp . For a matrix
W ∈ Rd1 ×d2 , it follows that: W ×1 M1 = M1 W and W ×2 M2 = W M2> .
A fiber of T is obtained by fixing all but one of the indices of the tensor. A flattening of tensor T along a mode
(dimension) n (denoted by matn ) is a matrix whose columns correspond to mode-n fibers of T . For example, in a fourth
order tensor T ∈ Rd1 ×d2 ×d3 ×d4 , T = mat4 (T ) ∈ Rd1 d2 d3 ×d4 is a matrix defined as: T(i1 +d1 (i2 −1)+d1 d2 (i3 −1))i4 =
Ti1 i2 i3 i4 , i.e., the (i1 , i2 , i3 , i4 ) entry in the tensor T is assigned to the location (i1 + d1 (i2 − 1) + d1 d2 (i3 − 1), i4 ) in
the matrix T .
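As a concrete aid, the following NumPy sketch (ours, not the paper's code) implements the mode-n matrix product and the mat4 flattening for a fourth-order tensor; axes are 0-indexed here, and Fortran-order reshaping matches the index formula given above.

import numpy as np

def mode_n_product(T, M, n):
    # Contract mode n of T with the columns of M, then move the new
    # axis (of size k) back to position n.
    out = np.tensordot(T, M, axes=([n], [1]))
    return np.moveaxis(out, -1, n)

def mat4(T):
    # Flatten a d1 x d2 x d3 x d4 tensor into a (d1*d2*d3) x d4 matrix whose
    # columns are mode-4 fibers; Fortran order reproduces the row index
    # i1 + d1*(i2-1) + d1*d2*(i3-1) from the text (0-indexed here).
    return T.reshape(-1, T.shape[-1], order="F")

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4, 5, 6))
M = rng.standard_normal((2, 5))
print(mode_n_product(T, M, 2).shape)   # (3, 4, 2, 6)
print(mat4(T).shape)                   # (60, 6)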
The weights of all (two dimensional) filters in a convolutional layer can be denoted by a 4-dimensional tensor in
Rd2 ×w×h×d1 where d1 and d2 represent the number of output and input feature maps, and h and w represent the height
and width of the filter kernels.
2 Tensor Sketching
Our network approximation is based on the idea of tensor sketching. Data sketching ideas have been successfully
used in designing many machine-learning algorithms, especially in the setting of streaming data, see e.g., [46].
Generally, sketching is used to construct a compact representation of the data so that certain properties in the data are
(approximately) preserved. Our usage of sketching is, however, slightly different: instead of sketching the input data, we apply sketching to the parameters of the function. Also, we want to design sketching techniques that work uniformly for both matrices and higher order tensors. For this, we define a new tensor sketch operation as follows.
Definition 2 (Mode-n Sketch). Given a tensor, T ∈ ⊗pi=1 Rdi , the mode-n sketch of T with respect to a random scaled
sign matrix Un ∈ Rk×dn for n ∈ [p], is defined as the tensor Sn = T ×n Un .
Since we generally pick k ≪ dn , the space needed for storing the sketch Sn is a factor dn /k smaller than that for
storing T . In the case of matrices, the sketches are created by pre- or post-multiplying the matrix with random scaled
sign matrices of appropriate dimensions. For example, given a matrix W ∈ Rd1 ×d2 , we can construct mode-1 sketch
(resp. mode-2 sketch) of W as W ×1 U1 = U1 W (resp. W ×2 U2 = W U2> ). Given a sketch S1 = W ×1 U1 (resp.
S2 = W ×2 U2 ) of a matrix W and another matrix M ∈ Rd2 ×d3 , it is natural to use U1> S1 M (resp. S2 U2 M ) as an
estimator for the matrix product W M . It is easy to see that both these estimators are unbiased. The second part of the
following proposition (proof in Appendix A) analyzes the variance of these estimators. The result will motivate our
construction of sketch-based convolutional and fully connected layers in the next section.
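The following NumPy sketch (an illustration under our own naming, not the paper's code) builds the mode-1 and mode-2 sketches of a matrix W and compares the two estimators of W M discussed above against the exact product.

import numpy as np

def scaled_sign(k, d, rng):
    return rng.choice([-1.0, 1.0], size=(k, d)) / np.sqrt(k)

rng = np.random.default_rng(1)
d1, d2, d3, k = 50, 40, 30, 10
W = rng.standard_normal((d1, d2))
M = rng.standard_normal((d2, d3))

U1 = scaled_sign(k, d1, rng)
U2 = scaled_sign(k, d2, rng)
S1 = U1 @ W        # mode-1 sketch, k x d2 (a factor d1/k smaller than W)
S2 = W @ U2.T      # mode-2 sketch, d1 x k (a factor d2/k smaller than W)

exact = W @ M
est1 = U1.T @ S1 @ M   # estimator built from the mode-1 sketch
est2 = S2 @ U2 @ M     # estimator built from the mode-2 sketch
for est in (est1, est2):
    print(np.linalg.norm(est - exact) / np.linalg.norm(exact))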
Proposition 2.1. Let W ∈ Rd1 ×d2 . Let U1 ∈ Rk×d1 and U2 ∈ Rk×d2 be two independent random scaled sign
matrices. Let S1 = U1 W (= W ×1 U1 ) and S2 = W U2> (= W ×2 U2 ). Then for any matrix M ∈ Rd2 ×d3 :
1. E[U1> S1 M ] = W M, and E[S2 U2 M ] = W M.
2. E[‖U1⊤ S1 M − W M‖_F^2] ≤ 2 d1 ‖W M‖_F^2 / k, and
   E[‖S2 U2 M − W M‖_F^2] ≤ 2 ‖W‖_F^2 ‖M‖_F^2 / k.
Notice that the variance terms decrease as 1/k. The variance bound can also be plugged into Chebyshev’s inequality
to get a probability bound. Also the variance bounds are quantitatively different based on whether the sketch S1 or S2
is used. In particular, depending on W and M , one of the variance bounds could be substantially smaller than the other
one, e.g., if the columns in M are in the null space of W then W M is a zero matrix, so while one bound gives a tight
zero variance the other one does not.
3 Sketch-based Network Architecture
We now describe our idea of approximating a network using tensor sketching. Our approach, in almost identical fashion,
can be used to reduce the number of parameters involved in both the convolutional and the fully connected layers
without significantly affecting the resulting accuracy.
3.1 Sketching Convolutional Layers
A typical convolutional layer in a CNN transforms a 3-dimensional input tensor Iin ∈ Rh1 ×w1 ×d2 into a output
tensor Iout ∈ Rh2 ×w2 ×d1 by convolving Iin with the kernel tensor K ∈ Rd2 ×h×w×d1 , where h2 and w2 depend on
h, w, h1 , w1 and possibly other parameters such as stride, spatial extent, zero padding [16]. We use ∗ to denote the
convolution operation, Iout = Iin ∗ K. The exact definition of the convolution operator (∗) that depends on these above
mentioned additional parameters is not very important for us, and we only rely on the fact that the convolution operation
can be realized using a matrix multiplication as we explain below.3 Also a convolutional layer could be optionally
followed by application of some non-linear activation function (such as ReLU or tanh), which are generally parameter
free, and do not affect our construction.
We use the tensor sketch operation (Definition 2) to reduce either the dimensionality of the input feature map (d2 )
or the output feature map (d1 ) in the kernel tensor K. In practice, the dimensions of the individual filters (h and w)
are small integers, which we therefore do not further reduce. The motivation for sketching along different dimensions
comes from our mathematical analysis of the variance bounds (Theorem 3.1), where as in Proposition 2.1 based on the
relationship between Iin and K the variance could be substantially smaller in one case or the other. Another trick that
works as a simple boosting technique is to utilize multiple sketches each associated with an independent random matrix.
Formally, we define a S K-C ONV layer as follows (see also Figure 1).
Definition 3. A S K-C ONV layer is parametrized by a sequence of tensor-matrix pairs (S11 , U11 ), . . . , (S1` , U1` ),
(S21 , U21 ), . . . , (S2` , U2` ) where for i ∈ [`] S1i ∈ Rd2 ×h×w×k , S2i ∈ Rk×h×w×d1 and U1i ∈ Rk×d1 , U2i ∈
Rkhw×d2 hw are independent random scaled sign matrices,4 which on input Iin ∈ Rh1 ×w1 ×d2 constructs Iout as
follows:
Iout = (1/(2ℓ)) Σ_{i=1}^ℓ Iin ∗ (S1i ×4 U1i⊤) + (1/(2ℓ)) Σ_{i=1}^ℓ Iin ∗ (S2i ◦ U2i⊤),    (1)

where S2i ◦ U2i⊤ ∈ R^{d2×h×w×d1} is defined as5

(S2i ◦ U2i⊤)_{xyzs} = Σ_{c=1}^k Σ_{i=1}^h Σ_{j=1}^w S2i_{cijs} U2i_{(cij)(xyz)}.

Here (S2i ◦ U2i⊤)_{xyzs} is the (x, y, z, s)th entry, S2i_{cijs} is the (c, i, j, s)th entry, and U2i_{(cij)(xyz)} is the (cij, xyz)th entry in (S2i ◦ U2i⊤), S2i , and U2i , respectively.
Running multiple sketches in parallel on the same input and taking the average also results in a more stable
performance across different choices of the random matrices (see the experimental discussion in Appendix C). The
number of free parameters overall in all the S1i and S2i tensors put together equals `hwk(d1 + d2 ).6 Therefore, with a
S K-C ONV layer, we get a reduction in the number of parameters compared to a traditional convolutional layer (with
hwd1 d2 parameters) if k` ≤ d1 d2 /(d1 + d2 ). With this reduction, the time for computing Iout , ignoring dependence
on h and w, reduces from O(h2 w2 d1 d2 ) (in a traditional C ONV layer) to O(h2 w2 `k(d1 + d2 )) (in a S K-C ONV layer).
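A small back-of-the-envelope check of this parameter count, written as Python below; the numbers plugged in are arbitrary examples of ours, not values from the paper.

def conv_params(d1, d2, h, w):
    # Standard convolutional layer with a d2 x h x w x d1 kernel tensor.
    return h * w * d1 * d2

def sk_conv_params(d1, d2, h, w, k, l):
    # l sketch pairs, each storing tensors of sizes d2*h*w*k and k*h*w*d1.
    return l * h * w * k * (d1 + d2)

d1, d2, h, w, k, l = 192, 160, 3, 3, 24, 2
print(conv_params(d1, d2, h, w))            # 276480
print(sk_conv_params(d1, d2, h, w, k, l))   # 152064
print(k * l <= d1 * d2 / (d1 + d2))         # True, so the SK-CONV layer is smaller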
The convolution operation can be reduced into a matrix multiplication, an idea that is exploited by many deep
learning frameworks [6]. The idea is to reformulate the kernel tensor K by flattening it along the dimension representing
the output feature map, which in our setting is represented along the fourth dimension of K. The input tensor Iin is
used to form a matrix Iin ∈ Rh2 w2 ×d2 hw . This construction is quite standard and we refer the reader to [6] for more
details. Then it follows that Iout defined as Iin mat4 (K) ∈ Rh2 w2 ×d1 is a reshaping of the output tensor Iout (i.e.,
Iout = mat3 (Iout )).
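The sketch below illustrates this reformulation in NumPy for the stride-1, zero-padding-0 case of footnote 3; the im2col helper and its Fortran-order flattening are our own choices made to match the mat4 convention, not code from the paper.

import numpy as np

def im2col(I_in, h, w):
    # I_in: h1 x w1 x d2 tensor -> (h2*w2) x (d2*h*w) matrix whose rows are
    # receptive fields flattened in (c, i, j) order with c fastest.
    h1, w1, d2 = I_in.shape
    h2, w2 = h1 - h + 1, w1 - w + 1
    rows = []
    for x in range(h2):
        for y in range(w2):
            patch = I_in[x:x + h, y:y + w, :]               # h x w x d2
            rows.append(np.transpose(patch, (2, 0, 1)).reshape(-1, order="F"))
    return np.stack(rows)

def mat4(K):
    return K.reshape(-1, K.shape[-1], order="F")            # (d2*h*w) x d1

rng = np.random.default_rng(0)
I_in = rng.standard_normal((8, 8, 3))                       # h1 x w1 x d2
K = rng.standard_normal((3, 3, 3, 5))                       # d2 x h x w x d1
I_out = im2col(I_in, 3, 3) @ mat4(K)                        # (h2*w2) x d1
print(I_out.shape)                                          # (36, 5)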
3 In a commonly used setting, with stride of 1 and zero-padding of 0, h2 = h1 − h + 1 and w2 = w1 − w + 1, and Iout ∈ R^{(h1−h+1)×(w1−w+1)×d1} is defined as: Iout_{xys} = Σ_{i=1}^h Σ_{j=1}^w Σ_{c=1}^{d2} K_{cijs} Iin_{(x+i−1)(y+j−1)c}.
4 We define U2i ∈ R^{khw×d2hw} (instead of U2i ∈ R^{k×d2}) for simplifying the construction.
5 Let Oi = S2i ◦ U2i⊤. The operation can be equivalently defined: mat4(Oi) = U2i⊤ mat4(S2i).
6 The random matrices, once picked are not changed during the training or deployment.
Figure 1: A S K-C ONV layer with parameters (S11 , U11 ), . . . , (S1` , U1` ), (S21 , U21 ), . . . , (S2` , U2` ).
Using this equivalence and simple algebraic observations (mat4(S1i ×4 U1i⊤) = mat4(S1i) U1i and mat4(S2i ◦ U2i⊤) = U2i⊤ mat4(S2i)), we can re-express the operation in (1) as:

Iout = (1/(2ℓ)) Σ_{i=1}^ℓ Iin mat4(S1i) U1i + (1/(2ℓ)) Σ_{i=1}^ℓ Iin U2i⊤ mat4(S2i).    (2)

Or in other words,

Iout = (1/(2ℓ)) Σ_{i=1}^ℓ Iin (mat4(S1i) ×2 U1i⊤) + (1/(2ℓ)) Σ_{i=1}^ℓ Iin (mat4(S2i) ×1 U2i⊤).
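Written out in NumPy, the forward pass of equation (2) on the im2col matrix looks as follows; this is our own sketch with illustrative sizes, storing mat4(S1i) and mat4(S2i) directly.

import numpy as np

def scaled_sign(k, d, rng):
    return rng.choice([-1.0, 1.0], size=(k, d)) / np.sqrt(k)

def sk_conv_forward(I_in_mat, S1_mats, S2_mats, U1s, U2s):
    # I_in_mat: (h2*w2) x (d2*h*w); returns the (h2*w2) x d1 output matrix.
    l = len(S1_mats)
    out = sum(I_in_mat @ S1 @ U1 for S1, U1 in zip(S1_mats, U1s))
    out += sum(I_in_mat @ U2.T @ S2 for S2, U2 in zip(S2_mats, U2s))
    return out / (2 * l)

rng = np.random.default_rng(0)
d1, d2, h, w, k, l, n_pos = 6, 4, 3, 3, 2, 2, 30
I_in_mat = rng.standard_normal((n_pos, d2 * h * w))
U1s = [scaled_sign(k, d1, rng) for _ in range(l)]                    # k x d1
U2s = [scaled_sign(k * h * w, d2 * h * w, rng) for _ in range(l)]    # khw x d2hw
S1_mats = [rng.standard_normal((d2 * h * w, k)) for _ in range(l)]   # mat4(S1i)
S2_mats = [rng.standard_normal((k * h * w, d1)) for _ in range(l)]   # mat4(S2i)
print(sk_conv_forward(I_in_mat, S1_mats, S2_mats, U1s, U2s).shape)   # (30, 6)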
Theoretical Guarantees of a S K-C ONV Layer. Given a traditional convolutional layer with kernel tensor K and
independent random scaled sign matrices U11 , . . . , U1` , U21 , . . . , U2` , we can form a corresponding S K-C ONV layer by
constructing tensors S11 , . . . , S1` , S21 , . . . , S2` such that mat4 (S1i ) = mat4 (K)U1>i and mat4 (S2i ) = U2i mat4 (K)
for i ∈ [`]. The following theorem (proof in Appendix B), based on Proposition 2.1, analyzes the expectation and the
variance of using these sketches as an estimator for Iin ∗ K (≡ Iin mat4 (K)). Since the random matrices are independent
of each other, we drop the subscript and perform the analysis for a single instantiation of these sketches.
Theorem 3.1. Let K ∈ Rd2 ×h×w×d1 be a kernel tensor and K = mat4 (K). Let U1 ∈ Rk×d1 and U2 ∈ Rkhw×d2 hw
be two independent random scaled sign matrices. Let S1 and S2 be tensors such that mat4 (S1 ) = K ×2 U1
and mat4 (S2 ) = K ×1 U2 . Then for any input matrix Iin ∈ Rh2 w2 ×d2 hw (formed from an input tensor Iin ∈
Rh1 ×w1 ×d2 ):
1. Unbiased Estimation: E[Iin mat4 (S1 )U1 ] = Iin K, and E[Iin U2> mat4 (S2 )] = Iin K.
2. Variance Bound:
   E[‖Iin mat4(S1) U1 − Iin K‖_F^2] ≤ 2 d1 ‖Iin K‖_F^2 / k, and
   E[‖Iin U2⊤ mat4(S2) − Iin K‖_F^2] ≤ 2 ‖Iin‖_F^2 ‖K‖_F^2 / (khw).
3.1.1 Training a S K-C ONV Layer
In this section, we discuss a procedure for training a S K-C ONV layer. Let Loss() denote some loss function for the
network. For computational and space efficiency, our goal will be to perform the training without ever needing to
construct the complete kernel tensor (K). We focus on deriving the gradient of the loss with respect to the parameters in
a S K-C ONV layer, which can then be used for back-propagating the gradient information.
We can again exploit the equivalence between the convolution operation and matrix multiplication. Consider the operation performed in the S K-C ONV layer as defined in (2). Let G = ∂Loss/∂Iout ∈ R^{h2 w2 × d1}. For i ∈ [ℓ],7

∂Loss/∂mat4(S1i) = (Iin⊤ G U1i⊤)/(2ℓ),
∂Loss/∂mat4(S2i) = (U2i Iin⊤ G)/(2ℓ), and
∂Loss/∂Iin = Σ_{i=1}^ℓ (G U1i⊤ mat4(S1i)⊤)/(2ℓ) + Σ_{i=1}^ℓ (G mat4(S2i)⊤ U2i)/(2ℓ).
Notice that all the required operations can be carried out without ever explicitly forming the complete d2 × h × w × d1
sized kernel tensor.
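The gradient formulas above translate directly into matrix products; the NumPy fragment below spells them out for a single sketch pair (ℓ = 1), with illustrative shapes of our own choosing.

import numpy as np

rng = np.random.default_rng(0)
n_pos, d1, d2hw, khw, k, l = 30, 6, 36, 18, 2, 1
I_in = rng.standard_normal((n_pos, d2hw))   # im2col matrix of the input
G = rng.standard_normal((n_pos, d1))        # upstream gradient dLoss/dI_out
U1 = rng.choice([-1.0, 1.0], size=(k, d1)) / np.sqrt(k)
U2 = rng.choice([-1.0, 1.0], size=(khw, d2hw)) / np.sqrt(khw)
S1 = rng.standard_normal((d2hw, k))         # mat4(S1)
S2 = rng.standard_normal((khw, d1))         # mat4(S2)

grad_S1 = I_in.T @ G @ U1.T / (2 * l)                    # dLoss/d mat4(S1)
grad_S2 = U2 @ I_in.T @ G / (2 * l)                      # dLoss/d mat4(S2)
grad_I_in = (G @ U1.T @ S1.T + G @ S2.T @ U2) / (2 * l)  # dLoss/dI_in
print(grad_S1.shape, grad_S2.shape, grad_I_in.shape)     # (36, 2) (18, 6) (30, 36)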
3.2 Sketching Fully Connected Layers
Neurons in a fully connected (F C) layer have full connections to all activations in the previous layer. These layers apply
a linear transformation of the input. Let W ∈ Rd1 ×d2 represent a weight matrix and b ∈ Rd1 represent a bias vector.
The operation of the F C layer on input hin can be described as:
a = W hin + b.
(3)
Typically, the F C layer is followed by application of some non-linear activation function. As in the case of convolutional
layers, our construction is independent of the applied activation function and we omit further discussion of these
functions.
Our idea is to use the tensor sketch operation (Definition 2) to sketch either the columns or rows of the weight
matrix.
Definition 4. A S K-F C layer is parametrized by a bias vector b ∈ Rd1 and a sequence of matrix pairs (S11 , U11 ), . . . ,
(S1` , U1` ), (S21 , U21 ), . . . , (S2` , U2` ) where for i ∈ [`], S1i ∈ Rk×d2 , S2i ∈ Rd1 ×k and U1i ∈ Rk×d1 , U2i ∈ Rk×d2
are independent random scaled sign matrices, which on input hin ∈ Rd2 performs the following operation:
a = (1/(2ℓ)) Σ_{i=1}^ℓ U1i⊤ S1i hin + (1/(2ℓ)) Σ_{i=1}^ℓ S2i U2i hin + b.    (4)

Note that a in the above definition could be equivalently represented as:

a = (1/(2ℓ)) Σ_{i=1}^ℓ (S1i ×1 U1i⊤) hin + (1/(2ℓ)) Σ_{i=1}^ℓ (S2i ×2 U2i⊤) hin + b.
The number of free parameters overall in all the S1i and S2i matrices put together is `k(d1 + d2 ). Therefore, compared
to a traditional weight matrix W ∈ Rd1 ×d2 , we get a reduction in the number of parameters if k` ≤ d1 d2 /(d1 + d2 ).
7 The
gradients computed with respect to mat4 (S1i ) and mat4 (S2i ) can also be converted into a tensor by reversing the mat4 () operator.
Figure 2: A S K-F C layer with parameters b, (S11 , U11 ), . . . , (S1` , U1` ), (S21 , U21 ), . . . , (S2` , U2` ).
Another advantage is that the time needed for computing the pre-activation value (a in (4)) in a S K-F C layer is
O(`k(d1 + d2 )) which is smaller than the O(d1 d2 ) time needed in the traditional F C setting if the values of k and `
satisfy the above condition.
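A minimal NumPy sketch of the S K-F C forward pass in equation (4) is given below; the names and sizes are ours, and in practice one would wrap this in a layer of whatever framework is being used (the paper's experiments used Torch).

import numpy as np

def scaled_sign(k, d, rng):
    return rng.choice([-1.0, 1.0], size=(k, d)) / np.sqrt(k)

def sk_fc_forward(h_in, b, S1s, U1s, S2s, U2s):
    l = len(S1s)
    a = sum(U1.T @ (S1 @ h_in) for S1, U1 in zip(S1s, U1s))
    a += sum(S2 @ (U2 @ h_in) for S2, U2 in zip(S2s, U2s))
    return a / (2 * l) + b

rng = np.random.default_rng(0)
d1, d2, k, l = 100, 80, 10, 2
h_in = rng.standard_normal(d2)
b = rng.standard_normal(d1)
U1s = [scaled_sign(k, d1, rng) for _ in range(l)]
U2s = [scaled_sign(k, d2, rng) for _ in range(l)]
S1s = [rng.standard_normal((k, d2)) for _ in range(l)]   # plays the role of U1i W
S2s = [rng.standard_normal((d1, k)) for _ in range(l)]   # plays the role of W U2i^T
print(sk_fc_forward(h_in, b, S1s, U1s, S2s, U2s).shape)  # (100,)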
Theoretical Guarantees of S K-F C Layer. Given a traditional F C layer with weight matrix W (as in (3)), and
independent random scaled sign matrices U11 , . . . , U1` , U21 , . . . , U2` , we can form a corresponding S K-F C layer by
setting S1i = U1i W and S2i = W U2>i . We now analyze certain properties of this construction. The following theorem,
based on Proposition 2.1, analyzes the expectation and the variance of using these sketches as an estimator for W hin + b
for a vector hin ∈ Rd2 . Since the random matrices are independent of each other, we drop the subscript and perform
the analysis for a single instantiation of these sketches.
Theorem 3.2. Let W ∈ Rd1 ×d2 . Let U1 ∈ Rk×d1 and U2 ∈ Rk×d2 be two independent random scaled sign matrices.
Let S1 = U1 W (= W ×1 U1 ) and S2 = W U2> (= W ×2 U2 ). Then for any hin ∈ Rd2 and b ∈ Rd1 :
1. Unbiased Estimation: E[U1> S1 hin + b] = W hin + b, and E[S2 U2 hin + b] = W hin + b.
2. Variance Bound:
   E[‖U1⊤ S1 hin + b − (W hin + b)‖^2] ≤ 2 d1 ‖W hin‖^2 / k, and
   E[‖S2 U2 hin + b − (W hin + b)‖^2] ≤ 2 ‖W‖_F^2 ‖hin‖^2 / k.

3.2.1 Training a S K-F C Layer
In this section, we discuss a procedure for training a network containing S K-F C layers. Let Loss() denote some loss function for the network. Let a = S2 U2 hin + b and let g = ∂Loss/∂a. In this case, using the chain rule of calculus,

∂Loss/∂S2 = g hin⊤ U2⊤ = (g hin⊤) ×2 U2.    (5)

Similarly, the gradient with respect to hin can be calculated as:

∂Loss/∂hin = (S2 U2)⊤ g = (S2 ×2 U2⊤)⊤ g.    (6)

Now let a = U1⊤ S1 hin + b = (S1⊤ U1)⊤ hin + b, and again let g = ∂Loss/∂a. Applying the chain rule gives

∂Loss/∂S1 = Σ_{i=1}^{d1} (∂Loss/∂ai)(∂ai/∂S1),

where ai denotes the ith entry of a. We can compute ∂ai/∂S1 as:

∂ai/∂S1 = ∂(u1i⊤ S1 hin)/∂S1 = u1i hin⊤,

where u1i is the ith column in U1. Therefore, we get

∂Loss/∂S1 = Σ_{i=1}^{d1} gi u1i hin⊤ = U1 g hin⊤ = (g hin⊤) ×1 U1,    (7)

where gi denotes the ith entry of g. Finally, the gradient with respect to hin in this case equals:

∂Loss/∂hin = (S1⊤ U1) g = (S1 ×1 U1⊤)⊤ g.    (8)

Putting together (5), (6), (7), and (8) gives the necessary gradients for the S K-F C layer (where a is defined using (4)). Let g = ∂Loss/∂a. For i ∈ [ℓ],

∂Loss/∂S1i = (U1i g hin⊤)/(2ℓ),
∂Loss/∂S2i = (g hin⊤ U2i⊤)/(2ℓ), and
∂Loss/∂hin = Σ_{i=1}^ℓ (S1i⊤ U1i g)/(2ℓ) + Σ_{i=1}^ℓ (U2i⊤ S2i⊤ g)/(2ℓ).
Note that all the above computations can be performed without ever explicitly forming the complete d1 × d2 weight
matrix.
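As a sanity check on these formulas, the NumPy fragment below (our own, with ℓ = 1 and a simple squared loss) compares the closed-form gradient with a finite-difference estimate on one entry of S1.

import numpy as np

rng = np.random.default_rng(0)
d1, d2, k = 20, 15, 4
U1 = rng.choice([-1.0, 1.0], size=(k, d1)) / np.sqrt(k)
U2 = rng.choice([-1.0, 1.0], size=(k, d2)) / np.sqrt(k)
S1 = rng.standard_normal((k, d2))
S2 = rng.standard_normal((d1, k))
b = rng.standard_normal(d1)
h = rng.standard_normal(d2)
t = rng.standard_normal(d1)                 # arbitrary target for a squared loss

def forward(S1, S2, h):
    return 0.5 * (U1.T @ (S1 @ h)) + 0.5 * (S2 @ (U2 @ h)) + b

def loss(S1, S2, h):
    return 0.5 * np.sum((forward(S1, S2, h) - t) ** 2)

g = forward(S1, S2, h) - t                  # dLoss/da for this loss
grad_S1 = U1 @ np.outer(g, h) / 2           # (U1 g h^T) / (2l) with l = 1
grad_S2 = np.outer(g, h) @ U2.T / 2         # (g h^T U2^T) / (2l)
grad_h = (S1.T @ (U1 @ g) + U2.T @ (S2.T @ g)) / 2

eps = 1e-6
S1p = S1.copy(); S1p[0, 0] += eps
print(grad_S1[0, 0], (loss(S1p, S2, h) - loss(S1, S2, h)) / eps)   # nearly equal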
3.3 Final Construction of N̂N
Given a convolutional neural network NN, construct N̂N, an approximation of NN, by replacing the convolutional
layers (resp. fully connected layers) with S K-C ONV layers (resp. S K-F C layers). A nice feature about this construction
is that, based on need, we can also choose to replace only some of the layers of the NN with their sketch counterpart
layers.
4 Comparison to Previous Work
Deep neural networks are typically over-parametrized, and there is significant redundancy in deep learning networks [11].
There have been several previous attempts to reduce the complexity of deep neural networks under a variety of contexts.
Approximating only the Fully Connected Layers. A set of techniques have focused on approximating only the
fully connected layers in some reduced form. Yang et al. [48] use the Fastfood transformation technique of [30] to
approximate the fully connected layers. The HashedNets architecture, proposed by Chen et al. [2], uses a hash function
to enforce parameter sharing between random groups of parameters in a fully connected layer to reduce the number
of effective parameters. Cheng et al. [5] achieve parameter reduction by imposing a circulant matrix structure on
fully connected layers. Sindhwani et al. [37] generalize this construction by proposing a broad family of structured
parameter matrix structure and showing its effectiveness on the fully connected layers. Choromanska et al. [7] provide
some theoretical justifications for using structured hashed projections in these layers. While some of these techniques
are highly effective on the fully connected layers, they fall short of achieving a significant reduction in the number
of parameters for modern CNNs, which are dominated by convolutional layers [40, 21].8 Therefore, any effective technique
for parameter reduction on CNNs should also act on convolutional layers.
Approximating both the Convolutional and Fully Connected Layers. Most relevant to our paper is a line of work
on approximating both the fully connected and convolutional layers. Denil et al. [11], suggested an approach based
on learning a low-rank factorization of the matrices (tensors are viewed as a matrix) involved within each layer of a
CNN. Instead of learning both the factors of a factorization during training, the authors suggest techniques for carefully
constructing one of the factors (called the dictionary), while only learning the other one. Our sketching-based approach
is related to low-rank factorization, however using sketching we eliminate the overhead of carefully constructing
the dictionary. Tai et al. [41] achieve parameter reduction using a tensor decomposition technique that is based on
replacing the convolutional kernel with two consecutive kernels with a lower rank. The issue with this approach is
that with the increased depth of the resulting network, training becomes more challenging, and the authors rely on
batch normalization (proposed by [25]) to overcome this issue. In our proposed approach, the depth of the reduced
network remains equal to that of the original network, and the reduced network can be trained with or without batch
normalization. Very recently, Garipov et al. [14], building upon a work by [36], used a tensor factorization technique,
called tensor train decomposition, to uniformly approximate both the fully connected and convolutional layers. However,
constructing an exact tensor factorization (even computing the tensor rank) is in general a challenging NP-hard problem,
whereas our approach relies only on simple linear transformations. Chen et al. [3] combine the hashing idea from [2]
along with the discrete cosine transform (DCT) to compress filters in a convolutional layer. Their architecture, called
FreshNets, first converts filter weights into frequency domain using discrete cosine transform and then uses the hashing
idea to randomly group the resulting frequency parameters into buckets. Our sketches are created by using random
projections which is related to the hashing trick used in these results, however, our techniques are naturally attractive
for convolutional neural networks as they are known to preserve spatial locality [27], a property that is not preserved
by simple hashing. Also, in contrast to FreshNets, our architectures require just simple linear transformations for both
fully connected and convolutional layers, and do not require special routines for DCT, Inverse DCT, etc. Additionally,
we provide theoretical bounds on the quality of approximation that is missing in these previous studies.
Other Related Work. There is a long line of work on reducing model memory size based on post-processing a trained
network (with sometimes further fine-tuning of the compressed model) [15, 20, 19, 38, 47, 17, 28, 45, 23, 24, 50, 32].
Techniques such as pruning, binarization, quantization, low-rank decomposition, etc., are intermingled with training of
a network on a dataset to construct a reduced model. These results do not achieve a direct network approximation as the
training happens on the original network. In practice, one can combine our approach with some of the above proposed
model post-processing techniques to further reduce the storage requirements of the trained model (which is beyond the
scope of this paper).
Hinton et al. [22] and Ba et al. [1] proposed approaches to learn a “distilled” model, training a more compact neural
network to reproduce the output of a larger network. The general idea is to train a large network on the original training
labels, then learn a much smaller distilled model on a weighted combination of the original labels and the softmax
output of the larger model. Note that with our network approximation approach, we do not need to train the original
large network. Also unlike distillation-based approaches where a separate distilled model has to be formed with each
dataset, our approach produces a single reduced network that can be then trained on any dataset.
Other techniques proposed for parameter reduction include inducing zeros in the parameter matrices via sparsity
regularizers [8] and storing weights in low fixed-precision formats [18, 9]. These ideas can be readily incorporated with
our approach, potentially yielding further reductions in the model memory size. Daniely et al. [10] generate sketches of
the input and show that it can lead to compact neural networks. Our approach, based on sketching the parameters of the
deep network, is complementary to this idea, and the two approaches can be used in conjunction.
Several works apply related approaches to speed up the evaluation time with CNNs [26, 12, 31, 13]. The focus of
8 Some recent studies [34, 33] have suggested that removing fully connected layers and replacing them with convolutions and pooling could be
beneficial for certain computer vision applications.
this line of work is not on parameter reduction but rather decreasing the evaluation time during testing. In each of these
results, any resulting storage reduction comes as a side effect. Other techniques for speeding up convolutional neural
networks include use of Winograd or FFT-based convolutions [29, 35, 44]. Again, unlike here, parameter reduction is
not a focus of these results.
5 Experimental Evaluation
In this section, we experimentally demonstrate the effectiveness of our proposed network approximation approach.
Our goal through the experiments is not to test the limits of reduction possible in deep neural networks, but rather to
demonstrate that through our tensor sketching approach it is possible to design a substantially smaller network that
achieves almost the same performance as the original network on a wide-range of datasets. We used the Torch machine
learning framework and all the experiments were performed on a cluster of GPUs using a single GPU for each run.
Additional experimental results are presented in Appendix C.
Metrics. We define compression rate as the ratio between the number of parameters in the reduced (compressed)
network architecture and the number of parameters in the original (uncompressed) network architecture. Compression
rate < 1 indicates compression with smaller values indicating higher compression. The top-1 error (denoted by
E RRT OP -1) for a trained model on a test set captures the percentage of images in the test set misclassified by the model.
To get a more stable picture of the model performance, E RRT OP -1 is computed by averaging the test error after each of
the last 10 training epochs.
Datasets. We use 5 popular image datasets: CIFAR10 (objects recognition dataset with 3×32×32 images), SVHN (digits
recognition dataset with 3×32×32 images), STL10 (objects recognition dataset with 3×96×96 images), ImageNet10
(objects recognition dataset with 3×256×256 images; a subset of the ImageNet1000 dataset that we created9 ), and Places2 (scene understanding dataset with 365 classes and about 8 million images in the training set). Note that Places2 is a big
and challenging dataset that was used in the recent ILSVRC 2016 “Scene Classification” challenge.
Network Architectures. We ran our experiments on four different network architectures. The choice of architectures
was done keeping in mind limited computational resources at our disposal and a recent trend of moving away from
fully connected layers in CNNs. A common observation in this area is that reducing the number of parameters in
convolutional layers seems to be a much more challenging problem than that for fully connected layers. The first
network architecture that we experiment with is the popular Network-in-Network (NinN) [33] with minor adjustments
for the corresponding image sizes (we used strides of the first layer to make these adjustments). Network-in-Network
is a moderately sized network which attains good performance on medium sized datasets, e.g. CIFAR10 [49]. For
this network, we did not employ batch normalization [25] or dropout [39] to have a uniform set of experiments across
different techniques. The second network that we consider is the same as NinN with only one change that the last
convolution layer is replaced by a fully connected layer (we denote it as NinN+FC). Following [2], the third network
that we experiment with is a simple shallow network, which we refer to as TestNet, with only 2 convolution layers and 2 fully
connected layers which allows us to easily test the efficacy of our approximation technique for each layer individually.
We describe the construction of TestNet in more detail in Appendix C. Table 1 shows the original (uncompressed)
top-1 error (E RRT OP -1) for NinN and NinN+FC. The number of parameters are about 966K for NinN and 1563K for
NinN+FC for all datasets. The statistics about TestNet are presented in Figure 2 (Appendix C). The final network that
we consider is GoogLeNet [40] with batch normalization, which we use for the Places2 dataset. This network has a
top-1 error of 32.3% on the Places2 dataset.
Network     CIFAR10   STL10   SVHN   ImageNet10
NinN        17.7      43.2    6.0    27.1
NinN+FC     16.9      41.2    5.4    26.0
Table 1: Top-1 error of the NinN architecture and its variant on different datasets.
Baseline Techniques. As discussed in Section 4 there are by now quite a few techniques for network approximation.
9 We used the following classes: bottle, cat, grasshopper, grocery, truck, chair, running shoes, boat, stove, and clock. The training set consists of 13000 images and the test set consists of 500 images.
We compare our proposed approach with four state-of-the-art techniques that approximate both the convolutional and the
fully connected layers: FreshNets technique that uses hashing in the frequency domain to approximate the convolutional
layer [3], low-rank decomposition technique of [11] (L OW R ANK1 ), and tensor decomposition technique of [41]
(L OW R ANK2 ). While using the FreshNets technique, we also use the HashedNets technique of feature hashing [2]
for compressing the fully connected layers as suggested by [3]. We used open-source implementations of all these
techniques: HashedNets, FreshNets, and L OW R ANK1 are from [4] and L OW R ANK2 from [42]. We set the required
parameters to ensure that all the compared approaches achieve about the same compression rate.
Figure 3: Top-1 error for the NinN architecture as we decrease the compression rate by compressing one convolutional
layer at a time each by a factor of 10. The x-axis is not to scale.
Figure 4: Top-1 error for the NinN architecture as we decrease the compression rate by compressing one convolutional
layer at a time each by a factor of 4. The x-axis is not to scale.
Figure 5: Top-1 error for the NinN+FC architecture. The size of F C layer is about half of the total size of convolutional
layers C ONV2 to C ONV8 . We compress the fully connected layer by a factor of 4. We then use a similar experimental
setup as in Figure 4 of reducing the number of parameters in the convolutional layers (C ONV2 to C ONV8 ) each by a
factor of 4. The x-axis is not to scale.
Compression on the Convolutional Layers. We performed a set of experiments to evaluate the performance of our
scheme only on the convolutional layers. We used the NinN architecture for this purpose. NinN is, essentially, a
sequence of nine convolution layers (labeled as C ONV1 to C ONV9 ). We compress these layers one by one, starting
from C ONV2 and finishing at C ONV9 by reducing the number of parameters in each layer by a factor of r which is set
as 10. When all these 8 convolution layers are compressed the achieved network compression rate is approximately10
equal to 1/r.
10 We
do not compress the first layer that takes input.
Figures 3 and 4 show the results of our experiments. If a point is missing in the plots then the corresponding
network training failed. We expect the error to go up as we decrease the compression rate, i.e., increase the parameter
reduction. We observe this general trend in almost all our plots, with minor fluctuations such as in Figure 3 on the
SVHN dataset. We make two main observations from these plots. First, our method was always able to get to a better
compression rate compared to other techniques, in that these comparative techniques started failing sooner as we kept
decreasing the compression rate. For example, our approach consistently achieves a compression rate of 0.15 that none
of the other techniques even get close to achieving. Second, our approach also almost always achieves better accuracy
when compared to other techniques. As explained in Section 4, our approach has some advantages over the compared
techniques, especially in terms of its ability to approximate (compress) the convolutional layers. Effects of this become
more pronounced as we decrease the compression rate. In most cases, we gain up to 4% or lose up to 2% of accuracy
compared to the original network accuracy. The fact that sometimes our reduced network was able to gain a bit of accuracy
over the original network suggests that our randomized technique probably also adds a regularization effect during the
training.
Compression on both the Convolutional and Fully Connected Layers. We now add fully connected layers into the
mix. To do so, we used a modified NinN architecture (denoted as NinN+FC) in our experiments where we replaced the
last convolution layer (C ONV9 ) with a fully connected layer of size 768 × 768 followed by a classifier layer of size
768 × 10. In Figure 5, we present the results of these experiments. Our approach again outperforms other techniques in
terms of both accuracy and the maximum achievable compression rate. The results demonstrate the effectiveness of
the proposed approach on both the convolutional and fully connected layers.
Places2 Dataset. To evaluate our approach on a large dataset, we ran additional experiments on the Places2 dataset
(using a centered crop). Here we used the GoogLeNet architecture with batch normalization. Due to limited computational resources, we ran a single experiment where we compressed all but the first layer to achieve a compression
rate of about 0.2. At this compression level, training for none of the competitor methods succeeded, whereas, our
approach gave a top-1 error of 36.4%. Note that the top-1 error of the original GoogLeNet on this dataset is 32.3%.
This demonstrates that our approach manages to generate smaller networks that perform well even on large datasets.
Again here, as in all the above cases, model storage sizes can be further reduced by taking this reduced model and using
certain post-processing operations as detailed in Section 4, which is outside the scope of this evaluation.
Parameter Sensitivity. In Appendix C, we present experiments that highlight the role of parameters k and ` in our
proposed approach. In general, we observe that the accuracy of the compressed models improve as we increase k or `
(this happens because we are increasing the effective size of the constructed sketches). Also, due to the averaging effect,
increasing ` decreases the variance of top-1 error with respect to the randomization that arises from the use of random
matrices.
Computational Efficiency. While our primary focus is on network approximation (i.e., designing networks with a
smaller set of parameters), an added bonus is that the networks generated through our tensor sketching approach are
also computationally more efficient. For example, at the compression rate of 0.15 the wall-clock testing time, of our
reduced NinN is on average between 1.6-2x smaller compared to the original network across all the tested datasets.
Since the sketch tensors in our construction are dense, further efficiency gains are possible by better exploiting the
dense matrix capabilities of modern GPUs.
References
[1] Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Advances in neural information processing
systems, pages 2654–2662, 2014.
[2] Wenlin Chen, James Wilson, Stephen Tyree, Kilian Weinberger, and Yixin Chen. Compressing neural networks
with the hashing trick. In ICML, 2015.
[3] Wenlin Chen, James Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. Compressing convolutional
neural networks in the frequency domain. In KDD, 2016.
[4] Wenlin Chen, James Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. Freshnets. http://www.
cse.wustl.edu/~ychen/FreshNets, 2016.
[5] Yu Cheng, Felix X Yu, Rogerio S Feris, Sanjiv Kumar, Alok Choudhary, and Shi-Fu Chang. An exploration of
parameter redundancy in deep networks with circulant projections. In ICCV, pages 2857–2865, 2015.
[6] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan
Shelhamer. cudnn: Efficient primitives for deep learning. ArXiv, 2014.
[7] Anna Choromanska, Krzysztof Choromanski, Mariusz Bojarski, Tony Jebara, Sanjiv Kumar, and Yann LeCun.
Binary embeddings with structured hashed projections. In ICML, 2016.
[8] MD Collins and P Kohli. Memory-bounded deep convolutional neural networks. In ICASSP, 2013.
[9] Matthieu Courbariaux, Jean-Pierre David, and Yoshua Bengio. Low precision storage for deep learning. In ICLR,
2015.
[10] Amit Daniely, Nevena Lazic, Yoram Singer, and Kunal Talwar. Sketching and neural networks. ArXiv, 2016.
[11] Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In
NIPS, pages 2148–2156, 2013.
[12] Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within
convolutional networks for efficient evaluation. In NIPS, 2014.
[13] Michael Figurnov, Dmitry Vetrov, and Pushmeet Kohli. Perforatedcnns: Acceleration through elimination of
redundant convolutions. In NIPS, 2016.
[14] Timur Garipov, Dmitry Podoprikhin, Alexander Novikov, and Dmitry Vetrov. Ultimate tensorization: compressing
convolutional and fc layers alike. ArXiv, 2016.
[15] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using
vector quantization. ArXiv, 2014.
[16] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in preparation for MIT Press, 2016.
[17] Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient dnns. In NIPS, 2016.
[18] Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited
numerical precision. In ICML, 2015.
[19] Song Han, Huizi Mao, and William J Dally. A deep neural network compression pipeline: Pruning, quantization,
huffman encoding. In ICLR, 2016.
[20] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural
network. In NIPS, 2015.
[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In
CVPR, 2016.
[22] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. ArXiv, 2015.
[23] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks.
In Advances in neural information processing systems, pages 4107–4115, 2016.
[24] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks:
Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016.
[25] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal
covariate shift. In ICML, pages 448–456, 2015.
[26] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low
rank expansions. ArXiv, 2014.
[27] William B Johnson and Joram Lindenstrauss. Extensions of lipschitz mappings into a hilbert space. Contemporary
mathematics, 26(189-206):1, 1984.
[28] Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Compression of deep
convolutional neural networks for fast and low power mobile applications. In ICLR, 2016.
[29] Andrew Lavin. Fast algorithms for convolutional neural networks. ArXiv, 2015.
[30] Quoc Le, Tamás Sarlós, and Alex Smola. Fastfood-approximating kernel expansions in loglinear time. In ICML,
2013.
[31] Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempitsky. Speeding-up convolutional neural networks using fine-tuned cp-decomposition. ArXiv, 2014.
[32] Fengfu Li, Bo Zhang, and Bin Liu. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016.
[33] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR, 2014.
[34] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In
CVPR, 2015.
[35] Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast training of convolutional networks through ffts. ArXiv,
2013.
[36] Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P Vetrov. Tensorizing neural networks. In
NIPS, 2015.
[37] Vikas Sindhwani, Tara Sainath, and Sanjiv Kumar. Structured transforms for small-footprint deep learning. In
NIPS, 2015.
[38] Guillaume Soulié, Vincent Gripon, and Maëlys Robert. Compression of deep neural networks on the fly. ArXiv,
2015.
[39] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a
simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1), 2014.
[40] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan,
Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015.
[41] Cheng Tai, Tong Xiao, Xiaogang Wang, et al. Convolutional neural networks with low-rank regularization. In
ICLR, 2016.
[42] Cheng Tai, Tong Xiao, Xiaogang Wang, et al. Lowrankcnn. https://github.com/chengtaipu/
lowrankcnnl, 2016.
[43] Christopher Umans. The minimum equivalent dnf problem and shortest implicants. In Foundations of Computer
Science, 1998. Proceedings. 39th Annual Symposium on, pages 556–563. IEEE, 1998.
[44] Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, and Yann LeCun. Fast
convolutional nets with fbfft: A gpu performance evaluation. ArXiv, 2014.
[45] Yunhe Wang, Chang Xu, Shan You, Dacheng Tao, and Chao Xu. Cnnpack: Packing convolutional neural networks
in the frequency domain. In NIPS, 2016.
[46] David P. Woodruff. Sketching as a tool for numerical linear algebra. FnT-TCS, 2014.
[47] Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, and Jian Cheng. Quantized convolutional neural networks
for mobile devices. ArXiv, 2015.
[48] Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep
fried convnets. In ICCV, 2015.
[49] S. Zagoruyko. Cifar-10 in torch. http://torch.ch/blog/2015/07/30/cifar.html, 2015.
[50] Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. arXiv preprint
arXiv:1612.01064, 2016.
A Proof of Proposition 2.1
Proposition A.1 (Proposition 2.1 Restated). Let W ∈ Rd1 ×d2 . Let U1 ∈ Rk×d1 and U2 ∈ Rk×d2 be two independent
random scaled sign matrices. Let S1 = U1 W (= W ×1 U1 ) and S2 = W U2> (= W ×2 U2 ). Then for any matrix
M ∈ Rd2 ×d3 :
1. E[U1> S1 M ] = W M and E[S2 U2 M ] = W M.
2. E[‖U1⊤ S1 M − W M‖_F^2] ≤ 2 d1 ‖W M‖_F^2 / k, and
   E[‖S2 U2 M − W M‖_F^2] ≤ 2 ‖W‖_F^2 ‖M‖_F^2 / k.
Proof. Part 1 follows by simply using linearity of expectation.
We focus on Part 2, which investigates the variance bounds for U1⊤ S1 M and S2 U2 M. For this, we use some standard ideas from the matrix sketching literature [46].
Consider first E[‖S2 U2 M − W M‖_F^2]. We start by noting,

S2 U2 M − W M = W U2⊤ U2 M − W M = (1/k) W Z⊤ Z M − W M,

where U2 = Z/√k. Let wa , zb , mc denote the a, b, c-th columns of W⊤, Z, and M respectively. We have

‖W Z⊤ Z M‖_F^2 = Σ_{a,c} ( wa⊤ ( Σ_{b=1}^k zb zb⊤ ) mc )^2.

Therefore, we get

‖(1/k) W Z⊤ Z M − W M‖_F^2 = Σ_{a,c} ( (1/k) wa⊤ ( Σ_{b=1}^k zb zb⊤ ) mc − wa⊤ mc )^2
= Σ_{a,c} ( Σ_{b=1}^k (wa⊤ zb zb⊤ mc − wa⊤ mc)/k )^2.    (9)

Let yabc = (wa⊤ zb zb⊤ mc − wa⊤ mc)/k, which can be re-expressed as:

yabc = (1/k) Σ_{r,s : r≠s} War Zrb Zsb Msc ,

where War is the (a, r)th entry in W, Zrb and Zsb are the (r, b)th and (s, b)th entries in Z respectively, and Msc is the (s, c)th entry in M. Using this notation, we can re-express (9) as:

‖(1/k) W Z⊤ Z M − W M‖_F^2 = Σ_{a,c} ( Σ_{b=1}^k yabc )^2 = Σ_{a,c} Σ_{b,b'} yabc yab'c
= (1/k^2) Σ_{a,c} Σ_{b,b'} Σ_{r≠s} Σ_{r'≠s'} War Zrb Zsb Msc War' Zr'b' Zs'b' Ms'c .

Taking expectation,

E[‖S2 U2 M − W M‖_F^2] = (1/k^2) Σ_{a,c,b,b',r,s,r',s' : r≠s, r'≠s'} War War' Msc Ms'c E[Zrb Zsb Zr'b' Zs'b'].

Now, E[Zrb Zsb Zr'b' Zs'b'] is non-zero only if either: 1) r = r', s = s', and b = b', or 2) r = s', s = r', and b = b'. Therefore, we can simplify E[‖S2 U2 M − W M‖_F^2] as,

E[‖S2 U2 M − W M‖_F^2] ≤ (2/k^2) Σ_{a,c} ( Σ_r War^2 ) ( Σ_s Msc^2 ) Σ_b 1
= (2/k) Σ_{a,r} War^2 Σ_{c,s} Msc^2
= (2/k) ‖W‖_F^2 ‖M‖_F^2.    (10)

This proves the claimed bound for E[‖S2 U2 M − W M‖_F^2].
Now we bound E[‖U1⊤ S1 M − W M‖_F^2]. We start by re-expressing the result in (10). Start by noting that S2 = W U2⊤. Therefore, from (10),

E[‖W U2⊤ U2 M − W M‖_F^2] ≤ (2/k) ‖W‖_F^2 ‖M‖_F^2.

Now by setting W = Id1 in this result and by noting ‖Id1‖_F^2 = d1, we get that for any matrix A ∈ R^{d1×d3},

E[‖U1⊤ U1 A − A‖_F^2] ≤ (2 d1/k) ‖A‖_F^2,    (11)

where the expectation is now over U1 ∈ R^{k×d1}.
Since U1⊤ S1 M = U1⊤ U1 W M, we have U1⊤ S1 M − W M = U1⊤ U1 W M − W M. The idea is to invoke (11) with A = W M. We get,

E[‖U1⊤ U1 W M − W M‖_F^2] ≤ (2 d1/k) ‖W M‖_F^2.

This completes the proof of this theorem.
B Proof of Theorem 3.1
Theorem B.1 (Theorem 3.1 Restated). Let K ∈ Rd2 ×h×w×d1 be a kernel tensor and K = mat4 (K). Let U1 ∈
Rk×d1 and U2 ∈ Rkhw×d2 hw be two independent random scaled sign matrices. Let S1 and S2 be tensors such that
mat4 (S1 ) = K ×2 U1 and mat4 (S2 ) = K ×1 U2 . Then for any input matrix Iin ∈ Rh2 w2 ×d2 hw (formed from an input
tensor Iin ∈ Rh1 ×w1 ×d2 ):
1. Unbiased Estimation: E[Iin mat4 (S1 )U1 ] = Iin K, and E[Iin U2> mat4 (S2 )] = Iin K.
2. Variance Bound:
   E[‖Iin mat4(S1) U1 − Iin K‖_F^2] ≤ 2 d1 ‖Iin K‖_F^2 / k, and
   E[‖Iin U2⊤ mat4(S2) − Iin K‖_F^2] ≤ 2 ‖Iin‖_F^2 ‖K‖_F^2 / (khw).
Proof. First note that, by definition,
Iin mat4 (S1 )U1 = Iin KU1> U1 .
Using an analysis similar to Proposition 2.1 gives,
E[Iin mat4 (S1 )U1 ] = Iin K, and
E[‖Iin mat4(S1) U1 − Iin K‖_F^2] ≤ 2 d1 ‖Iin K‖_F^2 / k.
Similarly, by definition of mat4 (S2 ), we have:
Iin U2> mat4 (S2 ) = Iin U2> U2 K.
Again relying on an analysis similar to Proposition 2.1 gives,
E[Iin U2⊤ mat4(S2)] = Iin K, and
E[‖Iin U2⊤ mat4(S2) − Iin K‖_F^2] ≤ 2 ‖Iin‖_F^2 ‖K‖_F^2 / (khw).
This completes the proof of this theorem.
C Additional Experimental Results
In this section, we present some additional experimental results that investigate the role of various parameters in our
proposed approach. We start by describing the TestNet architecture that we use for the following experiments.
Images        3×32×32                       3×96×96                       3×256×256
C ONV1        d2 = 3, d1 = 30, f = 5×5      d2 = 3, d1 = 20, f = 7×7      d2 = 3, d1 = 20, f = 7×7
M AX P OOL1   f = 2×2                       f = 5×5                       f = 11×11
C ONV2        d2 = 30, d1 = 30, f = 5×5     d2 = 20, d1 = 40, f = 5×5     d2 = 20, d1 = 30, f = 9×9
M AX P OOL2   f = 4×4                       f = 5×5                       f = 7×7
F C1          d2 = 480, d1 = 250            d2 = 1960, d1 = 500           d2 = 3000, d1 = 500
F C2          d2 = 250, d1 = 10             d2 = 500, d1 = 10             d2 = 500, d1 = 10
Figure 6: TestNet Architecture.
TestNet Architecture. TestNet is a simple shallow network with only 2 convolution layers and 2 fully connected
layers. This allows us to easily test the efficacy of our approximation technique for each layer individually. Figure 6
shows parameters of TestNet for different image sizes.
A ReLU layer is used after each fully connected and convolutional layer. For example, consider images of size
3×32×32. The first convolutional layer takes 3 input feature maps (d2 = 3) and produces 30 output feature maps
(d1 = 30) using filters of size 5 by 5 (f = 5×5), and we represent it as a 4-dimensional tensor in R3×5×5×30 . Note that
in TestNet the fully connected layers contain many more network parameters than the convolutional layers.
Table 2 shows the original top-1 error (E RRT OP -1) and the number of parameters for all datasets. We used different
number of parameters in F C1 for different image sizes to ensure that the corresponding trained networks converge.
CIFAR10       STL10          SVHN         ImageNet10
25.7 (147K)   40.5 (1008K)   8.2 (147K)   27.0 (1561K)
Table 2: Top-1 error of the original TestNet on different datasets. In bracket, we show the number of parameters in each
of these networks.
Parameter Sensitivity. For understanding the role of parameters k and ` in our tensor sketching approach, we train
a number of networks derived from the TestNet architecture for several combinations of these parameters. For the
convolutional layer, we construct different networks each of which is obtained by replacing the C ONV2 layer of TestNet
with a S K-C ONV layer for different values of k and `. We vary ` ∈ {1, 2, 3} and k ∈ {2, 5, 10}, independently, giving
rise to 9 different networks. Similarly, we also construct new networks by replacing the F C1 layer of TestNet with a
S K-F C layer with ` ∈ {1, 2, 3} and k ∈ {5, 10, 25} for smaller images (like CIFAR10) and k ∈ {15, 25, 50} for larger
images (like STL10). Figure 7 shows the results for the CIFAR10 and STL10 datasets (results on other two datasets are
similar and omitted here). For each compressed model, we show its top-1 error (plots in the top row). The plots in the
bottom row present the corresponding compression rate for each network. Note that if the parameters k and ` are set too
high then the compression rate can be > 1. In this case, we have an expansion over the original network. If a point is
missing from a line then the corresponding network failed in the training. As an example, consider the network obtained
by replacing F C1 layer with a S K-F C layer using k = 5 and ` = 2. From the plot in Figure 7, the model obtained by
training this network on CIFAR10 has E RRT OP -1 ≈ 30%. We also see that this network has a compression rate of
≈ 0.5, i.e., the size of the original TestNet has been reduced by a factor of 2. Recall that by design TestNet has much
more parameters in the fully connected layers than the convolutional layers, hence compressing the F C1 layer leads to
smaller compression rates than compressing the C ONV2 layer (as observed in Figure 7).
First, from Figure 7, we observe that the accuracy of the compressed models improve as we increase k or `. This is
expected because, as we discuss in Section 3, by increasing k or ` we are increasing the effective size of the constructed
sketches. For example, on the STL10 dataset with ` = 1 and the fully connected layer compression, as we increase
k from 15 to 50, the top-1 error goes down from around 62% to 45% (which is comparable to the 40.5% top-1 error
of the original (uncompressed) model from Table 2). However, with increasing k or ` the compression rate goes up
(implying lower overall compression).
Second, due to the averaging effect, increasing ` increases the stability in the sketching process. For example,
consider CIFAR10 where F C1 layer is replaced with a S K-F C layer using k = 10 and ` ∈ {1, 2, 3}. We trained each
resulting network architecture 10 different times, each time initializing the S K-F C layer with a different set of random
matrices U11 , . . . , U1` , U21 , . . . , U2` and measured the variance in the top-1 error across different runs. Not surprisingly,
increasing ` decreases the variance of top-1 error with respect to the randomization that arises from the use of random
matrices. For example, with ` = 1 we get an average (over these runs) top-1 error of 30.1 with variance of 0.44, for
` = 2 we get an average top-1 error of 29.1 with variance of 0.29, and for ` = 3 we get an average top-1 error of 28.6
with variance of 0.23.
Figure 7: The plots on the left show the top-1 error and the compression rate for the CIFAR10 dataset obtained by
varying k and ` in our tensor sketching approach on TestNet. The plots on the right show the same for the STL10
dataset.
arXiv:cs/9905016v1 [cs.CE] 27 May 1999
PROGRAMS WITH STRINGENT
PERFORMANCE OBJECTIVES WILL OFTEN
EXHIBIT CHAOTIC BEHAVIOR
M. CHAVES
Escuela de Fisica, Universidad de Costa Rica
San Jose, Costa Rica
[email protected]
March 15, 1999
Abstract
Software for the resolution of certain kinds of problems, those that rate high
in the Stringent Performance Objectives adjustment factor (IFPUG scheme),
can be described using a combination of game theory and autonomous systems.
From this description it can be shown that some of those problems exhibit
chaotic behavior, an important fact in understanding the functioning of the
related software. As a relatively simple example, it is shown that chess exhibits
chaotic behavior in its configuration space. This implies that static evaluators
in chess programs have intrinsic limitations.
1 Introduction
IBM's Deep Blue, a powerful chess playing machine consisting of two parallel-process
tandem supercomputers programmed by a team of experts led by team manager
C. Tan [Hsu et al., 1990; Horgan, 1996; Hsu, 1990; Slate, 1984], played the world
chess champion G. Kasparov several games in 1996 and 1997 with fairly even results.
Actually, programmer Hsu's estimate back in 1990 of the future machine's playing
strength was 4000 ELO points (chess' rating system), far greater than Kasparov's
∼2800 present rating. In three minutes, which is the game's average pondering time,
the machine could calculate 20 billion moves, enough for a 24-ply search and an
up to 60-ply search in critical tactical lines. Since grandmasters can calculate just a
few moves ahead, it seems very peculiar that a human could hold his own in the face
of such an overwhelming opposition.
In this paper we are interested in a special kind of problem and the software
written for it. It is the kind of problem whose software would score high in the
Stringent performance objectives [Abran & Robillard, 1996] adjustment factor of the
International Function Point User’s Group (IFPUG). Examples are, for instance, the
control of air-traffic at a busy airport, the scheduling of trains in areas with heavy
traffic, and field military operations. One way of approaching this kind of problem
is to treat it within the context of game theory, as a 2-player game. The first player
would be the comptroller, central authority or headquarters, and the second is the
system itself, which acts and reacts out of its own nature. The first player seeks to
maintain control of a complicated system by choosing his moves, that is, by affecting
the system in the ways available to him. He would like the system to always remain
in states such that certain state variables (they could be safety, efficiency, lethality
or others) are kept extremized. The performance objectives would be to extremize
these state variables.
The nature of this kind of problem is such that it is necessary to see ahead what
is going to happen. At least in theory, the first player must have arrived at his move
only after having taken into consideration all of the possible responses of the second
player. This is a common situation in game theory, and is another reason why the
language of game theory is very well-suited to discuss both this kind of problem
and the software developed to help deal with it. The typical program contains two
fundamental sectors:
1. a ply calculator, that is able to look ahead at all possible continuations of the
tree a certain number of plies ahead,
2. a static evaluator, that gives an evaluation of the resulting state of the problem
at the end of each branch of plies.
Although there are many different ways of programming the ply calculator or the
static evaluator, their complementary, basic functions are clear: the first is a brute
force calculator of all possible states to come, and the second is an evaluator of the
final resulting state of the system, intrinsically and on the basis of the state itself,
without any resort to further calculation.
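As a minimal illustration of these two sectors (a sketch only; the game interface names are hypothetical and not from this paper), a fixed-depth ply calculator that falls back on a heuristic static evaluator at the leaves can be written as follows:

```python
# Minimal sketch: a ply calculator (fixed-depth minimax) driven by a
# heuristic static evaluator.  The `game` object with legal_moves, apply,
# is_terminal and evaluate is a hypothetical interface.
def minimax(state, depth, maximizing, game):
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)   # static evaluator: judge the state itself
    children = (minimax(game.apply(state, m), depth - 1, not maximizing, game)
                for m in game.legal_moves(state))
    return max(children) if maximizing else min(children)
```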
The different states of the problem can be seen as a function of time. If one
is able to express each state using a mathematical description, the problem of the
time-development of the state while maintaining certain state variables extremized
can be described as an autonomous system. If the equations describing the time-development of the system are nonlinear, it is very likely that the problem is going
to exhibit chaotic [Alligood et al., 1997] behavior. Therefore, the software for these
problems has an intrinsic limitation on its accuracy, even if it still may be extremely
useful.
As an example we will work out the case of chess, a useful one because, while
nontrivial, it is not nearly as complex as some of the other problems are. Chess’
software scores high in the Stringent performance objectives adjustment factor. We
will prove that chess exhibits chaotic behavior in its configuration space and that this
implies its static evaluators possess intrinsic limitations: there are always going to be
states or positions that they will not be able to evaluate correctly. It is likely that this
is precisely the explanation for the peculiar situation mentioned in the first paragraph:
that a human being can hold his own at a chess game with a supercomputer. The ply
calculator part of the program of the supercomputer would be tremendously effective,
but the static evaluator would not be so faultless.
2 An abstract mathematical representation of chess
We have to describe each possible state (or position) in chess. To describe a particular
state we shall use a 64 dimensional vector space, so that to each square of the board
we associate a coordinate that takes a different value for each piece occupying it. A
possible convention is the following:
• A value of zero for the coordinate of the dimension corresponding to a square
means that there is no piece there.
• For the White pieces the convention would be: a value of 1 for the coordinate
means the piece is a pawn, of 2 means it is a pawn without the right to the en
passant move, of 3 that it is a knight, of 4 a bishop, of 5 a rook, of 6 a queen,
of 7 a king, and of 8 a king without the right to castle.
• The values for the Black pieces would be the same but negative.
Let us represent the 64-component vector by the symbol x. A vector filled with the
appropriate numbers can then be used to represent a particular state of the game. We
shall call the 64-dimensional space consisting of all the coordinates the configuration
space C of the game. The succeeding moves of a pure strategy can be plotted in this
space, resulting in a sequence of points forming a path.
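For concreteness, the convention above can be transcribed directly into code; this is a sketch only, and the square-numbering order is an arbitrary choice not fixed by the text.

```python
# Encode a chess state as the 64-component vector x described above:
# 0 = empty; 1..8 = White pawn, pawn without en-passant right, knight,
# bishop, rook, queen, king, king without castling right; Black is negative.
PIECE_CODE = {"pawn": 1, "pawn_no_ep": 2, "knight": 3, "bishop": 4,
              "rook": 5, "queen": 6, "king": 7, "king_no_castle": 8}

def encode_position(pieces):
    """pieces: dict mapping a square index 0..63 to (piece_name, is_white)."""
    x = [0] * 64
    for square, (name, is_white) in pieces.items():
        code = PIECE_CODE[name]
        x[square] = code if is_white else -code
    return x

# e.g. white king on square 4, black king on square 60, white pawn on square 12
x0 = encode_position({4: ("king", True), 60: ("king", False), 12: ("pawn", True)})
```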
Now we construct a function f : C → C that gives, for any arbitrary initial state
of a chess game, the control strategy to be followed by both players. The existence
of this function is assured by the Zermelo-von Neumann theorem [von Neumann
& Morgenstern, 1944] that asserts that a finite 2-person zero-sum game of perfect
information is strictly determined, or, in other words, that a pure strategy exists for
it. For a given initial chess state this means that either
• White has a pure strategy that wins,
• Black has a pure strategy that wins,
• both are in possession of pure strategies that lead to a forced draw.
Consider a certain given initial state of the game where White has a pure strategy
leading to a win. (The two other cases, where Black has the win or both have drawing
strategies can be dealt with similarly and we will not treat them explicitly.) Let the
initial chess state be given by the 64-component vector x0 , where we are assuming
that White is winning. The states following the initial one will be denoted by xn ,
where the index is the number of plies that have been played from the initial position.
Thus x1 is the position resulting from White’s first move, x2 is the position resulting
from Black’s first move, x3 is the position resulting from White’s second move, and
so on. Since White has a winning pure strategy, it is obvious that, given a certain
state xn , n even, there must exist a vector function f so that, if xn+1 is the next state
resulting from White’s winning strategy, then f (xn ) = xn+1 . On the other hand, if
n is odd, so that it is Black’s turn, then we define f to be that strategy for Black
that makes the game last the longest before the checkmate. Again, the pure strategy
that is available to Black according to the Zermelo-von Neumann theorem allows us
to define a function f (xn ) = xn+1 . The function f is thus now defined for states with
n both even and odd.
The function f allows us to define another function g : C → C, the control strategy
vector function [Abramson, 1989], defined by g(xn ) = f (xn ) − xn . With it we can
express the numerical difference between the vectors corresponding to two consecutive
moves as follows:
g(xn) = xn+1 − xn.    (1)
Given any initial state x0 , this function gives us an explicit control strategy for the
game from that point on.
3 Chaos in configuration space
A set of N simultaneous differential equations,

g(x) = dx/dt,    (2)
where t is the (independent) time variable, x ∈ RN and the g are known g : RN → RN
vector functions, is called an autonomous system [Alligood et al., 1997]. The time
t takes values in the interval 0 ≤ t ≤ T . Let us discretize this variable, as is often
done for computational purposes [Parker & Chua, 1989]. We assume it takes only
discrete values t = 0, ∆t, 2∆t, . . . , T . After an appropriate scaling of the system one
can take the time steps to be precisely ∆t = 1. Let the initial condition of the system
be x(0) ≡ x0 , and let us define x(1) ≡ x1 , x(2) ≡ x2 , and so on. By taking N = 64
one can then rewrite (2) in a form that is identical to (1).
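A toy illustration of this discretisation and of the sensitivity it can produce follows; the particular g below is the logistic map, chosen only because it is a standard chaotic example, not because it models chess.

```python
# Discretised autonomous system x_{n+1} = x_n + g(x_n) with g(x) = 3x - 4x^2,
# i.e. x_{n+1} = 4 x_n (1 - x_n): the logistic map at r = 4, a standard
# chaotic recurrence.  Two initial conditions differing by 1e-9 separate
# to order one within a few dozen steps.
def step(x):
    return 4.0 * x * (1.0 - x)

def trajectory(x0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(step(xs[-1]))
    return xs

a = trajectory(0.123)
b = trajectory(0.123 + 1e-9)
for n in range(0, 61, 10):
    print(n, abs(a[n] - b[n]))
```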
Nonlinear autonomous systems in several dimensions are always chaotic, as experience shows. Is the control strategy function nonlinear? A moment’s consideration
of the rules of the game tells us that the control function has to be nonlinear and that,
therefore, the system described by (1) has to be chaotic.
For some kinds of chess moves the difference xn+1 − xn has a relatively large value
that would correspond to a jerky motion of the system, and the question can be raised
whether such a motion could really occur in an autonomous system. But the important
thing to realize is that if even typical autonomous nonlinear systems (that possess
a smooth function g(x)) do show chaotic behavior, then certainly the system that
represents chess (with a jerky control strategy function) should also show it.
The chaotic nature of the paths in configuration space has several immediate
implications, but certainly one of the most interesting is the following:
Proposition 1 It is not possible to program a static evaluator for chess that works
satisfactorily on all positions.
Proof. The point of the proposition is that a program with a good static evaluator
is always going to have shortcomings: it will always evaluate incorrectly at least some
positions. If one programs another static evaluator that evaluates correctly these
positions, one will notice soon that there are others that the new program still cannot
evaluate correctly. In the last analysis, the perfect evaluator for chess would have to be
an extremely long program, and for more complex systems of this kind, an infinite
one. To see it is not possible to program a static evaluator for chess that works
correctly on all positions, notice that it would have to evaluate on the basis of the
state itself without recourse to the tree. The evaluation of the state has to be done
using heuristics, that is, using rules that say how good a state is on the basis of the
positions of the pieces and not calculating the tree. But this is not possible if chess
is chaotic because then we know the smallest difference between two states leads to
completely diverging paths in configuration space, that is, to wholly differing states a
few plies later. Therefore the heuristic rules of the static evaluator have to take into
account the smallest differences between states, and the evaluators have to be long
or infinite routines. Static evaluators, on the other hand, should be short programs,
since they have to evaluate the states at the end of each branch of the tree.
Another interesting point is that chaos exacerbates the horizon effect [Berliner,
1973]. This is the problem that occurs in game programming when the computer quits
the search in the middle of a critical tactical situation and thus it is likely that the
heuristics return an incorrect evaluation [Shannon, 1950]. In a sense, what the proposition is saying is that practically all states are critical, and that the horizon effect is
happening all the time and not only for some supposedly very special positions.
4 Comments
We have seen that it is likely that the pure strategy paths of chess in configuration
space follow chaotic paths. This implies that practical static evaluators must always
evaluate incorrectly some of the states. As a result the horizon problem is exacerbated.
The reason why a machine such as Deep Blue is not far, far stronger than a human
has to do again with the problem of programming a static evaluator. Even though the
machine searches many more plies than the human does, at the end of each branch it
has to use a static evaluator that is bound to incorrectly evaluate some states. This
adds an element of chance to the calculation. The fact that Deep Blue at present has
a playing strength similar to the best human players tells us that the human mind
has a far better static evaluator than Deep Blue (assuming one can apply these terms
to the human mind). If chess were not chaotic the overwhelming advantage in ply
calculation that the machine has would allow it to play much better than any human
could.
In practice, of course, as long as computers keep getting faster and having more
memory available, it is always possible to keep improving the static evaluators. If
computers can be programmed to learn from their experience they could improve
their static evaluators themselves. This was the idea of the program for another
game, link-five [Zhou, 1993].
Now, in a general vein, it should be clear why programs that would score high
in the IFPUG's Stringent performance objectives adjustment factor would tend to
exhibit chaotic behavior in their configuration spaces. The software of this type of
program has to foresee what is going to happen while extremizing certain state variables, as we mentioned before. This kind of problem is equivalent to an autonomous
system of differential equations that exhibits chaos, so that the control strategy vector
function g of the system is extremely sensitive to the smallest differences in a state
x. Thus any static evaluator that one programs (that has to be heuristic in nature)
is going to be severely limited.
Nevertheless, the consideration we made two paragraphs ago for chess is also true
for this kind of problem in general: as long as computers get faster and have more
memory the programs can be prepared to deal with more and more situations. Rules
of thumb that humans have learned from experience can be added to the evaluators.
Alternatively, the programs can be written so that they learn from their experience.
But they are always going to be very long programs.
References
Abramson, B. [1989] “Control strategies for two-player games”, ACM Comp. Surveys
21, 137-161.
Abran, A. & Robillard, P. N. [1996] IEEE Trans. Software Eng. 22, 895-909.
Alligood, K. T., Sauer, T. D. & Yorke J. A. [1997] Chaos: An Introduction to Dynamical Systems (New York, Springer-Verlag).
Berliner, H. J. [1973] “Some necessary condition for a master chess program”, Proceedings of the 3rd International Joint Conference on Artificial Intelligence, Stanford,
CA (Los Altos, Morgan Kaufmann), 77-85.
Horgan, J. [1996] “Plotting the next move”, Sci. Am. 274 no. 5, 10.
Hsu, F. [1990] “Large scales parallelization of alpha-beta search: an algorithmic and
architectural study with computer chess”, Ph.D. thesis. Carnegie-Mellon University
Computer Science Department, CMU-CS-90-108.
Hsu, F., Anantharaman, T., Campbell M., & Nowatzyk, A. [1990] “A Grandmaster
Chess Machine”, Sci. Am. 263 no. 4, 44-50.
von Neumann, J. & Morgenstern O. [1944] Theory of Games and Economic Behavior
(Princeton, Princeton University Press).
Parker, T.S. & Chua L.O. [1989] Practical Numerical Algorithms for Chaotic Systems
(New York, Springer-Verlag).
Shannon, C.E. [1950] “Programming a computer for playing chess”, Philos. Mag. 41,
256-275.
Slate, D. J. & Atkin, L. R. [1984] “Chess 4.5-The Northwestern University chess
program”, in Chess Skill in Men and Machine, ed. Frey, P. W. (New York, Springer
Verlag).
Zhou, Q. [1993] “A distributive model of playing the game link-five and its computer
implementation”, IEEE Trans. Syst., Man, Cybern., SMC-23, 897-900.
| 5 |
arXiv:1706.06180v2 [math.AC] 19 Feb 2018
NEW ALGEBRAIC PROPERTIES OF QUADRATIC QUOTIENTS OF
THE REES ALGEBRA
MARCO D’ANNA AND FRANCESCO STRAZZANTI
Abstract. We study some properties of a family of rings R(I)a,b that are obtained as
quotients of the Rees algebra associated with a ring R and an ideal I. In particular, we
give a complete description of the spectrum of every member of the family and describe the
localizations at a prime ideal. Consequently, we are able to characterize the Cohen-Macaulay
and Gorenstein properties, generalizing known results stated in the local case. Moreover,
we study when R(I)a,b is an integral domain, reduced, quasi-Gorenstein, or satisfies Serre’s
conditions.
Introduction
Let R be a commutative ring with unity, let I ≠ 0 be a proper ideal of R and let a, b ∈ R.
In [4] the authors introduce and study the family of quotient rings
R(I)a,b = R[It]/(I^2(t^2 + at + b)),

where R[It] = ⊕_{n≥0} I^n t^n is the Rees algebra associated with the ring R with respect to I
and (I 2 (t2 + at + b)) is the contraction to R[It] of the ideal generated by t2 + at + b in R[t].
This family provides a unified approach to Nagata’s idealization (with respect to an ideal,
see [12, pag. 2]) and to amalgamated duplication (see [7] and [8]); they can be both obtained
as particular members of the family, more precisely they are isomorphic to R(I)0,0 and
R(I)0,−1 respectively. This fact explains why these constructions produce rings with many
common properties; as a matter of fact, it is shown, in [4], that many properties of the rings
in this family (like, e.g., Krull dimension, noetherianity and local Cohen-Macaulayness) do
not depend on the defining polynomial. One interesting fact about this family is that, if R
is an integral domain, we can always find integral domains among its members, whereas the
idealization is never reduced and the amalgamated duplication is never an integral domain.
Hence, this construction revealed to be useful to construct R-algebras (and, in particular,
integral domains) satisfying pre-assigned properties. For instance, in [4] it is given a formula
for the Hilbert function of R(I)a,b in the local case, that depends only on R and I, and in
[13] this formula is used to construct families of one-dimensional Gorenstein local rings with
decreasing Hilbert function, solving a problem formulated by M.E. Rossi.
Date: February 20, 2018.
2010 Mathematics Subject Classification. 13B30, 13H10.
Key words and phrases. Idealization; Amalgamated duplication; Quasi-Gorenstein rings; localizations;
Serre’s conditions.
The second author was partially supported by MTM2016-75027-P, MTM2013-46231-P (Ministerio de
Economı́a y Competitividad) and FEDER.
In a subsequent paper (cf. [5]), the same authors deepen the study of this family of rings
in the local case, characterizing when its members are Gorenstein, complete intersection, and
almost Gorenstein, proving that these properties do not depend on the particular member
chosen in the family, but only on R and I. We also notice that other properties of these
quotients have been studied in [10].
In this paper we are interested in understanding the prime spectrum of the rings in the
family, in relation to the prime spectrum of the original ring, and in studying the behaviour
of the localizations. More precisely, we explicitly describe the primes of R(I)a,b lying over
a fixed prime p of R, showing that two cases can arise, depending on the reducibility of
t2 + at + b in Q(R/p)[t] and (if the polynomial is reducible) on its roots. In case there is
only one prime q lying over p, then we obtain (R(I)a,b)q ≅ Rp(Ip)a,b, while, if there exist two
primes p1 and p2 lying over p, then (R(I)a,b)pi ≅ Rp for i = 1, 2, provided that a technical
hypothesis holds (see Proposition 1.4).
These facts allow us to extend in a natural way some local results contained in the papers
cited above and to investigate other relevant properties that depend on localizations, like
Serre’s conditions. We also notice that the study of localizations has an intrinsic interest
and can be applied in many situations like, e.g., the geometric context: if we start with a
finitely generated k-algebra R, all the rings in the family are finitely generated k-algebras
and we can determine whether a prime ramifies or not.
Finally, under the hypotheses that t2 + at + b is reducible in R[t] and I is regular, we prove
that R(I)a,b is quasi-Gorenstein if and only if R satisfies Serre's condition (S2) and I is a
canonical ideal of R. The concept of quasi-Gorenstein rings arises from the theory of linkage
and it is a generalization of the notion of Gorenstein rings in the non-Cohen-Macaulay case.
It is already known when idealization and amalgamated duplication are quasi-Gorenstein
(see [2] and [3]) and we extend these results to the more general case R(I)a,b .
The structure of the paper is the following. In the first section we give a complete description of the prime spectrum of R(I)a,b (see Proposition 1.2) and describe the localizations of
R(I)a,b at prime ideals (see Proposition 1.4); as a corollary, we characterize when R(I)a,b is
an integral domain and when it is a Cohen-Macaulay or a Gorenstein ring. In the second
section we study Serre’s conditions for R(I)a,b (see Propositions 2.1 and 2.2).
Finally, in the last section we consider the particular case t2 + at + b reducible in R[t].
More precisely, we study when R(I)a,b is isomorphic either to the idealization or to the
amalgamated duplication and we characterize the properties of being reduced and quasi-Gorenstein (see Proposition 3.3 and Theorem 3.12).
1. Spectra and localizations
Throughout this paper R is a commutative ring with identity and I ≠ (0) a proper ideal
of R; with Q(R) we will denote its total ring of fractions. In this section we study the prime
spectrum of the rings R(I)a,b . To this aim we recall that the extensions R ⊂ R(I)a,b ⊂
R[t]/(t2 + at + b) are integral and, given p ∈ Spec R, the prime ideals of R(I)a,b lying over
p depend on the reducibility of t2 + at + b on Q(R/p), the field of fractions of R/p, see [4,
Proposition 1.9]. More precisely, if t2 + at + b is irreducible in Q(R/p)[t], there will be only
one prime of R[t]/(t2 + at + b) lying over p and, thus, the same holds for R(I)a,b ; on the
other hand, when t2 + at + b is reducible, there are two primes of R[t]/(t2 + at + b) lying
over p and they can either contract to the same prime in R(I)a,b or to two different primes,
depending on I, p, and on the factorization of the polynomial.
Assuming that t2 + at + b is reducible in Q(R/p)[t] and that ᾱ/γ̄ and β̄/δ̄ are its roots, we
can always choose a representation with γ̄ = δ̄. In this case, it is easy to see that in Q(R/p)
we have γ̄ā = −ᾱ − β̄ and γ̄ 2 b̄ = ᾱβ̄ and, clearly, the same equalities hold in R/p. We start
with a preparatory lemma.
Lemma 1.1. Let p be a prime ideal of R and suppose that t2 + at + b = (t − ᾱ/γ̄)(t − β̄/γ̄)
in Q(R/p)[t]. Let α, β, γ ∈ R such that their classes modulo p are, respectively, ᾱ, β̄ and γ̄.
Then, the two sets
p1 := {r + it | r ∈ R, i ∈ I, γr + αi ∈ p}
p2 := {r + it | r ∈ R, i ∈ I, γr + βi ∈ p}
do not depend on the choice of α, β, and γ and are prime ideals of R(I)a,b . Moreover,
p1 = p2 if and only if (α − β)I ⊆ p.
Proof. The fact that the sets defined above do not depend on the choice of α, β and γ is an
easy verification. In order to prove that they are prime ideals, it is enough to consider p1 ,
since the proof for p2 is analogous.
By definition γ ∉ p; then γr + αi ∈ p if and only if γ(γr + αi) ∈ p. Let r + it ∈ p1 and
s + jt ∈ R(I)a,b . We have (r + it)(s + jt) = rs − ijb + (rj + si − aij)t and, since γa + α + β,
γ 2 b − αβ, γ 2 r + γαi ∈ p, we get, modulo p,
γ(γ(rs − ijb) + α(rj + si − aij)) =
= γ 2 rs − ijγ 2 b + γαrj + γαsi − γαaij ≡
≡ γ 2 rs − ijαβ + γαrj − γ 2 rs + α2 ij + αβij =
= γαrj + α2 ij = αj(γr + αi) ≡ 0
and this means that (r + it)(s + jt) is in p1 , i.e. p1 is an ideal.
Now we have to prove that p1 is prime. If we suppose that (r + it)(s + jt) ∈ p1 , then
γrs − γijb + αrj + αsi − αija ∈ p and, multiplying by γ, we get, modulo p,
γ 2 rs − ijαβ + γαrj + γαsi + α2 ij + ijαβ ≡
≡ γs(γr + αi) + αj(γr + αi) ≡ (γs + αj)(γr + αi) ∈ p.
Since p is prime, at least one between γs + αj and γr + αi belongs to p, i.e. at least one
between r + it and s + jt is in p1 .
As for the last statement, suppose first that (α − β)I ⊆ p. If r + it ∈ p1 , then γr + βi =
γr + αi − (α − β)i ∈ p; therefore p1 ⊆ p2 and the other inclusion is analogous. Conversely,
we first notice that for every i ∈ I, −αi + γit ∈ p1 . Since p1 = p2 , we get −γαi + βiγ ∈ p
and, therefore, −γ(α − β)i ∈ p. Hence, (α − β)i ∈ p, since γ ∉ p and p is a prime ideal.
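As a quick sanity check of the product rule (r + it)(s + jt) = rs − ijb + (rj + si − aij)t used above and of the set p1 of Lemma 1.1, the following Python sketch (with the hypothetical choice R = Z, I = (2), p = (7), and t^2 + at + b = (t − 1)(t − 2), so α = 1, β = 2, γ = 1) verifies numerically that p1 absorbs products.

```python
import random

# alpha, beta, gamma as in Lemma 1.1 for t^2 - 3t + 2 = (t - 1)(t - 2);
# the ring is R = Z, the ideal is I = (2), the prime is p = (7).
alpha, beta, gamma, a, b = 1, 2, 1, -3, 2

def mult(x, y):
    # product rule (r + it)(s + jt) = rs - ijb + (rj + si - aij)t
    (r, i), (s, j) = x, y
    return (r * s - i * j * b, r * j + s * i - a * i * j)

def in_p1(x):
    r, i = x
    return i % 2 == 0 and (gamma * r + alpha * i) % 7 == 0

random.seed(1)
for _ in range(1000):
    i = 2 * random.randrange(-50, 50)
    r = -alpha * i + 7 * random.randrange(-50, 50)   # forces gamma*r + alpha*i into (7)
    s, j = random.randrange(-50, 50), 2 * random.randrange(-50, 50)
    assert in_p1((r, i)) and in_p1(mult((r, i), (s, j)))
print("p1 absorbs products on all sampled elements")
```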
Proposition 1.2. Let p be a prime ideal of R.
(1) If t2 + at + b is irreducible in Q(R/p)[t], then the only prime ideal of R(I)a,b lying
over p is q := {p + it | p ∈ p, i ∈ I ∩ p}.
(2) If t2 + at + b = (t − ᾱ/γ̄)(t − β̄/γ̄) in Q(R/p)[t], then the ideals p1 and p2 defined in
the previous lemma are the only prime ideals of R(I)a,b lying over p.
Proof. The first case is straightforward, because the prime ideal of R(I)a,b lying over p is
pR[t]/(t^2 + at + b) ∩ R(I)a,b = {p + it | p ∈ p, i ∈ I ∩ p}.
As for the second case, we easily observe that p1 ∩ R = p = p2 ∩ R and, hence, p1 and p2 are
prime ideals lying over p. In fact, in the proof of [4, Proposition 1.9] it is proved that a prime
ideal of R(I)a,b lying over p is the contraction to R(I)a,b of the ideals J = ϕp^{-1}((t − ᾱ/γ̄))
and H = ϕp^{-1}((t − β̄/γ̄)), where ϕp is the composition of the canonical homomorphisms
R[t] → (R/p)[t] ֒→ Q(R/p)[t]. Since J and H contain pR[t], it is easy to see that the
extensions of p1 and p2 in R[t]/(t2 + at + b) are contained in the images J¯ and H̄ in the same
ring of J and H, respectively. In fact, if r + it ∈ p1 , then γr + αi ∈ p and
ᾱ
r̄γ̄
R
ᾱī γ̄r̄ + ᾱī
ᾱ
∈ t−
Q
[t];
ϕp (r + it) =
+ īt = īt −
+
= ī t −
γ̄
γ̄
γ̄
γ̄
γ̄
p
therefore, r + it ∈ J¯ ∩ R(I)a,b . Analogously, p2 ⊂ H̄ ∩ R(I)a,b . By Incomparability, see [9,
Corollary 4.18], we get that p1 = J¯ ∩ R(I)a,b and p2 = H̄ ∩ R(I)a,b and this concludes the
proof.
In [4, Remark 1.10.3] it is noted that R(I)a,b is an integral domain if t2 + at + b is
irreducible in Q(R)[t] and R is an integral domain; thanks to the previous proposition we
can prove the converse.
Corollary 1.3. R(I)a,b is an integral domain if and only if R is an integral domain and
t2 + at + b is irreducible in Q(R)[t].
Proof. Since R ⊆ R(I)a,b , we can assume that R is an integral domain. If t2 + at + b is
irreducible in Q(R)[t], the ideal q = {p + it | p ∈ (0)R, i ∈ I ∩ (0)R} = (0)R(I)a,b is prime
and, thus, R(I)a,b is an integral domain. Conversely, suppose by contradiction that t2 +at+b
is reducible in Q(R)[t] and let p1 , p2 be the prime ideals of R(I)a,b lying over (0)R. They are
minimal primes of R(I)a,b and, since it is an integral domain, they are equal to (0)R(I)a,b .
On the other hand, it is easy to see that, for any i ∈ I, the element iα − iγt is in p1 and it
is different from zero, because R is an integral domain; contradiction.
In order to study the behaviour of localizations, notice that, since the ideals pi are independent of the choice of the elements α, β and γ, we can choose them in such a way that
aγ = −α − β and bγ 2 = αβ + p in R, where p ∈ p. This choice is not unique, in fact for
any q ∈ p, substituting α with α + q and β with β − q, the first equality still holds and the
second one is modified up to a summand in p.
Proposition 1.4. Let p be a prime ideal of R.
(1) Suppose that t2 + at + b is irreducible in Q(R/p)[t] and let q be the prime ideal of
R(I)a,b lying over p. Then, (R(I)a,b)q ≅ Rp(Ip)a,b.
(2) Suppose that t^2 + at + b = (t − ᾱ/γ̄)(t − β̄/γ̄) in Q(R/p)[t] and let p1, p2 be the prime
ideals of R(I)a,b lying over p.
(a) If (α − β)I ⊆ p (i.e. p1 = p2), then (R(I)a,b)pi ≅ Rp(Ip)a,b for i = 1, 2.
(b) If (α − β)I ⊈ p, i.e. there exists λ ∈ I such that (α − β)λ ∉ p, then (R(I)a,b)pi ≅ Rp for i = 1, 2, provided that there exists a choice of α, β and γ such that
aγ = −α − β and bγ 2 = αβ + p in R, where p ∈ p and pλI = 0. In particular,
the last hypothesis holds if t2 + at + b is reducible also in Q(R)[t].
Proof. (1) We have s + jt ∈ R(I)a,b \ q if and only if at least one between s and j is in R \ p.
Given an element (r+it)/(s+jt) ∈ (R(I)a,b )q , we can multiply it by (s−aj −jt)/(s−aj −jt);
in fact, s − aj − jt ∈ R(I)a,b \ q, if j ∈ R \ p, but this also happens if j ∈ p and s ∈ R \ p,
because in this case s − aj ∈ R \ p. Hence, we get an injective homomorphism between
(R(I)a,b )q and Rp (Ip )a,b given by
(r + it)/(s + jt) ⟼ (r + it)/(s + jt) · (s − aj − jt)/(s − aj − jt) = (rs − ajr + ijb)/(s^2 − ajs + bj^2) + ((si − rj)/(s^2 − ajs + bj^2)) t.
Moreover, it is surjective because a generic element r/s + (i/s′ )t comes from (rs′ +
ist)/(ss′ ) ∈ (R(I)a,b )q .
(2) (a) We recall that in this case p1 = p2. If s + jt ∈ R(I)a,b \ p1, then ja − s + jt ∉ p1, because γja − γs + αj = −jα − jβ − γs + αj ∉ p, since s + jt ∉ p2. Therefore, given an
element (r + it)/(s + jt) ∈ (R(I)a,b )p1 , one has
(r + it)/(s + jt) · (ja − s + jt)/(ja − s + jt) = (rja − rs − bij)/(sja − s^2 − bj^2) + ((rj − si)/(sja − s^2 − bj^2)) t.
Clearly, sja − s2 − bj 2 ∈ R \ p, because p1 ∩ R = p, therefore, we get a well-defined ring
homomorphism
f : (R(I)a,b)p1 → Rp(Ip)a,b,    f((r + it)/(s + jt)) = (rja − rs − bij)/(sja − s^2 − bj^2) + ((rj − si)/(sja − s^2 − bj^2)) t
that is injective by construction. As for the surjectivity, if r/s1 + (i/s2)t is an element of Rp(Ip)a,b, it is easy to see that this is equal to f((rs2 + is1 t)/(s1 s2)). Hence, f is an isomorphism and, therefore, (R(I)a,b)pi ≅ Rp(Ip)a,b for i = 1, 2.
(2) (b) Consider the map g1 : Rp → (R(I)a,b)p1, g1(r/s) = r/s. Clearly, this is well defined and is an injective ring homomorphism. As for the surjectivity, consider a generic (r + it)/(s + jt) ∈ (R(I)a,b)p1 and let λ be an element of I such that λ(α − β) ∉ p. Then, −βλγ + γ^2 λt ∉ p1 and it is easy to see that (r + it)(−βλγ + γ^2 λt) = −rβλγ − αβiλ − piλ + (rγ^2 λ − iβλγ + αiγλ + βiγλ)t = (rγ + iα)(−βλ + γλt), since piλ = 0. It follows that

(r + it)/(s + jt) · (−βλγ + γ^2 λt)/(−βλγ + γ^2 λt) = (rγ + iα)/(sγ + jα).

Hence, g1((rγ + iα)/(sγ + jα)) = (r + it)/(s + jt) and (R(I)a,b)p1 ≅ Rp. The same argument can be applied to (R(I)a,b)p2. Finally, note that if t^2 + at + b is reducible in Q(R)[t], then p = 0.
Question 1.5. With the notation of the previous proposition, if (α − β)I ⊈ p and pλI ≠ 0, for any possible choice of p and λ, is it still true that (R(I)a,b)pi is isomorphic to Rp?
We recall that an ideal is said to be regular if it contains a regular element. In the light
of the previous proposition, [4, Proposition 2.7] and [5, Corollary 1.4], we can immediately
state the following corollary:
Corollary 1.6. Let R be a ring, let I be an ideal of R and let a, b ∈ R. Assume that t2 +at+b
is reducible in Q(R)[t]. Denote by M the set of all the maximal ideals m of R except those
for which t2 + at + b = (t − αm/γm)(t − βm/γm) in (R/m)[t] and (αm − βm)I ⊈ m. Then:
(1) The ring R(I)a,b is Cohen-Macaulay if and only if R is Cohen-Macaulay and Im is a
maximal Cohen-Macaulay Rm -module for all m ∈ M;
(2) Assume that Im is regular for all m ∈ M. The ring R(I)a,b is Gorenstein if and only
if R is Cohen-Macaulay, Im is a canonical ideal of Rm for all m ∈ M, and Rm is
Gorenstein for all m ∉ M.
Example 1.7. Let k be an algebraically closed field of characteristic different from 2 and
assume that R ≅ k[x1, . . . , xn]/J is a domain. Let b ∈ R such that t^2 − b is irreducible
in Q(R)[t] (it is proved in [4, Corollary 2.6] that we can always find infinitely many such
b), let I be an ideal of R and consider the ring R(I)0,−b . We can present it as a quotient
of k[x1 , . . . , xn , y1, . . . , ym ], where m is the cardinality of a minimal set of generators of I;
Corollary 1.3 implies that this is an integral domain. For any maximal ideal mQ (corresponding to the point Q in the affine variety V(J)) the polynomial t2 − b is reducible in
(R/mQ)[t] ≅ k[t], since k is algebraically closed; moreover, if b ∉ mQ and if α ∈ R is such that t^2 − b = (t − ᾱ)(t + ᾱ), we have that α ∉ mQ, i.e. the condition 2αI ⊂ mQ is equivalent to I ⊂ mQ. Hence, R(I)0,−b is the coordinate ring of an affine variety double covering V(J),
with ramification points Q corresponding to the roots of b and to those points Q, such that
the corresponding ideal mQ contains I. By Proposition 1.4, the points of the double covering
lying over a ramification point are all singular, since, by [4, Remark 2.4], a local ring of the
form R(I)a,b is regular if and only if R is regular and I = 0.
2. Serre’s conditions
A noetherian ring R satisfies Serre’s condition (Sn ) if depth Rp ≥ min{n, dim Rp } for any
p ∈ Spec R. In the next proposition we study Serre’s condition (Sn ) for R(I)a,b , generalizing
the particular case of the amalgamated duplication studied in [3]. In this section we assume
that t2 + at + b is reducible in Q(R)[t] and, if it is also reducible in Q(R/p)[t], we write
t2 + at + b = (t − ᾱp /γ̄p )(t − β̄p /γ̄p ). We recall that R(I)a,b is noetherian if and only if R is
noetherian (cf. [4, Proposition 1.11]).
Proposition 2.1. Let R be a noetherian ring. Then, R(I)a,b satisfies Serre’s condition
(Sn ) if and only if R satisfies Serre’s condition (Sn ) and depth Ip ≥ min{n, dim Rp } for all
p ∈ Spec R such that either t2 + at + b is irreducible in Q(R/p)[t] or (αp − βp )I ⊆ p.
Proof. Let q be a prime ideal of R(I)a,b and p = q ∩ R. We distinguish two cases according
to Proposition 1.4. In both cases we notice that dim Rp = dim Rp (Ip )a,b = dim(R(I)a,b )q .
• If (αp − βp)I ⊈ p, we have depth(R(I)a,b)q = depth Rp; then, depth(R(I)a,b)q ≥
min{n, dim(R(I)a,b )q } if and only if depth Rp ≥ min{n, dim Rp }.
• If either (αp − βp )I ⊆ p or t2 + at + b is irreducible in Q(R/p)[t], it follows that
depth(R(I)a,b )q = depth Rp (Ip )a,b = min{depth Rp , depth Ip }. Consequently, we get
depth(R(I)a,b )q ≥ min{n, dim(R(I)a,b )q } if and only if min{depth Rp , depth Ip } ≥
min{n, dim Rp }.
A noetherian ring R satisfies the condition (Rn ) if Rp is regular for all p ∈ Spec R with
ht p ≤ n. Bagheri, Salimi, Tavasoli, and Yassemi ask when the condition (Rn ) holds for the
amalgamated duplication, see [3, Remark 3.9]. The next result gives the answer for the more
general construction R(I)a,b .
Proposition 2.2. Let R be a noetherian ring. Then, R(I)a,b satisfies (Rn ) if and only if R
satisfies (Rn ) and Ip = 0 for all p ∈ Spec R with ht p ≤ n and such that either t2 + at + b is
irreducible in Q(R/p)[t] or (αp − βp )I ⊆ p.
Proof. Let q be a prime ideal of R(I)a,b such that ht p ≤ n and p = q ∩ R. As in the previous
proposition there are two cases:
• If (αp − βp)I ⊈ p, then (R(I)a,b)q is regular if and only if Rp is regular.
• If either (αp − βp )I ⊆ p or t2 + at + b is irreducible in Q(R/p)[t], then (R(I)a,b )q
is regular if and only if Rp (Ip )a,b is regular and this is equivalent to Rp regular and
Ip = 0, by [4, Remark 2.4].
Putting together the two cases we get the thesis.
If the polynomial t2 + at + b is reducible in R[t], as in the cases of idealization and
amalgamated duplication, we can be more precise.
Corollary 2.3. Let I be a regular ideal of a noetherian ring R and suppose that t2 + at + b =
(t − α)(t − β) in R[t]. Then, R(I)a,b satisfies (Rn ) if and only if R satisfies (Rn ) and
n < ht(α − β)I.
Proof. If R(I)a,b satisfies (Rn ) it follows from the previous proposition that R satisfies (Rn ).
Suppose by contradiction that n ≥ ht(α − β)I and let p be a minimal prime of (α − β)I
such that ht(p) = ht((α − β)I). By the previous proposition we have Ip = 0, but, if x ∈ I is
regular, this implies that xs = 0 for some s ∈ R \ p, a contradiction.
Conversely, we have (α − β)I ⊈ p for any prime ideal p of R of height less than or equal
to n; hence the thesis follows from the previous proposition.
The previous corollary implies that, if I is regular and ht(α − β)I ≤ n, the ring R(I)a,b
never satisfies condition (Rn). This is the case of idealization, since α = β = 0. As for
the amalgamated duplication the factorization is t(t − 1), hence R(I)−1,0 satisfies the property
(Rn) if and only if R satisfies (Rn) and n < ht(I).
3. The case t2 + at + b reducible in R[t]
In this section we always assume that t2 + at + b = (t − α)(t − β), where α and β are
elements of R. Particular cases of this setting are both idealization and duplication, since
the corresponding polynomials are t2 and, respectively, t(t − 1). Thus, we get a subfamily of
the family of rings R(I)a,b ; the interest in studying this subfamily comes from the facts that
it is large enough (as we will see, we can obtain elements that are isomorphic neither to
an idealization nor to a duplication) and, for any ring T in it, R is naturally a T-module (cf.
Remark 3.2).
We recall that, if R is reduced, the amalgamated duplication is always reduced, while the
idealization is never reduced; in particular, in these two constructions this property doesn’t
depend on the ideal I. Despite this, in the next example we show that there can be choices
of a and b for which R(I)a,b is reduced or not depending on the ideal I.
Example 3.1. Let k be a field of characteristic two and set R := k[X, Y ]/(XY ), that is a
reduced ring. Denote the images of X, Y in R by x, y and consider R(I)x,y2 = R[It]/I 2 (t2 +
xt + y 2). Notice that (t2 + xt + y 2) = (t + (y + x))(t + y), since char R = 2. If I = (y), then
(y)^2(t^2 + xt + y^2) = (y^2)(t^2 + y^2) in R[t], so R(I)x,y2 ≅ R ⋉ I by [4, Proposition 1.5] and, in
particular, it is not a reduced ring.
On the other hand, if I = (x), then (x)2 (t2 + xt + y 2 ) = (x2 )(t2 + xt) in R[t]. If r + it is a
nilpotent element of R(I)x,y2 , it follows that 0 = (r + it)n = r n + t(. . . ) and thus r = 0, since
R is reduced. Therefore, if i = λx for some λ ∈ R, we get 0 = (it)n = (λx)n tn = λn x2n−1 t
and this implies Y |λ in k[X, Y ], that is i = λ1 xy = 0 in R. This proves that R(I)x,y2 is
reduced.
In Corollary 3.7 we will see that the last ring of the previous example is an amalgamated
duplication. However, in Example 3.8 we will produce a ring of our subfamily that is
isomorphic neither to an idealization nor to an amalgamated duplication, proving that there
are also new rings in the family we are studying.
Remark 3.2. We note that in our case there exists a ring homomorphism
R[t]/((t − α)(t − β)) −→ R[t]/(t − α) ≅ R.
If we restrict to R(I)a,b , we get a ring homomorphism R(I)a,b → R, that maps s + jt to
s + jα; since there also exists a natural homomorphism R → R(I)a,b , any R-module M is an
R(I)a,b -module and vice versa; moreover λR (M) = λR(I)a,b (M), where λ denotes the length
of a module.
In particular, R is an R(I)a,b -module with the scalar multiplication (s + jt)r = sr + jαr,
where s + jt ∈ R(I)a,b and r ∈ R.
We denote the minimal primes of R by Min(R) and their intersection, the nilradical of R,
by N(R). Moreover, we write Ann(x) and Ann(I) to denote the annihilator of the element
x and the ideal I respectively.
Proposition 3.3. R(I)a,b is reduced if and only if R is reduced and I ∩ Ann(α − β) = (0).
Proof. The set of minimal primes of R(I)a,b is A = ∪_{p∈Min(R)} {p1, p2}; therefore, R(I)a,b is
reduced if and only if N(R(I)a,b) = ∩A = (0).
Assume that R is reduced and I ∩ Ann(α − β) = (0). Let r + it be an element of N(R(I)a,b)
and fix p ∈ Min(R). Since r + it ∈ p1 ∩ p2, then r + αi, r + βi ∈ p and consequently i(α − β) ∈ p.
This holds for any p ∈ Min(R) and, thus, i(α − β) ∈ ∩_{p∈Min(R)} p = (0). This implies that
i = 0, since I ∩ Ann(α − β) = (0), and then also r = 0, since R is reduced.
Conversely it is clear that R is reduced if R(I)a,b is. Moreover, if i ∈ I ∩ Ann(α − β), then
(−βi + it)2 = β 2 i2 − bi2 + (−2βi2 − ai2 )t = (β − α)i2 β + (α − β)i2 t = 0,
hence, −βi + it = 0 and consequently i = 0.
Corollary 3.4. Let R be a reduced ring and assume that α − β is regular, then R(I)a,b
is reduced. Moreover, the converse holds if I is regular. In particular, if R is an integral
domain and α 6= β, then R(I)a,b is reduced.
Proof. The first part follows immediately from the previous proposition. Conversely, if by
contradiction there exists x ∈ R such that x(α − β) = 0 and i ∈ I is regular, then 0 ≠ xi ∈
I ∩ Ann(α − β) and the previous proposition leads to a contradiction.
Remark 3.5. We note that, if t2 + at + b is irreducible in Q(R/p)[t] for any p ∈ Min(R),
then R is reduced if and only if R(I)a,b is reduced. In fact, if R is reduced it is enough to
compute the nilradical of R(I)a,b :
N(R(I)a,b) = ∩_{q∈Min(R(I)a,b)} q = ∩_{p∈Min(R)} {p + it | p ∈ p, i ∈ I ∩ p} = {p + it | p ∈ N(R), i ∈ I ∩ N(R)} = (0)R(I)a,b.
3.1. Idealization and amalgamated duplication. We have already noted that the idealization R ⋉ I and the amalgamated duplication R ✶ I are members of our family; in this
subsection we study when R(I)a,b is isomorphic to them. As a consequence, we will show that
it is possible to find rings in our subfamily that are isomorphic neither to an idealization
nor to an amalgamated duplication (cf. Example 3.8).
Proposition 3.6. The following statements hold.
1) R(I)a,b ≅ R ⋉ I if α − β ∈ Ann(I^2).
2) R(I)a,b ≅ R ✶ (α − β)I if Ann(α − β) ∩ I = (0).
Proof. We can consider the ring automorphism of R[t] given by t ↦ t + α; then

R(I)a,b = R[It]/(I^2((t − α)(t − β))) ≅ R[It]/(I^2(t^2 + (α − β)t)) = R(I)α−β,0.
1) If α − β ∈ Ann(I^2), then

R(I)a,b ≅ R[It]/(I^2(t^2 + (α − β)t)) = R[It]/(I^2(t^2)) ≅ R ⋉ I

by [4, Proposition 1.4].
2) Consider the map ϕ : R(I)α−β,0 → R((α − β)I)−1,0 given by ϕ(r + it) = r + (α − β)it.
This is a ring homomorphism, since
ϕ((r + it)(s + jt)) = ϕ(rs + (rj + si + ij(α − β))t) = rs + (α − β)(rj + si + ij(α − β))t
ϕ(r + it)ϕ(s + jt) = (r + (α − β)it)(s + (α − β)jt) = rs + (α − β)(rj + si + ij(α − β))t.
Moreover, ϕ is clearly surjective and, since Ann(α − β) ∩ I = (0), it is injective as well;
hence, ϕ is an isomorphism and the thesis follows, because R((α − β)I)−1,0 ≅ R ✶ (α − β)I by [4, Proposition 1.4].
Corollary 3.7. Let R be a reduced ring. The following statements hold.
1) R(I)a,b ≅ R ⋉ I if and only if α − β ∈ Ann(I).
2) R(I)a,b ≅ R ✶ (α − β)I if and only if Ann(α − β) ∩ I = (0).
Proof. 1) If α − β ∈ Ann(I), then R(I)a,b ≅ R ⋉ I by the previous proposition. Conversely,
suppose that R(I)a,b ≅ R ⋉ I. Then, only one prime ideal of R(I)a,b lies over a prime ideal
of R, and this happens if and only if (α − β)I ⊆ ∩_{p prime} p = (0), because R is reduced; hence
α − β ∈ Ann(I).
2) We need to prove that, if R(I)a,b ≅ R ✶ (α − β)I, then Ann(α − β) ∩ I = (0). If this does
not happen, R(I)a,b is not reduced by Proposition 3.3 and this yields a contradiction, since
the amalgamated duplication is reduced if R is.
The next example shows that in the subfamily studied in this section there are rings that
are isomorphic neither to an idealization nor to an amalgamated duplication.
Example 3.8. Consider R = k[X, Y ]/(XY ) and I = (x, y), where k is a field with char k ≠ 2
and x, y denote the images of X, Y in R. Set α = y − r and β = −y − r with r ∈ R; thus,
(t − α)(t − β) = t^2 + 2rt + r^2 − y^2. We have α − β = 2y ∉ Ann(I) and x ∈ Ann(α − β) ∩ I,
then R(I)2r,r^2−y^2 has two different minimal prime ideals by Lemma 1.1 and Proposition 1.2;
consequently, it cannot be isomorphic to an idealization. Moreover, since Ann(α − β) ∩ I ≠
(0), Proposition 3.3 implies that R(I)2r,r^2−y^2 is not reduced and, therefore, it is not isomorphic
to an amalgamated duplication.
If R is not reduced, the first part of Corollary 3.7 does not hold, as shown in the next
example.
Example 3.9. Consider R = Z/2^n Z. The non-units of R are the classes represented by 2^α m
with α ≥ 1 and m odd. Then, the square of any ideal is annihilated by 2^{n−2}. This means
that for any ideal I one has R[It]/I^2(t^2 + 2^{n−2} t) = R[It]/I^2(t^2) ≅ R ⋉ I. Moreover, if we
choose I = (2) we have 2^{n−2} ∉ Ann(I).
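A direct numerical check of this example, using the product rule from the proof of Lemma 1.1 with the illustrative choice n = 8, confirms that every element of the form it is nilpotent, so the ring cannot be reduced:

```python
# In R = Z/2^n Z with I = (2), a = 2^(n-2), b = 0, every i*t squares to zero:
# (0 + it)(0 + it) = -b*i^2 + (-a*i^2) t = -2^(n-2) i^2 t = 0, since 4 divides i^2.
n = 8
mod, a, b = 2 ** n, 2 ** (n - 2), 0

def mult(x, y):
    (r, i), (s, j) = x, y
    return ((r * s - i * j * b) % mod, (r * j + s * i - a * i * j) % mod)

for i in range(0, mod, 2):                 # every element of I = (2)
    assert mult((0, i), (0, i)) == (0, 0)
print("every i*t with i in (2) is nilpotent, as in Example 3.9")
```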
3.2. Quasi-Gorenstein rings. Let (R, m) be a d-dimensional local ring. A finitely generated R-module ωR is said to be a canonical module of R if ωR ⊗R R̂ ≅ HomR(H^d_m(R), E(R/m)), where H^d_m(R) denotes the d-th local cohomology module of R with support in m and E(R/m)
is the injective hull of R/m. If the canonical module ωR exists, it is unique up to isomorphism.
In this case the ring R is said to be quasi-Gorenstein if its canonical module is a rank one
free R-module, see [14] and references therein for other characterizations and properties of
quasi-Gorenstein rings. In [2], Aoyama characterizes when idealization is quasi-Gorenstein,
while Bagheri, Salimi, Tavasoli and Yassemi do the same for amalgamated duplication in [3].
In this subsection we generalize these results to all the rings of the family R(I)a,b for which
t2 + at + b is reducible in R[t]. We start by recalling a lemma that we will use in Theorem
3.12.
Lemma 3.10. [5, Remark 1.7] Let R be a noetherian local ring. Then \widehat{R(I)a,b} ≅ R̂(Î)a,b as R̂-algebras.
In this section we consider R as an R(I)a,b -module with scalar multiplication (s + jt)r =
rs + rjα as in Remark 3.2.
Lemma 3.11. If Ann(I) = (0), then I ≅ HomR(I)a,b(R, R(I)a,b) as R-modules.
Proof. For any r ∈ R and i ∈ I we set gi : R → R(I)a,b with gi(r) = riβ − rit, which is a
homomorphism of R(I)a,b-modules. Consider the map
ϕ : I → HomR(I)a,b (R, R(I)a,b ), i 7→ gi ;
it is easy to prove that this is an injective homomorphism of R-modules.
We claim that ϕ is also surjective. Consider h ∈ HomR(I)a,b (R, R(I)a,b ), clearly this is
determined by h(1) = s + jt where s ∈ R and j ∈ I. Since h is an homomorphism of
R(I)a,b -modules, for any i ∈ I we have
rs + rjt = rh(1) = h(r) = h(r − iα + iα) = h((r − iα + it) · 1) =
= (r − iα + it)h(1) = (r − iα + it)(s + jt) =
= rs − iαs − ijαβ + (rj − ijα + si + ijα + ijβ)t,
then, h is well defined only if for any i ∈ I

i(−sα − jαβ) = 0    and    i(s + jβ) = 0,

and this implies that s = −jβ because Ann(I) = (0). Hence h = g−j and ϕ is an isomorphism.
The following result is a generalization of [2, Theorem 2.11] and [3, Theorem 3.3]. The
idea of the first implication is essentially the same as in [2] and [3], but it requires the previous
two lemmas and Proposition 2.1. We recall that an ideal of R is said to be a canonical ideal
if it is a canonical module of R.
Theorem 3.12. Let R be a noetherian local ring and suppose that I is regular. Then, R(I)a,b is quasi-Gorenstein if and only if R̂ satisfies Serre's condition (S2) and I is a canonical ideal of R.
Proof. If R(I)a,b is quasi-Gorenstein, it is well known that also \widehat{R(I)a,b} ≅ R̂(Î)a,b is quasi-Gorenstein. Consequently, since a canonical module always satisfies the condition (S2) (see [6, Theorem 12.1.18]), it follows that R̂(Î)a,b satisfies (S2) and, in the light of Proposition 2.1, also R̂ satisfies (S2). Moreover, since we have a homomorphism R(I)a,b → R (see Remark 3.2), it follows from [11, Satz 5.12] (or [2, Theorem 1.2]) and the previous lemma that HomR(I)a,b(R, R(I)a,b) ≅ I is a canonical module.
Conversely, using again [11, Satz 5.12], we get that a canonical module of R(I)a,b is HomR(R(I)a,b, I), because I is a canonical ideal of R. To prove that R(I)a,b is quasi-Gorenstein we only need to show an isomorphism ϕ : R(I)a,b → HomR(R(I)a,b, I). To this aim, we set fr+it(s + jt) = rj + i(s − ja) ∈ I and ϕ(r + it) = fr+it. To check that this is an isomorphism, it is possible to follow the same proof of [5, Proposition 2.1], bearing in mind that for the surjectivity one can use that (I : I) ↪ HomR(I, I) ≅ R, because R̂ satisfies Serre's condition (S2), see [1, Proposition 2].
Remark 3.13. We notice that, by the proof above, if R satisfies Serre's condition (S2)
and I is a canonical ideal, then R(I)a,b is quasi-Gorenstein even if t2 + at + b is not reducible
in R[t].
References
[1] Y. Aoyama, On the depth and the projective dimension of the canonical module, Japan. J. Math.
6 (1980), 61–66.
[2] Y. Aoyama, Some basic results on canonical modules, J. Math. Kyoto Univ. 23 (1983), 85–94.
[3] A. Bagheri, M. Salimi, E. Tavasoli, S. Yassemi, A construction of quasi-Gorenstein rings, J.
Algebra Appl. 11 (2012), no. 1, 1250013 [9 pages].
[4] V. Barucci, M. D’Anna, F. Strazzanti, A family of quotients of the Rees algebra, Commun. Algebra
43 (2015), no. 1, 130–142.
[5] V. Barucci, M. D’Anna, F. Strazzanti, Families of Gorenstein and almost Gorenstein rings, Ark.
Mat. 54 (2016), no. 2, 321–338.
[6] M.P. Brodmann, R.Y. Sharp, Local cohomology: An Algebraic Introduction with Geometric Applications, second edition, Cambridge University Press, 2013.
[7] M. D’Anna, A construction of Gorenstein rings, J. Algebra 306 (2006), no. 2, 507–519.
[8] M. D’Anna, M. Fontana, An amalgamated duplication of a ring along an ideal: basic properties,
J. Algebra Appl. 6 (2007), no. 3, 443–459.
[9] D. Eisenbud, Commutative Algebra with a View Toward Algebraic Geometry, Springer Verlag,
New York, 1995.
[10] C.A. Finocchiaro, A construction of Prüfer rings involving quotients of Rees algebras, J. Algebra
Appl. (2017), Online Ready, doi: 10.1142/S0219498818500986.
[11] J. Herzog, E. Kunz, Der kanonische Modul eines Cohen-Macaulay-Rings, Springer Lecture Notes in
Math. 238, 1971.
[12] M. Nagata, Local Rings, Interscience, New York, 1962.
[13] A. Oneto, F. Strazzanti, G. Tamone, One-dimensional Gorenstein local rings with decreasing
Hilbert function, J. Algebra 489 (2017), 91–114.
[14] E. Tavanfar, M. Tousi, A study of quasi-Gorenstein rings, arXiv:1508.04597 (2015).
Marco D’Anna - Dipartimento di Matematica e Informatica - Università degli Studi di
Catania - Viale Andrea Doria 6 - 95125 Catania - Italy
E-mail address: [email protected]
Francesco Strazzanti - Departamento de Álgebra & Instituto de Matemáticas (IMUS) Universidad de Sevilla - Avda. Reina Mercedes s/n - 41012 Sevilla - Spain
E-mail address: [email protected]
| 0 |
A New Polar Coding Scheme for the
Interference Channel
arXiv:1608.08742v3 [cs.IT] 10 Jan 2018
Mengfan Zheng, Cong Ling, Wen Chen and Meixia Tao
Abstract
Existing polar coding schemes for the two-user interference channel follow the original idea of Han
and Kobayashi, in which component messages are encoded independently and then mapped by some
deterministic functions (i.e., homogeneous superposition coding). In this paper, we propose a new polar
coding scheme for the interference channel based on the heterogeneous superposition coding approach of
Chong, Motani and Garg. We prove that fully joint decoding (the receivers simultaneously decode both
senders’ common messages and the intended sender’s private message) in the Han-Kobayashi strategy
can be simplified to two types of partially joint decoders, which are friendly to polar coding with
practical decoding algorithms. The proposed coding scheme requires less auxiliary random variables
and no deterministic functions. Further, we extend this result to interference networks and show that the
proposed partially joint decoding scheme is a general method for designing heterogeneous superposition
polar coding schemes in interference networks.
Index Terms
Polar codes, interference channel, Han-Kobayashi region, superposition coding, joint decoding.
I. I NTRODUCTION
Polar codes, proposed by Arıkan [1], are the first class of channel codes that can provably
achieve the capacity of any memoryless binary-input output-symmetric channel with low encoding and decoding complexity. Since their invention, polar codes have been widely adopted in many
other scenarios, such as source compression [2]–[5], wiretap channels [6]–[11], relay channels
M. Zheng, W. Chen and M. Tao are with the Department of Electronic Engineering at Shanghai Jiao Tong University,
Shanghai, China. Emails: {zhengmengfan, wenchen, mxtao}@sjtu.edu.cn. C. Ling is with the Department of Electrical and
Electronic Engineering at Imperial College London, United Kingdom. Email: [email protected].
The corresponding author is M. Tao.
[6], [12], [13], multiple access channels (MAC) [5], [14]–[17], broadcast channels [18], [19],
broadcast channels with confidential messages [10], [20], and bidirectional broadcast channels
with common and confidential messages [21]. In these scenarios, polar codes have also shown
capacity-achieving capabilities.
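For readers new to polar codes, the encoder core is simply the Kronecker power of Arıkan's 2 × 2 kernel. The following sketch is illustrative only: it computes x = u F^{⊗n} over GF(2) and omits channel polarization, frozen-bit selection, and successive cancellation decoding, which are the parts this paper builds on.

```python
import numpy as np

F = np.array([[1, 0], [1, 1]], dtype=np.uint8)   # Arikan's 2x2 kernel

def polar_encode(u):
    """Encode a length-2^n binary vector u as x = u F^{⊗n} over GF(2)."""
    n = int(np.log2(len(u)))
    G = F
    for _ in range(n - 1):
        G = np.kron(G, F)                        # build F^{⊗n}
    return np.array(u, dtype=np.uint8) @ G % 2

print(polar_encode([0, 1, 0, 1, 1, 0, 0, 1]))    # a block-length-8 example
```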
The interference channel (IC), first introduced by Shannon [22] and further studied by Ahlswede
[23], models the situation where m sender-receiver pairs try to communicate simultaneously
through a common channel. In this model, it is assumed that there is no cooperation between
any of the senders or receivers, and the signal of each sender is seen as interference by the
unintended receivers. Although the 2-user discrete memoryless IC (DM-IC) is rather simple in
appearance, except for some special cases [24]–[31], determining the capacity region of a general
IC remains an open problem. Reference [23] gave simple but fundamental inner and outer bounds
on the capacity region of the IC. In [32], Carleial determined an improved achievable rate region
for the IC by applying the superposition coding technique of Cover [33], which was originally
designed for the broadcast channel. Later, Han and Kobayashi established the best achievable
rate region for the general IC to date [34]. A more compact description of the Han-Kobayashi
region was given in [35]. The idea of the Han-Kobayashi coding strategy is to split each sender’s
message into a private part and a common part, and allow the unintended receiver to decode the
common part so as to enhance the total transmission rates. To achieve the whole Han-Kobayashi
region, it is required that each receiver decodes its intended private message and both senders’
common messages jointly.
There are limited studies on the design of specific coding schemes that can achieve the Han-Kobayashi rate region. A low-density parity-check (LDPC) code-based Han-Kobayashi scheme
was proposed for the Gaussian IC in [36], which has close-to-capacity performance in the case of
strong interference. In [37], a specific coding scheme was designed for the binary-input binary-output Z IC using LDPC codes, and an example was shown to outperform time sharing of single
user codes. For polar codes, reference [38] pointed out how alignment of polarized bit-channels
can be of use for designing coding schemes for interference networks, and presented an example
of the one-sided discrete memoryless 3-user IC with a degraded receiver structure. A polar coding
scheme that achieves the Han-Kobayashi inner bound for the 2-user IC was proposed in [39], and
[40] used a similar scheme to achieve the Han-Kobayashi region in the 2-user classical-quantum
IC. The idea of [39] is to transform the original IC into two 3-user MACs from the two receivers’
perspectives, and design a compound MAC polar coding scheme for them. The achievable rate
region of the compound MAC equals the Han-Kobayashi region, and can be achieved by polar
codes. This design is based on the original Han-Kobayashi scheme in [34], in which component
messages are independently encoded into auxiliary sequences and then mapped to the channel
inputs by some deterministic functions (also known as homogeneous superposition coding [41]).
By ranging over all possible choices of these functions and distributions of auxiliary random
variables (ARV), the whole Han-Kobayashi region can be achieved. However, such an approach
could be problematic in practice since finding such functions may be a very complex task.
Our work is inspired by the compact description of the Han-Kobayashi region based on the
Chong-Motani-Garg scheme [35], in which no deterministic functions are required and fewer ARVs
are needed. This approach belongs to the heterogeneous superposition coding scheme [41], in
which the common message is encoded first and then a satellite codebook for the private message
is generated around it. When implementing such a scheme using polar codes, we find that the
fully joint decoder which simultaneously decodes all three component messages is difficult to
design, because the encoding scheme forces us to decode the common message of a sender before
its private message when successive cancellation decoding (SCD) is used. By analyzing points
on the dominant faces of the Han-Kobayashi region and utilizing random coding techniques, we
find that it is possible to loosen the fully joint decoding requirement and propose to use two
types of partially joint decoders. Each receiver can either jointly decode both senders’ common
messages first and then the intended sender’s private message, or solely decode the intended
sender’s common message first and then jointly decode the remaining two. Based on this finding and
enlightened by Goela et al.’s superposition polar coding scheme for the broadcast channel [18], we
design two types of polar coding schemes and show that every point on the dominant faces of the
Han-Kobayashi region can be achieved. Compared with the existing scheme of [39], our proposed
scheme achieves a larger rate region for the same joint distribution of random variables and has
lower encoding, decoding and code construction complexity. Most notably, with the proposed
scheme, the task of finding proper ARVs for a DM-IC can be reduced significantly. Further, we
extend the partially joint decoding scheme to arbitrary discrete memoryless interference networks
(DM-IN) and show that it is a general method for designing heterogeneous superposition polar
coding schemes in DM-INs that can achieve optimal rate regions.
In our proposed scheme, joint decoders and the corresponding code structure are implemented
using the 2-user MAC polarization method based on Arıkan’s monotone chain rule expansions
[5], whose encoding and decoding complexity is similar to the single-user case, and can be con-
structed efficiently. We use Şaşoğlu’s result on polarization for arbitrary discrete alphabet [42] to
extend it to arbitrary prime input alphabet case. To deal with non-uniform input distribution, one
may apply Gallager’s alphabet extension method [43, p. 208] as in [39], the chaining construction
[44], or a more direct approach by invoking results on polar coding for lossless compression
[18], [20], [45], [46]. In this paper, we take Chou and Bloch’s low-complexity approach [20],
[46], which only requires a vanishing rate of shared randomness between communicators. One
crucial point in designing capacity-achieving polar codes for a general multi-user channel is how
to properly align the polar indices. One solution for this problem is the chaining method, which
has already been used in several areas [9]–[11], [19], [47]. Another way is to add additional
stages of polarization to align the incompatible indices, as shown in [48] and used in [39]. In
this paper, we adopt the chaining method as it does not change the original polar transformation
and may be easier to understand.
The rest of this paper is organized as follows. In Section II, we introduce the 2-user DM-IC model and the Han-Kobayashi region, and propose two types of partially joint decoders. In
Section III, we review some background on polarization and polar codes necessary for our code
design. In Section IV, we provide an overview of our scheme and analyze its feasibility. Details
of our proposed schemes are presented in Section V, and the performance is analyzed in Section
VI. In Section VII, we extend the proposed scheme to arbitrary DM-INs. Section VIII concludes
this paper with some discussions.
Notations: $[N]$ is the abbreviation of the index set $\{1, 2, ..., N\}$. Vectors are denoted as $X^N \triangleq \{X^1, X^2, ..., X^N\}$ or $X^{a:b} \triangleq \{X^a, X^{a+1}, ..., X^b\}$ for $a \le b$. For a subset $A \subset [N]$, $X^A$ denotes the subvector $\{X^i : i \in A\}$ of $X^{1:N}$. $G_N = B_N F^{\otimes n}$ is the generator matrix of polar codes [1], where $N = 2^n$ with $n$ being an arbitrary integer, $B_N$ is the bit-reversal matrix, and $F = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$. $H_q(X)$ stands for the entropy of $X$ with $q$-based logarithm, and $H(X)$ is short for the Shannon entropy unless otherwise specified. $\delta_N = 2^{-N^\beta}$ with some $\beta \in (0, 1/2)$.
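Since the rest of the paper repeatedly maps between $U^{1:N}$ and $X^{1:N}$ through $G_N$, a concrete construction may help readers reproduce small examples. The following sketch is our own illustration (the function names and the toy input are not from the paper); it builds $G_N = B_N F^{\otimes n}$ for the binary case:

```python
import numpy as np

def polar_generator_matrix(n):
    """Return G_N = B_N F^{(x)n} over GF(2) for N = 2^n (illustrative sketch)."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F) % 2          # n-fold Kronecker power of F
    N = 2 ** n
    # Bit-reversal permutation B_N: row i of G_N is row bitrev(i) of F^{(x)n}
    bitrev = [int(format(i, f"0{n}b")[::-1], 2) for i in range(N)]
    return G[bitrev, :]

# Example (toy data): map u^{1:8} to x^{1:8} = u^{1:8} G_8 in the binary case
G8 = polar_generator_matrix(3)
u = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
x = u @ G8 % 2
```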
II. PROBLEM STATEMENT
A. Channel Model
Definition 1. A 2-user DM-IC consists of two input alphabets $\mathcal{X}_1$ and $\mathcal{X}_2$, two output alphabets $\mathcal{Y}_1$ and $\mathcal{Y}_2$, and a probability transition function $P_{Y_1Y_2|X_1X_2}(y_1, y_2|x_1, x_2)$. The conditional joint probability distribution of the 2-user DM-IC over $N$ channel uses can be factored as
$$P_{Y_1^N Y_2^N|X_1^N X_2^N}(y_1^N, y_2^N|x_1^N, x_2^N) = \prod_{i=1}^{N} P_{Y_1Y_2|X_1X_2}(y_1^i, y_2^i|x_1^i, x_2^i). \quad (1)$$
Definition 2. A $(2^{NR_1}, 2^{NR_2}, N)$ code for the 2-user DM-IC consists of two message sets $\mathcal{M}_1 = \{1, 2, ..., [2^{NR_1}]\}$ and $\mathcal{M}_2 = \{1, 2, ..., [2^{NR_2}]\}$, two encoding functions
$$x_1^N(m_1) : \mathcal{M}_1 \mapsto \mathcal{X}_1^N \quad \text{and} \quad x_2^N(m_2) : \mathcal{M}_2 \mapsto \mathcal{X}_2^N, \quad (2)$$
and two decoding functions
$$\hat{m}_1(y_1^N) : \mathcal{Y}_1^N \mapsto \mathcal{M}_1 \quad \text{and} \quad \hat{m}_2(y_2^N) : \mathcal{Y}_2^N \mapsto \mathcal{M}_2. \quad (3)$$

Definition 3. The average probability of error $P_e^{(N)}$ of a $(2^{NR_1}, 2^{NR_2}, N)$ code for the 2-user DM-IC is defined as the probability that the decoded message pair is not the same as the transmitted one, averaged over all possible message pairs,
$$P_e^{(N)} = \frac{1}{2^{N(R_1+R_2)}} \sum_{(M_1,M_2) \in \mathcal{M}_1 \times \mathcal{M}_2} \Pr\Big\{ \big(\hat{m}_1(Y_1^N), \hat{m}_2(Y_2^N)\big) \neq (M_1, M_2) \,\Big|\, (M_1, M_2) \text{ sent} \Big\}, \quad (4)$$
where $(M_1, M_2)$ are assumed to be uniformly distributed over $\mathcal{M}_1 \times \mathcal{M}_2$.
B. The Han-Kobayashi Rate Region
In the Han-Kobayashi coding strategy, each sender’s message is split into two parts: a private
message, which only needs to be decoded by the intended receiver, and a common message,
which is allowed to be decoded by the unintended receiver. Each receiver decodes its intended
private message and two common messages jointly so that a higher transmission rate can be
achieved. In the rest of this paper, we will refer to the two senders and two receivers as Sender
1, Sender 2, Receiver 1 and Receiver 2 respectively. Sender 1's message, denoted as $M_1$, is split into $(M_{1p}, M_{1c})$, where $M_{1p} \in \mathcal{M}_{1p} \triangleq \{1, 2, ..., [2^{NS_1}]\}$ denotes its private message and $M_{1c} \in \mathcal{M}_{1c} \triangleq \{1, 2, ..., [2^{NT_1}]\}$ the common message. Similarly, Sender 2's message $M_2$ is split into $(M_{2p}, M_{2c})$ with $M_{2p} \in \mathcal{M}_{2p} \triangleq \{1, 2, ..., [2^{NS_2}]\}$ and $M_{2c} \in \mathcal{M}_{2c} \triangleq \{1, 2, ..., [2^{NT_2}]\}$.
Define W1 , W2 , V1 and V2 as the random variables for messages M1c , M2c , M1p and M2p
respectively, with W1 , W2 , V1 and V2 being their alphabets. Then each encoding function can
be decomposed into three functions. For $x_1^N(m_1)$, the three functions are
$$w_1^N(M_{1c}) : \mathcal{M}_{1c} \mapsto \mathcal{W}_1^N, \quad v_1^N(M_{1p}) : \mathcal{M}_{1p} \mapsto \mathcal{V}_1^N \quad \text{and} \quad x_1'^N(W_1^N, V_1^N) : \mathcal{W}_1^N \times \mathcal{V}_1^N \mapsto \mathcal{X}_1^N. \quad (5)$$
Similarly, for $x_2^N(m_2)$, the three functions are
$$w_2^N(M_{2c}) : \mathcal{M}_{2c} \mapsto \mathcal{W}_2^N, \quad v_2^N(M_{2p}) : \mathcal{M}_{2p} \mapsto \mathcal{V}_2^N \quad \text{and} \quad x_2'^N(W_2^N, V_2^N) : \mathcal{W}_2^N \times \mathcal{V}_2^N \mapsto \mathcal{X}_2^N. \quad (6)$$
With this approach, Han and Kobayashi established the best achievable rate region for the
general IC to date [34]. The result is summarized in Theorem 1.
Theorem 1 ([34], [49]). Let $\mathcal{P}^*$ be the set of probability distributions $P^*(\cdot)$ that factor as
$$P^*(q, v_1, v_2, w_1, w_2, x_1, x_2) = P_Q(q)P_{V_1|Q}(v_1|q)P_{V_2|Q}(v_2|q)P_{W_1|Q}(w_1|q)P_{W_2|Q}(w_2|q) \times P_{X_1|V_1W_1Q}(x_1|v_1, w_1, q)P_{X_2|V_2W_2Q}(x_2|v_2, w_2, q), \quad (7)$$
where $Q \in \mathcal{Q}$ is the time-sharing parameter, and $P_{X_1|V_1W_1Q}(\cdot)$ and $P_{X_2|V_2W_2Q}(\cdot)$ equal either 0 or 1, i.e., they are deterministic functions. For a fixed $P^*(\cdot) \in \mathcal{P}^*$, consider Receiver 1 and the set of non-negative rate-tuples $(S_1, T_1, S_2, T_2)$, denoted by $\mathcal{R}^{o,1}_{HK}(P^*)$, that satisfy
$$0 \le S_1 \le I(V_1; Y_1|W_1W_2Q), \quad (8)$$
$$0 \le T_1 \le I(W_1; Y_1|V_1W_2Q), \quad (9)$$
$$0 \le T_2 \le I(W_2; Y_1|V_1W_1Q), \quad (10)$$
$$S_1 + T_1 \le I(V_1W_1; Y_1|W_2Q), \quad (11)$$
$$S_1 + T_2 \le I(V_1W_2; Y_1|W_1Q), \quad (12)$$
$$T_1 + T_2 \le I(W_1W_2; Y_1|V_1Q), \quad (13)$$
$$S_1 + T_1 + T_2 \le I(V_1W_1W_2; Y_1|Q). \quad (14)$$
Similarly, let $\mathcal{R}^{o,2}_{HK}(P^*)$ be the set of non-negative rate-tuples $(S_1, T_1, S_2, T_2)$ that satisfy (8)–(14) with indices 1 and 2 swapped everywhere. For a set $\mathcal{S}$ of 4-tuples $(S_1, T_1, S_2, T_2)$, let $\mathcal{R}(\mathcal{S})$ be the set of $(R_1, R_2)$ such that $0 \le R_1 \le S_1 + T_1$ and $0 \le R_2 \le S_2 + T_2$ for some $(S_1, T_1, S_2, T_2) \in \mathcal{S}$. Then we have that
$$\mathcal{R}^{o}_{HK} = \bigcup_{P^* \in \mathcal{P}^*} \mathcal{R}\Big( \mathcal{R}^{o,1}_{HK}(P^*) \cap \mathcal{R}^{o,2}_{HK}(P^*) \Big) \quad (15)$$
is an achievable rate region for the DM-IC.
The original Han-Kobayashi scheme can be classified into the homogeneous superposition
coding scheme [41], in which the component messages of each sender are independently encoded
into auxiliary sequences and then mapped to the channel input sequence by some symbol-by-symbol deterministic function. The scheme of [39] belongs to this type. Another variant of
superposition coding is the heterogeneous superposition coding [41], introduced by Bergmans
[50]. In this variant, the coarse messages are encoded into auxiliary sequences first, and then
a satellite codebook for the fine message is generated around it conditionally independently.
Usually the heterogeneous variant is simpler than the homogeneous one since it requires fewer
ARVs. Reference [35] presented a simplified description of the Han-Kobayashi region based on this
approach (referred to as the Chong-Motani-Garg scheme in this paper), in which only three ARVs
are used and no deterministic functions are needed. Their result is summarized in Theorem 2.
Theorem 2 ([35], [49]). Let $\mathcal{P}_1^*$ be the set of probability distributions $P_1^*(\cdot)$ that factor as
$$P_1^*(q, w_1, w_2, x_1, x_2) = P_Q(q)P_{X_1W_1|Q}(x_1, w_1|q)P_{X_2W_2|Q}(x_2, w_2|q), \quad (16)$$
where $|\mathcal{W}_j| \le |\mathcal{X}_j| + 4$ for $j = 1, 2$, and $|\mathcal{Q}| \le 6$. For a fixed $P_1^*(\cdot) \in \mathcal{P}_1^*$, let $\mathcal{R}_{HK}(P_1^*)$ be the set of $(R_1, R_2)$ satisfying
$$0 \le R_1 \le I(X_1; Y_1|W_2Q) \triangleq a, \quad (17)$$
$$0 \le R_2 \le I(X_2; Y_2|W_1Q) \triangleq b, \quad (18)$$
$$R_1 + R_2 \le I(X_1W_2; Y_1|Q) + I(X_2; Y_2|W_1W_2Q) \triangleq c, \quad (19)$$
$$R_1 + R_2 \le I(X_1; Y_1|W_1W_2Q) + I(X_2W_1; Y_2|Q) \triangleq d, \quad (20)$$
$$R_1 + R_2 \le I(X_1W_2; Y_1|W_1Q) + I(X_2W_1; Y_2|W_2Q) \triangleq e, \quad (21)$$
$$2R_1 + R_2 \le I(X_1W_2; Y_1|Q) + I(X_1; Y_1|W_1W_2Q) + I(X_2W_1; Y_2|W_2Q) \triangleq f, \quad (22)$$
$$R_1 + 2R_2 \le I(X_2; Y_2|W_1W_2Q) + I(X_2W_1; Y_2|Q) + I(X_1W_2; Y_1|W_1Q) \triangleq g. \quad (23)$$
Then we have that
$$\mathcal{R}_{HK} = \bigcup_{P_1^* \in \mathcal{P}_1^*} \mathcal{R}_{HK}(P_1^*) \quad (24)$$
is an achievable rate region for the DM-IC.
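For readers who want to evaluate $\mathcal{R}_{HK}(P_1^*)$ numerically, the bounds (17)–(23) reduce to conditional mutual informations of the joint distribution in (16) with $Q$ fixed. The Python sketch below is only an illustration: the toy channel and the distributions $P_{X_1W_1}$ and $P_{X_2W_2}$ are our own assumptions, not quantities specified by the paper.

```python
import numpy as np
from itertools import product

# Axis order of the joint pmf array: (w1, w2, x1, x2, y1, y2)
W1, W2, X1, X2, Y1, Y2 = range(6)

def H(p, axes):
    """Joint entropy (bits) of the variables on the given axes of pmf array p."""
    drop = tuple(i for i in range(p.ndim) if i not in axes)
    q = p.sum(axis=drop).ravel()
    q = q[q > 0]
    return float(-(q * np.log2(q)).sum())

def I(p, A, B, C=()):
    """Conditional mutual information I(A;B|C) from the joint pmf array p."""
    A, B, C = tuple(A), tuple(B), tuple(C)
    hc = H(p, C) if C else 0.0
    return H(p, A + C) + H(p, B + C) - H(p, A + B + C) - hc

# --- assumed toy inputs (binary alphabets, Q fixed to a single value) ---
p_x1w1 = np.array([[0.40, 0.10], [0.10, 0.40]])   # P_{X1 W1}(x1, w1)
p_x2w2 = np.array([[0.35, 0.15], [0.15, 0.35]])   # P_{X2 W2}(x2, w2)
chan = np.zeros((2, 2, 2, 2))                      # chan[y1, y2, x1, x2] = P(y1,y2|x1,x2)
for x1, x2, y1, y2 in product(range(2), repeat=4):
    chan[y1, y2, x1, x2] = (0.9 if y1 == (x1 ^ x2) else 0.1) * (0.8 if y2 == x2 else 0.2)

p = np.zeros((2,) * 6)
for w1, w2, x1, x2, y1, y2 in product(range(2), repeat=6):
    p[w1, w2, x1, x2, y1, y2] = p_x1w1[x1, w1] * p_x2w2[x2, w2] * chan[y1, y2, x1, x2]

a = I(p, [X1], [Y1], [W2])                                                            # (17)
b = I(p, [X2], [Y2], [W1])                                                            # (18)
c = I(p, [X1, W2], [Y1]) + I(p, [X2], [Y2], [W1, W2])                                 # (19)
d = I(p, [X1], [Y1], [W1, W2]) + I(p, [X2, W1], [Y2])                                 # (20)
e = I(p, [X1, W2], [Y1], [W1]) + I(p, [X2, W1], [Y2], [W2])                           # (21)
f = I(p, [X1, W2], [Y1]) + I(p, [X1], [Y1], [W1, W2]) + I(p, [X2, W1], [Y2], [W2])    # (22)
g = I(p, [X2], [Y2], [W1, W2]) + I(p, [X2, W1], [Y2]) + I(p, [X1, W2], [Y1], [W1])    # (23)
print([round(v, 4) for v in (a, b, c, d, e, f, g)])
```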
It is shown in [35] that the regions described in Theorem 1 and 2 are equivalent, and
constraints (9), (10) and (13) and their counterparts for the second receiver are unnecessary.
It is straightforward to see that RoHK ⊆ RHK by using Fourier-Motzkin elimination [35], but
to prove the converse, we will need [35, Lemma 2], which states that $\mathcal{R}_{HK}(P_1^*) \subseteq \mathcal{R}^{o}_{HK}(P^*) \cup \mathcal{R}^{o}_{HK}(P^{**}) \cup \mathcal{R}^{o}_{HK}(P^{***})$, where
$$P_1^*(q, w_1, w_2, x_1, x_2) = \sum_{v_1 \in \mathcal{V}_1, v_2 \in \mathcal{V}_2} P^*(q, v_1, v_2, w_1, w_2, x_1, x_2), \quad P^{**} = \sum_{w_1 \in \mathcal{W}_1} P^*, \quad P^{***} = \sum_{w_2 \in \mathcal{W}_2} P^*.$$
This indicates that to achieve $\mathcal{R}_{HK}(P_1^*)$ for a given $P_1^*$ with the scheme of [39], one generally will need to use three codes designed for different joint distributions. In this paper, we aim to design a heterogeneous superposition polar coding scheme to achieve $\mathcal{R}_{HK}(P_1^*)$ directly.
C. Partially Joint Decoding for the 2-User DM-IC
To achieve the whole Han-Kobayashi region, both superposition coding variants require joint
decoding of all component messages at each receiver, which we refer to as fully joint decoding.
For the homogeneous variant, fully joint decoding can be realized by polar codes using MAC
polarization techniques since each component message is independently encoded, as [39] has
adopted. For the heterogeneous variant, however, fully joint decoding may not be easily implemented using polar codes and practical decoding algorithms (such as SCD), as the coarse
message and the fine message are encoded sequentially. When decoding the fine message in
a heterogeneous superposition polar coding scheme (such as [18]), the estimate of the coarse
message is required as side information. To design a polar coding scheme with practical decoding
algorithm that can achieve RHK (P1∗ ) directly, we propose and prove two types of partially joint
decoding orders.
Definition 4 (Partially joint decoding). The two types of partially joint decoding are defined as:
• (Type I) a receiver jointly decodes the two senders’ common messages first, and then decodes its private message with the estimates of the common messages;
• (Type II) a receiver decodes its intended common message first, and then jointly decodes the unintended common message and its private message with the estimate of the intended common message.
Theorem 3. Let $\mathcal{R}^{Par}_1(P_1^*)$ be the achievable rate region of the DM-IC when both receivers use the Type I partially joint decoding, and $\mathcal{R}^{Par}_2(P_1^*)$ (resp. $\mathcal{R}^{Par}_3(P_1^*)$) the region when Receiver 1 (resp. Receiver 2) adopts Type I while Receiver 2 (resp. Receiver 1) applies Type II. Define $\mathcal{R}^{Par}(P_1^*) = \mathcal{R}^{Par}_1(P_1^*) \cup \mathcal{R}^{Par}_2(P_1^*) \cup \mathcal{R}^{Par}_3(P_1^*)$. Then we have
$$\mathcal{R}^{Par}(P_1^*) = \mathcal{R}_{HK}(P_1^*). \quad (25)$$
Proof. See Appendix A.
Remark 1. It is worth noting that we do not consider the case when both receivers use the Type
II partially joint decoding. This is because the Han-Kobayashi region can already be covered
by the other three decoding strategies. In fact, one can easily verify that the achievable rate
region in this case can also be achieved by at least one of the other three strategies since the
upper bounds on the common message rates (Rkc ≤ I(Wk ; Yk |Q) for k = 1, 2) are non-optimal.
This explains why in our proposed polar coding scheme in Section IV we do not need such an
approach either.
Remark 2. The reasons why the fully joint decoder is hard to design are twofold, the decoding
algorithm and the code structure. Existing polar codes are optimized for SCD, which is sequential
in nature. To design a joint decoding scheme using SCD, one has to use methods similar to the
permutation based MAC polarization – mixing different users’ sequences of random variables
into a single one and then decoding them together. However, in the heterogeneous scheme, Wk
and Xk (k = 1, 2) are correlated. If we try to apply this method, the induced random process
will have a complicated memory. Although there have been studies on polarization for processes
with memory [42], [51], [52], existing results are still far from handling such a problem. If we
want to realize genuine fully joint decoding (e.g., using maximum-likelihood (ML) or ML-like
decoding), then the corresponding structure of codes should also be optimized for this decoding
algorithm (we cannot use the same code structure optimized for SCD and just switch to ML
decoding, as the achievable rate region of the scheme remains the same). However, neither the
construction complexity nor the decoding complexity is affordable.
III. POLAR CODING PRELIMINARIES
A. Polar Coding for Lossless Source Compression
First, let us recap the lossless source polarization scheme introduced in [2] and generalized to
arbitrary alphabet in [42]. Let (X, Y ) ∼ pX,Y be a pair of random variables over (X × Y) with
$|\mathcal{X}| = q_X$ being a prime number¹. Consider $X$ as the memoryless source to be compressed and $Y$ as side information of $X$. Let $U^{1:N} = X^{1:N}G_N$. As $N$ goes to infinity, $U^j$ ($j \in [N]$) becomes either almost independent of $(Y^{1:N}, U^{1:j-1})$ and uniformly distributed, or almost determined by $(Y^{1:N}, U^{1:j-1})$ [2]. Define the following sets of polarized indices:
$$\mathcal{H}_{X|Y}^{(N)} = \{j \in [N] : H(U^j|Y^{1:N}, U^{1:j-1}) \ge \log_2(q_X) - \delta_N\}, \quad (26)$$
$$\mathcal{L}_{X|Y}^{(N)} = \{j \in [N] : H(U^j|Y^{1:N}, U^{1:j-1}) \le \delta_N\}. \quad (27)$$
From [20], [42] we have
$$\lim_{N \to \infty} \frac{1}{N}\big|\mathcal{H}_{X|Y}^{(N)}\big| = H_{q_X}(X|Y), \quad \lim_{N \to \infty} \frac{1}{N}\big|\mathcal{L}_{X|Y}^{(N)}\big| = 1 - H_{q_X}(X|Y). \quad (28)$$
With $U^{(\mathcal{L}_{X|Y}^{(N)})^C}$ and $Y^{1:N}$, $X^{1:N}$ can be recovered at arbitrarily low error probability given sufficiently large $N$.
The compression of a single source $X$ can be seen as a special case of the above one by letting $Y = \emptyset$.
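The sets in (26)–(27) can be computed exactly for small $N$ by brute-force marginalization. The sketch below is only an illustration: the binary source observed through a BSC and the fixed threshold are our assumptions (the paper's $\delta_N = 2^{-N^\beta}$ is an asymptotic quantity), and the index set names are ours.

```python
import numpy as np
from itertools import product

def gen_matrix(n):
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F) % 2
    N = 2 ** n
    rev = [int(format(i, f"0{n}b")[::-1], 2) for i in range(N)]
    return G[rev, :]

n, N = 3, 8
G = gen_matrix(n)
p = 0.11        # X i.i.d. uniform, Y the output of a BSC(p) with input X (toy model)
delta = 0.10    # illustrative threshold replacing delta_N

# joint pmf of (u^{1:N}, y^{1:N}) with u = x G_N
joint = {}
for x in product((0, 1), repeat=N):
    u = tuple(int(v) for v in (np.array(x, dtype=np.uint8) @ G % 2))
    for y in product((0, 1), repeat=N):
        flips = sum(xi != yi for xi, yi in zip(x, y))
        prob = (0.5 ** N) * (p ** flips) * ((1 - p) ** (N - flips))
        joint[(u, y)] = joint.get((u, y), 0.0) + prob

def cond_entropy_of_Uj(j):
    """H(U^j | Y^{1:N}, U^{1:j-1}) in bits, by exhaustive marginalization."""
    ctx = {}
    for (u, y), pr in joint.items():
        key = (u[:j - 1], y)
        ctx.setdefault(key, [0.0, 0.0])[u[j - 1]] += pr
    h = 0.0
    for probs in ctx.values():
        tot = sum(probs)
        for q in probs:
            if q > 0:
                h -= q * np.log2(q / tot)
    return h

H_vals = [cond_entropy_of_Uj(j) for j in range(1, N + 1)]
H_XY = [j for j, h in enumerate(H_vals, 1) if h >= 1 - delta]   # eq. (26), q_X = 2
L_XY = [j for j, h in enumerate(H_vals, 1) if h <= delta]       # eq. (27)
print(H_vals, H_XY, L_XY)
```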
B. Polar Coding for Arbitrary Discrete Memoryless Channels
Polar codes were originally developed for symmetric channels. By invoking results in source
polarization, one can construct polar codes for asymmetric channels without alphabet extension,
as introduced in [45]. However, the scheme of [45] requires the encoder and the decoder to share
a large amount of random mappings, which raises a practical concern of not being explicit. In
[18], [20], [46], [56], deterministic mappings are used to replace (part of) the random mappings
so as to reduce the amount of shared randomness needed. Next, we briefly review the method
of [20], [46]², which only requires a vanishing rate of shared randomness.

¹ Although for composite $q_X$, polarization can also happen if we use some special types of operations instead of the group operation [42], [53]–[55], we only consider the prime number case in this paper for simplicity.
² We note that the common message encoding scheme in [20] (consider the special case when there is no Eve and no chaining scheme) and the scheme in [46] share the same essence, although there is a slight difference in the partition scheme for information and frozen symbols (see (11) of [20] and (10) of [46]), and reference [46] uses deterministic rules for some symbols while reference [20] uses random rules.
Let $W(Y|X)$ be a discrete memoryless channel with a $q_X$-ary input alphabet $\mathcal{X}$, where $q_X$ is a prime number. Let $U^{1:N} = X^{1:N}G_N$ and define $\mathcal{H}_X^{(N)}$ and $\mathcal{H}_{X|Y}^{(N)}$ as in (26), and $\mathcal{L}_{X|Y}^{(N)}$ as in (27). Define the information set, frozen set and almost deterministic set respectively as follows:
$$\mathcal{I} \triangleq \mathcal{H}_X^{(N)} \cap \mathcal{L}_{X|Y}^{(N)}, \quad (29)$$
$$\mathcal{F}_r \triangleq \mathcal{H}_X^{(N)} \cap (\mathcal{L}_{X|Y}^{(N)})^C, \quad (30)$$
$$\mathcal{F}_d \triangleq (\mathcal{H}_X^{(N)})^C. \quad (31)$$
The encoding procedure goes as follows: $\{u^j\}_{j \in \mathcal{I}}$ carry information, $\{u^j\}_{j \in \mathcal{F}_r}$ are filled with uniformly distributed frozen symbols (shared between the sender and the receiver), and $\{u^j\}_{j \in \mathcal{F}_d}$ are randomly generated according to the conditional probability $P_{U^j|U^{1:j-1}}(u|u^{1:j-1})$. To guarantee reliable decoding, $\{u^j\}_{j \in (\mathcal{H}_X^{(N)})^C \cap (\mathcal{L}_{X|Y}^{(N)})^C}$ are separately transmitted to the receiver with some reliable error-correcting code, the rate of which vanishes as $N$ goes large [20]. Since $\{u^j\}_{j \in \mathcal{F}_r}$ only need to be uniformly distributed, they can be the same in different blocks. Thus, the rate of frozen symbols in this scheme can also be made negligible by reusing them over a sufficient number of blocks.
After receiving $y^{1:N}$ and the recovered $\{u^j\}_{j \in (\mathcal{H}_X^{(N)})^C \cap (\mathcal{L}_{X|Y}^{(N)})^C}$, the receiver computes the estimate $\bar{u}^{1:N}$ of $u^{1:N}$ with a SCD as
$$\bar{u}^j = \begin{cases} u^j, & \text{if } j \in (\mathcal{L}_{X|Y}^{(N)})^C, \\ \arg\max_{u \in \mathcal{X}} P_{U^j|Y^{1:N}U^{1:j-1}}(u|y^{1:N}, u^{1:j-1}), & \text{if } j \in \mathcal{L}_{X|Y}^{(N)}. \end{cases} \quad (32)$$
It is shown that the rate of this scheme, $R = |\mathcal{I}|/N$, satisfies [45]
$$\lim_{N \to \infty} R = I(X;Y). \quad (33)$$
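The partition (29)–(31) and the resulting rate (33) amount to a few set operations once the polarized sets are available. A minimal sketch, with hypothetical index sets standing in for the output of a construction algorithm:

```python
# Illustrative only: H_X and L_XY are assumed to have been computed already
# (e.g., as in the previous sketch), not taken from the paper.
N = 8
H_X  = {4, 6, 7, 8}          # hypothetical high-entropy indices of U^j given U^{1:j-1}
L_XY = {2, 4, 6, 7, 8}       # hypothetical low-entropy indices given (Y^{1:N}, U^{1:j-1})

I  = H_X & L_XY                               # information set, eq. (29)
Fr = H_X - L_XY                               # frozen set, eq. (30)
Fd = set(range(1, N + 1)) - H_X               # almost deterministic set, eq. (31)
D  = Fd - L_XY                                # symbols transmitted separately at vanishing rate
rate = len(I) / N                             # approaches I(X;Y) as N grows, eq. (33)
print(I, Fr, Fd, D, rate)
```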
C. Polar Coding for Multiple Access Channels
Let PY |X1 X2 (y|x1 , x2 ) be the transition probability of a discrete memoryless 2-user MAC,
where x1 ∈ X1 with |X1 | = qX1 and x2 ∈ X2 with |X2 | = qX2 . For a fixed product distribution
of $P_{X_1}(x_1)P_{X_2}(x_2)$, the achievable rate region of $P_{Y|X_1X_2}$ is given by [57]
$$\mathcal{R}(P_{Y|X_1X_2}) \triangleq \left\{ (R_1, R_2) : \begin{array}{l} 0 \le R_1 \le I(X_1; Y|X_2), \\ 0 \le R_2 \le I(X_2; Y|X_1), \\ R_1 + R_2 \le I(X_1, X_2; Y) \end{array} \right\}. \quad (34)$$
Polar coding for MACs has been studied in [5], [14]–[17]. Although [17] provides a more general scheme that can achieve the whole uniform rate region of an m-user (m ≥ 2) MAC, in
our scheme, we adopt the monotone chain rule expansion method in [5] because it has simple
structure and possesses similar complexity to the single-user polar codes. Reference [5] mainly
deals with the Slepian-Wolf problem in source coding, but the method can be readily applied to
the problem of coding for the 2-user MAC since they are dual problems, which has been studied
in [14] and used in [39]. However, both [14] and [39] consider uniform channel inputs. Here
we generalize it to arbitrary input case with the approach of the previous subsection. Note that
although the input alphabets of the two users can be different, the extension is straightforward
since there is no polarization operation between the two channel inputs. For simplicity, we
assume $q_{X_1}$ and $q_{X_2}$ are prime numbers. Define
$$U_1^{1:N} = X_1^{1:N}G_N, \quad U_2^{1:N} = X_2^{1:N}G_N. \quad (35)$$
Let $S^{1:2N}$ be a permutation of $U_1^{1:N}U_2^{1:N}$ such that it preserves the relative order of the elements of both $U_1^{1:N}$ and $U_2^{1:N}$, called a monotone chain rule expansion. Such an expansion can be represented by a string $b^{2N} = b_1b_2\cdots b_{2N}$, called the path of the expansion, where $b_j = 0$ ($j \in [2N]$) represents that $S^j \in U_1^{1:N}$, and $b_j = 1$ represents that $S^j \in U_2^{1:N}$. Then we have
$$I(Y^{1:N}; U_1^{1:N}, U_2^{1:N}) = H(U_1^{1:N}, U_2^{1:N}) - H(U_1^{1:N}, U_2^{1:N}|Y^{1:N}) = NH(X_1) + NH(X_2) - \sum_{j=1}^{2N} H(S^j|Y^{1:N}, S^{1:j-1}).$$
It is shown in [5] that $H(S^j|Y^{1:N}, S^{1:j-1})$ ($j \in [2N]$) polarizes to 0 or 1 as $N$ goes to infinity³. Define the rates of the two users as
$$R_{U_1} = H(X_1) - \frac{1}{N}\sum_{j \in \mathcal{S}_{U_1}} H(S^j|Y^{1:N}, S^{1:j-1}), \quad R_{U_2} = H(X_2) - \frac{1}{N}\sum_{j \in \mathcal{S}_{U_2}} H(S^j|Y^{1:N}, S^{1:j-1}), \quad (36)$$
respectively, where $\mathcal{S}_{U_1} \triangleq \{j \in [2N] : b_j = 0\}$ and $\mathcal{S}_{U_2} \triangleq \{j \in [2N] : b_j = 1\}$.

Proposition 1 ([5]). Let $(R_1, R_2)$ be a rate pair on the dominant face of $\mathcal{R}(P_{Y|X_1X_2})$. For any given $\epsilon > 0$, there exist $N$ and a chain rule $b^{2N}$ on $U_1^{1:N}U_2^{1:N}$ such that $b^{2N}$ is of the form $0^i1^N0^{N-i}$ ($0 \le i \le N$) and has a rate pair $(R_{U_1}, R_{U_2})$ satisfying
$$|R_1 - R_{U_1}| \le \epsilon \quad \text{and} \quad |R_2 - R_{U_2}| \le \epsilon. \quad (37)$$
³ The entropy here is calculated adaptively. If $j \in \mathcal{S}_{U_k}$ ($k = 1, 2$), then the entropy is calculated with $q_{X_k}$-based logarithm.
Although the permutations can have many variants, even non-monotone ones [17], Proposition 1 shows that expansions of type $0^i1^N0^{N-i}$ ($0 \le i \le N$) are sufficient to achieve every point on the dominant face of $\mathcal{R}(P_{Y|X_1X_2})$ given sufficiently large $N$, which can make our code design and construction simpler. To polarize a MAC sufficiently while keeping the above rate approximation intact, we need to scale the path. For any integer $l = 2^m$, let $lb^{2N}$ denote
$$\underbrace{b_1 \cdots b_1}_{l}\,\underbrace{b_2 \cdots b_2}_{l}\,\cdots\cdots\,\underbrace{b_{2N} \cdots b_{2N}}_{l},$$
which is a monotone chain rule for $U_1^{1:lN}U_2^{1:lN}$. It is shown in [5] that the rate pair for $b^{2N}$ is also the rate pair for $lb^{2N}$.
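The path bookkeeping used throughout the rest of the paper (the sets $\mathcal{S}_{U_1}, \mathcal{S}_{U_2}$, the maps $f_k$, and the scaled path $lb^{2N}$) is purely combinatorial. A small sketch with our own helper names:

```python
def make_path(N, i):
    """The path b^{2N} = 0^i 1^N 0^{N-i} from Proposition 1."""
    return [0] * i + [1] * N + [0] * (N - i)

def index_sets(path):
    """S_{U1}, S_{U2} and the maps f_1, f_2 from user indices to positions in S^{1:2N}."""
    S_U1 = [j for j, b in enumerate(path, 1) if b == 0]
    S_U2 = [j for j, b in enumerate(path, 1) if b == 1]
    f1 = {r: pos for r, pos in enumerate(S_U1, 1)}
    f2 = {r: pos for r, pos in enumerate(S_U2, 1)}
    return S_U1, S_U2, f1, f2

def scale_path(path, l):
    """The scaled path l*b^{2N}: each b_j repeated l times."""
    return [b for b in path for _ in range(l)]

N, i = 4, 1
b = make_path(N, i)                  # [0, 1, 1, 1, 1, 0, 0, 0]
S_U1, S_U2, f1, f2 = index_sets(b)
b_scaled = scale_path(b, 2)          # monotone chain rule for U_1^{1:2N} U_2^{1:2N}
print(S_U1, S_U2, f1, f2, b_scaled)
```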
Now we can construct a polar code for the 2-user MAC with arbitrary inputs. Let $f_k(j) : [N] \to \mathcal{S}_{U_k}$ ($k = 1, 2$) be the mapping from indices of $U_k^{1:N}$ to those of $S^{\mathcal{S}_{U_k}}$. Define
$$\mathcal{H}_{S_{U_k}}^{(N)} \triangleq \{j \in [N] : H(S^{f_k(j)}|S^{1:f_k(j)-1}) \ge \log_2(q_{X_k}) - \delta_N\},$$
$$\mathcal{L}_{S_{U_k}|Y}^{(N)} \triangleq \{j \in [N] : H(S^{f_k(j)}|Y^{1:N}, S^{1:f_k(j)-1}) \le \delta_N\}, \quad (38)$$
which satisfy
$$\lim_{N \to \infty} \frac{1}{N}\big|\mathcal{H}_{S_{U_k}}^{(N)}\big| = \frac{1}{N}\sum_{j \in \mathcal{S}_{U_k}} H_{q_{X_k}}(S^j|S^{1:j-1}), \quad \lim_{N \to \infty} \frac{1}{N}\big|\mathcal{L}_{S_{U_k}|Y}^{(N)}\big| = 1 - \frac{1}{N}\sum_{j \in \mathcal{S}_{U_k}} H_{q_{X_k}}(S^j|Y^{1:N}, S^{1:j-1}). \quad (39)$$
Since $X_1$ and $X_2$ are independent, we have
$$\mathcal{H}_{S_{U_k}}^{(N)} = \mathcal{H}_{X_k}^{(N)} \triangleq \{j \in [N] : H(U_k^j|U_k^{1:j-1}) \ge \log_2(q_{X_k}) - \delta_N\}. \quad (40)$$
Partition user $k$'s ($k = 1, 2$) indices as
$$\mathcal{I}_k \triangleq \mathcal{H}_{S_{U_k}}^{(N)} \cap \mathcal{L}_{S_{U_k}|Y}^{(N)}, \quad \mathcal{F}_k^r \triangleq \mathcal{H}_{S_{U_k}}^{(N)} \cap (\mathcal{L}_{S_{U_k}|Y}^{(N)})^C, \quad \mathcal{F}_k^d \triangleq (\mathcal{H}_{S_{U_k}}^{(N)})^C. \quad (41)$$
Then each user can apply the same encoding scheme as in the single-user case. The receiver uses a SCD to decode the two users' information jointly according to the expansion order. The polarization result can be summarized as the following proposition.
Proposition 2 ([5]). Let $P_{Y|X_1X_2}(y|x_1, x_2)$ be the transition probability of a discrete memoryless 2-user MAC. Consider the transformation defined in (35). Let $N_0 = 2^{n_0}$ for some $n_0 \ge 1$ and fix a path $b^{2N_0}$ for $U_1^{1:N_0}U_2^{1:N_0}$. The rate pair for $b^{2N_0}$ is denoted by $(R_{U_1}, R_{U_2})$. Let $N = 2^lN_0$ for $l \ge 1$ and let $S^{1:2N}$ be the expansion represented by $2^lb^{2N_0}$. Then, for any given $\delta > 0$, as $l$ goes to infinity, we have (the entropy here is also calculated adaptively)
$$\frac{1}{2N}\big|\{1 \le j \le 2N : \delta < H(S^j|Y^{1:N}, S^{1:j-1}) < 1 - \delta\}\big| \to 0, \quad \frac{|\mathcal{I}_1|}{N} \to R_{U_1} \quad \text{and} \quad \frac{|\mathcal{I}_2|}{N} \to R_{U_2}. \quad (42)$$
Propositions 1 and 2 can be readily extended from Theorems 1 and 2 in [5] by considering $Y$ as side information of the source pair $(X_1, X_2)$ and performing the same analysis. Thus, we omit the proof here.

Fig. 1. Proposed heterogeneous superposition polar coding scheme for the 2-user DM-IC.
IV. AN OVERVIEW OF OUR NEW APPROACH
In this section, we introduce the main idea of our scheme. Since the purpose of introducing
the time-sharing parameter Q in Theorem 1 and 2 is to replace the convex-hull operation, in the
code design part, we will consider a fixed Q = q and drop this condition in the expressions for
simplicity.
Our proposed heterogeneous superposition polar coding scheme is illustrated in Fig. 1. Sender $k$ ($k = 1, 2$) splits its message $M_k$ into a private message $M_{kp}$ and a common message $M_{kc}$. Encoder $E_k^b$ maps $M_{kc}$ into a sequence $U_k'^{1:N}$ of length $N$, which goes into a polar encoder to generate an intermediate codeword $W_k^{1:N}$ (corresponding to ARV $W_k$ in Theorem 2). Encoder $E_k^a$ then maps $M_{kp}$ together with $W_k^{1:N}$ into $U_k^{1:N}$, which goes into another polar encoder to generate the final codeword $X_k^{1:N}$.
A. Synthesized MACs for Receivers
For a target rate pair $\mathsf{P}$, let $R_{kp}$ and $R_{kc}$ respectively denote the corresponding private and common message rates of Sender $k$ ($k = 1, 2$), and define $\mathsf{P}_1 \triangleq (R_{1p} + R_{1c}, R_{2c})$ and $\mathsf{P}_2 \triangleq (R_{1c}, R_{2p} + R_{2c})$ as Receiver 1's and Receiver 2's receiving rate pairs respectively. Furthermore, define $\mathsf{P}_c \triangleq (R_{1c}, R_{2c})$ as the common message rate pair. In the rest of this paper, we refer to $(R_{1p}, R_{1c}, R_{2p}, R_{2c})$ as a rate decomposition of $\mathsf{P}$.
For the purpose of decomposing a target rate pair into a private and common message rate tuple suitable for our partially joint decoding scheme, we first define the effective channel of each receiver. For Receiver 1, its effective channel, $P_{Y_1|X_1W_2}$, is defined as
$$P_{Y_1|X_1W_2}(y_1|x_1, w_2) \triangleq \sum_{x_2} P_{Y_1|X_1X_2}(y_1|x_1, x_2)P_{X_2|W_2Q}(x_2|w_2, q). \quad (43)$$
Similarly, the effective channel of Receiver 2 is defined as
$$P_{Y_2|W_1X_2}(y_2|w_1, x_2) \triangleq \sum_{x_1} P_{Y_2|X_1X_2}(y_2|x_1, x_2)P_{X_1|W_1Q}(x_1|w_1, q). \quad (44)$$
The achievable rate regions for these two MACs are
$$\mathcal{R}(P_{Y_1|X_1W_2}) = \left\{ (R_1, R_2) : \begin{array}{l} 0 \le R_1 \le I(X_1; Y_1|W_2), \\ 0 \le R_2 \le I(W_2; Y_1|X_1), \\ R_1 + R_2 \le I(X_1W_2; Y_1) \end{array} \right\}, \quad (45)$$
$$\mathcal{R}(P_{Y_2|W_1X_2}) = \left\{ (R_1, R_2) : \begin{array}{l} 0 \le R_1 \le I(W_1; Y_2|X_2), \\ 0 \le R_2 \le I(X_2; Y_2|W_1), \\ R_1 + R_2 \le I(X_2W_1; Y_2) \end{array} \right\}. \quad (46)$$
Now we can study the Han-Kobayashi coding problem in PY1 |X1 W2 and PY2 |W1 X2 . In these two
MACs, the rate of Xk (k = 1, 2) equals the overall rate of Sender k, while the rate of Wk equals
the common message rate of Sender k. Obviously, P1 and P2 must lie inside R(PY1 |X1 W2 ) and
R(PY2 |W1 X2 ) respectively in order to make reliable communication possible.
Giving only the two effective channels is insufficient to determine the suitable decoding order for a target rate pair. If we hope to use a partially joint decoder, the following two MACs, $P_{Y_1|W_1W_2}$ and $P_{Y_2|W_1W_2}$, will be useful. For $k = 1, 2$, define
$$P_{Y_k|W_1W_2}(y_k|w_1, w_2) \triangleq \sum_{x_1}\sum_{x_2} P_{Y_k|X_1X_2}(y_k|x_1, x_2)P_{X_1|W_1Q}(x_1|w_1, q)P_{X_2|W_2Q}(x_2|w_2, q), \quad (47)$$
the achievable rate region of which is
$$\mathcal{R}(P_{Y_k|W_1W_2}) = \left\{ (R_1, R_2) : \begin{array}{l} 0 \le R_1 \le I(W_1; Y_k|W_2), \\ 0 \le R_2 \le I(W_2; Y_k|W_1), \\ R_1 + R_2 \le I(W_1W_2; Y_k) \end{array} \right\}. \quad (48)$$
Fig. 2. Illustration for the achievable rate regions of the synthesized MACs.
The relations between the above four achievable rate regions are shown in Fig. 2. If the
common message rate pair lies inside R(PYk |W1 W2 ), then Receiver k can apply the Type I
partially joint decoding. Otherwise it will need to use the Type II one.
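In other words, the choice between Type I and Type II decoding at Receiver $k$ amounts to computing $\mathcal{R}(P_{Y_k|W_1W_2})$ from (47)–(48) and testing whether $(R_{1c}, R_{2c})$ lies inside it. The sketch below illustrates this test; the input distributions passed in are assumed toy values (not quantities specified by the paper), and the function name is ours.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def choose_decoder_type(chan, p_x1_given_w1, p_x2_given_w2, p_w1, p_w2, R1c, R2c, k):
    """Test (R1c, R2c) against R(P_{Yk|W1W2}) in (48). 'chan[y1,y2,x1,x2]' is the DM-IC;
    the conditionals P(x1|w1), P(x2|w2) and marginals P(w1), P(w2) are assumed inputs."""
    nY = chan.shape[k - 1]
    nW1, nW2 = len(p_w1), len(p_w2)
    joint = np.zeros((nW1, nW2, nY))                 # P(w1, w2, yk), cf. (47)
    for w1 in range(nW1):
        for w2 in range(nW2):
            for x1 in range(chan.shape[2]):
                for x2 in range(chan.shape[3]):
                    py = chan[:, :, x1, x2].sum(axis=1) if k == 1 else chan[:, :, x1, x2].sum(axis=0)
                    joint[w1, w2, :] += (p_w1[w1] * p_w2[w2] *
                                         p_x1_given_w1[x1, w1] * p_x2_given_w2[x2, w2]) * py
    H_w1w2y = entropy(joint)
    H_w1y = entropy(joint.sum(axis=1))
    H_w2y = entropy(joint.sum(axis=0))
    H_y = entropy(joint.sum(axis=(0, 1)))
    H_w1, H_w2 = entropy(p_w1), entropy(p_w2)
    # with W1 independent of W2:
    I_w1_y_given_w2 = H_w1 + H_w2y - H_w1w2y
    I_w2_y_given_w1 = H_w2 + H_w1y - H_w1w2y
    I_w1w2_y = H_w1 + H_w2 + H_y - H_w1w2y
    inside = (R1c <= I_w1_y_given_w2) and (R2c <= I_w2_y_given_w1) and (R1c + R2c <= I_w1w2_y)
    return "Type I" if inside else "Type II"
```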
B. The General Idea of Our Scheme
According to the two receivers’ different choices of partially joint decoding type, we define the following two types of points (rate pairs).
Definition 5 (Type A points). A Type A point $\mathsf{P}$ in $\mathcal{R}_{HK}(P_1^*)$ is a rate pair which can be decomposed into a private and common message rate tuple that satisfies:
$$(R_{1c}, R_{2c}) \in \mathcal{R}(P_{Y_1|W_1W_2}) \cap \mathcal{R}(P_{Y_2|W_1W_2}), \quad R_{1p} = I(X_1; Y_1|W_1W_2), \quad R_{2p} = I(X_2; Y_2|W_1W_2). \quad (49)$$
Definition 6 (Type B points). A Type B point $\mathsf{P}$ in $\mathcal{R}_{HK}(P_1^*)$ is a rate pair which can be decomposed into a private and common message rate tuple that satisfies:
$$(R_{1c}, R_{2c}) \in \mathcal{R}(P_{Y_k|W_1W_2}), \quad R_{k'c} \le I(W_{k'}; Y_{k'}), \quad R_{kp} = I(X_k; Y_k|W_1W_2), \quad R_{k'p} = I(X_{k'}W_k; Y_{k'}|W_{k'}) - R_{kc}, \quad (50)$$
where $k, k' \in \{1, 2\}$ and $k \neq k'$.
To achieve a Type A point $\mathsf{P}$, both receivers can apply the Type I partially joint decoding. We first design a MAC polar code for the two common messages that achieves $\mathsf{P}_c$ in the compound MAC composed of $P_{Y_1|W_1W_2}$ and $P_{Y_2|W_1W_2}$, and then design a point-to-point polar code for each sender's private message with the common messages being side information. To achieve a Type B point, one receiver applies the Type I partially joint decoding while the other applies Type II. Let us consider $k = 2$, $k' = 1$ as an example. The code constructions for the two common messages $(M_{1c}, M_{2c})$ and Sender 1's private message $M_{1p}$ are jointly designed in such a way that Receiver 1 can first decode $M_{1c}$ (equivalently $W_1^{1:N}$) with $Y_1^{1:N}$ and then jointly decode $(M_{1p}, M_{2c})$ with the estimate of $W_1^{1:N}$, while Receiver 2 can jointly decode $(M_{1c}, M_{2c})$ with $Y_2^{1:N}$. The code construction for Sender 2's private message $M_{2p}$ is simply a point-to-point polar code.
In Section II-C we have proved by random coding that partially joint decoding can achieve
the whole Han-Kobayashi region. The following lemma provides further evidence to support
this conclusion.
Lemma 1. Every point on the dominant faces of RHK (P1∗ ) can be classified into either Type A
or Type B.
Proof. See Appendix B.
V. PROPOSED POLAR CODING SCHEMES
In this section, we describe details of our proposed two types of polar coding schemes for the
2-user DM-IC. We consider the case when qX1 = |X1 | and qX2 = |X2 | are two prime numbers,
qW1 = |W1 | is the smallest prime number larger than qX1 + 4, and qW2 = |W2 | is the smallest
prime number larger than qX2 + 4. For a rate pair P, let P(1) and P(2) respectively denote its
first and second component.
A. Common Message Encoding
1) Partition Scheme for Type A Points: Let Pc = (R1c , R2c ) be the common message rate pair
for a Type A point P on a dominant face of RHK (P1∗ ). Obviously, Pc must lie on the dominant
face of either R(PY1 |W1 W2 ) or R(PY2 |W1 W2 ), otherwise we can choose a larger common message
rate pair to achieve higher rates. Without loss of generality, we assume that Pc is on the dominant
face of R(PY1 |W1 W2 ) in this subsection as an example.
First, choose a point $\tilde{\mathsf{P}}_c$ on the dominant face of $\mathcal{R}(P_{Y_2|W_1W_2})$ which is larger than $\mathsf{P}_c$, in the sense that $\tilde{\mathsf{P}}_c(1) \ge \mathsf{P}_c(1)$ and $\tilde{\mathsf{P}}_c(2) \ge \mathsf{P}_c(2)$, as the target point for conducting the monotone chain rule expansion in our code design. Let $S^{1:2N}$ be the monotone chain rule expansion that achieves $\mathsf{P}_c$ in $\mathcal{R}(P_{Y_1|W_1W_2})$, and $T^{1:2N}$ the expansion that achieves $\tilde{\mathsf{P}}_c$ in $\mathcal{R}(P_{Y_2|W_1W_2})$. Denote the sets of indices in $S^{1:2N}$ with $S^j \in U_1'^{1:N}$ and $S^j \in U_2'^{1:N}$ by $\mathcal{S}_{U_1'}$ and $\mathcal{S}_{U_2'}$ respectively, and those in $T^{1:2N}$ with $T^j \in U_1'^{1:N}$ and $T^j \in U_2'^{1:N}$ by $\mathcal{T}_{U_1'}$ and $\mathcal{T}_{U_2'}$ respectively. For $k = 1, 2$, let $f_k(j) : [N] \to \mathcal{S}_{U_k'}$ be the mapping from indices of $U_k'^{1:N}$ to those of $S^{\mathcal{S}_{U_k'}}$, and $g_k(j) : [N] \to \mathcal{T}_{U_k'}$ the mapping from indices of $U_k'^{1:N}$ to those of $T^{\mathcal{T}_{U_k'}}$. Define the following polarized sets:
$$\mathcal{H}^{(N)}_{S_{U_k'}} \triangleq \{j \in [N] : H(S^{f_k(j)}|S^{1:f_k(j)-1}) \ge \log_2(q_{W_k}) - \delta_N\},$$
$$\mathcal{L}^{(N)}_{S_{U_k'}|Y_1} \triangleq \{j \in [N] : H(S^{f_k(j)}|Y_1^{1:N}, S^{1:f_k(j)-1}) \le \delta_N\},$$
$$\mathcal{H}^{(N)}_{T_{U_k'}} \triangleq \{j \in [N] : H(T^{g_k(j)}|T^{1:g_k(j)-1}) \ge \log_2(q_{W_k}) - \delta_N\},$$
$$\mathcal{L}^{(N)}_{T_{U_k'}|Y_2} \triangleq \{j \in [N] : H(T^{g_k(j)}|Y_2^{1:N}, T^{1:g_k(j)-1}) \le \delta_N\}. \quad (51)$$
Since the two senders' common messages are independent of each other, we have
$$\mathcal{H}^{(N)}_{S_{U_k'}} = \mathcal{H}^{(N)}_{T_{U_k'}} = \mathcal{H}^{(N)}_{W_k}, \quad \text{where } \mathcal{H}^{(N)}_{W_k} \triangleq \{j \in [N] : H(U_k'^j|U_k'^{1:j-1}) \ge \log_2(q_{W_k}) - \delta_N\}.$$
Define the following sets of indices for Sender 1,
$$\mathcal{C}_1^1 \triangleq \mathcal{H}^{(N)}_{S_{U_1'}} \cap \mathcal{L}^{(N)}_{S_{U_1'}|Y_1}, \quad \mathcal{C}_1^2 \triangleq \mathcal{H}^{(N)}_{T_{U_1'}} \cap \mathcal{L}^{(N)}_{T_{U_1'}|Y_2}, \quad (52)$$
and similarly define $\mathcal{C}_2^1$ and $\mathcal{C}_2^2$ for Sender 2. From (42) we have
$$\lim_{N \to \infty} \frac{1}{N}|\mathcal{C}_1^1| = \mathsf{P}_c(1), \quad \lim_{N \to \infty} \frac{1}{N}|\mathcal{C}_1^2| = \tilde{\mathsf{P}}_c(1) \ge \mathsf{P}_c(1), \quad \lim_{N \to \infty} \frac{1}{N}|\mathcal{C}_2^1| = \mathsf{P}_c(2), \quad \lim_{N \to \infty} \frac{1}{N}|\mathcal{C}_2^2| = \tilde{\mathsf{P}}_c(2) \ge \mathsf{P}_c(2). \quad (53)$$
Choose an arbitrary subset of $\mathcal{C}_1^2 \setminus \mathcal{C}_1^1$, denoted as $\mathcal{C}_1^{21}$, such that $|\mathcal{C}_1^{21}| = |\mathcal{C}_1^1 \setminus \mathcal{C}_1^2|$, and an arbitrary subset of $\mathcal{C}_2^2 \setminus \mathcal{C}_2^1$, denoted as $\mathcal{C}_2^{21}$, such that $|\mathcal{C}_2^{21}| = |\mathcal{C}_2^1 \setminus \mathcal{C}_2^2|$. Partition the indices of $U_1'^{1:N}$ as follows:
$$\mathcal{I}_{1c} = \mathcal{C}_1^1 \cap \mathcal{C}_1^2, \quad \mathcal{I}_{1c}^1 = \mathcal{C}_1^1 \setminus \mathcal{C}_1^2, \quad \mathcal{I}_{1c}^2 = \mathcal{C}_1^{21}, \quad \mathcal{F}_{1r}' = \mathcal{H}^{(N)}_{W_1} \setminus (\mathcal{I}_{1c} \cup \mathcal{I}_{1c}^1 \cup \mathcal{I}_{1c}^2), \quad \mathcal{F}_{1d}' = (\mathcal{H}^{(N)}_{W_1})^C, \quad (54)$$
as shown in Fig. 3, and similarly define $\mathcal{I}_{2c}$, $\mathcal{I}_{2c}^1$, $\mathcal{I}_{2c}^2$, $\mathcal{F}_{2r}'$ and $\mathcal{F}_{2d}'$ for Sender 2.

Fig. 3. Graphical representation of the partition for $U_1'^{1:N}$ of Type A points.
2) Partition Scheme for Type B Points: Let $\mathsf{P}$ be a point of Type B, $\mathsf{P}_c$ be the corresponding common message rate pair, and $\mathsf{P}_1$ and $\mathsf{P}_2$ be Receiver 1's and Receiver 2's rate pairs respectively. Without loss of generality, we consider the case when $\mathsf{P}_c \in \mathcal{R}(P_{Y_2|W_1W_2}) \setminus \mathcal{R}(P_{Y_1|W_1W_2})$ and $\mathsf{P}_c(1) \le I(W_1; Y_1)$ in this subsection as an example. In this case, Receiver 1 applies the Type II partially joint decoding while Receiver 2 adopts Type I.
Choose $\bar{\mathsf{P}}_1 = \big(I(X_1W_2; Y_1) - \mathsf{P}_1(2), \mathsf{P}_1(2)\big)$, which is on the dominant face of $\mathcal{R}(P_{Y_1|X_1W_2})$ and larger than $\mathsf{P}_1$, and $\tilde{\mathsf{P}}_c = \big(I(W_1W_2; Y_2) - \mathsf{P}_c(2), \mathsf{P}_c(2)\big)$, which is on the dominant face of $\mathcal{R}(P_{Y_2|W_1W_2})$ and larger than $\mathsf{P}_c$, as the target points for conducting monotone chain rule expansions in our code design. Let $S^{1:2N}$ be the monotone chain rule expansion that achieves $\bar{\mathsf{P}}_1$ in $\mathcal{R}(P_{Y_1|X_1W_2})$, and $T^{1:2N}$ the expansion that achieves $\tilde{\mathsf{P}}_c$ in $\mathcal{R}(P_{Y_2|W_1W_2})$. Denote the sets of indices in $S^{1:2N}$ with $S^j \in U_1^{1:N}$ and $S^j \in U_2'^{1:N}$ by $\mathcal{S}_{U_1}$ and $\mathcal{S}_{U_2'}$ respectively, and those in $T^{1:2N}$ with $T^j \in U_1'^{1:N}$ and $T^j \in U_2'^{1:N}$ by $\mathcal{T}_{U_1'}$ and $\mathcal{T}_{U_2'}$ respectively. Let $f_1(j) : [N] \to \mathcal{S}_{U_1}$ be the mapping from indices of $U_1^{1:N}$ to those of $S^{\mathcal{S}_{U_1}}$, $f_2(j) : [N] \to \mathcal{S}_{U_2'}$ the mapping from indices of $U_2'^{1:N}$ to those of $S^{\mathcal{S}_{U_2'}}$, and $g_k(j) : [N] \to \mathcal{T}_{U_k'}$ the mapping from indices of $U_k'^{1:N}$ to those of $T^{\mathcal{T}_{U_k'}}$ for $k = 1, 2$. Define $\mathcal{H}^{(N)}_{W_1}$, $\mathcal{H}^{(N)}_{W_2}$, $\mathcal{H}^{(N)}_{S_{U_2'}}$, $\mathcal{H}^{(N)}_{T_{U_1'}}$, $\mathcal{H}^{(N)}_{T_{U_2'}}$, $\mathcal{L}^{(N)}_{S_{U_2'}|Y_1}$, $\mathcal{L}^{(N)}_{T_{U_1'}|Y_2}$ and $\mathcal{L}^{(N)}_{T_{U_2'}|Y_2}$ in the same way as in the Type A case, and additionally define
$$\mathcal{L}^{(N)}_{W_1|Y_1} \triangleq \{j \in [N] : H(U_1'^j|Y_1^{1:N}, U_1'^{1:j-1}) \le \delta_N\}. \quad (55)$$
Define the following sets of indices for the two senders:
$$\mathcal{C}_1' \triangleq \mathcal{H}^{(N)}_{W_1} \cap \mathcal{L}^{(N)}_{W_1|Y_1}, \quad \mathcal{C}_1'' \triangleq \mathcal{H}^{(N)}_{W_1} \cap \mathcal{L}^{(N)}_{T_{U_1'}|Y_2}, \quad \mathcal{C}_2^1 \triangleq \mathcal{H}^{(N)}_{W_2} \cap \mathcal{L}^{(N)}_{S_{U_2'}|Y_1}, \quad \mathcal{C}_2^2 \triangleq \mathcal{H}^{(N)}_{W_2} \cap \mathcal{L}^{(N)}_{T_{U_2'}|Y_2}, \quad (56)$$
which satisfy
$$\lim_{N \to \infty} \frac{1}{N}|\mathcal{C}_1'| = I(W_1; Y_1) \ge \mathsf{P}_c(1), \quad \lim_{N \to \infty} \frac{1}{N}|\mathcal{C}_1''| = \tilde{\mathsf{P}}_c(1) \ge \mathsf{P}_c(1),$$
$$\lim_{N \to \infty} \frac{1}{N}|\mathcal{C}_2^1| = \bar{\mathsf{P}}_1(2) = \mathsf{P}_c(2), \quad \lim_{N \to \infty} \frac{1}{N}|\mathcal{C}_2^2| = \tilde{\mathsf{P}}_c(2) = \mathsf{P}_c(2).$$
If $\mathsf{P}_c(1) = I(W_1; Y_1)$, let $\mathcal{C}_1^1 = \mathcal{C}_1'$. Otherwise choose a subset $\mathcal{C}_1^1$ of $\mathcal{C}_1'$ such that $|\mathcal{C}_1^1| = N\mathsf{P}_c(1)$. Similarly, if $\tilde{\mathsf{P}}_c(1) = \mathsf{P}_c(1)$, let $\mathcal{C}_1^2 = \mathcal{C}_1''$. Otherwise choose a subset $\mathcal{C}_1^2 \subset \mathcal{C}_1''$ such that $|\mathcal{C}_1^2| = N\mathsf{P}_c(1)$. Partition the indices of $U_1'^{1:N}$ as follows:
$$\mathcal{I}_{1c} = \mathcal{C}_1^1 \cap \mathcal{C}_1^2, \quad \mathcal{I}_{1c}^1 = \mathcal{C}_1^1 \setminus \mathcal{C}_1^2, \quad \mathcal{I}_{1c}^2 = \mathcal{C}_1^2 \setminus \mathcal{C}_1^1, \quad \mathcal{F}_{1r}' = \mathcal{H}^{(N)}_{W_1} \setminus (\mathcal{I}_{1c} \cup \mathcal{I}_{1c}^1 \cup \mathcal{I}_{1c}^2), \quad \mathcal{F}_{1d}' = (\mathcal{H}^{(N)}_{W_1})^C, \quad (57)$$
and similarly define $\mathcal{I}_{2c}$, $\mathcal{I}_{2c}^1$, $\mathcal{I}_{2c}^2$, $\mathcal{F}_{2r}'$ and $\mathcal{F}_{2d}'$ for Sender 2.
3) Chaining Scheme for Common Messages: Suppose the number of chained blocks is $K$. Let $F_{1c}$, $F_{1c}'$ and $F_{1c}''$ (resp. $F_{2c}$, $F_{2c}'$ and $F_{2c}''$) be three random sequences of length $|\mathcal{F}_{1r}'|$, $|\mathcal{I}_{1c}^1|$ and $|\mathcal{I}_{1c}^2|$ (resp. $|\mathcal{F}_{2r}'|$, $|\mathcal{I}_{2c}^1|$ and $|\mathcal{I}_{2c}^2|$) respectively, uniformly distributed over $\mathcal{W}_1$ (resp. $\mathcal{W}_2$). Sender 1 encodes its common message as follows.
(1) In Block 1,
• $\{u_1'^j\}_{j \in \mathcal{I}_{1c} \cup \mathcal{I}_{1c}^1}$ store common message symbols.
• $\{u_1'^j\}_{j \in \mathcal{F}_{1r}'} = F_{1c}$.
• $\{u_1'^j\}_{j \in \mathcal{I}_{1c}^2} = F_{1c}''$.
• $\{u_1'^j\}_{j \in \mathcal{F}_{1d}'}$ are randomly generated according to the conditional probability $P_{U_1'^j|U_1'^{1:j-1}}(u_1'^j|u_1'^{1:j-1})$.
(2) In Block $i$ ($1 < i < K$),
• $\{u_1'^j\}_{j \in \mathcal{I}_{1c} \cup \mathcal{I}_{1c}^1}$, $\{u_1'^j\}_{j \in \mathcal{F}_{1r}'}$ and $\{u_1'^j\}_{j \in \mathcal{F}_{1d}'}$ are determined in the same way as in Block 1.
• $\{u_1'^j\}_{j \in \mathcal{I}_{1c}^2}$ are assigned the same values as $\{u_1'^j\}_{j \in \mathcal{I}_{1c}^1}$ in Block $i-1$.
(3) In Block $K$,
• $\{u_1'^j\}_{j \in \mathcal{I}_{1c}}$, $\{u_1'^j\}_{j \in \mathcal{F}_{1r}'}$, $\{u_1'^j\}_{j \in \mathcal{I}_{1c}^2}$ and $\{u_1'^j\}_{j \in \mathcal{F}_{1d}'}$ are determined in the same way as in Block $i$ ($1 < i < K$).
• $\{u_1'^j\}_{j \in \mathcal{I}_{1c}^1} = F_{1c}'$.
In each block, a vanishing fraction of the almost deterministic symbols, $\{u_1'^j\}_{j \in \mathcal{D}_1^1}$ and $\{u_1'^j\}_{j \in \mathcal{D}_1^2}$, are separately transmitted to Receivers 1 and 2 respectively with some reliable error-correcting code, where $\mathcal{D}_1^1 = (\mathcal{H}^{(N)}_{W_1})^C \cap (\mathcal{L}^{(N)}_{S_{U_1'}|Y_1})^C$ in the Type A case and $\mathcal{D}_1^1 = (\mathcal{H}^{(N)}_{W_1})^C \cap (\mathcal{L}^{(N)}_{W_1|Y_1})^C$ in the Type B case, and $\mathcal{D}_1^2 = (\mathcal{H}^{(N)}_{W_1})^C \cap (\mathcal{L}^{(N)}_{T_{U_1'}|Y_2})^C$ in both cases. Note that the random sequence $F_{1c}$ is reused over the $K$ blocks. Thus, the rate of frozen symbols that need to be shared between Sender 1 and Receiver 1 in the common message encoding, $\frac{1}{KN}(|F_{1c}| + |F_{1c}'| + |F_{1c}''|)$, can be made negligible by increasing $K$.
Sender 2 encodes its common messages similarly by swapping subscripts 1 and 2.
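The chaining rules above are essentially index bookkeeping across the $K$ blocks. The following schematic sketch (our own function and argument names; `sample_det` is a hypothetical callback standing in for sampling from $P_{U_1'^j|U_1'^{1:j-1}}$, and the index sets and message blocks are assumed inputs of compatible sizes) mirrors the three cases for Sender 1:

```python
def chain_common_message(K, I1c, I1c_1, I1c_2, F1r_p, F1d_p, msg, F1c, F1c_p, F1c_pp, sample_det):
    """Fill u_1'^{1:N} over K blocks as in Section V-A-3 (schematic sketch only).
    msg[i] must hold enough symbols for the message-carrying positions of block i+1."""
    blocks = []
    prev_I1c_1_values = None
    for i in range(1, K + 1):
        u = {}
        if i < K:
            # Blocks 1..K-1: I_{1c} and I^1_{1c} carry fresh common message symbols
            for j, s in zip(list(I1c) + list(I1c_1), msg[i - 1]):
                u[j] = s
        else:
            # Block K: only I_{1c} carries message symbols; I^1_{1c} is the pre-shared F'_{1c}
            for j, s in zip(list(I1c), msg[i - 1]):
                u[j] = s
            for j, s in zip(list(I1c_1), F1c_p):
                u[j] = s
        # I^2_{1c}: frozen (F''_{1c}) in Block 1, otherwise a copy of Block (i-1)'s I^1_{1c}
        src = F1c_pp if i == 1 else prev_I1c_1_values
        for j, s in zip(list(I1c_2), src):
            u[j] = s
        # frozen positions (reused F_{1c}) and almost-deterministic positions
        for j, s in zip(list(F1r_p), F1c):
            u[j] = s
        for j in F1d_p:
            u[j] = sample_det(u, j)     # drawn from P_{U'^j | U'^{1:j-1}} (hypothetical callback)
        prev_I1c_1_values = [u[j] for j in I1c_1]
        blocks.append(u)
    return blocks
```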
B. Private Message Encoding
1) Partition Scheme for Type A Points: Define
$$\mathcal{H}^{(N)}_{X_1|W_1W_2} \triangleq \{j \in [N] : H(U_1^j|U_1'^{1:N}, U_2'^{1:N}, U_1^{1:j-1}) \ge \log_2(q_{X_1}) - \delta_N\},$$
$$\mathcal{L}^{(N)}_{X_1|Y_1W_1W_2} \triangleq \{j \in [N] : H(U_1^j|Y_1^{1:N}, U_1'^{1:N}, U_2'^{1:N}, U_1^{1:j-1}) \le \delta_N\}, \quad (58)$$
and similarly define $\mathcal{H}^{(N)}_{X_2|W_1W_2}$ and $\mathcal{L}^{(N)}_{X_2|Y_2W_1W_2}$. Due to the independence between the two senders' messages, we have
$$\mathcal{H}^{(N)}_{X_1|W_1W_2} = \mathcal{H}^{(N)}_{X_1|W_1}, \quad \mathcal{H}^{(N)}_{X_2|W_1W_2} = \mathcal{H}^{(N)}_{X_2|W_2}, \quad (59)$$
where $\mathcal{H}^{(N)}_{X_k|W_k} \triangleq \{j \in [N] : H(U_k^j|U_k'^{1:N}, U_k^{1:j-1}) \ge \log_2(q_{X_k}) - \delta_N\}$ for $k = 1, 2$. Then define the following sets for $U_1^{1:N}$:
$$\mathcal{I}_{1p} \triangleq \mathcal{H}^{(N)}_{X_1|W_1W_2} \cap \mathcal{L}^{(N)}_{X_1|Y_1W_1W_2}, \quad \mathcal{F}_{1r} = \mathcal{H}^{(N)}_{X_1|W_1W_2} \cap (\mathcal{L}^{(N)}_{X_1|Y_1W_1W_2})^C, \quad \mathcal{F}_{1d} = (\mathcal{H}^{(N)}_{X_1|W_1W_2})^C, \quad \mathcal{D}_1 = (\mathcal{H}^{(N)}_{X_1|W_1W_2})^C \cap (\mathcal{L}^{(N)}_{X_1|Y_1W_1W_2})^C. \quad (60)$$
For $U_2^{1:N}$, $\mathcal{I}_{2p}$, $\mathcal{F}_{2r}$, $\mathcal{F}_{2d}$ and $\mathcal{D}_2$ are defined similarly.
2) Partition Scheme for Type B Points: From Definition 6 we know that
$$R_{1p} = \bar{\mathsf{P}}_1(1) - I(W_1; Y_1), \quad (61)$$
$$R_{2p} = I(X_2; Y_2|W_1W_2). \quad (62)$$
Define $\mathcal{H}^{(N)}_{X_1|W_1}$, $\mathcal{H}^{(N)}_{X_2|W_1W_2}$, $\mathcal{H}^{(N)}_{X_2|W_2}$ and $\mathcal{L}^{(N)}_{X_2|Y_2W_1W_2}$ in the same way as in the Type A case, and additionally define
$$\mathcal{L}^{(N)}_{S_{U_1}|Y_1W_1} \triangleq \{j \in [N] : H(S^{f_1(j)}|Y_1^{1:N}, U_1'^{1:N}, S^{1:f_1(j)-1}) \le \delta_N\}. \quad (63)$$
Then define $\mathcal{I}_{2p}$, $\mathcal{F}_{2r}$, $\mathcal{F}_{2d}$ and $\mathcal{D}_2$ for $U_2^{1:N}$ in the same way as in the Type A case, and define
$$\mathcal{I}_{1p} = \mathcal{H}^{(N)}_{X_1|W_1} \cap \mathcal{L}^{(N)}_{S_{U_1}|Y_1W_1}, \quad \mathcal{F}_{1r} = \mathcal{H}^{(N)}_{X_1|W_1} \cap (\mathcal{L}^{(N)}_{S_{U_1}|Y_1W_1})^C, \quad \mathcal{F}_{1d} = (\mathcal{H}^{(N)}_{X_1|W_1})^C, \quad \mathcal{D}_1 = (\mathcal{H}^{(N)}_{X_1|W_1})^C \cap (\mathcal{L}^{(N)}_{S_{U_1}|Y_1W_1})^C, \quad (64)$$
for $U_1^{1:N}$. Note that the permutation $S^{1:2N}$ is chosen to achieve $\bar{\mathsf{P}}_1$ in Receiver 1's effective channel $P_{Y_1|X_1W_2}$ without the knowledge of $W_1$, but the code construction for $U_1^{1:N}$ is determined jointly by this permutation and the side information of $W_1^{1:N}$.
3) Encoding for Private Messages: Let $F_{1p}$ (resp. $F_{2p}$) be a random sequence of length $|\mathcal{F}_{1r}|$ (resp. $|\mathcal{F}_{2r}|$), uniformly distributed over $\mathcal{X}_1$ (resp. $\mathcal{X}_2$). Sender 1 encodes its private message in each block as follows.
• $\{u_1^j\}_{j \in \mathcal{I}_{1p}}$ store private message symbols.
• $\{u_1^j\}_{j \in \mathcal{F}_{1r}} = F_{1p}$.
• $\{u_1^j\}_{j \in \mathcal{F}_{1d}}$ are randomly generated according to the probability $P_{U_1^j|U_1'^{1:N}U_1^{1:j-1}}(u_1^j|u_1'^{1:N}, u_1^{1:j-1})$.
• $\{u_1^j\}_{j \in \mathcal{D}_1}$ are separately transmitted to Receiver 1 with some reliable error-correcting code.
Sender 2 encodes its private message similarly by swapping subscripts 1 and 2. Note that the random sequences $F_{1p}$ and $F_{2p}$ are reused over the $K$ blocks. Thus, the rate of frozen symbols in the private message encoding can also be made negligible by increasing $K$.
C. Decoding
1) Decoding for Type A Points: Receiver 1 decodes the two senders' common messages from Block 1 to Block $K$.
• In Block 1, for $k = 1, 2$,
$$\bar{u}_k'^j = \begin{cases} u_k'^j, & \text{if } j \in (\mathcal{L}^{(N)}_{S_{U_k'}|Y_1})^C, \\ \arg\max_{u \in \mathcal{W}_k} P_{S^{f_k(j)}|Y_1^{1:N}S^{1:f_k(j)-1}}(u|y_1^{1:N}, s^{1:f_k(j)-1}), & \text{if } j \in \mathcal{L}^{(N)}_{S_{U_k'}|Y_1}. \end{cases} \quad (65)$$
• In Block $i$ ($1 < i < K$), $\{\bar{u}_1'^j\}_{j \in \mathcal{I}_{1c}^2}$ and $\{\bar{u}_2'^j\}_{j \in \mathcal{I}_{2c}^2}$ are deduced from $\{\bar{u}_1'^j\}_{j \in \mathcal{I}_{1c}^1}$ and $\{\bar{u}_2'^j\}_{j \in \mathcal{I}_{2c}^1}$ in Block $i-1$ respectively, and the rest are decoded in the same way as in Block 1.
• In Block $K$, $\{\bar{u}_1'^j\}_{j \in \mathcal{I}_{1c}^1}$ and $\{\bar{u}_2'^j\}_{j \in \mathcal{I}_{2c}^1}$ are assigned the pre-shared values between the senders and the two receivers, and the rest are decoded in the same way as in Block $i$ ($1 < i < K$).
Having recovered the common messages in a block, Receiver 1 decodes its private message in that block as
$$\bar{u}_1^j = \begin{cases} u_1^j, & \text{if } j \in (\mathcal{L}^{(N)}_{X_1|Y_1W_1W_2})^C, \\ \arg\max_{u \in \mathcal{X}_1} P_{U_1^j|Y_1^{1:N}U_1'^{1:N}U_2'^{1:N}U_1^{1:j-1}}(u|y_1^{1:N}, \bar{u}_1'^{1:N}, \bar{u}_2'^{1:N}, u_1^{1:j-1}), & \text{if } j \in \mathcal{L}^{(N)}_{X_1|Y_1W_1W_2}. \end{cases} \quad (66)$$
Receiver 2 decodes similarly, except that it decodes from Block $K$ to Block 1.
2) Decoding for Type B Points: Receiver 1 decodes from Block 1 to Block $K$.
• In Block 1, Receiver 1 first decodes its intended common message as
$$\bar{u}_1'^j = \begin{cases} u_1'^j, & \text{if } j \in (\mathcal{L}^{(N)}_{W_1|Y_1})^C, \\ \arg\max_{u \in \mathcal{W}_1} P_{U_1'^j|Y_1^{1:N}U_1'^{1:j-1}}(u|y_1^{1:N}, \bar{u}_1'^{1:j-1}), & \text{if } j \in \mathcal{L}^{(N)}_{W_1|Y_1}. \end{cases} \quad (67)$$
Then it decodes its private message and Sender 2's common message jointly as
$$\bar{u}_1^j = \begin{cases} u_1^j, & \text{if } j \in (\mathcal{L}^{(N)}_{S_{U_1}|Y_1W_1})^C, \\ \arg\max_{u \in \mathcal{X}_1} P_{S^{f_1(j)}|Y_1^{1:N}U_1'^{1:N}S^{1:f_1(j)-1}}(u|y_1^{1:N}, \bar{u}_1'^{1:N}, s^{1:f_1(j)-1}), & \text{if } j \in \mathcal{L}^{(N)}_{S_{U_1}|Y_1W_1}, \end{cases} \quad (68)$$
$$\bar{u}_2'^j = \begin{cases} u_2'^j, & \text{if } j \in (\mathcal{L}^{(N)}_{S_{U_2'}|Y_1})^C, \\ \arg\max_{u \in \mathcal{W}_2} P_{S^{f_2(j)}|Y_1^{1:N}S^{1:f_2(j)-1}}(u|y_1^{1:N}, s^{1:f_2(j)-1}), & \text{if } j \in \mathcal{L}^{(N)}_{S_{U_2'}|Y_1}. \end{cases} \quad (69)$$
• In Block $i$ ($1 < i < K$), $\{\bar{u}_1'^j\}_{j \in \mathcal{I}_{1c}^2}$ and $\{\bar{u}_2'^j\}_{j \in \mathcal{I}_{2c}^2}$ are deduced from $\{\bar{u}_1'^j\}_{j \in \mathcal{I}_{1c}^1}$ and $\{\bar{u}_2'^j\}_{j \in \mathcal{I}_{2c}^1}$ in Block $i-1$ respectively, and the rest are decoded in the same way as in Block 1.
• In Block $K$, $\{\bar{u}_1'^j\}_{j \in \mathcal{I}_{1c}^1}$ and $\{\bar{u}_2'^j\}_{j \in \mathcal{I}_{2c}^1}$ are assigned the pre-shared values, and the rest are decoded in the same way as in Block $i$ ($1 < i < K$).
Receiver 2 decodes from Block $K$ to Block 1 in the same way as in the Type A scheme.
VI. PERFORMANCE ANALYSIS
A. Achievable Rates
1) Type A Scheme: In the Type A scheme, the common message rates of the two senders are
$$R_{1c} = \frac{K|\mathcal{I}_{1c}| + (K-1)|\mathcal{I}_{1c}^1|}{KN} = \frac{|\mathcal{C}_1^1|}{N} - \frac{|\mathcal{I}_{1c}^1|}{KN}, \quad R_{2c} = \frac{K|\mathcal{I}_{2c}| + (K-1)|\mathcal{I}_{2c}^1|}{KN} = \frac{|\mathcal{C}_2^1|}{N} - \frac{|\mathcal{I}_{2c}^1|}{KN}. \quad (70)$$
From (53) we have
$$\lim_{N \to \infty, K \to \infty} R_{1c} = \mathsf{P}_c(1), \quad \lim_{N \to \infty, K \to \infty} R_{2c} = \mathsf{P}_c(2). \quad (71)$$
The private message rates of the two senders are
$$R_{1p} = \frac{1}{N}|\mathcal{I}_{1p}|, \quad R_{2p} = \frac{1}{N}|\mathcal{I}_{2p}|. \quad (72)$$
Since the private message encoding is just standard point-to-point polar coding, we have
$$\lim_{N \to \infty} R_{1p} = I(X_1; Y_1|W_1W_2), \quad \lim_{N \to \infty} R_{2p} = I(X_2; Y_2|W_1W_2). \quad (73)$$
Thus, our proposed scheme achieves the target Type A point $\mathsf{P}$.
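As a quick sanity check of (70)–(71), the chaining loss term $|\mathcal{I}_{1c}^1|/(KN)$ shrinks linearly in $K$; the numbers below are made up purely for illustration:

```python
# Illustrative only: |C_1^1| and |I^1_{1c}| are hypothetical counts, not values from the paper.
N = 1024
C11, I1c_1 = 410, 25
for K in (4, 16, 64, 256):
    R1c = C11 / N - I1c_1 / (K * N)
    print(K, round(R1c, 5))   # approaches |C_1^1| / N = 0.40039 as K grows
```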
2) Type B Scheme: In the Type B scheme, the common message rates can also be written as
$$R_{1c} = \frac{|\mathcal{C}_1^1|}{N} - \frac{|\mathcal{I}_{1c}^1|}{KN}, \quad R_{2c} = \frac{|\mathcal{C}_2^1|}{N} - \frac{|\mathcal{I}_{2c}^1|}{KN}, \quad (74)$$
with
$$\lim_{N \to \infty, K \to \infty} R_{1c} = \mathsf{P}_c(1), \quad \lim_{N \to \infty, K \to \infty} R_{2c} = \mathsf{P}_c(2).$$
As in the Type A case, the private message rate of Sender 2 achieves (62). For Sender 1's private message rate, the following lemma shows that our proposed scheme achieves (61).

Lemma 2. $\lim_{N \to \infty} \frac{1}{N}|\mathcal{I}_{1p}| = \bar{\mathsf{P}}_1(1) - I(W_1; Y_1)$.

Proof. See Appendix C.
B. Total Variation Distance
Let $P_U(u)$ denote the target distribution of a random variable $U$, $Q_U(u)$ denote the distribution of $U$ induced by our encoding scheme, and $\|P - Q\|$ denote the total variation distance between distributions $P$ and $Q$. We have the following lemma.

Lemma 3. For $i \in [1, K]$,
$$\big\| P_{W_1^{1:N}W_2^{1:N}X_1^{1:N}X_2^{1:N}Y_1^{1:N}Y_2^{1:N}} - Q_{(W_1^{1:N}W_2^{1:N}X_1^{1:N}X_2^{1:N}Y_1^{1:N}Y_2^{1:N})_i} \big\| \le 4\sqrt{\log 2}\sqrt{N\delta_N}, \quad (75)$$
where $(\cdot)_i$ stands for the random variables in Block $i$ ($1 \le i \le K$).

Proof. See Appendix D.
C. Error Performance
Lemma 4. The error probability of a receiver with the Type I partially joint decoding over the overall $K$ blocks can be upper bounded by
$$P_e^{I} \le \frac{(K+1)(K+2)}{2}\sqrt{N\delta_N} + 2K(K+1)\sqrt{\log 2}\sqrt{N\delta_N}, \quad (76)$$
while the error probability of a receiver with the Type II partially joint decoding over the overall $K$ blocks can be upper bounded by
$$P_e^{II} \le \frac{K(K+1)(K+5)}{6}\sqrt{N\delta_N} + \frac{2K(K^2+6K-1)}{3}\sqrt{\log 2}\sqrt{N\delta_N}. \quad (77)$$
Proof. See Appendix E.
We can see that the chaining scheme has a more detrimental effect on the Type II decoding than
on the Type I one. This is because in the Type I decoding, only the common message decoding
stage involves chaining, while in the Type II decoding, both stages of decoding involve chaining.
D. Complexity
In this subsection we compare the complexity of our proposed scheme and the scheme of
[39].
1) Complexity in Finding Auxiliary Random Variables: The Han-Kobayashi region is expressed with ARVs. Finding suitable ARVs to achieve a target rate pair is in fact part of the
code design, since unlike the channel statistics which are given, the ARVs need to be designed
and optimized. Although the Han-Kobayashi region has an explicit expression, to compute it
for a given channel is a very difficult task in general, since one has to consider every possible
choice of the joint distribution of ARVs.
For a given 2-user DM-IC $P_{Y_1Y_2|X_1X_2}(y_1, y_2|x_1, x_2)$, suppose we fix the input distributions of $X_1$ and $X_2$, and the alphabet sizes of $\mathcal{X}_1$, $\mathcal{X}_2$, $\mathcal{W}_1$, $\mathcal{W}_2$, $\mathcal{V}_1$ and $\mathcal{V}_2$. If we wish to find a suitable joint distribution to achieve a target rate pair in the compact Han-Kobayashi region (which we refer to as Problem 1), we will need to search over all joint distributions $P_{X_kW_k}$ ($k = 1, 2$) that satisfy
$$\sum_{w_k \in \mathcal{W}_k} P_{X_kW_k}(x_k, w_k) = \sum_{w_k \in \mathcal{W}_k} P_{W_k}(w_k)P_{X_k|W_k}(x_k|w_k) = P_{X_k}(x_k). \quad (78)$$
If we wish to find a suitable joint distribution to achieve a target rate pair in the original Han-Kobayashi region (which we refer to as Problem 2), then we will need to search over all distributions $P_{V_k}$ and $P_{W_k}$, and every choice of deterministic function $P_{X_k|V_kW_k}$, such that
$$P_{X_k}(x_k) = \sum_{w_k \in \mathcal{W}_k, v_k \in \mathcal{V}_k} P_{V_k}(v_k)P_{W_k}(w_k)P_{X_k|V_kW_k}(x_k|v_k, w_k) = \sum_{w_k \in \mathcal{W}_k} P_{W_k}(w_k)\sum_{v_k \in \mathcal{V}_k} P_{V_k}(v_k)P_{X_k|V_kW_k}(x_k|v_k, w_k). \quad (79)$$
From (78) and (79) we can see that Problem 1 is a sub-problem of Problem 2 by letting
$$P_{X_k|W_k}(x_k|w_k) = \sum_{v_k \in \mathcal{V}_k} P_{V_k}(v_k)P_{X_k|V_kW_k}(x_k|v_k, w_k). \quad (80)$$
Thus, Problem 2 can be solved by first solving Problem 1 and then solving the sub-problem
of finding a proper distribution of PVk and deterministic mappings PXk |Vk Wk that satisfy (80),
which we refer to as Problem 3. To the best of our knowledge, there is no efficient solution
to this problem unless we impose extra constraints on the random variables. Note that there
may be multiple (or no) solutions to this problem. To find every feasible solution, one needs to calculate $\{P_{V_k}(v_k) : v_k \in \mathcal{V}_k\}$ for every possible deterministic function set $\{P_{X_k|V_kW_k}(x_k|v_k, w_k) : v_k \in \mathcal{V}_k, w_k \in \mathcal{W}_k, x_k \in \mathcal{X}_k\}$ by solving linear equation systems. Since there exist $2^{|\mathcal{V}_k| \times |\mathcal{W}_k| \times |\mathcal{X}_k|}$ different deterministic function sets, we can conclude that our proposed scheme can greatly reduce the complexity of designing ARVs, especially for large alphabet sizes.
2) Code Construction Complexity: As pointed out by a reviewer of this paper, existing efficient
construction algorithms (such as [58], [59]) for point-to-point polar codes may not be directly
applied to the permutation based MAC polar codes in the general case, as the permutation
introduces a random variable that involves a complicated relation with the original pair of
random variables. Thus, it is currently not clear how much the code construction complexity of
permutation based MAC polar codes is. Nevertheless, as has been shown in [5], permutations of
type $0^i1^N0^{N-i}$ ($0 \le i \le N$) are sufficient to achieve the whole achievable rate region of a 2-user MAC. In this case, we have $U_1^{1:i} = S^{1:i}$ and $U_1^{i+1:N} = S^{N+i+1:2N}$. Thus, the polarization of user
1’s first i symbols is the same as that in the equivalent point-to-point channel when user 2’s
signal is treated as noise, and the polarization of user 1’s last N − i symbols is the same as that
in the equivalent point-to-point channel when user 2’s signal is treated as side information. For
user 2, the code construction of U21:N can be estimated using methods of [58], [59] with some
modifications by taking the side information of U11:i into account when calculating likelihood
ratios of the channels. Thus, the construction complexity of our proposed polar codes is O(N ).
Note that it is not clear whether the 3-user MAC polar codes used in [39] can also be
constructed in a similar way. Therefore the construction complexity of our scheme is smaller than
that in [39] (at least equal if the permutation based m-user MAC polar codes can be constructed
at complexity O(mN )).
3) Encoding Complexity: Both the scheme of [39] and ours use Arıkan’s monotone chain
rule based MAC polarization, the encoding complexity of which is the same as that of point-to-point polar codes. However, if we take the deterministic mappings in the original Han-Kobayashi
scheme into account, the scheme of [39] will have an extra encoding complexity of O(N ).
4) Decoding Complexity: It is shown in [17] that the decoding complexity of the permutation based MAC polar codes is $O(mN(\log N - \log N_0 + 1 + 2^{mN_0}))$, where $m$ is the number of users and $N_0$ is defined in Proposition 2. For a fixed $N_0$, the decoding complexity of the scheme in [39] is $O(3N(\log N - \log N_0 + 1 + 2^{3N_0}))$, while that of our proposed scheme is $O(2N(\log N - \log N_0 + 1 + 2^{2N_0}) + N\log N)$, since in our proposed scheme each receiver applies a 2-user MAC polar decoding and a point-to-point polar decoding. It can be seen that the decoding complexity is reduced by $O((3 \cdot 2^{3N_0} - 2^{2N_0+1} + 1 - \log N_0)N)$ compared to [39].
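For a concrete feel of the gap, one can plug numbers into the two expressions; the sketch below uses an arbitrary $N$ and $N_0$ and is only indicative of the constant factors involved.

```python
from math import log2

def complexity_reference_39(N, N0):     # 3-user MAC polar decoding, dominant term
    return 3 * N * (log2(N) - log2(N0) + 1 + 2 ** (3 * N0))

def complexity_proposed(N, N0):         # 2-user MAC decoding + point-to-point decoding
    return 2 * N * (log2(N) - log2(N0) + 1 + 2 ** (2 * N0)) + N * log2(N)

N, N0 = 2 ** 16, 4
print(complexity_reference_39(N, N0), complexity_proposed(N, N0))
# For N0 = 4 the per-symbol gap is roughly 3*2^12 - 2^9 + 1 - log2(4) = 11775.
```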
VII. EXTENSION TO INTERFERENCE NETWORKS
So far we have shown that our proposed two types of partially joint decoding schemes can achieve
the Han-Kobayashi region of the 2-user DM-IC via both random coding and polar coding. A
natural question is whether they can be extended to arbitrary DM-INs. In this section, we show
that partially joint decoding can also make heterogeneous superposition polar coding schemes
easier to realize in DM-INs.
A K-sender L-receiver DM-IN, denoted by (K, L)-DM-IN, consists of K senders and L
receivers. Each sender k ∈ [K] transmits an independent message Mk at rate Rk , while each
receiver l ∈ [L] wishes to recover a subset Dl ⊂ [K] of the messages. Similar to the Han-Kobayashi strategy in the 2-user DM-IC, Mk can be split into several component messages,
each intended for a group of receivers. If a message is intended for only one receiver, we refer
to it as a private message. Otherwise we refer to it as a common message. We only consider the
case when each sender has only one private message intended for some receiver and (possibly)
multiple common messages intended also for this receiver. More complicated cases can be
resolved by decomposing a sender with multiple private and common messages into a certain
number of virtual senders of this type.
Fig. 4. Sender 1’s part in the equivalent channel of the (K, L)-DM-IN.
Fig. 5. Sender 1’s part in the equivalent channel of the (K, L)-DM-IN with the proposed approach.
Fig. 4 shows Sender 1’s part of the equivalent channel of the (K, L)-DM-IN with a private
message M11 intended for Receiver 1, and common messages M1C1 , M1C2 , ..., M1Ca1 (a1 ≥ 1)
intended for Receiver 1 and some other receiver groups. It is shown in [60] that the optimal
achievable rate region when the encoding is restricted to random coding ensembles is the
intersection of rate regions for its component multiple access channels in which each receiver
recovers its private message as well as its common messages. Thus, one can design a code
for the compound MAC to achieve the optimal rate region, which belongs to the homogeneous
superposition variant and has been realized by polar codes in [39]. Here we discuss using the
proposed partially joint decoding idea to design a heterogeneous one.
Firstly, consider the case when only Sender 1 uses the heterogeneous approach. Instead of generating a codeword for each message and then merging them with some mapping function as in Fig. 4, we now generate codewords for the common messages first and then encode them together with the private message $M_{11}$ via superposition coding, as shown in Fig. 5. Let $P^*(X_1|U_{11}, U_{1C_1}, ..., U_{1C_{a_1}})$ be the deterministic mapping from $U_{11}, U_{1C_1}, ..., U_{1C_{a_1}}$ to $X_1$ in Fig. 4, and let $P_1^*(X_1|U_{1C_1}, ..., U_{1C_{a_1}}) = \sum_{U_{11}} P(U_{11})P^*(X_1|U_{11}, U_{1C_1}, ..., U_{1C_{a_1}})$ be the conditional distribution of the random variables $X_1, U_{1C_1}, ..., U_{1C_{a_1}}$ in Fig. 5. We can see that the synthesized MACs for the other receivers are not affected by this setting since $U_{11}$ plays no part in them. Thus, the achievable rate regions of the other receivers' synthesized MACs remain the same. Note that the deterministic mapping $P^*$ and ARV $U_{11}$ are no longer needed in this design.
Now let us discuss the achievable rates from Receiver 1's point of view. Denote Sender 1's common messages as a whole by $U_{1c_1}$ with rate $R_1^{c_1}$, and the other senders' common messages which are intended for Receiver 1 by $U_{1c_o}$ with rate $R_1^{c_o}$. The private message rate is denoted by $R_1^p$. With the homogeneous approach in Fig. 4, the achievable rate region of Receiver 1 is
$$\mathcal{R}_{IN}^1(P^*) = \left\{ (R_1^p, R_1^{c_1}, R_1^{c_o}) : \begin{array}{l} R_1^p \le I(U_{11}; Y_1|U_{1c_1}, U_{1c_o}), \\ R_1^{c_1} \le I(U_{1c_1}; Y_1|U_{11}, U_{1c_o}), \\ R_1^{c_o} \le I(U_{1c_o}; Y_1|U_{11}, U_{1c_1}), \\ R_1^p + R_1^{c_1} \le I(U_{11}, U_{1c_1}; Y_1|U_{1c_o}), \\ R_1^p + R_1^{c_o} \le I(U_{11}, U_{1c_o}; Y_1|U_{1c_1}), \\ R_1^{c_1} + R_1^{c_o} \le I(U_{1c_o}, U_{1c_1}; Y_1|U_{11}), \\ R_1^p + R_1^{c_1} + R_1^{c_o} \le I(U_{11}, U_{1c_1}, U_{1c_o}; Y_1) \end{array} \right\}. \quad (81)$$
With the heterogeneous approach in Fig. 5, the achievable rate region of Receiver 1 becomes
$$\mathcal{R}_{IN}^{1'}(P_1^*) = \left\{ (R_1^p, R_1^{c_1}, R_1^{c_o}) : \begin{array}{l} R_1^{c_o} \le I(U_{1c_o}; Y_1|X_1), \\ R_1^p + R_1^{c_1} \le I(X_1; Y_1|U_{1c_o}), \\ R_1^p + R_1^{c_1} + R_1^{c_o} \le I(X_1, U_{1c_o}; Y_1) \end{array} \right\}. \quad (82)$$
Since $(U_{11}, U_{1c_1}) \to X_1$ is a deterministic mapping, we can readily see that the upper bounds for $R_1^{c_o}$, $R_1 = R_1^p + R_1^{c_1}$ and $R_1^{all} = R_1^p + R_1^{c_1} + R_1^{c_o}$ are invariant under the heterogeneous approach. Thus, if we are interested in the overall rate between the user pair of Sender 1 and Receiver 1 rather than each component message rate, the heterogeneous approach can achieve the same or even a larger rate region than the homogeneous approach for a given joint distribution.
Similar to the 2-user DM-IC case, when we apply polar codes to realize the heterogeneous scheme, the design of fully joint decoders is a problem, as a sender's common messages must be decoded before its private message. Now consider using the proposed partially joint decoding scheme. With the Type I decoding order, all common messages intended for Receiver 1 are jointly decoded before the private message. The achievable rate region is
$$\mathcal{R}_{IN}^{ParI}(P_1^*) = \left\{ (R_1^p, R_1^{c_1}, R_1^{c_o}) : \begin{array}{l} R_1^p \le I(X_1; Y_1|U_{1c_1}, U_{1c_o}), \\ R_1^{c_1} \le I(U_{1c_1}; Y_1|U_{1c_o}), \\ R_1^{c_o} \le I(U_{1c_o}; Y_1|U_{1c_1}), \\ R_1^{c_1} + R_1^{c_o} \le I(U_{1c_1}, U_{1c_o}; Y_1) \end{array} \right\}. \quad (83)$$
With the Type II decoding order, Sender 1's common messages are decoded first, and then the private message and the other senders' common messages are jointly decoded. The achievable rate region is
$$\mathcal{R}_{IN}^{ParII}(P_1^*) = \left\{ (R_1^p, R_1^{c_1}, R_1^{c_o}) : \begin{array}{l} R_1^{c_1} \le I(U_{1c_1}; Y_1), \\ R_1^p \le I(X_1; Y_1|U_{1c_1}, U_{1c_o}), \\ R_1^{c_o} \le I(U_{1c_o}; Y_1|X_1), \\ R_1^p + R_1^{c_o} \le I(X_1, U_{1c_o}; Y_1|U_{1c_1}) \end{array} \right\}. \quad (84)$$
It is easy to verify that the following two regions, $\{(R_1, R_1^{c_o}) : R_1 = R_1^p + R_1^{c_1}, (R_1^p, R_1^{c_1}, R_1^{c_o}) \in \mathcal{R}_{IN}^{ParI}(P_1^*) \cup \mathcal{R}_{IN}^{ParII}(P_1^*)\}$ and $\{(R_1, R_1^{c_o}) : R_1 = R_1^p + R_1^{c_1}, (R_1^p, R_1^{c_1}, R_1^{c_o}) \in \mathcal{R}_{IN}^{1'}(P_1^*)\}$, are equivalent.
In the above we have discussed the case when only one user pair applies heterogeneous
superposition coding and partially joint decoding. More complicated cases can be extended
from this case by adding one user pair with the proposed scheme at a time. To apply polar
coding, one simply needs to adopt MAC polarization with more than 2 users and follow our
proposed scheme for the 2-user DM-IC. To conclude, we have the following proposition.
Proposition 3. The proposed heterogeneous superposition polar coding scheme with the two
types of partially joint decoding achieves the optimal rate region of DM-INs when the encoding
is restricted to random coding ensembles.
Remark 3. Comparing (81) and (82), we can see that the heterogeneous approach has a much simpler expression of the achievable rate region. Since we have shown that these two superposition schemes result in the same achievable rate region with respect to the overall rate between each user pair, the heterogeneous approach can serve as a useful tool for deriving simplified achievable rate regions for DM-INs.
VIII. CONCLUDING REMARKS
Based on the compact description of the Han-Kobayashi region and the coding strategy lying
behind [35], we have shown that every point on the dominant faces of the Han-Kobayashi region
can be achieved by polar codes in a simpler way compared to the scheme of [39]. We prove that
the fully joint decoding requirement in the Han-Kobayashi coding strategy can be loosened to
partially joint decoding, which is more friendly to polar code designs. This result reveals more
insights on the roles of ARVs and coding strategies for DM-INs.
The chaining method we used in this paper and the polar alignment technique used in [39] both
make polar coding schemes lengthy. It is shown in [42] that the non-universality of polar codes is
a property of the successive cancellation decoding algorithm. Under ML decoding, a polar code
constructed for the binary symmetric channel (BSC) universally achieves the capacity for any
binary memoryless symmetric (BMS) channel. Also, as we have mentioned in Remark 2, fully
joint decoding in the heterogeneous superposition coding scheme is possible with ML decoding.
This makes us wonder if there exist ML-like decoding algorithms and the corresponding code
structures for polar codes which maintain universality while still enjoying low complexity. If the
answer is yes, our proposed scheme may be further simplified as well as polar coding schemes
for other multi-user channels.
APPENDIX A
PROOF OF THEOREM 3
Definition 7 ([57, p. 521]). Let (X_1, X_2, ..., X_k) denote a finite collection of discrete random variables with some fixed joint distribution, P_{X_1 X_2 ... X_k}(x_1, x_2, ..., x_k), (x_1, x_2, ..., x_k) ∈ X_1 × X_2 × ... × X_k. Let S denote an ordered subset of these random variables and consider N independent copies of S. Thus,

Pr(S^N = s^N) = Π_{i=1}^N Pr(S_i = s_i),  s^N ∈ S^N.

The set T_ε^{(N)} of ε-typical N-sequences (x_1^N, x_2^N, ..., x_k^N) is defined as

T_ε^{(N)}(X_1, X_2, ..., X_k) = { (x_1^N, x_2^N, ..., x_k^N) : | −(1/N) log P_{S^N}(s^N) − H(S) | < ε,  ∀S ⊆ (X_1, X_2, ..., X_k) }.
Codebook generation. Consider a fixed P_Q(q) P_{X_1 W_1|Q}(x_1, w_1|q) P_{X_2 W_2|Q}(x_2, w_2|q). Generate a sequence q^{1:N} ~ Π_{j=1}^N P_Q(q^j). For k = 1, 2, randomly and independently generate 2^{N R_k^c} codewords w_k^{1:N}(m_{kc}), m_{kc} ∈ [1 : 2^{N R_k^c}], each according to Π_{j=1}^N P_{W_k|Q}(w_k^j | q^j). For each m_{kc}, randomly and conditionally independently generate 2^{N R_k^p} codewords x_k^{1:N}(m_{kc}, m_{kp}), m_{kp} ∈ [1 : 2^{N R_k^p}], each according to Π_{j=1}^N P_{X_k|W_k Q}(x_k^j | w_k^j(m_{kc}), q^j).

Encoding. To send m_k = (m_{kc}, m_{kp}), Sender k (k = 1, 2) transmits x_k^{1:N}(m_{kc}, m_{kp}).
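The random superposition codebook construction above can be mimicked numerically. The minimal sketch below, with illustrative alphabets, rates and distribution arrays (none of which come from the paper), generates one sender's common and private codebooks and performs the encoding step.

import numpy as np

rng = np.random.default_rng(0)

def generate_codebooks(N, Rc, Rp, PQ, PW_given_Q, PX_given_WQ):
    # PQ: pmf over the Q alphabet; PW_given_Q[q]: pmf of W given Q = q;
    # PX_given_WQ[w, q]: pmf of X given (W, Q) = (w, q).
    n_c, n_p = 2 ** int(N * Rc), 2 ** int(N * Rp)
    q = rng.choice(len(PQ), size=N, p=PQ)                      # q^{1:N} ~ prod P_Q
    # common codewords w^{1:N}(m_c), each symbol ~ P_{W|Q}(.|q^j)
    w = np.array([[rng.choice(PW_given_Q.shape[1], p=PW_given_Q[qj]) for qj in q]
                  for _ in range(n_c)])
    # private codewords x^{1:N}(m_c, m_p), each symbol ~ P_{X|WQ}(.|w^j(m_c), q^j)
    x = np.array([[[rng.choice(PX_given_WQ.shape[2], p=PX_given_WQ[w[mc, j], q[j]])
                    for j in range(N)]
                   for _ in range(n_p)]
                  for mc in range(n_c)])
    return q, w, x

def encode(x, m_c, m_p):
    # encoding: to send (m_c, m_p), transmit x^{1:N}(m_c, m_p)
    return x[m_c, m_p]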
Decoding.
In the Type I partially joint decoding, Receiver k (k = 1, 2) decodes in the following two steps:
• (Simultaneous decoding for two senders' common messages) The decoder declares that (m̂_{1c}, m̂_{2c}) is sent if it is the unique message pair such that
  (q^{1:N}, w_1^{1:N}(m̂_{1c}), w_2^{1:N}(m̂_{2c}), y_k^{1:N}) ∈ T_ε^{(N)};
otherwise it declares an error.
• (Private message decoding) If such a (m̂_{1c}, m̂_{2c}) is found, the decoder finds the unique m̂_{kp} such that
  (q^{1:N}, w_1^{1:N}(m̂_{1c}), w_2^{1:N}(m̂_{2c}), x_k^{1:N}(m̂_{kc}, m̂_{kp}), y_k^{1:N}) ∈ T_ε^{(N)};
otherwise it declares an error.
In the Type II partially joint decoding, Receiver k decodes in the following two steps:
• (Intended common message decoding) The decoder declares that m̂_{kc} is sent if it is the unique message such that
  (q^{1:N}, w_k^{1:N}(m̂_{kc}), y_k^{1:N}) ∈ T_ε^{(N)};
otherwise it declares an error.
• (Simultaneous decoding for the unintended common message and the private message) If such a m̂_{kc} is found, the decoder finds the unique (m̂_{k'c}, m̂_{kp}) such that
  (q^{1:N}, w_k^{1:N}(m̂_{kc}), w_{k'}^{1:N}(m̂_{k'c}), x_k^{1:N}(m̂_{kc}, m̂_{kp}), y_k^{1:N}) ∈ T_ε^{(N)},
where k' = mod(k, 2) + 1; otherwise it declares an error.
Error analysis. First we consider Type I. Assume that message pair ((1, 1), (1, 1)) is sent and Receiver 1 applies the Type I decoding. Define the following error events

E_{10}^{(I)} ≜ {(q^{1:N}, w_1^{1:N}(1), w_2^{1:N}(1), x_1^{1:N}(1, 1), y_1^{1:N}) ∉ T_ε^{(N)}},
E_{11}^{(I)} ≜ {(q^{1:N}, w_1^{1:N}(m_{1c}), w_2^{1:N}(1), y_1^{1:N}) ∈ T_ε^{(N)} for some m_{1c} ≠ 1},
E_{12}^{(I)} ≜ {(q^{1:N}, w_1^{1:N}(1), w_2^{1:N}(m_{2c}), y_1^{1:N}) ∈ T_ε^{(N)} for some m_{2c} ≠ 1},
E_{13}^{(I)} ≜ {(q^{1:N}, w_1^{1:N}(m_{1c}), w_2^{1:N}(m_{2c}), y_1^{1:N}) ∈ T_ε^{(N)} for some m_{1c} ≠ 1, m_{2c} ≠ 1},
E_{14}^{(I)} ≜ {(q^{1:N}, w_1^{1:N}(1), w_2^{1:N}(1), x_1^{1:N}(1, m_{1p}), y_1^{1:N}) ∈ T_ε^{(N)} for some m_{1p} ≠ 1},
E_{15}^{(I)} ≜ {(q^{1:N}, w_1^{1:N}(1), w_2^{1:N}(m_{2c}), x_1^{1:N}(1, m_{1p}), y_1^{1:N}) ∈ T_ε^{(N)} for some m_{2c} ≠ 1, m_{1p} ≠ 1}.
The average probability of error for Receiver 1 can be upper bounded as

P(E_1^{(I)}) ≤ P(E_{10}^{(I)}) + P(E_{11}^{(I)}) + P(E_{13}^{(I)}) + P(E_{14}^{(I)}) + P(E_{15}^{(I)})
           ≤ P(E_{10}^{(I)}) + P(E_{11}^{(I)}) + P(E_{12}^{(I)}) + P(E_{13}^{(I)}) + P(E_{14}^{(I)}),    (85)

where (85) holds because P(E_{15}^{(I)}) ≤ P(E_{12}^{(I)}). By the law of large numbers (LLN), P(E_{10}^{(I)}) tends to 0 as N → ∞. By the packing lemma, P(E_{11}^{(I)}), P(E_{12}^{(I)}), P(E_{13}^{(I)}) and P(E_{14}^{(I)}) tend to 0 as N → ∞ if the conditions

R_1^c ≤ I(W_1; Y_1 | W_2 Q)
R_2^c ≤ I(W_2; Y_1 | W_1 Q)
R_1^c + R_2^c ≤ I(W_1 W_2; Y_1 | Q)    (86)
R_1^p ≤ I(X_1; Y_1 | W_1 W_2 Q)

are satisfied, respectively. The rate constraints when Receiver 2 applies Type I decoding are similar by swapping subscripts 1 and 2 in (86).
Next we consider Type II. We also assume that message pair ((1, 1), (1, 1)) is sent and Receiver 1 applies the Type II decoding. Define the following error events

E_{10}^{(II)} ≜ {(q^{1:N}, w_1^{1:N}(1), w_2^{1:N}(1), x_1^{1:N}(1, 1), y_1^{1:N}) ∉ T_ε^{(N)}},
E_{11}^{(II)} ≜ {(q^{1:N}, w_1^{1:N}(m_{1c}), y_1^{1:N}) ∈ T_ε^{(N)} for some m_{1c} ≠ 1},
E_{12}^{(II)} ≜ {(q^{1:N}, w_1^{1:N}(1), w_2^{1:N}(1), x_1^{1:N}(1, m_{1p}), y_1^{1:N}) ∈ T_ε^{(N)} for some m_{1p} ≠ 1},
E_{13}^{(II)} ≜ {(q^{1:N}, w_1^{1:N}(1), w_2^{1:N}(m_{2c}), x_1^{1:N}(1, m_{1p}), y_1^{1:N}) ∈ T_ε^{(N)} for some m_{2c} ≠ 1, m_{1p} ≠ 1}.

The average probability of error for Receiver 1 can be upper bounded as

P(E_1^{(II)}) ≤ P(E_{10}^{(II)}) + P(E_{11}^{(II)}) + P(E_{12}^{(II)}) + P(E_{13}^{(II)}).    (87)

Similarly, by the LLN, P(E_{10}^{(II)}) tends to 0 as N → ∞. By the packing lemma, P(E_{11}^{(II)}), P(E_{12}^{(II)}), and P(E_{13}^{(II)}) tend to 0 as N → ∞ if the conditions

R_1^c ≤ I(W_1; Y_1 | Q)
R_1^p ≤ I(X_1; Y_1 | W_1 W_2 Q)    (88)
R_1^p + R_2^c ≤ I(X_1 W_2; Y_1 | W_1 Q)

are satisfied, respectively. The rate constraints when Receiver 2 applies the Type II decoding are similar by swapping subscripts 1 and 2 in (88).
Suppose both receivers adopt the Type I decoding. From (86) and its counterpart for Receiver 2 we know that the achievable rate region is

R_1^{Par}(P_1^*) = { (R_1^c, R_2^c, R_1^p, R_2^p) :
    R_1^c ≤ min{I(W_1; Y_1 | W_2 Q), I(W_1; Y_2 | W_2 Q)},
    R_2^c ≤ min{I(W_2; Y_1 | W_1 Q), I(W_2; Y_2 | W_1 Q)},
    R_1^c + R_2^c ≤ min{I(W_1 W_2; Y_1 | Q), I(W_1 W_2; Y_2 | Q)},
    R_1^p ≤ I(X_1; Y_1 | W_1 W_2 Q),
    R_2^p ≤ I(X_2; Y_2 | W_1 W_2 Q) }.    (89)
Now suppose Receiver 1 uses Type I while Receiver 2 adopts Type II. From (86) and the counterpart of (88) for Receiver 2 we have

R_2^{Par}(P_1^*) = { (R_1^c, R_2^c, R_1^p, R_2^p) :
    R_1^c ≤ I(W_1; Y_1 | W_2 Q),
    R_2^c ≤ min{I(W_2; Y_1 | W_1 Q), I(W_2; Y_2 | Q)},
    R_1^c + R_2^c ≤ I(W_1 W_2; Y_1 | Q),
    R_1^p ≤ I(X_1; Y_1 | W_1 W_2 Q),
    R_2^p ≤ I(X_2; Y_2 | W_1 W_2 Q),
    R_2^p + R_1^c ≤ I(X_2 W_1; Y_2 | W_2 Q) }.    (90)

Similarly, if Receiver 2 uses Type I while Receiver 1 adopts Type II, we have

R_3^{Par}(P_1^*) = { (R_1^c, R_2^c, R_1^p, R_2^p) :
    R_1^c ≤ min{I(W_1; Y_2 | W_2 Q), I(W_1; Y_1 | Q)},
    R_2^c ≤ I(W_2; Y_2 | W_1 Q),
    R_1^c + R_2^c ≤ I(W_1 W_2; Y_2 | Q),
    R_1^p ≤ I(X_1; Y_1 | W_1 W_2 Q),
    R_2^p ≤ I(X_2; Y_2 | W_1 W_2 Q),
    R_1^p + R_2^c ≤ I(X_1 W_2; Y_1 | W_1 Q) }.    (91)
From (89) we have

R_1^p + R_1^c + R_2^c ≤ min{I(X_1 W_2; Y_1 | Q), I(X_1; Y_1 | W_1 W_2 Q) + I(W_1 W_2; Y_2 | Q)},
R_2^p + R_1^c + R_2^c ≤ min{I(X_2 W_1; Y_2 | Q), I(X_2; Y_2 | W_1 W_2 Q) + I(W_1 W_2; Y_1 | Q)}.

From (90) we have

R_1^p + R_1^c + R_2^c ≤ I(X_1 W_2; Y_1 | Q),
R_2^p + R_1^c + R_2^c ≤ min{ I(X_2 W_1; Y_2 | Q), I(X_2 W_1; Y_2 | W_2 Q) + I(W_2; Y_1 | W_1 Q), I(X_2; Y_2 | W_1 W_2 Q) + I(W_1 W_2; Y_1 | Q) }.

From (91) we have

R_1^p + R_1^c + R_2^c ≤ min{ I(X_1 W_2; Y_1 | Q), I(X_1 W_2; Y_1 | W_1 Q) + I(W_1; Y_2 | W_2 Q), I(X_1; Y_1 | W_1 W_2 Q) + I(W_1 W_2; Y_2 | Q) },
R_2^p + R_1^c + R_2^c ≤ I(X_2 W_1; Y_2 | Q).
Then we can obtain the following achievable rate region

R^{Par}(P_1^*) = { (R_1^c, R_2^c, R_1^p, R_2^p) :
    R_1^c ≤ I(W_1; Y_1 | W_2 Q),
    R_1^p ≤ I(X_1; Y_1 | W_1 W_2 Q),
    R_1^p + R_2^c ≤ I(X_1 W_2; Y_1 | W_1 Q),
    R_1^p + R_1^c + R_2^c ≤ I(X_1 W_2; Y_1 | Q),
    R_2^c ≤ I(W_2; Y_2 | W_1 Q),
    R_2^p ≤ I(X_2; Y_2 | W_1 W_2 Q),
    R_2^p + R_1^c ≤ I(X_2 W_1; Y_2 | W_2 Q),
    R_2^p + R_1^c + R_2^c ≤ I(X_2 W_1; Y_2 | Q) }.    (92)
Using the Fourier-Motzkin elimination we can readily show that (92) results in the same region as in Theorem 2 with the following two additional constraints (same as the Chong-Motani-Garg region shown in [35, Lemma 4]):

R_1 ≤ I(X_1; Y_1 | W_1 W_2 Q) + I(X_2 W_1; Y_2 | W_2 Q),    (93)
R_2 ≤ I(X_2; Y_2 | W_1 W_2 Q) + I(X_1 W_2; Y_1 | W_1 Q).
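Fourier-Motzkin elimination itself is mechanical. Below is a minimal Python sketch of one elimination step for a linear-inequality system A x ≤ b; the representation of the rate constraints as such a system, and the function name fm_eliminate, are illustrative assumptions and not taken from the paper.

import numpy as np

def fm_eliminate(A, b, j):
    # project {x : A x <= b} onto the coordinates other than x_j
    A, b = np.asarray(A, float), np.asarray(b, float)
    keep = [(A[i], b[i]) for i in range(len(b)) if A[i, j] == 0]
    pos = [i for i in range(len(b)) if A[i, j] > 0]
    neg = [i for i in range(len(b)) if A[i, j] < 0]
    for p in pos:
        for n in neg:
            # scale the two rows so the x_j coefficients cancel, then add
            row = A[p] / A[p, j] - A[n] / A[n, j]
            rhs = b[p] / A[p, j] - b[n] / A[n, j]
            keep.append((row, rhs))
    if not keep:
        return np.zeros((0, A.shape[1])), np.zeros(0)
    A_new = np.array([r for r, _ in keep])
    b_new = np.array([c for _, c in keep])
    A_new[:, j] = 0.0  # x_j no longer appears in the projected system
    return A_new, b_new

Repeating this step over all auxiliary rate variables yields the projected region; redundant inequalities can then be pruned by a linear-program feasibility check.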
From [35] we know that the Chong-Motani-Garg region is smaller than the compact Han-Kobayashi region for a P_1 only if

I(X_2 W_1; Y_1 | W_2 Q) < I(W_1; Y_1 | Q)  or  I(X_1 W_2; Y_2 | W_1 Q) < I(W_2; Y_2 | Q).    (94)
For the former case, an intuitive interpretation is that Receiver 2 is unable to achieve the
unintended common message rate of R1c = I(W1 ; Y1 |Q) even if it tries its best. In this case,
Sender 1 will not transmit any common message (i.e., W1 = ∅) [49, Problem 6.12]. Similarly,
for the latter case, we will set W2 = ∅. Thus, these rates are still achievable with the proposed
scheme. This completes the proof.
APPENDIX B
PROOF OF LEMMA 1
Since R1 + R2 = c, R1 + R2 = d, R1 + R2 = e, 2R1 + R2 = f and R1 + 2R2 = g are the
possible dominant faces of the Han-Kobayashi region, we prove Lemma 1 by deriving value
ranges of common message rates for points on each of them.
A. Points on R1 + R2 = c and R1 + R2 = d
Suppose P ∈ RHK (P1∗ ) is a point on line
R1 + R2 = c.
(95)
Let (R_1^p, R_1^c, R_2^p, R_2^c) be a rate decomposition of P. The equality of (95) forces those in the counterpart of (8) for Receiver 2 and (14) to hold. Thus,

R_2^p = I(X_2; Y_2 | W_1 W_2 Q),    (96)
R_1^p + R_1^c + R_2^c = I(X_1 W_2; Y_1 | Q).    (97)
From (97) and (17) we have
R2c ≥ I(W2 ; Y1 |Q).
From (96) and (18) we have
R2c ≤ I(W2 ; Y2 |W1 Q).
From (95) and (22) we have
R2 ≥ 2c − f = I(X2 ; Y2 |W1 W2 Q) + I(W1 W2 ; Y1 |Q) − I(W1 ; Y2 |W2 Q).
From (95) and (23) we have
R2 ≤ g − c = I(X2 W1 ; Y2 |Q) − I(W1 ; Y1 |Q).
Thus,
I(W1 W2 ; Y1 |Q) − I(W1 ; Y2 |W2 Q) ≤ R2c ≤ I(W1 W2 ; Y2 |Q) − I(W1 ; Y1 |Q),
which implies
I(W2 ; Y1 |W1 Q) ≤ I(W2 ; Y2 |Q).
(98)
Besides, if R1 + R2 = c is a dominant face of the Han-Kobayashi region, c ≤ d and c ≤ e must
hold. From (19), (20) and (21) we have
I(W1 ; Y2 |W2 Q) ≥ I(W1 ; Y1 |Q),
I(W1 W2 ; Y2 |Q) ≥ I(W1 W2 ; Y1 |Q).
1) For max{I(W2 ; Y1 |Q), I(W1 W2 ; Y1 |Q) − I(W1 ; Y2 |W2 Q)} ≤ R2c ≤ I(W2 ; Y1 |W1 Q), let
R1c = I(W1 W2 ; Y1 |Q) − R2c ,
(99)
R1p = I(X1 ; Y1 |W1 W2 Q).
Obviously (R1c , R2c ) ∈ R(PY1 |W1 W2 ). From (98) and (99) we have R2c ≤ I(W2 ; Y2 |Q) and R1c ≤
I(W1 ; Y2 |W2 Q). Thus, (R1c , R2c ) ∈ R(PY2 |W1 W2 ). Therefore P is of Type A.
2) For I(W2 ; Y1 |W1 Q) ≤ R2c ≤ min{I(W1 W2 ; Y2 |Q) − I(W1 ; Y1 |Q), I(W2 ; Y2 |W1 Q)}, let
R1c = I(W1 ; Y1 |Q),
R1p = I(X1 W2 ; Y1 |W1 Q) − R2c .
In this case, P belongs to Type B.
For a point P ∈ RHK (P1∗ ) on line R1 + R2 = d, the analysis is similar.
B. Points on R1 + R2 = e
Suppose P ∈ RHK (P1∗ ) is a point on line
R1 + R2 = e.
(100)
Let (R1p , R1c , R2p , R2c ) be a rate decomposition of P. The equality of (100) forces those in (12)
and its counterpart for Receiver 2 to hold. Thus,
R1p + R2c = I(X1 W2 ; Y1 |W1 Q),
R2p + R1c = I(X2 W1 ; Y2 |W2 Q).
Then from (8), (14) and their counterparts for Receiver 2, we have
I(W1 ; Y2 |W2 Q) ≤ R1c ≤ I(W1 ; Y1 |Q),
(101)
I(W2 ; Y1 |W1 Q) ≤ R2c ≤ I(W2 ; Y2 |Q).
(102)
From (22) and (100) we have
R1p + R1c ≤ I(X1 ; Y1 |W1 W2 Q) + I(W1 ; Y1 |Q),
R2p + R2c ≥ I(X2 W1 ; Y2 |W2 Q) + I(W2 ; Y1 |W1 Q) − I(W1 ; Y1 |Q).
From (23) and (100) we have
R1p + R1c ≥ I(X1 W2 ; Y1 |W1 Q) + I(W1 ; Y2 |W2 Q) − I(W2 ; Y2 |Q),
R2p + R2c ≤ I(X2 ; Y2 |W1 W2 Q) + I(W2 ; Y2 |Q).
1) If I(X2 ; Y2 |W1 W2 Q) + I(W2 ; Y1 |W1 Q) ≤ P(2) ≤ I(X2 ; Y2 |W1 W2 Q) + I(W2 ; Y2 |Q), let
R2p = I(X2 ; Y2 |W1 W2 Q). Then
R2c = P(2) − I(X2 ; Y2 |W1 W2 Q),
R1c = I(W1 ; Y2 |W2 Q),
R1p = I(X1 W2 ; Y1 |W1 Q) − R2c .
From (101) we know that R1c ≤ I(W1 ; Y1 |Q). Thus, P belongs to Type B.
2) If I(X2 W1 ; Y2 |W2 Q) + I(W2 ; Y1 |W1 Q) − I(W1 ; Y1 |Q) ≤ P(2) < I(X2 ; Y2 |W1 W2 Q) +
I(W2 ; Y1 |W1 Q), let R1p = I(X1 ; Y1 |W1 W2 Q). Then
R2c = I(W2 ; Y1 |W1 Q),
R2p = P(2) − I(W2 ; Y1 |W1 Q),
R1c = I(X2 W1 ; Y2 |W2 Q) + I(W2 ; Y1 |W1 Q) − P(2).
From (102) we know that R2c ≤ I(W2 ; Y2 |Q). Thus, P belongs to Type B.
C. Points on 2R1 + R2 = f and R1 + 2R2 = g
Suppose P ∈ RHK (P1∗ ) is a point on line
2R1 + R2 = f.
(103)
Let (R1p , R1c , R2p , R2c ) be a rate decomposition of P. The equality of (103) forces those in (8),
(14) and the counterpart of (12) to hold. Thus,
R_1^p = I(X_1; Y_1 | W_1 W_2 Q),    (104)
R_1^c + R_2^p = I(X_2 W_1; Y_2 | W_2 Q),    (105)
R_1^c + R_2^c = I(W_1 W_2; Y_1 | Q).    (106)
Then we obtain from (11) that
R1c ≤ I(W1 ; Y1 |W2 Q).
(107)
From (19)–(21), (103) and (104) we have

R_1^c ≥ max{ I(W_1; Y_2 | W_2 Q),  I(W_1 W_2; Y_1 | Q) − I(W_2; Y_2 | Q),  I(W_1; Y_1 | Q) }.

Thus,

R_2^c ≤ min{ I(W_1 W_2; Y_1 | Q) − I(W_1; Y_2 | W_2 Q),  I(W_2; Y_2 | Q),  I(W_2; Y_1 | W_1 Q) }.

We can see that (R_1^c, R_2^c) ∈ R(P_{Y_1|W_1 W_2}) and R_2^c ≤ I(W_2; Y_2 | Q). Thus, P belongs to Type B.
For a point P ∈ RHK (P1∗ ) on line R1 + 2R2 = g, the analysis is similar.
Now we have completed the proof.
APPENDIX C
PROOF OF LEMMA 2
Define

H_{S_{U_1}|Y_1 W_1}^{(N)} ≜ { j ∈ [N] : H(S^{f_1(j)} | Y_1^{1:N}, U_1^{1:N}, S^{1:f_1(j)−1}) ≥ log_2(q_{X_1}) − δ_N },

and let B_{S_{U_1}|Y_1 W_1}^{(N)} ≜ (H_{S_{U_1}|Y_1 W_1}^{(N)} ∪ L_{S_{U_1}|Y_1 W_1}^{(N)})^C.
Then we have

(1/N)|I_1^p| = (1/N)|H_{X_1|W_1}^{(N)} ∩ (H_{S_{U_1}|Y_1 W_1}^{(N)} ∪ B_{S_{U_1}|Y_1 W_1}^{(N)})^C|
= (1/N)|H_{X_1|W_1}^{(N)} ∩ (H_{S_{U_1}|Y_1 W_1}^{(N)})^C ∩ (B_{S_{U_1}|Y_1 W_1}^{(N)})^C|
≥ (1/N)|H_{X_1|W_1}^{(N)} ∩ (H_{S_{U_1}|Y_1 W_1}^{(N)})^C| − (1/N)|B_{S_{U_1}|Y_1 W_1}^{(N)}|
= (1/N)|H_{X_1|W_1}^{(N)}| − (1/N)|H_{S_{U_1}|Y_1 W_1}^{(N)}| − (1/N)|B_{S_{U_1}|Y_1 W_1}^{(N)}|.

From (28) we have

lim_{N→∞} (1/N)|H_{X_1|W_1}^{(N)}| = H_{q_{X_1}}(X_1 | W_1).

From [61, Lemma 1] we have

lim_{N→∞} (1/N)|B_{S_{U_1}|Y_1 W_1}^{(N)}| = 0.    (108)
From (39) we have

lim_{N→∞} (1/N)|H_{S_{U_1}|Y_1 W_1}^{(N)}|
= lim_{N→∞} (1/N) Σ_{j∈S_{U_1}} H_{q_{X_1}}(S^j | Y_1^{1:N}, W_1^{1:N}, S^{1:j−1})
= lim_{N→∞} [ (1/N) H_{q_{X_1}}(S^{1:2N} | Y_1^{1:N}, W_1^{1:N}) − (1/N) Σ_{j∈S'_{U_2}} H_{q_{X_1}}(S^j | Y_1^{1:N}, W_1^{1:N}, S^{1:j−1}) ]
= lim_{N→∞} [ (1/N) H_{q_{X_1}}(S^{1:2N}, W_1^{1:N} | Y_1^{1:N}) − (1/N) H_{q_{X_1}}(W_1^{1:N} | Y_1^{1:N}) − (1/N) Σ_{j∈S'_{U_2}} H_{q_{X_1}}(S^j | Y_1^{1:N}, W_1^{1:N}, S^{1:j−1}) ]
= lim_{N→∞} (1/N) H_{q_{X_1}}(S^{1:2N} | Y_1^{1:N}) + lim_{N→∞} (1/N) H_{q_{X_1}}(W_1^{1:N} | Y_1^{1:N}, S^{1:2N}) − H_{q_{X_1}}(W_1 | Y_1) − lim_{N→∞} (1/N) Σ_{j∈S'_{U_2}} H_{q_{X_1}}(S^j | Y_1^{1:N}, W_1^{1:N}, S^{1:j−1})
= H_{q_{X_1}}(X_1 W_2 | Y_1) − H_{q_{X_1}}(W_1 | Y_1) − lim_{N→∞} (1/N) Σ_{j∈S'_{U_2}} H_{q_{X_1}}(S^j | Y_1^{1:N}, S^{1:j−1})    (109)
= H_{q_{X_1}}(X_1 W_2 | Y_1) − H_{q_{X_1}}(W_1 | Y_1) − ( H_{q_{X_1}}(W_2) − I(X_1 W_2; Y_1) + P̄_1(1) )    (110)
= H_{q_{X_1}}(X_1) − H_{q_{X_1}}(W_1 | Y_1) − P̄_1(1),

where (109) holds because H_{q_{X_1}}(W_1^{1:N} | Y_1^{1:N}, S^{1:2N}) = 0, and (110) holds by

lim_{N→∞} (1/N) Σ_{j∈S'_{U_2}} H_{q_{X_1}}(S^j | Y_1^{1:N}, S^{1:j−1}) = H_{q_{X_1}}(W_2) − P̄_1(2) = H_{q_{X_1}}(W_2) − ( I(X_1 W_2; Y_1) − P̄_1(1) ).

Thus,

lim_{N→∞} (1/N)|I_1^p| = H_{q_{X_1}}(X_1 | W_1) − ( H_{q_{X_1}}(X_1) − H_{q_{X_1}}(W_1 | Y_1) − P̄_1(1) )
= H_{q_{X_1}}(X_1 W_1) − H_{q_{X_1}}(W_1) − H_{q_{X_1}}(X_1) + H_{q_{X_1}}(W_1 | Y_1) + P̄_1(1)
= P̄_1(1) − I(W_1; Y_1),    (111)

where (111) holds because H_{q_{X_1}}(X_1 W_1) = H_{q_{X_1}}(X_1).

APPENDIX D
PROOF OF LEMMA 3
We drop the subscript (·)_i for simplicity here as the analysis for any i is the same. Since G_N is an invertible mapping, by the chain rule for the Kullback-Leibler divergence we have

D(P_{W_1^{1:N}} || Q_{W_1^{1:N}}) = D(P_{U_1'^{1:N}} || Q_{U_1'^{1:N}})
= Σ_{j=1}^N D(P_{U_1'^j | U_1'^{1:j−1}} || Q_{U_1'^j | U_1'^{1:j−1}})
= Σ_{j∈H_{W_1}^{(N)}} D(P_{U_1'^j | U_1'^{1:j−1}} || Q_{U_1'^j | U_1'^{1:j−1}})    (112)
= Σ_{j∈H_{W_1}^{(N)}} ( log_2(q_{W_1}) − H(U_1'^j | U_1'^{1:j−1}) )    (113)
≤ N δ_N,    (114)

where (112) holds by our common message encoding scheme, (113) holds by the fact that information symbols and frozen symbols are uniformly distributed, and (114) holds by the definition of set H_{W_1}^{(N)}. Similarly,

D(P_{U_1^{1:N} | W_1^{1:N}} || Q_{U_1^{1:N} | W_1^{1:N}}) = Σ_{j=1}^N D(P_{U_1^j | U_1^{1:j−1} W_1^{1:N}} || Q_{U_1^j | U_1^{1:j−1} W_1^{1:N}})
= Σ_{j∈H_{X_1|W_1}^{(N)}} D(P_{U_1^j | U_1^{1:j−1} W_1^{1:N}} || Q_{U_1^j | U_1^{1:j−1} W_1^{1:N}})
= Σ_{j∈H_{X_1|W_1}^{(N)}} ( log_2(q_{X_1}) − H_{q_{X_1}}(U_1^j | U_1^{1:j−1} W_1^{1:N}) )
≤ N δ_N.

Then by the chain rule for the Kullback-Leibler divergence we have

D(P_{W_1^{1:N} X_1^{1:N}} || Q_{W_1^{1:N} X_1^{1:N}}) = D(P_{W_1^{1:N} U_1^{1:N}} || Q_{W_1^{1:N} U_1^{1:N}})
= D(P_{U_1^{1:N} | W_1^{1:N}} || Q_{U_1^{1:N} | W_1^{1:N}}) + D(P_{W_1^{1:N}} || Q_{W_1^{1:N}})    (115)
≤ 2N δ_N.    (116)

Similarly,

D(P_{W_2^{1:N} X_2^{1:N}} || Q_{W_2^{1:N} X_2^{1:N}}) ≤ 2N δ_N.    (117)
Then we have

‖ P_{W_1^{1:N} W_2^{1:N} X_1^{1:N} X_2^{1:N}} − Q_{W_1^{1:N} W_2^{1:N} X_1^{1:N} X_2^{1:N}} ‖
= ‖ P_{W_1^{1:N} X_1^{1:N}} P_{W_2^{1:N} X_2^{1:N}} − Q_{W_1^{1:N} X_1^{1:N}} Q_{W_2^{1:N} X_2^{1:N}} ‖
≤ ‖ P_{W_1^{1:N} X_1^{1:N}} P_{W_2^{1:N} X_2^{1:N}} − Q_{W_1^{1:N} X_1^{1:N}} P_{W_2^{1:N} X_2^{1:N}} ‖ + ‖ Q_{W_1^{1:N} X_1^{1:N}} P_{W_2^{1:N} X_2^{1:N}} − Q_{W_1^{1:N} X_1^{1:N}} Q_{W_2^{1:N} X_2^{1:N}} ‖    (118)
= ‖ P_{W_1^{1:N} X_1^{1:N}} − Q_{W_1^{1:N} X_1^{1:N}} ‖ + ‖ P_{W_2^{1:N} X_2^{1:N}} − Q_{W_2^{1:N} X_2^{1:N}} ‖    (119)
≤ 4 √(log 2) √(N δ_N),    (120)

where (118) holds by the triangle inequality, (119) holds by [62, Lemma 17], and (120) holds by (116), (117) and Pinsker's inequality.

Since P_{Y_1^{1:N} Y_2^{1:N} | X_1^{1:N} X_2^{1:N}} = Q_{Y_1^{1:N} Y_2^{1:N} | X_1^{1:N} X_2^{1:N}}, by [62, Lemma 17] we have

‖ P_{W_1^{1:N} W_2^{1:N} X_1^{1:N} X_2^{1:N} Y_1^{1:N} Y_2^{1:N}} − Q_{W_1^{1:N} W_2^{1:N} X_1^{1:N} X_2^{1:N} Y_1^{1:N} Y_2^{1:N}} ‖
= ‖ P_{W_1^{1:N} W_2^{1:N} X_1^{1:N} X_2^{1:N}} − Q_{W_1^{1:N} W_2^{1:N} X_1^{1:N} X_2^{1:N}} ‖
≤ 4 √(log 2) √(N δ_N).    (121)
APPENDIX E
PROOF OF LEMMA 4

To evaluate all error events in the proposed scheme, we denote the random variables drawn from the target distribution as U_1, X_1, Y_1, etc., those induced by our encoding scheme as Ũ_1, X̃_1, Ỹ_1, etc., and those of the decoding results as Ū_1, X̄_1, Ȳ_1, etc.

We first bound the error probability of a receiver with the Type I partially joint decoding. As an example, we consider Receiver 1 in the Type A scheme. Define the following error events

E_{1,i} ≜ {(W̃_1^{1:N} W̃_2^{1:N} X̃_1^{1:N} Ỹ_1^{1:N})_i ≠ (W_1^{1:N} W_2^{1:N} X_1^{1:N} Y_1^{1:N})},
E_{W_1W_2,i−1}^{ch} ≜ {(Ū_1'^{chaining} Ū_2'^{chaining})_{i−1} ≠ (Ũ_1'^{chaining} Ũ_2'^{chaining})_{i−1}},
E_{W_1W_2,i} ≜ {(Ū_1'^{1:N} Ū_2'^{1:N})_i ≠ (Ũ_1'^{1:N} Ũ_2'^{1:N})_i},
E_{X_1,i} ≜ {(Ū_1^{1:N})_i ≠ (Ũ_1^{1:N})_i},

where "chaining" in the superscript stands for the elements used for chaining.
The error probability of Receiver 1 when decoding messages in Block i (1 ≤ i ≤ K) can be upper bounded by

P_{e,i}^I ≤ P[E_{X_1,i} or E_{W_1W_2,i}]
= P[E_{X_1,i} or E_{W_1W_2,i} | E_{1,i}] P[E_{1,i}] + P[E_{X_1,i} or E_{W_1W_2,i} | E_{1,i}^C] P[E_{1,i}^C]
≤ P[E_{1,i}] + P[E_{X_1,i} or E_{W_1W_2,i} | E_{1,i}^C]
= P[E_{1,i}] + P[E_{W_1W_2,i} | E_{1,i}^C] + P[E_{X_1,i} | E_{1,i}^C, E_{W_1W_2,i}^C] P[E_{W_1W_2,i}^C | E_{1,i}^C]
≤ P[E_{1,i}] + P[E_{W_1W_2,i} | E_{1,i}^C] + P[E_{X_1,i} | E_{1,i}^C, E_{W_1W_2,i}^C].    (122)

Using optimal coupling [63, Lemma 3.6] we have

P[E_{1,i}] = ‖ P_{W_1^{1:N} W_2^{1:N} X_1^{1:N} Y_1^{1:N}} − Q_{(W_1^{1:N} W_2^{1:N} X_1^{1:N} Y_1^{1:N})_i} ‖.    (123)
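The coupling fact behind (123), namely that an optimal (maximal) coupling of two distributions has mismatch probability equal to their total variation distance, can be checked numerically with the short Python sketch below. The sketch uses the convention ‖P − Q‖_TV = (1/2) Σ|P − Q|, which may differ from the paper's norm convention by a constant factor; the pmfs are illustrative.

import numpy as np

def total_variation(P, Q):
    # TV distance between two pmfs on the same finite set (half the L1 norm)
    return 0.5 * np.abs(np.asarray(P) - np.asarray(Q)).sum()

def mismatch_prob_of_maximal_coupling(P, Q):
    # a maximal coupling puts mass min(P, Q) on the diagonal, so the two
    # coordinates differ with probability 1 - sum(min(P, Q)) = TV(P, Q)
    return 1.0 - np.minimum(np.asarray(P), np.asarray(Q)).sum()

P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.4, 0.4, 0.2])
assert np.isclose(total_variation(P, Q), mismatch_prob_of_maximal_coupling(P, Q))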
For i ≥ 2, we have

P[E_{W_1W_2,i} | E_{1,i}^C]
= P[E_{W_1W_2,i} | E_{1,i}^C, E_{W_1W_2,i−1}^{ch}] P[E_{W_1W_2,i−1}^{ch}] + P[E_{W_1W_2,i} | E_{1,i}^C, (E_{W_1W_2,i−1}^{ch})^C] P[(E_{W_1W_2,i−1}^{ch})^C]
≤ P[E_{W_1W_2,i−1}^{ch}] + P[E_{W_1W_2,i} | E_{1,i}^C, (E_{W_1W_2,i−1}^{ch})^C]
≤ P[E_{W_1W_2,i−1}] + N δ_N,    (124)

where (124) holds by the error probability of source polar coding [2]. Since

P[E_{W_1W_2,i−1}] = P[E_{W_1W_2,i−1} | E_{1,i−1}^C] P[E_{1,i−1}^C] + P[E_{W_1W_2,i−1} | E_{1,i−1}] P[E_{1,i−1}]
≤ P[E_{W_1W_2,i−1} | E_{1,i−1}^C] + P[E_{1,i−1}],

we have

P[E_{W_1W_2,i} | E_{1,i}^C] ≤ P[E_{W_1W_2,i−1} | E_{1,i−1}^C] + N δ_N + 4 √(log 2) √(N δ_N).

For i = 1, from our chaining scheme we know that

P[E_{W_1W_2,1} | E_{1,1}^C] ≤ N δ_N.

Then by induction we have

P[E_{W_1W_2,i} | E_{1,i}^C] ≤ i N δ_N + 4(i − 1) √(log 2) √(N δ_N).

By the error probability of source polar coding [2] we have

P[E_{X_1,i} | E_{1,i}^C, E_{W_1W_2,i}^C] ≤ N δ_N.
Thus,

P_{e,i}^I ≤ (i + 1) N δ_N + 4i √(log 2) √(N δ_N).

Then the error probability of Receiver 1 over the overall K blocks can be upper bounded by

P_e^I ≤ Σ_{i=1}^K P_{e,i}^I ≤ ((K + 1)(K + 2)/2) N δ_N + 2K(K + 1) √(log 2) √(N δ_N).    (125)
Next we bound the error probability of a receiver with the Type II partially joint decoding. As an example, we consider Receiver 1 in the Type B scheme. Define the following error events

E_{1,i} ≜ {(W̃_1^{1:N} W̃_2^{1:N} X̃_1^{1:N} Ỹ_1^{1:N})_i ≠ (W_1^{1:N} W_2^{1:N} X_1^{1:N} Y_1^{1:N})},
E_{W_1,i−1}^{ch} ≜ {(Ū_1'^{chaining})_{i−1} ≠ (Ũ_1'^{chaining})_{i−1}},
E_{W_2,i−1}^{ch} ≜ {(Ū_2'^{chaining})_{i−1} ≠ (Ũ_2'^{chaining})_{i−1}},
E_{W_1,i} ≜ {(Ū_1'^{1:N})_i ≠ (Ũ_1'^{1:N})_i},
E_{X_1W_2,i} ≜ {(Ū_1^{1:N} Ū_2'^{1:N})_i ≠ (Ũ_1^{1:N} Ũ_2'^{1:N})_i}.

Similar to (122), the error probability of Receiver 1 when decoding messages in Block i (1 ≤ i ≤ K) can be upper bounded by

P_{e,i}^{II} ≤ P[E_{X_1W_2,i} or E_{W_1,i}]
≤ P[E_{1,i}] + P[E_{W_1,i} | E_{1,i}^C] + P[E_{X_1W_2,i} | E_{1,i}^C, E_{W_1,i}^C].

Similar to the analysis for P[E_{W_1W_2,i} | E_{1,i}^C] in the Type I case, we have

P[E_{W_1,i} | E_{1,i}^C] ≤ i N δ_N + 4(i − 1) √(log 2) √(N δ_N),    (126)

and

P[E_{W_1,i}] ≤ P[E_{W_1,i} | E_{1,i}^C] P[E_{1,i}^C] + P[E_{1,i}]
≤ i N δ_N + 4i √(log 2) √(N δ_N).    (127)
For i ≥ 2, we have

P[E_{X_1W_2,i} | E_{1,i}^C, E_{W_1,i}^C]
= P[E_{X_1W_2,i} | E_{1,i}^C, E_{W_1,i}^C, (E_{W_2,i−1}^{ch})^C] P[(E_{W_2,i−1}^{ch})^C] + P[E_{X_1W_2,i} | E_{1,i}^C, E_{W_1,i}^C, E_{W_2,i−1}^{ch}] P[E_{W_2,i−1}^{ch}]
≤ P[E_{X_1W_2,i} | E_{1,i}^C, E_{W_1,i}^C, (E_{W_2,i−1}^{ch})^C] + P[E_{W_2,i−1}^{ch}]
≤ N δ_N + P[E_{X_1W_2,i−1}]
= N δ_N + P[E_{X_1W_2,i−1} | E_{1,i−1}^C] P[E_{1,i−1}^C] + P[E_{X_1W_2,i−1} | E_{1,i−1}] P[E_{1,i−1}]
≤ P[E_{X_1W_2,i−1} | E_{1,i−1}^C] + N δ_N + 4 √(log 2) √(N δ_N)
≤ P[E_{X_1W_2,i−1} | E_{1,i−1}^C, E_{W_1,i−1}^C] + P[E_{W_1,i−1}] + N δ_N + 4 √(log 2) √(N δ_N)
≤ P[E_{X_1W_2,i−1} | E_{1,i−1}^C, E_{W_1,i−1}^C] + i N δ_N + 4i √(log 2) √(N δ_N).

For i = 1, from our chaining scheme we have

P[E_{X_1W_2,1} | E_{1,1}^C, E_{W_1,1}^C] ≤ N δ_N.

Thus, by induction we have

P[E_{X_1W_2,i} | E_{1,i}^C, E_{W_1,i}^C] ≤ ((i + 2)(i − 1)/2)(N δ_N + 4 √(log 2) √(N δ_N)) + N δ_N.    (128)
From (126), (123), (127) and (128) we have

P_{e,i}^{II} ≤ ((i² + 3i − 2)/2)(N δ_N + 4 √(log 2) √(N δ_N)) + N δ_N.

Then we have

P_e^{II} ≤ Σ_{i=1}^K P_{e,i}^{II} ≤ (K(K + 1)(K + 5)/6) N δ_N + (2K(K² + 6K − 1)/3) √(log 2) √(N δ_N).    (129)
REFERENCES
[1] E. Arikan, “Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input
memoryless channels,” IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, 2009.
[2] ——, “Source polarization,” in 2010 IEEE International Symposium on Information Theory, 2010, pp. 899–903.
[3] S. B. Korada, “Polar codes for channel and source coding,” Ph.D. dissertation, ÉCOLE POLYTECHNIQUE FÉDÉRALE
DE LAUSANNE, 2009.
[4] S. B. Korada and R. L. Urbanke, “Polar codes are optimal for lossy source coding,” IEEE Transactions on Information
Theory, vol. 56, no. 4, pp. 1751–1768, 2010.
[5] E. Arikan, “Polar coding for the Slepian-Wolf problem based on monotone chain rules,” in 2012 IEEE International
Symposium on Information Theory Proceedings (ISIT), July 2012, pp. 566–570.
[6] M. Andersson, V. Rathi, R. Thobaben, J. Kliewer, and M. Skoglund, “Nested polar codes for wiretap and relay channels,”
IEEE Communications Letters, vol. 14, no. 8, pp. 752–754, 2010.
[7] H. Mahdavifar and A. Vardy, “Achieving the secrecy capacity of wiretap channels using polar codes,” IEEE Transactions
on Information Theory, vol. 57, no. 10, pp. 6428–6443, 2011.
[8] O. O. Koyluoglu and H. El Gamal, “Polar coding for secure transmission and key agreement,” IEEE Transactions on
Information Forensics and Security, vol. 7, no. 5, pp. 1472–1483, 2012.
[9] E. Şaşoğlu and A. Vardy, “A new polar coding scheme for strong security on wiretap channels,” in 2013 IEEE International
Symposium on Information Theory Proceedings (ISIT), July 2013, pp. 1117–1121.
[10] T. Gulcu and A. Barg, “Achieving secrecy capacity of the wiretap channel and broadcast channel with a confidential
component,” in Information Theory Workshop (ITW), 2015 IEEE, April 2015, pp. 1–5.
[11] Y. P. Wei and S. Ulukus, “Polar coding for the general wiretap channel with extensions to multiuser scenarios,” IEEE
Journal on Selected Areas in Communications, vol. 34, no. 2, pp. 278–291, 2016.
[12] M. Karzand, “Polar codes for degraded relay channels,” in Proceedings of the International Zurich Seminar on
Communications, 2012, pp. 59–62.
[13] R. Blasco-Serrano, R. Thobaben, M. Andersson, V. Rathi, and M. Skoglund, “Polar codes for cooperative relaying,” IEEE
Transactions on Communications, vol. 60, no. 11, pp. 3263–3273, 2012.
[14] S. Onay, “Successive cancellation decoding of polar codes for the two-user binary-input mac,” in 2013 IEEE International
Symposium on Information Theory Proceedings (ISIT), July 2013, pp. 1122–1126.
[15] E. Şaşoğlu, E. Telatar, and E. Yeh, “Polar codes for the two-user multiple-access channel,” IEEE Transactions on
Information Theory, vol. 59, no. 10, pp. 6583–6592, Oct 2013.
[16] E. Abbe and I. Telatar, “Polar codes for the m-user multiple access channel,” IEEE Transactions on Information Theory,
vol. 58, no. 8, pp. 5437–5448, Aug 2012.
[17] H. Mahdavifar, M. El-Khamy, J. Lee, and I. Kang, “Achieving the uniform rate region of general multiple access channels
by polar coding,” IEEE Transactions on Communications, vol. 64, no. 2, pp. 467–478, 2016.
[18] N. Goela, E. Abbe, and M. Gastpar, “Polar codes for broadcast channels,” IEEE Transactions on Information Theory,
vol. 61, no. 2, pp. 758–782, 2015.
[19] M. Mondelli, S. H. Hassani, I. Sason, and R. L. Urbanke, “Achieving Marton’s region for broadcast channels using polar
codes,” IEEE Transactions on Information Theory, vol. 61, no. 2, pp. 783–800, 2015.
[20] R. A. Chou and M. R. Bloch, “Polar coding for the broadcast channel with confidential messages: A random binning
analogy,” IEEE Transactions on Information Theory, vol. 62, no. 5, pp. 2410–2429, 2016.
[21] M. Andersson, R. Schaefer, T. Oechtering, and M. Skoglund, “Polar coding for bidirectional broadcast channels with
common and confidential messages,” IEEE Journal on Selected Areas in Communications, vol. 31, no. 9, pp. 1901–1908,
September 2013.
[22] C. E. Shannon, “Two-way communication channels,” in Proceedings of the Fourth Berkeley Symposium on Mathematical
Statistics and Probability, vol. 1, 1961, pp. 611–644.
[23] R. Ahlswede, “The capacity region of a channel with two senders and two receivers,” The Annals of Probability, vol. 2,
no. 5, pp. 805–814, 1974.
[24] A. Carleial, “A case where interference does not reduce capacity,” IEEE Transactions on Information Theory, vol. 21,
no. 5, pp. 569–570, 1975.
[25] H. Sato, “The capacity of the Gaussian interference channel under strong interference,” IEEE Transactions on Information
Theory, vol. 27, no. 6, pp. 786–788, 1981.
[26] S. T. Chung and J. M. Cioffi, “The capacity region of frequency-selective Gaussian interference channels under strong
interference,” IEEE Transactions on Communications, vol. 55, no. 9, pp. 1812–1821, 2007.
[27] N. Liu and S. Ulukus, “The capacity region of a class of discrete degraded interference channels,” IEEE Transactions on
Information Theory, vol. 54, no. 9, pp. 4372–4378, 2008.
[28] R. Benzel, “The capacity region of a class of discrete additive degraded interference channels,” IEEE Transactions on
Information Theory, vol. 25, no. 2, pp. 228–231, 1979.
[29] H. F. Chong and M. Motani, “The capacity region of a class of semideterministic interference channels,” IEEE Transactions
on Information Theory, vol. 55, no. 2, pp. 598–603, 2009.
[30] A. E. Gamal and M. Costa, “The capacity region of a class of deterministic interference channels,” IEEE Transactions on
Information Theory, vol. 28, no. 2, pp. 343–346, 1982.
[31] M. Costa and A. E. Gamal, “The capacity region of the discrete memoryless interference channel with strong interference,”
IEEE Transactions on Information Theory, vol. 33, no. 5, pp. 710–711, 1987.
[32] A. Carleial, “Interference channels,” IEEE Transactions on Information Theory, vol. 24, no. 1, pp. 60–70, 1978.
[33] T. Cover, “An achievable rate region for the broadcast channel,” IEEE Transactions on Information Theory, vol. 21, no. 4,
pp. 399–404, 1975.
[34] T. Han and K. Kobayashi, “A new achievable rate region for the interference channel,” IEEE Transactions on Information
Theory, vol. 27, no. 1, pp. 49–60, 1981.
[35] H. F. Chong, M. Motani, H. K. Garg, and H. E. Gamal, “On the Han-Kobayashi region for the interference channel,”
IEEE Transactions on Information Theory, vol. 54, no. 7, pp. 3188–3195, 2008.
[36] S. Sharifi, A. K. Tanc, and T. M. Duman, “Implementing the Han-Kobayashi scheme using low density parity check codes
over Gaussian interference channels,” IEEE Transactions on Communications, vol. 63, no. 2, pp. 337–350, 2015.
[37] ——, “LDPC code design for binary-input binary-output z interference channels,” in 2015 IEEE International Symposium
on Information Theory (ISIT), 2015, pp. 1084–1088.
[38] K. Appaiah, O. O. Koyluoglu, and S. Vishwanath, “Polar alignment for interference networks,” in Communication, Control,
and Computing (Allerton), 2011 49th Annual Allerton Conference on, 2011, pp. 240–246.
[39] L. Wang, “Channel coding techniques for network communication,” Ph.D. dissertation, UNIVERSITY OF CALIFORNIA,
SAN DIEGO, 2015.
[40] C. Hirche, C. Morgan, and M. M. Wilde, “Polar codes in network quantum information theory,” IEEE Transactions on
Information Theory, vol. 62, no. 2, pp. 915–924, 2016.
[41] L. Wang, E. Şaşoğlu, B. Bandemer, and Y. H. Kim, “A comparison of superposition coding schemes,” in 2013 IEEE
International Symposium on Information Theory, 2013, pp. 2970–2974.
[42] E. Şaşoğlu, “Polar coding theorems for discrete systems,” Ph.D. dissertation, ÉCOLE POLYTECHNIQUE FÉDÉRALE
DE LAUSANNE, 2011.
[43] R. G. Gallager, Information theory and reliable communication.
Wiley: New York, 1968.
[44] M. Mondelli, R. Urbanke, and S. H. Hassani, “How to achieve the capacity of asymmetric channels,” in 2014 52nd Annual
Allerton Conference on Communication, Control, and Computing (Allerton), 2014, pp. 789–796.
[45] J. Honda and H. Yamamoto, “Polar coding without alphabet extension for asymmetric models,” IEEE Transactions on
Information Theory, vol. 59, no. 12, pp. 7829–7838, 2013.
[46] R. A. Chou and M. R. Bloch, “Using deterministic decisions for low-entropy bits in the encoding and decoding of polar
codes,” in 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2015, pp. 1380–
1385.
[47] S. H. Hassani and R. Urbanke, “Universal polar codes,” in 2014 IEEE International Symposium on Information Theory,
2014, pp. 1451–1455.
[48] E. Şaşoğlu and L. Wang, “Universal polarization,” IEEE Transactions on Information Theory, vol. 62, no. 6, pp. 2937–2946,
2016.
[49] A. El Gamal and Y.-H. Kim, Network information theory.
Cambridge university press, 2011.
[50] P. Bergmans, “Random coding theorem for broadcast channels with degraded components,” IEEE Transactions on
Information Theory, vol. 19, no. 2, pp. 197–207, 1973.
[51] R. Wang, J. Honda, H. Yamamoto, R. Liu, and Y. Hou, “Construction of polar codes for channels with memory,” in 2015
IEEE Information Theory Workshop - Fall (ITW), 2015, pp. 187–191.
[52] E. Şaşoğlu and I. Tal, “Polar coding for processes with memory,” in 2016 IEEE International Symposium on Information
Theory (ISIT), 2016, pp. 225–229.
[53] R. Nasser and E. Telatar, “Polar codes for arbitrary DMCs and arbitrary MACs,” IEEE Transactions on Information Theory,
vol. 62, no. 6, pp. 2917–2936, 2016.
[54] R. Nasser, “An ergodic theory of binary operations–part I: Key properties,” IEEE Transactions on Information Theory,
vol. 62, no. 12, pp. 6931–6952, 2016.
[55] ——, “An ergodic theory of binary operations–part II: Applications to polarization,” IEEE Transactions on Information
Theory, vol. 63, no. 2, pp. 1063–1083, 2017.
[56] E. E. Gad, Y. Li, J. Kliewer, M. Langberg, A. A. Jiang, and J. Bruck, “Asymmetric error correction and flash-memory
rewriting using polar codes,” IEEE Transactions on Information Theory, vol. 62, no. 7, pp. 4024–4038, 2016.
[57] T. M. Cover and J. A. Thomas, Elements of information theory.
John Wiley & Sons, 2012.
[58] R. Mori and T. Tanaka, “Performance of polar codes with the construction using density evolution,” IEEE Communications
Letters, vol. 13, no. 7, pp. 519–521, 2009.
[59] I. Tal and A. Vardy, “How to construct polar codes,” IEEE Transactions on Information Theory, vol. 59, no. 10, pp.
6562–6582, Oct 2013.
[60] B. Bandemer, A. E. Gamal, and Y. H. Kim, “Optimal achievable rates for interference networks with random codes,” IEEE
Transactions on Information Theory, vol. 61, no. 12, pp. 6536–6549, 2015.
[61] R. A. Chou, M. R. Bloch, and E. Abbe, “Polar coding for secret-key generation,” IEEE Transactions on Information
Theory, vol. 61, no. 11, pp. 6213–6237, 2015.
[62] P. W. Cuff, “Communication in networks for coordinating behavior,” Ph.D. dissertation, STANFORD UNIVERSITY, 2009.
[63] D. Aldous, “Random walks on finite groups and rapidly mixing markov chains,” in Séminaire de Probabilités XVII 1981/82.
Springer, 1983, pp. 243–297.
ULRICH IDEALS AND MODULES OVER TWO-DIMENSIONAL
RATIONAL SINGULARITIES
arXiv:1307.2093v1 [math.AC] 8 Jul 2013
SHIRO GOTO, KAZUHO OZEKI, RYO TAKAHASHI, KEI-ICHI WATANABE, KEN-ICHI YOSHIDA
Abstract. The main aim of this paper is to classify Ulrich ideals and Ulrich modules
over two-dimensional Gorenstein rational singularities (rational double points) from a
geometric point of view. To achieve this purpose, we introduce the notion of (weakly)
special Cohen–Macaulay modules with respect to ideals, and study the relationship between those modules and Ulrich modules with respect to good ideals.
Contents
1. Introduction
2. Preliminaries
2.1. Ulrich ideals and modules
2.2. Two-dimensional rational singularities
3. Weakly special Cohen–Macaulay modules over Gorenstein local domains
4. Special Cohen–Macaulay modules over two-dimensional rational singularities
5. Ulrich ideals and modules over rational double points
6. Ulrich ideals of non-Gorenstein rational singularities
7. Examples
References
1. Introduction
In the paper [GOTWY] we established the theory of Ulrich ideals and modules with
a generalized form. The concept of Ulrich modules, or maximally generated maximal
Cohen–Macaulay modules (MGMCM modules) was introduced by [U, BHU]. In our
language, MGMCM modules are just Ulrich modules with respect to the maximal ideal.
While there are very few MGMCM modules in general, any maximal Cohen–Macaulay
module over a hypersurface local ring of multiplicity (degree) 2 is a finite direct sum of
free modules and Ulrich modules. So, our Ulrich modules include much more members
than MGMCM modules.
2010 Mathematics Subject Classification. 13C14, 14B05, 14C25, 14E16.
Key words and phrases. Ulrich ideal, Ulrich module, special Cohen–Macaulay module, rational singularity, finite CM–representation type.
This work was partially supported by JSPS Grant-in-Aid for Scientific Research (C)
20540050/22540047/22540054/23540059, JSPS Grant-in-Aid for Young Scientists (B) 22740008/22740026
and by JSPS Postdoctoral Fellowships for Research Abroad.
To state the main results, let us begin with the definition of Ulrich ideals and modules.
Let A be a Cohen–Macaulay local ring with maximal ideal m and d = dim A ≥ 0, and
let I ⊂ A be a nonparameter m-primary ideal. For simplicity, we assume that I contains
a parameter ideal Q = (a1 , a2 , . . . , ad ) of A as a reduction, that is, I r+1 = QI r for some
integer r ≥ 1.
Definition 1.1. We say that I is an Ulrich ideal of A if it satisfies the following conditions:
(1) I 2 = QI.
(2) I/I 2 is a free A/I-module.
Let XA denote the set of all Ulrich ideals that are not parameter ideals.
For instance, a Cohen–Macaulay local ring (A, m) has maximal embedding dimension ([S1]) if and only if m is an Ulrich ideal.
Definition 1.2. Let M be a nonzero finitely generated A-module. Then we say that M
is an Ulrich A-module with respect to I, if the following conditions are satisfied:
(1) M is a maximal Cohen–Macaulay A-module.
(2) e0I (M) = ℓA (M/IM).
(3) M/IM is A/I-free.
Here e0I (M) denotes the multiplicity of M with respect to I and ℓA (M/IM) denotes the
length of the A-module M/IM.
In [GOTWY], we proved that all higher syzygy modules SyziA (A/I) of an Ulrich ideal I
are Ulrich modules with respect to I. Moreover, if A is of finite CM-representation type,
then XA is a finite set. Recall here that a Cohen–Macaulay local ring is said to be of
finite CM-representation type if there are only a finite number of isomorphism classes of
indecomposable maximal Cohen–Macaulay A-modules. Thus we consider the following
natural question.
Problem 1.3. Let (A, m) be a Cohen–Macaulay local ring of finite CM-representation
type.
(1) Classify all Ulrich ideals I of A.
(2) Classify all Ulrich A-modules with respect to a given m-primary ideal I.
(3) Determine all ideals I so that there exists an Ulrich A-module with respect to I.
In [GOTWY, Section 9], we gave an answer to the problem as above in the case of a one-dimensional Gorenstein local ring of finite CM-representation type by using techniques
from representation theory of maximal Cohen–Macaulay modules. We want to give a
complete answer to the question as above in the case of a two-dimensional Gorenstein
local ring of finite CM-representation type. Notice that 2-dimensional Gorenstein local
rings of finite CM-representation type (over an algebraically closed field of characteristic
0) are 2-dimensional Gorenstein rational singularities.
Let us explain the organization of the paper. In Section 3, we introduce the notion
of weakly special Cohen–Macaulay modules; let A be a Gorenstein local domain and
I ⊂ A an m-primary ideal. A maximal Cohen–Macaulay A-module M is called a weakly
special Cohen–Macaulay A-module with respect to I if µA (M) = 2 · rankA M and M/IM
is A/I-free, where µA (M) denotes the cardinality of a minimal set of generators of M; see
Definition 3.1. Then we prove that M is an Ulrich A-module with respect to I and I is a
good ideal (see Section 2) if and only if M is a weakly special Cohen–Macaulay A-module
with respect to I for a Gorenstein local domain A and a nonparameter m-primary stable
ideal I; see Theorem 3.2 for details. As an application, we give a partial answer to the
Problem 1.3(3). This implies that I is an Ulrich ideal if and only if there exists an Ulrich
A-module with respect to I for any two-dimensional Gorenstein rational singularity.
In Section 4, we modify the notion of special Cohen–Macaulay A-modules introduced by
Wunram [Wu]: Let A be a two-dimensional rational singularity, and M a maximal Cohen–
Macaulay A-module without free summands. Then M is a special Cohen–Macaulay A-module with respect to I if and only if Ext1A (M, A) = 0 and M/IM is A/I-free; see
Definition 4.5. Special Cohen–Macaulay A-modules are weakly special Cohen–Macaulay
A-modules (but the converse is not true in general). The main result in this section is
the following theorem, which gives a criterion for I (resp. Z) to be a special ideal (resp.
a special cycle) in terms of cycles.
Theorem 4.10. Let Z = Σ_{j=1}^r a_j E_j ≠ Z_0 be an anti-nef cycle on the minimal resolution X → Spec A, and put I = I_Z. Let Z_0 = Σ_{j=1}^r n_j E_j denote the fundamental cycle on X. Then the following conditions are equivalent for every i, 1 ≤ i ≤ r.
(1) M_i is a special Cohen–Macaulay A-module with respect to I.
(2) a_i = n_i · ℓ_A(A/I).
(3) There exist positive cycles 0 < Y_s ≤ ... ≤ Y_1 ≤ Z_0 and anti-nef cycles Z_1, ..., Z_s so that Z_k = Z_{k−1} + Y_k for each k = 1, ..., s and
    Z_{k−1} · Y_k = 0,  p_a(Y_k) = 0  and  coeff_{E_i} Y_k = n_i  for every k = 1, 2, ..., s,
where coeff_{E_i} W stands for the coefficient of E_i in a cycle W.
When this is the case, ℓ_A(A/I) = s + 1 and every I_k := I_{Z_k} is a special ideal. Moreover, for every k = 1, 2, ..., s, we obtain that Supp(Y_k) is connected, Supp(Y_k) ⊂ ∪{E_j ⊂ Supp(Y_{k−1}) | E_j Z_{k−1} = 0}, and Y_k is the fundamental cycle on Supp(Y_k).
In Section 5, we give a complete list of Ulrich ideals and Ulrich modules with respect
to some ideal I for any two-dimensional Gorenstein rational Cohen–Macaulay singularity.
Main tools are the Riemann–Roch formula, the McKay correspondence and results in
Section 4. The following theorem is the main result in this paper.
Theorem 1.4. Let A be a two-dimensional Gorenstein rational singularity. Then the set
XA of all nonparameter Ulrich ideals is given by:
(A_{2m})   {(x, y, z), (x, y², z), ..., (x, y^m, z)}.
(A_{2m+1}) {(x, y, z), (x, y², z), ..., (x, y^{m+1}, z)}.
(D_{2m})   {(x, y, z), (x, y², z), ..., (x, y^{m−1}, z), (x + √−1 y^{m−1}, y^m, z), (x − √−1 y^{m−1}, y^m, z), (x², y, z)}.
(D_{2m+1}) {(x, y, z), (x, y², z), ..., (x, y^m, z), (x², y, z)}.
(E_6)      {(x, y, z), (x, y², z)}.
(E_7)      {(x, y, z), (x, y², z), (x, y³, z)}.
(E_8)      {(x, y, z), (x, y², z)}.
In Section 6, we discuss Ulrich ideals of two-dimensional non-Gorenstein rational singularities. We show that any Ulrich ideal is integrally closed and represented on the minimal resolution of singularities, and is also a special ideal in the sense of Section 4.
For instance, any non-Gorenstein cyclic quotient singularity admits a unique Ulrich ideal,
that is, the maximal ideal; see also Section 7.
2. Preliminaries
2.1. Ulrich ideals and modules. First we recall the notion of good ideals in a Gorenstein local ring.
Definition 2.1 (See [GIW]). Suppose that A is a Gorenstein local ring. Let I ⊂ A be
a nonparameter m-primary ideal. If I 2 = QI holds for some minimal reduction Q of I,
then I is called a stable ideal. If I is stable and Q : I = I, then I is called a good ideal.
An m-primary stable ideal I is good if and only if e0I (A) = 2 · ℓA (A/I).
An Ulrich ideal in a Gorenstein local ring is always a good ideal.
Proposition 2.2 (See [GOTWY, Lemma 2.3, Corollary 2.6]). Let A be a d-dimensional
Cohen–Macaulay local ring, and let I ⊂ A be a nonparameter m-primary ideal. Then:
(1) Suppose that I is stable. Then e0I (A) ≤ (µA (I) − d + 1) · ℓA (A/I). Equality holds
if and only if I is an Ulrich ideal.
(2) Suppose that A is Gorenstein. Then the following conditions are equivalent:
(a) I is an Ulrich ideal.
(b) I is a good ideal and µA (I) = d + 1.
(c) I is a good ideal and A/I is Gorenstein.
Let us give two typical examples of Ulrich ideals.
Example 2.3. It is well-known that µA (m) ≤ e0m (A) + dim A − 1 holds true. Equality
holds if and only if the maximal ideal m is stable; see [S1]. Then A is said to have
maximal embedding dimension. By 2.2 (1), m is an Ulrich ideal if and only if A has
maximal embedding dimension.
Suppose that A is a two-dimensional hypersurface of degree 2. Then the maximal ideal
m is an Ulrich ideal. Moreover, a power mk is a good ideal but not an Ulrich ideal for all
k ≥ 2.
Example 2.4. Let A = k[[x_0, x_1, ..., x_d]]/(x_0^{n_0} + ··· + x_d^{n_d}) be a diagonal hypersurface. Suppose that n_0 = 2m is even. Then (x_0^m, x_1^{k_1}, ..., x_d^{k_d}) is an Ulrich ideal for every 1 ≤ k_i ≤ ⌊n_i/2⌋ (i = 1, 2, ..., d).
The following theorem gives a relationship between Ulrich ideals and Ulrich modules
with respect to ideals.
Theorem 2.5 (cf. [GOTWY, Theorem 4.1]). Let A be a Cohen–Macaulay local ring of
dimension d. Then the following conditions are equivalent:
(1) I is a nonparameter Ulrich ideal.
(2) SyziA (A/I) is an Ulrich A-module with respect to I for all i ≥ d.
Note that there exists a non-Ulrich ideal I so that SyziA (A/I) is an Ulrich A-module
with respect to I; see e.g. Examples 3.7, 3.8.
On the other hand, we can construct new Ulrich modules from a given Ulrich module
by the following theorem.
Theorem 2.6 (See also [GOTWY, Lemma 4.2, Theorem 5.1]). Suppose that A is a
Cohen–Macaulay local ring of dimension d which admits a canonical module KA . Assume
that I is an Ulrich ideal with µ(I) > d and M is an Ulrich A-module with respect to I.
Then
(1) Syz1A (M) is an Ulrich A-module with respect to I.
(2) M ∨ = HomA (M, KA ) is an Ulrich A-module with respect to I.
2.2. Two-dimensional rational singularities. Throughout this subsection, let A be a
two-dimensional complete normal local domain with unique maximal ideal m containing
an algebraically closed field k of characteristic 0, unless otherwise specified. (Many results in this paper hold true if k is an algebraically closed field of positive characteristic.
For simplicity, we assume that k has characteristic 0.) Moreover, assume that A has a
rational singularity, that is, there exists a resolution of singularities ϕ : X → Spec A with
H1 (X, OX ) = 0; see [Li1, Li2]. A typical example of rational singularities is a quotient singularity. Moreover, (two-dimensional) Gorenstein rational singularities are called rational
double points, which are hypersurfaces of degree 2.
Positive cycles, anti-nef cycles. In what follows, let ϕ : X → Spec A be a resolution
of singularities with E = ϕ^{−1}(m) the exceptional divisor. Let E = ∪_{i=1}^r E_i be the decomposition into irreducible components of E. In the set C = Σ_{i=1}^r Z E_i of cycles supported on E, we define a partial order ≤ as follows: for Z, Z′ ∈ C, Z ≤ Z′ if every coefficient of E_i in Z′ − Z is nonnegative. A cycle Z = Σ_{i=1}^r a_i E_i is called positive, denoted by Z > 0, if 0 ≤ Z and Z ≠ 0.
On the other hand, a positive cycle Z = Σ_{i=1}^r a_i E_i is said to be anti-nef if Z E_i ≤ 0 for
every i = 1, . . . , r, where ZY denotes the intersection number of Z and Y .
Virtual genus. Since the intersection matrix [E_i E_j]_{1≤i,j≤r} is negative definite, there exists the unique Q-divisor K_X, the canonical divisor, so that the following equation

p_a(E_i) := (E_i² + K_X E_i)/2 + 1 = 0

holds for every i = 1, ..., r, where K_X is the canonical divisor of X. If E_i² = K_X E_i = −1, then E_i ≅ P¹ is called a (−1)-curve. We say that X is a minimal resolution if X contains no (−1)-curve. Such a resolution is unique up to isomorphism. Moreover, for any positive cycle Y > 0, we put

p_a(Y) = (Y² + K_X Y)/2 + 1,

which is called the virtual genus of Y. One can easily see that
pa (Y + Y ′ ) = pa (Y ) + pa (Y ′ ) + Y Y ′ − 1.
Furthermore, it is well-known that if A is a rational singularity then pa (Z) ≤ 0 holds true
for every positive cycle Z ([Ar, Proposition 1]).
Dual graph. In what follows, assume that ϕ : X → Spec A is the minimal resolution
of singularities with ϕ−1 (m) = ∪ri=1 Ei . Then the dual graph Γ of ϕ is a simple graph with
the vertex set {Ei }ri=1 and the edge defined by the following:
the edge Ei − Ej exists (resp. does not exist) if and only if Ei Ej = 1 (resp. Ei Ej = 0).
For instance, we have the following example:
[Dual graph Γ with vertices E_1, ..., E_6, whose edges are determined by E_1E_4 = E_2E_3 = E_3E_4 = E_4E_5 = E_5E_6 = 1 and E_iE_j = 0 otherwise.]

Let Y = Σ_{j=1}^r a_j E_j be a positive cycle on X. Then we put Supp(Y) = ∪{E_i | a_i > 0},
the support of Y . Such a set is called connected if the induced subgraph is connected.
Note that if Y is positive and pa (Y ) = 0 then Y is connected.
Integrally closed ideal. Let I be an m-primary ideal of A. Then I is said to be
represented on X if the sheaf IOX is invertible, that is, there exists an anti-nef cycle
Z with support in E so that IOX = OX (−Z) and I = H0 (X, OX (−Z)). Then we
denote such an ideal I by I = IZ . The product of two integrally closed ideals of A is
also integrally closed ([Li1]). There is a one-to-one correspondence between the set of
integrally closed m-primary ideals of A that are represented on X and the set of anti-nef cycles Z = Σ_{i=1}^r a_i E_i on X.
Good ideal. Now we recall the notion of good ideals of rational singularities.
Definition 2.7. Let I be an m-primary ideal of A. Then I is called good if I is represented
on the minimal resolution of singularities.
Notice that this definition is different from that of Definition 2.1. But for any m-primary ideal I of a two-dimensional Gorenstein rational singularity, I is good in the sense of Definition 2.1 if and only if it is good in the sense of Definition 2.7; see also [GIW, Theorem 7.8] or [WY].
The following fact is well-known.
Lemma 2.8. Let A be a two-dimensional (not necessarily Gorenstein) rational singularity,
and ϕ : X → Spec A denotes the minimal resolution of singularities. Then:
(1) The minimum element (say, Z0 ) among all non-zero anti-nef cycles on X exists.
This cycle Z0 is called the fundamental cycle on X which corresponds to the maximal ideal m. In particular, m = H0 (X, OX (−Z0 )) is a good ideal.
(2) If I = H0 (X, OX (−Z)) and J = H0 (X, OX (−Z ′ )) are good ideals of A, then
IJ = H0 (X, OX (−(Z + Z ′ ))) is also a good ideal.
(3) If I = H0 (X, OX (−Z)), then e0I (A) = −Z 2 .
The colength ℓA (A/I) can also be determined by the anti-nef cycle Z; see the Riemann–
Roch formula (Lemma 4.12).
3. Weakly special Cohen–Macaulay modules over Gorenstein local
domains
Throughout this section, let A be a Gorenstein local domain and I ⊂ A a nonparameter
m-primary ideal, unless otherwise specified. In this section, we introduce the notion of
weakly special Cohen–Macaulay modules, which are closely related to Ulrich modules.
Definition 3.1 (Weakly special CM module, ideal). Let A be a Cohen–Macaulay
local domain, and let M be a maximal Cohen–Macaulay A-module. If M satisfies
µA (M) = 2 · rankA M and M/IM is A/I-free, then M is called a weakly special Cohen–
Macaulay A-module with respect to I.
Suppose that I ⊂ A is a stable ideal. If there exists a weakly special Cohen–Macaulay
A-module with respect to I, then I is called a weakly special ideal of A.
Now suppose that A is a Gorenstein local ring. Let I ⊂ A be a stable m-primary ideal
with minimal reduction Q. Then as I ⊆ Q : I, we have I/Q ⊆ (Q : I)/Q ≅ K_{A/I}. Hence
e0I (A) = ℓA (A/Q) ≤ ℓA (A/I) + ℓ(KA/I ) = 2 · ℓA (A/I),
where the last equality follows from the Matlis duality theorem. Note that equality holds
if and only if I is a good ideal.
The following theorem is the main result in this section.
Theorem 3.2. Suppose that A is a Gorenstein local domain and I is a stable ideal of
A. Let M be a maximal Cohen–Macaulay A-module. Then the following conditions are
equivalent:
(1) M is an Ulrich A-module with respect to I, and I is a good ideal.
(2) M is a weakly special Cohen–Macaulay A-module with respect to I.
Proof of Theorem 3.2. We may assume that M/IM is A/I-free. Thus
ℓA (M/IM) = µA (M) · ℓA (A/I).
(1) =⇒ (2) : By assumption we have
ℓA (M/IM) = e0I (M) = e0I (A) · rankA M = 2 · ℓA (A/I) · rankA M,
where the second equality follows from the associativity formula of multiplicities (e.g.
[Ma, Theorem 14.8]). It follows from the above two equalities that µA (M) = 2 · rankA M.
Thus M is a weakly special Cohen–Macaulay A-module with respect to I.
(2) =⇒ (1) : Since M is a maximal Cohen–Macaulay A-module, we have
ℓA (M/IM) ≤ e0I (M).
On the other hand, by the observation and the equality described as above, we get
ℓA (M/IM) = µA (M) · ℓA (A/I) = 2 · rankA M · ℓA (A/I) ≥ e0I (A) · rankA M = e0I (M).
Therefore ℓA (M/IM) = e0I (M) and e0I (A) = 2·ℓA (A/I). That is, M is an Ulrich A-module
with respect to I and I is a good ideal.
Corollary 3.3. Suppose that A is a Gorenstein local domain. If I is an Ulrich ideal, then
it is a weakly special ideal.
Proof. If I is an Ulrich ideal, then it is a good ideal by Proposition 2.2 and M = Syz_A^{dim A}(A/I) is an Ulrich A-module with respect to I by Theorem 2.5. By Theorem
3.2, M is a weakly special Cohen–Macaulay A-module with respect to I. Hence I is a
weakly special ideal, as required.
Proposition 3.4. Suppose that A is a hypersurface local domain. Then I ⊂ A is an
Ulrich ideal if and only if it is a weakly special ideal.
Proof. It suffices to prove the ‘if’ part. Now suppose that I is a weakly special ideal. Take
a weakly special Cohen–Macaulay A-module M with respect to I. By Theorem 3.2, M
is an Ulrich A-module with respect to I. Since A is a hypersurface and M is a maximal
Cohen–Macaulay A-module without free summands, we have a minimal free presentation
Aµ → Aµ → M → 0, which induces an exact sequence
f
(A/Q)µ → (A/Q)µ −
→ M/QM → 0.
As M/QM = M/IM is A/I-free, we have M/QM ∼
= (A/I)µ . It is easy to observe that
µ
the kernel of f is isomorphic to (I/Q) . Hence there is a surjection (A/Q)µ → (I/Q)µ ,
which shows µA (I/Q) ≤ 1. Thus µA (I) = d + 1, and hence Proposition 2.2 implies that
I is an Ulrich ideal.
The following corollary gives a partial answer to Problem 1.3.
Corollary 3.5. Suppose that A is a hypersurface local domain, and I ⊂ A is a good ideal.
If there exists an Ulrich A-module with respect to I, then I is an Ulrich ideal.
Proof. The assertion follows from Theorem 3.2 and Proposition 3.4.
Question 3.6. Let A be a Gorenstein local domain and I ⊂ A be a stable ideal. Suppose
that there exists an Ulrich A-module M with respect to I. Is then I an Ulrich ideal
(especially, a good ideal)?
The next example shows that we cannot relax the assumption that I is stable in
Question 3.6.
Example 3.7. Let k be a field and let e ≥ 3 be an integer. Set A = k[[te , te+1 ]] and
M = (te , te+1 )e−1 . Then A is a hypersurface local domain and M is an Ulrich A-module
with respect to m = (te , te+1 ), the maximal ideal of A. But m is not stable.
The next example shows that we cannot relax the assumption that A is a local domain
(or dim A ≥ 1) in Question 3.6.
Example 3.8. Let k be a field, and let a, e be integers with 2a > e > a ≥ 2. Set
A = k[[t]]/(te ), and I = (ta ). Then I 2 = 0 but I 6= 0 : I = (te−a ). Hence I is stable but
not good. Then t^{e−a} A ≅ A/I is an Ulrich A-module with respect to I.
4. Special Cohen–Macaulay modules over two-dimensional rational
singularities
Throughout this section, let (A, m) be a two-dimensional complete normal local domain
with an algebraically closed residue field k of characteristic zero. Let ϕ : X → Spec A be
the minimal resolution of singularities with E = ϕ−1 (m) the exceptional divisor. Let
E = ∪rj=1 Ej be the decomposition into irreducible components of E. Let I ⊂ A be an
m-primary ideal, and Q a minimal reduction of I. For every maximal Cohen–Macaulay
A-module M, we put M̃ = ϕ*(M)/torsion.
First we recall the notion of special Cohen–Macaulay modules.
Theorem–Definition 4.1 (Special McKay correspondence due to Wunram). Assume that A is a rational singularity, and let ϕ : X → Spec A be as above. For every i, there exists a unique indecomposable maximal Cohen–Macaulay A-module M_i (up to isomorphism) with H^1(M̃_i^∨) = 0 so that

c_1(M̃_i) E_j = δ_{ij}

for every 1 ≤ i, j ≤ r and rank_A M_i = n_i, where c_1(M̃) denotes the 1st Chern class of M̃ and Z_0 denotes the fundamental cycle on X.
Based upon this theorem, we define a (nontrivial) special Cohen–Macaulay A-module,
which has been defined in more general settings.
Definition 4.1 (Special CM module). Suppose that A is a two-dimensional rational
singularity. Let M be a maximal Cohen–Macaulay A-module. Then M is called a special
Cohen–Macaulay A-module if M is isomorphic to a finite direct sum of M1 , . . . , Mr .
Remark 4.2. Let KA denote the canonical module of A. A maximal Cohen–Macaulay A-module M is said to be a special Cohen–Macaulay A-module if M ⊗A KA /torsion is Cohen–
Macaulay. This condition is equivalent to Ext1A (M, A) = 0; see [Wu]. In particular, any
free A-module or any maximal Cohen–Macaulay module over a Gorenstein local domain
A is a special Cohen–Macaulay A-module in this sense. But in this paper, we use the
notion of special Cohen–Macaulay modules for two-dimensional rational singularities only.
Iyama–Wemyss [IW] proved the following characterization of special Cohen–Macaulay
modules.
Proposition 4.3 (cf. [IW, Theorem 3.6]). Suppose that A is a two-dimensional rational
singularity. Let M be a maximal Cohen–Macaulay A-module without free summands.
Then M is a special Cohen–Macaulay A-module if and only if Syz_A^1(M) ≅ M^∗, where
M ∗ = HomA (M, A).
Remark 4.4. Suppose that A is Gorenstein rational singularity, that is, A is a rational
double point. Then any maximal Cohen–Macaulay A-module is a finite direct sum of free
modules and special Cohen–Macaulay A-modules.
As in the case of Ulrich modules, we define a special CM module with respect to an
Ulrich ideal I.
Definition 4.5 (Special CM module w.r.t. I). Suppose that A is a two-dimensional
rational singularity. Let M be a finitely generated A-module. Then M is called a special
Cohen–Macaulay A-module with respect to I if the following conditions are satisfied:
(1) M is a special Cohen–Macaulay A-module, that is, Syz_A^1(M) ≅ M^∗.
(2) M/IM is A/I-free.
Any special Cohen–Macaulay A-module is a weakly special Cohen–Macaulay A-module
in the sense of 2.1 but we believe that the converse is not true in general.
Lemma 4.6. Suppose that A is a two-dimensional rational singularity. Let M be a
maximal Cohen–Macaulay A-module. Then
(1) If M is a special Cohen–Macaulay A-module with respect to I, then it is a weakly
special Cohen–Macaulay A-module with respect to I.
(2) When rankA M = 1, the converse of (1) holds true.
Proof. (1) Suppose that M is a special Cohen–Macaulay A-module. Then we have the
following exact sequence:
0 → M ∗ → An → M → 0,
where n = µA (M). This yields µA (M) = rankA M + rankA M ∗ = 2 · rankA M.
(2) Take an ideal J ⊂ A that is isomorphic to M. Then htJ = 1 and A/J is Cohen–
Macaulay. It suffices to show that Syz1A (J) ∼
= J ∗.
As µA (J) = 2 · rankA J = 2, we can write J = (x, y). Then
α
1
2
∼
SyzA (J) =
∈ A αx + βy = 0 ∼
= J ∗.
= (x) : y ∼
β
Hence M ∼
= J is a special Cohen–Macaulay A-module with respect to I.
Remark 4.7. Let S = k[s, t] be a graded polynomial ring with two variables over an
algebrically closed field of characteristic 0 with deg(s) = deg(t) = 1. Let D be an
invariant subring of S by
√
−1
0
0
−1
√
,
.
G=
1 0
0
− −1
Then D is a two-dimensional rational singularity of type (D4 ), and it is isomorphic to the
graded subring k[x, y, z], where x = s4 + t4 , y = s2 t2 , and z = st(s4 − t4 ).
Let A be the third Veronese subring of D, that is, A = k[z, x3 , xy 2 , y 3] is a rational
triple point whose dual graph is given by the following:
1
1
❣
❣
1
❦
−3
1
❣
In particular, all indecomposable special Cohen–Macaulay A-modules have rank 1.
Now let L be an indecomposable maximal Cohen–Macaulay D-module generated by s,
2
s t, t3 and s(s4 − t4 ). Put M = ⊕k∈Z L3k+1 . Then M is a graded A-module of rank 2. One
d
can see that M is generated by s, s2 t5 , s4 t3 and t7 . In particular, M
m is a weakly special
10
cm -module. We believe that M
d
c
Cohen–Macaulay A
m is indecomposable as Am -module. If
d
it is true, then M
m is not special, as required.
Michael Wemyss taught us that one could obtain from [Wu, Example 1] a weakly special
Cohen–Macaulay R-module of rank 2 which is not a special Cohen–Macaulay R-module.
Next, we introduce the notion of special ideals.
Definition 4.8 (Special ideal). An m-primary ideal I ⊂ A is called a special ideal if it
is a good ideal (cf. Definition 2.7) and there exists a special Cohen–Macaulay A-module
M (equivalently, Mj for some j) with respect to I. When this is the case, such a cycle Z
is called a special cycle.
In the rest of this section, we give a characterization of special ideals in terms of cycles.
Before doing that, we need the following lemma, which also plays an important role in
Section 6.
Let Z = Σ_{i=1}^r ai Ei and W = Σ_{i=1}^r bi Ei be anti-nef cycles on X. Put inf(Z, W) = Σ_{i=1}^r inf(ai, bi) Ei; then one can easily see that inf(Z, W) is also an anti-nef cycle on X.
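As a quick sanity check of this observation (our own illustrative example, on a chain of two (−2)-curves of type (A_2)), taking the infimum of two anti-nef cycles indeed stays anti-nef:

```latex
% (A_2) dual graph: E_1 -- E_2 with E_1^2 = E_2^2 = -2 and E_1 E_2 = 1.
\begin{align*}
Z &= 2E_1 + E_2: & Z\cdot E_1 &= -3 \le 0, & Z\cdot E_2 &= 0,\\
W &= E_1 + 2E_2: & W\cdot E_1 &= 0, & W\cdot E_2 &= -3 \le 0,\\
\inf(Z,W) &= E_1 + E_2: & (E_1+E_2)\cdot E_1 &= -1 \le 0, & (E_1+E_2)\cdot E_2 &= -1 \le 0.
\end{align*}
```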
Lemma 4.9. Assume that Z ≠ Z0 is an anti-nef cycle on X. Then we can find anti-nef cycles Z1, . . . , Zs and positive cycles Y1, . . . , Ys with 0 < Ys ≤ Ys−1 ≤ · · · ≤ Y1 ≤ Z0 such that
(4.9.1)   Z = Zs = Zs−1 + Ys,   Zs−1 = Zs−2 + Ys−1,   . . . ,   Z2 = Z1 + Y2,   Z1 = Z0 + Y1,
where Z0 denotes the fundamental cycle on X.
Proof. We can take an integer s ≥ 1 such that Z 6≤ sZ0 and Z ≤ (s + 1)Z0 . Put
Zk = inf(Z, (k + 1)Z0 ) for every k = 1, . . . , s. Then Z1 , . . . , Zs are anti-nef cycles. In
particular, Z0 ≤ Z1 ≤ Z2 ≤ · · · ≤ Zs = Z. Moreover, if we put Yk = Zk − Zk−1 for every
k = 1, . . . , s, then we can obtain the required sequence.
Under the notation as in Lemma 4.9, we put Ik = IZk = H 0(X, OX (−Zk )) for every
k = 0, 1, . . . , s. Then each Ik is a good ideal and
I = Is ⊂ Is−1 ⊂ · · · ⊂ I1 ⊂ I0 = m.
The following theorem is the main theorem in this section, which gives a criterion for
I = IZ to be a special ideal in terms of cycles.
Theorem 4.10. Let Z = Σ_{j=1}^r aj Ej ≠ Z0 be an anti-nef cycle on the minimal resolution X → Spec A, and put I = IZ. Let Z0 = Σ_{j=1}^r nj Ej denote the fundamental cycle on X. Suppose that 1 ≤ i ≤ r. Then the following conditions are equivalent:
(1) Mi is a special Cohen–Macaulay A-module with respect to I.
(2) ai = ni · ℓA(A/I).
(3) There exist positive cycles 0 < Ys ≤ . . . ≤ Y1 ≤ Z0 and anti-nef cycles Z1 , . . . , Zs
so that Zk = Zk−1 + Yk for each k = 1, . . . , s and
Zk−1 · Yk = 0,
pa (Yk ) = 0 and coeff Ei Yk = ni
for every k = 1, 2, . . . , s,
where coeff Ei W stands for the coefficient of Ei in a cycle W .
When this is the case, ℓA (A/I) = s + 1 and every Ik := IZk is a special ideal. Moreover,
for every k = 1, 2, . . . , s, we obtain that Supp(Yk ) is connected, Supp(Yk ) ⊂ ∪{Ej ⊂
Supp(Yk−1) | Ej Zk−1 = 0}, and Yk is the fundamental cycle on Supp(Yk ).
Remark 4.11. Zk := Yk + Zk−1 is not always anti-nef even if Yk is the fundamental cycle
on X which satisfies pa (Yk ) = 0 and Yk Zk−1 = 0.
Let us begin the proof of Theorem 4.10. The following formula is one of the main tools
in this paper.
Lemma 4.12 (Kato’s Riemann–Roch formula; [Ka], [WY]). Let Z be an anti-nef cycle on the minimal resolution of singularities X, and put IZ = H^0(X, OX(−Z)). Then for any maximal Cohen–Macaulay A-module M, we have
ℓA(M/IZ M) = rankA M · ℓA(A/IZ) + c1(M̃)Z.
In particular,
ℓA(A/IZ) = −(Z^2 + KX Z)/2 = 1 − pa(Z).
The next lemma easily follows from Lemma 4.12.
Lemma 4.13. Under the notation as in Theorem 4.10, we have
ℓA (A/Ik ) = ℓA (A/Ik−1 ) − Yk Zk−1 + 1 − pa (Yk ).
Proof. By Lemma 4.12, we have
ℓA(A/Ik) = 1 − pa(Zk) = 1 − pa(Zk−1 + Yk)
 = 1 − pa(Zk−1) − pa(Yk) − Yk Zk−1 + 1
 = ℓA(A/Ik−1) − Yk Zk−1 + 1 − pa(Yk),
as required.
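To make the role of this formula explicit (a consequence that is implicit in the proofs below): whenever Yk Zk−1 = pa(Yk) = 0, the colength grows by exactly one at each step, which is where the equality ℓA(A/I) = s + 1 in Theorem 4.10 comes from.

```latex
Y_k Z_{k-1} = p_a(Y_k) = 0
\;\Longrightarrow\;
\ell_A(A/I_k) = \ell_A(A/I_{k-1}) + 1
\;\Longrightarrow\;
\ell_A(A/I_s) = \ell_A(A/\mathfrak{m}) + s = s + 1 .
```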
The following lemma is a key lemma in the proof of the theorem.
Lemma 4.14. Under the notation as in Theorem 4.10, we have
(1) ai ≤ ni · ℓA (A/I).
(2) Equality holds in (1) if and only if Mi is a special Cohen–Macaulay A-module with
respect to I.
Proof. By Kato’s Riemann–Roch formula, we have
ℓA(Mi/IMi) = rankA Mi · ℓA(A/I) + c1(M̃i) · Z = ni · ℓA(A/I) + ai.
On the other hand, µA(Mi) = 2ni because Mi is a special Cohen–Macaulay A-module (with respect to m). Hence
ℓA(Mi/IMi) ≤ µA(Mi) · ℓA(A/I) = 2ni · ℓA(A/I).
Therefore ai ≤ ni · ℓA (A/I) and equality holds true if and only if Mi /IMi is A/I-free,
which means that Mi is a special Cohen–Macaulay A-module with respect to I.
Proof of Theorem 4.10. (1) ⇐⇒ (2) follows from Lemma 4.14.
(2) ⇐⇒ (3): We use induction on s. By Lemma 4.13, we have
ni · ℓA (A/I) − coeff Ei Z
= ni {ℓA (A/Is−1 ) − Ys Zs−1 + 1 − pa (Ys )} − coeff Ei Zs−1 − coeff Ei Ys
= {ni · ℓA (A/Is−1 ) − coeff Ei Zs−1 } + ni (−Ys Zs−1) + ni (−pa (Ys )) + {ni − coeff Ei Ys } .
By the induction hypothesis, ni · ℓA (A/Is−1 ) − coeff Ei Zs−1 ≥ 0. Since Zs−1 is anti-nef,
−Ys Zs−1 ≥ 0. As A is rational, −pa (Ys ) ≥ 0. By the choice of Ys , ni − coeff Ei Ys ≥ 0.
Hence we obtain the required inequality, and equality holds if and only if
ni · ℓA(A/Is−1) = coeff_{Ei} Zs−1,   Ys Zs−1 = pa(Ys) = 0   and   ni = coeff_{Ei} Ys.
Therefore the assertion follows from the induction hypothesis.
Now suppose that one of (1),(2),(3) holds. The induction hypothesis implies that
Is−1 is a special ideal with ℓ(A/Is−1 ) = s and Supp(Yk ) is connected and Supp(Yk ) ⊂
∪{Ej | Ej Zk−1 = 0} and Yk is the fundamental cycle on Supp(Yk ) for every k = 1, . . . , s−1.
Then it follows from Lemma 4.13 that ℓ(A/Is) = ℓ(A/Is−1) + 1 = s + 1. The equality Ys Zs−1 = 0 implies that Supp(Ys) ⊂ ∪{Ej | Ej Zs−1 = 0}. Moreover, since pa(Ys) = 0, Ys must be connected.
Let us show that Ys is the fundamental cycle on Supp(Ys ). For each Ej ⊂ Supp(Ys ),
Ys Ej = Ys Ej + Zs−1 Ej = Zs Ej ≤ 0. Hence Ys is anti-nef on Supp(Ys ). If Ys is not the
fundamental cycle on Supp(Ys ), then there exist an Ej ⊂ Supp(Ys ) and an anti-nef cycle
Ys′ on Supp(Ys ) so that Ys = Ys′ + Ej . Then
0 = pa (Ys ) = pa (Ys′ ) + pa (Ej ) + Ys′ Ej − 1 ≤ pa (Ys′ ) − 1 ≤ −1.
This is a contradiction.
5. Ulrich ideals and modules over rational double points
The goal of this section is to classify Ulrich ideals of any two-dimensional Gorenstein
rational singularity (rational double point) A and determine all of the Ulrich A-modules
with respect to those ideals.
First we recall the definition of rational double points.
Definition 5.1 (Rational double point). Let A be a two-dimensional complete Noetherian local ring with unique maximal ideal m containing an algebraically closed field
k. Then A is said to be a rational double point if it is isomorphic to the hypersurface
k[[x, y, z]]/(f ), where f is one of the following polynomials:
(An)  z^2 + x^2 + y^{n+1}  (n ≥ 1),
(Dn)  z^2 + x^2 y + y^{n−1}  (n ≥ 4),
(E6)  z^2 + x^3 + y^4,
(E7)  z^2 + x^3 + xy^3,
(E8)  z^2 + x^3 + y^5.
Note that A is a 2-dimensional Gorenstein rational singularity (of characteristic 0) if and only if the m-adic completion Â is a rational double point in the above sense.
The following theorem is the first main result in this section. In the latter half of this
section, we give the complete classification of Ulrich ideals and modules as an application
of the theorem.
Theorem 5.2 (See also Theorem 3.2). Assume that A is a rational double point of dimension 2, let I ⊂ A be a nonparameter m-primary ideal, and let M be a maximal Cohen–Macaulay A-module. Then the following conditions
are equivalent:
(1) M is an Ulrich A-module with respect to I.
(2) M is a special Cohen–Macaulay A-module with respect to I.
(3) M is a weakly special Cohen–Macaulay A-module with respect to I.
(4) M/IM is A/I-free and M has no free summands.
When this is the case, I is an Ulrich ideal and M* ≅ Syz^1_A(M) is also an Ulrich A-module with respect to I.
In what follows, we prove Theorem 5.2. We need several lemmata.
Lemma 5.3. Assume that A is a rational double point of dimension 2, and let I ⊂ A be
an m-primary ideal. Then e0I (A) ≤ 2 · ℓA (A/I) holds true and equality holds if and only if
I is a good ideal.
Proof. The lemma is well known, but we give a proof here for the convenience of the reader. Let Ī denote the integral closure of I. Take a minimal reduction Q of I. Then, since Q is also a minimal reduction of Ī and Ī^2 = QĪ, we have
I ⊂ Ī ⊂ Q : Ī ⊂ Q : I.
The Matlis duality theorem implies that
e^0_I(A) = ℓA(A/Q) = ℓA(A/I) + ℓA(I/Q) ≤ ℓA(A/I) + ℓA((Q : I)/Q) = 2 · ℓA(A/I),
and equality holds if and only if I = Q : I, that is, I is a good ideal.
Almost all of the maximal Cohen–Macaulay A-modules over a hypersurface of multiplicity 2 can be regarded as Ulrich modules in the classical sense.
Lemma 5.4 (cf. [HKuh, Corollary 1.4]). Let A be a hypersurface local domain with e^0_m(A) = 2. Then every maximal Cohen–Macaulay A-module M without free summands satisfies µA(M) = e^0_m(M) = 2 · rankA M, that is, M is an Ulrich A-module with respect to m.
Any two-dimensional rational double point A can be regarded as an invariant subring B^G, where B = k[[s, t]] is a formal power series ring over k and G is a finite subgroup of SL(2, k). Thus we can apply the so-called McKay correspondence, which is a special case of the ‘special McKay correspondence’ (see Section 4).
Lemma 5.5 (McKay Correspondence). Let A = B^G be as above. Then:
(1) The ring A is of finite CM-representation type. Let {Mi}_{i=0}^r be the set of isomorphism classes of indecomposable maximal Cohen–Macaulay A-modules, where M0 = A. Then B ≅ ⊕_{i=0}^r Mi^{⊕ni}, where ni = rankA Mi.
(2) The fundamental cycle is given by Z0 = Σ_{j=1}^r nj Ej, so that if we choose indices suitably, then c1(M̃i)Ej = δij for 1 ≤ i, j ≤ r, where c1(∗) denotes the Chern class and M̃i = φ*(Mi)/torsion. In particular, Mi is a special Cohen–Macaulay A-module (with respect to m) for every i = 1, 2, . . . , r.
We are now ready to prove Theorem 5.2.
Proof of Theorem 5.2. (1) =⇒ (2) : Since M is an Ulrich A-module with respect to I,
it has no free summands because no free module is an Ulrich A-module with respect
to I. Thus M is an Ulrich A-module with respect to m by Lemma 5.4 and it is also
a special Cohen–Macaulay A-module with respect to m by Lemma 5.5. Hence M is a
special Cohen–Macaulay A-module with respect to I because M/IM is A/I-free.
(2) =⇒ (3) : See Lemma 4.6.
(3) =⇒ (4) : Trivial.
(4) =⇒ (1) : By Lemma 5.4, M is a weakly special Cohen–Macaulay A-module with
respect to I. Note that e0I (A) ≤ 2 · ℓA (A/I) by Lemma 5.3. By a similar argument as in
the proof of Theorem 3.2, we have
ℓA (M/IM) = e0I (M) and e0I (A) = 2 · ℓA (A/I),
whence M is an Ulrich A-module with respect to I and I is a good ideal.
Since A is a hypersurface local domain, Proposition 3.4 implies that I is an Ulrich ideal.
In particular, A/I is Gorenstein. Thus applying Theorem 2.6 yields that M* ≅ Syz^1_A(M) is also an Ulrich A-module with respect to I.
The corollary below follows from Proposition 3.4 and Theorem 5.2.
Corollary 5.6. Assume that A is a rational double point of dimension 2. Let I be an
m-primary ideal. Then the following conditions are equivalent.
(1) I is an Ulrich ideal.
(2) I is a special ideal.
(3) I is a weakly special ideal.
(4) There exists an Ulrich A-module with respect to I.
In the rest of this section, we classify all Ulrich ideals and Ulrich modules over rational
double points of dimension 2 using the results in the previous section.
Let {Mi}_{i=0}^r be the set of indecomposable maximal Cohen–Macaulay A-modules such that M0 = A and c1(M̃i)Ej = δij for all 1 ≤ i, j ≤ r.
Now suppose that M is an Ulrich A-module with respect to I. Then M is a finite direct sum of M1, . . . , Mr,
M ≅ M1^{⊕k1} ⊕ · · · ⊕ Mr^{⊕kr},
because M has no free summands. Whenever ki > 0, Mi must be an Ulrich A-module
with respect to I. Hence it suffices to characterize Mi that is an Ulrich A-module with
respect to I. On the other hand, Theorem 5.2 implies that I is an Ulrich ideal, whence I is a special ideal. Thus those ideals I (or cycles Z) are determined by Theorem 4.10. Moreover, it is not difficult to determine all Mi that are Ulrich modules with respect to IZ by Theorem 5.2.
Let I be a good ideal of A and let Z be an anti-nef cycle on the minimal resolution X
such that IOX = OX (−Z) and I = H0 (X, OX (−Z)), that is, I = IZ . Then we call Z an
Ulrich cycle if I is an Ulrich ideal. Note that Z is an Ulrich cycle if and only if it is a
special cycle.
Now let us illustrate the main theorem by the following example. Let Z = 2E1 + 3E2 +
4E3 + 3E4 + 2E5 + 2E6 be an Ulrich cycle of a rational double point A = k[[x, y, z]]/(x3 +
y^4 + z^2), and put I = H^0(X, OX(−Z)). Then, since Z is an anti-nef cycle on the minimal resolution X → Spec A with support in E = ∪_{i=1}^6 Ei, Z can be described as follows:
Z is supported on the dual graph of type (E6): the chain E1–E2–E3–E4–E5 with E6 attached to E3, with coefficients 2, 3, 4, 3, 2 on E1, . . . , E5 and 2 on E6.
Furthermore, by Theorem 4.10(2) Mi is an Ulrich A-module with respect to I if and only
if i = 1 or 5 because Z0 = E1 + 2E2 + 3E3 + 2E4 + E5 + 2E6 and ℓA (A/I) = 2. In other
words, any Ulrich A-module with respect to I is given by M ≅ M1^{⊕a} ⊕ M5^{⊕b} for some integers a, b ≥ 0. We can describe this by the following picture.
[The same diagram as above, with the vertices E1 and E5 (those corresponding to the indecomposable Ulrich modules) drawn as filled circles.]
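As a concrete check of criterion (2) of Theorem 4.10 for this example (our own computation): with ℓA(A/I) = 2, compare the coefficients ai of Z with ni · ℓA(A/I), where the ni are the coefficients of Z0.

```latex
\begin{array}{c|cccccc}
i & 1 & 2 & 3 & 4 & 5 & 6\\ \hline
a_i \ (\text{coefficient of } Z) & 2 & 3 & 4 & 3 & 2 & 2\\
n_i\cdot\ell_A(A/I) & 2 & 4 & 6 & 4 & 2 & 4
\end{array}
\qquad
a_i = n_i\cdot\ell_A(A/I) \iff i \in \{1,5\}.
```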
We are now ready to state the main theorem in this section.
Theorem 5.7. Let A be a two-dimensional rational double point. Let φ : X → Spec A be the minimal resolution of singularities with E = φ^{−1}(m) = ∪_{i=1}^r Ei the exceptional divisor on X. Then all Ulrich cycles Zk of A and all indecomposable Ulrich A-modules with respect to Ik = H^0(X, OX(−Zk)) are given by the following:
• (An) x^2 + y^{n+1} + z^2
When n = 2m, the complete list of all Ulrich cycles is
Zk = E1 + 2E2 + · · · + kEk + (k+1)(E_{k+1} + · · · + E_{n−k}) + kE_{n−k+1} + · · · + 2E_{n−1} + En
for k = 0, 1, . . . , m − 1 (= n/2 − 1); the indecomposable Ulrich modules with respect to Ik are the Mi with k + 1 ≤ i ≤ n − k (the n − 2k vertices carrying the coefficient k + 1). Then ℓA(A/Ik) = k + 1 for each k = 0, 1, . . . , m − 1.
When n = 2m + 1, the complete list of all Ulrich cycles is given by the same formula for k = 0, 1, . . . , m (= (n − 1)/2), and the indecomposable Ulrich modules with respect to Ik are again the Mi with k + 1 ≤ i ≤ n − k. Then ℓA(A/Ik) = k + 1 for each k = 0, 1, . . . , m.
• (Dn) x^2 y + y^{n−1} + z^2 (n ≥ 4)
When n = 2m, the Ulrich cycles are Z0, Z1, . . . , Z_{m−2} (with ℓA(A/Ik) = k + 1 for k = 0, 1, . . . , m − 2, where m − 2 = (n − 4)/2), two further cycles Z_{m−1} and Z′_{m−1} with ℓA(A/I_{m−1}) = ℓA(A/I′_{m−1}) = m, and one extra cycle Z′2 with ℓA(A/I′2) = 2, where m = n/2. [The coefficient diagrams of these cycles on the (Dn) dual graph, together with the marked vertices giving the indecomposable Ulrich modules, are constructed explicitly in the proof below.]
When n = 2m + 1, the Ulrich cycles are Z0, Z1, . . . , Z_{m−1} (for k = 0, 1, . . . , m − 2 one has m − 2 = (n − 5)/2) together with one extra cycle Z′2; here ℓA(A/Ik) = k + 1 for each k = 0, 1, . . . , m − 1 and ℓA(A/I′2) = 2. [Again, the coefficient diagrams are given in the proof below.]
• (E6) x^3 + y^4 + z^2
The Ulrich cycles of A are Z0 = E1 + 2E2 + 3E3 + 2E4 + E5 + 2E6 and Z1 = 2E1 + 3E2 + 4E3 + 3E4 + 2E5 + 2E6, with ℓA(A/Ik) = k + 1 for each k = 0, 1. All of M1, . . . , M6 are Ulrich with respect to I0 = m, while only M1 and M5 are Ulrich with respect to I1.
• (E7) x^3 + xy^3 + z^2
The Ulrich cycles of A are Z0 = 2E1 + 3E2 + 4E3 + 3E4 + 2E5 + E6 + 2E7, Z1 = 2E1 + 4E2 + 6E3 + 5E4 + 4E5 + 2E6 + 3E7 and Z2 = 2E1 + 4E2 + 6E3 + 5E4 + 4E5 + 3E6 + 3E7, with ℓA(A/Ik) = k + 1 for each k = 0, 1, 2 (here E7 denotes the curve attached to E3). All of M1, . . . , M7 are Ulrich with respect to m, while M6 is the only indecomposable Ulrich module with respect to I1 and with respect to I2.
• (E8) x^3 + y^5 + z^2
The Ulrich cycles of A are Z0 = 2E1 + 4E2 + 6E3 + 5E4 + 4E5 + 3E6 + 2E7 + 3E8 and Z1 = 4E1 + 7E2 + 10E3 + 8E4 + 6E5 + 4E6 + 2E7 + 5E8, with ℓA(A/Ik) = k + 1 for each k = 0, 1 (here E8 denotes the curve attached to E3). All of M1, . . . , M8 are Ulrich with respect to m, while M1 is the only indecomposable Ulrich module with respect to I1.
In our previous paper [GOTWY, Section 9], we gave a complete list of the nonparameter Ulrich ideals for one-dimensional simple singularities. We can also do it for
two-dimensional simple singularities (rational double points).
Corollary 5.8. With the same notation as in Theorem 5.7, the set XA is equal to:
(A_{2m})  {(x, y, z), (x, y^2, z), . . . , (x, y^m, z)}.
(A_{2m+1})  {(x, y, z), (x, y^2, z), . . . , (x, y^{m+1}, z)}.
(D_{2m})  {(x, y, z), (x, y^2, z), . . . , (x, y^{m−1}, z), (x + √−1 y^{m−1}, y^m, z), (x − √−1 y^{m−1}, y^m, z), (x^2, y, z)}.
(D_{2m+1})  {(x, y, z), (x, y^2, z), . . . , (x, y^m, z), (x^2, y, z)}.
(E6)  {(x, y, z), (x, y^2, z)}.
(E7)  {(x, y, z), (x, y^2, z), (x, y^3, z)}.
(E8)  {(x, y, z), (x, y^2, z)}.
Proof. One can easily see that any ideal I appearing in the corollary has the form I =
Q + (z), where Q is a parameter ideal of A and I 2 = QI, ℓA (A/Q) = 2 · ℓA (A/I) and
µ(I) = 3. Hence those ideals I are Ulrich.
On the other hand, Theorem 5.7 implies that ♯XA = m (resp. m + 1, m + 2, m + 1, 2, 3, 2) if A is a rational double point of type (A_{2m}) (resp. (A_{2m+1}), (D_{2m}), (D_{2m+1}), (E6), (E7), (E8)). Hence the set as above coincides with XA, respectively.
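For instance (our own verification, for the (An) case with Ik = (x, y^k, z) and the parameter ideal Qk = (x, y^k), k ≤ m), the conditions used in the proof can be checked directly:

```latex
\begin{gather*}
A/Q_k \;\cong\; k[[y,z]]/(z^2 + y^{\,n+1},\, y^k) \;=\; k[[y,z]]/(z^2,\, y^k),
\qquad \ell_A(A/Q_k) = 2k = 2\,\ell_A(A/I_k),\\[2pt]
z^2 \;=\; -x^2 - y^{\,n+1} \;=\; -x\cdot x \;-\; y^k\cdot y^{\,n+1-k} \;\in\; Q_k I_k
\quad(\text{since } n+1-k \ge k), \quad\text{hence } I_k^2 = Q_k I_k,\\[2pt]
\mu_A(I_k) = 3 .
\end{gather*}
```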
Proof of Theorem 5.7. We first consider the cases (E6 ), (E7 ), (E8 ).
The case (E6 ) : f = x3 + y 4 + z 2 . The fundamental cycle Z0 on the minimal resolution
is given by
Z0 = E1 + 2E2 + 3E3 + 2E4 + E5 + 2E6 (the chain E1–E2–E3–E4–E5 with E6 attached to E3).
Now suppose that Z0 + Y is a special cycle for some positive
cycle Y ≤ Z0. Since ∪{E | EZ0 = 0} = ∪_{i=1}^5 Ei is connected, we have Y = Y1 := Σ_{i=1}^5 Ei in Theorem 4.10(3).
Conversely, if we put
Z1 = Z0 + Y1 = 2E1 + 3E2 + 4E3 + 3E4 + 2E5 + 2E6,
then Z1 is anti-nef and p(Y1 ) = 0 because Y1 can be regarded as the fundamental cycle
on the dual graph of (the minimal resolution) of type (A5 ). Hence Z1 is a special cycle
and M is an Ulrich A-module with respect to IZ1 if and only if it is a finite direct sum of
M1 and M5 because coeff Ei Y1 = ni (= 1) ⇐⇒ i = 1, 5; see Theorem 4.10.
Suppose that Z2 = Z1 + Y is a special cycle for some positive cycle Y ≤ Y1 . As
∪{E ⊂ Supp(Y1 ) | EZ1 = 0} = E2 ∪ E3 ∪ E4 is connected, we have that Y = E2 + E3 + E4
as the fundamental cycle on Supp(Y ) = E2 ∪ E3 ∪ E4 . But Z2 = Z1 + Y = 2E1 + 4E2 +
5E3 + 4E4 + 2E5 + 2E6 is not anti-nef becauseZ2 E6 = 1. So the special cycles of (E6 ) are
Z0 and Z1 .
The case (E7 ) : f = x3 + xy 3 + z 2 . The fundamental cycle Z0 on the minimal resolution is given by
Z0 = 2E1 + 3E2 + 4E3 + 3E4 + 2E5 + E6 + 2E7 (the chain E1–· · ·–E6 with E7 attached to E3).
Since ∪{E | EZ0 = 0} = ∪_{i=2}^7 Ei is isomorphic to the dual graph of (D6), if Z1 = Z0 + Y is a special cycle for some positive cycle Y ≤ Z0, then we have
Y = E2 + 2E3 + 2E4 + 2E5 + E6 + E7 =: Y1.
Conversely, one can easily see that the following Z1 is a special cycle by Theorem 4.10.
Z1 = Z0 + Y1 = 2E1 + 4E2 + 6E3 + 5E4 + 4E5 + 2E6 + 3E7.
Note that ∪{E ⊂ Supp(Y1 ) | EZ1 = 0} admits two connected components:
one consisting of E2, E3, E4 and E7, the other consisting of E6 alone.
The fundamental cycles Y2 of these components are E2 + 2E3 + E4 + E7 and E6, respectively.
Note that Z1 + (E2 + 2E3 + E4 + E7) is not anti-nef. So we are done.
The case (E8 ) : x3 + y 5 + z 2 . The fundamental cycle Z0 on the minimal resolution is
given by
Z0 = 2E1 + 4E2 + 6E3 + 5E4 + 4E5 + 3E6 + 2E7 + 3E8 (the chain E1–· · ·–E7 with E8 attached to E3).
Suppose that Z0 + Y is a special cycle for some positive cycle Y ≤ Z0. As ∪{E | EZ0 = 0} = ∪_{i≠7} Ei is connected and the corresponding graph is isomorphic to the dual graph of (E7), we have
Y = 2E1 + 3E2 + 4E3 + 3E4 + 2E5 + E6 + 2E8 =: Y1.
Conversely, if we put
Z1 = Z0 + Y1 = 4E1 + 7E2 + 10E3 + 8E4 + 6E5 + 4E6 + 2E7 + 5E8,
then Z1 is a special cycle by Theorem 4.10.
Now suppose that Z1 + Y is a special cycle for some positive cycle Y ≤ Y1 . Since
∪{E ⊂ Supp(Y1 ) | EZ1 = 0} = E3 ∪ E4 ∪ E5 ∪ E6 ∪ E8 is connected, we have Y =
E3 + E4 + E5 + E6 + E8 . But Z1 + (E3 + E4 + E5 + E6 + E8 ) is not anti-nef.
We next consider the case (An ).
The case (A2m ): f = x2 + y 2m+1 + z 2 . The fundamental cycle Z0 on the minimal resolution is given by
Z0 = E1 + E2 + · · · + En, that is, Z0 = Σ_{i=1}^n Ei.
Now suppose that Z0 + Y is a special cycle for some positive cycle Y ≤ Z0. Since ∪{E | EZ0 = 0} = ∪_{i=2}^{n−1} Ei is connected, Y = Y1, where Y1 is the fundamental cycle on ∪_{i=2}^{n−1} Ei, that is, Y1 = Σ_{i=2}^{n−1} Ei. Conversely,
Z1 = Z0 + Y1 = E1 + 2E2 + 2E3 + · · · + 2E_{n−1} + En
is a special cycle by Theorem 4.10. Similarly, if we put Yk = Σ_{i=k+1}^{2m−k} Ei for every k = 1, 2, . . . , m − 1, then we have
(a) 0 < Y_{m−1} < Y_{m−2} < · · · < Y2 < Y1 ≤ Z0,
(b) pa(Yk) = 0 and Yk Zk−1 = 0 for every k = 1, . . . , m − 1,
(c) Zk = Zk−1 + Yk is anti-nef for every k = 1, . . . , m − 1.
(d) coeff Ei Yk = ni if and only if k + 1 ≤ i ≤ 2m − k.
This produces a sequence of Ulrich ideals:
I_{Z_{m−1}} ⊂ I_{Z_{m−2}} ⊂ · · · ⊂ I_{Z_1} ⊂ I_{Z_0} = m.
We can determine Ulrich ideals in the case of (A2m+1 ) similarly.
Finally, we consider the case (Dn ) : f = x2 + xy n−3 + z 2 .
The case (D2m ) : f = x2 + xy 2m−3 + z 2 . The fundamental cycle Z0 on the minimal
resolution of singularities is given by
Z0 = E1 + 2(E2 + · · · + E_{2m−2}) + E_{2m−1} + E_{2m} (the chain E1–· · ·–E_{2m−2} with E_{2m−1} and E_{2m} attached to E_{2m−2}).
Now suppose that Z0 + Y is a special cycle on X for some positive cycle Y ≤ Z0 . Since
∪{E | EZ0 = 0} has two connected components, we have that Y = E1 or Y = Y1 :
Y1 = E3 + 2(E4 + · · · + E_{2m−2}) + E_{2m−1} + E_{2m}.
Conversely,
Z1′ = Z0 + E1 = 2E1 + 2(E2 + · · · + E_{2m−2}) + E_{2m−1} + E_{2m}
and
Z1 = Z0 + Y1 = E1 + 2E2 + 3E3 + 4(E4 + · · · + E_{2m−2}) + 2E_{2m−1} + 2E_{2m}
are special cycles by Theorem 4.10.
Suppose that Z1 + Y is a special cycle for some positive cycle Y ≤ Y1 . Since ∪{E ⊂
Supp(Y1 ) | EZ1 = 0} has two connected components, we have Y = E3 or Y = Y2 , where
Y2 = E5 + 2(E6 + · · · + E_{2m−2}) + E_{2m−1} + E_{2m}.
Then Z1 + E3 is not anti-nef, but
Z2 = E1 + 2E2 + 3E3 + 4E4 + 5E5 + 6(E6 + · · · + E_{2m−2}) + 3E_{2m−1} + 3E_{2m}
is a special cycle. Similarly, if we put
Yk = E_{2k+1} + 2 Σ_{i=2k+2}^{2m−2} Ei + E_{2m−1} + E_{2m},   Zk = Zk−1 + Yk
for each k = 1, 2, . . . , m − 2, then we have a sequence of positive cycles
0 < Ym−2 < Ym−3 < · · · < Y1 ≤ Z0 .
By Theorem 4.10, Z0 , Z1 , . . . , Zm−2 are special cycles. Note that
∪{E ⊂ Supp(Ym−2 ) | EZm−2 = 0} = E2m−3 ∪ E2m−1 ∪ E2m
and
Z_{m−2} = E1 + 2E2 + 3E3 + · · · + (2m−3)E_{2m−3} + (2m−2)E_{2m−2} + (m−1)E_{2m−1} + (m−1)E_{2m}.
By a similar argument as above, we obtain two minimal special cycles:
Z_{m−1} = E1 + 2E2 + · · · + (2m−2)E_{2m−2} + mE_{2m−1} + (m−1)E_{2m}
and
Z′_{m−1} = E1 + 2E2 + · · · + (2m−2)E_{2m−2} + (m−1)E_{2m−1} + mE_{2m}.
The case of (D2m+1 ). The fundamental cycle Z0 on the minimal resolution is given
by
Z0 = E1 + 2(E2 + · · · + E_{2m−1}) + E_{2m} + E_{2m+1}.
If we put
Yk = E_{2k+1} + 2 Σ_{i=2k+2}^{2m−1} Ei + E_{2m} + E_{2m+1},
Zk = Zk−1 + Yk = Σ_{i=1}^{2k+1} iEi + (2k+2) Σ_{i=2k+2}^{2m−1} Ei + (k+1)(E_{2m} + E_{2m+1})
for each k = 1, . . . , m − 2, then 0 < Ym−2 < · · · < Y2 < Y1 ≤ Z0 are positive cycles and
Z0 , Z1 , . . . , Zm−2 are special cycles.
Now suppose that Z_{m−2} + Y is a special cycle for some positive cycle Y ≤ Y_{m−2}. Since
∪{E ⊂ Supp(Y_{m−2}) | EZ_{m−2} = 0} = E_{2m−1} ∪ E_{2m} ∪ E_{2m+1}
is connected, we have that Y = E_{2m−1} + E_{2m} + E_{2m+1}.
Set Y_{m−1} = E_{2m−1} + E_{2m} + E_{2m+1}. Conversely, Z_{m−1} = Z_{m−2} + Y_{m−1} is a special cycle by Theorem 4.10. Note that Z_{m−1} is the minimal one among those special cycles.
6. Ulrich ideals of non-Gorenstein rational singularities
In this section, we study Ulrich ideals of two-dimensional non-Gorenstein rational singularities. Notice that the maximal ideal m is always an Ulrich ideal of such a local
ring.
We first show that any Ulrich ideal of a two-dimensional rational singularity is a good
ideal. In order to obtain a characterization of Ulrich ideals, we need the following definition.
Throughout this section, let (A, m) be a 2-dimensional non-Gorenstein rational singularity and ϕ : X → Spec A be the minimal resolution of singularities.
Definition 6.1. Let φ̃ : X̃ → Spec A be a resolution of singularities of Spec A. Decompose φ̃ as φ̃ = φ ∘ π, where π : X̃ → X. Let π*Z0 denote the pull-back of the fundamental cycle Z0 on the minimal resolution to X̃. Then for any anti-nef cycle Z on X̃, we put
U(Z) = (π*Z0 · Z)(pa(Z) − 1) + Z^2,
where pa(Z) denotes the virtual genus of Z; see the remark below.
Theorem 6.2. Let (A, m) be a two-dimensional rational singularity. Let I be an m-primary ideal with µA(I) > 2. Then the following conditions are equivalent:
(1) I is an Ulrich ideal.
(2) e0I (A) = (µ(I) − 1) · ℓA (A/I).
(3) I is an integrally closed ideal represented on the minimal resolution of singularities
ϕ : X → Spec A such that IOX = OX (−Z), I = H0 (X, OX (−Z)) and U(Z) = 0.
Proof. (1) ⇐⇒ (2) follows from [GOTWY, Lemma 2.3].
(3) =⇒ (2) : Any integrally closed ideal I in a two-dimensional rational singularity is
stable. Moreover, U(Z) = 0 means that e0I (A) = (µ(I) − 1) · ℓA (A/I). Thus the assertion
immediately follows from this.
(2) =⇒ (3): Since I is an Ulrich ideal by (1), we have that I = Q : I for any minimal reduction Q of I by [GOTWY, Corollary 2.6]. Then, as Ī^2 = QĪ, we get I ⊆ Ī ⊆ Q : Ī ⊆ Q : I = I. Hence I = Ī is integrally closed.
Let φ̃ : X̃ → Spec A be a resolution of singularities so that I = H^0(X̃, O_X̃(−Z)) and IO_X̃ = O_X̃(−Z) is invertible for some anti-nef cycle Z on X̃. Then (2) implies that U(Z) = 0.
Now suppose that I is not represented on the minimal resolution of singularities φ : X → Spec A. Then there exists a contraction ψ : X̃ → X′ of a (−1)-curve E on X̃ such that I is not represented on X′. Consider the commutative diagram in which π : X̃ → X factors as π = π′ ∘ ψ with π′ : X′ → X.
Then we may assume that Z = ψ*Z′ + nE for some anti-nef cycle Z′ on X′ and an integer n ≥ 1. Note that π*Z0 · E = ψ*Z′ · E = 0; see e.g. [GIW, Fact 7.7]. Then
U(Z) − U(Z′) = (π*Z0 · (ψ*Z′ + nE))(pa(ψ*Z′) + pa(nE) + ψ*Z′ · nE − 2) + (ψ*Z′ + nE)^2 − (π*Z0 · ψ*Z′)(pa(ψ*Z′) − 1) − (ψ*Z′)^2
 = (π*Z0 · ψ*Z′)(pa(nE) − 1) + (nE)^2
 = ((π′)*Z0 · Z′ + 2) · (nE)^2/2 + (n(K_X̃ · E)/2) · ((π′)*Z0 · Z′).
Since (π′)*Z0 · Z′ ≤ −2 and E^2 = K_X̃ · E = −1, we get U(Z) > U(Z′) ≥ 0. This is a contradiction.
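As a quick illustration of condition (3) (our own check): for the fundamental cycle itself, the definition of U gives U(Z0) = 0, which is consistent with the remark at the beginning of this section that the maximal ideal m = I_{Z0} is always an Ulrich ideal of a rational singularity.

```latex
U(Z_0) \;=\; (Z_0\cdot Z_0)\bigl(p_a(Z_0)-1\bigr) + Z_0^{\,2}
       \;=\; Z_0^{\,2}\,(0-1) + Z_0^{\,2} \;=\; 0,
```

using pa(Z0) = 0 (A is rational) and the fact that on the minimal resolution the pull-back of Z0 is Z0 itself.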
In what follows, we always assume that φ : X → Spec A is the minimal resolution of singularities, IOX = OX(−Z) is invertible, and I = H^0(X, OX(−Z)) for some anti-nef cycle Z on X. Let φ^{−1}(m) = ∪_i Ei denote the exceptional divisor on X with the irreducible components {Ei}_{1≤i≤r}. Let Z0 (resp. K) denote the fundamental cycle (resp. the canonical divisor) on X. Notice that Z0 E ≤ 0 and KE = −E^2 − 2 for all exceptional curves E.
The next target is to characterize Ulrich cycles in terms of dual graphs. In order to
do that, we recall the sequence of anti-nef cycles introduced in Lemma 4.9. Assume that
Z ≠ Z0 is an anti-nef cycle on X. Then we can find anti-nef cycles Z1, . . . , Zs and positive cycles Y1, . . . , Ys with 0 < Ys ≤ Ys−1 ≤ · · · ≤ Y1 ≤ Z0 such that
(6.2.1)   Z = Zs = Zs−1 + Ys,   Zs−1 = Zs−2 + Ys−1,   . . . ,   Z2 = Z1 + Y2,   Z1 = Z0 + Y1.
The following lemma plays a key role in the proof of the main theorem in this section.
Lemma 6.3. Let Z, Z ′ be anti-nef cycles on X with Z ′ = Z + Y , where Y is a positive
cycle. Then:
(1) U(Z′) − U(Z) = (Y Z0)[(pa(Z) − 1) + (pa(Y) − 1)] + (Y Z)(Z′Z0 + 2) + (pa(Y) − 1)(ZZ0 + 2) − KY.
(2) Assume that 0 6= Y ≤ Z0 and e = e0m (A) ≥ 3. Then U(Z ′ ) ≥ U(Z) holds true, and
equality holds if and only if Y Z = Y Z0 = pa (Y ) = (Z − Z0 )Z0 = K(Z0 − Y ) = 0.
Proof. Since pa (Z + Y ) = pa (Z) + pa (Y ) + Y Z − 1 by definition, we have
U(Z′) − U(Z) = (ZZ0 + Y Z0)(pa(Z) − 1 + pa(Y) − 1 + Y Z) + (Z^2 + 2Y Z + Y^2) − (ZZ0)(pa(Z) − 1) − Z^2
 = (Y Z0)[(pa(Z) − 1) + (pa(Y) − 1)] + (Y Z)(ZZ0 + Y Z0 + 2) + (pa(Y) − 1)(ZZ0) + Y^2
 = (Y Z0)[(pa(Z) − 1) + (pa(Y) − 1)] + (Y Z)(Z′Z0 + 2) + (pa(Y) − 1)(ZZ0 + 2) − KY,
where the last equality follows from 2(pa(Y) − 1) = KY + Y^2.
(2) Assume that Y ≤ Z0 . As X → Spec A is the minimal resolution, we have that
KY ≤ KZ0 because KE ≥ 0 for all curves E on X. Since Z0 is anti-nef and Z − Z0 , Y
are positive, we get
Z ′ Z0 + 2 = (Z − Z0 )Z0 + Y Z0 + (Z02 + 2) ≤ Z02 + 2 = −e + 2 < 0.
Moreover, pa (Z0 ) = 0 implies that
(pa (Y ) − 1)(ZZ0 + 2) − KY = pa (Y )(ZZ0 + 2) − (Z − Z0 )Z0 − K(Y − Z0 ) ≥ 0
and equality holds if and only if pa (Y ) = (Z − Z0 )Z0 = K(Y − Z0 ) = 0.
Note that Y Z0 , Y Z ≤ 0 and pa (Z) − 1 + pa (Y ) − 1 < 0. Hence U(Z ′ ) ≥ U(Z) and
equality holds if and only if Y Z0 = Y Z = 0 and pa (Y ) = (Z −Z0 )Z0 = K(Y −Z0 ) = 0.
The main result in this section is the following theorem, which enables us to determine
all Ulrich ideals of a two-dimensional (non-Gorenstein) rational singularity. For a positive cycle Z on X, we write Z = Σ_E Z_E E, where Z_E is a nonnegative integer.
Theorem 6.4. Let (A, m) be a two-dimensional rational singularity with e = e0m (A) ≥ 3,
P
and let ϕ : X → Spec A be the minimal resolution of singularities. Set Z0 = E nE E,
the fundamental cycle on X. Let Z be an anti-nef cycle on X with IOX = OX (−Z) and
I = H0 (X, OX (−Z)). Then the following conditions are equivalent:
(1) I is an Ulrich ideal, that is, Z is an Ulrich cycle on X.
(2) There exist a sequence of anti-nef cycles Z1, . . . , Zs and a sequence of positive cycles 0 < Ys ≤ · · · ≤ Y1 ≤ Z0 for some s ≥ 1 so that
(6.4.1)   Z = Zs = Zs−1 + Ys,   Zs−1 = Zs−2 + Ys−1,   . . . ,   Z1 = Z0 + Y1,
and Yk Zk−1 = pa (Yk ) = K(Z0 − Yk ) = 0 for every k = 1, . . . , s.
When this is the case, the following conditions are satisfied.
(a) {E | E 2 ≤ −3} is contained in Supp(Y1 ).
(b) Supp(Yk ) is given as one of the connected components of {E | EZk−1 = 0} in
{E | EZ0 = 0}.
(c) Yk is the fundamental cycle on Supp(Yk ).
(d) coeff E Z0 = coeff E Yk for every E with E 2 ≤ −3.
If, in addition, we put Ik = H0 (X, OX (−Zk )), then Ik is an Ulrich ideal so that
m = I0 ⊇ I1 ⊇ · · · ⊇ Is = I
and ℓA (A/I) = s + 1.
Proof. Take a sequence as in (2).
(1) =⇒ (2) : Lemma 6.3 implies that
0 = U(Z) = U(Zs ) ≥ U(Zs−1 ) ≥ · · · ≥ U(Z1 ) ≥ U(Z0 ) = 0.
Hence all Zk are Ulrich cycles and
Yk Zk−1 = Yk Z0 = pa (Yk ) = (Zk − Z0 )Z0 = K(Z0 − Yk ) = 0
for every k = 1, . . . , s. By a similar argument as in the proof of Theorem 4.10, we have
ℓA (A/I) = s + 1.
If E 2 ≤ −3, then KE = −E 2 − 2 > 0. Thus K(Z0 − Y1 ) = 0 implies that coeff E Z0 =
coeff E Y1 for every E with E 2 ≤ −3. In particular, Supp(Yk ) ⊇ {E | E 2 ≤ −3}. On the
other hand, Yk Z0 = 0 implies that Supp(Yi ) ⊆ {E | EZ0 = 0} because Z0 is an anti-nef
cycle.
Now suppose (2). Fix k with 1 ≤ k ≤ s. Since Zk−1 is anti-nef and Yk Zk−1 = 0, a
similar argument to the above yields that {E | E 2 ≤ −3} ⊆ Supp(Yk ) ⊆ {E | EZk−1 = 0}.
As pa (Yk ) = 0, Supp(Yk ) is connected. Moreover, Supp(Yk ) is one of the connected
components of {E | EZk−1 = 0}. Indeed, if there exists a curve E ∉ Supp(Yk) such that EE′ > 0 for some E′ ∈ Supp(Yk), then EZk−1 < 0, since EYk ≥ 1 and EZk−1 + EYk = EZk ≤ 0.
Claim: Yk is the fundamental cycle on Supp(Yk ).
Take E ∈ Supp(Yk ). As Yk Zk−1 = 0, we have EZk−1 = 0. If EYk > 0, then EYk = EZk ≤
0. This is a contradiction. Hence EYk ≤ 0. Namely, Yk is anti-nef. Moreover, if Yk is not
the fundamental cycle on Supp(Yk ), then we know that pa (Yk ) ≤ −1. This contradicts
the assumption pa (Yk ) = 0. Hence Yk must be the fundamental cycle on Supp(Yk ).
To see (2) =⇒ (1), we notice that (c) implies that pa (Yk ) = 0. Condition (b) means
that Yk Zk−1 = 0. Hence Yk Z0 = 0. Note that the equalities Yk Z0 = Yk−1 Z0 = · · · =
Y1 Z0 = 0 yield (Zk − Z0 )Z0 = 0. Condition (d) implies that K(Z0 − Yk ) = 0. Therefore
U(Z) = U(Zr ) = · · · = U(Z1 ) = U(Z0 ) = 0, as required.
The following assertion does not hold true without the assumption that A is rational;
see [GOTWY, Example 2.2].
Corollary 6.5. Let A be a two-dimensional rational singularity. If I is an Ulrich ideal
of A, then I is a special ideal and A/I is Gorenstein.
Proof. Denote by m the maximal ideal of A. We may assume that A is not Gorenstein,
that is, e = e0m (A) ≥ 3. Then by Theorem 6.4, we can find a sequence of Ulrich cycles
Z1 , . . . , Zs and positive cycles 0 < Y1 ≤ · · · ≤ Ys ≤ Z0 satisfying all conditions in Theorem
6.4 so that
(6.5.1)   Z = Zs = Zs−1 + Ys,   Zs−1 = Zs−2 + Ys−1,   . . . ,   Z1 = Z0 + Y1.
Then Z ≤ (s + 1)Z0 and Z 6≤ sZ0 . In particular, ms 6⊆ I and ms+1 ⊆ I. Moreover, I is a
special ideal by Theorem 4.10. We have only to show the following claim.
Claim: There exists a minimal set of generators {u1, . . . , up, t} of m such that I = (u1, . . . , up, t^{s+1}).
Set Is−1 = H0 (X, OX (−Zs−1 )). Then Is−1 is also an Ulrich ideal. So we may assume
that we can write Is−1 = (u1 , . . . , up , ts ) for some minimal set of generators of m. Since
m(u1 , . . . , up ) ⊆ I and ms 6⊆ I, we have that ts ∈
/ I. Hence by ℓA (Is−1/I) = 1, we can
choose an element ai ∈ A such that ui −ai ts ∈ I for every i. By replacing ui with ui −ai ts ,
we may assume that I ′ = (u1 , . . . , up , ts+1 ) ⊆ I. As ℓA (Is−1 /I ′ ) = 1 and I 6= Is−1 , we can
conclude that I = I ′ , as required.
7. Examples
Throughout this section, let k be an algebraically closed field of characteristic 0. Let
XA denote the set of nonparameter Ulrich ideals of A.
Let A be a rational double point of type (An). Then the following example indicates that any Ulrich module with respect to I is a direct summand of Syz^2_A(A/I).
Example 7.1. Let A = k[[x, y, z]]/(x^2 + y^4 + z^2) ≅ k[[s^4, st, t^4]] be a two-dimensional rational double point of type (A4). Theorem 5.7 implies that XA = {m, I1}, where m = (x, y, z) = (s^4, st, t^4) and I1 = (x, y^2, z) = (s^4, s^2 t^2, t^4).
The corresponding anti-nef cycles to m and I1 on the minimal resolution are Z0 = E1 + E2 + E3 and Z1 = E1 + 2E2 + E3, respectively.
Moreover, if we put
M0 = A,
M1 = As + At3 ,
M2 = As2 + At2 , and M3 = As3 + At,
then they are representatives of indecomposable maximal Cohen–Macaulay A-modules,
and thus any maximal Cohen–Macaulay A-module can be written as
A⊕k ⊕ M1⊕k1 ⊕ M2⊕k2 ⊕ M3⊕k3 .
Note that Syz^2_A(A/m) ≅ M1 ⊕ M3 and Syz^2_A(A/I1) ≅ M2^{⊕2}. Moreover, Theorem 5.7 says that any Ulrich A-module with respect to m (resp. I1) can be written as
M1^{⊕k1} ⊕ M2^{⊕k2} ⊕ M3^{⊕k3}   (resp. M2^{⊕k2}).
In general, there exists an Ulrich A-module with respect to I which is not a direct summand of Syz^i_A(A/I).
Example 7.2. Let A = k[[x, y, z]]/(x3 + y 5 + z 2 ) be a rational double point of type (E8 ).
Then the set of Ulrich ideals is XA = {m, I}, where I = (x, y 2, z) with ℓA (A/I) = 2.
The Ulrich cycle corresponding to I is Z1 = 4E1 + 7E2 + 10E3 + 8E4 + 6E5 + 4E6 + 2E7 + 5E8 (with E1 the marked vertex).
Let Mi denote the indecomposable maximal Cohen–Macaulay A-module corresponding
to Ei (up to equivalence) via the McKay correspondence for every i = 1, . . . , 8. Then
we have Syz2A (A/I) ∼
= M1 . Indeed, since Syz2A (A/I) is an Ulrich A-module with respect
to I, it is isomorphic to M1⊕k for some k ≥ 1. Then k = 1 because rank Syz2A (A/I) =
rank M1 = 2.
Next we see that Syz^2_A(A/m) ≅ M7. Set Ω = Syz^2_A(A/m). As rank_A Ω = 2, we have Ω ≅ M1 or Ω ≅ M7. It follows from [GOTWY, Corollary 7.7] that Ω ≅ M7.
Similarly, one can easily see that Syz^i_A(A/m) ≅ M7 and Syz^i_A(A/I) ≅ M1 for every i ≥ 2. Hence M2 cannot be written as a direct summand of Syz^i_A(A/J) for any Ulrich
ideal J.
Two-dimensional rational double points of type (An) (Gorenstein quotient singularities) admit a sequence of Ulrich ideals of length m = ⌈n/2⌉:
(x, y^m, z) ⊂ · · · ⊂ (x, y^2, z) ⊂ (x, y, z).
However the following example shows that each two-dimensional non-Gorenstein cyclic
quotient singularity has a unique Ulrich ideal (that is, the maximal ideal m).
Example 7.3 (Cyclic quotient singularity). Let A be a two-dimensional cyclic quotient singularity of type (1/n)(1, q), where q and n are integers with 1 < q < n and (q, n) = 1. Namely, A is the invariant subring of the cyclic group generated by
g = diag(ε_n, ε_n^q),
where ε_n denotes a primitive nth root of 1 ∈ k.
Now suppose that A is not Gorenstein, that is, q + 1 is not divisible by n. Then there
exists an exceptional curve Ei so that b := −Ei2 ≥ 3. In particular, KEi = b − 2 ≥ 1. Let
Γ be the dual graph of the minimal resolution of singularities X → Spec A.
Γ is a chain E1 – · · · – E_{i−1} – E_i – E_{i+1} – · · · – E_r; the curve E_i satisfies E_i^2 = −b, and every coefficient of the fundamental cycle Z0 equals 1.
It is well-known that m is an Ulrich ideal. Now suppose that there exists an Ulrich
ideal other than m. Then we can take Y1 satisfying the conditions (3)(a)(b) in Theorem 6.4. In particular, Z0 Y1 = 0, K(Z0 − Y1) = 0 and 0 < Y1 ≤ Z0. Set Y1 = Σ_{j∈J} Ej for
some non-empty subset J of {1, . . . , r}.
If i ∈ J, then Z0 Ei = 0 because Z0 Y1 = 0. On the other hand, Z0 Ei ≤ Ei2 + 2 = 2 − b ≤
−1. This is a contradiction. Hence i ∉ J. Then Ei ⊂ Supp(Z0 − Y1). This implies that
KEi = 0 because K(Z0 − Y1 ) = 0, which contradicts the choice of Ei . Hence the maximal
ideal is the only Ulrich ideal of A.
Remark 7.4. Let A be a cyclic quotient singularity as above. Then one can obtain many
examples of special cycles in general by a similar argument to the proof of Theorem 5.7.
Example 7.5 (Rational triple points). Let a ≥ b ≥ c ≥ 2. If we set A =
k[[T, sT a , s−1 T b , (s + 1)−1 T c ]], then it is a two-dimensional rational singularity with
e0m (A) = 3 and
A ≅ k[[t, x, y, z]]/(xy − t^{a+b}, xz − t^{a+c} + zt^a, yz − yt^c + zt^b)
 ≅ k[[t, x, y, z]] / I2( x  t^b  t^c − z ; t^a  y  z ),
where I2(−) denotes the ideal of 2 × 2 minors of the indicated 2 × 3 matrix.
Then Ik = (tk , x, y, z) is an Ulrich ideal of colength k for every k with 1 ≤ k ≤ c. In fact,
if we put Qk = (tk , x + y + z), then Ik = Qk + (x, y) and Ik2 = Qk Ik . Furthermore, we
have e0Ik (A) = ℓA (A/Qk ) = 3k = (µ(I) − 1) · ℓA (A/I). Hence Ik is an Ulrich ideal.
Let a = b = c = 3. Now consider the corresponding cycles.
[Coefficient diagrams of the cycles Z0, Z1 and Z2 on the dual graph, whose central vertex is the (−3)-curve: Z0 has every coefficient equal to 1, while Z1 and Z2 carry coefficient 2 and 3, respectively, on the central (−3)-curve.]
Note that all special cycles are Ulrich cycles.
Example 7.6. Let A = k[[s^7, s^4 t, st^2, t^7]]. Then A is a two-dimensional cyclic quotient singularity, which is an invariant subring of a cyclic group generated by g = diag(ε7, ε7^3). Since 7/3 = 3 − 1/(2 − 1/2), the dual graph is the chain E1 – E2 – E3 with E1^2 = −3 and E2^2 = E3^2 = −2.
If we put N_a = ⟨s^i t^j | i + 3j ≡ a (mod 7)⟩ for a = 0, 1, . . . , 6, then {N_a}_{a=0}^6 forms a set of representatives of the isomorphism classes of indecomposable maximal Cohen–Macaulay A-modules. Then M1 = N3 = As + At^5, M2 = N2 = As^2 + At^3 and M3 = N1 = As^3 + At are indecomposable special Cohen–Macaulay A-modules. On the other hand, N4 = As^4 + Ast + At^6, N5 = As^5 + As^2 t + At^4, N6 = As^6 + As^3 t + At^2 are indecomposable Ulrich A-modules with respect to the maximal ideal m.
All special cycles are Z0 = E1 + E2 + E3 and Z1 = E1 + 2E2 + E3 . (Note that Z1 is not
an Ulrich cycle; see Example 7.3). Any special Cohen–Macaulay module with respect to
IZ1 is of the form M2⊕k .
However, we do not have the complete list of Ulrich modules with respect to some ideal.
References
[Ar] M. Artin, On isolated rational singularities of surfaces, Amer. J. Math. 88 (1966), 129–136.
[Av] L. L. Avramov, Infinite free resolutions, Six lectures on commutative algebra (Bellaterra, 1996), 1–118, Progr. Math., 166, Birkhäuser, Basel, 1998.
[BHU] J. Brennan, J. Herzog, and B. Ulrich, Maximally generated Cohen–Macaulay modules, Math. Scand. 61 (1987), 181–203.
[BH] W. Bruns and J. Herzog, Cohen–Macaulay rings, revised edition, Cambridge Studies in Advanced Mathematics, 39, Cambridge University Press, Cambridge, 1998.
[G] S. Goto, Almost Gorenstein rings – an attempt towards higher-dimensional cases –, Preprint 2012.
[GIK] S. Goto, S. Iai, and M. K. Kim, Good ideals in Gorenstein local rings obtained by idealization, Proc. Amer. Math. Soc., 130 (2001), 337–344.
[GIW]
S. Goto, S. Iai, and K. Watanabe, Good ideals in Gorenstein local rings, Trans. Amer. Math.
Soc., 353 (2000), 2309–2346.
[GOTWY] S. Goto, K. Ozeki, R. Takahashi, K.-i. Watanabe and K.-i. Yoshida, Ulrich ideals and modules,
to appear in Math. Proc. Cambridge Philos. Soc., arXiv:1206.3197.
[GW]
S. Goto and K.-i. Watanabe, On graded rings, I, J. Math. Soc. Japan, 309 (1978), 179–213.
[H]
J. Herzog, Generators and relations of abelian semigroups and semigroup rings, Manuscripta
Math., 3 (1970), 175–193.
[HKuh]
J. Herzog and M. Kühl, Maximal Cohen–Macaulay modules over Gorenstein rings and
Bourbaki-sequences, Commutative algebra and combinatorics (Kyoto, 1985), 65–92, Adv.
Stud. Pure Math., 11, North-Holland, Amsterdam, 1987.
[HKun]
J. Herzog and E. Kunz, Der kanonische Modul eines Cohen–Macaulay–Rings, Lecture Notes
in Mathematics 238, Springer–Verlag,1971.
[IW]
O. Iyama, M. Wemyss, The classification of special Cohen–Macaulay modules, Math. Z. 265
(2010), 41–83.
[Ka]
M. Kato, Riemann-Roch Theorem for strongly convex manifolds of dimension 2 , Math.
Ann.222 (1976), 243–250.
[Li1]
J. Lipman, Rational singularities, with applications to algebraic surfaces and unique factorization, Publ. Math. IHES 36 (1969), 195–279.
[Li2]
J. Lipman, Desingularization of two-dimensional schemes, Ann. of Math. 107 (1978), 151–
207.
[Ma]
H. Matsumura, Commutative Ring Theory, Cambridge Studies in Advanced Mathematics, 8,
Cambridge Cambridge University Press, 1986.
[S1]
J. Sally, Cohen–Macaulay local rings of maximal embedding dimension, J. Algebra, 56 (1979),
168–183.
[S2]
J. Sally, Number of generators of ideals in local rings, Lecture Notes in Pure and Applied
Mathematics, 35, Dekker, 1978.
[U]
B. Ulrich, Gorenstein rings and modules with high numbers of generators, Math. Z. 188
(1984), 23–32.
[W]
K.-i. Watanabe, Some examples of one dimensional Gorenstein domains, Nagoya Math. J.
49 (1973), 101-109.
[WY]
K.-i. Watanabe and K. Yoshida, Hilbert-Kunz multiplicity, McKay correspondence and good
ideals in two-dimensional rational singularities, manuscripta math. 104 (2001), 275–294.
[Wu]
J. Wunram, Reflexive modules on quotient surface singularities, Math. Ann. 279 (1988),
583–598.
[Y]
Y. Yoshino, Cohen–Macaulay modules over Cohen–Macaulay rings, London Mathematical
Society, Lecture Note Series, 146, Cambridge University Press, Cambridge, 1990.
S. Goto: Department of Mathematics, School of Science and Technology, Meiji University, 1-1-1 Higashimita, Tama-ku, Kawasaki 214-8571, Japan
E-mail address: [email protected]
K. Ozeki: Department of Mathematical Science, Faculty of Science, Yamaguchi University, 1677-1 Yoshida, Yamaguchi 853-8512, Japan
E-mail address: [email protected]
R. Takahashi: Graduate School of Mathematics, Nagoya University, Furocho,
Chikusaku, Nagoya 464-8602, Japan
E-mail address: [email protected]
URL: http://www.math.nagoya-u.ac.jp/~takahashi/
K.-i. Watanabe and K. Yoshida: Department of Mathematics, College of Humanities
and Sciences, Nihon University, 3-25-40 Sakurajosui, Setagaya-Ku, Tokyo 156-8550, Japan
E-mail address: [email protected]
E-mail address: [email protected]
Constructive Galois Connections
Taming the Galois Connection Framework for Mechanized Metatheory
arXiv:1511.06965v4 [cs.PL] 26 Oct 2016
David Darais (University of Maryland, USA; [email protected])
David Van Horn (University of Maryland, USA; [email protected])
Abstract
Galois connections are a foundational tool for structuring abstraction in semantics and their use lies at the heart of the theory of abstract interpretation. Yet, mechanization of Galois connections remains limited to restricted modes of use, preventing their general application in mechanized metatheory and certified programming.
This paper presents constructive Galois connections, a variant of Galois connections that is effective both on paper and in proof assistants; is complete with respect to a large subset of classical Galois connections; and enables more general reasoning principles, including the “calculational” style advocated by Cousot.
To design constructive Galois connections we identify a restricted mode of use of classical ones which is both general and amenable to mechanization in dependently-typed functional programming languages. Crucial to our metatheory is the addition of monadic structure to Galois connections to control a “specification effect”. Effectful calculations may reason classically, while pure calculations have extractable computational content. Explicitly moving between the worlds of specification and implementation is enabled by our metatheory.
To validate our approach, we provide two case studies in mechanizing existing proofs from the literature: one uses calculational abstract interpretation to design a static analyzer, the other forms a semantic basis for gradual typing. Both mechanized proofs closely follow their original paper-and-pencil counterparts, employ reasoning principles not captured by previous mechanization approaches, support the extraction of verified algorithms, and are novel.
Categories and Subject Descriptors F.3.2 [Semantics of Programming Languages]: Program analysis
Keywords Abstract Interpretation, Galois Connections, Monads
1. Introduction
Abstract interpretation is a general theory of sound approximation widely applied in programming language semantics, formal verification, and static analysis [10–14]. In abstract interpretation, properties of programs are related between a pair of partially ordered sets: a concrete domain, ⟨C, ⊑⟩, and an abstract domain, ⟨A, ⪯⟩. When concrete properties have a ⪯-most precise abstraction, the correspondence is a Galois connection, formed by a pair of mappings between the domains known as abstraction α ∈ C ↦ A and concretization γ ∈ A ↦ C such that c ⊑ γ(a) ⇐⇒ α(c) ⪯ a.
Since its introduction by Cousot and Cousot in the late 1970s, this theory has formed the basis of many static analyzers, type systems, model-checkers, obfuscators, program transformations, and many more applications [7].
Given the remarkable set of intellectual tools contributed by this theory, an obvious desire is to incorporate its use into proof assistants to mechanically verify proofs by abstract interpretation. When embedded in a proof assistant, verified algorithms such as static analyzers can then be extracted from these proofs.
Monniaux first achieved the goal of mechanization for the theory of abstract interpretation with Galois connections in Coq [26]. However, he notes that the abstraction side (α) of Galois connections poses a serious problem since it requires the admission of non-constructive axioms. Use of these axioms prevents the extraction of certified programs. So while Monniaux was able to mechanically verify proofs by abstract interpretation in its full generality, certified artifacts could not generally be extracted.
Pichardie subsequently tackled the extraction problem by mechanizing a restricted formulation of abstract interpretation that relied only on the concretization (γ) side of Galois connections [29]. Doing so avoids the use of axioms and enables extraction of certified artifacts. This proof technique is effective and has been used to construct several certified static analyzers [1, 5, 6, 29], most notably the Verasco static analyzer, part of the CompCert C compiler [18, 19]. Unfortunately, this approach sacrifices the full generality of the theory. While in principle the technique could achieve mechanization of existing soundness theorems, it cannot do so faithful to existing proofs. In particular, Pichardie writes [29, p. 55]:
  The framework we have retained nevertheless loses an important property of the standard framework: being able to derive a correct approximation f♯ from the specification α ◦ f ◦ γ. Several examples of such derivations are given by Cousot [8]. It seems interesting to find a framework for this kind of symbolic manipulation, while remaining easily formalizable in Coq.
This important property is the so-called “calculational” style, whereby an abstract interpreter (f♯) is derived in a correct-by-construction manner from a concrete interpreter (f) composed with abstraction and concretization (α ◦ f ◦ γ). This style of abstract interpretation is detailed in Cousot’s monograph [8], which concludes:
  The emphasis in these notes has been on the correctness of the design by calculus. The mechanized verification of this formal development using a proof assistant can be foreseen with automatic extraction of a correct program from its correctness proof.
¹ Translated from French by the present authors.
In the subsequent 17 years, this vision has remained unrealized,
and clearly the paramount technical challenge in achieving it is
obtaining both generality and constructivity in a single framework.
This paper contributes constructive Galois connections, a framework for mechanized abstract interpretation with Galois connections that achieves both generality and constructivity, thereby enabling calculational style proofs which make use of both abstraction (α) and concretization (γ), while also maintaining the ability
to extract certified static analyzers.
We develop constructive Galois connections from the insight
that many classical Galois connections used in practice are of a particular restricted form, which is reminiscent of a direct-style verification. Constructive Galois connections are the general abstraction
theory for this setting and can be mechanized effectively.
We observe that constructive Galois connections contain monadic
structure which isolates classical specifications from constructive
algorithms. Within the effectful fragment, all of classical Galois
connection reasoning can be employed, while within the pure fragment, functions must carry computational content. Remarkably,
calculations can move between these modalities and verified programs may be extracted from the end result of calculation.
To support the utility of our theory we build a library for constructive Galois connections in Agda [28] and mechanize two existing abstract interpretation proofs from the literature. The first is
drawn from Cousot’s monograph [8], which derives a correct-byconstruction analyzer from a specification induced by a concrete
interpreter and Galois connection. The second is drawn from Garcia, Clark and Tanter’s “Abstracting Gradual Typing” [17], which
uses abstract interpretation to derive static and dynamic semantics for gradually typed languages from traditional static types.
Both proofs use the “important property of the standard framework” identified by Pichardie, which is not handled by prior mechanization approaches. The mechanized proofs closely follow the
original pencil-and-paper proofs, which use both abstraction and
concretization, while still enabling the extraction of certified algorithms. Neither of these papers have been previously mechanized.
Moreover, we know of no existing mechanized proof involving calculational abstract interpretation.
Finally, we develop the metatheory of constructive Galois connections, prove them sound, and make precise their relationship
to classical Galois connections. The metatheory is itself mechanized; claims are marked with “AGDAX” whenever they are proved
in Agda. (All claims are marked.)
Contributions This paper contributes the following:
• a foundational theory of constructive Galois connections which is both general and amenable to mechanization using a dependently typed functional programming language;
• a proof library and two case studies from the literature for mechanized abstract interpretation; and
• the first mechanization of calculational abstract interpretation.
The remainder of the paper is organized as follows. First we give a tutorial on verifying a simple analyzer from two different perspectives: direct verification (§2.1) and abstract interpretation with Galois connections (§2.2), highlighting mechanization issues along the way. We then present constructive Galois connections as a marriage of the two approaches (§3). We provide two case studies: the mechanization of an abstract interpreter from Cousot's calculational monograph (§4), and the mechanization of Garcia, Clark and Tanter's work on gradual typing via abstract interpretation (§5). Finally, we formalize the metatheory of constructive Galois connections (§6), relate our work to the literature (§7), and conclude (§8).

2. Verifying a Simple Static Analyzer

In this section we contrast two perspectives on verifying a static analyzer: using a direct approach, and using the theory of abstract
but lacks the benefits of a general abstraction framework. Abstract
interpretation provides these benefits, but at the cost of added complexity and resistance to mechanized verification. In Section 3 we
present an alternative perspective: abstract interpretation with constructive Galois connections—the topic of this paper. Constructive
Galois connections marry the worlds presented in this section, providing the simplicity of direct verification, the benefits of a general
abstraction framework, and support for mechanized verification.
To demonstrate both verification perspectives we design a parity
analyzer in each style. For example, a parity analysis discovers
that 2 has parity even, succ(1) has parity even, and n + n has
parity even if n has parity odd. Rather than sketch the highlevel details of a complete static analyzer, we instead zoom into
the low-level details of a tiny fragment: analyzing the successor
arithmetic operation succ(n). At this level of detail the differences,
advantages and disadvantages of each approach become apparent.
2.1 The Direct Approach
Using the direct approach to verification one designs the analyzer,
defines what it means for the analyzer to be sound, and then completes a proof of soundness. Each step is done from scratch, and in
the simplest way possible.
This approach should be familiar to most readers, and exemplifies how most researchers approach formalizing soundness for
static analyzers: first posit the analyzer and soundness framework,
then attempt the proof of soundness. One limitation of this approach is that the setup—which gives lots of room for error—isn’t
known to be correct until after completing the final proof. However,
a benefit of this approach is it can easily be mechanized.
Analyzing Successor A parity analysis answers questions like:
“what is the parity of succ(n), given that n is even?” To answer
these questions, imagine replacing n with the symbol even, a
stand-in for an arbitrary even number. This hypothetical expression
succ(even) is interpreted by defining a successor function over
parities, rather than numbers, which we call succ♯ . This successor
operation on parities is designed such that if p is the parity for n,
succ♯ (p) will be the parity of succ(n):
P := {even, odd}        succ♯ : P → P
succ♯(even) := odd
succ♯(odd) := even
Soundness The soundness of succ♯ is defined using an interpretation for parities, which we notate ⟦p⟧:
⟦·⟧ : P → ℘(ℕ)
⟦even⟧ := {n | even(n)}
⟦odd⟧ := {n | odd(n)}
Given this interpretation, a parity p is a valid analysis result for a number n if the interpretation for p contains n, that is n ∈ ⟦p⟧. The analyzer succ♯(p) is then sound if, when p is a valid analysis result for some number n, succ♯(p) is a valid analysis result for succ(n):
n ∈ ⟦p⟧ =⇒ succ(n) ∈ ⟦succ♯(p)⟧        (DA-Snd)
The proof is by case analysis on ⟦p⟧; we show the case p = even:
  n ∈ ⟦even⟧
⇔ even(n)                     ⟨ defn. of ⟦·⟧ ⟩
⇔ odd(succ(n))                ⟨ defn. of even/odd ⟩
⇔ succ(n) ∈ ⟦odd⟧             ⟨ defn. of ⟦·⟧ ⟩
⇔ succ(n) ∈ ⟦succ♯(even)⟧     ⟨ defn. of succ♯ ⟩
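To make the direct approach concrete, here is a small Agda sketch (ours, purely illustrative; the names and encoding choices are assumptions, and the paper's own development appears below): the parity domain, the abstract successor succ♯, the interpretation ⟦p⟧ encoded as an inductive relation, and the soundness property (DA-Snd) proved by case analysis on p.

```agda
module DirectParity where

open import Data.Nat using (ℕ; zero; suc)

data Parity : Set where
  even odd : Parity

-- abstract successor over parities
succ♯ : Parity → Parity
succ♯ even = odd
succ♯ odd  = even

-- flip is extensionally the same function; the relation below uses it
flip : Parity → Parity
flip even = odd
flip odd  = even

-- the interpretation ⟦p⟧ as an inductive relation: I p n means n ∈ ⟦p⟧
data I : Parity → ℕ → Set where
  base : I even zero
  step : ∀ {p n} → I p n → I (flip p) (suc n)

-- (DA-Snd): n ∈ ⟦p⟧ implies succ(n) ∈ ⟦succ♯(p)⟧, by case analysis on p
sound : ∀ {p n} → I p n → I (succ♯ p) (suc n)
sound {even} pn = step pn
sound {odd}  pn = step pn
```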
An Even Simpler Setup There is another way to define and prove
soundness: use a function which computes the parity of a number
in the definition of soundness. This approach is even simpler, and
will help foreshadow the constructive Galois connection setup.
parity : N → P
2.2 Classical Abstract Interpretation
To verify an analyzer using abstract interpretation with Galois connections, one first designs abstraction and concretization mappings
between sets N and P. These mappings are used to synthesize an
optimal specification for succ♯ . One then proves that a postulated
succ♯ meets this synthesized specification, or alternatively derives
the definition of succ♯ directly from the optimal specification.
In contrast to the direct approach, rather than design the definition of soundness, one instead designs the definition of abstraction within a structured framework. Soundness is not designed, it
is derived from the definition of abstraction. Finally, there is added
boilerplate in the abstract interpretation approach, which requires
lifting definitions and proofs to powersets ℘(N) and ℘(P).
parity(0) := even
parity(succ(n)) := f lip(parity(n))
where f lip(even) := odd and f lip(odd) := even. This gives an
alternative and equivalent way to relate a number and a parity, due
to the following correspondence:
n ∈ JpK ⇐⇒ parity(n) = p
(DA-Corr)
The soundness of the analyzer is then restated:
parity(n) = p =⇒ parity(succ(n)) = succ♯ (p)
Abstracting Sets Powersets are introduced in abstraction and
concretization functions to support relational mappings, like mapping the symbol even to the set of all even numbers. The mappings
are therefore between powersets ℘(N) and ℘(P). The abstraction
and concretization mappings must also satisfy correctness criteria,
detailed below, at which point they are called a Galois connection.
The abstraction mapping from ℘(N) to ℘(P) is notated α, and
is defined as the pointwise lifting of parity(n):
or by substituting parity(n) = p:
parity(succ(n)) = succ♯ (parity(n))
(DA-Snd*)
Both this statement for soundness and its proof are simpler than
before. The proof follows directly from the definition of parity
and the fact that succ♯ is identical to flip.
The concretization mapping from ℘(P) to ℘(N) is notated γ, and
is defined as the flattened pointwise lifting of JpK:
γ : ℘(P) → ℘(N)        γ(P) := {n | p ∈ P ∧ n ∈ JpK}
The correctness criteria for α and γ is the correspondence:
N ⊆ γ(P ) ⇐⇒ α(N ) ⊆ P
(GC-Corr)
The correspondence means that, to relate elements of different
sets—in this case ℘(N) and ℘(P)—it is equivalent to relate them
through either α or γ. Mappings like α and γ which share this
correspondence are called Galois connections.
An equivalent correspondence to (GC-Corr) is two laws relating
compositions of α and γ, called expansive and reductive:
Mechanized Verification This direct approach to verification is
amenable to mechanization using proof assistants like Coq and
Agda. These tools are founded on constructive logic in part to
support verified program extraction. In constructive logic, functions
f : A → B are computable and often defined inductively to ensure
they can be extracted and executed as programs. Analogously,
propositions P : ℘(A) are encoded constructively as undecidable
predicates P : A → prop where x ∈ P ⇔ P (x).
To mechanize the verification of succ♯ we first translate its
definition to a constructive setting unmodified. Next we translate
JpK to a relation I(p, n) defined inductively on n:
I(even, 0)        I(p, n) =⇒ I(flip(p), succ(n))
The Main Idea Correspondences like (DA-Corr)—between an
interpretation for analysis results (JpK) and a function which computes analysis results (parity(n))—are central to the constructive
Galois Connection framework we will describe in Section 3. Using
correspondences like these, we build a general theory of abstraction
that recovers this direct approach to verification, mirrors all of the
benefits of abstract interpretation with classical Galois connections,
supports mechanized verification, and in some cases simplifies the
proof effort. We also observe that many classical Galois connections used in practice can be ported to this simpler setting.
N ⊆ γ(α(N))        (GC-Exp)
α(γ(P)) ⊆ P        (GC-Red)
Property (GC-Red) ensures α is the best abstraction possible w.r.t.
γ. For example, a hypothetical definition α(N ) := {even, odd} is
expansive but not reductive because α(γ({even})) ⊈ {even}.
In general, Galois connections are defined for arbitrary posets
⟨A, ⊑A⟩ and ⟨B, ⊑B⟩. The correspondence (GC-Corr) and its expansive/reductive variants are generalized in this setting to use partial orders ⊑A and ⊑B instead of subset ordering. We are also omitting monotonicity requirements for α and γ in our presentation (although (GC-Corr) implies monotonicity).
The mechanized proof of (DA-Snd) using I is analogous to the
one we sketched, and the mechanized proof of (DA-Snd*) follows
directly by computation. The proof term for (DA-Snd*) in both Coq
and Agda is simply refl, the reflexivity judgment for syntactic
equality modulo computation in constructive logic.
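To make this concrete, the following is a minimal, self-contained Agda sketch of the direct approach; the names (P, flip, parity, succ♯, sound) are ours and need not match the paper's accompanying development.

  module ParitySketch where

  open import Data.Nat using (ℕ; zero; suc)
  open import Relation.Binary.PropositionalEquality using (_≡_; refl)

  data P : Set where
    even odd : P

  flip : P → P
  flip even = odd
  flip odd  = even

  parity : ℕ → P
  parity zero    = even
  parity (suc n) = flip (parity n)

  succ♯ : P → P
  succ♯ = flip

  -- (DA-Snd*) holds definitionally, so the proof term is just refl.
  sound : ∀ n → parity (suc n) ≡ succ♯ (parity n)
  sound n = refl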
Powerset Lifting The original functions succ and succ♯ cannot
be related through α and γ because they are not functions between
powersets. To remedy this they are lifted pointwise:
Wrapping Up The two different approaches to verification we
present are distinguished by which parts of the design are postulated, and which parts are derived. Using the direct approach, the
analysis succ♯ , the interpretation for parities JpK and the definition
of soundness are all postulated up-front. When the soundness setup
is correct but the analyzer is wrong, the proof at the end will not go
through and the analyzer must be redesigned. Even worse, when the
soundness setup and the analyzer are both wrong, the proof might
actually succeed, giving a false assurance in the soundness of the
analyzer. However, the direct approach is attractive because it is
simple and supports mechanized verification.
↑succ : ℘(N) → ℘(N)
↑succ(N ) := {succ(n) | n ∈ N }
↑succ♯ : ℘(P) → ℘(P)
↑succ♯ (P ) := {succ♯ (p) | p ∈ P }
These lifted operations are called the concrete interpreter and abstract interpreter, because the former operates over the concrete
domain ℘(N) and the latter over the abstract domain ℘(P). In
the framework of abstract interpretation, static analyzers are just
abstract interpreters. Lifting to powersets is necessary to use the
abstract interpretation framework, and has the negative effect of
adding boilerplate to definitions and proofs of soundness.
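For comparison, here is a small Agda sketch of that boilerplate, with powersets encoded as predicates; ℘, ↑succ and ↑succ♯ below are our own encodings of the pointwise liftings, not the paper's development.

  module LiftingSketch where

  open import Data.Nat using (ℕ; suc)
  open import Data.Product using (∃-syntax; _×_)
  open import Relation.Binary.PropositionalEquality using (_≡_)

  data P : Set where
    even odd : P

  flip : P → P
  flip even = odd
  flip odd  = even

  -- powersets as (possibly undecidable) predicates
  ℘ : Set → Set₁
  ℘ A = A → Set

  ↑succ : ℘ ℕ → ℘ ℕ
  ↑succ N m = ∃[ n ] (N n × m ≡ suc n)

  ↑succ♯ : ℘ P → ℘ P
  ↑succ♯ Q q = ∃[ p ] (Q p × q ≡ flip p)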
Soundness The definition of soundness for succ♯ is synthesized
by relating ↑succ♯ to ↑succ composed with α and γ:
α(↑succ(γ(P))) ⊆ ↑succ♯(P)        (GC-Snd)
Calculational Derivation of Abstract Interpreters Rather than
posit ↑succ♯ and prove it correct directly, one can instead derive its
definition through a calculational process. The process begins with
the optimal specification on the left-hand-side of (GC-Opt), and
reasons equationally towards the definition of a function. In this
way, ↑succ♯ is not postulated, rather it is derived by calculation,
and the result is both sound and optimal by construction.
The derivation is by case analysis on P which has four cases:
{}, {even}, {odd} and {even, odd}; we show P = {even}:
The left-hand side of the ordering is an optimal specification for any
abstraction of ↑succ (a consequence of (GC-Corr)), and the subset ordering says ↑succ♯ is an over-approximation of this optimal
specification. The reason to over-approximate is because the specification is a mathematical description, and the abstract interpreter
is usually an algorithm, and therefore not always able to match the
specification precisely. The proof of (GC-Snd) is by case analysis
on P . We do not show the proof, rather we demonstrate a proof
later in this section which also synthesizes the definition of succ♯ .
One advantage of the abstract interpretation framework is that it
gives the researcher the choice between four soundness properties,
all of which are equivalent and generated by α and γ:
α(↑succ(γ(P))) ⊆ ↑succ♯(P)        (GC-Snd/αγ)
↑succ(γ(P)) ⊆ γ(↑succ♯(P))        (GC-Snd/γγ)
α(↑succ(N)) ⊆ ↑succ♯(α(N))        (GC-Snd/αα)
↑succ(N) ⊆ γ(↑succ♯(α(N)))        (GC-Snd/γα)
Not all analyzers are optimal, however optimality helps identify
those which approximate too much. Consider the analyzer ↑succ♯′ :
↑succ♯′ : ℘(P) → ℘(P)        ↑succ♯′(P) := {even, odd}
Resistance to Mechanized Verification Despite the beauty and
utility of Galois connections, advocates of the approach have yet to
reconcile their use with advances in mechanized reasoning: every
mechanized verification of an executable abstract interpreter todate has resisted the use of Galois connections, even when initially
designed to take advantage of the framework.
The issue in mechanizing Galois connections amounts to a conflict between supporting both classical set-theoretic reasoning and
executable static analyzers. Supporting executable static analyzers
calls for constructive mathematics, a problem for α functions because they are often non-constructive, an observation first made
by Monniaux [26]. To work around this limitation, Pichardie [29]
advocates for designing abstract interpreters which are merely inspired by Galois connections, but ultimately avoiding their use
in verification, which he terms the “γ-only” approach. Successful verification projects such as Verasco adopt this “γ-only” approach [18, 19], despite the use of Galois connections in designing
the original Astrée analyzer [4].
To better understand the foundational issues with Galois connections and α functions, consider verifying the abstract interpretation approach to soundness for our parity analyzer using a
proof assistant built on constructive logic. In this setting, the encoding of the Galois connection must support elements of infi-
This analyzer reports that succ(n) could have any parity regardless
of the parity for n; it’s the analyzer that always says “I don’t know”.
This analyzer is perfectly sound but non-optimal.
Just like soundness, four completeness statements are generated by α and γ; however, the four statements are not equivalent:
α(↑succ(γ(P))) = ↑succ♯(P)        (GC-Cmp/αγ)
↑succ(γ(P)) = γ(↑succ♯(P))        (GC-Cmp/γγ)
α(↑succ(N)) = ↑succ♯(α(N))        (GC-Cmp/αα)
↑succ(N) = γ(↑succ♯(α(N)))        (GC-Cmp/γα)
Added Complexity The abstract interpretation approach requires
a Galois connection up-front which necessitates the introduction of
powersets ℘(N) and ℘(P). This results in powerset-lifted definitions and adds boilerplate set-theoretic reasoning to the proofs.
This is in contrast to the direct approach which never mentions
powersets of parities. Not using powersets results in more understandable soundness criteria, requires no boilerplate set-theoretic
reasoning, and results in fewer cases for the proof of soundness.
This boilerplate becomes magnified in a mechanized setting where
all details must be spelled out to a proof assistant. Furthermore,
the simpler proof of (DA-Snd*)—which was immediate from the
definition of parity—cannot be recovered within the abstract interpretation framework, which shows one abandons simpler proof
techniques in exchange for the benefits of abstract interpretation.
Because the left-hand-side is an optimal specification, an abstract
interpreter will never be strictly more precise. Therefore, optimality
is written equivalently using an equality:
The derivation of the other cases is analogous, and together they
define the implementation of ↑succ♯ .
Deriving analyzers by calculus is attractive because it is systematic, and because it prevents the issue where an analyzer is postulated and discovered to be unsound only after failing to complete
its soundness proof. However, this calculational style of abstract
interpretation is not amenable to mechanized verification with program extraction because α is often non-constructive, an issue we
describe later in this section.
α(↑succ(γ(P))) ⊇ ↑succ♯(P)        (GC-Opt)
α(↑succ(γ(P))) = ↑succ♯(P)
Completeness The mappings α and γ also synthesize an optimality statement for ↑succ♯ , a kind of completeness property, by stating that it under-approximates the optimal specification:
Because each soundness property is equivalent (also a consequence
of (GC-Corr)), one can choose whichever variant is easiest to prove.
The soundness setup (GC-Snd) is the αγ rule, however any of the
other rules can also be used. For example, one could choose αα
or γα; in these cases the proof considers four disjoint cases for N :
N is empty, N contains only even numbers, N contains only odd
numbers, and N contains both even and odd numbers.
For the case P = {even}, the calculation is:
    α(↑succ(γ({even})))
  = α(↑succ({n | even(n)}))     * defn. of γ +
  = α({succ(n) | even(n)})      * defn. of ↑succ +
  = α({n | odd(n)})             * defn. of even/odd +
  = {odd}                       * defn. of α +
  ≜ ↑succ♯({even})              * defining ↑succ♯ +
Abstract interpreters which satisfy the αγ variant are called optimal
because they lose no more information than necessary, and those
which satisfy the γα variant are called precise because they lose no
information at all. The abstract interpreter succ♯ is optimal but not
precise, because γ(↑succ♯(α({1}))) ≠ ↑succ({1}).
To overcome mechanization issues with Galois connections, the
state-of-the-art is restricted to use γγ rules only for soundness
(GC-Snd/γγ) and completeness (GC-Cmp/γγ). This is unfortunate
for completeness properties because each completeness variant is
not equivalent.
nite powersets—like the set of all even numbers—as well as executable abstract interpreters which manipulate elements of finite
powersets—like {even, odd}. To support representing infinite sets,
the powerset ℘(N) is modelled constructively as a predicate N →
prop. To support defining executable analyzers that manipulate sets
of parities, the powerset ℘(P) is modelled as an enumeration of its
inhabitants, which we call Pc :
Abstracting Sets A constructive Galois connection between sets
A and B contains two mappings: the first is called extraction,
notated η, and the second is called interpretation, notated µ:
η : A → B        µ : B → ℘(A)
η and µ are analogous to classical Galois connection mappings α
and γ. In the parity analysis described in Section 2.1, the extraction
function was parity and the interpretation function was J K.
Constructive Galois connection mappings η and µ must form a
correspondence similar to (GC-Corr):
Pc := {even, odd, ⊥, ⊤}
where ⊥ and ⊤ represent {} and {even, odd}. This enables a
definition for ↑succ♯ : Pc → Pc which can be extracted and
executed. The consequence of this design is a Galois connection
between N → prop and Pc ; the issue is now α:
α : (N → prop) → Pc
x ∈ µ(y) ⇐⇒ η(x) = y        (CGC-Corr)
The intuition behind the correspondence is the same as before: to
compare an element x in A to an element y in B, it is equivalent to
compare them through either η or µ.
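A minimal Agda sketch of this structure, assuming only the standard library; the record CGC and its field names are ours.

  module CGCRecord where

  open import Data.Product using (_×_)
  open import Relation.Binary.PropositionalEquality using (_≡_)

  record CGC (A B : Set) : Set₁ where
    field
      η    : A → B
      µ    : B → A → Set          -- µ y is a predicate over A
      -- (CGC-Corr), split into its two directions
      corr : ∀ x y → (µ y x → η x ≡ y) × ((η x ≡ y) → µ y x)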
Like classical Galois connections, the correspondence between
η and µ is stated equivalently through two composition laws. Extraction functions η which form a constructive Galois connection
are also a “best abstraction”, analogously to α in the classical setup:
This version of α cannot be defined constructively, as doing so
requires deciding predicates over φ : N → prop. To define α one
must perform case analysis on predicates like ∃n, φ(n) ∧ even(n)
to compute an element of Pc , which is not possible for arbitrary φ.
However, γ can be defined constructively:
γ : Pc → (N → prop)
sound : x ∈ µ(η(x))                    (CGC-Ext)
tight : x ∈ µ(y) =⇒ η(x) = y           (CGC-Red)
In general, any theorem of soundness using Galois connections
can be rewritten to use only γ, making use of (GC-Corr); this is
the essence of the “γ-only” approach, embodied by the soundness
variant (GC-Snd/γγ). However, this principle does not apply to all
proofs of soundness using Galois connections, many of which mention α in practice. For example, the γ-only setup does not support
calculation in the style advocated by Cousot [8]. Furthermore, not
all completeness theorems can be translated to γ-only style, such as
(GC-Cmp/γα) which is used to show an abstract interpreter is fully
precise.
Aside We use the term extraction function and notation η from
Nielson et al [27] where η is used to simplify the definition of an
abstraction function α. We recover α functions from η in a similar
way. However, their treatment of η is a side-note to simplifying
the definition of α and nothing more. We take this simple idea
much further to realize an entire theory of abstraction around η/µ
functions and their correspondences. In this “lowered” theory of
η/µ we describe soundness/optimality criteria and calculational
derivations analogous to those of α/γ while supporting mechanized
verification, none of which is true of Nielson et al’s use of η.
Wrapping Up Abstract interpretation differs from the direct approach in which parts of the design are postulated and which parts
are derived. The direct approach requires postulating the analyzer
and definition of soundness. Using abstract interpretation, a Galois
connection between sets is postulated instead, and definitions for
soundness and completeness are synthesized from the Galois connection. Also, abstract interpretation supports deriving the definition
of a static analyzer directly from its proof of correctness.
The downside of abstract interpretation is that it requires lifting succ and succ♯ into powersets, which results in boilerplate
set-theoretic reasoning in the proof of soundness. Finally, due to
foundational issues, the abstract interpretation framework is not
amenable to mechanized verification while also supporting program extraction using constructive logic.
Induced Specifications Four equivalent soundness criteria are
generated by η and µ just like in the classical framework. Each
soundness statement uses η and µ in a different but equivalent way
(assuming (CGC-Corr)). For a concrete f : A → A and abstract
f♯ : B → B, f♯ is sound if any of the following properties hold:
x ∈ µ(y) ∧ y′ = η(f(x)) =⇒ y′ = f♯(y)       (CGC-Snd/ηµ)
x ∈ µ(y) ∧ x′ = f(x) =⇒ x′ ∈ µ(f♯(y))        (CGC-Snd/µµ)
y′ = η(f(x)) =⇒ y′ = f♯(η(x))                (CGC-Snd/ηη)
x′ = f(x) =⇒ x′ ∈ µ(f♯(η(x)))                (CGC-Snd/µη)
In the direct approach to verifying an example parity analysis
described in Section 2.1, the first soundness property (DA-Snd) is
generated by the µµ variant, and the second soundness property
(DA-Snd*) which enjoyed a simpler proof is generated by the ηη
variant. We write these soundness rules in a slightly strange way
so we can write their completeness analogs simply by replacing
⇒ with ⇔. The origin of these rules comes from an adjunction
framework, which we discuss in Section 6.
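As an illustration, the µµ variant for the parity analysis can be stated and proved in a few lines of Agda; I, I-zero, I-suc and snd-µµ are our names for the relational encoding of JpK and the soundness statement.

  module SoundnessVariants where

  open import Data.Nat using (ℕ; zero; suc)

  data P : Set where
    even odd : P

  flip : P → P
  flip even = odd
  flip odd  = even

  succ♯ : P → P
  succ♯ = flip

  -- JpK as an inductive relation
  data I : P → ℕ → Set where
    I-zero : I even zero
    I-suc  : ∀ {p n} → I p n → I (flip p) (suc n)

  -- the µµ variant (DA-Snd): a single constructor application
  snd-µµ : ∀ {p n} → I p n → I (succ♯ p) (suc n)
  snd-µµ = I-suc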
The mappings η and µ also generate four completeness criteria
which, like classical Galois connections, are not equivalent:
3. Constructive Galois Connections
In this section we describe abstract interpretation with constructive Galois connections—a parallel universe of Galois connections
analogous to classical ones. The framework enjoys all the benefits
of abstract interpretation, but like the direct approach avoids the
pitfalls of added complexity and resistance to mechanization.
We will describe the framework of constructive Galois connections between sets A and B. When instantiated to N and P, the
framework recovers exactly the direct approach from Section 2.1.
We will also describe constructive Galois connections in the absence of partial orders, or more specifically, we will assume the
discrete partial order: x ⊑ y ⇔ x = y. (Partial orders didn’t appear in our demonstration of classical abstract interpretation, but
they are essential to the general theory.) We describe generalizing
to partial orders and recovering classical results from constructive
ones at the end of this section. The fully general theory of constructive Galois connections is described in Section 6 where it is
compared side-by-side to classical Galois connections.
x ∈ µ(y) ∧ y′ = η(f(x)) ⇐⇒ y′ = f♯(y)       (CGC-Cmp/ηµ)
x ∈ µ(y) ∧ x′ = f(x) ⇐⇒ x′ ∈ µ(f♯(y))        (CGC-Cmp/µµ)
y′ = η(f(x)) ⇐⇒ y′ = f♯(η(x))                (CGC-Cmp/ηη)
x′ = f(x) ⇐⇒ x′ ∈ µ(f♯(η(x)))                (CGC-Cmp/µη)
Inspired by classical Galois connections, we call abstract interpreters f ♯ which satisfy the ηµ variant optimal and those which
satisfy the µη variant precise.
The above soundness and completeness rules are stated for concrete and abstraction functions f : A → A and f ♯ : B → B.
However, they generalize easily to relations R : ℘(A × A) and
predicate transformers F : ℘(A) → ℘(A) (i.e. collecting semantics) through the adjunction framework discussed in Section 6. The
case studies in Sections 4 and 5 describe abstract interpreters over
concrete relations and their soundness conditions.
and sound and optimal by construction. In addition to these benefits of a general abstraction framework, constructive Galois connections are amenable to mechanized verification. Both extraction
(η) and interpretation (µ) can be mechanized effectively, as well as
proofs of soundness, completeness, and calculational derivations.
Calculational Derivation of Abstract Interpreters The constructive Galois connection framework also supports deriving abstract
interpreters through calculation, analogously to the calculation we
demonstrated in Section 2.2. To support calculational reasoning,
the four logical soundness criteria are rewritten into statements
about subsumption between powerset elements:
3.1 Partial Orders and Monotonicity
{η(f(x)) | x ∈ µ(y)} ⊆ {f♯(y)}        (CGC-Snd/ηµ*)
The full theory of constructive Galois connections generalizes to
posets ⟨A, ⊑A⟩ and ⟨B, ⊑B⟩ by making the following changes:
• Powersets must be downward-closed, that is for X : ℘(A):
  x ∈ X ∧ x′ ⊑ x =⇒ x′ ∈ X        (PowerMon)
{f(x) | x ∈ µ(y)} ⊆ µ(f♯(y))          (CGC-Snd/µµ*)
Singleton sets {x} are reinterpreted to mean {x′ | x′ ⊑ x}. For mechanization, this means ℘(A) is encoded as an antitonic function, notated with a down-right arrow A ↘ prop, where the partial ordering on prop is by implication.
{η(f(x))} ⊆ {f♯(η(x))}                (CGC-Snd/ηη*)
{f(x)} ⊆ µ(f♯(η(x)))                  (CGC-Snd/µη*)
• Functions must be monotonic, that is for f : A → A:
x ⊑ x′ =⇒ f(x) ⊑ f(x′)        (FunMon)
The completeness analog to the four rules replaces set subsumption
with equality. Using the ηµ* completeness rule, one calculates
towards a definition for f ♯ starting from the left-hand-side, which
is the optimal specification for abstract interpreters of f .
To demonstrate calculation using constructive Galois connections, we show the derivation of succ♯ from its induced specification, the result of which is sound and optimal (because each step is
= in addition to ⊆) by construction; we show p = even:
We notate monotonic functions f : A ↗ A. Monotonicity
is required for mappings η and µ, and concrete and abstract
interpreters f and f ♯ .
• The constructive Galois connection correspondence is general-
ized to partial orders in place of equality, that is for η and µ:
x ∈ µ(y) ⇐⇒ η(x) ⊑ y
(CGP-Corr)
    {parity(succ(n)) | n ∈ JevenK}
  = {parity(succ(n)) | even(n)}     * defn. of J K +
  = {flip(parity(n)) | even(n)}     * defn. of parity +
  = {flip(even)}                    * Eq. DA-Corr +
  = {odd}                           * defn. of flip +
  ≜ {succ♯(even)}                   * defining succ♯ +
or alternatively, by generalizing the reductive property:
x ∈ µ(y) =⇒ η(x) ⊑ y        (CGP-Red)
• Soundness criteria are also generalized to partial orders:
  x ∈ µ(y) ∧ y′ ⊑ η(f(x)) =⇒ y′ ⊑ f♯(y)       (CGP-Snd/ηµ)
  x ∈ µ(y) ∧ x′ ⊑ f(x) =⇒ x′ ∈ µ(f♯(y))        (CGP-Snd/µµ)
  y ⊑ η(f(x)) =⇒ y ⊑ f♯(η(x))                  (CGP-Snd/ηη)
  x′ ⊑ f(x) =⇒ x′ ∈ µ(f♯(η(x)))                (CGP-Snd/µη)
We will show another perspective on this calculation later in this
section, where the derivation of succ♯ is not only sound and optimal
by construction, but computable by construction as well.
We were careful to write the equalities in Section 3 in the right
order so this change is just swapping = for ⊑. Completeness
criteria are identical with ⇔ in place of ⇒.
Mechanized Verification In addition to the benefits of a general abstraction framework, constructive Galois connections are
amenable to mechanization in a way that classical Galois connections are not. In our Agda library and case studies we mechanize
constructive Galois connections in full generality, as well as proofs
that use both mapping functions, such as calculational derivations.
As we discussed in Sections 2.1 and 2.2, the constructive encoding for infinite powersets ℘(A) is A → prop. This results in
the following types for η and µ when encoded constructively:
η : N → P        µ : P → N → prop
To demonstrate when partial orders and monotonicity are necessary, consider designing a parity analyzer for the max operator:
max♯ : P × P → P
max♯ (even, even) := even
max♯ (even, odd) := ?
max♯ (odd, odd) := odd
max♯ (odd, even) := ?
The last two cases for max♯ cannot be defined because the maximum of an even and odd number could be either even or odd, and
there is no representative for “any number” in P. To remedy this,
we add any to the set of parities: P+ := P∪{any}; the new element
any is interpreted: JanyK := {n | n ∈ N}; the partial order on P+
becomes: even, odd ⊑ any; and the correspondence continues to
hold using this partial order: n ∈ Jp+ K ⇐⇒ parity(n) ⊑ p+ .
max♯ is then defined using the abstraction P+ and proven sound
and optimal following the abstract interpretation paradigm.
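A sketch of this extension in Agda, under the simplifying assumption that only the three elements even, odd and any are needed; P+ and max♯ are our names.

  module ParityPlus where

  data P+ : Set where
    even odd any : P+

  max♯ : P+ → P+ → P+
  max♯ even even = even
  max♯ odd  odd  = odd
  max♯ _    _    = any

The catch-all clause returns any, which is sound but deliberately imprecise for mixed or unknown arguments.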
In constructive logic, the arrow type N → P classifies computable
functions, and the arrow type P → N → prop classifies undecidable relations. (CGC-Corr) is then mechanized without issue:
µ(p, n) ⇐⇒ η(n) = p
See the mechanization details in Section 2.1 for how η and µ are
defined constructively for the example parity analysis.
Wrapping Up Constructive Galois connections are a general abstraction framework similar to classical Galois connections. At the
heart of the constructive Galois connection framework is a correspondence (CGC-Corr) analogous to its classical counterpart. From
this correspondence, soundness and completeness criteria are synthesized for abstract interpreters. Constructive Galois connections
also support calculational derivations of abstract interpreters which
3.2 Relationship to Classical Galois Connections
We clarify the relationship between constructive and classical Galois connections in three ways:
• Any constructive Galois connection can be lifted to obtain an
equivalent classical Galois connection, and likewise for soundness and completeness proofs.
• Any classical Galois connection which can be recovered by a constructive one contains no additional expressive power, rendering it an equivalent theory with added boilerplate reasoning.
The monadic structure of classical powersets is standard, and is
analogous to the nondeterminism monad familiar to Haskell programmers. However, the model ℘(A) = A → prop is the uncomputable nondeterminism monad and mirrors the use of setcomprehensions on paper to describe uncomputable sets (specifications), rather than the use of monad comprehensions in Haskell
to describe computable sets (constructed values).
We generalize ℘( ) to a monotonic monad, similarly to how
we generalized powersets to posets in Section 3.1. This results in
monotonic versions of monad operators ret and bind:
• Not all classical Galois connections can be recovered by con-
structive ones.
From these relationships we conclude that one benefits from using
constructive Galois connections whenever possible, classical Galois connections when no constructive one exists, and both theories
together as needed. We make these claims precise in Section 6.
A classical Galois connection is recovered from a constructive
one by the following lifting:
α : ℘(A) → ℘(B)        α(X) := {η(x) | x ∈ X}
γ : ℘(B) → ℘(A)        γ(Y) := {x | y ∈ Y ∧ x ∈ µ(y)}
ret : A → ℘(A)          ret(x) := {x′ | x′ ⊑ x}
bind : ℘(A) × (A → ℘(B)) → ℘(B)        bind(X, f) := {y | x ∈ X ∧ y ∈ f(x)}
We adopt Moggi’s notation [25] for monadic extension where
bind(X, f ) is written f ∗ (X), or just f ∗ for λX.f ∗ (X).
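A minimal Agda encoding of this specification monad, ignoring the partial-order refinements of Section 3.1; ℘, ret, bind and pure are our names for the operators described above.

  module SpecMonad where

  open import Data.Product using (∃-syntax; _×_)
  open import Relation.Binary.PropositionalEquality using (_≡_)

  -- powersets as predicates: the "uncomputable nondeterminism" monad
  ℘ : Set → Set₁
  ℘ A = A → Set

  ret : {A : Set} → A → ℘ A
  ret x = λ x′ → x′ ≡ x

  bind : {A B : Set} → ℘ A → (A → ℘ B) → ℘ B
  bind X f = λ y → ∃[ x ] (X x × f x y)

  -- inject a pure function into the "effectful" function space
  pure : {A B : Set} → (A → B) → (A → ℘ B)
  pure f x = ret (f x)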
We call the powerset type ℘(A) a specification effect because it
has monadic structure, supports encoding arbitrary properties over
values in A, and cannot be “escaped from” in constructive logic,
similar to the IO monad in Haskell. In classical mathematics, there
is an isomorphism between singleton powersets ℘1 (A) and the set
A. However, no such constructive mapping exists for ℘1 (A) → A.
Such a function would decide arbitrary predicates in A → prop to
compute the A inside the singleton set. This observation, that you
can program inside ℘( ) monadically in constructive logic, but you
can’t escape the monad, is why we call it a specification effect.
Given the monadic structure for powersets, and the intuition
that they encode a specification effect in constructive logic, we can
recast the theory of constructive Galois connections using monadic
operators. To do this we define a helper operator which injects
“pure” functions into the “effectful” function space:
When a classical Galois connection can be written in this form for
some η and µ, then one can use the simpler setting of abstract interpretation with constructive Galois connections without any loss
of generality. We also observe that many classical Galois connections in practice can be written in this form, and therefore can be
mechanized effectively using constructive Galois connections. The
case studies presented in Sections 4 and 5 are two such cases,
although the original authors of those works did not initially write
their classical Galois connections in this explicitly lifted form.
An example of a classical Galois connection which is not recovered by lifting a constructive Galois connection is the Independent Attributes
(IA) abstraction, which abstracts relations R : ℘(A × B) with their
component-wise splitting hRl , Rr i : ℘(A) × ℘(B):
α : ℘(A × B) → ℘(A) × ℘(B)
α(R) := h{x | ∃y.hx, yi ∈ R}, {y | ∃x.hx, yi ∈ R}i
γ : ℘(A) × ℘(B) → ℘(A × B)
γ(Rl , Rr ) := {hx, yi | x ∈ Rl , y ∈ Rr }
pure : (A → B) → (A → ℘(B)) pure(f )(x) := ret(f (x))
We then rewrite (CGC-Corr) using ret and pure:
ret(x) ⊆ µ(y) ⇐⇒ pure(η)(x) ⊆ ret(y)        (CGM-Corr)
This Galois connection is amenable to mechanized verification. In
a constructive setting, α and γ are maps between A × B → prop
and (A → prop) × (B → prop), and can be defined directly using
logical connectives ∃ and ∧:
and we rewrite the expansive and reductive variant of the correspondence using ret, bind (notated f ∗ ) and pure:
ret(x) ⊆ µ∗(pure(η)(x))        (CGM-Exp)
α(R) := ⟨λ(x).∃(y).R(x, y), λ(y).∃(x).R(x, y)⟩
γ(Rl, Rr) := λ(x, y).Rl(x) ∧ Rr(y)
pure(η)∗(µ(y)) ⊆ ret(y)        (CGM-Red)
The four soundness and completeness conditions can also be written in monadic style; we show the ηµ soundness property here:
IA can be mechanized effectively because the Galois connection
consists of mappings between specifications and the foundational
issue of constructing values from specifications does not appear.
IA is not a constructive Galois connection because there is no pure
function η underlying the abstraction function α.
Because constructive Galois connections can be lifted to classical ones, a constructive Galois connection can interact directly with
IA through its lifting, even in a mechanized setting. However, once
a constructive Galois connection is lifted it loses its computational
properties and cannot be extracted and executed. In practice, IA is
used to weaken (⊑) an induced optimal specification after which
the calculated interpreter is shown to be optimal (=) up-to-IA. IA
never appears in the final calculated interpreter, so not having a
constructive Galois connection formulation poses no issue.
pure(η)∗ (pure(f )∗ (µ(y))) ⊆ pure(f ♯ )(y)
(CGM-Snd)
The left-hand-side of the ordering is the optimal specification for
f ♯ , just like (CGC-Snd/ηµ) but using monadic operators. The righthand-side of the ordering is f ♯ lifted to the monadic function space.
The constructive calculation of succ♯ we showed earlier in this
section is a calculation of this form. The specification on the left
has type ℘(P), and it has effects, meaning it uses classical reasoning
and can’t be executed. The abstract interpreter on the right also has
type ℘(P), but it has no effects, meaning it can be extracted and
executed. The calculated abstract interpreter is thus not only sound
and optimal by construction, it is computable by construction.
Constructive Galois connections are empowering because they
treat specification like an effect, which optimal specifications ought
to have, and which computable abstract interpreters ought not to
have. Using a monadic effect discipline we support calculations
which start with a specification effect, and where the “effect” is
eliminated through the process of calculation. The monad laws
are crucial in canceling uses of ret with bind to arrive at a final
pure computation. For example, the first step in a derivation for
(CGM-Snd) can immediately simplify using monad laws to:
3.3 The “Specification Effect”
The machinery of constructive Galois connections follows a monadic
effect discipline, where the effect type is the classical powerset
℘( ); we call this a specification effect. First we will describe the
monadic structure of powersets ℘( ) and what we mean by “specification effect”. Then we will recast the theory of constructive
Galois connections in this monadic style, giving insights into why
the theory supports mechanized verification, and foreshadowing
key fragments of the metatheory we develop in Section 6.
pure(η ◦ f )∗ (µ(y)) ⊆ pure(f ♯ )(y)
i ∈ Z := {. . . , −1, 0, 1, . . .}                      integers
b ∈ B := {true, false}                                  booleans
x ∈ var ::= . . .                                       variables
⊕ ∈ aop ::= + | − | × | /                               arithmetic op.
< ∈ cmp ::= < | =                                       comparison op.
⊙ ∈ bop ::= ∨ | ∧                                       boolean op.
ae ∈ aexp ::= i | x | rand | ae ⊕ ae                    arithmetic exp.
be ∈ bexp ::= b | ae < ae | be ⊙ be                     boolean exp.
ce ∈ cexp ::= skip | ce ; ce | x := ae
            | if be then ce else ce
            | while be do ce                            command exp.
ρ ∈ env := var ⇀ Z
ς ∈ Σ ::= ⟨ρ, ce⟩
Figure 1. Case Study: WHILE abstract syntax

J Ka ∈ aop → Z × Z ⇀ Z      J Kc ∈ cmp → Z × Z → B      J Kb ∈ bop → B × B → B
⊢ ⇓a ∈ ℘(env × aexp × Z)    ⊢ ⇓b ∈ ℘(env × bexp × B)    ↦c ∈ ℘(Σ × Σ)

ARAND:     ρ ⊢ rand ⇓a i
AOP:       ρ ⊢ ae1 ⇓a i1    ρ ⊢ ae2 ⇓a i2   =⇒   ρ ⊢ ae1 ⊕ ae2 ⇓a J⊕Ka(i1, i2)
CASSIGN:   ρ ⊢ ae ⇓a i   =⇒   ⟨ρ, x := ae⟩ ↦c ⟨ρ[x ← i], skip⟩
CWHILE-T:  ρ ⊢ be ⇓b true   =⇒   ⟨ρ, while be do ce⟩ ↦c ⟨ρ, ce ; while be do ce⟩
CWHILE-F:  ρ ⊢ be ⇓b false   =⇒   ⟨ρ, while be do ce⟩ ↦c ⟨ρ, skip⟩
Figure 2. Case Study: WHILE concrete semantics
4. Case Study: Calculational AI
In this section we apply constructive Galois connections to the Calculational Design of a Generic Abstract Interpreter from Cousot’s
monograph [8]. To our knowledge, we achieve the first mechanically verified abstract interpreter derived by calculus.
The key challenge in mechanizing the interpreter is supporting
both abstraction (α) and concretization (γ) mappings, which are required by the calculational approach. Classical Galois connections
do not support mechanization of the abstraction mapping without
the use of axioms, and the required axioms block computation, preventing the extraction of verified algorithms.
To verify Cousot’s generic abstract interpreter we use constructive Galois connections, which we describe in Section 3 and formalize in Section 6. Using constructive Galois connections we encode extraction (η) and interpretation (µ) mappings as constructive
analogs to α and γ, calculate an abstract interpreter for an imperative programming language which is sound and computable by
construction, and recover the original classical Galois connection
results through a systematic lifting.
First we describe the setup for the analyzer: the abstract syntax, the concrete semantics, and the constructive Galois connections involved. Following the abstract interpretation paradigm with
constructive Galois connections we design abstract interpreters for
denotation functions and semantics relations. We show a fragment
of our Agda mechanization which closely mirrors the pencil-andpaper proof, as well as Cousot’s original derivation.
4.1 Concrete Semantics
The WHILE language is an imperative programming language with arithmetic expressions, variable assignment and while-loops. We show the syntax for this language in Figure 1. WHILE syntactically distinguishes arithmetic, boolean and command expressions. rand is an arithmetic expression which can evaluate to any integer. Syntactic categories ⊕, < and ⊙ range over arithmetic, comparison and boolean operators, and are introduced to simplify the presentation. The WHILE language is taken from Cousot's monograph [8].
The concrete semantics of WHILE is sketched without full definition in Figure 2. Denotation functions J Ka, J Kc and J Kb give semantics to arithmetic, comparison and boolean operators. The semantics of compound syntactic expressions are given operationally with relations ⇓a, ⇓b and ↦c. Relational semantics are given for arithmetic expressions and commands due to the nondeterminism of rand and nontermination of while. These semantics serve as the starting point for designing an abstract interpreter.
4.2 Abstract Semantics with Constructive GCs
Using abstract interpretation with constructive Galois connections, we design an abstract semantics for WHILE in the following steps:
1. An abstraction for each set Z, B and env.
2. An abstraction for each denotation function J Ka, J Kc and J Kb.
3. An abstraction for each semantics relation ⇓a, ⇓b and ↦c.
Each abstract set forms a constructive Galois connection with its concrete counterpart. Soundness criteria are synthesized for abstract functions and relations using constructive Galois connection mappings. Finally, we verify and calculate abstract interpreters from these specifications which are sound and computable by construction. We describe the details of this process only for integers and environments (the sets Z and env), arithmetic operators (the denotation function J Ka), and arithmetic expressions (the semantics relation ⇓a). See the Agda development accompanying this paper for the full mechanization of WHILE.
Abstracting Integers We design a simple sign abstraction for integers, although more powerful abstractions are certainly possible [24]. The final abstract interpreter for WHILE is parameterized by any abstraction for integers, meaning another abstraction can be plugged in without added proof effort.
The sign abstraction begins with three representative elements: neg, zer and pos, representing negative integers, the integer 0, and positive integers. To support representing integers which could be negative or 0, negative or positive, or 0 or positive, etc. we design a set which is complete w.r.t these logical disjunctions:
i♯ ∈ Z♯ := {none, neg, zer, pos, negz, nzer, posz, any}
Z♯ is given meaning through an interpretation function µz , the
analog of a γ from the classical Galois connection framework:
µz : Z♯ → ℘(Z)
µz (none) := {}
µz (neg) := {i | i < 0}
µz (zer) := {0}
µz (pos) := {i | i > 0}
µz (negz) := {i | i ≤ 0}
µz (nzer) := {i | i 6= 0}
µz (posz) := {i | i ≥ 0}
µz (any) := {i | i ∈ Z}
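As a sketch, the same domain can be written in Agda with µz as a predicate over ℤ, assuming only the standard library's Data.Integer; the encoding below is ours, not the paper's accompanying development.

  module SignDomain where

  open import Data.Integer using (ℤ; 0ℤ; _<_; _≤_)
  open import Data.Unit using (⊤)
  open import Data.Empty using (⊥)
  open import Relation.Nullary using (¬_)
  open import Relation.Binary.PropositionalEquality using (_≡_)

  data Z♯ : Set where
    none neg zer pos negz nzer posz any : Z♯

  -- µz as a predicate: µz i♯ i  corresponds to  i ∈ µz(i♯)
  µz : Z♯ → ℤ → Set
  µz none i = ⊥
  µz neg  i = i < 0ℤ
  µz zer  i = i ≡ 0ℤ
  µz pos  i = 0ℤ < i
  µz negz i = i ≤ 0ℤ
  µz nzer i = ¬ (i ≡ 0ℤ)
  µz posz i = 0ℤ ≤ i
  µz any  i = ⊤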
The partial ordering on abstract integers coincides with subset
ordering through µz , that is i♯1 ⊑z i♯2 ⇐⇒ µz (i♯1 ) ⊆ µz (i♯2 ):
neg ⊑z negz, nzer        pos ⊑z nzer, posz        zer ⊑z negz, posz
none ⊑z i♯        i♯ ⊑z i♯        i♯ ⊑z any
The specification which encodes soundness and optimality for J K♯a is generated using the constructive Galois connection for Z:
⟨i1, i2⟩ ∈ µz(i♯1, i♯2) ∧ i♯′ ⊑z ηz(JaeKa(i1, i2)) ⇐⇒ i♯′ ⊑z JaeK♯a(i♯1, i♯2)
(See (CGC-Cmp/ηµ) in Section 3 for the origin of this equation.)
For J K♯a , we postulate its definition and verify its correctness postfacto using the above property, although we omit the proof details
here. The definition of J K♯a is standard, and returns none in the
case of division by zero. We show only the definition of + here:
To be a constructive Galois connection, µz forms a correspondence
with a best abstraction function η z :
ηz : Z → Z♯
ηz(i) := neg if i < 0
         zer if i = 0
         pos if i > 0
J K♯a : aexp → Z♯ × Z♯ → Z♯
J+K♯a(i♯1, i♯2) := ⨆ { pos if pos ⊑z i♯1 ∨ pos ⊑z i♯2
                     , neg if neg ⊑z i♯1 ∨ neg ⊑z i♯2
                     , zer if zer ⊑z i♯1 ∧ zer ⊑z i♯2
                     , zer if pos ⊑z i♯1 ∧ neg ⊑z i♯2
                     , zer if neg ⊑z i♯1 ∧ pos ⊑z i♯2 }
and we prove the constructive Galois connection correspondence:
i ∈ µz (i♯ ) ⇐⇒ η z (i) ⊑z i♯
The Classical Design To contrast with Cousot’s original design
using classical abstract interpretation, the key difference is the abstraction function. The abstraction function using classical Galois
connections is recovered through a lifting of our η z :
αz : ℘(Z) → Z♯        αz(I) := ⨆{ηz(i) | i ∈ I}
J Ka℘ ∈ aexp → ℘(Z × Z) → ℘(Z)
JaeKa℘(II) := {JaeKa(i1, i2) | ⟨i1, i2⟩ ∈ II}
Abstraction functions of this form—℘(A) → B, for some concrete set A and abstract set B—are representative of most Galois
connections used in the literature for static analyzers. However,
these abstraction functions are precisely the part of classical Galois
connections which inhibit mechanized verification. The extraction
function η z does not manipulate powersets, does not inhibit mechanized verification, and recovers the original non-constructive αz
through this standard lifting.
and then J K♯a is proven correct w.r.t. this lifting using αz and γ z :
αz (JaeKa℘ (γ z (i♯1 , i♯2 ))) = JaeK♯a (i♯1 , i♯2 )
This property cannot be mechanized without axioms because αz is
non-constructive. Furthermore, the proof involves additional powerset boilerplate reasoning, which is not present in our mechanization of correctness for J K♯a using constructive Galois connections.
The state-of-the-art approach of “γ-only” verification would instead
mechanize the γγ variant of correctness:
Abstracting Environments An abstract environment maps variables to abstract integers rather than concrete integers.
ρ♯ ∈ env♯ := var → Z♯
JaeKa℘ (γ z (i♯1 , i♯2 )) = γ z (JaeK♯a (i♯1 , i♯2 ))
env♯ is given meaning through an interpretation function µr:
µr ∈ env♯ → ℘(env) µr (ρ♯ ) := {ρ | ∀x.ρ(x) ∈ µz (ρ♯ (x))}
An abstract environment represents concrete environments that
agree pointwise with some represented integer in the codomain.
The order on abstract environments is the standard pointwise
ordering and obeys ρ♯1 ⊑r ρ♯2 ⇐⇒ µr (ρ♯1 ) ⊆ µr (ρ♯2 ):
which is similar to our µµ rule:
⟨i1, i2⟩ ∈ µz(i♯1, i♯2) ∧ i′ = JaeKa(i1, i2) ⇐⇒ i′ ∈ µz(JaeK♯a(i♯1, i♯2))
The benefit of our approach is that soundness and completeness
properties which also mention extraction (η) can also be mechanized, like calculating abstract interpreters from their specification.
Abstracting Relations The verification of an abstract interpreter
for relations is similar to the design for functions: induce a specification using the constructive Galois connection, and prove correctness w.r.t. the induced spec. The relations we abstract are ⇓a ,
⇓b and 7→c , and we call their abstract interpreters A♯ , B♯ and C ♯ .
Rather than postulate the definitions of the abstract interpreters, we
calculate them from their specifications, the results of which are
sound and computable by construction. The arithmetic and boolean
abstract interpreters are functions from abstract environments to abstract integers, and the abstract interpreter for commands computes
the next abstract transition states of execution. We only present select calculations for A♯ ; see our accompanying Agda development
for each calculation in mechanized form. A♯ has type:
ρ♯1 ⊑r ρ♯2 ⇐⇒ (∀x.ρ♯1 (x) ⊑z ρ♯2 (x))
To form a constructive Galois connection, µr forms a correspondence with a best abstraction function η r :
η r ∈ env → env♯
The Classical Design To contrast with Cousot’s original design
using classical abstract interpretation, the key difference is that we
avoid powerset liftings altogether. Using classical Galois connections, the concrete denotation function must be lifted to powersets:
η r (ρ) := λx.η z (ρ(x))
and we prove the constructive Galois connection correspondence:
ρ ∈ µr (ρ♯ ) ⇐⇒ η r (ρ) ⊑r ρ♯
The Classical Design To contrast with Cousot’s original design
using classical abstract interpretation, the key difference is again
the abstraction function. The abstraction function using classical
Galois connections is:
A♯ [ ] : aexp → env♯ → Z♯
αr : ℘(env) → env♯        αr(R) := λx.αz({ρ(x) | ρ ∈ R})
which is also not amenable to mechanized verification.
To induce a spec for A♯ , we first revisit the concrete semantics
relation as a powerset-valued function, which we call A:
Abstracting Functions After designing constructive Galois connections for Z and env we define what it means for J K♯a , some abstract denotation for arithmetic operators, to be a sound abstraction
of J Ka , its concrete counterpart. This is done through a specification induced by mappings η and µ, analogously to how specifications are induced using classical Galois connections.
A[ ] : aexp → env → ℘(Z)
A[ae](ρ) := {i | ρ ⊢ ae ⇓a i}
The induced spec for A♯ is generated with the monadic bind operator, which we notate using Moggi’s star notation ∗ :
pure(η z )∗ (A[ae]∗ (µr (ρ♯ ))) ⊆ pure(A♯ [ae])(ρ♯ )
Case ae = rand:
    {ηz(i) | ρ ∈ µr(ρ♯) ∧ ρ ⊢ rand ⇓a i}
  = {ηz(i) | ρ ∈ µr(ρ♯) ∧ i ∈ Z}        * defn. of ρ ⊢ rand ⇓a i +
  ⊆ {ηz(i) | i ∈ Z}                      * ∅ when µr(ρ♯) = ∅ +
  ⊆ {any}                                * {any} mon. w.r.t. ⊑z +
  ≜ {A♯[rand](ρ♯)}                       * defining A♯[rand] +
Case ae = x:
    {ηz(i) | ρ ∈ µr(ρ♯) ∧ ρ ⊢ x ⇓a i}
  = {ηz(ρ(x)) | ρ ∈ µr(ρ♯)}              * defn. of ρ ⊢ x ⇓a i +
  = {ηz(i) | i ∈ µz(ρ♯(x))}              * defn. of µr(ρ♯) +
  ⊆ {ρ♯(x)}                              * Eq. CGC-Red +
  ≜ {A♯[x](ρ♯)}                          * defining A♯[x] +
Case ae = x (Monadic):
    pure(ηz)∗(A[x]∗(µr(ρ♯)))
  = pure(λρ.ηz(ρ(x)))∗(µr(ρ♯))           * defn. of A[x] +
  = pure(ηz)∗(µz∗(ρ♯(x)))                * defn. of µr(ρ♯) +
  ⊆ ret(ρ♯(x))                           * Eq. CGC-Red +
  ≜ pure(A♯[x])(ρ♯)                      * defining A♯[x] +
Figure 3. Constructive GC calculations on paper

-- Agda Calculation of Case ae = x:
α[A]♯ (Var x) ρ♯ = [proof-mode]
  do [[ (pure · ηz) ∗ · (A[ Var x ] ∗ · (µr · ρ♯)) ]]
     [focus-right [· ] of (pure · ηz) ∗ ] begin
       do [[ A[ Var x ] ∗ · (µr · ρ♯) ]]
          * A[Var]/≡ +
          [[ (pure · lookup[ x ]) ∗ · (µr · ρ♯) ]]
          * lookup/µr/≡ +
          [[ µz ∗ · (pure · lookup♯[ x ] · ρ♯) ]]
     end
     [[ (pure · ηz) ∗ · (µz ∗ · (pure · lookup♯[ x ] · ρ♯)) ]]
     * reductive[ηµ] +
     [[ ret · (lookup♯[ x ] · ρ♯) ]]
     [[ pure · A♯[ Var x ] · ρ♯ ]]
Figure 4. Constructive GC calculations in Agda
The Classical Design Classically, one first designs a powerset
lifting of the concrete semantics, called a collecting semantics:
A℘ [ ] : aexp → ℘(env) → ℘(Z)
A℘[ae](R) := {i | ρ ∈ R ∧ ρ ⊢ ae ⇓a i}
The classical soundness specification for A♯ [ae](ρ♯ ) is then:
αz (A℘ [ae](γ r (ρ♯ ))) ⊑ A♯ [ae](ρ♯ )
However, as usual, the abstraction αz cannot be mechanized effectively, preventing a mechanized derivation of A♯ by calculus.
which unfolds to:
{η z (i) | ρ ∈ µr (ρ♯ ) ∧ ρ ⊢ ae ⇓a i} ⊆ {A♯ [ae](ρ♯ )}
5. Case Study: Gradual Type Systems
To calculate A♯ we reason equationally from the spec on the left
towards the singleton set on the right, and declare the result the
definition of A♯ . We do this by case analysis on ae; we show the
cases for ae = rand and ae = x in Figure 3. Each calculation can
also be written in monadic form, which is the style we mechanize;
we repeat the variable case in monadic form in the figure.
Recent work in metatheory for gradual type systems by Garcia
et al. [17] shows how a Galois connection discipline can guide the
design of gradual typing systems. Starting with a Galois connection
between precise and gradual types, both the static and dynamic
semantics of the gradual language are derived systematically. This
technique is called Abstracting Gradual Typing (AGT).
The design presented by Garcia et al is to begin with a precise
type system, like the simply typed lambda calculus, and add a new
type ? which functions as the ⊤ element in the lattice of type precision. The precise typing
rules are presented with meta-operators <: for subtyping and ∨̈ for the join operator in the subtyping lattice. The gradual type system is then written using abstract variants <:♯ and ∨̈♯ which are proven correct w.r.t. specifications induced by the Galois connection.
Mechanized Calculation Our Agda calculation of A♯ strongly
resembles the on-paper monadic one. We show the Agda proof
code for abstract variable references in Figure 4. The first line is the
top level definition site for the derivation of A♯ for the Var case.
The proof-mode term is part of our “proof-mode” library which
gives support for calculational reasoning in the form of Agda proof
combinators with mixfix syntax. Statements surrounded by double
square brackets [[e]] restate the current proof state, which Agda will
check is correct. Reasoning steps are employed through *e+ terms,
which transform the proof state from the previous form to the next.
The term [focus-right [· ] of e] focuses the goal to the right of
the outermost application, scoped between begin and end.
Using constructive Galois connections, our mechanized calculation closely follows Cousot’s classical one, uses both η and µ
mappings, and results in a verified, executable static analyzer. Such
a result is not possible using classical Galois connections, due to
the inability to encode α functions constructively.
We complete the full calculation of Cousot’s generic abstract
interpreter for WHILE in Agda as supplemental material to this
paper, where the resulting interpreter is both sound and computable
by construction. We also provide our “proof-mode” library which
supports general calculational reasoning with posets.
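To give a flavor of the kind of artifact that can be extracted, here is a toy, directly executable analyzer in Agda for a small fragment of arithmetic expressions over an assumed sign lattice; everything below (Z♯, _+♯_, AExp, Env♯, A♯) is our own simplification, not the paper's generic interpreter.

  module ToyAnalyzer where

  open import Data.String using (String)

  data Z♯ : Set where
    none neg zer pos any : Z♯        -- a coarser lattice than the paper's Z♯

  _+♯_ : Z♯ → Z♯ → Z♯
  none +♯ _    = none
  _    +♯ none = none
  zer  +♯ y    = y
  x    +♯ zer  = x
  neg  +♯ neg  = neg
  pos  +♯ pos  = pos
  _    +♯ _    = any

  data AExp : Set where
    num : Z♯ → AExp                  -- literals, pre-abstracted for brevity
    var : String → AExp
    _⊕_ : AExp → AExp → AExp

  Env♯ : Set
  Env♯ = String → Z♯

  A♯ : AExp → Env♯ → Z♯
  A♯ (num i♯)  ρ♯ = i♯
  A♯ (var x)   ρ♯ = ρ♯ x
  A♯ (e₁ ⊕ e₂) ρ♯ = A♯ e₁ ρ♯ +♯ A♯ e₂ ρ♯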
The Precise Type System The AGT paper describes two designs
for gradual type systems in increasing complexity. We chose to
mechanize a hybrid of the two which is simple, like the first design, yet still exercises key challenges addressed by the second. We
also made slight modifications to the design at parts to make mechanization easier, but without changing the nature of the system.
The precise type system we mechanized is the simply typed
lambda calculus with booleans, and top and bottom elements for
a subtyping lattice, which we call any and none:
τ ∈ type ::= none | B | τ → τ | any
The first design in the AGT paper does not involve subtyping,
and their second design incorporates record types with width and
depth subtyping. By just focusing on none and any, we exercise
(Figure 5 gives the syntax-directed precise typing rules IF, APP and COE; Figure 6 gives their gradual analogs G-IF, G-APP and G-COE.)
Just as in our other designs by abstract interpretation, type♯ is
given meaning by an interpretation function µ, which is the constructive analog of a classical concretization (γ) function:
µ : type♯ → ℘(type)
µ(τ♯) := {τ♯}    when τ♯ ∈ {none, B, any}
µ(?) := {τ | τ ∈ type}
µ(τ1♯ → τ2♯) := {τ1 → τ2 | τ1 ∈ µ(τ1♯) ∧ τ2 ∈ µ(τ2♯)}
The extraction function η is, remarkably, the identity function:
η : type → type♯        η(τ) = τ
and the constructive Galois correspondence holds:
τ ∈ µ(τ ♯ ) ⇐⇒ η(τ ) ⊑ τ ♯
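A small Agda sketch of this abstraction, with µ as a predicate; Type, Type♯ and unk (standing in for the unknown type ?) are our own names and simplifications.

  module GradualSketch where

  open import Data.Product using (_×_)
  open import Data.Unit using (⊤)
  open import Data.Empty using (⊥)
  open import Relation.Binary.PropositionalEquality using (_≡_)

  data Type : Set where
    none bool any : Type
    _⇒_ : Type → Type → Type

  data Type♯ : Set where
    none bool any unk : Type♯
    _⇒_ : Type♯ → Type♯ → Type♯

  -- µ τ♯ τ  corresponds to  τ ∈ µ(τ♯)
  µ : Type♯ → Type → Set
  µ none      τ        = τ ≡ none
  µ bool      τ        = τ ≡ bool
  µ any       τ        = τ ≡ any
  µ unk       τ        = ⊤
  µ (σ♯ ⇒ τ♯) (σ ⇒ τ)  = µ σ♯ σ × µ τ♯ τ
  µ (σ♯ ⇒ τ♯) _        = ⊥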
⟨τ1, τ2⟩ ∈ µ(τ1♯, τ2♯) ∧ τ3♯ ⊑ η(τ1 ∨̈ τ2) ⇐⇒ τ3♯ ⊑ τ1♯ ∨̈♯ τ2♯
Key properties of gradual subtyping and the gradual join operator are how they operate over the unknown type ?:
? <:♯ τ♯        τ♯ <:♯ ?        ? ∨̈♯ τ♯ = τ♯ ∨̈♯ ? = ?
Kleisli Galois Connections We summarize Kleisli Galois connections in Figure 7. Kleisli Galois connections are analogous to
classical ones, but with monadic analogs to α and γ, and monadic
identity and composition operators ret and ⊛ in place of the function space identity and composition operators id and ◦.
τ1 ∈ µ(τ1♯ ) ∧ τ2 ∈ µ(τ2♯ ) ∧ τ1 <: τ2 ⇐⇒ τ1♯ <:♯ τ2♯
Powerset Monad See Sections 3.1 and 3.3 for the downwardclosure monotonicity property, and monad definitions and notation
for the monotonic powerset monad. The monad operators obey
standard monad laws. We introduce one new operator for monadic
function composition: (g ⊛ f )(x) := g ∗ (f (x)).
Gradual Operators Given the constructive Galois connection between gradual and precise types, we synthesize specifications for
abstract analogs of subtyping <: and the subtyping join operator ∨̈, and relate them to their abstractions <:♯ and ∨̈♯:
.. ♯
:
τ1♯
Classical Galois Connections We review classical Galois connections in Figure 7. A Galois connection between posets A and
B contains two adjoint functors α and γ which share a correspondence. An equivalent formulation of the correspondence is two unit
equations called extensive and reductive. Abstract interpreters are
sound by over-approximating a specification induced by α and γ.
In this section we develop the full metatheory of constructive Galois connection and prove precise claims about their relationship to
classical Galois connections. We develop the metatheory of constructive Galois connections as an adjunction between posets with
powerset-Kleisli adjoint functors. This is in contrast to classical Galois connections which come from an identical setup, but with the
monotonic function space as adjoint functors, as shown in Figure 7.
We connect constructive to classical Galois connections through
an isomorphism between a subset of classical to the entire space of
constructive. To form this isomorphism we introduce an intermediate structure, Kleisli Galois connections, which we show are isomorphic to the classical subset, and isomorphic to constructive ones
using the constructive theorem of choice, as depicted in Figure 8.
Gradual Types The essence of AGT is to design a gradual type system by abstract interpretation of the precise type system. To do this, a new top element is added to the precise type system, although rather than representing the top of the subtyping lattice like any, it represents the top of the precision lattice, and is notated ?:
τ♯ ∈ type♯ ::= none | B | τ♯ → τ♯ | any | ?
The partial ordering is reflexive and has ? at the top:
τ♯ ⊑ τ♯ ⊑ ?
And arrow types are monotonic:
τ♯11 ⊑ τ♯12 ∧ τ♯21 ⊑ τ♯22 =⇒ τ♯11 → τ♯21 ⊑ τ♯12 → τ♯22
Gradual Metatheory Using AGT, the gradual type system is a syntactic analog to the precise one but with gradual types and operators, which we show in Figure 6. Using this system, and constructive Galois connections, we mechanize in Agda the key AGT metatheory results from the paper: equivalence for fully-annotated terms (FAT), embedding of dynamic language terms (EDL), and gradual guarantee (GG):
⊢ e : τ ⇐⇒ ⊢♯ e : τ                                      (FAT)
closed(un) =⇒ ⊢♯ ⌈un⌉ : ?                                (EDL)
⊢♯ e♯1 : τ1♯ ∧ e♯1 ⊑ e♯2 =⇒ ⊢♯ e♯2 : τ2♯ ∧ τ1♯ ⊑ τ2♯      (GG)
6. Constructive Galois Connection Metatheory
Figure 6. Case Study: gradual type system
the subtyping machinery of their approach without the blowup in
complexity from formalizing record types.
The typing rules in AGT are written in strictly syntax-directed
form, with explicit use of subtyping in rule hypotheses. We show
three precise typing rules for if-statements, application and coercion in Figure 5. The subtyping lattice in the precise system is
the “safe for substitution” lattice, and well typed programs enjoy
progress and preservation.
Figure 5. Case Study: precise type system
Kleisli to Classical and Back All Kleisli Galois connections
⟨κα, κγ⟩ between A and B can be lifted to recover a classical Galois connection ⟨α, γ⟩ between ℘(A) and ℘(B) through a monadic
Adjunction    Classical GCs                     Kleisli/constructive GCs
Category      Posets                            Posets
Adjoints      Mono. Functions                   ℘-Monadic Functions
LAdjoint      α : A → B                         κα : A → ℘(B)
RAdjoint      γ : B → A                         κγ : B → ℘(A)
Corr          id(x) ⊑ γ(y) ⇔ α(x) ⊑ id(y)       ret(x) ⊆ κγ(y) ⇔ κα(x) ⊆ ret(y)
Extensive     id ⊑ γ ◦ α                        ret ⊑ κγ ⊛ κα
Reductive     α ◦ γ ⊑ id                        κα ⊛ κγ ⊑ ret
Soundness     α ◦ f ◦ γ ⊑ f♯                    κα ⊛ f ⊛ κγ ⊑ f♯
Optimality    α ◦ f ◦ γ = f♯                    κα ⊛ f ⊛ κγ = f♯
Figure 7. Comparison of constructive v classical adjunctions

(Diagram: Computational, Constructive, Kleisli, Classical, with edges labeled "Theorem of choice" and "Set inclusion".)
Figure 8. Relationship between classical, Kleisli and constructive
Lemma 1 (CGC-Induce). AGDAX For every Kleisli Galois connection ⟨κα, κγ⟩, there exists a constructive Galois connection ⟨η, µ⟩ where ⟨pure(η), µ⟩ = ⟨κα, κγ⟩.
Because the mapping from Kleisli to constructive is interesting we provide a proof, which to our knowledge is novel. The
proof builds a constructive Galois connection hη, µi from a Kleisli
hκα, κγi by exploiting the Kleisli correspondence and making use
of the constructive theorem of choice.
lifting operator on Kleisli Galois connections ⟨κα, κγ⟩∗:
⟨α, γ⟩ = ⟨κα, κγ⟩∗ = ⟨κα∗, κγ∗⟩
This lifting is sound, meaning Kleisli soundness and optimality
results can be translated to classical ones.
Proof. To turn an arbitrary Kleisli Galois connection into a constructive one, we show that the effect on κα : A → ℘(B) is benign,
or in other words, that there exists some η such that κα = pure(η).
We prove this using two ingredients: a constructive interpretation of
the Kleisli extensive law, and the constructive theorem of choice.
We first expand the Kleisli expansive property, unfolding definitions of ⊛ and ret, to get an equivalent logical statement:
Theorem 1 (KGC-Sound). AGDAX For any Kleisli relationship of
soundness between f and f ♯ , that is κα ⊛ f ⊛ κγ ⊑ f ♯ , its
lifting to classical is also sound, that is α ◦ f ∗ ◦ γ ⊑ f ♯∗ where
⟨α, γ⟩ = ⟨κα, κγ⟩∗, and likewise for optimality relationships.
This lifting is also complete, meaning classical Galois connection soundness and optimality results can always be translated to
Kleisli ones, when α and γ are of lifted form.
∀x.∃y.y ∈ κα(x) ∧ x ∈ κγ(y)
(KGC-Exp)
Statements of this form can be used in conjunction with an axiom
of choice in classical mathematics, which is:
Theorem 2 (KGC-Complete).AGDAX For any classical relationship
of soundness between f ∗ and f ♯∗ , that is α ◦ f ∗ ◦ γ ⊑ f ♯∗ , its
lowering to Kleisli is also sound when ⟨α, γ⟩ = ⟨κα, κγ⟩∗, that is
κα ⊛ f ⊛ κγ ⊑ f ♯ , and likewise for optimality relationships.
(∀x.∃y.R(x, y)) =⇒ (∃f.∀x.R(x, f (x)))
(AxChoice)
This theorem is admitted as an axiom in classical mathematics,
but in constructive logic—the setting used for extracting verified
algorithms—(AxChoice) is definable as a theorem, due to the computational interpretation of logical connectives ∀ and ∃. We define
(AxChoice) as a theorem in Agda without trouble:
Due to soundness and completeness, one can work with the simpler setup of Kleisli Galois connections without any loss of generality. The setup is simpler because Kleisli Galois connection theorems only quantify over individual elements rather than elements of
powersets. For example, the soundness criteria κα ⊛ f ⊛ κγ ⊑ f ♯
is proved by showing κα∗ (f ∗ (κγ(x))) ⊆ f ♯ (x) for an arbitrary
element x : A, whereas in the classical proof one must show
κα∗ (f ∗ (κγ ∗ (X))) ⊆ f ♯∗ (X) for arbitrary sets X : ℘(A).
choice : ∀ {A B} {R : A → B → Set}
  → (∀ x → ∃ y st R x y)
  → (∃ f st ∀ x → R x (f x))
choice P = (λ x → π1 (P x)) , (λ x → π2 (P x))
Constructive Galois Connections Constructive Galois connections are a restriction of Kleisli Galois connections where the abstraction mapping is a pure rather than monadic function. We call
the left adjoint extraction, notated η, and the right adjoint interpretation, notated µ. The constructive Galois connection correspondence, alternative expansive and reductive formulation of the correspondence, and soundness and optimality criteria are identical to
Kleisli Galois connections where hκα, κγi = hpure(η), µi.
Applying (AxChoice) to (KGC-Exp) then gives:
∃η.∀x.η(x) ∈ κα(x) ∧ x ∈ κγ(η(x))
(ExpChioce)
which proves the existence of a pure function η : A → B.
In order to form a constructive Galois connection η and µ must
satisfy the correspondence, which we prove in split form:
x ∈ µ(η(x))
x ∈ µ(y) =⇒ η(x) ⊑ y
Constructive to Kleisli and Back Our main theorem which justifies the soundness and completeness of constructive Galois connections is an isomorphism between constructive and Kleisli Galois
connections. The easy direction is soundness, where a Kleisli Galois connection is formed by defining hκα, κγi = hpure(η), µi.
Soundness and optimality theorems are then lifted from constructive to Kleisli without modification.
(CGC-Exp)
(CGC-Red)
The expansive property is immediate from the second conjunct
in (ExpChioce). The reductive property follows from the Kleisli
reductive property:
x ∈ κγ(y) ∧ y ′ ∈ κα(x) =⇒ y ′ ⊑ y
(KGC-Red)
The constructive variant of reductive is proved by satisfying the first
two premises of (KGC-Red), where x ∈ κγ(y) is by assumption
and y ′ ∈ κα(x) is by the first conjunct in (ExpChioce).
So far we have shown that for a Kleisli Galois connection
hκα, κγi, there exists a constructive Galois connection hη, µi
where µ = κγ. However, we have yet to show pure(η) = κα.
To show this, we prove an analog of a standard result for classical
Galois connections: that α and γ uniquely determine each other.
Theorem 3 (CGC-Sound).AGDAX For any constructive relationship
of soundness between f and f ♯ , that is pure(η) ⊛ f ⊛ µ ⊑ f ♯ ,
its lifting to classical is sound, that is κα ⊛ f ⊛ κγ ⊑ f ♯ where
hκα, κγi = hpure(η), µi, and likewise for optimality relationships.
The other direction, completeness, is much more surprising.
First we establish a lowering for Kleisli Galois connections.
12
Lemma 2 (Unique Abstraction). For any two Kleisli Galois connections ⟨κα1, κγ1⟩ and ⟨κα2, κγ2⟩, κα1 = κα2 iff κγ1 = κγ2.
We then conclude pure(η) = κα as a consequence of the above lemma and the fact that µ = κγ.
Given the above mapping from Kleisli Galois connections to constructive ones, we prove the completeness of this mapping.
Theorem 4 (CGC-Complete). For any Kleisli relationship of soundness between f and f♯, that is κα ⊛ f ⊛ κγ ⊑ f♯, its lowering to constructive is also sound, that is pure(η) ⊛ f ⊛ µ ⊑ f♯ where ⟨η, µ⟩ is induced, and likewise for optimality relationships.

Mechanization  We mechanize the metatheory for constructive Galois connections and both case studies from Sections 4 and 5 in Agda, as well as a general purpose proof library for posets and calculational reasoning with the monotonic powerset monad. The development is available at: github.com/plum-umd/cgc.

Wrapping Up  In this section we showed that constructive Galois connections are sound w.r.t. classical Galois connections, and complete w.r.t. the subset of classical Galois connections recovered by lifting constructive ones. We showed this by introducing an intermediate space of Galois connections, Kleisli Galois connections, and by establishing two sets of isomorphisms between a subset of classical and Kleisli, and between Kleisli and constructive. The proof of isomorphism between constructive and Kleisli yielded an interesting proof which applies the constructive theorem of choice to one of the Kleisli Galois connection correspondence laws.

Future Directions  Now that we have established a foundation for constructive Galois connection calculation, we see value in verifying larger derivations (e.g. [23, 30]). Furthermore we would like to explore whether or not our techniques have any benefit in the space of general-purpose program calculations à la Bird.
Currently our framework requires the user to justify every detail of the program calculation, including monotonicity proofs and proof scoping for rewrites inside monotonic contexts. We imagine much of this can be automated, requiring the user to only provide the interesting parts of the proof, à la Fiat [16]. Our experience has been that even Coq's tactic system slows down considerably when automating all of these details, and we foresee using proof by reflection in either Coq (e.g. Rtac [20]) or Agda to automate these proofs in a way that maintains proof-checker performance.
There have been recent developments on compositional abstract interpretation frameworks [15] where abstract interpreters and their proofs of soundness are systematically derived side-by-side. That framework relies on correctness properties transported by Galois transformers, which we posit would benefit from mechanization since they hold both computational and specification content.

7. Related Work

This work connects two long strands of research: abstract interpretation via Galois connections and mechanized verification via dependently typed functional programming. The former is founded on the pioneering work of Cousot and Cousot [11, 12]; the latter on that of Martin-Löf [21], embodied in Norell's Agda [28]. Our key technical insight is to use a monadic structure for Galois connections, following the example of Moggi [25] for the λ-calculus.

Calculational Abstract Interpretation  Cousot describes calculational abstract interpretation by example in his lecture notes [9] and monograph [8], and recently introduced a unifying calculus for Galois connections [14]. Our work mechanizes Cousot's calculations and provides a foundation for mechanizing other instances of calculational abstract interpretation (e.g. [22, 30]). We expect our work to have applications to the mechanization of calculational program design [2, 3] by employing only Galois retractions, i.e. α ◦ γ is an identity [14]. There is prior work on mechanized program calculation [33], but it is not based on abstract interpretation.

Verified Static Analyzers  Verified abstract interpretation has many promising results [1, 5, 6, 29], scaling up to large-scale real-world static analyzers [18]. However, mechanized abstract interpretation has yet to benefit from the Galois connection framework. Until now, approaches use classical axioms or “γ-only” encodings of soundness and (sometimes) completeness. Our techniques for mechanizing Galois connections should complement these approaches.

Galculator  The Galculator [32] is a proof assistant founded on an algebra of Galois connections. This tool is similar to ours in that it mechanically verifies Galois connection calculations. Our approach is more general, supporting arbitrary set-theoretic reasoning and embedded within a general purpose proof assistant, however their approach is fully automated for the small set of derivations which reside within their supported theory.

Deductive Synthesis  Fiat [16] is a library for the Coq proof assistant which supports semi-automated synthesis of programs as refinements of their specifications. Fiat uses the same powerset type and monad as we do, and their “deductive synthesis” process similarly derives correct-by-construction programs by calculus. Fiat derivations start with a user-defined specification and calculate towards an under-approximation (⊒), whereas calculational abstract interpretation starts with an optimal specification and calculates towards an over-approximation (⊑). It should be possible to generalize their framework to use partial orders to recover aspects of our work, or to invert the lattice used in our abstract interpretation framework to recover aspects of theirs. A notable difference in approach is that Fiat makes heavy use of Coq's tactic programming language to automate rewrites inside respectful contexts, whereas our system provides no interactive proof automation and each calculational step must be justified explicitly.

Monadic Abstract Interpretation  Monads in abstract interpretation have recently been applied to good effect for modularity [15, 31]. However, that work uses monads to structure the semantics, not the Galois connections and proofs.

8. Conclusions

This paper realizes the vision of mechanized and constructive Galois connections foreshadowed by Cousot [8, p. 85], giving the first mechanically verified proof by calculational abstract interpretation; once for his generic static analyzer and once for the semantics of gradual typing. Our proofs by calculus closely follow the originals. The primary discrepancy is the use of monads to isolate specification effects. By maintaining this discipline, we are able to verify calculations by Galois connections and extract computational content from pure results. The resulting artifacts are correct-by-verified-construction, thereby avoiding known bugs in the original.2

2 di.ens.fr/~cousot/aisoftware/Marktoberdorf98/Bug History

Acknowledgments

We thank Ron Garcia and Éric Tanter for discussions of their work. Éric also helped with our French translation. We thank the Colony Club in D.C. and the Board & Brew in College Park for providing fruitful environments in which to work. We thank the anonymous reviewers of ICFP 2016 for their helpful feedback. This material is partially based on research sponsored by DARPA under agreement number AFRL FA8750-15-2-0104.
References
[1] G. Barthe, D. Pichardie, and T. Rezk. A certified lightweight noninterference java bytecode verifier. In R. D. Nicola, editor, Programming Languages and Systems, Lecture Notes in Computer Science.
Springer Berlin Heidelberg, 2007.
[2] R. Bird and O. de Moor. The Algebra of Programming. Prentice Hall,
1996.
[3] R. S. Bird. A calculus of functions for program derivation. In D. A.
Turner, editor, Research Topics in Functional Programming. AddisonWesley, 1990. Also available as Technical Monograph PRG-64, from
the Programming Research Group, Oxford University.
[4] B. Blanchet, P. Cousot, R. Cousot, J. Feret, L. Mauborgne, A. Miné,
D. Monniaux, and X. Rival. A static analyzer for large safety-critical
software. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’03. ACM,
2003.
[17] R. Garcia, A. M. Clark, and É. Tanter. Abstracting gradual typing.
In Proceedings of the 43rd ACM SIGPLAN-SIGACT Symposium on
Principles of Programming Languages (POPL 2016), St Petersburg,
FL, USA, Jan. 2016. ACM Press. To appear.
[18] J. H. Jourdan, V. Laporte, S. Blazy, X. Leroy, and D. Pichardie. A
Formally-Verified c static analyzer. In Proceedings of the 42Nd Annual
ACM SIGPLAN-SIGACT Symposium on Principles of Programming
Languages, POPL ’15. ACM, 2015.
[19] X. Leroy. Formal verification of a realistic compiler. Commun. ACM,
2009.
[20] G. Malecha and J. Bengtson. Extensible and efficient automation
through reflective tactics. In Programming Languages and Systems
- 25th European Symposium on Programming, ESOP 2016. Springer,
2016.
[21] P. Martin-Löf. Intuitionistic Type Theory. Bibliopolis, 1984.
[22] J. Midtgaard and T. Jensen. A calculational approach to Control-Flow
analysis by abstract interpretation. In M. Alpuente and G. Vidal, editors, Static Analysis, Lecture Notes in Computer Science, chapter 23.
Springer Berlin Heidelberg, 2008.
[5] S. Blazy, V. Laporte, A. Maroneze, and D. Pichardie. Formal verification of a c value analysis based on abstract interpretation. In F. Logozzo and M. Fähndrich, editors, Static Analysis, Lecture Notes in
Computer Science. Springer Berlin Heidelberg, 2013.
[6] D. Cachera and D. Pichardie. A certified denotational abstract interpreter. In M. Kaufmann and L. Paulson, editors, Interactive Theorem
Proving, Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2010.
[7] P. Cousot. Abstract interpretation. URL http://www.di.ens.fr/~cousot/AI/.
[23] J. Midtgaard and T. Jensen. A calculational approach to ControlFlow analysis by abstract interpretation. In M. Alpuente and G. Vidal,
editors, Static Analysis Symposium, LNCS. Springer, 2008.
[24] A. Miné. The octagon abstract domain. Higher-Order and Symbolic
Computation, 2006.
[25] E. Moggi. An abstract view of programming languages. Technical
report, Edinburgh University, 1989.
[8] P. Cousot. The calculational design of a generic abstract interpreter. In
M. Broy and R. Steinbrüggen, editors, Calculational System Design.
NATO ASI Series F. IOS Press, Amsterdam, 1999.
[26] D. Monniaux. Réalisation mécanisée d’interpréteurs abstraits. Rapport de DEA, Université Paris VII, 1998. French.
[27] F. Nielson, H. R. Nielson, and C. Hankin. Principles of Program
Analysis. Springer-Verlag, 1999.
[9] P. Cousot. MIT 16.399: Abstract interpretation, 2005.
[10] P. Cousot and R. Cousot. Static determination of dynamic properties
of programs. In 2nd International Symposium on Programming, 1976.
[11] P. Cousot and R. Cousot. Abstract interpretation: a unified lattice
model for static analysis of programs by construction or approximation of fixpoints. In Proceedings of the 4th ACM SIGACT-SIGPLAN
Symposium on Principles of Programming Languages. ACM, 1977.
[28] U. Norell. Towards a practical programming language based on
dependent type theory. PhD thesis, Department of Computer Science and Engineering, Chalmers University of Technology, SE-412
96 Göteborg, Sweden, Sept. 2007.
[29] D. Pichardie. Interprétation abstraite en logique intuitionniste: extraction d’analyseurs Java certifiés. PhD thesis, Université Rennes,
2005.
[30] I. Sergey, J. Midtgaard, and D. Clarke. Calculating graph algorithms
for dominance and shortest path. In J. Gibbons and P. Nogueira, editors, Mathematics of Program Construction, Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2012.
[31] I. Sergey, D. Devriese, M. Might, J. Midtgaard, D. Darais, D. Clarke,
and F. Piessens. Monadic abstract interpreters. In Proceedings of the
34th ACM SIGPLAN Conference on Programming Language Design
and Implementation. ACM, 2013.
[12] P. Cousot and R. Cousot. Systematic design of program analysis
frameworks. In Proceedings of the 6th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages, POPL ’79. ACM,
1979.
[13] P. Cousot and R. Cousot. Inductive definitions, semantics and abstract
interpretations. In Proceedings of the 19th ACM SIGPLAN-SIGACT
Symposium on Principles of Programming Languages, POPL ’92.
ACM, 1992.
[14] P. Cousot and R. Cousot. A Galois connection calculus for abstract
interpretation. In Proceedings of the 41st ACM SIGPLAN-SIGACT
Symposium on Principles of Programming Languages, POPL ’14.
ACM, 2014.
[32] P. F. Silva and J. N. Oliveira. ‘Galculator’: Functional prototype of a
Galois-connection based proof assistant. In Proceedings of the 10th
International ACM SIGPLAN Conference on Principles and Practice
of Declarative Programming, PPDP ’08. ACM, 2008.
[15] D. Darais, M. Might, and D. Van Horn. Galois transformers and modular abstract interpreters. In Proceedings of the 2015 ACM SIGPLAN
International Conference on Object-Oriented Programming, Systems,
Languages, and Applications, OOPSLA 2015, pages 552–571. ACM,
2015.
[16] B. Delaware, C. Pit-Claudel, J. Gross, and A. Chlipala. Fiat: Deductive synthesis of abstract data types in a proof assistant. In Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2015. ACM, 2015.
[33] J. Tesson, H. Hashimoto, Z. Hu, F. Loulergue, and M. Takeichi. Program calculation in Coq. In M. Johnson and D. Pavlovic, editors,
Algebraic Methodology and Software Technology, Lecture Notes in
Computer Science, chapter 10. Springer Berlin Heidelberg, 2011.
Justifications in Constraint Handling Rules for
Logical Retraction in Dynamic Algorithms
arXiv:1706.07946v2 [cs.AI] 11 Sep 2017
Thom Frühwirth
Ulm University, Germany
[email protected]
Abstract. We present a straightforward source-to-source transformation that introduces justifications for user-defined constraints into the
CHR programming language. Then a scheme of two rules suffices to allow for logical retraction (deletion, removal) of constraints during computation. Without the need to recompute from scratch, these rules remove not only the constraint but also undo all consequences of the rule
applications that involved the constraint. We prove a confluence result
concerning the rule scheme and show its correctness.
When algorithms are written in CHR, constraints represent both data
and operations. CHR is already incremental by nature, i.e. constraints
can be added at runtime. Logical retraction adds decrementality. Hence
any algorithm written in CHR with justifications will become fully dynamic. Operations can be undone and data can be removed at any point
in the computation without compromising the correctness of the result.
We present two classical examples of dynamic algorithms, written in our
prototype implementation of CHR with justifications that is available online: maintaining the minimum of a changing set of numbers and shortest
paths in a graph whose edges change.
1 Introduction
Justifications have their origin in truth maintenance systems (TMS) [McA90]
for automated reasoning. In this knowledge representation method, derived information (a formula) is explicitly stored and associated with the information it
originates from by means of justifications. This dependency can be used to explain the reason for a conclusion (consequence) by its initial premises. With the
help of justifications, conclusions can be withdrawn by retracting their premises.
By this logical retraction, e.g. default reasoning can be supported and inconsistencies can be repaired by retracting one of the reasons for the inconsistency. An
obvious application of justifications are dynamic constraint satisfaction problems
(DCSP), in particular over-constrained ones [BM06].
In this work, we extend the applicability of logical retraction to arbitrary
algorithms that are expressed in the programming language Constraint Handling
Rules (CHR) [Frü09,Frü15]. To accomplish logical retraction, we have to be
aware that CHR constraints can also be deleted by rule applications. These
constraints may have to be restored when a premise is retracted. With logical
retraction, any algorithm written in CHR will become fully dynamic1 .
Minimum Example. Given a multiset of numbers min(n1), min(n2),...,
min(nk ). The constraint (predicate) min(ni ) means that the number ni is a candidate for the minimum value. The following CHR rule filters the candidates.
min(N) \ min(M) <=> N=<M | true.
The rule consists of a left-hand side, on which a pair of constraints has to be
matched, a guard check N=<M that has to be satisfied, and an empty right-hand
side denoted by true. In effect, the rule takes two min candidates and removes
the one with the larger value (constraints after the \ symbol are deleted). Note
that the min constraints behave both as operations (removing other constraints)
and as data (being removed).
CHR rules are applied exhaustively. Here the rule keeps on going until only
one, thus the smallest value, remains as single min constraint, denoting the current minimum. If another min constraint is added during the computation, it
will eventually react with a previous min constraint, and the correct current
minimum will be computed in the end. Thus the algorithm as implemented in
CHR is incremental. It is not decremental, though: We cannot logically retract
a min candidate. While removing a candidate that is larger than the minimum
would be trivial, the retraction of the minimum itself requires to remember all
deleted candidates and to find their minimum. With the help of justifications,
this logical retraction will be possible automatically.
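For readers who want to try the plain (not yet dynamic) version, the rule can be run directly in a CHR system such as the CHR library of SWI-Prolog. The declaration and query below are our own illustration and not part of the original example:

:- use_module(library(chr)).
:- chr_constraint min/1.

min(N) \ min(M) <=> N=<M | true.

% ?- min(3), min(1), min(2).
% min(1).

Each new min/1 candidate reacts with the currently stored one, so only the smallest value survives, which is exactly the incremental behaviour described above.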
Contributions and Overview of the Paper. In the next section we recall
syntax and operational semantics for CHR. Our contributions are as follows:
– We introduce CHR with justifications (CHRJ ) in Section 3. We enhance
standard CHR programs with justifications by a source-to-source program
transformation. We show the operational equivalence of rule applications in
both settings. Thus CHRJ is a conservative extension of standard CHR.
– We define a scheme of two rules to enable logical retraction of constraints
based on justifications in Section 4. We show that the rule scheme is confluent with each rule in any given program, independent of the confluence
of that program. We prove correctness of logical retraction: the result of a
computation with retraction is the same as if the constraint would never
have been introduced in the computation.
– We present a proof-of-concept implementation of CHRJ in CHR and Prolog
(available online) in Section 5. We discuss two classical examples for dynamic
algorithms, maintaining the minimum of a changing set of numbers and
maintaining shortest paths in a graph whose edges change.
The paper ends with discussion of related work in Section 6 and with conclusions
and directions for future work.
1 Dynamic algorithms for dynamic problems should not be confused with dynamic programming.
2 Preliminaries
We recall the abstract syntax and the equivalence-based abstract operational
semantics of CHR in this section. Upper-case letters stand for (possibly empty)
conjunctions of constraints in this paper.
2.1 Abstract Syntax of CHR
Constraints are relations, distinguished predicates of first-order predicate logic.
We differentiate between two kinds of constraints: built-in (pre-defined) constraints and user-defined (CHR) constraints which are defined by the rules in a
CHR program.
Definition 1. A CHR program is a finite set of rules. A (generalized) simpagation rule is of the form
r : H1 \H2 ⇔ C|B
where r : is an optional name (a unique identifier) of a rule. In the rule head (lefthand side), H1 and H2 are conjunctions of user-defined constraints, the optional
guard C| is a conjunction of built-in constraints, and the body (right-hand side)
B is a goal. A goal is a conjunction of built-in and user-defined constraints. A
state is a goal. Conjunctions are understood as multisets of their conjuncts.
In the rule, H1 are called the kept constraints, while H2 are called the removed
constraints. At least one of H1 and H2 must be non-empty. If H1 is empty, the
rule corresponds to a simplification rule, also written
s : H2 ⇔ C|B.
If H2 is empty, the rule corresponds to a propagation rule, also written
p : H1 ⇒ C|B.
In this work, we restrict given CHR programs to rules without built-in constraints in the body except true and false. This restriction is necessary as long
as built-in constraint solvers do not support the removal of built-in constraints.
2.2 Abstract Operational Semantics of CHR
Computations in CHR are sequences of rule applications. The operational semantics of CHR is given by the state transition system. It relies on a structural equivalence between states that abstracts away from technical details in a
transition[RBF09,Bet14].
State equivalence treats built-in constraints semantically and user-defined
constraints syntactically. Basically, two states are equivalent if their built-in
constraints are logically equivalent (imply each other) and their user-defined
constraints form syntactically equivalent multisets. For example,
X=<Y ∧ Y=<X ∧ c(X, Y) ≡ X=Y ∧ c(X, X) ≢ X=Y ∧ c(X, X) ∧ c(X, X).
For a state S, the notation Sbi denotes the built-in constraints of S and Sud
denotes the user-defined constraints of S.
Definition 2 (State Equivalence). Two states S1 = (S1bi ∧ S1ud ) and S2 =
(S2bi ∧ S2ud ) are equivalent, written S1 ≡ S2 , if and only if
|= ∀(S1bi → ∃ȳ((S1ud = S2ud ) ∧ S2bi )) ∧ ∀(S2bi → ∃x̄((S1ud = S2ud ) ∧ S1bi ))
with x̄ those variables that only occur in S1 and ȳ those variables that only
occur in S2 .
Using this state equivalence, the abstract CHR semantics is defined by a
single transition (computation step). It defines the application of a rule. Note
that CHR is a committed-choice language, i.e. there is no backtracking in the
rule applications.
Definition 3 (Transition). Let the rule (r : H1 \H2 ⇔ C|B) be a variant2 of
a rule from a given program P. The transition (computation step) S ↦r T is
defined as follows, where S is called source state and T is called target state:

S ≡ (H1 ∧ H2 ∧ C ∧ G)    (r : H1\H2 ⇔ C|B) ∈ P    (H1 ∧ C ∧ B ∧ G) ≡ T
──────────────────────────────────────────────────────────────────────
S ↦r T
The goal G is called context of the rule application. It is left unchanged.
A computation (derivation) of a goal S in a program P is a connected sequence Si ↦ri Si+1 beginning with the initial state (query) S0 that is S and ending in a final state (answer, result) or the sequence is non-terminating (diverging). We may drop the reference to the rules ri to simplify the presentation. The notation ↦∗ denotes the reflexive and transitive closure of ↦.
If the source state can be made equivalent to a state that contains the head
constraints and the guard built-in constraints of a variant of a rule, then we delete
the removed head constraints from the state and add the rule body constraints
to it. Any state that is equivalent to this target state is in the transition relation.
The abstract semantics does not account for termination of inconsistent
states and propagation rules. From a state with inconsistent built-in constraints,
any transition is possible. If a state can fire a propagation rule once, it can do
so again and again. This is called trivial non-termination of propagation rules.
Minimum Example, contd. Here is a possible transition from a state
S = (min(0) ∧ min(2) ∧ min(1)) to a state T = (min(0) ∧ min(1)):

S ≡ (min(X) ∧ min(Y) ∧ X ≤ Y ∧ (X = 0 ∧ Y = 2 ∧ min(1)))    (min(X)\min(Y) ⇔ X ≤ Y | true)    (min(X) ∧ X ≤ Y ∧ true ∧ (X = 0 ∧ Y = 2 ∧ min(1))) ≡ T
──────────────────────────────────────────────────────────────────────
S ↦ T
2 A variant (renaming) of an expression is obtained by uniformly replacing its variables by fresh variables.
3 CHR with Justifications (CHRJ)
We present a conservative extension of CHR by justifications. If they are not
used, programs behave as without them. Justifications annotate atomic CHR
constraints. A simple source-to-source transformation extends the rules with
justifications.
Definition 4 (CHR Constraints and Initial States with Justifications).
A justification f is a unique identifier. Given an atomic CHR constraint G, a CHR
constraint with justifications is of the form GF , where F is a set of justifications.
An initial state with justifications is of the form ⋀_{i=1}^{n} G_i^{{f_i}} where the f_i are distinct justifications.
We now define a source-to-source translation from rules to rules with justifications. Let kill and rem (remove) be two unary reserved CHR constraint symbols.
This means they are only allowed to occur in rules as specified in the following.
Definition 5 (Translation to Rules with Justifications). Given a generalized simpagation rule
r : ⋀_{i=1}^{l} K_i \ ⋀_{j=1}^{m} R_j ⇔ C | ⋀_{k=1}^{n} B_k

Its translation to a simpagation rule with justifications is of the form

rf : ⋀_{i=1}^{l} K_i^{F_i} \ ⋀_{j=1}^{m} R_j^{F_j} ⇔ C | ⋀_{j=1}^{m} rem(R_j^{F_j})^F ∧ ⋀_{k=1}^{n} B_k^F,   where F = ⋃_{i=1}^{l} F_i ∪ ⋃_{j=1}^{m} F_j.
The translation ensures that the head and the body of a rule mention exactly
the same justifications. More precisely, each CHR constraint in the body is annotated with the union of all justifications in the head of the rule, because its
creation is caused by the head constraints. The reserved CHR constraint rem/1
(remember removed) stores the constraints removed by the rule together with
their justifications.
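As a concrete instance of this translation (our rendering), the minimum rule from the introduction, which has an empty body, becomes

rf : min(N)^{F1} \ min(M)^{F2} ⇔ N=<M | rem(min(M)^{F2})^{F1 ∪ F2}

so the removed candidate is remembered together with the justifications of both constraints that participated in the rule application.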
3.1 Operational Equivalence of Rule Applications
Let A, B, C . . . be states. For convenience, we will often consider them as multisets of atomic constraints. Then the notation A−B denotes multiset difference, A
without B. By abuse of notation, let AJ , B J , C J . . . be conjunctions and corresponding states whose atomic CHR constraints are annotated with justifications
according to the above definition of the rule scheme. Similarly, let rem(R)J denote the conjunction ⋀_{j=1}^{m} rem(R_j^{F_j})^F.
We show that rule applications correspond to each other in standard CHR
and in CHRJ .
Lemma 1 (Equivalence of Program Rules). There is a computation step
S ↦r T with simpagation rule
r : H1\H2 ⇔ C|B
if and only if there is a computation step with justifications S^J ↦rf T^J ∧ rem(H2)^J with the corresponding simpagation rule with justifications
rf : H1^J\H2^J ⇔ C | rem(H2)^J ∧ B^J.
Proof. We compare the two transitions involving rule r and rf, respectively:
(r : H1\H2 ⇔ C|B) ∈ P    S ≡ (H1 ∧ H2 ∧ C ∧ G)    (H1 ∧ C ∧ B ∧ G) ≡ T
──────────────────────────────────────────────────────────────────────
S ↦r T

(rf : H1^J\H2^J ⇔ C | rem(H2)^J ∧ B^J)    S^J ≡ (H1^J ∧ H2^J ∧ C ∧ G^J)    (H1^J ∧ C ∧ B^J ∧ G^J) ≡ T^J ∧ rem(H2)^J
──────────────────────────────────────────────────────────────────────
S^J ↦rf T^J ∧ rem(H2)^J
Given the standard transition with rule r, the transition with justifications
with rule rf is always possible: The rule rf by definition does not impose any
constraints on its justifications. The justifications in the rule body are computed
as the union of the justifications in the rule head, which is always possible.
Furthermore, the reserved rem constraints always belong to the context of the
transition since by definition there is no rule rf that can match any of them.
Conversely, given the transition with justifications with rule rf , by the same
arguments, we can strip away all justifications from it and remove rem(H2 )J
from the rule and the target state to arrive at the standard transition with rule
r.
⊓⊔
Since computations are sequences of connected computation steps, this lemma
implies that computations in standard CHR program and in CHRJ correspond
to each other. Thus CHR with justifications is a conservative extension of CHR.
4 Logical Retraction Using Justifications
We use justifications to remove a CHR constraint from a computation without
the need to recompute from scratch. This means that all its consequences due
to rule applications it was involved in are undone. CHR constraints added by
those rules are removed and CHR constraints removed by the rules are re-added.
To specify and implement this behavior, we give a scheme of two rules, one for
retraction and one for re-adding of constraints. The reserved CHR constraint
kill (f ) undoes all consequences of the constraint with justification f .
Definition 6 (Rules for CHR Logical Retraction). For each n-ary CHR
constraint symbol c (except the reserved kill and rem), we add a rule to kill
constraints and a rule to revive removed constraints of the form:
kill : kill (f ) \ GF ⇔ f ∈ F | true
revive : kill (f ) \ rem(GFc )F ⇔ f ∈ F | GFc ,
where G = c(X1 , . . . , Xn ) and X1 , . . . , Xn are different variables.
Note that a constraint may be revived and subsequently killed. This is the case
when both Fc and F contain the justification f .
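For example, for the unary constraint min/1 of the running example, the scheme instantiates to (our rendering)

kill : kill(f) \ min(X)^F ⇔ f ∈ F | true
revive : kill(f) \ rem(min(X)^{Fc})^F ⇔ f ∈ F | min(X)^{Fc}

so killing a justification removes every live min constraint that depends on it and restores every min constraint whose removal it caused.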
4.1 Confluence of Logical Retraction
Confluence of a program guarantees that any computation starting from a given
initial state can always reach equivalent states, no matter which of the applicable rules are applied. There is a decidable, sufficient and necessary syntactic
condition to check confluence of terminating programs and to detect rule pairs
that lead to non-confluence when applied.
Definition 7 (Confluence). If A ↦∗ B and A ↦∗ C then there exist states D1 and D2 such that B ↦∗ D1 and C ↦∗ D2 where D1 ≡ D2.
Theorem 1. [Abd97,AFM99] A terminating CHR program is confluent if and
only if all its critical pairs are joinable.
Decidability comes from the fact that there is only a finite number of critical
pairs to consider.
Definition 8 (Overlap, Critical Pair). Given two (not necessarily different)
simpagation rules whose variables have been renamed apart, K1 \R1 ⇔ C1 |B1
and K2 \R2 ⇔ C2 |B2 . Let A1 and A2 be non-empty conjunctions of constraints
taken from K1 ∧ R1 and K2 ∧ R2 , respectively. An overlap of the two rules is the
state consisting of the rules heads and guards:
((K1 ∧ R1 ) − A1 ) ∧ K2 ∧ R2 ∧ A1 =A2 ∧ C1 ∧ C2 .
The critical pair are the two states that come from applying the two rules to
the overlap, where E = (A1 =A2 ∧ C1 ∧ C2 ):
(((K1 ∧ K2 ∧ R2 ) − A2 ) ∧ B1 ∧ E <> ((K1 ∧ R1 ∧ K2 ) − A1 ) ∧ B2 ∧ E).
Note that the two states in the critical pair differ by R2 ∧ B1 and R1 ∧ B2 .
A critical pair is trivially joinable if its built-in constraints are inconsistent
or if both A1 and A2 do not contain removed constraints [AFM99].
We are ready to show the confluence of the kill and revive rules with each
other and with each rule in any given program. It is not necessary that the given
program is confluent. This means for any given program, the order between
applying applicable rules from the program and retracting constraints can be
freely interchanged. It does not matter for the result, if we kill a constraint first
or if we apply a rule to it and kill it and its consequences later.
Theorem 2 (Confluence of Logical Retraction). Given a CHR program
whose rules are translated to rules with justifications together with the kill
and revive rules. We assume there is at most one kill (f ) constraint for each
justification f in any state. Then all critical pairs between the kill and revive
rules and any rule from the program with justifications are joinable.
Proof. There is only one overlap between the kill and revive rules,
kill : kill (f ) \ GF ⇔ f ∈ F | true
revive : kill (f ) \ rem(GFc )F ⇔ f ∈ F | GFc ,
since GF cannot have the reserved constraint symbol rem/1 . The overlap is in
the kill (f ) constraint. But since it is not removed by any rule, the resulting
critical pair is trivially joinable.
By our assumption, the only overlap between two instances of the kill rule
must have a single kill (f ) constraint. Again, since it is not removed, the resulting
critical pair is trivially joinable. The same argument applies to the only overlap
between two instances of the revive rule.
Since the head of a simpagation rule with justifications from the given program
rf : K J \RJ ⇔ C|rem(R)J ∧ B J
cannot contain reserved kill and rem constraints, these program rules cannot
have an overlap with the revive rule.
But there are overlaps between program rules, say a rule rf , and the kill rule.
They take the general form:
kill (f ) ∧ K J ∧ RJ ∧ GF =AF ∧ f ∈F ∧ C,
where AF occurs in K J ∧ RJ . This leads to the critical pair
(kill (f ) ∧ ((K J ∧ RJ ) − GF ) ∧ E <> kill (f ) ∧ K J ∧ rem(R)J ∧ B J ∧ E),
where E = (GF =AF ∧ f ∈F ∧ C). In the first state of the critical pair, the kill
rule has been applied and in the second state the rule rf . Note that AF is atomic
since it is equated to GF in E. Since GF has been removed in the first state and
GF =AF , rule rf is no longer applicable in that state.
We would like to join these two states. The joinability between a rule rf and the kill rule can be visualized by the following diagram (rendered here in text form): starting from the overlap state kill(f) ∧ K^J ∧ R^J ∧ E, applying the kill rule (possibly repeatedly) leads to the first state of the critical pair, kill(f) ∧ ((K^J ∧ R^J) − G^F) ∧ E, while applying rule rf leads to the second state, kill(f) ∧ K^J ∧ rem(R)^J ∧ B^J ∧ E; from the second state, exhaustive application of the revive and kill rules leads back to the first.
We now explain this joinability result. The states of the critical pair differ. In
the first state we have the constraints RJ and have GF removed from K J ∧ RJ ,
while in the second state we have the body constraints rem(R)J ∧ B J of rule
rf instead. Any constraint in rem(R)J ∧ B J must include f as justification by
definition, because f occurred in the head constraint AF and E contains f ∈F .
The goal rem(R)J contains rem constraints for each removed constraint from
J
R . But then we can use kill (f ) with the revive rule to replace all rem constraints
by the removed constraints, thus adding RJ back again. Furthermore, we can use
kill (f ) with the revive rule to remove each constraint in B J , as each constraint in
B J contains the justification f . So rem(R)J ∧ B J has been removed completely
and RJ has been re-added.
The two states may still differ in the occurrence of GF (which is AF ). In the
first state, GF was removed by the kill rule. Now if AF (GF ) was in RJ , it has
been revived with RJ . But then the kill rule is applicable and we can remove
AF again. In the second state, if AF was in RJ it has been removed together
with RJ by application of rule rf. Otherwise, AF is still contained in K J . But
then the kill rule is applicable to AF and removes it from K J . Now AF (GF )
does not occur in the second state either.
We thus have arrived at the first state of the critical pair. Therefore the
critical pair is joinable.
⊓⊔
This means that given a state, if there is a constraint to be retracted, we can
either kill it immediately or still apply a rule to it and use the kill and revive
rules afterwards to arrive at the same resulting state.
Note that the confluence between the kill and revive rules and any rule from
the program is independent of the confluence of the rules in the given program.
4.2 Correctness of Logical Retraction
We prove correctness of logical retraction: the result of a computation with
retraction is the same as if the constraint would never have been introduced in
the computation. We show that given a computation starting from an initial
state with a kill(f ) constraint that ends in a state where the kill and revive rules
are not applicable, i.e. these rules have been applied to exhaustion, then there is
a corresponding computation without constraints that contain the justification
f.
Theorem 3 (Correctness of Logical Retraction). Given a computation
A^J ∧ G^{{f}} ∧ kill(f) ↦∗ B^J ∧ rem(R)^J ∧ kill(f), to whose final state neither the kill nor the revive rule is applicable any more, and where f does not occur in A^J. Then there is a computation without G^{{f}} and kill(f):
A^J ↦∗ B^J ∧ rem(R)^J.
Proof. We distinguish between transitions that involve the justification f or
do not. A rule that applies to constraints that do not contain the justification f
will produce constraints that do not contain the justification. A rule application
that involves at least one constraint with a justification f will only produce
constraints that contain the justification f .
We now define a mapping from a computation with G{f } to a corresponding
computation without G{f } . The mapping essentially strips away constraints that
contain the justification f except those that are remembered by rem constraints.
In this way, the exhaustive application of the revive and kill rules kill (f ) is
mimicked.
strip(f, AJ ∧ B J ) := strip(f, AJ ) ∧ strip(f, B J )
strip(f, rem(GF1 )F2 ) := strip(f, GF1 ) if f ∈ F2
strip(f, GF ) := true if G is an atomic constraint except rem/1 and f ∈ F
strip(f, GF ) := GF otherwise.
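As a small worked instance (our example, written with explicit superscripts), consider the state reached from min(0)^{{g}} ∧ min(1)^{{f}} after one application of the translated minimum rule, namely min(0)^{{g}} ∧ rem(min(1)^{{f}})^{{f,g}}. Stripping the justification f gives
strip(f, min(0)^{{g}} ∧ rem(min(1)^{{f}})^{{f,g}}) = min(0)^{{g}} ∧ strip(f, min(1)^{{f}}) = min(0)^{{g}} ∧ true,
which is exactly the state that a computation without the constraint min(1)^{{f}} would have produced.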
We extend the mapping from states to transitions. We keep the transitions
except where the source and target state are equivalent, in that case we replace
the transition 7→ by an equivalence ≡. This happens when a rule is applied that
involves the justification f . The mapping is defined in such a way that in this
case the source and target state are equivalent. Otherwise a rule that does not
involve f has been applied. The mapping ensures in this case that all necessary
constraints are in the source and target state, since it keeps all constraints that
do not mention the justification f. For a computation step C^J ↦ D^J we define the mapping as:
strip(f, C^J ↦rf D^J) := strip(f, C^J) ≡ strip(f, D^J)   if rule rf involves f
strip(f, C^J ↦rf D^J) := strip(f, C^J) ↦rf strip(f, D^J)   otherwise.
We next have to show is that the mapping results in correct state equivalences
and transitions. If a rule is applied that does not involve justification f , then it is
easy to see that the mapping strip(f, . . .) leaves states and transitions unchanged.
Otherwise the transition is the application of a rule rf from the program,
the rule kill or the rule revive where f is contained in the justifications. Let the
context E J be an arbitrary goal where f ∈ J . Then we have to compute
strip(f, kill (f ) ∧ GF ∧ f ∈ F ∧ E J 7→kill kill (f ) ∧ E J )
strip(f, kill (f ) ∧ rem(GFc )F ∧ f ∈ F ∧ E J 7→revive kill (f ) ∧ GFc ∧ E J )
strip(f, K J ∧ RJ ∧ C ∧ E J 7→rf K J ∧ rem(R)J ∧ B J ∧ C ∧ E J )
and to show that equivalent states are produced in each case. The resulting
states are
true ∧ true ∧ true ∧ E^{J′} ≡ true ∧ E^{J′}
true ∧ G^{Fc} ∧ true ∧ E^{J′} ≡ true ∧ G^{Fc} ∧ E^{J′}   if f ∉ Fc
true ∧ true ∧ true ∧ E^{J′} ≡ true ∧ true ∧ E^{J′}   if f ∈ Fc
K^{J′} ∧ R^{J′} ∧ C ∧ E^{J′} ≡ K^{J′} ∧ R^{J′} ∧ C ∧ E^{J′}   where f ∉ J′,
where, given a goal A, the expression A^{J′} contains all constraints from A^J that do not contain the justification f.
In the end state of the given computation we know that the revive and
kill rules have been applied to exhaustion. Therefore all rem(GF1 )F2 where F2
contains f have been replaced by GF1 by the revive rule. Therefore all standard
constraints with justification f have been removed by the kill rule (including
those revived), just as we do in the mapping strip(f, . . .).
Therefore the end states are indeed equivalent except for the remaining kill
constraint.
⊓⊔
5 Implementation
As a proof-of-concept, we implement CHR with justifications (CHRJ) in SWI-Prolog using its CHR library. This prototype source-to-source transformation
is available online at http://pmx.informatik.uni-ulm.de/chr/translator/.
The translated programs can be run in Prolog or online systems like WebCHR.
Constraints with Justifications. CHR constraints annotated by a set
of justifications are realized by a binary infix operator ##, where the second
argument is a list of justifications:
C {F1 ,F2 ,...} is realized as C ## [F1,F2,...].
For convenience, we add rules that add a new justification to a given constraint C. For each constraint symbol c with arity n there is a rule of the form
addjust @ c(X1,X2,...Xn) <=> c(X1,X2,...Xn) ## [ F].
where the arguments of X1,X2,...Xn are different variables.
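For the min/1 constraint of the examples in Section 5.1, this generated rule reads (our rendering; the fresh justification variable is written _F):

addjust @ min(X) <=> min(X) ## [_F].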
Rules with Justifications. A CHR simpagation rule with justifications is
realized as follows:
rf : ⋀_{i=1}^{l} K_i^{F_i} \ ⋀_{j=1}^{m} R_j^{F_j} ⇔ C | ⋀_{j=1}^{m} rem(R_j^{F_j})^F ∧ ⋀_{k=1}^{n} B_k^F   where F = ⋃_{i=1}^{l} F_i ∪ ⋃_{j=1}^{m} F_j
rf @ K1 ## FK1,... \ R1 ## FR1,... <=> C |
union([FK1,...FR1,...],Fs), rem(R1##FR1) ## Fs,...B1 ## Fs,...
where the auxiliary predicate union/2 computes the ordered duplicate-free union
of a list of lists3 .
Rules remove and revive. Justifications are realized as flags that are initially unbound logical variables. This eases the generation of new unique justifications and their use in killing. Concretely, the reserved constraint kill (f )
is realized as built-in equality F=r, i.e. the justification variable gets bound. If
kill (f ) occurred in the head of a kill or revive rule, it is moved to the guard as
equality test F==r. Note that we rename rule kill to remove in the implementation.
revive : kill (f ) \ rem(C Fc )F ⇔ f ∈ F | C Fc
kill : kill (f ) \ C F ⇔ f ∈ F | true
3 More precisely, a simplification rule is generated if there are no kept constraints and a propagation rule is generated if there are no removed constraints.
revive @ rem(C##FC) ## Fs <=> member(F,Fs),F==r | C ## FC.
remove @ C ## Fs <=> member(F,Fs),F==r | true.
Since rules are tried in program order in the CHR implementation, the constraint
C in the second rule is not a reserved rem/1 constraint when the rule is applicable.
The check for set membership in the guards is expressed using the standard
nondeterministic Prolog built-in predicate member/2.
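The effect of kill(f) can thus be triggered simply by binding the corresponding justification variable. For instance, assuming the declarations of the prototype, a session like the following (our illustration) removes an annotated constraint via the remove rule:

% ?- min(5) ## [F], F = r.
% true.

Binding F to r reactivates the stored constraint, the guard member(F,Fs), F==r now succeeds, and the constraint is deleted.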
Logical Retraction with killc/1. We extend the translation to allow
for retraction of derived constraints. The constraint killc(C) logically retracts
constraint C. The two rules killc and killr try to find the constraint C - also
when it has been removed and is now present in a rem constraint. The associated
justifications point to all initial constraints that were involved in producing the
constraint C. For retracting the constraint, it is sufficient to remove one of its
producers. This introduces a choice implemented by the member predicate.
killr @ killc(C), rem(C ## FC) ## _Fs <=> member(F,FC),F=r.
killc @ killc(C), C ## Fs <=> member(F,Fs),F=r.
Note that in the first rule, we bind a justification F from FC, because FC contains
the justifications of the producers of constraint C, while Fs also contains those
that removed it by a rule application.
5.1 Examples
We discuss two classical examples for dynamic algorithms, maintaining the minimum of a changing set of numbers and shortest paths when edges change.
Dynamic Minimum. Translating the minimum rule to one with justifications results in:
min(A)##B \ min(C)##D <=> A<C | union([B,D],E), rem(min(C)##D)##E.
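The sessions below can be reproduced with a small amount of glue code around this rule. The following is our own minimal sketch assembled from the fragments shown in this section; the operator declaration, the chr_constraint declarations and the definition of union/2 are assumptions and may differ from the code produced by the online translator:

:- use_module(library(chr)).
:- op(700, xfx, ##).                  % assumed priority for the ## annotation
:- chr_constraint (##)/2, killc/1.

% ordered, duplicate-free union of a list of lists (our sketch of union/2)
union(Lists, Union) :- append(Lists, All), sort(All, Union).

% translated minimum rule (as above)
min(A)##B \ min(C)##D <=> A<C | union([B,D],E), rem(min(C)##D)##E.

% revive/remove scheme of Section 5 (revive must precede remove)
revive @ rem(C##FC) ## Fs <=> member(F,Fs), F==r | C ## FC.
remove @ C ## Fs <=> member(F,Fs), F==r | true.

% logical retraction of a (possibly already removed) constraint
killr @ killc(C), rem(C ## FC) ## _Fs <=> member(F,FC), F=r.
killc @ killc(C), C ## Fs <=> member(F,Fs), F=r.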
The following shows an example query and the resulting answer in SWI-Prolog:
?- min(1)##[A], min(0)##[B], min(2)##[C].
rem(min(1)##[A])##[A,B], rem(min(2)##[C])##[B,C],
min(0)##[B].
The constraint min(0) remained. This means that 0 is the minimum. The constraints min(1) and min(2) have been removed and are now remembered. Both
have been removed by the constraint with justification B, i.e. min(0).
We now logically retract with killc the constraint min(1) at the end of the
query. The killr rule applies and removes rem(min(1)##[A])##[A,B]. (In the
rule body, the justification A is bound to r - to no effect, since there are no other
constraints with this justification.)
?- min(1)##[A], min(0)##[B], min(2)##[C], killc(min(1)).
rem(min(2)##[C])##[B,C],
min(0)##[B].
What happens if we remove the current minimum min(0)? The constraint
min(0) is removed by binding justification B. The two rem constraints for min(1)
and min(2) involve B as well, so these two constraints are re-introduced and
react with each other. Note that min(2) is now removed by min(1) (before it
was min(0)). The result is the updated minimum, which is 1.
?- min(1)##[A], min(0)##[B], min(2)##[C], killc(min(0)).
rem(min(2)##[C])##[A,C],
min(1)##[B].
Dynamic Shortest Path. Given a graph with directed arcs e(X,Y), we
compute the lengths of the shortest paths between all pairs of reachable nodes:
%
pp
%
e
%
ep
keep shorter of two paths from X to Y
@ p(X,Y,L1) \ p(X,Y,L2) <=> L1=<L2 | true.
edges have a path of unit length
@ e(X,Y) ==> p(X,Y,1).
extend path in front by an edge
@ e(X,Y), p(Y,Z,L) ==> L1=:=L+1 | p(X,Z,L1).
The corresponding rules in the translated program are:
pp@p(A,B,C)##D \ p(A,B,E)##F <=> C=<E |
union([D,F],G), rem(p(A,B,E)##F)##G.
e @e(A,B)##C ==> true | union([C],D), p(A,B,1)##D.
ep@e(A,B)##C,p(B,D,E)##F ==> G is E+1 | union([C,F],H),p(A,D,G)##H.
We now use constraints without justifications in queries. Justifications will
be added by the addjust rules.
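Following the addjust scheme from Section 5, the plain e/2 and p/3 constraints in the queries are lifted to their annotated form by rules of this shape (our rendering; the generated rule names may differ):

addjust_e @ e(X,Y) <=> e(X,Y) ## [_F].
addjust_p @ p(X,Y,L) <=> p(X,Y,L) ## [_F].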
?- e(a,b), e(b,c), e(a,c).
rem(p(a, c, 2)##[A, B])##[A,B,C],
p(a, b, 1)##[A], e(a, b)##[A],
p(b, c, 1)##[B], e(b, c)##[B],
p(a, c, 1)##[C], e(a, c)##[C].
We see that a path of length 2 has been removed by the constraint e(a,c)##[C],
which produced a shorter path of length one. We next kill this constraint e(a,c).
?- e(a,b), e(b,c), e(a,c), killc(e(a,c)).
p(a, b, 1)##[A], e(a, b)##[A],
p(b, c, 1)##[B], e(b, c)##[B],
p(a, c, 2)##[A,B].
Its path p(a,c,1) disappears and the removed remembered path p(a,c,2) is
re-added. We can see that the justifications of a path are those from the
edges in that path. The same happens if we logically retract p(a,c,1) instead
of e(a,c).
What happens if we remove p(a,c,2) from the initial query? The killr
rule applies. Since the path has two justifications, there are two computations
generated by the member predicate. In the first one, the constraint e(a,b) disappeared, in the second answer, it is e(b,c). In both cases, the path cannot be
computed anymore, i.e. it has been logically retracted.
?- e(a,b), e(b,c), e(a,c), killc(p(a,c,2)).
p(b, c, 1)##[B], e(b, c)##[B],
p(a, c, 1)##[C], e(a, c)##[C]
;
p(a, b, 1)##[A], e(a, b)##[A],
p(a, c, 1)##[C], e(a, c)##[C].
6 Related Work
The idea of introducing justifications into CHR is not new. The thorough work
of Armin Wolf on Adaptive CHR [WGG00] was the first to do so. In contrast to our work, this technically involved approach requires storing detailed information about the rule instances that have been applied in a derivation in order to undo them. Adaptive CHR had a low-level implementation in Java [Wol01],
while we give an implementation in CHR itself by a straightforward source-to-source transformation that we prove confluent and correct. Moreover we prove
confluence of the rule scheme for logical retraction with the rules of the given
program. The application of adaptive CHR considered dynamic constraint satisfaction problems (DCSP) only, in particular for the implementation of search
strategies [Wol05], while we apply our approach to arbitrary algorithms in order
to make them fully dynamic.
The issue of search strategies was further investigated by Leslie De Koninck et al. [DKSD08]. They introduce a flexible search framework in CHR∨ (CHR
with disjunction) extended with rule and search branch priorities. In their work,
justifications are introduced into the semantics of CHR∨ to enable dependency-directed backtracking in the form of conflict-directed backjumping. Our work
does not need a new semantics for CHR, nor its extension with disjunction, it
rather relies on a source-to-source transformation within the standard semantics.
Our work does not have a particular application of justifications in mind,
but rather provides the basis for any type of application that requires dynamic
algorithms. There are, however, CHR applications that use justifications.
The work of Jeremy Wazny et al. [SSW03] introduced informally a particular
kind of justifications into CHR for the specific application of type debugging and
reasoning in Haskell. Justifications correspond to program locations in the given
Haskell program. Unlike other work, the constraints in the body of CHR rules
have given justifications to which justifications from the rule applications are
added at runtime.
The more recent work of Gregory Duck [Duc12] introduces SMCHR, a tight
integration of CHR with a Boolean Satisfiability (SAT) solver for quantifier-free
formulae including disjunction and negation as logical connectives. It is mentioned that for clause generation, SMCHR supports justifications for constraints
that include syntactic equality constraints between variables. A dynamic unification algorithm using justifications has been considered in [Wol98].
7 Conclusions
In this paper, the basic framework for CHR with justifications (CHRJ ) has
been established and formally analyzed. We defined a straightforward source-to-source program transformation that introduces justifications into CHR as a
conservative extension. Justifications enable logical retraction of constraints. If
a constraint is retracted, the computation continues as if the constraint never
was introduced. We proved confluence and correctness of the two-rule scheme
that encodes the logical retraction. We presented a prototype implementation
that is available online together with two classical examples.
Future work could proceed along three equally important lines: investigate
implementation, dynamic algorithms and application domains of CHR with justifications. First, we would like to research how logical as well as classical algorithms implemented in CHR behave when they become dynamic. While our
approach does not require confluence of the given program, the arbitrary reintroduction of removed constraints may lead to unwanted orders of rule applications in non-confluent programs. Second, we would like to improve the implementation, optimize and benchmark it [Fru17]. Currently, the entire history of
removed constraints is stored. It could suffice to remember only a partial history
if only certain constraints can be retracted or if partial recomputation proves
to be more efficient for some constraints. A lower-level implementation could
benefit from the propagation history that comes with the application of propagation rules in most CHR implementations. Third, we would like to extend the
rule scheme to support typical application domains of justifications: explanation of derived constraints by justifications (for debugging), detection and repair
of inconsistencies (for error diagnosis), and implementing nonmonotonic logical
behaviors (e.g. default logic, abduction, defeasible reasoning). Logical retraction
of constraints can also lead to reversal of computations, and as a first step the
related literature on reversibility should be explored.
Acknowledgements. We thank Daniel Gall for implementing the online
transformation tool for CHRJ . We also thank the anonymous reviewers for their
helpful suggestions on how to improve the paper.
References
Abd97. Slim Abdennadher. Operational semantics and confluence of constraint propagation rules. In G. Smolka, editor, CP '97: Proc. Third Intl. Conf. Principles and Practice of Constraint Programming, volume 1330 of LNCS, pages 252–266. Springer, 1997.
AFM99. Slim Abdennadher, Thom Frühwirth, and Holger Meuss. Confluence and semantics of constraint simplification rules. Constraints, 4(2):133–165, 1999.
Bet14. Hariolf Betz. A unified analytical foundation for constraint handling rules. BoD, 2014.
BM06. Kenneth N. Brown and Ian Miguel. Uncertainty and change, chapter 21. Handbook of Constraint Programming, pages 731–760, 2006.
DKSD08. Leslie De Koninck, Tom Schrijvers, and Bart Demoen. A flexible search framework for CHR. In Constraint Handling Rules — Current Research Topics, volume LNAI 5388, pages 16–47. Springer, 2008.
Duc12. Gregory J. Duck. SMCHR: Satisfiability modulo constraint handling rules. Theory and Practice of Logic Programming, 12(4-5):601–618, 2012.
Frü09. Thom Frühwirth. Constraint Handling Rules. Cambridge University Press, 2009.
Frü15. Thom Frühwirth. Constraint handling rules – what else? In Rule Technologies: Foundations, Tools, and Applications, pages 13–34. Springer International Publishing, 2015.
Fru17. Thom Fruehwirth. Implementation of Logical Retraction in Constraint Handling Rules with Justifications. In 21st International Conference on Applications of Declarative Programming and Knowledge Management (INAP 2017), September 2017.
McA90. David A. McAllester. Truth maintenance. In AAAI, volume 90, pages 1109–1116, 1990.
RBF09. Frank Raiser, Hariolf Betz, and Thom Frühwirth. Equivalence of CHR states revisited. In F. Raiser and J. Sneyers, editors, CHR '09, pages 33–48. K.U.Leuven, Dept. Comp. Sc., Technical report CW 555, July 2009.
SSW03. Peter J. Stuckey, Martin Sulzmann, and Jeremy Wazny. Interactive type debugging in Haskell. In Proceedings of the 2003 ACM SIGPLAN workshop on Haskell, pages 72–83. ACM, 2003.
WGG00. Armin Wolf, Thomas Gruenhagen, and Ulrich Geske. On the incremental adaptation of CHR derivations. Applied Artificial Intelligence, 14(4):389–416, 2000.
Wol98. Armin Wolf. Adaptive solving of equations over rational trees. In Principles and Practice of Constraint Programming, pages 475–475, Berlin, Heidelberg, 1998. Springer Berlin Heidelberg.
Wol01. Armin Wolf. Adaptive constraint handling with CHR in Java. In International Conference on Principles and Practice of Constraint Programming, pages 256–270. Springer, 2001.
Wol05. Armin Wolf. Intelligent search strategies based on adaptive constraint handling rules. Theory and Practice of Logic Programming, 5(4-5):567–594, 2005.
arXiv:1803.08961v1 [math.AC] 23 Mar 2018
COHEN-MACAULAY CRITERIA FOR PROJECTIVE MONOMIAL
CURVES VIA GRÖBNER BASES
JÜRGEN HERZOG, DUMITRU I. STAMATE
Abstract. We prove new characterizations based on Gröbner bases for the Cohen-Macaulay property of a projective monomial curve.
Contents
Introduction
1. A Cohen-Macaulay criterion via Gröbner bases
2. Dual sequences
3. A bound for the number of generators of in(I(a)), when C(a) is arithmetically Cohen-Macaulay
4. Applications
References
Introduction
Let K be any field. For any sequence of distinct positive integers a : a1, . . . , an we denote by I(a) the kernel of the K-algebra homomorphism φ : S → K[t] where S = K[x1, . . . , xn] and φ(xi) = t^{a_i} for i = 1, . . . , n. The image of this map is the semigroup ring over K of the semigroup H generated by a1, . . . , an. We do not insist that a is a minimal generating set for H.
In the following, we assume that gcd(a1 , . . . , an ) = 1 and an > ai for all i < n.
We note that the homogenization of I(a) with respect to the variable x0 is again a
toric ideal, namely it is the kernel of the K-algebra map ψ : S[x0 ] → K[s, t] where
ψ(xi) = t^{a_i} s^{a_n − a_i} for 1 ≤ i ≤ n and ψ(x0) = s^{a_n}. The image of the map ψ is the subalgebra K[A] of K[t, s] generated by the monomials whose exponents are read from the columns of the matrix

(1)    A = ( 0      a1           . . .   a_{n−1}           a_n
             a_n    a_n − a_1    . . .   a_n − a_{n−1}     0   ).
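As a quick illustration (our example, not taken from the paper), for the sequence a : 1, 2, 3 the matrix and the associated semigroup algebra are

\[ A = \begin{pmatrix} 0 & 1 & 2 & 3 \\ 3 & 2 & 1 & 0 \end{pmatrix}, \qquad K[A] = K[s^3,\; t s^2,\; t^2 s,\; t^3] \subseteq K[s,t], \]

and the corresponding projective monomial curve is the twisted cubic in P^3.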
In case K is an algebraically closed field, I(a) is the vanishing ideal of the affine
curve C(a) given parametrically by t 7→ (ta1 , . . . , tan ), while I(a)h is the vanishing
2010 Mathematics Subject Classification. Primary 13H10, 13P10, 16S36; Secondary 13F20,
14M25.
Key words and phrases. arithmetically Cohen-Macaulay, projective monomial curve, revlex,
Gröbner basis, numerical semigroup, Apéry set.
ideal of the projective closure \overline{C(a)} of C(a) in P^n, given parametrically by [s : t] ↦
[t^{a_n} : s^{a_1} t^{a_n − a_1} : · · · : s^{a_{n−1}} t^{a_n − a_{n−1}} : s^{a_n}]. Curves of this type are called projective
monomial curves.
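Since I(a) is defined as a kernel, it can be obtained in any computer algebra system by eliminating t. The following is a small Python/SymPy sketch for the hypothetical sequence a = (3, 5, 7) (an illustrative choice, not an example from the paper), included only to make the definition concrete.

```python
# Sketch (assumption: a = (3, 5, 7)): compute the toric ideal I(a) = ker(phi),
# phi: K[x1,x2,x3] -> K[t], x_i -> t^{a_i}, by eliminating t from the ideal
# (x1 - t^3, x2 - t^5, x3 - t^7) via a lex Groebner basis with t first.
from sympy import symbols, groebner

t, x1, x2, x3 = symbols('t x1 x2 x3')
a = (3, 5, 7)

G = groebner([x1 - t**a[0], x2 - t**a[1], x3 - t**a[2]], t, x1, x2, x3, order='lex')

# By the elimination theorem, the elements of G not involving t generate I(a).
I_a = [g for g in G.exprs if t not in g.free_symbols]
print(I_a)
```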
The projective monomial curve C(a) is called arithmetically Cohen-Macaulay if
its vanishing ideal I(a)h is a Cohen-Macaulay ideal. It is known that this is the case
if and only if t^{a_n}, s^{a_n} is a regular sequence on K[A]. This fact is a special case of
[10, Theorem 2.6].
Arithmetically Cohen-Macaulay curves are not rare among the projective monomial curves. It follows from Vu’s [18, Theorem 5.7] that for any fixed a, the curve
C(a1 + k, . . . , an + k) is arithmetically Cohen-Macaulay for all k ≫ 0. In small
embedding dimension, Bresinsky, Schenzel and Vogel [4] characterized the arithmetically Cohen-Macaulay projective monomial curves in P3 by the property that
I(a)h is generated by at most 3 elements.
In the context of numerical semigroups, Gröbner bases have been used in algorithms in [16] and [17] to find the Frobenius number of the numerical semigroup H
generated by a, or to characterize when is the tangent cone of the semigroup algebra
K[H] Cohen-Macaulay, see [1], [2].
One of the main results in this paper is that C(a) is arithmetically Cohen–
Macaulay if and only if in(I(a)h ), respectively in(I(a)), is a Cohen-Macaulay ideal,
see Theorem 1.2 (b) and (c). Here the initial ideals are taken for a reverse lexicographic order for which xn , x0 , respectively xn are the smallest variables. These
conditions are also equivalent to condition (f) which says that
in(xn , I(a)) = (xn , in(I(a))).
Yet other equivalent properties are (d) and (e), namely that xn , respectively x0 , xn ,
do not divide any minimal monomial generator of in(I(a)) and of in(I(a)h ), respectively, where the monomial orders are as before.
A Cohen-Macaulay criterion for a simplicial semigroup ring in terms of the Gröbner
basis of its defining ideal is given by Kamoi in [15, Corollary 2.9] and [14, Theorem
1.2]. In the particular case considered in this paper, equivalences (a), (d) and (e) in
Theorem 1.2 sharpen Kamoi’s mentioned results and his [14, Corollary 3.6].
The dual sequence of a is defined to be the sequence a′ : an − a1 , . . . , an − an−1 , an .
The projective monomial curves associated to the sequences a and a′ are obviously
isomorphic. So it is natural to compare the ideals I(a) and I(a′ ) and their reduced
Gröbner bases. That is the focus of Section 2.
For w = (w_1, . . . , w_n) ∈ Z^n we denote ⟨w, a⟩ = \sum_{i=1}^n w_i a_i, and we set L(a) =
{w ∈ Z^n : ⟨w, a⟩ = 0}. Obviously, I(a) is just the lattice ideal of the lattice
L(a). Indeed, I(a) is generated by the binomials f_w = x^{w^+} − x^{w^−} with w ∈ L(a).
Let σ : Z^n → Z^n be the map given by σ(w_1, . . . , w_n) = (w_1, . . . , w_{n−1}, −\sum_{i=1}^n w_i).
Then σ is an automorphism of the group Z^n such that σ^2 = id_{Z^n} which induces an
isomorphism between L(a) and L(a′). In particular, I(a′) = (f_{σ(w)} : w ∈ L(a)).
In general, a minimal set of binomial generators of L(a) is not mapped to a minimal set of binomial generators of L(a′ ), see Remark 3.2. However, in Theorem 2.2 we
show that C(a) is arithmetically Cohen-Macaulay if and only if in(fσ(w) ) = in(fw )
for all fw ∈ G, where G denotes the reduced Gröbner basis of I(a) with respect to a
reverse lexicographic monomial order with xn the smallest variable. Moreover, these
conditions are also equivalent to the fact that fw ∈ G if and only if fσ(w) ∈ G ′ , for
all w ∈ Zn , where G ′ is the reduced Gröbner basis of I(a′ ) with respect to the same
monomial order.
Let H denote the numerical semigroup generated by a. For any nonzero element h
in H its Apéry set is defined as Ap(H, h) = {x ∈ H : x − h ∉ H}. For h ∈ Ap(H, a_n)
we denote ϕa (h) the smallest monomial in S for the reverse lexicographic order such
that its a-degree equals h. The close relationship between the ideals I(a) and I(a′ )
is also outlined by the fact that the curve C(a) is arithmetically Cohen-Macaulay if
and only if
in(xn , I(a)) = in(xn , I(a′ )),
see Theorem 2.6. Here one uses a reverse lexicographic order with xn the smallest
variable. For the proof, a key observation is that the latter equation is equivalent
to the fact that for all h in Ap(H, an ) the a′ -degree of ϕa (h) is in Ap(H ′ , an ), where
H ′ denotes the semigroup generated by the dual sequence a′ . As a consequence,
in Corollary 2.7 we recover a criterion of Cavaliere and Niesi ([5]) for C(a) to be
arithmetically Cohen-Macaulay.
When n = 3, it is known from [11] that µ(I(a)) ≤ 3. However, we give examples
showing that a reduced Gröbner basis may be arbitrarily large, see Proposition 3.1.
In case C(a) is arithmetically Cohen-Macaulay, in Proposition 3.4 we show that µ(in(I(a))) ≤ \binom{a_n}{n−2}.
In Section 4 we apply Theorem 1.2 to test the arithmetically Cohen-Macaulay
property for two families of projective monomial curves in P3 that have appeared
in the literature. For these families of 4-generated numerical semigroups which
were introduced by Arslan ([1]) and by Bresinsky ([3]), respectively, we show that
the corresponding projective monomial curve is (respectively, is not) arithmetically
Cohen-Macaulay.
1. A Cohen-Macaulay criterion via Gröbner bases
The following lemma appears in [7, Exercise 5, page 392]. Lacking reference to a
proof, we prefer to provide one.
Lemma 1.1. Let I be an ideal in the polynomial ring S = K[x1 , . . . , xn ] and I h ⊂
S[x0 ] its homogenization with respect to the variable x0 . We denote < any reverse
lexicographic monomial order on S and <0 the reverse lexicographic monomial order
on S[x0 ] extended from S and such that x0 is the least variable.
If f1 , . . . , fr is the reduced Gröbner basis for I with respect to <, then f1h , . . . , frh
is the reduced Gröbner basis for I h with respect to <0 . Moreover, in<0 (I h ) =
(in< (I))S[x0 ].
Proof. Let F h = {f1h , . . . , frh }. It is proved in [9, Proposition 3.15] that F h is a
Gröbner basis for I h with respect to the block order <′ on S[x0 ] which is defined as
xα xa0 <′ xβ xb0 if (xα < xβ ) or (xα = xβ and a < b)
for all α, β ∈ Nn and all nonnegative integers a, b.
Let f be a nonzero polynomial in I. We write f = m_1 + · · · + m_q as a sum of
monomials with m_{i−1} > m_i for 2 ≤ i ≤ q. Then deg m_{i−1} ≥ deg m_i for 2 ≤ i ≤ q
and
f^h = m_1 + m_2 x_0^{ε_2} + · · · + m_q x_0^{ε_q},
where εi = deg m1 − deg mi for i = 2, . . . , q. Moreover, in the above decomposition
of f h the monomials are listed decreasingly with respect to <0 . Thus in<0 (f h ) =
m1 = in< (f ) = in<′ (f h ) for all f in I. It follows that
in<′ (I h ) = in<′ (fih : 1 ≤ i ≤ r) = (in<′ (fih ) : 1 ≤ i ≤ r)
= (in<0 (fih ) : 1 ≤ i ≤ r) ⊆ in<0 (I h ).
Since the homogeneous ideals in<′ (I h ) and in<0 (I h ) have the same Hilbert function,
we conclude that they are equal and that F h is a Gröbner basis for I h with respect
to <0 .
Assume there exist i, j such that in<0 (fih ) divides a monomial in tail(fjh ), i.e.
in< (fi ) divides mxε0 with m a monomial in tail(fj ). This implies that in< (fi ) divides
m, which contradicts the fact that F is the reduced Gröbner basis for I with respect
to <. Therefore F h is the reduced Gröbner basis for I h with respect to <0 .
The following theorem is one of the main results of this paper.
Theorem 1.2. Let a : a1 , . . . , an be a sequence of positive integers with an > ai for
all i < n. Denote < any reverse lexicographic order on S = K[x1 , . . . , xn ] such that
xn is the smallest variable and <0 the induced reverse lexicographic order on S[x0 ],
where xn > x0 . The following conditions are equivalent:
(a) the projective monomial curve C(a) is arithmetically Cohen-Macaulay;
(b) in<0 (I(a)h ) is a Cohen-Macaulay ideal;
(c) in< (I(a)) is a Cohen-Macaulay ideal;
(d) xn does not divide any element of G(in< (I(a)));
(e) xn and x0 do not divide any element of G(in<0 (I(a)h ));
(f) in< (xn , I(a)) = (xn , in< (I(a)).
Proof. Lemma 1.1 implies that G(in< (I(a))) = G(in<0 (I(a)h )). Therefore (b) ⇐⇒
(c) and (d) ⇐⇒ (e). The implication (b) ⇒ (a) is a general fact, see for example
[12, Corollary 3.3.5]. Assuming (e), we get that x0 , xn is a regular sequence modulo
in<0 (I(a)h ), which implies (b).
Since xn is regular on S/I(a) which is a domain, using [6, Proposition 1.4] we
have that xn is regular on S/ in< (I(a)) if and only if in< (xn , I(a)) = (xn , in< (I(a)).
This shows (d) ⇐⇒ (f).
It remains to prove that (a) ⇒ (e). It is known that the ring K[A] is CohenMacaulay if and only if san , tan is a regular sequence on it, see [10, Lemma 2.4].
That is equivalent to x0 , xn being a regular sequence on S[x0 ]/I(a)h .
Let F = {f1 , . . . , fr } be the reduced Gröbner basis of I(a) with respect to <. For
all i = 1, . . . , r, fi is a binomial of the form ui − vi , where ui = in< (fi ), and ui and
vi have disjoint supports. After a reordering, we may assume that there exists ℓ ≤ r
such that fi is homogeneous if and only if i ≤ ℓ.
Using Lemma 1.1, we see that G(in<0 (I(a)h )) = {u1, . . . , ur }. Clearly, x0 does
not divide ui for any i, since deg ui ≥ deg vi .
We first show that for i ≤ ℓ, ui is not divisible by xn . Indeed, if that were not
the case, by the properties of the chosen monomial order <, we would have that xn
divides vi , as well, hence ui and vi do not have disjoint supports, which is false.
It remains to prove that xn does not divide ui for any i > ℓ. For that, we set
gi = fih |x0 =0 for all i = 1, . . . , r and J = (g1 , . . . , gr )S. Then K[A]/(san ) ∼
= S/J.
For i ≤ ℓ, since fi is homogeneous we obtain that x0 does not divide vi . Therefore,
J = (f1 , . . . , fℓ , uℓ+1, . . . , ur ). Since xn is regular on S/J, we infer that xn is not
in the support of any of the monomials ui with i > ℓ. This finishes the proof of
(a) ⇒ (e).
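Conditions (d) and (f) of Theorem 1.2 are directly checkable by machine. The following Python/SymPy sketch tests condition (d) for the hypothetical sequence a = (3, 5, 7) (an illustrative choice, not from the paper): it computes the reduced grevlex Gröbner basis of I(a) with x_3 the smallest variable and checks whether x_3 divides any leading monomial.

```python
# Sketch (assumption: a = (3, 5, 7)): check Theorem 1.2(d) -- x_n divides no
# minimal generator of in(I(a)) -- for grevlex with x1 > x2 > x3.
from sympy import symbols, groebner, LT

t, x1, x2, x3 = symbols('t x1 x2 x3')
a = (3, 5, 7)

# toric ideal I(a) by elimination of t
elim = groebner([x1 - t**a[0], x2 - t**a[1], x3 - t**a[2]], t, x1, x2, x3, order='lex')
I_a = [g for g in elim.exprs if t not in g.free_symbols]

# reduced grevlex Groebner basis with generator order x1 > x2 > x3
G = groebner(I_a, x1, x2, x3, order='grevlex')
leading = [LT(g, x1, x2, x3, order='grevlex') for g in G.exprs]
print(leading)
print('x3 divides some leading monomial:', any(m.has(x3) for m in leading))
```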
2. Dual sequences
Given a : a1 , . . . , an a sequence of distinct nonnegative integers such that an > ai
for all i < n, the dual sequence is defined to be a′ : an − a1 , . . . , an − an−1 , an . It is
clear that this procedure is a duality: (a′ )′ = a.
The projective monomial curves associated to the sequences a and a′ are isomorphic. Indeed, the ideals I(a)h and I(a′ )h are the kernel of the maps on S[x0 ] sending
x0 , . . . , xn to the monomials having as exponent vectors the columns of the matrix
A in (1), and respectively the columns of the matrix
A′ = \begin{pmatrix} 0 & a_n − a_1 & \dots & a_n − a_{n−1} & a_n \\ a_n & a_1 & \dots & a_{n−1} & 0 \end{pmatrix}.
This implies that the polynomials in the ideal I(a′ )h are obtained from those in
I(a)h by switching the variables x0 and xn .
In this section we compare the Gröbner bases of the ideals I(a) and I(a′ ) with
respect to a reverse lexicographic order, in connection to the Cohen-Macaulay property of the associated projective monomial curve.
Let σ : Z^n → Z^n be the map given by σ(w_1, . . . , w_n) = (w_1, . . . , w_{n−1}, −\sum_{i=1}^n w_i).
It is easy to see that σ is an automorphism of the group Z^n and that σ^2 = id_{Z^n}. For
w = (w_1, . . . , w_n) ∈ Z^n we denote ⟨w, a⟩ = \sum_{i=1}^n w_i a_i. We set
L(a) = {w ∈ Z^n : ⟨w, a⟩ = 0}.
Lemma 2.1. With notation as above, the map σ induces an isomorphism between
the groups L(a) and L(a′ ).
Proof. If w = (w_1, . . . , w_n) ∈ Z^n with \sum_{i=1}^n w_i a_i = 0 then
\sum_{i=1}^{n} (a_n − a_i) w_i − \left( \sum_{i=1}^{n} w_i \right) a_n = 0,
equivalently ⟨σ(w), a′⟩ = 0 and σ(w) ∈ L(a′). Similarly, if w′ ∈ L(a′) then σ(w′) ∈
L(a′′) = L(a).
If the entries of the vector α = (α_1, . . . , α_n) are nonnegative integers we let x^α =
x_1^{α_1} · · · x_n^{α_n}. For w = (w_1, . . . , w_n) ∈ Z^n, let w^+ and w^− be the unique vectors with
nonnegative entries and disjoint supports such that w = w^+ − w^−. We denote
f_w = x^{w^+} − x^{w^−}. It is clear that f_{−w} = −f_w. Therefore, a difference of two
monomials with disjoint supports can be identified with a vector w ∈ Z^n.
It is known that I(a) = (fw : w ∈ L(a)), hence I(a′ ) = (fσ(w) : w ∈ L(a)).
However, it is not always true that σ maps a minimal generating set (or a Gröbner
basis) for I(a) into a minimal generating set (or a Gröbner basis) for I(a′ ), see
Proposition 3.1 and Remark 3.2.
Theorem 2.2. Let a : a1 , . . . , an be a sequence of nonnegative integers with an > ai
for all i < n. Let G and G ′ be the reduced Gröbner bases of I(a) and I(a′ ), respectively, with respect to a reverse lexicographic monomial order on S = K[x1 , . . . , xn ]
such that xn is the smallest variable. The following conditions are equivalent:
(a) the projective monomial curve C(a) is arithmetically Cohen-Macaulay;
(b) in(fσ(w) ) = in(fw ), for all fw ∈ G;
(c) fw ∈ G ⇐⇒ fσ(w) ∈ G ′ , for all w ∈ Zn .
Proof. We first prove that conditions (a) and (b) are equivalent.
(a) ⇒ (b): We assume that I(a)^h is a Cohen-Macaulay ideal in S[x_0]. We pick
f_w in G, where w = (w_1, . . . , w_n). We denote w′ = σ(w) = (w′_1, . . . , w′_n). Since the
leading coefficient LC(f_w) = 1 we get that in(f_w) = x^{w^+}, hence d = \sum_{i=1}^n w_i ≥ 0
and w′_n = −\sum_{i=1}^n w_i = −d ≤ 0.
By Theorem 1.2 we obtain that x_n does not divide in(f_w), hence w_n ≤ 0. Consequently, (w′)^+ = w^+. Also, \sum_{i=1}^n w′_i = −w_n ≥ 0, and hence deg x^{(w′)^+} ≥ deg x^{(w′)^−}.
We distinguish two cases.
Firstly, if w_n < 0 then deg x^{(w′)^+} > deg x^{(w′)^−}, hence in(f_{w′}) = x^{(w′)^+} = x^{w^+} = in(f_w). Moreover, x^{(w′)^−} = x^{w^−} · x_n^{d+w_n}.
Secondly, in case w_n = 0 we get that deg x^{(w′)^+} = deg x^{(w′)^−}. Now, if d = 0 then w′ = w. If d > 0, for the chosen monomial order we obtain that x^{(w′)^+} > x^{(w′)^−}, because x^{(w′)^−} = x^{w^−} · x_n^d.
Thus in(f_{w′}) = x^{w^+} = in(f_w) in all cases, and property (b) holds.
(b) ⇒ (a): If I(a)^h is not a Cohen-Macaulay ideal, by Theorem 1.2 there exists
f_w in G such that x_n divides in(f_w). Let w = (w_1, . . . , w_n) and w′ = σ(w) =
(w′_1, . . . , w′_n). Since LC(f_w) = 1 we get that in(f_w) = x^{w^+}, hence w_n > 0 and
\sum_{i=1}^n w_i ≥ 0.
There exists i_0 ≠ n such that w_{i_0} > 0, otherwise, since ⟨w, a⟩ = 0 we get that
\sum_{i=1}^{n−1} a_i(−w_i) = w_n a_n ≥ −(w_1 + · · · + w_{n−1}) a_n > −\sum_{i=1}^{n−1} a_i w_i, which is a contradiction.
Since \sum_{i=1}^n w′_i = −w_n < 0, we obtain that deg x^{(w′)^−} > deg x^{(w′)^+} and in(f_{w′}) =
x^{(w′)^−}. As i_0 < n, we have that w′_{i_0} = w_{i_0} > 0, hence x_{i_0} divides x^{(w′)^+}. On the other
hand, condition (b) implies that in(f_{σ(w)}) = in(f_w), and therefore x_{i_0} divides x^{(w′)^−}
as well, which gives a contradiction. We conclude that the projective monomial
curve C(a) is Cohen-Macaulay.
Next we prove that (a),(b) ⇒ (c). From the proof above of the equivalence
(a) ⇐⇒ (b), we see that under the assumption that (a) (hence also (b)) holds, for
all fw in G one has
(2)    tail(f_{σ(w)}) = tail(f_w) · x_n^a for some integer a.
From Theorem 1.2 we have that in(I(a′ )) = in(I(a)), therefore property (b) implies
that G ′′ = {fσ(w) : fw ∈ G} is a minimal Gröbner basis for I(a′ ). We show that it is
reduced.
Let f_w ∈ G with w = (w_1, . . . , w_n) and σ(w) = (w′_1, . . . , w′_n). Then in(f_w) = x^{w^+}
and deg(x^{w^+}) ≥ deg(x^{w^−}). Condition (a) and Theorem 1.2 imply that w_n ≤ 0. Thus
\sum_{i=1}^n w′_i = −w_n ≥ 0, which implies that deg(x^{(w′)^+}) ≥ deg(x^{(w′)^−}).
If w_n = 0 then f_w is homogeneous and either w′ = w (if \sum_{i=1}^n w_i = 0), or x_n
divides x^{(w′)^−} (if \sum_{i=1}^n w_i > 0), hence x^{(w′)^+} = in(f_{σ(w)}) and LC(f_{σ(w)}) = 1.
If w_n < 0 then deg(x^{(w′)^+}) > deg(x^{(w′)^−}), and again LC(f_{σ(w)}) = 1.
If there are f_w and f_{w̃} in G such that in(f_{σ(w)}) divides tail(f_{σ(w̃)}), then, since x_n
is not in the support of in(f_w) = in(f_{σ(w)}), by using (2) we get that in(f_w) divides
tail(f_{w̃}). This contradicts the fact that G is the reduced Gröbner basis for I(a).
Hence G′′ = G′, which proves (c).
For (c) ⇒ (a): If we assume that (c) holds, but I(a)^h is not Cohen-Macaulay, then
by Theorem 1.2 there exists f_w in G such that x_n divides in(f_w) = x^{w^+}. Let σ(w) =
(w′_1, . . . , w′_n). Since \sum_{i=1}^n w′_i = −w_n < 0, it follows that deg(x^{σ(w)^−}) > deg(x^{σ(w)^+}),
hence in(f_{σ(w)}) = x^{σ(w)^−}. On the other hand, property (c) implies that f_{σ(w)} ∈ G′,
hence in(f_{σ(w)}) = x^{σ(w)^+}, which is a contradiction. Therefore, property (a) holds.
This ends the proof of the theorem.
Remark 2.3. The fact that the involution σ maps some (minimal) Gröbner basis
for I(a) into a (minimal) Gröbner basis for I(a′ ) is not enough to imply that the
curve C(a) is arithmetically Cohen-Macaulay.
Indeed, let a : 4, 13, 19. Then a′ : 15, 6, 19. A Singular [8] computation shows
that
G = {y^5 − x^2 z^3, x^3 y^2 − z^2, x^5 z − y^3, x^8 − yz}, and
G′ = {y^5 − x^2, x^3 y^2 − z^3, y^3 z^3 − x^5, x^8 − y z^6}
are the reduced Gröbner bases with respect to the reverse lexicographic order with
x > y > z for the ideals I(a) and I(a′), respectively.
One has G = {fw1 , fw2 , fw3 , fw4 }, where w1 = (−2, 5, −3), w2 = (3, 2, −2), w3 =
(5, −3, 1) and w4 = (8, −1, −1). Since σ(w1 ) = (−2, 5, 0), σ(w2 ) = (3, 2, −3),
σ(w3 ) = (5, −3, −3) and σ(w4 ) = (8, −1, −6) we note that
G ′ = {fσ(w1 ) , fσ(w2 ) , −fσ(w3 ) , fσ(w4 ) }.
This means that {fσ(w1 ) , fσ(w2 ) , fσ(w3 ) , fσ(w4 ) } is a minimal Gröbner basis for I(a′ ),
although different from the reduced one G ′ .
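The Singular computation reported above can be reproduced with other systems as well; the following Python/SymPy sketch recomputes both reduced grevlex Gröbner bases by elimination (output may differ from the lists above only by ordering and sign normalization).

```python
# Sketch verifying Remark 2.3: reduced grevlex Groebner bases of I(a) and I(a')
# for a = (4, 13, 19) and a' = (15, 6, 19), with x > y > z (z smallest).
from sympy import symbols, groebner

t, x, y, z = symbols('t x y z')

def toric_groebner(a):
    elim = groebner([x - t**a[0], y - t**a[1], z - t**a[2]], t, x, y, z, order='lex')
    gens = [g for g in elim.exprs if t not in g.free_symbols]
    return groebner(gens, x, y, z, order='grevlex')

print(toric_groebner((4, 13, 19)).exprs)   # expect y^5 - x^2 z^3, x^3 y^2 - z^2, x^5 z - y^3, x^8 - y z
print(toric_groebner((15, 6, 19)).exprs)   # expect y^5 - x^2, x^3 y^2 - z^3, y^3 z^3 - x^5, x^8 - y z^6
```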
Let a = a1 , . . . , an be a sequence of nonnegative integers with gcd(a1 , . . . , an ) = 1
and an > ai for all i = 1, . . . , n−1. Our next goal is to describe the Cohen-Macaulay
property of C(a) in terms of the Apéry sets of the semigroup generated by a or the
dual sequence a′. We recall that for a numerical semigroup H and 0 ≠ h in H, the
Apéry set of H with respect to h is
Ap(H, h) = {x ∈ H : x − h ∉ H}.
It is known that | Ap(H, h)| = h and that the elements of Ap(H, h) have distinct
residues modulo h.
The sequence a induces a grading on S = K[x_1, . . . , x_n] by letting deg(x_i) = a_i
for all i = 1, . . . , n. For any monomial x^α = x_1^{r_1} · · · x_n^{r_n} we denote its a-degree by
deg_a(x^α) = \sum_{i=1}^n r_i a_i. We denote by H the numerical semigroup generated by a. For
any h in Ap(H, a_n) we denote by ϕ_a(h) the smallest monomial in S (with respect to
a reverse lexicographic monomial order where x_n is the smallest variable) such that
its a-degree equals h. Since h − a_n ∉ H we see that the monomial ϕ_a(h) is in
S′ = K[x_1, . . . , x_{n−1}].
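For small examples the Apéry set itself is easy to compute by brute force; the following Python sketch (an illustration, not code from the paper) lists Ap(H, a_n) by taking, for each residue class modulo a_n, the smallest semigroup element in that class.

```python
# Sketch: Ap(H, a_n) for the numerical semigroup H = <a>, by brute force.
def apery_set(a):
    an = max(a)
    bound = min(a) * an + an          # crude search bound, enough for small examples
    H = {0}
    for m in range(1, bound + 1):
        if any(m - ai in H for ai in a if m - ai >= 0):
            H.add(m)
    return sorted(min(h for h in H if h % an == i) for i in range(an))

print(apery_set((4, 13, 19)))          # 19 elements, one per residue class mod 19
print(len(apery_set((4, 13, 19))))     # 19
```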
Proposition 2.4. With notation as above, for any h in Ap(H, an ) the monomial
ϕa (h) ∈
/ (xn , in(I(a)))S.
Proof. Let h ∈ Ap(H, an ) and ϕa (h) = xα . Assume that xα ∈ in(xn , I(a)). Then
xα = in(F ) for some F in (xn , I(a)). Since the ideal (xn , I(a)) is generated by
monomials and binomials which are the difference of two monomials with the same
a-degree, without loss of generality we may assume that F is a-homogeneous. Thus
we may write
xα = in(xn f + f1 fw1 + · · · + fq fwq ),
where w1 , . . . , wq ∈ L(a), and f, f1 , . . . , fq are a-homogeneous with dega (xn f ) =
dega (f1 fw1 ) = · · · = dega (fq fwq ).
We notice that f = 0, otherwise h = dega (xα ) = dega (xn f ) = an + dega (f ),
gives that h − an ∈ H, which is false. Hence xα ∈ in(I(a)). The ideal I(a) has
a Gröbner basis of binomials, hence we can write xα = m · in(fw ) for some binomial fw ∈ I(a) and m a monomial in S ′ . Without loss of generality, we may
+
+
−
−
assume in(fw ) = xw . Thus xw > xw , which gives ϕa (h) = xα > xw m. But
−
dega (w + ) = dega (w − ), hence dega (xα ) = dega (xw m), which contradicts the choice
of xα . Therefore, ϕa (h) ∈
/ (xn , in(I(a))).
If we identify a monomial which is in S and not in in(xn , I(a)) with its residue class
modulo the monomial ideal in(xn , I(a)), by Proposition 2.4 the assignment ϕa (−)
defines a map from Ap(H, an ) into Mon(S/ in(xn , I(a))), the K-basis of monomials
of S/ in(xn , I(a)). We prove that this is a bijection.
Proposition 2.5. The map ϕa : Ap(H, an ) → Mon(S/ in(xn , I(a))) is bijective.
Proof. Let h, h′ in Ap(H, an ) with ϕa (h) = ϕa (h′ ). Then h = dega (ϕa (h)) =
dega (ϕa (h′ )) = h′ , and the map ϕa is injective.
By Macaulay’s theorem ([9, Theorem 2.6]) the monomials in S which do not
belong to in(xn , I(a)) form a K-basis for S/(xn , I(a)). Therefore,
| Mon(S/ in(xn , I(a)))| = dimK S/(xn , I(a))
= dimK K[H]/(tan ) = an .
Since | Ap(H, an )| = an , we conclude that the map ϕa is bijective.
Theorem 2.6. Let a : a1 , . . . , an be a sequence of distinct positive integers such that
gcd(a1 , . . . , an ) = 1 and an > ai for all i = 1, . . . , n − 1. We denote a′ the dual
sequence of a. Let H and H ′ be the numerical semigroups generated by a and a′ ,
respectively. The following statements are equivalent:
(a) the projective monomial curve C(a) is arithmetically Cohen-Macaulay;
(b) in(xn , I(a)) = in(xn , I(a′ ));
(c) Mon(S/ in(xn , I(a))) = Mon(S/ in(xn , I(a′ )));
(d) dega′ (ϕa (h)) ∈ Ap(H ′ , an ) for all h in Ap(H, an ),
where the initial ideals are taken with respect to the reverse lexicographic term order
on S.
Proof. Assume (a) holds. It follows from Theorem 1.2 that in(xn , I(a)) = (xn , in(I(a)))
and in(xn , I(a′ )) = (xn , in(I(a′ ))). We get from Lemma 1.1 that G(in(I(a))) =
G(in(I(a)^h)) = G(in(I(a′)^h)) = G(in(I(a′))), hence the statement (b) is true.
Clearly, the properties (b) and (c) are equivalent.
We now prove that (b) ⇐⇒ (d). Assume that (b) holds. Let h ∈ Ap(H, an ). By
Proposition 2.4 we have that the monomial ϕa (h) is not in in(xn , I(a)), hence it is
not in in(xn , I(a′ )). Using Proposition 2.5 we get that ϕa (h) = ϕa′ (h′ ) for some h′
in Ap(H ′ , an ). Hence dega′ (ϕa (h)) ∈ Ap(H ′, an ), which proves (d).
Conversely, we assume that (d) holds and we consider the monomial xα not in
in(xn , I(a)). By Proposition 2.5 there exists h in Ap(H, an ) such that ϕa (h) = xα .
Property (d) implies that there exists h′ in Ap(H ′, an ) such that h′ = dega′ (xα ),
which by Proposition 2.4 gives that xα ∈
/ in(xn , I(a′ )). Hence (d) ⇒ (b).
To finish the proof of the theorem we are left to show that (b) ⇒ (a). Assume
in(xn , I(a)) = in(xn , I(a′ )). By Theorem 1.2, it is enough to prove that xn does
not divide any monomial in G(in(I(a))). Assume there exists a monomial u · xcn in
G(in(I(a))) with u not divisible by xn and c > 0. Then u is not a constant, otherwise,
rn−1
in
since I(a) has a Gröbner basis of binomials, there exists f = xcn − xr11 . . . xn−1
Pn−1
c
I(a) with xn = in(f ). This implies that we have a relation c · an = i=1 ri ai with
Pn−1
ri , which is false since an > ai for all i < n.
c ≥ i=1
Let u · xcn = in(fw ), where w = (w1 , . . . , wn ) ∈ L(a). Without loss of generality we
+
−
may assume in(fw ) = xw , hence wn = c. Set v = xw and d = deg(u · xcn ) − deg v.
Then d > 0 by the above discussion. The sum of the components of σ(w) equals
Pn−1
Pn
−
σ(w)+
) < deg(xσ(w) ) and
i=1 wi + (−
i=1 wi ) = −wn = −c < 0, hence deg(x
fσ(w) = u − xdn · v. This gives that
u = xdn · v + fσ(w) ∈ (xn , I(a′ )),
and also that u ∈ in(xn , I(a′ )), which by our hypothesis (b) implies that u ∈
in(xn , I(a)). Since the ideal I(a) is a-homogeneous we can write
u = in(xn f + f1 fz1 + · · · + fq fzq ),
where z1 , . . . , zq ∈ L(a), and f, f1 , . . . , fq are a-homogeneous with dega (xn f ) =
dega (f1 fz1 ) = · · · = dega (fq fzq ). We see that f 6= 0, otherwise u ∈ in(I(a)), which
contradicts the fact that u · xcn is a minimal monomial generator for in(I(a)).
Let h = dega (u). Since f 6= 0 we get that h − an ∈ H. We may write h =
h1 + λn an with λn a maximal positive integer and h1 ∈ H, i.e. h1 ∈ Ap(H, an ). Let
λn−1
u1 = ϕa (h1 ) = xλ1 1 . . . xn−1
. Then the binomial f1 = u1 xλnn −u is in I(a). As u · xcn ∈
G(in(I(a))) with c > 1, we get that u ∈
/ in(I(a)), hence in(f1 ) = u1 · xλnn ∈ in(I(a)).
By Proposition 2.4, u1 ∈
/ in(xn , I(a)), hence u1 ∈
/ in(I(a)), as well. This implies
that u1 · xλnn is divisible by a monomial u2 xen ∈ G(in(I(a)) with xn and u2 coprime,
e > 0. Therefore u2 divides u1 , hence dega (u2 ) + h2 = dega (u1 ) for some positive h2
in H. This gives dega (u2 ) ∈ Ap(H, an ).
We may write u2 · xen = in(fwe ) with w
e ∈ L(a), and arguing as before we get that
d′
′
u2 = xn · v1 + fσ(w)
e ∈ in(xn , I(a)) for some positive d . Thus
u2 = in(xn f ′ + f1′ fz1′ + · · · + fℓ′ fzℓ′ ),
where z1′ , . . . , zℓ′ ∈ L(a), and f ′ , f1′ , . . . , fq′ are a-homogeneous with dega (xn f ′ ) =
dega (f1′ fz1′ ) = · · · = dega (fℓ′ fzℓ′ ). If f ′ = 0, then u2 ∈ in(I(a)), which is false since
u2 · xen ∈ G(in(I(a))). On the other hand, f ′ 6= 0 implies that an + dega (f ′ ) =
dega (u2 ), hence dega (u2 ) ∈
/ Ap(H, an ), which is also false. Therefore xn does not
divide any monomial in G(in(I(a))). This concludes the proof of the implication
(b) ⇒ (a) and of the theorem.
Let Ap(H, a_n) = {0, ν_1, . . . , ν_{a_n−1}}. We may assume that ν_i ≡ i mod a_n for
all i. For each ν_i, let µ_i ∈ H′ be the smallest element such that (ν_i, µ_i) ∈ H̃, the affine
semigroup generated by the columns of the matrix A in (1).
Note that µ_i ≡ −i mod a_n for all i. Cavaliere and Niesi [5] call Ap(H, a_n) good, if
{0, µ_1, . . . , µ_{a_n−1}} = Ap(H′, a_n).
As a consequence of Theorem 2.6 we obtain
Corollary 2.7. (Cavaliere-Niesi, [5, Theorem 4.6]) The projective monomial curve
C(a) is arithmetically Cohen-Macaulay if and only if Ap(H, an ) is good.
Proof. Let ν_i = \sum_{j=1}^{n−1} r_j a_j with integer coefficients r_j ≥ 0 and \sum_{j=1}^{n−1} r_j
minimal. Then µ_i = \sum_{j=1}^{n−1} r_j (a_n − a_j). Thus µ_i = (\sum_{j=1}^{n−1} r_j) a_n − ν_i with \sum_{j=1}^{n−1} r_j
minimal and \sum_{j=1}^{n−1} r_j a_j = ν_i.
This shows that if ϕ_a(ν_i) = \prod_{j=1}^{n−1} x_j^{s_j}, then deg_{a′}(ϕ_a(ν_i)) = \sum_{j=1}^{n−1} s_j (a_n − a_j) =
(\sum_{j=1}^{n−1} s_j) a_n − \sum_{j=1}^{n−1} s_j a_j = (\sum_{j=1}^{n−1} r_j) a_n − ν_i = µ_i. Hence Theorem 2.6 (a) ⇐⇒ (d)
yields the desired conclusion.
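The "good Apéry set" criterion is also easy to test numerically. The sketch below (an illustration; the helpers apery_set and min_representation are assumptions of this sketch, not functions from the paper) computes the companion values µ_i from a minimal representation of each ν_i, as in the proof above, and compares them with Ap(H′, a_n).

```python
# Sketch: numerically testing the Cavaliere-Niesi criterion of Corollary 2.7.
from itertools import product

def apery_set(a):
    an = max(a)
    bound = min(a) * an + an          # crude bound, enough for small examples
    H = {0}
    for m in range(1, bound + 1):
        if any(m - ai in H for ai in a if m - ai >= 0):
            H.add(m)
    return {min(h for h in H if h % an == i) for i in range(an)}

def min_representation(nu, a):
    """Nonnegative (r_1,...,r_{n-1}) with sum r_j a_j = nu and minimal total degree."""
    best = None
    ranges = [range(nu // aj + 1) for aj in a[:-1]]
    for r in product(*ranges):
        if sum(rj * aj for rj, aj in zip(r, a)) == nu:
            if best is None or sum(r) < sum(best):
                best = r
    return best

def is_good(a):
    an = max(a)
    a_dual = tuple(an - ai for ai in a[:-1]) + (an,)
    mus = set()
    for nu in apery_set(a):
        r = min_representation(nu, a)
        mus.add(sum(rj * (an - aj) for rj, aj in zip(r, a)))
    return mus == apery_set(a_dual)

print(is_good((4, 13, 19)))   # expected False: C(4,13,19) is not arithmetically CM
```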
3. A bound for the number of generators of in(I(a)), when C(a) is
arithmetically Cohen-Macaulay
In this section we show by examples that the number of generators of in(I(a))
may be arbitrarily large, already if a has only 3 elements.
Proposition 3.1. For the integer h ≥ 2, let a = 4, 6h+1, 6h+7. Then µ(in(I(a))) =
h + 2, where the initial ideal is computed with respect to the reverse lexicographic
monomial order with x1 > x2 > x3 .
Proof. We first find I(a) using the method from [11]. For 1 ≤ i ≤ 3 we let ci be the
smallest positive integer such that
(3)
ci ai = rij aj + rik ak ,
with rij , rik nonnegative integers and {i, j, k} = {1, 2, 3}. Since a1 , a2 , a3 are pairwise
coprime, it is known from [11] that the rij ’s are unique and positive, and ci = rji +rki
for all {i, j, k} = {1, 2, 3}.
From the equations (3h + 2)a1 = a2 + a3 and 2a3 = 3a1 + 2a2 we find c1 = 3h + 2,
c3 = 2 and the corresponding rij ’s from (3). Hence c2 = 3 and 3a2 = (3h−1)a1 +a3 is
the corresponding equation from (3). According to [11], the ideal I(a) is minimally
generated by f_1 = x_1^{3h+2} − x_2 x_3, f_3 = x_1^3 x_2^2 − x_3^2 and g_1 = x_1^{3h−1} x_3 − x_2^3.
We introduce recursively the polynomials g_{i+1} = S(g_i, f_3) for 1 ≤ i ≤ h − 1. It
follows easily by induction that g_i = x_1^{3(h−i)+2} x_3^{2i−1} − x_2^{2i+1}, for 1 ≤ i ≤ h. We claim
that
(4)    G = {f_1, g_1, . . . , g_h, f_3}
is the reduced Gröbner basis of I(a).
To prove that G is a Gröbner basis we need to check that the S-pairs of elements
in G reduce to zero with respect to G, see [9, Theorem 2.14]. Here are the relevant
computations.
G
S(f1 , f3 ) = x22 − x3h−1
f3 = x3 (x32 − x3h−1
x3 ) = −x3 g1 → 0.
1
1
G
S(gh , f1 ) → 0 since gcd(in(gh )), in(f1 )) = 1.
G
S(gh , f3 ) = x31 gh − x2h−1
f3 = x23 gh−1 → 0.
2
For 1 ≤ i ≤ h − 1 :
G
2i−1
3i 2i
2
3 2
S(gi , f1 ) = x3i
f1 = x2 (x2i
1 gi −x3
3 −x1 x2 ) = x2 ·(x3 −x1 x2 )·(. . . ) = x2 ·f3 ·(. . . ) →
0, and
G
S(gi , f3 ) = gi+1 → 0.
For 1 ≤ i < j < h:
2(i+1) 3(j−i) 2(j−i)
2(j−i) 2i+1
2(j−i)
3(j−i)
3(j−i) 2j+1
x2 = x2
(x1
x2
−
S(gi , gj ) = x3
gi −x1
gj = x1
x2 −x3
G
2(j−i)
2(i+1)
x3
) = x2
· f3 · (· · · ) → 0.
G
For 1 ≤ i < h we have that S(gi , gh ) → 0, since gcd(in(gi ), in(gh )) = 1.
By inspecting the binomials in G it follows that they are in fact the reduced
Gröbner basis for I(a). This shows that µ(in(I(a)) = |G| = h + 2.
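The count µ(in(I(a))) = h + 2 can also be confirmed by direct computation for small h; the following Python/SymPy sketch (an illustration) recomputes the reduced grevlex Gröbner basis of I(a) by elimination and prints its size.

```python
# Sketch verifying Proposition 3.1 for small h: for a = (4, 6h+1, 6h+7), the
# reduced grevlex Groebner basis (x1 > x2 > x3) of I(a) has h + 2 elements.
from sympy import symbols, groebner

t, x1, x2, x3 = symbols('t x1 x2 x3')

def reduced_gb(a):
    elim = groebner([x1 - t**a[0], x2 - t**a[1], x3 - t**a[2]], t, x1, x2, x3, order='lex')
    gens = [g for g in elim.exprs if t not in g.free_symbols]
    return groebner(gens, x1, x2, x3, order='grevlex')

for h in (2, 3):
    a = (4, 6*h + 1, 6*h + 7)
    print(h, len(reduced_gb(a).exprs))   # expected h + 2
```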
Remark 3.2. The sequence dual to the one from Proposition 3.1 is
a′ = 6h + 3, 6, 6h + 7.
It is easy to check (using [11]) that the corresponding toric ideal is a complete
intersection I(a′) = (x_2^{2h+1} − x_1^2, x_1^3 x_2^2 − x_3^3). In particular, this shows that the image
through the involution σ of a minimal set of binomial generators for I(a) may no
longer be a minimal generating system for I(a′ ).
Arguing as in the proof of Proposition 3.1, it is routine to verify that for the
reduced Gröbner basis G in (4), the set {fσ(w) : fw ∈ G} is a minimal Gröbner basis
for I(a′ ). The latter set of binomials is fully interreduced, yet it is not the reduced
Gröbner basis for I(a′ ) since the leading coefficients of the binomials coming from
g1 , . . . , gh−1 equal −1.
From Theorem 2.2 we infer that C(a) is not arithmetically Cohen–Macaulay. This
can also be seen from the fact that in(g_1) = x_1^{3h−1} x_3 ∈ G(in(I(a))) (as h > 1) and
using Theorem 1.2.
If C(a) is arithmetically Cohen-Macaulay, we give an explicit bound for µ(in< (I(a)))
depending on an , the largest element of the sequence a. To prove this we first show
Lemma 3.3. Let n ≥ 2 and I ⊂ S = K[x_1, . . . , x_n] be a graded ideal with m^k ⊂ I,
where m = (x_1, . . . , x_n). Let < be a monomial order on S. Then µ(I) ≤ µ(in_<(I)) ≤
µ(m^k) = \binom{n+k−1}{n−1}, and µ(I) = \binom{n+k−1}{n−1} if and only if I = m^k.
Proof. It suffices to show that for a monomial ideal J ⊂ S with mk ⊂ J, one has
µ(J) ≤ µ(mk ), and µ(J) = µ(mk ), if and only if J = mk . We prove this by induction
on k − a, where a is the least degree of a monomial generator of J. If a = k,
the assertion is trivial. Suppose now that a < k. We denote by G(J) the unique
minimal set of monomial generators of J, and set G(J)j = {u ∈ G(J) : deg u = j}
for all j, and let J ′ = mJa + J≥a+1 , where J≥a+1 = (u ∈ J : deg u ≥ a + 1). Then
G(J ′ )j = 0 for j < a + 1 and mk ⊂ J ′ . Therefore, by our induction hypothesis,
we have µ(J ′ ) ≤ µ(mk ). On the other hand, G(J ′ )a+1 is the disjoint union of
G(mJ)a+1 and G(J)a+1 . Furthermore, G(J ′ )j = G(J)j for j > a + 1. Hence, since
|G(J)a | < |G(mJ)a+1 |, it follows that µ(J) = |G(J)| < |G(J ′ )| = µ(J ′ ) ≤ µ(mk ), as
desired.
Proposition 3.4. Suppose that the monomial curve C(a) is arithmetically Cohen–
Macaulay. Then µ(in(I(a))) ≤ \binom{a_n}{n−2}.
Proof. As before we assume that a = a1 , . . . , an with an > ai for all i, and we let
H be the numerical semigroup generated by a. Then I(a) ⊂ S = K[x1 , . . . , xn ] and
G(in< (I(a)) ⊂ S̄ = K[x1 , . . . , xn−1 ], by Theorem 1.2. Therefore,
length(S̄/ in_<(I(a))) = length(S/(x_n, in_<(I(a)))) = length(S/(x_n, I(a))) =
length(K[H]/(t^{a_n})) = a_n.
Let k be the smallest number such that (x_1, . . . , x_{n−1})^k ⊂ in_<(I(a)). Then
a_n = \sum_{j=0}^{k−1} \dim_K (S̄/ in_<(I(a)))_j = 1 + (n − 1) + \sum_{j=2}^{k−1} \dim_K (S̄/ in_<(I(a)))_j
    ≥ 1 + (n − 1) + (k − 2) = (n − 2) + k.
Thus, k ≤ a_n − (n − 1) + 1, and hence by Lemma 3.3 we get µ(in_<(I(a))) ≤ \binom{(n−1)+k−1}{n−2} ≤ \binom{a_n}{n−2}.
4. Applications
In this section we use the criteria in Theorem 1.2 to test the arithmetically CohenMacaulay property for two families of projective monomial curves.
4.1. Bresinsky semigroups. In [3] Bresinsky introduced the semigroup
B_h = ⟨(2h − 1)2h, (2h − 1)(2h + 1), 2h(2h + 1), 2h(2h + 1) + 2h − 1⟩,
where h ≥ 2. He showed that the toric ideal I_{B_h} ⊂ S = K[x, y, z, t] is minimally
generated by more than 2h binomials. Based on that, in [13, Section 3.3] it is proved
that
F = {xt − yz} ∪ {z^{i−1} t^{2h−i} − x^{i+1} y^{2h−i} : 1 ≤ i ≤ 2h} ∪ {x^{2h+1−j} z^j − y^{2h−j} t^j : 0 ≤ j ≤ 2h − 2}
is a minimal generating set for I_{B_h}. Combining the generators corresponding to
i = 2h and j = 1 we get that
u = −z(z^{2h−1} − x^{2h+1}) − x(x^{2h} z − y^{2h−1} t) = x y^{2h−1} t − z^{2h} ∈ I_{B_h},
hence in(u) = x y^{2h−1} t ∈ in(I_{B_h}), where one uses the reverse lexicographic monomial
order with x > y > z > t.
If the projective monomial curve associated to Bh were Cohen-Macaulay, then, as
the generators of Bh above are listed increasingly, by Theorem 1.2 we obtain that
xy 2h−1 ∈ in(IBh ). The ideal IBh is generated by binomials with disjoint support,
hence in the reduced Gröbner basis of IBh there exists v = xy d − z α tβ with in(v) =
xy d , 0 < d ≤ 2h − 1 and α, β nonnegative integers. We denote a1 , . . . , a4 the
generators of Bh in the given order. Since a1 < a2 < a3 < a4 we have (d + 1)a2 >
a1 + da2 = αa3 + βa4 > (α + β)a2 , hence α + β ≤ 2h − 1.
If α + β = 2h − 1, after adding to v the binomial z α tβ − y β xα+2 from the given
minimal generating set of IBh , we obtain that xy d − xα+2 y β ∈ IBh . Thus β ≤ d and
y d−β − xα+1 ∈ IBh , which is false, since d < 2h and one can see from F that 2h · a2 is
the smallest positive multiple of a2 which is in the semigroup generated by a1 , a3 , a4 .
Thus α + β < 2h − 1. If we denote Ī = I_{B_h} mod x ⊂ K[y, z, t], then given F it
follows that Ī = (yz) + (t, z)^{2h−1} + y^2 (y, t)^{2h−1}. It is easy to see that the monomial
v̄ = z^α t^β is not in Ī, which is a contradiction.
Therefore, we proved the following proposition, which was first obtained by Cavaliere and Niesi in [5, Remark 5.4], as an application of their criterion from Corollary
2.7.
Proposition 4.1. (Cavaliere and Niesi, [5]) The projective monomial curve associated to Bh is not arithmetically Cohen-Macaulay, for any h ≥ 2.
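For h = 2, the key algebraic manipulation used above is easy to check symbolically; the following Python/SymPy sketch (an illustration) expands the stated combination and confirms that both monomials of u have the same B_h-degree, so u lies in I_{B_h}.

```python
# Sketch for h = 2 (B_2 = <12, 15, 20, 23>): verify that
# u = -z(z^{2h-1} - x^{2h+1}) - x(x^{2h} z - y^{2h-1} t) equals x y^{2h-1} t - z^{2h},
# and that both monomials have the same B_h-degree.
from sympy import symbols, expand

x, y, z, t = symbols('x y z t')
h = 2
u = expand(-z*(z**(2*h - 1) - x**(2*h + 1)) - x*(x**(2*h)*z - y**(2*h - 1)*t))
print(u)                                   # x*y**3*t - z**4

a = ((2*h - 1)*2*h, (2*h - 1)*(2*h + 1), 2*h*(2*h + 1), 2*h*(2*h + 1) + 2*h - 1)
deg = lambda exps: sum(e*ai for e, ai in zip(exps, a))
print(deg((1, 2*h - 1, 0, 1)), deg((0, 0, 2*h, 0)))   # equal degrees -> u in I_{B_h}
```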
4.2. Arslan semigroups. For h ≥ 2, let
A_h = ⟨h(h + 1), h(h + 1) + 1, (h + 1)^2, (h + 1)^2 + 1⟩.
This family of numerical semigroups was studied by Arslan who shows in [1, Proposition 3.2] that the defining ideal of K[A_h] is
(5)    I_{A_h} = (x^{h−i} z^{i+1} − y^{h−i+1} t^i : 0 ≤ i < h) + (x^{i+1} y^{h−i} − z^i t^{h−i} : 0 ≤ i ≤ h) + (xt − yz).
Proposition 4.2. The projective monomial curve associated to A_h is arithmetically
Cohen-Macaulay, for any h ≥ 2.
Proof. Letting g_i = x^{h−i} z^{i+1} − y^{h−i+1} t^i for 0 ≤ i ≤ h, f_i = x^{i+1} y^{h−i} − z^i t^{h−i} for
0 ≤ i ≤ h and f = xt − yz, we claim that G = {g_0, . . . , g_h, f_0, . . . , f_h, f} is the
reduced Gröbner basis of I_{A_h} with respect to the reverse lexicographic term order
with x > y > z > t. As a consequence, by inspecting the leading monomials we
may use Theorem 1.2 to conclude the statement of the proposition.
We show that all the S-pairs of binomials in G reduce to 0 with respect to G,
and consequently by Buchberger’s criterion ([9, Theorem 2.14]) it follows that G is
a Gröbner basis for IAh .
G
S(g0 , f ) = zg0 + y h f = xh z 2 − xy h t = xg1 → 0.
For 1 ≤ i ≤ h:
G
S(gi , f ) = ygi − xh−i z i f = xh−i+1 z i t − y h−i+2ti = tgi−1 → 0.
For 0 ≤ i < h:
G
S(fi , f ) = zfi − xi+1 y h−i−1f = xi+2 y h−i−1t − z i+1 th−i = tfi+1 → 0.
G
Also, S(fh , f ) → 0 since gcd(in(fh ), in(f )) = gcd(xh+1 , yz) = 1.
For 1 ≤ i ≤ h:
G
S(gi , g0 ) → 0 since gcd(in(gi ), in(g0 )) = 1.
For 0 ≤ i ≤ h:
S(fi , g0 ) = y i+1fi + xi+1 g0 = xh+i+1 z − y i+1 z i th−i = xi (xh+1 − z h )z + z i (xi z h−i+1 −
G
y i+1 th−i ) = xi zfa + z i gh−i → 0.
For 0 ≤ j < i ≤ h:
S(gj , gi ) = z i−j gj − xi−j gi = y h−j+1z i−j tj − xi−j y h−i+1ti = y h−i+1tj (y i−j z i−j −
G
xi−j ti−j ) = y h−i+1tj · f · (. . . ) → 0.
For 1 ≤ i ≤ a, 0 ≤ j ≤ a with i ≤ h − j − 1, i.e. i + j < h:
S(fj , gi ) = xh−i−j−1z i+1 fj −y h−j gi = y 2h−i−j+1ti −xh−i−j−1 z i+j+1 th−j = y h−i−j ti (y h+1−
G
xh z) + xh−i−j−1 zti (xi+j+1 y h−i−j − z j+i th−j−i ) = −ti y h−i−j g0 + xh−i−j−1zti fi+j → 0.
For 1 ≤ i ≤ a, 0 ≤ j ≤ a with i > h − j − 1, i.e. i + j ≥ h:
S(fj , gi ) = z i+1 fj − xi+j+1−h y h−j gi = xi+j+1−h y 2h−i−j+1ti − z i+j+1 th−j =
G
yti (xi+j+1−h y 2h−i−j −z i+j−h t2h−i−j )+z i+j−h th−j (yth −z h+1 ) = yti fi+j−h +z i+j−h th−j gh →
0.
For 0 ≤ j < i ≤ a:
S(fj , fi ) = xi−j fj − y i−j fi = y i−j z i th−i − xi−j z j th−j = z j th−i (y i−j z i−j − xi−j ti−j ) =
G
z j th−i (yz − xt) · (. . . ) = z j th−i · f · (. . . ) → 0.
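The conclusion can also be checked computationally for a small member of the family; the Python/SymPy sketch below (an illustration) recomputes, for h = 2, the toric ideal of A_2 = ⟨6, 7, 9, 10⟩ by elimination and verifies that no leading monomial of its reduced grevlex Gröbner basis is divisible by the smallest variable t, as required by Theorem 1.2(d).

```python
# Sketch verifying Proposition 4.2 for h = 2, i.e. A_2 = <6, 7, 9, 10>.
from sympy import symbols, groebner, LT

s, x, y, z, t = symbols('s x y z t')
a = (6, 7, 9, 10)

elim = groebner([x - s**a[0], y - s**a[1], z - s**a[2], t - s**a[3]],
                s, x, y, z, t, order='lex')
gens = [g for g in elim.exprs if s not in g.free_symbols]
G = groebner(gens, x, y, z, t, order='grevlex')
leading = [LT(g, x, y, z, t, order='grevlex') for g in G.exprs]
print(leading)
print('t divides some leading monomial:', any(m.has(t) for m in leading))   # expect False
```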
Acknowledgement. We gratefully acknowledge the use of the Singular ([8])
software for our computations. Dumitru Stamate was supported by the University
of Bucharest, Faculty of Mathematics and Computer Science, through the 2017
Mobility Fund.
References
[1] F. Arslan, Cohen-Macaulayness of tangent cones, Proc. Amer. Math. Soc. 128 (1999), 2243–
2251.
[2] F. Arslan, P. Mete, M. Şahin, Gluing and Hilbert functions of monomial curves, Proc. Amer.
Math. Soc. 137 (2009), 2225–2232.
[3] H. Bresinsky, On prime ideals with generic zero xi = tni , Proc. Amer. Math. Soc. 47 (1975),
329–332.
[4] H. Bresinsky, P. Schenzel, W. Vogel, On Liaison, Arithmetical Buchsbaum Curves and Monomial Curves in P3 , J. Algebra 86 (1984), 283–301.
[5] M.P. Cavaliere, G. Niesi, On monomial curves and Cohen-Macaulay type, Manuscripta math.
42 (1983), 147–159.
[6] M. Cimpoeaş, D.I. Stamate, Gröbner–nice pairs of ideals, in preparation.
[7] D. Cox, J. Little, D. O’Shea, Ideals, Varieties, and Algorithms, Third Edition, Springer, 2007.
[8] W. Decker, G.-M. Greuel, G. Pfister, H. Schönemann, Singular 3-1-6 — A computer algebra
system for polynomial computations. http://www.singular.uni-kl.de (2012).
[9] V. Ene, J. Herzog, Gröbner bases in commutative algebra, Graduate Studies in Mathematics
130, American Mathematical Society, 2012.
[10] S. Goto, N. Suzuki, K.-i Watanabe, On affine semigroup rings, Japan J. Math. 2 (1976), 1–12.
[11] J. Herzog, Generators and relations of Abelian semigroups and semigroup rings, Manuscripta
Math. 3 (1970), 175–193.
[12] J. Herzog, T. Hibi, Monomial Ideals, Graduate Texts in Mathematics 260, Springer–Verlag,
London, 2011.
[13] J. Herzog, D.I. Stamate, On the defining equations of the tangent cone of a numerical semigroup ring, J. Algebra 418 (2014), 8–28.
[14] Y. Kamoi, Defining ideals of Cohen-Macaulay semigroup rings, Comm. Alg. 20 (1992), 3163–
3189.
[15] Y. Kamoi, Defining ideals of Buchsbaum semigroup rings, Nagoya Math. J. 136 (1994), 115–
131.
[16] M. Morales, N. Thi Dung, Gröbner basis, a “pseudo-polynomial” algorithm for computing the
Frobenius number, Preprint 2015, arXiv:1510.01973 [math.AC].
[17] B.H. Roune, Solving thousand-digit Frobenius problems using Gröbner bases, J. Symb. Comput. 43 (2008), 1–7.
[18] T. Vu, Periodicity of Betti numbers of monomial curves, J. Algebra 418 (2014), 66–90.
Jürgen Herzog, Fachbereich Mathematik, Universität Duisburg-Essen, Campus
Essen, 45117 Essen, Germany
E-mail address: [email protected]
Dumitru I. Stamate, Faculty of Mathematics and Computer Science, University
of Bucharest, Str. Academiei 14, Bucharest – 010014, Romania
E-mail address: [email protected]
| 0 |
On the convergence of discrete-time linear systems: A linear time-varying
Mann iteration converges iff the operator is strictly pseudocontractive
arXiv:1803.10469v1 [math.OC] 28 Mar 2018
Giuseppe Belgioioso
Filippo Fabiani
Abstract— We adopt an operator-theoretic perspective to
study convergence of linear fixed-point iterations and discretetime linear systems. We mainly focus on the so-called
Krasnoselskij–Mann iteration x(k + 1) = (1 − αk )x(k) +
αk Ax(k), which is relevant for distributed computation in
optimization and game theory, when A is not available in a
centralized way. We show that convergence to a vector in the
kernel of (I − A) is equivalent to strict pseudocontractiveness
of the linear operator x 7→ Ax. We also characterize some
relevant operator-theoretic properties of linear operators via
eigenvalue location and linear matrix inequalities. We apply
the convergence conditions to multi-agent linear systems with
vanishing step sizes, in particular, to linear consensus dynamics
and equilibrium seeking in monotone linear-quadratic games.
I. I NTRODUCTION
State convergence is the quintessential problem in multiagent systems. In fact, multi-agent consensus and cooperation, distributed optimization and multi-player game theory
revolve around the convergence of the state variables to
an equilibrium, typically unknown a-priori. In distributed
consensus problems, agents interact with their neighboring
peers to collectively achieve global agreement on some value
[1]. In distributed optimization, decision makers cooperate
locally to agree on primal-dual variables that solve a global
optimization problem [2]. Similarly, in multi-player games,
selfish decision makers exchange local or semi-global information to achieve an equilibrium for their inter-dependent
optimization problems [3]. Applications of multi-agent systems with guaranteed convergence are indeed vast, e.g.
include power systems [4], [5], demand side management
[6], network congestion control [7], [8], social networks [9],
[10], robotic and sensor networks [11], [12].
From a general mathematical perspective, the convergence
problem is a fixed-point problem [13], or equivalently, a zerofinding problem [14]. For example, consensus in multi-agent
systems is equivalent to finding a collective state in the kernel
of the Laplacian matrix, i.e., in operator-theoretic terms, to
finding a zero of the Laplacian, seen as a linear operator.
Fixed-point theory and monotone operator theory are
then key to study convergence to multi-agent equilibria
G. Belgioioso is with the Control Systems group, TU Eindhoven,
The Netherlands. F. Fabiani is with the Department of Information
Engineering, University of Pisa, Italy. F. Blanchini is with the
Department of Mathematics and Informatics, University of Udine,
Italy. S. Grammatico is with the Delft Center for Systems and
Control (DCSC), TU Delft, The Netherlands. E-mail addresses:
[email protected],
[email protected],
[email protected], [email protected].
This work was partially supported by NWO under research projects
OMEGA (grant n. 613.001.702) and P2P-TALES (grant n. 647.003.003).
Franco Blanchini
Sergio Grammatico
[15]. For instance, Krasnoselskij–Mann fixed-point iterations
have been adopted in aggregative game theory [16], [17],
monotone operator splitting methods in distributed convex
optimization [18] and monotone game theory [19], [3], [20].
The main feature of the available results is that sufficient
conditions on the problem data are typically proposed to
ensure global convergence of fixed-point iterations applied
on nonlinear mappings, e.g. compositions of proximal or
projection operators and linear averaging operators.
Differently from the literature, in this paper, we are
interested in necessary and sufficient conditions for convergence, hence we focus on the three most popular fixedpoint iterations applied on linear operators, that essentially
are linear time-varying systems with special structure. The
motivation is twofold. First, there are still several classes of
multi-agent linear systems where convergence is the primary
challenge, e.g. in distributed linear time-varying consensus
dynamics with unknown graph connectivity (see Section VIA). Second, fixed-point iterations applied on linear operators
can provide non-convergence certificates for multi-agent dynamics that arise from distributed convex optimization and
monotone game theory (Section VI-B).
Our main contribution is to show that the Krasnoselskij–
Mann fixed-point iterations, possibly time-varying, applied
on linear operators converge if and only if the associated matrix has certain spectral properties (Section III). To motivate
and achieve our main result, we adopt an operator-theoretic
perspective and characterize some regularity properties of
linear mappings via eigenvalue location and properties, and
linear matrix inequalities (Section IV). In Section VII, we
conclude the paper and indicate one future research direction.
Notation: R, R≥0 and C denote the set of real, nonnegative real and complex numbers, respectively. Dr :=
{z ∈ C | |z − (1 − r)| ≤ r} denotes the disk of radius
r > 0 centered in (1 − r, 0), see Fig. 1 for some graphical
examples. H (k·k) denotes a finite-dimensional Hilbert space
with norm ‖·‖. S^n_{≻0} is the set of positive definite symmetric
matrices and, for P ∈ S^n_{≻0}, ‖x‖_P := \sqrt{x^⊤ P x}. Id denotes
the identity operator. R(·) := \begin{bmatrix} \cos(·) & −\sin(·) \\ \sin(·) & \cos(·) \end{bmatrix} denotes the
rotation operator. Given a mapping T : Rn → Rn , fix(T ) :=
{x ∈ Rn | x = T (x)} denotes the set of fixed points, and
zer(T ) := {x ∈ Rn | 0 = T (x)} the set of zeros. Given
a matrix A ∈ Rn×n , ker(A) := {x ∈ Rn | 0 = Ax} =
zer(A ·) denotes its kernel; Λ(A) and ρ(A) denote the
spectrum and the spectral radius of A, respectively. 0N and
1N denote vectors with N elements all equal to 0 and 1,
respectively.
II. M ATHEMATICAL DEFINITIONS
A. Discrete-time linear systems
In this paper, we consider discrete-time linear timeinvariant systems,
x(k + 1) = Ax(k) ,
(1)
and linear time-varying systems with special structure, i.e.,
x(k + 1) = (1 − αk )x(k) + αk A x(k) ,
(2)
for some positive sequence (αk )k∈N . Note that for αk = 1
for all k ∈ N, the system in (2) reduces to that in (1).
B. System-theoretic definitions
We are interested in the following notion of global convergence, i.e., convergence of the state solution, independently
on the initial condition, to some vector.
Definition 1 (Convergence): The system in (2) is convergent if, for all x(0) ∈ Rn , its solution x(k) converges to
some x̄ ∈ R^n, i.e., lim_{k→∞} ‖x(k) − x̄‖ = 0.
Note that in Definition 1, the vector x̄ can depend on the
initial condition x(0). In the linear time-invariant case, (1),
it is known that semi-convergence holds if and only if the
eigenvalues of the A matrix are strictly inside the unit disk
and the eigenvalue in 1, if present, must be semi-simple, as
formalized next.
Definition 2 ((Semi-) Simple eigenvalue): An eigenvalue
is semi-simple if it has equal algebraic and geometric multiplicity. An eigenvalue is simple if it has algebraic and
geometric multiplicities both equal to 1.
Lemma 1: The following statements are equivalent:
i) The system in (1) is convergent;
ii) ρ(A) ≤ 1 and the only eigenvalue on the unit disk is 1,
which is semi-simple.
C. Operator-theoretic definitions
With the aim to study convergence of the dynamics in (1),
(2), in this subsection, we introduce some key notions from
operator theory in Hilbert spaces.
Definition 3 (Lipschitz continuity): A mapping T : Rn →
n
R is `-Lipschitz continuous in H (k·k), with ` ≥ 0, if
∀x, y ∈ Rn , kT (x) − T (y)k ≤ ` kx − yk .
Definition 4: In H (k·k), an `-Lipschitz continuous mapping T : Rn → Rn is
• ℓ-Contractive (ℓ-CON) if ℓ ∈ [0, 1);
• NonExpansive (NE) if ℓ ∈ [0, 1];
• η-Averaged (η-AVG), with η ∈ (0, 1), if ∀x, y ∈ R^n
‖T(x) − T(y)‖^2 ≤ ‖x − y‖^2 − ((1 − η)/η) ‖(Id − T)(x) − (Id − T)(y)‖^2,   (3)
or, equivalently, if there exists a nonexpansive mapping B : R^n → R^n and η ∈ (0, 1) such that T = (1 − η)Id + ηB;
• κ-strictly Pseudo-Contractive (κ-sPC), with κ ∈ (0, 1), if ∀x, y ∈ R^n
‖T(x) − T(y)‖^2 ≤ ‖x − y‖^2 + κ ‖(Id − T)(x) − (Id − T)(y)‖^2.   (4)
Definition 5: A mapping T : R^n → R^n is:
• Contractive (CON) if there exist ` ∈ [0, 1) and a norm
k·k such that it is an `-CON in H (k·k);
• Averaged (AVG) if there exist η ∈ (0, 1) and a norm
k·k such that it is η-AVG in H (k·k);
• strict Pseudo-Contractive (sPC) if there exists κ ∈
(0, 1) and a norm k·k such that it is κ-sPC in H (k·k).
III. M AIN RESULTS :
F IXED - POINT ITERATIONS ON LINEAR MAPPINGS
In this section, we provide necessary and sufficient conditions for the convergence of some well-known fixed-point
iterations applied on linear operators, i.e.,
A : x 7→ Ax,
with A ∈ Rn×n .
(5)
First, we consider the Banach–Picard iteration [14, (1.69)]
on a generic mapping T : Rn → Rn , i.e., for all k ∈ N,
x(k + 1) = T (x(k)) ,
(6)
whose convergence is guaranteed if T is averaged, see [14,
Prop. 5.16]. The next statement shows that averagedness is
also a necessary condition when the mapping T is linear.
Proposition 1 (Banach–Picard iteration): The following
statements are equivalent:
(i) A in (5) is averaged;
(ii) the solution to the system
x(k + 1) = Ax(k)
(7)
converges to some x ∈ fix(A) = ker(I − A).
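The Banach–Picard iteration on an averaged linear operator is immediate to simulate; the following numpy sketch uses an arbitrary illustrative matrix (eigenvalues 1 and 0.5, with the eigenvalue 1 semi-simple), not a system from the paper.

```python
# Sketch of the Banach-Picard iteration (7) on an averaged linear operator.
import numpy as np

A = np.array([[1.0, 0.0],
              [0.5, 0.5]])            # illustrative averaged matrix (assumption)
x = np.array([3.0, -1.0])

for _ in range(200):
    x = A @ x                          # x(k+1) = A x(k)

print(x, np.linalg.norm((np.eye(2) - A) @ x))   # x approaches a point of ker(I - A)
```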
If the mapping T is merely nonexpansive, then the sequence generated by the Banach–Picard iteration in (6) may
fail to produce a fixed point of T . For instance, this is the
case for T = −Id. In these cases, a relaxed iteration can be
used, e.g. the Krasnoselskij–Mann iteration [14, Equ. (5.15)].
Specifically, let us distinguish the case with time-invariant
step sizes, known as Krasnoselskij iteration [13, Chap. 3],
and the case with time-varying, vanishing step sizes, known
as Mann iteration [13, Chap. 4]. The former is defined by
x(k + 1) = (1 − α)x(k) + αT (x(k)) ,
(8)
for all k ∈ N, where α ∈ (0, 1) is a constant step size.
The convergence of the discrete-time system in (8) to
a fixed point of the mapping T is guaranteed, for any
arbitrary α ∈ (0, 1), if T is nonexpansive [14, Th. 5.15],
or if T , defined from a compact, convex set to itself, is
strictly pseudo-contractive and α > 0 is sufficiently small
[13, Theorem 3.5]. In the next statement, we show that if
the mapping T : Rn → Rn is linear, and α is chosen small
enough, then strict pseudo-contractiveness is necessary and
sufficient for convergence.
Theorem 1 (Krasnoselskij iteration): Let κ ∈ (0, 1) and
α ∈ (0, 1 − κ). The following statements are equivalent:
(i) A in (5) is κ-strictly pseudo-contractive;
(ii) the solution to the system
x(k + 1) = (1 − α)x(k) + αAx(k)
(9)
converges to some x ∈ fix(A) = ker(I − A).
In Theorem 1, the admissible step sizes for the Krasnoselskij iteration depend on the parameter κ that quantifies the
strict pseudo-contractiveness of the mapping A = A ·. When
the parameter κ is unknown, or hard to quantify, one can
adopt time-varying step sizes, e.g. the Mann iteration:
x(k + 1) = (1 − αk )x(k) + αk T (x(k)) ,
(10)
for all k ∈ N, where the step sizes (αk )k∈N shall be chosen
as follows.
Assumption 1 (Mann sequence): The sequence (αk )k∈N
is such that 0 < α_k ≤ α_max < ∞ for all k ∈ N, for some α_max, lim_{k→∞} α_k = 0 and \sum_{k=0}^{∞} α_k = ∞.
The convergence of (10) to a fixed point of the mapping
T is guaranteed if T , defined from a compact, convex set to
itself, is strictly pseudo-contractive [13, Theorem 3.5]. In the
next statement, we show that if the mapping T : Rn → Rn
is linear, then strict pseudo-contractiveness is necessary and
sufficient for convergence.
Theorem 2 (Mann iteration): Let (αk )k∈N be a Mann sequence as in Assumption 1. The following statements are
equivalent:
(i) A in (5) is strictly pseudocontractive;
(ii) the solution to
x(k + 1) = (1 − αk )x(k) + αk Ax(k)
(11)
converges to some x ∈ fix(A) = ker(I − A).
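A minimal numerical illustration of Theorem 2 is given below. The step sizes α_k = 1/(k+1) form a Mann sequence as in Assumption 1; the matrix is an arbitrary strictly pseudocontractive example chosen for this sketch (eigenvalues 1 and 0.2), not a system from the paper.

```python
# Sketch of the time-varying (Mann) iteration (11) on a linear operator.
import numpy as np

A = np.array([[1.0, 0.0],
              [0.8, 0.2]])            # illustrative strictly pseudocontractive matrix
x = np.array([5.0, -3.0])

for k in range(5000):
    alpha = 1.0 / (k + 1)             # Mann sequence: vanishing, non-summable
    x = (1 - alpha) * x + alpha * (A @ x)

# the limit should be (approximately) a fixed point of A, i.e. (I - A) x ~ 0
print(x, np.linalg.norm((np.eye(2) - A) @ x))
```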
IV. O PERATOR - THEORETIC CHARACTERIZATION OF
LINEAR MAPPINGS
In this section, we characterize the operator-theoretic properties of linear mappings via necessary and sufficient linear
matrix inequalities and conditions on the spectrum of the
corresponding matrices. We exploit these technical results in
Section V, to prove convergence of the fixed-point iterations
presented in Section III.
Lemma 2 (Lipschitz continuous linear mapping): Let
` > 0 and P ∈ Sn0 . The following statements are
equivalent:
(i) A in (5) is `-Lipschitz continuous in H (k·kP );
(ii) A> P A 4 `2 P .
Proof: It directly follows from Definition 3.
Lemma 3 (Linear contractive/nonexpansive mapping):
Let ` ∈ (0, 1). The following statements are equivalent:
(i) A in (5) is an `-contraction;
(ii) ∃P ∈ Sn0 such that A> P A 4 `2 P ;
(iii) the spectrum of A is such that
(
Λ(A) ⊂ ` D1
∀λ ∈ Λ(A) ∩ bdr(` D1 ), λ semi-simple
(12)
If ` = 1, the previous equivalent statements hold if and only
if A in (5) is nonexpansive.
Proof: The equivalence between (i) and (ii) follows
from Lemma 2. By the Lyapunov theorem, (iii) holds if and
only if the discrete-time linear system x(k +1) = 1` A x(k) is
(at least marginally) stable, i.e., Λ(A) ⊂ ` D1 and the eigenvalues of A on the boundary of the disk, Λ(A) ∩ bdr(` D1 ),
are semi-simple. The last statement follows by noticing that
an 1-contractive mapping is nonexpansive.
Lemma 4 (Linear averaged mapping): Let η ∈ (0, 1).
The following statements are equivalent:
(i) A in (5) is η-averaged;
(ii) ∃P ∈ Sn0 such that
A^⊤ P A ⪯ (2η − 1) P + (1 − η) (A^⊤ P + P A);
(iii) A_η := A_η · := (1 − 1/η) I · + (1/η) A · is nonexpansive;
(iv) the spectrum of A is such that
(
Λ(A) ⊂ Dη
(13)
∀λ ∈ Λ(A) ∩ bdr(Dη ), λ semi-simple.
Proof: The equivalence (i) ⇔ (ii) follows directly by
inequality (3) in Definition 4. By [14, Prop. 4.35], A is ηAVG if and only if the linear mapping Aη is NE, which
proves (i) ⇔ (iii). To conclude, we show that (iii) ⇔ (iv).
By Lemma 3, the linear mapping Aη is NE if and only if
(14)    Λ(A_η) ⊂ D_1 and ∀λ ∈ Λ(A_η) ∩ bdr(D_1), λ semi-simple
(15)    ⇔  Λ(A) ⊂ (1 − η){1} + η D_1 = D_η and ∀λ ∈ Λ(A) ∩ bdr(D_η), λ semi-simple,
where the equivalence (14) ⇔ (15) holds because Λ(Aη ) =
(1 − 1/η){1} + (1/η)Λ(A), and because the linear combination with
the identity matrix does not alter the geometric multiplicity
of the eigenvalues.
Lemma 5 (Linear strict pseudocontractive mapping): Let
κ, η ∈ (0, 1). The following statements are equivalent:
(i) A in (5) is κ-strictly pseudocontractive;
(ii) ∃P ∈ Sn0 such that
(1 − κ)A> P A 4 (1 + κ)P − κ(A> P + P A); (16)
(iii) Asκ := Asκ · := κI · +(1 − κ)A· is nonexpansive;
(iv) the spectrum of A is such that
(17)    Λ(A) ⊂ D_{1/(1−κ)} and ∀λ ∈ Λ(A) ∩ bdr(D_{1/(1−κ)}), λ semi-simple;
(v) Aα := Aα · := (1 − α)I · +αA· is η-averaged, with
α = η(1 − κ) ∈ (0, 1).
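Condition (iv) of Lemma 5 is straightforward to test numerically; the following sketch (an illustration; the matrix is an arbitrary assumption, not from the paper) checks the eigenvalue-location and semi-simplicity conditions for a given κ.

```python
# Sketch: check Lemma 5(iv) numerically for a given kappa in (0,1).
import numpy as np

def is_strictly_pseudocontractive(A, kappa, tol=1e-9):
    n = A.shape[0]
    r = 1.0 / (1.0 - kappa)
    center = 1.0 - r                      # = -kappa/(1-kappa)
    eigvals = np.linalg.eigvals(A)
    for lam in eigvals:
        d = abs(lam - center)
        if d > r + tol:
            return False                  # eigenvalue outside D_{1/(1-kappa)}
        if abs(d - r) <= tol:
            # boundary eigenvalue: semi-simple iff geometric = algebraic multiplicity
            alg = int(np.sum(np.abs(eigvals - lam) <= tol))
            geo = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
            if geo != alg:
                return False
    return True

A = np.array([[1.0, 0.0], [0.8, 0.2]])    # illustrative matrix (assumption)
print(is_strictly_pseudocontractive(A, kappa=0.5))
```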
Proof of Theorem 1 (Krasnoselskij iteration)
Fig. 1. Spectrum of a linear η-AVG mapping: disk centered in 1 − η with radius η, D_η (dark-grey disk). Spectrum of a linear κ-sPC mapping: disk centered in −κ/(1−κ) with radius 1/(1−κ), D_{1/(1−κ)} (light-grey disk).
Proof: The equivalence (i) ⇔ (ii) follows directly by
inequality (4) in Definition (5). To prove that (ii) ⇔ (iii), we
note that the LMI in (16) can be recast as
>
(κI + (1 − κ)A) P (κI + (1 − κ)A) 4 P,
(18)
which, by Lemma 3, holds true if and only if the mapping
Asκ is NE.
(iii) ⇔ (iv): By Lemma 3, Asκ is NE if and only if
(19)    Λ(A_{sκ}) ⊂ D_1 and ∀λ ∈ Λ(A_{sκ}) ∩ bdr(D_1), λ semi-simple
(20)    ⇔  Λ(A) ⊂ {−κ/(1−κ)} + (1/(1−κ)) D_1 = D_{1/(1−κ)} and ∀λ ∈ Λ(A) ∩ bdr(D_{1/(1−κ)}), λ semi-simple,
where the equivalence (19) ⇔ (20) holds because Λ(A_{sκ}) = κ{1} + (1 − κ)Λ(A), and because the linear combination with the identity matrix does not alter the geometric
multiplicity of the eigenvalues. (iii) ⇔ (v): By Definition
4 and [14, Prop. 4.35], Asκ · is NE if and only if Aα · =
(1 − η)I · +ηAsκ · is η-AVG, for all η ∈ (0, 1). Since
α = η(1 − κ), Aα = (1 − η(1 − κ))Id + η(1 − κ)A, which
concludes the proof.
V. P ROOFS OF THE MAIN RESULTS
Proof of Proposition 1 (Banach–Picard iteration)
We recall that, by Lemma 4, A is AVG if and only if there
exists η ∈ (0, 1) such that Λ(A) ⊂ Dη and ∀λ ∈ Λ(A) ∩
bdr(Dη ), λ is semi-simple and we notice that Dη ∩ D1 =
{1} for all η ∈ (0, 1). Hence A is averaged if and only if
the eigenvalues of A are strictly contained in the unit circle
except for the eigenvalue in λ = 1 which, if present, is semisimple. The latter is a necessary and sufficient condition for
the convergence of x(k + 1) = A x(k), by Lemma 1.
(i) ⇔ (ii): By Lemma 5, A is k-sPC if and only if (1 −
α)Id + αA is η-AVG, with α = η(1 − κ) and η ∈ (0, 1);
therefore, if and only if (1 − α)Id + αA is AVG with α ∈
(0, 1 − κ). By proposition (1), the latter is equivalent to the
global convergence of the Banach–Picard iteration applied
on (1 − α)Id + αA, which corresponds to the Krasnoselskij
iteration on A, with α ∈ (0, 1 − κ).
Proof of Theorem 2 (Mann iteration)
Proof that (i) ⇒ (ii): Define the bounded sequence β_k := (1/ε) α_k > 0, for some ε > 0 to be chosen. Thus,
x(k + 1) = (1 − α_k)x(k) + α_k A x(k) = (1 − εβ_k)x(k) + εβ_k A x(k) = (1 − β_k)x(k) + β_k ((1 − ε)I + εA) x(k). Since
A· is sPC, we can choose ε > 0 small enough such that
B := (1 − ε)Id + εA· is NE; specifically, we shall choose
ε < min{|λ| : λ ∈ Λ(A) \ {1}}. Note that 0 ∈ fix(A) =
fix(B) ≠ ∅. Since ∞ = \sum_{k=0}^{∞} α_k = ε \sum_{k=0}^{∞} β_k and lim_{k→∞} α_k = 0, we also have
that lim_{k→∞} β_k = 0, hence there exists k̄ ∈ N such that
β_k ≤ 1 for all k ≥ k̄. Moreover, since \sum_{k=0}^{k̄} β_k < ∞,
for all x(0) ∈ Rn , we have that the solution x(k̄) is finite.
Therefore, we can define h := k − k̄ ∈ N for all k ≥ k̄,
y(0) := x(k̄) and y(h + 1) := x(h + k̄ + 1) for all h ≥ 0.
The proof then follows by applying [14, Th. 5.14 (iii)] to the
Mann iteration y(h + 1) = (1 − βh )y(h) + βh By(h).
Proof that (ii) ⇒ (i): For the sake of contradiction, suppose
that A is not sPC, i.e., at least one of the following facts must
hold: 1) A has an eigenvalue in 1 that is not semi-simple;
2) A has a real eigenvalue greater than 1; 3) A has a pair
of complex eigenvalues σ ± jω, with σ ≥ 1 and ω > 0.
We show next that each of these three facts implies nonconvergence of (10). Without loss of generality (i.e., up to
a linear transformation), we can assume that A is in Jordan
normal form.
1) A has an eigenvalue in 1 that is not semi-simple. Due to
(the bottom part of) the associated Jordan block, the vector
dynamics in (10) contain the two-dimensional linear timevarying dynamics
1 0
1 1
y(k + 1) = (1 − αk )
+ αk
y(k)
0 1
0 1
1 αk
=
y(k).
0 1
For y2 (0) := c > 0, we have that the solution y(k) is such
that y2 (k) = y2 (0) > 0 and y1 (k + 1) = y1 (k) + c, which
implies that y1 (k) = y1 (0) + k c. Thus, x(k) diverges and
we have a contradiction.
2) Let A has a real eigenvalue equal to 1 + > 1. Again
due to (the bottom part of) the associated Jordan block, the
vector dynamics in (10) must contain the scalar dynamics
s(k+1) = (1−αk )s(k)+αk (1+)s(k)
Q= (1+ αk )s(k).
The
k
solution then reads as s(k + 1) =
(1
+
α
)
s(0).
h
h=0
Qk
Now, since αh > 0, it holds that h=0 (1 + αh ) ≥
Pk
h=0 αh = ∞, by Assumption 1. Therefore, s(k) and
hence x(k) diverge, and we reach a contradiction.
3) A has a pair of complex eigenvalues σ ± jω, with σ = 1 + ε ≥ 1 and ω > 0. Due to the structure of the associated Jordan block, the vector dynamics in (10) contain the two-dimensional dynamics
z(k + 1) = ( (1 − α_k) [1 0; 0 1] + α_k [σ −ω; ω σ] ) z(k) = [1 + εα_k  −ωα_k; ωα_k  1 + εα_k] z(k).
Now, we define ρ_k := √((1 + εα_k)² + ω²α_k²) ≥ √(1 + ω²α_k²) > 1, and the angle θ_k > 0 such that cos(θ_k) = (1 + εα_k)/ρ_k and sin(θ_k) = (ωα_k)/ρ_k, i.e., θ_k = atan( ωα_k / (1 + εα_k) ). Then, we have that z(k + 1) = ρ_k R(θ_k) z(k), hence the solution z(k) reads as
z(k + 1) = ( ∏_{h=0}^{k} ρ_h ) R( Σ_{h=0}^{k} θ_h ) z(0).
Since ‖R(·)‖ = 1, if the product ∏_{h=0}^{∞} ρ_h diverges, then z(k) and hence x(k) diverge as well. Thus, let us assume that the product ∏_{h=0}^{∞} ρ_h converges. By the limit comparison test, the series Σ_{h=0}^{∞} θ_h = Σ_{h=0}^{∞} atan( ωα_h / (1 + εα_h) ) converges (diverges) if and only if the series Σ_{h=0}^{∞} ωα_h / (1 + εα_h) converges (diverges). The latter diverges since Σ_{h=0}^{∞} ωα_h / (1 + εα_h) ≥ ( ω / (1 + εα_max) ) Σ_{h=0}^{∞} α_h = ∞. It follows that Σ_{h=0}^{∞} θ_h diverges, hence z(k) keeps rotating indefinitely, which is a contradiction.
VI. APPLICATION TO MULTI-AGENT LINEAR SYSTEMS
A. Consensus via time-varying Laplacian dynamics
We consider a connected graph of N nodes, associated
with N agents seeking consensus, with Laplacian matrix
L ∈ RN ×N . To solve the consensus problem, we study the
following discrete-time linear time-varying dynamics:
x(k + 1) = x(k) − αk Lx(k)
(21a)
= (1 − αk )x(k) + αk (I − L)x(k) ,
(21b)
where x(k) := [x1(k), . . . , xN(k)]⊤ ∈ R^N and, for simplicity, the state of each agent is a scalar variable, xi ∈ R.
Since the dynamics in (21) have the structure of a Mann
iteration, in view of Theorem 2, we have the following result.
Corollary 1: Let (αk )k∈N be a Mann sequence. The
system in (21) asymptotically reaches consensus, i.e., the
solution x(k) to (21) converges to x 1N , for some x ∈ R.
Proof: Since the graph is connected, L has one (simple)
eigenvalue at 0, and N − 1 eigenvalues with strictly-positive
real part. Therefore, the matrix I −L in (21b) has one simple
eigenvalue in 1 and N − 1 with real part strictly less than
1. By Lemma 5, (I − L)(·) is sPC and by Theorem 2, x(k)
globally converges to some x ∈ fix(I − L) = zer(L), i.e.,
Lx = 0N . Since L is a Laplacian matrix, Lx = 0N implies
consensus, i.e., x = x 1N , for some x ∈ R.
We emphasize that via (21), consensus is reached without
assuming that the agents know the algebraic connectivity of
the graph, i.e., the strictly-positive Fiedler eigenvalue of L.
We have only assumed that the agents agree on a sequence
of vanishing, bounded, step sizes, αk.
Fig. 2. Disagreement vector norm versus discrete time. Since the mapping Id − L· is strictly pseudocontractive, consensus is asymptotically reached.
However, we envision
that agent-dependent step sizes can be used as well, e.g. via
matricial Mann iterations, see [13, §4.1].
Let us simulate the time-varying consensus dynamics in
(21) for a graph with N = 3 nodes, adjacency matrix A =
[ai,j ] with a1,2 = a1,3 = 1/2, a2,3 = a3,1 = 1, hence with
Laplacian matrix
L = D_out − A = [1 −1/2 −1/2; 0 1 −1; −1 0 1].
We note that L has eigenvalues Λ(L) = {0, 3/2 ± j 1/2}.
Since we do not assume that the agents know the connectivity of the graph, we simulate with step sizes that are initially larger than the maximum constant-step value for which convergence would hold. In Fig. 2, we compare the norm of the disagreement vectors, ‖Lx(k)‖, obtained with two different Mann sequences, αk = 2/k and αk = 2/√k, respectively. We observe that convergence with small tolerances is faster in the latter case, with larger step sizes.
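The simulation just described can be reproduced with a short Python/NumPy sketch (an illustrative reimplementation, not the authors' code); the Laplacian is the one given above and the two step-size sequences are αk = 2/k and αk = 2/√k.

import numpy as np

# Laplacian of the 3-node digraph used in the example above.
L = np.array([[ 1.0, -0.5, -0.5],
              [ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  1.0]])

def mann_consensus(step, x0, iters=200):
    # Time-varying dynamics x(k+1) = x(k) - alpha_k L x(k), eq. (21a).
    x = x0.copy()
    disagreement = []
    for k in range(1, iters + 1):
        alpha = step(k)
        x = x - alpha * (L @ x)
        disagreement.append(np.linalg.norm(L @ x))
    return x, disagreement

x0 = np.array([1.0, -2.0, 0.5])
x_a, d_a = mann_consensus(lambda k: 2.0 / k, x0)
x_b, d_b = mann_consensus(lambda k: 2.0 / np.sqrt(k), x0)
print("alpha_k = 2/k       ->", x_a, " ||Lx|| =", d_a[-1])
print("alpha_k = 2/sqrt(k) ->", x_b, " ||Lx|| =", d_b[-1])
# Both runs approach a consensus vector x*1_N; the slower-vanishing (larger)
# step sizes reach a small disagreement norm in fewer iterations.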
B. Two-player zero-sum linear-quadratic games:
Non-convergence of projected pseudo-gradient dynamics
We consider two-player zero-sum games with linear-quadratic structure, i.e., we consider N = 2 agents, with cost functions f1(x1, x2) := x1⊤ C x2 and f2(x1, x2) := −x1⊤ C⊤ x2, respectively, for some square matrix C = C⊤ ≠ 0. In particular, we study discrete-time dynamics for solving the Nash equilibrium problem, that is, the problem to find a pair (x1*, x2*) such that
x1* ∈ argmin_{x1 ∈ R^n} f1(x1, x2*),
x2* ∈ argmin_{x2 ∈ R^n} f2(x1*, x2).
A classic solution approach is the pseudo-gradient method,
namely the discrete-time dynamics
x(k + 1) = x(k) − αk F x(k)                                 (22a)
= (1 − αk)x(k) + αk (I − F) x(k),                           (22b)
where F· is the so-called pseudo-gradient mapping of the game, which in our case is defined as
F(x) := [ ∇x1 f1(x1, x2) ; ∇x2 f2(x1, x2) ] = [ Cx2 ; −Cx1 ] = ( [0 1; −1 0] ⊗ C ) x =: F x,
and (αk )k∈N is a sequence of vanishing step sizes, e.g. a
Mann sequence. In our case, (x∗1 , x∗2 ) is a Nash equilibrium
if and only if [x∗1 ; x∗2 ] ∈ fix (Id − F) = zer (F) [19, Th. 1].
By Theorem 2, convergence of the system in (22) holds if
and only if I − F is strictly pseudocontractive. In the next
statement, we show that this is not the case for F in (22).
Corollary 2: Let (αk)k∈N be a Mann sequence and C = C⊤ ≠ 0. The system in (22) does not globally converge.
Proof: It follows by Lemma 5 that the mapping Id − F· is strictly pseudocontractive if and only if the eigenvalues of F either have strictly-positive real part, or are semi-simple and equal to 0. Since Λ([0 1; −1 0]) = {±j}, we have that the eigenvalues of F = [0 1; −1 0] ⊗ C are either with both positive and negative real part, or on the imaginary axis and not equal to 0, or equal to 0 and not semi-simple. Therefore, Id − F· is not strictly pseudocontractive and the proof follows by Theorem 2.
Let us numerically simulate the discrete-time system in
(22), with the following parameters: n = 1, C = 1, x1 (0) =
1/2, x2 (0) = 0, and αk = 1/(k + 1) for all k ∈ N. Figure 3
shows persistent oscillations, due to the lack of strict pseudocontractiveness of I − F . In fact, Λ(I − F ) = {1 ± j}. The
example provides a non-convergence result: pseudo-gradient
methods do not ensure global convergence in convex games
with (non-strictly) monotone pseudo-gradient mapping, not
even with vanishing step sizes and linear-quadratic structure.
Fig. 3. Solution to the discrete-time system in (22) in semi-log scale. The
lack of strict pseudo-contractiveness causes persistent oscillations.
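The oscillatory behaviour reported in Fig. 3 can be reproduced with the following minimal Python sketch (illustrative only; parameter values as stated above: n = 1, C = 1, x1(0) = 1/2, x2(0) = 0, αk = 1/(k+1)).

import numpy as np

# Two-player zero-sum example with n = 1 and C = 1, so F = [[0, 1], [-1, 0]].
F = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

x = np.array([0.5, 0.0])            # x1(0) = 1/2, x2(0) = 0
samples = []
for k in range(2000):
    alpha = 1.0 / (k + 1)           # vanishing step sizes alpha_k = 1/(k+1)
    x = x - alpha * (F @ x)         # eq. (22a)
    if k % 400 == 0:
        samples.append((k, x.copy()))

print("||x|| after 2000 steps:", np.linalg.norm(x))  # bounded away from 0
for k, xi in samples:
    print(k, xi)   # the iterate keeps rotating about the Nash equilibrium (0, 0)
# No convergence to (0, 0): consistent with Corollary 2.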
VII. CONCLUSION AND OUTLOOK
Convergence in discrete-time linear systems can be equivalently characterized via operator-theoretic notions. Remarkably, the time-varying Mann iteration applied on linear
mappings converges if and only if the considered linear
operator is strictly pseudocontractive. This result implies that
Laplacian-driven linear time-varying consensus dynamics
with Mann step sizes do converge. It also implies that
projected pseudo-gradient dynamics for Nash equilibrium
seeking in monotone games do not necessarily converge.
Future research will focus on studying convergence of
other, more general, linear fixed-point iterations and of
discrete-time linear systems with uncertainty, e.g. polytopic.
REFERENCES
[1] R. Olfati-Saber, A. Fax, and R. Murray, “Consensus and cooperation
in networked multi-agent systems,” Proc. of the IEEE, vol. 95, no. 1,
pp. 215–233, 2010.
[2] A. Nedić, A. Ozdaglar, and P. Parrillo, “Constrained consensus and
optimization in multi-agent networks,” IEEE Trans. on Automatic
Control, vol. 55, no. 4, pp. 922–938, 2010.
[3] P. Yi and L. Pavel, “A distributed primal-dual algorithm for computation of generalized Nash equilibria via operator splitting methods,”
Proc. of the IEEE Conf. on Decision and Control, pp. 3841–3846,
2017.
[4] F. Dörfler, J. Simpson-Porco, and F. Bullo, “Breaking the hierarchy:
Distributed control and economic optimality in microgrids,” IEEE
Trans. on Control of Network Systems, vol. 3, no. 3, pp. 241–253,
2016.
[5] F. Dörfler and S. Grammatico, “Gather-and-broadcast frequency control in power systems,” Automatica, vol. 79, pp. 296–305, 2017.
[6] A.-H. Mohsenian-Rad, V. Wong, J. Jatskevich, R. Schober, and
A. Leon-Garcia, “Autonomous demand-side management based on
game-theoretic energy consumption scheduling for the future smart
grid,” IEEE Trans. on Smart Grid, vol. 1, no. 3, pp. 320–331, 2010.
[7] R. Jaina and J. Walrand, “An efficient Nash-implementation mechanism for network resource allocation,” Automatica, vol. 46, pp. 1276–
1283, 2010.
[8] J. Barrera and A. Garcia, “Dynamic incentives for congestion control,”
IEEE Trans. on Automatic Control, vol. 60, no. 2, pp. 299–310, 2015.
[9] J. Ghaderi and R. Srikant, “Opinion dynamics in social networks
with stubborn agents: Equilibrium and convergence rate,” Automatica,
vol. 50, pp. 3209–3215, 2014.
[10] S. R. Etesami and T. Başar, “Game-theoretic analysis of the
Hegselmann–Krause model for opinion dynamics in finite dimensions,”
IEEE Trans. on Automatic Control, vol. 60, no. 7, pp. 1886–1897,
2015.
[11] S. Martı́nez, F. Bullo, J. Cortés, and E. Frazzoli, “On synchronous
robotic networks – Part i: Models, tasks, and complexity,” IEEE Trans.
on Automatic Control, vol. 52, pp. 2199–2213, 2007.
[12] M. Stanković, K. Johansson, and D. Stipanović, “Distributed seeking
of Nash equilibria with applications to mobile sensor networks,” IEEE
Trans. on Automatic Control, vol. 57, no. 4, pp. 904–919, 2012.
[13] V. Berinde, Iterative Approximation of Fixed Points. Springer, 2007.
[14] H. H. Bauschke and P. L. Combettes, Convex analysis and monotone
operator theory in Hilbert spaces. Springer, 2010.
[15] E. K. Ryu and S. Boyd, “A primer on monotone operator methods,”
Appl. Comput. Math., vol. 15, no. 1, pp. 3–43, 2016.
[16] S. Grammatico, F. Parise, M. Colombino, and J. Lygeros, “Decentralized convergence to Nash equilibria in constrained deterministic mean
field control,” IEEE Trans. on Automatic Control, vol. 61, no. 11, pp.
3315–3329, 2016.
[17] S. Grammatico, “Dynamic control of agents playing aggregative games
with coupling constraints,” IEEE Trans. on Automatic Control, vol. 62,
no. 9, pp. 4537 – 4548, 2017.
[18] P. Giselsson and S. Boyd, “Linear convergence and metric selection
for Douglas–Rachford splitting and ADMM,” IEEE Transactions on
Automatic Control, vol. 62, no. 2, pp. 532–544, 2017.
[19] G. Belgioioso and S. Grammatico, “Semi-decentralized Nash equilibrium seeking in aggregative games with coupling constraints and nondifferentiable cost functions,” IEEE Control Systems Letters, vol. 1,
no. 2, pp. 400–405, 2017.
[20] S. Grammatico, “Proximal dynamics in multi-agent network games,”
IEEE Trans. on Control of Network Systems, https://doi.org/
10.1109/TCNS.2017.2754358, 2018.
| 3 |
Detection of parallel steps in programs with arrays
R. Nuriyev
([email protected])
The problem of detecting information- and logically-independent (DILD) steps in programs is key to equivalent program transformations. Here we consider the problem of independence of loop iterations – the place where massive data processing is concentrated and hence the most challenging construction to parallelize. We introduce a separated form of loops, in which the loop's body is a sequence of procedures, each of which uses array elements selected by the previous procedure. We prove that any loop can be algorithmically represented in this form and that the number of such procedures is invariant. We show that for this form of loop the connections between steps are determined by integer equations, and hence the independence problem is algorithmically unsolvable if index expressions are more complex than cubic. We suggest a modification of index semantics that makes the connection equations trivial, so that loop iterations can be executed in parallel.
We consider not only the algorithmic fullness of a programming language but also its data fullness, i.e., whether the selection functions for array elements form an algorithmically full class. These two features are independent. With the modified index semantics the language is full in both senses.
We consider the DILD problem for programming languages with arrays, focusing on the syntax constructions that determine the DILDs.
Transformations of programs, and parallelizing in particular, are based on the fact that the execution order for given start values can be changed without changing the result. For a program with simple variables, in the fragment
x=f(u) // step 1
y=g(x) // step 2
we may say that step 2 informationally depends on step 1.
But in the case of indexed variables,
x[i] =f(u) // step 3
y=g(x[j]) // step 4
the information dependency takes place only if the values of i and j are the same.
Let us consider two program fragments, structured and unstructured in Dijkstra's terms, where each loop has one entrance point and one exit point, as in Figure 1.
Figure 1
For the first fragment, to identify an execution step of the operator q we may use a 2-dimensional vector (p, k): p is the number of completed iterations of the upper loop and k is the number of iterations of the lower loop within iteration p of the upper loop. Step q3,7 means the execution of the operator q at which the lower loop has made 7 iterations within the third iteration of the upper loop.
To identify a step of the second fragment we have to use a variable-length vector q(i1, i2, …), meaning the point of execution at which the upper loop has executed i1 times, then the lower loop i2 times, then the upper loop i3 times, then the inner loop i4 times, and so on.
The good news is that any program can be algorithmically structured (Dijkstra's goto-elimination problem), so that only nested loops need be used.
In Section 3 we will extend the canonization result to recursive programs.
From now on we consider only structured programs: programs in which every repetition step is formed only with nested loops.
We extend this idea by introducing a forced-syntax principle: the language is modified so that the syntactically implicit properties needed to solve the problem at hand are encapsulated in (new, if necessary) syntactically explicit constructions.
Applying this principle to the DILD problem with indexed variables, we introduce a representation of a loop body as a chain of special subprograms (the last one is called the kernel, the others are called controllers), which are responsible for calculating the index values for the variables of the next-level controllers or of the kernel, but not for themselves. Loops organized in this way are called separated forms of loops. In other words, in separated loops the two functions of a loop body – selecting data from some sets or arrays, and processing that data – are separated. For programs with separated loops we have enough syntactical objects to be able to state a condition under which some set of iterations can be run in parallel.
A separated loop for processing a linked list has two levels of controllers: the first-level controller selects a node and the second one selects the pointer to the next element (see the sketch below).
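As an illustration, here is a minimal Python sketch (the class and function names are hypothetical, introduced only for this example) of a separated loop over a linked list, with the data-selection and data-processing functions written as explicit controllers and a kernel.

class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def controller_level1(current):
    # Level-1 controller: selects the node the kernel will work on.
    return current

def controller_level2(node):
    # Level-2 controller: selects the pointer to the next element,
    # i.e. computes the "index" used by the next iteration.
    return node.next

def kernel(node, accumulator):
    # Kernel: processes the selected data; it does not choose any indexes.
    return accumulator + node.value

# Driving the separated loop.
head = Node(1, Node(2, Node(3)))
total, cursor = 0, head
while cursor is not None:
    node = controller_level1(cursor)      # data selection, level 1
    cursor = controller_level2(node)      # data selection, level 2
    total = kernel(node, total)           # data processing
print(total)  # 6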
In this paper we prove that for FORTRAN-like languages (or languages containing FORTRAN) the class of automatically parallelizable programs cannot be noticeably bigger than the class with linear index expressions. So, to create a language whose programs can be parallelized algorithmically, we have to change some fundamental constructions of the language, and our goal is to point to these constructions.
1. Basic definitions
We will consider schemas of programs in which functions and predicates are symbols
and they get interpretation (implementations) together with start data. But also we will allow
some of them to have a fixed interpretation.
Let A, X, F be finite alphabets without common elements, and let A*, X*, F* be the sets of strings over the corresponding alphabets.
Def. 1.1. We call a memory M a countable set of cells, with two types of elements: x and a[w1, .., wm], where x belongs to the set X* and is called a single cell, a belongs to A* and is called an array of cells, and w1,…, wm belong to X* and are called the indexes of the cell.
A cell may be assigned a value from the value set and keeps it until the next change by a program operator. Cells without a value are called empty. Each memory element has no more than one value, and the value of element x is denoted <x>, so <x> belongs to the value set.
To keep this notation closer to programming languages, let us suppose that one array name cannot have different dimensions. The sets X and A have no common elements, and neither do A and the value set.
Def. 1.2. Variables of a program schema are elements of X* and expressions a[K1(z1),…, Kn(zn)], where a ∈ A* is called an array name, K1, …, Kn ∈ F are called index functions, and z1, …, zn are program variables from X*. Variables of the first type are called simple and of the second type – indexed variables.
For X={x,y}, A={a,b}, the expressions x, y, xx, xyx are simple variables, the expression ba[xyx, yx] is an example of an indexed variable, and the expression aa[ba[x]] is not a variable – the expression inside the [ ] parentheses must contain only simple variables.
Def. 1.3. Schema of program is a 5D vector (Opers, F, VarX, VarA, subP) where
VarX is a set of single variables, VarA is a set of array variables, F is a set of function and
predicate symbols, subP is a set of (sub) schemas, also called procedures, Opers is a finite set
of program instructions – strings of one of the forms:
a. l1: x0=g(x1, …, xn) then l2; // assignment operator
b. l1: if p(x1,…,xn) then l2 else l3;// conditional operator
c. l1: do P1 while p(x1,…,xn) then l2;// loop with body P1 and iteration condition
p(x1,…,xn),
d. l1: do P1 then l2; // call sub schema or procedure P1.
Text after “//” is a comment, not a part of instructions.
Here l1, l2, l3 are called labels; l1 is called an input label, l2 and l3 are called output labels. P1 is called a sub schema, and its set of labels has no labels in common with the upper-level program or any other procedure; p is called a repetition predicate, and x0, x1, …, xn ∈ VarX ∪ VarA.
Output labels which are not input labels are called final, and only one final label is allowed. The collection of labels of a program or procedure P will be denoted L(P).
One separate label l0 is called start label and its operator – start operator.
We assume that each of sets Opers, F, VarX, VarA, subP includes such sets for
procedures too.
Schema is called determined if its instructions have different input labels.
An interpretation of the schema is a map of some finite set of memory elements to values (called the start values for the program); function symbols are interpreted as maps from n-tuples of values to values, and predicate symbols are interpreted as maps from n-tuples of values to {true, false}.
Program variable x from X has a value <x> of the element of memory x, variable
a[K1(x1, …, xn)…,Kn(x1, …, xn)] has a value a[IK1(<x1>,.., <xn>), …, IKn(<x1>,…, <xn>)], IKj
is an interpretation of Ki. Value of the variable is empty (not defined) if some of its index
function has no value or one of memory element used as argument for index function is empty.
The execution of operators for a given interpretation consists of three parts calculating some function or sub schema from the operator, changing the memory elements and
marking some operator as next to execute.
More accurate, for operator of type:
a. calculate function Ig (interpretation of g) with <x1>, …, <xn> arguments and put the
result to cell x0, mark next to execute operator with input label l2;
b. calculate Ip(<x1>,…,<xn>), if it is true - mark next to execute operator with label l2, if
it is false - mark next to execute operator with label l3;
c. calculate sub schema P1 and then if Ip(<x1>, …, <xn>) is true repeat this operator
again, if it is false then mark next to execute operator with label l2;
d. calculate sub schema P1 and then mark next to execute operator with label l2;
Operator execution is defined in following cases
a. all variables x1, …, xn are defined and function Ig(<x1>, …,<xn>) is defined as well;
b. all variables x1,…, xn are defined and predicate Ip(<x1>,…,<xn>) is also defined;
c. any iterations of P1 are finished with final label and Ip(<x1>,…,<xn>) after each
iteration has a value true and becomes false after finite number of steps;
d. sub schema P1 stops in his final label.
Execution of schema for a given interpretation is a chain of executions of marked
operators starting with operator with start label l0.
Execution is considered ended successfully if after finite number of steps some of
marked label is final label.
Execution ended without result if some supposed to be calculated predicate or function
does not have value or one of its arguments does not have value.
Remark 1.1. It is possible that schema will run forever without reaching final label. So
schemas calculate partially defined function.
For shortness we will omit letter I in interpretation of functions and predicates if it is
clear from context what it has to be - symbol or its interpretation.
3. Loops in a separated form
Canonic forms of the studying objects always have a theoretical and practical value
because they allow classify objects, to work with smaller diversity and to use smaller description
of the objects. We saw it for step enumeration of structured program. Also it nicely comes with
level of abstraction for program design.
We hope that studying a canonic form for information processes in programs will help to
simplify design, develop better debugging tools and increase the code reusability.
In this section we will show that any loop body can be represent as a sequence of sub
procedures determine values of array indexes for next procedures, but not for itself or upper level
controllers. Last sub procedure called loop‟s kernel, others are called controllers of
corresponding levels. So we separate the loop‟s body execution in two functions: hierarchical
selecting data parts and parts processing them.
Such implementation allows to reduce a debugging of complex processes of data
selection in arbitrary arrays to the chain of debugging a simple blocks without arrays. It comes
from functions of controllers. If each upper level controller works properly, then data stream for
this block works correctly and hence we need to check if this block is correct.
In C++ STL classes and generic collections in C# algorithms represent loops with fixed
controllers for getting next elements from lists, maps, stacks and so on.
For theoretical studies this result is important because it gives syntactical objects for
describing information connections in loops and, as it will be proved next, to divide the loops
with different numbers of controllers or their connection graphs to unequal classes.
Def. 3.1. Let S(P) be the set of sub procedures of P and SS(P) the transitive closure of this relation. Then a sub procedure P is called recursive if P ∈ SS(P).
Def .3.2. Schema P is called loop structured (for short L-schema) if for P and for each
of its sub schemas Pi:
a. a relation “input label is less than output labels” is a partial order, called “structural”,
b. Pi is not a recursive procedure.
So there is no recursion call, no spaghetti paths and no loops created by labels.
Remark 3.1. In L-schemas any repetition process can be generated only with nested
loops.
Remark 3.2. L-schemas are structured in Dijkstra‟s terms: each loop has one start point
and one exit point (we consider one final label only).
Def. 3.3. Two schemas with the same set of non interpreted symbols of functions and
predicates are called equal if for each interpretation one schema finished successfully if and only
if another schema finished successfully too and their state of memory is the same.
The following can be proved.
Theorem 3.1. There exists an algorithm to transform any schema into an equivalent L-schema.
The full proof is mostly technical, bulky and routine. Instructions that breach the label order can be encapsulated in loops. Recursion can be replaced with a pair of loops and additional interpreted stack operations (a sketch is given below). Duplications of sub procedures can be eliminated by renaming and copying.
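For instance, the recursion-elimination step can be sketched as follows (a Python illustration with hypothetical names, not part of the formal construction): a recursive traversal is replaced by a loop driven by an explicit, interpreted stack.

class BTNode:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def tree_sum_recursive(node):
    # Recursive form: sum of the values stored in a binary tree.
    if node is None:
        return 0
    return node.value + tree_sum_recursive(node.left) + tree_sum_recursive(node.right)

def tree_sum_loop(root):
    # Equivalent loop form: the recursion is replaced by a loop plus an
    # explicit (interpreted) stack, as in the elimination step sketched above.
    total, stack = 0, [root]
    while stack:
        node = stack.pop()
        if node is None:
            continue
        total += node.value
        stack.append(node.left)
        stack.append(node.right)
    return total

t = BTNode(1, BTNode(2), BTNode(3, BTNode(4)))
assert tree_sum_recursive(t) == tree_sum_loop(t) == 10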
Def. 3.4. For a given instruction with input label m, denote by Ind(m), Arg(m), Val(m) the set of its index variables, the set of its argument variables, and the set of its output variables. More precisely, for an instruction m of the following type:
a) Ind(m) is the set of indexes of the array variables among x0, x1, …, xn; Arg(m) = {x1,…, xn} ∪ Ind(m); Val(m) = {x0};
b) Ind(m) is the set of indexes of the array variables among {x1,…, xn}; Arg(m) = {x1,…, xn} ∪ Ind(m); Val(m) = ∅;
c) Ind(m) = ∪{Ind(k) | k ∈ L(P1)}; Arg(m) = ∪{Arg(k) | k ∈ L(P1)} ∪ Ind(m); Val(m) = ∪{Val(k) | k ∈ L(P1)};
d) Ind(m) = ∪{Ind(k) | k ∈ L(P1)} ∪ J1, where J1 is the set of indexes of the array variables of the predicate p; Arg(m) = ∪{Arg(k) | k ∈ L(P1)} ∪ Ind(m); Val(m) = ∪{Val(k) | k ∈ L(P1)}.
For a program or sub procedure P, the sets are Ind(P) = ∪{Ind(k) | k ∈ P}, Arg(P) = ∪{Arg(k) | k ∈ P} ∪ Ind(P), Val(P) = ∪{Val(k) | k ∈ P}.
Def. 3.5. A loop C with a non-empty set of index variables is called separated if its body P is a sequence of instructions
m0: do P1 then m1;
…..
mk−1: do Pk then mk,
where for all i, j ∈ N: (Ind(Pi) ∩ Val(Pi+j) = ∅) & (Ind(Pi) ∩ Val(Pi−1) ≠ ∅).
Here P1,…, Pk−1 are called controllers of levels 1,…, k−1, and the last procedure Pk is called the kernel of the loop.
Def. 3.6. The separated loop is called strictly separated if for every Pi (i < k): Val(Pk) ∩ (Arg(Pi) ∪ Ind(Pi)) = ∅.
In other words, the kernel of a strictly separated loop does not change any variable used by any controller.
Def. 3.7. A schema is called simple if it consists only of instructions of types a) and d).
Def. 3.8. Two schemas are called t-equal if they are equal for each interpretation in which the functions and predicates are defined for all arguments (also called totally defined).
Def. 3.9. A schema is called forward oriented if, in each loop body, the index of a variable can, for any interpretation, be changed only before it is used. Syntactically, in each branch with array variables there is no operator with a bigger label that changes their indexes.
The following auxiliary statement can be proven.
Lemma 3.1. There exists an algorithm transforming any L-schema into a t-equal forward-oriented schema.
Idea of proof. Let‟s have two instructions S1 and S2 in one branch of P. S1 is executed
early (input label of S1 is less than for S2) and uses index variable E and S2 is executed later and
changes E in one of the branches. Then we‟ll add new variable newE and add ahead of S1
instruction
newE = E;
and replace index E to newE in S1.
Modified branch looks like next
….
newE=E;
x1=f(x[.., newE,..]…);
….
E=g(…);
…
It clearly equals to old branch for any interpretation of f and g and the branch now satisfies for
forward orientation condition. By repeating such modification for any branch and loop body we
will end up with forward oriented schema.
Now we are ready for the main result of this section.
Theorem 3.2. There exists an algorithm to transform any loop C with arrays into a t-equal separated loop with some number n of controllers, and there is no equal separated loop with a different number of controllers.
Proof of the first part of the theorem. Let B be the body of the loop C. According to Lemma 2.1 we may assume that B has a “structural” partial order “<” on L(B). For each branch, collect the instructions k whose input label is bigger (in the order “<”) than that of the start instruction, and build the set of left-side variables Vs = ∪Val(k), until we meet an instruction with indexes from Vs or reach a final one. Call this instruction “limited”, and continue with the other branches. The process stops when all branches have been visited and each has ended with a limited or a final instruction. The visited set of instructions constitutes the first-level controller.
To finish building the first controller, add interpreted instructions with a new variable vLab1 which will keep the output label of the last executed instruction. It may look like the following. Let mr be a final or limited output label. Then we replace mr with a new label mAux and add the instruction
mAux: vLab1='mr'.
We also add the following interpreted instructions to the remaining set of instructions, after removing the controller's instructions:
maux0: if (vLab1==m1) then m1 else maux1 // start instruction of next procedure
maux1: if (vLab1==m2) then m2 else maux2
maux2: ..,
where the mauxi are new auxiliary labels.
These additions guarantee that after execution of the first controller the calculations continue in the original order.
Clearly, if the remaining set of instructions has an instruction which changes indexes of others, we may repeat the above process: mark as limited those instructions that have indexes from ∪{Ind(k) | k ∈ L(P)} − Val(P0), and separate the next-level controller Pi. If there are no such instructions, then the loop has only i controllers and the remaining set of instructions is the kernel.
The second part of the statement (about the number of controllers) can be proven using special interpretations which we borrow from Gödel's model theory; he developed them to study logical models (and called them models on constants). We will use this technique several times later in this study.
Suppose we have some formal system with signature consisting of symbols of functions
from F and predicates from P. Then the set of values for standard interpretations is a set of
terms T, built with elements of F and variable expressions. Formally T can be defined by
induction:
1. Simple variables X, used in schema (finite set) are elements of T;
2. If a[r(x1,.., xn1), ..] is an array variable in schema, then expression (string of symbols)
„a[r1(x1,…, xn1), ..]‟ is in T for any x1,…, xn from T.
3. the term f(t1,…,tn), for t1,…,tn from T and f ∈ F, also belongs to T.
The standard interpretation of a functional symbol f with n arguments is the map taking terms t1,…, tn to the term (chain of symbols) 'f(t1,…, tn)' if it is in T, and is not defined if it is not in T. In other words, the value of a function is the path of its calculation.
The standard interpretation of a predicate symbol is determined by a finite set D, called a diagram, of expressions 'p(t1,…,tn)' or '¬p(t1,…,tn)' (only one of these two expressions is allowed to be in D), where t1, …, tn are from T.
Now we may prove the second part of Theorem 3.2: the number of controllers is invariant under equivalent transformations.
Let us consider the loop C under a standard interpretation. By construction, each controller changes at least once the indexes used in the next controller. So the value of an index under the standard interpretation has to be a word of the form 'a[…, t,…]', where the term t is a value produced by the previous controller. The next-level controller has to have a branch which changes the indexes for the level after it, and this branch has to contain this expression. Otherwise, if no path (for a total schema it is also a branch) of such a controller has this word inside the index expression t, then all its paths must be included in the previous levels.
Therefore the execution of n controllers has to produce a value with nesting depth n−1 of the brackets [ ]. Hence two loops with different numbers of controllers cannot be equal.
This finishes the proof.
This technique may be used for a more detailed classification of loops. For example, it also follows from the proof that loops with different dependency graphs between controllers cannot be equal.
4. Immediate information dependency between iterations
Each loop iteration has to depend on the immediately previous iteration; otherwise an iteration would just repeat the previous one (the memory used by the iteration does not change) and the loop would run infinitely long. So the iterations of the whole loop body cannot be executed in parallel, but parts of them can be. Iterations of the level-1 controller have to be connected with the body iterations; otherwise it could be run only once.
For a body with an indexed variable, the result created on one iteration n0 can be used on another iteration n1 (n0 < n1), not necessarily the one immediately after n0. To determine whether the value of a[K0(i|n0)] created on iteration n0 is used by the indexed variable a[K1(j|n1)] on iteration n1, we have to solve the equation
K0(i|n0) = K1(j|n1)
for n0 and n1. The expression i|n0 means the value of i on iteration n0.
For this equation we have to identify the execution steps of the system of nested loops.
We‟ll use the following notations for nested loops and its elements. If Ci1 is a loop with
body Bi1, then loops that belong to it will be denoted as Ci1,i2. In general Ci1,…,in will denote loops
belonging to the body of the loop Ci1,…,in-1. So depth of the nested loop is reflected in the index
dimension of its name. The next diagram illustrates this notation:
Figure 4.1.
For simplicity we suppose that in a schema all instructions are different and each loop has
a counter of iterations starting at 0 when loop instruction is initialized. Then for loop Ci1,…,in a
vector of counters m=m1,…,mn of the inner loops Ci1,…,ip for p<n +1 will be unique for the step
of execution of any instruction q from its body and we will use the notation qm.
4.1. Connection equation
An immediate connection between steps q1m1 and q2m2 (let q1m1 < q2m2) takes place when there is a simple or indexed variable whose value is created on step q1m1 and used on step q2m2, or whose result from q1m1 is overwritten by q2m2, and q2m2 is the nearest step with this property (there is no operator q3 and iteration m3 (m3 < m2) in between with a property like that of q2).
For a simple variable the nearest m2 is the iteration immediately after the vector m1. For indexed variables a[K1(i)] ∈ Val(q1) and a[K2(j)] ∈ Arg(q2), an immediate connection means that K1(i|m1) = K2(j|m2) and there is no instruction q3m3, with q1m1 < q3m3 < q2m2, having a variable a[K3(p)] ∈ Val(q3) such that K3(p|m3) = K1(i|m1) for m1 < m3 < m2; here i|m1 means the value of i calculated by some controller on iteration m1.
So, to detect information connections we have to solve the following equation:
K1(i|m1) = K2(j|m2).
We will call it a connection equation.
It is an equation over natural numbers whose both sides are superpositions of index expressions and controller functions. We can solve this problem for systems of linear equations, but not for polynomials of degree higher than 3.
It means that the class of programs for solving systems of linear equations, matrix operations and differential equations can be parallelized automatically [1,2] (a sketch of the linear case is given below). But this way is a dead end: even when the connection equation is a polynomial, no solving algorithm exists – we face a Diophantine equation problem.
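For the tractable linear case, the dependence test reduces to a linear Diophantine connection equation. The following Python sketch (illustrative only; real dependence tests such as the GCD or Banerjee tests are more elaborate) checks whether a write a[s0*i + c0] and a read a[s1*j + c1] can ever refer to the same element.

from math import gcd

def may_depend(s0, c0, s1, c1, lo=0, hi=100):
    # Connection equation  s0*i + c0 = s1*j + c1  for iterations i, j in [lo, hi].
    # Assumes nonzero strides s0, s1. Returns True if some pair (i, j) satisfies
    # it, i.e. the write and the read can touch the same array element.
    # GCD test: the equation s0*i - s1*j = c1 - c0 has integer solutions
    # only if gcd(s0, s1) divides c1 - c0.
    if (c1 - c0) % gcd(s0, s1) != 0:
        return False
    # For this sketch, confirm within the bounds by enumeration.
    return any(s0 * i + c0 == s1 * j + c1
               for i in range(lo, hi + 1) for j in range(lo, hi + 1))

# x[2*i] = ...   versus   ... = x[2*j + 1]  -> never the same element
print(may_depend(2, 0, 2, 1))   # False
# x[3*i] = ...   versus   ... = x[6*j + 3]  -> equal when i = 2*j + 1
print(may_depend(3, 0, 6, 3))   # True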
Remark 4.1. It can be shown that this equation is the only obstacle – if we have a solution of the connection equation, no algorithmically unsolvable problems are left for parallelizing free program schemas.
This is a technical result and we do not show it here.
4.2. Avoiding the connection equation
Let us modify the programming language (and call it the language of programs with predecessors) by changing only the semantics of index variables: instead of asking for the connection equation, we ask directly for its solution. Now a variable with an index expression like a[g(i)] may be used only on the right side of an assignment operator, and it means that the current iteration must use the value of a that was created on the previous iteration g(i), hence g(i) < i. In other words, the index expression refers to a point in the iteration space, not to a point in an array.
There is no need to write an index expression on any left side; it is always the current iteration (the one in which the operator is executed).
Example 4.1. Consider a 4-point differential approximation:
f_k(i,j) = 1/4 ( f_k(i−1, j) + f_k(i, j−1) + f_{k−1}(i+1, j) + f_{k−1}(i, j+1) ).
The equivalent program schema with predecessors is next system of loops
for (k=1;k<N+1;k++)
for (i=1;i<N+1;i++)
for (j=1;j<N+1;j++)
f = 1/4(f[k, i-1, j]+f[k, i, j-1]+f[k-1, i+1,j]+f[k-1, i, j+1]);
In reality the last line has to be more complex because of the start data. Start data, i.e. elements of the input arrays, might be introduced via negative indexes. But for simplicity we do not show the calculation along the coordinate planes.
The natural way to execute a program with predecessors is to repeatedly run, in parallel, all iterations of the loop body that have their data available. Obviously, a more efficient execution is to examine only the iterations that have just received data.
For the example above the controllers are simple and just increase the indexes j, i, k. Their values do not depend on the kernel (the last line of code), so controller iterations can be executed before any kernel iteration. Thus we get a list of 3D vectors (1,1,1), (1,1,2),…, (1,1,N),…, (1,2,1),…, (1,2,N), …, (1,N,N),…, (N,1,1),…, (N,N,N). For each of them we check whether the data for its arguments is available. In practice we have to check only points changed on the previous step.
At the beginning it is the point (1,1,1). Then data becomes ready for the points (1,2,1), (1,1,2), (2,1,1).
Let P be a plane containing these three points, and let Norm be its normal vector. Then at the next step data will be ready for the iterations lying on the next nearest plane with the same normal vector containing integer coordinates. One can see that each successive set of iterations available for execution belongs to the next parallel plane with integer points.
The iterations on one plane can be calculated in parallel. Indeed, for any integer point of the plane, all points used by this iteration lie strictly below the plane. It means that these points were already calculated, and no point of the plane can be an argument for another point of the same plane; hence the points on one plane can be calculated in parallel.
So this 4-point approximation is highly parallel, and for this 3D task the parallel execution time is a linear function of the dimension size.
Next figure 4.2 illustrates this case.
Figure 4.2.
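A minimal Python sketch of this data-driven execution order is given below (illustrative only; the boundary handling and start data are simplified, and the inner loop over each ready set is the part that could run in parallel).

import numpy as np

N = 6                                        # grid size (illustrative)
f = np.zeros((N + 2, N + 2, N + 2))          # f[k, i, j]
# Simplified start data: every boundary point of the index box is given.
f[0, :, :] = 1.0
f[:, 0, :] = 1.0
f[:, :, 0] = 1.0
f[:, N + 1, :] = 1.0
f[:, :, N + 1] = 1.0

def predecessors(k, i, j):
    # Iterations whose results the point (k, i, j) uses.
    return [(k, i - 1, j), (k, i, j - 1), (k - 1, i + 1, j), (k - 1, i, j + 1)]

interior = {(k, i, j) for k in range(1, N + 1)
            for i in range(1, N + 1) for j in range(1, N + 1)}
done = {(k, i, j) for k in range(N + 2) for i in range(N + 2)
        for j in range(N + 2) if (k, i, j) not in interior}   # start/boundary data
pending = set(interior)

waves = 0
while pending:
    # All pending iterations whose data is already available; they are mutually
    # independent, so this inner loop could run in parallel.
    ready = [p for p in pending if all(q in done for q in predecessors(*p))]
    for (k, i, j) in ready:
        f[k, i, j] = 0.25 * (f[k, i - 1, j] + f[k, i, j - 1]
                             + f[k - 1, i + 1, j] + f[k - 1, i, j + 1])
    done.update(ready)
    pending.difference_update(ready)
    waves += 1

print("sequential waves:", waves)            # grows linearly with N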
A remarkable feature of this language is that the problem of finding the nearest iteration in which the result of the current iteration will be used is trivially algorithmically solvable: the index expression of a right-side variable is a solution of the connection equation. Still, both languages are algorithmically full. It is easy to prove that there exists a fixed interpretation of the functional and predicate symbols such that each algorithmic function can be calculated by a program from these classes.
There is no contradiction with Rice's theorem about the unsolvability of mass problems: the class of our programs is not a proper subclass – it is a full class.
For programs we have to consider two dimensions of fullness: algorithmic fullness, when any partial recursive function has a program that calculates it, and data fullness, when the data selection functions form an algorithmically full class, so that any data structure can be represented.
Loops with simple controllers are well suited for parallelization. They produce well-organized data streams that can be implemented efficiently on parallel systems with separate memories. The questions are how useful such programs are, how to organize parallel execution, and how complex it is. The only comprehensive but expensive way to answer these questions is to build a real hardware and software system together with applications from a wide enough area to be executed on it. But, as shown in [3], there are many problems along that way as well.
4.3. Cone of dependency and parallelism
A program with predecessors has a simple geometric interpretation. Consider a hierarchy of loops with B the body of the deepest loop. Then for each iteration i1,…,in we may calculate all immediate predecessors whose values are used on this step. Repeating this for each predecessor, for the predecessors of the predecessors, and so on, we obtain a set of iterations which we call the cone of dependency. It is clear that, to get the result of the current iteration, we need the results of every iteration from this cone.
Figure 4.3 represents the cone for iteration (k,i,j) for the 4-point example above:
Figure 4.3.
The dependency cones of the points of a plane of iterations executable in parallel do not include each other, so these iterations are independent from each other. This plane is tangential to the cone at the point (k, i, j), and the cone lies on one side of it.
The relationship between sets of independent iterations and sets of iterations executable in parallel is not simple.
The next example shows that even if every set of iterations lying on a line parallel to some given one is independent, a parallel execution for such sets may not exist.
Example 4.2.
For (z=1; z<N; z++)
{
For (x=1; x<N; x++)
{
For (y=1; y<N; y++)
{
for (p=1; ((x+p<n)&(y+p<n)); p++)
{
m0: if (x>y) then m1 else m2
m1:v=f1(u[p,x+p], u[x+p,p],v) then m2
m2:v=f2(u[p,y+p], u[y,y+p],v) then m3
m3: u[x,y]=f3(u[x,y], u[x-1,y-1],v) then ms
}
}
}
}
The cone of dependency for these nested loops is shown in Figure 4.4.
Here any set of iterations parallel to L (the bold line) consists of independent iterations. But two such lines cannot be executed in parallel, because the dependency cone of one of them contains points of the other. Thus any line parallel to L is a set of independent iterations, but only the iterations of one line at a time can be executed in parallel.
Figure 4.4.
It can be proved that any forward loop can be transformed into a loop in separated form by the same technique as for ordinary program schemas. The number of its controllers and their connection topology are invariants of the transformation. Therefore loops with different such characteristics cannot be equivalent.
An important feature of forward loops is their data-selection fullness: the controllers may represent any functions. Hence any data structure can be represented and processed in parallel.
Conclusion. We have shown that the problem of loop-iteration independence contains the problem of solving connection equations. The latter problem can be avoided by changing the semantics of index expressions. The resulting class is algorithmically full and has full data selection.
References
1. R. Scott, T. Clark, B. Baghery, Scientific parallel computing, Princeton University
Press, 2005.
2. S.L. Graham, M. Sniz, C.A. Patterson, The Future of Supercomputing: An Interim
Report, Washington, DC, National Academies Press. 2005.
3. H. Zima, K. Kennedy, C. Koelbel, The rise and fall of High Performance Fortran: an historical object lesson, Proceedings of the third ACM SIGPLAN conference on History of Programming Languages, San Diego, California, June 09–10, 2007.
Figure 1.1. Structured and unstructured loops
Figure 4.1. Loop naming
Figure 4.2. Hyper planes for parallel execution
Figure 4.3. A cone for point (k,i,j)
Figure 4.4. Cone of dependence for the system from Example 4.2.
| 8 |
arXiv:1709.07358v1 [cs.DS] 21 Sep 2017
Non-Depth-First Search against Independent
Distributions on an AND-OR Tree
Toshio Suzuki
Department of Mathematics and Information Sciences,
Tokyo Metropolitan University,
Minami-Ohsawa, Hachioji, Tokyo 192-0397, Japan
[email protected]
September 22, 2017
Abstract
Suzuki and Niida (Ann. Pure. Appl. Logic, 2015) showed the following
results on independent distributions (IDs) on an AND-OR tree, where
they took only depth-first algorithms into consideration. (1) Among IDs
such that probability of the root having value 0 is fixed as a given r such
that 0 < r < 1, if d is a maximizer of cost of the best algorithm then d is
an independent and identical distribution (IID). (2) Among all IDs, if d
is a maximizer of cost of the best algorithm then d is an IID. In the case
where non-depth-first algorithms are taken into consideration, the counterparts of (1) and (2) are left open in the above work. Peng et al. (Inform.
Process. Lett., 2017) extended (1) and (2) to multi-branching trees, where
in (2) they put an additional hypothesis on IDs that probability of the
root having value 0 is neither 0 nor 1. We give positive answers for the two
questions of Suzuki-Niida. A key to the proof is that if ID d achieves the
equilibrium among IDs then we can choose an algorithm of the best cost
against d from depth-first algorithms. In addition, we extend the result
of Peng et al. to the case where non-depth-first algorithms are taken into
consideration.
Keywords: Non-depth-first algorithm; Independent distribution; Multi-branching tree; Computational complexity; Analysis of algorithms
MSC[2010] 68T20; 68W40
1 Introduction
In this paper, we are interested in bi-valued minimax trees, in other words, AND-OR trees. Thus, every internal node is labeled either AND or OR, and each
leaf is assigned 1 (true) or 0 (false). AND layers and OR layers alternate. At
each internal node, the number of child nodes is 2 or more. Given such a tree, a
distribution on it denotes a probability distribution on the truth assignments to
the leaves. In particular, we investigate sufficient conditions for an independent distribution to have an optimal algorithm that is depth-first.
To be more precise, at the beginning of computation, truth values of leaves
are hidden. An algorithm A for a given AND-OR tree T is a Boolean decision
tree whose nodes are labeled by leaves of T . An objective of A is to find value of
the root of T . During computation, if A has enough information to determine an
internal node x of T then A omits to make queries to the remaining descendants
of x.
Definition 1. A is a depth-first algorithm if for each internal node x, once A
makes a query to a leaf that is a descendent of x, A does not make a query to a
leaf that is not a descendent of x until A finds value of x. If A is not depth-first,
A is non-depth-first.
Cost of a computation is measured by the number of queries made by A
during the computation. In the case where a distribution is given, cost denotes
expected value of the above mentioned cost.
When a tree T and a distribution on it are given, an optimal algorithm
denotes an algorithm whose cost achieves the minimum. An independent distribution (ID) denotes a distribution such that each leaf has probability of having
value 0, and value of the leaf is independent of the other leaves. If all the leaves
have the same probability then such an ID is an independent and identical
distribution (IID).
Early in the 1980s, optimal algorithms for IID were studied by Pearl [3, 4]
and Tarsi [10].
Definition 2. [10] A tree is balanced if (1) and (2) hold.
(1) Internal nodes of the same depth (distance from the root) have the same
number of child nodes.
(2) All the leaves have the same depth.
If, in addition, all the internal nodes have the same number k of child nodes
then we call such a tree a uniform k-ary tree.
Tarsi investigated the case where a tree is a balanced NAND tree and a
distribution is an IID such that probability of a leaf is neither 0 nor 1. He
showed that, under these hypotheses, there exists an optimal algorithm that is
depth-first and directional. Here, an algorithm A for a tree T is directional [3] if
there is a fixed linear order of the leaves of T such that for any truth assignments
to the leaves, priority of A probing the leaves is consistent with the order.
In the mid 1980s, Saks and Wigderson [7] studied game trees with a focus
on correlated distributions, in other words, distributions not necessarily independent. However, they did not investigate non-depth-first search against an
independent distribution.
In the late 2000s, Liu and Tanaka [2] shed light on the result of Saks-Wigderson again. Since then, optimal algorithms and equilibria have been studied in subsequent works, both in the correlated distribution case [8, 6] and in
the independent distribution case [9, 5].
Among these works, Suzuki and Niida [9] studied independent distributions
on a binary tree such that probability of the root having value 0 is r, where
r is a fixed real number such that 0 < r < 1. In this setting, they showed
that any distribution achieving an equilibrium is IID. In addition, they showed
that the same as above holds when a distribution runs over all IDs. Peng et
al. [5] extended the results of [9] to the case of multi-branching trees (under an
additional hypothesis).
However, in a series of studies from [2] to [5], algorithms are assumed to be
depth-first. In [9], they raised questions whether the results in that paper hold
when we take non-depth-first algorithms into consideration.
In this paper, we study counterparts to the results in [9] and [5] in the
presence of non-depth-first algorithms.
In section 2, we give definitions and review former results.
In section 3, we look at specific examples on binary AND-OR trees (and
OR-AND trees) whose heights are 2 or 3. The class of IDs has a nice property
in the case of height 2. There exists an optimal algorithm that is depth-first.
On the other hand, there is a uniform binary OR-AND tree of height 3 and an
ID on it such that all optimal algorithms are non-depth-first.
In section 4, we give positive answers to the questions in [9] (Theorem 6,
the main theorem). A key to the proof is that if ID d achieves the equilibrium
among IDs then there is an optimal algorithm for d that is depth-first. In the
proof, we do not use explicit induction. We use results of Suzuki-Niida (2015)
[9] and Tarsi (1983) [10]. Induction is in use in the proofs of these results.
By using a result of Peng et al. (2017) [5] in the place of the result of Suzuki
and Niida, we show similar results on multi-branching trees. This result extends
the result of Peng et al. to the case where non-depth-first algorithms are taken
into consideration.
2 Preliminaries
2.1 Notation
The root of an AND-OR tree is labeled by AND. Each child node of an AND
node (OR node, respectively) is either an internal node labeled by OR (AND,
respectively) or a leaf. The concept of an OR-AND tree is defined in a similar
way with the roles of AND and OR exchanged.
We let λ denote the empty string. Given a tree, we denote its root by xλ .
Suppose u is a string and xu is an internal node of a given tree, and that xu has
n child nodes. Then we denote the child nodes by xu0 , xu1 , . . . , and xu(n−1) .
Throughout the paper, unless otherwise specified, an algorithm denotes a
deterministic algorithm, that is, it does not use a random number generator.
The terminology “straight algorithm” [10] is a synonym of “depth-first algorithm”. Since we have assumed that if an algorithm has enough information
it skips a leaf, a depth-first algorithm is a special case of an alpha-beta pruning
algorithm [1].
Suppose that a balanced tree is given. In [10], a depth-first directional
algorithm SOLVE is defined as follows. A leaf xu has higher priority of probing
than a leaf xv if and only if u is less than v with respect to lexicographic order.
For example, if a given tree is binary and of height 2, then x00 has the highest
priority, then x01 , x10 and x11 follow in this order.
Suppose that given a tree, we investigate algorithms on this fixed tree. We
introduce the following conventions on notation.
“A: non-depth” stands for “A is an algorithm”. A may be depth-first or
non-depth-first. A may be directional or non-directional.
“A: depth” stands for “A is a depth-first algorithm”. A may be directional
or non-directional.
Suppose that we use one of the above as a suffix of an operator, say as
follows.
max_{A: non-depth}
Then we mean that A runs over the domain designated by the suffix. In
the above example, A runs over all deterministic algorithms, where A may be
depth-first or non-depth-first and A may be directional or non-directional.
For any node x of a given tree, unless otherwise specified, probability of x
denotes probability of x having value 0.
2.2 Previous results
Theorem 1. (Tarsi [10]) Suppose that T is a balanced NAND-tree and that
d is an IID and probability of the root is neither 0 nor 1. Then there exists a
depth-first directional algorithm A0 with the following property.
cost(A0, d) = min_{A: non-depth} cost(A, d)    (1)
In short, under the above assumption, optimal algorithm is chosen from
depth-first algorithms.
Theorem 2 (2) is asserted in [2] without a proof, and later, a proof is given
in [9] by using Theorem 2 (1).
Theorem 2. (Suzuki-Niida [9]; see also Liu-Tanaka [2]) Suppose that T is a
uniform binary AND-OR tree.
(1) Suppose that r is a real number such that 0 < r < 1. Suppose that d0 is
an ID such that probability of the root is r and the following equation holds.
min_{A: depth} cost(A, d0) = max_{d: ID, r} min_{A: depth} cost(A, d)    (2)
Here, d runs over all IDs such that probability of the root is r. Then d0 is
an IID.
(2) Suppose that d1 is an ID such that the following equation holds.
min_{A: depth} cost(A, d1) = max_{d: ID} min_{A: depth} cost(A, d)    (3)
Here, d runs over all IDs. Then d1 is an IID.
Peng et al. [5] extended Theorem 2 to the multi-branching case with an
additional assumption.
Theorem 3. (Peng et al. [5]) Suppose that T is a balanced (multi-branching)
AND-OR tree.
(1) Suppose that r is a real number such that 0 < r < 1. Suppose that d0
is an ID with the following property. The probability of the root is r, and the
following equation holds.
min_{A: depth} cost(A, d0) = max_{d: ID, r} min_{A: depth} cost(A, d)    (4)
Here, d runs over all IDs such that probability of the root is r.
Then d0 is an IID.
(2) Suppose that d1 is an ID with the following property. Probability of root
is neither 0 nor 1, and the following equation holds.
min_{A: depth} cost(A, d1) = max_{d: ID} min_{A: depth} cost(A, d)    (5)
Here, d runs over all IDs. Then d1 is an IID.
3 Specific examples
We look at specific examples on uniform binary AND-OR trees (and, OR-AND
trees) with heights 2 or 3. We begin by a tree of height 2. The class of IDs on
such a tree has a nice property.
Proposition 4. Suppose that T is a uniform binary AND-OR tree (or a uniform
binary OR-AND tree, respectively) of height 2. In addition, suppose that d is
an ID. Then the following holds.
(1) There exists a depth-first directional algorithm A0 with the following
property.
cost(A0, d) = min_{A: non-depth} cost(A, d)    (6)
If, in addition, we put an assumption that
(*) at every leaf, probability given by d is not 0 (not 1, respectively),
then any algorithm A0 satisfying (6) is depth-first.
(2) If we do not assume (*) then the following holds. For any algorithm A0
satisfying (6), probability (given by d) of A0 performing non-depth-first move is
0.
Proof. (sketch) We investigate the case of AND-OR trees only. The other case
is shown in the same way.
(1) We begin by defining some non-depth-first algorithms.
Algorithm A(xij , x(1−i)k , x(1−i)(1−k) , xi(1−j) ): Probe xij . If xij = 1 then
we know xi = 1. Then probe the subtree under x1−i by the most efficient
depth-first algorithm.
Otherwise, that is, if xij = 0 then probe x(1−i)k . If x(1−i)k = 1 then we
know x1−i = 1. Then probe xi(1−j) .
Otherwise, that is, if x(1−i)k = 0 then probe x(1−i)(1−k) . If x(1−i)(1−k) = 0
then we know x1−i = 0 and xλ = 0 thus we finish.
Otherwise, that is, if x(1−i)(1−k) = 1 then probe xi(1−j) . This completes the
definition of algorithm A(xij , x(1−i)k , x(1−i)(1−k) , xi(1−j) ).
Algorithm A′ (xij , x(1−i)k , xi(1−j) , x(1−i)(1−k) ): Probe xij . If xij = 1 then
we know xi = 1. Then probe the subtree under x1−i by the most efficient
depth-first algorithm.
Otherwise, that is, if xij = 0 then probe x(1−i)k . If x(1−i)k = 1 then we
know x1−i = 1. Then probe xi(1−j) .
If x(1−i)k = 0 then probe xi(1−j) . If xi(1−j) = 0 then we know xi = 0 and
xλ = 0 thus we finish.
Otherwise, that is, if xi(1−j) = 1 then probe x(1−i)(1−k) . This completes the
definition of algorithm A′ (xij , x(1−i)k , xi(1−j) , x(1−i)(1−k) ).
Thus we have defined 16 algorithms A(xij , x(1−i)k , x(1−i)(1−k) , xi(1−j) ) and
A′ (xij , x(1−i)k , xi(1−j) , x(1−i)(1−k) ) (i, j, k ∈ {0, 1}).
Claim 1 The minimum cost among all non-depth-first algorithms is achieved
by one of the above 16.
Proof of Claim 1: Straightforward.
Q.E.D.(Claim 1)
Now, suppose that d is an ID such that the probability of leaf xij is qij (i, j ∈
{0, 1}). Without loss of generality, we may assume that qi0 ≤ qi1 (i ∈ {0, 1})
and q00 q01 ≥ q10 q11 . Throughout rest of the proof, cost of a given algorithm
denotes cost of the algorithm with respect to d.
Claim 2 If q01 ≤ q11 then among all non-depth-first algorithms, the minimum cost is achieved by A(x00 , x10 , x11 , x01 ). Otherwise, that is, if q01 > q11
then among all non-depth-first algorithms, the minimum cost is achieved by
A(x10 , x00 , x01 , x11 ).
Proof of Claim 2:
Let f (x, y, z, w) = −xyz + xy − xw + 2x + w + 1,
for non-negative real numbers x, y, z ≤ 1 and w. For each of 16 algorithms,
its cost is written by using f . For example, cost of A(x00 , x10 , x11 , x01 ) is
f (q00 , q10 , q11 , q10 + 1), where qij is the probability of xij .
By using properties of f (for example, if x ≤ x′ and w ≤ 2 then f (x, y, z, w) ≤
f (x′ , y, z, w)), it is not difficult to verify Claim 2.
Q.E.D.(Claim 2)
Let SOLVE be the depth-first directional algorithm defined in Preliminaries
section, that is, SOLVE probes the leaves in the order x00 , x01 , x10 , x11 . Let
SOLVE′ be the depth-first directional algorithm probing the leaves in the order
x10 , x11 , x00 , x01 .
Among all depth-first algorithms, either SOLVE or SOLVE′ achieves the
minimum cost.
We consider three cases. Case 1, q01 ≤ q11 and cost(SOLVE) > cost(SOLVE′ ):
Case 2, q01 ≤ q11 and cost(SOLVE) ≤ cost(SOLVE′ ): Case 3, q11 < q01 . The
remainder of the proof is routine.
(2) The case of all the leaves having positive probability is reduced to (1).
Otherwise, recall that we assumed that qi0 ≤ qi1 (i ∈ {0, 1}) and q00 q01 ≥ q10 q11 .
Therefore, q10 = 0. We consider two cases depending on whether q00 = 0 or
not. The remainder of the proof is easy.
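The following Python sketch (not part of the paper) illustrates Proposition 4 numerically on the height-2 AND-OR tree (root = AND, internal nodes = OR). It computes exact expected probe counts, by enumerating all 16 leaf assignments, for the depth-first algorithms SOLVE and SOLVE′ and for the non-depth-first algorithm A(x00, x10, x11, x01) defined above. The probabilities q_ij are arbitrary example values chosen so that the "without loss of generality" assumptions of the proof hold; this is a sanity check, not a proof.

```python
# Exact expected-cost comparison on a height-2 AND-OR tree under an independent distribution.
from itertools import product

q = {(0, 0): 0.3, (0, 1): 0.6, (1, 0): 0.2, (1, 1): 0.7}   # q_ij = Pr[leaf x_ij = 0]

def prob(leaves):
    p = 1.0
    for ij, v in leaves.items():
        p *= q[ij] if v == 0 else 1.0 - q[ij]
    return p

def depth_first(leaves, order):
    """Probe the two subtrees in the given order; inside a subtree probe its leaves in order."""
    probes = 0
    for i, (j0, j1) in order:
        probes += 1
        val = leaves[(i, j0)]              # OR node: short-circuit on a 1
        if val == 0:
            probes += 1
            val = leaves[(i, j1)]
        if val == 0:                       # AND root: short-circuit on a 0
            return probes
    return probes

def non_depth_first(leaves):
    """Algorithm A(x00, x10, x11, x01) from the proof; x10 is probed before x11
    inside the right subtree, which is the cheaper order here since q10 <= q11."""
    probes = 1
    if leaves[(0, 0)] == 1:                # x0 = 1; finish the subtree under x1 depth-first
        probes += 1
        if leaves[(1, 0)] == 0:
            probes += 1
        return probes
    probes += 1
    if leaves[(1, 0)] == 1:                # x1 = 1; the root is decided by x01
        return probes + 1
    probes += 1
    if leaves[(1, 1)] == 0:                # x1 = 0, hence the root is 0
        return probes
    return probes + 1                      # x1 = 1; still need x01

def expected(algorithm):
    total = 0.0
    for bits in product((0, 1), repeat=4):
        leaves = dict(zip([(0, 0), (0, 1), (1, 0), (1, 1)], bits))
        total += prob(leaves) * algorithm(leaves)
    return total

solve  = expected(lambda l: depth_first(l, [(0, (0, 1)), (1, (0, 1))]))
solve2 = expected(lambda l: depth_first(l, [(1, (0, 1)), (0, (0, 1))]))
nondf  = expected(non_depth_first)
print(f"SOLVE: {solve:.4f}  SOLVE': {solve2:.4f}  A(x00,x10,x11,x01): {nondf:.4f}")
print("best depth-first <= non-depth-first:", min(solve, solve2) <= nondf)
```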
Tarsi [10] showed an example of an ID on a NAND-tree for which no depth-first algorithm is optimal. The tree of the example is not balanced, and every leaf has distance 4 from the root. More precisely, at the level of distance 3 from the root, some nodes have two leaves as children while the other nodes have just one leaf as a child.
In the case of height 3 binary tree, we are going to show that the counterpart
of Proposition 4 does not hold.
Proposition 5. Suppose that T is a uniform binary OR-AND tree of height 3.
Then there exists an ID d0 on T with the following property. At any leaf, the probability of having value 0 is neither 0 nor 1, and for any deterministic algorithm A0 such that

cost(A0, d0) = min_{A: non-depth} cost(A, d0)    (7)

holds, A0 is not depth-first.
Proof. Given an ID, for each binary string u of length at most 3, let qu denote
probability of node xu having value 0.
Let ε be a positive real number that is small enough.
Let dε be the ID such that qij0 = (1+ε)/2 and qij1 = 1/(1+ε) for each i, j ∈
{0, 1}. Since the labels of xij , xi and xλ are OR, AND and OR, respectively,
it holds that qij = qij0 qij1 = 1/2, qi = 3/4 and qλ = 9/16. The optimal
cost among deterministic depth-first algorithms is given by cost(SOLVE, dε ) =
(1 + 3/4)(1 + 1/2)(1 + (1 + ε)/2).
We define a deterministic non-depth-first algorithm A0 as follows.
First, probe the subtree under x00 where x000 has higher priority. If we have
(x000 , x001 ) = (0, 0), that is, if x00 = 0 then we know x0 = 0. In this case, probe
the subtree under x1 in the same manner as SOLVE until finding the value of
the root.
Otherwise, that is, if either x000 = 1 or (x000 , x001 ) = (0, 1), we know x00 =
1. In this case, probe x010 . If x010 = 1 then we know x0 = 1, thus xλ = 1 and
we finish.
Otherwise, that is, if x010 = 0, put x011 on ice, and probe the subtree under
x1 in the same manner as SOLVE until finding the value of x1 . If x1 = 1 then
we know xλ = 1 and finish.
Otherwise, that is, if x1 = 0 then we probe x011 . This completes the definition of A0 .
Figure 1: ID dε. Each black leaf has probability (1 + ε)/2, and each white leaf has probability 1/(1 + ε).
When we have ε → 0+, it holds that cost(SOLVE, dε) → 63/16 and cost(A0, dε) → 31/8. Hence, for ε small enough, we have the following.

cost(A0, dε) < min_{A: depth} cost(A, dε)    (8)

4 Main theorem
Questions 1 and 2 in [9] are whether Theorem 2 (1) and (2) hold in the case
where an algorithm runs over all deterministic algorithms including non-depth-first ones. In the following, we give positive answers to these questions.
As we have seen in the previous section, given an ID, an optimal algorithm is not necessarily chosen from among the depth-first ones. However, if an ID d achieves the equilibrium among IDs, then there is an optimal algorithm for d that is depth-first, which is the key to the following proof.
Theorem 6. (Main Theorem) Suppose that T is a uniform binary AND-OR
tree (or, OR-AND tree).
(1) Suppose that r is a real number such that 0 < r < 1. Suppose that d0
is an ID such that probability of the root having value 0 is r, and the following
equation holds.
min_{A: non-depth} cost(A, d0) = max_{d: ID, r} min_{A: non-depth} cost(A, d)    (9)

Here, d runs over all IDs such that the probability of the root having value 0 is r.
Then there exists a depth-first directional algorithm B0 with the following property.

cost(B0, d0) = min_{A: non-depth} cost(A, d0)    (10)

In addition, d0 is an IID.
(2) Suppose that d1 is an ID that satisfies the following equation.
min_{A: non-depth} cost(A, d1) = max_{d: ID} min_{A: non-depth} cost(A, d)    (11)

Here, d runs over all IDs. Then there exists a depth-first directional algorithm B0 with the following property.

cost(B0, d1) = min_{A: non-depth} cost(A, d1)    (12)
In addition, d1 is an IID.
Proof. Proofs of the two assertions are similar. We are going to prove assertion
(2).
By the result of Suzuki-Niida (Theorem 2 of the present paper), there exists
an IID d2 such that the following equation holds.
max_{d: ID} min_{B: depth} cost(B, d) = min_{B: depth} cost(B, d2)    (13)
Claim 1 There exists a depth-first algorithm B0 satisfying (12).
Proof of Claim 1: We show the claim by contraposition. Thus, given an
ID d1 , we assume that no depth-first algorithm B0 satisfies (12). Thus, we have
the following inequality.

min_{A: non-depth} cost(A, d1) < min_{B: depth} cost(B, d1)    (14)

Our goal is to show the negation of (11). By (13) and (14), we have the following.

min_{A: non-depth} cost(A, d1) < min_{B: depth} cost(B, d2)    (15)
By Theorem 6 of [9], in d2 , probability of the root having value 0 is neither
0 nor 1. Therefore, we can apply the result of Tarsi (Theorem 1 of the present
paper) to d2 . Thus, the right-hand side of (15) equals the following.
min_{A: non-depth} cost(A, d2)
Hence, we have the following.

min_{A: non-depth} cost(A, d1) < min_{A: non-depth} cost(A, d2)    (16)

Therefore, the negation of (11) holds.
Q.E.D.(Claim 1)
By (11), we have the following.
min_{A: non-depth} cost(A, d1) ≥ min_{A: non-depth} cost(A, d2)    (17)

By Claim 1, the left-hand side of (17) equals the following.

min_{B: depth} cost(B, d1)

Since d2 is an IID, by the result of Tarsi (Theorem 1), the right-hand side of (17) equals the following.

min_{B: depth} cost(B, d2)

Therefore, we have the following.

min_{B: depth} cost(B, d1) ≥ min_{B: depth} cost(B, d2)    (18)

Since d2 satisfies (13), we have the following.

min_{B: depth} cost(B, d1) = max_{d: ID} min_{B: depth} cost(B, d)    (19)

Hence, by the result of Suzuki-Niida (Theorem 2), d1 is an IID.
Since we have shown that d1 is an IID, without loss of generality, by the result of Tarsi, we may assume that B0 is directional.
Corollary 7 extends the result of Peng et al. (Theorem 3) to the case where
non-depth-first algorithms are considered.
Corollary 7. Suppose that T is a balanced (multi-branching) AND-OR tree (or,
OR-AND tree).
(1) Suppose that r is a real number such that 0 < r < 1. Suppose that d0 is
an ID such that probability of the root is r, and the following equation holds.
min_{A: non-depth} cost(A, d0) = max_{d: ID, r} min_{A: non-depth} cost(A, d)    (20)
Here, d runs over all IDs such that probability of the root is r.
Then there exists a depth-first directional algorithm B0 with the following
property.
cost(B0, d0) = min_{A: non-depth} cost(A, d0)    (21)
In addition, d0 is an IID.
(2) Suppose that d1 is an ID such that the following equation holds.
min_{A: non-depth} cost(A, d1) = max_{d: ID} min_{A: non-depth} cost(A, d)    (22)

Here, d runs over all IDs.
Then there exists a depth-first directional algorithm B0 with the following
property.
cost(B0, d1) = min_{A: non-depth} cost(A, d1)    (23)
If, in addition, probability of the root is neither 0 nor 1 in d1 then d1 is an
IID.
Proof. (1) By using the result of Peng et al. (Theorem 3) in the place of the
result of Suzuki-Niida (Theorem 2), the present assertion (1) is shown in the
same way as Theorem 6.
(2) In the case where probability of the root having value 0 is neither 0 nor
1 in d1 , the present assertion is reduced to assertion (1). In the following, we
investigate the case where the probability is either 0 or 1.
For example, suppose the probability is 0. Then the root has value 1 with
probability 1. Let A be an optimal algorithm. In order to know that the
root has value 1, A has to find that values of all child nodes are 1. If u is a
child node of the root, u has value 1 with probability 1. Thus, without loss
of generality, the leftmost child of u has value 1 with probability 1. Now, it
is easy to see that there exists a depth-first directional algorithm B0 such that
cost(B0 , d1 ) = cost(A, d1 ).
Acknowledgements
We are grateful to Masahiro Kumabe, Mika Shigemizu and Koki Usami for
helpful discussions. We made an oral presentation of this work at Workshop on
Computability Theory and the Foundations of Mathematics (8 - 12 September
2017) at National University of Singapore. This work was partially supported by
Japan Society for the Promotion of Science (JSPS) KAKENHI (C) 16K05255.
References
[1] Donald E. Knuth and Ronald W. Moore. An analysis of alpha-beta pruning.
Artif. Intell., 6 (1975) 293–326.
[2] ChenGuang Liu and Kazuyuki Tanaka. Eigen-distribution on random assignments for game trees. Inform. Process. Lett., 104 (2007) 73–77.
[3] Judea Pearl. Asymptotic properties of minimax trees and game-searching
procedures. Artif. Intell., 14 (1980) 113–138.
[4] Judea Pearl. The solution for the branching factor of the alpha-beta pruning
algorithm and its optimality. Communications of the ACM, 25 (1982) 559–
564.
[5] Weiguang Peng, NingNing Peng, KengMeng Ng, Kazuyuki Tanaka and
Yue Yang. Optimal depth-first algorithms and equilibria of independent
distributions on multi-branching trees. Inform. Process. Lett., 125 (2017)
41–45.
[6] Weiguang Peng, Shohei Okisaka, Wenjuan Li and Kazuyuki Tanaka. The
Uniqueness of eigen-distribution under non-directional algorithms. IAENG
International Journal of Computer Science, 43 (2016) 318–325.
http://www.iaeng.org/IJCS/issues_v43/issue_3/IJCS_43_3_07.pdf
[7] Michael Saks and Avi Wigderson. Probabilistic Boolean decision trees and
the complexity of evaluating game trees. In: Proc. 27th IEEE FOCS, 1986,
29–38.
[8] Toshio Suzuki and Ryota Nakamura. The eigen distribution of an AND-OR
tree under directional algorithms. IAENG Int. J. Appl. Math., 42 (2012)
122–128.
http://www.iaeng.org/IJAM/issues_v42/issue_2/IJAM_42_2_07.pdf
[9] Toshio Suzuki and Yoshinao Niida. Equilibrium points of an AND-OR tree:
under constraints on probability. Ann. Pure Appl. Logic , 166 (2015) 1150–
1164.
[10] Michael Tarsi. Optimal search on some game trees. J. ACM, 30 (1983)
389–396.
Partially Independent Control Scheme for Spacecraft
Rendezvous in Near-Circular Orbits
Neng Wan and Weiran Yao
Abstract
Due to the complexity and inconstancy of the space environment, accurate mathematical models for spacecraft rendezvous are difficult to obtain, which consequently complicates the control tasks. In this paper, a linearized time-variant plant model with external perturbations is adopted to approximate the real circumstance. To realize the robust
stability with optimal performance cost, a partially independent control scheme is proposed, which consists of a robust
anti-windup controller for the in-plane motion and a H∞ controller for the out-of-plane motion. Finally, a rendezvous
simulation is given to corroborate the practicality and advantages of the partially independent control scheme over a
coupled control scheme.
Keywords: Spacecraft rendezvous; Near-circular orbits; Partially independent control; Robust control; Anti-windup.
1 Introduction
Widely applied to crew exchange, large-scale assembly, spacecraft maintenance, docking, interception, formation flying
and other astronautic missions involving more than one spacecraft, autonomous spacecraft rendezvous has been regarded as a
crucial operational technology in aerospace engineering. As the autonomous control scheme is a cardinal and decisive issue
that determines the success of the rendezvous, it has been and continues to be an engaging area of study.
Most of the mathematical models employed in investigating spacecraft rendezvous are derived from the two-body
problem. Because of their concise and linearized form, Clohessy-Wiltshire equations (Clohessy and Wiltshire, 1960) were
favored by many researchers, though this model was initially developed to describe the rendezvous in circular orbits. The
models put forward by De Vries (1963) and Tschauner (1967) extended our knowledge to the rendezvous in elliptical orbits;
however, nonlinear terms were involved, which circumscribed their broader implementations in control engineering.
Considering the fact that most of the rendezvous missions were conducted in near-circular orbits with small eccentricities,
researchers began to search for some eclectic models that are linearized and sufficiently precise. A comprehensive survey on
these efforts was given by Carter (1998); nevertheless, all the linearization results introduced in this literature are in terms of
either the true or eccentric anomaly of one spacecraft and require the solution of the Kepler problem, which is time and
computational consuming. A time-explicit dynamical model overcoming this defect was first introduced by Anthony and
Sasaki (1965), and a more recent development on time-explicit models was contributed by Melton (2000).
Robust guaranteed cost control was first raised by Chang and Peng (1972) to optimize preassigned cost function, and
many of the following literatures were carried out based on their works. Petersen and McFarlane (1994) synthesized a state
feedback guaranteed cost controller via a Riccati equation approach. Yu and Chu (1999) designed a guaranteed cost controller
for linear uncertain time-delay systems via a linear matrix inequality (LMI) method. Esfahani and Petersen (2000) solved
the guaranteed cost output feedback control problem in a matrix substitution manner. More recently, Guan and Chen (2004)
and Wu et al. (2011a) investigated the guaranteed cost control methods for time-delay systems. Zhang et al. (2008) studied a
guaranteed cost control scheme for a class of uncertain stochastic nonlinear systems with multiple time delays. Tanaka et al.
(2009) presented a guaranteed cost control for polynomial fuzzy systems via a sum of squares approach.
Neng Wan is with Department of Mathematics and Statistics, University of Minnesota Duluth, Duluth 55811, USA (corresponding author). Email:
[email protected].
Weiran Yao is with School of Astronautics, Harbin Institute of Technology, Harbin 150001, China. Email: [email protected].
Robust H∞ control technique is frequently used in synthesizing guaranteed cost controllers for systems with external
disturbances. This technique was first proposed by Zames (1981). Nonetheless, the focus of the H∞ control problem quickly
shifted from its applications to formulating the solvable control problems due to the lack of an efficient tool to solve the H∞
problem. Time domain approach (Barmish, 1983), frequency domain approach (Francis, 1987) and Riccati equation approach
(Khargonekar et al., 1990) were the three main methods in solving the H∞ control problems before the LMI approach (Boyd,
1994; Chilali and Gahinet, 1996) became widely used. For more recent papers on robust H∞ control, refer to Liu et al. (2011),
Wu et al. (2011b) and references therein.
Optimal spacecraft rendezvous problem has attracted numerous researchers. Some previous works on this topic have
been introduced in Wan et al. (2013). Based on the sliding mode control theory, Ebrahimi et al. (2008) and Zhao et al. (2013)
developed the optimal guidance laws for spacecraft rendezvous. Gao et al. (2009) investigated a multi-objective robust H∞ control scheme for rendezvous in circular orbits. Li et al. (2013b) proposed a sampled-data control technique for rendezvous
via a discontinuous Lyapunov approach. Yang and Gao (2013) synthesized a robust reliable controller for thrust-limited
rendezvous in circular orbits. Gao et al. (2012) studied a robust H∞ control approach for rendezvous in elliptical orbits. Yang
et al. (2012) considered the spacecraft rendezvous with thrust nonlinearity and sampled-data control. Wan et al. (2013) put
forward a robust tracking control method for relative position holding and rendezvous with actuator saturation in near-circular
orbits; and in another paper of Wan et al. (2014), they provided an observer-based control scheme for spacecraft rendezvous.
More recent works on optimal spacecraft rendezvous can be found in Li et al. (2013a), Li et al. (2013c), Sheng et al. (2014)
and Zhou et al. (2014). Nevertheless, to the best of the authors’ knowledge, most of the existing literatures either synthesized
a coupled rendezvous controller that regulated the in-plane and out-of-plane motions jointly or neglected the control task
of out-of-plane motion. Although the in-plane and out-of-plane motions were treated separately by Gao et al. (2011), an
identical control method was applied to two motions. Therefore, up to now, an efficient control scheme which accommodates
the different dynamical and engineering features of the in-plane and the out-of-plane motions has not been proposed yet.
In this paper, a time-explicit linearized model for rendezvous in near-circular orbits is established in a concise form that
facilitates the controller synthesis; non-circularity of the reference orbits and external perturbations are considered to ensure
the accuracy of the plant model. In-plane and out-of-plane motion controllers are synthesized respectively in order to meet
the dynamical properties and requirements of each motion. For the in-plane motion usually driven by high-thrust propellers
with high fuel consumption, a robust anti-windup guaranteed cost controller is synthesized to realize optimal rendezvous
under the constraints of orbital non-circularity and actuator saturation. Moreover, it is well known that the out-of-plane
maneuver or maneuver that changes the orbital inclination consumes much more energy compared with other kinds of orbital
maneuvers (Curtis, 2005); therefore a robust H∞ controller is synthesized to guarantee the robust stability of the out-of-plane
motion, which is usually driven by low-thrust propellers thus very sensitive to the external disturbances. Then the partially
independent controller is obtained by solving two convex optimization problems subject to LMI constraints. At the end of this
paper, a numerical rendezvous simulation is presented to verify the advantages of the partially independent control scheme
over a coupled robust controller.
The remainder of this paper is organized as follows. Section 2 establishes the dynamical models and formulates the
control problems; Section 3 shows the main result of the partially independent control scheme; Section 4 presents a numerical
simulation; and Section 5 draws the conclusion.
Notation. The notations used throughout this paper are defined in this paragraph. k · k2 refers to the Euclidean vector norm.
diag(· · · ) stands for a block-diagonal matrix. In symmetric block matrices or complex matrix expressions, an asterisk (∗) is
used to represent a term that is induced by symmetry. For a matrix A, AT stands for the transpose of A; and sym(A) stands
for A + AT when A is a square matrix. For a real symmetric matrix B, the notation B > 0 (B < 0) is used to denote its
positive- (negative-) definiteness. I and 0 respectively denote the identity matrix and zero matrix with compatible dimension.
If the dimensions of matrices are not explicitly stated, they are assumed to be compatible for algebraic operation.
2 Dynamical Model and Problem Formulation
In this section, dynamical models for the in-plane and out-of-plane motions are established, and the control problems are
formulated with the consideration of the different dynamical features and engineering demands of each motion.
Suppose that a target vehicle is moving on a near-circular orbit with a chase vehicle nearby. Both of the spacecrafts
are only influenced by a central gravitational source, and the target vehicle does not maneuver during the rendezvous. A
relative Cartesian coordinate system adopted to describe the relative motion between the spacecrafts is defined in Figure 1.
The system’s origin is fixed at the centroid of the target vehicle. The x-axis is parallel to the vector r from the Earth’s centroid
to the target’s centroid; rc is the vector from the Earth’s centroid to the chaser’s centroid. The z-axis is aligned with the target
orbit’s angular momentum vector, and the y-axis completes a right-handed coordinate system.
Figure 1: Relative Cartesian coordinate system for spacecraft rendezvous.
Some other important assumptions employed in this paper are also presumed here in case of ambiguity.
Assumption 1. The propulsions of the chase vehicle are continuous and independent along each axis defined in Figure 1.
Assumption 2. The initial out-of-plane distance and velocity between the chase and target spacecrafts are zeros.
Assumption 3. Only the disturbance along the z-axis is included in the plant model, i.e., external perturbations along the
orbital plane are neglected in this paper.
Remark 1. As redundancy is a fundamental technology for spacecraft system, independent propulsions can be realized with a
proper actuator allocation; therefore Assumption 1 is frequently taken in the existing literatures, such as Ebrahimi et al. (2008),
Gao et al. (2009) and Zhou et al. (2014). Since out-of-plane maneuver that changes orbital inclination is fuel consuming (for
example, when both the initial and terminal orbits are circular, the velocity increase ∆v required for an inclination change
∆i is ∆v = 2v sin(∆i/2), where v is the orbital velocity, which is of large magnitude), the relative distance and velocity along the z-axis are often eliminated by the launch vehicle before the close-range rendezvous, the phase we mainly investigate in this paper; therefore, Assumption 2 is reasonable. As it was mentioned above, for the out-of-plane motion, due to its dynamical
and engineering properties, the fuel consumption and stability are more sensitive to the external perturbations compared with
the other motions along the orbital plane; therefore, it is reasonable for us to conduct a special investigation in the crucial one
while omitting the trivial ones, which is the main purpose of Assumption 3.
2.1 Relative Motion Model
Define the state vector as x(t) = [x, y, z, ẋ, ẏ, ż]T , which contains the relative distances and velocities along each axis;
and define the control input vector as u(t) = [ fx , fy , fz ]T , where fi for i = x, y, z are the control forces acting on the chase
vehicle along each axis. Relative motion models for spacecraft rendezvous in all types of conic orbits can be uniformly
expressed in a matrix form as
ẋ(t) = A_n x(t) + B u(t) ,    (1)

where

A_n = [ 0, 0, 0, 1, 0, 0 ;
        0, 0, 0, 0, 1, 0 ;
        0, 0, 0, 0, 0, 1 ;
        2µ/r³ + ω², ω̇, 0, 0, 2ω, 0 ;
        −ω̇, −µ/r³ + ω², 0, −2ω, 0, 0 ;
        0, 0, −µ/r³, 0, 0, 0 ] ,

B = (1/m) [ 0, 0, 0 ;
            0, 0, 0 ;
            0, 0, 0 ;
            1, 0, 0 ;
            0, 1, 0 ;
            0, 0, 1 ] .
µ is the gravitational parameter; r is the radius of the reference orbit; ω and ω̇ are the angular rate and angular acceleration
of the target vehicle; and m is the mass of the chase vehicle. As can be seen in (1), nonlinear terms exist in system matrix An ,
which makes the controller synthesis difficult. Therefore, a further linearization on (1) is necessary, and a lemma known as
generalized Lagrange’s expansion theorem is introduced here before the linearization procedures.
Lemma 1 (Battin, 1999). Let y be a function of x in terms of a parameter α by
y = x + αφ(y) .    (2)
Then for sufficiently small α, any function F(y) can be expanded as a power series in α,
F(y) = F(x) + ∑_{n=1}^{∞} (αⁿ/n!) d^{n−1}/dx^{n−1} [ φ(x)ⁿ dF(x)/dx ] .    (3)
With equation
r = a(1 − e cos E) ,    (4)

where a and e denote the semimajor axis and the eccentricity of the reference orbit and E denotes the eccentric anomaly of the target vehicle, the nonlinear terms in the system matrix A_n can be rewritten as functions of E:
µ/r³ = n² (a/r)³ = n² ( 1/(1 − e cos E) )³ ,    (5a)

ω = h/r² = n ( 1/(1 − e cos E) )² ,    (5b)

ω² = h²/r⁴ = n² ( 1/(1 − e cos E) )⁴ ,    (5c)

ω̇ = −2h ṙ/r³ = −2n² e sin E/(1 − e cos E)⁴ ,    (5d)
where h is the angular momentum of the reference orbit. Moreover, according to Lemma 1 and Kepler’s time equation
E = M + e sin E ,    (6)

where M = n(t − t_p) and n = √(µ/a³) are the mean anomaly and the mean motion of the target vehicle respectively, and t_p is the time of periapsis passage; when the eccentricity e is sufficiently small, any function F(E) can be expanded as a power series in e. Therefore, equations (5a–d) can be expanded as
µ/r³ = n² { 1/(1 − e cos M)³ − 3e² sin²M/(1 − e cos M)⁴ + (e²/2) [ 12e² sin⁴M/(1 − e cos M)⁵ − 9e cos M sin²M/(1 − e cos M)⁴ ] + ··· } ,    (7a)

ω = n { 1/(1 − e cos M)² − 2e² sin²M/(1 − e cos M)³ + (e²/2) [ 6e² sin⁴M/(1 − e cos M)⁴ − 6e cos M sin²M/(1 − e cos M)³ ] + ··· } ,    (7b)

ω² = n² { 1/(1 − e cos M)⁴ − 4e² sin²M/(1 − e cos M)⁵ + (e²/2) [ 20e² sin⁴M/(1 − e cos M)⁶ − 12e cos M sin²M/(1 − e cos M)⁵ ] + ··· } ,    (7c)

ω̇ = −2n² { e sin M/(1 − e cos M)⁴ + e² sin M [ cos M/(1 − e cos M)⁴ − 4e sin²M/(1 − e cos M)⁵ ] + ··· } .    (7d)
Computing the Taylor series expansions of (7a-d) around point e = 0, we have
µ/r³ = n² [ 1 + 3e cos M + (e²/2)(9 cos 2M + 3) + (e³/8)(53 cos 3M + 27 cos M) + O(e⁴) ] ,    (8a)

ω = n [ 1 + 2e cos M + (e²/2)(5 cos 2M + 1) + (e³/4)(13 cos 3M + 3 cos M) + O(e⁴) ] ,    (8b)

ω² = n² [ 1 + 4e cos M + e²(7 cos 2M + 3) + (e³/2)(23 cos 3M + 17 cos M) + O(e⁴) ] ,    (8c)

ω̇ = −2n² [ e sin M + (5e²/2) sin 2M + e³( (23/8) sin 3M + 4 cos 2M sin M + (19/8) sin M ) + O(e⁴) ] .    (8d)
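As a quick numerical check (not from the paper), the order-e truncations of (8a)–(8b), which are the terms kept in the linearized model below, can be compared against the exact values obtained by solving Kepler's equation (6). The sketch below assumes NumPy; the eccentricity follows the example in Section 4 (e = 0.05) and n is normalized to 1 for readability.

```python
# Compare the order-e truncations of (8a)-(8b) with the exact expressions (5a)-(5b).
import numpy as np

e, n = 0.05, 1.0

def eccentric_anomaly(M, ecc, iters=20):
    E = M.copy()
    for _ in range(iters):                      # Newton iteration for E - ecc*sin(E) = M
        E -= (E - ecc*np.sin(E) - M) / (1.0 - ecc*np.cos(E))
    return E

M = np.linspace(0.0, 2*np.pi, 721)
E = eccentric_anomaly(M, e)

mu_r3_exact = n**2 / (1.0 - e*np.cos(E))**3     # (5a)
omega_exact = n / (1.0 - e*np.cos(E))**2        # (5b)
mu_r3_trunc = n**2 * (1.0 + 3*e*np.cos(M))      # (8a) truncated to order e
omega_trunc = n * (1.0 + 2*e*np.cos(M))         # (8b) truncated to order e

print("max relative error, mu/r^3:", np.max(np.abs(mu_r3_trunc - mu_r3_exact) / mu_r3_exact))
print("max relative error, omega :", np.max(np.abs(omega_trunc - omega_exact) / omega_exact))
```

For e = 0.05 the relative error of the first-order truncation stays at the level of e², which motivates treating the neglected terms as a bounded uncertainty in the sequel.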
Truncating the expansions (8a-d) to order e and substituting the results into (1), the linearized relative motion model becomes
ẋ(t) = (A + ∆A) x(t) + B u(t) ,    (9)

where

A = [ 0, 0, 0, 1, 0, 0 ;
      0, 0, 0, 0, 1, 0 ;
      0, 0, 0, 0, 0, 1 ;
      3n², 0, 0, 0, 2n, 0 ;
      0, 0, 0, −2n, 0, 0 ;
      0, 0, −n², 0, 0, 0 ] ,

B = (1/m) [ 0, 0, 0 ;
            0, 0, 0 ;
            0, 0, 0 ;
            1, 0, 0 ;
            0, 1, 0 ;
            0, 0, 1 ] ,

∆A = [ 0, 0, 0, 0, 0, 0 ;
       0, 0, 0, 0, 0, 0 ;
       0, 0, 0, 0, 0, 0 ;
       10en² cos M, −2en² sin M, 0, 0, 4en cos M, 0 ;
       2en² sin M, en² cos M, 0, −4en cos M, 0, 0 ;
       0, 0, −3en² cos M, 0, 0, 0 ] .
The time-variant and norm-bounded matrix ∆A is defined as the non-circularity matrix, which contains the shape information
of the reference orbit. Hereby, we have finished the linearization procedures.
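For reference, the matrices of model (9) are easy to assemble numerically. The following is a minimal sketch (not from the paper), using NumPy; the symbols n, e, m and M follow the notation above.

```python
# Build A, Delta_A(M) and B of the linearized relative-motion model (9).
import numpy as np

def relative_motion_matrices(n, e, m, M):
    """Return A, dA(M), B of model (9); M = n*(t - t_p) is the mean anomaly."""
    A = np.array([[0, 0, 0, 1, 0, 0],
                  [0, 0, 0, 0, 1, 0],
                  [0, 0, 0, 0, 0, 1],
                  [3*n**2, 0, 0, 0, 2*n, 0],
                  [0, 0, 0, -2*n, 0, 0],
                  [0, 0, -n**2, 0, 0, 0]], dtype=float)
    cM, sM = np.cos(M), np.sin(M)
    dA = np.zeros((6, 6))
    dA[3, :] = [10*e*n**2*cM, -2*e*n**2*sM, 0, 0, 4*e*n*cM, 0]
    dA[4, :] = [2*e*n**2*sM, e*n**2*cM, 0, -4*e*n*cM, 0, 0]
    dA[5, 2] = -3*e*n**2*cM
    B = np.zeros((6, 3))
    B[3:, :] = np.eye(3) / m                 # thrust enters the velocity states only
    return A, dA, B

# Example with the parameters used later in Section 4: n = 1.059e-3 rad/s, e = 0.05, m = 500 kg.
A, dA, B = relative_motion_matrices(1.059e-3, 0.05, 500.0, M=0.0)
```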
Remark 2. Compared with C-W equations (Clohessy and Wiltshire, 1960), the non-circularity matrix ∆A makes the
model (9) more accurate and practical for engineering applications, while compared with the nonlinear model (1), the
linearized representation of (9) makes the controller synthesis easier.
Remark 3. Although the non-circularity matrix ∆A in model (9) is exactly known, it satisfies the matched condition usually
employed to describe the unknown uncertainty, ∆A = DF(t)E (Khargonekar et al., 1990), which brings some convenience
to controller synthesis. Therefore, this kind of time-variant matrices are sometimes treated as unknown uncertainty in some
literatures, such as Yang and Gao (2013), Wang et al. (2014) and Wan et al. (2014).
In order to construct a partially independent control scheme with the in-plane and out-of-plane controllers synthesized
separately, we will decompose model (9) and formulate the control problems with regard to each plane respectively in the rest
of this section.
In-Plane Motion Model
The state vector of in-plane motion is defined as p(t) = [x, y, ẋ, ẏ]T , and the control input vector is denoted as u p (t) =
[ fx , fy ]T . Then according to (9), the mathematical model of in-plane motion can be extracted as
ṗ(t) = (A_p + ∆A_p) p(t) + B_p u_p(t) ,    (10)

where

A_p = [ 0, 0, 1, 0 ;
        0, 0, 0, 1 ;
        3n², 0, 0, 2n ;
        0, 0, −2n, 0 ] ,

∆A_p = [ 0, 0, 0, 0 ;
         0, 0, 0, 0 ;
         10en² cos M, −2en² sin M, 0, 4en cos M ;
         2en² sin M, en² cos M, −4en cos M, 0 ] ,

B_p = (1/m) [ 0, 0 ;
              0, 0 ;
              1, 0 ;
              0, 1 ] .
The norm-bounded matrix ∆A p can be factorized as
∆ A p = E p1 Λ p E p2 ,
(11)
where E p1 , E p2 and Λ p are matrices with proper dimensions and satisfy ΛTp Λ p < I.
Out-of-Plane Motion Model
The state vector of out-of-plane motion is defined as q(t) = [z, ż]T , and the control input is denoted as uq (t) = fz .
According to Assumption 3, the external disturbance w_q(t) should be incorporated into the out-of-plane motion model extracted from (9). Then the model can be expressed as

q̇(t) = (A_q + ∆A_q) q(t) + B_q [u_q(t) + w_q(t)] ,    (12)

where

A_q = [ 0, 1 ;  −n², 0 ] ,    ∆A_q = [ 0, 0 ;  −3en² cos M, 0 ] ,    B_q = (1/m) [ 0 ;  1 ] .
The norm-bounded matrix ∆Aq can be factorized as
∆ Aq = Eq1 Λq Eq2 ,
(13)
where Eq1 , Eq2 and Λq are matrices with proper dimensions and satisfy ΛTq Λq < I.
Remark 4. From equations (10) and (12), it can be seen that the motions along the orbital plane are coupled, which means a
coupled controller should be employed, while the motion along the z-axis can be governed by an independent controller. That
is the reason why the authors name this method partially independent control scheme.
2.2 Problem Formulation
Robust stability, bounded propulsions and optimal cost function are the three main objectives we will consider when
designing the partially independent control scheme. With these requirements, the control problems of the in-plane and out-of-plane motions will be formulated successively as follows.
Control Problem for In-Plane Motion
In order to assess the fuel and time consumptions of in-plane motion within a performance index, the quadratic cost
function of in-plane motion is defined as
J_p = ∫₀^∞ [ pᵀ(t) Q_p p(t) + u_pᵀ(t) R_p u_p(t) ] dt ,    (14)
where the positive symmetric matrix R p ∈ R2×2 is related to the fuel consumption; and the positive symmetric matrix Q p ∈
R4×4 is related to the state convergence rate and the smoothness of trajectory (Yang and Gao, 2011). With two auxiliary
matrices, U px = [1, 0]T [1, 0] and U py = [0, 1]T [0, 1], thrust constraints along the x- and y-axis can be formulated as
| f_i | = ‖U_pi u(t)‖₂ ≤ u_pi,max ,   (i = x, y) ,    (15)
where u pi,max are the maximum control forces that can be generated by the propellers along i-axis. With the motion model
(10) and the requirements presented at the preliminary of Section 2.2, the control task of in-plane motion can be described as:
design an anti-windup robust guaranteed cost controller such that
(i) In-plane motion system (10) is asymptotically stable at p(t) = 0, i.e., the chase vehicle can eventually rendezvous with
the target vehicle;
(ii) Quadratic cost function (14) is minimal, i.e., an optimal compromise between the fuel consumption and the state
convergence rate shall be reached;
(iii) Control forces along the x- and y-axis should satisfy the saturation constraints (15).
Control Problem for Out-of-Plane Motion
In order to evaluate the fuel and time consumptions of out-of-plane motion within a performance index, the quadratic
cost function for out-of-plane motion is defined as
J_q = ∫₀^∞ [ qᵀ(t) Q_q q(t) + u_qᵀ(t) R_q u_q(t) ] dt ,    (16)
where Qq and Rq are the state weighting matrix and control weighting scale, which have the same functions as matrices Q p
and R p introduced in (14). When external perturbation wq (t) is considered in (12), to keep the chase vehicle from deviating
from the orbital plane, the capability of actuator uq,max must be greater than the largest perturbation force wq,max . Moreover,
to attenuate or to cancel the perturbation, out-of-plane propulsion uq (t) should follow wq (t) exactly; therefore, additional
consideration of actuator saturation along the z-axis is unnecessary. With the motion model (12) and the requirements
illustrated above, the control task of out-of-plane motion can be summarized as: design a robust H∞ controller such that
(iv) Out-of-plane motion system (12) is robustly stable at q(t) = 0, i.e., the chase vehicle can be stabilized on the reference
orbital plane in the presence of non-circularity ∆Aq and external perturbation wq (t);
(v) Quadratic cost function (16) is minimal, i.e., an optimal compromise between the fuel consumption and the state
convergence rate shall be realized subject to the external perturbation wq (t).
3 Partially Independent Control Scheme
In this section, an anti-windup robust guaranteed cost controller and a robust H∞ controller will be synthesized
successively to construct the partially independent control scheme for spacecraft rendezvous. Firstly, a lemma that will be
employed in the subsequent derivation is introduced here.
Lemma 2 (Khargonekar et al., 1990). Given matrices Y = Y T , D and E of appropriate dimensions,
Y + D F E + Eᵀ Fᵀ Dᵀ < 0 ,    (17)

for all F satisfying Fᵀ F ≤ I, if and only if there exists a scalar ε > 0 such that

Y + ε D Dᵀ + ε⁻¹ Eᵀ E < 0 .    (18)

3.1 In-Plane Motion Controller
Consider the following state feedback control law
u p (t) = −K p p(t) ,
(19)
where K p ∈ R2×4 is the state feedback gain matrix of in-plane motion controller. Substituting equation (19) into the plant
model (10), the closed-loop model for in-plane motion is
ṗ(t) = (A p + ∆ A p − B p K p ) p(t) .
(20)
Sufficient condition for the existence of a thrust-limited robust guaranteed cost controller is described in Theorem 1.
Theorem 1. Consider the closed-loop system (20) with the state feedback control law in (19). For a given initial state vector
p(0), if there exist a positive symmetric matrix X p ∈ R4×4 , a matrix Y p ∈ R2×4 , positive scalars ε p and ρ satisfying
[ sym(A_p X_p − B_p Y_p) + ε_p E_p1 E_p1ᵀ ,  X_p E_p2ᵀ ,  Y_pᵀ ,  X_p ;
  * ,  −ε_p I ,  0 ,  0 ;
  * ,  * ,  −R_p⁻¹ ,  0 ;
  * ,  * ,  * ,  −Q_p⁻¹ ]  < 0 ,    (21)

[ −ρ⁻¹ ,  ρ⁻¹ pᵀ(0) ;
  * ,  −X_p ]  < 0 ,    (22)

[ −ρ⁻¹ I ,  U_pi Y_p ;
  * ,  −u_pi,max² X_p ]  < 0 ,    (23)
then there exists an in-plane motion controller such that requirements (i), (ii) and (iii) are satisfied, and positive scalar ρ is an
upper bound of the quadratic cost function (14).
Proof. Consider the Lyapunov function Vp (t) = pT (t)P p p(t), where P p ∈ R4×4 is a positive symmetric matrix.
Substituting (20) into the derivative of Vp (t), we have
V̇p (t) = sym pT (t)P p (A p + ∆ A p − B p K p ) p(t) .
(24)
In order to optimize the cost function (14) and guarantee the asymptotic stability of in-plane motion, let inequalities (25) hold
V̇_p(t) < −[ pᵀ(t) Q_p p(t) + u_pᵀ(t) R_p u_p(t) ] < 0 .    (25)
Integrating (25) from 0 to ∞ and noticing that p(t) → 0 as t → ∞, we get
0 < J_p = ∫₀^∞ [ pᵀ(t) Q_p p(t) + u_pᵀ(t) R_p u_p(t) ] dt ≤ V_p(0) .    (26)
From (26), we know that when inequalities (25) hold, Vp (0) = pT (0)P p p(0) will be an upper bound of the quadratic cost
function J p . Substituting (11), (19) and (24) into (25) yields
Ψ p + P p E p1 Λ p E p2 + ETp2 ΛTp (P p E p1 )T < 0 ,
(27)
where
Ψ p = sym [P p (A p − B p K p )] + Q p + K Tp R p K p .
Since Ψ p is a symmetric matrix, according to Lemma 2 and (11), there exists a positive scalar ε p ensuring (27) by
Ψ p + ε p P p E p1 (P p E p1 )T + ε p−1 ETp2 E p2 < 0 .
By Schur complement, inequality (28) can be rewritten in a matrix form as
"
#
Π11 Π12
<0,
∗
Π22
(28)
(29)
where
Π11 = sym [P p (A p − B p K p )] + ε p P p E p1 ETp1 PTp ,
h
i
Π12 = ETp2 K Tp I ,
−1
Π22 = diag −ε I, −R−1
.
p , −Q p
−1
With the variable substitutions, X p = P−1
p and Y p = K p P p , pre- and post-multiply (29) with diag(X p , I), and then (21) in
Theorem 1 is obtained. To minimize Vp (0), an upper bound of J p , a positive scalar ρ is introduced and meets
Vp (0) = pT (0)P p p(0) ≤ ρ .
By Schur complement, inequality (30) is equivalent to
"
ρ
pT (0)
∗
−P−1
p
(30)
#
<0.
(31)
Pre- and post-multiplying (31) with diag(ρ −1 , I), the LMI constraint (22) in Theorem .1 is obtained. LMIs (21) and (22)
have fulfilled the requirements (i) and (ii). In order to meet the requirement (iii), squaring both sides of (15) and dividing each
side by u2pi,max , then there is
T
u−2
pi,max [U pi K p p(t)] U pi K p p(t) ≤ 1 .
(32)
Dividing both sides of (30) by ρ and considering V̇p (t) < 0, we have
ρ −1Vp (t) < ρ −1Vp (0) ≤ 1 .
(33)
T
−1
u−2
Pp .
pi,max [U pi K p ] U pi K p < ρ
(34)
Then we can guarantee the inequality (32) by
By Schur complement, inequality (34) can be rewritten as
"
−ρ −1 I
∗
U pi K p
−u2pi,max P p
#
<0.
(35)
Pre- and post-multiplying (35) with diag(I, X p ), the LMI constraint (23) in Theorem 1 is obtained. This completes the
proof.
9
It can be inferred from (30) that the quadratic cost function J p will be optimal if the positive scalar ρ is minimized.
Therefore, another positive scalar σ is introduced and meets σ > ρ, which is equivalent to
"
#
−σ
1
<0.
1 −ρ −1
(36)
Then combining Theorem 1 and (36), the thrust-limited robust guaranteed cost controller for in-plane motion with initial state
p(0) can be obtained by solving the following convex optimization problem
min_{ε_p, ρ⁻¹, X_p, Y_p}  σ ,    (37)
s.t. (21), (22), (23) and (36).

The state feedback gain matrix K_p can be solved by K_p = Y_p X_p⁻¹.
Remark 5. Since no preassigned parameter is needed in Theorem 1, the motion controller obtained from (37) is less
conservative and therefore more practical than the controllers employed in Gao et al. (2009) and Yang and Gao (2013) when
implemented to spacecraft rendezvous, which can be drawn from the minimum feasible upper bounds of actuators.
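The paper does not provide code, but problem (37) is a standard LMI program and can be set up directly with an SDP modeling tool. The following is a minimal, illustrative sketch (not the authors' implementation) assuming CVXPY with an SDP solver such as SCS is available; the matrices A_p, B_p, E_p1, E_p2 and the numerical data follow Section 4, with Q_p = I and R_p = I as in the example. Solver accuracy and scaling may require tuning.

```python
# Sketch: anti-windup guaranteed cost synthesis, problem (37), via CVXPY (assumed installed).
import numpy as np
import cvxpy as cp

n, e, m = 1.059e-3, 0.05, 500.0
A_p = np.array([[0, 0, 1, 0], [0, 0, 0, 1], [3*n**2, 0, 0, 2*n], [0, 0, -2*n, 0]])
B_p = (1.0/m) * np.array([[0, 0], [0, 0], [1, 0], [0, 1]])
E_p1 = np.array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 2*e, 4*e, 0], [2*e, 0, 0, 4*e]])
E_p2 = np.array([[n**2, 0, 0, 0], [0, n**2, 0, 0], [2.5*n**2, 0, 0, n], [0, 0.25*n**2, -n, 0]])
Q_p, R_p = np.eye(4), np.eye(2)
p0 = np.array([[-5000.0], [5000.0], [5.0], [-5.0]])
u_max = 15.0
U = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]      # U_px, U_py

X = cp.Variable((4, 4), symmetric=True)
Y = cp.Variable((2, 4))
eps = cp.Variable(nonneg=True)
rho_inv = cp.Variable(nonneg=True)
sigma = cp.Variable()

S = A_p @ X - B_p @ Y
lmi21 = cp.bmat([
    [S + S.T + eps*(E_p1 @ E_p1.T), X @ E_p2.T, Y.T, X],
    [E_p2 @ X, -eps*np.eye(4), np.zeros((4, 2)), np.zeros((4, 4))],
    [Y, np.zeros((2, 4)), -np.linalg.inv(R_p), np.zeros((2, 4))],
    [X, np.zeros((4, 4)), np.zeros((4, 2)), -np.linalg.inv(Q_p)],
])
lmi22 = cp.bmat([[cp.reshape(-rho_inv, (1, 1)), rho_inv*p0.T],
                 [rho_inv*p0, -X]])
lmi36 = cp.bmat([[cp.reshape(-sigma, (1, 1)), np.ones((1, 1))],
                 [np.ones((1, 1)), cp.reshape(-rho_inv, (1, 1))]])
cons = [lmi21 << 0, lmi22 << 0, lmi36 << 0, X >> 1e-9*np.eye(4)]
for Ui in U:                                         # thrust constraints (23), i = x, y
    lmi23 = cp.bmat([[-rho_inv*np.eye(2), Ui @ Y],
                     [(Ui @ Y).T, -(u_max**2)*X]])
    cons.append(lmi23 << 0)

prob = cp.Problem(cp.Minimize(sigma), cons)
prob.solve(solver=cp.SCS)
K_p = Y.value @ np.linalg.inv(X.value)               # state-feedback gain, cf. (57)
print("guaranteed-cost bound rho ~", 1.0/rho_inv.value)
print("K_p ~\n", K_p)
```

The same pattern, with constraint (42) in place of (21)–(23) and γ as the objective, gives the out-of-plane problem (53) of the next subsection.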
3.2 Out-of-Plane Motion Controller
Consider the following state feedback control law
uq (t) = −Kq q(t) ,
(38)
where Kq ∈ R1×2 is the state feedback gain matrix of the out-of-plane motion controller. Substituting (38) into the plant
model (12), the closed-loop model for out-of-plane motion is
q̇(t) = (Aq + ∆ Aq − Bq Kq ) q(t) + Bq wq (t) .
(39)
To optimize cost function Jq in the presence of external disturbance wq (t), define a controlled output as
z_q(t) = Q_q^{1/2} q(t) + R_q^{1/2} u_q(t) .    (40)

Then requirement (v) can be fulfilled by minimizing ‖z_q(t)‖₂, which is assumed to be bounded by

‖z_q(t)‖₂ ≤ γ ‖w_q(t)‖₂ ,    (41)
where γ is the H∞ performance. Sufficient condition for the existence of a robust H∞ controller is given in Theorem 2.
Theorem 2. Consider the closed-loop system (39) with the state feedback control law in (38). If there exist a positive
symmetric matrix Xq ∈ R2×2 , a matrix Yq ∈ R1×2 and a positive scalar εq satisfying
[ sym(A_q X_q − B_q Y_q) + ε_q E_q1 E_q1ᵀ ,  B_q ,  X_q E_q2ᵀ ,  0 ,  Y_qᵀ ,  X_q ;
  * ,  −γ² I ,  0 ,  0 ,  0 ,  0 ;
  * ,  * ,  −ε_q I ,  0 ,  0 ,  0 ;
  * ,  * ,  * ,  −ε_q I ,  0 ,  0 ;
  * ,  * ,  * ,  * ,  −R_q⁻¹ ,  0 ;
  * ,  * ,  * ,  * ,  * ,  −Q_q⁻¹ ]  < 0 ,    (42)

then there exists an out-of-plane motion controller such that requirements (iv) and (v) are satisfied.
Proof. Consider the Lyapunov function Vq (t) = qT (t)Pq q(t), where Pq ∈ R2×2 is a positive symmetric matrix.
Substituting (39) into the derivative of V_q(t), there is

V̇_q(t) = [ q(t) ; w_q(t) ]ᵀ [ sym[ P_q (A_q + ∆A_q − B_q K_q) ] ,  P_q B_q ;  * ,  0 ] [ q(t) ; w_q(t) ] .    (43)
Assuming the external disturbance w_q(t) to be 0, the derivative of V_q(t) becomes

V̇_q0(t) = sym[ qᵀ(t) P_q (A_q + ∆A_q − B_q K_q) q(t) ] .    (44)
Squaring both sides of (41), there is

z_qᵀ(t) z_q(t) − γ² w_qᵀ(t) w_q(t) ≤ 0 .    (45)

Integrating (45) from 0 to ∞, we have
∫₀^∞ [ z_qᵀ(t) z_q(t) − γ² w_qᵀ(t) w_q(t) + V̇_q(t) ] dt + V_q(0) − V_q(∞) ≤ 0 .    (46)
According to Assumption 2 of zero-initial condition and the fact Vq (∞) > 0, inequalities (41), (45) and (46) can be guaranteed
by
z_qᵀ(t) z_q(t) − γ² w_qᵀ(t) w_q(t) + V̇_q(t) ≤ 0 .    (47)

Substituting (40) and (43) into (47), we can obtain

[ sym[ P_q (A_q + ∆A_q − B_q K_q) ] + Q_q + K_qᵀ R_q K_q ,  P_q B_q ;  * ,  −γ² I ]  < 0 .    (48)
By Schur complement, inequality (48) can be rewritten as
Θ1 < Θ2 ,
(49)
where
Θ1 = sym [Pq (Aq + ∆Aq − Bq Kq )] ,
Θ2 = −Qq − KqT Rq Kq − γ −2 Pq Bq (Pq Bq )T .
From (49), we can learn that Θ2 < 0; thus Θ1 < 0 and V̇q0 < 0, i.e., inequality (48) guarantees the stabilities of the nominal
model (without disturbance) as well as the perturbed model (12), which fulfills requirement (iv). Substituting (13) into (48),
we have
Ψq + ∆q Φq Eq + ETq ΦTq ∆Tq < 0 ,
(50)
where
Ψq =
"
sym [Pq (Aq − Bq Kq )] + Qq + KqT Rq Kq
∆q =
Pq Eq1
#
0
0
0
,
Φq =
"
Λq
0
#
0
0
#
−γ 2 I
"
∗
"
Pq Bq
,
Eq =
,
#
Eq2
0
0
0
.
Since Ψq is a symmetric matrix, according to Lemma 2 and (13), there exists a positive scalar εq ensuring (50) by
Ψ_q + ε_q ∆_q ∆_qᵀ + ε_q⁻¹ E_qᵀ E_q < 0 .    (51)

By Schur complement, inequality (51) is equivalent to

[ Ω11 ,  Ω12 ;  * ,  Ω22 ]  < 0 ,    (52)
where
Ω11 = [ sym[ P_q (A_q − B_q K_q) ] + ε_q P_q E_q1 E_q1ᵀ P_qᵀ ,  P_q B_q ;  * ,  −γ² I ] ,

Ω12 = [ E_q2ᵀ ,  0 ,  K_qᵀ ,  I ;  0 ,  0 ,  0 ,  0 ] ,

Ω22 = diag( −ε_q I, −ε_q I, −R_q⁻¹, −Q_q⁻¹ ) .
Define the variable substitutions Xq = Pq−1 and Yq = Kq Pq−1 . Pre- and post-multiplying (52) with diag(Xq , I), the LMI
constraint (42) is obtained. This completes the proof.
Remark 6. In the proof of Theorem 2, zero-initial condition has been utilized to synthesize the robust H∞ controller, which
is a reasonable simplification for the engineering problem described in this paper. However, for the situation when the zero-initial condition is not satisfied, some extended robust H∞ control methods can be adopted, which have been discussed by
Khargonekar et al. (1991), Namerikawa et al. (2002), Savkin et al. (2003) and Foo (2006). Nevertheless, due to the implicit
expressions of H∞ performance and more rigorous assumptions, extended robust H∞ controllers are not frequently employed
in the existing literatures.
The robust H∞ controller for out-of-plane motion can be obtained by solving the following convex optimization problem
min_{ε_q, X_q, Y_q}  γ ,    (53)
s.t. (42).
State feedback gain matrix Kq can be determined by Kq = Yq Xq−1 . With the state feedback gain matrices K p and Kq solved
from (37) and (53), a partially independent control scheme for spacecraft rendezvous can be constructed, and we will discuss
this procedure in detail in the next section with an illustrative example.
4 Illustrative Example
In this section, a comparison between the partially independent and coupled control schemes will be conducted to
illustrate the advantages of the former. All the simulation results were obtained from a two-body model:
r̈ + (µ/r³) r = 0 ,    (54a)

r̈_c + (µ/r_c³) r_c = (u + w)/m ,    (54b)
where the position vectors r and rc have been defined in Figure 1 and satisfy rc − r = x(t); m and u are the mass and control
vector of the chase vehicle; and w is the disturbance, which consists of long and short period perturbations along the z−axis.
Consider a rendezvous scenario as follows. A target vehicle is in a low earth orbit (LEO) with eccentricity e = 0.05 and
semimajor axis a = 7082.253 km; then we can figure out that the mean motion of the target vehicle is n = 1.059 × 10−3
rad/s, i.e., the period of the reference orbit is T = 5931.53 s; the initial state vector is x(0) = [−5000, 5000, 0, 5, −5, 0], and
the mass of the chase vehicle is m = 500 kg. When solving the convex problem (37), the minimum feasible upper bound of
the in-plane propulsion is 6.8 N; nevertheless, considering the discrepancy between the plant models (10, 12) and simulation
model (54a-b), the upper bounds of the in-plane propulsions are set u px,max = u py,max = 15 N to guarantee the robustness of
the controllers. In (12), consider an extreme unknown out-of-plane disturbance that may rarely exist in reality
w_q(t) = 4.3 sin(1.059 × 10⁻³ t) + 0.5 sin(0.1059 t) .    (55)
where the first term represents long period perturbation caused by the nonhomogeneity of central planet and gravitational
forces from other celestial bodies, etc; while the second term represents short period perturbation caused by the solar wind,
atmospheric drag, etc. Therefore, in this example, the upper bound of the out-of-plane propulsion is set uq,max = 5 N, which is
greater than the maximum disturbance wq,max ≈ 4.8 N. All the weighting matrices and scalar, Q p , Qq , R p and Rq , are assigned
to be units. With these parameters, a partially independent controller and a coupled controller for comparison are to be solved
in the following sections.
4.1 Partially Independent Controller
The partially independent control scheme can be synthesized by solving (37) and (53). For in-plane motion controller
(37), the initial state vector is p(0) = [−5000, 5000, 5, −5]T , and the matrices E p1 , E p2 and Λ p in (11) are assigned as
follows:
E_p1 = [ 0, 0, 0, 0 ;  0, 0, 0, 0 ;  0, 2e, 4e, 0 ;  2e, 0, 0, 4e ] ,

E_p2 = [ n², 0, 0, 0 ;  0, n², 0, 0 ;  2.5n², 0, 0, n ;  0, 0.25n², −n, 0 ] ,

Λ_p = diag( sin M, − sin M, cos M, cos M ) ,    (56)
where the mean anomaly M = nt. Then solving (37), the state feedback gain matrix for in-plane motion controller is obtained
K_p = [ K_p,11   K_p,12 ] = [ 0.0024, −0.0013, 0.7535, 0.0593 ;  0.0015, 0.0010, 0.2952, 1.3332 ] ,    (57)
where K p,11 and K p,12 ∈ R2×2 . For the out-of-plane motion controller (53), the initial state vector is q(0) = [0, 0]T , and the
matrices E_q1, E_q2 and Λ_q in (13) are assigned as follows:

E_q1 = [ 0, 0 ;  6e, 0 ] ,    E_q2 = [ n², 0 ;  0, 0 ] ,    Λ_q = [ −0.5 cos M, 0 ;  0, 0 ] .    (58)
Solving (53), the optimal H∞ performance is γ = 1.000778383, and the state feedback gain matrix for out-of-plane motion
controller is
K_q = [ K_q,11   K_q,12 ] = [ 196.8030   5.8353 × 10⁴ ] .    (59)
Combining (57) and (59), the state feedback gain matrix for the partially independent controller is

K_pic = [ K_p,11, 0_2×1, K_p,12, 0_2×1 ;  0_1×2, K_q,11, 0_1×2, K_q,12 ] ,    (60)
where K pic ∈ R3×6 , and the control vector in (54b) is generated by u pic (t) = −K pic x(t).
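The numbers above can be checked with a short script. The following sketch (not from the paper, and assuming the row ordering of the gain matrices as printed in (57) and (59)) verifies that the factorization (11) with the matrices of (56) reproduces ∆A_p, and that the nominal coupled model (9) is Hurwitz under K_pic.

```python
# Sanity checks on (56)-(60) with NumPy.
import numpy as np

n, e, m = 1.059e-3, 0.05, 500.0
Kp = np.array([[0.0024, -0.0013, 0.7535, 0.0593],
               [0.0015,  0.0010, 0.2952, 1.3332]])
Kq = np.array([[196.8030, 5.8353e4]])

# Factorization check of (11) at a sample mean anomaly M.
M = 1.0
Ep1 = np.array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 2*e, 4*e, 0], [2*e, 0, 0, 4*e]])
Ep2 = np.array([[n**2, 0, 0, 0], [0, n**2, 0, 0], [2.5*n**2, 0, 0, n], [0, 0.25*n**2, -n, 0]])
Lp = np.diag([np.sin(M), -np.sin(M), np.cos(M), np.cos(M)])
dAp = np.array([[0, 0, 0, 0],
                [0, 0, 0, 0],
                [10*e*n**2*np.cos(M), -2*e*n**2*np.sin(M), 0, 4*e*n*np.cos(M)],
                [2*e*n**2*np.sin(M), e*n**2*np.cos(M), -4*e*n*np.cos(M), 0]])
print("||Ep1*Lp*Ep2 - dAp|| =", np.linalg.norm(Ep1 @ Lp @ Ep2 - dAp))   # ~0 expected

# Closed-loop eigenvalues of the nominal model (9) under K_pic of (60).
A = np.array([[0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1],
              [3*n**2, 0, 0, 0, 2*n, 0], [0, 0, 0, -2*n, 0, 0], [0, 0, -n**2, 0, 0, 0]])
B = np.zeros((6, 3)); B[3:, :] = np.eye(3) / m
Kpic = np.zeros((3, 6))
Kpic[0:2, [0, 1]] = Kp[:, 0:2]     # K_p,11
Kpic[0:2, [3, 4]] = Kp[:, 2:4]     # K_p,12
Kpic[2, 2], Kpic[2, 5] = Kq[0, 0], Kq[0, 1]
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ Kpic))      # all Re < 0 expected
```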
4.2 Coupled Controller
In Section 3, the in-plane motion controllers for x− and y−axis were synthesized jointly, while the out-of-plane motion
controller was designed independently. To verify the advantages of this scheme in robustness, we will introduce a coupled
rendezvous controller in this section for comparison. The coupled control scheme synthesizes x−, y− and z−axis controllers
together and meets the requirements similar as (i), (ii) and (iii); therefore, the coupled controller can be attained by solving a
convex optimization problem similar as Theorem 1. For brevity, the result of coupled control scheme will be given directly,
while the detailed derivations of it will not be included in this paper. However, some similar procedures for synthesizing a
coupled controller can be found in Yang and Gao (2013), Sheng et al. (2014) and Wan et al. (2013, 2014). With the same
parameters assigned in previous sections, the control vector in (54b) for coupled control scheme is generated by ucc (t) =
−Kcc x(t), where the state feedback gain matrix Kcc is
K_cc = [ 0.0024, −0.0014, 2.1542 × 10⁻⁴, 0.8445, 0.0467, 0.1198 ;
         0.0017, 7.487 × 10⁻⁴, −4.3822 × 10⁻⁴, 0.5689, 1.3525, 3.5306 × 10⁻⁴ ;
         −2.0446 × 10⁻⁴, 5.2548 × 10⁻⁴, 0.1901, 0.1792, 0.0234, 0.7065 ] .    (61)

4.3 Simulation Results
All the simulation data are collected from the two-body model (54a-b), which is closer to the practical circumstance than the plant models (9), (10) and (12). The simulation results of the in-plane and out-of-plane motions will be
shown successively as follows.
In-Plane Motion
The relative in-plane trajectories of the chase vehicles with different control schemes are depicted in Figure 2. The
in-plane distances and control propulsions of the chase vehicle with partially independent control scheme are illustrated in
Figure 3 and Figure 4 respectively.
Figure 2: In-plane rendezvous trajectories in the first 5000 s (x–y plane, in m; chaser, target and the trajectories under u_pic and u_cc).
Figure 3: In-plane relative distances between the two spacecraft in the first 5000 s (relative distances along the x- and y-axes, in m).
Figure 4: In-plane control propulsions of the chase vehicle in the first 5000 s (control thrusts along the x- and y-axes, in N).
Remark 7. Figure 2 and Figure 3 show that the partially independent controller u pic (t) fulfilled the requirement (i), asymptotic
stability at p(t) = 0, while the coupled controller ucc (t) failed in finishing the rendezvous, which is one of the advantages of
u pic (t) over ucc (t). Figure 4 shows that the in-plane control propulsions of the chase vehicle with u pic (t) are restricted below
the upper bounds u px,max = u py,max = 15 N, which fulfilled the requirement (iii).
Out-of-Plane Motion
The out-of-plane distances and control propulsions of the chase vehicles with different control schemes are illustrated in
Figure 5 and Figure 6. Figure 7 depicts the overall performance costs of the rendezvouses with different schemes.
(a)
(b)
0.025
5000
Relative distance along z-axis (m)
Relative distance along z-axis (m)
0.02
0.015
0.01
0.005
0
-0.005
-0.01
-0.015
-0.02
-0.025
4000
3000
2000
1000
0
-1000
-2000
-3000
-4000
0
1000
2000
3000
4000
5000
6000
7000
8000
-5000
9000 10000
0
1000
2000
3000
Time (s)
4000
5000
6000
7000
8000
9000 10000
Time (s)
Figure 5: Relative out-of-plane distance between two spacecrafts in the first 10000 s. (a) Out-of-plane distance with partially
independent controller u pic (t). (b) Out-of-plane distance with coupled controller ucc (t).
Remark 8. From Figure 5, we can conclude that the partially independent controller u pic (t) fulfilled the requirement (iv),
robust stability at q(t) = 0, while the coupled controller ucc (t) failed again. From Figure 6, we can find that u pic (t) tracked
and suppressed the disturbance w(t) well, while the disturbance rejection ability of ucc (t) was very poor; moreover, the
magnitude of the out-of-plane propulsion was bounded and proportional to the magnitude of wq (t), which made our control
method practical for engineering applications. From Figure 7, we can find that although the coupled control scheme optimizes
the overall cost function jointly, when out-of-plane disturbance exists, the overall cost function of the partially independent
control scheme is much lower, which is another advantage of u pic (t) over ucc (t).
Figure 6: Out-of-plane control propulsion of the chase vehicle in the first 10000 s. (a) Out-of-plane control propulsion with the partially independent controller u_pic(t). (b) Out-of-plane control propulsion with the coupled controller u_cc(t).
Figure 7: Overall cost function J_p + J_q in the first 5000 s (log scale), for u_pic and u_cc.
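The out-of-plane behaviour in Figures 5(a) and 6(a) can be reproduced qualitatively from the linearized model alone. The sketch below (not the authors' code; it uses model (12) with the gain of (59) rather than the nonlinear two-body model (54a-b)) integrates the closed-loop out-of-plane dynamics under the disturbance (55), with the thrust saturated at 5 N, using SciPy.

```python
# Rough reproduction of the out-of-plane response under disturbance (55).
import numpy as np
from scipy.integrate import solve_ivp

n, e, m = 1.059e-3, 0.05, 500.0
Kq = np.array([196.8030, 5.8353e4])

def wq(t):                                     # disturbance (55)
    return 4.3*np.sin(1.059e-3*t) + 0.5*np.sin(0.1059*t)

def rhs(t, q):
    M = n*t
    Aq = np.array([[0.0, 1.0],
                   [-n**2 - 3*e*n**2*np.cos(M), 0.0]])   # A_q + Delta_A_q
    uq = float(np.clip(-Kq @ q, -5.0, 5.0))    # out-of-plane thrust, saturated at 5 N
    return Aq @ q + np.array([0.0, (uq + wq(t))/m])

sol = solve_ivp(rhs, (0.0, 10000.0), y0=[0.0, 0.0], max_step=1.0)
print("max |z| (m):", np.max(np.abs(sol.y[0])))                         # stays small, cf. Fig. 5(a)
print("max |u_q| (N):", np.max(np.abs(np.clip(-Kq @ sol.y, -5.0, 5.0))))  # tracks w_q, cf. Fig. 6(a)
```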
5 Conclusions
In sum, this paper has proposed a partially independent control scheme for thrust-limited rendezvous in near-circular
orbits. Based on the two-body problem, a linearized dynamical model for near-circular rendezvous has been established. An
anti-windup robust guaranteed cost controller for the in-plane motion and a robust H∞ controller for the out-of-plane motion have been synthesized to construct
the partially independent control scheme. Finally, a comparative simulation has been employed to verify the advantages of
the partially independent scheme over the coupled one. Due to its robust stability, optimal performance cost and bounded
control propulsion, the partially independent control scheme has a wide range of application in spacecraft rendezvous.
6 Acknowledgment
The authors specially acknowledge the staff and the readers from arXiv.org.
References
Anthony, M. L. and Sasaki, F. T. (1965). “Rendezvous problem for nearly circular orbits.” AIAA J., 3(9), 1666–1673.
Barmish, B. R. (1983). “Stabilization of uncertain systems via linear control.” IEEE T. Automat. Contr., 28(8), 848–850.
Battin, R. H. (1999). An introduction to the mathematics and methods of astrodynamics. AIAA, Ohio.
Boyd, S. P. (1994). Linear matrix inequalities in system and control theory, Vol. 15. SIAM, Philadelphia.
Carter, T. E. (1998). “State transition matrices for terminal rendezvous studies: brief survey and new example.” J. Guid.
Control Dyn., 21(1), 148–155.
Chang, S. S. L. and Peng, T. K. C. (1972). “Adaptive guaranteed cost control of systems with uncertain parameters.” IEEE T.
Automat. Contr., 17(4), 474–483.
Chilali, M. and Gahinet, P. (1996). “H∞ design with pole placement constraints: an LMI approach.” IEEE T. Automat. Contr.,
41(3), 358–367.
Clohessy, W. H. and Wiltshire, R. S. (1960). “Terminal guidance system for satellite rendezvous.” J. Aerosp. Sci., 29, 653–658.
Curtis, H. (2005). Orbital mechanics for engineering students. Butterworth-Heinemann.
De Vries, J. P. (1963). “Elliptic elements in terms of small increments of position and velocity components.” J. Aerosp. Sci.,
1(11), 2626–2629.
Ebrahimi, B., Bahrami, M., and Roshanian, J. (2008). “Optimal sliding-mode guidance with terminal velocity constraint for
fixed-interval propulsive maneuvers.” Acta Astronaut., 62(10), 556–562.
Esfahani, S. H. and Petersen, I. R. (2000). “An LMI approach to the output-feedback guaranteed cost control for uncertain
time-delay systems.” Int. J. Robust Nonlin., 10(3), 157–174.
Foo, Y. K. (2006). “H∞ control with initial conditions.” IEEE T. Circuits Syst., 53(9), 867–871.
Francis, B. A. (1987). A course in H∞ control theory. Springer, Berlin.
Gao, H., Yang, X., and Shi, P. (2009). “Multi-objective robust H∞ control of spacecraft rendezvous.” IEEE T. Contr. Syst. T.,
17(4), 794–802.
Gao, X., Teo, K. L., and Duan, G. R. (2011). “Non-fragile guaranteed cost control for robust spacecraft orbit transfer with
small thrust.” IMA J. Math. Control I., 28(4), 507–524.
Gao, X., Teo, K. L., and Duan, G. R. (2012). “Robust H∞ control of spacecraft rendezvous on elliptical orbit.” J. Franklin I.,
349(8), 2515–2529.
Guan, X. P. and Chen, C. L. (2004). “Delay-dependent guaranteed cost control for t-s fuzzy systems with time delays.” IEEE
T. Fuzzy Syst., 12(2), 236–249.
Khargonekar, P. P., Nagpal, K. M., and Poolla, K. R. (1991). “H∞ control with transients.” SIAM J. Control Optim., 29(6),
1373–1393.
Khargonekar, P. P., Petersen, I. R., and Zhou, K. (1990). “Robust stabilization of uncertain linear system: quadratic stability
and H∞ control theory.” IEEE T. Automat. Contr., 35(3), 356–361.
Li, Z., Liu, M., Karimi, H. R., and Cao, X. (2013a). “Observer-based stabilization of spacecraft rendezvous with variable
sampling and sensor nonlinearity.” Math. Probl. Eng., Article ID 902452, 11 pages.
Li, Z., Liu, M., Karimi, H. R., and Cao, X. (2013b). “Sampled-data control of spacecraft rendezvous with discontinuous
lyapunov approach.” Math. Probl. Eng., Article ID 814271, 10 pages.
Li, Z., Yang, X., and Gao, H. (2013c). “Autonomous impulsive rendezvous for spacecraft under orbital uncertainty and
thruster faults.” J. Franklin I., 350(9), 2455–2473.
Liu, M., You, J., and Ma, X. (2011). “H∞ filtering for sampled-data stochastic systems with limited capacity channel.” Signal
Process., 91(8), 1826–1837.
Melton, R. G. (2000). “Time-explicit representation of relative motion between elliptical orbits.” J. Guid. Control Dyn., 23(4),
604–610.
Namerikawa, T., Fujita, M., and Smith, R. S. (2002). “A generalized H∞ control system design attenuating initial state
uncertainties.” Proceedings of the American Control Conference, 2204–2209.
Petersen, I. R. and McFarlane, D. C. (1994). “Optimal guaranteed cost control and filtering for uncertain linear systems.”
IEEE T. Automat. Contr., 39(3), 1971–1977.
Savkin, A. V., Pathirana, P. N., and Faruqi, F. A. (2003). “Problem of precision missile guidance: LQR and H∞ control
frameworks.” IEEE T. Aero. Elec. Sys, 39(3), 901–910.
Sheng, D., Yang, X., and Karimi, H. R. (2014). “Robust control for autonomous spacecraft evacuation with model uncertainty
and upper bound of performance with constraints.” Math. Probl. Eng., Article ID 589381, 16 pages.
Tanaka, K., Ohtake, H., and Wang, H. O. (2009). “Guaranteed cost control of polynomial fuzzy systems via a sum of squares
approach.” IEEE T. Syst. Man Cy. B, 38(2), 561–567.
Tschauner, J. (1967). “Elliptic orbit rendezvous.” AIAA J., 5(6), 1110–1113.
Wan, N., Liu, M., and Karimi, H. R. (2013). “Robust tracking control for rendezvous in near-circular orbits.” Math. Probl.
Eng., Article ID 726945, 11 pages.
Wan, N., Liu, M., and Karimi, H. R. (2014). “Observer-based robust control for spacecraft rendezvous with thrust saturation.”
Abstr. Appl. Anal., Article ID 710850, 10 pages.
Wang, Q., Zhou, B., and Duan, G. (2014). “Robust gain scheduled control of spacecraft rendezvous system subject to input
saturation.” Proceedings of the Chinese Control Conference, 4204–4209.
Wu, L., Lam, J., and Xiong, J. (2011a). “Robust guaranteed cost control of discrete-time networked control systems.” Math.
Probl. Eng., 32(1), 95–112.
Wu, L., Su, X., and Shi, P. (2011b). “Mixed H2 /H∞ approach to fault detection of discrete linear repetitive processes.” J.
Franklin I., 348(2), 393–414.
Yang, X., Cao, X., and Gao, H. (2012). “Sampled-data control for relative position holding of spacecraft rendezvous with
thrust nonlinearity.” IEEE T. Ind. Electron., 59(2), 1146–1153.
Yang, X. and Gao, H. (2011). “Guaranteed cost output tracking control for autonomous homing phase of spacecraft
rendezvous.” J. Aerospace Eng., 24(4), 478–487.
Yang, X. and Gao, H. (2013). “Robust reliable control for autonomous spacecraft rendezvous with limited-thrust.” Aerosp.
Sci. Technol., 24(1), 161–168.
Yu, L. and Chu, J. (1999). “An LMI approach to guaranteed cost control of linear uncertain time-delay systems.” Automatica,
35(6), 1155–1159.
Zames, G. (1981). “Feedback and optimal sensitivity: model reference transformation multiplicative seminorms, and
approximate inverse.” IEEE T. Automat. Contr., 26(2), 301–320.
Zhang, H., Wang, Y., and Liu, D. (2008). “Delay-dependent guaranteed cost control for uncertain stochastic fuzzy systems
with multiple time delays.” IEEE T. Syst. Man Cy. B, 38(1), 126–140.
Zhao, L., Jia, Y., and Matsuno, F. (2013).
“Adaptive time-varying sliding mode control for autonomous spacecraft
rendezvous.” Proceedings of Decision and Control, 5504–5509.
Zhou, B., Wang, Q., Lin, Z., and Duan, G. (2014). “Gain scheduled control of linear systems subject to actuator saturation
with application to spacecraft rendezvous.” IEEE T. Contr. Syst. T., 22(5), 2031–2038.
Efficient quantum tomography
arXiv:1508.01907v2 [quant-ph] 13 Sep 2015
Ryan O’Donnell∗
John Wright∗
September 15, 2015
Abstract
In the quantum state tomography problem, one wishes to estimate an unknown d-dimensional
mixed quantum state ρ, given few copies. We show that O(d/ǫ) copies suffice to obtain an
estimate ρ̂ that satisfies kρ̂− ρk2F ≤ ǫ (with high probability). An immediate consequence is that
O(rank(ρ)·d/ǫ2 ) ≤ O(d2 /ǫ2 ) copies suffice to obtain an ǫ-accurate estimate in the standard trace
distance. This improves on the best known prior result of O(d3 /ǫ2 ) copies for full tomography,
and even on the best known prior result of O(d2 log(d/ǫ)/ǫ2 ) copies for spectrum estimation.
Our result is the first to show that nontrivial tomography can be obtained using a number of
copies that is just linear in the dimension.
Next, we generalize these results to show that one can perform efficient principal component
analysis on ρ. Our main result is that O(kd/ǫ2 ) copies suffice to output a rank-k approximation
ρ̂ whose trace distance error is at most ǫ more than that of the best rank-k approximator to ρ.
This subsumes our above trace distance tomography result and generalizes it to the case when ρ
is not guaranteed to be of low rank. A key part of the proof is the analogous generalization
of our spectrum-learning results: we show that the largest k eigenvalues of ρ can be estimated
to trace-distance error ǫ using O(k 2 /ǫ2 ) copies. In turn, this result relies on a new coupling
theorem concerning the Robinson–Schensted–Knuth algorithm that should be of independent
combinatorial interest.
1
Introduction
Quantum state tomography refers to the task of estimating an unknown d-dimensional quantum
mixed quantum state, ρ, given the ability to prepare and measure n copies, ρ⊗n . It is of enormous
practical importance for experimental detection of entanglement and the verification of quantum
technologies. For an anthology of recent advances in the area, the reader may consult [BCG13]. As
stated in its introduction,
The bottleneck limiting further progress in estimating the states of [quantum] systems
has shifted from physical controllability to the problem of handling. . . the exponential
scaling of the number of parameters describing quantum many-body states.
Indeed, a system consisting of b qubits has dimension d = 2b and is described by a density matrix
with d2 = 4b complex parameters. For practical experiments with, say, b ≤ 10, it is imperative
to use tomographic methods in which n grows as slowly as possible with d. For 20 years or so,
the best known method used n = O(d4 ) copies to estimate ρ to constant error; just recently this
was improved [KRT14] to n = O(d3 ). Despite the practical importance and mathematical elegance
∗
Department of Computer Science, Carnegie Mellon University. Supported by NSF grants CCF-0747250 and
CCF-1116594. The second-named author is also supported by a Simons Fellowship in Theoretical Computer Science.
{odonnell,jswright}@cs.cmu.edu
of the quantum tomography problem, the optimal dependence of n on d remained “shockingly
unknown” [Har15] as of early 2015.
In this work we analyze known measurements arising from the representation theory of the
symmetric and general linear groups S(n) and GL_d = GL_d(ℂ) — specifically, the "Empirical
Young Diagram (EYD)” measurement considered by [ARS88, KW01], followed by Keyl’s [KW01,
Key06] state estimation measurement based on projection to highest weight vectors. The former
produces a random height-d partition λ ⊢ n according to the Schur–Weyl distribution SWn (α),
which depends only on the spectrum α1 ≥ α2 ≥ · · · ≥ αd of ρ; the latter produces a random
d-dimensional unitary U according to what may be termed the Keyl distribution Kλ (ρ). Writing
λ̄ for (λ₁/n, . . . , λ_d/n), we show the following results:

Theorem 1.1.  E_{λ∼SWⁿ(α)} ‖λ̄ − α‖₂² ≤ d/n.

Theorem 1.2.  E_{λ∼SWⁿ(α), U∼K_λ(ρ)} ‖U diag(λ̄)U† − ρ‖_F² ≤ (4d − 3)/n.
In particular, up to a small constant factor, full tomography is no more expensive than spectrum
estimation. These theorems have the following straightforward consequences:
Corollary 1.3. The spectrum of an unknown rank-r mixed state ρ ∈ ℂ^{d×d} can be estimated to error ǫ in ℓ₂-distance using n = O(r/ǫ²) copies, or to error ǫ in total variation distance using n = O(r²/ǫ²) copies.

Corollary 1.4. An unknown rank-r mixed state ρ ∈ ℂ^{d×d} may be estimated to error ǫ in Frobenius distance using n = O(d/ǫ²) copies, or to error ǫ in trace distance using n = O(rd/ǫ²) copies.

(These bounds are with high probability; confidence 1 − δ may be obtained by increasing the copies by a factor of log(1/δ).)
The previous best result for spectrum estimation [HM02, CM06] used O(r 2 log(r/ǫ)/ǫ) copies
for an ǫ-accurate estimation in KL-divergence, and hence O(r 2 log(r/ǫ)/ǫ2 ) copies for an ǫ-accurate
estimation in total variation distance. The previous best result for tomography is the very recent [KRT14, Theorem 2], which uses n = O(rd/ǫ2 ) for an ǫ-accurate estimation in Frobenius
distance, and hence n = O(r 2 d/ǫ2 ) for trace distance.
As for lower bounds, it follows immediately from [FGLE12, Lemma 5] and Holevo's bound that Ω̃(rd) copies are necessary for tomography with trace-distance error ǫ₀, where ǫ₀ is a universal constant. (Here and throughout, Ω̃(·) hides a factor of log d.) Also, Holevo's bound combined with the existence of 2^{Ω(d)} almost-orthogonal pure states shows that Ω̃(d) copies are necessary for tomography with Frobenius error ǫ₀, even in the rank-1 case. Thus our tomography bounds are optimal up to at most an O(log d) factor when ǫ is a constant. (Conversely, for constant d, it is easy to show that Ω(1/ǫ²) copies are necessary even just for spectrum estimation.) Finally, we remark that Ω̃(d²) is a lower bound for tomography with Frobenius error ǫ = Θ(1/√d); this also matches our O(d/ǫ²) upper bound. This last lower bound follows from Holevo and the existence [Sza82] of 2^{Ω(d²)} normalized rank-d/2 projectors with pairwise Frobenius distance at least Ω(1/√d).
1.1
Principal component analysis

Our next results concern principal component analysis (PCA), in which the goal is to find the best rank-k approximator to a mixed state ρ ∈ ℂ^{d×d}, given 1 ≤ k ≤ d. Our algorithm is identical
to the Keyl measurement from above, except rather than outputting U diag(λ)U † , it outputs
U diag(k) (λ)U † instead, where diag(k) (λ) means diag(λ1 , . . . , λk , 0, . . . , 0). Writing α1 ≥ α2 ≥
. . . ≥ αd for the spectrum of ρ, our main result is:
Theorem 1.5.  E_{λ∼SWⁿ(α), U∼K_λ(ρ)} ‖U diag^{(k)}(λ̄)U† − ρ‖₁ ≤ α_{k+1} + · · · + α_d + 6√(kd/n).
As the best rank-k approximator to ρ has trace-distance error αk+1 + . . . + αd , we may immediately
conclude:
Corollary 1.6. Using n = O(kd/ǫ²) copies of an unknown mixed state ρ ∈ ℂ^{d×d}, one may find a rank-k mixed state ρ̂ such that the trace distance of ρ̂ from ρ is at most ǫ more than that of the optimal rank-k approximator.
Since α_{k+1} = · · · = α_d = 0 when ρ has rank k, Corollary 1.6 strictly generalizes the trace-distance tomography result from Corollary 1.4. We also remark that one could consider performing
Frobenius-norm PCA on ρ, but it turns out that this is unlikely to give any improvement in copy
complexity over full tomography; see Section 6 for details.
As a key component of our PCA result, we investigate the problem of estimating just the
largest k eigenvalues, α1 , . . . , αk , of ρ. The goal here is to use a number of copies depending only
on k and not on d or rank(ρ). We show that the standard EYD algorithm achieves this:
Theorem 1.7.  E_{λ∼SWⁿ(α)} d_TV^{(k)}(λ̄, α) ≤ (1.92k + .5)/√n, where d_TV^{(k)}(β, α) denotes ½ Σ_{i=1}^{k} |β_i − α_i|.
From this we immediately get the following strict generalization of (the total variation distance
result in) Corollary 1.3:
Corollary 1.8. The largest k eigenvalues of an unknown mixed state ρ ∈ ℂ^{d×d} can be estimated to error ǫ in total variation distance using n = O(k²/ǫ²) copies.
The fact that this result has no dependence on the ambient dimension d or the rank of ρ may
make it particularly interesting in practice.
1.2
A coupling result concerning the RSK algorithm
For our proof of Theorem 1.7, we will need to establish a new combinatorial result concerning
the Robinson–Schensted–Knuth (RSK) algorithm applied to random words. We assume here the
reader is familiar with the RSK correspondence; see Section 2 for a few basics and, e.g., [Ful97] for
a comprehensive treatment.
Notation 1.9. Let α be a probability distribution on [d] = {1, 2, . . . , d}, and let w ∈ [d]n be a
random word formed by drawing each letter wi independently according to α. Let λ be the shape
of the Young tableaus obtained by applying the RSK correspondence to w. We write SWn (α) for
the resulting probability distribution on λ.
Notation 1.10. For x, y ∈ ℝ^d, we say x majorizes y, denoted x ≻ y, if Σ_{i=1}^{k} x_{[i]} ≥ Σ_{i=1}^{k} y_{[i]} for all k ∈ [d] = {1, 2, . . . , d}, with equality for k = d. Here the notation x_{[i]} means the ith largest value among x₁, . . . , x_d. We also use the traditional notation λ ⊵ µ instead when λ and µ are partitions of n (Young diagrams).
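To make the prefix-sum condition concrete, here is a small Python sketch (ours, not part of the paper; the function name is our own) that checks whether x ≻ y:

```python
def majorizes(x, y):
    """Check x ≻ y: prefix sums of the sorted (decreasing) entries of x
    dominate those of y, with equality for the full sum."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    assert len(xs) == len(ys)
    px = py = 0.0
    for a, b in zip(xs, ys):
        px, py = px + a, py + b
        if px < py - 1e-12:          # allow tiny floating-point slack
            return False
    return abs(px - py) < 1e-9       # equality required for k = d

# e.g. (0.7, 0.2, 0.1) majorizes the uniform distribution (1/3, 1/3, 1/3)
print(majorizes([0.7, 0.2, 0.1], [1/3, 1/3, 1/3]))   # True
```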
In Section 7 we prove the following theorem. The proof is entirely combinatorial, and can be
read independently of the quantum content in the rest of the paper.
Theorem 1.11. Let α, β be probability distributions on [d] with β ≻ α. Then for any n ∈ ℕ there is a coupling (λ, µ) of SWⁿ(α) and SWⁿ(β) such that µ ⊵ λ always.
1.3
Independent and simultaneous work.
Independently and simultaneously of our work, Haah et al. [HHJ+ 15] have given a slightly different
measurement that also achieves Corollary 1.4, up to a log factor. More precisely, their measurement
achieves error ǫ in infidelity with n = O(rd/ǫ) · log(d/ǫ) copies, or error ǫ in trace distance with
n = O(rd/ǫ2 ) · log(d/ǫ) copies. They also give a lower bound of n ≥ Ω(rd/ǫ2 )/ log(d/rǫ) for
quantum tomography with trace distance error ǫ. After seeing a draft of their work, we observed
that their measurement can also be shown to achieve expected squared-Frobenius error (4d − 3)/n, using
the techniques in this paper; the brief details appear at [Wri15].
1.4
Acknowledgments.
We thank Jeongwan Haah and Aram Harrow (and by transitivity, Vlad Voroninski) for bringing [KRT14] to our attention. We also thank Aram Harrow for pointing us to [Key06]. The
second-named author would also like to thank Akshay Krishnamurthy and Ashley Montanaro for
helpful discussions.
2
Preliminaries
We write λ ⊢ n to denote that λ is a partition of n; i.e., λ is a finite sequence of integers λ1 ≥ λ2 ≥
λ3 ≥ · · · summing to n. We also say that the size of λ is |λ| = n. The length (or height) of λ,
denoted ℓ(λ), is the largest d such that λ_d ≠ 0. We identify partitions that only differ by trailing
zeroes. A Young diagram of shape λ is a left-justified set of boxes arranged in rows, with λi boxes
in the ith row from the top. We write µ ր λ to denote that λ can be formed from µ by the addition
of a single box to some row. A standard Young tableau T of shape λ is a filling of the boxes of λ
with [n] such that the rows and columns are strictly increasing. We write λ = sh(T ). Note that T
can also be identified with a chain ∅ = λ(0) ր λ(1) ր · · · ր λ(n) = λ, where λ(t) is the shape of
the Young tableau formed from T by entries 1 .. t. A semistandard Young tableau of shape λ and
alphabet A is a filling of the boxes with letters from A such that rows are increasing and columns
are strictly increasing. Here an alphabet means a totally ordered set of “letters”, usually [d].
The quantum measurements we analyze involve the Schur–Weyl duality theorem. The symmetric group S(n) acts on (ℂ^d)^{⊗n} by permuting factors, and the general linear group GL_d acts on it diagonally; furthermore, these actions commute. Schur–Weyl duality states that as an S(n) × GL_d representation, we have the following unitary equivalence:

(ℂ^d)^{⊗n} ≅ ⊕_{λ⊢n, ℓ(λ)≤d} Sp_λ ⊗ V_λ^d.
Here we are using the following notation: The Specht modules Sp_λ are the irreducible representation
spaces of S(n), indexed by partitions λ ⊢ n. We will use the abbreviation dim(λ) for dim(Spλ );
recall this equals the number of standard Young tableaus of shape λ. The Schur (Weyl) modules
Vλd are the irreducible polynomial representation spaces of GLd , indexed by partitions (highest
weights) λ of length at most d. (For more background see, e.g., [Har05].) We will write πλ : GLd →
End(Vλd ) for the (unitary) representation itself; the domain of πλ naturally extends to all of d×d
by continuity. We also write |Tλ i for the highest weight vector in Vλd ; it is characterized by the
Q
property that πλ (A) |Tλ i = ( dk=1 Aλkki ) |Tλ i if A = (Aij ) is upper-triangular.
The character of Vλd is the Schur polynomial sλ (x1 , . . . , xd ), a symmetric, degree-|λ|, homogeneous polynomial in x = (x1 , . . . , xd ) defined by sλ (x) = aλ+δ (x)/aδ (x), where δ = (d − 1, d −
C
4
P Qd
µ
#i T
2, . . . , 1, 0) and aµ (x) = det(xi j ). Alternatively, it may be defined as
, where T
T
i=1 xi
ranges over all semistandard tableau of shape λ and alphabet [d], and #i T denotes the number of
occurrences of i in T . We have dim(Vλd ) = sλ (1, . . . , 1), the number of semistandard Young tableaus
in the sum. We’ll write Φλ (x) for the normalized Schur polynomial sλ (x1 , . . . , xd )/sλ (1, . . . , 1). Finally, we recall the following two formulas, the first following from Stanley’s hook-content formula
and the Frame–Robinson–Thrall hook-length formula, the second being the Weyl dimension formula:
s_λ(1, . . . , 1) = (dim(λ)/|λ|!) ∏_{(i,j)∈λ} (d + j − i) = ∏_{1≤i<j≤d} [(λ_i − λ_j) + (j − i)]/(j − i).    (1)
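As a quick sanity check of equation (1), the following Python sketch (our illustration; the helper names are ours) computes dim(λ) via the hook-length formula and s_λ(1, . . . , 1) via the Weyl dimension formula, and confirms that the two expressions in (1) agree on a small example:

```python
from math import factorial, prod
from itertools import combinations

def dim_partition(lam):
    """dim(λ) = number of standard Young tableaux of shape λ (hook-length formula)."""
    n = sum(lam)
    cols = [sum(1 for r in lam if r > j) for j in range(lam[0])]
    hook_prod = prod((lam[i] - j) + (cols[j] - i) - 1
                     for i in range(len(lam)) for j in range(lam[i]))
    return factorial(n) // hook_prod

def schur_at_ones(lam, d):
    """s_λ(1,...,1) = dim(V_λ^d), via the Weyl dimension formula in (1)."""
    lam = list(lam) + [0] * (d - len(lam))
    num = prod((lam[i] - lam[j]) + (j - i) for i, j in combinations(range(d), 2))
    den = prod(j - i for i, j in combinations(range(d), 2))
    return num // den

lam, d = (3, 1), 3
content_prod = prod(d + j - i for i in range(len(lam)) for j in range(lam[i]))
print(dim_partition(lam) * content_prod // factorial(sum(lam)),   # first expression in (1)
      schur_at_ones(lam, d))                                      # second expression: both give 15
```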
Given a positive semidefinite matrix ρ ∈ ℂ^{d×d}, we typically write α ∈ ℝ^d for its sorted spectrum; i.e., its eigenvalues α₁ ≥ α₂ ≥ · · · ≥ α_d ≥ 0. When ρ has trace 1 it is called a density matrix (or mixed state), and in this case α defines a (sorted) probability distribution on [d].
We will several times use the following elementary majorization inequality:

If c, x, y ∈ ℝ^d are sorted (decreasing) and x ≻ y then c · x ≥ c · y.    (2)
Recall [Ful97] that the Robinson–Schensted–Knuth correspondence is a certain bijection between strings w ∈ An and pairs (P, Q), where P is a semistandard insertion tableau filled by the
multiset of letters in w, and Q is a standard recording tableau, satisfying sh(Q) = sh(P ). We
write RSK(w) = (P, Q) and write shRSK(w) for the common shape of P and Q, a partition of n
of length at most |A|. One way to characterize λ = shRSK(w) is by Greene’s Theorem [Gre74]:
λ1 + · · · + λk is the length of the longest disjoint union of k increasing subsequences in w. In particular, λ1 = LIS(w), the length of the longest increasing (i.e., nondecreasing) subsequence in w.
We remind the reader here of the distinction between a subsequence of a string, in which the letters
need not be consecutive, and a substring, in which they are. We use the notation w[i .. j] for the
substring (wi , wi+1 , . . . , wj ) ∈ Aj−i+1 .
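For the simplest case of Greene's Theorem, λ₁ = LIS(w), a short patience-sorting sketch in Python (ours; it assumes a totally ordered alphabet) computes the length of the longest nondecreasing subsequence:

```python
from bisect import bisect_right

def lis_length(word):
    """Length of the longest nondecreasing subsequence of `word`.
    By Greene's Theorem this equals λ₁, the first-row length of shRSK(word)."""
    tails = []   # tails[i] = smallest last letter over nondecreasing subsequences of length i+1
    for x in word:
        j = bisect_right(tails, x)   # first tail strictly greater than x
        if j == len(tails):
            tails.append(x)
        else:
            tails[j] = x
    return len(tails)

print(lis_length([1, 3, 2, 2, 4, 1]))   # 4, e.g. the subsequence 1, 2, 2, 4
```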
Let α = (α1 , . . . , αd ) denote a probability distribution on alphabet [d], let α⊗n denote the
associated product probability distribution on [d]n , and write α⊗∞ for the product probability
distribution on infinite sequences. We define the associated Schur–Weyl growth process to be the
(random) sequence
∅ = λ(0) ր λ(1) ր λ(2) ր λ(3) ր · · ·
(3)
where w ∼ α⊗∞ and λ(t) = shRSK(w[1 .. t]). Note that the marginal distribution on λ(n) is what
we call SWn (α). The Schur–Weyl growth process was studied in, e.g., [O’C03], wherein it was
noted that the RSK correspondence implies
Pr[λ^{(t)} = λ^{(t)} ∀t ≤ n] = s_{λ^{(n)}}(α)    (4)
for any chain ∅ = λ(0) ր · · · ր λ(n) . (Together with the fact that sλ (α) is homogeneous of
degree |λ|, this gives yet another alternate definition of the Schur polynomials.) One consequence
of this is that for any i ∈ [d] we have
Pr[λ^{(n+1)} = λ + e_i | λ^{(n)} = λ] = s_{λ+e_i}(α)/s_λ(α).    (5)
(This formula is correct even when λ + ei is not a valid partition of n + 1; in this case sλ+ei ≡ 0 formally under the determinantal definition.) The above equation is also a probabilistic interpretation
of the following special case of Pieri’s rule:
(x₁ + · · · + x_d) s_λ(x₁, . . . , x_d) = Σ_{i=1}^{d} s_{λ+e_i}(x₁, . . . , x_d).    (6)
We will need the following consequence of (5):
Proposition 2.1. Let λ ⊢ n and let α ∈ ℝ^d be a sorted probability distribution. Then

(s_{λ+e₁}(α)/s_λ(α), . . . , s_{λ+e_d}(α)/s_λ(α)) ≻ (α₁, . . . , α_d).    (7)
Proof. Let β be the reversal of α (i.e. βi = αd−i+1 ) and let (λ(t) )t≥0 be a Schur–Weyl growth
process corresponding to β. By (5) and the fact that the Schur polynomials are symmetric, we
conclude that the vector on the left of (7) is (p1 , . . . , pd ), where pi = Pr[λ(n+1) = λ + ei | λ(n) = λ].
Now p1 + · · · + pk is the probability, conditioned on λ(n) = λ, that the (n + 1)th box in the process
enters into one of the first k rows. But this is indeed at least α1 + · · · + αk = βd + · · · + βd−k+1 ,
because the latter represents the probability that the (n + 1)th letter is d − k + 1 or higher, and
such a letter will always be inserted within the first k rows under RSK.
A further consequence of (4) (perhaps first noted in [ITW01]) is that for λ ⊢ n,
Pr_{λ∼SWⁿ(α)}[λ = λ] = dim(λ) s_λ(α).    (8)

At the same time, as noted in [ARS88] (see also [Aud06, Equation (36)]) it follows from Schur–Weyl duality that if ρ ∈ ℂ^{d×d} is a density matrix with spectrum α then

tr(Π_λ ρ^{⊗n}) = dim(λ) s_λ(α),

where Π_λ denotes the isotypic projection onto Sp_λ ⊗ V_λ^d. Thus we have the identity

tr(Π_λ ρ^{⊗n}) = Pr_{λ∼SWⁿ(α)}[λ = λ].    (9)

3
Spectrum estimation
Several groups of researchers suggested the following method for estimating the sorted spectrum α of
a quantum mixed state ρ ∈ ℂ^{d×d}: measure ρ^{⊗n} according to the isotypic projectors {Π_λ}_{λ⊢n}; and,
on obtaining λ, output the estimate α̂ = λ = (λ1 /n, . . . , λd /n). The measurement is sometimes
called “weak Schur sampling” [CHW07] and we refer to the overall procedure as the “Empirical
Young Diagram (EYD)” algorithm. We remark that the algorithm’s behavior depends only on
the rank r of ρ; it is indifferent to the ambient dimension d. So while we will analyze the EYD
algorithm in terms of d, we will present the results in terms of r.
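Since, by Notation 1.9 and identity (9), the measurement outcome λ has the same distribution as shRSK(w) for a random word w ∼ α^{⊗n}, the EYD algorithm can be simulated entirely classically. The following Python sketch (our illustration; the helper names and parameters are ours) does this and reports the squared ℓ₂ error, whose expectation Theorem 1.1 bounds by d/n:

```python
import numpy as np
from bisect import bisect_right

def rsk_shape(word):
    """Row lengths of the RSK insertion tableau of `word` (row insertion for words:
    each letter bumps the leftmost entry strictly greater than it)."""
    rows = []                                  # each row is kept weakly increasing
    for x in word:
        for row in rows:
            j = bisect_right(row, x)
            if j == len(row):
                row.append(x); x = None; break
            row[j], x = x, row[j]              # bump the displaced entry to the next row
        if x is not None:
            rows.append([x])
    return [len(r) for r in rows]

def eyd_estimate(alpha, n, rng):
    """Simulated EYD spectrum estimate λ̄ = (λ₁/n, ..., λ_d/n)."""
    d = len(alpha)
    w = rng.choice(d, size=n, p=alpha)
    lam = rsk_shape(w.tolist())
    return np.array(lam + [0] * (d - len(lam))) / n

rng = np.random.default_rng(0)
alpha = np.array([0.5, 0.3, 0.2])
est = eyd_estimate(alpha, n=4000, rng=rng)
print(est, np.sum((est - alpha) ** 2))   # squared ℓ2 error; its mean is at most d/n
```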
In [HM02, CM06] it is shown that n = O(r² log(r/ǫ)/ǫ²) suffices for EYD to obtain d_KL(λ̄, α) ≤ 2ǫ² and hence d_TV(λ̄, α) ≤ ǫ with high probability. However we give a different analysis. By equation (9), the expected ℓ₂²-error of the EYD algorithm is precisely E_{λ∼SWⁿ(α)} ‖λ̄ − α‖₂². Theorem 1.1, which we prove in this section, bounds this quantity by r/n. Thus

E d_TV(λ̄, α) = ½ E ‖λ̄ − α‖₁ ≤ ½√r E ‖λ̄ − α‖₂ ≤ ½√r √(E ‖λ̄ − α‖₂²) ≤ r/(2√n),

which is bounded by ǫ/4, say, if n = 4r²/ǫ². Thus in this case Pr[d_TV(λ̄, α) > ǫ] < 1/4. By a standard amplification (repeating the EYD algorithm O(log 1/δ) times and outputting the estimate which is within 2ǫ total variation distance of the most other estimates), we obtain Corollary 1.3.
We give two lemmas, and then the proof of Theorem 1.1.
Lemma 3.1. Let α ∈ ℝ^d be a probability distribution. Then

E_{λ∼SWⁿ(α)} Σ_{i=1}^{d} λ_i² ≤ Σ_{i=1}^{d} (nα_i)² + dn.

Proof. Define the polynomial function

p*₂(λ) = Σ_{i=1}^{ℓ(λ)} [(λ_i − i + ½)² − (−i + ½)²].

By Proposition 2.34 and equation (12) of [OW15], E_{λ∼SWⁿ(α)}[p*₂(λ)] = n(n − 1) · Σ_{i=1}^{d} α_i². Hence,

E Σ_{i=1}^{d} λ_i² = E[p*₂(λ) + Σ_{i=1}^{d} (2i − 1)λ_i] ≤ E p*₂(λ) + Σ_{i=1}^{d} (2i − 1)(n/d) ≤ n² · Σ_{i=1}^{d} α_i² + dn.

Here the first inequality used inequality (2) and λ ≻ (n/d, . . . , n/d).
Lemma 3.2. Let λ ∼ SWⁿ(α), where α ∈ ℝ^d is a sorted probability distribution. Then (E λ₁, . . . , E λ_d) ≻ (α₁n, . . . , α_d n).
Proof. Let w ∼ α⊗n , so λ is distributed as shRSK(w). The proof is completed by linearity of
expectation applied to the fact that (λ1 , . . . , λd ) ≻ (#1 w, . . . , #d w) always, where #k w denotes
the number of times letter k appears in w. In turn this fact holds by Greene’s Theorem: we can
form k disjoint increasing subsequences in w by taking all its 1’s, all its 2’s, . . . , all its k’s.
Proof of Theorem 1.1. We have

n² · E_{λ∼SWⁿ(α)} ‖λ̄ − α‖₂² = E Σ_{i=1}^{d} (λ_i − α_i n)² = E Σ_{i=1}^{d} (λ_i² + (α_i n)²) − 2 Σ_{i=1}^{d} (α_i n) · E λ_i
≤ dn + 2 Σ_{i=1}^{d} (α_i n)² − 2 Σ_{i=1}^{d} (α_i n) · E λ_i ≤ dn + 2 Σ_{i=1}^{d} (α_i n)² − 2 Σ_{i=1}^{d} (α_i n) · (α_i n) = dn,

where the first inequality used Lemma 3.1 and the second used Lemma 3.2 and inequality (2) (recall that the coefficients α_i n are decreasing). Dividing by n² completes the proof.
4
Quantum state tomography
In this section we analyze the tomography algorithm proposed by Keyl [Key06] based on projection to the highest weight vector. Keyl's method, when applied to density matrix ρ ∈ ℂ^{d×d} with sorted spectrum α, begins by performing weak Schur sampling on ρ^{⊗n}. Supposing the partition thereby obtained from SWⁿ(α) is λ ⊢ n, the state collapses to (1/s_λ(α)) π_λ(ρ) ∈ V_λ^d. The main step of Keyl's algorithm is now to perform a normalized POVM within V_λ^d whose outcomes are unitary matrices in U(d). Specifically, his measurement maps a (Borel) subset F ⊆ U(d) to

M(F) := ∫_F π_λ(U) |T_λ⟩⟨T_λ| π_λ(U)† · dim(V_λ^d) dU,

where dU denotes Haar measure on U(d). (To see that this is indeed a POVM — i.e., that M := M(U(d)) = I — first note that the translation invariance of Haar measure implies π_λ(V) M π_λ(V)† =
M for any V ∈ U(d). Thinking of πλ as an irreducible representation of the unitary group, Schur’s
lemma implies M must be a scalar matrix. Taking traces shows M is the identity.)
We write K_λ(ρ) for the probability distribution on U(d) associated to this POVM; its density with respect to the Haar measure is therefore

tr(π_λ((1/s_λ(α))ρ) π_λ(U) |T_λ⟩⟨T_λ| π_λ(U)†) · dim(V_λ^d) = Φ_λ(α)⁻¹ · ⟨T_λ| π_λ(U†ρU) |T_λ⟩.    (10)

Supposing the outcome of the measurement is U, Keyl's final estimate for ρ is ρ̂ = U diag(λ̄)U†. Thus the expected Frobenius-squared error of Keyl's tomography algorithm is precisely

E_{λ∼SWⁿ(α), U∼K_λ(ρ)} ‖U diag(λ̄)U† − ρ‖_F².
Theorem 1.2, which we prove in this section, bounds the above quantity by (4d − 3)/n. Let us assume now that rank(ρ) ≤ r. Then ℓ(λ) ≤ r always and hence the estimate U diag(λ̄)U† will also have rank at most r. Thus by Cauchy–Schwarz applied to the singular values of U diag(λ̄)U† − ρ,

E d_tr(U diag(λ̄)U†, ρ) = ½ E ‖U diag(λ̄)U† − ρ‖₁ ≤ ½√(2r) E ‖U diag(λ̄)U† − ρ‖_F ≤ √(r/2) √(E ‖U diag(λ̄)U† − ρ‖_F²) ≤ √(O(rd)/n),

and Corollary 1.4 follows just as Corollary 1.3 did.
The remainder of this section is devoted to the proof of Theorem 1.2.
4.1
Integration formulas
Notation 4.1. Let Z ∈ ℂ^{d×d} and let λ be a partition of length at most d. The generalized power function ∆_λ is defined by

∆_λ(Z) = ∏_{k=1}^{d} pm_k(Z)^{λ_k − λ_{k+1}},

where pm_k(Z) denotes the kth principal minor of Z (and λ_{d+1} = 0).
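A small numerical sketch (ours, assuming NumPy; not from the paper) of Notation 4.1, which is just a product of powers of leading principal minors:

```python
import numpy as np

def generalized_power(Z, lam):
    """∆_λ(Z) = ∏_k pm_k(Z)^(λ_k − λ_{k+1}), where pm_k is the k-th leading principal minor."""
    d = Z.shape[0]
    lam = list(lam) + [0] * (d + 1 - len(lam))          # pad so that λ_{d+1} = 0
    val = 1.0
    for k in range(1, d + 1):
        pm_k = np.linalg.det(Z[:k, :k])
        val *= pm_k ** (lam[k - 1] - lam[k])
    return val

# For diagonal Z = diag(z) this reduces to the monomial z1^λ1 · z2^λ2 · ... :
Z = np.diag([0.5, 0.3, 0.2])
print(generalized_power(Z, (3, 1)), 0.5**3 * 0.3**1)    # both print 0.0375
```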
As noted by Keyl [Key06, equation (141)], when Z is positive semidefinite we have ⟨T_λ| π_λ(Z) |T_λ⟩ = ∆_λ(Z); this follows by writing Z = LL† for L = (L_{ij}) lower triangular with nonnegative diagonal and using the fact that ∆_λ(Z) = ∆_λ(L†)² = ∏_{k=1}^{d} L_{kk}^{2λ_k}. Putting this into (10) we have an alternate definition for the distribution K_λ(ρ):

E_{U∼K_λ(ρ)} f(U) = Φ_λ(α)⁻¹ E_{U∼U(d)} [f(U) · ∆_λ(U†ρU)],    (11)

where U ∼ U(d) denotes that U has the Haar measure. For example, taking f ≡ 1 yields the identity

E_{U∼U(d)} ∆_λ(U†ρU) = Φ_λ(α);    (12)

this expresses the fact that the spherical polynomial of weight λ for GL_d/U(d) is precisely the normalized Schur polynomial (see, e.g., [Far15]). For a further example, taking f(U) = ∆_µ(U†ρU) and using the fact that ∆_λ · ∆_µ = ∆_{λ+µ}, we obtain

E_{U∼K_λ(ρ)} ∆_µ(U†ρU) = Φ_{λ+µ}(α)/Φ_λ(α);  in particular,  E_{U∼K_λ(ρ)} (U†ρU)_{1,1} = Φ_{λ+e₁}(α)/Φ_λ(α).    (13)

For our proof of Theorem 1.2, we will need to develop and analyze a more general formula for the expected diagonal entry E(U†ρU)_{k,k}. We begin with some lemmas.
Definition 4.2. For λ a partition and m a positive integer we define the following partition of height (at most) m:

λ^{[m]} = (λ₁ − λ_{m+1}, . . . , λ_m − λ_{m+1}).

We also define the following "complementary" partition λ_{[m]} satisfying λ = λ^{[m]} + λ_{[m]}:

(λ_{[m]})_i = λ_{m+1} for i ≤ m,  and  (λ_{[m]})_i = λ_i for i ≥ m + 1.
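For concreteness, a tiny Python sketch (ours) of the split λ = λ^{[m]} + λ_{[m]} in Definition 4.2:

```python
def partition_split(lam, m):
    """Return (λ^[m], λ_[m]) with λ = λ^[m] + λ_[m] entrywise (Definition 4.2)."""
    lam = list(lam)
    cut = lam[m] if m < len(lam) else 0                        # λ_{m+1} (0 if absent)
    top = [lam[i] - cut for i in range(min(m, len(lam)))]      # part of height at most m
    rest = [cut] * m + lam[m:]                                 # complementary part
    return top, rest

print(partition_split([5, 4, 2, 1], 2))   # ([3, 2], [2, 2, 2, 1])
```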
Lemma 4.3. Let ρ ∈ ℂ^{d×d} be a density matrix with spectrum α and let λ ⊢ n have height at most d. Let m ∈ [d] and let f_m be an m-variate symmetric polynomial. Then

E_{U∼K_λ(ρ)} f_m(β) = Φ_λ(α)⁻¹ · E_{U∼U(d)} [f_m(β) · Φ_{λ^{[m]}}(β) · ∆_{λ_{[m]}}(U†ρU)],

where we write β = spec_m(U†ρU) for the spectrum of the top-left m × m submatrix of U†ρU.
Proof. Let V ∼ U(m) and write V = V ⊕ I, where I is the (d − m)-dimensional identity matrix.
By translation-invariance of Haar measure we have U V ∼ U(d), and hence from (11),
i
h
†
†
(14)
E fm (β) = Φλ (α)−1
E
fm (specm (V U † ρU V )) · ∆λ (V U † ρU V ) .
U ∼Kλ (ρ)
U ∼U(d),V ∼U(m)
Note that conjugating a matrix by V does not change the spectrum of its upper-left k × k block
†
†
for any k ≥ m. Thus specm (V U † ρU V ) is identical to β, and pmk (V U † ρU V ) = pmk (U † ρU )
for all k ≥ m. Thus using ∆λ = ∆λ[m] · ∆λ[m] we have
h
i
† †
−1
†
E
fm (β) · ∆λ[m] (U ρU ) · E
∆λ[m] (V U ρU V ) .
(14) = Φλ (α)
U ∼U(d)
V ∼U(m)
But the inner expectation equals Φλ[m] (β) by (12), completing the proof.
Lemma 4.4. In the setting of Lemma 4.3,

E_{U∼K_λ(ρ)} avg_{i=1}^{m} {(U†ρU)_{i,i}} = Σ_{i=1}^{m} [s_{λ^{[m]}+e_i}(1/m) / s_{λ^{[m]}}(1/m)] · [Φ_{λ+e_i}(α) / Φ_λ(α)],    (15)

where 1/m abbreviates 1/m, . . . , 1/m (repeated m times).
Remark 4.5. The right-hand side of (15) is also a weighted average — of the quantities Φλ+ei (α)/Φλ (α)
— by virtue of (5). The lemma also generalizes (13), as sλ[1] +e1 (1)/sλ[1] (1) is simply 1.
1
times the expected trace of the upper-left m × m
Proof. On the left-hand side of (15) we have m
†
1
submatrix of U ρU . So by applying Lemma 4.3 with fm (β) = m
(β1 + · · · + βm ), it is equal to
1
sλ[m] (β)
†
−1
· ∆λ[m] (U ρU )
(β 1 + · · · + β m ) ·
Φλ (α) · E
sλ[m] (1, . . . , 1)
U ∼U(d) m
#
"
m
X
s
[m] +e (β)
1
λ
i
· ∆λ[m] (U † ρU )
(by Pieri (6))
= Φλ (α)−1 · E
sλ[m] (1, . . . , 1)
U ∼U(d) m
i=1
m
h
i
X
sλ[m] +ei (1, . . . , 1)
−1
= Φλ (α) ·
· E
Φλ[m] +ei (β) · ∆λ[m] (U † ρU )
m · sλ[m] (1, . . . , 1) U ∼U(d)
i=1
m
X
sλ[m] +ei (1, . . . , 1)
· Φλ+ei (α),
= Φλ (α)−1 ·
m · sλ[m] (1, . . . , 1)
i=1
where in the last step we used Lemma 4.3 again, with fm ≡ 1 and λ + ei in place of λ. But this is
equal to the right-hand side of (15), using the homogeneity of Schur polynomials.
Lemma 4.6. Assume the setting of Lemma 4.3. Then η_m := E_{U∼K_λ(ρ)} (U†ρU)_{m,m} is a convex combination of the quantities R_i := Φ_{λ+e_i}(α)/Φ_λ(α), 1 ≤ i ≤ m.¹
Proof. This is clear for m = 1. For m > 1, Remark 4.5 implies
m
m−1
i=1
i=1
avg{ηi } = p1 R1 + · · · + pm Rm ,
avg {ηi } = q1 R1 + · · · + qm Rm ,
Pm
where p1 +· · ·+p
Pmm= q1 +· · ·+qm = 1 and qm = 0. Thus ηi = i=1 ri Ri , where ri = (mpi − (m − 1)qi ),
and evidently i=1 ri = m − (m − 1) = 1. It remains to verify that each ri ≥ 0. This is obvious
for i = m; for i < m, we must check that
sλ[m] +ei (1, . . . , 1)
s [m−1] +ei (1, . . . , 1)
≥ λ
.
sλ[m] (1, . . . , 1)
sλ[m−1] (1, . . . , 1)
(16)
Using the Weyl dimension formula from (1), one may explicitly compute that the ratio of the left
side of (16) to the right side is precisely 1 + (λi −λm1)+(m−i) ≥ 1. This completes the proof.
We will in fact only need the following corollary:
Corollary 4.7. Let ρ ∈ ℂ^{d×d} be a density matrix with spectrum α and let λ ⊢ n have height at most d. Then E_{U∼K_λ(ρ)} (U†ρU)_{m,m} ≥ Φ_{λ+e_m}(α)/Φ_λ(α) for every m ∈ [d].
Proof. This is immediate from Lemma 4.6 and the fact that Φλ+ei (α) ≥ Φλ+em (α) whenever i < m
(assuming λ + ei is a valid partition). This latter fact was recently proved by Sra [Sra15], verifying
a conjecture of Cuttler et al. [CGS11].
4.2
Proof of Theorem 1.2
Throughout the proof we assume λ ∼ SWn (α) and U ∼ Kλ (ρ). We have
n2 · E kU diag(λ)U † − ρk2F = n2 · E kdiag(λ) − U † ρU k2F
λ,U
=E
λ
d
X
i=1
λ,U
λ2i +
d
X
i=1
(αi n)2 − 2n E
λ,U
d
X
i=1
λi (U † ρU )i,i ≤ dn + 2
d
X
i=1
(αi n)2 − 2n E
λ
d
X
i=1
λi E(U † ρU )i,i ,
U
(17)
using Lemma 3.1. Then by Corollary 4.7,
d
X
d
X
d
X
Φλ+ei (α)
sλ+ei (α)
sλ (1, . . . , 1)
λi
λi E(U ρU )i,i ≥ E
E
λi
=E
U
λ
λ
λ
Φλ (α)
sλ (α) sλ+ei (1, . . . , 1)
i=1
i=1
i=1
d
d
d
X
X sλ+e (α)
X
sλ+ei (α) sλ+ei (1, . . . , 1)
sλ+ei (α)
sλ+ei (1, . . . , 1)
i
λi
λi
λi
−E
,
≥E
2−
= 2E
λ
λ
sλ (α)
sλ (1, . . . , 1)
sλ (α) λ
sλ (α)
sλ (1, . . . , 1)
i=1
†
i=1
i=1
(18)
1
To be careful, we may exclude all those i for which λ + ei is an invalid partition and thus Ri = 0.
where we used r ≥ 2 − 1r for r > 0. We lower-bound the first term in (18) by first using the
inequality (2) and Proposition 2.1, and then using inequality (2) and Lemma 3.2 (as in the proof
of Theorem 1.1):
d
d
d
X
X
X
sλ+ei (α)
α2i .
(19)
λi αi ≥ 2n
≥ 2E
λi
2E
λ
λ
sλ (α)
i=1
i=1
i=1
As for the second term in (18), we use (8) and the first formula in (1) to compute
E
λ
d
X
i=1
d
λi
sλ+ei (α) sλ+ei (1, . . . , 1) X X
sλ+ei (α) dim(λ + ei )(d + λi − i + 1)
=
dim(λ)sλ (α) · λi ·
sλ (α)
sλ (1, . . . , 1)
sλ (α)
dim(λ)(n + 1)
=
≤
i=1 λ⊢n
d X
X
i=1 λ⊢n
d
X
i=1
λi (d − i + λi + 1)
n+1
(λ′i − 1)(d − i + λ′i )
n+1
λ′ ∼SWn+1 (α)
1
≤
n+1
1
≤
n+1
=n
dim(λ + ei )sλ+ei (α) ·
d
X
i=1
(by (8) again)
E
E
λ′ ∼SWn+1 (α)
(n + 1)n
d
X
(λ′i )2 +
i=1
d
X
i=1
α2i +
d
X
i=1
E
λ′ ∼SWn+1 (α)
d
X
(d − i − 1)λ′i
i=1
!
!
(d + i − 2)((n + 1)/d)
3
3
α2i + d −
2
2
(20)
where the last inequality is deduced exactly as in the proof of Lemma 3.1. Finally, combining
(17)–(20) we get
n2 · E kU diag(λ)U † − ρk2F ≤ 4dn − 3n.
λ,U
Dividing both sides by n2 completes the proof.
5
Truncated spectrum estimation
In this section we prove Theorem 1.7, from which Corollary 1.8 follows in the same way as Corollary 1.3. The key lemma involved is the following:
Lemma 5.1. Let α ∈ ℝ^d be a sorted probability distribution. Then for any k ∈ [d],

E_{λ∼SWⁿ(α)} Σ_{i=1}^{k} λ_i ≤ Σ_{i=1}^{k} α_i n + 2√2 k√n.

We remark that it is easy to lower-bound this expectation by Σ_{i=1}^{k} α_i n via Lemma 3.2. We now show how to deduce Theorem 1.7 from Lemma 5.1. Then in Section 5.1 we prove the lemma.
Proof of Theorem 1.7. Let w ∼ α^{⊗n}, let RSK(w) = (P, Q), and let λ = sh(P), so λ ∼ SWⁿ(α). Write w′ for the string formed from w by deleting all letters bigger than k. Then it is a basic property of the RSK algorithm that RSK(w′) produces the insertion tableau P′ formed from P by deleting all boxes with labels bigger than k. Thus λ′ = sh(P′) = shRSK(w′). Denoting α[k] = α₁ + · · · + α_k, we have λ′ ∼ SW^m(α′), where m ∼ Binomial(n, α[k]) and α′ denotes α conditioned on the first k letters; i.e., α′ = (α_i/α[k])_{i=1}^{k}. Now by the triangle inequality,

2n · E d_TV^{(k)}(λ̄, α) = E Σ_{i=1}^{k} |λ_i − α_i n| ≤ E Σ_{i=1}^{k} (λ_i − λ′_i) + E Σ_{i=1}^{k} |λ′_i − α′_i m| + E Σ_{i=1}^{k} |α′_i m − α_i n|.    (21)

The first quantity in (21) is at most 2√2 k√n, using Lemma 5.1 and the fact that E[Σ_{i=1}^{k} λ′_i] = E[m] = Σ_{i=1}^{k} α_i n. The second quantity in (21) is at most k√n using Theorem 1.1:

E Σ_{i=1}^{k} |λ′_i − α′_i m| = E_m [m · E_{λ′} ‖λ̄′ − α′‖₁] ≤ E_m [m √k √(E_{λ′} ‖λ̄′ − α′‖₂²)] ≤ k E_m √m ≤ k√n.

And the third quantity in (21) is at most √n:

E_m Σ_{i=1}^{k} |α′_i m − α_i n| = E_m Σ_{i=1}^{k} (α_i/α[k]) |m − α[k] n| = E_m |m − α[k] n| ≤ stddev(m) ≤ √n.

Thus 2n · E d_TV^{(k)}(λ̄, α) ≤ ((2√2 + 1)k + 1)√n, and dividing by 2n completes the proof.

5.1
Proof of Lemma 5.1
Our proof of Lemma 5.1 is essentially by reduction to the case when α is the uniform distribution
and k = 1. We thus begin by analyzing the uniform distribution.
5.1.1
The uniform distribution case
In this subsection we will use the abbreviation (1/d) for the uniform distribution (1/d, . . . , 1/d)
on [d]. Our goal is the following fact, which is of independent interest:
√
Theorem 5.2.
En
λ1 ≤ n/d + 2 n.
λ∼SW (1/d)
We remark that Theorem 5.2 implies Lemma 5.1 (with a slightly better constant) in the case
of α = (1/d, . . . , 1/d), since of course λi ≤ λ1 for all i ∈ [k]. Also, by taking d → ∞ we recover
√
the well known fact that E λ1 ≤ 2 n when λ has the Plancherel distribution. Indeed, our proof of
Theorem 5.2 extends the original proof of this fact by Vershik and Kerov [VK85] (cf. the exposition
in [Rom14]).
Proof. Consider the Schur–Weyl growth process under the uniform distribution (1/d, . . . , 1/d)
on [d]. For m ≥ 1 we define
(m)
δm = E[λ1
(m−1)
− λ1
] = Pr[the mth box enters into the 1st row] =
E
λ∼SW
m−1
sλ+e1 (1/d)
,
(1/d) sλ (1/d)
where we used (5). By Cauchy–Schwarz and identity (8),
X
sλ+e1 (1/d) 2
sλ+e1 (1/d) 2
2
=
δm ≤
E
dim(λ)sλ (1/d) ·
sλ (1/d)
sλ (1/d)
λ∼SWm−1 (1/d)
λ⊢m−1
X
X
d + λ1
sλ+e1 (1/d)
=
dim(λ + e1 )sλ+e1 (1/d) ·
(22)
=
dim(λ)sλ+e1 (1/d) ·
sλ (1/d)
dm
λ⊢m−1
λ⊢m−1
d + λ1
d + δ1 + . . . + δm
≤
E
=
,
dm
dm
λ∼SWm (1/d)
12
where the ratio in (22) was computed using the first formula of (1) (and the homogeneity of Schur
polynomials). Thus we have established the following recurrence:
δm ≤ √
1 p
d + δ1 + · · · + δm .
dm
(23)
We will now show by induction that δm ≤ d1 + √1m for all m ≥ 1. Note that this will complete the
proof, by summing over m ∈ [n]. The base case, m = 1, is immediate since δ1 = 1. For general
m > 1, think of δ1 , . . . , δm−1 as fixed and δm as variable. Now if δm satisfies (23), it is bounded
above by the (positive) solution δ∗ of
δ=√
1 √
c + δ,
dm
where c = d + δ1 + · · · + δm−1 .
Note that if δ > 0 satisfies
δ≥√
1 √
c+δ
dm
(24)
then it must be that δ ≥ δ∗ ≥ δm . Thus it suffices to show that (24) holds for δ =
indeed,
1
d
+
√1 .
m
But
s
1
1
1
1
1
c+ + √ = √
d + δ1 + · · · + δm−1 + + √
d
d
m
m
dm
v
u
r
r
m
X
√
√
1
m
1
1
1
1 u
1
m
1
t
d+
d+
d+
+√ ≤√
+2 m= √
≤√
= +√ ,
d
d
d
d
m
i
dm
dm
dm
i=1
1
√
dm
s
where the first inequality used induction. The proof is complete.
5.1.2
Reduction to the uniform case
Proof of Lemma 5.1. Given the sorted distribution α on [d], let β be the sorted probability distribution on [d] defined, for an appropriate value of m, as
β1 = α1 , . . . , βk = αk ,
βk+1 = . . . = βm = αk+1 > βm+1 ≥ 0,
βm+2 = . . . = βd = 0.
In other words, β agrees with α on the first k letters and is otherwise uniform, except for possibly a
small “bump” at βm+1 . By construction we have β ≻ α. Thus it follows from our coupling result,
Theorem 1.11, that
k
k
X
X
µi ,
λi ≤
En
En
λ∼SW (α)
µ∼SW (β)
i=1
i=1
and hence it suffices to prove the lemma for β in place of α. Observe that β can be expressed as a
mixture
β = p1 · D1 + p2 · D2 + p3 · D3 ,
(25)
of a certain distribution D1 supported on [k], the uniform distribution D2 on [m], and the uniform
distribution D3 on [m + 1]. We may therefore think of a draw µ ∼ SWn (β) occurring as follows.
First, [n] is partitioned into three subsets I 1 , I 2 , I 3 by including each i ∈ [n] into I j independently
⊗I
with probability pj . Next we draw strings w(j) ∼ Dj j independently for j ∈ [3]. Finally, we let
w = (w (1) , w (2) , w(3) ) ∈ [d]n be the natural composite string and define µ = shRSK(w). Let us
also write µ(j) = shRSK(w (j) ) for j ∈ [3]. We now claim that
k
X
i=1
µi ≤
k
X
(1)
µi
+
k
X
(2)
µi
k
X
(3)
µi
i=1
i=1
i=1
+
always holds. Indeed, this follows from Greene’s Theorem: the left-hand side is |s|, where s ∈ [d]n
is a maximum-length disjoint union of k increasing subsequences in w; the projection of s(j) onto
coordinates I j is a disjoint union of k increasing subsequences in w (j) and hence the right-hand
side is at least |s(1) | + |s(2) | + |s(3) | = |s|. Thus to complete the proof of the lemma, it suffices to
show
k
k
k
k
X
X
X
X
√ √
(3)
(2)
(1)
αi n + 2 2 k n.
µi ≤
µi + E
µi + E
(26)
E
i=1
i=1
i=1
i=1
Since D1 is supported on [k], the first expectation above is equal to E[|w (1) |] = p1 n. By (the remark
just after) Theorem 5.2, we can bound the second expectation as
E
k
X
i=1
(2)
µi
≤
(2)
k E µ1
≤ k E |w
(2)
|/m + 2k E
q
√
|w(2) | ≤ k(p2 n)/m + 2k p2 n.
√
√
√
Similarly the third expectation in (26) is bounded by k(p3 n)/(m+1)+2k p3 n. Using p2 + p3 ≤
√
2, we have upper-bounded the left-hand side of (26) by
!
k
X
√ √
√
√
k
k
(p1 + p2 m
βi n + 2 2 k n,
+ p3 m+1
)n + 2 2 k n =
i=1
as required.
6
Principal component analysis
In this section we analyze a straightforward modification to Keyl’s tomography algorithm that
allows us to perform principal component analysis on an unknown density matrix ρ ∈ ℂ^{d×d}. The
PCA algorithm is the same as Keyl’s algorithm, except that having measured λ and U , it outputs
the rank-k matrix U diag(k) (λ)U † rather than the potentially full-rank matrix U diag(λ)U † . Here
we recall the notation diag(k) (λ) for the d × d matrix diag(λ1 , . . . , λk , 0, . . . , 0).
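In code, the only change relative to full tomography is this truncation step; a minimal NumPy sketch (ours, with hypothetical variable names for the measured outcomes λ̄ and U) is:

```python
import numpy as np

def pca_estimate(U, lam_bar, k):
    """Rank-k estimate U diag^{(k)}(λ̄) U† built from the measured (λ, U) of Keyl's algorithm."""
    d = len(lam_bar)
    truncated = np.zeros(d)
    truncated[:k] = lam_bar[:k]            # keep the k largest entries of λ̄, zero out the rest
    return U @ np.diag(truncated) @ U.conj().T
```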
Before giving the proof of Theorem 1.5, let us show why the case of Frobenius-norm PCA appears to be less interesting than the case of trace-distance PCA. The goal for Frobenius PCA would be to output a rank-k matrix ρ̃ satisfying

‖ρ̃ − ρ‖_F ≤ √(α_{k+1}² + . . . + α_d²) + ǫ,

with high probability, while trying to minimize the number of copies n as a function of k, d, and ǫ. However, even when ρ is guaranteed to be of rank 1, it is likely that any algorithm will require n = Ω(d/ǫ²) copies to output an ǫ-accurate rank-1 approximator ρ̃. This is because such an approximator will satisfy ‖ρ̃ − ρ‖₁ ≤ √2 · ‖ρ̃ − ρ‖_F = O(ǫ), and it is likely that n = Ω(d/ǫ²) copies of ρ are required for such a guarantee (see, for example, the lower bounds of [HHJ+15], which show that n = Ω(d/(ǫ² log(d/ǫ))) copies are necessary for tomography of rank-1 states). Thus, even in the simplest case of rank-1 PCA of rank-1 states, we probably cannot improve on the n = O(d/ǫ²) copy complexity for full tomography given by Corollary 1.4.
Now we prove Theorem 1.5. We note that the proof shares many of its steps with the proof of
Theorem 1.2.
Proof of Theorem 1.5. Throughout the proof we assume λ ∼ SWn (α) and U ∼ Kλ (ρ). We write
R for the lower-right (d − k) × (d − k) submatrix of U † ρU and we write Γ = U † ρU − R. Then
E kU diag(k) (λ)U † − ρk1 = E kdiag(k) (λ) − U † ρU k1 ≤ E kdiag(k) (λ) − Γk1 + E kRk1 . (27)
λ,U
λ,U
λ,U
λ,U
We can upper-bound the first term in (27) using
E kdiag
(k)
λ,U
(λ) − Γk1 ≤
√
2k E kdiag
λ,U
(k)
√
(λ) − ΓkF ≤ 2k E kdiag(λ) − U † ρU kF ≤
λ,U
r
8kd
. (28)
n
The first inequality is Cauchy–Schwarz together with the fact that rank(diag(k) (λ) − Γ) ≤ 2k
(since the matrix is nonzero only in its first k rows and columns). The second inequality uses that
diag(λ) − U † ρU is formed from diag(k) (λ) − Γ by adding a matrix, diag(λ) − diag(k) (λ) − R, of
disjoint support; this can only increase the squared Frobenius norm (sum of squares of entries).
Finally, the third inequality uses Theorem 1.2. To analyze the second term in (27), we note that
R is a principal submatrix of U † ρU , and so it is positive semidefinite. As a result,
E kRk1 = E tr(R) = 1 − E tr(Γ).
λ,U
λ,U
λ,U
(29)
By Corollary 4.7,
k
k
X
X
sλ (1, . . . , 1)
sλ+ei (α)
Φλ+ei (α)
=E
λ
U
λ
λ,U
λ
Φλ (α)
sλ (α) sλ+ei (1, . . . , 1)
i=1
i=1
i=1
k
k
k
X
X
X
sλ+ei (α)
sλ+ei (α) sλ+ei (1, . . . , 1)
sλ+ei (α)
sλ+ei (1, . . . , 1)
≥E
−E
,
2−
= 2E
λ
λ
sλ (α)
sλ (1, . . . , 1)
sλ (α) λ
sλ (α)
sλ (1, . . . , 1)
E tr(Γ) = E
k
X
E(U † ρU )i,i ≥ E
i=1
i=1
i=1
(30)
where we used r ≥ 2 −
1
r
for r > 0. The first term here is lower-bounded using Proposition 2.1:
k
k
X
X
sλ+ei (α)
αi .
≥2
λ
sλ (α)
2E
i=1
i=1
(31)
As for the second term in (30), we use (8) and the first formula in (1) to compute
k
k
X
sλ+ei (α) sλ+ei (1, . . . , 1) X X
sλ+ei (α) dim(λ + ei )(d + λi − i + 1)
=
dim(λ)sλ (α) ·
λ
sλ (α)
sλ (1, . . . , 1)
sλ (α)
dim(λ)(n + 1)
E
i=1
=
≤
i=1 λ⊢n
k X
X
i=1 λ⊢n
k
X
i=1
dim(λ + ei )sλ+ei (α) ·
(d − i + λi + 1)
n+1
(d − i + λ′i )
n+1
λ′ ∼SWn+1 (α)
(by (8) again)
E
k
X
kd
1
λ′i +
·
E
′
n+1
n + 1 λ ∼SW (α)
n
i=1
√
k
X
2 2k kd
αi + √ +
,
≤
n
n
≤
(32)
i=1
where the last step is by Lemma 5.1. Combining (27)–(32) we get

E_{λ,U} ‖U diag^{(k)}(λ̄)U† − ρ‖₁ ≤ (1 − Σ_{i=1}^{k} α_i) + √(8kd/n) + (2√2 k)/√n + kd/n ≤ Σ_{i=k+1}^{d} α_i + √(32kd/n) + kd/n,

where the second inequality used k ≤ √(kd). Finally, as the expectation is also trivially upper-bounded by 2, we may use 6√r ≥ min(2, √(32r) + r) (which holds for all r ≥ 0) to conclude

E_{λ,U} ‖U diag^{(k)}(λ̄)U† − ρ‖₁ ≤ Σ_{i=k+1}^{d} α_i + 6√(kd/n).

7
Majorization for the RSK algorithm
In this section we prove Theorem 1.11. The key to the proof will be the following strengthened
version of the d = 2 case, which we believe is of independent interest.
Theorem 7.1. Let 0 ≤ p, q ≤ 1 satisfy |q − ½| ≥ |p − ½|; in other words, the q-biased probability distribution (q, 1 − q) on {1, 2} is "more extreme" than the p-biased distribution (p, 1 − p). Then for any n ∈ ℕ there is a coupling (w, x) of the p-biased distribution on {1, 2}ⁿ and the q-biased distribution on {1, 2}ⁿ such that for all 1 ≤ i ≤ j ≤ n we have LIS(x[i .. j]) ≥ LIS(w[i .. j]) always.
We now show how to prove Theorem 1.11 given Theorem 7.1. Then in the following subsections
we will prove Theorem 7.1.
Proof of Theorem 1.11 given Theorem 7.1. A classic result of Muirhead [Mui02] (see also [MOA11,
B.1 Lemma]) says that β ≻ α implies there is a sequence β = γ₀ ≻ γ₁ ≻ · · · ≻ γ_t = α such that γ_i and γ_{i+1} differ in at most 2 coordinates. Since the ⊵ relation is transitive, by composing couplings
it suffices to assume that α and β themselves differ in at most two coordinates. Since the Schur–
Weyl distribution is symmetric with respect to permutations of [d], we may assume that these two
coordinates are 1 and 2. Thus we may assume α = (α1 , α2 , β3 , β4 , . . . , βd ), where α1 + α2 = β1 + β2
and α1 , α2 are between β1 , β2 .
We now define the coupling (λ, µ) as follows: We first choose a string z ∈ ({∗} ∪ {3, 4, . . . , d})n
according to the product distribution in which symbol j has probability βj for j ≥ 3 and symbol ∗
has the remaining probability β1 + β2 . Let n∗ denote the number of ∗’s in z. Next, we use
Theorem 7.1 to choose coupled strings (w, x) with the p-biased distribution on {1, 2}^{n∗} and the q-biased distribution on {1, 2}^{n∗} (respectively), where p = α₁/(β₁ + β₂) and q = β₁/(β₁ + β₂). Note indeed that |q − ½| ≥ |p − ½|, and hence LIS(x[i .. j]) ≥ LIS(w[i .. j]) for all 1 ≤ i ≤ j ≤ n∗. Now let "z ∪ w"
denote the string in [d]n obtained by filling in the ∗’s in z with the symbols from w, in the natural
left-to-right order; similarly define “z ∪ x”. Note that z ∪ w is distributed according to the product
distribution α⊗n and likewise for z ∪ x and β ⊗n . Our final coupling is now obtained by taking
λ = shRSK(z ∪ w) and µ = shRSK(z ∪ x). We need to show that µ ⊵ λ always.
By Greene’s Theorem, it suffices to show that if s1 , . . . , sk are disjoint increasing subsequences
in z ∪ w of total length S, we can find k disjoint increasing subsequences s′1 , . . . , s′k in z ∪ x of
total length at least S. We first dispose of some simple cases. If none of s1 , . . . , sk contains any 1’s
or 2’s, then we may take s′i = si for i ∈ [k], since these subsequences all still appear in z ∪ x. The
case when exactly one of s1 , . . . , sk contains any 1’s or 2’s is also easy. Without loss of generality,
say that sk is the only subsequence containing 1’s and 2’s. We may partition it as (t, u), where t
is a subsequence of w and u is a subsequence of the non-∗’s in z that follow w. Now let t′ be the
longest increasing subsequence in x. As t is an increasing subsequence of w, we know that t′ is at
least as long as t. Further, (t′ , u) is an increasing subsequence in z ∪ x. Thus we may take s′i = si
for i < k, and s′k = (t′ , u).
We now come to the main case, when at least two of s1 , . . . , sk contain 1’s and/or 2’s. Let’s first
look at the position j ∈ [n] of the rightmost 1 or 2 among s1 , . . . , sk . Without loss of generality,
assume it occurs in sk . Next, look at the position i ∈ [n] of the rightmost 1 or 2 among s1 , . . . , sk−1 .
Without loss of generality, assume it occurs in sk−1 . We will now modify the subsequences s1 , . . . , sk
as follows:
• all 1’s and 2’s are deleted from s1 , . . . , sk−2 (note that these all occur prior to position i);
• sk−1 is changed to consist of all the 2’s within (z ∪ w)[1 .. i];
• the portion of sk to the right of position i is unchanged, but the preceding portion is changed
to consist of all the 1’s within (z ∪ w)[1 .. i].
It is easy to see that the new s1 , . . . , sk remain disjoint subsequences of z ∪ w, with total length
at least S. We may also assume that the portion of sk between positions i + 1 and j consists of a
longest increasing subsequence of w.
Since the subsequences s1 , . . . , sk−2 don’t contain any 1’s or 2’s, they still appear in z ∪ x,
and we may take these as our s′1 , . . . , s′k−2 . We will also define s′k−1 to consist of all 2’s within
(z ∪ x)[1 .. i]. Finally, we will define s′k to consist of all 1’s within (z ∪ z)[1 .. i], followed by the
longest increasing subsequence of x occurring within positions (i + 1) .. j in z ∪ x, followed by the
portion of sk to the right of position j (which does not contain any 1’s or 2’s and hence is still in
z ∪ x). It is clear that s′1 , . . . , s′k are indeed disjoint increasing subsequences of z ∪ x. Their total
length is the sum of four quantities:
• the total length of s1 , . . . , sk−2 ;
• the total number of 1’s and 2’s within (z ∪ x)[1 .. i];
• the length of the longest increasing subsequence of x occurring within positions (i + 1) .. j in
z ∪ x;
• the length of the portion of sk to the right of position j.
By the coupling property of (w, x), the third quantity above is at least the length of the longest
increasing subsequence of w occurring within positions (i + 1) .. j in z ∪ w. But this precisely shows
that the total length of s′1 , . . . , s′k is at least that of s1 , . . . , sk , as desired.
7.1
Substring-LIS-dominance: RSK and Dyck paths
In this subsection we make some preparatory definitions and observations toward proving Theorem 7.1. We begin by codifying the key property therein.
Definition 7.2. Let w, w′ ∈ An be strings of equal length. We say w′ substring-LIS-dominates w,
notated w′ ✄≫ w, if LIS(w′ [i .. j]) ≥ LIS(w[i .. j]) for all 1 ≤ i ≤ j ≤ n. (Thus the coupling in
Theorem 7.1 satisfies x ✄≫ w always.) The relation ✄≫ is reflexive and transitive. If we have the
substring-LIS-dominance condition just for i = 1 we say that w′ prefix-LIS-dominates w. If we
have it just for j = n we say that w′ suffix-LIS-dominates w.
Definition 7.3. For a string w ∈ An we write behead(w) for w[2 .. n] and curtail(w) for w[1 .. n−1].
Remark 7.4. We may equivalently define substring-LIS-dominance recursively, as follows. If w′
and w have length 0 then w′ ✄≫ w. If w′ and w have length n > 0, then w′ ✄≫ w if and only if
LIS(w′ ) ≥ LIS(w) and behead(w′ ) ✄≫ behead(w) and curtail(w′ ) ✄≫ curtail(w). By omitting the
second/third condition we get a recursive definition of prefix/suffix-LIS-dominance.
Definition 7.5. Let Q be a (nonempty) standard Young tableau. We define curtail(Q) to be the
standard Young tableau obtained by deleting the box with maximum label from Q.
The following fact is immediate from the definition of the RSK correspondence:
Proposition 7.6. Let w ∈ An be a nonempty string. Suppose RSK(w) = (P, Q) and RSK(curtail(w)) =
(P ′ , Q′ ). Then Q′ = curtail(Q).
The analogous fact for beheading is more complicated.
Definition 7.7. Let Q be a (nonempty) standard Young tableau. We define behead(Q) to be the
standard Young tableau obtained by deleting the top-left box of Q, sliding the hole outside of the
tableau according to jeu de taquin (see, e.g., [Ful97, Sag01]), and then decreasing all entries by 1.
(The more traditional notation for behead(Q) is ∆(Q).)
The following fact is due to [Sch63]; see [Sag01, Proposition 3.9.3] for an explicit proof.2
Proposition 7.8. Let w ∈ An be a nonempty string. Suppose RSK(w) = (P, Q) and RSK(behead(w)) =
(P ′ , Q′ ). Then Q′ = behead(Q).
Proposition 7.9. Let w, w′ ∈ An be strings of equal length and write RSK(w) = (P, Q), RSK(w′ ) =
(P ′ , Q′ ). Then whether or not w′ ✄≫w can be determined just from the recording tableaus Q′ and Q.
Proof. This follows from the recursive definition of ✄≫ given in Remark 7.4: whether LIS(w′ ) ≥
LIS(w) can be determined by checking whether the first row of Q′ is at least as long as the first
row of Q; the recursive checks can then be performed with the aid of Propositions 7.6, 7.8.
2
Technically, therein it is proved only for strings with distinct letters. One can recover the result for general strings
in the standard manner; if the letters wi and wj are equal we break the tie by using the order relation on i, j. See
also [vL13, Lemma].
Definition 7.10. In light of Proposition 7.9 we may define the relation ✄≫ on standard Young
tableaus.
Remark 7.11. The simplicity of Proposition 7.6 implies that it is very easy to tell, given w, w′ ∈ An
with recording tableaus Q and Q′ , whether w′ suffix-LIS-dominates w. One only needs to check
whether Q′1j ≤ Q1j for all j ≥ 1 (treating empty entries as ∞). On the other hand, it is not
particularly easy to tell from Q′ and Q whether w′ prefix-LIS-dominates w; one seems to need to
execute all of the jeu de taquin slides.
We henceforth focus attention on alphabets of size 2. Under RSK, these yield standard Young
tableaus with at most 2-rows. (For brevity, we henceforth call these 2-row Young tableaus, even
when they have fewer than 2 rows.) In turn, 2-row Young tableaus can be identified with Dyck
paths (also known as ballot sequences).
Definition 7.12. We define a Dyck path of length n to be a path in the xy-plane that starts from
(0, 0), takes n steps of the form (+1, +1) (an upstep) or (+1, −1) (a downstep), and never passes
below the x-axis. We say that the height of a step s, written ht(s), is the y-coordinate of its
endpoint; the (final) height of a Dyck path W , written ht(W ), is the height of its last step. We do
not require the final height of a path to be 0; if it is we call the path complete, and otherwise we
call it incomplete. A return refers to a point where the path returns to the x-axis; i.e., to the end
of a step of height 0. An arch refers to a minimal complete subpath of a Dyck path; i.e., a subpath
between two consecutive returns (or between the origin and the first return).
Definition 7.13. We identify each 2-row standard Young tableau Q of size n with a Dyck path W
of length n. The identification is the standard one: reading off the entries of Q from 1 to n, we add
an upstep to W when the entry is in the first row and a downstep when it is in the second row.
The fact that this produces a Dyck path (i.e., the path does not pass below the x-axis) follows from
the standard Young tableau property. Note that the final height of W is the difference in length
between Q’s two rows. We also naturally extend the terminology “return” to 2-row standard Young
tableaus Q: a return is a second-row box labeled 2j such that boxes in Q labeled 1, . . . , 2j form a
rectangular 2 × j standard Young tableau.
Definition 7.14. In light of Definition 7.10 and the above identification, we may define the relation
✄≫ on Dyck paths.
Of course, we want to see how beheading and curtailment apply to Dyck paths. The following
fact is immediate:
Proposition 7.15. If W is the Dyck path corresponding to a nonempty 2-row standard Young
tableau Q, then the Dyck path W ′ corresponding to curtail(Q) is formed from W by deleting its last
segment. We write W ′ = curtail(W ) for this new path.
Again, the case of beheading is more complicated. We first make some definitions.
Definition 7.16. Raising refers to converting a downstep in a Dyck path to an upstep; note that
this increases the Dyck path’s height by 2. Conversely, lowering refers to converting an upstep to a
downstep. Generally, we only allow lowering when the result is still a Dyck path; i.e., never passes
below the x-axis.
Proposition 7.17. Let Q be a nonempty 2-row standard Young tableau, with corresponding Dyck
path W . Let W ′ be the Dyck path corresponding to behead(Q). Then W ′ is formed from W as
follows: First, the initial step of W is deleted (and the origin is shifted to the new initial point).
If W had no returns then the operation is complete and W ′ is the resulting Dyck path. Otherwise,
if W had at least one return, then in the new path W ′ that step (which currently goes below the
x-axis) is raised. In either case, we write W ′ = behead(W ) for the resulting path.
Proof. We use Definitions 7.7 and 7.13. Deleting the top-left box of Q corresponds to deleting
the first step of W , and decreasing all entries in Q by 1 corresponds to shifting the origin in W .
Consider now the jeu de taquin slide in Q. The empty box stays in the first row until it first reaches
a position j such that Q1,j+1 > Q2,j — if such a position exists. Such a position does exist if and
only if Q contains a return (with box (2, j) being the first such return). If Q (equivalently, W )
has no return then the empty box slides out of the first row of Q, and indeed this corresponds
to making no further changes to W . If Q has its first return at box (2, j), this means the jeu de
taquin will slide up the box labeled 2j (corresponding to raising the first return step in W ); then
all remaining slides will be in the bottom row of Q, corresponding to no further changes to W .
Remark 7.18. Similar to Remark 7.11, it is easy to "visually" check the suffix-LIS-domination
relation for Dyck paths: W ′ suffix-LIS-dominates W if and only if W ′ is at least as high as W
throughout the length of both paths. On the other hand, checking the full substring-LIS-domination
relation is more involved; we have W ′ ✄≫W if and only if for any number of simultaneous beheadings
to W ′ and W , the former path always stays at least as high as the latter.
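The following Python sketch (ours, not from the paper) implements curtail and behead on Dyck paths, represented as lists of ±1 steps, exactly as described in Propositions 7.15 and 7.17, together with the substring-LIS-dominance test of Remark 7.18:

```python
def heights(path):
    """Prefix heights of a Dyck path given as a list of +1/-1 steps."""
    h, out = 0, []
    for s in path:
        h += s
        out.append(h)
    return out

def curtail(path):
    """curtail(W) from Proposition 7.15: delete the last step."""
    return list(path[:-1])

def behead(path):
    """behead(W) from Proposition 7.17: delete the first step; if W had a
    return to height 0, raise the (unique) step that now dips below the axis."""
    new = list(path[1:])
    h = 0
    for i, s in enumerate(new):
        h += s
        if h < 0:          # this was the first return of W; raise it
            new[i] = +1
            break
    return new

def substring_lis_dominates(wp, w):
    """Remark 7.18: W' ✄≫ W iff after every number of simultaneous beheadings,
    W' stays at least as high as W throughout."""
    assert len(wp) == len(w)
    while wp:
        if any(a < b for a, b in zip(heights(wp), heights(w))):
            return False
        wp, w = behead(wp), behead(w)
    return True
```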
Finally, we will require the following definition:
Definition 7.19. A hinged range is a sequence (R0 , s1 , R1 , s2 , R2 , . . . , sk , Rk ) (with k ≥ 0), where
each si is a step (upstep or downstep) called a hinge and each Ri is a Dyck path (possibly of
length 0) called a range. The “internal ranges” R1 , . . . , Rk−1 are required to be complete Dyck
paths; the “external ranges” R0 and Rk may be incomplete.
We may identify the hinged range with the path formed by concatenating its components; note
that this need not be a Dyck path, as it may pass below the origin.
If H is a hinged range and H ′ is formed by raising zero or more of its hinges (i.e., converting
downstep hinges to upsteps), we say that H ′ is a raising of H or, equivalently, that H is a lowering
of H ′ . We call a hinged range fully lowered (respectively, fully raised) if all its hinges are downsteps
(respectively, upsteps).
7.2
A bijection on Dyck paths
Theorem 7.20. Fix integers n ≥ 2 and 1 ≤ λ₂ ≤ ⌊n/2⌋. Define

W = {(W, s₁) : W is a length-n Dyck path with exactly λ₂ downsteps; s₁ is a downstep in W}

and

W′ = ⋃_{k=1}^{λ₂} {(W′, s′₁) : W′ is a length-n Dyck path with exactly λ₂ − k downsteps; s′₁ is an upstep in W′ with k + 1 ≤ ht(s′₁) ≤ ht(W′) − k + 1; s′₁ is the rightmost upstep in W′ of its height}.

Then there is an explicit bijection f : W → W′ such that whenever f(W, s₁) = (W′, s′₁) it holds that W′ ✄≫ W.
Remark 7.21. Each length-n Dyck path with exactly λ2 downsteps occurs exactly λ2 times in W.
Each length-n Dyck path with strictly fewer than λ2 downsteps occurs exactly n − 2λ2 + 1 times
in W ′ .
Proof of Theorem 7.20. Given any (W, s₁) ∈ W, we define f's value on it as follows. Let s₂ be the first downstep following s₁ in W having height ht(s₁) − 1; let s₃ be the first downstep following s₂ in W having height ht(s₂) − 1; etc., until reaching downstep s_k having no subsequent downstep of smaller height. Now decompose W as a (fully lowered) hinged range H = (R₀, s₁, R₁, . . . , s_k, R_k). Let H′ = (R₀′, s′₁, R₁′, . . . , s′_k, R_k′) be the fully raised version of H (where each R_j′ is just R_j and each s′_j is an upstep). Then f(W, s₁) is defined to be (W′, s′₁), where W′ is the Dyck path corresponding to H′.
First we check that indeed (W ′ , s′1 ) ∈ W ′ . As W ′ is formed from W by k raisings, it has exactly
λ2 − k downsteps. Since ht(sk ) ≥ 0 it follows that ht(s1 ) ≥ k − 1 and hence ht(s′1 ) ≥ k + 1. On the
other hand, ht(s′1 ) + (k − 1) = ht(s′k ) ≤ ht(W ′ ) and so ht(s′1 ) ≤ ht(W ′ ) − k + 1. Finally, s′1 is the
rightmost upstep in W ′ of its height because H ′ is fully raised.
To show that f is a bijection, we will define the function g : W ′ → W that will evidently be f ’s
inverse. Given any (W ′ , s′1 ) ∈ W, with W ′ having exactly λ2 − k downsteps, we define g’s value on
it as follows. Let s′2 be the last (rightmost) upstep following s′1 in W ′ having height ht(s′1 ) + 1; let
s′3 be the last upstep following s′2 in W ′ having height ht(s′2 ) + 1; etc., until s′k is defined. That this
s′k indeed exists follows from the fact that ht(s′1 ) ≤ ht(W ′ ) − k + 1. Now decompose W ′ as a (fully
raised) hinged range H ′ = (R0′ , s′1 , R1′ , . . . , s′k , Rk′ ). The fact that Rk′ is a Dyck path (i.e., does not
pass below its starting height) again follows from the fact that ht(s′k ) = ht(s′1 ) + k − 1 ≤ ht(W ′ ).
Finally, let H = (R0 , s1 , R1 , . . . , sk , Rk ) be the fully lowered version of H ′ , and W the corresponding
path. As W has exactly λ2 downsteps, we may define g(W ′ , s′1 ) = (W, s1 ) provided W is indeed a
Dyck path. But this is the case, because the lowest point of W occurs at the endpoint of sk , and
ht(sk ) = ht(s1 ) − k + 1 = ht(s′1 ) − 2 − k + 1 = ht(s′1 ) − k − 1 ≥ 0 since ht(s′1 ) ≥ k + 1.
It is fairly evident that f and g are inverses. The essential thing to check is that the sequence
s1 , . . . , sk determined from s1 when computing f (W, s1 ) is “the same” (up to raising/lowering) as
the sequence s′1 , . . . , s′k′ determined from s′1 in computing g(W ′ , s′1 ), and vice versa. The fact that
the sequences have the same length follows, in the g ◦ f = id case, from the fact that ht(W ′ ) =
ht(W ) + 2k; it follows, in the f ◦ g = id case, from the fact that Rk′ is a Dyck path. The fact
that the hinges have the same identity is evident from the nature of fully raising/lowering hinged
ranges.
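The maps f and g are explicitly algorithmic, so they are easy to test on small paths. The following Python sketch (an illustration only; Dyck paths are encoded as ±1 sequences, the height of a step is taken at its endpoint, and the number of hinges k is passed to g directly rather than recovered from the downstep count) implements the raising and lowering of hinges and checks the round trip g ∘ f = id.

```python
from itertools import product

def heights(path):
    """Height of the path at the endpoint of each step (upstep = +1, downstep = -1)."""
    h, out = 0, []
    for s in path:
        h += s
        out.append(h)
    return out

def is_dyck(path):
    """A Dyck path in this paper's sense: never passes below the origin."""
    return all(h >= 0 for h in heights(path))

def f(W, i1):
    """Map (W, s1) -> (W', s1', k): collect the hinges s1, s2, ..., sk (each subsequent
    hinge is the first later downstep one level lower) and raise them all."""
    assert W[i1] == -1
    ht = heights(W)
    hinges, target = [i1], ht[i1] - 1
    for j in range(i1 + 1, len(W)):
        if W[j] == -1 and ht[j] == target:
            hinges.append(j)
            target -= 1
    Wp = list(W)
    for j in hinges:
        Wp[j] = +1
    return Wp, i1, len(hinges)

def g(Wp, i1, k):
    """Inverse map: starting from the marked upstep, repeatedly take the rightmost later
    upstep one level higher, collecting k hinges in total, and lower them all."""
    assert Wp[i1] == +1
    ht = heights(Wp)
    hinges, target = [i1], ht[i1] + 1
    for _ in range(k - 1):
        cands = [j for j in range(hinges[-1] + 1, len(Wp))
                 if Wp[j] == +1 and ht[j] == target]
        hinges.append(cands[-1])          # "last (rightmost) upstep" of that height
        target += 1
    W = list(Wp)
    for j in hinges:
        W[j] = -1
    return W, i1

# Round-trip check g(f(W, s1)) = (W, s1) on all short Dyck paths.
n = 8
for W in product([+1, -1], repeat=n):
    if not is_dyck(W):
        continue
    for i1, s in enumerate(W):
        if s == -1:
            Wp, j1, k = f(list(W), i1)
            assert is_dyck(Wp) and g(Wp, j1, k) == (list(W), i1)
```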
It remains to show that if f (W, s1 ) = (W ′ , s′1 ) then W ′ ✄≫ W . Referring to Remark 7.18, we
need to show that if W ′ and W are both simultaneously beheaded some number of times b, then in
the resulting paths, W ′ is at least as high as W throughout their lengths. In turn, this is implied
by the following more general statement:
Claim 7.22. After b beheadings, W ′ and W may be expressed as hinged ranges H ′ = (R0 , s′1 , R1 , . . . , s′k , Rk )
and H = (R0 , s1 , R1 , . . . , sk , Rk ) (respectively) such that H ′ is the fully raised version of H (i.e.,
each s′j is an upstep).
(Note that we do not necessarily claim that H is the fully lowered version of H ′ .)
The claim can be proved by induction on b. The base case b = 0 follows by definition of f .
Throughout the induction we may assume that the common initial Dyck path R0 is nonempty, as
otherwise s1 must be an upstep, in which case we can redefine the common initial Dyck path of W
and W ′ to be (s1 , R1 ) = (s′1 , R1 ).
We now show the inductive step. Assume W ′ and W are nonempty paths as in the claim’s
statement, with R0 nonempty. Suppose now that W ′ and W are simultaneously beheaded. The
first step of W ′ and W (an upstep belonging to R0 ) is thus deleted, and the origin shifted. If R0
contained a downstep to height 0 then the first such downstep is raised in both behead(W ′ ) and
behead(W ) and the inductive claim is maintained. Otherwise, suppose R0 contained no downsteps
to height 0. It follows immediately that W ′ originally had no returns to height 0 at all; hence the
beheading of W ′ is completed by the deletion of its first step. It may also be that W had no returns
to height 0 at all; then the beheading of W is also completed by the deletion of its first step and the
induction hypothesis is clearly maintained. On the other hand, W may have had some downsteps
to 0 within (s1 , R1 , . . . , sk , Rk ). In this case, the first (leftmost) such downstep must occur at one of
the hinges sj , and the beheading of W is completed by raising this hinge. The inductive hypothesis
is therefore again maintained. This completes the induction.
We derive an immediate corollary, after introducing a bit of notation:
Definition 7.23. We write SYTn (=λ2 ) (respectively, SYTn (≤λ2 )) for the set of 2-row standard
Young tableaus of size n with exactly (respectively, at most) λ2 boxes in the second row.
Corollary 7.24. For any integers n ≥ 2 and 0 ≤ λ2 ≤ ⌊ n2 ⌋, there is a coupling (Q, Q′ ) of the
uniform distribution on SYTn (=λ2 ) and the uniform distribution on SYTn (≤λ2 − 1) such that
Q′ ✄≫ Q always.
Proof. Let (W , s1 ) be drawn uniformly at random from the set W defined in Theorem 7.20, and
let (W ′ , s′1 ) = f (W , s1 ). Let Q ∈ SYTn (=λ2 ), Q′ ∈ SYTn (≤λ2 −1) be the 2-row standard Young
tableaus identified with W , W ′ (respectively). Then Theorem 7.20 tells us that Q′ ✄≫ Q always,
and Remark 7.21 tells us that Q and Q′ are each uniformly distributed.
Corollary 7.25. For any integers n ≥ 0 and 0 ≤ λ′2 ≤ λ2 ≤ ⌊ n2 ⌋, there is a coupling (Q, Q′ )
of the uniform distribution on SYTn (≤λ2 ) and the uniform distribution on SYTn (≤λ′2 ) such that
Q′ ✄≫ Q always.
Proof. The cases n < 2 and λ′2 = λ2 are trivial, so we may assume n ≥ 2 and 0 ≤ λ′2 < λ2 ≤ ⌊ n2 ⌋.
By composing couplings and using transitivity of ✄≫, it suffices to treat the case λ′2 = λ2 − 1. But
the uniform distribution on SYTn (≤λ2 ) is a mixture of (a) the uniform distribution on SYTn (=λ2 ),
(b) the uniform distribution on SYTn (≤λ2 − 1); and these can be coupled to SYTn (≤λ2 − 1) under
the ✄≫ relation using (a) Corollary 7.24, (b) the identity coupling.
Before giving the next corollary, we have a definition.
Definition 7.26. Let A be any 2-letter alphabet. We write Ank for the set of length-n strings
over A with exactly k copies of the larger letter, and we write Ank,n−k = Ank ∪ Ann−k .
Corollary 7.27. For A a 2-letter alphabet and integers 0 ≤ k′ ≤ k ≤ ⌊ n2 ⌋, there is a coupling
(w, w ′ ) of the uniform distribution on Ank,n−k and the uniform distribution on Ank′ ,n−k′ such that
w′ ✄≫ w always.
Proof. We first recall that if x ∼ Ank is uniformly random and (P , Q) = RSK(x), then the recording
tableau Q is uniformly random on SYTn (≤k). This is because for each possible recording tableau
Q ∈ SYTn (≤k) there is a unique insertion tableau P of the same shape as Q having exactly k boxes
labeled with the larger letter of A. (Specifically, if P ⊢ (λ1 , λ2 ), then the last k − λ2 boxes of P ’s
first row, and all of the boxes of P ’s second row, are labeled with A’s larger letter.) It follows that
the same is true if x ∼ Ank,n−k is uniformly random. But now the desired coupling follows from
Corollary 7.25 (recalling Definition 7.10).
In fact, Corollary 7.27 is fundamentally stronger than our desired Theorem 7.1, as we now show:
Proof of Theorem 7.1. For r ∈ [0, 1], suppose we draw an r-biased string y ∈ {1, 2}n and define
the random variable j such that y ∈ {1, 2}nj,n−j . (Note that given j, the string y is uniformly
distributed on {1, 2}nj,n−j .) Write Lr (ℓ) for the cumulative distribution function of j; i.e., Lr (ℓ) =
Pr[y ∈ ∪j≤ℓ {1, 2}nj,n−j ], where y is r-biased.
Claim: Lq (ℓ) ≥ Lp (ℓ) for all 0 ≤ ℓ ≤ ⌊ n2 ⌋.
Before proving the claim, let us show how it is used to complete the proof of Theorem 7.1. We
define the required coupling (w, x) of p-biased and q-biased distributions as follows: First we choose
θ ∈ [0, 1] uniformly at random. Next we define k (respectively, k′ ) to be the least integer such that
Lp (k) ≥ θ (respectively, Lq (k′ ) ≥ θ); from the claim it follows that k′ ≤ k always. Finally, we
let (w, x) be drawn from the coupling on {1, 2}nk,n−k and {1, 2}nk′ ,n−k′ specified in Corollary 7.27.
Then, as required, we have that x ✄≫ w always, and that w has the p-biased distribution and x
has the q-biased distribution.
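The coupling just described is a standard common-uniform (inverse-CDF) construction and can be checked numerically. The sketch below is illustrative only: it works directly with the level j = min(h, n − h) rather than with the strings themselves, and uses representative values with 0 < q ≤ p ≤ 1/2, as in the reduction used for the claim.

```python
from math import comb
import random

def L(r, ell, n):
    """L_r(ell) = Pr[ y lies in some {1,2}^n_{j,n-j} with j <= ell ] for an r-biased y,
    i.e. Pr[min(h, n-h) <= ell] where h ~ Binomial(n, r) counts the 2's in y."""
    pmf = [comb(n, j) * r**j * (1 - r)**(n - j) for j in range(n + 1)]
    return sum(p for j, p in enumerate(pmf) if min(j, n - j) <= ell)

def coupled_levels(p, q, n, theta):
    """Common-theta (inverse-CDF) coupling of the levels: k for the p-biased string,
    k' for the q-biased string; the claim L_q >= L_p forces k' <= k."""
    k  = next(j for j in range(n // 2 + 1) if L(p, j, n) >= theta)
    kp = next(j for j in range(n // 2 + 1) if L(q, j, n) >= theta)
    return k, kp

# numerical check of the claim and of the monotone coupling (illustrative values)
n, p, q = 12, 0.4, 0.2
assert all(L(q, l, n) >= L(p, l, n) - 1e-12 for l in range(n // 2 + 1))
for _ in range(2000):
    k, kp = coupled_levels(p, q, n, random.random())
    assert kp <= k
```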
It therefore remains to prove the claim. We may exclude the trivial cases ℓ = n/2 or q ∈ {0, 1}, where Lq(ℓ) = 1. Also, since Lr(ℓ) = L1−r(ℓ) by symmetry, we may assume 0 < q ≤ p ≤ 1/2. Thus it suffices to show that (d/dr) Lr(ℓ) ≤ 0 for 0 < r ≤ 1/2. Letting h denote the “Hamming weight” (number of 2’s) in an r-biased random string on {1, 2}^n, we have

Lr(ℓ) = Pr[h ≤ ℓ] + Pr[h ≥ n − ℓ] = 1 − Pr[h > ℓ] + Pr[h > n − ℓ − 1]
⇒ (d/dr) Lr(ℓ) = −(d/dr) Pr[h > ℓ] + (d/dr) Pr[h > n − 1 − ℓ].

(The first equality used ℓ < n/2.) But it is a basic fact that (d/dr) Pr[h > t] = n (n−1 choose t) r^t (1 − r)^(n−1−t). Thus

(d/dr) Lr(ℓ) = n (n−1 choose ℓ) [−r^ℓ (1 − r)^(n−1−ℓ) + r^(n−1−ℓ) (1 − r)^ℓ],

and we may verify this is indeed nonpositive:

−r^ℓ (1 − r)^(n−1−ℓ) + r^(n−1−ℓ) (1 − r)^ℓ ≤ 0  ⟺  1 ≤ ((1 − r)/r)^(n−1−2ℓ),

which is true since 0 < r ≤ 1/2 and n − 1 − 2ℓ ≥ 0 (using ℓ < n/2 again).
IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 59, no. 12, December 2012
Adaptive vibration suppression system: An iterative control law
for a piezoelectric actuator shunted by a negative capacitor
Miloš Kodejška,1 Pavel Mokrý,1,2,∗ Václav Linhart,1 Jan Václavík,1,2 and Tomáš Sluka3
1 Institute of Mechatronics and Computer Engineering, Technical University of Liberec, CZ-46117 Liberec, Czech Republic
2 Research Centre for Special Optics and Optoelectronic Systems (TOPTEC), Sobotecká 1660, CZ-51101 Turnov, Czech Republic
3 Ceramics Laboratory, Swiss Federal Institute of Technology (EPFL), CH-1015 Lausanne, Switzerland
arXiv:1206.3806v2 [cs.CE] 18 Feb 2014
(Dated: January 23, 2018)
Abstract
An adaptive system for the suppression of vibration transmission using a single piezoelectric
actuator shunted by a negative capacitance circuit is presented. It is known that, using a negative capacitance shunt, the spring constant of a piezoelectric actuator can be driven to extreme values of zero or infinity. Since the spring constant controls the force transmitted through an elastic element, the transmissibility of vibrations through a piezoelectric actuator can be reduced by reducing its effective spring constant. Narrow-frequency-range and broad-frequency-range vibration isolation systems are analyzed, modeled, and experimentally investigated. The high sensitivity of the vibration control system to varying operational conditions is resolved by adaptively controlling the circuit parameters of the negative capacitor. A control law based on the estimation of the effective spring constant of the shunted piezoelectric actuator is presented, together with an adaptive system that self-adjusts the negative capacitor parameters. It is shown that such an arrangement allows the design of a simple electronic system that nevertheless offers high vibration isolation efficiency under variable vibration conditions.
Keywords: Piezoelectric actuator, Vibration transmission suppression, Piezoelectric shunt damping, Negative capacitor, Elastic stiffness control, Adaptive device
∗ Electronic address: [email protected]
I. INTRODUCTION
Vibration suppression is nowadays regarded as an important issue in many technical fields
ranging from robust aerospace industry to delicate nanotechnology. Unsuppressed vibrations
are traditionally the key source of noise pollution, aging of mechanical components or even
life-threatening fatal failures of transport vehicles, machining instruments, etc. Also in
micro- and nanotechnology, the progress relies on high precision devices, which essentially
require efficient vibration control.
Contemporary vibration control techniques are based mostly on (i) passive methods using
elements such as viscoelastic dampers and springs, or (ii) conventional active feedback and
feed-forward control principles. The passive methods are rather inexpensive and do not
require an external source of energy, but they are bulky and inefficient at low frequencies.
On the other hand the conventional active methods can achieve an excellent efficiency with
a subtle device, but on the expense of high technical complexity, high costs, and lower
reliability.
There is, therefore, room for methods bridging the gap between the conventional passive and active vibration control approaches, methods that would combine the advantages of both: high efficiency (also at low frequencies) and low cost. A promising candidate has emerged with the so-called semi-active methods, which have been heralded as the dawn of a new era in vibration control. However, that promise has not yet materialized, and most of the proposed semi-active methods remain confined to research laboratories.
uses a piezoelectric bulk actuator. The vibration suppression effect is achieved: first, by
inserting a piezoelectric actuator between vibrating structure and an object that is being
isolated from vibrations, and, second, by connecting the piezoelectric actuator to an active
external shunt circuit that controls the effective elastic stiffness of the actuator. The aforementioned method for a suppression of vibration transmission was introduced by Hagood
and von Flotow [1] and, later [2], it was named as Piezoelectric Shunt Damping (PSD).
During last two decades, extensive amount of work have been published on PSD method.
Examples to be mentioned here are passive[3, 4] and active [5–7], broadband multi-mode
[4, 8, 9], and, adaptive [9–11] noise and vibration control devices. In a vast majority of the
mentioned publications and many others, the classical control theory is used for a description
2
and analysis of noise and vibration suppression systems.
An alternative approach was offered by Date et al. [12], who discovered that the effect of
the shunt circuit on the mechanical response of the piezoelectric actuator can be explained
through the change of effective elastic properties of a piezoelectric actuator. When an actuator is inserted between a source of vibrations and an object that is being isolated from
vibrations, the resonant frequency and the transmissibility of vibrations through a resulting
spring-mass system depends on the spring constant of the actuator and the mass of the
object. The reduction of the piezoelectric actuator spring constant results in the reduction of resonant frequency and transmissibility of vibrations at super-resonant frequencies.
Therefore, the physics lying behind the vibration suppression effect in PSD method is in
principle the same as in the passive methods. On top of that, PSD method enables the
efficient vibration suppression at low frequencies, too.
Such a principal change in the concept offers a use of alternative and simpler design
tools in the development of noise and vibration suppression devices. The design of vibration
control systems can be reduced to, first, the study of elasticity effect on the noise or vibration
transmission and, second, to the realization of Active Elasticity Control. Early applications
that followed this simple approach [13–17] demonstrated the great potential of this method,
which stems from (i) the simplicity of the noise control system, which consists of a selfsensing piezoelectric actuator connected to an active shunt circuit, (ii) an implementation
of active shunt circuit electronics using a simple analog circuit with a single linear power
amplifier that allows a significant reduction of the electric power consumption, and (iii)
the broad frequency range (e.g. from 10 Hz to 100 kHz) where the system can efficiently
suppress vibrations.
Despite the potential advantages of the PSD method, high sensitivity and low stability of
real systems currently prevents their industrial exploitation, which often requires a system
robustness under varying operational conditions. It was actually shown by Sluka et al.
[18] that the optimal working point of the PDS system lies just on the edge of the system
stability. Later, the stability of several PSD method implementations has been compared in
the work by Preumont et al. [19]. A partial elimination of the drawbacks was achieved by
adaptive PSD vibration control systems reported in Refs. [18, 20], however, with a strong
limitation that the stable vibration isolation efficiency could be achieved only in a narrow
frequency range.
3
The aforementioned issues have motivated the work presented below, where we will address the design of an adaptive broad-band vibration control device. The principle of our
vibration suppression device will be presented in Sec. II. In Sec. III we will demonstrate
advantages and drawbacks of a narrow and broad frequency range vibration isolation device
with manually adjusted negative capacitor. Design of the adaptive broad-band vibration
isolation device will be presented in Sec. IV. There, in addition, we will present previously unpublished details of the control law that was used to obtain the results presented in
Refs. [18, 20]. Conclusions of our experiments will be presented in Sec. V.
II. PRINCIPLE OF THE VIBRATION SUPPRESSION
It is known that the vibration transmission through an interface between two solid objects
is mainly controlled by the ratio of their mechanical impedances. Since the mechanical
impedance is proportional to the material stiffness, an extremely soft element placed between two other objects works as an interface with a high transmission loss of vibrations. In the
following Subsection, we present a simple theoretical model that explains the effect of the
elasticity in a mechanical system on the transmission of vibrations through the system.
Later, we present a method to control the elastic properties of the piezoelectric actuator
using a shunt electric circuit that can be profitably used in the vibration isolation system.
A. Effect of the spring constant on the transmissibility of vibrations
Scheme of the vibration isolation measurement system is shown in Fig. 1(a). The vibration damping element with a spring constant K and a damping coefficient B is placed
between the shaker and the object of a mass M that is going to be isolated from vibrations.
The incident and transmitted vibrations, with displacement amplitudes u1 and u2, respectively, are measured using accelerometers. The transmissibility of vibrations TR through the considered vibration isolation system is defined as the ratio of the transmitted to the incident displacement amplitude at the reference source point:

TR = |u2/u1|.    (1)
The transmissibility of vibration is a function of material parameters that control the
dynamic response of the mechanical system. The dynamic response of the system is governed
FIG. 1: Scheme of the vibration isolation measurement system. (a) The vibration damping element with a spring constant K and a damping coefficient B is placed between the shaker and a mass M that is to be isolated from vibrations; the incident vibrations of displacement amplitude u1 and the transmitted vibrations of displacement amplitude u2 are measured using accelerometers. (b) The vibration damping element used in this work is the piezoelectric actuator of impedance ZS shunted by a negative capacitor of impedance ZNC.
by the following equation of motion:
M d²u2/dt² + B du2/dt + K u2 = B du1/dt + K u1.    (2)
Considering the simplest case of the transmission of harmonic vibrations of an angular
frequency ω, the solution of Eq. (2) yields the formula:
TR = ω0 √[(ω² + Q²ω0²) / (ω²ω0² + Q²(ω0² − ω²)²)],    (3)

where the symbols Q and ω0 stand for the mechanical quality factor Q = √(KM)/B and the resonance frequency ω0 = √(K/M). It is seen that the smaller the value of the spring constant K, the smaller the value of the resonant frequency ω0, and the smaller the value of the transmissibility TR of harmonic vibrations of angular frequency ω > ω0.
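As a quick numerical illustration of this point, Eq. (3) can be evaluated directly. The sketch below uses the values of KS, M, and Q that are fitted later in Sec. III (the damping coefficient B is inferred from Q) and shows how reducing K lowers both ω0 and the super-resonant transmissibility.

```python
import numpy as np

def transmissibility(omega, K, M, B):
    """Eq. (3): TR = w0*sqrt((w^2 + Q^2 w0^2)/(w^2 w0^2 + Q^2 (w0^2 - w^2)^2)),
    with Q = sqrt(K*M)/B and w0 = sqrt(K/M)."""
    w0 = np.sqrt(K / M)
    Q = np.sqrt(K * M) / B
    num = omega**2 + Q**2 * w0**2
    den = omega**2 * w0**2 + Q**2 * (w0**2 - omega**2) ** 2
    return w0 * np.sqrt(num / den)

M = 1.67                              # kg (value fitted later in Sec. III)
K_S, Q = 7.11e7, 11.3                 # N/m and mechanical quality factor (Sec. III)
B = np.sqrt(K_S * M) / Q              # damping coefficient implied by Q
w = 2 * np.pi * 2e3                   # evaluate at 2 kHz
for K in (K_S, K_S / 10, K_S / 100):  # softer and softer spring
    print(f"K = {K:.2e} N/m: f0 = {np.sqrt(K/M)/(2*np.pi):7.1f} Hz, "
          f"TR = {transmissibility(w, K, M, B):.3e}")
```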
B. Method of the active control of piezoelectric actuator elasticity
Figure 1(b) shows the vibration damping element used in this work, which is a piezoelectric actuator of capacitance CS shunted by a negative capacitor of capacitance C. This
system is an example of the so-called Active Elasticity Control method introduced in 2000 by
Date et al. [12]. The effective spring constant of a piezoelectric actuator Keff can be derived from the constitutive equations for charge Q and change of length ∆l = u2 − u1 of a
piezoelectric actuator:
Q = dF + CS V,    (4)
∆l = (1/KS)F + dV,    (5)

which are appended by the formula for the voltage V applied back to the piezoelectric actuator from a shunt circuit of capacitance C:

V = −Q/C,    (6)
where symbols d, CS , and KS stand for the piezoelectric coefficient, capacitance, and spring
constant of a mechanically free piezoelectric actuator, respectively.
Combining Eqs. (4), (5), and (6) and with the use of relationship between the capacitance
and impedance of a capacitor, Z = 1/(jω C), one can readily obtain the formula for the
effective spring constant of a piezoelectric actuator connected to an external shunt circuit
with an electric impedance Z:
Keff = F/∆l = KS (1 + ZS/Z) / (1 − k² + ZS/Z),    (7)
where k 2 = d2 KS /CS is the electromechanical coupling factor of the piezoelectric element
(0 < k < 1) and ZS is the electric impedance of a mechanically free piezoelectric actuator.
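The behaviour of Eq. (7) near the matching point is easy to explore numerically. The following sketch is illustrative only; it uses the actuator parameters KS, k², CS, and RS reported later in Sec. III, and shows |Keff/KS| collapsing as the shunt impedance Z approaches −ZS.

```python
import numpy as np

def k_eff(Z, Z_S, K_S, k2):
    """Eq. (7): effective spring constant of a piezoelectric actuator shunted
    by an impedance Z (k2 = k^2 is the electromechanical coupling factor squared)."""
    return K_S * (1 + Z_S / Z) / (1 - k2 + Z_S / Z)

K_S, k2 = 7.11e7, 0.064            # values reported for the actuator used here (Sec. III)
w = 2 * np.pi * 2000.0
C_S, R_S = 6.6e-6, 1.15            # F, Ohm (measured values quoted in Sec. III)
Z_S = R_S + 1 / (1j * w * C_S)     # series-RC approximation, Eq. (11)

for rel_err in (1e-1, 1e-2, 1e-3, 0.0):
    Z = -Z_S * (1 + rel_err)       # shunt impedance approaching -Z_S
    print(f"dZ/Z_S = {rel_err:7.0e} -> |Keff/K_S| = {abs(k_eff(Z, Z_S, K_S, k2))/K_S:.2e}")
```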
It follows from Eq. (7) that, when the complex impedance of the shunt circuit Z approaches the value of −ZS , the effective spring constant Keff of the piezoelectric element
reaches zero. Figure 2 shows the electrical scheme of the piezoelectric actuator shunted by
the active circuit that effectively works as a negative capacitance; it will be further referred to as a negative capacitor. The effective impedance of the negative capacitor shown in Fig. 2 is equal to
Z(ω) = R1 + [(R0 + R2 + Au(ω)R2) / (R0 + R2 − Au(ω)R0)] Z1(ω) ≈ R1 − (R2/R0) Z1(ω),    (8)

where Au is the output voltage gain of the operational amplifier and

Z1(ω) = R3/(1 + jωC0R3) = (R3 − jωC0R3²)/(1 + ω²C0²R3²)    (9)

is the impedance of the so-called reference capacitance of the negative capacitor. The approximate formula on the right-hand side of Eq. (8) corresponds to an ideal operational amplifier, i.e. Au going to infinity.

FIG. 2: Electrical scheme of the piezoelectric actuator shunted by a negative capacitor. The negative capacitor is designed using a simple circuit with an operational amplifier in a feedback loop. By use of the adjustable resistors R0 and R1, it is possible to adjust the real and imaginary parts of its capacitance so that it matches the capacitance of the piezoelectric actuator (except for the sign).
this situation, the capacitance of the piezoelectric actuator can be approximated with a high
accuracy by the formula CS′ (1 − j tan δS ), where CS′ and tan δS are the real part and loss
tangent of the piezoelectric actuator capacitance. Then, the impedance of the piezoelectric
actuator is equal to
ZS (ω ) =
jω CS′ (1
tan δS − j
1
=
.
− j tan δS )
ω CS (1 + tan2 δS )
(10)
It is convenient to approximate the frequency dependence of the piezoelectric actuator
impedance by the frequency dependence of the in-series connection of the capacitor and
resistor of capacitance CS and resistance RS , respectively.
ZS (ω ) ≈ RS +
1
.
jω CS
(11)
At a given critical frequency ω0, it is possible to adjust the negative capacitor in such a way that:

|Z|(ω0) = |ZS|(ω0),    (12a)
arg[Z(ω0)] = arg[ZS(ω0)] + π.    (12b)

Such a situation is characterized by the relation ZS(ω0)/Z(ω0) = −1 and, according to Eq. (7), it yields an effective spring constant Keff that effectively reaches zero, so that the transmission of vibrations reaches a minimum.
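The matching conditions (12) can be evaluated directly from Eqs. (8), (9), and (11). The sketch below does so for one particular adjustment, using the component values and the op-amp gain model quoted later in Sec. III (the conversion of A0 = 105 dB into a linear gain of about 1.8·10⁵ is our assumption). The printed values only indicate how closely that adjustment approaches Eqs. (12a) and (12b); as discussed in Sec. III, parasitic capacitances prevent an exact prediction of the optimal setting from these formulas.

```python
import numpy as np

def Z_S(w, C_S, R_S):
    """Eq. (11): series-RC approximation of the free actuator impedance."""
    return R_S + 1 / (1j * w * C_S)

def Z_NC(w, R0, R1, R2, R3, C0, A0=10**(105 / 20), f1=100.0):
    """Eqs. (8)-(9): impedance of the negative capacitor, with the op-amp gain
    modelled as Au(w) = A0/(1 + j*w/(2*pi*f1)) as in Sec. III."""
    Au = A0 / (1 + 1j * w / (2 * np.pi * f1))
    Z1 = R3 / (1 + 1j * w * C0 * R3)
    return R1 + (R0 + R2 + Au * R2) / (R0 + R2 - Au * R0) * Z1

# how closely does one adjustment satisfy the matching conditions (12) at f0 = 2 kHz?
w0 = 2 * np.pi * 2e3
zs = Z_S(w0, C_S=6.602e-6, R_S=1.150)
z = Z_NC(w0, R0=2.43e3, R1=6.86, R2=2.40e3, R3=27.84, C0=4.686e-6)
print(f"|Z| = {abs(z):.2f} Ohm  vs  |Z_S| = {abs(zs):.2f} Ohm")                      # Eq. (12a)
print(f"arg Z = {np.angle(z):.3f} rad  vs  arg Z_S + pi = {np.angle(zs)+np.pi:.3f} rad")  # Eq. (12b)
```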
C. Role of impedance matching
In this subsection, we analyze to what extent the condition given by Eqs. (12) must be
satisfied in order to achieve the required suppression of vibration transmission. Such an analysis
can be split into two steps. First, we analyze the sensitivity of transmissibility T R to the
value of spring constant of the actuator K, and, second, we analyze the sensitivity of the
effective spring constant of the actuator Keff to the capacitance C of the negative capacitor.
In order to perform the first step of the analysis, it is convenient to express the suppression
level of the transmissibility of vibrations ∆LTR , which is produced by the active elasticity
control using the negative capacitor:
∆LTR = 20 (log TRNC − log TRS),    (13)

where TRNC and TRS are the transmissibilities of vibrations given by Eq. (3) in the situations where the shunt circuit is connected to and disconnected from the piezoelectric actuator, respectively. For small values of the spring constant K and for frequencies above the resonant frequency of the system ω0, it is possible to express the suppression level of the transmissibility of vibrations in the form:

∆LTR ≈ 10 log |Keff/KS|,    (14)

where Keff is the effective spring constant of the actuator, which is controlled by the negative capacitor.

In the second step, it is convenient to denote by ∆Z = Z − (−ZS) the deviation of the impedance Z of the negative capacitor from the required value −ZS. Then, for small deviations ∆Z, it is possible to approximate Eq. (7) by the formula:

Keff ≈ KS ∆Z/(k² ZS).    (15)
FIG. 3: Frequency dependences of the physical quantities that control the value of the transmissibility of vibrations through the piezoelectric actuator shunted by the negative capacitor shown in Fig. 2: a) comparison of the measured values of the transmissibility of vibrations through the electrically free piezoelectric actuator (filled circles) and the piezoelectric actuator shunted by the negative capacitor adjusted at the frequency f0 = 2 kHz (empty circles); the measured values are compared with those calculated from the theoretical model; b) absolute value of the electric impedance of the piezoelectric actuator (measured) and of the negative capacitor for three different adjustments of the resistors R0 and R1 (calculated); the three adjustments indicated in the legend correspond approximately to (R0, R1, f0) = (2.43 kΩ, 6.86 Ω, 2 kHz), (2.19 kΩ, 9.86 Ω, 1.7 kHz), and (2.69 kΩ, 4.36 Ω, 2.46 kHz); c) phase of the electric impedance of the piezoelectric actuator (measured) and of the negative capacitor (calculated and subtracted by π); d) calculated real and imaginary parts of the effective spring constant of the piezoelectric actuator shunted by the negative capacitor.
From Eq. (14), it can be estimated that a decrease in the level of transmissibility of vibrations
∆LTR by about 20 dB requires a decrease in the effective value of the spring constant K by
a factor of 1/100. Then, considering the values of the electromechanical coupling factor of
conventional piezoelectric ceramics, i.e. k 2 = 0.1, one can conclude from Eq. (15) that the
relative deviation of the negative capacitor impedance δZ = ∆Z/ZS from its required value
−ZS must be smaller than 0.1%. This very narrow region of capacitances of the negative
capacitor, in which the required values of the spring constant are achieved, imposes high
requirements on the negative capacitor adjustment.
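The arithmetic behind this estimate can be written out explicitly (a minimal sketch; the 20 dB target and k² = 0.1 are the representative values used above):

```python
# Target suppression and coupling factor used in the estimate above
delta_L_dB = -20.0          # desired change in transmissibility level, Eq. (14)
k2 = 0.1                    # electromechanical coupling factor k^2

keff_over_ks = 10 ** (delta_L_dB / 10)   # Eq. (14): |Keff/KS| = 10^(dL/10) = 0.01
delta_z_rel = k2 * keff_over_ks          # Eq. (15): |dZ/ZS| = k^2 * |Keff/KS| = 0.001
print(f"|Keff/KS| = {keff_over_ks:.3f}, required |dZ/ZS| = {delta_z_rel:.4f} (= 0.1%)")
```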
It is clear that the required adjustment of the negative capacitor cannot be achieved with fixed values of the resistors R0 and R1, due to the limited number of commercially available values, and continuously adjustable trimmers must be used. The next two Sections present experimental results acquired on vibration isolation systems with manual and adaptive adjustment of the negative capacitor.
III. MANUAL ADJUSTMENT OF THE NEGATIVE CAPACITOR
In the next subsection, we will present and discuss the experimental data measured on
the vibration isolation device with the negative capacitor shown in Fig. 2.
A. Narrow frequency range vibration isolation
First, the frequency dependence of the transmissibility of vibrations through the electrically free piezoelectric actuator, i.e. the actuator, which was disconnected from the negative
capacitor, was measured in the frequency range from 550 Hz to 3 kHz and the result is
indicated by filled circles in Fig. 3(a). The measured frequency dependences of the transmissibility of vibrations were compared with predictions of the theoretical formula given by
Eq. (3). Values of spring constant KS = 7.11 · 107 Nm−1 , mass M = 1.67 kg and the
mechanical quality factor of the piezoelectric actuator Q = 11.3 were obtained using the
method of least squares.
In the next step, the negative capacitor was assembled using an LF 356N operational amplifier according to the scheme shown in Fig. 2. The output voltage gain of the LF 356N was approximated by the function Au(ω) = A0/(1 + jω/(2πf1)), where A0 = 105 dB and f1 = 100 Hz.
The condition given by Eqs. (12) is achieved by setting the values of resistances R1 and R0
according to following formulae:
R0 = ω0² C0 CS R2 R3² / (1 + ω0² C0² R3²),    (16a)
R1 = 1/(ω0² C0 CS R3) − RS.    (16b)
FIG. 4: Electrical scheme of the reference impedance Z1 inside the negative capacitor shown in Fig. 2, for the narrow frequency range (a) and the broad frequency range (b) vibration isolation systems.

In order to find the proper adjustment of the negative capacitor, the frequency dependences of the electric impedances of the piezoelectric actuator and the reference capacitance Z1
were measured using an HP 4195A spectrum analyzer and are shown in Figs. 3(b) and (c). Using
the least squares method, following values were obtained: RS = 1.150 Ω, CS = 6.602 µF,
R3 = 27.84 Ω, and C0 = 4.686 µF. These values were cross-checked by direct measurements
on ESCORT ELS-3133A LRC-meter at 1 kHz: RS = 0.87 Ω, CS = 6.94 µF, R3 = 24.5 Ω,
and C0 = 5.16 µF. Then, resistance R2 = 2.40 kΩ was measured and the negative capacitor
resistors were pre-adjusted to values R0 = 2.41 kΩ and R1 = 6.93 Ω according to Eqs. (16).
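A small sketch of this pre-adjustment step, evaluating Eqs. (16) with the fitted values quoted above, is given below; the computed values land near, but not exactly at, the quoted 2.41 kΩ and 6.93 Ω, consistent with the remark below that model-based adjustment of the negative capacitor is only approximate.

```python
import math

def preadjust(f0, C_S, R_S, C_0, R_3, R_2):
    """Eqs. (16): resistances R0 and R1 that match the negative capacitor to the
    actuator impedance at the angular frequency w0 = 2*pi*f0 (ideal op-amp)."""
    w0 = 2 * math.pi * f0
    R_0 = w0**2 * C_0 * C_S * R_2 * R_3**2 / (1 + w0**2 * C_0**2 * R_3**2)
    R_1 = 1 / (w0**2 * C_0 * C_S * R_3) - R_S
    return R_0, R_1

# fitted component values quoted above, tuning frequency f0 = 2 kHz
R_0, R_1 = preadjust(f0=2e3, C_S=6.602e-6, R_S=1.150,
                     C_0=4.686e-6, R_3=27.84, R_2=2.40e3)
# compare with the pre-adjusted values quoted in the text (R0 ~ 2.41 kOhm, R1 ~ 6.93 Ohm)
print(f"R0 = {R_0:.0f} Ohm, R1 = {R_1:.2f} Ohm")
```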
Afterwards, the trimmers R0 and R1 in the negative capacitor were finely tuned in order
to achieve a 20 dB decrease in the transmissibility of vibrations at 2 kHz, as indicated by empty
circles in Fig. 3(a). The measured transmissibility of vibrations was fitted to the theoretical
model given by Eqs. (3), (8), (9), and (11). Following values were obtained using the method
of least squares: k 2 = 0.064, R0 = 2.43 kΩ, and R1 = 6.86 Ω. The direct measurement, using
the LRC-meter, resulted in following values: R0 = 2.32 kΩ and R1 = 6.20 Ω, respectively.
Here, it should be noted that the relative difference between fitted and measured values of
resistances varies from 5% to 11%. This is much larger than the 0.1% relative difference allowed between the negative capacitor and piezoelectric actuator capacitances.
The reason for such a difference is the presence of parasitic capacitances in the system, which
makes theoretical modeling of piezoelectric shunt damping systems difficult and adjustment
of negative capacitors from such theoretical models practically impossible.
The physics standing behind the decrease in the transmissibility of vibrations in a narrow
frequency range can be easily understood by looking at Fig. 3. Figures 3(b) and (c) show the
comparison of the measured electric impedance absolute value and phase of the piezoelectric
actuator with the calculated values of the electric impedance absolute value and phase of
the negative capacitor for three adjustments that differ in the values of the resistances R0 and R1 and the critical frequency f0.

FIG. 5: Frequency dependences of the physical quantities that control the value of the transmissibility of vibrations through the piezoelectric actuator shunted by the negative capacitor shown in Fig. 2: a) comparison of the measured values of the transmissibility of vibrations through the electrically free piezoelectric actuator (filled circles), through the piezoelectric actuator shunted by the narrow frequency range negative capacitor adjusted at f0 = 2 kHz (empty circles), and through the piezoelectric actuator shunted by the broad frequency range negative capacitor adjusted at f0 = 2 kHz (empty triangles); the measured values of the transmissibility of vibrations are compared with the theoretical model; b) absolute value of the electric impedance of the piezoelectric actuator (measured) and of the negative capacitor for the narrow frequency range [see Fig. 4(a)] and broad frequency range [see Fig. 4(b)] reference impedance Z1; c) phase of the electric impedance of the piezoelectric actuator (measured) and of the negative capacitor (calculated and subtracted by π); d) calculated real and imaginary parts of the effective spring constant of the piezoelectric actuator shunted by the negative capacitor.

Figures 3(b) and (c) indicate that the conditions given by Eqs. (12)
are satisfied only in narrow frequency ranges around particular critical frequencies f0 . This
is the reason why a decrease in the real part of the effective spring constant Keff of the piezoelectric actuator can be achieved only in narrow frequency ranges, as indicated in Fig. 3(d).
The next Subsection discusses the problem of broadening the frequency range where the
vibration isolation device can efficiently suppress the vibration transmission.
FIG. 6: Comparison of the time dependences of the vibration isolation efficiency under changing operational conditions for the system with the manually adjusted negative capacitor (solid line) and the system with the adaptively controlled negative capacitor (dashed line). The vibration isolation system was turned on at time 1 min and the ambient temperature of the system was then changed. A decrease of approximately 15 dB in the suppression level of the transmissibility of vibrations [see Eq. (13)] is observed after 5 minutes in the system with the manually adjusted negative capacitor, while the suppression level remains constant in the adaptive vibration isolation system.
B. Broad frequency range vibration isolation
In order to broaden the frequency range of the efficiently suppressed vibration transmission, it is necessary to achieve a precise matching the electrical impedances of the piezoelectric actuator and the negative capacitor. Since the frequency dependence of the piezoelectric
actuator is controlled by its material and its construction, it is necessary to modify the frequency dependence of the negative capacitor. The frequency dependence of the negative
capacitor impedance is determined by the reference impedance Z1 . The trivial parallel connection of the capacitor C0 and the resistor R3 , which is shown in Fig. 4(a), was replaced
by a more complicated RC network shown in Fig. 4(b).
Values of capacitances and resistances in the reference impedance Z1 were adjusted to
minimize the mismatch between values of Z1 and ZS in the frequency range from 0.5 kHz to
3 kHz. The frequency dependence of the electric impedance of the modified reference capacitance Z1 was measured and the method of least squares yields the values: R3 = 15.09 kΩ,
C0 = 480 nF, RX = 44.6 Ω, and CX = 807 nF. The fitted values were cross-checked by a
13
0
2.91
-p
p
R1/R1,min
R1/R1,min
j1
j0
R0/R0,min
R0/R0,min
′
′′
(a)|Keff
+ iKeff
|
FIG. 7:
′
′′
(b)arg (Keff
+ iKeff
)
Contour plots of absolute value (a) and argument (b) of the effective spring constant
Keff of piezoelectric actuator shunted by the negative capacitor shown in Fig. 2 as functions of
the resistances R0 and R1 . The values of R0 and R1 are normalized by values R0,min and R1,min ,
′ + iK ′′ .
respectively, which yield the zero absolute value of the effective spring constant Keff = Keff
eff
direct measurements using LRC-meter at 1 kHz giving values: R3 = 15 kΩ, C0 = 470 nF,
RX = 44 Ω, and CX = 813 nF.
Then, the trimmers R0 and R1 in the negative capacitor were finely tuned in order to
achieve the maximum decrease in the transmissibility of vibrations at the frequency 2 kHz.
The transmissibility of vibrations through the piezoelectric actuator shunted by the broad-frequency-range-optimized negative capacitor was then measured and the result is indicated by
empty triangles in Fig. 5(a). It can be seen that a 20 dB decrease in the transmissibility of
vibration was achieved in the broad frequency range from 1 kHz to 2 kHz. The measured
values of the frequency dependence of the transmissibility of vibrations were compared with
the theoretical model given by Eqs. (3), (8), (9) and (11), and the values k² = 0.067, R0 = 12.6 kΩ, and R1 = 2.6 Ω were obtained using the method of least squares.
The reason for broadening the frequency range can be seen in Fig. 5. Figures 5(b) and
(c) show the comparison of the measured frequency dependence of the electric impedance
absolute value and phase of the piezoelectric actuator with the calculated values of the
electric impedance of the negative capacitor with the narrow and broad frequency range
reference capacitances Z1 shown in Fig. 4(a) and (b). In Figs. 5(b) and (c), it is seen that
the electric impedances of the piezoelectric actuator and the negative capacitor reference
capacitance are close to each other in the broad frequency range. Figure 5(d) shows the
frequency dependence of the real and imaginary parts of the effective Young’s modulus. It
should be noted that the decrease in the Young’s modulus over the broad frequency range results in a decrease in the resonant frequency of the system by approximately 400 Hz. This yields an increase in the transmissibility of vibrations at sub-resonant frequencies.
IV. ADAPTIVE SYSTEM FOR THE VIBRATION ISOLATION
An important issue associated with the vibration isolation system with the manually adjusted negative capacitor is shown in Fig. 6. The solid line shows the time dependence of the vibration isolation efficiency under changing operational conditions in the system with the manually adjusted negative capacitor. The vibration isolation system was turned on at time 1 min and a 20 dB decrease in the transmissibility of vibrations was achieved. Then, the piezoelectric actuator was exposed to slight heat irradiation from a 100 W tungsten bulb placed at a distance of 25 cm. An approximately 15 dB decrease in the suppression level of the transmissibility of vibrations [see Eq. (13)] is observed after 5 minutes.
To avoid the severe deteriorative effect of changing operational conditions on the vibration
isolation efficiency, an adaptive vibration isolation system has been implemented. The next
Subsection describes the principle of the control algorithm.
A. Iterative control law
A simple control algorithm can be formulated by analyzing the contour plots of the absolute value and argument of the (in general complex) effective spring constant Keff of the piezoelectric actuator shunted by the negative capacitor shown in Fig. 2, as functions of the resistances R0 and R1. Such plots, with the values of R0 and R1 normalized by R0,min and R1,min, respectively, are shown in Fig. 7. The values R0,min
and R1,min represent the optimal values of resistances in the negative capacitor that yield
the zero absolute value of the effective spring constant. One can see that the absolute value
of Keff reaches zero for R0 /R0,min = 1 and R1 /R1,min = 1. A more interesting graph is shown
in Fig. 7(b) for the argument of the effective spring constant Keff = Keff′ + iKeff′′. One can see that the value of arg(Keff) monotonically increases as the point (R0, R1) goes around
the optimal adjustment (R0,min , R1,min ) in the counter-clockwise direction, as indicated by
the arrow.
Thus, one can immediately determine in which “direction” is the optimal adjustment
(R0,min , R1,min ) with respect to its immediate value (R0 , R1 ) by measuring the argument of
Keff , i.e. ϕ = arg(Keff ). Using this principle, it is possible to formulate the iterative control
algorithm as follows:
R0,n+1 = R0,n + ∆R0   for ϕ < ϕ0,
R0,n+1 = R0,n − ∆R0   for ϕ0 < ϕ < ϕ0 + π,    (17a)
R0,n+1 = R0,n + ∆R0   for ϕ0 + π < ϕ;

R1,n+1 = R1,n + ∆R1   for ϕ < ϕ1,    (17b)
R1,n+1 = R1,n − ∆R1   for ϕ > ϕ1.

Symbols R0,n+1, R0,n and R1,n+1, R1,n are the “new” and “old” values of the resistances R0 and R1, respectively. Values ∆R0 and ∆R1 are the resistance increments achievable in the
negative capacitor. Symbols ϕ0 and ϕ1 stand for the critical values of arg(Keff ) indicated in
Fig. 7(b). The particular values of ϕ0 and ϕ1 should usually be determined experimentally.
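A compact sketch of one iteration of this law follows (illustrative only; the thresholds ϕ0 and ϕ1, the increments ∆R0 and ∆R1, and the wrapping of the measured phase into a single 2π interval are assumptions that must match the experimentally determined branch structure of Fig. 7(b)).

```python
import cmath

def wrap(phi):
    """Map an angle into [0, 2*pi), one possible convention for reading Fig. 7(b)."""
    return phi % (2 * cmath.pi)

def control_step(R0, R1, K_eff, phi0, phi1, dR0, dR1):
    """One iteration of Eqs. (17): nudge R0 and R1 according to the argument of the
    (complex) effective spring constant estimated from the force and voltage signals."""
    phi = wrap(cmath.phase(K_eff))
    # Eq. (17a): three angular sectors for R0
    if phi < phi0 or phi > phi0 + cmath.pi:
        R0 += dR0
    else:
        R0 -= dR0
    # Eq. (17b): two angular sectors for R1
    if phi < phi1:
        R1 += dR1
    else:
        R1 -= dR1
    return R0, R1

# Example usage with assumed, purely illustrative values:
R0, R1 = 2430.0, 6.9
R0, R1 = control_step(R0, R1, K_eff=complex(-1e5, 3e5),
                      phi0=1.0, phi1=2.0, dR0=5.0, dR1=0.05)
```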
In the next Subsection, a simple way of estimating the complex value of the effective spring constant is presented.
B. Estimation of the effective spring constant
The effective value of the spring constant is given by the ratio of the transmitted force F
through the piezoelectric actuator over its elongation ∆l, see Eq. (7). The transmitted force
can be easily measured using a piezoelectric force sensor. The actuator elongation can be
estimated using the following idea.
When the negative capacitor is close to its required optimal adjustment, the transmitted
force through the piezoelectric actuator is very small. When the transmitted force F is small,
it follows from Eq. (5) that the first term on the right-hand side of Eq. (5), i.e. (1/KS)F, is much
smaller than the second term, i.e. dV . In this situation, the elongation of the piezoelectric
actuator is dominated by the inverse piezoelectric effect and, thus, it is proportional to the
voltage applied from the negative capacitor, i.e. ∆l ∝ V .
FIG. 8: Combined system for the measurement of the transmissibility of vibrations through the adaptive vibration isolation system (shown on the right-hand side) and its electronic scheme (shown on the left-hand side). The measurement part of the system consists of two accelerometers. The vibration isolation part of the system consists of a piezoelectric actuator shunted by an electronically adjustable negative capacitor. Signals from the accelerometers are used to calculate the transmissibility of vibrations. The signal from the force sensor and the voltage applied from the negative capacitor are used for the estimation of the effective spring constant Keff of the shunted piezoelectric actuator. The estimated value of the argument of Keff is used for the calculation of corrections to the values of the electronically adjustable resistors R0 and R1.

In order to estimate the argument of the effective spring constant, it is sufficient to calculate the phase difference between the signal from the force sensor F and the voltage V applied from the negative capacitor:

arg(Keff) ≈ arg F − arg V.    (18)
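For a single harmonic component at a known working frequency, this phase difference can be estimated, for example, by projecting both signals onto a complex exponential at that frequency. The sketch below is our own discretized illustration, not the authors' implementation; the sampling rate and signal length are assumptions.

```python
import numpy as np

def phase_at(signal, fs, f0):
    """Phase of the component of `signal` at frequency f0 (sampling rate fs),
    obtained by correlation with a complex exponential."""
    t = np.arange(len(signal)) / fs
    return np.angle(np.sum(signal * np.exp(-2j * np.pi * f0 * t)))

def arg_keff(force, voltage, fs, f0):
    """Eq. (18): arg(Keff) ~ arg(F) - arg(V), evaluated at the working frequency f0."""
    return np.angle(np.exp(1j * (phase_at(force, fs, f0) - phase_at(voltage, fs, f0))))

# synthetic check: a 2 kHz tone with a known 0.4 rad lead of F over V
fs, f0, N = 50_000, 2_000, 5_000
t = np.arange(N) / fs
F = np.sin(2 * np.pi * f0 * t + 0.4)
V = np.sin(2 * np.pi * f0 * t)
print(arg_keff(F, V, fs, f0))   # approx. 0.4
```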
C. Implementation of the adaptive vibration isolation system
The above described control algorithm has been implemented in the adaptive vibration
isolation system, which is shown in Fig. 8. Due to implementation convenience, the data-acquisition part of the adaptive vibration isolation system was combined with the system
for the measurement of vibration transmissibility. Nevertheless, these two systems were
independent.
The adaptive vibration isolation system consists of a force sensor and a piezoelectric
actuator shunted by an electronically controlled negative capacitor. The force sensor was
realized as a piezoelectric plate with a charge amplifier Kistler 5015A. Such an arrangement
requires a calibration, which is done prior to the experiments in a setup without the damping element. The transfer function of the force sensor is determined using the mass of the object and the signal from the output accelerometer. This is a simple and fast arrangement that
allows precise force measurements up to high frequencies. The signal from the force sensor
and applied voltage from the negative capacitor are used for the estimation of the effective
spring constant Keff of the shunted piezoelectric actuator. The estimated value of the argument of Keff is used for the calculation of corrections to values of resistances of electronically
adjustable resistors R0 and R1 according to Eqs. (17).
In order to make the electronic control of resistances R0 and R1 in the negative capacitor
possible, the manually adjusted trimmers were replaced by electronically controlled resistors,
which were implemented as a pair of a light-emitting diode and a photoresistor. An example
of the measured volt-ohm characteristics of the electronically adjustable resistor is shown in
Fig. 9. The voltage VC controls the current through the diode and, therefore, the intensity
of the emitted light, using a voltage-to-current converter. The intensity of the generated
light controls the resistance Ra of the photoresistor.
Instantaneous values of the incident and transmitted vibrations are measured by PCB-352 piezoelectric accelerometers. These accelerometers have a resonant frequency of 40 kHz, which ensures a flat and phase-correct transmission function in the frequency range of our experiments. Signals from the accelerometers are amplified by an ICP amplifier. Electric signals from accelerometers 1 and 2, the force sensor, and the electric voltage applied to the
piezoelectric actuator from the negative capacitor are measured and digitized by the data
acquisition card NI PCI-6221. It should be stressed that accelerometers are a part of the
measurement system only. They are used to measure the transmissibility of vibrations and
to evaluate the efficiency of the adaptive vibration isolation system. Accelerometers are not
used for the control of the vibration transmission. Signals from accelerometers do not enter
the negative capacitor shunt and they are not used in the iterative control law.
FIG. 9: Example of the measured volt-ohm characteristics of the electronically adjustable resistor, which is constructed as a pair of a light-emitting diode and a photoresistor. The voltage VC controls the current through the diode and, therefore, the intensity of the emitted light, using a voltage-to-current converter. The intensity of the generated light controls the resistance Ra of the photoresistor.

A personal computer (PC) is used for three independent (but simultaneous) operations. First, it is used to generate the signal of the incident vibrations: in the Matlab software, a pseudo-random signal with a few dominant harmonic components is generated, and the output signal from the PC is introduced to the high-voltage amplifier and fed to the piezoelectric shaker. Second, the PC processes the signals from the accelerometers and calculates the frequency dependence of the transmissibility of vibrations. Third, the PC processes the signals
from the force sensor and from the negative capacitor and generates control signals for
electronically adjustable resistors in the negative capacitor according to the iterative control
law.
The transmissibility of vibrations with harmonic time dependence of frequency 2 kHz
through the adaptive vibration isolation system is shown by the dashed line in Fig. 6. It is seen that the transmissibility of vibrations remained constant even under varying operational conditions (i.e. ambient temperature). However, it should be noted that when the vibrations have a harmonic time dependence, the estimation of the effective spring constant argument is a straightforward and easy task. On the other hand, the consideration of
harmonic vibrations greatly limits the applicability of the vibration isolation device. In
order to eliminate this drawback and to broaden the applicability of the above described
adaptive vibration isolation system, a modification of the control algorithm is necessary.
This is described in the next Subsection.
D. Suppression of vibrations with a general time dependence
In real situations, the incident vibrations usually consist of the sum of several randomly
changing dominant harmonic components. These harmonic components appear in the system due to the eigen-frequencies of mechanical parts or due to the vibration of revolving mechanical parts. In order to suppress the vibration transmission between the vibrating mechanical parts in real industrial applications, the following modification to the control algorithm was
implemented.
First, signals from the force sensor and voltage applied from the negative capacitor are
measured. If the amplitude of the signal from the force sensor exceeds some arbitrarily chosen threshold, the Fast Fourier Transformation is applied to the time dependencies of the
measured signals in order to obtain their amplitude and phase frequency spectra. Then the
distribution of the vibration power along the frequency axis is analyzed and the dominant
harmonic component with the greatest amplitude is found. This dominant harmonic component is selected to be suppressed. At the selected frequency of the dominant harmonic
component, the phase difference between the dominant harmonic components in the signals
from the force sensor and from the negative capacitor output is calculated. The calculated
value of the phase difference is used for the iterative corrections of the values of resistances
R0 and R1 according to Eqs. (17). After application of corrections to values of resistances
R0 and R1 , new time-dependences of signals from the force sensor and the negative capacitor output are measured and the above steps are periodically repeated until the dominant
frequency in the force sensor signal is suppressed below a measurable level.
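A minimal sketch of this signal-processing step is given below (our own illustration; the threshold, sampling rate, and record length are assumptions, and no windowing or interpolation between FFT bins is attempted).

```python
import numpy as np

def dominant_phase_difference(force, voltage, fs, threshold=0.0):
    """Find the dominant harmonic component of the force signal and return its
    frequency together with the phase difference (force minus voltage) at that
    frequency, as used for the corrections of Eqs. (17)."""
    if np.max(np.abs(force)) < threshold:
        return None                       # nothing to suppress yet
    F = np.fft.rfft(force)
    V = np.fft.rfft(voltage)
    freqs = np.fft.rfftfreq(len(force), d=1.0 / fs)
    k = np.argmax(np.abs(F[1:])) + 1      # strongest component, skipping the DC bin
    dphi = np.angle(F[k]) - np.angle(V[k])
    return freqs[k], np.angle(np.exp(1j * dphi))

# synthetic example: noise plus a dominant 2 kHz component, V lagging F by 0.3 rad
rng = np.random.default_rng(0)
fs, N = 50_000, 10_000
t = np.arange(N) / fs
force = np.sin(2 * np.pi * 2_000 * t + 0.3) + 0.1 * rng.standard_normal(N)
voltage = np.sin(2 * np.pi * 2_000 * t)
print(dominant_phase_difference(force, voltage, fs))   # approx. (2000.0, 0.3)
```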
In order to evaluate the performance of the adaptive broad-band vibration suppression device, five different vibration signals were generated and applied to the vibration isolation device. Each vibration signal consists of random noise and one dominant harmonic component of a given frequency. Figure 10 shows the spectra of the five force signals transmitted through the vibration isolation system for different frequencies of the dominant harmonic component. The solid black line indicates the force amplitude spectrum transmitted through the piezoelectric actuator disconnected from the negative capacitor. The zero-filled solid blue lines indicate the amplitude spectra of the force transmitted through the piezoelectric actuator shunted by the self-adjusted broad-frequency-range-matched negative capacitor.
The frequency dependences of the transmissibility of vibrations through the piezoelectric
actuator shunted by the adaptive broad frequency range negative capacitor, which was
self-adjusted to the five aforementioned vibration signals, are shown in Fig. 11. It is seen
that the adaptive control algorithm adjusts the negative capacitor in such a way that the
transmissibility of vibration curve has its minimum around the frequency of the dominant
harmonic component in the vibration signal. Figure 11 also indicates shifts of the mechanical
[Fig. 10 panels: dominant harmonic component frequencies 1.20, 1.50, 2.00, 2.25 and 2.50 kHz; legend: NC off / NC on; y-axis: Force amplitude (a.u.); x-axis: Frequency (Hz)]
FIG. 10: Spectra of the five force signals transmitted through the vibration isolation system with different frequencies of the dominant harmonic component. The solid black line indicates the force amplitude spectra transmitted through the piezoelectric actuator disconnected from the negative capacitor. The zero-filled solid blue lines indicate the amplitude spectra of the force transmitted through the piezoelectric actuator shunted by the self-adjusted broad-frequency-range-matched negative capacitor. The vibration signal consists of random noise and one dominant harmonic component of given frequency.
resonant frequency of the system by more than 300 Hz, which is due to the reduction of the
effective spring constant of the piezoelectric actuator using the negative capacitor in the
broad frequency range. This yields an increase in the transmissibility of vibrations at sub-resonant frequencies. Such an unwanted phenomenon can be easily eliminated by inserting a high-pass filter in the negative capacitor.
[Fig. 11 curves: negative capacitor off; self-adjusted negative capacitor at 1.20, 1.50, 2.00, 2.25 and 2.50 kHz; y-axis: Transmissibility (dB); x-axis: Frequency (Hz)]
FIG. 11: Frequency dependences of the transmissibility of vibrations through the piezoelectric actuator shunted by the adaptive broad-frequency-range-matched negative capacitor. Each curve corresponds to the transfer function of the adaptive system adjusted to cancel the vibration signal with the force amplitude spectra shown in Fig. 10. It should be noted that the decrease in the Young's modulus in a broad frequency range results in a decrease in the resonant frequency of the system by more than 300 Hz. This yields an increase in the transmissibility of vibrations at sub-resonant frequencies.
Finally, it should be noted that the iterative control algorithm can conveniently compensate for the effects of the dielectric nonlinearity of the piezoelectric actuator. The point is that, with an increase in the amplitude of the incident vibrations, the voltage applied by the operational amplifier is also increased (more or less proportionally to the increase in the incident vibration amplitude). Then, the permittivity (and capacitance) of the piezoelectric actuator seen from the negative capacitor is slightly changed due to dielectric nonlinearity (usually according to the Rayleigh law). This causes a maladjustment of the system and an unwanted drop in the efficiency of the vibration isolation. However, if the change in the amplitude of the incident vibration is not extremely fast, so that the system remains stable, the iterative control algorithm quickly compensates for the changes in the piezoelectric actuator capacitance. The same behavior is expected even at the full voltage range of the actuator, which can be achieved in systems with a standard high-voltage amplifier, as presented e.g. in the work by Fleming and Moheimani [21].
V. CONCLUSIONS
The theoretical model of the vibration transmission through a piezoelectric actuator shunted by a negative capacitor is presented. The model has been verified using experiments performed on narrow-frequency (pure tone) vibration isolation. By a proper modification of the reference capacitor in the negative capacitor, it was successfully demonstrated that it is possible to achieve a suppression of the vibration transmission by 20 dB in the broad frequency range from 1 kHz to 2 kHz. The iterative control law for the automatic adjustment of the negative capacitor was derived by analyzing the absolute value and argument of the effective spring constant of the piezoelectric actuator shunted by a negative capacitor. A method for the real-time estimation of the effective spring constant argument in the vibration isolation system was presented. However, the adaptive system in its basic arrangement is applicable only to the suppression of vibrations with harmonic time dependences. In order to eliminate this drawback, more advanced signal processing was implemented. It was shown that the iterative control algorithm is applicable also to vibrations with a general time dependence.
The advantages of the presented system for the suppression of vibration transmission stem
from its simple electronic realization using an analog circuit with an operational amplifier,
broad frequency range of the efficiently suppressed vibrations from 0.5 kHz to 3 kHz, and a
simple control law that allows applying the automatic corrections to the negative capacitor,
so that the system can work under varying operating conditions. In addition, the presented
adaptive system is an example of a general concept of adaptive piezoelectric shunt damping,
which can be easily modified and applied to a variety of different types of piezoelectric
actuators and other electroacoustic transducers. All in all, the presented realization of the
vibration isolation device offers a solution for many real noise and vibration problems.
Acknowledgments
This work was supported by Czech Science Foundation Project No. GACR 101/08/1279, co-financed by the student grant SGS 2012/7821 Interactive Mechatronics Systems Using the Cybernetics Principles, and by the European Regional Development Fund and the Ministry of Education, Youth and Sports of the Czech Republic within Project No. CZ.1.05/2.1.00/03.0079: Research Center for Special Optics and Optoelectronic Systems (TOPTEC). The authors acknowledge Julie Volfová for reading the manuscript.
[1] N. W. Hagood and A. von Flotow, “Damping of structural vibrations with piezoelectric materials and passive electrical networks,” Journal of Sound and Vibration, vol. 146, pp. 243–268,
Apr. 1991.
[2] S. O. R. Moheimani and A. J. Fleming, Piezoelectric Transducers for Vibration Control and
Damping. 2006.
[3] M. S. Tsai and K. W. Wang, “On the structural damping characteristics of active piezoelectric
actuators with passive shunt,” Journal of Sound and Vibration, vol. 221, no. 1, pp. 1–22, 1999.
[4] L. Petit, E. Lefeuvre, C. Richard, and D. Guyomar, “A broadband semi passive piezoelectric
technique for structural damping,” in Smart Structures And Materials 2004: Damping And
Isolation (Wang, KW, ed.), vol. 5386 of Proceedings of the Society of Photo-optical Instrumentation Engineers (SPIE), pp. 414–425, 2004.
[5] R. A. Morgan and K. W. Wang, “Active-passive piezoelectric absorbers for systems under
multiple non-stationary harmonic excitations,” Journal of Sound and Vibration, vol. 255,
pp. 685–700, Aug. 2002.
[6] R. Morgan and K. Wang, “An active-passive piezoelectric absorber for structural vibration
control under harmonic excitations with time-varying frequency, part 1: Algorithm development and analysis,” Journal of Vibration And Acoustics-Transactions of the ASME, vol. 124,
pp. 77–83, JAN 2002.
[7] M. Yuan, H. Ji, J. Qiu, and T. Ma, “Active control of sound transmission through a
stiffened panel using a hybrid control strategy,” Journal of Intelligent Material Systems and
Structures, vol. 23, pp. 791–803, 2012.
[8] S. Behrens, A. Fleming, and S. Moheimani, “A broadband controller for shunt piezoelectric
damping of structural vibration,” Smart Materials & Structures, vol. 12, pp. 18–28, FEB 2003.
[9] D. Niederberger, A. Fleming, S. Moheimani, and M. Morari, “Adaptive multi-mode resonant
piezoelectric shunt damping,” Smart Materials & Structures, vol. 13, pp. 1025–1035, OCT
2004.
[10] A. Fleming and S. Moheimani, “Adaptive piezoelectric shunt damping,” Smart Materials & Structures, vol. 12, pp. 36–48, FEB 2003.
[11] A. Badel, G. Sebald, D. Guyomar, M. Lallart, E. Lefeuvre, C. Richard, and J. Qiu, “Piezoelectric vibration control by synchronized switching on adaptive voltage sources: Towards wideband semi-active damping,” Journal of the Acoustical Society Of America, vol. 119, pp. 2815–
2825, MAY 2006.
[12] M. Date, M. Kutani, and S. Sakai, “Electrically controlled elasticity utilizing piezoelectric
coupling,” Journal of Applied Physics, vol. 87, no. 2, pp. 863–868, 2000.
[13] P. Mokrý, E. Fukada, and K. Yamamoto, “Noise shielding system utilizing a thin piezoelectric
membrane and elasticity control,” Journal of Applied Physics, vol. 94, no. 1, pp. 789–796,
2003.
[14] P. Mokrý, E. Fukada, and K. Yamamoto, “Sound absorbing system as an application of the
active elasticity control technique,” Journal of Applied Physics, vol. 94, no. 11, pp. 7356–7362,
2003.
[15] K. Imoto, M. Nishiura, K. Yamamoto, M. Date, E. Fukada, and Y. Tajitsu, “Elasticity control
of piezoelectric lead zirconate titanate (pzt) materials using negative-capacitance circuits,”
Japanese Journal of Applied Physics, vol. 44, no. 9B, pp. 7019–7023, 2005.
[16] K. Tahara, H. Ueda, J. Takarada, K. Imoto, K. Yamamoto, M. Date, E. Fukada, and
Y. Tajitsu, “Basic study of application for elasticity control of piezoelectric lead zirconate titanate materials using negative-capacitance circuits to sound shielding technology,” Japanese
Journal of Applied Physics, vol. 45, no. 9B, pp. 7422–7425, 2006.
[17] H. Kodama, M. Date, K. Yamamoto, and E. Fukada, “A study of sound shielding control of
curved piezoelectric sheets connected to negative capacitance circuits,” Journal of Sound and
Vibration, vol. 311, pp. 898–911, APR 8 2008.
[18] T. Sluka and P. Mokrý, “Feedback control of piezoelectric actuator elastic properties in a
vibration isolation system,” Ferroelectrics, vol. 351, pp. 51–61, 2007. 8th European Conference
on Applications of Polar Dielectrics (ECAPD-8), Metz, FRANCE, SEP 05-08, 2006.
[19] A. Preumont, B. de Marneffe, A. Deraemaeker, and F. Bossens, “The damping of a truss
structure with a piezoelectric transducer,” Computers & Structures, vol. 86, pp. 227–239,
FEB 2008. II ECCOMAS Thematic Conference on Smart Structures and Materials, Lisbon,
PORTUGAL, JUL 18-21, 2005.
[20] T. S. Sluka, H. Kodama, E. Fukada, and P. Mokrý, “Sound shielding by a piezoelectric membrane and a negative capacitor with feedback control,” IEEE Transactions on Ultrasonics,
Ferroelectrics, and Frequency Control, vol. 55, pp. 1859–1866, AUG 2008.
[21] A. J. Fleming and S. O. R. Moheimani, “Improved current and charge amplifiers for driving piezoelectric loads, and issues in signal processing design for synthesis of shunt damping
circuits,” Journal of Intelligent Material Systems and Structures, vol. 15, no. 2, pp. 77–92,
2004.
| 3 |
Recursive Method for the Solution
of Systems of Linear Equations ∗
arXiv:1703.10232v1 [cs.DS] 29 Mar 2017
Gennadi I. Malaschonok
Tambov State University, 392622 Tambov, Russia
e-mail: [email protected]
Abstract
A new solution method for systems of linear equations in commutative integral domains is proposed. Its complexity is the same as the complexity of matrix multiplication.
1 Introduction
One of the first results in the theory of computational complexity is Strassen's discovery of a new algorithm for matrix multiplication [1]. He replaced the classical method, of complexity O(n^3), by a new algorithm of complexity O(n^{log_2 7}). This method may be used for matrices over any commutative ring. He used matrix multiplication for the computation of the inverse matrix, of the determinant of a matrix, and for the solution of systems of linear equations over an arbitrary field with complexity O(n^{log_2 7}).
Many authors improved this result. There is now known an algorithm for matrix multiplication with complexity O(n^{2.37}) (see D. Coppersmith, S. Winograd [2]).
The situation is different for the problems of solving systems of linear equations and computing determinants in commutative rings. Dodgson [3] proposed a method for the determinant computation and the solution of systems of linear equations over the ring of integer numbers with complexity O(n^3). During this century this result was improved and generalized to an arbitrary commutative integral domain due to Bareiss [4] and the author (see [5] – [8]). But the complexity is still O(n^3).
In this paper a new solution method for systems of linear equations in integral domains is proposed. Its complexity is the same as the complexity of matrix multiplication in the integral domain.
∗
This paper was published in: Computational Mathematics (A. Sydow Ed, Proceedings of the
15th IMACS World Congress, Vol. I, Berlin, August 1997 ), Wissenschaft & Technik Verlag,
Berlin 1997, 475–480. No part of this materials may be reproduced, stored in retrieval system, or
transmitted, in any form without prior permission of the copyright owner.
Let
$$\sum_{j=1}^{m-1} a_{ij} x_j = a_{im}, \qquad i = 1, 2, \ldots, n$$
be the system of linear equations with extended coefficients matrix
$$A = (a_{ij}), \quad i = 1, \ldots, n, \; j = 1, \ldots, m,$$
whose coefficients are in an integral domain R: A ∈ R^{n×m}.
The solution of such a system may be written according to Cramer's rule
$$x_j = \frac{\delta^n_{jm} - \sum_{p=n+1}^{m-1} x_p\, \delta^n_{jp}}{\delta_n}, \qquad j = 1, \ldots, n,$$
where x_p, p = n+1, ..., m, are free variables and δ_n ≠ 0. Here δ_n = |a_{ij}|, i = 1, ..., n, j = 1, ..., n, denotes the corner minor of the matrix A of order n, and δ^n_{ij} denote the minors obtained by a substitution of the column j of the matrix A instead of the column i in the minor δ_n, i = 1, ..., n, j = n+1, ..., m. So we need to construct an algorithm for the computation of the minor δ_n and the matrix G = (δ^n_{ij}), i = 1, ..., n, j = n+1, n+2, ..., m.
That means that we must make the reduction of the matrix A to the diagonal form
$$A \to (\delta_n I_n, \; G).$$
I_n denotes the unit matrix of order n.
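As a small illustration of how the solution is read off once δ_n and G are known, the following helper evaluates the Cramer formula above. It is not part of the original paper; the list-of-rows representation of G (one row per unknown, with the column of right-hand sides stored last) is an assumption made for the example.

```python
from fractions import Fraction

def solve_from_minors(delta_n, G, free_values):
    """Evaluate Cramer's rule once delta_n and G = (delta^n_{ij}) are known.

    G has one row per unknown x_1..x_n; its columns correspond to
    j = n+1, ..., m, the last column being the right-hand side.
    free_values are the chosen values of the free variables x_{n+1}..x_{m-1}.
    """
    x = []
    for row in G:
        rhs_minor, free_minors = row[-1], row[:-1]
        num = rhs_minor - sum(v * d for v, d in zip(free_values, free_minors))
        x.append(Fraction(num, delta_n))
    return x
```

For a square system (m = n + 1) there are no free variables and x_j = δ^n_{jm}/δ_n; for the worked example of Section 5, solve_from_minors(27, [[27], [54], [-54], [-27]], []) gives x = (1, 2, -2, -1).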
2 Recursive Algorithm
For the extended coefficients matrix A we shall denote:
$$A^k_{ij} = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1,k-1} & a_{1j} \\
a_{21} & a_{22} & \cdots & a_{2,k-1} & a_{2j} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{k-1,1} & a_{k-1,2} & \cdots & a_{k-1,k-1} & a_{k-1,j} \\
a_{i1} & a_{i2} & \cdots & a_{i,k-1} & a_{ij}
\end{pmatrix}$$
– the matrix formed by the surrounding of the submatrix of order k − 1 in the upper left corner by row i and column j; a^k_{ij} = det A^k_{ij} – the determinant of this matrix, so that a^1_{ij} = a_{ij}, δ_0 = 1, δ_k = a^k_{kk}; δ^k_{ij} – the determinant of the matrix that is obtained from the matrix A^k_{kk} after the substitution of the column i by the column j.
We shall use the minors δ^k_{ij} and a^k_{ij} for the construction of the matrices
$$A^{r,l,(p)}_{k,c} = \begin{pmatrix}
a^p_{r+1,k+1} & a^p_{r+1,k+2} & \cdots & a^p_{r+1,c} \\
a^p_{r+2,k+1} & a^p_{r+2,k+2} & \cdots & a^p_{r+2,c} \\
\vdots & \vdots & \ddots & \vdots \\
a^p_{l,k+1} & a^p_{l,k+2} & \cdots & a^p_{l,c}
\end{pmatrix}$$
and
$$G^{r,l,(p)}_{k,c} = \begin{pmatrix}
\delta^p_{r+1,k+1} & \delta^p_{r+1,k+2} & \cdots & \delta^p_{r+1,c} \\
\delta^p_{r+2,k+1} & \delta^p_{r+2,k+2} & \cdots & \delta^p_{r+2,c} \\
\vdots & \vdots & \ddots & \vdots \\
\delta^p_{l,k+1} & \delta^p_{l,k+2} & \cdots & \delta^p_{l,c}
\end{pmatrix},$$
with G^{r,l,(p)}_{k,c}, A^{r,l,(p)}_{k,c} ∈ R^{(l−r)×(c−k)}, 0 ≤ k < n, k < c ≤ n, 0 ≤ r < m, r < l ≤ m, 1 ≤ p ≤ n.
We shall describe one recursive step, which makes the following reduction of the matrix Ã to the diagonal form
$$\tilde A \to (\delta_l I_{l-k}, \; \hat G)$$
where Ã = A^{k,l,(k+1)}_{k,c}, Ĝ = G^{k,l,(l)}_{l,c}, 0 ≤ k < c ≤ m, k < l ≤ n, l < c. Note that if k = 0, l = n and c = m then we get the solution of the system.
We can choose an arbitrary integer number s: k < s < l and write the matrix Ã as follows:
$$\tilde A = \begin{pmatrix} A^1 \\ A^2 \end{pmatrix}$$
where A^1 = A^{k,s,(k+1)}_{k,c} – the upper part of the matrix Ã, consisting of the s − k rows, and A^2 = A^{s,l,(k+1)}_{k,c} – the lower part of the matrix Ã.

2.1 The first step
As the next recursive step we make the following reduction of the matrix A^1 ∈ R^{(s−k)×(c−k)} to the diagonal form
$$A^1 \to (\delta_s I_{s-k}, \; G^1_2),$$
where G^1_2 = G^{k,s,(s)}_{s,c}.

2.2 The second step
We write the matrix A^2 in the following way:
$$A^2 = (A^2_1, \; A^2_2)$$
where A^2_1 = A^{s,l,(k+1)}_{k,s} consists of the first s − k columns and A^2_2 = A^{s,l,(k+1)}_{s,c} consists of the last c − s columns of the matrix A^2.
The matrix Â^2_2 = A^{s,l,(s+1)}_{s,c} is obtained from the matrix identity (see the proof in the next section):
$$\delta_k \cdot \hat A^2_2 = \delta_s \cdot A^2_2 - A^2_1 \cdot G^1_2.$$
The minors δ_k must not equal zero.

2.3 The third step
As the next recursive step we make the following reduction of the matrix Â^2_2 ∈ R^{(l−s)×(c−s)} to the diagonal form
$$\hat A^2_2 \to (\delta_l I_{l-s}, \; \hat G^2_{2''}),$$
where Ĝ^2_{2''} = G^{s,l,(l)}_{l,c}.

2.4 The fourth step
We write the matrix G^1_2 in the following way:
$$G^1_2 = (G^1_{2'}, \; G^1_{2''})$$
where G^1_{2'} = G^{k,s,(s)}_{s,l} consists of the first l − s columns and G^1_{2''} = G^{k,s,(s)}_{l,c} consists of the last c − l columns of the matrix G^1_2.
The matrix Ĝ^1_{2''} = G^{k,s,(l)}_{l,c} is obtained from the matrix identity (see the proof in the next section):
$$\delta_s \cdot \hat G^1_{2''} = \delta_l \cdot G^1_{2''} - G^1_{2'} \cdot \hat G^2_{2''}.$$
The minors δ_s must not equal zero.
So we get
$$\hat G = \begin{pmatrix} \hat G^1_{2''} \\ \hat G^2_{2''} \end{pmatrix}$$
and δ_l.
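To make the four steps concrete, the following is a minimal sketch of the recursive reduction in Python over the integers; it is not part of the original paper. The dichotomous splitting point, the list-of-rows representation, and the function name are illustrative choices. The exactness of the integer divisions is guaranteed by the identities proved in the next section, provided the corner minors δ_k are nonzero.

```python
def reduce_block(A, delta_k):
    """One reduction  A~ -> (delta_l, G^)  following steps 2.1-2.4.

    A is the (l-k) x (c-k) block A_{k,c}^{k,l,(k+1)} as a list of rows with
    entries from an integral domain (here: Python integers); delta_k is the
    corner minor of order k (delta_0 = 1).  Returns delta_l and the rows of
    G^ = G_{l,c}^{k,l,(l)} (c - l columns each).
    """
    if len(A) == 1:                       # base case: a single row
        return A[0][0], [A[0][1:]]
    s = len(A) // 2                       # dichotomous split; any 1 <= s < l-k works
    A1, A2 = A[:s], A[s:]
    # step 1: reduce the upper part
    delta_s, G12 = reduce_block(A1, delta_k)
    # step 2: delta_k * A^22 = delta_s * A22 - A21 * G12   (the division is exact)
    A22h = [[(delta_s * row[s + j] - sum(row[p] * G12[p][j] for p in range(s)))
             // delta_k for j in range(len(row) - s)] for row in A2]
    # step 3: reduce the lower part
    delta_l, G22h = reduce_block(A22h, delta_s)
    # step 4: delta_s * G^12'' = delta_l * G12'' - G12' * G^22''  (exact again)
    t = len(A) - s                        # l - s
    G12h = [[(delta_l * row[t + j] - sum(row[p] * G22h[p][j] for p in range(t)))
             // delta_s for j in range(len(row) - t)] for row in G12]
    return delta_l, G12h + G22h
```

Applied to the 4 × 5 extended matrix of the worked example in Section 5 (with delta_k = 1), reduce_block returns δ_4 = 27 and Ĝ = (27, 54, −54, −27)ᵀ, in agreement with the computation given there.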
2.5 Representation of the one recursive step
We can represent one recursive step as the following reduction of the matrix Ã:
$$\tilde A = \begin{pmatrix} A^1 \\ A^2 \end{pmatrix}
\xrightarrow{1} \begin{pmatrix} \delta_s I_{s-k} & G^1_2 \\ A^2_1 & A^2_2 \end{pmatrix}
\xrightarrow{2} \begin{pmatrix} \delta_s I_{s-k} & G^1_2 \\ 0 & \hat A^2_2 \end{pmatrix}
\xrightarrow{3} \begin{pmatrix} \delta_s I_{s-k} & G^1_{2'} & G^1_{2''} \\ 0 & \delta_l I_{l-s} & \hat G^2_{2''} \end{pmatrix}
\xrightarrow{4} \begin{pmatrix} \delta_l I_{s-k} & 0 & \hat G^1_{2''} \\ 0 & \delta_l I_{l-s} & \hat G^2_{2''} \end{pmatrix}
= \left( \delta_l I_{l-k} \;\; \hat G \right).$$

3 The Proof of the Main Identities

3.1 The first matrix identity
The second step of the algorithm is based on the following matrix identity:
$$\delta_k A^{s,l,(s+1)}_{s,c} = \delta_s A^{s,l,(k+1)}_{s,c} - A^{s,l,(k+1)}_{k,s} \cdot G^{k,s,(s)}_{s,c}.$$
So we must prove the next identities for the matrix elements
$$\delta_k a^{s+1}_{ij} = \delta_s a^{k+1}_{ij} - \sum_{p=k+1}^{s} a^{k+1}_{ip}\, \delta^s_{pj}, \qquad i = s+1, \ldots, l; \; j = s+1, \ldots, c.$$
Let σ^k_{ij} denote the minors that will stand in the place of the minors δ^k_{ij} after the replacement of the row i by the row j. An expansion of the determinant a^{k+1}_{ij} according to the column j is the following
$$a^{k+1}_{ij} = \delta_k a_{ij} - \sum_{r=1}^{k} \sigma^k_{ri}\, a_{rj}.$$
Therefore we can write the next matrix identity
$$\begin{pmatrix}
1 & 0 & \cdots & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & & \vdots & \vdots \\
0 & 0 & \cdots & 0 & 0 & \cdots & 1 & 0 \\
-\sigma^k_{1i} & -\sigma^k_{2i} & \cdots & -\sigma^k_{ki} & 0 & \cdots & 0 & \delta_k
\end{pmatrix} \cdot A^{s+1}_{ij} =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1,s} & a_{1j} \\
a_{21} & a_{22} & \cdots & a_{2,s} & a_{2j} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{s,1} & a_{s,2} & \cdots & a_{s,s} & a_{s,j} \\
a^{k+1}_{i1} & a^{k+1}_{i2} & \cdots & a^{k+1}_{i,s} & a^{k+1}_{ij}
\end{pmatrix}.$$
Note that a^{k+1}_{ip} = 0 for p ≤ k. Finally we decompose the determinant of the right matrix according to the last row and write the determinant identity corresponding to this matrix identity.
3.2 The second matrix identity
The fourth step of the algorithm is based on the matrix identity
$$\delta_s G^{k,s,(l)}_{l,c} = \delta_l G^{k,s,(s)}_{l,c} - G^{k,s,(s)}_{s,l} \cdot G^{s,l,(l)}_{l,c}.$$
So we must prove the next identities for the matrix elements:
$$\delta_s \delta^l_{ij} = \delta_l \delta^s_{ij} - \sum_{p=s+1}^{l} \delta^s_{ip}\, \delta^l_{pj}, \qquad i = k+1, \ldots, s; \; j = l+1, \ldots, c.$$
Let γ^s_{j,i} denote the algebraic adjunct of the element a_{j,i} in the matrix A^s_{s,s}. An expansion of the determinant δ^s_{ip} according to the column i is the following
$$\delta^s_{ip} = \sum_{q=1}^{s} \gamma^s_{qi}\, a_{qp}.$$
Therefore we can write the next matrix identity:
$$\begin{pmatrix}
1 & 0 & \cdots & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & & \vdots & \vdots \\
0 & 0 & \cdots & 0 & 0 & \cdots & 1 & 0 \\
\gamma^s_{1i} & \gamma^s_{2i} & \cdots & \gamma^s_{s,i} & 0 & \cdots & 0 & 0
\end{pmatrix} \cdot A^{l+1}_{ij} =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1,l} & a_{1j} \\
a_{21} & a_{22} & \cdots & a_{2,l} & a_{2j} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{l,1} & a_{l,2} & \cdots & a_{l,l} & a_{l,j} \\
\delta^s_{i1} & \delta^s_{i2} & \cdots & \delta^s_{i,l} & \delta^s_{ij}
\end{pmatrix}.$$
Note that δ^s_{ip} = 0 for p ≤ s and δ^s_{ii} = δ_s. So to finish the proof we must decompose the determinant of the right matrix according to the last row and write the determinant identity corresponding to this matrix identity.
4 Evaluation of Operations Number

Let us have a method for matrix multiplication with complexity M(n) = O(n^{2+β}); then for the multiplication of two matrices of order l × n and n × c we need M(l × n, n × c) = O(l c n^β) operations. Let us denote by S(n, m) the complexity of the recursive algorithm for the matrix A ∈ R^{n×m}.
If in the first recursive step the upper submatrix consists of s rows, 1 ≤ s < n, then
$$S(n, m) = S(s, m) + M((n-s) \times s,\; s \times (m-s)) + S(n-s, m-s) + M(s \times (n-s),\; (n-s) \times (m-n)) + O(nm).$$
For a matrix with k rows we can choose an arbitrary s: 1 ≤ s ≤ k − 1.
If the process of partition is dichotomous, and the number of rows in the upper and lower submatrices is the same in every step, then S(2n, m) satisfies the recursive inequality:
$$S(2n, m) = S(n, m) + M(n \times n,\; n \times (m-n)) + S(n, m-n) + M(n \times n,\; n \times (m-2n)) + O(nm) \le 2 S(n, m) + 2\, O(m n^{\beta+1}).$$
So we have
$$S(2n, m) \le n S(2, m) + \sum_{i=0}^{(\log_2 n) - 1} O\!\left(\left(\frac{n}{2^i}\right)^{\beta+1} m\right) 2^{i+1} = n S(2, m) + \frac{2\, O((n^{\beta} - 1)\, n m)}{1 - 2^{-\beta}}.$$
And finally
$$S(2n, m) \le O(m n^{\beta+1}).$$
On the other hand
$$S(2n, m) > M(n \times n,\; n \times (m-n)) = O(m n^{\beta+1}).$$
Therefore
$$S(2n, m) = O(m n^{\beta+1}).$$
So the complexity of this algorithm is the same as the complexity of matrix multiplication. In particular for m = n + 1 we have
$$S(n, n+1) = O(n^{2+\beta}).$$
This means that the solution of a system of linear equations needs (up to a constant multiplier) the same number of operations as the multiplication of two matrices.
We can get the exact number of operations that are necessary for the solution of a system of linear equations of order n × m in the case when at every step the upper submatrix is no smaller than the lower submatrix and the number of rows in the upper submatrix is some power of 2.
Let F(s, μ−s, ν) = M((ν−s) × s, s × (μ−s)) + M(s × (ν−s), (ν−s) × (μ−ν)); then we obtain
$$S(n, m) = \sum_{k=1}^{\lfloor \log_2 n \rfloor} \left( F\!\left(2^k,\; n - 2^k \Big\lfloor \frac{n}{2^k} \Big\rfloor,\; m - 2^k\Big(\Big\lfloor \frac{n}{2^k} \Big\rfloor - 1\Big)\right) + \sum_{i=1}^{\lfloor n/2^k \rfloor} F\!\left(2^{k-1},\, 2^{k-1},\, m - (i-1)\, 2^k\right) \right).$$
Let n = 2^p. If we use simple matrix multiplication with complexity n^3 then we obtain
$$A_{nm} = (6n^2 m - 4n^3 - 6nm + 3n^2 + n)/6,$$
$$M_{nm} = (6n^2 m - 4n^3 + (6nm - 3n^2)\log_2 n - 6nm + 4n)/6,$$
$$D_{nm} = ((6nm - 3n^2)\log_2 n - 6nm - n^2 + 6m + 3n - 2)/6.$$
Here we denote by A_{nm}, M_{nm}, D_{nm} the numbers of additions/subtractions, multiplications and divisions, and take into account that (6nm − 2n^2 − 6m + 2)/6 divisions in the second step are divisions by δ_0 = 1, so they do not appear in D_{nm}.
For m = n + 1 we obtain
$$A_{n,n+1} = (2n^3 + 3n^2 - 5n)/6,$$
$$M_{n,n+1} = (2n^3 + (3n^2 + 6n)\log_2 n - 2n)/6,$$
$$D_{n,n+1} = (3n^2 \log_2 n - 7n^2 + 6n\log_2 n + 3n + 4)/6.$$
The general quantity of multiplication and division operations is about n^3/3. We can compare these results with the one-pass algorithm, which was the best of all known algorithms (see [8]): A^O_{n,n+1} = (2n^3 + 3n^2 − 5n)/6, M^O_{n,n+1} = (n^3 + 2n^2 − n − 2)/2, D^O_{n,n+1} = (n^3 − 7n + 6)/6; the general quantity of multiplication and division operations is about 2n^3/3.
If we use Strassen's matrix multiplication with complexity n^{log_2 7} then we can obtain for n = 2^p the general quantity of multiplication and division operations
$$MD^S_{n,m} = n^2\Big(\log_2 n - \tfrac{5}{3}\Big) + \tfrac{7}{15}\, n^{\log_2 7} + (m-n)(n^2 + 2n\log_2 n - n) + \tfrac{6}{5}\, n - \sum_{i=1}^{\log_2 n - 1} (8^i - 7^i)\left\{ \Big\lfloor \frac{m-n}{2^i} \Big\rfloor - \Big(\frac{n}{2^i} - 2\Big) \Big\lfloor \frac{m - n - 2^{i+1}\lfloor (m-n)/2^{i+1} \rfloor}{2^i} \Big\rfloor \right\}.$$
For m = n + 1, n = 2^p we get
$$MD^S_{n,n+1} = \tfrac{7}{15}\, n^{\log_2 7} + n^2\Big(\log_2 n - \tfrac{1}{3}\Big) + n\Big(2\log_2 n + \tfrac{2}{5}\Big).$$
5 Example

Let us consider the next system over the integer numbers:
$$\begin{pmatrix} 3 & 1 & 1 & -1 \\ 1 & 2 & 0 & 1 \\ 0 & 1 & 2 & 0 \\ 1 & 0 & 0 & 2 \end{pmatrix} \cdot \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 4 \\ 4 \\ -2 \\ -1 \end{pmatrix}$$
5.1 Reduction of the matrix A^1 = A^{0,2,(1)}_{0,5} to the diagonal form

We make the next reduction:
$$A^{0,2,(1)}_{0,5} \to (\delta_2 I_2,\; G^{0,2,(2)}_{2,5}).$$

5.1.1
$$A^{0,1,(1)}_{0,5} \to (\delta_1 I_1,\; G^{0,1,(1)}_{1,5}) = (3;\; 1, 1, -1, 4).$$

5.1.2
$$\delta_0 A^{1,2,(2)}_{1,5} = \delta_1 A^{1,2,(1)}_{1,5} - A^{1,2,(1)}_{0,1}\, G^{0,1,(1)}_{1,5} = 3\,(2, 0, 1, 4) - (1)(1, 1, -1, 4) = (5, -1, 4, 8), \qquad \delta_0 \equiv 1.$$

5.1.3
$$A^{1,2,(2)}_{1,5} \to (\delta_2 I_1,\; G^{1,2,(2)}_{2,5}) = (5;\; -1, 4, 8).$$

5.1.4
$$\delta_1 G^{0,1,(2)}_{2,5} = \delta_2 G^{0,1,(1)}_{2,5} - G^{0,1,(1)}_{1,2}\, G^{1,2,(2)}_{2,5} = 5\,(1, -1, 4) - (1)(-1, 4, 8) = (6, -9, 12), \qquad G^{0,1,(2)}_{2,5} = (2, -3, 4).$$

Finally we obtain
$$(\delta_2 I_2,\; G^{0,2,(2)}_{2,5}) = \begin{pmatrix} 5 & 0 & 2 & -3 & 4 \\ 0 & 5 & -1 & 4 & 8 \end{pmatrix}.$$
5.2 Computation of the matrix Â^2_2 = A^{2,4,(3)}_{2,5}

$$\delta_0 A^{2,4,(3)}_{2,5} = \delta_2 A^{2,4,(1)}_{2,5} - A^{2,4,(1)}_{0,2}\, G^{0,2,(2)}_{2,5} = 5 \begin{pmatrix} 2 & 0 & -2 \\ 0 & 2 & -1 \end{pmatrix} - \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 2 & -3 & 4 \\ -1 & 4 & 8 \end{pmatrix} = \begin{pmatrix} 11 & -4 & -18 \\ -2 & 13 & -9 \end{pmatrix}, \qquad \delta_0 \equiv 1;$$
$$A^{2,4,(3)}_{2,5} = \begin{pmatrix} 11 & -4 & -18 \\ -2 & 13 & -9 \end{pmatrix}.$$
5.3 Reduction of the matrix A^2_2 = A^{2,4,(3)}_{2,5} to the diagonal form

We make the next reduction:
$$A^{2,4,(3)}_{2,5} \to (\delta_4 I_2,\; G^{2,4,(4)}_{4,5}).$$

5.3.1
$$A^{2,3,(3)}_{2,5} \to (\delta_3 I_1,\; G^{2,3,(3)}_{3,5}) = (11;\; -4, -18).$$

5.3.2
$$\delta_2 A^{3,4,(4)}_{3,5} = \delta_3 A^{3,4,(3)}_{3,5} - A^{3,4,(3)}_{2,3}\, G^{2,3,(3)}_{3,5} = 11\,(13, -9) - (-2)(-4, -18) = (135, -135), \qquad A^{3,4,(4)}_{3,5} = (27, -27).$$

5.3.3
$$A^{3,4,(4)}_{3,5} \to (\delta_4 I_1,\; G^{3,4,(4)}_{4,5}) = (27;\; -27).$$

5.3.4
$$\delta_3 G^{2,3,(4)}_{4,5} = \delta_4 G^{2,3,(3)}_{4,5} - G^{2,3,(3)}_{3,4}\, G^{3,4,(4)}_{4,5} = 27\,(-18) - (-4)(-27) = -594, \qquad G^{2,3,(4)}_{4,5} = (-54).$$

Finally, in step (3) we obtain
$$(\delta_4 I_2,\; G^{2,4,(4)}_{4,5}) = \begin{pmatrix} 27 & 0 & -54 \\ 0 & 27 & -27 \end{pmatrix}.$$
5.4 Computation of the matrix Ĝ^1_{2''} = G^{0,2,(4)}_{4,5}

$$\delta_2 G^{0,2,(4)}_{4,5} = \delta_4 G^{0,2,(2)}_{4,5} - G^{0,2,(2)}_{2,4}\, G^{2,4,(4)}_{4,5} = 27 \begin{pmatrix} 4 \\ 8 \end{pmatrix} - \begin{pmatrix} 2 & -3 \\ -1 & 4 \end{pmatrix} \begin{pmatrix} -54 \\ -27 \end{pmatrix} = \begin{pmatrix} 135 \\ 270 \end{pmatrix}, \qquad G^{0,2,(4)}_{4,5} = \begin{pmatrix} 27 \\ 54 \end{pmatrix}.$$

The solution of the system is the following:
$$\delta_4 = 27; \qquad G^{0,4,(4)}_{4,5} = \begin{pmatrix} 27 \\ 54 \\ -54 \\ -27 \end{pmatrix},$$
so that, by Cramer's rule, x = (1, 2, −2, −1).
5.5 Representation of the first recursive step

We can represent the first recursive step as the following:
$$A \xrightarrow{1} \begin{pmatrix} 5 & 0 & 2 & -3 & 4 \\ 0 & 5 & -1 & 4 & 8 \\ 0 & 1 & 2 & 0 & -2 \\ 1 & 0 & 0 & 2 & -1 \end{pmatrix}
\xrightarrow{2} \begin{pmatrix} 5 & 0 & 2 & -3 & 4 \\ 0 & 5 & -1 & 4 & 8 \\ 0 & 0 & 11 & -4 & -18 \\ 0 & 0 & -2 & 13 & -9 \end{pmatrix}
\xrightarrow{3} \begin{pmatrix} 5 & 0 & 2 & -3 & 4 \\ 0 & 5 & -1 & 4 & 8 \\ 0 & 0 & 27 & 0 & -54 \\ 0 & 0 & 0 & 27 & -27 \end{pmatrix}
\xrightarrow{4} \begin{pmatrix} 27 & 0 & 0 & 0 & 27 \\ 0 & 27 & 0 & 0 & 54 \\ 0 & 0 & 27 & 0 & -54 \\ 0 & 0 & 0 & 27 & -27 \end{pmatrix}.$$
6 Conclusion
The described algorithm for the solution of systems of linear equations over an integral domain includes the known one-pass method and the method of forward and back-up procedures [8]. If in every recursive step the partition of the matrix is such that the upper submatrix consists of only one row, then it is the method of forward and back-up procedures. If the lower submatrix consists of only one row in every step, then it is the one-pass method.
If the process of partition is dichotomous and the numbers of rows in the upper and lower submatrices are equal in every step, then the complexity of the solution has the same order O(n^{2+β}) as the complexity of matrix multiplication.
The computation of the matrix determinant and the computation of the adjugate matrix have the same complexity.
This method may be used in any commutative ring if the corner minors δ_k, k = 1, 2, . . . , n, do not equal zero and are not zero divisors.
References
[1] V. Strassen. Gaussian Elimination is not optimal. Numerische
Mathematik, 1969, 13, 354–356.
[2] D. Coppersmith, S. Winograd. Matrix multiplication via arithmetic progressions. In Proc. 19th Annual ACM Symp. on Theory of Computing, 1987, 1–6.
[3] C.L. Dodgson. Condensation of determinants, being a new and
brief method for computing their arithmetic values. Proc. Royal
Soc. Lond., 1866, A.15, 150–155.
[4] E.N. Bareiss. Sylvester’s identity and multistep integer-preserving
Gaussian elimination. Math. Comput., 1968, 22, 565–578.
[5] G.I. Malaschonok. Solution of a system of linear equations in an
integral domain. USSR Journal of Computational Mathematics
and Mathematical Physics, 1983, 23, 1497–1500.
10
[6] G.I. Malaschonok. On the solution of a linear equation system
over commutative ring. Math. Notes of the Acad. Sci. USSR,
1987, 42, N4, 543–548.
[7] G.I. Malaschonok. A new solution method for linear equation
systems over the commutative ring. In Int. Algebraic Conf., Theses on the ring theory, algebras and modules. Novosibirsk, 1989,
82–83.
[8] G.I. Malaschonok. Algorithms for the solution of systems of linear
equations in commutative rings. In Effective Methods in Algebraic
Geometry, Edited by T. Mora and C. Traverso, Progress in Mathematics 94, Birkhauser, Boston-Basel-Berlin, 1991, 289–298.
| 0 |
HDR image reconstruction from a single exposure using deep CNNs
GABRIEL EILERTSEN, Linköping University, Sweden
JOEL KRONANDER, Linköping University, Sweden
GYORGY DENES, University of Cambridge, UK
RAFAŁ K. MANTIUK, University of Cambridge, UK
JONAS UNGER, Linköping University, Sweden
arXiv:1710.07480v1 [cs.CV] 20 Oct 2017
[Fig. 1 panels: Input LDR image (bottom left), Reconstructed HDR image (top right); insets: Input / Reconstruction / Ground truth]
Fig. 1. The exposure of the input LDR image in the bottom left has been reduced by 3 stops, revealing loss of information in saturated image regions. Using
the proposed CNN trained on HDR image data, we can reconstruct the highlight information realistically (top right). The insets show that the high luminance
of the street lights can be recovered (top row), as well as colors and details of larger saturated areas (bottom row). The exposures of the insets have been
reduced by 5 and 4 stops in the top and bottom rows, respectively, in order to facilitate comparisons. All images have been gamma corrected for display.
Camera sensors can only capture a limited range of luminance simultaneously, and in order to create high dynamic range (HDR) images a set of
different exposures are typically combined. In this paper we address the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure. We show
that this problem is well-suited for deep learning algorithms, and propose a
deep convolutional neural network (CNN) that is specifically designed taking
into account the challenges in predicting HDR values. To train the CNN
we gather a large dataset of HDR images, which we augment by simulating
sensor saturation for a range of cameras. To further boost robustness, we
pre-train the CNN on a simulated HDR dataset created from a subset of the
MIT Places database. We demonstrate that our approach can reconstruct
high-resolution visually convincing HDR results in a wide range of situations, and that it generalizes well to reconstruction of images captured with
arbitrary and low-end cameras that use unknown camera response functions and post-processing. Furthermore, we compare to existing methods for
HDR expansion, and show high quality results also for image based lighting.
Finally, we evaluate the results in a subjective experiment performed on an
HDR display. This shows that the reconstructed HDR images are visually
convincing, with large improvements as compared to existing methods.
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
to post on servers or to redistribute to lists, requires prior specific permission and/or a
fee. Request permissions from [email protected].
© 2017 Association for Computing Machinery.
0730-0301/2017/11-ART178 $15.00
https://doi.org/10.1145/3130800.3130816
CCS Concepts: • Computing methodologies → Image processing; Neural
networks;
Additional Key Words and Phrases: HDR reconstruction, inverse tone-mapping,
deep learning, convolutional network
ACM Reference format:
Gabriel Eilertsen, Joel Kronander, Gyorgy Denes, Rafał K. Mantiuk, and Jonas
Unger. 2017. HDR image reconstruction from a single exposure using deep
CNNs. ACM Trans. Graph. 36, 6, Article 178 (November 2017), 15 pages.
https://doi.org/10.1145/3130800.3130816
1 INTRODUCTION
High dynamic range (HDR) images can significantly improve the
viewing experience – viewed on an HDR capable display or by means
of tone-mapping. With the graphics community as an early adopter,
HDR images are now routinely used in many applications including photo realistic image synthesis and a range of post-processing
operations; for an overview see [Banterle et al. 2011; Dufaux et al.
2016; Reinhard et al. 2010]. The ongoing rapid development of HDR
technologies and cameras has now made it possible to collect the
data required to explore recent advances in deep learning for HDR
imaging problems.
In this paper, we propose a novel method for reconstructing HDR
images from low dynamic range (LDR) input images, by estimating
missing information in bright image parts, such as highlights, lost
due to saturation of the camera sensor. We base our approach on a
fully convolutional neural network (CNN) design in the form of a
hybrid dynamic range autoencoder. Similarly to deep autoencoder
architectures [Hinton and Salakhutdinov 2006; Vincent et al. 2008],
the LDR input image is transformed by an encoder network to
produce a compact feature representation of the spatial context
of the image. The encoded image is then fed to an HDR decoder
network, operating in the log domain, to reconstruct an HDR image.
Furthermore, the network is equipped with skip-connections that
transfer data between the LDR encoder and HDR decoder domains
in order to make optimal use of high resolution image details in
the reconstruction. For training, we first gather data from a large
set of existing HDR image sources in order to create a training
dataset. For each HDR image we then simulate a set of corresponding
LDR exposures using a virtual camera model. The network weights
are optimized over the dataset by minimizing a custom HDR loss
function. As the amount of available HDR content is still limited
we utilize transfer-learning, where the weights are pre-trained on
a large set of simulated HDR images, created from a subset of the
MIT Places database [Zhou et al. 2014].
Expansion of LDR images for HDR applications is commonly
referred to as inverse tone-mapping (iTM). Most existing inverse
tone-mapping operators (iTMOs) are not very successful in reconstruction of saturated pixels. This has been shown in a number of
studies [Akyüz et al. 2007; Masia et al. 2009], in which naïve methods or non-processed images were more preferred than the results
of those operators. The existing operators focus on boosting the
dynamic range to look plausible on an HDR display, or to produce
rough estimates needed for image based lighting (IBL). The proposed method demonstrates a step improvement in the quality of
reconstruction, in which the structures and shapes in the saturated
regions are recovered. It offers a range of new applications, such as
exposure correction, tone-mapping, or glare simulation.
The main contributions of the paper can be summarized as:
(1) A deep learning system that can reconstruct a high quality
HDR image from an arbitrary single exposed LDR image,
provided that saturated areas are reasonably small.
(2) A hybrid dynamic range autoencoder that is tailored to operate
on LDR input data and output HDR images. It utilizes HDR
specific transfer-learning, skip-connections, color space and
loss function.
(3) The quality of the HDR reconstructions is confirmed in a
subjective evaluation on an HDR display, where predicted
images are compared to HDR and LDR images as well as a
representative iTMO using a random selection of test images
in order to avoid bias in image selection.
(4) The HDR reconstruction CNN together with trained parameters are made available online, enabling prediction from any
LDR images: https:// github.com/ gabrieleilertsen/ hdrcnn.
2 RELATED WORK
2.1 HDR reconstruction
In order to capture the entire range of luminance in a scene it is
necessary to use some form of exposure multiplexing. While static
scenes commonly are captured using multiplexing exposures in
the time domain [Debevec and Malik 1997; Mann and Picard 1994;
Unger and Gustavson 2007], dynamic scenes can be challenging as
robust exposure alignment is needed. This can be solved by techniques such as multi-sensor imaging [Kronander et al. 2014; Tocci
et al. 2011] or by varying the per-pixel exposure [Nayar and Mitsunaga 2000] or gain [Hajisharif et al. 2015]. Furthermore, saturated
regions can be encoded in glare patterns [Rouf et al. 2011] or with
convolutional sparse coding [Serrano et al. 2016]. However, all these
approaches introduce other limitations such as bulky and custom
built systems, calibration problems, or decreased image resolution.
Here, we instead tackle the problem by reconstructing visually convincing HDR images from single images that have been captured
using standard cameras without any assumptions on the imaging
system or camera calibration.
2.2 Inverse tone-mapping
Inverse tone-mapping is a general term used to describe methods
that utilize LDR images for HDR image applications [Banterle et al.
2006]. The intent of different iTMOs may vary. If it is to display
standard images on HDR capable devices, maximizing the subjective
quality, there is some evidence that global pixel transformations
may be preferred [Masia et al. 2009]. Given widely different input
materials, such methods are less likely to introduce artifacts compared to more advanced strategies. The transformation could be
a linear scaling [Akyüz et al. 2007] or some non-linear function
[Masia et al. 2009, 2017]. These methods modify all pixels without
reconstructing any of the lost information.
A second category of iTMOs attempt to reconstruct saturated
regions to mimic a true HDR image. These are expected to generate
results that look more like a reference HDR, which was also indicated by the pair-wise comparison experiment on an HDR display
performed by Banterle et al. [2009]. Meylan et al. [2006] used a linear
transformation, but applied different scalings in highlight regions.
Banterle et al. [2006] first linearized the input image, followed by
boosting highlights using an expand map derived from the median
cut algorithm. The method was extended for video processing, and
with additional features such as automatic calibration and crossbilateral filtering of the expand map [Banterle et al. 2008]. Rempel
et al. [2007] also utilized an expand map, but computed this from
Gaussian filtering in order to achieve real-time performance. Wang
et al. [2007] applied inpainting techniques on the reflectance component of highlights. The method is limited to textured highlights, and
requires some manual interaction. Another semi-manual method
was proposed by Didyk et al. [2008], separating the image into diffuse, reflections and light sources. The reflections and light sources
were enhanced, while the diffuse component was left unmodified.
More recent methods includes the iTMO by Kovaleski and Oliviera
[2014], that focus on achieving good results over a wide range of
exposures, making use of a cross-bilateral expand map [Kovaleski
and Oliveira 2009].
For an in-depth overview of inverse tone-mapping we refer to the
survey by Banterle et al. [2009]. Compared to the existing iTMOs,
our approach achieves significantly better results by learning from
exploring a wide range of different HDR scenes. Furthermore, the
reconstruction is completely automatic with no user parameters
and runs within a second on modern hardware.
HDR image reconstruction from a single exposure using deep CNNs • 178:3
2.3 Bit-depth extension
A standard 8-bit LDR image is affected not only by clipping but also by quantization. If the contrast or exposure is significantly increased, quantization can be revealed as banding artifacts. Existing methods for decontouring, or bit-depth extension, include dithering methods that use noise in order to hide the banding artifacts [Daly and Feng 2003]. Decontouring can also be performed using low-pass filtering followed by quantization, in order to detect false contours [Daly and Feng 2004]. There are also a number of edge-preserving filters used for the same purpose. In this work we do not focus on decontouring, which is mostly a problem in under-exposed images. Also, since we treat the problem of predicting saturated image regions, the bit depth will be increased with the reconstructed information.
2.4 Convolutional neural networks
CNNs have recently been applied to a large range of computer vision tasks, significantly improving on the performance of classical supervised tasks such as image classification [Simonyan and Zisserman 2014], object detection [Ren et al. 2015] and semantic segmentation [Long et al. 2015], among others. Recently CNNs have also shown great promise for image reconstruction problems related to the challenges faced in inverse tone-mapping, such as compression artifact reduction [Svoboda et al. 2016], super-resolution [Ledig et al. 2016], and colorization [Iizuka et al. 2016]. Recent work on inpainting [Pathak et al. 2016; Yang et al. 2016] has also utilized variants of Generative Adversarial Networks (GANs) [Goodfellow et al. 2014] to produce visually convincing results. However, as these methods are based on adversarial training, results tend to be unpredictable and can vary widely from one training iteration to the next. To stabilize training, several tricks are used in practice, including restricting the output intensity, which is problematic for HDR generation. Furthermore, these methods are limited to a single image resolution, with results only shown so far for very low resolutions.
Recently, deep learning has also been successfully applied for improving classical HDR video reconstruction from multiple exposures captured over time [Kalantari and Ramamoorthi 2017]. In terms of reconstructing HDR from one single exposed LDR image, the recent work by Zhang and Lalonde [2017] is most similar to ours. They use an autoencoder [Hinton and Salakhutdinov 2006] in order to reconstruct HDR panoramas from single exposed LDR counterparts. However, the objective of this work is specifically to recover high intensities near the sun in order to use the prediction for IBL. Also, the method is only trained using rendered panoramas of outdoor environments where the sun is assumed to be in the same azimuthal position in all images. Given these restrictions, and that predictions are limited to 128 × 64 pixels, the results are only applicable for IBL of outdoor scenes. Compared to this work, we propose a solution to a very general problem specification without any such assumptions, and where any types of saturated regions are considered. We also introduce several key modifications to the standard autoencoder design [Hinton and Salakhutdinov 2006], and show that this significantly improves the performance.
Finally, it should be mentioned that the concurrent work by Endo et al. [2017] also treats inverse tone-mapping using deep learning algorithms, by using a different pipeline design. Given a single exposure input image, the method uses autoencoders in order to predict a set of LDR images with both shorter and longer exposures. These are subsequently combined using standard methods, in order to reconstruct the final HDR image.
[Fig. 2 panels: (a) f^{-1}(D), (b) exp(ŷ), (c) α, (d) Ĥ, (e) H]
Fig. 2. Zoom-in of an example of the components of the blending operation in Equation 1, compared to the ground truth HDR image. (a) is the input image, (b) is the prediction, (c) is the blending mask, (d) is the blending of (a-b) using (c), and (e) is the ground truth. Gamma correction has been applied to the images for display purposes.
3 HDR RECONSTRUCTION MODEL
3.1 Problem formulation and constraints
Our objective is to predict values of saturated pixels given an LDR
image produced by any type of camera. In order to produce the final
HDR image, the predicted pixels are combined with the linearized
input image. The final HDR reconstructed pixel Ĥi,c with spatial
index i and color channel c is computed using a pixel-wise blending
with the blend value α i ,
$$\hat H_{i,c} = (1 - \alpha_i)\, f^{-1}(D_{i,c}) + \alpha_i \exp(\hat y_{i,c}), \qquad (1)$$
where D i,c is the input LDR image pixel and ŷi,c is the CNN output (in the log domain). The inverse camera curve f −1 is used to
transform the input to the linear domain. The blending is a linear
ramp starting from pixel values at a threshold τ , and ending at the
maximum pixel value,
$$\alpha_i = \frac{\max\left(0,\; \max_c(D_{i,c}) - \tau\right)}{1 - \tau}. \qquad (2)$$
In all examples we use τ = 0.95, where the input is defined to be in
the range [0, 1]. The linear blending prevents banding artifacts between predicted highlights and their surroundings, as compared to a
binary mask. It is also used to define the loss function in the training,
as described in Section 3.4. For an illustration of the components
of the blending, see Figure 2. Due to the blending, predictions are
focused on reconstructing around the saturated areas, and artifacts
may appear in other image regions (Figure 2(b)).
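As a concrete illustration of Equations 1 and 2, the blending can be written in a few lines of NumPy. This is only a sketch under stated assumptions: the input is in [0, 1] with τ = 0.95 as in the paper, and a simple gamma curve is used as the assumed inverse camera curve f⁻¹ (the paper uses γ = 2 for the skip-connections in Section 3.3; any other linearization could be substituted here). The CNN prediction ŷ is taken as given.

```python
import numpy as np

def blend_hdr(D, y_hat, tau=0.95, gamma=2.0):
    """Combine the LDR input D (H x W x 3, values in [0, 1]) with the CNN
    output y_hat (log-domain prediction of the same shape) into the final
    HDR image, following Eqs. (1) and (2).

    The gamma curve f^-1(x) = x**gamma is an assumed camera linearization.
    """
    # Eq. (2): per-pixel blend value from the maximum color channel
    alpha = np.maximum(0.0, np.max(D, axis=-1, keepdims=True) - tau) / (1.0 - tau)
    linear = D ** gamma                      # f^-1(D): back to the linear domain
    # Eq. (1): linear ramp between the linearized input and the prediction
    return (1.0 - alpha) * linear + alpha * np.exp(y_hat)
```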
The blending means that the input image is kept unmodified in
the non-saturated regions, and linearization has to be made from
either knowledge of the specific camera used or by assuming a
[Fig. 3 diagram: LDR encoder (VGG16 conv. layers) → latent representation → HDR decoder; layer types: 3×3 conv., 3×3 conv. + pool, 4×4 deconv., 1×1 conv.; domain transformation skip-connections between corresponding encoder and decoder levels; spatial resolutions from a 320×320×3 input down to a 10×10×512 latent representation and back.]
Fig. 3. Fully convolutional deep hybrid dynamic range autoencoder network, used for HDR reconstruction. The encoder converts an LDR input to a latent
feature representation, and the decoder reconstructs this into an HDR image in the log domain. The skip-connections include a domain transformation from
LDR display values to logarithmic HDR, and the fusion of the skip-layers is initialized to perform an addition. The network is pre-trained on a subset of the
Places database, and deconvolutions are initialized to perform bilinear upsampling. While the specified spatial resolutions are given for a 320 × 320 pixels
input image, which is used in the training, the network is not restricted to a fixed image size.
certain camera curve f . We do not attempt to perform linearization
or color correction with the CNN. Furthermore, information lost
due to quantization is not recovered. We consider these problems
separate for the following reasons:
(1) Linearization: The most general approach would be to linearize either within the network or by learning the weights of
a parametric camera curve. We experimented with both these
approaches, but found them to be too problematic given any
input image. Many images contain too little information in
order to evaluate an accurate camera curve, resulting in high
variance in the estimation. On average a carefully chosen
assumed transformation performs better.
(2) Color correction: The same reasoning applies to color correction. Also, this would require all training data to be properly color graded, which is not the case. This means that given
a certain white balancing transformation of the input, the
saturated regions are predicted within this transformed color
space.
(3) Quantization recovery: Information lost due to quantization can potentially be reconstructed from a CNN. However,
this problem is more closely related to super-resolution and
compression artifact reduction, for which deep learning techniques have been successfully applied [Dong et al. 2015; Ledig
et al. 2016; Svoboda et al. 2016]. Furthermore, a number of
filtering techniques can reduce banding artifacts due to quantization [Bhagavathy et al. 2007; Daly and Feng 2004].
Although we only consider the problem of reconstructing saturated
image regions, we argue that this is the far most important part
when transforming LDR images to HDR, and that it can be used to
cover a wide range of situations. Typical camera sensors can capture
between 8 and 12 stops of dynamic range, which is often sufficient
to register all textured areas. However, many scenes contain a small
number of pixels that are very bright and thus saturated. These can
be reconstructed with the proposed method, instead of capturing
multiple exposures or using dedicated HDR cameras. Our method is
not intended to recover the lower end of the dynamic range, which
is below the noise floor of a sensor. Instead, the problem of underexposed areas is best addressed by increasing exposure time or gain
(ISO). This will result in more saturated pixels, which then can be
recovered using our approach.
3.2 Hybrid dynamic range autoencoder
Autoencoder architectures transform the input to a low-dimensional
latent representation, and a decoder is trained to reconstruct the
full-dimensional data [Hinton and Salakhutdinov 2006]. A denoising
autoencoder is trained with a corrupted input, with the objective of
reconstructing the original uncorrupted data [Vincent et al. 2008].
This is achieved by mapping to a higher level representation that is
invariant to the specific corruption. We use the same concept for
reconstruction of HDR images. In this case the corruption is clipped
highlights, and the encoder maps the LDR to a representation that
can be used by the decoder for HDR reconstruction. This means that
the encoder and decoder work in different domains of pixel values,
and we design them to optimally account for this. Since our objective
is to reconstruct larger images than is practical to use in training,
the latent representation is not a fully connected layer, but a low-resolution multi-channel image. Such a fully convolutional network
(FCN) enables predictions at any resolution that is a multiple of the
autoencoder downscaling factor.
The complete autoencoder design is depicted in Figure 3. Convolutional layers followed by max-pooling encode the input LDR in a W/32 × H/32 × 512 latent image representation, where W and H are the image width and height, respectively. The encoder layers correspond to the well-known VGG16 network [Simonyan and Zisserman 2014], but without the fully connected layers.
While the encoder operates directly on the LDR input image, the
decoder is responsible for producing HDR data. For this reason the
decoder operates in the log domain. This is accomplished using
a loss function that compares the output of the network to the
log of the ground truth HDR image, as explained in Section 3.4.
For the image upsampling, we use deconvolutional layers with a
spatial resolution of 4 × 4 initialized to perform bilinear upsampling
[Long et al. 2015]. While nearest neighbor up-sampling followed by
convolution has been shown to alleviate artifacts that are common in
decoder deconvolutions [Odena et al. 2016], we have not experienced
such problems, and instead use the more general deconvolutional
layers. All layers of the network use ReLU activation functions, and
after each layer of the decoder a batch normalization layer [Ioffe
and Szegedy 2015] is used.
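To make the layout of Figure 3 more tangible, the following is a minimal PyTorch-style sketch of the encoder/decoder structure: a pretrained VGG16 convolutional encoder and a decoder of 4×4 deconvolutions with batch normalization and ReLU. The channel progression is read off the figure and simplified, the skip-connections of Section 3.3 and all initialization details are omitted, and this is not the authors' released implementation.

```python
import torch.nn as nn
from torchvision.models import vgg16

class HDRAutoencoder(nn.Module):
    """Sketch of the hybrid dynamic range autoencoder (skip-connections omitted)."""

    def __init__(self):
        super().__init__()
        # LDR encoder: VGG16 convolutional layers (downscale by a factor of 32)
        self.encoder = vgg16(pretrained=True).features
        # HDR decoder: 4x4 deconvolutions back to full resolution
        channels = [512, 512, 256, 128, 64, 3]
        blocks = []
        for i, (c_in, c_out) in enumerate(zip(channels[:-1], channels[1:])):
            blocks.append(nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1))
            if i < len(channels) - 2:          # keep the final layer linear in this sketch
                blocks += [nn.BatchNorm2d(c_out), nn.ReLU(inplace=True)]
        self.decoder = nn.Sequential(*blocks)

    def forward(self, x):
        # the decoder output is interpreted as log-domain HDR values
        return self.decoder(self.encoder(x))
```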
3.3 Domain transformation and skip-connections
The encoding of the input image means that much of the high resolution information in earlier layers of the encoder are lost. The
information could potentially be used by the decoder to aid reconstruction of high frequency details in saturated regions. Thus, we
introduce skip-connections that transfer data between both high
and low level features in the encoder and decoder.
Skip-connections have been shown to be useful for constructing
deeper network architectures which improve performance in a variety of tasks [He et al. 2016]. For autoencoders, where layers have
different spatial resolution, a separate residual stream can be maintained in full resolution, with connections to each layer within the
autoencoder [Pohlen et al. 2017]. Alternatively, skip-connections
between layers of equal resolution in encoder and decoder have
also been shown to boost performance in a variety of imaging tasks
using autoencoders [Ronneberger et al. 2015; Zhang and Lalonde
2017].
Our autoencoder uses skip-connections to transfer each level of
the encoder to the corresponding level on the decoder side. Since the
encoder and decoder process different types of data (see Section 3.2),
the connections include a domain transformation described by an
inverse camera curve and a log transformation, mapping LDR display values to a logarithmic HDR representation. Since the camera
curve is unknown, we have to assume its shape. Although a sigmoid
function fits well with camera curves in general [Grossberg and
Nayar 2003], its inverse is not continuous over IR + . The linearization
of the skip-connections is therefore done using a gamma function
f −1 (x) = x γ , where γ = 2.
(a) Input (b) Without skip (c) With skip (d) Ground truth
Fig. 4. Zoom-ins of reconstruction without (b) and with (c) the domain
transformation skip-connections. The plain autoencoder architecture can
reconstruct high luminance, but without skip-connections the detail information around saturated regions cannot be fully exploited.
A skip-connected layer is typically added to the output of the layer
at the connection point. However, to allow for additional freedom,
we concatenate the two layers along the feature dimension. That is,
given two W × H × K dimensional layers, the concatenated layer is
W × H × 2K. The decoder then makes a linear combination of these,
that reduces the number of features back to K. This is equivalent
to using a convolutional layer with a filter size of 1 × 1, where the
number of input and output channels are 2K and K, respectively, as
depicted in Figure 3. More specifically, the complete LDR to HDR
skip connection is defined as
$$\tilde h^D_i = \sigma\!\left( W \begin{bmatrix} h^D_i \\ \log\!\left(f^{-1}\!\left(h^E_i\right) + \epsilon\right) \end{bmatrix} + b \right). \qquad (3)$$
The vectors h^E_i and h^D_i denote the slices across all the feature channels k ∈ {1, ..., K} of the encoder and decoder layer tensors y^E, y^D ∈ R^{W×H×K}, for one specific pixel i. Furthermore, h̃^D_i is the decoder feature vector with information fused from the skip-connected vector h^E_i. b is the bias of the feature fusion, and σ is the
activation function, in our case the rectified linear unit (ReLU). A
small constant ϵ is used in the domain transformation in order to
avoid zero values in the log transform. Given K features, h E and h D
are 1 × K vectors, and W is a 2K × K weight matrix, which maps
the 2K concatenated features to K dimensions. This is initialized to
perform an addition of encoder and decoder features, setting the
weights as
$$W_0 = \begin{pmatrix}
1 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1 & 0 & 0 & \cdots & 1
\end{pmatrix}, \qquad b_0 = 0. \qquad (4)$$
During training, these weights can be optimized to improve the
performance of the skip-connection. Since the linear combination of
features is performed in the log domain, it corresponds to multiplications of linear HDR data. This is an important characteristic of the
domain transformation skip-connections as compared to existing
skip architectures.
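A per-pixel NumPy sketch of the fusion in Equations 3 and 4 is given below, assuming K encoder and decoder features at the connected level. The orientation of W (K × 2K, applied from the left), the value of ε, and the gamma curve are illustrative assumptions; only γ = 2 is stated in the text, and the exact ε used in the paper is not given here.

```python
import numpy as np

def init_fusion(K):
    """W_0 = [I_K  I_K], b_0 = 0: the fusion starts out as an addition (Eq. 4)."""
    return np.hstack([np.eye(K), np.eye(K)]), np.zeros(K)

def fuse_skip(h_D, h_E, W, b, gamma=2.0, eps=1e-5):
    """Domain-transformation skip-connection for one pixel (Eq. 3).

    h_D : decoder feature vector (length K), already in the log HDR domain.
    h_E : encoder feature vector (length K), in LDR display values.
    eps : small constant (assumed value) to avoid log(0).
    """
    transformed = np.log(h_E ** gamma + eps)        # LDR -> log HDR domain
    concatenated = np.concatenate([h_D, transformed])
    return np.maximum(0.0, W @ concatenated + b)    # ReLU activation
```

With the initialization from init_fusion, the fusion reduces to a plain addition of the decoder features and the domain-transformed encoder features, which in the linear domain corresponds to a multiplication, as noted above.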
(a) Input (b) Direct loss (eq. 5) (c) I/R loss (eq. 7) (d) Ground truth
Fig. 6. Zoom-in of a reconstruction with different loss functions. The input
(a) is exposure corrected and clipped to have a large amount of information lost. The direct pixel loss (b) is more prone to generating artifacts as
compared to the illuminance + reflectance loss (c).
(a) Input (b) λ = 0.9 (c) λ = 0.05 (d) Ground truth
Fig. 5. Zoom-ins of reconstructions with different relative weight of illuminance and reflectance, λ in Equation 7. A higher weight of illuminance will
in general better predict high intensity regions (b), while a higher weight of
reflectance is better at deducing local colors and details (c).
An example of the impact of the described skip-connection architecture is shown in Figure 4. The autoencoder design is able to
reconstruct HDR information from an encoded LDR image. However, all information needed by the decoder has to travel trough the
intermediate encoded representation. Adding the skip-connections
enables a more optimal use of existing details.
3.4 HDR loss function
A cost function formulated directly on linear HDR values will be
heavily influenced by high luminance values, leading to underestimation of important differences in the lower range of luminaces. The
few existing deep learning systems that predict HDR have treated
this problem by defining the objective function in terms of tonemapped luminance [Kalantari and Ramamoorthi 2017; Zhang and
Lalonde 2017]. In our system the HDR decoder is instead designed
to operate in the log domain. Thus, the loss is formulated directly
on logarithmic HDR values, given the predicted log HDR image ŷ
and the linear ground truth H ,
$$L(\hat y, H) = \frac{1}{3N} \sum_{i,c} \left( \alpha_i \left( \hat y_{i,c} - \log\left(H_{i,c} + \epsilon\right) \right) \right)^2, \qquad (5)$$
where N is the number of pixels. Since Hi,c ∈ IR + , the small constant
ϵ removes the singularity at zero pixel values. The cost formulation
is perceptually motivated by the the close to logarithmic response
of the human visual system (HVS) in large areas of the luminance
range, according to the Weber-Fechner law [Fechner 1965]. The law
implies a logarithmic relationship between physical luminance and
the perceived brightness. Thus, a loss formulated in the log domain
makes perceived errors spread approximately uniformly across the
luminance range.
As described in Section 3.1, we use only the information from the
predicted HDR image ŷ around saturated areas. This is also reflected
by the loss function in Equation 5 where the blend map α from
Equation 2 is used to spatially weight the error.
Treating the illuminance and reflectance components separately
makes sense from a perceptual standpoint, as the visual system may
indirectly perform such separation when inferring reflectance or
discounting illumination [Gilchrist and Jacobsen 1984]. We therefore also propose another, more flexible loss function that treats
illuminance and reflectance separately. The illumination component
I describes the global variations, and is responsible for the high
dynamic range. The reflectance R stores information about details
and colors. This is of lower dynamic range and modulates the illuminance to create the final HDR image, Hi,c = Ii Ri,c . We approximate
the log illuminance by means of a Gaussian low-pass filter G σ on
the log luminance Lŷ ,
log I_i^ŷ = ( G_σ ∗ L^ŷ )_i ,    (6)
log R_{i,c}^ŷ = ŷ_{i,c} − log I_i^ŷ .
Since the estimation is performed in the log domain, the log reflectance is the difference between ŷ and the log illuminance. L^ŷ is a linear combination of the color channels, L_i^ŷ = log( Σ_c w_c exp(ŷ_{i,c}) ),
where w = {0.213, 0.715, 0.072}. The standard deviation of the
Gaussian filter is set to σ = 2. The resulting loss function using I
and R is defined as
L_{I/R}(ŷ, H) = (λ / N) Σ_i ( α_i ( log I_i^ŷ − log I_i^y ) )² + ((1 − λ) / 3N) Σ_{i,c} ( α_i ( log R_{i,c}^ŷ − log R_{i,c}^y ) )² ,    (7)
where y = log(H + ϵ) to simplify notation. The user-specified
parameter λ can be tuned for assigning different importance to the
illuminance and reflectance components. If not stated otherwise,
we use the illuminance + reflectance (I/R) loss with λ = 0.5 for all
results in this paper. This puts more importance on the illuminance
since the error in this component generally is larger. Figure 5 shows
examples of predictions where the optimization has been performed
with different values of λ. With more relative weight on illuminance,
high intensity areas are in general better predicted, which e.g. could
benefit IBL applications. If the reflectance is given more importance,
local colors and details can be recovered with higher robustness, for
better quality e.g. in post-processing applications.
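A corresponding sketch of the illuminance + reflectance loss in Equations 6-7 is given below, using SciPy's Gaussian filter for the low-pass step. The helper names and the ϵ value are ours, and the separation is written per image rather than per mini-batch; it is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

W = np.array([0.213, 0.715, 0.072])   # luminance weights used for the log luminance

def ir_loss(y_hat_log, H_lin, alpha, lam=0.5, sigma=2.0, eps=1e-5):
    """Eq. 7: separate illuminance and reflectance terms, blended by lam."""
    y_log = np.log(H_lin + eps)

    def illum_reflect(log_img):
        lum = (np.exp(log_img) * W).sum(axis=-1)      # linear luminance
        log_I = gaussian_filter(np.log(lum), sigma)   # Eq. 6: low-pass of log luminance
        log_R = log_img - log_I[..., None]            # Eq. 6: log reflectance
        return log_I, log_R

    I_hat, R_hat = illum_reflect(y_hat_log)
    I_gt,  R_gt  = illum_reflect(y_log)
    term_I = np.mean((alpha * (I_hat - I_gt)) ** 2)              # (1/N) sum over pixels
    term_R = np.mean((alpha[..., None] * (R_hat - R_gt)) ** 2)   # (1/3N) sum over pixels and channels
    return lam * term_I + (1 - lam) * term_R
```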
The visual improvements from using the I/R loss compared to the direct loss in Equation 5 are subtle. However, in general it tends to produce fewer artifacts in large saturated areas, as exemplified in Figure 6. One possible explanation is that the Gaussian low-pass filter in the loss function could have a regularizing effect, since it makes the loss in a pixel influenced by its neighborhood. This observation is further supported by the comparison in Table 1, where
the I/R loss lowers the final error in Equation 5 by more than 5%, demonstrating better generalization to the test data.
Fig. 7. Histograms over two LDR datasets (Flickr and Places) and the pre-processed HDR dataset, plotted against relative luminance. The creation of the HDR data is described in Section 4. For the LDR data, the probabilities show a large increase close to 1, indicating saturated information. The HDR dataset contains such information, represented by the tail of decreasing frequency.
Fig. 8. A 150×150 pixels zoom-in of a reconstruction: (a) input, (b) no pre-training, (c) pre-trained, (d) ground truth. Using pre-training the CNN is in general more consistent and can reconstruct smaller highlights better.
4 HDR IMAGE DATASET
A key challenge for a learning-based HDR reconstruction is to obtain
a sufficiently large set of well structured training data. However, an
increasing amount of HDR content has become available, in particular through recent HDR video datasets [Azimi et al. 2014; Boitard
et al. 2014; Froehlich et al. 2014; Kronander et al. 2014]. We were able
to gather a total of 1121 HDR images and 67 HDR video sequences.
The sources of the data are specified in the supplementary document. Four video sequences and 95 images are separated from these to create the test set used throughout this paper, and the rest are used for training. Since consecutive frames from a video sequence are expected to be very similar, we use every 10th frame. Together with the static images, this results in a total of ∼3700 HDR images. Using a virtual camera, a carefully designed set of data augmentation operations is then applied in order to improve robustness of the predictions.
Considering each HDR image as a real-world scene, we set up a virtual camera that captures a number of random regions of the scene using a randomly selected camera calibration. This provides us
with an augmented set of LDR and corresponding HDR images that
are used as input and ground truth for training, respectively. The
regions are selected as image crops with random size and position,
followed by random flipping and resampling to 320 × 320 pixels. The
camera calibration incorporates parameters for exposure, camera
curve, white balance and noise level. These are randomly selected,
with the camera curve defined as a parametric function fitted to the
database of camera curves collected by Grossberg and Nayar [2003].
For details on the augmentation, we refer to Appendix A.
In total we capture ∼125K training samples from the HDR dataset
using the virtual camera. This augmentation is responsible for creating a final trained model that generalizes well to a wide range of
images captured with different cameras.
4.1 Image statistics
It is important to note that the dynamic range statistics of LDR
and HDR images differ considerably. Figure 7 shows averaged histograms over two typical LDR datasets, as well as our HDR dataset
of 125K images. The LDR data are composed of around 2.5M and
200K images for Places [Zhou et al. 2014] and Flickr, respectively.
The LDR histograms show a relatively uniform distribution of pixel values, except for distinct peaks near the maximum value, representing information lost due to saturation. In the HDR histogram, on the other hand, pixels are not saturated, and are instead represented by an exponentially decaying long tail. Although there are not many pixels with extreme intensities, these are very important to properly learn a plausible HDR image.
5 TRAINING
To initialize the weights in the network we use different strategies for different parts of the network. As we use the convolutional layers from the well-known VGG16 network, we can use pre-trained weights available for large scale image classification on the Places database [Zhou et al. 2014] to initialize the encoder. The decoder deconvolutions are initialized for bilinear upsampling, and the fusions of skip-connection layers are initialized to perform addition of features (Equation 4). For the convolutions within the latent image representation (right-most of Figure 3) and the final feature reduction (top-left in Figure 3) we use Xavier initialization [Glorot and Bengio 2010].
Minimization is performed with the ADAM optimizer [Kingma
and Ba 2014], with a learning rate of 5 × 10−5 , on the loss function in
Equation 7. In total 800K steps of back-propagation are performed,
with a mini-batch size of 8, taking approximately 6 days on an Nvidia
Titan X GPU.
5.1 Pre-training on simulated HDR data
As we have a limited amount of HDR data at hand, we use transfer
learning by pre-training the entire network on a large simulated
HDR dataset. To this end, we select a subset of the images in the
Places database [Zhou et al. 2014], requiring that the images should
not contain saturated image regions. Given the set of all Places images P, this subset S ⊂ P is defined as
S = {D | D ∈ P, p_D(255) < ξ} ,    (8)
where p_D is the image histogram. For the threshold we use ξ = 50/256². Thus, if less than 50 pixels (0.076% of the 256² pixels of an image) have maximum value, we include the image in the training set. For the Places database this gives ∼600K images of the total set size of ∼2.5M. The averaged histogram over the subset S, plotted in Figure 7, does not show the peak of saturated pixels as the original set P. By linearizing the images D ∈ S using the inverse of the camera curve f in Equation 10 and increasing the exposure, H = s f⁻¹(D), we create a simulated HDR training dataset.
Fig. 9. The input images (a) have been exposure corrected, followed by camera transformation, quantization and clipping. 5% of the pixels are saturated and contain no information. Visually convincing reconstructions (b) can be made in a wide range of situations. The reconstructions correspond well to the ground truth HDR images (c). The exposures of the images have been reduced to show the differences, and all images have been gamma corrected for display.
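As a rough illustration, the selection rule of Equation 8 and the linearization step can be sketched as follows. The inverse of the sigmoid camera curve in Equation 10 is used with its mean parameters, the exposure factor is an arbitrary placeholder, and the pixel-count threshold assumes 256×256 crops; none of the names below come from the authors' code.

```python
import numpy as np

def is_non_saturated(img_u8, max_pixels=50):
    """Eq. 8 for 256x256 crops: keep the image if fewer than `max_pixels`
    pixels sit at the maximum 8-bit value."""
    return np.count_nonzero(img_u8 == 255) < max_pixels

def simulate_hdr(img_u8, n=0.9, sigma=0.6, exposure=8.0):
    """Linearize with the inverse of the sigmoid camera curve of Eq. 10,
    f(H) = (1 + sigma) H^n / (H^n + sigma), and boost the exposure."""
    D = img_u8.astype(np.float64) / 255.0
    H = (sigma * D / np.maximum(1.0 + sigma - D, 1e-8)) ** (1.0 / n)  # f^{-1}(D)
    return exposure * H   # H = s * f^{-1}(D), with s here a placeholder scaling
```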
The simulated HDR dataset is prepared in the same manner as in
Section 4, but at 224 × 224 pixels resolution and without resampling.
The CNN is trained using the ADAM optimizer with learning rate
2 × 10−5 for 3.2M steps with a batch size of 4.
The result of this pre-training on synthetic data leads to a significant improvement in performance. Small highlights, which sometimes are underestimated, are better recovered, as illustrated in Figure 8, and fewer artifacts are introduced in larger saturated regions. Table 1 shows that the error is reduced by more than 10%.
6 RESULTS
In this section we present a number of examples, verifying the quality of the HDR reconstructions. Additional visual examples can be found in the supplementary material and video. Furthermore, for prediction using any LDR image the CNN together with trained parameters can be downloaded from: https://github.com/gabrieleilertsen/hdrcnn.
6.1 Test errors
To justify the different model and training strategies explained in Sections 3 and 5, we evaluate the success of the optimization in Table 1. The errors of the different configurations have been averaged over the test set of 95 HDR images, reconstructed at 1024 × 768 pixels resolution. The LDR images used for reconstruction use virtual exposures and clipping such that 5% of the pixels in each image are saturated.
Table 1 shows how different errors are affected by the different training strategies. The CNN without skip-connections can drastically reduce the MSE of the input. However, adding the skip-connections reduces the error by an additional 24%, and creates images with substantial improvements in details (Figure 4). Comparing the two different loss functions in Equation 5 and 7, the latter I/R loss shows a lower error both in terms of I/R and direct MSE, with a reduction of 5.8%. Finally, with pre-training and the I/R loss the best training performance is accomplished, lowering the error by 10.7% as compared to no pre-training. All the trainings/pre-trainings that do not use our pre-trained parameters have been initialized with VGG16 encoder weights trained for classification of the Places dataset [Zhou et al. 2014].
Table 1. Different MSEs evaluated over the test set. Rows show different training strategies, while columns evaluate with different errors. The direct MSE is from Equation 5, while the I/R, I and R MSEs use Equation 7 with λ = 0.5, 1 and 0, respectively. The reference is the input image without reconstruction. Adding skip-connections improves the result to a large extent. The illuminance + reflectance loss lowers both direct MSE and in terms of illuminance and reflectance, as compared to optimizing for only the direct loss. Pre-training has a significant impact on the result.

                        Direct   I/R     I       R
Reference               0.999    0.890   0.712   0.178
Without skip-conn.      0.249    0.204   0.102   0.102
Direct loss (eq. 5)     0.189    0.159   0.090   0.069
I/R loss (eq. 7)        0.178    0.150   0.081   0.068
Pre-train. + I/R loss   0.159    0.134   0.069   0.066
Fig. 10. The learned residual of the image in Figure 1 (top left), together with the ground truth residual (top right). The plots show relative luminance values across the two scanlines marked in the images. Complex regions are predicted convincingly (bottom left), but for the very intense spotlight the signal is underestimated (bottom right). The underestimation is less pronounced when training with higher illuminance weight (λ in Equation 7).
Fig. 11. Reconstruction from Canon 5DS R camera JPEG (top): (a) camera JPEG input, (b) reconstruction. The examples are the same as in Figure 9, but with the camera set to store JPEG, applying unknown image transformations and compression. Reconstruction can also be performed with hand-held low-end cameras, such as the iPhone 6S (bottom): (c) iPhone image input, (d) reconstruction, where the input image is more degraded.
6.2 Comparisons to ground truth
Figure 9 demonstrates a set of predictions on HDR images from the
test set that have been transformed to LDR by the virtual camera
described in Section 4. The examples demonstrate successful HDR
reconstruction in a variety of situations. In night scenes, colors and
intensities of street lights and illuminated facades can be restored
with very convincing quality. The same goes for specular reflections and other smaller highlights in day scenes. Furthermore, in
situations where there is some small amount of information left
in any of the color channels, details and colors of larger areas can
be reconstructed to be very close to the ground truth HDR image.
For example, in the right-most column the large area light source
does not contain any visual details when inspecting the input image. However, some small amount of invisible information enables
recovery of details. Also, while all channels are saturated in the sky
in the third column, the spatial context can be utilized to infer a
blue color.
In order to visualize the information that is reconstructed by the CNN, Figure 10 shows the residual, r̂ = max(0, Ĥ − 1). That is,
only the information in highlights is shown, which is not present in
the input image D ∈ [0, 1]. The information corresponds well with
the ground truth residual, r = max (0, H − 1). The complete input
and output signals are also plotted for two different scanlines across
the images. Complex lighting of street lights and windows can be
recovered convincingly (bottom left). However, in some situations of
very intense light sources, the luminance is underestimated (bottom
right). We elaborate on such limitations in Section 8.
6.3 Reconstruction with real-world cameras
In order to show that the HDR reconstruction model generalizes to real-world cameras, Figure 11 shows two of the scenes from Figure 9, captured using a Canon 5DS R camera's JPEG mode (top row) and with an iPhone 6S camera (bottom row). Both these cameras provide more realistic scenarios as compared to the virtual camera. Nevertheless, reconstructions of equal quality can be done from camera JPEGs. The iPhone images are more degraded, shot in dark conditions without a tripod, but the reconstructed information complies well with the image characteristics. To further explore the possibilities in reconstructing everyday images, Figure 12 displays a set of iPhone images, taken in a variety of situations. The examples not only demonstrate the method's ability to generalize to a different camera, but also to a wide range of situations, such as skin tones, fire and caustics. The most limiting factor when reconstructing from iPhone images is the hard JPEG compression that has been applied, which results in small blocking artifacts close to highlights, affecting the final reconstruction. In order to compensate for these, the brightness of the images has been increased by a small factor, followed by clipping. This removes the artifacts caused by harder compression in the brightest pixels, which improves the final reconstruction quality.
Fig. 12. Predictions on iPhone camera images: (a) input, (b) reconstruction. Plausible reconstructions can be made of skin tones, caustics and fire (rows 1-3). Large saturated areas can be recovered if there is still information in one of the color channels (bottom row).
Fig. 13. Zoom-ins of reconstructions (bottom row) with different exposure settings of the input (top row): (a) 4%, (b) 6%, (c) 8%, (d) 10%. The numbers indicate how large a fraction of the total number of pixels is saturated in the input. The images have then been scaled to have the same exposure after clipping has been applied. Although illuminance is predicted at approximately the same level, more details are available in reconstruction of the shorter exposure images.
6.4 Changing clipping point
In order to demonstrate the behavior of the reconstruction with
varying amount of information loss, Figure 13 shows predictions
using different virtual exposure times. As the training of the CNN
uses a virtual camera with different exposure settings, part of the
objective is to minimize the difference between these, apart from
a scaling factor. However, since more information is available in
highlights with shorter exposure, in most cases there will be visible
differences in the reconstruction quality, as exemplified by the figure.
6.5 Comparison to iTMOs
Figure 14 shows the HDR reconstruction compared to three existing methods for inverse tone-mapping. These are examples of local methods that apply different processing in saturated areas in order to boost the dynamic range of an LDR image. The results can successfully convey an impression of HDR when viewed on an HDR capable display. However, when inspecting the highlights, local information is not recovered by a naïve scaling of image highlights. With our CNN we are able to predict saturated regions based on a high level understanding of the context around the area, which makes it possible to actually recover convincing colors and structural content.
Fig. 14. Comparison to some existing iTMOs: (a) input LDR, (b) [Banterle et al. 2008], (c) [Meylan et al. 2006], (d) [Rempel et al. 2007], (e) ours, (f) ground truth. Since the iTMO results usually are calibrated for an HDR display, they have been scaled to the same range for comparison. Although they can deliver an impression of increased dynamic range by boosting highlights, when inspecting the saturated image regions little information has actually been reconstructed. The CNN we use can make a prediction that is significantly closer to the true HDR image.
Fig. 15. IBL using reconstructed highlights: (a) input LDR, (b) iTMO [Banterle et al. 2008], (c) ours, (d) ground truth. The top row shows the panoramas that are used for IBL in the bottom row. Rendering with the LDR input gives a dark and undynamic result. The iTMO boosts brightness to alleviate the problems. With our reconstruction, although all details cannot be recovered in the large saturated areas of the windows, the estimated luminance enables a visually convincing rendering that is much closer to the ground truth.
6.6 Image based lighting
An important application of HDR imaging is image based lighting
(IBL), where the omnidirectional lighting of a particular scene is captured in a panorama and used for re-lighting of a computer graphics
model. For this task it is of major importance that the entire range
of luminances is present within the image, in order to convey a
faithful and dynamic rendering. Figure 15 shows a panorama from
an indoor scene, where the majority of illuminance is due to two
windows. In the LDR image the windows are saturated, and the result when used for IBL is overall dark and of low contrast. The iTMO
by Banterle et al. [2008] can accomplish a more dynamic rendering
by boosting image highlights, although the result is far from the
ground truth. With our learning-based method, the reconstructed
panorama shows some loss of details in the large saturated areas of
the windows, but the illuminance is estimated convincingly. This
makes it possible to render a result that is very close to the ground
truth.
7 EVALUATION
In order to assess the perceived visual quality of our novel HDR
reconstruction, we performed a subjective pairwise comparison
experiment. 15 participants took part in the experiment, aged 19 –
40 with normal or corrected-to-normal full color vision.
7.1 Setup
We used a calibrated projector-based HDR display, similar to the one described by Seetzen et al. [2004], at a viewing distance of 80 cm. The display consisted of an Acer P1276 1024 × 768 DLP projector with removed color wheel, and a 9.7" 2048 × 1536 iPad Retina display panel with removed backlight. The maximal and minimal luminance of the display was 5000 cd/m² and 0.1 cd/m², yielding a maximal contrast range of 50 000:1.
7.2 Stimuli
Each stimulus consisted of a pair of images identical in terms of content, but reproducing one of the four processing or display scenarios: (1) LDR image with its dynamic range clamped to that of a typical DSLR camera (10.5 stops); (2) ground truth HDR image; (3) iTMO technique by Banterle et al. [2008], shown to perform the best in the evaluation [Banterle et al. 2009]; (4) output from our CNN-based HDR reconstruction (Pre-train + I/R loss). To prevent overall image brightness from affecting the outcome of the experiment, we fixed the luminance of the 90th percentile pixel value for all methods to 180 cd/m². To avoid bias in the selection of images, we used a randomly selected sample of 25 images from a pool of the 95 images in the test set.
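The brightness anchoring amounts to a single scaling per image; the sketch below assumes a luminance map in cd/m² and simply maps its 90th percentile to 180 cd/m² (function and argument names are ours, not from the paper).

```python
import numpy as np

def anchor_brightness(luminance, target=180.0, percentile=90):
    """Scale a luminance map so that its given percentile equals `target` cd/m^2."""
    return luminance * (target / np.percentile(luminance, percentile))
```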
7.3 Task
We used a two-alternative forced choice experiment, where in each
trial the participants were shown a pair of images side-by-side. The
task was to select the image that looked more natural. Our definition of natural involved “the depth and vividness that bears most
resemblance to a view that you could experience looking through a
window”. Participants were given unlimited time to look at the images and to make their decisions. Before each session, participants
were briefed about their task both verbally and in writing, followed
by a short training session to gain familiarity with the display equipment and the task. We used a full pairwise comparison design, in
which all pairs were compared. Each observer compared each pair
three times, resulting in 450 trials per observer. The order of the
stimuli as well as their placement on the screen were randomized.
7.4 Results
The result of the pairwise comparison experiment is scaled in just-objectionable-differences (JODs), which quantify the relative quality differences. The scaling is performed using publicly available software (pairwise comparison scaling software for Matlab: https://github.com/mantiuk/pwcmp), which formulates the problem as a Bayesian inference under the Thurstone Case V assumptions, and uses a maximum-likelihood estimator to find relative JOD values. Unlike standard scaling procedures, the Bayesian approach robustly scales pairs of conditions for which there is unanimous agreement. Since JOD values are relative, we fix the starting point of the JOD scale at 0 for the LDR images. When two points are a single unit apart on the
JOD space, approximately 75% of the population are expected to perceive an objectionable quality difference.
Fig. 16. Results of the subjective quality experiment for the 25 test scenes, with quality in JOD units on the vertical axis for HDR, ours and iTMO. The error bars represent 95% confidence intervals computed by bootstrapping. All values on the JOD scale are relative to the LDR images. Negative scores indicate a lower perceived naturalness for the iTMO technique when compared with LDR images. The output of our CNN-based HDR reconstruction method surpasses LDR and is comparable to the original HDR images in most cases.
The results of the subjective quality experiment are shown in
Figure 16. Unexpectedly, the iTMO technique by Banterle et al. was
judged to be the least natural, even less so than the LDR images. This
can be explained by the operator's attempt to invert the camera response curve, which often results in reduced contrast and inaccurate
colors. Figure 17 row (a) shows the 19th image from the evaluation,
where the effect of over-boosted colors can be easily observed. As
expected, LDR images were rated worse than the original HDR images in almost all cases. Most participants were mainly accustomed
to standard display monitors, which, as reflected upon by some
subjects during the unstructured post-experiment interview, might
have affected their perception of “natural”. With more exposure to
HDR displays we expect the perceived quality of LDR images to
drop in the future. According to the data, our CNN-based images are
very likely to perform better than their original LDR counterparts.
The number of times our CNN images were picked is slightly lower than, but comparable to, that of the original HDR images. Figure 17 rows (b) and
(c) illustrate two scenes, where the algorithm succeeds in estimating
the spatial texture of high luminance areas, producing plausible
results. The performance of the CNN is scene-dependent, and can
sometimes introduce artifacts in the highlights that affect perceived
naturalness. One example is depicted in Figure 17 row (d), where
the artificial texture of the sun is immediately obvious. Overall, the
CNN-based reconstruction improves the subjective quality as compared to the input LDR images, which is not always the case for the
state-of-the-art iTMO techniques. The reconstructed images are in
most cases comparable to ground truth HDR images, as evidenced
by the quality differences of less than 1 JOD.
8 CONCLUSION AND FUTURE WORK
HDR reconstruction from an arbitrary single exposed LDR image is a
challenging task. To robustly solve this problem, we have presented
a hybrid dynamic range autoencoder. This is designed and trained
taking into account the characteristics of HDR images in the model
architecture, training data and optimization procedure. The quality
and versatility of the HDR reconstruction have been demonstrated
through a number of examples, as well as in a subjective experiment.
8.1 Limitations
There is a content-dependent limit on how much missing information the network can handle, which is generally hard to quantify.
Figure 18 shows two examples of difficult scenes that are hard to
reconstruct. The first row has a large region with saturation in all
color channels, so that structures and details cannot be inferred.
However, illuminance may still be estimated well enough in order to
allow for high quality IBL, as demonstrated in Figure 15. The second
row of Figure 18 shows a situation where besides a similar loss of
spatial structures, extreme intensities are also underestimated. This
is also demonstrated in Figure 10 with the intense spotlight. The
plot also shows that the problem can be alleviated by altering the
illuminance weight λ in Equation 7. However, underestimation of
highlights is also an inherent problem of the training data. Some
of the HDR images used to create the training data show saturated
pixels in high intensity regions. For example, the sun in Figure 18 is
saturated in the ground truth HDR image.
There is also a limitation on how severe the compression artifacts in the input image can be. If there are blocking artifacts
around highlights, these will impair the reconstruction performance
to some extent.
8.2 Future work
Recovering saturated image regions is only one of the problems in
reconstructing HDR images. Another, less prominent issue is quantization artifacts, which can be alleviated using existing methods
[Daly and Feng 2003]. However, we believe that bit-depth extension
also can be achieved by means of deep learning. For this purpose
existing architectures for compression artifact reduction [Svoboda
et al. 2016] or super resolution [Ledig et al. 2016] are probably better
suited.
The complementary problem of reconstructing saturated pixels is the recovery of dark regions of an image that have been lost due to quantization and noise. This problem is also significantly different from ours in that noise will be a main issue when increasing the exposure of an image.
Another direction for future work is to investigate how to improve reconstruction of images that are degraded by compression artifacts. The most straightforward solution would be to augment the training data with compression artifacts. However, this also runs the risk of lowering the quality for reconstructions of images without any compression applied.
Finally, although recent development in generative adversarial networks (GAN) [Goodfellow et al. 2014; Radford et al. 2015] shows promising results in a number of imaging tasks [Ledig et al. 2016; Pathak et al. 2016], they have several limitations. An important challenge for future work is to overcome these, in order to enable high-resolution and robust estimation.
Fig. 17. Examples of typical images in the subjective experiment, comparing LDR, iTMO, ours and HDR. a) Image 19 demonstrates how the inaccurate colors of the iTMO reduce perceived realism. b) Image 21 with a successful HDR reconstruction. c) Image 4 shows how even an inaccurate estimate of the highlight luminance still produces plausible results. d) Image 8 is an example of unsuccessful reconstruction, producing artifacts that heavily affect the perceived naturalness.
Fig. 18. Zoom-ins of areas where reconstruction fails, shown at −6 stops: (a) input, (b) reconstruction, (c) ground truth. (Top) A large fraction of the pixels are saturated, and the structures cannot be recovered properly. (Bottom) The intensity and shape of the sun are not estimated correctly.
ACKNOWLEDGMENTS
The authors would like to thank Francesco Banterle for the invaluable discussions and help with inverse tone-mapping operators, and the anonymous reviewers for helping in improving the manuscript. This work was funded through Linköping University Center for Industrial Information Technology (CENIIT), the Swedish Science Council through Grant 2015-05180, and the Wallenberg Autonomous Systems Program (WASP).
APPENDIX A: DATA AUGMENTATION
In this appendix we specify the details of the virtual camera used in Section 4 for augmentation of HDR images.
A.1 Random cropping
For each mega-pixel of HDR data, N sub-images are selected at
random positions and sizes. For the trainings we perform, we choose
N = 10, which results in a final training set of ∼125K images. The
sizes are drawn uniformly from the range [20%, 60%] of the size of
an input image, followed by bilinear resampling to 320x320 pixels.
Since 320 pixels corresponds to 20%–60% of the original images,
the training is optimized to account for images that are between
320/0.6 = 533 and 320/0.2 = 1600 pixels.
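A minimal sketch of this cropping step is shown below, assuming the HDR image is an (h, w, 3) array; it uses nearest-neighbour resampling to stay dependency-free, whereas the paper resamples bilinearly, and the function name is hypothetical.

```python
import numpy as np

def random_crops(hdr, n_per_mpix=10, out_size=320, rng=np.random.default_rng()):
    """A.1: take N random crops per mega-pixel, each 20%-60% of the image size,
    and resample them to out_size x out_size pixels."""
    h, w, _ = hdr.shape
    n = max(1, int(n_per_mpix * h * w / 1e6))
    crops = []
    for _ in range(n):
        frac = rng.uniform(0.2, 0.6)                 # crop size relative to the input
        ch, cw = int(frac * h), int(frac * w)
        y0 = rng.integers(0, h - ch + 1)
        x0 = rng.integers(0, w - cw + 1)
        crop = hdr[y0:y0 + ch, x0:x0 + cw]
        yi = np.linspace(0, ch - 1, out_size).astype(int)   # nearest-neighbour grid
        xi = np.linspace(0, cw - 1, out_size).astype(int)
        crops.append(crop[yi][:, xi])
    return crops
```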
A.2 Exposure
The exposure of each cropped image is set so that clipping removes a fraction v of the image information. The fraction is uniformly drawn in the range v ∈ [0.05, 0.15]. To accomplish this, v is used to define an exposure scaling s,
s = 1 / H_th ,  s.t.  Σ_{i=H_min}^{H_th} p_H(i) = 1 − v ,    (9)
where p_H is the histogram of the HDR image H. Thus, H_th is the 1 − v percentile, and this defines the scaling s which is applied to the image in order to remove 5-15% of the information when clipping is performed (see Equation 11).
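In code, the scaling can be obtained directly from a percentile. The sketch below follows Equation 9 under the assumption that the histogram is taken over all pixel values of the crop (our reading), with hypothetical names.

```python
import numpy as np

def exposure_scale(H, rng=np.random.default_rng()):
    """A.2 / Eq. 9: draw v in [0.05, 0.15] and return s = 1 / H_th, where H_th is
    the (1 - v) percentile of H, so that clipping s*H at 1 removes roughly a fraction v."""
    v = rng.uniform(0.05, 0.15)
    H_th = np.percentile(H, 100.0 * (1.0 - v))
    return 1.0 / H_th
```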
Fig. 19. A sigmoid function fits well to the mean of the collected camera curves by Grossberg and Nayar [2003] (a). We simulate camera curves from a normal distribution centered at the fitted parameters (b). Both panels plot the registered digital value against the collected radiance.
A.3 Camera curve
To approximate different camera curves we use a parametric function, in the form of a sigmoid,
f(H_{i,c}) = (1 + σ) H_{i,c}^n / (H_{i,c}^n + σ) .    (10)
The scaling 1 + σ is used to normalize the output so that f(1) = 1. We fit the parameters n and σ to the mean of the database of camera curves collected by Grossberg and Nayar [2003], where n = 0.9 and σ = 0.6 gives a good fit as shown in Figure 19. For random selection of camera curves in the training data preparation, we draw the parameters from normal distributions around the fitted values, n ∼ N(0.9, 0.1) and σ ∼ N(0.6, 0.1). As demonstrated in Figure 19 this creates a continuous range that does not include extreme functions such as gamma curves with γ > 1.
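The random camera curve can be sampled as a closure over the two sigmoid parameters; a small sketch, assuming the parameters are drawn independently as stated above (names are ours):

```python
import numpy as np

def random_camera_curve(rng=np.random.default_rng()):
    """A.3 / Eq. 10: sigmoid camera curve with parameters drawn around the
    values fitted to the Grossberg and Nayar [2003] database."""
    n = rng.normal(0.9, 0.1)
    sigma = rng.normal(0.6, 0.1)
    def f(H):
        Hn = np.clip(H, 0.0, None) ** n
        return (1.0 + sigma) * Hn / (Hn + sigma)   # normalized so that f(1) = 1
    return f
```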
A.4 Other
Augmentation in terms of colors is accomplished in the HSV color
space, where we modify the hue and saturation channels. The hue is
altered by means of adding a random perturbation h̃ ∼ N (0, 7). The
same is done for the saturation, but with a narrower distribution,
s̃ ∼ N (0, 0.1).
Finally, a small amount of additive Gaussian noise is injected, with
a standard deviation randomly selected in the range σ ∈ [0, 0.01].
Also, images are flipped horizontally with a probability of 0.5. The
processed linear images H represent the ground truth data, while
the inputs for training are clipped at 1 and quantized,
D_{i,c} = ⌊255 min(1, f(H_{i,c})) + 0.5⌋ / 255 .    (11)
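The final input formation can be sketched as below. Hue/saturation jitter in HSV space is omitted for brevity, the noise is added after the camera curve (our assumption; the exact ordering is not stated), and `f` is a camera curve such as the one sampled above.

```python
import numpy as np

def make_ldr_input(H, f, rng=np.random.default_rng()):
    """A.4: random horizontal flip, additive Gaussian noise, then the clipping
    and 8-bit quantization of Eq. 11 applied to the camera-mapped image f(H)."""
    if rng.random() < 0.5:
        H = H[:, ::-1]                                            # horizontal flip
    D = f(H) + rng.normal(0.0, rng.uniform(0.0, 0.01), H.shape)   # sensor noise
    return np.floor(255.0 * np.clip(D, 0.0, 1.0) + 0.5) / 255.0   # Eq. 11 (also clamped at 0)
```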
REFERENCES
A. O. Akyüz, R. Fleming, B. E. Riecke, E. Reinhard, and H. H. Bülthoff. 2007. Do HDR
Displays Support LDR Content?: A Psychophysical Evaluation. ACM Trans. Graph.
26, 3, Article 38 (2007).
M. Azimi, A. Banitalebi-Dehkordi, Y. Dong, M. T. Pourazad, and P. Nasiopoulos. 2014.
Evaluating the Performance of Existing Full-Reference Quality Metrics on High
Dynamic Range (HDR) Video Content. In International Conference on Multimedia
Signal Processing (ICMSP ’14), Vol. 1. 789.
F. Banterle, A. Artusi, K. Debattista, and A. Chalmers. 2011. Advanced High Dynamic
Range Imaging: Theory and Practice. AK Peters (CRC Press).
F. Banterle, K. Debattista, A. Artusi, S. Pattanaik, K. Myszkowski, P. Ledda, and A.
Chalmers. 2009. High Dynamic Range Imaging and Low Dynamic Range Expansion
for Generating HDR Content. Computer Graphics Forum 28, 8 (2009), 2343–2367.
F. Banterle, P. Ledda, K. Debattista, M. Bloj, A. Artusi, and A. Chalmers. 2009. A
Psychophysical Evaluation of Inverse Tone Mapping Techniques. Computer Graphics
Forum 28, 1 (2009), 13–25.
F. Banterle, P. Ledda, K. Debattista, and A. Chalmers. 2006. Inverse Tone Mapping. In
Proceedings of the 4th International Conference on Computer Graphics and Interactive
Techniques in Australasia and Southeast Asia (GRAPHITE ’06). ACM, 349–356.
F. Banterle, P. Ledda, K. Debattista, and A. Chalmers. 2008. Expanding Low Dynamic
Range Videos for High Dynamic Range Applications. In Proceedings of the 24th
Spring Conference on Computer Graphics (SCCG ’08). ACM, 33–41.
S. Bhagavathy, J. Llach, and J. f. Zhai. 2007. Multi-Scale Probabilistic Dithering for
Suppressing Banding Artifacts in Digital Images. In The IEEE International Conference
on Image Processing (ICIP ’07), Vol. 4. IV – 397–IV – 400.
R. Boitard, R. Cozot, D. Thoreau, and K. Bouatouch. 2014. Survey of Temporal Brightness Artifacts in Video Tone Mapping. In Second International Conference and SME
Workshop on HDR imaging (HDRi2014).
S. J. Daly and X. Feng. 2003. Bit-depth extension using spatiotemporal microdither
based on models of the equivalent input noise of the visual system. In Proceedings
of SPIE, Vol. 5008. 455–466.
S. J. Daly and X. Feng. 2004. Decontouring: prevention and removal of false contour
artifacts. In Proceedings of SPIE, Vol. 5292. 130–149.
P. E. Debevec and J. Malik. 1997. Recovering High Dynamic Range Radiance Maps from
Photographs. In Proceedings of the 24th Annual Conference on Computer Graphics
and Interactive Techniques (SIGGRAPH ’97). 369–378.
P. Didyk, R. Mantiuk, M. Hein, and H.P. Seidel. 2008. Enhancement of Bright Video
Features for HDR Displays. Computer Graphics Forum 27, 4 (2008), 1265–1274.
C. Dong, Y. Deng, C. C. Loy, and X. Tang. 2015. Compression Artifacts Reduction by
a Deep Convolutional Network. In The IEEE International Conference on Computer
Vision (ICCV ’15).
F. Dufaux, P. Callet, R.K. Mantiuk, and M. Mrak (Eds.). 2016. High Dynamic Range
Video: From Acquisition, to Display and Applications. Vol. 1. Academic Press.
Y. Endo, Y. Kanamori, and J. Mitani. 2017. Deep Reverse Tone Mapping. ACM Trans.
Graph. 36, 6, Article 177 (2017).
G. Fechner. 1965. Elements of psychophysics. Holt, Rinehart & Winston.
J. Froehlich, S. Grandinetti, B. Eberhardt, S. Walter, A. Schilling, and H. Brendel. 2014.
Creating Cinematic Wide Gamut HDR-Video for the Evaluation of Tone Mapping
Operators and HDR-Displays. In Proceedings of SPIE 9023, Digital Photography X.
90230X–90230X–10.
A. Gilchrist and A. Jacobsen. 1984. Perception of lightness and illumination in a world
of one reflectance. Perception 13, 1 (1984), 5–19.
X. Glorot and Y. Bengio. 2010. Understanding the difficulty of training deep feedforward
neural networks. In Proceedings of the Thirteenth International Conference on Artificial
Intelligence and Statistics (AISTATS ’10), Vol. 9. 249–256.
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville,
and Y. Bengio. 2014. Generative Adversarial Nets. In Advances in Neural Information
Processing Systems 27. 2672–2680.
M. D. Grossberg and S. K. Nayar. 2003. What is the space of camera response functions?.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
(CVPR ’03)., Vol. 2. II–602–9.
S. Hajisharif, J. Kronander, and J. Unger. 2015. Adaptive dualISO HDR reconstruction.
EURASIP Journal on Image and Video Processing 2015, 1 (2015), 41.
K. He, X. Zhang, S. Ren, and J. Sun. 2016. Deep Residual Learning for Image Recognition.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
(CVPR ’16).
G. E. Hinton and R. R. Salakhutdinov. 2006. Reducing the dimensionality of data with
neural networks. Science 313, 5786 (2006), 504–507.
S. Iizuka, E. Simo-Serra, and H. Ishikawa. 2016. Let There Be Color!: Joint End-to-end
Learning of Global and Local Image Priors for Automatic Image Colorization with
Simultaneous Classification. ACM Trans. Graph. 35, 4 (2016), 110:1–110:11.
S. Ioffe and C. Szegedy. 2015. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the 32nd International
Conference on Machine Learning (PMLR ’15), Vol. 37. 448–456.
N. K. Kalantari and R. Ramamoorthi. 2017. Deep High Dynamic Range Imaging of
Dynamic Scenes. ACM Trans. Graph. 36, 4 (2017), 144:1–144:12.
D. P. Kingma and J. Ba. 2014. Adam: A Method for Stochastic Optimization. CoRR
abs/1412.6980 (2014). http://arxiv.org/abs/1412.6980
R. P. Kovaleski and M. M. Oliveira. 2009. High-quality brightness enhancement functions
for real-time reverse tone mapping. The Visual Computer 25, 5 (2009), 539–547.
R. P. Kovaleski and M. M. Oliveira. 2014. High-Quality Reverse Tone Mapping for
a Wide Range of Exposures. In 27th Conference on Graphics, Patterns and Images
(SIBGRAPI ’14). 49–56.
J. Kronander, S. Gustavson, G. Bonnet, A. Ynnerman, and J. Unger. 2014. A Unified
Framework for Multi-Sensor HDR Video Reconstruction. Signal Processing: Image
Communications 29, 2 (2014), 203 – 215.
C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A.
Tejani, J. Totz, Z. Wang, et al. 2016. Photo-realistic single image super-resolution
using a generative adversarial network. arXiv preprint arXiv:1609.04802 (2016).
J. Long, E. Shelhamer, and T. Darrell. 2015. Fully Convolutional Networks for Semantic
Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition (CVPR ’15).
S. Mann and R.W. Picard. 1994. Being ‘undigital’ with cameras: Extending Dynamic
Range by Combining Differently Exposed Pictures. Technical Report 323. M.I.T. Media
Lab Perceptual Computing Section. 422–428 pages.
B. Masia, S. Agustin, R. W. Fleming, O. Sorkine, and D. Gutierrez. 2009. Evaluation of
Reverse Tone Mapping Through Varying Exposure Conditions. ACM Trans. Graph.
28, 5 (2009), 160:1–160:8.
B. Masia, A. Serrano, and D. Gutierrez. 2017. Dynamic range expansion based on image
statistics. Multimedia Tools and Applications 76, 1 (2017), 631–648.
L. Meylan, S. Daly, and S. Süsstrunk. 2006. The Reproduction of Specular Highlights
on High Dynamic Range Displays. Color and Imaging Conference 2006, 1 (2006),
333–338.
S. K. Nayar and T. Mitsunaga. 2000. High dynamic range imaging: spatially varying
pixel exposures. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition (CVPR ’00), Vol. 1. 472–479.
A. Odena, V. Dumoulin, and C. Olah. 2016. Deconvolution and Checkerboard Artifacts.
Distill (2016). http://distill.pub/2016/deconv-checkerboard.
D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. Efros. 2016. Context Encoders:
Feature Learning by Inpainting. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition (CVPR ’16). 2536–2544.
T. Pohlen, A. Hermans, M. Mathias, and B. Leibe. 2017. Full-Resolution Residual
Networks for Semantic Segmentation in Street Scenes. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition (CVPR ’17).
A. Radford, L. Metz, and S. Chintala. 2015. Unsupervised representation learning with
deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434
(2015).
E. Reinhard, G. Ward, S. N. Pattanaik, P. E. Debevec, W. Heidrich, and K. Myszkowski.
2010. High dynamic range imaging: acquisition, display, and image-based lighting
(2nd ed.). Morgan Kaufmann.
A. G. Rempel, M. Trentacoste, H. Seetzen, H. D. Young, W. Heidrich, L. Whitehead, and
G. Ward. 2007. Ldr2Hdr: On-the-fly Reverse Tone Mapping of Legacy Video and
Photographs. ACM Trans. Graph. 26, 3, Article 39 (2007).
S. Ren, K. He, R. Girshick, and J. Sun. 2015. Faster R-CNN: Towards Real-Time Object
Detection with Region Proposal Networks. In Advances in Neural Information
Processing Systems 28. 91–99.
O. Ronneberger, P. Fischer, and T. Brox. 2015. U-Net: Convolutional Networks for
Biomedical Image Segmentation. In Proceedings of Medical Image Computing and
Computer-Assisted Intervention (MICCAI ’15). 234–241.
M. Rouf, R. Mantiuk, W. Heidrich, M. Trentacoste, and C. Lau. 2011. Glare encoding
of high dynamic range images. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition (CVPR ’11). 289–296.
H. Seetzen, W. Heidrich, W. Stuerzlinger, G. Ward, L. Whitehead, M. Trentacoste, A.
Ghosh, and A. Vorozcovs. 2004. High Dynamic Range Display Systems. ACM Trans.
Graph. 23, 3 (2004), 760–768.
A. Serrano, F. Heide, D. Gutierrez, G. Wetzstein, and B. Masia. 2016. Convolutional
Sparse Coding for High Dynamic Range Imaging. Computer Graphics Forum 35, 2
(2016), 153–163.
K. Simonyan and A. Zisserman. 2014. Very Deep Convolutional Networks for Large-Scale Image Recognition. CoRR abs/1409.1556 (2014).
P. Svoboda, M. Hradis, D. Barina, and P. Zemcik. 2016. Compression Artifacts Removal
Using Convolutional Neural Networks. In Journal of WSCG, Vol. 24. 63–72.
M. D. Tocci, C. Kiser, N. Tocci, and P. Sen. 2011. A Versatile HDR Video Production
System. ACM Trans. Graphics 30, 4 (2011), 41:1–41:10.
J. Unger and S. Gustavson. 2007. High-dynamic-range video for photometric measurement of illumination. In Proceedings of SPIE, Vol. 6501.
P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. 2008. Extracting and Composing Robust Features with Denoising Autoencoders. In Proceedings of the 25th
International Conference on Machine Learning (ICML ’08). ACM, 1096–1103.
L. Wang, L.-Y. Wei, K. Zhou, B. Guo, and H.-Y. Shum. 2007. High Dynamic Range
Image Hallucination. In Proceedings of the 18th Eurographics Conference on Rendering
Techniques (EGSR’07). 321–326.
C. Yang, X. Lu, Z. Lin, E. Shechtman, O. Wang, and H. Li. 2016. High-Resolution Image
Inpainting using Multi-Scale Neural Patch Synthesis. arXiv preprint arXiv:1611.09969
(2016).
J. Zhang and J.-F. Lalonde. 2017. Learning High Dynamic Range from Outdoor Panoramas. arXiv preprint arXiv:1703.10200 (2017).
B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. 2014. Learning Deep Features
for Scene Recognition using Places Database. In Advances in Neural Information
Processing Systems 27. 487–495.
arXiv:1601.00456v1 [math.AC] 4 Jan 2016
EXPANSION OF A SIMPLICIAL COMPLEX
SOMAYEH MORADI AND FAHIMEH KHOSH-AHANG
Abstract. For a simplicial complex ∆, we introduce a simplicial complex
attached to ∆, called the expansion of ∆, which is a natural generalization
of the notion of expansion in graph theory. We are interested in knowing
how the properties of a simplicial complex and its Stanley-Reisner ring relate
to those of its expansions. It is shown that taking expansion preserves vertex
decomposable and shellable properties and in some cases Cohen-Macaulayness.
Also it is proved that some homological invariants of Stanley-Reisner ring of a
simplicial complex relate to those invariants in the Stanley-Reisner ring of its
expansions.
Introduction
Simplicial complexes are widely used structures which have many applications in
algebraic topology and commutative algebra. In particular, in order to characterize
monomial quotient rings with a desired property, simplicial complex is a very strong
tool considering the Stanley-Reisner correspondence between simplicial complexes
and monomial ideals. Characterizing simplicial complexes which have properties
like vertex decomposability, shellability and Cohen-Macaulayness are some main
problems in combinatorial commutative algebra. It is rather hopeless to give a
full classification of simplicial complexes with each of these properties. In this
regard, finding classes of simplicial complexes, especially independence complexes
of graphs with a desired property have been considered by many researchers (cf.
[9, 12, 15, 26, 29, 30]). Constructing new simplicial complexes from the existing
ones satisfying a desired property is another way to know more about the characterization. In the works [6, 7, 10, 19, 27], the idea of making modifications to a
graph like adding whiskers and ears to the graph in order to obtain sequentially
Cohen-Macaulay, Cohen-Macaulay and vertex decomposable graphs is investigated.
In [2], the authors developed a construction similar to whiskers to build a vertex
decomposable simplicial complex ∆χ from a coloring χ of the vertices of a simplicial complex ∆, and in [1] for colorings of subsets of the vertices, necessary and
sufficient conditions are given for this construction to produce vertex decomposable
simplicial complexes.
Motivated by the above works and the concept of expansion of a graph in graph theory, in this paper we introduce the concept of expansion of simplicial complexes, which is a natural generalization of expansion of graphs. Also, we study some properties of this expansion to see how they are related to corresponding properties of the initial simplicial complex. This tool allows us to construct new vertex decomposable and shellable simplicial complexes from vertex decomposable and shellable ones. Moreover, some families of Cohen-Macaulay simplicial complexes are introduced. We are also interested in knowing how the homological invariants of the Stanley-Reisner ring of a simplicial complex and its expansions are related.
The paper is organized as follows. In the first section, we review some preliminaries from the literature. In Section 2, first in Theorem 2.7 we show that for a simplicial complex ∆, vertex decomposability of ∆ is equivalent to vertex decomposability of an expansion of ∆. Also it is proved that expansions of a shellable simplicial complex are again shellable (see Theorem 2.12). Moreover, it is shown that under some conditions, expansions of a simplicial complex inherit Cohen-Macaulayness (see Corollaries 2.10, 2.11, 2.13 and 2.15). Finally, in Section 3, for a shellable simplicial complex, the projective dimension and the regularity of its Stanley-Reisner ring are compared with the corresponding ones in an expansion of ∆ (see Propositions 3.1 and 3.4).
2010 Mathematics Subject Classification. Primary 13D02, 13P10; Secondary 16E05.
Key words and phrases. Cohen-Macaulay, edge ideal, expansion, projective dimension, regularity, shellable, vertex decomposable.
1. Preliminaries
Throughout this paper, we assume that ∆ is a simplicial complex on the vertex
set V (∆) = {x1 , . . . , xn }. The set of facets (maximal faces) of ∆ is denoted by
F (∆).
In this section, we recall some preliminaries which are needed in the sequel. We
begin with definition of a vertex decomposable simplicial complex. To this aim, we
need to recall definitions of the link and the deletion of a face in ∆. For a simplicial
complex ∆ and F ∈ ∆, the link of F in ∆ is defined as
lk∆ (F ) = {G ∈ ∆ : G ∩ F = ∅, G ∪ F ∈ ∆},
and the deletion of F is the simplicial complex
del∆ (F ) = {G ∈ ∆ : G ∩ F = ∅}.
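To make these two constructions concrete, the following short Python sketch computes the facets of the link and of the deletion of a face from the list of facets of ∆; it is only an illustration and is not part of the paper.

```python
def maximal(sets):
    """Keep only the inclusion-maximal sets of a collection."""
    sets = {frozenset(s) for s in sets}
    return [s for s in sets if not any(s < t for t in sets)]

def link(facets, face):
    """Facets of lk_Delta(F): maximal sets G \\ F over facets G containing F."""
    F = frozenset(face)
    return maximal(frozenset(G) - F for G in facets if F <= frozenset(G))

def deletion(facets, face):
    """Facets of del_Delta(F): maximal sets G \\ F over all facets G."""
    F = frozenset(face)
    return maximal(frozenset(G) - F for G in facets)
```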
Definition 1.1. A simplicial complex ∆ is called vertex decomposable if ∆ is
a simplex, or ∆ contains a vertex x such that
(i) both del∆ (x) and lk∆ (x) are vertex decomposable, and
(ii) every facet of del∆ (x) is a facet of ∆.
A vertex x which satisfies condition (ii) is called a shedding vertex of ∆.
Remark 1.2. It is easily seen that x is a shedding vertex of ∆ if and only if no
facet of lk∆ (x) is a facet of del∆ (x).
Definition 1.3. A simplicial complex ∆ is called shellable if there exists an
ordering F1 < · · · < Fm on the facets of ∆ such that for any i < j, there exists a
vertex v ∈ Fj \ Fi and ℓ < j with Fj \ Fℓ = {v}. We call F1 , . . . , Fm a shelling for
∆.
The above definition is referred to as non-pure shellable and is due to Björner
and Wachs [3]. In this paper we will drop the adjective “non-pure”.
Definition 1.4. A graded R-module M is called sequentially Cohen–Macaulay
(over a field K) if there exists a finite filtration of graded R-modules
0 = M0 ⊂ M1 ⊂ · · · ⊂ Mr = M
EXPANSION OF A SIMPLICIAL COMPLEX
3
such that each Mi /Mi−1 is Cohen–Macaulay and
dim (M1 /M0 ) < dim (M2 /M1 ) < · · · < dim (Mr /Mr−1 ).
For a Z-graded R-module M, the Castelnuovo-Mumford regularity (or briefly regularity) of M is defined as
reg(M) = max{ j − i : β_{i,j}(M) ≠ 0 },
and the projective dimension of M is defined as
pd(M) = max{ i : β_{i,j}(M) ≠ 0 for some j },
where β_{i,j}(M) is the (i, j)th graded Betti number of M.
Let V = {x1 , . . . , xn } be a finite set, and let E = {E1 , . . . , Es } be a family of
nonempty subsets of V . The pair H = (V, E) is called a simple hypergraph if for
each i, |Ei | ≥ 2 and whenever Ei , Ej ∈ E and Ei ⊆ Ej , then i = j. The elements
of V are called the vertices and the elements of E are called the edges of H. For a
hypergraph H, the independence complex of H is defined as
∆H = {F ⊆ V (H) : E ⊈ F, for each E ∈ E(H)}.
A simple graph G = (V (G), E(G)) is a simple hypergraph with the vertices
V (G) and the edges E(G), where each of its edges has cardinality exactly two.
For a simple graph G, the edge ideal of G is defined as the ideal I(G) = (xi xj :
{xi , xj } ∈ E(G)). It is easy to see that I(G) can be viewed as the Stanley-Reisner
ideal of the simplicial complex ∆G i.e., I(G) = I∆G . Also, the big height of I(G),
denoted by bight(I(G)), is defined as the maximum height among the minimal
prime divisors of I(G).
A graph G is called vertex decomposable, shellable, sequentially Cohen-Macaulay
or Cohen-Macaulay if the independence complex ∆G is vertex decomposable, shellable,
sequentially Cohen-Macaulay or Cohen-Macaulay.
A graph G is called chordal, if it contains no induced cycle of length 4 or greater.
Definition 1.5. A monomial ideal I in the ring R = K[x1 , . . . , xn ] has linear
quotients if there exists an ordering f1 , . . . , fm on the minimal generators of I such
that the colon ideal (f1 , . . . , fi−1 ) :R (fi ) is generated by a subset of {x1 , . . . , xn }
for all 2 ≤ i ≤ m. We show this ordering by f1 < · · · < fm and we call it an order
of linear quotients on G(I). Also for any 1 ≤ i ≤ m, set I (fi ) is defined as
set I (fi ) = {xk : xk ∈ (f1 , . . . , fi−1 ) :R (fi )}.
We denote set I (fi ) by set (fi ) if there is no ambiguity about the ideal I.
A monomial ideal I generated by monomials of degree d has a linear resolution
if βi,j (I) = 0 for all j 6= i + d. Having linear quotients is a strong tool to determine
some classes of ideals with linear resolution. The main tool in this way is the
following lemma.
Lemma 1.6. (See [9, Lemma 5.2].) Let I = (f1 , . . . , fm ) be a monomial ideal
with linear quotients such that all fi s are of the same degree. Then I has a linear
resolution.
For a squarefree monomial ideal I = (x11 · · · x1n1 , . . . , xt1 · · · xtnt ), the Alexander dual ideal of I, denoted by I ∨ , is defined as
I ∨ := (x11 , . . . , x1n1 ) ∩ · · · ∩ (xt1 , . . . , xtnt ).
For a simplicial complex ∆ with the vertex set X = {x1 , . . . , xn }, the Alexander
dual simplicial complex associated to ∆ is defined as
∆∨ = {X \ F : F ∉ ∆}.
For a subset C ⊆ X, by x^C we mean the monomial ∏_{x∈C} x in the ring K[x1 , . . . , xn ]. One can see that (I∆ )∨ = (x^{F^c} : F ∈ F (∆)), where I∆ is the Stanley-Reisner ideal associated to ∆ and F^c = X \ F . Moreover, one can see that (I∆ )∨ = I∆∨ .
The following theorem, which was proved in [24], relates projective dimension
and regularity of a squarefree monomial ideal to its Alexander dual. It is one of
our tools in the study of the projective dimension and regularity of the ring R/I∆ .
Theorem 1.7. (See [24, Theorem 2.1].) Let I be a squarefree monomial ideal.
Then pd(I ∨ ) = reg(R/I).
2. Expansions of a simplicial complex and their algebraic properties
In this section, expansions of a simplicial complex and their Stanley-Reisner
rings are studied. The main goal is to explore how the combinatorial and algebraic
properties of a simplicial complex ∆ and its Stanley-Reisner ring affect the
expansions.
Definition 2.1. Let ∆ = ⟨F1 , . . . , Fm ⟩ be a simplicial complex with the vertex set V (∆) = {x1 , . . . , xn } and s1 , . . . , sn ∈ N be arbitrary integers. For any Fi = {xi1 , . . . , xiki } ∈ F (∆), where 1 ≤ i1 < · · · < iki ≤ n, and any 1 ≤ r1 ≤ si1 , . . . , 1 ≤ rki ≤ siki , set
Fi^{r1 ,...,rki} = {xi1 r1 , . . . , xiki rki }.
We define the (s1 , . . . , sn )-expansion of ∆ to be a simplicial complex with the vertex set {x11 , . . . , x1s1 , x21 , . . . , x2s2 , . . . , xn1 , . . . , xnsn } and the facets
{ {xi1 r1 , . . . , xiki rki } : {xi1 , . . . , xiki } ∈ F (∆), (r1 , . . . , rki ) ∈ [si1 ] × · · · × [siki ] }.
We denote this simplicial complex by ∆(s1 ,...,sn ) .
Example 2.2. Consider the simplicial complex ∆ = ⟨{x1 , x2 , x3 }, {x1 , x2 , x4 }, {x4 , x5 }⟩ depicted in Figure 1. Then
∆(1,2,1,1,2) = ⟨{x11 , x21 , x31 }, {x11 , x22 , x31 }, {x11 , x21 , x41 }, {x11 , x22 , x41 }, {x41 , x51 }, {x41 , x52 }⟩.
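The following short Python sketch, not part of the paper, computes the facets of the (s1 , . . . , sn )-expansion from a list of facets given by vertex indices; a copied vertex x_{ir} is represented by the pair (i, r). Run on Example 2.2 it reproduces the six facets listed above.

```python
from itertools import product

def expansion(facets, s):
    """Facets of the (s_1, ..., s_n)-expansion of Definition 2.1.
    `facets` lists facets as tuples of vertex indices 1..n; s[i-1] is the
    number of copies of vertex i. A copy x_{i r} is encoded as the pair (i, r)."""
    result = []
    for F in facets:
        choices = [[(i, r) for r in range(1, s[i - 1] + 1)] for i in F]
        result.extend(frozenset(f) for f in product(*choices))
    return result

# Example 2.2: Delta = <{x1,x2,x3}, {x1,x2,x4}, {x4,x5}> and (s1,...,s5) = (1,2,1,1,2)
facets = [(1, 2, 3), (1, 2, 4), (4, 5)]
print(len(expansion(facets, [1, 2, 1, 1, 2])))   # prints 6
```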
Figure 1. The simplicial complex ∆ and the (1, 2, 1, 1, 2)-expansion of ∆.
The following definition gives an analogous concept for the expansion of a hypergraph, which is also a generalization of [11, Definition 4.2].
Definition 2.3. For a hypergraph H with the vertex set V (H) = {x1 , . . . , xn } and the edge set E(H), we define the (s1 , . . . , sn )-expansion of H to be a hypergraph with the vertex set {x11 , . . . , x1s1 , x21 , . . . , x2s2 , . . . , xn1 , . . . , xnsn } and the edge set
{ {xi1 r1 , . . . , xit rt } : {xi1 , . . . , xit } ∈ E(H), (r1 , . . . , rt ) ∈ [si1 ] × · · · × [sit ] } ∪ { {xij , xik } : 1 ≤ i ≤ n, j ≠ k }.
We denote this hypergraph by H(s1 ,...,sn ) .
Remark 2.4. From Definitions 2.1 and 2.3 one can see that for a hypergraph H and integers s1 , . . . , sn ∈ N, ∆H(s1 ,...,sn ) = (∆H )(s1 ,...,sn ) . Thus the expansion of a simplicial complex is the natural generalization of the concept of expansion in graph theory.
Example 2.5. Let G be the following graph.
The graph G(1,1,2,1,2) and the independence complexes ∆G and ∆G(1,1,2,1,2) are
shown in Figure 2.
[Figure 2. The graph G(1,1,2,1,2) and the simplicial complexes ∆G and ∆G(1,1,2,1,2); panels: (a) ∆G, (b) G(1,1,2,1,2), (c) ∆G(1,1,2,1,2)]
In the following proposition, it is shown that a graph is chordal if and only if
each of its expansions is chordal.
Proposition 2.6. For any s1 , . . . , sn ∈ N, G is a chordal graph if and only if
G(s1 ,...,sn ) is chordal.
Proof. If G(s1 ,...,sn ) is chordal, then clearly G is also chordal, since it can be considered as an induced subgraph of G(s1 ,...,sn ) . Now, let G be chordal, V (G) =
{x1 , . . . , xn } and consider a cycle Cm : xi1 j1 , . . . , xim jm in G(s1 ,...,sn ) , where m ≥ 4
and 1 ≤ jk ≤ sik for all 1 ≤ k ≤ m. We consider two cases.
Case 1. ik = iℓ for some distinct integers k and ℓ with 1 ≤ k < ℓ ≤ m. Then
by the definition of expansion, xik jk xiℓ jℓ ∈ E(G(s1 ,...,sn ) ). Thus if xik jk xiℓ jℓ is not
an edge of Cm , then it is a chord in Cm . Now, assume that xik jk xiℓ jℓ is an edge of
Cm . Note that since xiℓ jℓ xiℓ+1 jℓ+1 ∈ E(Cm ), either iℓ = iℓ+1 or xiℓ xiℓ+1 ∈ E(G) (if
ℓ = m, then set ℓ + 1 := 1). Thus xik jk xiℓ+1 jℓ+1 ∈ E(G(s1 ,...,sn ) ) is a chord in Cm .
Case 2. ik ≠ iℓ for any distinct integers 1 ≤ k, ℓ ≤ m. By the definition of
expansion, one can see that xi1 , . . . , xim forms a cycle of length m in G. So it has a
chord. Let xik xiℓ ∈ E(G) be a chord in this cycle. Then xik jk xiℓ jℓ ∈ E(G(s1 ,...,sn ) )
is a chord in Cm . Thus G(s1 ,...,sn ) is also chordal.
The following theorem illustrates that the vertex decomposability of a simplicial
complex is equivalent to the vertex decomposability of its expansions.
Theorem 2.7. Assume that s1 , . . . , sn are positive integers. Then ∆ is vertex
decomposable if and only if ∆(s1 ,...,sn ) is vertex decomposable.
Proof. Assume that ∆ is a simplicial complex with the vertex set V (∆) = {x1 , . . . , xn }
and s1 , . . . , sn are positive integers. To prove the ‘only if’ part, we use generalized induction on |V (∆(s1 ,...,sn ) )| (note that |V (∆(s1 ,...,sn ) )| ≥ |V (∆)|). If
|V (∆(s1 ,...,sn ) )| = |V (∆)|, then ∆ = ∆(s1 ,...,sn ) and so there is nothing to prove in
this case. Assume inductively that for all vertex decomposable simplicial complexes
∆′ and all positive integers s′1, . . . , s′n with |V(∆′(s′1,...,s′n))| < t, ∆′(s′1,...,s′n) is vertex
decomposable. Now, we are going to prove the result when t = |V (∆(s1 ,...,sn ) )| >
|V (∆)|. Since |V (∆(s1 ,...,sn ) )| > |V (∆)|, there exists an integer 1 ≤ i ≤ n such that
si > 1. If ∆ = hF i is a simplex, we claim that xi1 is a shedding vertex of ∆(s1 ,...,sn ) .
It can be easily checked that
lk∆(s1,...,sn)(xi1) = ⟨F \ {xi}⟩(s1,...,si−1,si+1,...,sn)
and
del∆(s1 ,...,sn ) (xi1 ) = ∆(s1 ,...,si−1 ,si −1,si+1 ,...,sn ) .
So, inductive hypothesis ensures that lk∆(s1 ,...,sn ) (xi1 ) and del∆(s1 ,...,sn ) (xi1 ) are
vertex decomposable. Also, it can be seen that every facet of del∆(s1 ,...,sn ) (xi1 ) is a
facet of ∆(s1 ,...,sn ) . This shows that ∆(s1 ,...,sn ) is vertex decomposable in this case.
Now, if ∆ is not a simplex, it has a shedding vertex, say x1 . We claim that x11 is
a shedding vertex of ∆(s1 ,...,sn ) . To this end, it can be seen that
lk∆(s1,...,sn)(x11) = lk∆(x1)(s2,...,sn)
and
del∆(s1,...,sn)(x11) = ∆(s1−1,s2,...,sn) if s1 > 1, and del∆(s1,...,sn)(x11) = del∆(x1)(s2,...,sn) if s1 = 1.
Hence, inductive hypothesis deduces that lk∆(s1 ,...,sn ) (x11 ) and del∆(s1 ,...,sn ) (x11 )
are vertex decomposable simplicial complexes. Now, suppose that F^(j1,...,jk) =
{xi1 j1, . . . , xik jk} is a facet of lk∆(s1,...,sn)(x11), where F = {xi1, . . . , xik} is a face
of ∆. Then since lk∆(s1,...,sn)(x11) = lk∆(x1)(s2,...,sn), F is a facet of lk∆(x1). So,
there is a vertex xik+1 ∈ V(∆) such that {xi1, . . . , xik, xik+1} is a face of del∆(x1)
(see Remark 1.2). Hence {xi1 j1, . . . , xik jk, xik+1 1} is a face of del∆(s1,...,sn)(x11).
This completes the proof of the first part.
To prove the ‘if’ part, we also use generalized induction on |V (∆(s1 ,...,sn ) )|. If
|V (∆(s1 ,...,sn ) )| = |V (∆)|, then ∆ = ∆(s1 ,...,sn ) and so there is nothing to prove
in this case. Assume inductively that for all simplicial complexes ∆′ and all positive
integers s′1, . . . , s′n with |V(∆′(s′1,...,s′n))| < t such that ∆′(s′1,...,s′n) is vertex
decomposable, we have proved that ∆′ is also vertex decomposable. Now, we
are going to prove the result when t = |V (∆(s1 ,...,sn ) )| > |V (∆)|. Now, since
|V (∆(s1 ,...,sn ) )| > |V (∆)| and ∆(s1 ,...,sn ) is vertex decomposable, it has a shedding
vertex, say x11 . If s1 > 1, then
del∆(s1 ,...,sn ) (x11 ) = ∆(s1 −1,s2 ,...,sn ) ,
and the inductive hypothesis ensures that ∆ is vertex decomposable as desired.
Else, we should have s1 = 1,
lk∆(s1 ,...,sn ) (x11 ) = lk∆ (x1 )(s2 ,...,sn )
and
del∆(s1,...,sn)(x11) = del∆(x1)(s2,...,sn).
So, inductive hypothesis implies that lk∆ (x1 ) and del∆ (x1 ) are vertex decomposable
simplicial complexes. Now, assume that F = {xi1 , . . . , xik } is a facet of del∆ (x1 ).
Then {xi1 1 , . . . , xik 1 } is a facet of del∆(s1 ,...,sn ) (x11 ). Since x11 is a shedding vertex
of ∆(s1 ,...,sn ) , {xi1 1 , . . . , xik 1 } is a facet of ∆(s1 ,...,sn ) . Hence, F is a facet of ∆ and
the proof is complete.
Remark 2.8. By the notation of Definition 2.1, ∆ is pure if and only if
∆(s1,...,sn) is pure, since any facet Fi^(r1,...,rki) of ∆(s1,...,sn) has the same cardinality as Fi.
The following theorem, together with Theorem 2.7, helps us to see how the Cohen-Macaulay property of a vertex decomposable simplicial complex and of its expansions are related.
Theorem 2.9. A vertex decomposable simplicial complex ∆ is Cohen-Macaulay if
and only if ∆ is pure.
Proof. See [4, Theorem 11.3] and [28, Theorem 5.3.18].
Corollary 2.10. Let ∆ be a vertex decomposable simplicial complex and s1, . . . , sn
be positive integers. Then ∆ is Cohen-Macaulay if and only if ∆(s1,...,sn) is Cohen-Macaulay.
Proof. By Theorem 2.7, ∆(s1 ,...,sn ) is also vertex decomposable. Also, by Theorem
2.9, ∆, respectively ∆(s1 ,...,sn ) , is Cohen-Macaulay if and only if ∆, respectively
∆(s1 ,...,sn ) , is pure. Now, by Remark 2.8, the result is clear.
Corollary 2.11. Let G be a Cohen-Macaulay chordal graph or a Cohen-Macaulay
bipartite graph. Then G(s1 ,...,sn ) is Cohen-Macaulay.
Proof. By [29, Corollary 7] and [25, Corollary 2.12], chordal graphs and Cohen-Macaulay bipartite graphs are vertex decomposable. The result now follows from
Corollary 2.10.
In the following theorem, it is shown that shellability is preserved under expansion and from a shelling for ∆, a shelling for its expansion is constructed.
Theorem 2.12. Let ∆ be a shellable simplicial complex with n vertices. Then
∆(s1 ,...,sn ) is shellable for any s1 , . . . , sn ∈ N.
Proof. Use the notations as in Definition 2.1. Let ∆ be a shellable simplicial complex with the shelling order F1 < · · · < Fm on the facets of ∆. Consider an order
on F(∆(s1,...,sn)) as follows. For two facets Fi^(r1,...,rki) and Fj^(r′1,...,r′kj) of ∆(s1,...,sn),
(i) if i < j, set Fi^(r1,...,rki) < Fj^(r′1,...,r′kj);
(ii) if i = j, set Fi^(r1,...,rki) < Fi^(r′1,...,r′ki) when (r1, . . . , rki) <lex (r′1, . . . , r′ki).
We show that this ordering forms a shelling order. Consider two facets Fi^(r1,...,rki) and
Fj^(r′1,...,r′kj) with i < j. Since Fi < Fj, there exists an integer ℓ < j and xjt ∈ Fj \ Fi
such that Fj \ Fℓ = {xjt}. So xjt r′t ∈ Fj^(r′1,...,r′kj) \ Fi^(r1,...,rki). Let Fℓ = {xℓ1, . . . , xℓkℓ},
where ℓ1 < · · · < ℓkℓ. Then there exist indices h1, . . . , ht−1, ht+1, . . . , hkj such that
j1 = ℓh1, . . . , jt−1 = ℓht−1, jt+1 = ℓht+1, . . . , jkj = ℓhkj. Thus
Fj^(r′1,...,r′kj) \ Fℓ^(r′′1,...,r′′kℓ) = {xjt r′t},
where r′′h1 = r′1, . . . , r′′ht−1 = r′t−1, r′′ht+1 = r′t+1, . . . , r′′hkj = r′kj and r′′λ = 1 for the other
indices λ. Since ℓ < j, we have Fℓ^(r′′1,...,r′′kℓ) < Fj^(r′1,...,r′kj).
Now assume that i = j and Fi^(r1,...,rki) < Fi^(r′1,...,r′ki). Thus
(r1, . . . , rki) <lex (r′1, . . . , r′ki).
Let 1 ≤ t ≤ ki be an integer with rt < r′t. Then xit r′t ∈ Fi^(r′1,...,r′ki) \ Fi^(r1,...,rki),
Fi^(r′1,...,r′ki) \ Fi^(r′1,...,r′t−1, rt, r′t+1,...,r′ki) = {xit r′t}
and
(r′1, . . . , r′t−1, rt, r′t+1, . . . , r′ki) <lex (r′1, . . . , r′ki).
Thus Fi^(r′1,...,r′t−1, rt, r′t+1,...,r′ki) < Fi^(r′1,...,r′ki). The proof is complete.
The following corollary is an immediate consequence of Theorem 2.12, Remark
2.8 and [28, Theorem 5.3.18].
Corollary 2.13. Let ∆ be a pure shellable simplicial complex. Then ∆(s1 ,...,sn ) is
Cohen-Macaulay for any s1 , . . . , sn ∈ N.
Theorem 2.14. Let ∆ be a pure one dimensional simplicial complex. Then the
following statements are equivalent.
(i) ∆ is connected.
(ii) ∆ is vertex decomposable.
(iii) ∆ is shellable.
(iv) ∆ is sequentially Cohen-Macaulay.
(v) ∆ is Cohen-Macaulay.
Proof. (i ⇒ ii) Suppose that ∆ = ⟨F1, . . . , Fm⟩. We use induction on m. If m = 1,
∆ is clearly vertex decomposable. Suppose inductively that the result has
been proved for smaller values of m. We consider two cases. If ∆ has a
free vertex (a vertex which belongs to only one facet), then there is a facet,
say Fm = {x, y}, of ∆ such that x ∉ F1 ∪ · · · ∪ Fm−1. In this case lk∆(x) = ⟨{y}⟩,
which is clearly vertex decomposable. Also, since ∆ is connected,
del∆(x) = ⟨F1, . . . , Fm−1⟩
is a pure one dimensional connected simplicial complex. So, by inductive
hypothesis del∆ (x) is also vertex decomposable. Moreover each facet of
del∆ (x) is a facet of ∆. This shows that ∆ is vertex decomposable. Now,
suppose that ∆ doesn’t have any free vertex. So, each vertex belongs to
at least two facets. Hence, there is a vertex x such that del∆ (x) is also
connected and one dimensional. (Note that since ∆ is connected and one
dimensional, it may be regarded as a connected graph. Moreover, from graph
theory we know that every connected graph has at least two vertices each of
whose removal leaves a connected graph.) Now, by induction
hypothesis we have that del∆ (x) is vertex decomposable. Also, lk∆ (x) is
a discrete set and so vertex decomposable. Furthermore, in view of the
choice of x, it is clear that every facet of del∆ (x) is a facet of ∆. Hence, ∆
is vertex decomposable as desired.
(ii ⇒ iii) follows from [4, Theorem 11.3].
(iii ⇒ iv) is first shown by Stanley in [23].
(iv ⇒ v) The result follows from the fact that every pure sequentially Cohen-Macaulay
simplicial complex is Cohen-Macaulay.
(v ⇒ i) follows from [28, Corollary 5.3.7].
Corollary 2.15. Let ∆ be a Cohen-Macaulay simplicial complex of dimension one.
Then ∆(s1 ,...,sn ) is Cohen-Macaulay for any s1 , . . . , sn ∈ N.
Proof. Since ∆ is Cohen-Macaulay of dimension one, Theorem 2.14 implies that ∆
is pure shellable. Hence, Corollary 2.13 yields the result.
The evidence suggests that when ∆ is Cohen-Macaulay, its expansions are also Cohen-Macaulay; Corollaries 2.10, 2.11, 2.13 and 2.15 are some results in this regard. In general,
however, we have neither a proof nor a counterexample for this statement, so
we state it as a conjecture.
Conjecture. If ∆ is a Cohen-Macaulay simplicial complex, then ∆(s1 ,...,sn ) is
Cohen-Macaulay for any s1 , . . . , sn ∈ N.
3. Homological invariants of expansions of a simplicial complex
We begin this section with the next theorem which presents formulas for the
projective dimension and depth of the Stanley-Reisner ring of an expansion of a
shellable simplicial complex in terms of the corresponding invariants of the Stanley-Reisner ring of the simplicial complex.
Theorem 3.1. Let ∆ be a shellable simplicial complex with the vertex set {x1 , . . . , xn },
s1 , . . . , sn ∈ N and R = K[x1 , . . . , xn ] and R′ = K[x11 , . . . , x1s1 , . . . , xn1 , . . . , xnsn ]
be polynomial rings over a field K. Then
pd(R′ /I∆(s1 ,...,sn ) ) = pd(R/I∆ ) + s1 + · · · + sn − n
and
depth(R′ /I∆(s1 ,...,sn ) ) = depth(R/I∆ ).
Proof. Let ∆ be a shellable simplicial complex. Then it is sequentially Cohen-Macaulay. By Theorem 2.12, ∆(s1,...,sn) is also shellable and hence sequentially
Cohen-Macaulay. Thus by [20, Corollary 3.33],
pd(R′ /I∆(s1 ,...,sn ) ) = bight (I∆(s1 ,...,sn ) )
and
pd(R/I∆ ) = bight (I∆ ).
Let k = min{|F | : F ∈ F (∆)}. It is easy to see that min{|F | : F ∈ F (∆(s1 ,...,sn ) )} =
k. Then bight (I∆ ) = n − k and
bight (I∆(s1 ,...,sn ) ) = |V (∆(s1 ,...,sn ) )|−k = s1 +· · ·+sn −k = s1 +· · ·+sn +pd(R/I∆ )−n.
The second equality holds by the Auslander-Buchsbaum formula, since depth(R′) =
s1 + · · · + sn .
In the following example, we compute the invariants in Theorem 3.1 and illustrate
the equalities.
Example 3.2. Let ∆ = ⟨{x1, x2, x3}, {x1, x2, x4}, {x4, x5}⟩. Then ∆ is shellable
with the shelling order as listed. Then
∆(1,1,2,1,2) = ⟨{x11, x21, x31}, {x11, x21, x32}, {x11, x21, x41}, {x41, x51}, {x41, x52}⟩.
Computations with Macaulay2 [14] show that pd(R/I∆) = 3 and pd(R′/I∆(s1,...,sn)) =
5 = pd(R/I∆ )+s1 +· · ·+sn −n = 3+1+1+2+1+2−5. Also depth(R′ /I∆(s1 ,...,sn ) ) =
depth(R/I∆ ) = 2.
The following result, which is a special case of [22, Corollary 2.7], is our main
tool to prove Proposition 3.4.
Theorem 3.3. (See [22, Corollary 2.7].) Let I be a monomial ideal with linear
quotients with the ordering f1 < · · · < fm on the minimal generators of I. Then
βi,j(I) = ∑_{deg(ft) = j−i} C(|set_I(ft)|, i),
where C(a, b) denotes the binomial coefficient.
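Purely as an illustration of how Theorem 3.3 is used, the following Python sketch (our own bookkeeping, not part of the paper) evaluates the graded Betti numbers once the degrees deg(ft) and the cardinalities |set_I(ft)| of the generators are known.

```python
from math import comb
from collections import defaultdict

def graded_betti(gens):
    """Graded Betti numbers of an ideal with linear quotients (Theorem 3.3).

    gens : list of pairs (deg_ft, set_size) giving deg(f_t) and |set_I(f_t)|
    Returns a dict mapping (i, j) to beta_{i,j}(I), i.e. the sum over all t with
    deg(f_t) = j - i of binomial(|set_I(f_t)|, i).
    """
    betti = defaultdict(int)
    for deg, set_size in gens:
        for i in range(set_size + 1):          # binomial is zero for i > set_size
            betti[(i, deg + i)] += comb(set_size, i)
    return dict(betti)

# toy data: three generators of degree 2 with |set_I(f_t)| = 0, 1, 2 respectively
print(graded_betti([(2, 0), (2, 1), (2, 2)]))
```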
Proposition 3.4. Let ∆ = ⟨F1, . . . , Fm⟩ be a shellable simplicial complex with
the vertex set {x1, . . . , xn}, s1, . . . , sn ∈ N, and let R = K[x1, . . . , xn] and
R′ = K[x11, . . . , x1s1, . . . , xn1, . . . , xnsn] be polynomial rings over a field K. Then
(i) if s1 , . . . , sn > 1, then reg(R′ /I∆(s1 ,...,sn ) ) = dim(∆) + 1 = dim(R/I∆ );
(ii) if for each 1 ≤ i ≤ m, λi = |{xℓ ∈ Fi : sℓ > 1}|, then
reg(R′ /I∆(s1 ,...,sn ) ) ≤ reg(R/I∆ ) + max{λi : 1 ≤ i ≤ m}.
Proof. Without loss of generality assume that F1 < · · · < Fm is a shelling for
∆. We know that I∆∨ has linear quotients with the ordering x^{F1^c} < · · · < x^{Fm^c}
on its minimal generators (see [17, Theorem 1.4]). Moreover, by Theorem 1.7,
reg(R′/I∆(s1,...,sn)) = pd(I∆(s1,...,sn)∨) and by [5, Theorem 5.1.4] we have dim(R/I∆) =
dim(∆) + 1. Thus, to prove (i), it is enough to show that pd(I∆(s1,...,sn)∨) =
dim(∆) + 1. By Theorem 3.3, pd(I∆∨) = max{|set(x^{Fi^c})| : 1 ≤ i ≤ m}. For
any 1 ≤ i ≤ m, set(x^{Fi^c}) ⊆ Fi, since any element xℓ ∈ set(x^{Fi^c}) belongs to
(x^{Fj^c}) :R (x^{Fi^c}) for some 1 ≤ j < i. Thus xℓ = x^{Fj^c}/gcd(x^{Fj^c}, x^{Fi^c}) = x^{Fi\Fj}.
Let Fi = {xi1, . . . , xiki} and set(x^{Fi^c}) = {xiℓ : ℓ ∈ Li}, where Li ⊆ {1, . . . , ki}.
Consider the shelling for ∆(s1,...,sn) constructed in the proof of Theorem 2.12. Using
[17, Theorem 1.4] again shows that this shelling induces an order of linear
quotients on the minimal generators of I∆(s1,...,sn)∨. With this order
(3.1)    set(x^{(Fi^(r1,...,rki))^c}) = {xiℓ rℓ : ℓ ∈ Li} ∪ {xit rt : rt > 1}.
More precisely, if rt > 1 for some 1 ≤ t ≤ ki, then
xit rt = x^{Fi^(r1,...,rki) \ Fi^(r1,...,rt−1, rt − 1, rt+1,...,rki)} ∈ (x^{(Fi^(r1,...,rt−1, rt − 1, rt+1,...,rki))^c}) :R′ (x^{(Fi^(r1,...,rki))^c}).
Hence, xit rt ∈ set(x^{(Fi^(r1,...,rki))^c}). Also, for any xiℓ ∈ set(x^{Fi^c}), there exists 1 ≤
j < i such that xiℓ = x^{Fi\Fj} ∈ (x^{Fj^c}) :R (x^{Fi^c}). Thus there exist positive integers
r′′1, . . . , r′′kj such that
xiℓ rℓ = x^{Fi^(r1,...,rki) \ Fj^(r′′1,...,r′′kj)} ∈ (x^{(Fj^(r′′1,...,r′′kj))^c}) :R′ (x^{(Fi^(r1,...,rki))^c}).
Hence, xiℓ rℓ ∈ set(x^{(Fi^(r1,...,rki))^c}). Now, if s1, . . . , sn > 1, then
set(x^{(Fi^(si1,...,siki))^c}) = {xi1 si1, . . . , xiki siki}. Thus
pd(I∆(s1,...,sn)∨) = max{|set(x^{(Fi^(si1,...,siki))^c})| : 1 ≤ i ≤ m} = max{|Fi| : 1 ≤ i ≤ m} = dim(∆) + 1.
To prove (ii), notice that by equality (3.1), |set(x^{(Fi^(r1,...,rki))^c})| ≤ |set(x^{Fi^c})| + λi.
Therefore pd(I∆(s1,...,sn)∨) ≤ pd(I∆∨) + max{λi : 1 ≤ i ≤ m}. Now, by Theorem
1.7, the result holds.
Example 3.5. Consider the chordal graph G depicted in Figure 3 and its (2, 2, 3, 2, 3)-expansion, which is a graph with 12 vertices. Then ∆G = ⟨{x1, x3}, {x3, x5}, {x4, x5}, {x2}⟩.
Since G is shellable, by Proposition 3.4, reg(R′/I(G(2,2,3,2,3))) = dim(∆G) + 1 = 2.
[Figure 3. The graph G and the (2, 2, 3, 2, 3)-expansion of G]
References
1. J. Biermann; C. A. Francisco; H. T. Hà; A. Van Tuyl , Colorings of simplicial complexes
and vertex decomposability. Preprint, arXiv:math.AC/1209.3008v1.
2. J. Biermann; A. Van Tuyl , Balanced vertex decomposable simplicial complexes and their
h-vectors. Electron. J. Combin. 20 (2013), no. 3, 15 pp.
3. A. Björner; M. L. Wachs, Shellable nonpure complexes and posets. I. Trans. Amer. Math.
Soc. 348 (1996), no. 4, 1299–1327.
4. A. Björner; M. L. Wachs, Shellable nonpure complexes and posets. II. Trans. Amer. Math.
Soc. 349 (1997), no. 10, 3945–3975.
5. W. Bruns; J. Herzog, Cohen-Macaulay rings. Cambridge Studies in Advanced Mathematics,
39. Cambridge University Press, Cambridge, 1993.
6. D. Cook II; U. Nagel, Cohen-Macaulay graphs and face vectors of flag complexes. SIAM J.
Discrete Math. 26 (2012), no. 1, 89–101.
7. A. Dochtermann; A. Engström, Algebraic properties of edge ideals via combinatorial topology. Electron. J. Combin. 16 (2009), no. 2, Special volume in honor of Anders Bjorner, Research Paper 2, 24 pp.
8. J. A. Eagon; V. Reiner, Resolutions of Stanley-Reisner rings and Alexander duality. J. Pure
Appl. Algebra 130 (1998), no. 3, 265–275.
9. S. Faridi, Simplicial trees are sequentially Cohen-Macaulay. J. Pure Appl. Algebra 190 (2004),
no. 1-3, 121–136.
10. C. A. Francisco, H. T. Hà, Whiskers and sequentially Cohen-Macaulay graphs. J. Combin.
Theory Ser. A 115 (2008), no. 2, 304-316.
11. C. A. Francisco; H. T. Hà; A. Van Tuyl, Coloring of hypergraphs, perfect graphs and
associated primes of powers of monomial ideals. J. Algebra 331 (2011), 224-242.
12. C. A. Francisco; A. Van Tuyl, Sequentially Cohen-Macaulay edge ideals. Proc. Amer.
Math. Soc. 135 (2007), 2327-2337.
13. R. Fröberg, On Stanley-Reisner rings. Topics in Algebra, Banach Center Publications, 26
(1990), 57–70.
14. D. R. Grayson; M. E. Stillman, Macaulay2, a software system for research in algebraic
geometry. (1996). http://www.math.uiuc.edu/Macaulay2/.
15. H. T. Hà; S. Morey; R. H. Villarreal, Cohen-Macaulay admissible clutters. J. Commut.
Algebra 1 (2009), No. 3, 463–480.
16. H. T. Hà; A. Van Tuyl, Monomial ideals, edge ideals of hypergraphs, and their graded Betti
numbers. J. Algebraic Combin. 27 (2008), no. 2, 215–245.
17. J. Herzog; T. Hibi; X. Zheng, Dirac’s theorem on chordal graphs and Alexander duality.
European J. Combin. 25 (2004), no. 7, 949–960.
18. F. Khosh-Ahang; S. Moradi, Regularity and projective dimension of edge ideal of C5 -free
vertex decomposable graphs. Proc. Amer. Math. Soc. 142, no. 5 (2014) 1567–1576.
19. F. Mohammadi; D. Kiani; S. Yassemi, Shellable cactus graphs. Math. Scand. 106 (2010),
161-167.
20. S. Morey; R. H. Villarreal, Edge ideals: algebraic and combinatorial properties. Progress
in Commutative Algebra, Combinatorics and Homology, Vol. 1 (C. Francisco, L. C. Klingler,
S. Sather-Wagstaff and J. C. Vassilev, Eds.), De Gruyter, Berlin, 2012, pp. 85–126.
21. G. Reisner, Cohen-Macaulay quotients of polynomial rings. Advances in Math. 21 (1976),
no. 1, 30-49.
22. L. Sharifan; M. Varbaro, Graded Betti numbers of ideals with linear quotients. Le Matematiche (Catania) 63 (2008), no. 2, 257–265.
23. R. P. Stanley, Combinatorics and Commutative Algebra. Second edition. Progress in Mathematics 41. Birkhauser Boston, Inc., Boston, MA, 1996.
24. N. Terai, Alexander duality theorem and Stanley-Reisner rings. Free resolutions of coordinate
rings of projective varieties and related topics (Japanese) (Kyoto, 1998). Sürikaisekikenkyüsho
Kökyüroku no. 1078 (1999), 174–184.
25. A. Van Tuyl, Sequentially Cohen-Macaulay bipartite graphs: vertex decomposability and
regularity. Arch. Math. (Basel) 93 (2009), no. 5, 451–459.
26. A. Van Tuyl; R. Villarreal, Shellable graphs and sequentially Cohen-Macaulay bipartite
graphs. J. Combin. Theory Ser. A 115 (2008), no. 5, 799-814.
27. R. H. Villarreal, Cohen-Macaulay graphs. Manuscripta Math. 66 (1990), no. 3, 277–293.
28. R. H. Villarreal, Monomial Algebras. Monographs and Textbooks in Pure and Applied Mathematics 238, Marcel Dekker, New York, 2001.
29. R. Woodroofe, Vertex decomposable graphs and obstructions to shellability. Proc. Amer.
Math. Soc. 137 (2009), no. 10, 3235–3246.
30. R. Woodroofe, Chordal and sequentially Cohen-Macaulay clutters. Electron. J. Combin. 18
(2011), no. 1, Paper 208, 20 pp.
31. X. Zheng, Resolutions of facet ideals. Comm. Algebra 32 (2004), no. 6, 2301–2324.
Somayeh Moradi, Department of Mathematics, Ilam University, P.O.Box 69315-516,
Ilam, Iran and School of Mathematics, Institute for Research in Fundamental Sciences
(IPM), P.O.Box: 19395-5746, Tehran, Iran.
E-mail address: [email protected]
Fahimeh Khosh-Ahang, Department of Mathematics, Ilam University, P.O.Box 69315-516, Ilam, Iran.
E-mail address: fahime_[email protected]
An Algebra of Synchronous Scheduling Interfaces
Michael Mendler
Faculty of Information Systems and Applied Computer Sciences
Bamberg University
[email protected]
In this paper we propose an algebra of synchronous scheduling interfaces which combines the expressiveness of Boolean algebra for logical and functional behaviour with the min-max-plus arithmetic
for quantifying the non-functional aspects of synchronous interfaces. The interface theory arises from
a realisability interpretation of intuitionistic modal logic (also known as Curry-Howard-Isomorphism
or propositions-as-types principle). The resulting algebra of interface types aims to provide a general
setting for specifying type-directed and compositional analyses of worst-case scheduling bounds.
It covers synchronous control flow under concurrent, multi-processing or multi-threading execution
and permits precise statements about exactness and coverage of the analyses supporting a variety
of abstractions. The paper illustrates the expressiveness of the algebra by way of some examples
taken from network flow problems, shortest-path, task scheduling and worst-case reaction times in
synchronous programming.
1 Introduction
The algebra discussed in this paper aims at the specification of behavioural interfaces under the execution
model of synchronous programming. Such interfaces abstract externally observable Boolean controls
for components activated under the regime of a global synchronous scheduler familiar from data-flow
oriented languages such as Lustre [11], Signal [8], Lucid Synchrone [24], or imperative control-flow
oriented languages such as Statecharts [12, 23], Esterel [5] and Quartz [25]. In this model computations
are coordinated under one or more global system clocks, which may be physical or logical. They divide
physical time into a sequence of discrete ticks, or instants. During each instant the synchronous components interact using broadcast signals, which can have one of two statuses, present or absent. These
signal statuses evolve monotonically as they are propagated through the system, generating the emission
or inhibition of further signals and computations. Under the synchrony hypothesis [10] it is assumed
that at each instant, outputs are synchronous with the inputs. In other words, computations take place
instantaneously and appear to happen at each tick “all at once.”
The synchrony hypothesis conveniently abstracts internal, possibly distributed computations into
atomic reactions, making signals appear almost like Boolean variables and (stateful) interfaces almost
like Mealy automata with Boolean labels. Unfortunately, this abstraction is not perfect, so that Boolean
algebra is insufficient. First, it is well-known [14, 20] that classical two-valued Boolean analysis is inadequate to handle the causality and compositionality problems associated with the synchrony hypothesis
adequately. E.g., Boolean algebra by itself cannot guarantee there are no races between signal presence
and absence, thus guaranteeing unique convergence after a finite number of signal propagation steps.
Some form of causality information needs to be preserved. Secondly, quite practically, in many applications we want to compute non-Boolean information about otherwise “instantaneous” control signals,
such as latency or worst-case reaction times, maximal throughput, earliest deadlines, or other quantitative information about the scheduling process. This provides one way to motivate the work reported
here, viz. the search for a fully abstract synchronisation algebra as an economic refinement of classical Boolean algebra in situations where Booleans are subject to synchronous schedules and quantitative
resource consumption.
Another motivation may be drawn from the arithmetical point of view. One of the challenges in
quantitative resource analysis is the clever interchange (distribution) of max, min and +. For instance,
consider the analysis of worst-case reaction times (WCRT). In its simplest form, given a weighted dependency graph, the WCRT is the maximum of all sums of paths delays, an expression of the form
max(∑i∈p1 di1 , ∑i∈p2 di2 , . . . , ∑i∈pn din ) where p j are execution paths of the system and di j the delay of
path segment i in path p j . As it happens, the number n of paths is exponential in the number of elementary nodes of a system. Practicable WCRT analyses therefore reduce the max-of-sums to the polynomial
complexity of sum-of-maxes (dynamic programming on dependency graphs) employing various forms
of dependency abstraction. For illustration, imagine two alternative path segments of length d1 , e1 sequentially followed by two alternative path segments of length d2 , e2 , respectively. The distribution
max(d1 + d2 , d1 + e2 , e1 + d2 , e1 + e2 ) = max(d1 , e1 ) + max(d2 , e2 ) for efficiently calculating the longest
possible path, is exact only if we have a full set of path combinations. In general, there will be dependencies ruling out certain paths, in which case sum-of-maxes obtains but conservative over-approximations.
E.g., assume the combination of d1 with e2 is infeasible. Then, the sum-of-maxes is not exact since
max(d1 , e1 ) + max(d2 , e2 ) ≥ max(d1 + d2 , e1 + d2 , e1 + e2 ). On the other hand, knowing the infeasibility of d1 + e2 we would rather compute max(d1 + d2 , e1 + max(d2 , e2 )) = max(d1 + d2 , e1 + d2 , e1 + e2 )
which eliminates one addition and thus is both exact and more efficient than the full conservative max-of-sums. The same applies to min-plus problems such as shortest path or network flow. In the former,
the efficient sum-of-mins is an under-approximation of the exact min-of-sums on all feasible paths. For
network flow the arithmetic is complicated further by the fact that min/max do not distribute over +, i.e.,
min(d, e1 + e2) ≠ min(d, e1) + min(d, e2), which obstructs simple linear programming techniques.
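As a small numeric illustration of this trade-off (our own toy figures, not from the paper), the following Python sketch compares the exact max over feasible paths with the conservative sum-of-maxes and with the refined estimate that exploits the known infeasibility of the combination d1 + e2.

```python
# Two stages, each with two alternative segment delays.
d1, e1 = 5, 3   # stage 1 alternatives
d2, e2 = 2, 7   # stage 2 alternatives

# Exact WCRT: max over feasible paths only (d1 followed by e2 is infeasible).
feasible = [d1 + d2, e1 + d2, e1 + e2]
exact = max(feasible)                       # max(7, 5, 10) = 10

# Conservative sum-of-maxes: ignores the infeasibility information.
approx = max(d1, e1) + max(d2, e2)          # 5 + 7 = 12, an over-approximation

# Refined estimate using the known infeasibility of d1 + e2:
refined = max(d1 + d2, e1 + max(d2, e2))    # = max(d1+d2, e1+d2, e1+e2) = 10

assert approx >= exact and refined == exact
print(exact, approx, refined)
```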
The art of scheduling analysis consists in finding a judicious trade-off between merging paths early
in order to aggregate data on the one hand, and refining dependency paths by case analysis for the sake
of exactness, on the other hand. A scheduling algebra for practicable algorithms must be able to express
and control this trade-off. In this paper we present an interface theory which achieves this by coupling
resource weights d with logic formulas φ . A pair d : φ specifies the semantic meaning of d within
the control-flow of a program module. Logical operations on the formulas then go hand-in-hand with
arithmetic operations on resources. E.g., suppose a schedule activates control points X and Y with a cost
of d1 and d2 , respectively, expressed d1 : ◦X ∧ d2 : ◦Y . If the threads are resource concurrent then both
controls are jointly active within the maximum, i.e., max(d1 , d2 ) : ◦(X ∧ Y ). If we are only concerned
whether one of the controls is reached, then we take the minimum min(d1 , d2 ) : ◦(X ⊕Y ). If activations
of X and Y requires interleaving of resources, then we must use addition d1 + d2 : ◦(X ⊗Y ).
Our interface theory combines min-max-plus algebra (N∞ , min, max, +, 0, −∞, +∞), see e.g. [4], with
a refinement of Boolean algebra to reason about logical control-flow. It features two conjunctions ∧, ⊗ to
distinguish concurrent from multi-threading parallelism, two disjunctions ∨, ⊕ to separate external from
internal scheduling choices, respectively. As a consequence of its constructive nature, our algebra replaces
classical negation by a weaker and more expressive pseudo-complement for which ¬¬x = x and x + ¬x = 1
are no longer tautologies. This turns the Boolean algebra into a so-called Heyting algebra. The work presented here
is an extension and adaptation of our earlier work on propositional stabilisation theory [21] which has
been developed to provide a semantic foundation for combinational timing analyses.
The plan for the paper is as follows: To start with, Sec. 2 lays out the syntactic and semantical groundwork for our interface type theory which is then studied in some more detail in Sec. 3. For compactness
we keep these theoretical Sections 2 and 3 fairly condensed, postponing examples to Secs. 4 and 5. In
the former, Sec. 4, we sketch applications to network flow, shortest path and task scheduling, while in
Sec. 5 we discuss the problem of WCRT analysis for Esterel-style synchronous processing. The paper
concludes in Sec. 6 with a discussion of related work.
2 Syntax and Semantics of Synchronous Scheduling Interfaces
Synchronous scheduling assumes that all dependencies in the control flow of a single instant are acyclic
and the propagation of control, for all threads, is a monotonic process in which each atomic control point
is only ever activated at most once. Let V be a set of signals, or control variables, which specify the
atomic control points in the interface of a synchronous module. An event is a subset E ⊆ V of control
variables. A synchronous activation sequence, or simply an activation, is a monotonically increasing
function σ ∈ n → 2V from n = {0, 1, . . . , n−1} into the set of events, i.e., σ (i) ⊆ σ ( j) for all 0 ≤ i ≤ j < n.
The length |σ | of σ is the number of events it contains, i.e., |σ | = n. The unique activation of length
0 is called the empty activation, also denoted ∅.
Activations model the monotonic process of signal propagation during one synchronous instant, i.e.,
between two ticks of the logical clock. They induce a Boolean valuation on the control variables in the
sense that A ∈ V may be considered “present” for the instant if A ∈ σ (i) for some 0 ≤ i < |σ | and “absent”
otherwise. In the former case, index i is the activation level for the presence of control A. In general, the
domain n over which an activation is defined acts as a discrete domain of quantifiable resources which
are consumed by control variables becoming active at different resource levels. In this way, activation
sequences give an operational understanding of truth values that is faithful to causality and resource
consumption. A canonical interpretation is the temporal reading: The length |σ | is the duration of the
synchronous instant, i.e., the overall reaction time, and A ∈ σ (i) means that A is activated, or is present
from micro-step i.
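For concreteness, an activation can be represented as a monotonically increasing list of event sets; the presence of a control variable and its activation level are then simple scans. The following Python sketch uses our own encoding, chosen only for illustration.

```python
from typing import FrozenSet, List, Optional

Event = FrozenSet[str]
Activation = List[Event]   # sigma(0) ⊆ sigma(1) ⊆ ... ⊆ sigma(n-1)

def is_activation(sigma: Activation) -> bool:
    """Check monotonicity: every event contains the previous one."""
    return all(sigma[i] <= sigma[i + 1] for i in range(len(sigma) - 1))

def activation_level(sigma: Activation, a: str) -> Optional[int]:
    """Least index i with a in sigma(i), or None if a stays absent."""
    for i, event in enumerate(sigma):
        if a in event:
            return i
    return None

sigma = [frozenset(), frozenset({"A"}), frozenset({"A", "B"})]
assert is_activation(sigma)
print(activation_level(sigma, "B"))   # 2: B is present from micro-step 2
print(activation_level(sigma, "C"))   # None: C is absent for the instant
```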
Definition 2.1 Let σ ∈ n → 2V be an activation.
• A sub-activation σ′ ⊆ σ of σ is an activation σ′ ∈ m → 2^V such that there exists a strictly monotonic function f ∈ m → n with σ′(i) = σ(f(i)) for all i ∈ m.
• We write σ = σ1 ∪ σ2 to express that sub-activations σ1 , σ2 ⊆ σ form an activation cover of σ ,
or an interleaving decomposition in the sense that each event is contained in σ1 or in σ2 , i.e.,
∀i ∈ |σ |. ∃ j = 1, 2. ∃k ∈ |σ j |. i = f j (k) where f j are the index embeddings of σ j , j = 1, 2.
• For every i ∈ N we define the shifted activation σ [i, :] : m → 2V , where m =d f { j | 0 ≤ j + i < n}
and σ [i, :]( j) =d f σ ( j + i).
A shifted activation is also a sub-activation, σ[i, :] ⊆ σ. We have σ[i, :] = ∅ if σ = ∅ or if i ≥ |σ|.
The shift operator is monotonic wrt sub-activations and antitonic wrt resource level, i.e., if σ′ ⊆ σ and
0 ≤ i ≤ j then σ′[j, :] ⊆ σ[i, :]. This depends on strict monotonicity of the index embedding in σ′ ⊆ σ.
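Under the same list-of-event-sets encoding as above, the shift σ[i, :] is a slice and a sub-activation is a subsequence of events; the helpers below are a hedged sketch of these two operations from Def. 2.1 (the brute-force subsequence test is exponential and meant only for small examples).

```python
from itertools import combinations

def shift(sigma, i):
    """The shifted activation sigma[i,:](j) = sigma(j + i); empty if i >= |sigma|."""
    return sigma[i:]

def is_sub_activation(tau, sigma):
    """tau ⊆ sigma: tau equals sigma composed with some strictly monotonic index map."""
    return any(all(tau[k] == sigma[f_k] for k, f_k in enumerate(f))
               for f in combinations(range(len(sigma)), len(tau)))

sigma = [frozenset({"A"}), frozenset({"A", "B"}), frozenset({"A", "B", "C"})]
tau = [frozenset({"A"}), frozenset({"A", "B", "C"})]
assert is_sub_activation(tau, sigma)
assert is_sub_activation(shift(sigma, 1), sigma)   # shifts are sub-activations
```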
In order to model non-determinism (abstracting from internal parameters or external environment)
our interfaces are interpreted over subsets Σ of activation sequences, called (synchronous) schedules.
These schedules (of a program, a module, or any other program fragment) will be specified by a scheduling type φ generated by the logical operators
φ ::= A | true | false | φ ∧ φ | ¬φ | φ ⊃ φ | φ ∨ φ | φ ⊕ φ | φ ⊗ φ | ◦φ
generated from control variables A ∈ V. We will write Σ |= φ (σ |= φ ) to say that schedule Σ (activation σ )
satisfies the type φ . The semantics of types is formally defined below in Def. 2.2. As a type specification,
Michael Mendler
31
each control variable A ∈ V represents the guarantee that “A is active (the signal is present, the program
label has been traversed, the state is activated) in all activations of Σ”. The constant true is satisfied
by all schedules and false only by the empty schedule or the schedule which contains only the empty
activation. The type operators ¬, ⊃ are negation and implication. The operators ∨ and ⊕ are two
forms of logical disjunction to encode internal and external non-determinism and ∧, ⊗ are two forms
of logical conjunction related to true concurrency and interleaving concurrency, respectively. Finally,
◦ is the operator to express resource consumption. The usual bracketing conventions apply: The unary
operators ¬, ◦ have highest binding power, implication ⊃ binds most weakly and the multiplicatives ∧, ⊗
are stronger than the summations ∨, ⊕. Occasionally, bi-implication φ ≡ ψ is useful as an abbreviation
for (φ ⊃ ψ) ∧ (ψ ⊃ φ ). Also, we note that ¬φ is equivalent to φ ⊃ false.
A scheduling type φ by itself only captures the functional aspect of an interface. To get a full interface
we need to enrich types by resource information. To this end, we associate with every scheduling type φ
a set of scheduling bounds Bnd(φ ) recursively as follows:
Bnd(false) = 1
Bnd(true) = 1
Bnd(A) = 1
Bnd(¬φ ) = 1
Bnd(φ ∧ ψ) = Bnd(φ ) × Bnd(ψ)
Bnd(φ ⊕ ψ) = Bnd(φ ) × Bnd(ψ)
Bnd(◦φ ) = N∞ × Bnd(φ )
Bnd(φ ∨ ψ) = Bnd(φ ) + Bnd(ψ)
Bnd(φ ⊃ ψ) = Bnd(φ ) → Bnd(ψ)
Bnd(φ ⊗ ψ) = Bnd(φ ) × Bnd(ψ),
where 1 = {0} is a distinguished singleton set. Elements of the disjoint sum Bnd(φ ) + Bnd(ψ) are
presented as pairs (0, f ) where f ∈ Bnd(φ ) or (1, g) where g ∈ Bnd(ψ). The set Bnd(φ ) × Bnd(ψ) is the
Cartesian product of the sets Bnd(φ ) and Bnd(ψ) and Bnd(φ ) → Bnd(ψ) the set of total functions from
Bnd(φ ) to Bnd(ψ). Intuitively, an element f ∈ Bnd(φ ) may be seen as a form of generalised higher-order
resource matrix for schedules of shape φ .
Definition 2.2 A scheduling interface is a pair f : φ consisting of a scheduling type φ and a scheduling
bound f ∈ Bnd(φ ). An activation σ satisfies an interface f : φ , or satisfies the scheduling type φ with
bound f , written σ |= f : φ , according to the following inductive rules:
σ |= 0 : false
iff |σ| = 0, i.e., σ = ∅
σ |= 0 : true
iff always
σ |= 0 : A
iff ∀0 ≤ i < |σ | ⇒ A ∈ σ (i)
σ |= ( f , g) : φ ∧ ψ iff σ |= f : φ and σ |= g : ψ
σ |= (0, f ) : φ ∨ ψ iff σ |= f : φ
σ |= (1, g) : φ ∨ ψ iff σ |= g : ψ
σ |= ( f , g) : φ ⊕ ψ iff σ |= f : φ or σ |= g : ψ
σ |= f : φ ⊃ ψ
iff ∀σ′ ⊆ σ. ∀g ∈ Bnd(φ). (σ′ |= g : φ ⇒ σ′ |= f g : ψ)
σ |= (d, f ) : ◦φ
iff |σ | = 0 or ∃i ∈ N. 0 ≤ i ≤ d and σ [i, :] |= f : φ
σ |= ( f , g) : φ ⊗ ψ iff ∃σ1 , σ2 ⊆ σ . σ = σ1 ∪ σ2 and σ1 |= f : φ and σ2 |= g : ψ.
A schedule Σ satisfies φ with bound f , written Σ |= f : φ , if for all σ ∈ Σ, σ |= f : φ . A schedule satisfies
φ or is bounded for φ if there exists f ∈ Bnd(φ ) such that Σ |= f : φ .
The semantics Σ |= f : φ as formalised in Def. 2.2 is a ternary relation: It links schedules, types and
bounds. The symbol |= separates the behavioural model Σ from the formal interface f : φ . The latter,
in turn, combines a qualitative and a quantitative aspect. The type φ captures the causal relationships
between the control points and the bound f ∈ Bnd(φ ) refines this quantitatively by weaving in concrete
activation levels. The colon : is a binary connective which separates these concerns.
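To make the interplay between bounds and types in Def. 2.2 concrete, here is a hedged Python sketch of the satisfaction relation restricted to a small fragment (control variables, ∧, ⊕ and the resource modality ◦). The constructor names are ours; implication and ⊗, which quantify over sub-activations and interleaving decompositions, are omitted to keep the sketch short.

```python
from dataclasses import dataclass

@dataclass
class Atom:
    name: str          # control variable A

@dataclass
class And:
    left: object
    right: object      # phi ∧ psi

@dataclass
class OPlus:
    left: object
    right: object      # phi ⊕ psi

@dataclass
class Circle:
    body: object       # ◦phi

def sat(sigma, bound, phi):
    """sigma |= bound : phi, for the fragment above (cf. Def. 2.2)."""
    if isinstance(phi, Atom):                    # A is present in every event
        return all(phi.name in event for event in sigma)
    if isinstance(phi, And):                     # bound = (f, g)
        f, g = bound
        return sat(sigma, f, phi.left) and sat(sigma, g, phi.right)
    if isinstance(phi, OPlus):                   # bound = (f, g)
        f, g = bound
        return sat(sigma, f, phi.left) or sat(sigma, g, phi.right)
    if isinstance(phi, Circle):                  # bound = (d, f): reach phi within d shifts
        d, f = bound
        return len(sigma) == 0 or any(
            sat(sigma[i:], f, phi.body) for i in range(min(d, len(sigma)) + 1))
    raise ValueError("unsupported type constructor")

sigma = [frozenset(), frozenset({"A"}), frozenset({"A", "B"})]
print(sat(sigma, (1, 0), Circle(Atom("A"))))   # True: A holds from level 1 onwards
print(sat(sigma, (0, 0), Circle(Atom("A"))))   # False: A is absent in sigma(0)
```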
Proposition 2.3 σ |= f : φ and σ′ ⊆ σ implies σ′ |= f : φ. Moreover, |σ| = 0 implies σ |= f : φ.
Prop. 2.3 says that interfaces are inherited by sub-activations. This is natural since a sub-activation
selects a subset of events and thus (in general) contains more control variables with lower activation distances. The degenerated case is the empty activation which is inconsistent and thus satisfies all interfaces,
including the strongest specification 0 : false, viz. “everything is true with zero resource consumption”.
The most general way to use the semantic relation of Def. 2.2 is to consider the set of (typically
abstracted) activations for a given module P as a schedule ΣP , and then determine a suitable interface for
it. Any such f : φ with ΣP |= f : φ may be taken as a valid interface specification of P giving a quantified
behavioural guarantee for all activations σ ∈ ΣP under the given scheduling assumptions. Ideally, we are
interested in the best fitting or tightest interface, if such exists. To measure the relative strength of an
interface we employ Def. 2.2 to associate with every pair f : φ the schedule [[ f : φ ]] = { σ | σ |= f : φ }
which is the semantic meaning of the interface. Interfaces may then be compared naturally. The smaller
the set of associated activations [[ f : φ ]] the tighter is the interface f : φ . Formally, we write
f :φ g:ψ
if
[[ f : φ ]] ⊆ [[g : ψ]]
∼ g : ψ in case [[ f : φ ]] = [[g : ψ]]. We call an interface f : φ tight for ΣP if it is minimal wrt
and f : φ =
, i.e., whenever g : ψ f : φ and ΣP |= g : ψ then f : φ ∼
= g : ψ. A tight interface provides exact
information about ΣP in both the functional and the resource dimensions within the expressiveness of
our typing language. Typically, however, we are given some schedule ΣP together with a fixed type φ
and ask for a minimal bound f such that ΣP |= f : φ . If such a tight bound exists and is unique we call it
worst-case for φ .
We generalise equivalence to arbitrary types, taking φ ≅ ψ to mean that for every f ∈ Bnd(φ) there
is g ∈ Bnd(ψ) such that f : φ ≅ g : ψ and, vice versa, for each g ∈ Bnd(ψ) we can find f ∈ Bnd(φ)
with g : ψ ≅ f : φ. The main purpose of the relations ⪯ and ≅ is to justify strengthening, weakening or
semantics-preserving transformations to handle interfaces as tightly as sensible. They are the basis of
the interface algebra, some of whose laws will be studied next.
3 The Algebra of Scheduling Types
The set of scheduling bounds Bnd(φ ) captures the amount of resource information associated with a
type φ . In this respect the most simple class of types is that for which Bnd(φ ) is (order) isomorphic
to 1. Such types are called pure since they do not carry resource information and thus specify only
functional behaviour. It will be convenient to exploit the isomorphisms Bnd(ζ ) ∼
= 1 and identify all
bounds f ∈ Bnd(ζ ) of a pure type canonically with the unique 0 ∈ 1. Further, since it is unique, we
may as well drop the (non-informative) bound and simply write ζ instead of 0 : ζ . This means, e.g., that
ζ1 ∧ ζ2 , (0, 0) : ζ1 ∧ ζ2 and 0 : ζ1 ∧ ζ2 are all identified.
Second, with this simplification on pure types in place, we may mix bounds and types and apply
the type operators to full interfaces. Since f : φ specifies individual activations it formally behaves like
an atomic statement. Hence, it is possible to use interfaces f : φ themselves as generalised “control
variables” in types such as ( f : φ ) ∧ ψ or ◦( f : φ ). We simply define
Bnd( f : φ ) =d f 1
σ |= 0 : ( f : φ ) iff σ |= f : φ
which turns an interface f : φ into a pure type. Then, e.g., [[ f : φ ∧ g : ψ]] = [[(0, 0) : ( f : φ ∧ g : ψ)]] =
[[0 : ( f : φ )]] ∩ [[0 : (g : ψ)]] = [[ f : φ ]] ∩ [[g : ψ]].
A few basic facts about the interface algebra arising from Def. 2.2 are readily derived. Not really
surprisingly, true and false are complements, ¬true ≅ false, ¬false ≅ true, as well as neutral elements
false ⊗ φ ≅ false ⊕ φ ≅ true ∧ φ ≅ φ, and dominant elements false ∧ φ ≅ false, true ⊕ φ ≅ true ∨ φ ≅ true ⊗ φ ≅ true.
Shifting a type by −∞ and +∞ produces the strongest and weakest statements false and true, respectively:
Proposition 3.1 For arbitrary types φ, −∞ : ◦φ ≅ false and +∞ : ◦φ ≅ true.
All operators ∨, ∧, ⊕ and ⊗ are commutative. The pairs ∨ ↔ ∧ and ⊕ ↔ ∧ fully distribute over
each other, while ⊗ distributes over both ⊕ and ∨, but not the other way round. Between ⊗ and ∧ no
distribution is possible, in general. One can show that the fragment ∨, ∧, false, ¬, ⊃ satisfies the laws of
Heyting algebras seen in Prop. 3.2.
Proposition 3.2 For arbitrary types φ1 , φ2 , ψ:
ψ ⊃ ψ ≅ true                              φ1 ⊃ (φ2 ⊃ φ1) ≅ true
(φ1 ∧ φ2) ⊃ ψ ≅ φ1 ⊃ (φ2 ⊃ ψ)             (φ1 ⊃ φ2) ∧ φ1 ≅ φ1 ∧ φ2
(φ1 ∨ φ2) ⊃ ψ ≅ (φ1 ⊃ ψ) ∧ (φ2 ⊃ ψ)       ψ ⊃ (φ1 ∧ φ2) ≅ (ψ ⊃ φ1) ∧ (ψ ⊃ φ2)
false ⊃ ψ ≅ true                           ψ ⊃ true ≅ true
ψ ⊃ false ≅ ¬ψ                             true ⊃ ψ ≅ ψ.
It is worthwhile to observe that the classical principles of the Excluded Middle A ⊕ ¬A and A ∨ ¬A
are both different and not universally valid in WCRT algebra. The latter says A is static, i.e., A is present
in all activations or absent in all activations, the former that signal A is stable, i.e., in each activation
individually, A is either present from the start or never becomes active. Clearly, not every signal is
static or stable. The absence of the axioms A ⊕ ¬A, A ∨ ¬A, which arises naturally from the activation
semantics, is a definitive characteristics of intuitionistic logic or Heyting algebra. This feature is crucial
to handle the semantics of synchronous languages in a compositional and fully abstract way [20].
Boolean Types. An important sub–class of pure types are negated types ¬φ . They express universal
statements about each singleton event of each activation sequence in a schedule. For instance, Σ |=
¬(A ⊗ B) says that no event σ (i) ⊆ V (0 ≤ i < |σ |) in any σ ∈ Σ contains A or B. Similarly, ¬(A ⊃ B)
states that A is present and B is absent in every event of every activation sequence, which is the same as
¬¬(A ∧ ¬B). Negated types are expressively equivalent to, and can be transformed into, Boolean types
obtained from the following grammar, where φ is an arbitrary type:
β ::= true | false | A | ¬β | β ∧ β | β ⊗ β | φ ⊃ β.
Proposition 3.3 The Boolean types form a Boolean algebra with ¬, ∧, ⊗ as classical complement,
conjunction and disjunction, respectively. Moreover, Σ |= β iff for every σ ∈ Σ and i ∈ |σ | the event
σ (i) ⊆ V satisfies β as a classical Boolean formula in control variables V.
A consequence of Prop. 3.3 is that the interface algebra contains ordinary classical Boolean algebra as
the fragment of Boolean types. In particular, for Boolean types the Double Negation principle ¬¬β ≅ β
and Excluded Middle ¬β ⊗ β ≅ true hold, as well as the De Morgan laws ¬(β1 ∧ β2) ≅ ¬β1 ⊗ ¬β2 and
¬(β1 ⊗ β2) ≅ ¬β1 ∧ ¬β2. Boolean types, like all types satisfying ¬¬φ ≅ φ or ¬φ ⊗ φ ≅ true, behave
exactly like expressions of Boolean algebra, encapsulating a Boolean condition to be satisfied by each
event in a sequence.
Pure Types. The sum operator ⊕ takes us outside the sub-language of Boolean types. The reason is
that the truth of ⊕, e.g., in stability A ⊕ ¬A, depends on the global behaviour of an activation and cannot
be reduced to a single Boolean condition. This is highlighted by the difference between σ |= A ⊕ B
which is the condition ∀i ∈ |σ |, A ∈ σ (i) or ∀i ∈ |σ |, B ∈ σ (i) and σ |= A ⊗ B which says ∀i ∈ |σ |, A ∈
σ (i) or B ∈ σ (i). The larger class of pure types, which includes ⊕, give us the possibility to express
“Boolean” conditions across activations, as opposed to Boolean types which act within activations. The
pure types, denoted by meta-variable ζ , are characterised syntactically as follows:
ζ ::= β | ζ ∧ ζ | ζ ⊕ ζ | ζ ⊗ ζ | φ ⊃ ζ,
where β is Boolean and φ is an arbitrary type. Notice that not only every Boolean type, but also every
negation ¬φ = φ ⊃ false, is pure according to this syntactic criterion.
Proposition 3.4 Every pure type ζ has a representation ζ ≅ ⊕i βi over Boolean types βi.
Elementary Types. Pure types have the special property that schedules Σ are bounded for them iff each
individual activation σ ∈ Σ is bounded, i.e., they express properties of individual activations. Formally,
if Σ1 |= ζ and Σ2 |= ζ then Σ1 ∪ Σ2 |= ζ . Disjunctions ζ1 ∨ ζ2 and resource types ◦ζ , in contrast, do
not share this locality property: Although each activation σ may satisfy ζ1 or ζ2 , the schedule Σ as
a whole need not be resource-bounded for ζ1 ∨ ζ2 as this would mean all activations satisfy ζ1 or all
satisfy ζ2 . Similarly, each individual activation σ ∈ Σ may validate ζ with some resource bound, without
necessarily there being a single common bound for all activations in Σ.
A useful class of types containing ∨ and ◦ are those for which Bnd(φ) is canonically order-isomorphic
to a Cartesian product of numbers, i.e., to N∞^n for some n ≥ 0. These scheduling types φ with
Bnd(φ) ≅ N∞^n are called elementary. They are generated by the grammar
θ ::= ζ | θ ∧ θ | θ ⊕ θ | θ ⊗ θ | ◦ζ | ψ ⊃ θ,
where ζ is pure and ψ is ◦-free. Elementary scheduling types are of special interest since their elements
are first-order objects, i.e., vectors and matrices of natural numbers.
Elementary interfaces specify the resource consumption of logical controls. For instance, σ |= (d, 0) :
◦ζ , given ζ = ⊕i βi (see Prop. 3.4), says that σ enters and remains inside a region of events described
by one of the Boolean conditions βi and consumes at most d resource units to do that. The special case
σ |= d : ◦false says that σ consumes no more than d units during any instant. Similarly, σ |= ζ ⊃ (d, 0) :
◦ξ with ζ = ⊕i βi and ξ = ⊕j γj says that every sub-activation σ′ ⊆ σ that runs fully inside one of the
regions βi must reach one of the regions γ j with resources bounded by d. Then, σ |= ζ ⊃ (d, 0) : ◦false
means that σ consumes no more than d units while staying in any of the regions βi .
To compactify the notation we will write tuples (d1, d2) for the bounds ((d1, 0), (d2, 0)) ∈ (N∞ × 1) ×
(N∞ × 1) ≅ N∞ × N∞ of types such as ◦ζ1 ⊕ ◦ζ2, ◦ζ1 ∧ ◦ζ2, ◦ζ1 ⊗ ◦ζ2. We apply this simplification
also to bounds f ∈ 1 → N∞ × 1 ≅ N∞ for types such as ζ1 ⊃ ◦ζ2: We write [d] : ζ1 ⊃ ◦ζ2, treating
the bracketed value [d] like a function λ x. (d, 0). In fact, [d] : ζ1 ⊃ ◦ζ2 is the special case of a 1 × 1
matrix. We will systematically write column vectors [d1 ; d2 ] instead of λ x.((d1 , 0), (d2 , 0)) for the bounds
of types such as ζ ⊃ ◦ζ1 ⊕ ◦ζ2 , ζ ⊃ ◦ζ1 ∧ ◦ζ2 or ζ ⊃ ◦ζ1 ⊗ ◦ζ2 , and row-vectors [d1 , d2 ] in place of
λ x. case x of [(0, 0) → (d1 , 0), (1, 0) → (d2 , 0)] for types ζ1 ∨ ζ2 ⊃ ◦ζ . Our linearised matrix notation uses
semicolon for row-wise and ordinary colon for columns-wise composition of sub-matrices. Specifically,
[d11 ; d21 , d12 ; d22 ] and [d11 , d12 ; d21 , d22 ] denote the same 2 × 2 matrix.
In the following Secs. 4 and 5 we are going to illustrate different sub-algebras of specialised elementary
types to manipulate combined functional and quantitative information and to facilitate interface abstractions. These generalise the algebra of dioids [4, 17] to full max-min-plus, obtaining an equally tight and
uniform combination of scheduling algebra and logical reasoning.
4 Examples I: Network Flow, Shortest Path, Task Scheduling
The logical operations on types control the arithmetical operations on resource bounds. The next two
Props. 4.1 and 4.2 sum up some important basic facts.
Proposition 4.1 The arithmetic operations min, max and + compute worst-case bounds such that
[d1] : ζ1 ⊃ ◦ζ2 ∧ [d2] : ζ2 ⊃ ◦ζ3 ⪯ [d1 + d2] : ζ1 ⊃ ◦ζ3    (1)
[d1] : ζ ⊃ ◦ζ1 ∧ [d2] : ζ ⊃ ◦ζ2 ⪯ [max(d1, d2)] : ζ ⊃ ◦(ζ1 ∧ ζ2)    (2)
[d1] : ζ ⊃ ◦ζ1 ∧ [d2] : ζ ⊃ ◦ζ2 ⪯ [min(d1, d2)] : ζ ⊃ ◦(ζ1 ⊕ ζ2)    (3)
[d1] : ζ1 ⊃ ◦ζ ∧ [d2] : ζ2 ⊃ ◦ζ ⪯ [max(d1, d2)] : (ζ1 ⊕ ζ2) ⊃ ◦ζ    (4)
[d1] : ζ1 ⊃ ◦ζ ∧ [d2] : ζ2 ⊃ ◦ζ ⪯ [min(d1, d2)] : (ζ1 ∧ ζ2) ⊃ ◦ζ.    (5)
The law (1) expresses a sequential composition of an offset by d1 from control point ζ1 to ζ2 with
a further shift of d2 from ζ2 to ζ3 . The best guarantee we can give for the cost between ζ1 and ζ3 is
the addition d1 + d2 . The bounds [d1 ] and [d2 ] act like typed functions with [d1 + d2 ] being function
composition, [d2 ] · [d1 ] = [d1 + d2 ]. This is nothing but the multiplication of 1 × 1 matrices in max-plus or
min-plus algebra. The law (2) is conjunctive forking: If it takes at most d1 units from ζ to some control
point ζ1 and at most d2 to ζ2 , then we know that within max(d1 , d2 ) we have activated both together,
ζ1 ∧ ζ2. A special case of this occurs when ζ ≅ true, i.e., d1 : ◦ζ1 ∧ d2 : ◦ζ2 ≅ max(d1, d2) : ◦(ζ1 ∧ ζ2).
Now suppose conjunction is replaced by sum ζ1 ⊕ ζ2 , i.e., we are only interested in activating one of ζ1
or ζ2 , but do not care which. The worst-case bound for this disjunctive forking is the minimum, as seen
in (3). Again, there is the special case d1 : ◦ζ1 ∧ d2 : ◦ζ2 ≅ min(d1, d2) : ◦(ζ1 ⊕ ζ2). Dually, disjunctive
joins (4) are governed by the maximum: Suppose that starting in ζ1 activates ζ with at most d1 cost and
starting in ζ2 takes at most d2 resource units. Then, if we only know the activation starts from ζ1 or ζ2
but not which, we can obtain ζ if we are prepared to expend the maximum of both costs. If, however, we
assume the schedule activates both ζ1 and ζ2 , which amounts to conjunctive join, then the destination ζ
is obtained with the minimum of both shifts, see (5).
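Read operationally, laws (1)-(5), together with the interleaving law (6) of Prop. 4.2 below, prescribe which arithmetic operation to apply to the numeric part of an interface. The following Python sketch phrases them as a small kit of combinators; the naming is ours and is meant only for illustration.

```python
INF = float("inf")   # stands in for the bound +infinity

def seq(d1, d2):          # (1) sequential composition zeta1 -> zeta2 -> zeta3
    return d1 + d2

def fork_and(d1, d2):     # (2) conjunctive fork: reach zeta1 and zeta2
    return max(d1, d2)

def fork_plus(d1, d2):    # (3) disjunctive fork: reach zeta1 or zeta2, don't care which
    return min(d1, d2)

def join_plus(d1, d2):    # (4) disjunctive join: start from zeta1 or zeta2, unknown which
    return max(d1, d2)

def join_and(d1, d2):     # (5) conjunctive join: both zeta1 and zeta2 are activated
    return min(d1, d2)

def interleave(d1, d2):   # (6) multi-threaded interleaving (Prop. 4.2)
    return d1 + d2

# Two stages in sequence, the second forking into two target controls:
print(seq(3, fork_and(2, 5)))    # 8: worst case to have both targets active
print(seq(3, fork_plus(2, 5)))   # 5: worst case to have at least one of them
```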
Proposition 4.2 Let ζ1 , ζ2 be pure types which are persistent in the sense that whenever σ (k) |= ζi for
0 ≤ k < |σ |, then σ [k, :] |= ζi , too. Then,
d1 : ◦ζ1 ⊗ d2 : ◦ζ2 ⪯ d1 + d2 : ◦(ζ1 ⊕ ζ2)    (6)
(d1 : ◦ζ1 ∧ (ζ1 ⊃ ζ2)) ⊗ (d2 : ◦ζ2 ∧ (ζ2 ⊃ ζ1)) ⪯ d1 + d2 : ◦(ζ1 ∧ ζ2).    (7)
Consider (6) of Prop. 4.2. Suppose a schedule σ splits into two (sub-)threads σ = σ1 ∪ σ2 each
switching control ζ1 and ζ2 consuming at most d1 and d2 units, respectively. Since they can be arbitrarily
interleaved and we do not know which one completes first, all we can claim is σ (k) |= ζi for some
k ≤ d1 + d2 and i = 1, 2. By persistence, this suffices to maintain ζi from level k onwards, so that
σ |= d1 + d2 : ◦(ζ1 ⊕ ζ2 ). Without imposing further assumptions, a sub-thread may be allocated an
unknown number of resource units, thereby stalling the progress of the other, unboundedly. The situation
changes, however, if the ζi are synchronisation points where the threads must give up control unless the
other thread has passed its own synchronisation point ζj (i ≠ j), too. This is the content of (7) and is
specified formally by the additional constraints ζi ⊃ ζj.
Prop. 4.1 and 4.2 highlight how the arithmetic operations of min-max-plus algebra are guided by the logical
semantics of interface types. From this vantage point, resource analysis is nothing but a semantics-consistent
manipulation of a collection of numbers: Whether [d1] : φ1, [d2] : φ2 are to be added, maximised
or minimised depends on their types φ1 and φ2. In particular, keeping track of the types will make the
difference between a max-of-sums (sum-of-mins) as opposed to a sum-of-maxes (min-of-sums).

4.1 Network Flow

Consider the dependency graph in Fig. 1 with control nodes V = {A, B, C, D, E, F} and dependency
edges labelled by positive integers. Let us assume the graph models a communication network
in which control nodes represent packet routers and edges are directed point-to-point connections
of limited bandwidth. For instance, the router at node D receives packets from routers B and C
through channels of bandwidth 1 and 4, respectively. It forwards the incoming traffic to routers
E or F of bandwidth 5 and 4, respectively. The bandwidth measures the maximal amount of information
that can travel across the channel per synchronisation instant. The analysis of the maximum throughput
is a synchronous scheduling problem which can be modelled using interface types.

[Figure 1: Scheduling Dependency Graph N]

We associate with the network N a scheduling type φN, such that the amount of packets that can
be pushed into a node X is given by the minimal d such that φN ⪯ X ⊃ d : ◦false, i.e., the maximal
number of scheduling cycles that node X may be kept alive within any activation specified by φN. The
idea is that if σ ∈ [[φN]] is a valid activation of N then each cycle i ∈ |σ| such that X ∈ σ(i) represents
a packet unit i sent through X. The event σ(i) ⊆ V encodes the packet’s path, i.e., the set of all routers
that payload unit i is passing on its journey through the network. The statement σ |= X ⊃ d : ◦false then
says that whenever X becomes alive in activation σ it handles no more than d packets. This number may vary
between activations. The minimal d, bounding all activations in this way, is the maximal throughput
at X permitted by specification φN. Observe that both capacity values 0 and −∞ are equivalent, 0 :
◦false ≅ −∞ : ◦false ≅ false. In fact, the type X ⊃ 0 : ◦false paraphrased “X forwards 0 packets” and
X ⊃ −∞ : ◦false saying “X does not forward any packets” are the same statements and equivalent to ¬X.
Now consider node D again. Within the synchronous measurement instant, all packets arriving at D
must be scheduled to leave through channels D → E or D → F. Consider an activation σ |= D, i.e., all
i ∈ |σ| are packets dispatched through D. Some of these will go to E, others to F, and all go to one of
the two. Hence there are sub-activations σ = σ1 ∪ σ2 such that σ1 |= E and σ2 |= F. Also, because of
the channel limitations, there can be at most 5 packet units of the former and 4 of the latter type. Thus,
σ1 |= E ∧ 5 : ◦false and σ2 |= F ∧ 4 : ◦false. All in all, we have found the type specifying D and its
connections in N to be D ⊃ (E ∧ 5 : ◦false) ⊗ (F ∧ 4 : ◦false).
The tensor ⊗ is used to model the output branching at a node. Observe that if we increase one of
the channel capacities to +∞, say the one giving access to E, we get D ⊃ (E ∧ +∞ : ◦false) ⊗ (F ∧ 4 :
◦false) ≅ D ⊃ E ⊗ (F ∧ 4 : ◦false) because E ∧ +∞ : ◦false ≅ E ∧ true ≅ E. This means the channel
D → E does not impose any further constraints on the throughput besides what E prescribes. If we
decrease the capacity to 0, the type reduces to D ⊃ (E ∧ 0 : ◦false) ⊗ (F ∧ 4 : ◦false) ∼= D ⊃ F ∧ 4 : ◦false since E ∧ 0 : ◦false ∼= E ∧ false ∼= false and false ⊗ φ ∼= φ. Hence, a capacity of 0 behaves as if the
channel was cut off completely. Consequently, the degenerated case of a node X without any exits would
be specified by X ⊃ false or ¬X. If we conjoin the types for all nodes of N as seen in Fig. 1, we get
φN =d f true ⊃ (A ∧ +∞ : ◦false)   (8)
   ∧ A ⊃ ((B ∧ 5 : ◦false) ⊗ (C ∧ 3 : ◦false))   (9)
   ∧ B ⊃ ((E ∧ 2 : ◦false) ⊗ (D ∧ 1 : ◦false))   (10)
   ∧ C ⊃ ((D ∧ 4 : ◦false) ⊗ (F ∧ 8 : ◦false))   (11)
   ∧ D ⊃ ((E ∧ 5 : ◦false) ⊗ (F ∧ 4 : ◦false))   (12)
   ∧ E ⊃ (F ∧ 2 : ◦false)   (13)
   ∧ F ⊃ (true ∧ +∞ : ◦false).   (14)
Type (8) designates A as the source node of the network. It formalises a source channel of infinite
capacity permitting the global environment, represented by the logical control true, to push as many
packets as possible into A. Analogously, destination node F (14) returns packets back to the external
environment. Again, this sink channel has infinite capacity, since all packets arriving at F will delivered.
The throughput dN of N is the smallest d such that φN ⊢ d : ◦false. To get the “exact” or “optimal”
bound we must explore the network in breadth and depth. The analysis strategy involves non-linear
global optimisation such as the Ford-Fulkerson or Goldberg’s Preflow-Push algorithms. This is not the
place to review these algorithms. We shall merely indicate how their logical content can be coded in type
theory. Consider that each of the network implications (8)–(14) of the form X ⊃ ⊗Y (Y ∧ dY : ◦false) can
be used as an equation X ∼= X ∧ ⊗Y (Y ∧ dY : ◦false) for transformations by substitution. For example,
proceeding forwards from the source A, breadth-first, we can derive
true ∼= A
   ∼= A ∧ ((B ∧ 5 : ◦false) ⊗ (C ∧ 3 : ◦false))
   ∼= A ∧ ((B ∧ ((E ∧ 2 : ◦false) ⊗ (D ∧ 1 : ◦false)) ∧ 5 : ◦false)
        ⊗ (C ∧ ((D ∧ 4 : ◦false) ⊗ (F ∧ 8 : ◦false)) ∧ 3 : ◦false))
   ∼= ((A ∧ B ∧ E ∧ 2 : ◦false)   (15)
        ⊗ (A ∧ B ∧ D ∧ 1 : ◦false))   (16)
        ⊗ (((A ∧ C ∧ D ∧ 3 : ◦false)   (17)
        ⊗ (A ∧ C ∧ F ∧ 3 : ◦false)) ∧ 3 : ◦false),   (18)
using the special ∧/⊗ distribution X ∧ (φ1 ⊗ φ2) ∼= (X ∧ φ1) ⊗ (X ∧ φ2) for atoms X ∈ V, and the derivable
laws ((φ1 ∧ d1 : ◦false) ⊗ (φ2 ∧ d2 : ◦false)) ∧ e : ◦false ∼= (φ1 ∧ d1 : ◦false) ⊗ (φ2 ∧ d2 : ◦false) for e ≥ d1 + d2
and ((φ1 ∧ d1 : ◦false) ⊗ (φ2 ∧ d2 : ◦false)) ∧ e : ◦false ∼= (φ1 ∧ e : ◦false) ⊗ (φ2 ∧ e : ◦false) ∧ e : ◦false
for e ≤ min(d1, d2).
The type (15)–(18) describes the resource usage of packets entering the network up to a depth of 3
nodes, classifying them into 4 separate flows: The packets from (15) pass through A → B → E and can
occupy at most 2 bandwidth units, those from (16) follow the path A → B → D and have a volume of at
most 1 unit. Furthermore, the packets (17) travelling along A → C → D or (18) on path A → C → F each
have at most volume 3, as specified by A ∧C ∧ D ∧ 3 : ◦false and A ∧C ∧ F ∧ 3 : ◦false. Moreover, their
sum must not exceed the limit 3 either, as enforced by the extra outer conjunct 3 : ◦false. The maximal
flow through the network can be obtained by applying the (in-)equations (15)–(18) in this fashion until
saturation is achieved, when all logical controls may be dropped, turning the equation ∼= into the inequation ⊢:
true ∼= A ∼= · · ·
   ∼= ((A ∧ B ∧ E ∧ F ∧ 2 : ◦false)
        ⊗ (A ∧ B ∧ D ∧ F ∧ 1 : ◦false))
        ⊗ (((((A ∧ C ∧ D ∧ F ∧ 3 : ◦false)
        ⊗ (A ∧ C ∧ D ∧ E ∧ F ∧ 2 : ◦false)) ∧ 3 : ◦false)
        ⊗ (A ∧ C ∧ F ∧ 3 : ◦false)) ∧ 3 : ◦false)
   ⊢ (2 : ◦false ⊗ 1 : ◦false)
        ⊗ ((((3 : ◦false ⊗ 2 : ◦false) ∧ 3 : ◦false) ⊗ 3 : ◦false) ∧ 3 : ◦false) ∼= 6 : ◦false,
using the laws d : ◦false ∧ e : ◦false ∼= min(d, e) : ◦false and d : ◦false ⊗ e : ◦false ∼= d + e : ◦false, derived
from (3) and (6), respectively.
This saturation process is a fixed-point construction which may be implemented using a standard
“max-flow” algorithm. Specifically, the graph algorithms of Ford-Fulkerson or Goldberg are efficient
decision procedures for deciding the algebra induced by the fragment of types appearing in (8)–(18).
This sub-algebra of “logical numbers” provides a purely algebraic interpretation for these standard algorithms. It should be clear that the graph-theoretic information is coded in the syntactic structure of the
types. However, in contrast to plain graphs, types are equipped with behavioural meaning in the form of
scheduling sequences. They generate a plus-min algebra of scheduling sequences which is not a linear
algebra, as it does not satisfy distribution. Specifically, e : ◦false ∧ (d1 : ◦false ⊗ d2 : ◦false) ∼= min(e, d1 + d2) : ◦false ⊢ min(e, d1) + min(e, d2) : ◦false ∼= (e : ◦false ∧ d1 : ◦false) ⊗ (e : ◦false ∧ d2 : ◦false). This
approximation offset, of course, is why max-flow problems are not linear matrix problems but require
global search and relaxation methods.
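As a cross-check (not part of the type-theoretic development above), the same bound can be reproduced with a standard max-flow computation on the graph of Fig. 1. The following Python sketch uses the Edmonds-Karp variant of Ford-Fulkerson; the max_flow helper and the node encoding are illustrative choices, not taken from the paper.

from collections import deque

def max_flow(cap, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    flow = 0
    # residual capacities, initialised from the given capacities
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    while True:
        # breadth-first search for an augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # bottleneck capacity along the augmenting path found
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# bandwidth labels of the dependency graph N in Fig. 1
N = {'A': {'B': 5, 'C': 3}, 'B': {'E': 2, 'D': 1},
     'C': {'D': 4, 'F': 8}, 'D': {'E': 5, 'F': 4},
     'E': {'F': 2}, 'F': {}}
print(max_flow(N, 'A', 'F'))   # 6, matching the bound 6 : ◦false derived above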
4.2
Shortest Path
A different interpretation of the scheduling graph Fig. 1 reads the edge labels as distances and asks for
the length of the shortest path through the network. This leads to an “inverted” network algebra: The
sequential composition of edges is addition and the branching of edges at a node is associated with the
minimum operation, whereas in the network flow situation of Sec. 4.1, sequential composition corresponds to minimum and branching is addition. Not surprisingly, the shortest path interpretation invokes
a different fragment of the type theory. Again, each node is a control variable V = {A, B,C, D, E, F}.
An activation σ models a journey through the network activating control nodes as it passes them. If σ
activates X at time i, then X ∈ σ (i), and if it traverses an edge X → Y with distance label d, then for
some 0 ≤ k ≤ d, Y ∈ σ (i + k). Hence σ satisfies the type X ⊃ d : ◦Y . If there are several outgoing edges
X → Y1 and X → Y2 and σ reaches X, then, because we are interested in the shortest path, we permit σ
to explore both branches “in parallel”. Hence, σ fulfils both implications X ⊃ d1 : ◦Y1 and X ⊃ d2 : ◦Y2 .
Following this idea, the network N as given in Fig. 1 comes out as the type specification
φN =d f A ⊃ 5 : ◦B ∧ A ⊃ 3 : ◦C ∧ B ⊃ 1 : ◦D ∧ B ⊃ 2 : ◦E
∧ C ⊃ 4 : ◦D ∧ C ⊃ 8 : ◦F ∧ D ⊃ 5 : ◦E ∧ D ⊃ 4 : ◦F ∧ E ⊃ 2 : ◦F.
(19)
The length of the shortest path between X and Y is the minimal d such that φN ⊢ X ⊃ d : ◦Y. By (1),
sequentially connecting edges X ⊃ d1 : ◦Y and Y ⊃ d2 : ◦Z yields X ⊃ d1 + d2 : ◦Z, and a choice of two
paths X ⊃ d1 : ◦Z and X ⊃ d2 : ◦Z between the same start and end node, by (3) implies X ⊃ min(d1 , d2 ) :
◦Z as desired. Now the values of 0 and −∞ have different meaning: X ⊃ 0 : ◦Y is equivalent to X ⊃ Y
modelling an edge without cost. In contrast, X ⊃ −∞ : ◦Y is semantically the same as X ⊃ false which
says that no activation reaches control node X. A distance +∞ expresses absence of a connection since
X ⊃ +∞ : ◦Y ∼= X ⊃ true ∼= true which does not give any information about how to reach Y from X.
It is well-known how to compute shortest paths by linear programming. This exploits the distribution
law min(e + d1 , e + d2 ) = e + min(d1 , d2 ), which permits us to organise the scheduling bounds in the
network theory (19) in form of matrices and to manipulate them using typed matrix multiplications. For
instance, we can combine the two outgoing edges of A into a single type
(A ⊃ 5 : ◦B) ∧ (A ⊃ 3 : ◦C) ∼= A ⊃ (5, 3) : ◦B ∧ ◦C ∼= [5; 3] : A ⊃ ◦B ∧ ◦C,
(20)
where [5; 3] abbreviates the function λ x. ((5, 0), (3, 0)) interpreted as a column vector of numbers. Dually,
the two incoming edges into node D can be combined into a single type
(B ⊃ 1 : ◦D) ∧ (C ⊃ 4 : ◦D) ∼= [1, 4] : B ∨ C ⊃ ◦D,
(21)
where [1, 4] is the function λ x. case x of [0 → (1, 0), 1 → (4, 0)] thought of as a row vector. The type
algebra, essentially (1) and (3), proves that the conjunction of both (20) and (21) implies the matrix
multiplication
([5; 3] : A ⊃ ◦B ∧ ◦C) ∧ ([1, 4] : B ∨ C ⊃ ◦D) ⊢ min(5 + 1, 3 + 4) : A ⊃ ◦D = [1, 4] · [5; 3] : A ⊃ ◦D
in min-plus algebra. More generally, for every sub-network with source nodes X1, X2, . . . , Xm and sink
nodes Y1, Y2, . . . , Yn we have an elementary type D : ∨_{i=1}^{m} Xi ⊃ ∧_{j=1}^{n} ◦Yj describing the shortest path between any source and any target, in which the scheduling bound D ∈ Bnd((∨_{i=1}^{m} Xi) ⊃ ⊗_{j=1}^{n} ◦Yj) behaves
like a n × m matrix in min-plus algebra. For instance, take the decomposition of N into the edge sets
N1 =d f {A → B, A → C}, N2 =d f {B → E, B → D,C → D,C → F} and N3 =d f {D → E, D → F, E → F}:
D(N1 ) = [5; 3] : A ⊃ (◦B ∧ ◦C)
D(N2 ) = [1; 2; +∞, 4; +∞; 8] : (B ∨C) ⊃ (◦D ∧ ◦E ∧ ◦F)
D(N3 ) = [4, 2, 0] : (D ∨ E ∨ F) ⊃ ◦F.
The shortest path from A to F is then obtained by multiplying these matrices
[4, 2, 0] · [1; 2; +∞, 4; +∞; 8] · [5; 3] = [4, 2, 0] · [6; 7; 11] = 9 : A ⊃ ◦F
in min-plus-algebra. The type-theoretic approach facilitates a compositional on-the-fly construction of
the shortest path matrix. The pure algebraic technique would combine all the information in a global
6 × 6 network matrix N : (∨X∈V X) ⊃ (∧X∈V ◦X) where (N)XY = d < +∞ if there exists an edge X ⊃ d : Y
in φN . Then, the shortest path matrix is N ∗ = Id ∧ N ∧ N 2 ∧ · · · , where Id is the identity matrix with 0s in
the diagonal and +∞ everywhere else and ∧ is the operation of forming element-wise minimum, lifting
the logical operation d1 : ◦X ∧ d2 : ◦X ∼= min(d1, d2) : ◦X to matrices. The entries in N ∗ are the shortest
distances between any two nodes in the network.
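For illustration only, and not as part of the paper's development, the min-plus computations above can be reproduced numerically. The Python sketch below encodes the block matrices D(N1), D(N2), D(N3) and the global closure N ∗, with float('inf') standing for +∞; the function and variable names are choices of presentation.

INF = float('inf')

def minplus(X, Y):
    """Min-plus matrix product: (X*Y)[i][j] = min_k (X[i][k] + Y[k][j])."""
    n, m, p = len(X), len(Y), len(Y[0])
    return [[min(X[i][k] + Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Block matrices of the decomposition N1, N2, N3 given in the text.
D1 = [[5], [3]]                       # A -> (B, C)
D2 = [[1, 4], [2, INF], [INF, 8]]     # (B, C) -> (D, E, F)
D3 = [[4, 2, 0]]                      # (D, E, F) -> F
print(minplus(D3, minplus(D2, D1)))   # [[9]]: shortest path from A to F

# Alternatively, the global 6x6 network matrix and its closure N* = Id ∧ N ∧ N^2 ∧ ...
V = ['A', 'B', 'C', 'D', 'E', 'F']
edges = {('A', 'B'): 5, ('A', 'C'): 3, ('B', 'D'): 1, ('B', 'E'): 2,
         ('C', 'D'): 4, ('C', 'F'): 8, ('D', 'E'): 5, ('D', 'F'): 4, ('E', 'F'): 2}
N = [[edges.get((u, v), INF) for v in V] for u in V]
star = [[0 if i == j else INF for j in range(6)] for i in range(6)]   # Id
for _ in range(5):                    # at most |V| - 1 powers are needed
    step = minplus(star, N)
    star = [[min(star[i][j], step[i][j]) for j in range(6)] for i in range(6)]
print(star[V.index('A')][V.index('F')])   # 9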
This way of solving shortest paths is well-known, of course. But now the behavioural typing permits
us safely to play over- and under-approximation games which are difficult to control in pure algebra
or graph theory without model-theoretic semantics. Just to give a simple example, suppose we wanted
to derive a lower bound on the shortest path. Such can be obtained by identifying some of the control
nodes, i.e., pretending we could jump between them on our path to reach the destination. For instance,
assuming C ≡ B, we find that φN ∧ C ≡ B ⊢ A ⊃ 7 : ◦F is the shortest distance. Since the conjunction
φN ∧C ≡ B specifies a subset of activations, the shortest distance between A and F relative to φN ∧C ≡ B
is a lower bound on the shortest distance relative to φN . It may be more efficient to compute since the
network φN ∧C ≡ B only has 5 different nodes rather than 6 as with φN .
4.3
Task Scheduling
In yet another interpretation of network N the nodes are tasks and edges scheduling dependencies associated with upper bounds for task completion. Computing the worst-case completion time for the overall
schedule, sequential composition of edges corresponds to addition as in the shortest path scenario Sec. 4.2
but branching now involves maximum rather than the minimum. Again, this is induced by the logical
nature of the problem, the fact that the input join now is conjunctive rather than disjunctive as before. For
instance, task D in Fig. 1 cannot start before both tasks C and B have started with a set-up delay of 4 time
units from the start of C and 1 unit from B. Let us assume the task activation times are included in these
set-up delays. To model this type-theoretically we take the edges as the atomic control variables, i.e.,
V = {AC, AB,CD,CF, BD, BE, DE, DF, F}. Whenever XY ∈ σ (i), for i ∈ |σ |, this says that the edge XY
is ready, i.e., the source task X is completed and the start token has arrived at the corresponding control
input of target task Y . The node D establishes a logical-arithmetical relationship between its input edges
CD, BD and its output edges DF, DE, given by CD ∧ BD ⊃ (4 : ◦DF) ∧ (5 : ◦DE). Overall,
φN =d f (true ⊃ 3 : ◦AC ∧ 5 : ◦AB) ∧ (AC ⊃ 4 : ◦CD ∧ 8 : ◦CF)
∧ (AB ⊃ 1 : ◦BD ∧ 2 : ◦BE) ∧ ((CD ∧ BD) ⊃ 4 : ◦DF ∧ 5 : ◦DE)
∧ (DE ∧ BE ⊃ 2 : ◦EF) ∧ (CF ∧ DF ∧ EF ⊃ 0 : ◦F).
The critical path is the minimal d such that φN ⊢ d : ◦F. It can be computed by linear programming
involving matrix multiplication in max-plus algebra using essentially the laws (1) and (2).
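For illustration only (not taken from the paper), the critical path can also be evaluated by a forward max-plus sweep over the dependencies of φN in topological order. The dictionary below transcribes the implications of φN; the printed value is obtained by evaluating that formula and is not quoted from the text.

deps = {                        # control : (set-up delay, prerequisite controls)
    'AC': (3, []),  'AB': (5, []),
    'CD': (4, ['AC']), 'CF': (8, ['AC']),
    'BD': (1, ['AB']), 'BE': (2, ['AB']),
    'DF': (4, ['CD', 'BD']), 'DE': (5, ['CD', 'BD']),
    'EF': (2, ['DE', 'BE']),
    'F':  (0, ['CF', 'DF', 'EF']),
}
ready = {}
for ctrl, (delay, pres) in deps.items():   # insertion order is topological here
    ready[ctrl] = delay + max((ready[p] for p in pres), default=0)
print(ready['F'])   # 14: the critical path, i.e. the minimal d with φN ⊢ d : ◦F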
5
Examples II: Esterel-style Synchronous Multi-threading
Like task scheduling in Sec. 4.3, the timing analysis of Esterel programs [6, 22] involves max-plus
algebra, yet takes place in an entirely different fragment of the type theory. Instead of implications
ζ1 ∧ ζ2 ⊃ ◦ξ1 ∧ ◦ξ2 as in Sec. 4.3 we employ dependencies of the form ζ1 ∨ ζ2 ⊃ ◦ξ1 ⊕ ◦ξ2 , which
are handled by (1) and (4) rather than (1) and (2). In addition, we use the tensor ⊗ for capturing multithreaded parallelism. Here we provide some further theoretical background for the work reported in [22].
Esterel programs communicate via signals, which are either present or absent during one instant.
Signals are set present by the emit statement and tested with the present test. They are reset at the
start of each instant. Esterel statements can be either combined in sequence (;) or in parallel (||).
The loop statement simply restarts its body when it terminates. All Esterel statements are considered
instantaneous, except for the pause statement, which pauses for one instant, and derived statements like
halt (= loop pause end), which stops forever. Esterel supports multiple forms of preemption, e. g.,
via the abort statement, which simply terminates its body when some trigger signal is present. Abortion
can be either weak or strong. Weak abortion permits the activation of its body in the instant the trigger
signal becomes active, strong abortion does not. Both kinds of abortions can be either immediate or
delayed. The immediate version already senses for the trigger signal in the instant its body is entered,
while the delayed version ignores it during the first instant in which the abort body is started.
Consider the Esterel fragment in Figure 2b. It consists of two threads. The first thread G emits signals
R, S, T depending on some input signal I. In any case, it emits signal U and terminates instantaneously.
The thread H continuously emits signal R, until signal I occurs. Thereafter, it either halts, when E is
present, or emits S and terminates otherwise, after having executed the skip statement nothing.
[Figure 2: Esterel module T (b) with control-flow graph (a) and resulting KEP Assembler (c). Panel (a) shows the concurrent KEP assembler graph (CKAG) with fork node v0, the nodes of thread G (v1 present I, v2 emit R, v3 present I, v4 goto, v5 emit S, v6 emit T, v7 emit U), the nodes of thread H (v8 wabort I, v9 pause, v10 emit R, v11 goto, v12 present E, v13 halt, v14 emit S, v15 nothing) and the join node v16. Panel (b) lists the Esterel source of module T with the two threads G and H composed by ||. Panel (c) lists the KEP assembler generated from the CKAG, with line labels L01–L19, thread entry labels T0, G0–G3, H0–H3 and join label A1.]
The concurrent KEP assembler graph [18] (CKAG, see Fig. 2a) captures the control flow, both standard control and abortions, of an Esterel program. The CKAG is derived from the Esterel program by structural translation. For a given CKAG, the generation of assembly code for the Kiel Esterel Processor (KEP) [18, 19], executing synchronous parallelism by multi-threading, is straight-forward (see Fig. 2c).
Let S, L and M be disjoint sets of (input or output) signals, control flow labels and synchronisation states, respectively. For the Esterel module in Fig. 2 we have S = {I, E, R, S, T, U}, L = {L0, . . . , L20, G0, . . . , G3, H0, . . . , H3}. As synchronisation states we use the names of the atomic de-
lay nodes, i.e., the pause, halt and join nodes, M = {v9 , v13 , v16 }. These describe the different state
bits of the synchronous automaton coded by the program block T . To distinguish the cases of a thread
starting from or ending in a given state s ∈ M during an instant we use the modifiers out(s) and in(s).
The former expresses that the thread is leaving from s at the beginning of the instant and the latter that
it enters and terminates the instant in s. The set M+ =d f {out(s), in(s) | s ∈ M} collects these atomic
statements. The set of control variables, specifying the atomic control points of a program module, is
the union V = S ∪ L ∪ M+ . All the controls out(s) are stable, i.e., we may assume out(s) ⊕ ¬out(s). This
is not true for controls in(s) which are switched on dynamically as the schedule enters a delay node.
One possible activation of the Esterel module T in Fig. 2a would be as follows. Initially, control
variable T 0 is set, so σ (0) = {T 0}. Then the PAR and PARE instructions making up the fork node v0
are executed in line numbers L01, L02, L03 of Fig. 2c, each taking one instruction cycle (ic). The two
PAR instructions set up internal counters for thread control, which does not change the set of events
in the variables of Fig. 2a. Hence, σ (1) = σ (2) = {T 0}. After the PARE both control variable G0,
H0 become present bringing threads G and H to life. This means σ (3) = {T 0, G0, H0}. The next
instruction could be any of the two first instructions of G or H. As it happens, the KEP Assembler
Fig. 2c assigns higher priority to H so that our activation continues with wabort (node v8 ), i.e., σ (4) =
{T 0, G0, H0, L12}. This brings up the pause instruction v9 . Now, depending on whether signal I is
present or not the activation of pause either moves to v12 (weak immediate abort) or terminates. Let us
assume the latter, i.e., σ (5) = {T 0, G0, H0, L12, in(v9 )}, where thread H is finished up for the instant
and has entered a wait state in node v9 . The activation continues with the first instruction of G, the
present node v1 at label G0. Since I is assumed absent, its activation effects a jump to label G1, i.e.,
σ (6) = {T 0, G0, H0, L12, in(v9 ), G1}. Thereafter, we run sequentially through nodes v3 , v5 , v6 , v7 giving
σ (7) = σ (6) ∪ {G3}, σ (8) = σ (7) ∪ {L9} and σ (9) = σ (8) ∪ {L10}.
Executing the final emit instruction v7 hits the join at entry L11, so that σ (10) = {T 0, G0, H0,
L12, in(v9 ), G1, G3, L9, L10, L11}. Now both threads G and H are finished. While G is terminated and
hands over to the main thread T for good, H is still pausing in v9 . It takes one activation step of the
join node v16 to detect this and to terminate the synchronous instant of T with the final event σ (11) =
{T 0, G0, H0, L12, in(v9 ), G1, G3, L9, L10, L11, in(v16 )}. Overall, we get an activation of the outer-most
main thread of T , σ = σ (0), . . . , σ (11), starting from program label T 0 consisting of 12 ics in total. In the
next logical instant when T is resumed in v16 and v9 , with initial event σ (0) = {out(v9 ), out(v16 )}, and
thread H eventually comes out at control point L19 (if signal I is present and E absent), then executing
the join v16 will bring us to control point L20 and out of T instantaneously.
Activation sequences starting in control label T 0 and ending in L20 are called through paths, those
starting in T 0 and pausing in a synchronisation state in(s), s ∈ {v9 , v13 , v16 }, are sink paths; source paths
begin in a state out(s) and end in L20, while internal paths begin in a state and end in a state.
Esterel IO-Interface Types. Our normal form interfaces to describe Esterel-KEP modules are of the
form θ = φ ⊃ ψ, with input control φ = ∨_{i=1}^{m} ζi and output control ψ = ⊕_{k=1}^{n} ◦ξk where the ζi and ξk
are pure types. The former φ captures all the possible ways in which a program module (or any other
fragment) of type θ can be started within an instant and the latter ψ sums up the ways in which it can
be exited during the instant. Intuitively, Σ |= θ says that whenever the schedule Σ enters the fragment
through one of the input controls ζi then within some bounded number of ics it is guaranteed to exit
through one of the output controls ξk . The disjunction ∨ in the input control φ models the external
non-determinism resolved by the environment which determines how a program block is started. On the
output side ψ, the selection of which exit ξk is taken is expressed by ⊕ since it is an internal choice which
is dynamically resolved during each activation. Each delay operator ◦ stands for a possibly different delay
depending on which output ξk is taken. Contrast this with an output control such as ψ = ◦(⊕_{k=1}^{n} ξk)
which only specifies one bound for all exits ξk . An interface bound T ∈ Bnd(φ ⊃ ψ) can be understood
as a n × m shaped timing matrix relative to the Boolean controls ζi and ξk serving as “base” vectors.
The logical conjunction of these interfaces in a fixed set of such base controls corresponds to matrix
multiplications in max-plus algebra. Furthermore, using logical reasoning on base controls ζi , ξ j we can
massage the semantics of timing matrices very much like we do with base transformations in ordinary
linear algebra. Two important operations on IO-interfaces are matrix multiplication and the Kronecker
product which in our scheduling algebra are now strongly typed and thus receive semantic meaning in
logical spaces.
Transient and Sequential Submodules G and H. A full and exact WCRT specification encapsulating
the synchronous block G as a component would require mention of program labels G1, G3, G2 which are
accessible from outside for jump statements. Therefore, the interface type for single-threaded scheduling
of G would be [6, 4, 3, 1] : G0 ∨ G1 ∨ G3 ∨ G2 ⊃ ◦L11. This is still not the exact description of G since
it neither expresses the dependency of the WCRT on signal I, nor the emissions of R, S, T , U. For
instance, if I is present then all threads must take control edges L5 and L7 rather than G1 or G3 which
are blocked. If I is absent then both G1 and G3 must be taken instead. As a result the longest path
v1 + v2 + v3 + v5 + v6 + v7 with delay 6 is not executable. To capture this, we consider signal I as another
control input and refine the WCRT interface type of G:
[5, 5, 3, 4, 3, 1] : (G0 ∧ I) ∨ (G0 ∧ ¬I) ∨ (G1 ∧ I) ∨ (G1 ∧ ¬I) ∨ G3 ∨ G2 ⊃ ◦L11.
(22)
The inclusion of signal I in the interface has now resulted in the distinction of two different delay values
3 and 4 for G1 ⊃ ◦L11 depending on whether I is present or absent. On the other hand, G0, split into
controls G0 ∧ I and G0 ∧ ¬I, produces the same delay of 5 ics in both cases, which is a decrease of
WCRT compared to [6] : G0 ⊃ ◦L11 from above. Assuming that input signal I is causally stable, i.e.,
I ⊕ ¬I ∼= true, it is possible to optimise the interface without losing precision: since (G0 ∧ I) ⊕ (G0 ∧ ¬I) ∼= G0 ∧ (I ⊕ ¬I) ∼= G0 ∧ true ∼= G0 the column vector [0; 0] : G0 ⊃ ◦(G0 ∧ I) ⊕ ◦(G0 ∧ ¬I) is sound
and can be used to compress the two entries of value 5 in (22) into a single value 5 = max(5, 5) giving
[5, 3, 4, 3, 1] : G0 ∨ (G1 ∧ I) ∨ (G1 ∧ ¬I) ∨ G3 ∨ G2 ⊃ ◦L11. In the same vein, but this time without
referring to stability, we could further bundle G1 ∧ I and G3 into a single control with the single delay
[3] : (G1 ∧ I) ⊕ G3 ⊃ ◦L11 at the same level of precision. This finally yields [5, 3, 4, 1] : G0 ∨ ((G1 ∧ I) ⊕
G3) ∨ (G1 ∧ ¬I) ∨ G2 ⊃ ◦L11. Still, if we only ever intend to use G as an encapsulated block with entry
G0 and exit L11 the following typing is sufficient:
[5] : G0 ⊃ ◦L11.
(23)
Now we take a look at the sequential control flow which starts and terminates in pause and halt
nodes. Consider the sub-module H from Fig. 2a consisting of nodes v8 –v15 . Nodes wabort, emit, goto,
present, nothing are transient and specified as before for G. But now the instantaneous paths are broken
by the delay nodes v9 and v13 .
First, consider the pause node v9 . It can be entered by two controls, line number L12 and program
label H3, and left via two exits, a non-instantaneous edge L13 and an instantaneous exit H1 (weak
abortion). When a control thread enters v9 then either it terminates the current instant inside the node or
leaves through the weak abort H1 (data-dependent, if signal I is present) continuing the current reaction,
instantaneously. A thread entering v9 never exits through L13 in the same instant. On the other hand, if
a thread is started (resumed) from inside the pause node v9 then control can only exit through L13. This
suggests to specify the pause node as follows:
[1; 1, 1; 1] : H3 ∨ L12 ⊃ ◦H1 ⊕ ◦in(v9 )
[1] : out(v9 ) ⊃ ◦L13.
(24)
(25)
The interface (24) says that if pause is entered through H3 or L12 it can be left through H1 or terminate
(in) inside the pause. In all cases activation takes 1 instruction cycle. Since there are no differences
in the delays we could bundle the controls H3, L12 and compress the matrix (24) as [1] : H3 ⊕ L12 ⊃
◦(H1 ⊕ in(v9 )) without losing information. We could also record the dependency of control on signal I,
with the more precise interface [1; −∞, −∞, 1] : ((H3 ⊕ L12) ∧ I) ∨ ((H3 ⊕ L12) ∧ ¬I) ⊃ ◦H1 ⊕ ◦in(v9 ).
This separates the threads which must stop inside the pause from those which must leave via H1 due
to a weak immediate abort on signal I. The specification (25) accounts for threads starting in the pause
which must necessarily pass control to L13 within one instruction cycle.
The halt node v13 in Fig. 2a is not only a sink for control threads entering through L16 but it also
has an internal path of length 1 (which is repeated at every instant). It is specified by the interface
[1, 1] : (out(v13 ) ∨ L16) ⊃ ◦in(v13 ). By composition from the WCRT interfaces of nodes v12 –v15 using
matrix multiplications in max-plus algebra we get
H = [5; 4, 7; 6] : H0 ∨ out(H) ⊃ ◦L19 ⊕ ◦in(H)
(26)
recording the lengths of the longest through path v8 + v9 + v12 + v14 + v15 , sink path v8 + v9 + v12 + v13 ,
source path v9 + v10 + v11 + v9 + v12 + v14 + v15 and internal path v9 + v10 + v11 + v9 + v12 + v13 .
Multi-threading Composition: Fork and Join. Finally, consider the two blocks G and H as they are
combined inside the Esterel module T (Fig. 2a) and synchronised by fork and join nodes v0 and v16 . The
main thread starts G and H in their initial controls, i.e., by activating G0 ∧ H0. Then, the executions
of G and H are interleaved, depending on the priorities assigned by the compiler about which we shall
make no assumptions. Child thread G can only run through its instantaneous path until it reaches L11
where it is stopped by the join. The sequential block H has two options: It can take its instantaneous
through path stopping at L19 or it pauses in one of its delay nodes. In the former case we have reached
L11 ∧ L19, where the synchronising join takes over letting the main thread continue by instantaneously
activating L20 within the same instant. In the latter case we have activated L11 ∧ in(H) where the
synchronous instant is finished and the combined system pauses. Activation is resumed in the next
instant from L11 ∧ out(H), while G is still inactive and waiting at L11. Child thread H may either leave
instantaneously through L19, giving L11 ∧ L19 overall, or once more pause internally, leading again to
L11 ∧ in(H).
This synchronous composition is obtained by the Kronecker product GH =d f G′ ⊗ H′ where G′ and
H′ are the stand-alone interfaces of G (23) and H (26) instrumented for the synchronisation:
G′ = Sync1 ∧ [5, 0] : G0 ∨ L11 ⊃ ◦L11
H′ = Sync2 ∧ [5; 4, 7; 6] : H0 ∨ out(H) ⊃ ◦L19 ⊕ ◦in(H).
G is extended by the additional input control L11 and trivial path [0] : L11 ⊃ ◦L11 to let G start an instant
from L11 when H is pausing. The conjunct Sync1 =d f ¬L11 expresses the synchronisation whereby G
finishes once it reaches L11. Similarly, the conjunct Sync2 =d f ¬(L19⊕in(H)) added to the interface (26)
stops H from continuing its activation instant past L11 or in(H). The Kronecker product G′ ⊗ H′ now
generates all possible interleavings of activations specified by type G′ with those from type H′:
G′ ⊗ H′ ⊢ [5, 0] ⊗ [5; 4, 7; 6] = [5 · [5; 4, 7; 6], 0 · [5; 4, 7; 6]] = [10; 9, 12; 11, 5; 4, 7; 6]
: (G0 ∧ H0) ∨ (G0 ∧ out(H)) ∨ (L11 ∧ H0) ∨ (L11 ∧ out(H)) ⊃ ◦(L11 ∧ L19) ⊕ ◦(L11 ∧ in(H)).
In the synchronised composition GH we are only interested in the (surface) paths initiated by G0 ∧ H0
and the (depth) paths activated by the combination L11 ∧ out(H). All other paths cannot be activated
inside the fork and join context. Thus, we drop these column vectors and only continue with
GH = [10; 9, 12; 11, 5; 4, 7; 6] · [0; −∞; −∞; −∞, −∞; −∞; −∞, 0] = [10; 9, 7; 6]
: (G0 ∧ H0) ∨ (L11 ∧ out(H)) ⊃ ◦(L11 ∧ L19) ⊕ ◦(L11 ∧ in(H)).
This models the concurrent composition of G and H but not yet the interface of the composite block T
with fork and join as depicted in Fig. 2a. These are additional components specified as
join =[1; −∞, −∞; 1] : (L11 ∧ L19) ∨ (L11 ∧ in(H)) ⊃ ◦L20 ⊕ ◦in(T )
fork =[3; −∞, −∞; 0] : T 0 ∨ out(T ) ⊃ ◦(G0 ∧ H0) ⊕ ◦(L11 ∧ out(H))
with new state controls in(T ) and out(T ) for module T . The JOIN instruction in line 19 of Fig. 2c is
always executed upon termination of both threads from G and H inside T and the associated activation
time of one ic is accounted for in the join interface above. Specifically, this is a through path [1] :
(L11 ∧ L19) ⊃ ◦L20 and source path [1] : L11 ∧ in(H) ⊃ ◦in(T ). The entry [3] : T 0 ⊃ ◦(G0 ∧ H0) of fork
includes the ics for two PAR, one PARE from lines 1-3 of Fig. 2c. Adding fork and join on the input and
output side then obtains
T = [1; −∞, −∞; 1] · [10; 9, 7; 6] · [3; −∞, −∞; 0] = [14; 13, 8; 7] : T 0 ∨ out(T ) ⊃ ◦L20 ⊕ ◦in(T )
for the composite module T . Indeed, the longest through path is exemplified by the sequence of nodes
v0 (3) + {v1 + v2 + v3 + v4 + v7 }G (5) + {v8 + v9 + v12 + v14 + v15 }H (5) + v16 (1) = 14. A longest sink path
is v0 (3) + {v1 + v2 + v3 + v4 + v7 }G (5) + {v8 + v9 + v12 + v13 }H (4) + v16 (1) = 13. As a maximal source
path we could take {}G (0) + {v9 + v10 + v11 + v9 + v12 + v14 + v15 }H (7) + v16 (1) = 8 and as a possible
longest internal path {}G (0) + {v9 + v10 + v11 + v9 + v12 + v13 }H (6) + v16 (1) = 7.
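For concreteness, the composition above can be replayed numerically. The following Python sketch is illustrative only and not part of the paper; it encodes interface bounds as ordinary matrices in max-plus algebra, with rows indexed by output controls and columns by input controls, so the paper's [a; b, c; d] becomes [[a, c], [b, d]], and with float('-inf') playing the role of −∞.

NEG = float('-inf')

def maxplus_mul(X, Y):
    """Max-plus matrix product: (X*Y)[i][j] = max_k (X[i][k] + Y[k][j])."""
    return [[max(X[i][k] + Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def maxplus_kron(X, Y):
    """Max-plus Kronecker product: the block at (i, j) is X[i][j] + Y."""
    return [[X[i][j] + Y[k][l] for j in range(len(X[0])) for l in range(len(Y[0]))]
            for i in range(len(X)) for k in range(len(Y))]

Gp = [[5, 0]]                   # G' : G0 ∨ L11 ⊃ ◦L11
Hp = [[5, 7], [4, 6]]           # H' : H0 ∨ out(H) ⊃ ◦L19 ⊕ ◦in(H)

GpHp = maxplus_kron(Gp, Hp)     # [[10, 12, 5, 7], [9, 11, 4, 6]], i.e. [10;9, 12;11, 5;4, 7;6]
# keep only the columns activated inside fork/join: (G0∧H0) and (L11∧out(H))
GH = [[row[0], row[3]] for row in GpHp]        # [[10, 7], [9, 6]]

join = [[1, NEG], [NEG, 1]]     # (L11∧L19, L11∧in(H)) ⊃ ◦L20 ⊕ ◦in(T)
fork = [[3, NEG], [NEG, 0]]     # (T0, out(T)) ⊃ ◦(G0∧H0) ⊕ ◦(L11∧out(H))
T = maxplus_mul(join, maxplus_mul(GH, fork))
print(T)                        # [[14, 8], [13, 7]], the interface [14; 13, 8; 7] of module T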
In specific WCRT algorithms such as the one of [6] many of the matrix multiplications shown above
are executed efficiently in the combinatorics of traversing the program’s control flow graph forming
maximum and additions as we go along. This is possible only so far as control flow dependencies
are represented explicitly in the graph. In general, with data-dependencies, this may be an exponential
problem so that symbolic techniques for modular analyses are needed. Our logical interface algebra can
be used to keep track of the semantic meaning of WCRT data. Even without data-dependencies, the
WCRT interfaces presented here give rise to a depth-first search algorithm [22] which is already more
precise than the one presented in [6].
6
Related Work
Most interface models in synchronous programming are restricted to causality issues, i. e., dependency
analysis without considering quantitative time. Moreover, the granularity of dependency is limited. E.g.,
the modules of André et al. [3] do not permit instantaneous interaction. Such a model is not suitable
for compositional, intra-instant, scheduling analysis. Hainque et al. [9] use a topological abstraction
of the underlying circuit graphs (or syntactic structure of Boolean equations) to derive a fairly rigid
component dependency model. A component is assumed executable iff all of its inputs are available;
after component execution all of its outputs become defined. This is fine for concurrent execution but
too restricted to model single- or multi-threaded execution compositionally. The interface model also
does not cover data dependencies and thus cannot deal with dynamic schedules. It also does not support
quantitative resource information, either.
The causality interfaces of Lee et al. [17] are much more flexible. These are functions associating
with every pair of input and output ports an element of a dependency domain D, which expresses if and
how an output depends on some input. Causality analysis is then performed by multiplication on the
global system matrix. Using an appropriate dioid structure D, one can perform the analyses of Hainque
et. al. [9] as well as restricted forms of WCRT. Lee’s interfaces presuppose a fixed static distinction
between inputs and outputs and cannot express the difference between an output depending on the joint
presence of several values as opposed to depending on each input individually. Similarly, there is no
coupling of outputs, e. g., that two outputs always occur together at “the same time.” Thus, they do
not support full AND- and OR-type synchronisation dependencies for representing multi-threading and
multi-processing. Also, the model does not include data dependency. The work reported here can be seen
as an extension of [17] to include such features. In particular, note that our scheduling interfaces can also
be used in situations where linear algebra is not applicable, as in the case of network flow problems.
Recent works [27, 13] combining network calculus [4, 7] with real-time interfaces are concerned
with the compositional modelling of regular execution patterns. Existing interface theories [17, 27, 13],
which aim at the verification of resource constraints for real-time scheduling, handle timing properties
such as task execution latency, arrival rates, resource utilisation, throughput, accumulated cost of context
switches, and so on. The dependency on data and control flow is largely abstracted. For instance,
since the task sequences of Henzinger and Matic [13] are independent of each other, their interfaces
do not model concurrent forking and joining of threads. The causality expressible there is even more
restricted than that by Lee et al. [17] in that it permits only one-to-one associations of inputs with outputs.
The interfaces of Wandeler and Thiele [27] for modular performance analysis in real-time calculus are
like those of Henzinger and Matic [13] but without sequential composition of tasks and thus do not
model control flow. On the other hand, the approaches [27, 13] can describe continuous and higher-level
stochastic properties which our interface types cannot.
AND- and OR-type synchronisation dependencies are important for synchronous programming since
reachability of control nodes in general depends both conjunctively and disjunctively on the presence
of data. Also, control branching may be conjunctive (as in multi-threading or concurrent execution)
or disjunctive (as in single-threaded code). Moreover, execution may depend on the absence of data
(negative triggering conditions), which makes compositional modelling rather a delicate matter in the
presence of logical feedback loops. This severely limits the applicability of existing interface models.
The assume-guarantee style specification [27, 13] does not address causality issues arising from feedback
and negative triggering conditions. The interface automata of Alfaro, Henzinger, Lee, Xiong [1, 15]
model synchronous macro-states and assume that all stabilisation processes (sequences of micro-states)
can be abstracted into atomic interaction labels. The introduction of transient states [16] alleviates
this, but the focus is still on regular (scheduling) behaviour. The situation is different, however, for
cyclic systems, in which causality information is needed. Our interface algebra is semantically sound
with respect to feedback and indeed supports causality analysis as a special case: A signal A is causal
if ◦A ⊕ ¬A can be derived in the type theory of a module. Because of the complications arising from
causality issues, there is currently no robust component model for synchronous programming. We believe
that the interface types introduced in this paper cover new ground towards such a theory.
Finally, note that our algebra is not intended as a general purpose interface model such as, e.g., the
relational interfaces of Tripakis et al. [26]. While these relational interfaces permit contracts in first-order logic between inputs and outputs, our interfaces only describe propositional relations. Therefore,
our algebra cannot describe the full functional behaviour of data processing (other than by coding it
into finite Booleans). Our interfaces are logically restricted to express monotonic scheduling processes
and the resource consumption inside synchronous instants. Because we use an intuitionistic realisability
semantics (Curry-Howard) we obtain enough expressiveness to deal with causality problems and upper-bound scheduling costs. The interface algebra does not aim to cover behavioural aspects of sequences
of instants such as in approaches based on temporal logics or the timed interfaces of Alfaro, Henzinger
and Stoelinga [2], which build on timed automata. The scheduling problem addressed here is a simpler
problem in the sense that it arises afresh within each synchronous step and does not need to carry (e.g.,
timing) constraints across steps. However, note that our algebra can fully capture finite-state sequential
transition functions in the standard way by duplicating propositional state variables s using out(s) and
in(s) as seen in Sec. 5. An inter-instant transition (instantaneous, no clock tick) between s1 and s2 is
given by the implication out(s1 ) ⊃ ◦in(s2 ) while the intra-instant transition (sequential, upon clock tick)
is the weak implication ¬in(s1 ) ⊕ out(s2 ). In this way, we can derive exact state-dependent worst-case
bounds across all reachable states of a finite state behaviour.
The scheduling algebra in this paper extends [21] in that it not only captures concurrent execution
(as in combinational circuits) but also includes the tensor ⊗ for multi-threading. More subtly, while [21]
is restricted to properties of activation sequences stable under the suffix preordering, here we consider
the much richer lattice of arbitrary sub-sequences. This paper introduces the theory behind [22] which
reported on the application to WCRT analysis for Esterel and also provides more detailed information on
the modelling in Sec. 5.
Acknowledgements. The author would like to thank the anonymous reviewers for their suggestions to
improve the presentation.
References
[1] L. de Alfaro & T. Henzinger (2001): Interface automata. In: Proc. Foundations of Software Engineering,
ACM Press, pp. 109–120.
[2] L. de Alfaro, Th. Henzinger & Marielle Stoelinga (2002): Timed interfaces. In: Proc. EMSOFT’02.
[3] C. André, F. Boulanger, M.-A. Péraldi, J. P. Rigault & G. Vidal-Naquet (1997): Objects and synchronous
programming. European Journal on Automated Systems 31(3), pp. 417–432.
[4] F. L. Baccelli, G. Cohen, G. J. Olsder & J.-P. Quadrat (1992): Synchronisation and Linearity. John Wiley &
Sons.
[5] Gérard Berry & Georges Gonthier (1992): The Esterel synchronous programming language: Design, semantics, implementation. Science of Computer Programming 19(2), pp. 87–152.
[6] Marian Boldt, Claus Traulsen & Reinhard von Hanxleden (2008): Worst case reaction time analysis of concurrent reactive programs. ENTCS 203(4), pp. 65–79. Proc. SLA++P’07, March 2007, Braga, Portugal.
[7] J. Le Boudec & P. Thiran (2001): Network Calculus - A theory of deterministic queuing systems for the
internet, Lecture Notes in Computer Science 2050. Springer.
[8] Paul Le Guernic, Thierry Goutier, Michel Le Borgne & Claude Le Maire (1991): Programming real time
applications with SIGNAL. Proceedings of the IEEE 79(9).
[9] Olivier Hainque, Laurent Pautet, Yann Le Biannic & Eric Nassor (1999): Cronos: A separate compilation
toolset for modular Esterel applications. In: Jeannette M. Wing, Jim Woodcock & Jim Davies, editors: World
Congress on Formal Methods, Lecture Notes in Computer Science 1709, Springer, pp. 1836–1853.
[10] Nicolas Halbwachs (1998): Synchronous programming of reactive systems, a tutorial and commented bibliography. In: Tenth International Conference on Computer-Aided Verification, CAV ’98, LNCS 1427,
Springer Verlag, Vancouver (B.C.).
[11] Nicolas Halbwachs (2005): A synchronous language at work: The story of Lustre. In: Third ACM-IEEE
International Conference on Formal Methods and Models for Codesign, MEMOCODE’05, Verona, Italy.
[12] D. Harel, A. Pnueli, J. Pruzan-Schmidt & R. Sherman (1987): On the formal semantics of Statecharts. In:
LICS ’87, IEEE Computer Society Press, pp. 54–64.
[13] Th. Henzinger & S. Matic (2006): An interface algebra for real-time components. In: Proceedings of the
12th Annual Real-Time and Embedded Technology and Applications Symposium (RTAS), IEEE Computer
Society, Los Alamitos, CA, USA, pp. 253–266.
[14] C. Huizing (1991): Semantics of Reactive Systems: Comparison and Full Abstraction. Ph.D. thesis, Eindhoven Univ. of Technology.
[15] E. A. Lee & Y. Xiong (2001): System-level types for component-based design. In: Workshop on Embedded
Software EMSOFT 2001, Lake Tahoe, CA, USA.
[16] E. A. Lee & Y. Xiong (2004): A behavioral type system and its application in Ptolemy II. Formal Aspects of
Computing 13(3), pp. 210–237.
[17] E. A. Lee, H. Zheng & Y. Zhou (2005): Causality interfaces and compositional causality analysis. In:
Foundations of Interface Technologies (FIT’05), ENTCS, Elsevier.
[18] Xin Li, Marian Boldt & Reinhard von Hanxleden (2006): Mapping Esterel onto a multi-threaded embedded
processor. In: Proceedings of the 12th International Conference on Architectural Support for Programming
Languages and Operating Systems (ASPLOS’06), San Jose, CA.
[19] Xin Li & Reinhard von Hanxleden (2010): Multi-threaded reactive programming—The Kiel Esterel processor. IEEE Transactions on Computers .
[20] G. Luettgen & M. Mendler (2002): The intuitionism behind Statecharts steps. ACM Transactions on Computational Logic 3(1), pp. 1–41.
[21] M. Mendler (2000): Characterising combinational timing analyses in intuitionistic modal logic. The Logic
Journal of the IGPL 8(6), pp. 821–853.
[22] Michael Mendler, Reinhard von Hanxleden & Claus Traulsen (2009): WCRT algebra and interfaces for
Esterel-style synchronous processing. In: Proceedings of the Design, Automation and Test in Europe
(DATE’09), Nice, France.
[23] Amir Pnueli & M. Shalev (1991): What is in a step: On the semantics of Statecharts. In: TACS ’91:
Proceedings of the International Conference on Theoretical Aspects of Computer Software, Springer-Verlag,
London, UK, pp. 244–264.
[24] Marc Pouzet (2006): Lucid Synchrone, version 3. Tutorial and reference manual. Université Paris-Sud, LRI.
Distribution available at: www.lri.fr/∼pouzet/lucid-synchrone.
[25] Klaus Schneider (2002): Proving the equivalence of microstep and macrostep semantics. In: TPHOLs ’02:
Proceedings of the 15th International Conference on Theorem Proving in Higher Order Logics, SpringerVerlag, London, UK, pp. 314–331.
[26] S. Tripakis, B. Lickly, Th. A. Henzinger & E. A. Lee (2009): On relational interfaces. Technical Report
UCB/EECS-2009-60, Electrical Engineering and Computer Sciences, Univ. of California at Berkeley.
[27] E. Wandeler & L. Thiele (2005): Real-time interfaces for interface-based design of real-time systems with
fixed priority scheduling. In: Proceedings of the ACM International Conference on Embedded Software
(EMSOFT’05).
| 0 |
Improved Transients in Multiple Frequencies
Estimation via Dynamic Regressor Extension and
Mixing
arXiv:1604.01928v1 [cs.SY] 7 Apr 2016
Stanislav Aranovskiy1,2, Alexey Bobtsov2, Romeo Ortega3, Anton Pyrkin2
Abstract
A problem of performance enhancement for multiple frequencies estimation is studied. First, we consider a basic gradient-based estimation approach with global exponential convergence. Next, we apply the dynamic regressor extension and mixing technique
to improve transient performance of the basic approach and ensure non-strict monotonicity of estimation errors. Simulation results
illustrate benefits of the proposed solution.
I. INTRODUCTION
A problem of frequency identification for sinusoidal signals attracts researchers’ attention both in control and signal processing
communities due to its practical importance. Indeed, frequency identification methods are widely used in fault detection systems
[1], for periodic disturbance attenuation [2], [3], in naval applications [4] and so on.
Many online frequency estimation methods are currently available in the literature, e.g. a phase-locked loop (PLL) proposed in
[5], adaptive notch filters [6], [7]. Another popular approach is to find a parametrization yielding a linear regression model,
whose parameters are further identified with pertinent estimation techniques, see [8]–[10]. However, most of the online methods
are focused on stability studies and local or global convergence analysis; transient performance is not usually considered and
is only demonstrated with simulations. On the other hand, it is well-known that many gradient-based estimation methods can
exhibit poor transients even for a relatively small number of estimated parameters: the transients can oscillate or even display a
peaking phenomenon. A method to increase frequency estimation performance with adaptive band-pass filters was proposed in
[11] but for a single frequency case only. Thus, the problem of performance improvement for multiple frequencies estimation
remains open.
A novel way to improve transient performance for linear regression parameters estimation was proposed in [12]; the approach
is based on extension and mixing of the original vector regression in order to obtain a set of scalar equations. In this paper we
apply this approach to the problem of multiple frequencies estimation. It is shown that under some reasonable assumptions and
neglecting fast-decaying terms, non-strict monotonicty can be provided for estimates of parameters avoiding any oscillatory or
peaking behavior.
The paper is organized as follows. First, in Section II a multiple frequencies estimation problem is stated. A basic method
to solve the problem is presented in Section III. Next, in Section IV we consider dynamic regressor extension and mixing
(DREM) procedure and apply it to the previously proposed method. Illustrative results are given in Section V and the paper
is wrapped up with Conclusion.
II. PROBLEM STATEMENT
Consider the measured scalar signal
u(t) = ∑_{i=1}^{N} Ai sin(ωi t + ϕi),   (1)
where t ≥ 0 is time, Ai > 0, ϕi ∈ [0, 2π), and ωi > 0 are unknown amplitudes, phases, and frequencies, respectively,
i ∈ N := {1, 2, . . . N }, N is the number of the frequencies in the signal.
Assumption 1: All the frequencies ωi , i ∈ N , are distinguished, i.e.
ωi ≠ ωj ∀i ≠ j, i, j ∈ N.
1 Stanislav Aranovskiy is with the NON-A team, INRIA-LNE, Parc Scientifique de la Haute Borne 40, avenue Halley Bat.A, Park Plaza, 59650 Villeneuve
d’Ascq, France
2 Stanislav Aranovskiy, Alexey Bobtsov and Anton Pyrkin are with the Department of Control Systems and Informatics, ITMO University, Kronverkskiy
av. 49, Saint Petersburg, 197101, Russia : [email protected]
3 Romeo Ortega is with the LSS-Supelec, 3, Rue Joliot-Curie, 91192 Gif–sur–Yvette, France : [email protected]
Remark 1: The signal (1) can be seen as an output of a marginally stable linear signal generator
ż(t) = Γz(t),   u(t) = Hz(t),   z(0) = z0 ∈ R^{2N},
where Γ ∈ R^{2N×2N} and H ∈ R^{1×2N}. The characteristic polynomial of the matrix Γ is given by
Pθ(s) := s^{2N} + θ1 s^{2N−2} + . . . + θ_{N−1} s^2 + θN,
where the parameters θi are such that the roots of the polynomial Pθ(s) are ±iωi, where i := √−1, i ∈ N. Obviously, given a
vector θ := col(θi) ∈ R^N, the frequencies can be defined uniquely (up to the accuracy of numerical procedures), and vice versa.
Thus, in many multiple frequencies estimation methods the vector θ is identified instead of the separate frequency values. In our
paper we follow this approach and assume that the frequencies are estimated if the vector θ is obtained. The problem of direct
frequency identification is considered, for example, in [13].
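For illustration only (this sketch is not from the paper), once an estimate of θ is available the frequencies can be recovered numerically by rooting Pθ: since Pθ(s) = ∏_i (s^2 + ωi^2), the substitution x = s^2 yields a degree-N polynomial whose roots are −ωi^2. The helper name and the example values below are illustrative choices.

import numpy as np

def theta_to_frequencies(theta):
    # polynomial x^N + theta_1 x^(N-1) + ... + theta_N in the variable x = s^2
    x_roots = np.roots(np.concatenate(([1.0], theta)))
    return np.sort(np.sqrt(-x_roots).real)

# two frequencies w1 = 1, w2 = 3:  P(s) = (s^2 + 1)(s^2 + 9) = s^4 + 10 s^2 + 9
print(theta_to_frequencies(np.array([10.0, 9.0])))   # [1. 3.]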
Frequencies Estimation Problem. The goal is to find mappings Ψ : R × R^l → R^l and Θ : R^l → R^N, such that the following
estimator
χ̇(t) = Ψ(χ(t), u(t)),   θ̂(t) = Θ(χ(t)),   (2)
ensures
lim_{t→∞} |θ̂(t) − θ| = 0.   (3)
III. A BASIC FREQUENCIES IDENTIFICATION METHOD
In this section we consider a multiple frequencies estimation method, proposed in [14] and further extended in [15], [16].
This method is based on the State-Variable Filter (SVF) approach, see [17], [18].
Lemma 1: Consider the following SVF
ξ̇(t) = Aξ(t) + Bu(t),   (4)
where ξ := (ξ1(t), ξ2(t), . . . , ξ2N(t))⊤,
A = [0 1 0 . . . 0; 0 0 1 . . . 0; . . . ; 0 0 0 . . . 1; −a0 −a1 −a2 . . . −a_{2N−1}] (the companion matrix, rows separated by semicolons),   B = (0, 0, . . . , 0, a0)⊤,
ai, i ∈ {0, 1, . . . , 2N − 1}, are coefficients of a Hurwitz polynomial
a(s) = s^{2N} + a_{2N−1} s^{2N−1} + . . . + a1 s + a0.
Define
y(t) := −ξ̇_{2N}(t) = ∑_{i=1}^{2N} a_{i−1} ξi(t) − a0 u(t).   (5)
Then the following holds:
y(t) = φ⊤ (t)θ + ε(t),
(6)
where
φ(t) := (ξ_{2N−1}(t), ξ_{2N−3}(t), . . . , ξ3(t), ξ1(t))⊤,   (7)
θ is defined in Remark 1, and ε(t) is an exponentially decaying term.
The proof is straightforward and follows the proof presented in [16].
Using Lemma 1 we can propose a multiple frequencies estimator.
Proposition 1: Consider the signal (1) satisfying Assumption 1, the SVF (4), and the signals y(t) and φ(t), defined by (5)
and (7), respectively. Then the estimator
θ̂̇(t) = Kθ φ(t)(y(t) − φ⊤(t)θ̂(t)),   (8)
where Kθ ∈ R^{N×N}, Kθ > 0, ensures the goal (3). Moreover, the estimation error θ̃(t) := θ̂(t) − θ converges to zero
exponentially fast.
Remark 2: The proposed estimator can be also written in form (2) as (the argument of time is omitted):
χ := col(ξ, θ̂),   Ψ(χ, u) := col(Aξ + Bu, Kθ φ(y − φ⊤θ̂)),   Θ(χ) := θ̂.
Sketch of the proof. The proof of Proposition 1 follows the proof given in [16]. Substituting (6), it is easy to show that the
estimation error θ̃(t) obeys the following differential equation
θ̃̇(t) = −Kθ φ(t)φ⊤(t)θ̃(t) + ǫ(t),   (9)
where ǫ(t) := Kθ φ(t)ε(t) is bounded and exponentially decays. Since signal (1) consists of N sinusoidal components with
distinguished frequencies, the vector φ(t) satisfies the persistent excitation condition [19], that is
∫_t^{t+T} φ(s)φ⊤(s) ds ≥ δ Iq,
for some T, δ > 0 and for all t ≥ 0, which will be denoted as φ(t) ∈ PE. Thus, the linear time-varying system (9) is
exponentially stable and
lim_{t→∞} |θ̃(t)| = 0.
The estimation algorithm (8) ensures global exponential convergence of θ̃(t), but does not guarantee transient performance. It
is known from practice that for N ≥ 2 the behavior of the estimator (8) becomes oscillatory and can exhibit peaking phenomena.
However, these limitations can be overcome with the DREM technique presented in the next section.
IV. ENHANCING THE BASIC ALGORITHM VIA DREM PROCEDURE
In this section we first present the DREM procedure proposed in [12], and then apply it to the basic frequencies estimation
algorithm studied in Section III.
A. Dynamic Regressor Extension and Mixing
Consider the basic linear regression
ρ(t) = m⊤(t)r,   (10)
where ρ ∈ R and m ∈ R^q are measurable bounded signals and r ∈ R^q is the vector of unknown constant parameters to be
estimated. The standard gradient estimator, equivalent to (8),
r̂̇(t) = Kr m(t)(ρ(t) − m⊤(t)r̂(t)),
with a positive definite adaptation gain Kr ∈ R^{q×q} yields the error equation
r̃̇(t) = −Kr m(t)m⊤(t)r̃(t),   (11)
where r̃(t) := r̂(t) − r is the parameters estimation error.
We propose the following dynamic regressor extension and mixing procedure. The first step in DREM is to introduce q − 1
linear, L∞ -stable operators Hi : L∞ → L∞ , i ∈ {1, 2, . . . , q − 1}, whose output, for any bounded input, may be decomposed
as
(·)fi (t) := [Hi (·)](t) + ǫt ,
where ǫt is a (generic) exponentially decaying term. For instance, the operators Hi may be simple, exponentially stable LTI
filters of the form
Hi(p) = αi / (p + βi),
with αi ≠ 0, βi > 0; in this case ǫt accounts for the effect of initial conditions of the filters. Another option of interest is
delay operators, that is
[Hi (·)](t) := (·)(t − di ),
where di > 0.
Now, we apply these operators to the regressor equation (10) to get the filtered regression1
ρ_{fi}(t) = m⊤_{fi}(t) r.
1 To simplify the presentation in the sequel we will neglect the ǫt terms. However, they are incorporated in the analysis and proofs given in [12].
4
Combining the original regressor equation (10) with the q − 1 filtered regressors we can construct the extended regressor system
R_e(t) = M_e(t) r,  (12)
where R_e ∈ R^q and M_e ∈ R^{q×q} are defined as
R_e(t) := col(ρ(t), ρ_{f_1}(t), . . . , ρ_{f_{q−1}}(t)),
M_e(t) := col(m^⊤(t), m_{f_1}^⊤(t), . . . , m_{f_{q−1}}^⊤(t)).  (13)
Note that, because of the L_∞-stability assumption on H_i, R_e and M_e are bounded. Premultiplying (12) by the adjugate matrix of M_e we get q scalar regressions of the form
R_i(t) = ψ_m(t) r_i  (14)
with i ∈ q̄ := {1, 2, . . . , q}, where we defined the determinant of M_e as
ψ_m(t) := det{M_e(t)},  (15)
and the vector R ∈ R^q as
R(t) := adj{M_e(t)} R_e(t).  (16)
Proposition 2: Consider the q-dimensional linear regression (10), where ρ(t) and m(t) are known, bounded functions of time and r ∈ R^q is a vector of unknown parameters. Introduce q − 1 linear, L_∞-stable operators H_i : L_∞ → L_∞, i ∈ {1, 2, . . . , q − 1}, verifying (IV-A). Define the vector R_e and the matrix M_e as given in (13). Next consider the estimator
\dot{r̂}_i = k_i ψ_m(t)(R_i(t) − ψ_m(t) r̂_i),  i ∈ q̄,  (17)
where k_i > 0, and ψ_m(t) and R(t) are defined in (15) and (16), respectively. The following implication holds:
ψ_m(t) ∉ L_2  ⟹  lim_{t→∞} r̃_i(t) = 0, ∀i ∈ q̄.  (18)
Moreover, if ψm (t) ∈ PE, then r̃i (t) tends to zero exponentially fast.
Remark 3: It is well known [19] that the zero equilibrium of the linear time-varying system (11) is (uniformly) exponentially stable if and only if the regressor vector m(t) ∈ PE. However, the implication (18) proposes a novel criterion for asymptotic convergence, which is not necessarily uniform, for ψ_m(t) ∉ PE. This criterion, namely ψ_m(t) ∉ L_2, is established not for the regressor m(t) itself, but for the determinant of the extended matrix M_e, and does not coincide with the condition m(t) ∈ PE. For more details and illustrative examples see [12].
Remark 4: It is easy to show that the error dynamics are given by
\dot{r̃}_i(t) = −k_i ψ_m^2(t) r̃_i(t).
It follows that all the transients are non-strictly monotonic and do not exhibit oscillatory behavior.
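The construction (12)-(17) is easy to prototype. The sketch below applies DREM to a generic two-dimensional regression using a single delay operator; the regressor m(t), the true parameters r and the gains are illustrative choices, not taken from the paper, and a simple Euler discretization is assumed.

```python
import numpy as np

q, d1, dt, T = 2, 0.3, 1e-3, 20.0
delay = int(round(d1 / dt))
r_true = np.array([2.0, -1.0])                          # illustrative unknown parameters
m = lambda t: np.array([np.sin(t), np.cos(2 * t)])      # bounded, persistently exciting regressor

k_gain = np.array([50.0, 50.0])                         # scalar gains k_i > 0 of (17)
r_hat = np.zeros(q)
m_buf, rho_buf = [], []                                 # buffers realizing the delay operator H_1

for k in range(int(T / dt)):
    t = k * dt
    m_t = m(t)
    rho_t = m_t @ r_true                                # measured rho(t) = m(t)^T r, eq. (10)
    m_buf.append(m_t); rho_buf.append(rho_t)
    if k < delay:
        continue                                        # wait until delayed samples are available
    Me = np.vstack([m_t, m_buf[k - delay]])             # extended regressor matrix, eq. (13)
    Re = np.array([rho_t, rho_buf[k - delay]])
    psi = np.linalg.det(Me)                             # psi_m(t), eq. (15)
    adj = np.array([[Me[1, 1], -Me[0, 1]],
                    [-Me[1, 0], Me[0, 0]]])             # adjugate of the 2x2 matrix Me
    R = adj @ Re                                        # R(t) = adj(Me) Re, eq. (16)
    r_hat += dt * k_gain * psi * (R - psi * r_hat)      # decoupled scalar estimators, eq. (17)

print("r_hat =", r_hat, "(true r =", r_true, ")")
```

Since each component r̂_i obeys its own scalar error equation driven by ψ_m^2, the transients of this sketch are non-increasing in magnitude, in line with Remark 4.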
B. Applying DREM for frequencies estimation
Following the proposed procedure we introduce N − 1 linear, L_∞-stable delay operators [H_i(·)](t) := (·)(t − d_i), i ∈ {1, 2, . . . , N − 1}, where d_i > 0 and d_i ≠ d_j for i ≠ j, and define N − 1 filtered signals
φfi (t) = φ(t − di ),
yfi (t) = y(t − di ).
Next, coupling these signals with y(t) and φ(t), we construct
Y_e(t) := col(y(t), y_{f_1}(t), . . . , y_{f_{N−1}}(t)),  (19)
Φ_e(t) := col(φ^⊤(t), φ_{f_1}^⊤(t), . . . , φ_{f_{N−1}}^⊤(t)),  (20)
where Y_e(t) is an N × 1 vector and Φ_e(t) is an N × N matrix. Defining
ψ_φ(t) := det{Φ_e(t)}  (21)
and
Y(t) = adj{Φ_e(t)} Y_e(t),  (22)
we arrive at a set of N scalar equations
Y_i(t) = ψ_φ(t) θ_i.
Next, the basic estimator (8) is replaced with
\dot{θ̂}_i(t) = γ_i ψ_φ(t)(Y_i(t) − ψ_φ(t) θ̂_i(t)),  (23)
where γ_i > 0, i ∈ {1, . . . , N}.
Following Proposition 2 and Remark 4, we are now in a position to establish the main result.
Proposition 3: Consider the signal (1) and the SVF (4). Define y(t) and φ(t) as in (5) and (7), respectively. Choose N − 1 parameters d_i, i = 1, 2, . . . , N − 1, and compute Y_e(t) and Φ_e(t) as in (19), (20). If the parameters d_i are chosen such that ψ_φ(t) ∉ L_2, where ψ_φ(t) is defined in (21), then the estimation algorithm (23) with Y(t) defined in (22) guarantees, for i = 1, . . . , N:
• lim_{t→∞} |θ̂_i(t) − θ_i| = 0;
• θ̂_i(t) is non-strictly monotonic and |θ̃_i(t)| is non-increasing.
Moreover, if ψ_φ(t) ∈ PE, then θ̂_i(t) converges to θ_i exponentially fast.
The main novelty of Proposition 3 in comparison with the basic algorithm given in Proposition 1 consists in the guaranteed non-strict monotonicity of the transients θ̂_i(t). Obviously, the second statement of Proposition 3 is only valid neglecting exponentially decaying terms in the SVF transients, namely ε(t) in (6). However, these transients depend on our choice of the SVF matrix A in (4), and, in practice, are significantly faster than the estimation process.
V. AN EXAMPLE
As an illustrative example we consider the case N = 2, i.e.
u(t) = A_1 sin(ω_1 t + ϕ_1) + A_2 sin(ω_2 t + ϕ_2).  (24)
First we choose the tuning parameters:
• the SVF (4) with the characteristic polynomial of the matrix A given by a(s) = (s + λ)^4, where λ > 0;
• the linear delay operator [H_1(·)](t) := (·)(t − d_1), where d_1 > 0;
• the tuning gains γ_{1,2} > 0.
Next we construct φ(t) = [ξ_3(t), ξ_1(t)]^⊤, φ_{f_1}(t) = φ(t − d_1), y(t) as in (5), y_{f_1}(t) = y(t − d_1), and the matrices
Φ_e(t) = [ξ_3(t), ξ_1(t); ξ_3(t − d_1), ξ_1(t − d_1)],
Y(t) = adj{Φ_e(t)} col(y(t), y_{f_1}(t)).  (25)
Applicability of the DREM procedure is stated in the following proposition.
Proposition 4: The condition
d_1 < π / max{ω_1, ω_2}  (26)
is sufficient to ensure det{Φ_e(t)} ∉ L_2.
Remark 5: The condition (26) is not, actually, restrictive, since in many practical scenarios it is reasonable to assume a known upper bound ω̄ on the frequencies, i.e. ω̄ ≥ ω_i for all i ∈ {1, . . . , N}. Then (26) is satisfied for d_1 < π ω̄^{−1}.
Proof 1: Neglecting exponentially decaying terms, for the states of the SVF we have
ξ_1(t) = B_1 sin(ω_1 t + ϕ̄_1) + B_2 sin(ω_2 t + ϕ̄_2),
ξ_3(t) = −(ω_1^2 B_1 sin(ω_1 t + ϕ̄_1) + ω_2^2 B_2 sin(ω_2 t + ϕ̄_2)),
where the parameters B_{1,2} > 0 and ϕ̄_{1,2} ∈ [0, 2π) depend on the choice of λ and the parameters of the signal (24).
Define the function
I(t) := ∫_0^t (det{Φ_e(s)})^2 ds.
Tedious but straightforward trigonometric computations yield
I(t) = C_lin t + C_per(t) + C_0,
where the linear term is
C_lin := (1/2) B_1^2 B_2^2 (ω_1^2 − ω_2^2)^2 (1 − cos(d_1 ω_1) cos(d_1 ω_2)),
Fig. 1: Transients θ̃_1(t), θ̃_2(t) of the basic estimator (8) for the input signal (27) with λ = 5, K = diag(30, 3).
Fig. 2: Transients θ̃_1(t), θ̃_2(t) of the estimator with DREM (23) for the input signal (27) with λ = 5, d_1 = 0.3, γ_1 = γ_2 = 0.1.
C_per(t) is a bounded periodic term and C_0 is a constant. The condition det{Φ_e(t)} ∉ L_2 is equivalent to I(t) → ∞ as t → ∞, which is satisfied if C_lin ≠ 0. Noting that
|cos(d_1 ω_i)| < 1, i = 1, 2,
follows from (26) and recalling Assumption 1 completes the proof.
It is worth noting that the condition C_lin ≠ 0 implies that d_1 is not a period of the signals u(t), ξ_i(t), or a half period if the signals have half-wave symmetry, i.e. u(t − d_1) ≠ ±u(t); otherwise the matrix Φ_e(t) is singular for all t ≥ 0. The inequality (26) guarantees that d_1 is smaller than half of the smallest period among the sinusoidal components with the frequencies ω_{1,2}; it is a sufficient but conservative estimate.
Both estimators (8) and (23) were simulated for the input signal
u(t) = 1.2 sin(2t + π/3) + 2 sin(3t + π/4),  (27)
with the following parameters:
• λ = 5, K = diag(30, 3) for the estimator (8);
• λ = 5, d_1 = 0.3, γ_1 = γ_2 = 0.1 for the estimator (23).
Zero initial conditions are chosen for both estimators, θ̂(0) = 0, which implies θ̃(0) = −θ = −[13, 36]^⊤. To separate the transients of the estimator from those of the SVF, both estimators are turned on at t = 5 seconds.
Transients θ̃(t) of the estimator (8) are presented in Fig.1, while transients θ̃(t) of the estimator (23) are presented in Fig.2;
note the difference in gains K and γ1,2 and in transient time. Transients of the estimator (23) with λ = 5, d1 = 0.3 and
different values γ1,2 are given in Fig. 3 and illustrate the impact of the gains.
Fig. 3: Transient comparison of the estimator with DREM (23) for the input signal (24) with λ = 5, d_1 = 0.3 and different gains γ_1 ∈ {0.1, 1}, γ_2 ∈ {0.1, 1}: (a) θ̃_1(t), (b) θ̃_2(t).
Fig. 4: Transients θ̃_1(t), θ̃_2(t), θ̃_3(t) of the basic estimator (8) for N = 3 with λ = 25, K = diag(240, 40, 10).
We also present simulation results for N = 3 and θ = [38, 361, 900]^⊤ with zero initial conditions, the SVF (4) with a(s) = (s + λ)^6 and λ = 25, and start time t = 2 seconds. The transients θ̃_{1,2,3}(t) are given in Fig. 4 for the estimator (8) with K = diag(240, 40, 10), and in Fig. 5 for the estimator (23) with d_1 = 0.2, d_2 = 0.5, γ_1 = γ_2 = γ_3 = 10^{−5}; note the difference in time scales.
VI. CONCLUSION
The problem of improving the transients in multiple frequencies estimation was considered. The dynamic regressor extension and mixing (DREM) procedure, which allows translating the original vector estimation problem into a set of scalar sub-problems, was successfully applied to enhance the basic estimation algorithm; as a benefit of this translation, non-strict monotonicity of the transients can be ensured. The significant transient improvement is illustrated with simulation results.
REFERENCES
[1] P. Goupil, “Oscillatory failure case detection in the a380 electrical flight control system by analytical redundancy,” Control Engineering Practice, vol. 18,
no. 9, pp. 1110–1119, 2010.
[2] I. D. Landau, M. Alma, A. Constantinescu, J. J. Martinez, and M. Noë, “Adaptive regulation - rejection of unknown multiple narrow band disturbances (a
review on algorithms and applications),” Control Engineering Practice, vol. 19, no. 10, pp. 1168–1181, 2011.
Fig. 5: Transients θ̃_1(t), θ̃_2(t), θ̃_3(t) of the estimator with DREM (23) for N = 3 with λ = 25, d_1 = 0.2, d_2 = 0.5, γ_1 = γ_2 = γ_3 = 10^{−5}.
[3] A. A. Bobtsov, A. S. Kremlev, and A. Pyrkin, “Compensation of harmonic disturbances in nonlinear plants with parametric and functional uncertainty,”
Automation and Remote Control, vol. 72, no. 1, pp. 111–118, 2011.
[4] D. Belleter, D. Breu, T. I. Fossen, and H. Nijmeijer, “A globally k-exponentially stable nonlinear observer for the wave encounter frequency,” in Control
Applications in Marine Systems, vol. 9, no. 1, 2013, pp. 209–214.
[5] B. Wu and M. Bodson, “A magnitude/phase-locked loop approach to parameter estimation of periodic signals,” Automatic Control, IEEE Transactions
on, vol. 48, no. 4, pp. 612–618, 2003.
[6] P. A. Regalia, “An improved lattice-based adaptive iir notch filter,” Signal Processing, IEEE Transactions on, vol. 39, no. 9, pp. 2124–2128, 1991.
[7] M. Mojiri and A. R. Bakhshai, “An adaptive notch filter for frequency estimation of a periodic signal,” Automatic Control, IEEE Transactions on, vol. 49,
no. 2, pp. 314–318, 2004.
[8] X. Xia, “Global frequency estimation using adaptive identifiers,” Automatic Control, IEEE Transactions on, vol. 47, no. 7, pp. 1188–1193, Jul 2002.
[9] B. Chen, G. Pin, and T. Parisini, “Robust parametric estimation of biased sinusoidal signals: A parallel pre-filtering approach,” in Decision and Control
(CDC), 2014 IEEE 53rd Annual Conference on, Dec 2014, pp. 1804–1809.
[10] G. Fedele and A. Ferrise, “A frequency-locked-loop filter for biased multi-sinusoidal estimation,” Signal Processing, IEEE Transactions on, vol. 62,
no. 5, pp. 1125–1134, 2014.
[11] S. Aranovskiy, A. Bobtsov, A. Pyrkin, and P. Gritcenko, “Adaptive filters cascade applied to a frequency identification improvement problem,”
International Journal of Adaptive Control and Signal Processing, 2015, Early View. [Online]. Available: http://dx.doi.org/10.1002/acs.2602
[12] S. Aranovskiy, A. Bobtsov, R. Ortega, and A. Pyrkin, “Performance enhancement of parameter estimators via dynamic regressor extension and mixing,”
arXiv Preprints, 2015, http://arxiv.org/abs/1509.02763.
[13] G. Pin, Y. Wang, B. Chen, and T. Parisini, “Semi-global direct estimation of multiple frequencies with an adaptive observer having minimal
parameterization,” in 54th Conference on Decision and Control, Osaka, Japan, 2015.
[14] S. Aranovskiy, A. Bobtsov, A. Kremlev, N. Nikolaev, and O. Slita, “Identification of frequency of biased harmonic signal,” European Journal of Control,
vol. 16, no. 2, pp. 129–139, 2010.
[15] A. A. Bobtsov, D. Efimov, A. A. Pyrkin, and A. Zolghadri, “Switched algorithm for frequency estimation with noise rejection,” IEEE transactions on
automatic control, vol. 57, no. 9, pp. 2400–2404, 2012.
[16] A. Pyrkin, A. A. Bobtsov, A. Vedyakov, and S. Kolyubin, “Estimation of polyharmonic signal parameters,” Automation and Remote Control, vol. 76,
no. 8, pp. 1400–1416, 2015.
[17] P. Young, “Parameter estimation for continuous-time models - a survey,” Automatica, vol. 17, no. 1, pp. 23–39, 1981.
[18] H. Garnier, M. Mensler, and A. Richard, “Continuous-time model identification from sampled data: Implementation issues and performance evaluation,”
International Journal of Control, vol. 76, no. 13, pp. 1337–1357, 2003.
[19] S. Sastry and M. Bodson, Adaptive control: stability, convergence and robustness. Courier Corporation, 2011.
On shrinking horizon move-blocking predictive control
Hafsa Farooqi, Lorenzo Fagiano, Patrizio Colaneri∗
arXiv:1803.09676v1 [cs.SY] 26 Mar 2018
1 Introduction
This manuscript contains technical details of recent results developed by the authors
on shrinking horizon predictive control with a move-blocking strategy.
2 Motivating application: energy efficient operation of trains
Our theoretical developments are motivated by a real-world application under study in
collaboration with a rail transport manufacturer, pertaining to the energy efficient operation of trains. Consider an electric train controlled by a digital control unit in discrete
time, with sampling period Ts . Let us denote with k ∈ Z the discrete time variable, with
x(k) = [x1 (k), x2 (k)]T the state of the train, where x1 is its position and x2 its speed (·T
denotes the matrix transpose operator), and with u(k) ∈ [−1, 1] a normalized traction
force, where u(k) = 1 corresponds to the maximum applicable traction and u(k) = −1
to the maximum braking. The input u is the available control variable. The train has
to move from one station with position x1 = 0 to the next one, with position x1 = x f ,
in a prescribed time t f . For a given pair of initial and final stations, the track features
(slopes, curvature) are known in advance. Thus, in nominal conditions (i.e. with rated
values of the train parameters, like its mass and the specifications of the powertrain and
braking systems), according to Newton’s laws and using the forward Euler discretization method, the equations of motion of a reasonably accurate model of this system
read:
x_1(k + 1) = x_1(k) + T_s x_2(k)
x_2(k + 1) = x_2(k) + T_s (F_T(x(k), u(k)) − F_B(x(k), u(k)) − F_R(x(k))) / M  (1)
where M is the total mass of the train, FT is the traction force, FB is the braking force,
and FR the resistive force. Functions FT (x, u), FB (x, u) are nonlinear and they depend
on the specific train and track profile. They include, for example, look-up tables that
link the traction and braking forces to the train speed and to the control input value.
These functions are derived either experimentally or from complex models of the train
and its traction and braking systems. In our research, these are provided by the business
∗The authors are with the Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano, Italy. E-mail addresses: {hafsa.farooqi|lorenzo.fagiano}@polimi.it
unit at our industrial partner. More details on these functions are omitted for confidentiality reasons. The resistive force FR (x) is also nonlinear, and it is the sum of a first
term Rv (x2 ), accounting for resistance due to the velocity, and a second term Rg (x1 ),
accounting for the effects of slopes and track curvature:
F_R(x) = R_v(x_2) + R_g(x_1)
R_v(x_2) = A + B x_2 + C x_2^2
R_g(x_1) = M_s g tan(α(x_1)) + D / r_c(x_1)  (2)
where the parameters A, B,C, D are specific to the considered train, Ms is the static mass
of the train, i.e. the mass calculated without taking into account the effective inertia
of the rotating components, rc (x1 ) and α(x1 ) are, respectively, the track curvature and
slope at position x1 , and g is the gravity acceleration. For example, an uphill track
section corresponds to α(x1 ) > 0, i.e. a positive slope.
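A minimal sketch of one discrete-time step of the nominal model (1)-(2) is given below. The traction and braking maps F_T, F_B and all numeric parameters are placeholders, since the real look-up tables are confidential and provided by the industrial partner; the sketch only illustrates how the forward-Euler update is assembled.

```python
from dataclasses import dataclass
import math

@dataclass
class TrainParams:
    Ts: float = 1.0          # sampling period [s]
    M: float = 2.0e5         # total mass [kg] (placeholder)
    Ms: float = 1.9e5        # static mass [kg] (placeholder)
    A: float = 2.0e3         # velocity-resistance coefficients (placeholders)
    B: float = 1.0e2
    C: float = 5.0
    D: float = 6.0e5         # curvature-resistance constant (placeholder)
    F_max: float = 2.0e5     # max traction/braking force [N] (placeholder)
    g: float = 9.81

def step(x, u, p, slope, curv_radius):
    """One forward-Euler step of (1), using the resistive force (2)."""
    x1, x2 = x
    FT = max(u, 0.0) * p.F_max                      # placeholder traction map FT(x, u)
    FB = max(-u, 0.0) * p.F_max                     # placeholder braking map FB(x, u)
    Rv = p.A + p.B * x2 + p.C * x2 ** 2             # velocity-dependent resistance Rv(x2)
    Rg = p.Ms * p.g * math.tan(slope) + p.D / curv_radius   # slope/curvature term Rg(x1)
    x1_next = x1 + p.Ts * x2
    x2_next = x2 + p.Ts * (FT - FB - (Rv + Rg)) / p.M
    return (x1_next, max(x2_next, 0.0))             # speed clamped at zero (sketch choice)

x = (0.0, 0.0)
for k in range(60):                                  # one minute of acceleration on flat track
    x = step(x, 0.75, TrainParams(), slope=0.0, curv_radius=1.0e4)
print("position [m], speed [m/s]:", x)
```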
Besides the prescribed arrival time t_f and position x_f, there are additional state constraints that must be satisfied. These pertain to the limit on the maximum allowed velocity, x̄_2(x_1), which depends on the position x_1, since a different velocity limit is imposed for safety by the regulating authority according to the track features at each position. Overall, by defining the terminal time step k_f = ⌊t_f / T_s⌋ (where ⌊·⌋ denotes the flooring operation to the closest integer), the state constraints read:
x(0) = [0, 0]^T
x(k_f) = [x_f, 0]^T
x_2(k) ≥ 0, k = 0, . . . , k_f
x_2(k) ≤ x̄_2(x_1(k)), k = 0, . . . , k_f  (3)
The control objective is to maximize the energy efficiency of the train while satisfying the constraints above. To translate this goal into mathematical terms, different possible cost functions can be considered. In our case, we consider the discretized integral of the absolute value of the traction power over time (with a constant scaling factor T_s^{−1}):
J = ∑_{k=0}^{k_f} |F_T(x(k), u(k)) x_2(k)|.  (4)
This choice tends to produce controllers that minimize the traction energy injected into
the system. The braking energy is not penalized, since in our case there is no restriction
to the use of the braking system.
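Assuming that the traction force and the speed have been logged along a trajectory, the cost (4) reduces to a simple accumulation; the numeric values in the short sketch below are hypothetical.

```python
def traction_cost(forces, speeds):
    """Discretized traction-energy cost of (4): sum_k |FT(x(k), u(k)) * x2(k)|."""
    return sum(abs(f * v) for f, v in zip(forces, speeds))

# hypothetical logged traction forces [N] and speeds [m/s] over a few steps
print(traction_cost([1.5e5, 1.5e5, 7.5e4, 0.0], [0.0, 1.5, 3.0, 3.5]))
```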
As already pointed out, the input variable is also constrained in the interval u ∈ [−1, 1].
When the controller operates fully autonomously, i.e. without a human driver in the
loop, the whole interval can be used. However, in a driver assistance scenario, i.e.
when the control algorithm is developed to assist a human driver with a suggested
value of the input handle, only a smaller set of possible values can be delivered by the
controller, in order to facilitate the human-machine interaction. In particular, in this
scenario the input constraints are further tightened according to four possible operating
modes prescribed by our industrial partner:
• Acceleration: in this mode, the input can take one of three allowed values, i.e.
u ∈ {0.5, 0.75, 1}.
• Coasting: this mode implies that the traction is zero, i.e. u = 0.
• Cruising: in this mode, the train engages a cruise control system that keeps a
constant speed, i.e. u is computed by an inner control loop in such a way that
FT = FR for positive slopes and FB = FR for negative slopes.
• Braking: in this mode the maximum braking force is used, i.e. u = −1.
As a matter of fact, the modes above can be merged into just two: one with a finite number of possible input values u ∈ {−1, 0, 0.5, 0.75, 1} (which unites the Acceleration, Coasting and Braking modes), and one with the cruise control engaged. Finally,
a further feature of this application is a relatively small sampling time Ts with respect
to the imposed overall time horizon t f , resulting in a rather large number of sampling
periods in the interval [0, t f ], typically from several hundreds to a few thousands.
3 Problem abstraction and nominal SBPC approach
The control problem described in Section 2 can be cast in a rather standard form:
min_u ∑_{k=0}^{k_f} ℓ(x(k), u(k))  (5a)
subject to
x(k + 1) = f (x(k), u(k))
(5b)
u(k) ∈ U, k = 0, . . . , k f − 1
(5c)
x(k) ∈ X, k = 1, . . . , k f
(5d)
x(0) = x0
(5e)
x(k f ) ∈ X f
(5f)
where x ∈ X ⊂ R^n is the system state, x_0 is the initial condition, u ∈ U ⊂ R^m is the input, f(x, u) : X × U → X is a known nonlinear mapping representing the discrete-time system dynamics, and ℓ(x, u) : X × U → R is a stage cost function defined by the designer according to the control objective. The symbol u = {u(0), . . . , u(k_f − 1)} ∈ R^{m k_f} represents the sequence of current and future control moves to be applied to the plant. The sets X ⊂ X and U ⊂ U represent the state and input constraints, and the set X_f ⊂ X the terminal state constraints, which include a terminal equality constraint as a special case.
We recall that a continuous function a : R+ → R+ is a K -function (a ∈ K ) if it is
strictly increasing and a(0) = 0. Throughout this paper, we consider the following
continuity assumption on the system model f .
Assumption 1 The function f enjoys the following continuity properties:
‖f(x_1, u) − f(x_2, u)‖ ≤ a_x(‖x_1 − x_2‖), ∀x_1, x_2 ∈ X, u ∈ U
‖f(x, u_1) − f(x, u_2)‖ ≤ a_u(‖u_1 − u_2‖), ∀u_1, u_2 ∈ U, x ∈ X  (6)
where a_x, a_u ∈ K.
In (6) and in the remainder of this paper, any vector norm ‖·‖ can be considered. Assumption 1 is reasonable in most real-world applications, and it holds in the railway
application considered here.
The nonlinear program (5) is a Finite Horizon Optimal Control Problem (FHOCP). In
the literature, many different solutions to solve this kind of a problem can be found, depending on the actual form of the system dynamics and constraints. One approach is to
compute a (typically local) optimal sequence of inputs u∗ and to apply it in open loop.
This might be convenient when little uncertainty is present and the system is open-loop
stable. Unfortunately, this is seldom the case. A much more robust approach is to resort to a feedback control policy u(k) = κ(x(k)). However, to derive explicitly such a
feedback policy in closed form is generally not computationally tractable, due to the
presence of system nonlinearities and constraints. A common way to derive implicitly
a feedback controller is to adopt a receding horizon strategy, where the input sequence
is re-optimized at each sampling time k and only the first element of such a sequence,
u∗ (k), is applied to the plant. Then, the feedback controller is implicitly defined by the
solution of a FHOCP at each k, where the current measured (or estimated) state x(k) is
used as initial condition. This approach is well-known as Nonlinear Model Predictive
Control (NMPC), and is adopted here as well, however with two particular differences
with respect to the standard formulation:
• First, since in our problem the terminal time k f is fixed, the resulting strategy
features a shrinking horizon rather than a receding one. Indeed, here the goal is
to make the state converge to the terminal set in the required finite time, and not
asymptotically as usually guaranteed by a receding horizon strategy;
• Second, we adopt a move-blocking strategy (see e.g. [3]) to reduce the computational burden required by the feedback controller. This is motivated by applications like the one described in Section 2, featuring values of k f of the order of
several hundreds to thousands. The corresponding number of decision variables,
combined with the system nonlinearity, often results in a prohibitive computational complexity when all the predicted control moves are free optimization
variables.
To the best of our knowledge, the combined presence of nonlinear dynamics, shrinking
horizon, and move-blocking strategy is new in the literature. We named the resulting
control approach Shrinking horizon Blocking Predictive Control (SBPC). We present
next the optimal control problem to be solved at each time step in SBPC, followed by
a pseudo-algorithm that realizes this control approach and by a proof of convergence
in nominal conditions.
3.1 Shrinking horizon Blocking Predictive Control (SBPC)
We consider a move blocking strategy, where the control input is held constant for a
certain time interval. Let us denote with L the maximum number of blocked control
moves in each interval within the prediction horizon. Moreover, we consider that each
interval contains exactly L blocked moves, except possibly the first one, which can
contain a number between 1 and L of blocked input vectors. In this way, for a given
value of k ∈ [0, k f − 1], the number N(k) of intervals (i.e. of different blocked input
vector values to be optimized) is equal to (see Fig. 1 for a graphical representation) :
N(k) = ⌈(k_f − k) / L⌉,  (7)
where ⌈·⌉ denotes the ceiling operation to the closest integer. Let us denote with v_{N(k)} = {v(1), . . . , v(N(k))} ∈ R^{m N(k)}, where v(·) ∈ U, the sequence of free input values to be optimized, i.e. the values that are held constant within each interval of the blocked input sequence (see Fig. 1), and with u(j|k) the input vector at time k + j predicted at time k. Then, with the described blocking strategy, at each k the values of u(j|k) are computed as:
u(j|k) = g(v_{N(k)}, j, k) = v(⌊(j + k − L ⌊k/L⌋) / L⌋ + 1).  (8)
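The blocking bookkeeping of (7)-(8) can be made concrete with the following short sketch; the specific values of k, k_f and L reproduce the situation of Fig. 1 and are otherwise arbitrary.

```python
import math

def num_blocks(k, k_f, L):
    """N(k) = ceil((k_f - k) / L), eq. (7)."""
    return math.ceil((k_f - k) / L)

def blocked_input(v, j, k, L):
    """u(j|k) = g(v, j, k): pick the block that step k + j falls into, eq. (8), 0-based index."""
    idx = (j + k - L * (k // L)) // L
    return v[idx]

k_f, L, k = 60, 20, 10
v = [0.5, 0.0, -1.0][: num_blocks(k, k_f, L)]          # one free value per block (N(10) = 3)
u_pred = [blocked_input(v, j, k, L) for j in range(k_f - k)]
print(num_blocks(k, k_f, L), u_pred)                   # first block holds 10 moves, the others 20
```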
Finally, let us denote with x( j|k), j = 0, ..., k f − k the state vectors predicted at time
k + j starting from the one at time k. At each time k ∈ [0, k f − 1], we formulate the
following FHOCP:
min_{v_{N(k)}} ∑_{j=0}^{k_f − k} ℓ(x(j|k), u(j|k))  (9a)
subject to
u(j|k) = g(v_{N(k)}, j, k), j = 0, . . . , k_f − k − 1  (9b)
x(j + 1|k) = f(x(j|k), u(j|k)), j = 0, . . . , k_f − k − 1  (9c)
u(j|k) ∈ U, j = 0, . . . , k_f − k − 1  (9d)
x(j|k) ∈ X, j = 1, . . . , k_f − k  (9e)
x(0|k) = x(k)  (9f)
x(k_f − k|k) ∈ X_f  (9g)
We denote with v^∗_{N(k)} = {v^∗(1), . . . , v^∗(N(k))} a solution (in general only locally optimal) of (9). Moreover, we denote with x^∗(k) and u^∗(k) the corresponding predicted
sequences of state and input vectors:
x∗ (k) = {x∗ (0|k), . . . , x∗ (k f − k|k)}
(10a)
u∗ (k) = {u∗ (0|k), . . . , u∗ (k f − 1 − k|k)}
(10b)
where
x^∗(0|k) = x(k)
x^∗(j + 1|k) = f(x^∗(j|k), u^∗(j|k))  (10c)
u^∗(j|k) = g(v^∗_{N(k)}, j, k)  (10d)
The SBPC strategy is obtained by recursively solving (9), as described by the following
pseudo-algorithm.
Algorithm 1 Nominal SBPC strategy
1. At sampling instant k, measure or estimate the state x(k) and solve the FHOCP (9). Let v^∗_{N(k)} be the computed solution;
2. Apply to the plant the first element of the sequence v^∗_{N(k)}, i.e. the control vector u(k) = u^∗(0|k) = v^∗(1);
3. Repeat the procedure from 1) at the next sampling period.
Algorithm 1 defines the following feedback control law:
u(k) = µ(x(k)) := u^∗(0|k),  (11)
and the resulting model of the closed-loop system is:
x(k + 1) = f(x(k), µ(x(k)))  (12)
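A compact sketch of Algorithm 1 as a closed-loop simulation is given below. The function solve_fhocp stands for any numerical solution of the FHOCP (9) and is only a placeholder signature introduced here; f is the nominal model of (5b).

```python
def sbpc_nominal(x0, k_f, L, f, solve_fhocp):
    """Nominal SBPC loop of Algorithm 1 (placeholder solver, nominal plant of eq. (12))."""
    x = x0
    trajectory, applied = [x0], []
    for k in range(k_f):
        v_star = solve_fhocp(x, k, k_f, L)   # solve the FHOCP (9) at time k
        u_k = v_star[0]                      # u(k) = u*(0|k) = v*(1)
        x = f(x, u_k)                        # nominal plant update
        applied.append(u_k)
        trajectory.append(x)
    return trajectory, applied
```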
Remark 1 In the approach described so far, the number N(k) of predicted inputs to be optimized decreases from N(1) = ⌈k_f / L⌉ to N(k_f − 1) = 1, see Fig. 1. Another approach
that can be used with little modifications is to keep a constant value of N = N(1), and
to reduce the number of blocked input values in each interval as k increases, up until
the value k = k f − N(1) is reached, after which each predicted input vector is a free
variable and their number shrinks at each k. This second strategy has the advantage to
retain more degrees of freedom in the optimization as time approaches its final value.
We conclude this section with a proposition on the recursive feasibility of (9) and convergence of the state of (12) to the terminal set.
Proposition 1 Assume that the FHOCP (9) is feasible at time k = 0. Then, the FHOCP
(9) is recursively feasible at all k = 1, . . . , k f − 1 and the state of the closed loop system
(12) converges to the terminal set X f at time k f .
Proof. Recursive feasibility is established by construction, since at any time k + 1 one can build a feasible sequence v_{N(k+1)} either by taking v_{N(k+1)} = v^∗_{N(k)}, if N(k + 1) = N(k), or by taking v_{N(k+1)} = {v^∗(2), . . . , v^∗(N(k))} (i.e. the tail of v^∗_{N(k)}), if N(k + 1) = N(k) − 1. Convergence to the terminal set is then achieved by considering that constraint (9g) is feasible at time k = k_f − 1.
Figure 1: SBPC scheme with L = 20 and k_f = 60. As an example, possible courses of predicted inputs at time k = 0 ('◦'), k = 10 ('+'), and k = 20 ('∗') are depicted. It can be noted that for k = 20 the number of decision variables reduces from 3 to N(20) = 2, and that as k increases, the number of blocked moves in the first block decreases.
Remark 2 So far, we have disregarded any mismatch between the model f (x, u) and
the real plant, like the presence of model uncertainty and external disturbances. For
this reason, we termed the SBPC approach of Algorithm 1 the “nominal” one. In the
next section, we introduce a model of uncertainty, whose form is motivated again by the
application described in Section 2, and two possible variations of Algorithm 1 to deal
with it, along with their guaranteed convergence properties. We term these variations
the “relaxed” approaches, since they involve the use of suitable soft (i.e. relaxed)
constraints to guarantee recursive feasibility.
Remark 3 Convergence to X f does not necessarily imply forward invariance of such a
set under the described control scheme (which is by the way not well defined for k > k f ).
The capability to keep the state within the terminal set depends on how such a set is
defined (e.g. it holds when X f contains equilibrium points for the model f (x, u)) and in
general it is not required by the considered problem setup. This automatically implies
that we don’t have to assume the existence of any terminal control law as usually done
in standard NMPC formulations. On the other hand, in our motivating application the
terminal set X f actually corresponds to an equilibrium point (namely with zero speed,
and position equal to the arrival station, see (3)), thus in this case nominal forward
invariance is guaranteed for k > k f .
4 Relaxed SBPC approaches: algorithms and properties
Following Remark 2, to model the system uncertainty and disturbances we consider an
additive term d(k) acting on the input vector, i.e.:
ũ(k) = u(k) + d(k)
(13)
where ũ(k) is the disturbance-corrupted input provided to the plant. This model represents well all cases where plant uncertainty and exogenous disturbances can be translated into an effect similar to the control input (the so-called matched uncertainty).
For example, in our motivating application with straightforward manipulations, equation (13) can describe uncertainty in the train mass, drivetrain specs, track slope and
curvature, as well as the discretization of u∗ (0|k) and/or misapplication by the human
operator in a driver assistance scenario (see Section 2).
We consider the following assumption on d:
Assumption 2 The disturbance term d belongs to a compact set D ⊂ R^m such that:
‖d‖ ≤ d̄, ∀d ∈ D  (14)
where d̄ ∈ (0, +∞).
This assumption holds in many practical cases and in the considered train application as well. We indicate the perturbed state trajectory due to the presence of d as:
x̃(k + 1) = f (x̃(k), ũ(k)), k = 0, . . . , k f
where x̃(0) = x(0). Now, referring to Proposition 1, the convergence guarantees achieved
in the nominal case are a direct consequence of the recursive feasibility property, which
can be easily lost in the presence of the disturbance d, due to the deviation of the perturbed trajectory from the nominal one. As commonly done in standard NMPC, to retain recursive feasibility, we therefore soften the constraints in the FHOCP. However, in general the use of soft constraints does not guarantee that, in closed-loop operation, the operational constraints are satisfied, or even that the constraint violation is uniformly decreasing as the worst-case disturbance bound d̄ gets smaller. For simplicity and to be
more specific, from now on let us restrict our analysis to the terminal state constraint
in (5f), i.e. x(k f ) ∈ X f . We do so without loss of generality, since the results and approaches below can be extended to any state constraint in the control problem. On the
other hand, in our railway application the terminal state constraint is the most important
one from the viewpoint of system performance. The other constraints (velocity limits)
are always enforced for safety by modulating traction or by braking. Let us denote the
distance between a point x and a set X as:
∆(x, X) = min_{y∈X} ‖x − y‖.
Then, we want to derive a modified SBPC strategy with a softened terminal state constraint (to ensure recursive feasibility) that guarantees a property of the following form in closed loop:
∆(x̃(k_f), X_f) ≤ β(d̄), β ∈ K.  (15)
That is, the distance between the terminal state and the terminal constraint is bounded by a value that decreases strictly to zero as d̄ → 0. In order to obtain this property,
we propose a relaxed SBPC approach using a two-step constraint softening procedure,
described next.
4.1 Two-step relaxed SBPC strategy
At each time k we consider a strategy consisting of two optimization problems to be
solved in sequence:
a) we compute the best (i.e. smallest) achievable distance between the terminal
state and the terminal set, starting from the current perturbed state x̃(k):
γ̄ = arg min_{v_{N(k)}, γ} γ  (16a)
subject to
u( j|k) = g(vN(k) , j, k), j = 0, . . . , k f − k − 1
(16b)
x( j + 1|k) = f (x( j|k), u( j|k)), j = 0, . . . , k f − k − 1
(16c)
u( j|k) ∈ U, j = 0, . . . , k f − k − 1
(16d)
x( j|k) ∈ X, j = 1, . . . , k f − k
(16e)
x(0|k) = x̃(k)
(16f)
∆(x(k f − k|k), X f ) ≤ γ
(16g)
b) we optimize the input sequence using the original cost function, and softening the terminal constraint by γ̄:
min_{v_{N(k)}} ∑_{j=0}^{k_f − k} ℓ(x̃(j|k), u(j|k))  (17a)
subject to
u( j|k) = g(vN(k) , j, k), j = 0, . . . , k f − k − 1
(17b)
x( j + 1|k) = f (x( j|k), u( j|k)), j = 0, . . . , k f − k − 1
(17c)
u( j|k) ∈ U, j = 0, . . . , k f − k − 1
(17d)
x( j|k) ∈ X, j = 1, . . . , k f − k
(17e)
x(0|k) = x̃(k)
(17f)
∆(x(k_f − k|k), X_f) ≤ γ̄  (17g)
By construction, both problems are always feasible (with the caveat that state constraints are considered to be always feasible, as discussed above, otherwise the softening shall be applied to these constraints as well). We denote with v^r_{N(k)}, x^r(k) and u^r(k) the optimized sequences of decision variables, states and inputs resulting from the solution of (17). The sequences x^r(k) and u^r(k) are computed from v^r_{N(k)} and x̃(k)
as reported in (10). Finally, we note that the disturbance is not explicitly considered in
problems (16)-(17), which still employ the nominal model for the predictions.
The resulting relaxed SBPC strategy is implemented by the following pseudo-algorithm.
Algorithm 2 Two-stage relaxed SBPC strategy
1. At sampling instant k, measure or estimate the state x̃(k) and solve in sequence the optimization problems (16)-(17). Let v^r_{N(k)} be the computed solution;
2. Apply to the plant the first element of the sequence v^r_{N(k)}, i.e. the control vector u(k) = u^r(0|k) = v^r(1);
3. Repeat the procedure from (1) at the next sampling period.
Algorithm 2 defines the following feedback control law:
u(k) = µ^r(x̃(k)) := u^r(0|k),  (18)
and the resulting closed-loop dynamics are given by:
x̃(k + 1) = f(x̃(k), µ^r(x̃(k)) + d(k)).  (19)
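The two-step structure of Algorithm 2 can be sketched analogously; both solver calls are placeholder signatures introduced here for illustration, solving (16) and (17) respectively, and plant denotes the perturbed system (19).

```python
def sbpc_relaxed(x0, k_f, L, plant, min_terminal_distance, solve_relaxed_fhocp):
    """Two-step relaxed SBPC loop of Algorithm 2 (placeholder solvers)."""
    x = x0
    for k in range(k_f):
        gamma_bar = min_terminal_distance(x, k, k_f, L)        # step a): problem (16)
        v_r = solve_relaxed_fhocp(x, k, k_f, L, gamma_bar)     # step b): problem (17)
        x = plant(x, v_r[0])                                   # apply u(k) = u^r(0|k) to (19)
    return x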
The next result shows that the closed-loop system (19) enjoys a uniformly bounded
accuracy property of the form (15), provided that the nominal SBPC problem (9) is
feasible at k = 0.
Theorem 1 Let Assumptions 1 and 2 hold and let the FHOCP (9) be feasible at time
k = 0. Then, the terminal state x̃(k f ) of system (19) enjoys property (15) with
∆(x̃(k_f), X_f) ≤ β(d̄) = ∑_{k=0}^{k_f − 1} β_{k_f − k − 1}(d̄)  (20)
where
β_0(d̄) = a_u(d̄)
β_k(d̄) = a_u(d̄) + a_x(β_{k−1}(d̄)), k = 1, . . . , k_f − 1  (21)
Proof. The proof is by induction. Start at k = 0 and consider the nominal optimized sequences x^∗(k), u^∗(k) obtained by solving problem (9). We first evaluate the worst-case perturbation induced by the disturbance with respect to the open-loop state trajectory x^∗(k). For a sequence of disturbances d(j|k), j = 0, . . . , k_f − k, the corresponding open-loop input and state trajectories are:
ũ(j|k) = u^∗(j|k) + d(j|k), j = 0, . . . , k_f − k − 1
x̃(0|k) = x^∗(0|k)
x̃(j + 1|k) = f(x̃(j|k), ũ(j|k)), j = 1, . . . , k_f − k  (22)
From (6) we have:
‖x̃(1|0) − x^∗(1|0)‖ = ‖f(x(0), ũ(0|0)) − f(x(0), u^∗(0|0))‖ ≤ a_u(‖ũ(0|0) − u^∗(0|0)‖) ≤ a_u(d̄) = β_0(d̄).
Consider now the perturbation 2 steps ahead:
‖x̃(2|0) − x^∗(2|0)‖ = ‖f(x̃(1|0), ũ(1|0)) − f(x^∗(1|0), u^∗(1|0))‖ = ‖f(x̃(1|0), ũ(1|0)) − f(x̃(1|0), u^∗(1|0)) + f(x̃(1|0), u^∗(1|0)) − f(x^∗(1|0), u^∗(1|0))‖ ≤ a_u(d̄) + a_x(‖x̃(1|0) − x^∗(1|0)‖) ≤ a_u(d̄) + a_x(β_0(d̄)) = β_1(d̄).
By iterating up until the second last time step we obtain:
‖x̃(k_f|0) − x^∗(k_f|0)‖ ≤ β_{k_f − 1}(d̄),  (23)
where β_{k_f − 1} ∈ K since it is given by compositions and summations of class-K functions. Since the FHOCP (9) is feasible, we have x^∗(k_f|0) ∈ X_f, i.e. ∆(x^∗(k_f|0), X_f) = 0, and thus:
∆(x̃(k_f|0), X_f) ≤ ‖x̃(k_f|0) − x^∗(k_f|0)‖ + ∆(x^∗(k_f|0), X_f) ≤ β_{k_f − 1}(d̄).  (24)
Now consider k = 1 and the FHOCP (16). If the optimizer is initialized with blocked control moves v_{N(1)} such that the tail of the previous optimal sequence u^∗(0) is applied to the system, the corresponding minimum γ in (16) results to be upper bounded by β_{k_f − 1}(d̄), in virtue of (24). The optimal value γ̄ is therefore not larger than this bound as well:
γ̄|_{k=1} ≤ β_{k_f − 1}(d̄).  (25)
Now take the optimal sequences x^r(1) and u^r(1) computed by solving the FHOCP (17). By applying the same reasoning as we did for k = 0, we have (compare with (23)):
‖x̃(k_f|1) − x^r(k_f|1)‖ ≤ β_{k_f − 2}(d̄).  (26)
Moreover, equation (25) implies that the solution of (17) satisfies the following inequality:
∆(x^r(k_f|1), X_f) ≤ γ̄|_{k=1}  (27)
From (25)-(27) we have:
∆(x̃(k_f|1), X_f) ≤ ‖x̃(k_f|1) − x^r(k_f|1)‖ + ∆(x^r(k_f|1), X_f) ≤ β_{k_f − 2}(d̄) + β_{k_f − 1}(d̄) = ∑_{k=0}^{1} β_{k_f − k − 1}(d̄).
By applying recursively the same arguments, the bound (20) is obtained.
Theorem 1 indicates that the worst-case distance between the terminal state and the terminal set is bounded by a value which is zero for d̄ = 0 and increases strictly with the disturbance bound. In the considered railway application this means that, for example, the worst-case accuracy degradation in reaching the terminal station due to a discretization of the input, as done in the driver assistance mode, is proportional to the largest employed quantization interval of the input handle. This result provides a theoretical justification to the proposed two-step relaxed SBPC approach. The bound (20) is conservative, since it essentially results from the accumulation of worst-case perturbations induced by the disturbance on the open-loop trajectories computed at each k. As we show in our simulation results, in practice the resulting closed-loop performance is usually very close to that of the nominal case, thanks to the recursive optimization in the feedback control loop.
4.2 Multi-objective relaxed SBPC strategy
As an alternative to the two-step approach described above, one can also consider a
multi-objective minimization:
min_{v_{N(k)}, γ} ∑_{j=0}^{k_f − k} ℓ(x̃(j|k), u(j|k)) + ω γ  (28a)
subject to
u( j|k) = g(vN(k) , j, k), j = 0, . . . , k f − k − 1
(28b)
x( j + 1|k) = f (x̃( j|k), u( j|k)), j = 0, . . . , k f − k − 1
(28c)
u( j|k) ∈ U, j = 0, . . . , k f − k − 1
(28d)
x( j|k) ∈ X, j = 1, . . . , k f − k
(28e)
x(0|k) = x̃(k)
(28f)
∆(x(k f − k|k), X f ) ≤ γ
(28g)
where ω is a positive weight on the scalar γ. Problem (28) can be solved in Algorithm 2 in place of problems (16)-(17). In this case, the advantage is that a trade-off between constraint relaxation and performance can be set by tuning ω. Regarding the guaranteed bounds on constraint violation, with arguments similar to those employed in [4] one can show that, at each k ∈ [0, k_f − 1], for any ε > 0 there exists a finite value of ω such that the distance between the terminal state and the terminal set is smaller than β_{k_f − k − 1}(d̄) + ε. Thus, with a large enough ω, one can recover the behavior obtained with the two-step relaxed SBPC approach. The theoretical derivation is omitted for the sake of brevity, as it is a rather minor extension of the results of [4].
References
[1] Szilárd Aradi, Tamás Bécsi, and Péter Gáspár. Design of predictive optimization
method for energy-efficient operation of trains. In European Control Conference
(ECC), pages 2490–2495, 2014.
[2] F. Borrelli, A. Bemporad, and M. Morari. Predictive Control for Linear and Hybrid Systems. Cambridge University Press, 2017.
[3] Raphael Cagienard, Pascal Grieder, Eric C Kerrigan, and Manfred Morari. Move
blocking strategies in receding horizon control. Journal of Process Control,
17(6):563–570, 2007.
[4] L. Fagiano and A.R.Teel. Generalized terminal state constraint for model predictive control. Automatica, 49(5):2622–2631, 2013.
[5] Ravi Gondhalekar and Jun-ichi Imura. Recursive feasibility guarantees in move-blocking MPC. In 46th IEEE Conference on Decision and Control, pages 1374–
1379, 2007.
[6] Ravi Gondhalekar and Jun-ichi Imura. Strong feasibility in input-move-blocking
model predictive control. IFAC Proceedings Volumes, 40(12):816–821, 2007.
[7] Ravi Gondhalekar and Jun-ichi Imura. Least-restrictive move-blocking model
predictive control. Automatica, 46(7):1234–1240, 2010.
[8] Ravi Gondhalekar, Jun-ichi Imura, and Kenji Kashima. Controlled invariant feasibility - a general approach to enforcing strong feasibility in MPC applied to
move-blocking. Automatica, 45(12):2869–2875, 2009.
[9] G. C. Goodwin, M. M. Seron, and J. A. De Don. Constrained control and estimation: an optimisation approach. Springer, London, 2005.
[10] Shengbo Eben Li, Zhenzhong Jia, Keqiang Li, and Bo Cheng. Fast online computation of a model predictive controller and its application to fuel economy–
oriented adaptive cruise control. IEEE Transactions on Intelligent Transportation
Systems, 16(3):1199–1209, 2015.
[11] D.Q. Mayne. Model predictive control: Recent developments and future promise.
Automatica, 50:2967–2986, 2014.
[12] S.J. Qin and T.A. Badgwell. A survey of industrial model predictive control
technology. Control Engineering Practice, 11:733–764, 2003.
[13] Gerben M Scheepmaker, Rob MP Goverde, and Leo G Kroon. Review of energyefficient train control and timetabling. European Journal of Operational Research, 257(2):355–376, 2017.
[14] Rohan C Shekhar and Jan M Maciejowski. Robust variable horizon mpc with
move blocking. Systems & Control Letters, 61(4):587–594, 2012.
[15] G Valencia-Palomo, JA Rossiter, CN Jones, Ravi Gondhalekar, and B Khan. Alternative parameterisations for predictive control: How and why? In American
Control Conference (ACC), 2011, pages 5175–5180, 2011.
[16] Mingzhao Yu and Lorenz T Biegler. A stable and robust NMPC strategy with
reduced models and nonuniform grids. IFAC-PapersOnLine, 49(7):31–36, 2016.
A Two-Stage Method for Text Line Detection in Historical Documents
Tobias Grüninga,∗, Gundram Leiferta , Tobias Straußa , Roger Labahna
a Computational Intelligence Technology Lab, Institute of Mathematics, University of Rostock, 18057 Rostock, Germany
arXiv:1802.03345v1 [cs.CV] 9 Feb 2018
Abstract
This work presents a two-stage text line detection method for historical documents. In a first stage, a
deep neural network called ARU-Net labels pixels to belong to one of the three classes: baseline, separator
or other. The separator class marks beginning and end of each text line. The ARU-Net is trainable from
scratch with manageably few manually annotated example images (less than 50). This is achieved by
utilizing data augmentation strategies. The network predictions are used as input for the second stage
which performs a bottom-up clustering to build baselines. The developed method is capable of handling
complex layouts as well as curved and arbitrarily oriented text lines. It substantially outperforms current
state-of-the-art approaches. For example, for the complex track of the cBAD: ICDAR2017 Competiton
on Baseline Detection the F-value is increased from 0.859 to 0.922. The framework to train and run the
ARU-Net is open source.
Keywords: baseline detection, text line detection, layout analysis, historical documents, U-Net, pixel
labeling, semantic segmentation, state estimation
1. Introduction
Accessibility of the valuable cultural heritage of historical documents is an important concern of
archives, libraries as well as certain companies, e.g., those specialized in genealogy. After years of digitization at an industrial scale to protect and preserve these valuable goods, millions over millions of
scanned pages are stored at servers all over the world [1]. The generic next step is to make the enormous
amount of content of these document images accessible and enable humanists, historians, genealogists as
well as ordinary people to efficiently work with these documents. Besides the cost- and time-consuming
process of manually annotating volumes [2], it is subject to current research and scientific discussion how
to automate this process [3].
Since 2009, tremendous progress in the field of Automated Text Recognition1 (ATR) [4, 5] as well
as Keyword Spotting (KWS) [6, 7, 8] was achieved. The performance of state-of-the-art systems reaches
∗Corresponding author
Email addresses: [email protected] (Tobias Grüning), [email protected] (Gundram Leifert), [email protected] (Tobias Strauß), [email protected] (Roger Labahn)
1 Optical Character Recognition + Handwritten Text Recognition
character error rates below 10% for ATR [9] and mean average precisions above 0.9 for KWS [10] for
complex handwritten documents. Although efforts are made to develop systems working solely on the
rough input image without any a-priori segmentation [11, 12, 13], the best performing recognition systems
– with reference to recently hosted competitions – rely on segmented words or text lines as input. Entirely
segmentation-free approaches suffer either from an enormous training/inference time and/or, up to now,
did not demonstrate its applicability with competitive quality on challenging datasets [10]. Hence, a
workflow which involves a text line extraction followed by the transformation of pixel information into
textual information (ATR/KWS) is the widely used standard. This work deals with the first step of the
information retrieval pipeline, namely the text line extraction. This is a mandatory step since errors
directly effect the performance of the overall information retrieval process. The text line extraction is
still unsolved to a certain extent for historical documents due to difficulties such as physical degradations
(e.g., bleed-through, faded away characters, heterogeneous stroke intensity), image capture conditions
(e.g., scan curve, illumination issues), complex layouts (e.g., structured documents, marginalia, multicolumn layouts, varying font sizes), arbitrary orientations and curved text lines.
The results achieved by state-of-the-art approaches are not satisfying [14], especially if dealing with
heterogeneous data. Therefore, this work focuses on the extraction of text lines in arbitrary historical
documents. Since different ATR/KWS systems necessitate different text line representations, e.g., bounding boxes [15], x-height areas [16] or more precise polygonal representations following all ascenders and
descenders [7], there is not the one correct text line representation. Therefore we limit ourselves towards
the text line detection task by representing each text line by its baseline. The detected baselines allow
for an extraction of the text lines in an appropriate – with respect to the following method – way. The
problem of extracting a text line given its baseline can be tackled by applying, e.g., histogram approaches
to estimate the x-height [16] or by utilizing Dynamic Programming to calculate separating seams [17].
Besides the classical image processing based approaches, deep learning based methods became omnipresent in the document analysis community within the last years. Such techniques were recently used
to solve several different problems such as binarization [18], page boundary extraction [19], page segmentation [20] or text line detection [16]. The presented work to our knowledge is the first which uses a
two-stage method, combining deep learning strategies and state-of-the-art image processing based techniques. We propose an extension of the U-Net [21], the so-called ARU-Net. The fully convolutional U-Net
is extended by incorporating residual blocks [22] to increase its representative power. Furthermore, a spatial attention mechanism is developed which allows the ARU-Net to focus on image content at different
positions and scales. The network is designed to process the entire, arbitrarily-sized image at once to take account of all spatial context. The ARU-Net is universal in the sense that it could be used to tackle any
pixel labeling task. In this work, it is trained in a fully supervised fashion to classify each pixel to belong
to one of the following classes: baseline, separator or other. The separator class is introduced to explicitly
predict beginning and end of each text line and not just rely on the information implicitly given by the
baseline class. This is advantageous for text lines which are close together but have to be separated,
e.g., those belonging to different columns. The network output serves as input for an image processing
based bottom-up clustering approach. This approach utilizes so-called states of superpixels [23], which
encode local text orientation and interline distances. This second stage allows for an error correction of
the network output by incorporating domain knowledge based on assumptions, which hold for text lines in
general, see Sec. 3.3.3. Additionally, it is easily possible to incorporate the separator information, which
allows for handling documents with complex layouts, e.g., images containing tables or marginalia.
Each method relying on supervised deep learning and therefore relying on training data can suffer
from the need of an enormous amount of labeled training data. We demonstrate that the presented
approach achieves high quality results on the Bozen dataset [24] with less than 50 full-page training
samples by using data augmentation strategies. With an annotation effort of just a few minutes per page, the adaptation of the proposed method is easy and cheap. We demonstrate the applicability
of the proposed method for images with arbitrarily oriented as well as curved text lines by achieving
nearly as good results as for straight 0° oriented text lines. Finally, we show that the presented approach
outperforms state-of-the-art methods on three different datasets. A relative F-value [25] error (the gap to
1.0) reduction of at least 24% is achieved for the cBAD dataset [26]. This dataset is composed of images
of nine different archives and libraries from all over Europe and is therefore – in the opinion of the authors
– the most representative and heterogeneous freely available dataset. Especially, for the complex track,
which contains mostly documents with complex layouts, the average F-value is increased from 0.859 to
0.922.
The main contributions of this work are:
• introduction of a newly designed deep neural network (ARU-Net) for pixel labeling along with a
meaningful parametrization – the ARU-Net and its training framework are open source2 ,
• introduction of the new concept of learned separators to handle complex layouts instead of an a-priori
page segmentation or white-/blackrun calculation
• introduction of a state-of-the-art two-stage workflow which combines state-of-the-art deep learning
and image processing techniques – the entire workflow is freely usable via the Transkribus platform3 .
2. Related Work
A comprehensive survey of approaches for text line extraction in historical documents is given in [27]
and [28]. In this section, we will focus on approaches relevant for this work.
2 https://github.com/TobiasGruening/ARU-Net
3 https://transkribus.eu
In [17, 29, 30], the principle of Dynamic Programming is utilized to calculate cost optimal paths
passing the image from left to right to separate different text lines from each other. These methods
basically differ in the way the images are pre-processed and in the definition of the cost function. Garz
et al. [31] propose a method based on clustering of interest points (this is just another name for what
we call superpixel). Using a standard clustering technique, interest points in an area which exceeds a
certain density are clustered to form word clusters. Word clusters are separated to sub-word segments
and these are finally grouped to build text lines. Ryu et al. [23] propose an algorithm which uses certain
characteristics (so-called states) of extracted connected components to assign costs to certain clustering
results. These states encode local text orientation and interline distances and are introduced in Def. 3.3.4.
Subsequently using four different operations (merge, split, merge-split, merge-merge-split) on an initial
coarse clustering, the costs are minimized to obtain an optimal clustering, which leads to the final text line
segmentation. Ahn et al. [32] improve this approach by the introduction of a newly developed binarization
method and an improved clustering process. Grüning et al. [33] extended the approach of Ryu et al. so
that it is applicable for more general superpixels with a newly introduced clustering procedure which
does not rely on a coarse initial clustering. Besides these “classical” approaches, which are based on
image processing techniques, methods based on machine learning gained importance within the last two
years. Moysset et al. [34] propose a method based on a recurrent neural network. The network is trained
given only the number of lines in the image utilizing Connectionist Temporal Classification which was
introduced to train networks for handwriting text recognition and allows for ground truth data without
any alignment. The trained neural network predicts confidences for the vertical coordinates of the image
to belong either to the classes line or interline. Further post-processing of the neural network output
is performed to detect the text lines. In follow-up works, they formulated the problem as a regression
problem [35, 36]. The recurrent neural network directly predicts bounding boxes as well as the start of
each text line, respectively. Besides this regression based approach, classification based approaches were
proposed most recently. In contrast to the approach of Moysset et al., these methods perform a pixel
labeling to classify each image pixel (instead of classifying rows of pixels, only). For instance, Renton
et al. [16] propose a fully convolutional network (FCN) based on dilated (or atrous) convolutions to
classify pixels as text line main body or not. The classification results are utilized to extract the text line
information. These techniques are currently very popular, e.g., four of the five participants of the cBAD:
ICDAR2017 Competition on Baseline Detection [37] use methods relying on FCNs.
3. Methodology
In this section, we introduce the two-stage method for baseline detection, see Fig. 3.1. The first stage
relies on a deep neural network – the ARU-Net – and performs a pixel labeling. The pixel labeling
can be seen as some kind of goal-oriented binarization. Instead of detecting all foreground elements, it restricts itself to those elements which are of interest for the specific task.

Figure 3.1: Two-stage workflow to detect baselines (Input → Stage I: ARU-Net → Stage II: Superpixel Calculation, State Estimation, Superpixel Clustering → Output) – The first stage utilizes a deep hierarchical neural network to perform a pixel labeling. The result of Stage I is the input for an image processing based method in Stage II. This method clusters superpixels to build baselines. The image is sampled from the cBAD complex test set [25].

The second stage performs a
superpixel (SP) extraction on the first stage’s output. These SPs are further clustered to build baselines.
In the following, the problem of baseline detection is formulated. Afterwards, a detailed description of
the proposed ARU-Net is given. Finally, the SP extraction and clustering approach is described.
3.1. Problem Statement
We will introduce the problem of baseline detection in a formal way by defining all necessary termini
and notation. Within this work we follow the definition of a baseline given in [26]:
Definition 3.1.1 (baseline). A baseline is defined in the typographical sense as the virtual line where
most characters rest upon and descenders extend below.
h×w
Definition 3.1.2 (image, pixel, intensity). A matrix I ∈ [0, 1]
is called (gray-scale) image of
height h and width w. A pair p = (y, x) ∈ {1, ..., h} × {1, ..., w} is called pixel of I. The matrix value
I(p) = I(y, x) = Iy,x in row y and column x is called intensity of pixel p.
Image means gray-scale image for the rest of this work. Ih denotes the height of image I, Iw denotes the
width, analogously. For visualization purposes a pixel intensity value of 1 means white and 0 means black.
If the colored image is available, we usually use this one for visualization even though it is converted to
its gray-scale version for calculations.
Definition 3.1.3 (coordinate). Let p = (y, x) be a pixel. py = y and px = x denote the elements of
the first and second dimension of p and are called coordinates (y−coordinate and x−coordinate).
Definition 3.1.4 (image space). The set of all possible images
I = ⋃_{h,w∈N} [0, 1]^{h×w}  (3.1)
is called image space.
Definition 3.1.5 (polygonal chain, closed). A polygonal chain of length n ∈ N is an n-tuple of pixels
P = (p1 , p2 , ..., pn ) .
(3.2)
A polygonal chain is called closed iff p1 = pn holds.
Taking into account Def. 3.1.1, each baseline can be represented by a polygonal chain.
Definition 3.1.6 (polygonal chain space). The infinite set P of all possible polygonal chains is called
polygonal chain space.
Definition 3.1.7 (baseline detector, baseline hypothesis). We call a function b : I → P(P) which
maps each image to a subset of P a baseline detector. The set of all baseline detectors is denoted by B.
The output of b for a certain image I is called baseline hypothesis.
Definition 3.1.8 (baseline ground truth). The set G_I ⊂ P of polygonal chains representing the baselines of an image I (possibly annotated by a human operator) is called baseline ground truth (for image I).
Def. 3.1.1 allows for some baseline variety. Hence, there is no single unique and correct ground truth for an image. Therefore, ground truth information is always biased by its creator. This has to be taken into account for the evaluation process as well as for the baseline detector design.
Definition 3.1.9 (similarity score). A function ⟨·, ·⟩_µ : P × P → [0, 1] assigning a scalar value to each pair of baseline ground truth and baseline hypothesis polygonal chain sets is called similarity score.
A value of 1.0 indicates that two polygonal chains are regarded as equal. Within this work we follow
the similarity score introduced in [25]: We measure the accuracy of a baseline detector in terms of the
F-value, see [25] for a detailed introduction.
The problem tackled in this work can now be formulated as follows: Suppose there are two sets of
images along with their baseline ground truth information
T_train = {(I, G_I)_i | i = 1, ..., n},   T_test = {(I, G_I)_i | i = 1, ..., m}.    (3.3)

We aim for a design of a baseline detector b* given T_train which solves

b* = arg max_{b∈B} Σ_{(I,G_I)∈T_test} ⟨G_I, b(I)⟩_µ.    (3.4)
In the design phase of b∗ the set Ttest is unknown and one is allowed to use solely Ttrain . Hence, one has
to ensure that b∗ generalizes well from Ttrain to Ttest .
Since the proposed design consists of two stages and the first stage relies on deep learning techniques,
an adaptation to a differently biased ground truth (produced by a different annotator) can be done easily
by retraining the first stage without any fine tuning done by experts.
3.2. Stage I: ARU-Net
Typically, layout analysis algorithms directly work on the input image I or on a binarized version of it
[17, 23, 29, 30, 31, 33]. Instead, we employ a more goal-oriented transformation of the input image utilizing
a neural network, which is trained in a supervised manner to assign a certain class to each pixel like in
[21, 38, 39]. This is often referred to as pixel labeling or semantic segmentation. We will introduce the
problem of pixel labeling utilizing hierarchical neural networks, followed by a description of the proposed
ARU-Net architecture.
3.2.1. Pixel Labeling – Problem Formulation
Definition 3.2.1 (neural pixel labeler). A neural pixel labeler (NPL) for the classes C = {c_1, ..., c_n} is a hierarchical neural network Φ(·; w) : I → I^{|C|}. The NPL is parametrized by w ∈ ℝ^N. For I ∈ I it performs a prediction over all pixels and all possible classes

Φ(I; w) = C ∈ [0, 1]^{I_h × I_w × |C|}   subject to   Σ_{c=1}^{|C|} C(y, x, c) = 1   ∀y, x.    (3.5)

C(:, :, c) = C_{:,:,c} ∈ I denotes the image which encodes the pixel-wise prediction (probability) for the c-th class.
Definition 3.2.2 (pixel ground truth). A Cartesian product G_I ∈ I^{|C|} is called pixel ground truth (for image I) if it assigns exactly one class (one-hot encoding) to each pixel such that

∀y, x ∃! c ∈ {1, ..., |C|} : G_I(y, x, c) = 1 ∧ G_I(y, x, c̃) = 0 (∀c̃ ≠ c).    (3.6)
Following the problem formulation of Section 3.1, we aim for an NPL, which was tuned on a training set
and optimally performs on a test set. Assume there are training and test sets in the style of Eq. (3.3) –
with pixel ground truth information instead of baseline ground truth information

T̃_train = {(I, G_I)_i | i = 1, ..., n},   T̃_test = {(I, G_I)_i | i = 1, ..., m}.    (3.7)
The performance of an NPL is evaluated in terms of the cross-entropy between the predicted and the
ground truth distribution. The cross-entropy can also be motivated by a maximum likelihood estimation.
This results in the cross-entropy loss function.
Definition 3.2.3 (loss function). Let T̃ be a set of images along with their pixel ground truth and let Φ(·; w) be an NPL. The performance of Φ on T̃ is evaluated in terms of the (cross-entropy) loss function

L(Φ, T̃) = − Σ_{(I,G_I)∈T̃} Σ_{y=1}^{I_h} Σ_{x=1}^{I_w} Σ_{c=1}^{|C|} G_I(y, x, c) ln Φ(I; w)_{y,x,c}.    (3.8)
To improve the performance of the NPL on the training set, one can calculate the loss function’s gradient
with respect to the model parameters using the well-known technique of backpropagation [40]. The
gradient is used to adapt the model parameters by gradient descent
w ← w − τ · ∂L/∂w (Φ, T̃_train)    (3.9)
with a learning rate τ. This is repeated to successively adapt the NPL. The process of adapting the model by minimizing its loss is called training. Since minimizing the loss on the training set is not the ultimate goal, the system has to generalize to achieve high-quality results on the test set as well. To stabilize training, avoid over-fitting, and improve generalization, dozens of techniques extending the simple rule in Eq. (3.9) have been introduced within the last years. Since an introduction of these is beyond the scope of this work, we refer to [41]. Details on the techniques used within this work are given in Sec. 4.
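To make the loss and update rule concrete, the following is a minimal NumPy sketch (not the released TensorFlow code) of the pixel-wise cross-entropy of Eq. (3.8) for a single image and one plain gradient step as in Eq. (3.9); all names are illustrative.

```python
import numpy as np

def cross_entropy_loss(pred, gt, eps=1e-12):
    """Pixel-wise cross-entropy of Eq. (3.8) for one image.

    pred: (H, W, C) softmax output of the NPL, sums to 1 over the class axis.
    gt:   (H, W, C) one-hot pixel ground truth as in Def. 3.2.2.
    """
    return -np.sum(gt * np.log(pred + eps))

def sgd_step(w, grad, tau=0.001):
    """Plain gradient descent update of Eq. (3.9)."""
    return w - tau * grad

# Minimal usage with random data (2 classes, 4x4 image):
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 2))
pred = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
gt = np.eye(2)[rng.integers(0, 2, size=(4, 4))]   # one-hot ground truth
print("loss:", cross_entropy_loss(pred, gt))
```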
3.2.2. ARU-Net – Architecture
The ARU-Net is a special form of an NPL and is described in this section. We omit a formal introduction of the used neural network components and concepts and refer to the above mentioned literature.
Within the last few years, different architectures were proposed for the pixel labeling task. Most of them
are based on Convolutional Neural Networks (CNNs) [42]. A direct application of CNNs for semantic
segmentation is presented in [38]. The presented Fully Convolutional Network (FCN) combines local
features to produce more meaningful high level features using pooling layers. Pooling reduces the spatial
dimension. Thus, the result suffers from a coarse resolution. Noh et al. [39] tackle this problem by
applying a deconvolutional network on the subsampled output of the FCN. The U-Net proposed in [21]
furthermore introduces shortcuts between layers of the same spatial dimension. This allows for an easier
combination of local low level features and global higher-level features. Additionally, error propagation
for deep structures is facilitated and the so-called vanishing gradient problems [43] are reduced. The
U-Net is the basis for the proposed ARU-Net. We extend the U-Net by two more key concepts – spatial
attention (A) and depth (residual structure (R)) to be described below. Remarkably, in contrast to the
U-Net proposed in [21], we perform border padding. Hence, the spatial dimensions in each scale space of
the U-Net are all the same, see Fig. 3.2 for a schematic representation of a U-Net. The output of the
U-Net thus is a feature map (Z features in Fig. 3.2) of the same spatial dimension as the input. Hence, the
U-Net becomes an NPL as defined in Def. 3.2.1 by adding a convolutional (to get pixel-wise predictions)
softmax classifier on top which distinguishes between the different classes of C.
Remark 3.2.1. If the presented architectures are used for the pixel labeling task, it is implicitly assumed
that such a classifier is always added to generate per class probabilities at pixel level.
Figure 3.2: U-Net – The input is an image of arbitrary spatial dimension. "Act" is the activation function, thus the rectangles represent sets of activation maps. Each rectangle represents a 3-dim array (∈ R^{h×w×z}). Within each scale space (roman numerals) the feature map widths and heights are constant (encoded by the height of the rectangles). The number of feature maps Z is pictured by the width of the rectangles. Between adjacent scale spaces the spatial dimension decreases by a certain factor (2 in the figure) and the representative depth (number of feature maps) increases by the same factor.
He et al. [22] introduce very deep neural networks which are still trainable and yield state-of-the-art
results. This is achieved using so-called residual blocks. Residual blocks introduce shortcuts, which
enable the error backpropagation and identity propagation even for very deep structures. Hence, the
vanishing gradient problems are reduced [22]. There are various different forms of residual blocks. The
one used within this work is depicted in Fig. 3.3.
Definition 3.2.4 (RU-Net). An RU-Net is a U-Net with residual blocks. That means, each of the 2-layer CNN blocks in Fig. 3.2 is replaced by a residual block as in Fig. 3.3.
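A residual block in the spirit of Fig. 3.3 can be written compactly in Keras; the following is a minimal sketch with illustrative layer counts and kernel sizes, not the exact ARU-Net configuration (which is given in Tab. 4.1).

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters, depth=3, kernel_size=3):
    """Residual block in the spirit of Fig. 3.3: the input logits are kept on a shortcut
    branch and added point-wise to the logits of a small CNN branch, followed by one
    activation."""
    shortcut = layers.Conv2D(filters, kernel_size, padding="same")(x)   # logits, no activation yet
    y = layers.Activation("relu")(shortcut)
    for _ in range(depth - 1):
        y = layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(y)
    y = layers.Conv2D(filters, kernel_size, padding="same")(y)          # branch logits
    out = layers.Add()([shortcut, y])                                   # point-wise summation
    return layers.Activation("relu")(out)
```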
Figure 3.3: Residual Block – The input is convolved and the resulting 3-dim array (the maps before being passed through an activation function are referred to as logits) is used twice. At the first branch it is passed through the activation function and further processed by several convolution layers. At the second branch it is directly fed into a summation node. After a point-wise summation of the two logit maps an activation function is applied. The shortcut enables an easy identity propagation and error backpropagation. Arbitrarily many inner layers are possible.

To explicitly incorporate the potential to handle various font sizes, especially mixed font sizes on a single page, we introduce a pixel-wise (spatial) attention mechanism. For this purpose, we introduce an
attention network (A-Net). The A-Net is a multi-layer CNN which generates a single output feature map.
The A-Net is applied along with the RU-Net at different scales; the same network weights are used on all scales (weight sharing). Specifically, a scale pyramid is built by downscaling the input image I = I1
several times. The resulting (scaled) images I1 , I2 , I4 , I8 , ..., Is (subscripts denote the scaling factors)
are fed into the RU-Net and the A-Net. Trainable deconvolutional layers (of corresponding scales) are
applied on the outputs of the RU- and the A-Net to obtain feature maps of spatial dimensions equal to
the inputs. A1 , ..., As denote the up-sampled feature maps of the A-Net, RU1 , ..., RUs of the RU-Net,
respectively. After applying a pixel-wise softmax normalization for the attention maps
bi (y, x) = P
A
exp(Ai (y, x))
j∈{1,2,...,s} exp(Aj (y, x))
(3.10)
bi sum to one (pixel-wise). The feature maps RUi are combined following
the normalized attention maps A
ARU =
X
RUi
bi ,
A
(3.11)
i∈{1,2,...,s}
where
is the Hadamard product (element-wise multiplication). ARU is the input for the classifier to
build a NPL, see Rem. 3.2.1.
Definition 3.2.5 (ARU-Net). An RU-Net incorporating the described spatial attention mechanism is
called ARU-Net, see Fig. 3.4.
The point-wise multiplication combined with the pixel-wise attention maps allows the ARU-Net to pay attention to different scales at different positions of the image. In Fig. 3.4 one can see that this behavior was indeed learned by the network. It seems like the RU-Net is specialized on a certain font size and the A-Net distinguishes between areas of different font sizes (bright and dark areas).
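The scale fusion of Eqs. (3.10) and (3.11) amounts to a pixel-wise softmax over the up-sampled attention maps followed by a weighted sum of the up-sampled RU-Net feature maps; a minimal NumPy sketch with illustrative shapes and names:

```python
import numpy as np

def fuse_scales(ru_maps, a_maps):
    """Combine per-scale feature maps as in Eqs. (3.10) and (3.11).

    ru_maps: list of s arrays of shape (H, W, Z)  -- up-sampled RU-Net outputs RU_i
    a_maps:  list of s arrays of shape (H, W)     -- up-sampled A-Net outputs A_i
    returns: (H, W, Z) array ARU
    """
    a = np.stack(a_maps, axis=0)                      # (s, H, W)
    a = np.exp(a - a.max(axis=0, keepdims=True))      # numerically stable softmax ...
    a_hat = a / a.sum(axis=0, keepdims=True)          # ... over the scale axis, Eq. (3.10)
    ru = np.stack(ru_maps, axis=0)                    # (s, H, W, Z)
    return np.sum(ru * a_hat[..., None], axis=0)      # Hadamard product and sum, Eq. (3.11)

# Usage with three scales of random data:
H, W, Z, s = 32, 32, 8, 3
ARU = fuse_scales([np.random.rand(H, W, Z) for _ in range(s)],
                  [np.random.rand(H, W) for _ in range(s)])
```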
Figure 3.4: ARU-Net – The input image and its downscaled versions are fed into the A-Net and RU-Net (weight sharing across different scales). The results for the lower resolutions are deconvolved. The attention maps are passed through a softmax normalization. The brighter the map at a certain position, the more attention is paid to that position at the corresponding scale. The attention maps are point-wise multiplied with the feature maps of the RU-Net. The results are summed and a classification is performed.

The ARU-Net as introduced can be used for any pixel labeling task, e.g., binarization, page detection and page segmentation. The purpose of the ARU-Net is defined and fixed by the number of classes and
the ground truth data provided for training. In this work, we limit ourselves to the baseline detection
problem introduced in Sec. 3.1. For this purpose, we introduce three different classes: baseline (bl),
separator (sep) and other (∅). The separators mark the beginning and end of each text line. Although the separator information is implicitly encoded by the baselines, it is advantageous to explicitly introduce it as a possible classification result. Especially for baselines which are close together, e.g., those belonging to two adjacent columns, this approach helps to avoid segmentation errors. Pixel ground truth for the
classes C = {bl, sep, ∅} is automatically generated by Alg. 1 given the baseline ground truth of Eq. (3.3).
A sample image with baseline ground truth along with its generated pixel ground truth is depicted in
Fig. 3.5. The prediction of a trained ARU-Net for this sample image is shown in Fig. 3.6a.
3.3. Stage II: Baseline Estimation
This subsection describes the second stage of the proposed approach. Baselines are estimated given
the output of the ARU-Net. This task consists of three steps: superpixel calculation, state estimation
and superpixel clustering, which are described in the following.
The trained ARU-Net generates an output C ∈ [0, 1]^{I_h × I_w × 3} for each image I ∈ I. In the following, B = C_{:,:,1} denotes the image encoding the confidence of each pixel belonging to a baseline and S = C_{:,:,2} is the separator image, see Fig. 3.6a.
3.3.1. Superpixel Calculation
The number of all pixels in an image often exceeds several millions. To reduce the dimensionality of
the problem (the number of pixels to be regarded for the baseline estimation), we limit ourselves to a
subset of all pixels.

Algorithm 1: Pixel Ground Truth Generation
input : image I, corresponding baseline ground truth G_I
output: pixel ground truth G_I
  B, S, N ← 0                                          ▷ of dimension I_h × I_w
  for P = (p_1, ..., p_n) ∈ G_I do
      θ ← local text orientation of P                  ▷ see Def. 3.3.2
      d ← interline distance of P                      ▷ see Def. 3.3.3
      P^b ← polygonal chain of length d and orientation θ + 90° centered at p_1
      P^e ← polygonal chain of length d and orientation θ + 90° centered at p_n
      draw P^b and P^e in S                            ▷ draw: follow the chain and set pixel values to 1.0
      draw P in B
  E ← 3 × 3 matrix of ones
  S ← S ⊕ E                                            ▷ ⊕: morphological dilation
  B ← B ⊕ E ∧ ¬S
  N ← ¬S ∧ ¬B
return : G_I ← [B, S, N]
Definition 3.3.1 (superpixel). Let S = {p_1, ..., p_N} be a subset of the image pixels of I (typically, N ≪ I_h · I_w holds). An element of S is called superpixel (SP).
Basically, the definition of a superpixel does not introduce any new concept. An SP is just a normal pixel which is regarded to be of certain importance. Since it is a frequently used term, we decided to introduce it via a definition. It is easy to see that the choice of the set of SPs is crucial for the overall
performance. If there are no SPs for a baseline at all, this baseline will be missed. To calculate a suitable
set of SPs, we utilize the baseline map B generated by the ARU-Net.
In a first step B is binarized Bb = B > b by an element-wise comparison of B with a confidence
threshold b. The morphological skeleton Bs = SKE(Bb ) is calculated for Bb following Lantuéjoul’s formula
[44]. All foreground pixels (pixels with an intensity of 1) of Bs build an initial set of pixels {p1 , ..., pM }.
Its elements are sorted (π : N → N) in descending order w.r.t. their baseline confidences
p_{π(1)}, ..., p_{π(M)} : B(p_{π(i)}) ≥ B(p_{π(j)}) ⇔ i ≤ j.    (3.12)
Finally, S is set up by iteratively adding pixels of the sorted list of Eq. (3.12) (beginning with the first
pixel). To keep the number of SPs small, a new pixel p is added to S only if
‖p − q‖₂ > d   ∀q ∈ S    (3.13)
holds, otherwise it is skipped. In Fig. 3.6b the set of resulting SPs is shown. These SPs build the basis
for the further clustering.
Remark 3.3.1. For all experiments, we have chosen fixed values of b = 0.2 (binarization threshold) and
d = 10 (Eq. (3.13)). These proved to be well suited for a wide range of different scenarios. Hence, they are not regarded as free parameters of the system which have to be further tuned. This also holds for the parameters which are fixed in Rem. 3.3.4 & 3.3.8.

Figure 3.5: Baseline and pixel ground truth – These are shown for the top snippet of the image of Fig. 3.1. (a) Baseline ground truth: the baselines are described by the red dots; for better clarity, dots of the same baseline were connected. (b) Pixel ground truth produced by Alg. 1: green encodes the separator class, red the baseline class and black the "other" class.
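A minimal NumPy/scikit-image sketch of this SP extraction (binarization, skeletonization, confidence-based sorting, greedy thinning via Eq. (3.13)); skimage's skeletonize is used as a stand-in for Lantuéjoul's formula and the brute-force distance check is kept simple for clarity:

```python
import numpy as np
from skimage.morphology import skeletonize

def extract_superpixels(B, b=0.2, d=10):
    """Superpixel calculation of Sec. 3.3.1 on the baseline map B (values in [0, 1])."""
    Bb = B > b                                    # binarization with threshold b
    Bs = skeletonize(Bb)                          # morphological skeleton (stand-in for SKE)
    ys, xs = np.nonzero(Bs)                       # initial pixel set
    order = np.argsort(-B[ys, xs])                # descending baseline confidence, Eq. (3.12)
    sps = []
    for i in order:
        p = np.array([ys[i], xs[i]], dtype=float)
        # greedy selection: keep p only if it is farther than d from all chosen SPs, Eq. (3.13)
        if all(np.linalg.norm(p - q) > d for q in sps):
            sps.append(p)
    return np.array(sps)                          # (N, 2) array of (y, x) superpixels

# Usage on a toy baseline map with a single horizontal line:
B = np.zeros((60, 200)); B[30, 20:180] = 0.9
print(extract_superpixels(B).shape)
```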
3.3.2. Superpixel State Estimation
Assume we can assign each SP to a certain text line. The state of an SP should encode meaningful
characteristics of its text line. These characteristics will be defined and combined to build the state. This
work is based on previous work of [23, 33], but adapted to the characteristics of SPs extracted given the
ARU-Net output, e.g., easier calculation of the local text orientation as well as a different smoothing cost
formulation.
Definition 3.3.2 (local text orientation). The local text orientation θ of an SP p is the slope of its
text line’s baseline at the coordinates closest (w.r.t. the euclidean distance) to p.
Definition 3.3.3 (interline distance). The interline distance s of an SP p is the distance of its text
line’s baseline to the nearest other baseline. Distance means the distance which is orthogonal to the local
text direction of p.
Definition 3.3.4 (state). The state of an SP is the pair (θ, s) of its local text orientation and its interline
distance.
In the following, we will describe a method to estimate the states of all SPs. The local text orientation
will be calculated in a straightforward way utilizing solely the baseline image B and local information.
(a) ARU-Net output – The estimated baselines B (blue) and separators S (cyan) are shown.
(b) Superpixel and neighborhood system – The calculated SPs (blue) are shown along with the resulting Delaunay
neighborhood system N (yellow).
Figure 3.6: Baseline detection process – Two intermediate steps are shown for the top snippet of the image of Fig. 3.1.
On the other hand, the estimation of the interline distances combines local information of the text line’s
periodicity with the more global assumption that nearby SPs tend to have similar interline distances. For
these approaches the concepts of neighborhood and connectivity are mandatory and will be introduced.
Definition 3.3.5 (neighborhood system, edge, adjacent). We call a subset N ⊂ S×S neighborhood
system. An element of N is called edge and denoted by ep,q . N is not directed (ep,q = eq,p ). Two SPs
p, q are adjacent if ep,q ∈ N . ep,q \ p ∈ S denotes the SP q.
Remark 3.3.2. In the following the neighborhood system N for a set of SPs is always calculated by
Delaunay’s triangulation [45].
Definition 3.3.6 (connectivity function). The line segment g( · ; ep,q ) : [0, 1] → R2 defined by
g(τ ; ep,q ) := p + τ (q − p) connects the two pixels p, q of the edge ep,q . The function Γ : N × I → [0, 1]
defined by
Γ(e_{p,q}, I) = (∫₀¹ I(g(τ; e_{p,q})) dτ) / ‖p − q‖₂    (3.14)
is called connectivity function. I(g(τ ; ep,q )) denotes the intensity of the pixel in I closest (w.r.t the
euclidean distance) to the real-valued coordinates g(τ ; ep,q ).
The connectivity function calculates the average intensity for a given image along the shortest path
connecting two pixels. The local text orientation of each SP is estimated by θp = LTO(p; N , B) utilizing
N and the baseline image B, see Alg. 2. The LTO algorithm picks the two neighbors of an SP p with the
largest baseline connectivity to p and determines the slope of the line passing through these neighbors.
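The connectivity function of Eq. (3.14) can be sketched as follows; the integral is approximated by a fixed number of evenly spaced samples along the segment (an illustrative choice):

```python
import numpy as np

def connectivity(p, q, img, n_samples=32):
    """Gamma(e_{p,q}, img) following Eq. (3.14): the intensity integral along the segment
    g(tau; e_{p,q}) = p + tau (q - p), divided by ||p - q||_2.

    p, q: (y, x) pixel coordinates; img: 2-d intensity image with values in [0, 1].
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    taus = np.linspace(0.0, 1.0, n_samples)
    pts = p[None, :] + taus[:, None] * (q - p)[None, :]
    ys = np.clip(np.rint(pts[:, 0]).astype(int), 0, img.shape[0] - 1)   # nearest pixel ...
    xs = np.clip(np.rint(pts[:, 1]).astype(int), 0, img.shape[1] - 1)   # ... to g(tau)
    integral = img[ys, xs].mean()                 # approximates the integral over tau
    return integral / np.linalg.norm(p - q)
```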
Algorithm 2: Local Text Orientation of p
input : SP p, neighborhood system N, baseline image B
output: local text orientation θ of p
  M ← {e_{q,r} ∈ N | q = p ∨ r = p}
  L ← sorted list of M                              ▷ sorted by means of Γ(e_{q,r}, B)
  if |L| == 1 then
      e_{q,r} ← L_1                                 ▷ L_k denotes the k-th element of L
  else
      e_{q,r} ← (L_1 \ p, L_2 \ p)
return : θ ← arctan((r_y − q_y)/(r_x − q_x))
The periodicity of text lines in document images is utilized to calculate the interline distances. We
determine the interline distance of an SP p by evaluating the regional text-line periodicity around p as
follows. For an SP p, a circular region of diameter d ∈ N around p, and a projection direction determined
by the local text orientation θp , let
h^{p,d} = (h_1^{p,d}, ..., h_d^{p,d}) ∈ ℕ^d    (3.15)

be the projection profile with respect to S, see Fig. 3.7. For the calculation of h^{p,d}, only SPs with a distance to p of less than d/2 are taken into account.
Remark 3.3.3. The projection profile h^{p,d} can be calculated very efficiently by utilizing the cross product of the orientation vector o = (cos(θ_p), sin(θ_p))^T and the vectors p⃗q for q ∈ S with ‖p − q‖₂ ≤ d/2.
To extract the regional periodicity inherent in the projection profile h^{p,d}, a Discrete Fourier Transformation (DFT) is applied to h^{p,d} with resulting coefficients

H^{p,d} = (H_1^{p,d}, ..., H_d^{p,d}) ∈ ℂ^d.    (3.16)

A coefficient H_k^{p,d}, k ∈ {1, ..., d}, corresponds to the portion of the signal with a period of d/k relative to the entire signal h^{p,d}. In the simplest case, the index k_0 of the dominant coefficient of H^{p,d} determines the interline distance s of p as s = d/k_0. However, we may be forced to assign a different value to s due to additional constraints to be discussed in a moment. Therefore, we introduce a data energy value for each possible value d/k of the interline distance s of p. From the energy, we then derive a data cost to be used within a cost minimization framework for finding the optimal interline distance.
Figure 3.7: Interline distance estimation – Illustration of several projection profiles for a certain SP (red point). The profiles for different diameters d ∈ {64, 128, 256, 512} and an orientation of 0° are shown in green. The winning period (interline distance) is drawn as a yellow curve. In blue a histogram for a wrong orientation (45°) is shown.

Definition 3.3.7 (data energy, data cost). The data energy of SP p and interline distance d/k is given by

E_p(d/k) = |H_k^{p,d}|² / ‖H^{p,d}‖₂².    (3.17)

The corresponding data cost is calculated by

D_p(d/k) = − log E_p(d/k).    (3.18)
Remarkably, the data energy is normalized such that it sums (over k) up to 1.0 for arbitrary d ∈ N. To
cover a suitable range of different interline distances as well as to be robust against disturbances due
to close-by text regions of a different style, the projection profiles and DFTs are calculated for different
diameters d ∈ {64, 128, 256, 512} and k ∈ {3, 4, 5}. The choice of the values for d and k is application
driven and results in reasonable interline distances (sorted list) of
S := (170.7, 128.0, 102.4, 85.3, 64.0, 51.2, 42.7, 32.0, 25.6, 21.3, 16.0, 12.8).    (3.19)

In the following, we write s_p for the assigned interline distance s = d/k ∈ S of SP p and say p is labeled with s_p. A labeling {s_p}_{p∈S} of S assigns an interline distance to each SP of S. Following a greedy labeling
strategy by assigning the interline distance with the highest energy defined by Eq. (3.17) to each SP leads
to a noisy result, see Fig. 3.8a. To reduce the noise effects, the influence of close-by SPs is taken into
account. It is reasonable to expect that neighboring SPs tend to have similar interline distances. This
expectation is encoded via a smoothing cost defined for adjacent SPs.
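The computation of the data energies of Def. 3.3.7 can be sketched as follows; the projection profile is built with a simple histogram and all names are illustrative (a minimal sketch, not the implementation used in the experiments):

```python
import numpy as np

def data_energy(p, theta, sps, diameters=(64, 128, 256, 512), ks=(3, 4, 5)):
    """Data energies of Def. 3.3.7 for SP p: one value per candidate interline distance d/k."""
    energies = {}
    for d in diameters:
        rel = sps - p                                   # (y, x) offsets of all SPs to p
        near = rel[np.linalg.norm(rel, axis=1) <= d / 2]
        # signed offset orthogonal to the text direction (cross product trick of Rem. 3.3.3)
        off = near[:, 1] * np.sin(theta) - near[:, 0] * np.cos(theta)
        h, _ = np.histogram(off, bins=d, range=(-d / 2, d / 2))   # projection profile h^{p,d}
        H = np.fft.fft(h)                               # DFT coefficients H^{p,d}
        denom = np.sum(np.abs(H) ** 2) + 1e-12
        for k in ks:
            energies[d / k] = np.abs(H[k]) ** 2 / denom  # Eq. (3.17)
    return energies

# Toy usage: SPs of three horizontal lines spaced 20 px apart around p
sps = np.array([[y, x] for y in (40, 60, 80) for x in range(0, 200, 10)], dtype=float)
e = data_energy(np.array([60.0, 100.0]), 0.0, sps)
print(max(e, key=e.get))   # highest-energy candidate, expected near the true spacing of 20
```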
Figure 3.8: SPs with their assigned states – The local text orientation of each SP is visualized by the orientation of the green lines (rotated by 90°). The length of the lines encodes the interline distance of the corresponding SP. (a) Greedy states: the SP states for a greedy labeling using the highest energy of Eq. (3.17). (b) Smoothed states: the SP states for the final labeling after minimizing Eq. (3.21).

Definition 3.3.8 (smoothing cost). For each e_{p,q} ∈ N (and assigned interline distances s_p, s_q) the smoothing cost is defined by
V_{p,q}(s_p, s_q) = σ,  if ⟨s_p, s_q⟩_S ≥ 4;   ⟨s_p, s_q⟩_S,  else.    (3.20)

⟨s_p, s_q⟩_S is the index difference of s_p and s_q in the sorted list S of Eq. (3.19), e.g., ⟨16.0, 42.7⟩_S = 4.
Thus, the smoothing cost Vp,q (sp , sq ) becomes large if interline distances of different size are assigned to
adjacent SPs. A maximum cost value of σ is used for huge differences in the interline distances. Setting
σ to a large value prevents neighboring SPs from differing too much in their interline distances.
Definition 3.3.9 (labeling cost). The labeling cost is given by
C({s_p}_{p∈S}) = α_d Σ_{p∈S} D_p(s_p) + α_s Σ_{e_{p,q}∈N} V_{p,q}(s_p, s_q).    (3.21)
The data cost and the smoothing costs are weighted by αd and αs , respectively, to form the labeling cost.
The graphcut algorithm [46] is utilized to minimize Eq. (3.21). The final labeling is shown in Fig. 3.8b.
Remark 3.3.4. For all experiments, we have chosen fixed values of σ = 25, αd = 1 and αs = 1.
3.3.3. Superpixel Clustering
In the previous subsections the calculation of SPs and their enrichment with state information was
described. In a final step, this state information is utilized to cluster the SPs to build baselines. There
will be a one-to-one assignment between clusters and baselines. In the following, we call a set of SPs a cluster.
In this subsection we formulate the clustering problem and introduce a greedy clustering procedure to
solve the problem. Two assumptions which hold for baselines in general constitute the conditions for the
clustering problem:
(I) Baselines should not exceed a certain curvilinearity value.
(II) Within the interline distance of a baseline there are no other baselines.
Basically, assumption (I) claims that a baseline can be approximated by a polynomial function of a certain
degree, see [23]. Assumption (II) is self-explanatory.
Remark 3.3.5. In the following, θ({p1 , ..., pn }) denotes the average orientation and s({p1 , ..., pn }) the
average interline distance of all SPs in {p1 , ..., pn }.
Definition 3.3.10 (curvilinearity value). Let deg ∈ N and S be a set of SPs. Assume pS,deg (t) ∈ P [t]
is the polynomial which solves the linear regression problem in the monomials t0 , t1 , ..., tdeg for the rotated
pixels
S′ = { [[cos(−θ(S)), −sin(−θ(S))], [sin(−θ(S)), cos(−θ(S))]] · p | p ∈ S } ⊂ ℝ².    (3.22)
The root-mean-square regression error normalized by s(S) is called curvilinearity value of S and is denoted
by cur(S, deg).
Remark 3.3.6. We fix deg = 3 and omit it in the following.
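A minimal NumPy sketch of the curvilinearity value of Def. 3.3.10 (rotation by −θ(S), least-squares polynomial fit, RMS residual normalized by s(S)); the (x, y) coordinate convention is an illustrative choice:

```python
import numpy as np

def curvilinearity(points, theta, s, deg=3):
    """cur(S, deg) of Def. 3.3.10 for SPs given as (y, x) rows, average orientation theta
    and average interline distance s."""
    c, si = np.cos(-theta), np.sin(-theta)
    rot = np.array([[c, -si], [si, c]])
    xy = points[:, ::-1]                       # (x, y) per row
    rx, ry = (xy @ rot.T).T                    # rotate so the text direction becomes horizontal
    coeffs = np.polyfit(rx, ry, deg)           # least-squares polynomial p_S(t)
    residuals = ry - np.polyval(coeffs, rx)
    return np.sqrt(np.mean(residuals ** 2)) / s   # RMS error normalized by s(S)

# A straight, slightly slanted line of SPs has a curvilinearity value near 0:
pts = np.array([[100 + 0.1 * x, x] for x in range(0, 200, 10)], dtype=float)
print(curvilinearity(pts, np.arctan(0.1), 50.0))
```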
Def. 3.3.10 allows for an easy evaluation of (I). To test for (II) we will introduce the distance of two clusters.
Remarkably, only distances orthogonal to the text orientation should be taken into account. First, the
orthogonal component of the distance between two SPs is introduced. Afterwards, this is generalized for
two clusters of SPs.
Definition 3.3.11 (off-text distance). Given two SPs p, q and an orientation θ, the off-text distance
of p and q is the length of the component of p − q ∈ R2 which is orthogonal to θ. It is denoted by
kp − qkθ .
Remark 3.3.7. The off-text distance can be efficiently calculated by
‖p − q‖_θ = |(p_x − q_x) sin(θ) − (p_y − q_y) cos(θ)|.    (3.23)
Calculating the minimal pairwise off-text distance of all SPs of two clusters could result in a cluster
distance distorted by SP outliers. Therefore, SPs in each cluster will be projected onto the corresponding
regression curve obtained by the regression problem in Def. 3.3.10, before taking pairwise distances.
Definition 3.3.12 (regression curve). Let S, S 0 and pS (t) be of Def. 3.3.10. The spatial t-range of
S 0 is given by tmin = min{px | p ∈ S 0 } and tmax = max{px | p ∈ S 0 }. A curve cS : [0, 1] → R2 which
results from rotating the graph of pS (t) for t ∈ [tmin , tmax ] by θ(S) is called regression curve of S.
The SPs in S are projected onto c_S (in direction θ(S) + π/2). The resulting projected SPs are denoted by
S c . To achieve robust distance estimates even for curved and differently slanted text lines we focus on
SPs of the different clusters which are quite close to each other and furthermore take into account the
slope of the regression curve at the specific SP positions instead of averaging over the entire text line.
Definition 3.3.13 (cluster distance). Assume two clusters S1 , S2 with regression curves cS1 (t), cS2 (t)
and projected SPs S1c , S2c . The cluster distance is defined as
d(S_1, S_2) = min_{p∈S_1^c, q∈S_2^c : ‖p−q‖₂ < 4·s(S_1∪S_2)}  ‖p − q‖_{θ_c(p,q)},    (3.24)
where θc (p, q) is the average slope of the corresponding regression curves at p and q, respectively.
Since it is now possible to evaluate conditions (I) & (II), we will use this to introduce feasible sets of
clusters. For this purpose, we will limit ourselves to partitions (a special kind of cluster sets) and require
the baseline clusters to be N -linked.
Definition 3.3.14 (partition). Let M be a set. We call a set of subsets {M_1, ..., M_N} of M a partition of M iff M = M_1 ∪ · · · ∪ M_N ∧ M_i ≠ ∅ ∀i ∧ M_i ∩ M_j = ∅, i ≠ j. The set of all partitions of M is denoted by par(M).
Definition 3.3.15 (N -linked). Let S be a cluster and N be a neighborhood system. S is N -linked iff
∀p, q ∈ S ∃p0 , ..., pN ∈ S : p0 = p ∧ pN = q ∧ epi ,pi+1 ∈ N (0 ≤ i ≤ N − 1)
(3.25)
holds.
Definition 3.3.16 (feasible). For γ, δ ∈ R+ , L ∈ N, a set of SPs S and a neighborhood system N , we
call a set of clusters P = {S0 , ..., SL } feasible iff
1. P ∈ par(S)
2. ∀i > 0 : Si is N -linked
3. conditions (I) and (II) hold:
• cur(S_i) < γ   ∀i > 0
• d(S_i, S_j) > δ · max{s(S_i), s(S_j)}   ∀i, j > 0, i ≠ j.

The set of feasible sets of clusters is denoted by feas_N(S).
The clusters Si , i > 0 identify the baselines, S0 constitutes the clutter cluster containing SPs not belonging
to any baseline. We identify the baseline corresponding to Si with the polygonal chain of the projected
SPs Sic which follow the regression curve cSi (t), see Fig. 3.9. The number L ∈ N of baselines is (a-priori)
unknown. In the following, we will incorporate domain knowledge to promote SPs belonging to different
baselines not to be N -linked. Hence, clusterings with erroneously connected baselines are not feasible
anymore. This is done by a modification of the neighborhood system N .
Since baselines of different text orientations should not contribute to the same cluster, we adjust the
initial neighborhood system N by removing edges ep,q of SPs with substantially different local orientations:
|θ_p − θ_q| mod π > π/4. In addition, it is easy to incorporate layout information by further adjusting
N . The layout information encoded by the separator image S (Fig. 3.6a) can be incorporated by taking
into account the connectivity of SPs in S. All edges ep,q ∈ N for which a separator is crossed, i.e.,
Γ(ep,q , S) > η or maxτ S(g(τ ; ep,q )) > 2 · η (g of Def. 3.3.6) holds, are removed, see Fig. 3.9b.
Finally, a common scenario is the baseline detection with given text regions. We assume that the
text regions are represented by closed polygonal chains R1 , ..., RN . This additional layout information (if
available) is easy to integrate. All edges for which
∄ R_i : R_i contains p, q    (3.26)

holds are removed. Roughly speaking, a closed polygonal chain contains an SP if on every "way" from the SP to the image border one has to cross the polygonal chain. Hence, SPs which are part of different non-overlapping text regions are not N-linked any more. Thus, each baseline S_i, i > 0 is entirely contained
in one text region for all feasible sets. The resulting neighborhood system is still denoted by N .
Remark 3.3.8. For all experiments, we have chosen fixed values of γ = 0.3, δ = 0.5 (Def. 3.3.16) and
η = 0.125.
After reducing the neighborhood system, we now introduce the total baseline energy. We will assign an
energy to all feasible sets and aim for an optimal one. This allows for the formulation of the clustering
problem to be solved.
Figure 3.9: Influence of the separator information – The resulting baselines (blue lines) with and without taking into account the separator information are shown. (a) Without separator information: the entire neighborhood system (yellow) is shown. (b) With separator information: the neighborhood system was reduced by removing edges (cyan) with high separator connectivity; the corresponding separator information is illustrated in Fig. 3.6a.

Definition 3.3.17 (total baseline energy). Let B be a baseline image, N a neighborhood system and P = {S_0, ..., S_L} a set of clusters over S. With N(S_i) = {e_{p,q} ∈ N | p, q ∈ S_i} ⊂ N the total baseline energy is defined by
b(P) = Σ_{i=1}^{L} Σ_{e_{p,q}∈N(S_i)} Γ(e_{p,q}, B).    (3.27)
Finally, the clustering problem can be formulated as
P* = arg max_{P∈feas_N(S)} b(P).    (3.28)
Because there could be a huge number of feasible sets of clusters for large S, we introduce a greedy clustering algorithm to solve Eq. (3.28). The proposed algorithm clusters edges of N instead of clustering SPs.
If an edge is assigned to a cluster (set) of edges, we assign both corresponding SPs to the corresponding
cluster of SPs. In a first step, the set of edges in N is sorted in decreasing order w.r.t.
(1 − ‖p − q‖_{θ({p,q})} / ‖p − q‖₂) · Γ(e_{p,q}, B).    (3.29)
The sorted list is denoted by N . Eq. (3.29) takes into account the B-connectivity value of an edge and
discounts it if ep,q is rather orthogonal to θ({p, q}). Discounted edges are less likely part of a baseline
and are therefore sorted to the end of the list. This prevents these edges from being falsely assigned to baseline clusters which are composed of just a few correct edges (statistics of the cluster are not reliable, yet).
Given S and N , the proposed clustering process is shown in Alg. 3.
Algorithm 3: SP Clustering
input : set of SPs S and sorted list of edges N
output: optimized partition P*
  S_0 ← S, P* ← {S_0}, n ← 0
  while |N| ≠ n do
      n ← |N|
      for e_{p,q} ∈ N do                                   ▷ four possible cases dependent on the SPs
          if ∃i > 0 : p, q ∈ S_i then                      ▷ Case 1: add edge to existing cluster!
              N ← N \ {e_{p,q}}
          else if p, q ∈ S_0 then                          ▷ Case 2: create new cluster?
              if ‖p − q‖_{θ({p,q})} < δ · s({p, q}) then   ▷ γ, δ of Rem. 3.3.8
                  N ← N \ {e_{p,q}}, S_{|P*|} ← {p, q}, S_0 ← S_0 \ S_{|P*|}
                  P* ← P* ∪ {S_{|P*|}}
          else if w.l.o.g. p ∈ S_0 ∧ ∃i > 0 : q ∈ S_i then ▷ Case 3: extend cluster S_i?
              if cur(S_i ∪ {p}) < γ ∧ d(S_i, {p}) < δ · s(S_i) then
                  if d(S_i ∪ {p}, S_j) > δ · s(S_j) ∀j ≠ i, j > 0 then
                      N ← N \ {e_{p,q}}, S_i ← S_i ∪ {p}, S_0 ← S_0 \ {p}
          else if ∃i, j > 0 (i ≠ j) : p ∈ S_i ∧ q ∈ S_j then ▷ Case 4: merge clusters S_i, S_j?
              if cur(S_i ∪ S_j) < γ ∧ d(S_i, S_j) < δ · min(s(S_i), s(S_j)) then
                  N ← N \ {e_{p,q}}, S_i ← S_i ∪ S_j
                  P* ← P* \ {S_j}
return : P*
4. Experiments
The experiment section is divided into 4 subsections. First, we investigate the influence of the training
set size as well as the influence of different data augmentation strategies. This is followed by an investigation of the performance of the proposed method if it is applied to images with curved or arbitrarily
oriented text lines. The third subsection presents and compares results of different versions of our proposed NPL architectures on the very heterogeneous and challenging cBAD dataset [25, 26]. We perform
statistical tests to show the statistical significance of the stated conclusion – the superiority of the proposed ARU-Net in a two-stage workflow over other architectures and a single-stage workflow. Finally, we
compare the proposed method against other state-of-the-art methods on the datasets of 3 recently hosted
competitions. As mentioned in Sec. 3 we will follow the similarity score of [25] (F-value) to measure the
quality of the baseline detection. The configuration for all experiments, including the hyperparameters of the network architecture as well as the training, is summarized in Tab. 4.1. This configuration is the result of an extensive search in the hyperparameter space and yields strong results for various scenarios/datasets.
Table 4.1: Hyperparameters – The architecture and training configuration used in this work.
Image pre-processing: the input image I is downscaled by a factor of 2 for max{I_h, I_w} < 2000, by 3 for 2000 ≤ max{I_h, I_w} < 4800, or by 4 otherwise, followed by a normalization to mean 0 and variance 1 (on pixel intensity level).
RU-Net architecture (see Fig. 3.2 & 3.3): number of scale spaces: 6, initial feature depth: 8, residual depth (activated layers in a residual block): 3, feature increasing and spatial decreasing factor: 2, activation function: ReLU, kernel size: 3 × 3, stride: 1.
A-Net architecture: 4-layer CNN, activation function: ReLU, kernel size: 4 × 4, stride: 1, max pooling of size 2 × 2 after each convolution, feature numbers: 12, 16, 32, 1.
ARU-Net architecture (see Fig. 3.4): number of image scales: 5, classifier: 4 × 4 convolution layer with softmax activation.
Training: weight initialization: Xavier, optimizer: RMSprop, learning rate: 0.001, learning rate decay per epoch: 0.985, weight decay on the L2 norm: 0.0005, exponential moving average on the model weights: 0.9995, mini batch size: 1 (due to memory limitations of the GPU), early stopping: none (trained for a fixed number of epochs).

Since no early stopping based on the loss for any validation set is used, we train on the entire training set. The ARU-Net workflow for training and inference (Tensorflow code) as well as a trained network
are freely available4 . The ARU-Net training takes 3 h to 24 h from scratch (dependent on the number
of epochs and samples per epoch) on a Titan X GPU. The inference time ranges from 2 s to 12 s per image on a dual core laptop (Intel Core i7-6600U with 16 GiB RAM); this reduces to 0.5 s to 2 s when running the ARU-Net on the Titan X.
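The image pre-processing of Tab. 4.1 (scale-dependent downsampling followed by zero-mean, unit-variance normalization) can be sketched as follows; the use of OpenCV and the interpolation method are illustrative choices:

```python
import cv2
import numpy as np

def preprocess(img_gray):
    """Downscale and normalize a gray-scale image as described in Tab. 4.1."""
    h, w = img_gray.shape
    m = max(h, w)
    factor = 2 if m < 2000 else 3 if m < 4800 else 4
    small = cv2.resize(img_gray, (w // factor, h // factor), interpolation=cv2.INTER_AREA)
    small = small.astype(np.float32)
    return (small - small.mean()) / (small.std() + 1e-8)   # mean 0, variance 1
```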
4.1. Influence of Training Sample Number and Data Augmentation
A major drawback of state-of-the-art approaches (Sec. 2) is the need for extensive expert tuning when confronted with scenarios which are not already covered. However, suitability for use at industrial scale depends on the ability to adapt easily and at reasonable cost. For approaches relying on machine
learning, this reduces to two questions:
• What about the amount of ground truth needed?
• What about the effort of ground truth production?
Concerning the second question, we refer to Alg. 1. The annotation of baselines for a document image is
quite easy and does not need remarkable expert knowledge compared to, e.g., ground truth production
for ATR systems for historical handwritings or even the text line annotation at surrounding polygon
level. The effort is reduced to several minutes per page by using platforms such as Transkribus5 . In the
following, we want to examine the first question.
The influence of training dataset size along with different data augmentation strategies is investigated
for the freely available Bozen dataset6 [24], see Fig. A.1. This dataset is a subset of documents from the
Ratsprotokolle collection of Bozen composed of minutes of the council meetings held from 1470 to 1805 and consists of 400 pages. It is written in Early Modern German. Baseline ground truth information is available in form of PAGE7 XML.

4 https://github.com/TobiasGruening/ARU-Net
5 https://transkribus.eu
6 https://zenodo.org/record/218236

The dataset is quite challenging concerning layout analysis issues. Most
of the pages consist of a single main text region with many difficulties for line detection and extraction,
e.g., bleed through, touching text lines and marginalia. For the following experiments, we have randomly
divided the Bozen set in a set of training samples T of size 350 and a test set of size 50. In a first step,
we randomly set up a chain of subsets of T
T_1 ⊂ T_3 ⊂ T_5 ⊂ T_10 ⊂ T_30 ⊂ T_50 ⊂ T_100 ⊂ T_200 ⊂ T_350,    (4.1)
where Ti contains i training samples (pages and pixel ground truth). Since we expect an influence of the
choice of training samples (= sorting of T ), we repeat the mentioned procedure 4 times. Notably, the test
set remains untouched. Finally, we got 45 training sets – five of each quantity. For each set, we trained the
RU-Net for 100 epochs with 256 images per epoch. Therefore, we randomly choose samples of the training
set and remove them from the set. If each element of the training set was used for training once, we start
again with the initial training set. Hence, it does not matter whether the number of training samples per
epoch exceeds the size of the training set or not. This procedure guarantees the same amount of training
samples shown to the networks in training independent of the size of the training set. The RU-Net was
chosen instead of the ARU-Net, because of the homogeneity of the Bozen dataset concerning font size
and resolution. We trained the RU-Net from scratch on all 45 sets in 4 different scenarios. For training
purposes the image pre-processing mentioned in Tab. 4.1 is disabled. Instead, the training samples (I, GI )i
are pre-processed following one of the four strategies:
1. subsampled by a constant factor of 3 (no further data augmentation - one training sample per
element of the training set) – B
2. randomly subsampled by a factor s ∈ [2, 5] – S
3. S + random affine transformation (three corner points of the image are randomly shifted within a circle of diameter 0.025 · max(I_h, I_w) around their original position; a sketch of this transformation is given after this list) – S + A
4. S + A + elastic transformation [47] – S + A + E
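A minimal OpenCV sketch of the random affine component of the S + A strategy described in item 3 above (the elastic transformation of [47] is not reproduced here); the offset sampling and the white border value are illustrative choices:

```python
import cv2
import numpy as np

def random_affine(img, rng=np.random.default_rng()):
    """Random affine transformation as in strategy S + A: three corner points are
    shifted uniformly within a circle of diameter 0.025 * max(h, w)."""
    h, w = img.shape[:2]
    radius = 0.025 * max(h, w) / 2.0
    src = np.float32([[0, 0], [w - 1, 0], [0, h - 1]])
    # sample offsets uniformly inside a disc of the given radius
    angles = rng.uniform(0, 2 * np.pi, size=3)
    radii = radius * np.sqrt(rng.uniform(0, 1, size=3))
    dst = src + np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1).astype(np.float32)
    M = cv2.getAffineTransform(src, dst)
    return cv2.warpAffine(img, M, (w, h), borderValue=255)   # white border for document images
```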
For the test set the images were sub-sampled by the constant factor of 3 in all scenarios. The results
of these 180 experiments are shown in Fig. 4.1. One can see that all 3 data augmentation strategies
significantly improve the performance compared to the base (B) strategy. Notably, for small numbers of
training samples the min-max difference is much larger than for higher number of training samples. Hence,
if just a few training samples are available, the choice of these is of importance.

7 http://www.primaresearch.org/tools

Figure 4.1: Influence of the number of training samples and of different data augmentation strategies – The bar height represents the mean F-value. The error bars encode min-max values of the 5 experiments (not the standard deviation). The dashed green line marks the maximum mean value of 0.975 achieved for 350 training samples. For a description of the different augmentation strategies B, S, S+A and S+A+E, see main text.

The best mean F-value
(0.975) is achieved for all 350 training samples with the S + A + E strategy. Nevertheless, there only is a
negligible loss in performance for 200 or 100 training samples. Even for 30 training samples, an F-value of
0.963 is achieved for the S+A strategy, which is sufficient for most applications, see Fig. A.1. This results
in a quite acceptable effort for ground truth production making the presented approach interesting even
for industrial production. The S + A data augmentation strategy will be the default for the rest of this
work.
Of course, the presented numbers are not directly transferable to collections with pages of entirely
different scenarios, e.g., census tables mixed with postal cards mixed with ... . One would expect
that more than 30 training samples are necessary for this kind of scenario. Nevertheless, the presented
experiment reflects a common situation: One has a robust baseline detector which was trained on very
heterogeneous data (see Sec. 4.4.3), but this detector does not work satisfyingly well for a certain (in
most cases quite homogeneous) collection. The numbers presented here give a hint concerning the effort
of ground truth production necessary in this scenario.
4.2. Curved and Oriented Text Lines
In this subsection, we demonstrate the ability of the introduced approach to handle curved or arbitrarily
oriented text lines. In a first experiment, the test set of the Bozen dataset was deformed to contain
arbitrarily curved text lines. For this purpose, we utilized trigonometric functions with random period to
simulate curved text lines in the test phase, see Fig. A.2. The RU-Net was trained (5 times) for 100 epochs
with 256 samples per epoch on the Bozen training set using the S + A + E augmentation strategy with
strong elastic deformations. We choose elastic transformations in training, because they simulate curves
of different amplitudes and frequencies in the same image. Furthermore, we increased the polynomial
degree (Def. 3.3.10) to 5 to enable the system to handle the curvatures present in the test set.
Remark 4.2.1. Different methods were used to deform the images during training and test phases.
Hence, the system had to learn the concept of curved text lines instead of an inversion of the image
degradation method used in the training phase.
In a second experiment, we have trained an RU-Net (5 times) on arbitrarily oriented samples of the
Bozen training set and evaluated the resulting networks on oriented pages of the test set. The results are
shown in Tab. 4.2 and a few sample images are shown in Fig. A.2. For the curved scenario the results
are as good as for the base scenario. In case of the oriented scenario the results are slightly worse, but
still excellent. This demonstrates the applicability for images with curved or oriented text lines without
remarkable adaptation of the workflow. Finally, we have trained five models with all degradations (affine,
elastic, rotation) and evaluated them on the three different scenarios. The corresponding F-values are depicted in Tab. 4.2. The combined system is worse than the scenario-specific systems for the base and curved scenarios, but for
the oriented scenario it even benefits from the additional elastic transformations.
Table 4.2: Results for the Bozen test set – The results in the Base, Curved and Oriented scenario are depicted. The P- and R-values are strongly related to the well known precision and recall measures, see [25]. Finally, the results for a single system trained with all degradations are shown.

Scenario | ∅ P-Val | ∅ R-Val | ∅ F-val [min, max]
Base     | 0.9765  | 0.9734  | 0.9750 [0.9693, 0.9770]
Curved   | 0.9802  | 0.9690  | 0.9745 [0.9725, 0.9760]
Oriented | 0.9625  | 0.9655  | 0.9640 [0.9582, 0.9674]

Combined system: ∅ F-val (Base) = 0.9531, ∅ F-val (Curved) = 0.9573, ∅ F-val (Oriented) = 0.9676.
4.3. U-Net vs. ARU-Net vs. Single-Stage Workflow
In Sec. 3, we have introduced the ARU-Net in a two-stage workflow. In this section, we will investigate
its superiority over the classical U-Net as well as over a ”single-stage” workflow. For this purpose we have
trained the U-, RU-, ARU- and LARU-Net (each 5 times – random weight initialization and random
training sample order) on the recently introduced cBAD dataset8 [26]. The LARU-Net is an ARU-Net
with a separable MDLSTM9 layer at the lowest resolution to incorporate full spatial context. The details
of the dataset are described in [25]. In our opinion, this is the most challenging freely available dataset at
the moment.

8 https://zenodo.org/record/257972
9 A separable MDLSTM layer is a concatenation of two (x- and y-direction) BLSTM layers.

Table 4.3: Results for the cBAD test set – The results for different neural network architectures and the workflow without Stage II (for the ARU-Net) are shown. Each architecture is trained 5 times on the cBAD train set. The results are sorted with respect to computational effort. The last two columns indicate whether an architecture is superior to all before mentioned ones in terms of disjunct confidence intervals and the Tukey-Duckworth test.

Method  | ∅ F-val [95% CI], Simple Track | ∅ F-val [95% CI], Complex Track | CI | T-D
ARU I†  | 0.9627 [0.9615, 0.9636]        | 0.9081 [0.9071, 0.9095]         |    |
U       | 0.9714 [0.9701, 0.9721]        | 0.9114 [0.9107, 0.9122]         | ✓  | ✓
RU      | 0.9756 [0.9744, 0.9766]        | 0.9182 [0.9165, 0.9203]         | ✓  | ✓
ARU     | 0.9781 [0.9772, 0.9789]        | 0.9223 [0.9214, 0.9230]         | ✓  | ✓
LARU    | 0.9772 [0.9765, 0.9780]        | 0.9233 [0.9217, 0.9249]         | ✗  | ✗
† single-stage workflow – baseline estimation by basic image processing methods (binarization of B followed by a CC analysis, no usage of S)

We have trained each network for 250 epochs, 1024 training samples each epoch using the
S + A data augmentation strategy. To assure the statistical significance of the posed superiority of the
newly introduced architecture, we follow [48] and provide the results of a statistical analysis. The choice of
appropriate statistical tests is quite limited since we can’t make any assumptions regarding the underlying
distribution. We utilize 95% confidence intervals (CI) provided by non-parametric bootstrapping [49] as
well as the Tukey-Duckworth test (level of significance: 5%) [50]. The results obtained are summarized
in Tab. 4.3. The ARU-Net performs significantly (last two columns) better than all architectures with
less computational effort. The LARU-Net could not prove its superiority and is therefore dismissed.
Furthermore, the results show that the introduction of the second stage is beneficial for the overall
performance. Hence, the ARU-Net together with the two-stage workflow has shown its superiority (which
is statistically significant) over the other systems and is used in the following. It has to be mentioned
that the above comparison is not fair concerning the number of trainable parameters – U - 2.16, RU 4.13, ARU - 4.14, LARU - 6.25 (in millions) – nor concerning the training or even inference time. The
comparison is about different architectures which, theoretically, have different capabilities, and whether
they make good use of them or not. For instance, the LARU-Net should be capable of incorporating a
more detailed spatial context, but in fact it does not benefit (in our settings) from this capability.
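The non-parametric bootstrap CIs used above can be reproduced in a few lines; a minimal NumPy sketch in which the number of resamples and the percentile method are illustrative choices:

```python
import numpy as np

def bootstrap_ci(values, n_boot=10000, alpha=0.05, rng=np.random.default_rng(0)):
    """95% percentile bootstrap confidence interval for the mean of `values`
    (e.g., the per-run F-values of one architecture)."""
    values = np.asarray(values, float)
    idx = rng.integers(0, len(values), size=(n_boot, len(values)))
    means = values[idx].mean(axis=1)
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Usage with five hypothetical per-run F-values:
print(bootstrap_ci([0.978, 0.977, 0.979, 0.978, 0.978]))
```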
4.4. Comparison against the State of the Art
In this subsection, we compare the proposed framework against the state of the art. We have chosen
the 3 most recent competitions on text line detection for historical documents, namely: ICDAR 2015
competition on text line detection in historical documents [14], ICDAR2017 Competition on Layout
Analysis for Challenging Medieval Manuscripts (Task 2) [51] and cBAD: ICDAR2017 Competition on
Baseline Detection [37]. We will not further introduce the datasets or metrics used and refer to the
competition papers.
4.4.1. ICDAR 2015 Competition on Text Line Detection in Historical Documents (ANDAR-TL)
The ARU-Net was trained on the cBAD training set10 . This competition aims at the origin point
(OP) detection. An OP is, roughly speaking, the lower left "corner" of a text line. Hence, we calculate the leftmost point of each detected baseline. This is the output of our system for this competition. The
achieved results are shown in Tab. 4.4. Since the ARU-Net was not trained on the original training data,
it is hard to compare its results to the other ones. Nevertheless, we would like to stress the fact, that
trained systems usually perform better if training set and test set are sampled from the same distribution.
E.g., the ARU-Net trained on the cBAD training set achieves an average F-value of 0.9605 for the Bozen
test set, which is worse than the F-value of 0.9750 of the system trained solely on the Bozen training set,
see Tab. 4.2. This indicates (but does not prove) the superiority of the presented method over the other
methods in Tab. 4.4.
Table 4.4: Origin Point (OP) detection results for the ANDAR-TL test set – Results for the dataset of [14] are shown. #DF means the number of detection failures (no OP detected by the system), #DM means the number of detection misses (detected OP far away from the ground truth OP) and #FP means the number of false positives.

Method   | #HYP  | #COR | #DF  | #DM  | #FP | avg. cost
UNIFR    | 9301  | 2578 | 3022 | 6456 | 267 | 19.00
IA-2     | 11789 | 5655 | 407  | 6032 | 102 | 14.51
A2iA-3†  | 8967  | 6523 | 2490 | 2263 | 181 | 13.20
SNU [32] | 10466 | 7741 | 948  | 2700 | 25  | 9.77
[33]     | 10896 | 8015 | 517  | 2860 | 21  | 8.19
proposed | 11635 | 9610 | 358  | 1942 | 83  | 5.39
† According to [28] this is an extension of [34].
4.4.2. ICDAR2017 Competition on Layout Analysis for Challenging Medieval Manuscripts (Task 2)
The ARU-Net was trained for 250 epochs, 1024 samples per epoch, on the competition training data11
provided by the competition organizers. This allows an entirely fair comparison to the participant’s results,
see Tab. 4.5. The proposed method substantially outperforms the winning one and reduces the error (the
gap to 1.0) by 43.26% (relatively). The specialty of this competition was that the methods should focus
on a special kind of text, e.g., comments were not annotated as text. Hence, the ARU-Net had to learn
to distinguish between different types of text. The output of the ARU-Net and the detected baselines for
a sample image of the CSG18 subset of the test set are shown in Fig. 4.2. One can see, that the ARU-Net
entirely ignores all text entities not regarded (in this competition) as main text. Remarkably, no further
information besides the input image is provided to the ARU-Net.
10 The competition training data was not available to the authors.
11 http://diuf.unifr.ch/main/hisdoc/diva-hisdb
Table 4.5: Results for the ICDAR2017 Competition on Layout Analysis for Challenging Medieval Manuscripts
– The F-values for Task 2 of all participants and the proposed method are shown for the different subsets of the test set.
Method   | CB55   | CSG18  | CSG863 | overall
CVML     | 0.9534 | 0.8734 | 0.9751 | 0.9340
BYU      | 0.9597 | 0.9879 | 0.9830 | 0.9768
CITlab   | 0.9896 | 0.9853 | 0.9716 | 0.9822
proposed | 0.9980 | 0.9828 | 0.9889 | 0.9899
Figure 4.2: Results for an image of the CSG18 subset of the test set – The original image (only the main text lines
were ground truthed), the baseline image generated by the trained ARU-Net and the baselines detected by the proposed
method are shown (from left to right).
4.4.3. cBAD: ICDAR2017 Competition on Baseline Detection
We compare our average result for the ARU-Net (see Tab. 4.3) to the results presented in [37], see
Tab. 4.6. Our method performs considerably better in both tracks compared to all submissions. Especially,
the increase in performance for the complex track is massive. Remarkably, the winning team uses a U-Net based system with task-specific pre- and postprocessing. This indicates that the newly introduced
concepts and parametrization, which are presented in this work, significantly improve the capability of the
classical U-Net. Some results on chosen images of the cBAD test set are shown in Fig. A.3-A.5. Notably,
no further information besides the input image (and the text region information in the simple track) is provided either to the ARU-Net or to the second stage of the workflow during inference.
Table 4.6: Results for the cBAD test set – The P-, R- and F-values of all participants and of the proposed method for the simple and complex track of the cBAD: ICDAR2017 Competition on Baseline Detection are shown.

Method   | Simple Track: P-Val | R-Val | F-val | Complex Track: P-Val | R-Val | F-val
LITIS    | 0.780 | 0.836 | 0.807 | –     | –     | –
IRISA†   | 0.883 | 0.877 | 0.880 | 0.692 | 0.772 | 0.730
UPVLC    | 0.937 | 0.855 | 0.894 | 0.833 | 0.606 | 0.702
BYU      | 0.878 | 0.907 | 0.892 | 0.773 | 0.820 | 0.796
DMRZ     | 0.973 | 0.970 | 0.971 | 0.854 | 0.863 | 0.859
proposed | 0.977 | 0.980 | 0.978 | 0.926 | 0.918 | 0.922
† This method is based on the work presented in [16].

5. Conclusion

In this work we presented a machine learning based method for text line detection in historical documents. The text lines are represented by their baselines. The problem and the proposed method were introduced thoroughly. The proposed ARU-Net, which is a universal pixel labeling approach, was trained
to predict the baseline position and the beginning and end of each text line. This enables the system to
handle documents with complex layouts, e.g., tables, marginalia, multi columns layouts. We have shown
that the system can be trained from scratch with manageably few training samples for a complex but
homogeneous collection. Remarkably, ground truth production is quite cheap. A ground truth sample
is just a page with annotated baselines, which can be done in a few minutes per page. Therefore, one
can expect that an adaptation to collections which are not covered by the neural network is possible
with quite reasonable ground truthing effort. The applicability of the proposed method was shown for
straight, curved and oriented text lines as well as for a combined scenario. The superiority of the proposed
ARU-Net in the two-stage workflow over the classical U-Net and over a simplified workflow was shown and
statistically verified. Finally, we showed that the proposed method substantially outperforms the previous
state of the art. Nevertheless, as one can see in Fig. A.3-A.5 there are still errors made by the system,
e.g., missed baselines (see Fig. A.4 – bottom right), segmentation errors (see Fig. A.5 – bottom left),
false positives (see Fig. A.3 – top left) or problems with strongly degraded documents (see Fig. A.4 – top
left). These errors do not seem to follow any obvious systematic pattern, which is not surprising for a method based on machine learning. In future work, we plan to test newly introduced concepts such as capsules, memory augmentation, and deeply supervised networks to further improve the system's performance.
Acknowledgement
NVIDIA Corporation kindly donated a Titan X GPU used for this research. This work was partially funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 674943 (READ – Recognition and Enrichment of Archival Documents). Finally, we would like to thank Udo Siewert for his valuable comments and suggestions.
References
[1] A. Isaac, R. Clayphan, B. Haslhofer, Europeana: Moving to linked open data, Information Standards
Quarterly 24 (2/3).
[2] T. Causer, V. Wallace, Building a volunteer community: Results and findings from Transcribe Bentham, Digital Humanities Quarterly 6 (2) (2012) 1–28.
[3] J. A. Sánchez, G. Mühlberger, B. Gatos, P. Schofield, K. Depuydt, R. M. Davis, E. Vidal, J. de Does,
tranScriptorium: a european project on handwritten text recognition, in: Proceedings of the 2013
ACM symposium on Document engineering, ACM, 2013, pp. 227–228.
[4] A. Graves, J. Schmidhuber, Offline handwriting recognition with multidimensional recurrent neural
networks, Advances in Neural Information Processing Systems 21, NIPS’21 (2008) 545–552.
[5] G. Leifert, T. Strauß, T. Grüning, W. Wustlich, R. Labahn, Cells in multidimensional recurrent
neural networks, J. Mach. Learn. Res. 17 (1) (2016) 3313–3349.
[6] J. Puigcerver, A. H. Toselli, E. Vidal, Word-graph and character-lattice combination for kws in handwritten documents, in: 2014 14th International Conference on Frontiers in Handwriting Recognition,
IEEE, 2014, pp. 181–186.
[7] T. Strauß, T. Grüning, G. Leifert, R. Labahn, CITlab ARGUS for Keyword Search in Historical
Handwritten Documents: Description of CITlab’s System for the ImageCLEF 2016 Handwritten
Scanned Document Retrieval Task, CEUR Workshop Proceedings, Évora, Portugal, 2016.
[8] T. Strauß, G. Leifert, T. Grüning, R. Labahn, Regular expressions for decoding of neural network
outputs, Neural Networks 79 (2016) 1–11.
[9] J. A. Sanchez, V. Romero, A. H. Toselli, E. Vidal, ICFHR2014 Competition on Handwritten Text
Recognition on Transcriptorium Datasets (HTRtS), in: Proceedings of International Conference on
Frontiers in Handwriting Recognition, ICFHR, Vol. 2014-Decem, IEEE, 2014, pp. 785–790.
[10] I. Pratikakis, K. Zagoris, J. Puigcerver, A. H. Toselli, E. Vidal, ICFHR2016 Handwritten Keyword
Spotting Competition ( H-KWS 2016 ), in: Proceedings of International Conference on Frontiers in
Handwriting Recognition, ICFHR, IEEE, 2016, pp. 613–618.
[11] M. Rusiñol, D. Aldavert, R. Toledo, J. Lladós, Efficient segmentation-free keyword spotting in historical document collections, Pattern Recognition 48 (2) (2015) 545–555.
[12] T. Bluche, Joint line segmentation and transcription for end-to-end handwritten paragraph recognition, Advances in Neural Information Processing Systems (2016) 838–846.
[13] T. Konidaris, A. L. Kesidis, B. Gatos, A segmentation-free word spotting method for historical
printed documents, Pattern Analysis and Applications 19 (4) (2016) 963–976.
[14] M. Murdock, S. Reid, B. Hamilton, J. Reese, ICDAR 2015 competition on text line detection in
historical documents, in: Proceedings of the International Conference on Document Analysis and
Recognition, ICDAR, Vol. 2015-Novem, IEEE, 2015, pp. 1171–1175.
[15] S. Sudholt, G. A. Fink, Phocnet : A deep convolutional neural network for word spotting in handwritten documents, in: Proceedings of International Conference on Frontiers in Handwriting Recognition,
ICFHR, 2016, pp. 1–6.
[16] G. Renton, C. Chatelain, S. Adam, C. Kermorvant, T. Paquet, Handwritten text line segmentation
using Fully Convolutional Network, in: Proceedings of the International Conference on Document
Analysis and Recognition, ICDAR, 2017, pp. 5–9.
[17] N. Arvanitopoulos, S. Süsstrunk, Seam Carving for Text Line Extraction on Color and Grayscale
Historical Manuscripts, International Conference on Frontiers in Handwriting Recognition (ICFHR)
(2014) 726 – 731.
[18] Q. N. Vo, S. H. Kim, H. J. Yang, G. Lee, Binarization of degraded document images based on
hierarchical deep supervised network, Pattern Recogn. 74 (2018) 568–586.
[19] C. Tensmeyer, B. Davis, C. Wigington, I. Lee, B. Barrett, PageNet: Page Boundary Extraction in
Historical Handwritten Documents, in: Proceedings of the 4th International Workshop on Historical
Document Imaging and Processing, HIP ’1, ACM, New York, NY, USA, 2017, pp. 59–64.
[20] K. Chen, M. Seuret, J. Hennebert, R. Ingold, Convolutional Neural Networks for Page Segmentation of Historical Document Images, in: Proceedings of the International Conference on Document
Analysis and Recognition, ICDAR, 2017, pp. 965–970.
[21] O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional Networks for Biomedical Image Segmentation, MICCAI (2015) 234–241.
[22] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: 2016 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
[23] J. Ryu, H. I. Koo, N. I. Cho, Language-independent text-line extraction algorithm for handwritten
documents, IEEE Signal Processing Letters 21 (9) (2014) 1115–1119.
[24] J. A. Sánchez, V. Romero, A. H. Toselli, E. Vidal, READ dataset Bozen (2016). doi:10.5281/zenodo.218236.
[25] T. Grüning, R. Labahn, M. Diem, F. Kleber, S. Fiel, READ-BAD: A New Dataset and Evaluation
Scheme for Baseline Detection in Archival Documents, arXiv preprint arXiv:1705.03311.
[26] M. Diem, F. Kleber, S. Fiel, T. Grüning, B. Gatos, ScriptNet: ICDAR 2017 Competition on Baseline
Detection in Archival Documents (cBAD) (2017). doi:10.5281/zenodo.257972.
[27] A. Zahour, L. Likforman-Sulem, W. Boussalaa, B. Taconet, Text Line segmentation of historical Arabic documents, Proceedings of the International Conference on Document Analysis and Recognition,
ICDAR 1 (2-4) (2007) 138–142.
[28] S. Eskenazi, P. Gomez-Krämer, J.-M. Ogier, A comprehensive survey of mostly textual document
segmentation algorithms since 2008, Pattern Recognition 64 (2017) 1–14.
[29] A. Nicolaou, B. Gatos, Handwritten text line segmentation by shredding text into its lines, in:
Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, 2009,
pp. 626–630.
[30] R. Saabni, A. Asi, J. El-Sana, Text line extraction for historical document images, Pattern Recognition Letters 35 (1) (2014) 23–33.
[31] A. Garz, A. Fischer, R. Sablatnig, H. Bunke, Binarization-free text line segmentation for historical
documents based on interest point clustering, in: Proceedings - 10th IAPR International Workshop
on Document Analysis Systems, DAS 2012, IEEE, 2012, pp. 95–99.
[32] B. Ahn, J. Ryu, H. I. Koo, N. I. Cho, Textline detection in degraded historical document images,
EURASIP Journal on Image and Video Processing 2017 (1) (2017) 82.
[33] T. Grüning, G. Leifert, T. Strauß, R. Labahn, A Robust and Binarization-Free Approach for Text Line
Detection in Historical Documents, in: Proceedings of the International Conference on Document
Analysis and Recognition, ICDAR, 2017, pp. 236–241.
[34] B. Moysset, C. Kermorvant, C. Wolf, J. Louradour, Paragraph text segmentation into lines with
Recurrent Neural Networks, Proceedings of the International Conference on Document Analysis and
Recognition, ICDAR 2015-Novem (2015) 456–460.
[35] B. Moysset, J. Louradour, C. Kermorvant, C. Wolf, Learning text-line localization with shared and
local regression neural networks, in: Proceedings of International Conference on Frontiers in Handwriting Recognition, ICFHR, 2017, pp. 1–6.
[36] B. Moysset, C. Kermorvant, C. Wolf, Full-Page Text Recognition: Learning Where to Start and When
to Stop, in: Proceedings of the International Conference on Document Analysis and Recognition,
ICDAR, 2017, pp. 871–876.
[37] M. Diem, F. Kleber, S. Fiel, B. Gatos, T. Grüning, cBAD: ICDAR2017 Competition on Baseline
Detection, in: Proceedings of the International Conference on Document Analysis and Recognition,
ICDAR, 2017, pp. 1355–1360.
[38] J. Long, E. Shelhamer, T. Darrell, Fully convolutional networks for semantic segmentation, in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition,
Vol. 07-12-June, 2015, pp. 3431–3440.
[39] H. Noh, S. Hong, B. Han, Learning deconvolution network for semantic segmentation, in: Proceedings
of the IEEE International Conference on Computer Vision, Vol. 2015 Inter, 2015, pp. 1520–1528.
[40] D. E. Rumelhart, G. E. Hinton, R. J. Williams, Learning representations by back-propagating errors,
Nature 323 (6088) (1986) 533–536.
[41] I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, MIT Press, 2016.
[42] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition,
Proceedings of the IEEE 86 (11) (1998) 2278–2323.
[43] X. Glorot, Y. Bengio, Understanding the difficulty of training deep feedforward neural networks,
in: Y. W. Teh, M. Titterington (Eds.), Proceedings of the Thirteenth International Conference on
Artificial Intelligence and Statistics, Vol. 9 of Proceedings of Machine Learning Research, PMLR,
2010, pp. 249–256.
[44] J. Serra, Image Analysis and Mathematical Morphology, Vol. 1, 1982.
[45] B. Delaunay, Sur la sphere vide, Bulletin de l’Académie des Sciences de l’URSS 6 (1934) 793–800.
[46] Y. Boykov, O. Veksler, R. Zabih, Fast approximate energy minimization via graph cuts, IEEE Transactions on Pattern Analysis and Machine Intelligence 23 (11) (2001) 1222–1239.
[47] P. Simard, D. Steinkraus, J. C. Platt, Best Practices for Convolutional Neural Networks Applied to
Visual Document Analysis, Proceedings of the 7th International Conference on Document Analysis
and Recognition (2003) 958–963.
[48] J. Puigcerver, Are Multidimensional Recurrent Layers Really Necessary for Handwritten Text Recognition?, in: Proceedings of the International Conference on Document Analysis and Recognition,
ICDAR, 2017, pp. 67–72.
[49] B. Efron, Better bootstrap confidence intervals, Journal of the American Statistical Association
82 (397) (1987) 171–185.
[50] J. W. Tukey, A Quick Compact Two Sample Test To Duckworth’s Specifications, Technometrics 1 (1)
(1959) 31–48.
[51] F. Simistira, M. Bouillon, M. Seuret, M. Würsch, M. Alberti, R. Ingold, M. Liwicki, ICDAR2017
Competition on Layout Analysis for Challenging Medieval Manuscripts, in: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, 2017, pp. 1361–1370.
Figure A.1: Results for an image of the Bozen test set – Results for RU-Nets trained on 5, 30 and 350 training samples (left to right) with different data augmentation strategies B, S+A and S+A+E (top to bottom) are shown.
Figure A.2: Results for an image of the Bozen test set – Results for two “degraded” images are shown. The images were arbitrarily curved and rotated.
Figure A.3: Results for images of the cBAD test set – Only the images without any layout information were used.
Figure A.4: Results for images of the cBAD test set – Only the images without any layout information were used.
Figure A.5: Results for images of the cBAD test set – Only the images without any layout information were used.
INTENSIONAL CYBERFORENSICS
Serguei A. Mokhov
A thesis
in
The Department
of
Computer Science and Software Engineering
Presented in Partial Fulfillment of the Requirements
For the Degree of Doctor of Philosophy (Computer Science)
Concordia University
Montréal, Québec, Canada
September 2013
© Serguei A. Mokhov, 2013
CONCORDIA UNIVERSITY
SCHOOL OF GRADUATE STUDIES
This is to certify that the thesis prepared
By: Serguei A. Mokhov
Entitled: Intensional Cyberforensics
and submitted in partial fulfillment of the requirements for the degree of
Doctor of Philosophy (Computer Science)
complies with the regulations of the University and meets the accepted standards with respect to originality and quality.
Signed by the final examining committee:
Dr. Deborah Dysart-Gale, Chair
Dr. Weichang Du, External Examiner
Dr. Amr Youssef, External to Program
Dr. Peter Grogono, Examiner
Dr. Terry Fancott, Examiner
Dr. Joey Paquet and Dr. Mourad Debbabi, Thesis Supervisor
Approved by
Chair of Department or Graduate Program Director
September 2013
Dean of Faculty
Abstract
Intensional Cyberforensics
Serguei A. Mokhov, Ph.D.
Concordia University, 2013
This work focuses on the application of intensional logic to cyberforensic analysis and its
benefits and difficulties are compared with the finite-state-automata approach. This work
extends the use of the intensional programming paradigm to the modeling and implementation of a cyberforensics investigation process with backtracing of event reconstruction, in
which evidence is modeled by multidimensional hierarchical contexts, and proofs or disproofs
of claims are undertaken in an eductive manner of evaluation. This approach is a practical,
context-aware improvement over the finite state automata (FSA) approach we have seen in
previous work. As a base implementation language model, we use in this approach a new dialect of the Lucid programming language, called Forensic Lucid, and we focus on defining
hierarchical contexts based on intensional logic for the distributed evaluation of cyberforensic
expressions. We also augment the work with credibility factors surrounding digital evidence
and witness accounts, which have not been previously modeled.
The Forensic Lucid programming language, used for this intensional cyberforensic analysis, is formally presented through its syntax and operational semantics. In large part, the
language is based on its predecessor and codecessor Lucid dialects, such as GIPL, Indexical Lucid, Lucx, Objective Lucid, MARFL and JOOIP bound by the underlying
intensional programming paradigm.
Acknowledgments
To my dear wife, Miao Song, and our little family, who make life worthwhile and
meaningful.
This thesis is also dedicated to those who made this thesis itself worthwhile¹.
¹ ... and are acknowledged in detail in Section 10.5, page 290.
Contents

List of Figures . . . ix
List of Tables . . . xiii
List of Algorithms and Listings . . . xiv
1 Introduction . . . 1
  1.1 Overview . . . 3
  1.2 Motivation and Applications . . . 4
  1.3 Problem Statement and Gap Analysis . . . 6
  1.4 Proposed Solution . . . 8
  1.5 Scope . . . 11
  1.6 Summary . . . 12
I Background . . . 23
2 Cyberforensics . . . 24
  2.1 Forensic Computing . . . 24
  2.2 Gladyshev's Formal Approach to Cyberforensic Investigation and Event Reconstruction . . . 29
  2.3 Other Formal Approaches . . . 56
  2.4 Summary . . . 56
3 Intensional Logic, Programming, Reasoning, Uncertainty and Credibility . . . 57
  3.1 Overview . . . 58
  3.2 Intensional Logics and Programming . . . 59
  3.3 Uncertainty, Evidence, and Credibility . . . 65
  3.4 Summary . . . 73
4 The Lucid Programming Language Family . . . 76
  4.1 Lucid Overview . . . 76
  4.2 Related Work . . . 89
  4.3 Lucid Dialects . . . 92
  4.4 Summary . . . 96
5 Data Mining and Pattern Recognition . . . 97
  5.1 Related Work . . . 98
  5.2 MARF . . . 102
  5.3 fileType . . . 107
  5.4 MARFCAT . . . 110
  5.5 MARFPCAT . . . 121
  5.6 Summary . . . 122
6 The General Intensional Programming System . . . 128
  6.1 Overview . . . 129
  6.2 GIPSY's Architecture . . . 136
  6.3 Summary . . . 151
II Methodology . . . 153
7 Forensic Lucid Design and Specification . . . 154
  7.1 Approach Overview . . . 155
  7.2 The Forensic Lucid Language Requirements and Design Considerations . . . 156
  7.3 Concrete Forensic Lucid Syntax . . . 166
  7.4 Operational Semantics . . . 192
  7.5 Discussion . . . 203
  7.6 Summary . . . 210
8 Software Architecture Design . . . 211
  8.1 Forensic Lucid Compiler . . . 211
  8.2 Forensic Lucid Run-time System Design . . . 213
  8.3 Updates to GIPSY's Frameworks' Design . . . 218
  8.4 Forensic Lucid Component Testing . . . 232
  8.5 Forensic Lucid Encoders . . . 233
  8.6 GIPSY Cluster Lab Setup . . . 238
  8.7 Summary . . . 242
III Conclusion . . . 243
9 Evaluation Applications . . . 244
  9.1 Overview . . . 244
  9.2 Toy DSTME Examples in Forensic Lucid . . . 248
  9.3 ACME Printing Case in Forensic Lucid . . . 250
  9.4 Blackmail Case in Forensic Lucid . . . 254
  9.5 MAC Spoofer Investigation . . . 257
  9.6 Summary . . . 278
10 Concluding Remarks and Future Work . . . 279
  10.1 Objectives Achieved . . . 280
  10.2 Summary . . . 280
  10.3 Limitations . . . 282
  10.4 Future Work . . . 284
  10.5 Acknowledgments . . . 290
Bibliography . . . 293
IV Appendices . . . 322
A Glossary . . . 323
B A Type System Theory for Higher-Order Intensional Logic Support in Hybrid Intensional-Imperative Programs in GIPSY . . . 329
  B.1 Overview . . . 329
  B.2 The GIPSY Type System . . . 331
  B.3 Simple Theory of GIPSY Types . . . 336
  B.4 Concrete Specification of the GIPSY Types . . . 337
  B.5 Describing Some GIPSY Types' Properties . . . 343
  B.6 Summary . . . 350
C MARFL . . . 351
  C.1 Overview . . . 351
  C.2 Theoretical Framework . . . 352
  C.3 Applications . . . 357
  C.4 Summary . . . 358
D Self-Forensics . . . 359
  D.1 Overview . . . 359
  D.2 Motivation . . . 361
  D.3 Related Work . . . 362
  D.4 Self-Forensics Methodology Overview . . . 366
  D.5 Summary . . . 378
E Graph-based Representation and Visualization . . . 382
  E.1 Related Work . . . 382
  E.2 Visualization Example Prototypes . . . 383
  E.3 Visualization of Forensic Lucid . . . 384
  E.4 Summary . . . 388
Index . . . 389
List of Figures

1 Boyd's OODA loop [51] . . . 15
2 The GIPSY logo representing the distributed nature of GIPSY . . . 17
3 Overall multifaceted research summary . . . 18
4 Run of a computation c [135] . . . 31
5 Backtracking example in ψ−1(y) and Ψ−1(Y) [135] . . . 32
6 Explanations as partitioned runs of an observation sequence [135] . . . 35
7 Meaning and explanation hierarchy [135] . . . 36
8 Fixed-length event reconstruction [135] . . . 37
9 Example computation event sequences . . . 38
10 Example of construction of MPR [83] . . . 40
11 Generic observation sequence permutations example . . . 40
12 Printer Case state machine [135] . . . 44
13 Paths leading to (BX, BX) . . . 51
14 Cluster data with Blackmail fragments . . . 53
15 Simplified view of the cluster model [136] . . . 53
16 Blackmail Case state machine [307] . . . 55
17 Natural-language contextual expression [361, 380] . . . 63
18 1D example of tag-value contextual pairs [361, 380] . . . 63
19 2D example of tag-value contextual pairs [264, 282] . . . 63
20 Sample G≥ syntax expressions . . . 78
21 Sample G≥ operators . . . 79
22 Classical natural-numbers example [361] . . . 79
23 Extract of operational semantics rules of GIPL [361] . . . 82
24 Extract of operational semantics of Lucx [473, 513] . . . 83
25 Eduction tree as a trace for the natural-numbers problem in Objective Lucid [264] . . . 87
26 The natural-numbers problem in Objective Lucid [305] . . . 93
27 Extract of operational semantics of Objective Lucid [304] . . . 94
28 Higher-order context Dot operator of MARFL [304] . . . 95
29 MARF's pattern-recognition pipeline [272] . . . 103
30 MARF's pattern-recognition pipeline sequence diagram [290] . . . 105
31 The distributed MARF pipeline . . . 107
32 High-level structure of GIPSY's GEER flow overview [315] . . . 137
33 High-level structure of the GIPC framework [315] . . . 138
34 GMT context use case diagram . . . 145
35 Design of the GIPSY node [161, 362] . . . 146
36 Detailed GMT context use case diagram . . . 149
37 RIPE context use case diagram . . . 150
38 MARFCAT GIPSY network graph . . . 151
39 GIPSY WebEditor interface [264] . . . 152
40 Nested context hierarchy example for cyberforensic investigation [300, 304] . . . 158
41 Concrete Forensic Lucid syntax (E) [304, 305] . . . 168
42 Concrete Forensic Lucid syntax (Q) [304, 305] . . . 169
43 Concrete Forensic Lucid syntax (ES, OS, O) [304, 305] . . . 170
44 Concrete Forensic Lucid syntax (operators) [304, 305] . . . 171
45 Operators translated to GIPL-compatible definitions [304, 305] . . . 186
46 Operational semantics rules of Forensic Lucid: E and Q Core . . . 198
47 Operational semantics rules of Forensic Lucid: E and Q Core Context . . . 199
48 Operational semantics of Forensic Lucid (OO) . . . 200
49 Operational semantics of Forensic Lucid: an observation . . . 201
50 Operational semantics of Forensic Lucid: forensic operators and lifting . . . 202
51 Operational semantics of Forensic Lucid: an observation sequence . . . 202
52 Operational semantics of Forensic Lucid: an evidential statement . . . 203
53 Operational semantics of Forensic Lucid: belief and plausibility . . . 204
54 PRISM input language basic syntax [190] . . . 216
55 PRISM input language syntax (1) [190] . . . 216
56 PRISM input language syntax (2) [190] . . . 217
57 PRISM input language operational semantics [190] . . . 217
58 Forensic Lucid compilation and evaluation flow in GIPSY [322] . . . 219
59 Updated high-level structure of the GIPC framework . . . 221
60 Semantic analyzers framework . . . 222
61 Evaluation engines class diagram . . . 225
62 Demand Generator and Dispatcher relationship . . . 227
63 Configuration UML class diagram . . . 230
64 Annotations class diagram . . . 231
65 Forensic extensions to the GIPSY Type System . . . 232
66 Conceptual illustration of Forensic Lucid encoders . . . 233
67 Example of a three-observation sequence context exported from MARF to Forensic Lucid [322] . . . 235
68 Example of a simplified three-observation sequence context exported from MARF to Forensic Lucid [322] . . . 235
69 GIPSY cluster network topology . . . 241
70 Example RT ticket alerting of a possible MAC spoofing attempt . . . 260
71 Use and Misuse Cases in MAC spoofer investigations . . . 264
72 Procmail handler trigger recipe (rule) . . . 266
73 MAC spoofer analyzer UML sequence diagram . . . 276
74 HOIFL initial sketch formulation . . . 288
75 The GIPSY type system [315] . . . 333
76 Example of provider interfaces and comparators [301] . . . 344
77 Composite types [301] . . . 346
78 Abstract GIPSY existential types [301] . . . 347
79 Concrete GIPSY existential types [301] . . . 348
80 Concrete GIPSY union types (providers) [301] . . . 349
81 Concrete GIPSY union types [301] . . . 349
82 Example of hierarchical context specification for an evaluation configuration of MARF [272] . . . 353
83 MARFL context syntax [272] . . . 355
84 MARFL context semantic rules [272] . . . 357
85 Initial SpeakerIdentApp re-written in MARFL [272] . . . 358
86 Vassev's ASSL multi-tier model [322] . . . 365
87 Example of observations in GIPSY [321] . . . 374
88 Example of observations in JDSF [321] . . . 375
89 Example of observations for the second equation in Cryptolysis [321] . . . 376
90 Canonical example of a 2D dataflow graph-based program [361] . . . 384
91 Example of an actual rendered 2D DFG-based program with Graphviz [89] . . . 384
92 Modified conceptual example of a 2D DFG with 3D elements [311] . . . 385
93 Conceptual example of linked 3D observation nodes [311] . . . 385
94 Interactive documentary illimitable space visualization and management [437] . . . 388
List of Tables

1 Possible identifier types [361] . . . 80
2 File-type identification top 10 results, bigrams ([290]) . . . 109
3 File-type identification top 10 results, 2nd best, bigrams ([290]) . . . 109
4 File-type identification top 10 results, bigrams, per file type ([290]) . . . 109
5 Sample CVE classification stats for Chrome 5.0.375.54 [287] . . . 119
6 Sample CVE stats for Tomcat 5.5.13 [287] . . . 120
7 Top 6 distance malware classifiers and 32 results, FFT feature vectors . . . 123
9 Top 6 distance malware classifiers and 32 results, wavelet filter preprocessing . . . 124
11 Top 6 distance malware classifiers and 32 results, low-pass filter preprocessing . . . 125
13 Forensic Lucid identifier types in D . . . 161
14 Example of application of Forensic Lucid operators to bounded streams [305] . . . 184
15 Types of context operators' arguments and resulting types . . . 203
16 GIPSY cluster IP assignment . . . 240
17 Matching data types between Lucid and Java [301, 315] . . . 334
18 Common example types of context operators' arguments and resulting types [272] . . . 356
List of Algorithms and Listings

6.1 Example of a hybrid GIPSY program . . . 141
8.1 MARFCAT encoded evidence fragment example . . . 236
8.2 swm encoded evidence example . . . 236
8.3 Argus encoded evidence example . . . 238
8.4 arp encoded evidence example . . . 238
9.1 Limb Example . . . 248
9.2 Raining Example . . . 249
9.3 DSTME Raining Example . . . 250
9.4 Developing the Printing Case: “main” [304, 305, 312] . . . 251
9.5 Transition function ψ in Forensic Lucid for the ACME Printing Case . . . 253
9.6 Inverse transition function Ψ−1 in Forensic Lucid for the ACME Printing Case . . . 255
9.7 Blackmail Case modeling in Forensic Lucid . . . 256
9.8 Augmented MAC spoofer checking/investigation algorithm . . . 263
9.9 MAC Spoofer Analyzer's RT evidential statement context encoding example . . . 270
9.10 msw encoded evidence example . . . 271
9.11 activity encoded no-observation evidence example . . . 271
9.12 nmap encoded evidence example . . . 272
9.13 dhcp log encoded evidence example . . . 273
D.1 The prototype syntactical specification of the SELF FORENSICS in ASSL for ADMARF [322] . . . 371
Chapter 1
Introduction
Cyberforensics, or digital forensics, is the overarching term for digital crime investigations
concerning (among other aspects) digital data recorded on various computing devices and services as evidence for the purposes of analysis, event reconstruction, attribution, and eventual
prosecution or exoneration of suspects. Unlike in traditional forensics, cyberforensic processes (see Chapter 2) face a number of extra multifaceted challenges for their methods and
results to be usable in a court of law and in information security practice. This thesis presents
an endeavor that extends the use of the intensional programming paradigm [131, 350] and
the science behind it to formally model and implement a cyberforensics investigation process
with backtracing of event reconstruction, formalizing and modeling the evidence as multidimensional hierarchical contexts, and proving or disproving the claims in an intensional
manner of expression and eductive evaluation [305, 307] based on Kripke’s possible-world
semantics [508]. Our proposed solution (Section 1.4) is designed to be a practical, context-aware improvement on the raw finite-state-automata (FSA/LISP) approach we have seen
in [83, 135, 136, 137]. The core contribution of this thesis, Forensic Lucid, is in Chapter 7, with its most advanced application described in Section 9.5, page 257. What follows
are executive summary details of the overall research and approach.
We apply intensional logic to automated cyberforensic analysis, reasoning, and event
reconstruction. We explore the benefits and difficulties in comparing the work with the
aforementioned finite-state automata/Common Lisp approach. Intensional logic, as a multidimensional generalization of temporal logic, makes the evaluation of regular mathematical
and logic expressions context-aware, where the context of evaluation (an arbitrary part of
interest in the “possible world”) is a first-class value and can be manipulated in logical expressions via context operators. The fundamental context consists of dimension-value pairs.
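As a rough intuition only, and in ordinary Python rather than Lucid (the helper names below are our own), a context of evaluation can be pictured as a mapping of dimension-value pairs, with one operation to query the current value of a dimension (akin to Lucid's #) and one to evaluate the same expression at a modified context (akin to @):

# A context as a set of dimension-value pairs.
context = {"case": "acme_printing", "time": 3, "witness": "manager"}

def query(ctx, dimension):
    # Read the current value of a dimension in the context.
    return ctx[dimension]

def switch(ctx, **overrides):
    # Produce a new context with some dimensions rebound (non-destructive).
    return {**ctx, **overrides}

print(query(context, "time"))                  # 3
print(query(switch(context, time=4), "time"))  # 4: same query, different context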
To help the study, the foundation of the new Forensic Lucid language is presented along
with its forensic context operators to navigate evidential statements and crime scene specifications [305]. Our approach additionally introduces the notion of credibility and trustworthiness of evidence and witnesses, which FSA/LISP lacked—via the Dempster–Shafer theory of
mathematical evidence to address the problem that some evidence or witness accounts may
not be 100% credible, potentially affecting the overall reasoning about a case.
The Forensic Lucid compiler and run-time system are being designed within the General Intensional Programming System (GIPSY) [161, 264, 362, 370]. As an added benefit
arising from the use of Lucid and its GIPSY implementation, the cyberforensics investigation cases may be conducted using a scalable distributed/parallel intensional approach for
event reconstruction computation. This work is a natural evolution of and refinement of
related works written or cowritten by the author [267, 269, 276, 300, 304, 305, 307, 312, 321].
Lucid and intensional logic are good candidates, proven over time, for knowledge representation and computable context-aware evaluation [171]. The Forensic Lucid instance
presented in this thesis is arguably the first formally specified and developed language allowing scalable encoding of forensic knowledge of a case, the building of the case’s operational
description, and the evaluation of claims against them with event reconstruction locally or on
a cluster for large volumes of digital evidence. The first ideas of Forensic Lucid appeared
in [267] in 2007.
As a result, this thesis can be said to be a cross-disciplinary work that touches the breadth
of the research in digital forensics, formal methods, intensional logic and programming, software engineering, distributed and parallel computing, soft computing and expert systems,
automated reasoning, law, mathematics, pattern recognition and data mining, and graphical visualization. Thus, armed with all these tools and techniques we approach the digital
investigation process.
In the remainder of this chapter, after a more comprehensive overview of the research and its principal motivations (Section 1.1 and Section 1.2), we detail the set of problems and gaps to be addressed in Section 1.3 and the solutions we propose for them in Section 1.4. We then define the scope of this thesis in Section 1.5 and summarize the research, its contributions, and the organization of the remainder of this thesis in Section 1.6.
1.1 Overview
Forensic Lucid, a functional-intensional forensic case programming/specification language
is at the core of the Intensional Cyberforensics project. It has undergone extensive design
and development including its syntax, semantics,
the corresponding compiler, run-time
environment, and interactive development environments [266, 267] provided by the General
Intensional Programming System (GIPSY) [463]. This work further extends our previous
developments in the related work [263, 266, 267, 300, 304].
Forensic Lucid, serving as the base declarative specification language model that we
use in this approach, is in fact a new dialect of the Lucid intensional-logic-based programming language [24, 25, 26, 27, 509]. As a part of this thesis, we define hierarchical contexts in
Forensic Lucid (Section 7.2.2) based on intensional logic for the evaluation of cyberforensic
expressions, first for modeling example case investigations from the related FSA/LISP work
in order to do comparative studies of the old and new approaches [307]. The cases involved
disputes between parties over some computer-related equipment. In particular, in this work we model the blackmail case and the ACME (a fictitious company name) “printing case” incident and specify them in Forensic Lucid for follow-up cyberforensic analysis and event reconstruction. Our approach is based on the said cases, modeled by
encoding concepts such as evidence and the related witness accounts as an evidential statement context in a Forensic Lucid “program”. The evidential statement is an input to the
transition function that models the possible “feed-forward” deductions in the case. We then
invoke the transition function (actually its reverse, “backtracking”) with the evidential statement context, to see if the evidence we encoded agrees with one’s claims and then attempt to
reconstruct the sequence of events that may explain the claim or disprove it [312]. Following
the simple cases, we model more realistic cases and place some of the resulting practical artifacts to work in the actual network and system operations (Section 9.5). Additionally, in the
ongoing theoretical and practical work, Forensic Lucid is augmented with the Dempster–
Shafer theory of mathematical evidence to include credibility factors and similar concepts
that are lacking in Gladyshev’s model [310]. Specifically, this thesis further refines the theoretical structure and formal model of the observation tuple with credibility weight and other
factors for cyberforensic analysis and event reconstruction [310] by extending the first iteration of Forensic Lucid that was originally following Gladyshev’s approach [135, 136, 137]
to only formalize the evidence and the case in question without taking into account witness
credibility [300, 304, 310].
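To make the preceding description more concrete, the following is a minimal, purely illustrative Python sketch (not actual Forensic Lucid; the field and type names are our own assumptions) of an observation annotated with a credibility weight, an observation sequence as one witness story, and an evidential statement as the collection of all stories:

from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    prop: str         # the observed property, e.g., "job_B_deleted"
    min_dur: int = 0  # minimum duration (in abstract time steps)
    opt_dur: int = 0  # optional additional duration
    w: float = 1.0    # credibility/trustworthiness weight in [0, 1]

# One observation sequence per witness (a person, a log, a sensor);
# the evidential statement groups all such stories for a case.
ObservationSequence = List[Observation]
EvidentialStatement = List[ObservationSequence]

printer_log = [Observation("job_B_deleted", w=0.9), Observation("queue_empty", w=0.9)]
suspect_story = [Observation("never_printed_anything", w=0.3)]
evidential_statement: EvidentialStatement = [printer_log, suspect_story]

In the actual approach the credibility weights are handled with the Dempster–Shafer theory of evidence (belief and plausibility) rather than by any ad hoc aggregation; the sketch only shows where such a weight attaches to the evidence.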
As an intensional dialect, Forensic Lucid’s toolset is practically being designed within
the General Intensional Programming System (GIPSY) and the probabilistic model-checking
tool PRISM as a backend to compile the Forensic Lucid model into PRISM code syntax
and check the compiled case model with the PRISM tool at run-time [310]. As briefly mentioned earlier, GIPSY is the middleware platform in this study that performs the Forensic
Lucid computations. The GIPSY project is an ongoing effort aiming at providing a flexible
platform for the investigation of the intensional programming model as realized by the latest instances of the Lucid programming language [24, 25, 26, 27, 509] (a multidimensional
context-aware language whose semantics is based on possible-worlds semantics [218, 219]).
GIPSY provides an integrated framework for compiling programs written in theoretically all
variants of Lucid, and even any language of intensional nature that can be translated into
some kind of “generic Lucid” [302] (e.g., in our case GIPL [361, 365, 473, 513]).
1.2 Motivation and Applications
Motivations
A formal approach to cyberforensic analysis is necessary for the artifacts produced to be a
credible tool to use in law enforcement and to be viable in courts if challenged. Pavel Gladyshev in his PhD thesis [135] dedicated the entire Chapter 4 to that effect. He subsequently
provided formalisms in event reconstruction in digital investigations. Likewise, in order for
Forensic Lucid (and its surrounding theory and practice) to be a credible tool to use in
a court of law (including the implementation of relevant tools for the argumentation), the
language ought to have a solid scientific basis, a part of which is formalizing the semantics
of the language and proving correctness of the programs written in it.
We move one step further, motivated by the fact that truth and credibility can be fuzzy (that is, carry elements of uncertainty or taintedness of evidence and witness accounts) rather than simply being completely true or false. Thus, it is natural to want to represent such knowledge and to reason in its presence, discounting low-credibility claims and giving higher-credibility claims higher weight.
The concrete realization of the formal approach also has to be usable by a wider investigative audience, who should be able to represent and visualize the voluminous case knowledge and reason about it efficiently. Thus, it is imperative to have usable scripting and visual-aid tools with which human investigators can compose the case and import the digital evidence. Additionally, the knowledge representation, case building, and case management should be friendlier to human investigators and take contextual meaning into account. The subsequent case evaluation should be scalable and efficient at the same time, given the likely need to process a large amount of digital evidential data.
Furthermore, a concrete operational need exists to automate the reasoning about truly
offending or false-positive cases in the actual operational environment on a Faculty network to
reduce the burden of the very few human analysts manually doing the related investigations
while attending to many other duties (cf. Section 9.5).
Applications
Due to the inherent interdisciplinary nature of this research (cf. page 2), its possible applications and implications can go significantly further beyond the very specific computer
and network security investigations mentioned in this thesis and even beyond the cybercrime
domain itself.
One possible application of the theoretical framework and formal model of the observation
tuple with credibility weight and other factors for cyberforensic analysis is intrusion-detection
system (IDS) data and their corresponding event reconstruction [310]. This work may also
help with further generalization of the testing methodology of IDSs [355] themselves [310].
In particular, encoding and modeling large volumes of network and other data related to
intrusion detection is an important step in incident analysis and response. The data are
formalized in Forensic Lucid for the purposes of event correlation and reconstruction
along with trustworthiness factors (e.g., the likelihood of logs being altered by an intruder)
in a common specification of the evidential statement context and a digital crime scene. A
possible goal here is to be able to collect the intrusion-related evidence as the Forensic Lucid
evidential statement from diverse sources like Snort [256, 398, 438], netflows, pcap’s data,
etc., to do the follow-up investigation and event reconstruction. Another goal is to either
be interactive with an investigator present, or fully automated in an autonomous IDS with
self-forensics [321] capability [310].
The proposed practical approach in the cyberforensics field can also be used in a normal
investigation process involving crimes not necessarily associated with information technology.
Combined with an expert system (e.g., implemented in CLIPS [400]), the approach can also
be used in training new staff in investigation techniques to help to prevent incomplete analysis
that conventional ad-hoc techniques are prone to.
Venturing completely outside of crime investigation (digital or not), the Forensic Lucid approach has been proposed for adaptation to applications beyond the intended purpose described in this thesis. One such cross-disciplinary application area is archaeology: event reconstruction in the historical context of the evidence one digs up from archaeological sites, where the investigator mentally arranges all the finds, describes what may have happened long ago, and validates the various hypotheses put forward in that context. Extrapolating from that, any application where event reconstruction is needed in the presence of uncertainty can be approached with Forensic Lucid. Another application area comes from even further away, namely the artistic side and the entertainment industry, such as validating the logical consistency of scripts, plots, and strategies for feature films and games of relative complexity (especially if such works themselves have to do with investigations!).
1.3 Problem Statement and Gap Analysis
The arguably very first formal approach for evidential statement consistency verification and
event reconstruction in cyberforensic investigative analysis appeared in the previously mentioned works [135, 136, 137] by Gladyshev et al . That approach (described and recited in
detail in Chapter 2) relies on the finite-state automata (FSA) and their transformation and
operation to model evidence, witnesses, stories told by witnesses, and their possible evaluation for the purposes of claim validation and event reconstruction (other formalisms were
studied in [21, 22], see Section 2.2). There the authors provide a Common Lisp sample
implementation. The examples the works present are the initial use-cases for the proposed
technique in this thesis—the Blackmail Investigation (Section 2.2.5.2) and ACME Printing (Section 2.2.5.1) cases. Their approach, however, is unduly complex to use and to understand for investigators without a background in theoretical computer science. While the formalization and implementation of the FSA/LISP approach was very valuable to the community, it is not as elegant as it could have been [83], nor is it very usable by the majority of investigators who are less versed in formal techniques.
At the origins of Intensional Programming (IP) is the Lucid functional intensional programming language, dating back to 1974 [27]. After more than 35 years of development,
history has proven that it is a programming paradigm whose languages are diversified and
are in constant evolution. However, Intensional Programming [131, 350] is an off-main-stream
programming paradigm [526] whose concrete applicability still needs to be proven in order to
be widely accepted. A lot of intensional dialects have been spawned from the 35+-year-old
Lucid [5, 24, 25, 26, 27, 93, 122, 126, 361, 364, 509, 514, 539]. Lucid (see Section 4.1) itself
was originally invented with the goal of program correctness verification [23, 25, 26, 244, 533].
Overall, there are a number of unaddressed problems and gaps with theories, techniques,
and technologies used, which we summarize below and then elaborate in some detail on the
specific points.
1. The FSA/LISP approach to formal cyberforensic analysis:
(a) does not scale humanly (comprehension, scalability) and computationally (usability)
(b) has no credibility factors to annotate evidence and testimonies (probabilities and
PRISM are subsequently needed)
(c) has no visualization of the knowledge base and the case management
(d) requires more realistic cases to test in the actual operational work
(e) has no automatic encoders of the evidence into Common Lisp or FSA
2. The Lucid language:
(a) needs hierarchical contexts to represent nested/hierarchical knowledge as streamed
arguments and results
(b) requires object members access and other APIs for arbitrary nesting
(c) needs the use of formal methods and verification
(d) needs better usability of the Lucid development and run-time tools for a wider
community
(e) requires an update to the theoretical foundations:
i. needs a common format for knowledge representation and reasoning
ii. needs an augmented theoretical logical framework (HOIL and HOIFL, see Section 1.6.1.1, page 13) to support reasoning
3. The GIPSY system:
(a) needs augmentation to support Forensic Lucid
(b) requires cross-language data types
(c) needs a compiler for Forensic Lucid
(d) the evaluation engine/run-time systems needs:
i. multi-tier architecture to address scalability better
ii. refactoring the framework for optimized problem-specific handling
iii. reasoning engines for backtracking and probabilities
(e) configuration management and automation
(f) requires development/configuration environment with graphical support for the
better usability
1.4 Proposed Solution
This work focuses on the refinement of the application of intensional logic to cyberforensic analysis, and its benefits are compared to those of the pure finite-state-automata approach [307]. At the same time, we increase the visibility of the Intensional Programming paradigm within the formal methods and cyberforensic analysis communities as the right tool to use in such an approach.
A large part of the solution is the creation of the introduced Forensic Lucid dialect of Lucid to foster research on intensional cyberforensics [300] (see Section 1.6.1.2). To summarize in a few words, this thesis presents multidimensional context-oriented cyberforensic specification and analysis. In large part, Forensic Lucid is a union of the syntax and operational semantics inference rules of its constituent intensional languages with its own forensic extensions based on the cited finite-state-automata approach [135, 136, 137]. In order to be a credible tool to use, for example, in court, and to implement relevant tools for the argumentation, the language ought to have a solid scientific basis, a part of which is a complete formalization of the syntax and semantics of the language [304].
Thus, the approach this thesis follows is tailored to addressing a decent subset of problems
and gaps outlined in the previous section. Specifically:
1. Addressing the FSA/LISP approach gaps.
The goal of this work is to lay a foundation to lead to a solution that remedies these two
major drawbacks of the FSA approach by introducing credibility factors and improving
usability. Additionally, we benefit from the parallel demand-driven context-aware evaluation in terms of the implementing system, which the original Common Lisp-based
implementation [137] approach lacks [305].
As to the test cases, the two actual illustratory examples Gladyshev’s works presented
are the mentioned first use-cases for the proposed technique in this thesis—the ACME
printer and blackmail case investigations (see Section 2.2.5.1 and Section 2.2.5.2 respectively). These works [136, 137] detail the corresponding formalization using the
FSA and the proof-of-concept (PoC) Common Lisp implementation for the printer
case [137].
We first aim at the same cases, modeling and implementing them using the new approach, which paves the way to being more friendly and usable in the actual investigator's work and serves as a basis for further development in the areas of forensic computing and intensional programming using Forensic Lucid [307, 312] (see Section 9.3 and Section 9.4 respectively). We then move on to more realistic cases, such as,
e.g., MAC spoofer report investigations (see Section 9.5).
Thus, a Lucid approach is a major solution to these problems.
2. Addressing the Lucid gaps.
The Lucid family of languages thrived around intensional logic, which makes the notion of context explicit and central and, more recently, a first-class value [473, 513, 515] that can be passed around as a function parameter or a return value and has a set of operators defined on it. We draw heavily on this notion by formalizing our evidence and the witness stories as a contextual specification of the incident, to be tested for consistency against the incident model specification. In our specification model, we require more than just atomic context values—we need a higher-order context hierarchy to specify different levels of detail of the incident and to navigate into the “depth” of such a context (a minimal nested-context sketch in ordinary Python is given at the end of this section). Luckily, such a proposition has already been made [272] and needs only some modifications to the expressions of the cyberforensic context.
In terms of syntax and semantics for Forensic Lucid we benefit in large part, as
the language is based on its predecessor and codecessor Lucid dialects, such as GIPL,
Indexical Lucid, Lucx, Objective Lucid, and JOOIP bound by the higher-order
intensional logic (HOIL) that is behind them. This work continues to formally specify
the operational semantics of the Forensic Lucid language extending the previous
related work [300, 304].
We further define the specification of hierarchical evidential context expressions and the
operators on them when modeling the examples while illustrating related fundamental
concepts, operators, and application of context-oriented case modeling and evaluation.
Common Lisp, unlike Lucid, entirely lacks contexts built into its logic, syntax, and
semantics, thereby making the implementation of the cases more clumsy and inefficient
(i.e., highly sequential) [137]. Our GIPSY system [191, 264, 362, 366, 370] offers a
distributed demand-driven evaluation of Lucid programs in a more efficient way and is
more general than the LISP’s compiler and run-time environment [307, 312].
Thus, HOIL, GIPSY are the solutions here.
3. Addressing the GIPSY gaps.
To enhance the scalability of GIPSY a number of APIs needed to be updated, redesigned, or designed from scratch to accommodate new middleware technologies, compilers, and the run-time evaluation system. We designed components and amended existing components of the distributed evaluation engine within the General Intensional
Programming System (GIPSY) enabling measurements of a wide array of run-time parameters for the purposes of scalability studies, its configuration management, scheduling, and administration with the General Manager Tier (Section 6.2.2.1) component
of the GIPSY’s multi-tier architecture [161]. We made advances in the software engineering design and implementation of the multi-tier run-time system for the General
Intensional Programming System (GIPSY) by further unifying the distributed technologies used to implement the Demand Migration Framework (DMF, Section 6.1.3)
in order to streamline distributed execution of hybrid intensional-imperative programs
using Java [161]. The bulk of multi-tier implementation and scalability studies following the redesign of the APIs were carried out by Han [160] and Ji [191], and graphical
configuration management by Rabah et al . [393].
The compiler API support has been further enhanced to allow JOOIP (Section 4.3.2.3)
compilation, the semantics and PoC implementation of which was subsequently carried
out by Wu [528].
The author Mokhov supported the design and unification of these APIs and the integration effort within GIPSY following up with his own extensions to support multiple
GEE (Section 6.2.2) backends and a compiler to support Forensic Lucid, including
the complete GIPSY Type System and Theory (see Appendix B). All this work resulted
in a number of contributions to GIPSY and its frameworks detailed in Chapter 8.
As a part of the near-future work to improve scalability of information management and presentation, and with the ongoing advances in interaction design [404],
a visualization project was proposed to further enhance the usability of the discussed
language, system, and the related tools. Lucid programs are dataflow programs
and can be visually represented as data flow graphs (DFGs) and composed visually.
Forensic Lucid includes the encoding of the evidence (representing the context of
evaluation) and the crime scene modeling in order to validate claims against the model
and perform event reconstruction, potentially within large swaths of digital evidence.
To aid investigators to model the scene and evaluate it, instead of typing a Forensic
Lucid program, we propose to expand the design and implementation of the Lucid
DFG programming onto Forensic Lucid case modeling and specification to enhance
the usability of the language and the system and its behavior in 3D. We briefly discuss the related work on visual programming and DFG modeling in an attempt to
define and select one approach or a composition of approaches for Forensic Lucid
based on various criteria such as previous implementation, wide use, formal backing in
terms of semantics and translation [311] (see Appendix E). Reaching out into a different disciplinary area of specific interest here is a recent novel concept of documentary
knowledge visual representation in illimitable space introduced by Song in [437]. That
work may scale when properly re-engineered and enhanced to act as an interactive 3D
window into the evidential knowledge grouped into semantically linked “bubbles”
visually representing the documented evidence; by moving such a contextual window, or rather, navigating within the theoretically illimitable space, an investigator can
sort out and re-organize the knowledge items as needed prior to launching the reasoning
computation. The interaction aspect would be of particular usefulness to open up
the documented case knowledge and link the relevant witness accounts. This is a proposed solution to the large-scale visualization problem of large volumes of “scrollable”
evidence that does not need to be visualized all at once, but can instead be kept as if in a storage depot.
(However, the actual realization of this solution is deferred to a later time.)
What follows are the details of our solution along with the related work [274, 313].
1.5
Scope
Out of the problems and gaps detailed in the previous sections, we summarize the more
concrete objectives within the actual scope of this thesis in Section 1.5.1 and the items that
are outside the scope of the thesis and more destined for the immediate future work in
Section 1.5.2.
1.5.1
Thesis Objectives
The primary objectives are to design the introduced Forensic Lucid incident specification/scripting language in terms of formal syntax and semantics and show their use through
several case studies. Specifically, in the point form the objectives are to produce:
1. Forensic Lucid (Chapter 7)
(a) Forensic Lucid syntax (Section 7.3)
(b) Forensic Lucid operational semantics (Section 7.4)
(c) Hierarchical higher-order context specification (Section 7.2.2)
(d) Operators and “transition functions” (Section 7.3.4)
(e) Observation specification with credibility (Section 7.4.2)
2. Forensic Lucid parser (Section 8.1)
3. Run-time evaluation environment (engine) design (Section 8.2)
4. Example cases (see Chapter 9)
1.5.2
Out of Scope
• Large scale visualization of the case and systems
• IDS integration and inter-operation
• Other future work items detailed in Section 10.4
1.6
Summary
To summarize, we believe and show that the intensional approach with a Lucid-based dialect
to the problem is an asset in the fields of cyberforensics and intensional logic and programming
as it is promising to be more practical and usable than the plain FSA/LISP in the end [305,
307, 312]. Since Lucid was originally designed and used to prove correctness of programs [24,
25, 26, 509], and is based on the temporal logic functional and data-flow languages, we can
relatively easily adapt its computational machinery to backtracking in proving or disproving
the evidential statements and claims in the investigation process as simply an evaluation of
a forensic expression that evaluates to either true or false given all the facts in the
formally specified context [312], providing a set of event reconstruction traces (backtraces).
For that we defined the novel Lucid dialect with a new set of primitives predefined for
forensic tasks. Unlike the LISP-based system implementing the finite state automata, we
still retain the flexibility of parallel evaluation of several claims or several components of one
claim at the same time by relying on the GIPSY’s demand-driven general eduction engine
(GEE) whose backend is powered by various distributed systems technologies such as the
DMS [242, 383, 384, 385, 498, 499, 501] and multi-tier architecture [160, 161, 191, 362]. We
also retain the generality of the approach [305, 307, 312].
1.6.1
Science and Technology
Here we summarize the science and technology aspects behind this work that we use and rely
on as tools and techniques that support the multifaceted nature of this thesis in a variety of
ways. The detailed description of these aspects follows in the background chapters in Part I.
The specific scientific contributions and the related work done by others and some by the
author come from the Intensional Logic and its extensions for formalization (Section 1.6.1.1),
cyberforensic analysis (Section 1.6.1.2), and GIPSY (Section 1.6.1.4).
1.6.1.1
Intensional Logic and Soft Computing
From the logic perspective, it was shown that one can model computations (a computation is also
a basic formal unit in the finite state machines in [136, 137]) as logic [222]. When armed
with contexts [508] as first-class values and a demand-driven run-time model adopted in the
implementation of the Lucid-family of languages [362, 365, 370, 379, 396, 473, 515] that
constrains the scope of evaluation in a given set of dimensions, we come to the intensional
logic and the corresponding programming artifact. In a nutshell, we model our forensic
computation unit in intensional logic and implement it in practice within an intensional
programming platform—the General Intensional Programming System (GIPSY) [264, 362,
366, 370]. We project a lot of potential for the results of this work to be successful, beneficial,
and usable for cyberforensics investigation as well as simulation and intensional programming
communities [307, 312].
From the intensional logic we move up to the concept of higher-order intensional logic
(HOIL) [301, 302] since Forensic Lucid’s constructs are those of a functional language. To
accommodate the notion of credibility in our formalism [274, 313], we then move to something
we define as the higher-order intensional fuzzy logic (HOIFL), which is HOIL combined with the Dempster–Shafer mathematical theory of evidence, since we are dealing with possibly inexact reasoning
present in intelligent soft computing systems [204]. For the latter we use the mentioned
PRISM tool [467] to check our models. This affects the earlier stated objectives 1a, 1b, and
1e.
For in-depth background on intensional logic and Dempster–Shafer theory see Section 3.2
and Section 3.3.2 respectively.
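For quick reference, and independently of the Forensic Lucid formalism itself, a standard textbook statement of Dempster’s rule of combination (the form of the rule we rely on; notation here is generic and not tied to a particular chapter) for two basic belief assignments m1 and m2 over a frame of discernment Θ reads:

    (m1 ⊕ m2)(A) = (1 / (1 − K)) Σ_{B ∩ C = A} m1(B) m2(C),  for A ⊆ Θ, A ≠ ∅,
    K = Σ_{B ∩ C = ∅} m1(B) m2(C),   (m1 ⊕ m2)(∅) = 0,

where K measures the conflict between the two sources of evidence.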
1.6.1.2
Cyberforensic Analysis
Cyberforensic analysis has to do with automated or semi-automated processing of, and reasoning about, digital evidence, witness accounts, and other details from cybercrime incidents
(involving computers, but not limited to them). Analysis is one of the phases in cybercrime investigation (while the other phases focus on evidence collection, preservation, chain
of custody, information extraction that precede the analysis) [84]. The phases that follow
the analysis are formulation of a report and potential prosecution or exoneration, typically
involving expert witnesses [83, 300, 311].
There are quite a few techniques, tools (hardware and software), and methodologies that
have been developed for the mentioned phases of the cybercrime investigation. A lot of
attention has been paid to tool development for evidence collection and preservation; a few
tools have been developed to aid data “browsing” on the confiscated storage media, log files
(e.g., Splunk [439]), memory, and so on. Fewer tools have been developed for case analysis of
the data (e.g., Sleuthkit [61]), and the existing commercial packages (e.g., EnCase [54, 149]
or FTK [2]) are very expensive. Even fewer tools exist for case management, event modeling,
and event reconstruction, especially with a solid formal theoretical base [300, 311]. More
in-depth background on cyberforensics is found in Chapter 2.
Forensic Lucid’s design and implementation, and its theoretical base, are established
in this work (see Chapter 7). In the cyberforensic analysis, Forensic Lucid is designed
to be able to express in a program form the encoding of the evidence, witness stories, and
evidential statements, that can be tested against claims to see if there is a possible sequence
or multiple sequences of events that explain a given “story”. As with Gladyshev’s approach, it is designed to aid investigators to avoid ad-hoc conclusions and have them look at
the possible explanations the Forensic Lucid program “execution” would yield and refine
the investigation, as Gladyshev has previously shown [135, 136, 137] where hypothetical
investigators failed to analyze all the “stories” and their plausibility before drawing conclusions in the cases under investigation [300, 311]. This work improves on Gladyshev’s work
by being more usable, scalable, expressive, and concise, while dealing with credibility and
incorporating the good ideas from Gladyshev.
1.6.1.3
Context-Orientation
Context-oriented computing and reasoning emerged as a domain with a lot of research going
into it from various aspects as context is something that provides us with meaning. This
includes mobile computing, semantic web [535] and related technologies for natural language
understanding [413, 414, 522], intelligent agent computing and distributed systems [42, 414,
454], description logic and ontologies [538, 538], web services and service-oriented architectures [66, 103, 432, 534], human-computer interaction [102], security [538] and environment
for requirements engineering in software engineering [254, 401, 413, 482, 522], data mining
and pattern recognition [63, 413], among others. Context also helps to deal with the uncertainty humans sometimes face in an unknown situation, using some form of a
dialectic approach based on incomplete information [50].
Figure 1: Boyd’s OODA loop [51]
Context specification and evaluation is also at the core of the Isabelle/HOL interactive
theorem proving framework within its ML implementation [518].
Context helps in making decisions about a situation or investigation pertinent to the environment where the situation is taking place. Boyd came up with the notion of the Observe-Orient-Decide-Act (OODA) loop (see Figure 1; the image is by P. E. Moran, reproduced under the CC BY 3.0 license from Wikipedia’s http://en.wikipedia.org/wiki/OODA_loop page) when describing fighter pilots in a combat
situation [51]: learning the context of their environment, reading out flight parameters, and receiving information from mission control about what is known about the enemy in order to
plan the engagement; then, when engaging, the process continues, albeit much more
rapidly. Observations give the evidential parameters as an input of raw observed properties,
that formulate the initial context of the situation. The Orient step can be termed as the set
of context operations, the most important part of the loop, where contextual knowledge is
applied to filter out less relevant data about the environment, enemy, and own tools, through
the available constraints before arriving at a Decision of how to Act next to alter the environment and situation further to own advantage and respond to the changes introduced by the
enemy for each iteration via the feedback loops. Boyd’s military approach of the OODA loop
was further expanded to a more complex multi-perspective view of the business world [477]
for decision making. (In both cases the approach was to try to run through own loop faster
than the opponent’s to gain advantage.)
Context-orientation in security and investigation goes through a similar process, but on a different scale. Context navigation is done with appropriate operators from the
existing evidence before arriving at a conclusion, and a possible enforcement action (e.g., enabling firewall restriction, bringing network port down, or intensive follow-up investigation),
or a production of a new context to explore further possibilities, options, and hypotheses.
This approach in part is also taken by, e.g., Rapid7, the makers of the Metasploit penetration
testing suite and their related systems.
A number of approaches have been created to represent contextual knowledge and reason
about it. The common mainstream popular way appears to be with ontologies and Web Ontology Language (OWL) [151] and using description logics [521, Chapters 10, 19]. Physically,
it is typically a kind of XML-based specification. Aspect-oriented programming (AOP) in
a way also introduced the notion of context (see Section 8.2.1, page 213). Intensional logic (see
Section 1.6.1.1, page 13) has been built around the notion of context nearly from the start
and can encompass all the mentioned concepts, logics, and it has a nice executable formalism
to go along. We stick with the intensional approach as arguably the oldest sound approach
on the scene, with solid foundations for formal representation of contextual knowledge and
reasoning about it in one concept, instantiated in Lucid. For further in-depth discussion
please refer to Section 6.1.4 and Section 3.2.
Figure 2: The GIPSY logo representing the distributed nature of GIPSY
1.6.1.4
The General Intensional Programming System (GIPSY)
The General Intensional Programming System (GIPSY) has been built around the mentioned Lucid family of intensional programming languages that rely on the higher-order
intensional logic (HOIL) to provide context-oriented multidimensional reasoning of intensional expressions. HOIL combines functional programming with various intensional logics
(Section 1.6.1.1) to allow explicit context expressions to be evaluated as first-class values that
can be passed as parameters to functions and return as results with an appropriate set of
operators defined on contexts. GIPSY’s frameworks are implemented in Java as a collection
of replaceable components for the compilers of various Lucid dialects and the demand-driven
eductive evaluation engine that can run distributively (Figure 2; the logo image is by Paquet, 2005). GIPSY provides support
for hybrid programming models that couple intensional and imperative languages for a variety of needs. Explicit context expressions limit the scope of evaluation of mathematical
expressions (effectively a Lucid program is a mathematics or physics expression constrained
by the context) in tensor physics, regular mathematics in multiple dimensions, etc., and for
cyberforensic reasoning as one of the specific use-cases of interest of this thesis. In return,
some of this thesis’ work also provides GIPSY with more application scenarios to prove its
applicability to solve different kinds of problems. Thus, GIPSY is a support testbed for
HOIL-based languages some of which enable such reasoning, as in formal cyberforensic case
analysis with event reconstruction. In this thesis we discuss in detail the GIPSY architecture,
its evaluation engine and example use-cases [302] in Chapter 6 and Chapter 8 respectively.
1.6.2
Research Approach Overview
As it is becoming evident, the research approach presented in this thesis is multifaceted
drawing on a number of aspects. These are visually presented in Figure 3 summarizing the
overall research. This figure is drawn up from an inspiration of the multi-tier architecture
of GIPSY depicted in Figure 2, with the faces augmented and extended from the logotype
of the book by Jordan and Alaghband [196], and lately in part on the context-orientation
in the OODA loop in Figure 1. Figure 3 depicts at the center the core contribution
of this thesis, Forensic Lucid, surrounded by all the supporting tools, techniques, and
methodologies from logic programming, to algorithms, to programming languages, to the
compile- and run-time middleware systems architecture, and to the data sources used for
case evaluations.
Figure 3: Overall multifaceted research summary
1.6.3
Thesis Organization
Due to the multifaceted nature of this thesis, it is organized in parts covering the different
facets to make it easier for the reader to navigate and read the relevant parts of the material,
including the chapter, page, section, figure, etc. hyperlinks provided in the electronic
version. A lot of it is dedicated to the background work, primarily by others and by the author
himself, to provide the necessary setting, but the background chapters are not required to be
read in a strict sequence and the readers may pick and choose the chapters of interest to read
based on a particular topic or discussion found in this chapter or in the core methodology
part. Some chapters are short because their material did not fit into any other chapters, while other
chapters are significantly more comprehensive depending on their material’s importance for
this thesis.
We begin by reviewing the relevant background knowledge (Part I) on cyberforensics
(Chapter 2), mathematical and logic foundations (Chapter 3) where we review the notion
of intensional logic and programming for the unaware reader, the Lucid programming language (Chapter 4), data mining and pattern recognition aspects (Chapter 5), the General
Intensional Programming System (Chapter 6). We subsequently present the refined syntax
and the semantics specification of the Forensic Lucid language properly attributing the
inherited language constructs and rules, and the new extensions followed by the supporting
components and systems design and implementation in Part II. We then proceed with the
evaluation of the approach using a number of case studies followed by concluding remarks
in Part III. After the Bibliography, appendices provide additional details about the theory, design, and implementation aspects supporting the core concepts presented in the main
chapters.
1.6.4
Publications and Contributions
Here are several works that are directly related to or support this research as far as the publication aspect is concerned. This section lists the select related publications on Forensic Lucid
(Section 1.6.4.1); GIPSY (Section 1.6.4.2); other aspects related to data mining, pattern
recognition, networking and security (Section 1.6.4.3); and self-forensics (Section 1.6.4.4).
These contributions provide some initial solutions addressing some of the earlier stated gaps
and thesis objectives.
1.6.4.1
Forensic Lucid
• S. A. Mokhov, J. Paquet, and M. Debbabi. Reasoning about a simulated printer case investigation
with Forensic Lucid. In P. Gladyshev and M. K. Rogers, editors, Proceedings of ICDF2C’11, number
0088 in LNICST, pages 282–296. Springer, Oct. 2011. Submitted in 2011, appeared in 2012; online at
http://arxiv.org/abs/0906.5181
• S. A. Mokhov, J. Paquet, and M. Debbabi. Towards automated deduction in blackmail case analysis
with Forensic Lucid. In J. S. Gauthier, editor, Proceedings of the Huntsville Simulation Conference
(HSC’09), pages 326–333. SCS, Oct. 2009. Online at http://arxiv.org/abs/0906.0049
• S. A. Mokhov. Encoding forensic multimedia evidence from MARF applications as Forensic Lucid
expressions. In T. Sobh, K. Elleithy, and A. Mahmood, editors, Novel Algorithms and Techniques in
Telecommunications and Networking, proceedings of CISSE’08, pages 413–416, University of Bridgeport, CT, USA, Dec. 2008. Springer. Printed in January 2010
• S. A. Mokhov, J. Paquet, and M. Debbabi. Formally specifying operational semantics and language
constructs of Forensic Lucid. In O. Göbel, S. Frings, D. Günther, J. Nedon, and D. Schadt, editors, Proceedings of the IT Incident Management and IT Forensics (IMF’08), LNI140, pages 197–216. GI, Sept.
2008. Online at http://subs.emis.de/LNI/Proceedings/Proceedings140/gi-proc-140-014.pdf
• S. A. Mokhov, J. Paquet, and M. Debbabi. On the need for data flow graph visualization of Forensic
Lucid programs and forensic evidence, and their evaluation by GIPSY. In Proceedings of the Ninth
Annual International Conference on Privacy, Security and Trust (PST), 2011, pages 120–123. IEEE
Computer Society, July 2011. Short paper; full version online at http://arxiv.org/abs/1009.5423
• S. A. Mokhov, J. Paquet, and M. Debbabi. Towards automatic deduction and event reconstruction using Forensic Lucid and probabilities to encode the IDS evidence. In S. Jha, R. Sommer, and
C. Kreibich, editors, Proceedings of RAID’10, LNCS 6307, pages 508–509. Springer, Sept. 2010
• S. A. Mokhov. Enhancing the formal cyberforensic approach with observation modeling with credibility
factors and mathematical theory of evidence. [online], also in ;login: vol. 34, no. 6, p. 101, Dec. 2009.
Presented at WIPS at USENIX Security’09, http://www.usenix.org/events/sec09/wips.html
1.6.4.2
GIPSY
• S. A. Mokhov and J. Paquet. Using the General Intensional Programming System (GIPSY) for
evaluation of higher-order intensional logic (HOIL) expressions. In Proceedings of SERA 2010, pages
101–109. IEEE Computer Society, May 2010. Online at http://arxiv.org/abs/0906.3911
• S. A. Mokhov and J. Paquet. A type system for higher-order intensional logic support for variable
bindings in hybrid intensional-imperative programs in GIPSY. In T. Matsuo, N. Ishii, and R. Lee,
editors, 9th IEEE/ACIS International Conference on Computer and Information Science, IEEE/ACIS
ICIS 2010, pages 921–928. IEEE Computer Society, May 2010. Presented at SERA 2010; online at
http://arxiv.org/abs/0906.3919
• S. A. Mokhov, J. Paquet, and X. Tong. A type system for hybrid intensional-imperative programming
support in GIPSY. In Proceedings of C3S2E’09, pages 101–107, New York, NY, USA, May 2009. ACM
• B. Han, S. A. Mokhov, and J. Paquet. Advances in the design and implementation of a multi-tier
architecture in the GIPSY environment with Java. In Proceedings of SERA 2010, pages 259–266. IEEE
Computer Society, 2010. Online at http://arxiv.org/abs/0906.4837
• A. Wu, J. Paquet, and S. A. Mokhov. Object-oriented intensional programming: Intensional Java/Lucid
classes. In Proceedings of SERA 2010, pages 158–167. IEEE Computer Society, 2010. Online at:
http://arxiv.org/abs/0909.0764
• J. Paquet, S. A. Mokhov, and X. Tong. Design and implementation of context calculus in the GIPSY
environment. In Proceedings of the 32nd Annual IEEE International Computer Software and Applications Conference (COMPSAC), pages 1278–1283, Turku, Finland, July 2008. IEEE Computer
Society
1.6.4.3
Data Mining, Pattern Recognition, and Security
• A. Boukhtouta, N.-E. Lakhdari, S. A. Mokhov, and M. Debbabi. Towards fingerprinting malicious
traffic. In Proceedings of ANT’13, volume 19, pages 548–555. Elsevier, June 2013
• E. Vassev and S. A. Mokhov. Developing autonomic properties for distributed pattern-recognition
systems with ASSL: A Distributed MARF case study. LNCS Transactions on Computational Science,
Special Issue on Advances in Autonomic Computing: Formal Engineering Methods for Nature-Inspired
Computing Systems, XV(7050):130–157, 2012. Accepted in 2010; appeared February 2012
• S. A. Mokhov, J. Paquet, M. Debbabi, and Y. Sun. MARFCAT: Transitioning to binary and larger
data sets of SATE IV. [online], May 2012. Submitted for publication to JSS; online at http://arxiv.
org/abs/1207.3718
• S. A. Mokhov. The use of machine learning with signal- and NLP processing of source code to
fingerprint, detect, and classify vulnerabilities and weaknesses with MARFCAT. Technical Report
NIST SP 500-283, NIST, Oct. 2011. Report: http://www.nist.gov/manuscript-publication-search.cfm?pub_id=909407, online e-print at http://arxiv.org/abs/1010.2511
• S. A. Mokhov and M. Debbabi. File type analysis using signal processing techniques and machine
learning vs. file unix utility for forensic analysis. In O. Goebel, S. Frings, D. Guenther, J. Nedon, and
D. Schadt, editors, Proceedings of the IT Incident Management and IT Forensics (IMF’08), LNI140,
pages 73–85. GI, Sept. 2008
• S. A. Mokhov. Towards syntax and semantics of hierarchical contexts in multimedia processing applications using MARFL. In Proceedings of the 32nd Annual IEEE International Computer Software and
Applications Conference (COMPSAC), pages 1288–1294, Turku, Finland, July 2008. IEEE Computer
Society
• M. J. Assels, D. Echtner, M. Spanner, S. A. Mokhov, F. Carrière, and M. Taveroff. Multifaceted
faculty network design and management: Practice and experience. In B. C. Desai, A. Abran, and
S. Mudur, editors, Proceedings of C3S2E’11, pages 151–155, New York, USA, May 2010–2011. ACM.
Short paper; full version online at http://www.arxiv.org/abs/1103.5433
• S. A. Mokhov. Towards security hardening of scientific distributed demand-driven and pipelined
computing systems. In Proceedings of the 7th International Symposium on Parallel and Distributed
Computing (ISPDC’08), pages 375–382. IEEE Computer Society, July 2008
1.6.4.4
Self-Forensics
• S. A. Mokhov, E. Vassev, J. Paquet, and M. Debbabi. Towards a self-forensics property in the ASSL
toolset. In Proceedings of C3S2E’10, pages 108–113. ACM, May 2010
• S. A. Mokhov. The role of self-forensics modeling for vehicle crash investigations and event reconstruction simulation. In J. S. Gauthier, editor, Proceedings of the Huntsville Simulation Conference
(HSC’09), pages 342–349. SCS, Oct. 2009. Online at http://arxiv.org/abs/0905.2449
• S. A. Mokhov and E. Vassev. Self-forensics through case studies of small to medium software systems.
In Proceedings of IMF’09, pages 128–141. IEEE Computer Society, Sept. 2009
• S. A. Mokhov. Towards improving validation, verification, crash investigations, and event reconstruction of flight-critical systems with self-forensics. [online], June 2009. A white paper submitted in response to NASA’s RFI NNH09ZEA001L, http://arxiv.org/abs/0906.1845, mentioned in
http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20100025593_2010028056.pdf
• S. A. Mokhov, J. Paquet, and M. Debbabi. Towards formal requirements specification of self-forensics
for autonomous systems. Submitted for review to J. Req. Eng., 2009–2013
Part I
Background
Chapter 2
Cyberforensics
In this chapter we review the background work in the domain of digital crime investigations.
The bulk of the related literature pertinent to this thesis has been summarized here in detail
for the readers who want to remind themselves of the core literature that is used in this
research (most prominently the FSA formal approach to digital investigations and other
formal approaches in cyberforensics). The major literature cited is the one that is either the
most fundamental and inspiring work for this thesis and/or of some supplemental relevance
and support of this work as this thesis was gradually developed over the years. It is not meant
to be a comprehensive survey of the “cutting edge” of the state of the art on the matter.
More specifically we review the general aspects of Forensic Computing in Section 2.1, the core
formal FSA approach by Gladyshev to evidence representation and event reconstruction with
examples in Section 2.2, some more recent related approaches are discussed in Section 2.3,
and concluding remarks are in Section 2.4.
2.1
Forensic Computing
Gathering and analyzing data in a manner as free from distortion or bias as
possible to reconstruct data or what happened in the past on a system [or a
network] —Dan Farmer, Wietse Venema (1999) [252]
Many ideas in this work come from computer forensics and forensic computing [138,
139, 525]. Both have traditionally been associated with computer crime investigations (cf.,
Section 1.6.1.2) to seize the evidence, live or “dead”, memory contents, disk contents, log
files off any storage and computing devices, followed by, among other things, information
extraction and analysis [308, 322]. There is a wide scope of research done in forensic computing, including formal and practical approaches, methodologies, and the use of associated
tools [21, 22, 85, 136, 137, 152, 357] (see Section 2.1.3) [322]; the annual Digital Forensic
Research Workshop (DFRWS) was established [357] to track progress in the field along with
other venues, such as IMF [138, 139], ADFSL, etc.
Digital forensics is also a prominently growing art in courts of law with practitioners,
including attorneys such as Craig Ball [37], who become more proficient in digital investigation and its tools, but who are not as widely available as attorneys in other law
disciplines due to the very detailed technical aspects of digital investigation practice,
which is usually restricted to technical expert witnesses found in law enforcement and similar
agencies.
Forensic computing relies mostly on approaches to evidence gathering and usage similar
to the traditional way, summarized below by Lee [231] (where the “substance” in item 8
is, in our case, digital data):
Following are the objectives of utilization of forensic evidence found at a crime
scene in any [investigation];
1. Information on the corpus delicti;
2. Information on the modus operandi;
3. Linkage of persons to other persons, objects, or scenes;
4. Linkage of evidence to persons, objects or locations;
5. Determining or eliminating the events and actions that occurred;
6. Disproving or supporting witness statements or testimony;
7. Identification or elimination of a suspect;
8. Identification of unknown substance;
9. Reconstruction of a crime;
10. Providing investigative leads.
H.C. Lee [231]
Forensic computing is broadly separated into two aspects: “dead” analysis and “live”
analysis. The former is more traditional (working on data off a disk or log files some time
after the incident occurred and the computing equipment was powered off) and the latter
emerged and gained popularity to capture any live volatile data that may provide more insight
on the incident while it is still available (such as memory content that may have unencrypted
data and other live data structures, process list, current network connections and traffic flows,
current open files, etc., possibly gathering the live data while the incident is unfolding) [59,
252, 373]. Live forensics according to McDougal [252] often actually constitutes Incident
Response [247] while volatile evidence is “hot” and may still be available. Live forensics is
also common in active “honeypots” deployed [172, 464] to lure attackers to weakened
virtual hosts and networks and observe their activity in order to learn what and
how they attack; the most prominent example is the HoneyNet project [172]. Despite its
growing attractiveness, live forensics has also been criticized for potential risks that it may
introduce [59], e.g., when an attacker deliberately tries to actively circumvent the analysis or feed/leave
bogus and noisy data around disguised as real data, making live data findings sometimes less
reliable and untrustworthy as evidence (unless the attacker is not in a position to poison the
real data or prevent the live analysis tools running on the system under investigation [59]).
In practice both live and dead analyses are often used together (e.g., as done by one of our
operational case studies in Section 9.5). A combination of live forensics and dead analysis in
an autonomic environment is termed self-forensic computing (detailed in Appendix D.2.1).
2.1.1
Tracing for Event Reconstruction
A lot of aspects in forensic computing have to do with tracing of states, activity, and events
via (sys)logging. As a part of the related work on what we call self-forensics (see Appendix D
for an in-depth discussion) there has been some preliminary work done in the past that was
not identified as belonging to that notion. For example, a state-tracing Linux kernel [150]
exhibits some elements of self-forensics by tracing its own state for the purposes of forensic
analysis [322]. AspectJ [29]’s (see Section 8.2.1, page 213 for in-depth discussion) tracing
capability of Java programs provides a way of tracing by the observing aspects collecting
the forensic data (e.g., data structures content) during the forward tracing of the normal
execution flow. (From the self-forensics standpoint this is useful: given that a large number of web-based applications and distributed systems deployed today are written in Java, the use of
AspectJ, which is not coupled with the applications’ source code and can observe application
objects independently, provides a good basis for a self-forensics middleware [322].)
2.1.2
Computer Anti-Forensics
There are opposing forces to the forensic analysis (computer forensics—CF) and investigation that stand in the way and are also a part of (anti-)forensic computing (computer
anti-forensics—CAF). Such forces aim at attacking forensic tools or techniques.
As an example, Dahbur and Mohammad introduced [76] the anti-forensics challenges
in a brief survey of tools and techniques in the field of anti-forensics together with their
classification [288] (excluding network anti-forensics aspects) [288]. In doing so, the authors
took the white-hat position on the problem to combat anti-forensics (i.e., the implied “anti-anti-forensics”) for successful investigation. They lay a foundation for the terminology of
the anti- side [288]. Then follows the problem space definition as a context in the geopolitical spread of data usage on the Internet worldwide [471] including the definition of the
relevant and introductory terminology and literature [288]. That allows the authors to raise
several sets of classification categories from the surveyed literature based on attack targets,
(non-)traditional techniques, functionality, and distinguishing between anti-forensics (forensic
analysis prevention by data hiding via cryptographic or steganographic tools) and counter-forensics (direct attack on forensic tools) [288]. They examine the problem from the point
of view of constraints found in CF: temporal, financial, and other environmental aspects.
This prompts them to explore more deeply the challenges posed by CAF to the investigator by describing the evolution of the privacy technologies available to users, encryption, compression
bombs, cloud computing, steganography, etc.—overall the nature of the digital evidence and
where CAF attacks may come from in the attempt to delay or halt the investigation [288].
On a related topic, Dahbur and Mohammad overlooked some of the related work that
can help address some of the challenges posed by CAF, specifically on spectral file type
analysis via machine learning instead of magic signatures, which can detect file types with
relatively good precision when the magic numbers are altered ([290], Section 5.3, page 107).
They also overlook standard Unix utilities that have existed for ages and that can change
timestamps, for example, touch [409], and what possible attacks on such tools may be [288].
The machine learning approach (Chapter 5) has also been shown to work quite reliably for
network forensics [49], which in part can address the CAF problem at the network layer.
2.1.3
Forensic Tools, Techniques, and Methodologies
Dan Farmer and Wietse Venema are generally credited (1999) with the creation
of computer forensics as we know it today. They are also the author of one of
the [first] freeware tools for doing forensics named The Coroner’s Toolkit (TCT).
While this tool suit has generally been expanded and enhanced by many others,
it certainly is the basis of modern computer forensics at least within the *NIX
world. —Monty McDougal [252]
This section briefly summarizes the tools and techniques surveyed during this work. The
reader is assumed to have some familiarity with or exposure to the languages, tools, techniques, and the concepts listed here. If it is not the case, please refer to the references cited
on the subjects in question.
Digital forensic investigation tools have to do with various computing aspects, such as
most commonly memory, log, and disk analysis in search for all kinds of evidence, cataloging
it properly, and presenting the results. Other aspects in digital investigation touch various
operating systems, especially after a compromise due to bad security controls or a misuse [19,
434], including virtual machine security [356], since virtualization and cloud computing
are gaining a lot of prominence lately. Related forensic computing tools also have to do
with malware analysis [185, 235, 433] and fingerprinting of malware and its traffic [49, 289,
314]. These activities are becoming a norm for certified computer and network security
professionals [54, 97, 444].
There are several forensic toolkits in the open-source, academic, and commercial worlds.
Dead/live analysis tools include Sleuthkit along with the chain of custody support, Helix [374]
with common tools for most OSes such as TCT [461], Sleuthkit and its Autopsy Browser [60,
61], Windows Forensics Toolkit (WFT) [252], EnCase [149], FTK [2], and others.
We project to have our work included into one or more of those, either as a plug-in if the
host environment is in Java, like the Forensic Toolkit as JPF Plug-ins [21, 22, 85], Ftklipse [225,
226, 227], or others [249], or as a standalone tool by itself for inclusion into more general
forensic toolsets for the preliminary classification of binary data extracted from files,
alongside stegdetect [386] and many others. We are considering making it work
with the Linux Sleuthkit [60, 61] and commercial tools, such as FTK [2], EnCase [54, 149]
and inclusion in the relevant Linux distributions as a part of our future work [290] (see
Section 10.4).
2.1.4
Formal Cyberforensic Analysis
An increasingly important aspect of forensic computing has to do with formalisms recently
developed, or being developed, for the process of conceiving and applying scientific methodology
to cyberforensic investigations. This aspect is a primary interest and pursuit of this thesis.
The first more prominent work in that direction discussed is that of Gladyshev et al. [83, 136,
137], followed by that of Debbabi et al. from the Computer Security Laboratory, Concordia
University, from [21, 22, 85] (subsequently discussed in Section 2.3), and more specifically of
the author Mokhov et al. [269, 274, 300, 304, 307, 310], which comprise the contributions of
this thesis. Thus, this chapter’s primary remaining focus is on such a formal methodology,
detailed further in Section 2.2, that serves as one of the core founding principles of the
Forensic Lucid foundations described in Chapter 7.
2.2
Gladyshev’s Formal Approach to Cyberforensic Investigation and Event Reconstruction
For the readers to better understand the methodology and contribution of this thesis
(presented further in Chapter 7), we review in depth the details of Gladyshev’s solution
to forensic formalization here along with the two example cases. These formalisms of the
FSA approach are recited from the related works of Gladyshev et al. [135, 136, 137].
2.2.1
Overview
This section reviews in depth the first formal approach to cyberforensic analysis mentioned earlier (Section 1.6.1.2) that appeared in the works [135, 136, 137] by Gladyshev et al. To recapitulate, Gladyshev [135] relies on finite state automata (FSA) and their transformation and
operation to model evidence and witness accounts, and the subsequent evaluation of their
possible explanations (meanings). These are specifically recited in Section 2.2.2 (FSA and
terminology definitions), Section 2.2.3 (approach to backtracing for event reconstruction),
and Section 2.2.4 (evidence formalization, the core Gladyshev’s contribution [135]). Additionally, his works present two example use-cases: the ACME Printing Case and Blackmail
Case investigations, reviewed subsequently in Section 2.2.5.
2.2.2
Definitions
In Gladyshev’s formalization [135], a finite state machine is a sequence of four elements
T = (Q, I, ψ, q), where:
• Q is a finite set of all possible states
• I is a finite set of all possible events
• ψ : I × Q → Q is a transition function that determines the next state for every possible
combination of each state and event
• q ∈ Q is the current system state
A transition is the process of state change [83, 135]. Transitions are instantaneous (no
duration). A finite computation is a non-empty finite sequence of steps c = (c0 , c1 , . . . , c|c|−1 )
where each step is a pair cj = (clj , cqj ), in which clj ∈ I is an event, cqj ∈ Q is a state, and
any two steps ck and ck−1 are related via a transition function: ∀k, such that 1 ≤ k < |c|,
cqk = ψ(clk−1 , cqk−1 ). The set of all finite computations of the finite state machine T is denoted
CT [83, 135].
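To make these definitions concrete before proceeding, the following minimal Java sketch (ours, not Gladyshev’s or GIPSY’s code; all state and event names are hypothetical) shows a transition function ψ : I × Q → Q and the construction of a finite computation by repeatedly applying it.

    import java.util.*;

    // A minimal, hypothetical FSM T = (Q, I, psi, q) and the construction of a finite
    // computation by repeatedly applying the transition function psi (illustration only).
    final class ToyStateMachine {
        private final Map<String, String> psi = new HashMap<>(); // psi : I x Q -> Q, keyed "event|state"

        void addTransition(String event, String state, String nextState) {
            psi.put(event + "|" + state, nextState);
        }

        // A step pairs the event taken with the state it was taken in.
        record Step(String event, String state) { }

        // Builds c = (c_0, ..., c_{|c|-1}); each next state is psi(previous event, previous state).
        List<Step> computation(String initialState, List<String> events) {
            List<Step> c = new ArrayList<>();
            String q = initialState;
            for (String e : events) {
                c.add(new Step(e, q));
                q = psi.get(e + "|" + q);
            }
            return c;
        }

        public static void main(String[] args) {
            ToyStateMachine t = new ToyStateMachine();
            t.addTransition("print", "idle", "busy"); // hypothetical events and states
            t.addTransition("done", "busy", "idle");
            System.out.println(t.computation("idle", List.of("print", "done", "print")));
        }
    }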
A run is a sequence of computations r ∈ (CT )|r| , such that if r is non-empty, its first element
is a computation r0 ∈ CT , and for all 1 ≤ i < |r|, ri = ψ(ri−1 ), where function ψ discards the
Figure 4: Run of a computation c [135]
first element of the given computation [83, 135]. For two computations x ∈ CT and y ∈ CT ,
a relationship y = ψ(x) exists, if and only if x = x0 .y. The set of all runs of the finite state
machine T is denoted RT . The run of computation c is a run, whose first computation is
c [83, 135]. Any run r is completely determined by its length and its first computation: a
computation is a sequence, a run is a computation plus length; the run defines a progress
within a computation (see Figure 4). Gladyshev uses runs as an explanation model of an
observation [83, 135].
A partitioned run is a finite sequence of runs pr ∈ (RT )|pr| , such that concatenation of its
elements in the order of listing is also a run [83, 135]:
(pr0 .pr1 .pr2 . · · · .pr|pr|−1 ) ∈ RT
(2.2.2.1)
A set of all partitioned runs is denoted P RT [83, 135]. A partitioning of run r ∈ RT is a
partitioned run denoted prr , such that concatenation of its elements produces r:
(prr0 .prr1 .prr2 . · · · .prr|pr|−1 ) = r
(2.2.2.2)
A partitioned run is used as an explanation model of an observation sequence [83, 135]. The
condition on the concatenation is to reflect the fact that observations happened after each
other, without gaps in time. A partitioned run is determined by a sequence of pairs where
the first element of a pair is a computation and the second element is a length [83, 135].
2.2.3
Back-tracing
Further, Gladyshev defines the inverse of ψ, a function ψ⁻¹ : CT → 2^CT . For any computation y ∈ CT , it identifies the subset of computations whose tails are y: ∀x ∈ ψ⁻¹(y), y = ψ(x).
In other words, ψ⁻¹ back-traces the given computation [83, 135]. Subsequently, a modified
function Ψ⁻¹ : 2^CT → 2^CT is defined as: for Y ⊆ CT , Ψ⁻¹(Y) = ⋃y∈Y ψ⁻¹(y). This definition, in addition to uniformity, allows for easier implementation [83, 135]. Both inverse
functions are illustrated in Figure 5.
Figure 5: Backtracking example in ψ −1 (y) and Ψ−1 (Y ) [135]
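As an illustration of these operations (our sketch; the computation names and arrows below are hypothetical and not those of Gladyshev’s figures), ψ⁻¹ and Ψ⁻¹ can be realized over a finite map from each computation to its tail:

    import java.util.*;

    // Back-tracing over a finite set of computations: psi maps each computation (named
    // abstractly, e.g., "c5") to its tail; psi^-1 and Psi^-1 invert that map (illustration only).
    final class BackTrace {
        private final Map<String, String> psi;

        BackTrace(Map<String, String> psi) { this.psi = psi; }

        // psi^-1(y): all computations whose tail is y.
        Set<String> inversePsi(String y) {
            Set<String> result = new HashSet<>();
            for (Map.Entry<String, String> e : psi.entrySet())
                if (e.getValue().equals(y)) result.add(e.getKey());
            return result;
        }

        // Psi^-1(Y): the union of psi^-1(y) over all y in Y.
        Set<String> inversePsi(Set<String> ys) {
            Set<String> result = new HashSet<>();
            for (String y : ys) result.addAll(inversePsi(y));
            return result;
        }

        public static void main(String[] args) {
            // Hypothetical arrows: psi(c2) = c1, psi(c3) = c1, psi(c4) = c2, psi(c5) = c3.
            BackTrace bt = new BackTrace(Map.of("c2", "c1", "c3", "c1", "c4", "c2", "c5", "c3"));
            System.out.println(bt.inversePsi("c1"));               // the set {c2, c3}
            System.out.println(bt.inversePsi(Set.of("c1", "c2"))); // the set {c2, c3, c4}
        }
    }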
2.2.4
Formalization of Evidence
Gladyshev formalizes the evidence (observations) from the observed event flow understanding
and storytelling [135]. Every piece of evidence tells its own “story” of the incident. The goal of
event reconstruction can be seen as a combination of stories told by witnesses and by various
pieces of evidence to make the description of the incident as precise as possible [83, 135].
An observation is a statement that system behavior exhibited some property P continuously
for some time. Formally, it is defined as a triple o = (P, min, opt), where P is the set of all
computations in T that possess the observed property, min and opt are non-negative integers
that specify the duration of an observation [83, 135]. An explanation of observation o is a
run r ∈ RT such that every element of the run r possesses the observed property: for all
0 ≤ i < |r|, ri ∈ P , and the length of the run r satisfies min and opt: min ≤ |r| ≤ (min+opt).
The meaning of observation o is the set Ro ⊆ RT of all runs that explain o [83, 135].
The duration is modeled as lengths of computations. A run explains an observation if
the length of the run satisfies the min and opt requirements, and, each element of the run ri
should possess the property P , i.e., the computation ri ∈ P [83, 135].
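The following small Java sketch (ours, with hypothetical computation names) captures the observation triple and the test of whether a given run explains it:

    import java.util.*;

    // An observation o = (P, min, opt) and the test of whether a run explains it: every
    // element of the run must possess P and min <= |r| <= min + opt (illustration only).
    record Observation(Set<String> property, int min, int opt) {

        boolean explainedBy(List<String> run) {
            if (run.size() < min || run.size() > min + opt) return false;
            return property.containsAll(run);
        }

        public static void main(String[] args) {
            // Hypothetical property P = {c1, c2, c3}, observed for 2 to 3 steps.
            Observation o = new Observation(Set.of("c1", "c2", "c3"), 2, 1);
            System.out.println(o.explainedBy(List.of("c1", "c2")));       // true
            System.out.println(o.explainedBy(List.of("c1", "c4", "c2"))); // false: c4 lacks P
            System.out.println(o.explainedBy(List.of("c1")));             // false: too short
        }
    }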
2.2.4.1
Types of Observations
Gladyshev divides observations into several types [83, 135]:
• A fixed-length observation is an observation of the form of (P, x, 0). Any run explaining
it has a length of x.
• A zero-observation is an observation of the form of (P, 0, 0). The only run explaining
it is an empty sequence.
• A no-observation is an observation $ = (CT , 0, infinitum) that puts no restrictions on
computations. The infinitum is an integer constant that is greater than the length of
any computation that may have happened (all runs satisfy this observation).
• A generic observation is an observation with variable length, i.e., y > 0 in (P, x, y).
See Section 2.2.4.5 for more details.
2.2.4.2
Observation Sequence
An observation sequence, per Gladyshev [135], is a non-empty sequence of observations listed
in chronological order:
os = (observationA , observationB , observationC , . . .)    (2.2.4.1)
An observation sequence represents an uninterrupted “eyewitness” (without gaps) story. The
next observation in the sequence begins immediately when the previous observation finishes [83, 135]. Gaps in the story are represented by no-observations [83, 135], i.e.:
$ = (CT , 0, infinitum)
An explanation of observation sequence os is a partitioned run pr such that the length of pr
is equal to the length of os: |pr| = |os|, and each element of pr explains the corresponding
observation of os [83, 135]:
∀i : 0 ≤ i ≤ |os|, pri ∈ Rosi
(2.2.4.2)
os = ( o1  .  o2  .  · · ·  .  on )
pr = ( pr1 .  pr2 .  · · ·  .  prn )
where each pri is a run that is an explanation of oi [83, 135]. The meaning of observation
sequence os is a set P Ros ⊆ 2^((RT)^|os|) of all partitioned runs that explain os. A run r satisfies an
observation sequence os if and only if there exists a partitioning of r that explains os [83, 135].
There may be more than one partitioning of r that explains os as shown in Figure 6. A
computation c satisfies an observation sequence os if and only if there is a run r that satisfies
os and r0 = c [83, 135].
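The existence of such a partitioning can be checked mechanically; the following illustrative sketch (ours, mirroring the (P, min, opt) triple and reusing the property sets of Figure 6) tries every admissible segment length for each observation in turn:

    import java.util.*;

    // A run satisfies an observation sequence iff some partitioning of the run into |os|
    // consecutive segments explains it, the i-th segment explaining the i-th observation.
    final class SatisfiesObservationSequence {
        record Obs(Set<String> property, int min, int opt) { } // mirrors the triple (P, min, opt)

        static boolean satisfies(List<String> run, List<Obs> os) { return satisfies(run, os, 0, 0); }

        // Tries every admissible segment length for observation i starting at position 'from'.
        private static boolean satisfies(List<String> run, List<Obs> os, int i, int from) {
            if (i == os.size()) return from == run.size(); // all observations explained, run consumed
            Obs o = os.get(i);
            for (int len = o.min(); len <= o.min() + o.opt() && from + len <= run.size(); len++) {
                List<String> segment = run.subList(from, from + len);
                if (o.property().containsAll(segment) && satisfies(run, os, i + 1, from + len))
                    return true;
            }
            return false;
        }

        public static void main(String[] args) {
            // The setting of Figure 6: P1 = {S1, S2, S3}, P2 = {S3, S4, S5}, each at least one step.
            int infinitum = 100; // any bound larger than the run length
            List<Obs> os = List.of(new Obs(Set.of("S1", "S2", "S3"), 1, infinitum),
                                   new Obs(Set.of("S3", "S4", "S5"), 1, infinitum));
            System.out.println(satisfies(List.of("S1", "S2", "S3", "S4", "S5"), os)); // true
        }
    }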
2.2.4.3
Evidential Statement
An evidential statement is a non-empty sequence of observation sequences:
es = (osA , osB , osC , . . .)
(2.2.4.3)
where ordering of the observation sequences is not important [83, 135]. Each observation
sequence is a version of the story. Each principal (i.e., witness) will have their own version
(i.e., observation sequence) [83, 135]. An explanation of an evidential statement es is a
os = ((P1 , 1, infinitum), (P2 , 1, infinitum))
P1 = { S1 , S2 , S3 }
P2 = { S3 , S4 , S5 }
r = S1 → S2 → S3 → S4 → S5
prA1 = S1 → S2 → S3
prA2 = S4 → S5
prB1 = S1 → S2
prB2 = S3 → S4 → S5
explanationA = ( S1 → S2 → S3 , S4 → S5 )
explanationB = ( S1 → S2 , S3 → S4 → S5 )
where r is a run, pri are various partitions of r (which are also runs), and each explanation is a
partitioned run.
Figure 6: Explanations as partitioned runs of an observation sequence [135]
sequence of partitioned runs spr, such that all elements of spr are partitionings of the same
run:
spr0,0 · spr0,1 · . . . · spr0,|spr0 |−1 =
spr1,0 · spr1,1 · . . . · spr1,|spr1 |−1 =
...
spr|es|−1,0 · spr|es|−1,1 · . . . · spr|es|−1,|spr|es|−1 |−1 = r
In other words, it should be a story that explains all the versions and each element of spr
explains the corresponding observation sequence in es, more specifically: for all 0 ≤ i <
|es|, spri ∈ P Resi [83, 135].
Figure 7: Meaning and explanation hierarchy [135]
The meaning of evidential statement es is a set of all sequences of partitioned runs
SP Res ⊆ (P Res0 × P Res1 × . . . × P Res|es|−1 ) that explains es [83, 135]. An evidential statement is inconsistent if it has an empty set of explanations. The summary of all explanations
is in Figure 7 [83, 135].
2.2.4.4
Event Reconstruction Algorithm
The event reconstruction algorithm (ERA) defined by Gladyshev involves computing the meaning of
the given evidential statement with respect to the given state machine [83, 135]. There are two
types of observation sequences: fixed-length observation sequences and generic observation
sequences (described in the next section). The meanings of individual observation sequences
are combined in order to yield the meaning of the evidential statement [83, 135].
2.2.4.4.1 Fixed-Length Observation Sequences. Here we review Gladyshev’s way
to compute the meaning of fixed-length observation sequences [135]. The function Ψ−1 provides basic operations to automate back-tracing and proceeds as follows. First, take the set
of all computations CT as the starting point and iteratively back-trace it into the past using
Figure 8: Fixed-length event reconstruction [135]
Ψ−1 [83, 135]. At each step, computations that do not possess observed property P are discarded. This is achieved by intersecting the set of back-tracings with the set of computations
that possess the property observed at the current step [83, 135]. The result of the intersection
is then used as an input for the next invocation of Ψ−1 . The process continues until either
all observations are explained, or the set of computations becomes empty. An illustration of
the fixed-length event reconstruction is in Figure 8 [83, 135].
Figure 9: Example computation event sequences
2.2.4.4.2 Example. As an example, Gladyshev calculates the meaning of the following
fixed-length observation sequence [83, 135]:
osAB = (A, 3, 0)(B, 2, 0)
Knowing that:
A = {c6 , c8 , c10 , c12 }
B = {c1 , c2 , c3 , c4 }
CT = {c1 , . . . , c13 }
such that the relationship of the events is as shown in Figure 9. There is an arrow from cx
to cy if ψ(cx ) = cy [83, 135].
2.2.4.4.3 Meaning Computation. The observation osAB is equivalent to a sequence
AAABB. The meaning of osAB can then be computed as follows [83, 135]:
• 2B intersections:
B ∩ CT = B
Ψ−1 (B ∩ CT ) = Ψ−1 (B) = {c4 , c3 , c5 , c7 , c8 , c6 }
B ∩ Ψ−1 (B ∩ CT ) = {c3 , c4 }
Ψ−1 (B ∩ Ψ−1 (B ∩ CT )) = {c6 , c7 , c8 }
• 3A intersections:
A ∩ (Ψ−1 (B ∩ Ψ−1 (B ∩ CT ))) = {c6 , c8 }
Ψ−1 (A ∩ (Ψ−1 (B ∩ Ψ−1 (B ∩ CT )))) = {c10 , c12 }
A ∩ (Ψ−1 (A ∩ (Ψ−1 (B ∩ Ψ−1 (B ∩ CT ))))) = {c10 , c12 }
Ψ−1 (A ∩ (Ψ−1 (A ∩ (Ψ−1 (B ∩ Ψ−1 (B ∩ CT )))))) = {c10 , c12 }
A ∩ (Ψ−1 (A ∩ (Ψ−1 (A ∩ (Ψ−1 (B ∩ Ψ−1 (B ∩ CT ))))))) = {c10 , c12 }
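The intersect-and-back-trace pattern of this computation is straightforward to mechanize. The sketch below (ours; the transition map is a hypothetical five-computation chain, not the graph of Figure 9) reproduces the loop: intersect with the last observed property, then repeatedly back-trace with Ψ⁻¹ and intersect with the next property into the past.

    import java.util.*;

    // The intersect-and-back-trace loop of fixed-length event reconstruction over a
    // hypothetical chain of computations (not the actual graph of Figure 9).
    final class FixedLengthEra {
        private final Map<String, String> psi; // psi(c_x) = c_y: each computation's tail

        FixedLengthEra(Map<String, String> psi) { this.psi = psi; }

        private Set<String> inversePsi(Set<String> ys) { // Psi^-1(Y) as in Section 2.2.3
            Set<String> r = new HashSet<>();
            psi.forEach((x, y) -> { if (ys.contains(y)) r.add(x); });
            return r;
        }

        // properties lists the observed property of every step in chronological order,
        // e.g., (A, A, A, B, B) for osAB; returns the initial computations of explaining runs.
        Set<String> meaning(List<Set<String>> properties) {
            List<Set<String>> rev = new ArrayList<>(properties);
            Collections.reverse(rev);                        // work backwards from the last observation
            Set<String> current = new HashSet<>(rev.get(0)); // last property intersected with C_T
            for (int i = 1; i < rev.size(); i++) {
                current = inversePsi(current);               // back-trace one step into the past
                current.retainAll(rev.get(i));               // keep only computations with that property
            }
            return current;
        }

        public static void main(String[] args) {
            // Hypothetical chain c5 -> c4 -> c3 -> c2 -> c1 (arrows point towards the tail).
            FixedLengthEra era = new FixedLengthEra(Map.of("c5", "c4", "c4", "c3", "c3", "c2", "c2", "c1"));
            Set<String> a = Set.of("c5", "c4", "c3"); // hypothetical property A
            Set<String> b = Set.of("c2", "c1");       // hypothetical property B
            System.out.println(era.meaning(List.of(a, a, a, b, b))); // [c5]
        }
    }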
2.2.4.4.4 Map of Partitioned Runs. To calculate the meaning of an observation
sequence Gladyshev [135] uses a set of partitioned runs. A map of partitioned runs (MPR)
is a representation for a set of partitioned runs. It is a pair pm = (len, C) where C is the
set of computations, and len is a sequence of their lengths. An MPR could represent the
set of all partitioned runs whose initial computations are in C, and whose partitions have
lengths: len0 , len1 , . . . , len|len|−1 [83, 135]. Gladyshev states that the meaning of a fixed-length
observation sequence can be expressed by a single MPR. Let C = {c1 , c2 , . . . , cn } and len =
⟨l1 , l2 , . . . , lm ⟩. Let M P R = (C, len), then its possible illustration is in Figure 10 [83, 135].
2.2.4.5
Generic Observation Sequences
For a generic observation, whose opt ̸= 0, the length of the explaining run is not fixed, but is
bounded between min and min + opt [83, 135] (Section 2.2.4, page 32). A single observation
sequence represents many permutations of linking observed properties, see Equation 2.2.4.4
Figure 10: Example of construction of MPR [83]
and Figure 11 for an example [83, 135].
osAB2 = ((A, 1, 3), (B, 1, 2))
(2.2.4.4)
AB      ABB      ABBB
AAB     AABB     AABBB
AAAB    AAABB    AAABBB
AAAAB   AAAABB   AAAABBB
Figure 11: Generic observation sequence permutations example
As can be seen, there are twelve possible variants of linking properties, which is essentially
a cross-product of the lengths [83, 135]. Every permutation can be represented by a fixed
length observation sequence. The meaning of osAB2 is, then, the union of the meanings of
each variant [83, 135].
(A, 1, 3) = U = {(A, 1, 0), (A, 2, 0), (A, 3, 0), (A, 4, 0)}
(B, 1, 2) = V = {(B, 1, 0), (B, 2, 0), (B, 3, 0)}
⟨(A, 1, 3)(B, 1, 2)⟩ = U × V
For instance, (A, 2, 0)(B, 3, 0) ∈ U × V is written as AABBB, i.e., an observation sequence that has the A-property for two steps followed by three steps that have the B-property [83, 135]. Gladyshev has shown how to calculate the meaning of a fixed-length
observation sequence (see Section 2.2.4.4.4). In order to calculate the meaning of a variable
length observation sequence, one computes the union of the meanings of each fixed-length
observation sequence. This can be modeled as a set of MPRs. Each MPR is the meaning of
a fixed-length observation sequence [83, 135].
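Enumerating the fixed-length variants of a generic observation sequence is a simple cross-product of admissible lengths, as the following sketch (ours) shows for osAB2 of Equation 2.2.4.4:

    import java.util.*;

    // Expanding a generic observation sequence into its fixed-length variants: the
    // cross-product of all admissible lengths of each observation (illustration only).
    final class GenericExpansion {
        record Obs(String label, int min, int opt) { }

        static List<String> variants(List<Obs> os) {
            List<String> result = new ArrayList<>();
            expand(os, 0, "", result);
            return result;
        }

        private static void expand(List<Obs> os, int i, String prefix, List<String> out) {
            if (i == os.size()) { out.add(prefix); return; }
            Obs o = os.get(i);
            for (int len = o.min(); len <= o.min() + o.opt(); len++)
                expand(os, i + 1, prefix + o.label().repeat(len), out);
        }

        public static void main(String[] args) {
            // osAB2 = ((A, 1, 3), (B, 1, 2)) of Equation 2.2.4.4 yields the twelve variants of Figure 11.
            List<String> v = variants(List.of(new Obs("A", 1, 3), new Obs("B", 1, 2)));
            System.out.println(v.size()); // 12
            System.out.println(v);        // [AB, ABB, ABBB, AAB, AABB, AABBB, ..., AAAABBB]
        }
    }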
2.2.4.6
Meaning of an Evidential Statement
First, the meanings of individual observation sequences are computed as described earlier.
Second, Gladyshev combines the meanings of observation sequences into the meaning of the
entire evidential statement. To satisfy the evidential statement, a run must satisfy all of its
observation sequences [83, 135]. Then Gladyshev states the problem of identifying the subset
of runs whose partitionings are present in the meanings of all observation sequences [83, 135].
2.2.4.6.1 Towards the Meaning of an Evidential Statement. To recapitulate,
the meaning of an observation sequence is a set of MPRs. Subsequently, each MPR is the
meaning of a fixed-length observation sequence. Further, an evidential statement is a set
of observation sequences [83, 135]. Therefore, per Gladyshev, the meaning of an evidential
statement is the combination of the sequence of MPRs with each MPR being the meaning
of a particular fixed-length observation sequence of the evidential statement. Accordingly,
computing the meaning of an evidential statement reduces to the problem of how to combine
two MPRs [83, 135].
Let pma = (lena , Ca ) and pmb = (lenb , Cb ) be two MPRs. A run r can be partitioned by
both pma and pmb if and only if two conditions hold [83, 135]:
1. The initial computation of run r belongs to initial computation sets of both MPRs:
r0 ∈ Ca and r0 ∈ Cb [83, 135].
2. Both MPRs have an equal total number of computation steps: Σ lena = Σ lenb . If
Σ lena ≠ Σ lenb , then the two MPRs have no common runs. Otherwise, the common
runs are determined by the common set of initial computations Ca ∩ Cb [83, 135].
2.2.4.6.2 Map of Sequence of Partitioned Runs. Ascending further per Gladyshev [135], a map of a sequence of partitioned runs (MSPR), written out as:
mspr = ((len0 , len1 , . . . , lenn ), C)
is a representation for a set of sequences of partitioned runs [83, 135]. C is the set of initial
computations, and len0 , len1 , . . . , lenn are lists of lengths that describe how to partition runs
generated from the elements of C. An MSPR is proper if and only if
Σ len0 = Σ len1 = . . . = Σ lenn . The combination of two MPRs is defined by function comb() that takes two
MPRs and returns a proper MSPR [83, 135]:
    comb(pmx , pmy ) = ∅                           if Σ lenx ≠ Σ leny or Cx ∩ Cy = ∅,
    comb(pmx , pmy ) = ((lenx , leny ), Cx ∩ Cy )    otherwise.                        (2.2.4.5)
2.2.4.6.3 Use of Combination. Gladyshev gives a typical usage example of the combination (Equation 2.2.4.5) in the example that follows [135]. Suppose that the meanings of
two observation sequences osa and osb are represented by two sets of MPRs called P Ma and
P Mb respectively. The meaning of the evidential statement es = (osa , osb ) is expressed by
the set of proper MSPRs, which is obtained by combining every MPR from P Ma with every
MPR from P Mb [83, 135]:
∀x ∈ P Ma , ∀y ∈ P Mb :   SP Mes = ⋃ comb(x, y)    (2.2.4.6)
This procedure can be extended to an arbitrary number of observation sequences, thus providing a way to calculate the meaning of an arbitrary evidential statement [83, 135]. As a
result, below is the representation of the previously calculated meaning as an MPR in
Section 2.2.4.4.3:
M P R(osAB ) = (⟨2, 2⟩, {c10 , c12 })
2.2.4.6.4 Computing Combinations of MPRs. More examples are illustrated further in the consideration of the following MPRs [83, 135]:
M P R1 = (⟨2, 1, 4⟩, {c1 , c2 , c3 })
M P R2 = (⟨3, 4⟩, {c4 , c5 , c6 })
M P R3 = (⟨4, 4⟩, {c1 , c2 , c3 , c4 })
M P R4 = (⟨5, 2, 1⟩, {c2 , c3 , c5 })
comb(M P R4 , M P R3 ) = ((⟨4, 4⟩, ⟨5, 2, 1⟩), {c2 , c3 })
comb(M P R1 , M P R2 ) = ∅
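To make the combination concrete, the following is a minimal Python sketch of comb(), under the assumption (per Section 2.2.4.6.1) that the length comparison is over the total number of computation steps; the values MPR1–MPR4 and c1–c6 mirror the example above, while the function and variable names themselves are illustrative only, not part of Gladyshev's implementation.

def comb(pm_x, pm_y):
    """Combination of two MPRs per Equation 2.2.4.5 (a sketch)."""
    (len_x, c_x), (len_y, c_y) = pm_x, pm_y
    common = c_x & c_y
    # "len_x = len_y" is read as equality of the *total* number of
    # computation steps, i.e., the sums of the partition lengths.
    if sum(len_x) != sum(len_y) or not common:
        return None                      # the empty combination
    return ((len_x, len_y), common)      # a proper MSPR

MPR1 = ((2, 1, 4), {"c1", "c2", "c3"})
MPR2 = ((3, 4),    {"c4", "c5", "c6"})
MPR3 = ((4, 4),    {"c1", "c2", "c3", "c4"})
MPR4 = ((5, 2, 1), {"c2", "c3", "c5"})

print(comb(MPR4, MPR3))   # lengths (5, 2, 1) and (4, 4), shared computations {c2, c3}
print(comb(MPR1, MPR2))   # None: no common initial computations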
2.2.5
Investigation Cases Examples Using the FSA Approach
This section summarizes the FSA approach from [136, 137] applied to two case studies to show its inner workings [305]. Gladyshev's printing case is reviewed in Section 2.2.5.1 and the blackmail case in Section 2.2.5.2.
We extract the parameters and terms defined in the mentioned works, such as the formalization of the various pieces of evidence and of witnesses telling their stories of a particular incident. The goal is to put such statements together to make the description of the incident as precise as possible and to focus on modeling the core aspect of the case under investigation. As a result, to establish that a certain claim may be true, the investigator has to demonstrate that there are some meaningful explanations of the evidence that agree with the claim. Conversely, to disprove the claim, the investigator has to show there are no explanations of the evidence that agree with that claim [83, 136].
Gladyshev provided an initial proof-of-concept implementation of the algorithms in CMU Common Lisp [137], which we improve by rewriting it in Forensic Lucid [305]. We subsequently re-model these cases using the new approach in Section 9.3 and Section 9.4 to show that it is more usable in the actual investigator's work and can serve as a basis for further development in the area [305] (such as a GUI front end based on data-flow graphs [89, 309]).
Figure 12: Printer Case state machine [135]
2.2.5.1
ACME Printer Case in FSA
This is the first of the cases we re-examine from Gladyshev's FSA/LISP approach [137].
The local area network at some company called ACME Manufacturing consists of
two personal computers and a networked printer. The cost of running the network
is shared by its two users Alice (A) and Bob (B). Alice, however, claims that she
never uses the printer and should not be paying for the printer consumables.
Bob disagrees, he says that he saw Alice collecting printouts. According to the
manufacturer, the printer works as follows [137]:
1. When a print job is received from the user, it is stored in the first unallocated
directory entry of the print job directory.
2. The printing mechanism scans the print job directory from the beginning
and picks the first active job.
3. After the job is printed, the corresponding directory entry is marked as
“deleted”, but the name of the job owner is preserved.
4. The printer can accept only one print job from each user at a time.
5. Initially, all directory entries are empty.
The investigator finds the current state of the printer’s buffer as:
1. Job From B Deleted
2. Job From B Deleted
3. Empty
4. Empty
5. ...
2.2.5.1.1
Investigative Analysis.
If Alice never printed anything, only one directory
entry must have been used, because the printer accepts only one print job from each user
at a time [137]. However, two directory entries have been used and there are no other users
except Alice and Bob. Therefore, it must be the case that both Alice and Bob submitted
their print jobs in the same time frame. The trace of Alice’s print job was overwritten by
Bob's subsequent print jobs. As a result, a finite state machine (Figure 12) is constructed to model the situation [137], indicating the initial state, the other possible states, and how they are reached when Alice or Bob submits a print job and when a job is deleted [83, 137]. (Each state has two print job directory entries, where e is empty, A—print job from Alice, B—print job from Bob, AX —deleted print job from Alice, BX —deleted print job from Bob. The edges denote events +A or +B corresponding to the addition of a print job from Alice or Bob respectively, whereas X corresponds to the printer taking a job for printing.) The FSM presented in [137] covers the entire case with all possible events and the transitions resulting from them. The FSM is modeled based on the properties of the investigation, in this case the printer queue's state according to the manufacturer's specification and the two potential users. The modeling is assumed to be done by the investigator in order to perform a thorough analysis. How exactly Alice's print job came to be overwritten by Bob's subsequent jobs is not a concern for this case; assume this behavior is derived from the manufacturer's specification and the evidence found. The investigator would have to make similar assumptions in a real case [137].
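To make this construction tangible, below is a minimal Python sketch of one possible encoding of the printer FSM derived from the manufacturer's rules above; the function names, and the assumption that a deleted directory entry is reusable (which is what lets Bob's later jobs overwrite Alice's trace), are ours for illustration and not Gladyshev's code.

EMPTY, A, B, A_DEL, B_DEL = "e", "A", "B", "AX", "BX"
ACTIVE = {A, B}

def add_job(state, owner):
    """+A/+B: store the job in the first unallocated entry (rules 1 and 4)."""
    if owner in state:                    # only one active job per user
        return None
    entries = list(state)
    for i, entry in enumerate(entries):
        if entry not in ACTIVE:           # empty or deleted entries are reusable
            entries[i] = owner
            return tuple(entries)
    return None

def take(state):
    """X: print the first active job and mark its entry deleted (rules 2 and 3)."""
    entries = list(state)
    for i, entry in enumerate(entries):
        if entry in ACTIVE:
            entries[i] = A_DEL if entry == A else B_DEL
            return tuple(entries)
    return None

def successors(state):
    """All events enabled in a state and the states they lead to."""
    for label, nxt in (("add_A", add_job(state, A)),
                       ("add_B", add_job(state, B)),
                       ("take", take(state))):
        if nxt is not None:
            yield label, nxt

print(list(successors((EMPTY, EMPTY))))   # [('add_A', ('A', 'e')), ('add_B', ('B', 'e'))]
print(list(successors((A, B))))           # [('take', ('AX', 'B'))]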
The authors of [137] provided a proof-of-concept implementation of this case in Common Lisp (the code is not reproduced here), which takes about 6–12 pages of printout depending on the printing options and column format. Using our proposed solution in Section 9.3, we rewrite the example in Forensic Lucid and show the advantage of much greater conciseness, plus the added benefits of the implicit context-driven expression and evaluation and of the parallel evaluation that the Common Lisp implementation lacks entirely [305, 312].
2.2.5.1.2
Formalization of System Functionality.
Gladyshev's formalization of the Printing Case consists in defining a number of problem domain elements [83, 137]. This formalization comes from [137], as does most of this section. Here we summarize the formal approach notation and definitions from Section 2.2.4 applied to this case.
• Q represents a finite set of all possible states of the Printing Case FSM
• CT represents a finite set of finite computations within the Printing Case FSM T
• I represents a finite set of all possible events that occur in the system (said to be computations)
• o = (P, min, opt) is an observation of a property P for the duration of time between [min, min + opt]
• os = (o1 , . . . , on ) = ((P1 , min1 , opt1 ), . . . , (Pn , minn , optn )) is an observation sequence that constitutes an uninterrupted story told by a witness or a piece of evidence
• es = (os1 , . . . , osn ) is an evidential statement that is composed of all the stories that the investigator put together. es need not be ordered, but is used as a part of the explanation and event reconstruction algorithm [83, 137]
• DIR is a case-specific set of states of the printer queue slot that the investigator is
interested in
Thus, concretely for the Printing Case problem [83, 137] the investigator defines:
• States:
DIR = {A, B, A Deleted, B Deleted, empty},   (2.2.5.1)
Q = DIR × DIR   (2.2.5.2)
• Events:
I = {add A, add B, take}   (2.2.5.3)
• Formalization of the evidence [83, 137]:
– The initial state of the print job directory:
osmanufacturer = ((Pempty , 1, 0), (CT , 0, infinitum))   (2.2.5.4)
Pempty = {c | c ∈ CT , cq0 = (empty, empty)}   (2.2.5.5)
– The final state:
PB Deleted = {c | c ∈ CT , cq0 = (B Deleted, B Deleted)}   (2.2.5.6)
osfinal = ((CT , 0, infinitum), (PB Deleted , 1, 0))   (2.2.5.7)
– The complete collection of stories [83, 137]:
esACME = (osfinal , osmanufacturer )   (2.2.5.8)
2.2.5.1.3
Testing Investigative Hypothesis.
Claim of Alice. Alice's claim is that she did not print anything until the investigator examined the printer [83, 137]:
PAlice = {c | c ∈ CT , cli ̸= add A}
(2.2.5.9)
osAlice = ((PAlice , 0, infinitum), (PB Deleted , 1, 0))
(2.2.5.10)
To verify Alice's claim, Gladyshev includes it in the final evidential statement and tries to find an explanation for it.
es′ = (osAlice , osf inal , osmanuf acturer )
(2.2.5.11)
There is no explanation for it [83, 137] as shown through the meaning computations.
Meaning of the Final State. The meaning of
osf inal = ((CT , 0, infinitum), (PB Deleted , 1, 0)) = (PB Deleted , 1, 0)
(2.2.5.12)
is the final state itself as observed by the investigator [83, 137]. The meaning also includes all the paths in the graph in Figure 12 that lead to (B Deleted, B Deleted) (i.e.
(CT , 0, infinitum)) [83, 137].
Mf inal = {M P Rf,1 , M P Rf,2 , . . . , M P Rf,infinitum }
(2.2.5.13)
MPRf,1 = (lenf,1 , Cf,1 )
lenf,1 = ⟨1⟩
Cf,1 = {all paths arriving at (B Deleted, B Deleted) with the length of 1}
     = {(∗, (B Deleted, B Deleted))}
(2.2.5.14)
MPRf,2 = (lenf,2 , Cf,2 )
lenf,2 = ⟨1, 1⟩
Cf,2 = {all paths arriving at (B Deleted, B Deleted) with the length of 2}
     = {((take, (B Deleted, B)), (∗, (B Deleted, B Deleted))),
        ((take, (B, B Deleted)), (∗, (B Deleted, B Deleted)))}
(2.2.5.15)
MPRf,3 = (lenf,3 , Cf,3 )
lenf,3 = ⟨2, 1⟩
Cf,3 = {all paths leading to (B Deleted, B Deleted) with the length of 3}
     = {((add B, (empty, B Deleted)),
        (take, (B, B Deleted)),
        (∗, (B Deleted, B Deleted))),
        ((take, (B, B)), (take, (B Deleted, B)), (∗, (B Deleted, B Deleted)))}
(2.2.5.16)
where:
• Mf inal is the meaning of the final state represented by an infinite collection of associated
MPRs for all possible lengths [83, 137].
• M P Rf,x represents a map of partitioned runs for the final state f of lengths of x.
(Please refer to Section 2.2.4.4.4 for the complete definition of an MPR [83, 137].)
• lenf,x represents the observed path-length property for the final state f at length index x. The notation ⟨2, 1⟩ means ⟨min, opt⟩, i.e., path lengths between 2 and 2 + 1 = 3 [83, 137].
In the analysis above and further in the example, the maximum length of 3 is used because it covers paths long enough to reach from the (empty, empty) state to (B Deleted, B Deleted) or back. Otherwise, the problem would be infeasible to compute over all infinitely many possible MPRs [83, 137].
Meaning of the Manufacturer’s Specifications. The meaning of the observation sequence corresponding to the manufacturer
osmanuf acturer = ((Pempty , 1, 0), (CT , 0, infinitum))
(2.2.5.17)
comprises the meanings of the observations (Pempty , 1, 0) and (CT , 0, infinitum). The former corresponds to just the initial state (empty, empty), and the latter to all the paths
that originate from the initial state [83, 137].
Mmanuf acturer = {M P Rm,1 , M P Rm,2 , . . . , M P Rm,infinitum }
(2.2.5.18)
MPRm,1 = (lenm,1 , Cm,1 )
lenm,1 = ⟨1⟩
Cm,1 = {all paths initiating from (empty, empty) with the length of 1}
     = {(∗, (empty, empty))}
(2.2.5.19)
MPRm,2 = (lenm,2 , Cm,2 )
lenm,2 = ⟨1, 1⟩
Cm,2 = {all paths initiating from (empty, empty) with the length of 2}
     = {((add A, (empty, empty)), (∗, (A, empty))),
        ((add B, (empty, empty)), (∗, (B, empty)))}
(2.2.5.20)
MPRm,3 = (lenm,3 , Cm,3 )
lenm,3 = ⟨1, 2⟩
Cm,3 = {all paths initiating from (empty, empty) with the length of 3}
     = {((add A, (empty, empty)), (take, (A, empty)), (∗, (A Deleted, empty))),
        ((add A, (empty, empty)), (add B, (A, empty)), (∗, (A, B))),
        ((add B, (empty, empty)), (take, (B, empty)), (∗, (B Deleted, empty))),
        ((add B, (empty, empty)), (add A, (B, empty)), (∗, (B, A)))}
(2.2.5.21)
Meaning of the Incident. Gladyshev obtains the meaning of the evidential statement
esACM E = (osf inal , osmanuf acturer )
(2.2.5.22)
by combining every MPR from Mf inal with every MPR from Mmanuf acturer [83, 137]:
∀x ∈ Mfinal , ∀y ∈ Mmanufacturer :  SPMACME = ⋃ comb(x, y)   (2.2.5.23)
The comb() operator, fully defined in Section 2.2.4.6.2, combines the computations of each MPR with those of every other MPR. Since the combination may produce unnecessary duplicates, they are removed by taking the union of the combinations, which is also a minimal set representing the meaning of the incident as recorded by the evidence: the final state and the story of the manufacturer as a credible expert witness of how the printer operates according to its specification [305].
Meaning of Alice's Claim. The meaning of the observation sequence stated by Alice's claim, osAlice = ((PAlice , 0, infinitum), (PB Deleted , 1, 0)), comprises the meanings of the parts (PAlice , 0, infinitum) and (PB Deleted , 1, 0), where the latter simply indicates the printer state as found by the investigator, and the former represents all the paths leading to (B Deleted, B Deleted) that do not involve any add A event [83, 137].
Conclusion. None of the paths in Alice’s story originate in (empty, empty) thereby making
Alice’s story questionable at the very least as shown in Figure 13 [83, 137].
Figure 13: Paths leading to (BX , BX )
2.2.5.2
Initial Blackmail Case Modeling
The case description in this section is cited from Gladyshev [136]. Its subsequent realization in Forensic Lucid is in Section 9.4.
A managing director of some company, Mr. C, was blackmailed. He contacted
the police and handed them evidence in the form of a floppy disk that contained
a letter with a number of allegations, threats, and demands.
The message was known to have come from his friend Mr. A. The police officers went to interview Mr. A and found that he was on [holidays] abroad. They
seized the computer of Mr. A and interviewed him as soon as he returned into the
country. Mr. A admitted that he wrote the letter, but denied making threats and
demands. He explained that, while he was on [holidays], Mr. C had access to his
computer. Thus, it was possible that Mr. C added the threats and demands into
the letter himself to discredit Mr. A. One of the blackmail fragments was found
in the slack space of another letter unconnected with the incident. When the
police interviewed the person to whom that letter was addressed, he confirmed
that he had received the letter on the day that Mr. A had gone abroad on [holidays]. It was concluded that Mr. A must have added the threats and demands
into the letter before going on [holidays], and that Mr. C could not have been
involved [136].
Figure 14 shows the initial view of the incident as a diagram illustrating the cluster data of the blackmail and unconnected letters [136, 307], since the investigation centers around that particular disk space.
2.2.5.2.1
Modeling the Investigation.
In the blackmail example, the functionality of
the last cluster of a file was used to determine the sequence of events and, hence, to disprove
Mr. A’s alibi [83, 136]. Thus, the scope of the model is restricted to the functionality of
the last cluster in the unrelated file (see Figure 14). The last cluster model can store data
objects of only three possible lengths: L = {0, 1, 2}. Zero length means that the cluster is
unallocated. The length of 1 means that the cluster contains the object of the size of the
unrelated letter tip. The length of 2 means that the cluster contains the object of the size
of the data block with the threats. Figure 15, therefore, shows the simplified model of the investigation [83, 136].
Figure 14: Cluster data with Blackmail fragments
Figure 15: Simplified view of the cluster model [136]. (The simplified cluster model: possible data lengths L = {0, 1, 2}; possible data values PL = {u, t1 , o1 } for the left part and PR = {t2 , o2 } for the right part, where u is the unrelated letter tip, t1 the threats-obscured part, t2 the threats in slack, and o1 , o2 other data in the left and right parts; states Q = L × PL × PR ; observed final state: (u) unrelated, (t2 ) threats in slack.)
2.2.5.2.2
Modeling Events.
The state of the last cluster can be changed by three
types of events [83, 136]:
1. Ordinary writes into the cluster:
W = {(u), (t1 ), (o1 ), (u, t2 ), (u, o2 ), (t1 , t2 ), (t1 , o2 ), (o1 , t2 ), (o1 , o2 )}
(2.2.5.24)
2. Direct writes into the file to which the cluster is allocated (bypassing the OS):
Wd = {d(u, t2 ), d(u, o2 ), d(o1 ), d(t1 , t2 ), d(t1 , o2 ), d(o1 , t2 ), d(o1 , o2 )}   (2.2.5.25)
3. Deletion of the file D sets the length of the file to zero. Therefore, all writes and the deletion comprise I:
I = W ∪ Wd ∪ D   (2.2.5.26)
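As a quick sanity check of the model's size, the following minimal Python sketch (the names and tuple encodings of events are ours for illustration) enumerates the state space Q = L × PL × PR and the event alphabet I from Equations 2.2.5.24–2.2.5.26.

from itertools import product

L  = (0, 1, 2)                 # possible data lengths of the last cluster
PL = ("u", "t1", "o1")         # left-part values
PR = ("t2", "o2")              # right-part values

Q = list(product(L, PL, PR))   # candidate states, e.g., (1, 'u', 't2')

W  = {("u",), ("t1",), ("o1",),
      ("u", "t2"), ("u", "o2"), ("t1", "t2"), ("t1", "o2"),
      ("o1", "t2"), ("o1", "o2")}                            # ordinary writes
Wd = {("d", "u", "t2"), ("d", "u", "o2"), ("d", "o1"),
      ("d", "t1", "t2"), ("d", "t1", "o2"),
      ("d", "o1", "t2"), ("d", "o1", "o2")}                  # direct writes
I  = W | Wd | {"delete"}                                     # all events

print(len(Q), len(I), (1, "u", "t2") in Q)   # 18 17 True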
Formalization of the Evidence. The final state observed by the investigators is (1, u, t2 )
[83, 136]. Gladyshev lets Of inal denote the observation of this state. The entire final sequence
of observations is then [83, 136]:
osf inal = ($, Of inal ).
(2.2.5.27)
The observation sequence osunrelated , which specifies that the unrelated letter was created at some time in the past and that it was received by the person to whom it was addressed, is:
osunrelated = ($, Ounrelated , $, (CT , 0, 0), $)
(2.2.5.28)
where Ounrelated denotes the observation that the “unrelated” letter tip (u) is being written
into the cluster. The evidential statement is then the composition of the two stories [83, 136]:
esblackmail = {osfinal , osunrelated }   (2.2.5.29)
Modeling an Explanation of Mr. A’s Theory. Mr. A’s theory, encoded using the
proposed notation, is:
osMr. A = ($, Ounrelated−clean , $, Oblackmail , $)   (2.2.5.30)
where Ounrelated−clean denotes the observation that the “unrelated” letter (u) is being written
into the cluster and, at the same time, the cluster does not contain the blackmail fragment;
Oblackmail denotes the observation that the right part of the model now contains the blackmail
fragment (t2 ) [83, 136].
Modeling Complete Explanations. There are two most plausible explanations, which can be represented by a state machine [136]. See the corresponding state diagram for the blackmail case in Figure 16 [83, 136].
Figure 16: Blackmail Case state machine [307]
1. The first explanation [136]:
. . . −(u)→ (1, u, o2 ) −(u,t2 )→ (2, u, t2 ) −(u)→ (1, u, t2 )   (2.2.5.31)
• Finding the unrelated letter, which was written by Mr. A earlier;
• Adding threats into the last cluster of that letter by editing it “in-place” with a
suitable text editor (such as ViM [325]);
• Restoring the unrelated letter to its original content by editing it “in-place”
again [83, 136]:
“To understand this sequence of events, observe that certain text editors
(e.g., ViM [325]) can be configured to edit text “in-place”. In this mode
of operation, the modified file is written back into the same disk blocks
that were allocated to the original file. As a result, the user can forge
the file’s slack space by (1) appending the desired slack space content to
the end of the file, (2) saving it, (3) reverting the file back to the original
content, (4) saving it again.” [136]
2. The second explanation [136]:
. . . −(u)→ (1, u, o2 ) −d(u,t2 )→ (1, u, t2 )   (2.2.5.32)
• The threats are added into the slack space of the unrelated letter by writing
directly into the last cluster using, for example, a low-level disk editor [136].
2.3
Other Formal Approaches
There are other notable formal approaches to cyberforensic analysis. For example, Arasteh and Debbabi [21] use control-flow graphs (CFGs), pushdown systems (PDSs), and the ADM process logic [4] (a logic with dynamic, linear, temporal, and modal characteristics originally designed for the evaluation of security protocols) to construct code models from executable in-memory code and call-stack residues (leftover data similar to the slack space in Gladyshev's blackmail case study earlier).
Arasteh et al. [22] subsequently extend their formal approach to log analysis, along with a tree of labeled terms of a Σ-algebra and a tableau-based proof system for the ADM logic, arriving at a case study of SYN-attack detection from various log sources.
Clear parallels and relationships can be made between these approaches, the intensional
logic (Section 3.2), and Cartesian programming [378] as formalization tools.
2.4
Summary
We reviewed the notion of cyberforensics and, more specifically, the background on the formal approach to digital evidence analysis, which is central to this thesis. We reviewed Gladyshev's approach, arguably the first comprehensive formal approach to evidence modeling and event reconstruction, as well as the two sample case studies we further use to test our approach in Chapter 7.
Chapter 3
Intensional Logic, Programming,
Reasoning, Uncertainty and
Credibility
This chapter reviews the background and related work necessary for formal methods, formal
logic-based [195, 253, 416] specification and context-orientation, dealing with uncertainty
and credibility aspects that are relevant to cyberforensic investigations, reasoning, and event
reconstruction in developing soft-computing intelligent systems [204], autonomic self-forensic
units (Appendix D), and expert systems [186].
The logic-based formal specifications are used for requirements specification [257, 482,
521] of critical components and systems, knowledge representation and derivation (keeping
a knowledge base of a case), validation and verification, and testing hypotheses. As a formal
system, logic can be used in the court of law as a part of an expert witness testimony when
challenged about the validity and verifiability of the system and system-based argumentation.
Consequently, this chapter summarizes the related background in intensional logic, uncertainty, hierarchical credibility, and so on, as these concepts are fundamental in establishing the theoretical setting for the intensional programming realized by the GIPSY (Chapter 6) and the formalization of Forensic Lucid (Chapter 7). The reader is assumed to have some
familiarity with or exposure to the concepts introduced here. Nevertheless, we attempt to
summarize the relevant content in a coherent manner in referencing the material actually
used subsequently in this thesis. As a result, we survey the related works and then review
their fusion.
3.1
Overview
The surveyed literature explores a connection between multidimensional logics, such as intensional logics, and the theory of mathematical evidence. These are being used (or considered for use) in a variety of application domains for automated reasoning in context-oriented environments where the knowledge may be imprecise and incomplete and credibility is a factor (e.g., formal digital crime investigations). Some of these logical systems (e.g., intensional logics) and probabilistic reasoning systems were not given enough attention, especially when viewed together and when parallels are drawn to finite state machines and set theory [119, 237]. Thus, more specifically, we review various relatively recent literature on intensional logics of a multidimensional nature where context can be formalized, combined with the Dempster–Shafer theory of mathematical evidence, along with hierarchical credibility and a possible connection to multidimensional description logics (DLs by themselves are presently better studied and understood in the literature and popular in ontological frameworks, but, the author conjectures, can be formalized in intensional logics). We also review the evolution of the
intensional logics research into Cartesian programming and intensionality in mathematics
(e.g., see a recent workshop of that title held in May 2013 [391]) including a number of references from there of interest discussing Fregean abstract objects; soundness, reflection and
intensionality; mathematical intensions and intensionality in mathematics; intensional side
of the algebraic topological theorems; knowledge acquisition of numbers by children; formal
mathematical sense of mathematical intensions; and others [45, 82, 115, 125, 166, 337, 412].
This chapter is structured as follows: the notion of intensional logics and programming
are explored in Section 3.2, uncertainty and mathematical evidence modeling are discussed
in Section 3.3, where specifically the Dempster–Shafer theory is further reviewed in Section 3.3.2 with its connection to the research described. We summarize our overall findings
in Section 3.4 with the use of material presented here and a conjecture.
3.2
Intensional Logics and Programming
Why Intensional Logic? It can be traced all the way back to Aristotle [508] as a means to represent sense (meaning) and its change between different possibilities (possible worlds, or contexts) in which statements hold true. Concrete realization of the intensional logic ideas resulted in Intensional Programming. What follows is the justification and some historical remarks for both.
Many problem domains and aspects that surround us are intensional in nature, e.g., natural language understanding and learning [412], particle-in-cell simulation [376], World
Wide Web [510, 512], computation of differential and tensor equations [361], versioning [246],
temporal computation and temporal databases [360, 453], multidimensional signal processing [5, 272], context-driven computing [514], constraint programming [124, 513], negotiation
protocols [513], automated reasoning in cyberforensics discussed in this thesis [269, 300, 304],
multimedia and pattern recognition [272], among others [364]. The current mainstream programming languages are not well adapted to the natural expression of the intensional aspects of such problems, requiring the intensional nature of the problem statement to be recast into a procedural (and therefore sequential) approach in order to provide a computational solution [302].
What follows is a brief history of Intensional Logic synthesized from [112, 508] for the
curious reader.
In essence, the intensional concepts in the times of Aristotle appeared in response to the need to separate conceptual senses (e.g., the “Prime Minister of Canada”) from their concrete context-dependent instantiations (i.e., extensions; e.g., at the time of this writing in July 2013 “Prime Minister of Canada” evaluates to the extension “Stephen Harper”), as well as in modal logic. Fast-forward to Gottlob Frege, who was making distinctions between “senses” (abstract descriptions of objects) and their “denotations” (concrete “real” instances of those objects) along the same lines. Fast-forward again to Carnap [57], who introduced the word “intensional” in the 1930s [508] to refer to Frege's “senses”. Around the same time (the 1930s), Lewis et al. were reviving modal logic (which had fallen out of popularity sometime before that) [508] from extensional propositional logic by adding the ≺ connective to mean “strict implication”, to avoid some paradoxes (entailment, explosion, etc. [40]) associated with material implication → [508]. They then realized that the now-standard modal operators ✸ (“possible”) and ✷ (“necessary”) are easier to work with and redefined (P ≺ Q) as ¬✸(P ∧ ¬Q), meaning P implies Q if and only if it is impossible for P to be true and Q false simultaneously [508]. Likewise,
Aristotle’s notion of “everything necessary is possible” is formalized as ✷P → ✸P . Then, everything necessary is true (✷P → P ) and everything that is true is possible (P → ✸P ) [508].
Fast-forward another 20 years or so (per Montague [324] and Melvin Fitting [112]): Church addressed intensionality in his work on the simple theory of types circa 1951 in A Formulation of the Logic of Sense and Denotation, subsequently inter-influencing further developments in the area with Carnap [112]. Saul Kripke subsequently, in the 1960s [218, 219], formalized the semantics for modal logic. His core idea is that of possible-world semantics; that is, statements can be true in certain worlds, but not necessarily in all of them [508]. A world w does not need to be fully defined, but serves as an index into the possible interpretations [53] of each statement; e.g., ✷P is true at a particular world w if P is true in all worlds w′ accessible from w [508]. As William W. Wadge puts it, “Kripke models were a turning point in development of the intensional logic.” [508]. That enabled simple temporal logic, e.g., where ✸ would mean sometimes and ✷ would mean always, etc. [508]. Temporal logic made use of those operators, but had only a single dimension of time points. Making it multi-dimensional later with intensional operators made it what it is today: intensional temporal logic (see page 62 for an informal example).
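As a toy illustration of this uni-dimensional temporal reading of the operators (✷ as always, ✸ as sometimes) over a finite trace of time points, consider the following minimal Python sketch; the trace values and function names are made up for illustration only.

# A finite trace of truth values of a proposition P at time points 0..4.
trace = [True, True, False, True, True]

def always(p_trace):      # box: P holds at every time point
    return all(p_trace)

def sometimes(p_trace):   # diamond: P holds at some time point
    return any(p_trace)

print(always(trace), sometimes(trace))   # False True
# "Everything necessary is possible": box P implies diamond P on a non-empty trace.
assert (not always(trace)) or sometimes(trace)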
Many other later formalisms followed (in the 1970s–80s), such as that of Marcus, and then the works of Montague, Tichý, Bressan, and Gallin [127] exploring various intensional logic aspects [112, 480]. Richard Montague deliberated on pragmatics and intensional logic in 1970 [91, 324] with a critical discussion raising problems of interest. Hobbs and Rosenschein subsequently published an approach to make computational sense of Montague's Intensional Logic [171] in 1978. Finally, Dana Scott's 1969 work (as Wadge suggested [508], “perhaps mistitled”) Advice on Modal Logic laid down more formalisms by extending Kripke's possible worlds with Carnap's distinction between extension and intension (later known as the Scott-Montague model), defining a non-empty set I of points of reference (possible worlds) without requirements on accessibility relations [508]. Then φ in each world has an extension (the truth value) and an intension (the mapping of w to the extension of φ at w), making the intension an element of the power set 2I [508]. Scott maps intensions to other intensions (via a unary operator) as 2I → 2I , either as an accessibility relation or any other similar relation; Scott's models are not restricted to propositional logic and can specify worlds at any hierarchical level of types (strings, numbers, people, organizations, etc.) [508]. Scott calls such individuals virtual individuals (intensional objects), grouped in a collection D, and these individuals DI can denote different extensions in different worlds [508]. This approach inspired Wadge and Ashcroft to create Lucid in 1974, and the notion of intensional programming emerged [25].
William W. Wadge placed a beautifully succinct introduction to Intensional Logic in
context in a tutorial in 1999 [508], most of which is still valid and is a highly recommended
reading. A lot of background history is also provided in the special issue articles in 2008 by
Blanca Mancilla and John Plaice bringing the historical record more up-to-date with possible
world versioning and Cartesian programming [246, 378]. Subsequently, Melvin Fitting in his
updated 2012 entry [112] provided another broad encyclopedic account on Intensional Logic.
All are highly recommended readings.
Both Intensional Programming volumes (I and II) from 1995 [350] and 1999 [131] also contain a series of articles on various temporal logic aspects applied to program verification, constraint logic and its semantics, flexible agent grouping, temporal meaning representation and reasoning [18, 214, 244, 358, 394], as well as a proposal of an asynchronous calculus based on the absence of actions [220], followed by the more recent 2008 work by Orgun et al. [352] on knowledge representation and reasoning with diverse granularity clocks. Fitting in 2005 provided an axiomatization of first-order intensional logic (FOIL) [113]. Intensional logic is used in the present day to describe various natural language phenomena and beyond, e.g., as in the recent (2010) work by Duží et al. [98] on the procedural semantics of Hyperintensional Logic, a survey of the foundations and applications of the Transparent Intensional Logic of
Pavel Tichý.
Thus, what is intensional logic, again? Wadge summarized the definition of it as: “...
intensional logic—possible worlds and intensional operators...” [508].
3.2.1
Multidimensional Logics and Programming
Here we mention two major multidimensional branches of modal logic and pick the one of prime use and interest to us. The earlier kind, which began development as such in the 1940s–50s, is Intensional Logic, informally described in the previous section; the later kind, which appeared in the 1980s [524], is Description Logic. Both are actually families of logics for knowledge representation and reasoning incorporating natural language constructs of possibility, necessity, belief, and others. We further concentrate in some more detail on multidimensional intensional logic as the most fundamental and intuitive to use. From modal to temporal uni-dimensional logic, multidimensional logic, knowledge representation, and programming paradigms emerged. Going back to Scott's 1969 tuple [508], the index i ∈ I included:
i = (w, p, t, a)
(3.2.1.1)
the possible world w (however big or small, it does not need to be fully defined), a context point p in space (e.g., with dimensions (x, y, z)), the agent a, and the time t; i is an index into all these coordinates [508]. Fast-forward from 1969 to 1991–1993: Faustini and Jagannathan described multidimensional problem solving in Indexical Lucid [109]. Ashcroft et al. subsequently summarized the concepts of multidimensional programming in their book of the same title [24]. Baader and
Ohlbach present a multi-dimensional terminological knowledge representation language [34]
in 1995. Ashcroft discussed multidimensional program verification in terms of reasoning
about programs that deal with multidimensional objects [23], also in 1995. Orgun and
Du described theoretical foundations of multi-dimensional logic programming in 1997 [351].
Wolter and Zakharyaschev in 1999 [524] discuss multi-dimensional description logics.
Temporal Intensional Logic Example
Temporal intensional logic is an extension of temporal logic that allows one to specify the time in the future or in the past [361]. What follows is an informal example.
Let’s take E1 from Figure 17. The context is a collection of the dimensions (e.g., as in
E1 ’s place and time) paired with the corresponding tags (here and today respectively).
1. E1 := it is raining here today
Context: {place:here, time:today}
2. E2 := it was raining here before(today) = yesterday
3. E3 := it is going to rain at (altitude here + 500 m) after (today) = tomorrow
Figure 17: Natural-language contextual expression [361, 380]
Then let us fix here to Montreal and assume it is a constant. In the month of April 2013,
with a granularity of one day, for every day, we can evaluate E1 to either true or false, as
shown in Figure 18 [300, 304, 312].
Tags (days in April):  1 2 3 4 5 6 7 8 9 ...
Values (raining?):     F F T T T F F F T ...
Figure 18: 1D example of tag-value contextual pairs [361, 380]
If one starts varying the here dimension (which could even be broken down into finer
X, Y, Z), one gets a two-dimensional (or at higher level of detail 4D: (X, Y, Z, t) respectively)
evaluation of E1 , as shown in Figure 19 [302, 312].
Place/Time   1  2  3  4  5  6  7  8  9  ...
Montreal     T  F  T  F  T  F  T  F  T  ...
Quebec       F  F  T  T  T  F  F  F  T  ...
Ottawa       F  T  T  T  T  T  F  F  F  ...
New York     F  F  F  F  T  T  T  F  F  ...
Toronto      F  T  T  T  T  T  F  F  F  ...
Sydney       F  F  T  T  T  F  F  F  T  ...
Moscow       F  F  F  F  T  T  T  F  F  ...
Figure 19: 2D example of tag-value contextual pairs [264, 282]
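A minimal Python sketch of this kind of contextual evaluation follows: the intension of E1 is modelled as a mapping from {place, time} contexts to truth values, using a small sample of the tag-value pairs from Figure 19 as data; the function and dictionary names are illustrative only.

# Intension of E1 := "it is raining here today", sampled from Figure 19.
raining = {
    ("Montreal", 1): True,  ("Montreal", 2): False, ("Montreal", 3): True,
    ("Ottawa",   1): False, ("Ottawa",   2): True,  ("Ottawa",   3): True,
}

def e1_at(context):
    """Extension of E1 at the given {place, time} context."""
    return raining.get((context["place"], context["time"]), False)

# Evaluating the same expression at different contexts ('@'-style navigation):
print(e1_at({"place": "Montreal", "time": 1}))   # True
print(e1_at({"place": "Ottawa",   "time": 1}))   # False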
Even with these toy examples we can immediately illustrate the hierarchical notion of the dimensions in the context: so far we have treated place and time as atomic values fixed at days and cities. In some cases, we need finer subdivisions of the context evaluation, where, e.g., time can be fixed at hour, minute, second, and finer values, and the place is likewise broken down into boroughs, regions, streets, etc., and finally the X, Y, Z coordinates in Euclidean space with values of millimeters or finer. This notion becomes more apparent and important, e.g., in Forensic Lucid, where the temporal components can be, e.g., log
entries and other registered events and observations from multiple sources [302, 312].
3.2.2
Intensional Programming
Intensional programming (IP) is based on multidimensional intensional logics [171], which,
in turn, are based on Natural Language Understanding aspects (such as time, situation,
direction, etc.) mentioned earlier. IP brings in dimensions and context to programs (e.g.,
space and time). Since intensional logic adds dimensions to logical expressions, a non-intensional logic can be seen as a constant, or a snapshot, across all possible dimensions. To paraphrase, intensions are certain statements whose extensions in possible worlds are true or false (or have values other than Boolean ones). Intensional operators are operators that allow us to navigate within these contextual dimensions [361]. Higher-order Intensional Logic (HOIL) [300, 302, 405] is behind the functional programming of Lucid with multidimensional dataflows, which intensional programs can query and alter through an explicit notion of contexts as first-class values [300, 302, 304, 312, 365, 513].
From another side, intensional programming [131, 350, 380], in the sense of the latest
evolutions of Lucid (Chapter 4), is a programming language paradigm based on the notion
of declarative programming where the declarations are evaluated in an inherent multidimensional context space. The context space being in the general case infinite, intensional
programs are evaluated using a lazy demand-driven model of execution—eduction [110, 380], the precept of which is referential transparency, enabling scalable caching at the implementation level. There, program identifiers are evaluated in a restricted context space, in fact at a point in that space, where each demand is generated, propagated, computed, and stored as an identifier-context pair [241, 302]. Subsequent demands for the same context can simply be satisfied by fetching the previously computed value from the store. Plaice and Paquet provided a lucid (no pun intended) explanation of Intensional Programming in a 1995 tutorial [380] that still holds today. Per Wadge [508], it is at ISLIP (after several iterations), in 1995 [350], that intensional logic and Scott's Advice on Modal Logic came together with a broader community actively developing practical languages and systems based on these formalisms. Wadge and Ashcroft, based on Scott's models, had defined the Lucid language with the intensional operators first, next, fby in 1974, with next, e.g., denoting a map from DI → DI (page 60), such that next(X) = λn.X(n + 1) [508].
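A minimal Python sketch of these ideas follows, modelling streams as functions from a time tag to a value, with first, next, and fby defined as above, and a memoizing cache standing in for the eductive value store; all names are illustrative, and this is not GIPSY code.

from functools import lru_cache

# Streams are modelled as functions from a natural-number tag to a value.
def first(x):                 # first X: X frozen at tag 0
    return lambda n: x(0)

def nxt(x):                   # next X: next(X) = lambda n. X(n + 1)
    return lambda n: x(n + 1)

def fby(x, y):                # X fby Y: head of X followed by Y shifted by one
    return lambda n: x(0) if n == 0 else y(n - 1)

def nat(n):                   # the stream 0 fby (nat + 1), written directly
    return 0 if n == 0 else nat(n - 1) + 1

# Eduction-style caching: demands are memoized per (identifier, tag) pair,
# approximated here by a per-identifier cache keyed on the tag.
@lru_cache(maxsize=None)
def fib(n):
    return 0 if n == 0 else 1 if n == 1 else fib(n - 1) + fib(n - 2)

print([fib(i) for i in range(10)])             # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(nxt(nat)(4))                             # 5
print(fby(nat, lambda n: 2 * nat(n))(3))       # stream "0 fby 2*nat" at tag 3 -> 4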
Intensional programming can be used to solve widely diversified problems mentioned in
this chapter as well as in Chapter 4, which can be expressed using diversified languages of
an intensional nature. There has also been a wide array of flavors of Lucid languages developed
over the years. Yet, very few of these languages have made it to the pragmatic implementation
level. The GIPSY project (Chapter 6) aims at the creation of a programming environment
encompassing compiler generation for all flavors of Lucid, and a generic run-time system
enabling the execution of programs written in all flavors of Lucid. Its goal is to provide
a flexible platform for the investigation of programming languages of an intensional nature, in
order to prove the applicability of intensional programming to solve important problems [302].
We continue this trend in this work with Forensic Lucid (Chapter 7).
3.3
Uncertainty, Evidence, and Credibility
The works reviewed in this section contribute to enhancing cyberforensic analysis with options to automatically reason in the presence of uncertainty and non-binary degrees of belief, with the credibility (reliability of witnesses) of each witness story and piece of evidence serving as an additional artifact of the proposed Forensic Lucid system. We subsequently briefly review the related literature in this category, involving in particular the Mathematical Theory of Evidence by Dempster and Shafer [422], the Probabilistic Argumentation Systems by Haenni et al. [153], some related probabilistic logic work [87], and the links to intensional logic. (Such probabilistic reasoning provides additional foundations for a future elaborate expert system to train investigator personnel in the presence of uncertainty.)
3.3.1
Probabilistic Logics
It is worthwhile to mention some logical foundations of probabilistic reasoning to represent uncertainty, including modal and intensional aspects [87]. The previously mentioned Carnap himself wrote on the logical foundations of probability in 1950 [87] while working on many aspects of logic, including intensional aspects. Roughly, there are two main approaches in probabilistic logic [87] to represent probabilistic uncertainty: qualitative (a notion of possibility) and quantitative (numerical probability) [87]. After Hamblin in 1959, Gärdenfors (along with Segerberg) in 1975 proposed treating qualitative probability as an intensional logic [130]; as we have seen, intensional logic provides the notion of “possible”. Quantitative approaches include treating numerical values over probability spaces. Modal probability logics [87, Section 4] were subsequently introduced, bringing back the Kripke possible-world semantics that was not available to some of the initial probability logic works. That included Fagin and Halpern's work in 1988 and later in 1994 [87]; indexing and interpretation were introduced for the possible world states with probabilities. Combining the qualitative and the quantitative [87] then seemed like a natural progression. As an example, quoting Demey et al. [87]: ¬✷h ∧ ¬✷(P (h) = 1/2) ∧ ✸(P (h) = 1/2) would read as “it is not known that h is true, and it is not known that the probability of h is 1/2, but it is possible that the probability of h is 1/2”. More recently, in 2006, Halpern and Pucella presented a logic for reasoning about evidence [158], following earlier works on reasoning about uncertainty and knowledge [106, 156, 157]. Most recently (2011), Haenni et al. [155] explored the notion of probabilistic logics and probabilistic networks.
3.3.2
Dempster–Shafer Theory of Evidence and Probabilistic Reasoning Systems
Semantics and the interpretation of truth have always been of interest to reasoning in any logical system [53]. This includes uncertain knowledge representation and manipulation [156, 425]. This section subsequently reviews the Dempster–Shafer theory, which helps us to reason in the presence of uncertainty in beliefs about the system's state. The theory has been extensively discussed and extended; presentations of it are found in Expert Systems [186] as well as in the work by Haenni et al. [153, 154] about probabilistic argumentation.
3.3.2.1
Dempster-Shafer Theory of Mathematical Evidence
The Dempster–Shafer theory as a mathematical theory of evidence (DSTME) provides machinery to combine evidence from different sources to arrive at a degree of belief (represented by a belief function [423]) that takes into account all the available evidence. The initial theory was a product of the work by Arthur P. Dempster [88] in 1968 on his rule of combination and Glenn Shafer's mathematical theory of evidence [422] of 1976. Since then, DSTME has been applied to different application domains, sometimes altering Dempster's rule of combination (Section 3.3.2.1.3, page 72) to produce better (more intuitive, correct) results in a given domain or situation.
Quoting Shafer [424]:
The Dempster-Shafer theory, also known as the theory of belief functions, is a generalization of the Bayesian theory of subjective probability. Whereas the Bayesian
theory requires probabilities for each question of interest, belief functions allow
us to base degrees of belief for one question on probabilities for a related question. These degrees of belief may or may not have the mathematical properties of
probabilities; how much they differ from probabilities will depend on how closely
the two questions are related.
...
The Dempster-Shafer theory is based on two ideas: the idea of obtaining degrees
of belief for one question from subjective probabilities for a related question, and
Dempster's rule for combining such degrees of belief when they are based on independent items of evidence. –G. Shafer, 2002 [424]
Thus, we review these aspects further.
3.3.2.1.1
Formal Definition.
Below we quote [88, 422] the formal definition with the
symbols adapted to match our needs:
• Q is the universal set representing all possible states q of a system under consideration.
• 2Q is the power set of all subsets of Q (including the empty set ∅). The elements of 2Q
represent propositions concerning the actual state of the system, by containing all and
only the states, in which the propositions are true.
• m is a function denoting DSTME’s assignment of a belief mass to each element of 2Q
(called a basic belief assignment (BBA)).
m : 2Q → [0, 1]
(3.3.2.1)
It has two properties:
– the mass of the empty set is zero: m(∅) = 0
– the masses of the remaining members add up to a total of 1:
∑A∈2Q m(A) = 1
The mass m(A) of A ∈ 2Q denotes the fraction of all relevant available evidence supporting the claim that the actual state q belongs to A (q ∈ A). The value of m(A) corresponds only to the set A itself.
• bel(A) and pl(A) are the belief and plausibility, denoting the lower and upper bounds of the probability interval that contains the precise probability of a set of interest A, bounded by the two non-additive continuous measures bel(A) and pl(A):
bel(A) ≤ P (A) ≤ pl(A)
(3.3.2.2)
The belief bel(A) is the sum of all the masses of the subsets B of the set of interest A:
bel(A) = ∑B|B⊆A m(B)   (3.3.2.3)
The plausibility pl(A) is the sum of all the masses of the sets B that intersect A:
pl(A) = ∑B|B∩A≠∅ m(B)   (3.3.2.4)
Belief and plausibility are related as follows:
pl(A) = 1 − bel(¬A)   (3.3.2.5)
Conversely, for a finite A, given the belief bel(B) for all subsets B of A, one can find
the masses m(A) with the following inverse function:
m(A) = ∑B|B⊆A (−1)|A−B| bel(B)   (3.3.2.6)
where |A − B| is the difference of the cardinalities of the two sets [421]. From Equation 3.3.2.5 and Equation 3.3.2.6, for a finite set Q, one needs to know only one of
the mass, belief, or plausibility to deduce the other two. In the case of an infinite
Q, there can be well-defined belief and plausibility functions but no well-defined mass
function [156], but in our cyberforensic investigations Q is always finite.
3.3.2.1.2
Examples.
The Wikipedia page on the theory [520] as well as the quoted
reference above from Shafer give a good number of examples where the theory would be
applicable and how it works.
To illustrate the idea of obtaining degrees of belief for one question from subjective
probabilities for another, suppose I have subjective probabilities for the reliability of
my friend Betty. My probability that she is reliable is 0.9, and my probability that she
is unreliable is 0.1. Suppose she tells me a limb fell on my car. This statement, which
must true if she is reliable, is not necessarily false if she is unreliable. So her testimony
alone justifies a 0.9 degree of belief that a limb fell on my car, but only a zero degree of
belief (not a 0.1 degree of belief) that no limb fell on my car. This zero does not mean
that I am sure that no limb fell on my car, as a zero probability would; it merely means
that Betty’s testimony gives me no reason to believe that no limb fell on my car. The
0.9 and the zero together constitute a belief function.
To illustrate Dempster’s rule for combining degrees of belief, suppose I also have a
0.9 subjective probability for the reliability of Sally, and suppose she too testifies, independently of Betty, that a limb fell on my car. The event that Betty is reliable is
independent of the event that Sally is reliable, and we may multiply the probabilities of
these events; the probability that both are reliable is 0.9 × 0.9 = 0.81, the probability
that neither is reliable is 0.1×0.1 = 0.01, and the probability that at least one is reliable
is 1 − 0.01 = 0.99. Since they both said that a limb fell on my car, at least [one] of
them being reliable implies that a limb did fall on my car, and hence I may assign this
event a degree of belief of 0.99.
Suppose, on the other hand, that Betty and Sally contradict each other—Betty says
that a limb fell on my car, and Sally says no limb fell on my car. In this case, they
cannot both be right and hence cannot both be reliable—only one is reliable, or neither
is reliable. The prior probabilities that only Betty is reliable, only Sally is reliable,
and that neither is reliable are 0.09, 0.09, and 0.01, respectively, and the posterior
probabilities (given that not both are reliable) are 9/19, 9/19, and 1/19, respectively.
Hence we have a 9/19 degree of belief that a limb did fall on my car (because Betty is
reliable) and a 9/19 degree of belief that no limb fell on my car (because Sally is reliable).
In summary, we obtain degrees of belief for one question (Did a limb fall on my car?)
from probabilities for another question (Is the witness reliable?). Dempster’s rule begins
with the assumption that the questions for which we have probabilities are independent with respect to our subjective probability judgments, but this independence is
only a priori; it disappears when conflict is discerned between the different items of
evidence. –G. Shafer, 2002
We adapt one of these examples to re-state one of Gladyshev's questions (Section 2.2.5.1, page 44): we obtain degrees of belief for “Did Alice print anything?” from the questions “Is Alice a reliable witness?”, “Is Bob a reliable witness?”, and “Is the printer manufacturer a reliable witness?”, following the belief mass assignments and the rule of combination. Suppose the investigator has a subjective probability that the manufacturer is 0.9 reliable and 0.1 unreliable in specifying how the printer mechanism works. Since Alice and Bob are roughly under the same investigative questions, assume the investigator's subjective probability of their reliability is 0.5; that is, no preference is given to either Alice or Bob, and either could be truthful or lying in their testimony, whereas the printer manufacturer is treated as an expert witness. Since Alice and Bob are strictly not independent, their testimonies have a degree of conflict; the introduction of the manufacturer's testimony is necessary since both Bob and Alice are equally (un)reliable before the incident, so either one of them can be truthful (or neither), but not both. These belief assignments of course do not alone resolve the case in question; for that we need the combined approach with the intensional logic presented earlier for event reconstruction from multiple observations in witness accounts. We review that further in Chapter 7.
We solve a similar operational question in Section 9.5 for the MAC Spoofer Investigation case using the question “Is there a real MAC spoofer?” given different witness accounts from logs and probes and their reliability: when a log entry is observed, it is assumed to be 1.0 reliable; when a log is empty for some reason or a probe fails (e.g., due to firewall restrictions on the probed host), its witness reliability is 0.0; and when the observed information is partial, that is also reflected in the reliability aspects.
Another quoted example is from the signal processing of different color sensors observing a distant light source, whose light can be any of the three colors red, yellow, or green. Assigning their masses as given [48]:
Hypothesis        Mass   Belief   Plausibility
Null (neither)    0.00   0.00     0.00
Red               0.35   0.35     0.56
Yellow            0.25   0.25     0.45
Green             0.15   0.15     0.34
Red or Yellow     0.06   0.66     0.85
Red or Green      0.05   0.55     0.75
Yellow or Green   0.04   0.44     0.65
Any               0.10   1.00     1.00
allows one to compute belief and plausibility. Any denotes Red or Yellow or Green and is a catch-all case, basically saying that there is some evidence that “there is a light”, but the color is uncertain. The null hypothesis (“no light”, included for completeness) is always given 0 mass. This can be used to model various situations (e.g., evidence received from a color-blind person or a malfunctioning CCD). Below are two examples of computing belief and plausibility given the mass:
bel(Red or Y ellow) = m(null) + m(Red) + m(Y ellow) + m(Red or Y ellow)
= 0 + 0.35 + 0.25 + 0.06 = 0.66
pl(Red or Y ellow) = 1 − bel(¬(Red or Y ellow))
= 1 − bel(Green)
= 1 − m(null) − m(Green)
= 1 − 0 − 0.15 = 0.85
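The same two quantities can be computed mechanically; below is a minimal Python sketch (function and variable names are ours) of Equations 3.3.2.3–3.3.2.4 applied to the mass assignment of the table above.

# Basic belief assignment over the frame Q = {Red, Yellow, Green}.
m = {
    frozenset({"Red"}):                    0.35,
    frozenset({"Yellow"}):                 0.25,
    frozenset({"Green"}):                  0.15,
    frozenset({"Red", "Yellow"}):          0.06,
    frozenset({"Red", "Green"}):           0.05,
    frozenset({"Yellow", "Green"}):        0.04,
    frozenset({"Red", "Yellow", "Green"}): 0.10,
}

def bel(a):   # Equation 3.3.2.3: sum of masses of subsets of A
    return sum(mass for b, mass in m.items() if b <= a)

def pl(a):    # Equation 3.3.2.4: sum of masses of sets intersecting A
    return sum(mass for b, mass in m.items() if b & a)

target = frozenset({"Red", "Yellow"})
print(round(bel(target), 2), round(pl(target), 2))   # 0.66 0.85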
3.3.2.1.3
Dempster’s Rule of Combination.
Once evidential data are in from mul-
tiple sources, they need to be combined somehow [421]. Dempster created such a rule as a
part of his work on the generalization of the Bayesian inference in 1968 [88]. When viewed
under this interpretation, the priors and conditionals are not required to be specified (as
opposed to traditional Bayesian, e.g., assigning 0.5 probabilities to values, for which no prior
information is available). The rule ignores any such information unless it can be obtained
during the overall computation. This can be seen as allowing DSTME to formulate a degree
of ignorance vs. the absolute necessity to provide prior probabilities [200, 421].
Thus, we need to combine independent sets of probability mass assignments. Specifically, the combination (called the joint mass) is calculated from the two mass sets m1 (B) and m2 (C) (where A = B ∩ C) as follows:
m1,2 (∅) = 0   (3.3.2.7)
m1,2 (A) = (m1 ⊕ m2 )(A) = (1 / (1 − K)) ∑A=B∩C≠∅ m1 (B) m2 (C)   (3.3.2.8)
K = ∑B∩C=∅ m1 (B) m2 (C)   (3.3.2.9)
where K is a measure of the amount of conflict between the two mass sets [422].
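A minimal Python sketch of the rule follows (function and variable names are ours), reproducing Shafer's Betty-and-Sally numbers from the quotation above.

from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination, Equations 3.3.2.7-3.3.2.9 (a sketch)."""
    joint, k = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            joint[a] = joint.get(a, 0.0) + mb * mc
        else:
            k += mb * mc                         # mass falling on conflict
    if k == 1.0:
        raise ValueError("total conflict: combination undefined")
    return {a: v / (1.0 - k) for a, v in joint.items()}

FELL = frozenset({"limb_fell"})
NOT  = frozenset({"no_limb_fell"})
ANY  = FELL | NOT

betty = {FELL: 0.9, ANY: 0.1}                    # Betty is 0.9 reliable
sally = {FELL: 0.9, ANY: 0.1}                    # Sally independently agrees
print(dempster_combine(betty, sally))            # {FELL: 0.99, ANY: 0.01}

sally_contradicts = {NOT: 0.9, ANY: 0.1}         # Sally contradicts Betty
print(dempster_combine(betty, sally_contradicts))
# roughly {FELL: 9/19, NOT: 9/19, ANY: 1/19}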
This rule was criticized for some scenarios by Zadeh [536] (the recognized father of Fuzzy
Logic [537]) and others where it produced counter-intuitive results in some cases of high or
low conflict, especially when the sources are not independent enough. (See Zadeh’s examples
of Alice and Bob deciding to pick a move vs. two doctors diagnosing a brain tumor in a patient, or a toy murder trial suspect case [200].) Many researchers subsequently proposed various ways of combining (fusing) beliefs that better suit a particular situation or application (sensor fusion, opinion fusion, etc.), using the appropriate fusion operators by Jøsang and others [198, 199, 200].
3.3.2.2
Probabilistic Argumentation Systems
In recent work, a lot of attention has been devoted to various probabilistic argumentation systems. Notably, Haenni et al. consistently discussed the topic [153, 154] in 1999–2003, as well as probabilistic logics and probabilistic networks [155] in 2011. Halpern and Pucella provided a logic to reason about evidence [158] in 2006. Jøsang in 2012 did a thorough review of different ways to apply the combination rule [200] and provided the relevant background proofs. Shi et al. [428] in 2011 provided a hybrid system combining intuitionistic fuzzy description logics with intuitionistic fuzzy logic programs.
Xu et al. [174] in 2009 made a strong point for the necessity of attribute reduction in ordered information systems and offered a possible solution. They exploited the relationship between the Dempster–Shafer theory of mathematical evidence and Pawlak's rough set theory, as well as their plausibility and belief functions [286]. This is of relevance to this work as well: reduction of attributes can become an important step in reducing the state explosion problem in evidence analysis and event reconstruction by applying Occam's razor. Grünwald and Halpern certainly agree with that in their 2004 work When ignorance is bliss [147]. This is pertinent to us in the sense of reducing and eliminating irrelevant context when there is a lot of digital evidence, in order to lower the computational effort and/or reduce confusion, e.g., as filtering is done in the Forensic Lucid encoders in Section 9.5.7.2, page 266 later on.
3.4
Summary
This chapter provided a quick overview of the reasoning foundations and the corresponding
related work on intensional logics and the Dempster–Shafer theory of evidence. In this
research, we combine these reasoning tools. This serves the purpose of encoding forensic knowledge with credibility and reasoning about it in a context-oriented manner in a formal system for cyberforensic reasoning and event reconstruction in the investigator's work, where rigor and formality are a must for use in the court of law, but the reasoning may include incomplete, imprecise, or untrustworthy evidence.
The presented related work around Dempster–Shafer has a strong correlation with intensional logic [480] (Section 3.2, page 59) and the Lucid programming language (Chapter 4), as well as its derivative Lucx [513], where the ordered information system is a context of evaluation, a context set of ⟨dimension : tag⟩ pairs, which has seen a number of applications (such as iHTML [510], the contribution of this thesis, Forensic Lucid, and others). Forensic Lucid [310] (Chapter 7) also blends the Dempster–Shafer theory, intensional logic, and Lucid dialects for ordered evidential contexts [286], such as observation sequences.
Conjecture. Not in mathematical or logical terms, but I would like to conjecture a philosophical point for discussion: in the metalogical [58, 175]1 sense, (meta-?)intensional logic can be used to reason about logical systems, governing them all. (We shall see how that fits with Tichý's work on TIL and his followers [98].)
Since intensional logic dealt with senses and semantics from the beginning, it makes sense
to describe other logics with it. Every logic L is a possible world w (however defined, nested,
complex, complete, sound, consistent or not). The philosophical aspects aside, the statements
of consistency can be true for w = L. There may or may not be accessibility transitions
between the possible logic worlds, but if there is one, then one logic can be represented
(translated to) into another in a whole or in part. Accepting the fact that some statements
may not retain their truth values in such a process, that is evaluating a proposition in one
logical world may yield one answer, another in another, or be undecidable or undefined in
the third.
The same can be applied to more concrete and restricted entities and instantiations the
logics may describe, such as the syntax and semantics of natural languages (w = L_NL) or programming languages (w = L_PL). Linguistically, it is known that people of different cultures can learn
multiple languages; most NL concepts are universal and fully inter-translatable, but
without fully learning the culture, one cannot translate all possible expressions, proverbial
or not; so not all statements can be translated between languages, or they would have different meanings when the translation is attempted. The same (though weaker) is true for the more restricted
programming languages (the situation here is better due to the restricted nature of
PLs). While the most universal algebras and commands can be easily inter-translated via
a basic assembly form, not all assembly instances support all features of all architectures.
Not all high-level language constructs can be translated to others (some kind of virtualization or
wrapping can be done to make statements from one PL available in another).
Thus, complete translation may not be possible between logics, just like between natural
or programming languages since some statements do not have universal meaning across these
logics (or languages). However, any two minimal inter-translatable logic systems describing
the same model may represent indistinguishable possible-logic-world intensional individuals [508]. This assertion is in part supported by Assels's refutation of the logic of global
conventionalism (GC) by showing GC's inconsistency [30] in 1985.
Chapter 4
The Lucid Programming Language
Family
In this chapter we review the background on the Lucid family of languages
(which we often collectively refer to as “Lucid” instead of implying the specific “Original
Lucid” of Ashcroft and Wadge from the 1970s–90s) as instantiations of various HOIL realizations, from the overview (Section 4.1), to the historical background and related work
(Section 4.2), to the dialect spectrum (Section 4.3). We then summarize the findings in
Section 4.4. Lucid is central to this thesis: its lazy stream-processing semantics and ease
of expression, as well as its solid theoretical and mathematical research base, make it ideal for
scalable knowledge representation and reasoning over any streamed data.
4.1
Lucid Overview
Lucid [24, 25, 26, 27, 509] is a dataflow intensional and functional programming language. In
fact, it is now a family of languages that are built upon intensional logic (which in turn can be
understood as a multidimensional generalization of temporal logic, cf. Section 3.2, page 59)
promoting a context-aware demand-driven parallel computation model [304]. A program written in some Lucid dialect is an expression that may have subexpressions that need to be
evaluated in a certain context. Given a set of dimensions DIM = {dimension_i}, in which
an expression varies, and a corresponding set of indexes, or tags, defined as placeholders
over each dimension, the context is represented as a set of ⟨dimension : tag⟩ mappings. Each
variable in Lucid, often called a stream, is evaluated in that defined context that may also
evolve using context operators [305, 307, 312, 365, 473, 513, 515]. The first generic version
of Lucid, the General Intensional Programming Language (GIPL) [361], defines two basic
operators @ and # to navigate (switch and query) in the context space [305, 307, 312]. The
GIPL is the first (with the second being Lucx [365, 473, 513, 515], and the third TransLucid [379]) generic programming language of all intensional languages, defined by means
of only the two mentioned intensional operators, @ and # [305, 307, 312]. It has been proven
that other intensional programming languages of the Lucid family can be translated into
the GIPL [305, 307, 312, 361]. A recent similar notion to GIPL called TransLucid was
designed by Plaice and colleagues [90, 378, 379, 396]. Plaice and Paquet give a succinct yet
comprehensive introduction to Lucid and intensional programming in their tutorial [380] in
1995, which is still recommended reading today. A more recent (2008) overview by Plaice
and colleagues can be found in [246, 378].
4.1.1
Sample Syntax and Semantics
The fundamental syntax and semantics of Lucid are rather simple, allowing easier compiler
implementation and human comprehension of Lucid programs. This also allows flexible
extension in various dialects for application-domain-specific purposes, using the core baseline
as a sound and complete building block. Similarly, as pioneered by GLU, Lucid is relatively
easy to marry to imperative programming languages to allow the mutual advantage of eductive
evaluation and the availability of rich libraries. These are the aspects that contributed to the
choice of this language to be extended in this thesis. What follows are examples of syntax
and semantics of the Lucid dialects relevant to us. For illustratory purposes, in the example
excerpts that follow we unify the definitions in a hypothetical language that we call G≥,
which is the union of the necessary features of the Lucid dialects we inherit from. (Further in this
thesis, we define Forensic Lucid in much more concrete terms and detail in Chapter 7.) To
say it simply, G≥ ≡ min (GIPL, Indexical Lucid, Objective Lucid, JOOIP, MARFL).
We briefly discuss these dialects in this chapter, and present their concrete features we borrow
in Chapter 7.
4.1.1.1
Lucid Syntax
Example syntaxes of G≥ for expressions, definitions, and operators are presented in Figure 20
and Figure 21 for both GIPL and Indexical Lucid respectively to remind the reader of
their structure [305]. The concrete syntax of the GIPL in Figure 20 has been amended
to support the iseod operator of Indexical Lucid for completeness and is influenced by
the productions from Lucx [515] to allow contexts as first-class values while maintaining
backward compatibility to the GIPL language proposed by Paquet earlier [305, 361]. In
Figure 22 is a simple program illustrating a Lucid expression [305].
E ::= id
    | E (E, . . . , E)
    | E [E, . . . , E](E, . . . , E)
    | if E then E else E fi
    | #E
    | E @ [E : E]
    | E @ E
    | E where Q end;
    | [E : E, . . . , E : E]
    | iseod E;

Q ::= dimension id, . . . , id;
    | id = E;
    | id(id, . . . , id) = E;
    | id[id, . . . , id](id, . . . , id) = E;
    | Q Q

Figure 20: Sample G≥ syntax expressions
4.1.1.2
Operational Semantics for Lucid
In the implementing system, GIPSY (see Chapter 6), GIPL is the generic counterpart of
all the Lucid programming languages. Like Indexical Lucid, which it is derived from,
it has only the two standard intensional operators: E @ C for evaluating an expression E
in context C, and #d for determining the position in dimension d of the current context
of evaluation in the context space [361]. SIPLs are Lucid dialects (Specific Intensional
Programming Languages) with their own attributes and objectives. Theoretically, all SIPLs
can be translated into the GIPL [361]. Here for convenience we provide the semantic rules
of GIPL and Indexical Lucid [361] and Lucx [513]. The operational semantics of GIPL
is presented in Figure 23. The excerpt of semantic rules of Lucx is then presented as a
op              ::= intensional-op | data-op
intensional-op  ::= i-unary-op | i-binary-op
i-unary-op      ::= first | next | prev
i-binary-op     ::= fby | wvr | asa | upon
data-op         ::= unary-op | binary-op
unary-op        ::= ! | - | iseod
binary-op       ::= arith-op | rel-op | log-op
arith-op        ::= + | - | * | / | %
rel-op          ::= < | > | <= | >= | == | !=
log-op          ::= && | ||

Figure 21: Sample G≥ operators
N @.d 2
where
dimension d;
N = 42 fby.d (N + 1);
end
Figure 22: Classical natural-numbers example [361]
conservative extension to GIPL in Figure 24 [300, 304].
Following is the description of the GIPL semantic rules as presented in [361]:
D⊢E:v
(4.1.1.1)
tells that under the definition environment D, expression E would evaluate to value v.
D, P ⊢ E : v
(4.1.1.2)
specifies that in the definition environment D, and in the evaluation context P (sometimes
also referred to as a point in the context space), expression E evaluates to v. The definition
environment D retains the definitions of all of the identifiers that appear in a Lucid program,
as created with the semantic rules 4.1.1.16–4.1.1.19 in Figure 23. It is therefore a partial
function
D : Id → IdEntry
(4.1.1.3)
where Id is the set of all possible identifiers and IdEntry, summarized in Table 1, has five
possible kinds of values, one for each of the kinds of identifier [300, 304]:
• Dimensions define the coordinate pairs, in which one can navigate with the # and @
operators. Their IdEntry is simply (dim) [361].
• Constants are external entities that provide a single value, regardless of the context of
evaluation. Examples are integers and Boolean values. Their IdEntry is (const, c),
where c is the value of the constant [361].
• Data operators are external entities that provide memoryless functions. Examples are
the arithmetic and Boolean functions. The constants and data operators are said to
define the basic algebra of the language. Their IdEntry is (op, f ), where f is the
function itself [361].
• Variables carry the multidimensional streams. Their IdEntry is (var, E), where E is
the Lucid expression defining the variable. It should be noted that this semantics
makes the assumption that all variable names are unique. This constraint is easy to
overcome by performing compile-time renaming or using a nesting level environment
scope when needed [361].
• Functions are non-recursive user-defined functions. Their IdEntry is (func, idi , E),
where the idi are the formal parameters to the function and E is the body of the
function [361].
Table 1: Possible identifier types [361]

type        form
dimension   (dim)
constant    (const, c)
operator    (op, f)
variable    (var, E)
function    (func, idi, E)
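To make the shape of the definition environment more concrete, the following is a minimal illustrative sketch in Java (the implementation language of GIPSY); it is not GIPSY's actual implementation, and all class and method names here are hypothetical, with expressions abbreviated as strings rather than ASTs:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;

    // Hypothetical sketch of the five kinds of IdEntry from Table 1.
    interface IdEntry {}
    record DimEntry() implements IdEntry {}                                     // (dim)
    record ConstEntry(Object value) implements IdEntry {}                       // (const, c)
    record OpEntry(Function<Object[], Object> f) implements IdEntry {}          // (op, f)
    record VarEntry(String definingExpression) implements IdEntry {}            // (var, E)
    record FuncEntry(List<String> formals, String body) implements IdEntry {}   // (func, id_i, E)

    // D : Id -> IdEntry (definitions) and P : Id -> N (current evaluation context).
    class Environments {
        final Map<String, IdEntry> D = new HashMap<>();
        final Map<String, Integer> P = new HashMap<>();
    }

The sketch only mirrors the partial functions D and P described in the surrounding text; the actual GIPSY data structures are richer.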
The evaluation context P, which is changed when the @ operator is evaluated, or a dimension is declared in a where clause, associates a tag (i.e., an index) to each relevant dimension.
It is, therefore, a partial function
P : Id → N
(4.1.1.4)
Each type of identifier can only be used in the appropriate situations. Identifiers of type op,
func, and dim evaluate to themselves (Figure 23, rules 4.1.1.6, 4.1.1.7, 4.1.1.8). Constant
identifiers (const) evaluate to the corresponding constant (Figure 23, rule 4.1.1.5). Function
calls, resolved by the Efct rule (Figure 23, rule 4.1.1.11), require the renaming of the formal
parameters into the actual parameters (as represented by E′[idi ← Ei]). The function P′ =
P†[id ↦ v′′] specifies that P′(x) is v′′ if x = id, and P(x) otherwise. The rule for the
where clause, Ew (Figure 23, rule 4.1.1.16), which corresponds to the syntactic expression
E where Q, evaluates E using the definitions Q therein. The additions to the definition
environment D and context of evaluation P made by the Q rules (Figure 23, rules 4.1.1.17,
4.1.1.18, 4.1.1.19) are local to the current where clause. This is represented by the fact that
the Ew rule returns neither D nor P. The Qdim rule adds a dimension to the definition
environment and, as a convention, adds this dimension to the context of evaluation with
the tag 0 (Figure 23, rule 4.1.1.17). The Qid and Qfid simply add variable and function
identifiers along with their definition to the definition environment (Figure 23, rules 4.1.1.18,
4.1.1.19) [300, 304, 361].
As a conservative extension to GIPL, Lucx's semantics introduced the context as a first-class value, as described by the rules in Figure 24. The semantic rule 4.1.1.22 (Figure 24)
creates a context as a semantic item and returns it as a context P that can then be used
by the rule 4.1.1.23 to navigate to this context by making it override the current context.
The semantic rule 4.1.1.21 expresses that the # symbol evaluates to the current context.
When used as a parameter to the context calculus operators, this allows for the generation
of contexts relative to the current context of evaluation [361, 513, 515].
Ecid  :  D(id) = (const, c)  ⟹  D, P ⊢ id : c                                      (4.1.1.5)

Eopid :  D(id) = (op, f)  ⟹  D, P ⊢ id : id                                        (4.1.1.6)

Edid  :  D(id) = (dim)  ⟹  D, P ⊢ id : id                                          (4.1.1.7)

Efid  :  D(id) = (func, idi, E)  ⟹  D, P ⊢ id : id                                 (4.1.1.8)

Evid  :  D(id) = (var, E)   D, P ⊢ E : v  ⟹  D, P ⊢ id : v                         (4.1.1.9)

Eop   :  D, P ⊢ E : id   D(id) = (op, f)   D, P ⊢ Ei : vi
         ⟹  D, P ⊢ E(E1, . . . , En) : f(v1, . . . , vn)                           (4.1.1.10)

Efct  :  D, P ⊢ E : id   D(id) = (func, idi, E′)   D, P ⊢ E′[idi ← Ei] : v
         ⟹  D, P ⊢ E(E1, . . . , En) : v                                           (4.1.1.11)

EcT   :  D, P ⊢ E : true   D, P ⊢ E′ : v′  ⟹  D, P ⊢ if E then E′ else E′′ : v′    (4.1.1.12)

EcF   :  D, P ⊢ E : false   D, P ⊢ E′′ : v′′  ⟹  D, P ⊢ if E then E′ else E′′ : v′′  (4.1.1.13)

Etag  :  D, P ⊢ E : id   D(id) = (dim)  ⟹  D, P ⊢ #E : P(id)                       (4.1.1.14)

Eat   :  D, P ⊢ E′ : id   D(id) = (dim)   D, P ⊢ E′′ : v′′   D, P†[id ↦ v′′] ⊢ E : v
         ⟹  D, P ⊢ E @E′ E′′ : v                                                   (4.1.1.15)

Ew    :  D, P ⊢ Q : D′, P′   D′, P′ ⊢ E : v  ⟹  D, P ⊢ E where Q : v               (4.1.1.16)

Qdim  :  D, P ⊢ dimension id : D†[id ↦ (dim)], P†[id ↦ 0]                          (4.1.1.17)

Qid   :  D, P ⊢ id = E : D†[id ↦ (var, E)], P                                      (4.1.1.18)

Qfid  :  D, P ⊢ id(id1, . . . , idn) = E : D†[id ↦ (func, idi, E)], P              (4.1.1.19)

QQ    :  D, P ⊢ Q : D′, P′   D′, P′ ⊢ Q′ : D′′, P′′  ⟹  D, P ⊢ Q Q′ : D′′, P′′     (4.1.1.20)

Figure 23: Extract of operational semantics rules of GIPL [361]
4.1.2
Streaming and Basic Operators
The origins of Lucid date back to 1974 [361, 380]. At that time, Ashcroft and Wadge were
working on a purely declarative language, in which iterative algorithms could be expressed
naturally, which eventually resulted in non-procedural iterative Lucid [25, 26, 27]. Their
work further fit into the broad area of research into program semantics and verification.
Later it turned out that their work is also relevant to the dataflow networks and coroutines
E#(cxt)            :  D, P ⊢ # : P                                                 (4.1.1.21)

Econstruction(cxt) :  D, P ⊢ Edj : idj   D(idj) = (dim)   D, P ⊢ Eij : vj
                      P′ = P0†[id1 ↦ v1]†. . .†[idn ↦ vn]
                      ⟹  D, P ⊢ [Ed1 : Ei1, Ed2 : Ei2, . . . , Edn : Ein] : P′     (4.1.1.22)

Eat(cxt)           :  D, P ⊢ E′ : P′   D, P†P′ ⊢ E : v  ⟹  D, P ⊢ E @ E′ : v       (4.1.1.23)

E.                 :  D, P ⊢ E2 : id2   D(id2) = (dim)
                      ⟹  D, P ⊢ E1.E2 : tag(E1 ↓ {id2})                            (4.1.1.24)

Etuple             :  D, P ⊢ E : id   D†[id ↦ (dim)]   P†[id ↦ 0]   D, P ⊢ Ei : vi
                      ⟹  D, P ⊢ ⟨E1, E2, . . . , En⟩E : v1 fby.id v2 fby.id . . . vn fby.id eod   (4.1.1.25)

Eselect            :  E = [d : v′]   E′ = ⟨E1, . . . , En⟩d   P′ = P†[d ↦ v′]   D, P′ ⊢ E′ : v
                      ⟹  D, P ⊢ select(E, E′) : v                                  (4.1.1.26)

Eat(s)             :  D, P ⊢ C : {P1, . . . , Pm}   D, Pi:1...m ⊢ E : vi
                      ⟹  D, P ⊢ E @ C : {v1, . . . , vm}                           (4.1.1.27)

Cbox               :  D, P ⊢ Edi : idi   D(idi) = (dim)
                      {E1, . . . , En} = dim(P1) = . . . = dim(Pm)
                      E′ = fp(tag(P1), . . . , tag(Pm))   D, P ⊢ E′ : true
                      ⟹  D, P ⊢ Box [E1, . . . , En | E′] : {P1, . . . , Pm}        (4.1.1.28)

Cset               :  D, P ⊢ Ew:1...m : Pw  ⟹  D, P ⊢ {E1, . . . , Em} : {P1, . . . , Pw}   (4.1.1.29)

Cop                :  D, P ⊢ E : id   D(id) = (cop, f)   D, P ⊢ Ci : vi
                      ⟹  D, P ⊢ E(C1, . . . , Cn) : f(v1, . . . , vn)               (4.1.1.30)

Csop               :  D, P ⊢ E : id   D(id) = (sop, f)   D, P ⊢ Ci : {vi1, . . . , vik}
                      ⟹  D, P ⊢ E(C1, . . . , Cn) : f({v11, . . . , v1s}, . . . , {vn1, . . . , vnm})   (4.1.1.31)

Figure 24: Extract of operational semantics of Lucx [473, 513]
of Kahn and MacQueen [202, 203]. In the original Lucid (whose operators are in this
font), streams were defined in a pipelined manner, with two separate definitions: one for
the initial element, and another one for the subsequent elements [361, 380]. For example, the
equations
first X = 0
next X  = X + 1
define the variable X to be a stream, such that
x0 = 0
xi+1 = xi + 1

In other words,

0 = (0, 0, 0, . . . , 0, . . .)
X = (x0, x1, . . . , xi, . . .) = (0, 1, . . . , i, . . .)
Similarly, the equations
first Y = X
next Y  = Y + next X
define variable Y to be the running sum of X, i.e.,
y 0 = x0
yi+1 = yi + xi+1
That is,
Y = (y0, y1, . . . , yi, . . .) = (0, 1, . . . , i(i+1)/2, . . .)
According to Paquet, it then became clear that a “new” operator at the time, fby (followed
by), can be used to define such typical situations [361, 380], allowing the above two variables
to be defined as follows:

X = 0 fby X + 1
Y = X fby Y + next X
As a result, Plaice and Paquet summarized the three basic operators of the original Lucid [361, 380]:
Definition L1
If X = (x0 , x1 , . . . , xi , . . .) and Y = (y0 , y1 , . . . , yi , . . .), then
(1) first X    =def  (x0, x0, . . . , x0, . . .)
(2) next X     =def  (x1, x2, . . . , xi+1, . . .)
(3) X fby Y    =def  (x0, y0, y1, . . . , yi−1, . . .)
Paquet and Plaice further drew parallels to the list operations of LISP dialects, where first
corresponds to head, next corresponds to tail, and fby corresponds to cons [361, 380]. This
is especially useful when translating Gladyshev's Common Lisp implementation (see [137])
into Forensic Lucid (Section 9.3, page 250). They further state that when these operators
are combined with Landin’s ISWIM [223] (If You See What I Mean), which is essentially
typed λ-calculus with some syntactic sugar, it becomes possible to define complete Lucid
programs [361, 380]. The following three derived operators have turned out to be very useful
(we will use them later in the text) [361, 380]:
Definition L2
(1) X wvr Y    =def  if first Y then X fby (next X wvr next Y)
                     else (next X wvr next Y)
(2) X asa Y    =def  first (X wvr Y)
(3) X upon Y   =def  X fby (if first Y then (next X upon next Y)
                            else (X upon next Y))

where wvr stands for whenever, asa stands for as soon as, and upon stands for advances
upon.
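To make these stream semantics concrete for readers more familiar with conventional languages, the following is a minimal illustrative sketch in Java (the implementation language of GIPSY and MARF) of first, next, fby, and wvr over lazy infinite streams. It is not part of any actual Lucid implementation; all names are hypothetical:

    import java.util.function.Supplier;

    // A lazy, infinite Lucid-like stream: a head plus a deferred tail.
    final class LazyStream<T> {
        final T head;
        final Supplier<LazyStream<T>> tail;
        LazyStream(T head, Supplier<LazyStream<T>> tail) { this.head = head; this.tail = tail; }

        static <T> LazyStream<T> first(LazyStream<T> x) {                // (x0, x0, x0, ...)
            return new LazyStream<>(x.head, () -> first(x));
        }
        static <T> LazyStream<T> next(LazyStream<T> x) {                 // (x1, x2, x3, ...)
            return x.tail.get();
        }
        static <T> LazyStream<T> fby(LazyStream<T> x, LazyStream<T> y) { // (x0, y0, y1, ...)
            return new LazyStream<>(x.head, () -> y);
        }
        static <T> LazyStream<T> wvr(LazyStream<T> x, LazyStream<Boolean> y) { // keep xi where yi is true
            return y.head ? new LazyStream<>(x.head, () -> wvr(next(x), next(y)))
                          : wvr(next(x), next(y));
        }
    }

For example, the natural numbers N = 0 fby N + 1 can be expressed as a recursive LazyStream whose tail increments the head, and keeping only the elements of N at even positions corresponds to N wvr B for a Boolean stream B that alternates true/false; this merely mirrors Definitions L1 and L2 rather than reproducing any concrete Lucid interpreter.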
4.1.3
Random Access to Streams
In the beginning, with the original Lucid operators, one could only define programs with
pipelined dataflows, i.e., where the (i + 1)-th element in a stream is only computed once
the i-th element has been computed. This situation was deemed potentially wasteful, since
the i-th element might not necessarily be needed. More importantly, it only allows sequential
access into streams [361, 380].
In order to have random access into streams, an index # corresponding to the current
position (the current context of evaluation) was created [361, 380]. This step turned the infinite extensions (streams) into constrained intensions, defining computation according to a context (originally a simple single integer) [361, 380]. Effectively, intensional programming was
born [361, 380]. As such, Paquet and Plaice redefined all original Lucid operators in terms
of the operators # and @ (and Paquet [361] established their equivalence):
Definition L3
(1) #        =def  0 fby (# + 1)
(2) X @ Y    =def  if Y = 0 then first X
                   else (next X) @ (Y − 1)
We re-examine again these and other operators in Forensic Lucid in Chapter 7.
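Continuing the illustrative Java sketch introduced above (again, hypothetical names only, not a real interpreter), the position stream # and random access via @ can be modeled directly on lazy streams:

    // Random access into a lazy stream, mirroring Definition L3:
    //   #      = 0 fby (# + 1)            -- the stream of positions 0, 1, 2, ...
    //   X @ Y  = if Y = 0 then first X else (next X) @ (Y - 1)
    final class Indexing {
        static LazyStream<Integer> hash() {                        // # : (0, 1, 2, ...)
            return countFrom(0);
        }
        private static LazyStream<Integer> countFrom(int i) {
            return new LazyStream<>(i, () -> countFrom(i + 1));
        }
        static <T> T at(LazyStream<T> x, int y) {                  // X @ y, for a single integer tag y
            return (y == 0) ? x.head : at(LazyStream.next(x), y - 1);
        }
    }

Here at demands only the elements needed to reach position y, which is the point of the random-access (intensional) view: evaluation is driven by the requested context rather than by producing the whole pipeline.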
4.1.4
Eductive Model of Computation
The first operational model for computing Lucid programs was designed independently by
Cargill at the University of Waterloo and May at the University of Warwick, based directly
on the formal semantics of Lucid, itself based on Kripke models and possible-worlds semantics [218, 219]. This technique was later extended by Ostrum for the implementation of the
Luthid interpreter [354]. While Luthid was tangential to standard Lucid, its implementation model was later adopted as a basis for the design of the pLucid interpreter by Faustini
and Wadge [110]. This program evaluation model is now called eduction and opens doors for
distributed execution [302] of such programs [161, 271, 362, 452, 454, 455].
In Figure 25 [264] is the trace represented as an eduction tree during execution of the
Objective Lucid program in Figure 26. In this figure, the outermost boxes labeled {d:0}
etc. represent the current context of evaluation, gray rectangular boxes with expressions
represent demands for values to be computed under that context, and the red (or dark gray in a black-and-white image) boxes with
the terminal bullets next to them represent the results of the computation of the expressions.
In our proposed solution, such an eduction tree can be adapted to back-tracing in forensic
evaluation, e.g., when the inner-most results are traced back to the final result of the entire
computation [305].
The concept of eduction can be described as a “tagged-token demand-driven dataflow”
[503] computing paradigm (whereupon Lucid influenced a popular media platform and language called PureData [388]). The central concept to this model of execution is the notion
of generation, propagation, and consumption of demands and their resulting values. Lucid
Figure 25: Eduction tree as a trace for the natural-numbers problem in Objective Lucid [264]
programs are declarative programs where every identifier is defined as a HOIL expression using other identifiers and an underlying algebra. An initial demand for the value of a certain
identifier is generated, and the eduction engine, using the defining expression of this identifier, generates demands for the constituting identifiers of this expression, on which operators
are applied in their embedding expressions. These demands in turn generate other demands,
until some demands eventually evaluate to values, which are then propagated back along
the chain of demands; operators are applied to compute expression values until, eventually,
the value of the initial demand is computed and returned [302].
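As an illustration of this demand-driven model, here is a minimal, hypothetical Java sketch of an eduction loop with memoization in a warehouse (a cache of previously computed results); it is not GIPSY's actual engine, and all names are assumptions for illustration only:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.BiFunction;

    // Hypothetical sketch of eduction: demands are (identifier, context) pairs,
    // results are memoized in a "warehouse" so repeated demands are not recomputed.
    final class EductionEngine {
        // An identifier's defining expression: given the engine and a context, produce a value.
        interface Definition extends BiFunction<EductionEngine, Map<String, Integer>, Object> {}

        private final Map<String, Definition> program = new HashMap<>();
        private final Map<String, Object> warehouse = new HashMap<>();   // cache of computed demands

        void define(String id, Definition def) { program.put(id, def); }

        // demand(id, context): look up the warehouse first, otherwise evaluate and store.
        Object demand(String id, Map<String, Integer> context) {
            String key = id + context;                                   // naive demand key
            Object cached = warehouse.get(key);
            if (cached != null) return cached;                           // warehouse hit
            Object value = program.get(id).apply(this, context);         // educe (evaluate) the definition
            warehouse.put(key, value);                                   // store for future demands
            return value;
        }
    }

For instance, the natural-numbers stream N = 42 fby.d (N + 1) of Figure 22 could be defined so that demanding N at {d:2} issues a demand for N at {d:1}, which in turn demands N at {d:0} (the base case 42); the warehouse then holds 42, 43, and 44, and any further demand at those contexts is a cache hit.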
Lucid identifiers and expressions inherently vary in a multidimensional context space, i.e.,
any identifier or expression can be evaluated in a multidimensional context, thus leading
to identifiers and expressions representing a set of values, one value for each possible
context in which the identifier or expression can be evaluated. This brings in the notion
of intensionality, where identifiers are defined by intensional expressions, i.e., expressions
whose evaluation varies in a multidimensional context space, which can then be constrained
by a particular multidimensional context specification. Note that Lucid variables and expressions represent “dimensionally abstract” concepts, i.e., they do not explicitly mention their
dimensionality. For example, Newton’s Law of Universal Gravitation
F = (G · m1 · m2)/(r · r)
(4.1.4.1)
can be written literally in Lucid as [302]:
F = (G * m1 * m2) / (r * r);
and can then be evaluated in different dimensional manifolds (i.e., n-dimensional spaces),
keeping the same definition, but being evaluated in contexts varying in their dimensionality.
For example, F can be evaluated in a one-dimensional space, yielding a single scalar, or in a
three-dimensional manifold, yielding a three-dimensional vector. Note that a time dimension
could also be added where, for example, the masses (m1 and m2) and/or the distance between
them (r) can be defined to vary in time. In this case, the expression will inherently vary
in the time dimension since some of its constituents vary in this dimension [302].
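As a rough illustration of this dimensional abstraction (a hypothetical Java sketch, not GIPSY code; all names are illustrative), the same defining expression can be evaluated against contexts where the constituents vary along a time dimension:

    import java.util.function.Function;

    final class GravitySketch {
        static final double G = 6.674e-11;

        // Each constituent is an intension: a mapping from a context (here, a time tag) to a value.
        static double F(Function<Integer, Double> m1, Function<Integer, Double> m2,
                        Function<Integer, Double> r, int t) {
            return (G * m1.apply(t) * m2.apply(t)) / (r.apply(t) * r.apply(t));
        }

        public static void main(String[] args) {
            Function<Integer, Double> m1 = t -> 5.0e24;            // constant mass
            Function<Integer, Double> m2 = t -> 7.0e22;
            Function<Integer, Double> r  = t -> 3.8e8 + 1.0e6 * t; // distance drifts with time
            System.out.println(F(m1, m2, r, 0));                   // F at time tag 0
            System.out.println(F(m1, m2, r, 10));                  // F at time tag 10
        }
    }

The defining expression of F never mentions the time dimension; only the contexts in which its constituents are demanded do, which is the essence of the dimensionally abstract definitions described above.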
4.1.5
Related Languages
Some other languages can be referred to as intensional even though they may not refer to
themselves as such, and were born after Lucid (which began in 1974). Examples include
hardware-description languages (HDLs, which appeared in 1977), where the notion of time is often
the only “dimension” and usually progresses only forward, e.g., Verilog and VHDL [342].
Another branch of newer languages that became popular is aspect-oriented programming
(AOP) languages (e.g., AspectJ [29] and SHL [228]) and the Web Service Description Language
(WSDL), which can have an explicit notion of context, but are primarily focused on the software
engineering aspects of software evolution and maintainability [300]. Yet another set of stream-processing languages influenced by Lucid includes Miller Puckette's PureData [388] (open-source) and its commercial equivalent Jitter in Max/MSP [74, 75].
4.2
Related Work
In this section we briefly review some of the related work on Lucid, its dialects, and
developments. The work we draw on the most is explained in further detail in Section 4.3,
as well as in [364], where a list of publications, dialects, and related work is
maintained and periodically updated. A lot of the related work was done earlier and
summarized in the ISLIP proceedings [131, 350]; some of it was reviewed in the
preceding chapter and is still of relevance today.
Lucid was influenced by Landin's ISWIM [223] from his work The Next 700 Programming Languages,
producing in the end contexts, the where clause, etc. (ISWIM had influence elsewhere as well,
e.g., SQL, Z, and others). Lucid was designed to model change that happens to various abstract
objects and phenomena [380].
A very good overview of Lucid, Indexical Lucid, etc., in the context of WWW as an
illustrative example is given in the tutorial by Plaice and Paquet in [380], including multidimensional contexts, higher-order functions, and examples. Wadge discusses Kripke's
possible-worlds semantics of Lucid in the OO context [507]. Wadge and Yoder further
reviewed the possible-world semantics in their work Possible-World Wide Web [512].
One of the application domains Lucid was proposed for was verification and reasoning
about multidimensional programs by Ashcroft [23]. Ma and Orgun proposed MULTRAN
program verification with temporal logic [244]. Yamane proposed real-time OO specification
and verification techniques [533].
There were naturally a number of works on hybrid intensional-imperative programming. The two paradigms have a generally poor interface with each other: on the one hand
are conventional imperative programming languages that have no room for multidimensionality or intensional or demand-driven evaluation; on the other hand are existing multidimensional
languages that cannot take advantage of imperative features and techniques. Developed
over years of research, such combinations typically result in much better performance [526].
Liu and Staples proposed introduction of logic constructs into the imperative procedural languages [238] and Rondogiannis followed up with multidimensional additions to the procedural
languages [406]. Rondogiannis and Wadge also proposed a way to extend what they call the
intensionalization algorithm to a broader class of higher-order programs [408]. The GLU#
approach embeds a small multidimensional core in a mainstream object-oriented programming language (C++) [359]. This way, multidimensionality can be supported without changing
the syntax and semantics of the host language. GLU# is a small subset of GLU
and is implemented as a collection of C++ classes and class templates. All operators
in Lucid appear as C++ functions. GLU# does not support Lucid functions; however,
programmers are able to use lazy method templates in C++ to use C++ functions in
GLU#. GLU# provided a bridge between Lucid and OO [526]. The concept of objects in Lucid first appeared in [122] in the early 1990s. In the later 1990s, Peter Kropf and
John Plaice addressed this topic in their paper on “intensional objects” [221]. In that paper,
intensional objects are considered as openable boxes labeled by Lucid contexts. That work
focuses on intensional versioning, whose task is to build a system from versioned components
that are already sitting in the intensional value warehouse (a cache of the previously computed results). This warehouse is different from the warehouse in intensional programming:
the latter is like a cache to improve performance, whereas the former contains the source of
everything; it is like a “catalog” or a “repository” into which the boxes are put. Each box
has some contents and a tag, which is a context. Thus, in this approach, these labeled boxes are
called intensional objects, which are re-openable and re-packageable [526]. In [93], there is
another discussion on issues about object-oriented implementation of intensional languages.
In that approach, each variable in a Lucid program is considered as a class, and an object
of a class is a variable in a context. Each variable definition in a Lucid program is compiled into a C++ class definition that has the same name as the variable. This approach
focuses on the implementation level by creating a class for each Lucid variable, which helps the
system to execute in a distributed manner. However, the objects introduced there do not
contain information from C++ variables [526]. Lu created similar identifier-context (IC)
classes in the multithreaded GEE PoC implementation [241]. Grogono proposed Onyx for
multidimensional lazy arrays [144].
On the scientific application side, Plaice proposed particle-in-cell simulations with Lucid [376] and Paquet with his Scientific Intensional Programming work [361] proposed Tensor Lucid for plasma physics simulations.
Paquet and Plaice proposed the use of Lucid constructs to handle fine amount records
in relational databases [360, 367].
Le proposed a notion of a Fuzzy Temporal Prolog. While we do not deal with Prolog in this
work, the fuzzy temporal aspect is of relevance to Forensic Lucid. Knowledge representation and temporal knowledge-based simulation, subsequently proposed by Liu and Orgun in
Chronolog, is also work related to Forensic Lucid. Panayiotopoulos also proposed temporal reasoning with TRL [358]. At the same time, Androutsopoulos discussed the temporal
meaning representation in a natural language front-end [18].
Paquet and Plaice followed by Wan investigated the semantics of dimensions as values [368, 513]. For nested hierarchical contexts the related work includes the work by Schraefel
et al . [417, 511] on various hypertext representational and modeling issues.
Besnard et al . [42] proposed a design of a multi-formalism application and distribution
in a dataflow context via an example. Gagné and Plaice have further expanded on the topic
of real-time demand-driven computing [126].
Faustini established the equivalence of denotational and operational semantics in pure
dataflows [108], which can be useful when relying on the relevant denotational formalism
elsewhere (e.g., [532]), as most of Lucid's semantics was traditionally done in the
operational fashion. Uustalu and Vene in a more recent review go over the notion of dataflow
programming [478].
On the compiler and toolset side, Swoboda and Wadge produced a set of intensionalization tools such as vmake and libintense [455], and Mark did a basic PoC interpreter in Haskell [250].
4.3
Lucid Dialects
Here we briefly review some of the core influential dialects contributing to the construction
of Forensic Lucid in addition to the information provided in Section 4.1. A great deal is
devoted to the dialects and related publications in [364].
4.3.1
Lucx
Wan's Lucx [513, 515] (which stands for Lucid enriched with context) is a fundamental extension of GIPL and the Lucid family as a whole that promotes contexts to first-class
values, thereby creating a “true” generic Lucid language. We recited its semantics in Figure 24 in Section 4.1, page 76. Wan [513, 515] defined a new collection of set operators (e.g.,
union, intersection, box, etc.) on the multidimensional contexts, which will help with the
multiple explanations of the evidential statements in forensic evaluation, where the context
sets are often defined as cross products (boxes), intersections, and unions. Lucx's further
specification, refinement, and implementation details were produced by Tong [365, 473] in
2007–2008 based on Wan's design [302, 305].
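As a rough illustration of what such context set operators compute (a hypothetical sketch only, not Lucx's or GIPSY's actual context calculus API; all names are illustrative), contexts can be viewed as maps of ⟨dimension : tag⟩ pairs, and a box is the cross product of tag sets over several dimensions:

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    final class ContextCalculusSketch {
        // A simple context: dimension name -> tag (here, integer tags).
        // box({d1 -> {1,2}, d2 -> {0,1}}) = { {d1:1,d2:0}, {d1:1,d2:1}, {d1:2,d2:0}, {d1:2,d2:1} }
        static List<Map<String, Integer>> box(Map<String, List<Integer>> tagSets) {
            List<Map<String, Integer>> result = new ArrayList<>();
            result.add(new LinkedHashMap<>());                      // start with the empty context
            for (Map.Entry<String, List<Integer>> dim : tagSets.entrySet()) {
                List<Map<String, Integer>> next = new ArrayList<>();
                for (Map<String, Integer> partial : result) {
                    for (Integer tag : dim.getValue()) {            // extend each partial context
                        Map<String, Integer> extended = new LinkedHashMap<>(partial);
                        extended.put(dim.getKey(), tag);
                        next.add(extended);
                    }
                }
                result = next;
            }
            return result;
        }
    }

Union and intersection of such context sets are then the ordinary set operations over these maps; in Forensic Lucid this is how a box of possible ⟨dimension : tag⟩ combinations can enumerate alternative explanations of the evidence.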
4.3.2
JLucid, Objective Lucid, and JOOIP
4.3.2.1
JLucid
JLucid [145, 264] was the first attempt at intensional arrays and “free Java functions”
in the GIPSY environment. The approach used the Lucid language as the driving main
computation, where Java methods were peripheral and could be invoked from the Lucid
segment, but not the other way around. This was the first instance of hybrid intensional-imperative programming within GIPSY. The semantics of this approach was not completely
defined; moreover, it was only a single-sided view (Lucid-to-Java) of the problem. JLucid did
not support objects of any kind, but introduced the wrapper class idea to contain the freely
appearing Java methods (in either GIPL or Indexical Lucid) and served as a precursor
to Objective Lucid [304, 312, 526].
4.3.2.2
Objective Lucid
Objective Lucid [262, 264] was a natural extension of the JLucid language mentioned in
the previous section that inherited all of the JLucid’s features and introduced Java objects
to be available for use by Lucid. Objective Lucid expanded the notion of the Java object
(a collection of members of different types) to the array (a collection of members of the
same type) and first introduced the dot-notation in the syntax and operational semantics in
GIPSY (Figure 27). Like in JLucid, Objective Lucid’s focus was on the Lucid part being
the “main” program and did not allow Java to call intensional functions or use intensional
constructs from within a Java class. Objective Lucid was the first in GIPSY to introduce
the more complete operational semantics of the hybrid OO intensional language [304, 312,
526]. Having the arrays and objects allows grouping of the related data items (of the same
#typedecl
Nat42;
#JAVA
class Nat42
{
private int n;
public Nat42()
{
n = 42;
}
public Nat42 inc()
{
n++;
return this;
}
public void print()
{
System.out.println("n = " + n);
}
}
#OBJECTIVELUCID
(N @.d 2).print[d]()
where
dimension d;
N = Nat42[d]() fby.d N.inc[d]();
end
Figure 26: The natural-numbers problem in Objective Lucid [305]
type or different types) together and evaluate them under the same context. In Figure 26 is
the modified example of a demand-driven evaluation of a simple natural-numbers problem
re-written in Objective Lucid [262, 264, 305]; Figure 25 shows the corresponding eduction
trace [264]. In this work, such an eduction tree can be adapted to back-tracing
in forensic evaluation, e.g., when the inner-most results are traced back to the final result of
the entire computation.
Ec−vid :   D, P ⊢ E : id   D, P ⊢ E′ : id′
           D(id) = (class, cid, cdef)   D(id′) = (classv, cid.cvid, vdef)
           D, P ⊢ <cid.cvid> : v
           ⟹  D, P ⊢ E.E′ : v                                                      (4.3.2.1)

Ec−fct :   D, P ⊢ E : id   D, P ⊢ E′ : id′   D, P ⊢ E1, . . . , En : v1, . . . , vn
           D(id) = (class, cid, cdef)   D(id′) = (classf, cid.cfid, fdef)
           D, P ⊢ <cid.cfid(v1, . . . , vn)> : v
           ⟹  D, P ⊢ E.E′(E1, . . . , En) : v                                      (4.3.2.2)

Effid :    D, P ⊢ E : id   D, P ⊢ E1, . . . , En : v1, . . . , vn
           D(id) = (freefun, ffid, ffdef)   D, P ⊢ <ffid(v1, . . . , vn)> : v
           ⟹  D, P ⊢ E(E1, . . . , En) : v                                         (4.3.2.3)

#JAVAobjid :   cdef = Class cid {. . .}
               ⟹  D, P ⊢ cdef : D†[cid ↦ (class, cid, cdef)], P                    (4.3.2.4)

#JAVAobjvid :  cdef = Class cid {. . . vdef . . .}   vdef = public type vid;
               ⟹  D, P ⊢ cdef : D†[cid.vid ↦ (classv, cid.vid, vdef)], P           (4.3.2.5)

#JAVAobjfid :  cdef = Class cid {. . . fdef . . .}
               fdef = public frttype fid(fargtype1 fargid1, . . . , fargtypen fargidn)
               ⟹  D, P ⊢ cdef : D†[cid.fid ↦ (classf, cid.fid, fdef)], P           (4.3.2.6)

#JAVAffid :    ffdef = frttype ffid(fargtype1 fargid1, . . . , fargtypen fargidn)
               ⟹  D, P ⊢ ffdef : D†[ffid ↦ (freefun, ffid, ffdef)], P              (4.3.2.7)

Figure 27: Extract of operational semantics of Objective Lucid [304]
4.3.2.3
JOOIP
Wu’s JOOIP [526, 528] greatly complements Objective Lucid by allowing Java to call
the intensional language constructs, closing the gap and making JOOIP a complete hybrid
OO intensional programming language within the GIPSY environment. JOOIP’s semantics
further refines in a greater detail the operational semantics rules of Lucid and Objective
Lucid in the attempt to make them complete [304, 312, 526]. JOOIP's approach, following
GLU#, is natural since object-oriented languages are known by literally all computer scientists and software engineers; Java especially is a very popular and widely used language
in today's application domains. JOOIP increases the visibility of Intensional Programming [131, 350] (see Section 3.2) and aims to make it more mainstream via a marriage between
the Object-Oriented Programming and Intensional Programming paradigms, allowing a broader
audience to be exposed to the benefits of Intensional Programming [507, 526] within their
favorite OO language. To show the similarities and differences between JOOIP and GLU#,
Wu [528] provided the translation of some of the examples given in [359] into JOOIP for the
purpose of comparison and to show its advantages. A similar embedding of multidimensional
characteristics in a conventional programming language has been proposed by Rondogiannis [407]. In his approach, Java is used as the host language and intensional languages are
embedded into Java as a form of definitional lazy multidimensional arrays. Integration of
Forensic Lucid with JOOIP becomes more relevant when they are considered together for hybrid intensional-imperative programming, in particular for self-forensics presented in Appendix D.
4.3.3
MARFL
MARF Language (MARFL) was conceived and designed to manage the MARF system's runtime configuration (see Section 5.2) as collections of nested name-value configuration option
pairs [272]. While not strictly of the Lucid family or GIPSY, MARFL [272] was nearly
entirely influenced by Lucid. It is based on the overloaded @ and # operators and allows
one to navigate into the depth of the higher-order contextual space using the dot operator (see
Figure 28) [304]. The latter was indirectly (re-invented in part) influenced by iHTML and
libintense [452, 455]. For a detailed discussion of MARFL and its semantics, please refer to
Appendix C. Forensic Lucid adapts this idea of the hierarchical context navigation and
querying with the overloaded @ and # for evidential statements, observation sequences, and
individual observations.
EE.did :  D(E.id) = (dim)  ⟹  D, P ⊢ E.id : id.id                                  (4.3.3.1)

Figure 28: Higher-order context Dot operator of MARFL [304]
4.4
Summary
Since the Lucid family of languages thrived around intensional logic, which makes the notion of
context explicit and central, and, relatively recently in Lucx, a first-class value [365, 473, 513,
515] (that can be passed around as function parameters or return values and has a set
of operators defined upon it), we greatly draw on this notion by formalizing our evidence and
the witness stories as a contextual specification of the incident to be tested for consistency
against the incident model specification. In our specification model we require more than
just atomic context values: we need a higher-order context hierarchy to specify different
levels of detail of the incident and to be able to navigate into the “depth” of such a context.
A similar provision has been made in [272] and earlier works of Swoboda and colleagues
in [452, 453, 454, 455] that needed some modifications to the expressions of the cyberforensic
context [300, 304].
To summarize, expressions written in virtually all Lucid dialects correspond to higher-order intensional logic (HOIL) expressions with some dialect-specific instantiations. They
all can alter the context of their evaluation given a set of operators and, in some cases, types
of contexts, their rank, range, and so on. HOIL combines functional programming and
intensional logics, e.g., temporal intensional logic (Section 3.2, page 59). The contextual
expressions can be passed as parameters and returned as results of a function and constitute
the multi-dimensional constraint on the Lucid expression being evaluated. The corresponding context calculus [365, 473, 513] defines a comprehensive set of context operators, most
of which are set operators; the baseline operators are @ and #, which allow one to switch the
current context or query it, respectively. Other operators allow one to define a context space and
a point in that space corresponding to the current context. The context can be arbitrarily
large in its rank. The identified variables of the dimension type within the context can take
on any data type, e.g., an integer or a string, during lazy binding of the resulting context to
a dimension identifier [302].
Chapter 5
Data Mining and Pattern Recognition
This chapter discusses the relevant background on the data mining and pattern recognition
facet of this research to present some of the supporting related work in the area as well as a
detailed discussion of the devised tools and their results. This discussion is important and
relevant because the data mining techniques are used substantially in digital investigation
with imprecise or encrypted data in files or network packets (netflows) and the PoC tools
presented here are some of the sources of encoded fuzzy classification evidential data that can
be used in reasoning in digital investigation cases. The other motivation is the augmentation
of the tools themselves to support self-forensics (Appendix D) in autonomic environments.
In the detailed discussion we primarily focus on MARF, fileType, MARFCAT, and
MARFPCAT produced and maintained primarily by the author Mokhov. All these are designed to export any of their classification findings and data structures in the declarative
Forensic Lucid format via their corresponding encoders to be a part of the witness testimonies aiding investigations of incidents such as those involving vulnerable software (browsers, etc.)
or malicious software (Chapter 9), and self-forensic investigations (Appendix D). MARFCAT
and MARFPCAT also have problem-specific DGT and DWT in GIPSY (see Chapter 6) for
scalable demand-driven evaluation using GIPSY’s distributed middleware.
This chapter is organized as follows. The related work is referenced in Section 5.1. Brief
descriptions of MARF, MARFCAT, and MARFPCAT are given in Section 5.2, Section 5.4, and
Section 5.5, respectively. Some example classification results for all approaches are presented
as well. A brief summary with concluding remarks then follows in Section 5.6.
5.1
Related Work
There is a great number of related works in the area published over the last two decades or
so. We briefly focus on the subgenre of data mining and pattern recognition works related
to signal, audio, and NL processing, then switching to the various aspects of data mining
applied to computer and network security and digital forensics. The classical works
of data mining, pattern recognition, signal processing, NLP, and the related disciplines are subdisciplines of AI [81, 182, 186, 201, 248, 353, 411]. In reference to the previous background
chapter on Lucid (Chapter 4), multidimensional intensional signal processing was proposed
by Agi using GLU in 1995 [5] and MARFL for MARF by the author Mokhov (Appendix C).
In typical pipelines, after data are loaded and preprocessed (filtered, etc.), the next major
step is the extraction of the features of interest. This can be done automatically and/or
manually to improve the classification precision and recall metrics. For this to work, the best
feature types have to be properly selected, and a few proposals for feature selection and
extraction algorithms were put forward [140, 243], including continual feature selection [426].
The classification stage then comes into play, learning and classifying using a variety of proposed classifiers and their combinations [63, 129, 208].
Chen in 2006 [64] in his book summarized the issues about information sharing and data
mining as intelligence and security informatics disciplines.
5.1.1
Open-Source Tools
There are a number of open-source tools and frameworks primarily done in Java to do data
mining, pattern recognition, signal processing, etc. and that can also integrate with commercial tools and toolboxes, such as that of MATLAB [418] and others. MARF that began in
2002 is detailed in the further sections. It started off as an audio recognition framework by
the author and colleagues, but went beyond that and was applied for image, text, code, data
analysis, and different classification applications, some of which are detailed further in Section 5.3, Section 5.4, and Section 5.5. At around the same time frame, another open-source
system emerged, also done in Java, at CMU called CMU Sphinx [468], which focused on the
reliable and robust tasks for speech recognition (speech-to-text). Around 2006, Weka [469]
emerged; its source code base originally followed the MARF architecture very much.
This system has had a lot of developer support and maintenance and has significantly grown in popularity since. The author Mokhov (while working on the MARFPCAT
branch of the project (detailed further in this chapter) as a part of a malware classification
based on packet-capture (pcap) data project [49]) developed initial MARF plug-in support
to allow Weka's rich classifier set to be used in the MARF pipeline. GATE [462] is a tool
with rich, customizable pipeline support of processing resources (PRs), written in Java and originally developed as a General Architecture for Text Engineering, more specifically to enable
NLP developers to create NLP applications more easily. It has since outgrown the text aspect,
allowing various other technologies to be added for other classification tasks and knowledge
representation, such as Weka, ontologies, and others. MARF's plug-in support for it was also
planned [281].
5.1.2
Network Forensics
There were a number of data mining, machine learning, and resulting classification approaches
used for forensic investigation and analysis of network traffic, especially the encrypted kind
over SSL/TLS, ssh, etc. to identify network-active applications and malware. For example,
Density-Based Spatial Clustering of Applications with Noise (DBSCAN) was proposed [531]
in 2008 to use clustering algorithms to identify various FTP clients, VLC media player, and
UltraVNC traffic over encrypted channels. That was followed by Alshammari et al . [15, 16]
to identify ssh and Skype encrypted traffic (without looking at payload, port numbers, and
IP addresses). Additionally, comparison of algorithms and approaches for network traffic
classification were proposed separately by Alshammari et al . [14] in 2008 and Okada et
al . [346] in 2011 surveying and comparing various machine learning algorithms for encrypted
traffic analysis. Prior to that, in 2000, Lee et al . [232] introduced the notion of adaptive
intrusion detection with data mining techniques, followed by Bloedorn [44] in 2001 of MITRE
with their technical report on data mining for network intrusion detection with data mining
as well. Livadas et al . [239] in 2006 used machine learning techniques to classify botnet
traffic followed by Binsalleeh et al . [43] in 2010 with the very detailed analysis of the Zeus
botnet crimeware toolkit. Simon et al . [430], likewise in 2006, proposed to detect non-obvious
brute-force network scanning using a data mining approach as well. Finally, Boukhtouta et
al. (including the author) [49] proposed network malware identification by machine-learning
captured network malware traffic pcaps as well as benign traffic (regardless of whether the
payload is encrypted or not) via a select set of features and comparing it to the captured traffic
currently going through the network.
5.1.3
Malware Analysis and Classification for Investigations
Some of the malware and malicious code techniques described here are also used in the
network anomaly analysis described in the previous section especially for the network-enabled
malware that propagates, so some techniques described there are also applicable here. In
part, our methodology has some similarities with the related work on automatic
classification of new, unknown malware and malware in general such as viruses, web malware,
worms, spyware, and others where AI pattern recognition and expert system techniques are
successfully used for automatic classification [290].
Schultz et al . [419] in 2001 proposed the data mining techniques to detect new malicious
executables as opposed to traditional signature-based approaches. Sung et al . [451] proposed
the static analysis of vicious executables (SAVE) framework in 2004. In 2007, Bailey, Nazario,
et al . [36] came up with automated analysis and classification of various Internet malware.
Provos et al . [387] in the same year did the web-based malware “ghost in the browser”
analysis. Suenaga proposed malware analysis through linguistics techniques by searching for
ethnic words [445] and Hnatiw et al . [170] proposed techniques for parsing malicious and
malformed executables as well, both in 2007. Rafique et al . [395] followed suit in 2008 and
proposed another approach for automatic adjudication of new malware as well.
In 2007, Hwang et al . [176] proposed an anti-malware expert system. Spectral techniques
are used for pattern scanning in malware detection by Eto et al . in [105] in 2009 where
they propose a malware distinction method based on scan patterns. Subsequently, Inoue et
al. [104, 183] proposed NICTER, a general system for incident analysis with data mining
engines, based on data mining techniques.
Classification results may be encoded as Forensic Lucid constructs as evidence in
investigations to support or disprove reasoning about investigative claims in incidents involving malware. Additionally, Forensic Lucid-based reasoners are planned to be integrated
into some of these tools as a part of the future work (Section 10.4).
5.1.4
Code Analysis for Investigations
Arguably, to the author’s knowledge MARFCAT (Section 5.4) in 2010 was the first time a
machine learning approach was applied to static code analysis for vulnerable/weak code
classification with the first results demonstrated during the SATE2010 workshop [284, 285,
347]. In the same year, a somewhat similar approach was independently presented [52] for
vulnerability classification and prediction using machine learning and SVMs, but working
with a different set of data [314].
Additional related work (of varying degrees of relevance or use) can be found below (this
list is not exhaustive) [287, 314]: A taxonomy of Linux kernel vulnerability solutions in terms
of patches and source code, as well as categories for both, is found in [297]. The core ideas
and principles behind the MARF pipeline and the testing methodology for various algorithms in
the pipeline adapted to this case are found in [270, 281], as well as in Section 5.2, as it was the
easiest implementation available to accomplish the task. There one can also find the majority
of the core options used to set the configuration for the pipeline in terms of the algorithms used.
A binary analysis using a machine learning approach for quick scans for files of known types
in a large collection of files is described in [290] as well as the NLP and machine learning for
NLP tasks in DEFT2010 [279, 283] with the corresponding DEFT2010App and its predecessor
for hand-written image processing WriterIdentApp [318]. Tlili’s 2009 PhD thesis covers topics on automatic detection of safety and security vulnerabilities in open source software [472].
Statistical analysis, ranking, approximation, dealing with uncertainty, and specification inference in static code analysis are found in the works of Engler’s team [215, 216, 217]. Kong
et al . further advance static analysis (using parsing, etc.) and specifications to eliminate
human specification from the static code analysis in [213]. Hanna et al. describe a synergy
between static and dynamic analysis for the detection of software security vulnerabilities
in [163] paving the way to unify the two analysis methods. Other researchers propose the
MEDUSA system for metamorphic malware dynamic analysis using API signatures in [330].
Some of the statistical NLP techniques we used are described at length in [248]. BitBlaze
(and its web counterpart, WebBlaze) are other recent tools that do fast static and dynamic
binary code analysis for vulnerabilities, developed at Berkeley [435, 436]. For wavelets, for
example, Li et al. [234] have shown wavelet transforms and k-means classification can be used
to identify communicating applications quickly on a network, which is relevant to the study of
code in text or binary form [314].
5.2
MARF
MARF (Modular A* Recognition Framework) as a framework (and its instance as a library)
has been covered in a number of works since its inception in 2002 [260, 268, 270, 283, 290, 465]
for various classification tasks, originally targeting audio applications, but eventually outgrowing that domain (hence the change from Audio to A* in the name). The research
into MARF has led to connections with other disciplines, such as the intensional
programming presented here with the design of MARFL, a MARF configuration specification and manipulation language (Appendix C); the design of the code analysis and network packet analysis applications MARFCAT and MARFPCAT presented further; forensic analysis of file
types [290]; and NLP aspects [281]. MARF has been integrated in various capacities with GIPSY
(Chapter 6) since circa 2005, and has had PoC designs of distributed [271, 296] and autonomic
versions [320, 494, 495, 496]. MARF's data structure declarations (as well as those of other
middleware [322]), as well as its applications, are a subject of encoding and export in Forensic
Lucid (Section 8.5.1.1) to serve as additional witness accounts in investigations.
5.2.1
MARF Overview
MARF is an open-source project that provides pattern recognition APIs with sample implementation for (un)supervised machine learning and classification, including biometric forensic
identification, written in Java [260, 268, 270, 465] by the author and collaborators in 2002.
As one of its roles, it serves as a testbed to verify common and novel algorithms for sample loading, preprocessing, feature extraction, training and classification stages. In this role
MARF provides researchers with a tool for the practical comparison of the algorithms in
a uniform environment and allows for dynamic module selection via reflection [142] based
on a wide array of configuration options supplied by MARF applications. Within a few years
MARF accumulated a fair number of implementations for each of the pipeline stages (cf. Figure 29, page 103) allowing comparative studies of algorithm combinations, studying their
behavior and other properties when used for various pattern recognition tasks. MARF, its
derivatives, and applications were also used beyond audio processing tasks, as in this work,
due to the generality of the design and implementation in [265, 271, 292] and several other
works [290, 319].
Figure 29: MARF’s pattern-recognition pipeline [272]
Some of the MARF’s architectural design influenced GIPSY ([264], Chapter 6) and
MARF's utility modules are likewise in use by GIPSY. Both the distributed version of MARF
(DMARF) and GIPSY were proposed together as case studies from the security [271] and self-forensics [321] standpoints.
5.2.2
MARF Architecture
The Modular A* Recognition Framework (MARF) [260, 268, 270, 465] is a Java framework,
and an open-source research platform and a collection of pattern recognition, signal processing, and natural language processing (NLP) algorithms written in Java and put into a
modular and extensible framework facilitating addition of new algorithms for use and experimentation by scientists. A MARF instance can run distributively [265] over a network,
run stand-alone, or may just act as a simple library in applications. MARF has a number of algorithms implemented for various pattern recognition and some signal processing
tasks [271, 322].
The backbone of MARF consists of pipeline stages that communicate with each other to
get the data they need in a chained manner. MARF’s pipeline of algorithm implementations
is illustrated in Figure 29, where the implemented algorithms are in white boxes, and the
stubs or in-progress algorithms are in gray. The pipeline consists of four basic stages: sample
loading, preprocessing, feature extraction, and training/classification [271, 322].
There are a number of applications that test MARF’s functionality and serve as examples of how to use or to test MARF’s modules. One of the most prominent applications
is SpeakerIdentApp—Text-Independent Speaker Identification (who, gender, accent, spoken language, etc.) [317]. Its derivative, FileTypeIdentApp, was used to employ MARF’s
capabilities for forensic analysis of file types [290] as opposed to [271, 322] the Unix file
utility [77, 78].
5.2.3
Pattern Recognition Pipeline
The conceptual pattern recognition pipeline design presented in Figure 29 depicts the core
of the data flow and transformation between the stages in MARF [260, 465]. Generally, the
classical pattern recognition process starts by loading a sample (e.g., an audio recording, text,
image file, pcap, or virtually any regular file), preprocessing it somehow (e.g., normalization
Figure 30: MARF’s pattern-recognition pipeline sequence diagram [290]
105
and filtering out noisy and “silent” data), then extracting the most prominent features, and,
finally either training the system such that it learns a new set of features of a given subject
or actually classifies what/who the subject is [290].
The outcome of training is either a collection of some form of feature vectors or their mean
or median clusters [270], called training sets, which are stored per every learned subject. The
outcome of classification is an instance of the ResultSet data structure, which is a sorted
collection of IDs (int) and their corresponding outcome values (double); the sorting is done
from most likely outcome to least likely. The most likely one is the ultimate outcome to be
interpreted by the application. Some of the details of this classification processing are
illustrated by the actual sequence of events and method calls within the main MARF module,
shown in Figure 30 [290].
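As a rough illustration of the ResultSet idea just described, the following minimal Java sketch keeps (ID, outcome) pairs sorted from most likely to least likely. It is not MARF's actual class; the names SimpleResultSet and Outcome are hypothetical, and it assumes a distance-style classifier in which a smaller outcome value means a more likely subject.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // A minimal stand-in for MARF's ResultSet: integer subject IDs paired with
    // double outcome values, kept sorted from most likely to least likely.
    public class SimpleResultSet {
        // One (ID, outcome) pair; for distance classifiers a smaller value is better.
        public static final class Outcome {
            public final int id;
            public final double value;
            Outcome(int id, double value) { this.id = id; this.value = value; }
        }

        private final List<Outcome> outcomes = new ArrayList<>();

        public void add(int id, double value) {
            outcomes.add(new Outcome(id, value));
            // Ascending by value: smallest distance (most likely) first.
            outcomes.sort(Comparator.comparingDouble(o -> o.value));
        }

        // The ultimate outcome to be interpreted by the application.
        public Outcome mostLikely() { return outcomes.get(0); }

        public static void main(String[] args) {
            SimpleResultSet rs = new SimpleResultSet();
            rs.add(7, 0.42);   // hypothetical subject IDs and distances
            rs.add(3, 0.13);
            rs.add(9, 1.05);
            System.out.println("Most likely subject ID: " + rs.mostLikely().id);
        }
    }

With a similarity-based classifier the sort order would simply be reversed (largest value first).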
5.2.4
Distributed MARF (DMARF)
DMARF [265] is based on the classical MARF whose pipeline stages were made into distributed nodes [322]. Specifically, the classical MARF presented earlier was extended [265]
to allow the stages of the pipeline to run as distributed nodes as well as their front-ends,
as shown in a high-level overview in Figure 31. The basic stages and the front-ends were
designed to support backup recovery and hot-swappable capabilities, but these were not implemented. They support communication over Java RMI [523], CORBA [446], and XML-RPC
Web Services [296, 447] only. Later, DMARF was further extended to allow management of its
nodes with SNMP [293] by implementing proxy SNMPv2 [165] agents and translating
some of the management information to DMARF’s “native” operations.
There is also an ongoing project on an intensional configuration scripting language, MARFL [272] (see
Appendix C), to script MARF tasks and applications and to allow them to be run distributively either using MARF’s own infrastructure or over a GIPSY network instance (Chapter 6).
Being distributed, DMARF has new data structures and data flow paths that are not covered
by the Forensic Lucid specification of the classical MARF, so we contribute an extension
in this work [322] in Section 8.5.1.1 and Appendix D.4.6.1.
Figure 31: The distributed MARF pipeline
5.3
fileType
The Unix file utility determines the types of regular files by examining, usually, their first
512 bytes, which often contain magic header information for binary files or common text
file fragments; otherwise, it defers to the OS-dependent stat() system call. It combines
heuristics with common file extensions to give the final classification result. While file is
standard, fast, and small, and its magic database is “serviceable” by expert users, recognizing
new file types, perhaps with much finer granularity, requires code and/or magic database
updates and a patch release from the core developers. The MARF-based fileType was
proposed in 2008 as an alternative file-like utility for determining file types with much
greater flexibility: it can learn new types on the user’s side, can be integrated into forensic
toolkits as a plug-in, and uses signal processing techniques to compute “spectral signatures”
of file types. What follows is an overview of the design of such a tool, called fileType,
based on MARF’s collection of algorithms, the selection of the best algorithm combination,
and the integration of the tool into a forensic toolkit with automatic machine learning of
new file types. Some of the advantages and disadvantages of this tool are compared with the
file utility in terms of various metrics [290]. It also served as a predecessor to the MARF
Forensic Data Sniffer case study. A similar file type analysis has recently (2012) been included
with Sourcefire’s SIM and FireSIGHT toolset to report or block certain file types.
5.3.1
Overview
fileType follows an approach that uses MARF’s collection of algorithms to determine file
types in various ways and to compare them, drawing on signal processing and NLP techniques, both
supervised and unsupervised machine learning, and various file format loaders. MARF and its
application SpeakerIdentApp [317] previously served as a proof-of-concept for biometric
forensic analysis of phone-quality audio recordings to classify the identities of speakers irrespective of what they say on the recordings, their gender, or their spoken accent [268, 270].
fileType adapts MARF’s pattern recognition pipeline, the SpeakerIdentApp application,
and file’s magic database to be used together in the resulting Java application
FileTypesIdentApp, with the shorthand invocation fileType [290].
MARF conveniently has a class, ByteArrayFileReader, that reads a file from a file system
or a URI (or from any Reader or InputStream, for that matter). fileType employs this class to
read the file data (either the first 512 bytes or the entire file, as options), and the values of the
byte array become features for classification (spectral or otherwise). It may then optionally apply
the regular signal pattern recognition techniques [270] of preprocessing and feature extraction
to remove unwanted noise and silence and to extract more discriminating features [290].
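A minimal Java sketch of this loading step follows, under the assumption that the unsigned values of the first 512 bytes are used directly as the feature vector. It does not use MARF's ByteArrayFileReader; the class and method names here are hypothetical.

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Simplified analogue of the feature-loading step described above: read up
    // to the first 512 bytes of a file and expose them as a double[] "feature
    // vector" of unsigned byte values.
    public class First512Features {
        static double[] load(String path, int maxBytes) throws IOException {
            byte[] buf = new byte[maxBytes];
            int read;
            try (InputStream in = Files.newInputStream(Paths.get(path))) {
                read = in.readNBytes(buf, 0, maxBytes); // may be < maxBytes for short files
            }
            double[] features = new double[read];
            for (int i = 0; i < read; i++) {
                features[i] = buf[i] & 0xFF; // unsigned 0..255
            }
            return features;
        }

        public static void main(String[] args) throws IOException {
            double[] v = load(args[0], 512);
            System.out.println("Loaded " + v.length + " byte features from " + args[0]);
        }
    }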
The FileTypesIdentApp application, a.k.a. fileType, is capable of understanding some
of file’s options [77], and work is under way to enable experimentation with file’s
magic database. fileType has its own database that it can augment throughout its lifetime
automatically using machine learning techniques. The statistics of the algorithm combinations tried and their recognition accuracy performance along with the run-time are stored in
a comma-separated values (CSV) file, per each major technique [290].
Table 2: File-type identification top 10 results, bigrams ([290])
Guess  Rank  Configuration                           GOOD  BAD  Precision, %
1st    1     -wav -raw -lpc -cheb                     147   54         73.13
1st    1     -wav -silence -noise -raw -lpc -cheb     147   54         73.13
1st    1     -wav -noise -raw -lpc -cheb              147   54         73.13
1st    1     -wav -norm -lpc -cheb                    147   54         73.13
1st    1     -wav -silence -raw -lpc -cheb            147   54         73.13
1st    2     -wav -silence -norm -fft -cheb           129   72         64.18
1st    3     -wav -bandstop -fft -cheb                125   76         62.19
1st    3     -wav -silence -noise -norm -fft -cheb    125   76         62.19
1st    3     -wav -silence -low -fft -cheb            125   76         62.19
1st    4     -wav -silence -norm -lpc -cheb           124   77         61.69
Table 3: File-type identification top 10 results, 2nd best, bigrams ([290])
Guess  Rank  Configuration                           GOOD  BAD  Precision, %
2nd    1     -wav -raw -lpc -cheb                     166   35         82.59
2nd    1     -wav -silence -noise -raw -lpc -cheb     166   35         82.59
2nd    1     -wav -noise -raw -lpc -cheb              166   35         82.59
2nd    1     -wav -norm -lpc -cheb                    166   35         82.59
2nd    1     -wav -silence -raw -lpc -cheb            166   35         82.59
2nd    2     -wav -silence -norm -fft -cheb           137   64         68.16
2nd    3     -wav -bandstop -fft -cheb                130   71         64.68
2nd    3     -wav -silence -noise -norm -fft -cheb    140   61         69.65
2nd    3     -wav -silence -low -fft -cheb            140   61         69.65
2nd    4     -wav -silence -norm -lpc -cheb           176   25         87.56
Table 4: File-type identification top 10 results, bigrams, per file type ([290])
Guess  Rank  File type                                                         GOOD  BAD  Precision, %
1st    1     Mach-O filetype=10 i386                                             64    0        100.00
1st    2     HTML document text                                                  64    0        100.00
1st    3     TIFF image data; big-endian                                         64    0        100.00
1st    4     data                                                                64    0        100.00
1st    5     ASCII c program text; with very long lines                          64    0        100.00
1st    6     Rich Text Format data; version 1; Apple Macintosh                  128    0        100.00
1st    7     ASCII English text                                                  64    0        100.00
1st    8     a /sw/bin/ocamlrun script text executable                          516   60         89.58
1st    9     perl script text executable                                        832  192         81.25
1st    10    NeXT/Apple typedstream data; big endian; version 4; system 1000    255   65         79.69

5.3.2
Sample Results
In [290], an experiment was conducted using the MARF-based FileTypeIdentApp for bulk
forensic analysis of file types with signal processing techniques. Some of the results were quite
encouraging in terms of precision and recall; the top 10 extracts of the first- and second-best
statistics are in Table 2 and Table 3, and per-file-type statistics are in Table 4 [278].
5.3.3
Limitations and Drawbacks
In the current implementation of the fileType there are several drawbacks and limitations
(that were planned to be eliminated or reduced as a part of the future work) [290]. These
are listed in part as follows:
• The machine learning technique presented here and the tool are effective with regular
files only, and are not suited for special files and devices that are file-system specific.
For those, the same approach file takes using the stat() system call [483] can be used
(but this is no longer machine learning, and effectively defers such tasks to file, which
does them better) [290].
• Training the system with noisy data samples can deteriorate the recognition accuracy
of the tool and may lead to over-fitting [162]. This can happen accidentally (locally)
or maliciously (system-wide, CAF) by supplying a file of one type for training while
declaring it as another. This is a general problem with any machine-learning tools and
applications. A partial way of dealing with it is to validate each incoming training
sample by classifying it first and comparing the result with the class specified for
training; in the case of a mismatch, a potential problem is reported to the user, and
in the safe mode the tool refuses to train on the sample (a sketch of such a check
follows this list). This prevents accidental training on wrong data that would cause
misclassification, but only partly solves the problem, as the system can still be cheated
at the beginning, when a new file type is inserted for the first time and is mistrained
on [290].
• To be seriously considered in a real investigation toolset and environment, legally, the
tool has to be proven correct in its design and implementation, as do the components
it relies on, such as MARF, e.g., by using JML [55, 229, 230] and Isabelle [372] for this
task later in the project [290].
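The sketch below illustrates the validate-before-training idea from the second item above. The classify() and train() calls are hypothetical placeholders, not fileType's real API; only the control flow (warn on mismatch, refuse in safe mode) reflects the mitigation described.

    // Guarded training: classify a new sample first and compare against the
    // class the user declared; warn or, in safe mode, refuse on mismatch.
    public class GuardedTraining {
        static String classify(byte[] sample) { return "text/html"; } // placeholder
        static void train(byte[] sample, String declaredClass) { /* placeholder */ }

        static boolean guardedTrain(byte[] sample, String declaredClass, boolean safeMode) {
            String predicted = classify(sample);
            if (!predicted.equals(declaredClass)) {
                System.err.println("Warning: sample classified as " + predicted
                        + " but declared as " + declaredClass);
                if (safeMode) {
                    return false; // refuse to train on a suspicious sample
                }
            }
            train(sample, declaredClass);
            return true;
        }

        public static void main(String[] args) {
            byte[] sample = "<html></html>".getBytes();
            boolean trained = guardedTrain(sample, "image/tiff", true);
            System.out.println("Trained: " + trained);
        }
    }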
5.4
MARFCAT
We elaborate on the details of the methodology, and the corresponding results, of applying
machine learning techniques along with signal processing and NLP to static source and
binary code analysis in search of, and investigation on, program weaknesses and
vulnerabilities [287]. Here we review the tool, MARFCAT, a MARF-based Code Analysis
Tool [285], first exhibited at the Static Analysis Tool Exposition (SATE) workshop in
2010 [347]. It machine-learns from Common Vulnerabilities and Exposures (CVE)-based
vulnerable cases as well as synthetic CWE-based cases in order to verify the fixed versions,
as well as non-CVE-based cases from projects written in various programming languages. The
second iteration of this work was prepared for SATE IV [348] and used its updated data set [314].
5.4.1
Overview
We review our machine learning approach to static code analysis and fingerprinting for weaknesses related to security, software engineering, and other concerns, using the open-source MARF
framework and the MARFCAT application based on it, for the data sets of NIST's SATE 2010 and SATE
IV [348] static analysis tool exposition workshops, which include additional test
cases, among them large synthetic ones [287, 314]. The machine learning approach proved
fast and accurate for aiding the detection of weak or vulnerable code, whether source or
binary, on different platforms. We use signal processing and NLP techniques in our
approach to accomplish the identification and classification tasks. MARFCAT's design was,
from the beginning in 2010, made independent of the language being analyzed, be it source
code, bytecode, or binary. We also evaluated additional algorithms that were used to process the data [314]. This work is important in digital investigations, which is why MARF
itself and MARFCAT were designed to export evidence in the Forensic Lucid format
(Section 8.5.1.1, page 234, [314]).
In 2010, at the core of the workshop there were C/C++-language and Java language
tracks comprising CVE-selected cases as well as stand-alone cases. The CVE-selected cases
had a vulnerable version of the software in question with a list of CVEs attached to it, as well
as the most known fixed version within the minor revision number. One of the goals for
the CVE-based cases is to detect the known weaknesses outlined in CVEs using static code
analysis and also to verify if they were really fixed in the “fixed version” [287, 347].
The test cases at the time included CVE-selected: C: Wireshark 1.2.0 (vulnerable) and
Wireshark 1.2.9 (fixed); C++: Chrome 5.0.375.54 (vulnerable) and Chrome 5.0.375.70
(fixed); Java: Tomcat 5.5.13 (vulnerable) and Tomcat 5.5.29 (fixed), and non-CVE selected:
C: Dovecot; Java: Pebble 2.5-M2. They were later expanded to other cases and newer versions and a PHP test case was added (Wordpress). For more information on the data sets
see Section 9.1.1.1.1, page 245. The open-source MARFCAT tool itself [285] was developed
to machine-learn from the CVE-based vulnerable cases and verify the fixed versions as well
as non-CVE based cases from similar programming languages [287].
At the time, the presented machine learning approach was novel and highly beneficial
in static analysis and routine testing of any kind of code, including source code and binary
deployments, owing to its efficiency in terms of speed, relatively high precision, and robustness,
and as a complementary tool to approaches that do in-depth semantic analysis, by
prioritizing those tools' targets. All these techniques can be used in an automated manner
in diverse distributed and scalable demand-driven environments (e.g., GIPSY, Chapter 6) in
order to ensure code safety, especially for mission-critical software in all kinds of
systems. The approach uses spectral, acoustic, and language models to learn and classify such code [314].
5.4.2
Core Principles
The core methodology principles include:
• Machine learning and dynamic programming
• Spectral and signal processing techniques
• NLP n-gram and smoothing techniques (add-δ, Witten-Bell, MLE, etc.)
MARFCAT uses signal processing techniques (i.e., no syntactic parsing or other work at
the syntax and semantics levels). MARFCAT treats the source code as a “signal”, equivalent
to binary, where each n-gram (n = 2 presently, i.e., two consecutive characters or, more
generally, bytes) is used to construct a sample amplitude value in the signal. In the NLP
pipeline, it similarly treats the source code as “characters”, where each n-gram (n = 1..3) is
used to construct the language model [314].
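To make the "code as signal" idea concrete, the following hedged Java sketch slides a window of two consecutive bytes (bi-grams) over a file and combines each pair into one amplitude value. The exact encoding MARFCAT uses may differ; packing two bytes into a 16-bit value is only one plausible choice, and the class name is hypothetical.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Treat a source or binary file as a "signal": each byte bi-gram becomes
    // one amplitude sample.
    public class CodeAsSignal {
        static double[] toAmplitudes(byte[] data) {
            if (data.length < 2) {
                return new double[0];
            }
            double[] signal = new double[data.length - 1];
            for (int i = 0; i + 1 < data.length; i++) {
                int hi = data[i] & 0xFF;
                int lo = data[i + 1] & 0xFF;
                signal[i] = (hi << 8) | lo; // bi-gram -> 16-bit "amplitude"
            }
            return signal;
        }

        public static void main(String[] args) throws IOException {
            byte[] code = Files.readAllBytes(Paths.get(args[0]));
            double[] signal = toAmplitudes(code);
            System.out.println(args[0] + " -> " + signal.length + " amplitude samples");
        }
    }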
The MARFCAT system is shown examples of files with weaknesses, and it learns
them by computing spectral signatures using signal processing techniques or various language
models (depending on options) from the CVE-selected test cases. When some of the mentioned
techniques are applied (e.g., filters, silence/noise removal, other preprocessing and feature
extraction techniques), the line number information is lost as a part of this process [314].
When testing, MARFCAT either computes how similar or distant each file is from the
known trained-on weakness-laden files, or, in the NLP pipeline, compares the trained language models with the unseen language fragments. In part, the methodology can be seen,
approximately, as analogous to how fuzzy signature-based “antivirus” or IDS software systems detect bad signatures, except that, with a large number of machine learning and signal
processing algorithms and fuzzy matching, we test to find out which combination gives the
highest precision and best run-time [314].
At present, however, MARFCAT processes whole files instead of parsing the finer-grain
details of patches and weak code fragments. This aspect lowers the precision, but makes it
relatively fast to scan all the code files [314].
5.4.3
CVEs and CWEs – the Knowledge Base
The CVE-selected test cases serve as a source of the knowledge base to gather information
on what known weak code “looks like” in signal form [347]; this is stored as spectral
signatures clustered per CVE or CWE (Common Weakness Enumeration). The large synthetic
code base with CWEs introduced by the SAMATE team serves as a part of the knowledge
base learning as well [314]. Thus, we:
• Teach the system from the CVE-based cases
• Test on the CVE-based cases
• Test on the non-CVE-based cases
For synthetic cases, similarly:
• Teach the system from the CWE-based synthetic cases
• Test on the CWE-based synthetic cases
• Test on the CVE and non-CVE-based cases for CWEs from synthetic cases
We created XML index files in a format similar to that of SATE to index all the
files of the test case under study. After the initial index generation, the CVE-based cases are
manually annotated from the NVD database before being fed to the system [314].
5.4.4
Categories for Machine Learning
The two primary groups of classes MARFCAT is trained and tested on are, naturally,
the CVEs [340, 341] and CWEs [484]. The advantage of CVEs is their precision: the
associated meta knowledge from [340, 341] can all be aggregated and used to scan successive
versions of the same software or derived products (e.g., WebKit in multiple browsers). CVEs
are also generally uniquely mapped to CWEs. The CWEs as a primary class, however, offer
broader categories of the kinds of weaknesses there may be, but are not yet well assigned and
associated with CVEs, so we observe a loss of precision. Since there is no syntactic parsing,
MARFCAT generally cannot deduce weakness types or even simple-looking aspects like the line
numbers where the weak code may be; it therefore resorts to secondary categories, usually
tied into the first two, which we also machine-learn along with them, such as issue types
(sink, path, fix) and line numbers [314].
5.4.5
Algorithms
The methodology includes systematic testing and selection of the best combination(s) of the
available algorithm implementations (a tradeoff between speed and accuracy). Subsequent
runs then use only the selected algorithms for further testing. The methodology also covers
the cases where the knowledge base for the same code type is learned from multiple
sources (e.g., several independent C test cases) [314].
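A hedged sketch of this selection step follows: it enumerates (preprocessing, feature extraction, classifier) option triples and keeps the one with the highest score. The option names mirror MARF-style flags, but the evaluator is a placeholder, not MARFCAT's actual evaluation code.

    import java.util.ArrayList;
    import java.util.List;

    // Enumerate option combinations and keep the most precise one.
    public class BestComboSearch {
        static double evaluate(String prep, String feat, String clas) {
            // Placeholder: a real run would train and test with these options
            // and return the measured precision.
            return Math.random();
        }

        public static void main(String[] args) {
            String[] preps = { "-raw", "-norm", "-low", "-bandstop" };
            String[] feats = { "-fft", "-lpc", "-minmax" };
            String[] clsfs = { "-cheb", "-diff", "-cos", "-eucl" };

            double best = -1.0;
            List<String> bestCombo = new ArrayList<>();
            for (String p : preps)
                for (String f : feats)
                    for (String c : clsfs) {
                        double precision = evaluate(p, f, c);
                        if (precision > best) {
                            best = precision;
                            bestCombo = List.of(p, f, c);
                        }
                    }
            System.out.println("Best combination " + bestCombo + " precision=" + best);
        }
    }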
5.4.5.1
Signal Pipeline
Algorithmically speaking, the steps performed in the machine-learning signal-based
analysis are listed in Algorithm 1. The specific algorithms come from the classical literature and other
sources and are detailed in [270], related works therein, and Section 5.2. In the context
of MARFCAT, loading typically refers to the interpretation of the files being scanned in
terms of bytes forming amplitude values in a signal (at, for example, an 8kHz or 16kHz sampling frequency)
using a uni-gram, bi-gram, or tri-gram approach. The preprocessing may then be
none at all (“raw”, the fastest), normalization, traditional frequency-domain filters,
wavelet-based filters, etc. Feature extraction involves reducing an arbitrary-length signal to
a fixed-length feature vector of what are thought to be the most relevant features in the signal
(e.g., spectral features via FFT or LPC, min-max amplitudes, etc.). The classification stage
then either trains by learning the incoming feature vectors (usually as k-means
clusters, median clusters, or plain feature vector collections, combined with, for example,
neural network training) or tests them against the previously learned models [314] (cf.
Section 5.2).
// Construct an index mapping CVEs to files and locations within files
Compile meta-XML index files from the CVE reports (line numbers, CVE, CWE, fragment size, etc.).
    Partly done by a Perl script and partly annotated manually;
foreach source code base, binary code base do
    // Presently in these experiments we use simple mean clusters of feature vectors or unigram
    // language models per default MARF specification [270, 465]
    Train the system based on the meta index files to build the knowledge base (learn);
    begin
        Load (interpret as a wave signal or n-gram);
        Preprocess (none, FFT-filters, wavelets, normalization, etc.);
        Extract features (FFT, LPC, min-max, etc.);
        Train (Similarity, Distance, Neural Network, etc.);
    end
    Test on the training data for the same case (e.g., Tomcat 5.5.13 on Tomcat 5.5.13) with the same
        annotations to make sure the results make sense by being high and deduce the best algorithm
        combinations for the task;
    begin
        Load (same);
        Preprocess (same);
        Extract features (same);
        Classify (compare to the trained k-means, or medians, or language models);
        Report;
    end
    Similarly test on the testing data for the same case (e.g., Tomcat 5.5.13 on Tomcat 5.5.13)
        without the annotations as a sanity check;
    Test on the testing data for the fixed case of the same software (e.g., Tomcat 5.5.13 on Tomcat 5.5.33);
    Test on the testing data for the general non-CVE case (e.g., Tomcat 5.5.13 on Pebble or synthetic);
end

Algorithm 1: Machine-learning-based static code analysis testing algorithm using the signal pipeline [314]
5.4.5.2
NLP Pipeline
The steps performed in the NLP and machine-learning based analyses are presented
in Algorithm 2. The specific algorithms again come from the classical literature (e.g., [248])
and are detailed in [281] and the related works. To be more specific for this background
overview, loading typically refers to the interpretation of the files being scanned in terms
of n-grams (a uni-gram, bi-gram, or tri-gram approach) with the associated statistical smoothing
algorithms, the results of which (a vector, or a 2D or 3D matrix) are stored [314].
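The following small Java sketch illustrates the idea of an n-gram language model with add-delta smoothing on character bi-grams. MARF's actual estimators and storage formats differ; the class name and the assumed 256-symbol vocabulary are illustrative only.

    import java.util.HashMap;
    import java.util.Map;

    // Character bi-gram model with add-delta smoothing: count bi-grams during
    // training and return a smoothed conditional probability at test time.
    public class BigramModel {
        private final Map<String, Integer> bigramCounts = new HashMap<>();
        private final Map<Character, Integer> unigramCounts = new HashMap<>();
        private final double delta;       // add-delta smoothing constant
        private static final int V = 256; // assumed vocabulary size (byte values)

        BigramModel(double delta) { this.delta = delta; }

        void train(String text) {
            for (int i = 0; i + 1 < text.length(); i++) {
                char a = text.charAt(i), b = text.charAt(i + 1);
                unigramCounts.merge(a, 1, Integer::sum);
                bigramCounts.merge("" + a + b, 1, Integer::sum);
            }
        }

        // P(b | a) with add-delta smoothing.
        double prob(char a, char b) {
            int ab = bigramCounts.getOrDefault("" + a + b, 0);
            int aCount = unigramCounts.getOrDefault(a, 0);
            return (ab + delta) / (aCount + delta * V);
        }

        public static void main(String[] args) {
            BigramModel m = new BigramModel(0.5);
            m.train("strcpy(buf, input);"); // toy training fragment
            System.out.println("P('u'|'b') = " + m.prob('b', 'u'));
        }
    }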
Compile meta-XML index files from the CVE reports (line numbers, CVE, CWE, fragment size, etc.).
    Partly done by a Perl script and partly annotated manually;
foreach source code base, binary code base do
    // Presently these experiments use simple unigram language models per default MARF specification [281]
    Train the system based on the meta index files to build the knowledge base (learn);
    begin
        Load (n-gram);
        Train (statistical smoothing estimators);
    end
    Test on the training data for the same case (e.g., Tomcat 5.5.13 on Tomcat 5.5.13) with the same
        annotations to make sure the results make sense by being high and deduce the best algorithm
        combinations for the task;
    begin
        Load (same);
        Classify (compare to the trained language models);
        Report;
    end
    Similarly test on the testing data for the same case (e.g., Tomcat 5.5.13 on Tomcat 5.5.13)
        without the annotations as a sanity check;
    Test on the testing data for the fixed case of the same software (e.g., Tomcat 5.5.13 on Tomcat 5.5.33);
    Test on the testing data for the general non-CVE case (e.g., Tomcat 5.5.13 on Pebble or synthetic);
end

Algorithm 2: Machine-learning-based static code analysis testing algorithm using the NLP pipeline [314]

5.4.6
Binary and Bytecode Analysis

MARFCAT also does preliminary Java bytecode and compiled C code static analysis and
produces results using the same signal processing and NLP techniques, combined with machine
learning and data mining. At this writing, the NIST SAMATE synthetic reference data set
for Java and C was used. The algorithms presented in Section 5.4.5 are used as-is in this
scenario, with modifications to the index files. The modifications include removal of the
line numbers, source code fragments, and lines-of-text counts (which are largely meaningless
here and are ignored). The byte counts may be recomputed, and capturing a byte offset instead of a
line number was planned. The filenames of the index files were updated to include -bin in
them to differentiate them from the original index files describing the source code [314].
5.4.7
Wavelets
During MARFCAT design and development, as a part of a collaboration project, wavelet-based signal processing for noise filtering is being introduced into MARF to
compare it to no filtering and to FFT-based classical filtering. It has also been shown in [234] that
wavelet-aided filtering can be used as a fast preprocessing method for network application
identification and traffic analysis [236] as well [314]. That implementation relies in part on
the algorithm and methodology found in [1, 211, 212, 420], and at this point only a separating
1D discrete wavelet transform (SDWT) has been tested [314].
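To make the filtering idea above concrete, the following minimal Java sketch performs one level of a Haar wavelet decomposition. This is a much simpler transform than the cited SDWT machinery and is illustrative only, not MARF's implementation.

    // One-level Haar wavelet decomposition of an even-length signal into
    // approximation (low-pass) and detail (high-pass) halves.
    public class HaarStep {
        static double[] haar(double[] s) {
            int half = s.length / 2;
            double[] out = new double[half * 2];
            for (int i = 0; i < half; i++) {
                out[i] = (s[2 * i] + s[2 * i + 1]) / Math.sqrt(2.0);        // low-pass
                out[half + i] = (s[2 * i] - s[2 * i + 1]) / Math.sqrt(2.0); // high-pass
            }
            return out;
        }

        public static void main(String[] args) {
            double[] signal = { 4, 2, 5, 5, 1, 3, 7, 9 };
            double[] t = haar(signal);
            System.out.print("approx:");
            for (int i = 0; i < 4; i++) System.out.printf(" %.2f", t[i]);
            System.out.print("  detail:");
            for (int i = 4; i < 8; i++) System.out.printf(" %.2f", t[i]);
            System.out.println();
        }
    }

Noise filtering then amounts to attenuating or discarding the detail coefficients before reconstruction.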
5.4.8
Demand-Driven Distributed Evaluation with GIPSY
To enhance the scalability of the approach, we convert the MARFCAT stand-alone application to a distributed one using an eductive (demand-driven) model of computation implemented in the General Intensional Programming System (GIPSY)'s multi-tier run-time
system [160, 191, 362, 501], which can be executed distributively using Jini (Apache River)
or JMS [193] (see Chapter 6) [314].
To adapt the application to the GIPSY's multi-tier architecture, we create problem-specific generator and worker tiers (PS-DGT and PS-DWT, respectively) for the MARFCAT
application. The generator(s) produce demands for what needs to be computed in the form
of a file (a source code file or a compiled binary) to be evaluated, and deposit such demands
into a store managed by the demand store tier (DST) as pending. Workers pick up pending
demands from the store and process them (all tiers run on multiple nodes) using a
traditional MARFCAT instance. Once the result (a Warning instance) is computed, the PS-DWT deposits it back into the store with the status set to computed. The generator “harvests”
all computed results (warnings) and produces the final report for a test case. Multiple test
cases can be evaluated simultaneously, or a single case can be evaluated distributively. This
approach helps to cope with large amounts of data and avoids recomputing warnings that
have already been computed and cached in the DST [314]. Rabah also contributed a graphical
GMT [393]; a MARFCAT configuration is rendered in Figure 38 as an example.
In this setup, a demand represents a file (a path) to scan (actually an instance of the
FileItem object), which is deposited into the DST. The PS-DWT picks it up, checks
the file against the training set that is already there, and returns a ResultSet object back into the
DST under the same demand signature that was used to deposit the path to scan. The result
set is sorted from the most likely to the least likely, with a value corresponding to the distance
or similarity. The PS-DGT picks up the result sets, does the final output aggregation, and
saves the report in one of the desired report formats (see Section 5.4.9), picking up the top two
results from each result set and testing them against a threshold to accept or reject the file (path)
as vulnerable or not. This effectively splits the monolithic MARFCAT application into two
halves, distributing the work to be done, where the classification half is arbitrarily parallel [314].
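The following much-simplified, single-process Java sketch mimics the generator/worker split just described: a generator deposits file-path "demands" into a store, a worker picks them up, classifies them, and deposits results back. The real DST is a distributed middleware rather than a local queue, and the classify() routine here is a hypothetical stand-in for a MARFCAT instance.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Local-queue sketch of the PS-DGT / DST / PS-DWT interaction.
    public class DemandSketch {
        static final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
        static final BlockingQueue<String> computed = new LinkedBlockingQueue<>();

        // Stand-in for a per-file MARFCAT classification (hypothetical).
        static String classify(String path) {
            return path + ": no known CVE signature matched";
        }

        public static void main(String[] args) throws InterruptedException {
            String[] files = { "src/a.c", "src/b.c", "src/c.c" }; // demo paths

            Thread worker = new Thread(() -> {
                try {
                    for (int i = 0; i < 3; i++) {
                        String path = pending.take();  // pick up a pending demand
                        computed.put(classify(path));  // deposit the result
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.start();

            for (String f : files) pending.put(f);     // generator deposits demands
            for (int i = 0; i < files.length; i++) {   // generator harvests results
                System.out.println(computed.take());
            }
            worker.join();
        }
    }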
Simplifying assumptions:
• Test case data and training sets are present on each node (physical or virtual) in
advance (via a copy or a CIFS or NFS volume), so no demand-driven training occurs,
only classification
• The demand is assumed to contain only the file information to be examined (FileItem)
• PS-DWT assumes a single pre-defined configuration, i.e., the configuration for MARFCAT's options is not a part of the demand
• PS-DWT assumes CVE or CWE testing based on its local settings and not via the
configuration in a demand
5.4.9
Export and Encoding
5.4.9.1
SATE
By default, MARFCAT produces the report data in the SATE XML format, according to the
SATE IV requirements. In this iteration, other formats are being considered and realized.
To enable multiple output formats, the MARFCAT report generation data structures were
adapted to case-based output [314].
5.4.9.2
Forensic Lucid
MARFCAT gained initial support for exporting in Forensic Lucid, the core topic of this thesis (Chapter 7).
Following the data export in Forensic Lucid in the preceding work [269, 304, 310], we
use it as a format for evidential processing of the results produced by MARFCAT.
Chapter 7 provides details of the language; it will suffice to mention here that the report
generated by MARFCAT in Forensic Lucid is a collection of warnings encoded as observations
with a hierarchical notion of nested context carrying warning and location information. These
form an evidential statement in Forensic Lucid (see, e.g., [314, Appendix]). An
example scenario where such evidence, compiled via a MARFCAT Forensic Lucid report,
would be used is in web-based applications and web browser-based incident investigations of fraud,
XSS, buffer overflows, etc., linking CVE/CWE-based evidence analysis of the code's (binary
or source) security bugs with the associated web-based malware propagation or attacks, to
provide possible events where specific attacks can be traced back to the specific security
vulnerabilities [314].
5.4.10
Sample Results
This section illustrates some sample classification results on a few sample test cases from the
SATE workshop.
5.4.10.1
Chrome 5.0.375.54
The CVE-based testing results for this version, Chrome 5.0.375.54, are in Table 5. The results are only as good
as the training data given; if there are mistakes in the data selection, then the results will
have corresponding mistakes as well [287].
Table 5: Sample CVE classification stats for Chrome 5.0.375.54 [287]
guess  run  algorithms                      good  bad  %
1st    1    -nopreprep -raw -fft -eucl        10    1   90.91
1st    2    -nopreprep -raw -fft -cos         10    1   90.91
1st    3    -nopreprep -raw -fft -diff        10    1   90.91
1st    4    -nopreprep -raw -fft -cheb        10    1   90.91
1st    5    -nopreprep -raw -fft -mink         9    2   81.82
1st    6    -nopreprep -raw -fft -hamming      9    2   81.82
2nd    1    -nopreprep -raw -fft -eucl        11    0  100.00
2nd    2    -nopreprep -raw -fft -cos         11    0  100.00
2nd    3    -nopreprep -raw -fft -diff        11    0  100.00
2nd    4    -nopreprep -raw -fft -cheb        11    0  100.00
2nd    5    -nopreprep -raw -fft -mink        10    1   90.91
2nd    6    -nopreprep -raw -fft -hamming     10    1   90.91

guess  run  class          good  bad  %
1st    1    CVE-2010-2301     6    0  100.00
1st    2    CVE-2010-2300     6    0  100.00
1st    3    CVE-2010-2299     6    0  100.00
1st    4    CVE-2010-2298     6    0  100.00
1st    5    CVE-2010-2297     6    0  100.00
1st    6    CVE-2010-2304     6    0  100.00
1st    7    CVE-2010-2303     6    0  100.00
1st    8    CVE-2010-2295    10    2   83.33
1st    9    CVE-2010-2302     6    6   50.00
2nd    1    CVE-2010-2301     6    0  100.00
2nd    2    CVE-2010-2300     6    0  100.00
2nd    3    CVE-2010-2299     6    0  100.00
2nd    4    CVE-2010-2298     6    0  100.00
2nd    5    CVE-2010-2297     6    0  100.00
2nd    6    CVE-2010-2304     6    0  100.00
2nd    7    CVE-2010-2303     6    0  100.00
2nd    8    CVE-2010-2295    10    2   83.33
2nd    9    CVE-2010-2302    12    0  100.00
5.4.10.2
Tomcat 5.5.13
This example of a MARFCAT classification run represents CVE-based testing on training
for Tomcat 5.5.13. Classifiers corresponding to -cheb (Chebyshev distance) and -diff (Diff
distance) continue to dominate as in the other test cases [287]. These CVE-based results are
summarized in Table 6 [287].
Table 6: Sample CVE stats for Tomcat 5.5.13 [287]
guess  run  algorithms                      good  bad  %
1st    1    -nopreprep -raw -fft -diff        36    7  83.72
1st    2    -nopreprep -raw -fft -cheb        36    7  83.72
1st    3    -nopreprep -raw -fft -cos         37    9  80.43
1st    4    -nopreprep -raw -fft -eucl        34    9  79.07
1st    5    -nopreprep -raw -fft -mink        28   15  65.12
1st    6    -nopreprep -raw -fft -hamming     26   17  60.47
2nd    1    -nopreprep -raw -fft -diff        40    3  93.02
2nd    2    -nopreprep -raw -fft -cheb        40    3  93.02
2nd    3    -nopreprep -raw -fft -cos         40    6  86.96
2nd    4    -nopreprep -raw -fft -eucl        36    7  83.72
2nd    5    -nopreprep -raw -fft -mink        31   12  72.09
2nd    6    -nopreprep -raw -fft -hamming     29   14  67.44

guess  run  class          good  bad  %
1st    1    CVE-2006-7197     6    0  100.00
1st    2    CVE-2006-7196     6    0  100.00
1st    3    CVE-2006-7195     6    0  100.00
1st    4    CVE-2009-0033     6    0  100.00
1st    5    CVE-2007-3386     6    0  100.00
1st    6    CVE-2009-2901     3    0  100.00
1st    7    CVE-2007-3385     6    0  100.00
1st    8    CVE-2008-2938     6    0  100.00
1st    9    CVE-2007-3382     6    0  100.00
1st    10   CVE-2007-5461     6    0  100.00
1st    11   CVE-2007-6286     6    0  100.00
1st    12   CVE-2007-1858     6    0  100.00
1st    13   CVE-2008-0128     6    0  100.00
1st    14   CVE-2007-2450     6    0  100.00
1st    15   CVE-2009-3548     6    0  100.00
1st    16   CVE-2009-0580     6    0  100.00
1st    17   CVE-2007-1355     6    0  100.00
1st    18   CVE-2008-2370     6    0  100.00
1st    19   CVE-2008-4308     6    0  100.00
1st    20   CVE-2007-5342     6    0  100.00
1st    21   CVE-2008-5515    19    5   79.17
1st    22   CVE-2009-0783    11    4   73.33
1st    23   CVE-2008-1232    13    5   72.22
1st    24   CVE-2008-5519     6    6   50.00
1st    25   CVE-2007-5333     6    6   50.00
1st    26   CVE-2008-1947     6    6   50.00
1st    27   CVE-2009-0781     6    6   50.00
1st    28   CVE-2007-0450     5    7   41.67
1st    29   CVE-2007-2449     6   12   33.33
1st    30   CVE-2009-2693     2    6   25.00
1st    31   CVE-2009-2902     0    1    0.00
5.5
MARFPCAT
As a branch of the project on fingerprinting network malware [49] by analyzing pcap traces,
MARFPCAT (the MARF-based PCap Analysis Tool) [289] was created as an
extension of the latest revision of MARFCAT [314]. Specifically, to improve detection and
classification of malware in network traffic or otherwise, we employ a fast MARF-based
machine learning approach to static pcap analysis, fingerprinting, and subsequent
investigation. Similarly to the other tools, MARFPCAT is first trained on the known malware
pcap data and then the detection precision is measured. We then test it on data unseen
during training, but known to the investigator, and we select the best available machine
learning algorithm combination to use in subsequent investigations. MARFPCAT, like
MARFCAT, has PS-DWT and PS-DGT backends to run over a GIPSY network, and Forensic
Lucid export capability. In Section 10.4, page 284, it is considered for use as an evidence
feed tool for network forensics investigations of malware and scanning.
5.5.1
Overview
The MARFPCAT work elaborates on the details of the earlier methodology, and the corresponding results, of applying machine learning techniques along with signal processing and NLP to network packet analysis in search of malicious code in packet
capture (pcap) data. Most of the ideas in Section 5.4 [287] are still applicable here, where
the same approach was used to machine-learn, detect, and classify vulnerable or weak code
quickly and with relatively high precision.
We show the system examples of pcap files with malware, and MARFPCAT learns
them by computing spectral signatures using signal processing techniques. When we test,
we compute how similar or distant each file is from the known trained-on malware-laden
files. At present, however, we look at whole pcap files. This aspect lowers the
precision, but makes it fast to scan all the files.
MARFPCAT was first designed to use JNetPcap, a Java wrapper of libpcap, as one of the
loaders to extract headers and other packet data structure items in order to refine the classification
precision when not using whole-file training and classification. This work was further
refined in [49] by Boukhtouta et al. For comparative studies with this work, MARF added
wrapper plug-ins to make Weka's classifiers available to the MARF pipeline and MARF
applications.
5.5.2
The Knowledge Base
The GFI malware database with known malware, the reports, etc. serves as a knowledge
base to machine-learn from in this experiment. Thus, we primarily:
• Teach the system from the known cases of malware from their pcap data
• Test on the known cases
• Test on the unseen cases
5.5.3
Categories for Machine Learning
The primary category is the malware class, e.g., “Virus1”, “Trojan2”, etc., which are internally enumerated. The known data are indexed via a Perl script creating an XML file, and
MARFPCAT uses such files for training and testing.
5.5.4
Sample Results
Some classification results follow (Table 7, Table 9, and Table 11) using various algorithm
combinations including wavelets. The precision results are designed to be assigned to the
credibility (confidence) weights w encoded in Forensic Lucid observations.
5.6
Summary
Of particular interest to this thesis are the results that are supplied as evidence encoded
in observations and observation sequences in Forensic Lucid with the precision/confidence
value assigned to the credibility value of each observation. We review the current results of
this experimental work, its current shortcomings, advantages, and practical implications.
Table 7: Top 6 distance malware classifiers and 32 results, FFT feature vectors
guess  run  algorithms                                                 good  bad  %
1st    1    -dynaclass -binary -nopreprep -raw -fft -cos -flucid         67  154  30.32
1st    2    -dynaclass -binary -nopreprep -raw -fft -diff -flucid        55  166  24.89
1st    3    -dynaclass -binary -nopreprep -raw -fft -cheb -flucid        55  166  24.89
1st    4    -dynaclass -binary -nopreprep -raw -fft -eucl -flucid        50  171  22.62
1st    5    -dynaclass -binary -nopreprep -raw -fft -hamming -flucid     37  184  16.74
1st    6    -dynaclass -binary -nopreprep -raw -fft -mink -flucid        34  187  15.38
2nd    1    -dynaclass -binary -nopreprep -raw -fft -cos -flucid         92  129  41.63
2nd    2    -dynaclass -binary -nopreprep -raw -fft -diff -flucid        77  144  34.84
2nd    3    -dynaclass -binary -nopreprep -raw -fft -cheb -flucid        77  144  34.84
2nd    4    -dynaclass -binary -nopreprep -raw -fft -eucl -flucid        73  148  33.03
2nd    5    -dynaclass -binary -nopreprep -raw -fft -hamming -flucid     46  175  20.81
2nd    6    -dynaclass -binary -nopreprep -raw -fft -mink -flucid        47  174  21.27

guess  run  class                                     good  bad  %
1st    1    VirTool.Win32.VBInject.gen.bp (v)            6    0  100.00
1st    2    Trojan.Win32.Agent.roei                      6    0  100.00
1st    3    BehavesLike.Win32.Malware.dls (mx-v)         6    0  100.00
1st    4    Worm.Win32.AutoRun.dkch                      6    0  100.00
1st    5    Trojan-FakeAV.Win32.Agent.det                6    0  100.00
1st    6    FraudTool.Win32.FakeRean                     6    0  100.00
1st    7    VirTool:Win32/Obfuscator.WJ (suspicious)     6    0  100.00
1st    8    Trojan.Win32.Vilsel.ayyw                     6    0  100.00
1st    9    Worm:Win32/Yeltminky.A!dll                   6    0  100.00
1st    10   Trojan.Win32.Meredrop                        6    0  100.00
1st    11   TrojanDownloader:Win32/Allsum               12    0  100.00
1st    12   Virtumonde                                   6    0  100.00
1st    13   Backdoor.Win32.Hupigon.nndu                  6    0  100.00
1st    14   VirTool:WinNT/Protmin.gen!C [generic]        6    0  100.00
1st    15   PWS:Win32/Fareit.gen!C [generic]             6    0  100.00
1st    16   Trojan-Dropper.Win32.Injector.cxqb           6    0  100.00
1st    17   Trojan.Win32.Menti.mlgp                      6    0  100.00
1st    18   Trojan.Win32.Buzus (v)                       6    0  100.00
1st    19   Trojan.Win32.FakeAV.lcpt                    12    0  100.00
1st    20   Trojan.Win32.Agent.rlot                      6    0  100.00
1st    21   Trojan-Spy.Win32.SpyEyes.aecv                6    0  100.00
1st    22   Trojan:Win32/Swrort.A                       11    1   91.67
1st    23   TrojanDownloader:Win32/Carberp.C            11    1   91.67
1st    24   PWS:Win32/Lolyda.BF                         15    3   83.33
1st    25   Trojan.Win32.Yakes.qjn                       8    4   66.67
1st    26   Trojan.Win32.Agent.rlnz                      5    7   41.67
1st    27   Trojan.Win32.VBKrypt.fkvx                    6   12   33.33
1st    28   VirTool:Win32/VBInject.OT                    6   12   33.33
1st    29   HomeMalwareCleaner.FakeVimes                36  264   12.00
1st    30   Trojan.Win32.Generic!BT                     56  598    8.56
1st    31   Trojan.FakeAlert                             6  108    5.26
1st    32   Trojan.Win32.Generic.pak!cobra               0   18    0.00

5.6.1
Shortcomings
The following are the most prominent issues with the presented tools and approaches. Some of
them are more “permanent”, while others are solvable and intended to be addressed in
future work [314]. Looking at a signal is less intuitive visually for code analysis by humans
(however, the approach can produce an easily identifiable problematic spectrogram in some cases).
Table 9: Top 6 distance malware classifiers and 32 results, wavelet filter preprocessing
guess  run  algorithms                                                  good  bad  %
1st    1    -dynaclass -binary -nopreprep -sdwt -fft -cos -flucid         55  146  27.36
1st    2    -dynaclass -binary -nopreprep -sdwt -fft -diff -flucid        41  180  18.55
1st    3    -dynaclass -binary -nopreprep -sdwt -fft -mink -flucid        41  180  18.55
1st    4    -dynaclass -binary -nopreprep -sdwt -fft -cheb -flucid        41  180  18.55
1st    5    -dynaclass -binary -nopreprep -sdwt -fft -eucl -flucid        41  180  18.55
1st    6    -dynaclass -binary -nopreprep -sdwt -fft -hamming -flucid     30  191  13.57
2nd    1    -dynaclass -binary -nopreprep -sdwt -fft -cos -flucid         75  126  37.31
2nd    2    -dynaclass -binary -nopreprep -sdwt -fft -diff -flucid        56  165  25.34
2nd    3    -dynaclass -binary -nopreprep -sdwt -fft -mink -flucid        67  154  30.32
2nd    4    -dynaclass -binary -nopreprep -sdwt -fft -cheb -flucid        55  166  24.89
2nd    5    -dynaclass -binary -nopreprep -sdwt -fft -eucl -flucid        58  163  26.24
2nd    6    -dynaclass -binary -nopreprep -sdwt -fft -hamming -flucid     44  177  19.91

guess  run  class                                     good  bad  %
1st    1    VirTool.Win32.VBInject.gen.bp (v)            6    0  100.00
1st    2    Trojan.Win32.Agent.roei                      6    0  100.00
1st    3    BehavesLike.Win32.Malware.dls (mx-v)         6    0  100.00
1st    4    Worm.Win32.AutoRun.dkch                      6    0  100.00
1st    5    Trojan-FakeAV.Win32.Agent.det                6    0  100.00
1st    6    FraudTool.Win32.FakeRean                     6    0  100.00
1st    7    VirTool:Win32/Obfuscator.WJ (suspicious)     6    0  100.00
1st    8    Trojan.Win32.Vilsel.ayyw                     6    0  100.00
1st    9    Worm:Win32/Yeltminky.A!dll                   6    0  100.00
1st    10   Trojan.Win32.Meredrop                        6    0  100.00
1st    11   Virtumonde                                   6    0  100.00
1st    12   Backdoor.Win32.Hupigon.nndu                  6    0  100.00
1st    13   VirTool:WinNT/Protmin.gen!C [generic]        6    0  100.00
1st    14   PWS:Win32/Fareit.gen!C [generic]             6    0  100.00
1st    15   Trojan-Dropper.Win32.Injector.cxqb           6    0  100.00
1st    16   Trojan.Win32.Menti.mlgp                      6    0  100.00
1st    17   Trojan.Win32.Buzus (v)                       6    0  100.00
1st    18   Trojan.Win32.Agent.rlot                      6    0  100.00
1st    19   Trojan-Spy.Win32.SpyEyes.aecv                6    0  100.00
1st    20   Trojan.Win32.FakeAV.lcpt                    11    1   91.67
1st    21   TrojanDownloader:Win32/Allsum               10    2   83.33
1st    22   Trojan.Win32.Yakes.qjn                      10    2   83.33
1st    23   Trojan.Win32.Agent.rlnz                      9    3   75.00
1st    24   Trojan:Win32/Swrort.A                        6    6   50.00
1st    25   TrojanDownloader:Win32/Carberp.C             6    6   50.00
1st    26   Trojan.Win32.VBKrypt.fkvx                    5   11   31.25
1st    27   VirTool:Win32/VBInject.OT                    5   11   31.25
1st    28   HomeMalwareCleaner.FakeVimes                46  250   15.54
1st    29   Trojan.FakeAlert                             8  104    7.14
1st    30   Trojan.Win32.Generic.pak!cobra               1   17    5.56
1st    31   Trojan.Win32.Generic!BT                     18  626    2.80
1st    32   PWS:Win32/Lolyda.BF                          0   18    0.00
For source code analysis, line numbers are a problem (they are easily “filtered out” as high-frequency
“noise”, etc.). As a result, a whole “relativistic” and machine learning methodology was
developed for the line numbers in [284] to compensate for that. Generally, when CVEs
are the primary class, by accurately identifying the CVE number one can get all the other
pertinent details from the CVE database, including patches and line numbers, making this a
lesser issue. Accuracy depends on the quality of the knowledge base collected.
Table 11: Top 6 distance malware classifiers and 32 results, low-pass filter preprocessing
guess  run  algorithms                                                 good  bad  %
1st    1    -dynaclass -binary -nopreprep -low -fft -cos -flucid         60  161  27.15
1st    2    -dynaclass -binary -nopreprep -low -fft -cheb -flucid        54  167  24.43
1st    3    -dynaclass -binary -nopreprep -low -fft -diff -flucid        54  167  24.43
1st    4    -dynaclass -binary -nopreprep -low -fft -eucl -flucid        46  175  20.81
1st    5    -dynaclass -binary -nopreprep -low -fft -hamming -flucid     35  186  15.84
1st    6    -dynaclass -binary -nopreprep -low -fft -mink -flucid        33  188  14.93
2nd    1    -dynaclass -binary -nopreprep -low -fft -cos -flucid         88  133  39.82
2nd    2    -dynaclass -binary -nopreprep -low -fft -cheb -flucid        74  147  33.48
2nd    3    -dynaclass -binary -nopreprep -low -fft -diff -flucid        74  147  33.48
2nd    4    -dynaclass -binary -nopreprep -low -fft -eucl -flucid        69  152  31.22
2nd    5    -dynaclass -binary -nopreprep -low -fft -hamming -flucid     49  172  22.17
2nd    6    -dynaclass -binary -nopreprep -low -fft -mink -flucid        48  173  21.72

guess  run  class                                     good  bad  %
1st    1    Trojan:Win32/Swrort.A                       12    0  100.00
1st    2    VirTool.Win32.VBInject.gen.bp (v)            6    0  100.00
1st    3    Trojan.Win32.Agent.roei                      6    0  100.00
1st    4    BehavesLike.Win32.Malware.dls (mx-v)         6    0  100.00
1st    5    Worm.Win32.AutoRun.dkch                      6    0  100.00
1st    6    Trojan-FakeAV.Win32.Agent.det                6    0  100.00
1st    7    FraudTool.Win32.FakeRean                     6    0  100.00
1st    8    VirTool:Win32/Obfuscator.WJ (suspicious)     6    0  100.00
1st    9    Trojan.Win32.Vilsel.ayyw                     6    0  100.00
1st    10   Worm:Win32/Yeltminky.A!dll                   6    0  100.00
1st    11   Trojan.Win32.Meredrop                        6    0  100.00
1st    12   Virtumonde                                   6    0  100.00
1st    13   Backdoor.Win32.Hupigon.nndu                  6    0  100.00
1st    14   VirTool:WinNT/Protmin.gen!C [generic]        6    0  100.00
1st    15   PWS:Win32/Fareit.gen!C [generic]             6    0  100.00
1st    16   Trojan-Dropper.Win32.Injector.cxqb           6    0  100.00
1st    17   Trojan.Win32.Menti.mlgp                      6    0  100.00
1st    18   Trojan.Win32.Buzus (v)                       6    0  100.00
1st    19   Trojan.Win32.FakeAV.lcpt                    12    0  100.00
1st    20   Trojan.Win32.Agent.rlot                      6    0  100.00
1st    21   Trojan-Spy.Win32.SpyEyes.aecv                6    0  100.00
1st    22   TrojanDownloader:Win32/Allsum               11    1   91.67
1st    23   TrojanDownloader:Win32/Carberp.C            10    2   83.33
1st    24   PWS:Win32/Lolyda.BF                         15    3   83.33
1st    25   Trojan.Win32.Yakes.qjn                       8    4   66.67
1st    26   Trojan.Win32.Agent.rlnz                      6    6   50.00
1st    27   Trojan.Win32.VBKrypt.fkvx                    6   12   33.33
1st    28   VirTool:Win32/VBInject.OT                    6   12   33.33
1st    29   HomeMalwareCleaner.FakeVimes                37  263   12.33
1st    30   Trojan.Win32.Generic.pak!cobra               2   16   11.11
1st    31   Trojan.FakeAlert                             8  106    7.02
1st    32   Trojan.Win32.Generic!BT                     35  619    5.35
Some of this collection and annotation is done manually to get the indexes right and is, hence, error prone.
Should there be mistakes and errors, the output quality will suffer. Detecting more of the
useful CVE or CWE signatures in non-CVE and non-CWE cases requires large knowledge
bases (human-intensive to collect), which can perhaps be shared by different vendors via
a common format, such as Forensic Lucid. For MARFCAT, no path tracing (since no
parsing is present), slicing, semantic annotations, context, locality of reference, etc. are
presently possible. Therefore, the corresponding attribute “sink”, “path”, and “fix” results
found in the reports also have to be machine-learned. There is a significant number of
algorithms and their combinations to try (currently ≈ 1800 permutations) to get the best
top-N precision result. This is, however, also an advantage of the approach, as the underlying
framework readily allows for such testing. In most cases, only file-level rather than fragment-level training is done; presently the classes are trained on entire files
instead of the known file fragments of interest. The latter would be more fine-grained and
precise than whole-file classification, but slower. Overall, however, the file-level processing is
more of a man-hour limitation than a technological one.
These shortcomings may affect the credibility/confidence score in the data mining analysis
encoded into the observations in Forensic Lucid. The lower the score, the less likely the
evidence from this analysis is to be used to support or refute claims in the case at hand.
Thus, addressing these shortcomings is an important aspect to improve.
5.6.2
Advantages
The key advantages of the approaches and tools presented follow. The
approach is relatively fast (e.g., MARFCAT's processing of Wireshark's ≈ 2400 files to train
and test completes in about three minutes) on a now-commodity desktop or laptop. As
signal processing tools, they are language- and protocol-independent (no syntactic parsing):
given enough examples, one can apply them to any data type (language, pcap, etc.), i.e., the
methodology is the same regardless of whether C, C++, Java, or any other source or binary language
(PHP, C#, VB, Perl, bytecode, assembly, etc.) is used, or other data types such as
images, pcaps, etc. As a machine-learning based approach, the tools can automatically learn
a large knowledge base to test on known and unknown cases, and can learn, for example,
from previous SATE 2008–2013 reports that are publicly available [348]. We can also use
the tools to quickly pre-scan project data for further analysis by humans or by other tools that
do in-depth parsing and semantic analyses, as a means to prioritize large data sets for such
tools. We often get high (or good enough) precision and recall in CVE and CWE detection,
even at the file level, and good enough precision for whole-pcap processing. There are
many algorithms and combinations thereof from which to select the best for a particular classification task
after initial test runs. The approach can cope with altered code or code clones used in other
projects (e.g., a lot of the problems in Chrome were found in WebKit, which is used by several browsers).
The fast and relatively high-precision results can guide a forensic investigation involving
big data more quickly, enabling investigators to focus. The higher-precision results can
prioritize the related Forensic Lucid observations in the process.
5.6.3
Practical Implications
We outline some practical implications of the presented approach [314]. MARFCAT can be
used on any target language without modifications to the methodology or knowledge of the syntax
of the language. Thus, it scales to the analysis of any popular or new language with a very small
amount of effort [314]. MARFPCAT can likewise scale to various data/code clone, malware,
or application identification tasks. MARFCAT was easily adapted to compiled binaries and
bytecode to be able to detect vulnerable deployments and installations, akin to virus scanning
of binaries, but instead of scanning for infected binaries, one would scan for security-weak
binaries in site deployments to alert system administrators to upgrade their packages [314].
It can likewise learn from binary signatures from other tools such as Snort [438]. As a result,
both tools are extendable to the embedded and mission-critical code found in aircraft,
spacecraft, and various autonomous systems [314] for incident prevention or investigations.
Spectral signatures and machine learning techniques are very beneficial for file type analysis
as well, especially when there is a need to bulk-preprocess a large collection of files for
preliminary classification of “files of interest” on a suspect's hard drive [290] (e.g., with the
evidence in the form of audio and text files recovered from a suspect's computer).
All the tools are designed in Java with easier plug-in-like integration into
Java-based plug-in frameworks, such as JPF, Eclipse, and others [290], in mind. MARF,
Java-based plug-in frameworks, such as JPF, Eclipse, and others [290] in mind. MARF,
MARFCAT, and MARFPCAT were already updated to export their results and data structures in the Forensic Lucid format in order to allow their inclusion into the existing
investigative cases.
Chapter 6
The General Intensional Programming
System
This background chapter covers the General Intensional Programming System (GIPSY) [161,
241, 264, 302, 361, 362, 363, 366, 369, 370, 499, 529], which is an open-source platform
implemented primarily in Java to investigate properties of the Lucid [25, 26, 27] (see Chapter 4) family of intensional programming languages and beyond. GIPSY is being developed
and maintained by the GIPSY Research and Development Group at Concordia University,
Montreal, Canada. As a multi-tier distributed system, it evaluates Lucid programs following
a demand-driven distributed generator-worker architecture, and it is designed as a modular collection of frameworks in which components related to the development (RIPE1 ), compilation
(GIPC2 ), and execution/evaluation (GEE3 ) of Lucid [27] programs are decoupled, allowing easy extension, addition, and replacement of components and subcomponents. The
high-level general architecture of GIPSY is presented in Figure 32, and the high-level
structure of the GIPC framework is in Figure 33 [271, 304, 305, 307, 322].
This background chapter is compiled from a list of the related cited works by the GIPSY
Research and Development Group and the author Mokhov. In this chapter we present the
general GIPSY overview (Section 6.1, page 129) including the necessary historical notes of
its conception, evolution, and the subsequent extensions for the use in this thesis as well as
1
Run-time Integrated Programming Environment, implemented in gipsy.RIPE
2
General Intensional Programming Compiler, implemented in gipsy.GIPC
3
General Eduction Engine, implemented in gipsy.GEE
the unaware reader. We follow through with its key architectural design points (Section 6.2,
page 136) to provide an in-depth background overview to subsequent GIPSY contributions
in Part II.
6.1
Overview
GIPSY [241, 264, 282, 302, 363, 366, 369, 370, 499, 529] is a continued effort at the design
and development of a flexible and adaptable multi-lingual programming language development framework aimed at the investigation of the Lucid family of intensional programming
(Section 3.2) languages [24, 25, 26, 354, 361, 364, 379, 396, 509]. Using this platform, programs written in various flavors of Lucid can be compiled and executed in a variety of
ways [161, 301, 302, 315]. The framework approach adopted aims to provide the possibility of easily developing compiler components for other languages of an intensional nature
and of executing them on a generally language-independent run-time system. With Lucid
being a functional “data-flow” language (cf. Chapter 4), its programs can be executed in
a distributed processing environment [315]. As a result, by being multi-lingual, GIPSY's
design incorporates the mentioned flexible compiler framework and a run-time system to
allow processing of programs written in multiple dialects of Lucid, as well as mixing them
with common imperative languages, such as Java, potentially all in the same source code
“program”, i.e., a source code file comprising a semantic unit of interrelated program fragments
written in multiple languages and interacting with each other. This is what turns GIPSY
from being “just a Lucid dialect” into a complete programming system for multiple
languages, glued together by the type system (described in depth in Appendix B) and
the “meta” preprocessor language of various declarations to aid compilation [264, 282, 301].
6.1.1
Related Work
The ideas relevant to GIPSY are covered in a number of works from various research teams
about intensional and multidimensional programming, distributed and parallel evaluation, as well as
hybrid intensional/imperative paradigms. The most prominent is perhaps GLU [5, 188,
189, 359] that prompted GIPSY’s existence. The GLU (Granular Lucid) system, developed at the Stanford Research Institute in the 1990s, was arguably the first large hybrid
intensional-procedural system to allow Indexical Lucid programs to use C or Fortran
data structures and procedures [188, 189]. GLU then relied on a distributed Generator-Worker execution architecture to evaluate such programs. Due to the lack of flexibility in its
architectural design, GLU was not able to adapt to further evolutions of Lucid or to interface with object-oriented languages [361, 526] (at least not until the later appearance of GLU#
in 2004 to interface with C++ [359]).
similar model as GLU’s, but with flexibility and adaptability in mind [363, 366, 370]. Using
a framework approach, the GIPSY has been used to develop compilers for different variants
of Lucid, which are allowed to coexist in the same program [264, 399, 527]. Moreover, its
design additionally permits Lucid programs to use procedures or methods defined in virtually
any procedural language [526] as long as a corresponding compiler plug-in is added to the
framework. A similar model is also successfully adopted by the prominent GNU Compiler
Collection (GCC) [485].
The distributed and parallel indexical and intensional program evaluation was studied
in several works, such as Du’s work in 1999 [92, 94] on indexical parallel programming and
Swoboda’s and Plaice’s work on distributed context computing [454] followed by formalization of distributed intensional programming paradigm by Swoboda in his PhD thesis [452]
in 2004. A slightly more recent work on scheduling and evaluation of multidimensional programs in parallel/distributed environments was performed by Ben Hamed [159] in 2008. The
data-flow aspect of intensional programming was also explored in graph-based distributed
systems [56] back in 1995 by Cao et al. Subsequently, a number of formalisms appeared,
including Bensnard et al. [42] presenting an example of multi-formalism application design
and distribution in a data-flow context, Swoboda's work [452, 454], and Fourtounis [118] in
2011 on the formal specification and analysis of a parallel virtual machine for lazy functional
languages (which refers to some of the GIPSY and the mentioned related work). Fisher and
Kakoudakis addressed flexible agent grouping in executable temporal logic [111], and Gagné and
Plaice looked at demand-driven real-time computing [126]. Ranganathan and Campbell in 2003 also proposed a middleware for context-aware agents in ubiquitous computing
environments [397]. Peralta et al. proposed an approach similar to GIPSY for automatic
synthesis and deployment of intensional Kahn process networks [375].
The hybridification (mixing intensional/logic programming with imperative languages)
aspects were discussed in several works: Liu and Stables [238] proposed inclusion of logic
expressions into the procedural programming languages in 1995; in 1999 Rondogiannis subsequently proposed adding multidimensionality to imperative programming languages [406];
while Swoboda and Wadge proposed tools to intensionalize software systems in general [455].
As time passed, Ditu and Plaice proposed the general TransLucid [90] language with
elaborate typing system and intent to support multicore processors, Cartesian programming
model [377] and some of its PoC eager [378, 379] and multithreaded [396] implementation by
Rahilly in 2007–2008.
In relation to aspect-oriented programming (AOP), Du pondered the links between AOP and the intensional programming models [95] by drawing parallels between the notions of context in the two paradigms.
All of this work and related contributions are accumulated and summarized in [364],
which is regularly updated as a reference resource. The GIPSY's software architecture is designed to evolve continuously and to accept new ideas from GIPSY members as well as from the cited works of others for comparative studies and evaluation.
6.1.2
Historical Notes
Historically, GIPSY was conceived as a very modular collection of frameworks and artifacts geared towards sustainable support for intensional programming languages and embracing continuous iterative revision and development. It was designed to overcome the issues of the earlier GLU system [5, 188, 189], which did not survive for very long due to its inflexibility in extending to newer dialects and its unmaintainability defects [161, 302, 361, 362, 363, 370].
The initial GIPSY and GIPL ideas were featured in Paquet’s PhD thesis [361] in 1999.
Then those ideas were architecturally expanded by Paquet and Kropf in 2000 [363]. Subsequently, the project moved to Concordia, where it resides to the present day. In the meantime,
Grogono defined GIPC increments [143] in 2002. Ren, also in 2002 [399], produced the first
Indexical Lucid and GIPL compilers as a part of the first PoC GIPC written in Java
and JavaCC [506]. Wu in the same year provided the first version of the semantic analyzer
and translation rules to the Indexical Lucid and GIPL compilers [527]. Wu, Paquet, and
Grogono expanded further the detailed design of the GIPC framework in 2003 [530]. Alagar,
Paquet, and Wan described the notion of Intensional Programming for agent communication
in 2004 [7]. Lu, Grogono, and Paquet then proposed and defined the initial GEE Lucid interpreter in C++, subsequently rewritten in Java by Lu as multithreaded and distributed Java RMI (Remote Method Invocation) prototypes [241, 242]. In 2004, Grogono
proposed Onyx for lazy multidimensional arrays [144]. Then, in the same year, Paquet,
Wu, and Grogono expanded on the GIPC architecture further [369]. After that, also in
2004, Tao provided the first realization of the data warehousing and garbage collection in
the GEE [459]. Following that, Ding [89] provided a PoC implementation of the automated
translation between graphical and textual representations of intensional programs for Indexical Lucid and GIPL within GIPC. In the meantime, in 2005–2006 Wan developed a
full context theory for intensional programming and the corresponding artifact of the Lucx
language [513, 514, 515]. Vassev looked into the distributed aspects in his follow-up work
on the GEE by designing the Demand Migration Framework (DMF) and its concrete instance of Jini DMS (Demand Migration System) to incorporate the demand store and the
distribution architecture middleware to work together [498, 499, 501]. Wu and Paquet reviewed the GIPSY architecture [370, 529] and began pondering the inclusion of intensional expressions in Java. In 2003–2005, the author Mokhov (along with Paquet and Grogono) re-integrated and retrofitted all the components with the original design by Paquet, defined the Preprocessor, and introduced the first instance of the hybrid intensional-imperative compilation frameworks of GIPC with the JLucid and Objective Lucid prototype compilers (with sequential threads, communication procedures, and operational semantics), an initial type system, and other aspects, integrating all the components under a common CVS [146] repository [145, 261, 262, 264]. This led to a more general framework for greater interoperability between intensional and imperative programming languages—GICF—and a web-based editor, the RIPE servlet front-end. In 2007–2008, Pourteymour et al. provided another
PoC instance of the DMS, implemented using JMS [383, 384, 385]. Around the same time, Tong et al. implemented a Lucx compiler [365, 473, 474]. Also in 2007, the author
Mokhov proposed the use of GIPSY as a platform for Intensional Cyberforensics and Forensic Lucid [266], eventually culminating in this thesis (see Section 6.1.4.3 and further). In 2008, the author proposed the MARFL language to manage the configuration of MARF (Modular Audio Recognition Framework) [272] using hierarchical contexts. He also performed the first preliminary security evaluation of the earlier GICF design and of the distributed aspects of GIPSY [271]. In the same year, 2008, Vassev moved on
to propose self-management properties of GIPSY with the design of the Autonomic GIPSY
(AGIPSY) [500]. Subsequently, around 2007–2010, Paquet proposed a new multi-tier architecture for the GEE [362], for which Han et al. provided the initial implementation [160, 161]; Ji and the author Mokhov completely unified the Jini and JMS DMSes [193], and Ji did a scalability study of the two [191]. Concurrently, in 2008–2009, Wu et al. completed the design and
implementation of a compiler for JOOIP to embed Lucid expressions into Java [526, 528]
such that Java classes can instantiate intensional variables, and have the Lucid expressions
access the Java class properties (methods, variables (local, instance, class)). Meanwhile,
the author Mokhov investigated the GIPSY’s use for the HOIL support [302] and provided
a complete GIPSY Type System (Appendix B) specification [301, 315] alongside with Tong
and Paquet in the context of the multi-tier work and Lucx compiler implementation. The
author Mokhov et al . then moved on to the proposal of self-forensics aspects within GIPSY
and other systems [321], which is also an ongoing work (see Appendix D). In 2011–2013,
the author Mokhov developed problem-specific generator and worker tiers for MARFCAT
(Section 5.4) and MARFPCAT [287, 289, 314] and for genome sequencing as a case study
with Rabah [393]. At the same time, in 2011–2012, Rabah designed the first prototype of
a graph-based RIPE component to manage distributed GIPSY networks [393]. The author
Mokhov (along with Rabah’s help) built the GIPSY cluster lab environment (detailed in
Section 8.6, page 238) for higher-performance and scalability evaluations. Subsequently, the
author Mokhov expanded the compiler and run-time support onto the Forensic Lucid
language as a part of this thesis.
As previously mentioned, the GIPSY’s design is centered around the compiler framework
(GIPC) following good principles of compiler design [240], the eduction execution engine
(GEE), i.e., the run-time execution environment (akin to a virtual machine for the execution of intensional logic expressions), and the programming environment (RIPE). The former of the three is responsible for supporting multiple compilers within a common compiler framework, all of which produce a consistent, well-agreed-on binary format—essentially a compiled GIPSY program—as their output. The second performs lazy demand-driven (potentially parallel/distributed) evaluation of the compiled Lucid programs, or as we call them, GIPSY programs [302].
This idea of the GIPSY’s framework approach is to provide an infrastructure to develop
compiler and run-time components for other languages of intensional nature easier as well as
to execute them on a relatively language-independent run-time system [315]. As discussed
earlier, Lucid programs in general can be naturally executed in a distributed processing environment because their constructs and expressions do not impose sequentiality. However,
the standard Lucid algebra (i.e., types and operators) is extremely fine-grained and can
hardly benefit from distributed evaluation of the operands. Adding granularity to the data
elements manipulated by Lucid comes through the addition of coarser-grained data types
and their corresponding operators [315] (Appendix B.4). With Lucid semantics being defined
as typeless, a solution to the granularity problem consists in adding a hybrid counterpart
to Lucid to allow an external language to define an algebra of coarser-grained types and
operators [301, 315].
6.1.3
From Sequentiality to Concurrency with DMS and Multi-Tier Architecture
The eventual availability of the Demand Migration Framework and System (DMF and
DMS) [384, 385, 498, 499] turned the GIPSY’s PoC RMI implementation [241] into a true
distributed system [271]. These developments prompted a number of related research sub-directions, some of which are still active today. Specifically, when the notion of the prototype JLucid and Objective Lucid languages was conceived in [264], it led to a more general framework for greater interoperability between intensional and imperative programming languages—GICF [271]. However, with the greater flexibility that these languages, GICF, and the DMS brought came issues of security of the embedded code, demand monitoring, and so on [271].
The subsequent evolution presented in [161, 191, 193] furthered the original architecture
for the run-time system of the GIPSY (as hinted in [361], and elaborated in [302, 362, 363,
370]). The architecture proposed in these works was itself developed following the generator-worker architecture adopted successfully by GLU [188, 189], as mentioned earlier, whose run-time system was, however, not as scalable and flexible [161] as the solutions presented in the cited GIPSY works. In addition, the communication procedures of GLU's distributed run-time system were implemented using RPC only [161]. The GIPSY solution proposed a much more flexible approach by integrating demand migration and storage via the DMF, which can be concretely instantiated using various middleware technologies, such as Jini [498, 499] and JMS [384, 385], and others as they become available and integrated, allowing even heterogeneous mixed use of such technologies simultaneously to fulfill different communication requirements and availability constraints [161, 191, 193]. Further design and development plans for GIPSY include trying other/new architectures (e.g., Hadoop, PlanetLab [466], and others), and these prompted the establishment of a dedicated cluster setup (see Section 8.6, page 238). Distributed and scalable evaluation becomes very important in large-scale reasoning and in the evaluation of large amounts of digital evidence in Forensic Lucid case evaluations.
6.1.4
Context-Oriented Reasoning with Lucid in GIPSY
The reasoning aspect of GIPSY is a particularity of a Lucid dialect (Chapter 4) rather than
its entire architecture. The architecture is general enough to go beyond the pure evaluation of intensional logic expressions: those expressions may form a language dialect that helps us with reasoning, such as Forensic Lucid for reasoning about cybercrime incidents and claims.
6.1.4.1
Reasoning in Hybrid OO Environment
Object-orientation came to GIPSY with the concept of Objective Lucid [264, 282], which provided rudimentary support for Lucid programs to manipulate Java objects as first-class values. Its generalization resulted in JOOIP [526, 528], offering Lucid fragments within Java code and allowing the Lucid code to reference Java methods and variables, along with the corresponding type system extensions, providing context-aware Java objects [302] (see
Section 4.3.2, page 92).
6.1.4.2
Reasoning in Autonomic Environment
This aspect is important for the project on self-forensics described in Appendix D. Vassev
and Paquet designed an Autonomic GIPSY [500] (AGIPSY) version of the platform with
the corresponding ASSL toolset [486, 493, 502] as a research case study where GIPSY is
turned into an autonomic system via an ASSL specification, allowing it to run unattended with self-healing, self-protecting, self-configuring, and self-optimizing autonomic properties. An unattended reasoning environment is essential for long-running problem-solving programs with minimal intervention and maintenance [302].
6.1.4.3
Forensic Lucid Compilation and Evaluation Platform
GIPSY is the proposed testing and investigation platform for the compilation and distributed
cyberforensics evaluation of the Forensic Lucid programs [300, 304] (introduced later in
Chapter 7) [271, 304, 305, 307, 322]. GEE is the component where the distributed demand-driven evaluation takes place, subtasked to different evaluation engines implementing the GEE framework (see Figure 58 and Chapter 8). We rely on the GIPSY's compilers for the intensional languages such as GIPL [361], Lucx [513], Objective Lucid [264], and JOOIP [526]. We draw on the syntax and operational semantics of those languages as implemented in GIPSY, borrowing from them the simple context specification of dimensions and tags, the navigational operators @ and #, and the "dot-notation" for object properties, and apply these to context spaces. The dialects referred to cover by themselves a large scope of issues that Forensic Lucid capitalizes on. This is what this thesis is about, in particular Chapter 7, Chapter 8, and Chapter 9, to which we refer the reader.
6.2
GIPSY’s Architecture
Intensional programming (see Section 3.2, [131, 350]), in the context of the Lucid programming language, implies a declarative programming paradigm. The declarations are evaluated
in an inherent multi-dimensional context space [161, 473]. GIPSY evolved from a modular
collection of frameworks for local execution into a multi-tier architecture [161, 362].

Figure 32: High-level structure of GIPSY's GEER flow overview [315]

Back in
the early days, with the bright but short-lived story of GLU in mind, efforts were made to
design a system with similar capacities, but significantly more flexible in order to cope with
the fast evolution and diversity of the Lucid family of languages, thus necessitating a flexible
compiler architecture, and a language-independent run-time system for the execution of Lucid programs. As a result, the GIPSY project’s architecture [241, 264, 282, 302, 370] aims at
providing such a flexible platform for the investigation on intensional and hybrid intensionalimperative programming [161]. The architecture of the General Intensional Programming
Compiler (GIPC) is framework-based, allowing the modular development of compiler components (e.g., parser, semantic analyzer, and translator). It is based on the notion of the
Generic Intensional Programming Language (GIPL) [361, 365], which is the core run-time language into which all other flavors of Lucid (a family of intensional programming languages) can be translated [302]. The notion of a generic language also solved the problem of language independence of the run-time system by allowing a common representation for all compiled programs, the Generic Eduction Engine Resources (GEER): a dictionary of run-time resources compiled from a GIPL program that has been previously generated from the original program using semantic translation rules defining how the original Lucid program is translated into GIPL [161, 302]. The design of the
GIPSY compiler framework went through several iterations [264, 282, 369, 399, 527]. The
architecture necessitates the presence of the intensional-imperative type system and support
links to imperative languages [301, 302, 315]. A generic distributed run-time system has been proposed in [161, 362].

Figure 33: High-level structure of the GIPC framework [315]
GIPSY has a collection of compilers under the GIPC framework (see Figure 33) and the corresponding run-time environment under the eduction execution engine (GEE), among other things, that communicate through the GEE Resources (GEER) (see the high-level architecture in Figure 32). These two modules are the primary components for the compilation and execution of intensional programs, and they require amendments for the changes proposed in this work on intensional forensics [271, 305, 322].
6.2.1
General Intensional Program Compiler (GIPC)
The more detailed architecture of the GIPC is conceptually represented at a high level in Figure 33. It hosts the type abstractions and implementations that are located in the gipsy.lang package and serve as the glue between the compiler (the GIPC—the General Intensional Program Compiler) and the run-time system (the GEE—the General Eduction Engine) to do the static and dynamic semantic analyses and evaluation, respectively. Since GIPSY is a modular system, the majority of its components can be replaced as long as they comply with some general architectural interface/API. One such API interface is the GIPSYProgram (conceptually represented as a GEER—the GEE Resource—a dictionary of run-time resources), which contains, among other things, the type annotations that can be statically inferred during compilation. At run-time, the engine does its own type checking and evaluation when traversing
the abstract syntax tree (AST) stored in the GEER and evaluating expressions represented
in the tree. Since both the GIPC and the GEE use the same type system to do their analysis,
they consistently apply the semantics and rules of the type system with the only difference
that the GEE, in addition to the type checks, does the actual evaluation [302, 315].
6.2.1.1
GIPC Preprocessor
The Preprocessor [264, 282] is the component that is invoked first by the GIPC (see Figure 33) on the incoming GIPSY program's source code stream. The Preprocessor's role is to do preliminary program analysis and processing, splitting the source GIPSY program into "chunks",
each potentially written in a different language and identified by a language tag. In a very
general view, a GIPSY program is a hybrid program written in different language variants in one or more source files. Consequently, there has to be an interface to glue all these code
segments together to ensure proper evaluation. Thus, the Preprocessor after some initial
parsing (using its own preprocessor syntax) and producing the initial parse tree, constructs a
preliminary dictionary of symbols used throughout all parts of the program. This is the basis
for type matching and semantic analysis applied later on. This is also where the first step
of type assignment occurs, especially on the boundary between typed and typeless parts of
the program, e.g., Java (typed) and a specific Lucid dialect (typeless). The Preprocessor
then splits the code segments of the GIPSY program into chunks preparing them to be fed
to the respective concrete compilers for those chunks. The chunks are represented through
the CodeSegment class, instances of which the GIPC collects [302, 315].
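To illustrate the chunk-splitting step in a compact form, the following is a minimal, hypothetical Java sketch (not the actual GIPSY Preprocessor code): it scans a hybrid source stream line by line and starts a new language-tagged chunk whenever a segment tag such as #JAVA or #OBJECTIVELUCID is encountered. SimpleCodeSegment here is a simplified stand-in for the CodeSegment class mentioned above.

import java.util.ArrayList;
import java.util.List;

// Illustrative only: a simplified stand-in for the real CodeSegment class.
final class SimpleCodeSegment {
    final String languageTag;                       // e.g., "JAVA", "CPP", "OBJECTIVELUCID"
    final StringBuilder source = new StringBuilder();
    SimpleCodeSegment(String languageTag) { this.languageTag = languageTag; }
}

final class SimplePreprocessor {
    // Splits a hybrid GIPSY-like source into chunks delimited by lines starting with '#'.
    static List<SimpleCodeSegment> split(String source) {
        List<SimpleCodeSegment> segments = new ArrayList<>();
        SimpleCodeSegment current = new SimpleCodeSegment("DEFAULT");
        for (String line : source.split("\n")) {
            String trimmed = line.trim();
            if (trimmed.startsWith("#")) {
                // A new segment tag starts a new chunk.
                segments.add(current);
                current = new SimpleCodeSegment(trimmed.substring(1).replaceAll("\\s+", ""));
            } else {
                current.source.append(line).append('\n');
            }
        }
        segments.add(current);
        return segments; // each chunk is later fed to its respective concrete compiler
    }
}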
GIPSY Program Segments. There are four baseline types of segments defined in a
GIPSY program [315]. These are:
• #funcdecl (in a way similar to C’s extern) declares function prototypes written as
imperative language functions defined later or externally from this program to be used
by the intensional language part. The syntactical form of these prototypes is particular
to GIPSY programs and need not resemble the actual function prototype declaration
they describe in their particular programming language. They serve as a basis for static
and dynamic type assignment and checking within the GIPSY type system with regards
to procedural functions called by other parts of the GIPSY program, e.g., the Lucid
code segments [315].
• #typedecl lists all user-defined data types that can potentially be used by the intensional part, e.g., classes. These are the types that do not explicitly appear in the
matching table (in Table 17, Appendix B) describing the basic data types allowed in
GIPSY programs [315].
• #<IMPERATIVELANG> declares that this is a code segment written in whatever IMPERATIVELANG may be, e.g., #JAVA for Java, #CPP for C++, #FORTRAN for Fortran,
#PERL for Perl, and #PYTHON for Python, etc. [315].
• #<INTENSIONALLANG> declares that what follows is a code segment written in whatever
INTENSIONALLANG may be, for example #GIPL, #LUCX, #JOOIP, #INDEXICALLUCID,
#JLUCID, #OBJECTIVELUCID, #TENSORLUCID, #TRANSLUCID, #FORENSICLUCID [300], and
#ONYX [144], etc., as specified by the available GIPSY implementations and stubs. An
example of a hybrid program is presented in Listing 6.1. The preamble of the program, with the type and function declaration segments, is the main source of type information that is used at compile time to annotate the nodes in the tree to help both the static and dynamic semantic analyses [315].
#typedecl
myclass;
#funcdecl
myclass foo(int, double);
float bar(int, int) : "ftp://localhost/cool.class" : baz;
int f1();
#JAVA
myclass foo(int a, double b) {
    return new myclass(new Integer((int)(b + a)));
}
class myclass {
    public myclass(Integer a) {
        System.out.println(a);
    }
}
#CPP
#include <iostream>
int f1(void) {
    cout << "hello";
    return 0;
}
#OBJECTIVELUCID
A + bar(B, C)
where
    A = foo(B, C).intValue();
    B = f1();
    C = 2.0;
end;
Listing 6.1: Example of a hybrid GIPSY program
6.2.1.2
GICF Overview
The General Imperative Compiler Framework (GICF) [261] is the GIPSY compiler framework that provides a generalized way of including arbitrary imperative languages into intensional variants within the GIPSY environment. It allows the syntactical co-existence of the intensional and imperative languages in one source file by providing a Preprocessor that splits the intensional and imperative code chunks to be fed to their respective compilers; the results are then gathered and linked together to form a compiled hybrid program as an instance of a GEER [526].
GLU [188, 189], JLucid, and Objective Lucid [264] prompted the development of
GICF. The framework targets the integration of different imperative languages into GIPSY
programs for I/O, portability, extensibility, and flexibility reasons [271]. GLU supported embedded C and Fortran functions; JLucid/Objective Lucid/JOOIP support embedded Java. Since GIPSY aims to unite most intensional paradigms in one research system, it makes an effort to be as general and as compatible as possible while remaining pragmatic at
the same time [271].
The GICF is there if we need to be able to run, for example, GLU programs with minimum
modifications to the code base. GIPSY’s GIPC would be extended with a compiler module in
this case to support C and Fortran functions as it does for Java. GICF is made extensible
such that later on the language support for C++, Perl, Python, shell scripts, and so
on can be relatively easily added. With GICF it is also possible to have a multi-segment
multi-language GIPSY program with embedded code [271].
6.2.2
General Eduction Engine (GEE)
The primary purpose of the GEE is to evaluate compiled Lucid programs following their
operational semantics (see Section 4.1.1.2) either locally or distributively using the lazy
demand-driven model (i.e., eduction). The research in this area covers various extensions
and applications as well as comparative studies of various middleware technologies.
To address run-time scalability concerns, GEE is the component where the distributed
demand-driven evaluation takes place by relying on the Demand Migration System (DMS)
[383, 501] and on the multi-tier architecture overall [161, 271, 322, 362]. The distributed
system [73, 133] design architecture adopted for the run-time system is a distributed multi-tier architecture, where each tier can have any number of instances [302]. The architecture bears resemblance to a peer-to-peer architecture [161, 362], where [302]:
• Demands are propagated without knowing where they will be processed or stored.
• Any tier or node can fail without the system being fatally affected.
• Nodes and tiers can seamlessly be added or removed on the fly as computation is
happening.
• Nodes and tiers can be assigned at run-time to the execution of any GIPSY program, i.e., a specific node or tier could be computing demands for different programs.
The founding works cited earlier [161, 264, 362, 366, 383, 384, 385, 498, 499, 501] cover the initial design and the proof-of-concept Jini- and JMS-based implementations of the DMF, as well as the surrounding effort of integrating them into the GEE [192].
6.2.2.1
Multi-Tier Demand Migration System
The Demand Migration System (DMS) is an implementation of the Demand Migration
Framework (DMF) introduced first by Vassev and then extended by Pourteymour in [383,
384, 385, 499, 501]. The initial version of the DMS relied solely on Jini [194] (now known as
Apache River [20]) for transport and storage of the demands and the results with a JavaSpaces [245] repository acting as a data warehouse cache for the most frequently demanded
computations and their results (i.e., the demand store). The DMF is an architecture that
is centered around the demand store with transport agents (TAs) that implement a particular protocol (as a proof-of-concept Jini and JMS [449] TAs are used [384, 385]) to deliver
demands between the demand store, workers (that do the actual computation primarily for
procedural demands), and generators (that request the computation to be done and collect
results) [271, 322]. Thus, GIPSY has implementations of RMI [523], Jini [194], and JMS [449] transports, and the one that relies on the DMS for its transport needs [271, 322].
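To make the DMF roles concrete, here is a minimal, hypothetical Java sketch of the pattern described above—a demand store, protocol-specific transport agents, and a worker that pulls demands and pushes results back. All interface and class names are illustrative and deliberately simplified; they do not reproduce the actual DMF/DMS APIs.

import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A demand is a serializable request keyed by a signature (e.g., identifier + context).
interface Demand extends Serializable {
    String signature();
    Serializable compute();   // only procedural demands carry real work
}

// Transport agent: the protocol-specific courier (Jini, JMS, ...) between tiers.
interface TransportAgent {
    void send(Demand d);
    Demand receive() throws InterruptedException;
    void sendResult(String signature, Serializable result);
    Serializable receiveResult(String signature) throws InterruptedException;
}

// A toy in-memory demand store: caches results so re-generated demands are not recomputed.
final class DemandStore {
    private final Map<String, Serializable> results = new ConcurrentHashMap<>();
    Serializable lookup(String signature) { return results.get(signature); }
    void store(String signature, Serializable result) { results.put(signature, result); }
}

// Worker role: pull a demand via a transport agent, compute it, and push the result back.
final class Worker implements Runnable {
    private final TransportAgent ta;
    Worker(TransportAgent ta) { this.ta = ta; }
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Demand d = ta.receive();
                ta.sendResult(d.signature(), d.compute());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

A generator in this picture would be the symmetric counterpart: it sends demands through a transport agent, consults the demand store first, and waits on receiveResult() only for values that are not yet cached.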
6.2.2.1.1
Multi-Tier Unification of Jini and JMS DMS.
Initially, when Vassev
and Pourteymour provided the Jini and JMS implementations respectively, they were rather
disparate and did not integrate fully well with the updated multi-tier architecture put forward
by Paquet at the design and implementation level. However, it was important to bring
them under the same frameworked rooftop for comparative studies as well as concurrent use
in order to be able to gain insights and recommend a particular architecture for a given
GIPSY network setup. Pourteymour [383] initially researched the JMS and its properties
in comparison with Jini in the published literature and tutorials [11, 20, 65, 101, 114, 164,
197]. Ji and the author Mokhov subsequently did a design update and refactoring to enable
smooth scripted selection of either Jini- or JMS-based tiers (or both middleware technologies
used simultaneously) to participate in a single GIPSY computation network [192, 193]. Ji
afterward did an in-depth scalability study of the two [191] in the integrated environment.
6.2.2.1.2
Generic Eduction Engine Resources.
One of the central concepts of the
GIPSY’s’ solution is language independence of the run-time system. In order to achieve that,
the design relies on an intermediate representation that is generated by the compiler: the
Generic Eduction Engine Resources (GEER). The GIPC compiles a program into an instance
of the GEER(s), including a dictionary of identifiers extracted from the program [264, 282,
369]. Since the compiler framework provides the potential for any flavor of the Lucid language to be added through automated compiler generation taking semantic translation rules as input [527], the compiler authors need to provide a parser and a set of rules to compile and link a GEER, often by translating a specific Lucid dialect to GIPL first [302].
As the name suggests, the GEER structure is generic, in the sense that the data structure
and semantics of the GEER are independent of the source language. This is necessitated by
the fact that the engine was designed to be “source-language independent”, an important
feature made possible by the presence of the Generic Intensional Programming Language
(GIPL) as a generic language in the Lucid family of languages [161, 302, 362]. Thus, the
compiler first translates the source program (written in any flavor of Lucid) into the “generic
Lucid” [364], then generates the GEER run-time resources for this program, which are
then made available at run-time to the various tiers upon demand. The GEER contains,
for all Lucid identifiers in a given program, typing information, rank (i.e., dimensionality
information), as well as an abstract syntax tree (AST) representation of the declarative
definition of each identifier [161, 302, 362]. It is this latter tree that is traversed later on
by the demand generator tier in order to proceed with demand generation. In the case of
hybrid Lucid programs, the GEER also contains a dictionary of the procedures called by the Lucid program, known as Sequential Procedure Classes: they are in fact wrapper classes that wrap the procedures inside a Java class (using JNI [442] in cases where the functions being called are not written in Java) [264, 282, 302].
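As an illustration of what such a compiled resource dictionary may contain, the following hypothetical Java sketch models a GEER-like structure holding, per identifier, its type, rank, and definition AST, plus wrapper classes for hybrid programs; the class and field names are invented for this example and do not mirror the actual GIPSY classes.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative AST node for a declarative identifier definition.
final class AstNode {
    final String operator;          // e.g., "@", "#", "+", "if", or an identifier
    final List<AstNode> children;
    AstNode(String operator, List<AstNode> children) {
        this.operator = operator;
        this.children = children;
    }
}

// One dictionary entry per Lucid identifier: type, rank (dimensionality), and its definition AST.
final class GeerEntry {
    final String type;              // e.g., "int", "dimension", "function"
    final int rank;                 // number of dimensions the identifier varies in
    final AstNode definition;       // traversed later by the Demand Generator Tier
    GeerEntry(String type, int rank, AstNode definition) {
        this.type = type;
        this.rank = rank;
        this.definition = definition;
    }
}

// A GEER-like container: language-independent resources produced by the compiler.
final class GenericEngineResources {
    final Map<String, GeerEntry> identifiers = new HashMap<>();
    // For hybrid programs: wrapper ("sequential procedure") classes keyed by procedure name.
    final Map<String, Class<?>> sequentialProcedureClasses = new HashMap<>();
}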
6.2.2.1.3
GIPSY Tier.
The architecture adopted for the most recent evolution of the
GIPSY is a multi-tier architecture where the execution of GIPSY programs is divided into three different tasks assigned to separate tiers [161, 362]. Each GIPSY tier is a separate process that
communicates with other tiers using demands, i.e., the GIPSY Multi-Tier Architecture operational mode is fully demand-driven. The demands are generated by the tiers and migrated
to other tiers using the Demand Store Tier. We refer to a tier as an abstract and generic
entity that represents a computational unit independent of other tiers and that collaborates
with other tiers to achieve program execution as a group (GIPSY network) [161, 302, 362].
The author Mokhov made the tier architecture extensible to application-specific domains as well, allowing problem-specific tier instances to use the architecture (e.g., see Section 8.3
and Section 5.4). In Figure 34 is the context use case diagram describing user interaction with the nodes and tiers to get them started to form a GIPSY software network.

Figure 34: GMT context use case diagram

The user
interaction is done either via command line or GUI support in the RIPE package interfacing
the GMT [393].
6.2.2.1.4
GIPSY Node.
Abstractly, a GIPSY Node is a computer (physical or virtual)
that has registered for the hosting of one or more GIPSY Tiers. GIPSY Nodes are registered
through a GIPSY Manager Tier (GMT) instance. Technically, a GIPSY Node is a controller
that wraps GIPSY Tier instances, and that is remotely reporting and being controlled by
a GIPSY Manager Tier [161, 362]. Operationally, a GIPSY Node hosts one tier controller
for each kind of Tier (see Figure 35). The Tier Controller acts as a factory that will, upon
necessity, create instances of this Tier, which provide the concrete operational features of the
Tier in question. This model permits scalability of computation by allowing the creation of
new Tier instances as existing tier instances get overloaded or lost [161, 302, 362].
Figure 35: Design of the GIPSY node [161, 362]
6.2.2.1.5
GIPSY Instance.
A GIPSY Instance is a set of interconnected GIPSY Tiers
deployed on GIPSY Nodes executing GIPSY programs by sharing their respective GEER
instances. A GIPSY Instance can be executing across different GIPSY Nodes, and the same
GIPSY Node may host GIPSY Tiers that are members of separate GIPSY Instances [161,
302, 362]. In Figure 35 is Paquet’s rending of the described design [362]. GIPSY Instances
form so called GIPSY software networks, similar in a way to software-defined networks [117].
6.2.2.1.6
Demand Generator Tier.
The Demand Generator Tier (DGT) generates
demands according to the program declarations and definitions stored in one of the instances
of GEER that it hosts. The demands generated by the Demand Generator Tier instance can
be further processed by other Demand Generator Tier instances (in the case of intensional
demands) or Demand Worker Tier instances (in the case of procedural demands), the demands being migrated across tier instances through a Demand Store Tier instance. Each
DGT instance hosts a set of GEER instances that corresponds to the Lucid programs it
can process demands for. A demand-driven mechanism allows the Demand Generator Tier to issue system demands requesting additional GEER instances to be added to its GEER Pool (a local collection of cached GEERs it has learned), thus enabling DGT instances to process demands for additional programs in execution on the GIPSY networks they belong
to [161, 302, 362]. The author Mokhov additionally introduced the notion of problem-specific
DGTs (e.g., MARFCATDGT discussed later) to show the wide array of applications that are possible using the multi-tier architecture as a middleware platform.
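The following hypothetical Java sketch captures this eductive demand-generation loop in miniature: a generator resolves an identifier in a context by first consulting the demand store and otherwise traversing the identifier's definition, dispatching procedural demands to a worker. The names, and the single-process shortcut in dispatchToWorker, are assumptions made purely for illustration.

import java.io.Serializable;
import java.util.Map;

// A toy demand generator loop: resolves an identifier in a given context by
// consulting the (cached) demand store first and traversing the definition otherwise.
final class ToyDemandGenerator {
    private final Map<String, Serializable> demandStore;   // signature -> value
    private final Map<String, ToyDefinition> geerPool;      // identifier -> definition

    ToyDemandGenerator(Map<String, Serializable> demandStore,
                       Map<String, ToyDefinition> geerPool) {
        this.demandStore = demandStore;
        this.geerPool = geerPool;
    }

    Serializable demand(String identifier, String context) {
        String signature = identifier + "@" + context;
        Serializable cached = demandStore.get(signature);
        if (cached != null) {
            return cached;                        // eduction: reuse a previously computed value
        }
        ToyDefinition def = geerPool.get(identifier);
        Serializable value = def.isProcedural()
                ? dispatchToWorker(def, context)  // procedural demand -> Demand Worker Tier
                : def.evaluate(this, context);    // intensional demand -> traverse its definition
        demandStore.put(signature, value);
        return value;
    }

    private Serializable dispatchToWorker(ToyDefinition def, String context) {
        // In the real system this would migrate the demand via a Demand Store Tier to a DWT.
        return def.evaluate(this, context);
    }
}

interface ToyDefinition {
    boolean isProcedural();
    Serializable evaluate(ToyDemandGenerator generator, String context);
}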
6.2.2.1.7
Demand Store Tier.
The Demand Store Tier (DST) acts as a tier middleware in order to migrate demands between tiers. In addition to the migration of the
demands and values across different tiers, the Demand Store Tiers provide persistent storage
of demands and their resulting values, thus achieving better processing performance by not having to re-compute the value of every demand every time it is re-generated after having
been processed. From this latter perspective, it is equivalent to the historical notion of an
intensional value warehouse [363, 459] in the eductive model of computation (Section 4.1.4,
page 86). A centralized communication point or warehouse is likely to become an execution
bottleneck for large long-running computations. In order to avoid that, the Demand Store
Tier is designed to incorporate a peer-to-peer architecture as needed and a mechanism to
connect all Demand Store Tier instances in a given GIPSY network instance. This allows
any demand or its resulting value to be stored on any available DST instance, but yet allows
abstract querying for a specific demand value on any of the DST instances. If the demanded
value is not found in the DST instance receiving the demand, it will contact its DST peers
using a peer-to-peer mechanism. This mechanism allows one to see the Demand Store abstractly as a single store that is, behind the scenes, a distributed one [161, 302, 362].
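The look-up logic just described—local warehouse first, then peers—can be sketched as the following hypothetical Java fragment, assuming an in-memory map per store instance; the class is illustrative and is not the actual DST implementation.

import java.io.Serializable;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// A toy peer-to-peer demand store: local cache first, then ask peers.
final class PeerDemandStore {
    private final Map<String, Serializable> warehouse = new ConcurrentHashMap<>();
    private final List<PeerDemandStore> peers = new CopyOnWriteArrayList<>();

    void addPeer(PeerDemandStore peer) { peers.add(peer); }

    void store(String demandSignature, Serializable value) {
        warehouse.put(demandSignature, value);
    }

    // Local lookup only (used when a peer asks, to avoid endless forwarding).
    Optional<Serializable> lookupLocal(String demandSignature) {
        return Optional.ofNullable(warehouse.get(demandSignature));
    }

    // Abstract lookup: behaves like a single store, although it is distributed.
    Optional<Serializable> lookup(String demandSignature) {
        Optional<Serializable> local = lookupLocal(demandSignature);
        if (local.isPresent()) {
            return local;
        }
        for (PeerDemandStore peer : peers) {
            Optional<Serializable> remote = peer.lookupLocal(demandSignature);
            if (remote.isPresent()) {
                return remote;
            }
        }
        return Optional.empty();   // value not yet computed anywhere
    }
}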
6.2.2.1.8
Demand Worker Tier.
The Demand Worker Tier (DWT) processes primarily procedural demands, i.e., demands for the execution of functions or methods defined
in a procedural language, which are only present in the case where hybrid intensional programs are being executed. The DGT and DWT duo is an evolution of the generator-worker
architecture adopted in GLU [5, 188, 189]. It is through the operation of the DWT that the
increased granularity of computation is achieved. Similarly to the DGT, each DWT instance
hosts a locally pooled set of compiled resident procedures (sequential thread procedure classes) that corresponds to the procedural demands it can process. A demand-driven mechanism allows the Demand Worker Tier to issue system demands requesting additional GEERs to be added to its GEER pool, thus achieving increased processing knowledge capacity over
time, eductively [161, 302, 362].
6.2.2.1.9
General Manager Tier.
A General Manager Tier (GMT) is a component that enables the registration of GIPSY Nodes and Tiers and allocates them to the GIPSY network instances that it manages. The General Manager Tier interacts with the allocated tiers in order to determine whether new tiers and/or nodes need to be created, and issues system demands to GIPSY Nodes to spawn new tier instances as needed. In order to ease
the node registration, the General Manager Tier can be implemented using a web interface
and/or a stand-alone graph-based UI [393], so that users can register nodes using a standard
web browser, rather than requiring a client. As with DSTs, multiple GMTs are designed
to be peer-to-peer components, i.e., users can register a node through any GMT, which will
then inform all the others of the presence of the new node, which will then be available for hosting new GIPSY Tiers at the request of any of the GMTs currently running. The GMT
uses system demands to communicate with Nodes and Tiers [161, 302, 362], in a way similar
to SNMP get and set requests [165, 251, 440]. In Figure 36 is a more detailed GMT-oriented
use case diagram of Figure 34 (where other tiers are implied with the focus on the GMT).
Once started, the GMT acts as a service using the DMS, which is designed to play an active
role in managing the GIPSY software network instance along with the interacting user.
6.2.2.2
Scalability
The GIPSY multi-tier architecture, after extensive design revision, implementation, and refactoring [160, 161, 191, 193], was put to scalability tests by Ji [191]. The need for such tests is also emphasized in the recent work by Fourtounis et al. [118], where eduction and massive parallelism of intensional evaluation are discussed in a formal model and a PoC implementation for analysis.
Figure 36: Detailed GMT context use case diagram

Scalability is an important attribute of any computing system as it represents the ability to achieve long-term success when the system is facing a growing demand load. The multi-tier architecture was adopted for the GIPSY runtime system for research goals such as scalability [161, 362]; therefore, upon implementation of the PoC Jini and JMS DMS, the scalability of the GIPSY runtime system needed to be assessed for the validation of the implementation, which Ji has done in his master's thesis in 2011 [191, 192].
While scalability is an important term, Ji discovered during his research that although
the term scalability is widely used and treated as an important attribute in the computing
community, there was no commonly accepted definition of scalability available [96]. Without
a standardized definition, researchers used scalability to denote the capability for the long-term success of a system in different aspects, such as the ability of a system to hold an increasing amount of data, to handle an increasing workload gracefully, and/or to be enlarged easily [46, 191, 192]. Ji tested under which circumstances the Jini DMS was better or worse than the JMS one in terms of scaling out GIPSY Nodes onto more physical computers and their available CPUs for the tiers, the amount of memory used on the nodes before DSTs run out of memory, the number of demands they could handle, network delays and turnaround time, and other characteristics [191].
The scalability aspect is very important to this work as there is always a possibility of
a need to do heavy-weight data mining and process vast amounts of digital evidence in an
investigation efficiently, robustly, and effectively, a problem akin to a human genome sequence
alignment and reconstruction [100].
6.2.3
Run-time Interactive Programming Environment—(RIPE)
RIPE’s place in the GIPSY’s architecture [264, 363] has to do with the user interaction
aspect of the system. While presently it is the most underdeveloped aspect, it has had some
valuable contributions made to it. The first of them is the PoC realization of the bidirectional
translation of Indexical Lucid and GIPL into visual Graphviz-based data-flow graphs [89]
based on the corresponding earlier assertion by Paquet [361] that such multidimensional
translation is possible (see Chapter E). In Figure 37 is the context use case diagram for RIPE illustrating the goal-level activities the user needs.
Figure 37: RIPE context use case diagram

The latest of the contributions is the graph-based visual GMT (GGMT) integration by Rabah [393]. Its purpose is to allow
start up and configuration management of the GIPSY network instances initially developed
mostly with the command-line interface (CLI) by Ji and a simple Simulator GUI by Vassev. This more flexible approach made it significantly easier to operate GIPSY networks. See, for example, its connectivity for the MARFCAT test case with the corresponding graph in Figure 38.

Figure 38: MARFCAT GIPSY network graph
The author Mokhov additionally contributed to the general refactoring and standardization of the RIPE package hierarchy, as well as a servlet-based web front-end to launch the compilation and execution process via a browser on the server side [264] (see Figure 39). A CLI is also supported for RIPE, primarily for invocation by various scripts. The
MAC Spoofer Analyzer case relies on this in particular (see Section 9.5) to start the analysis
process.
6.3
Summary
To summarize, we have motivated the discussion and the necessity of the General Intensional Programming System (GIPSY) and its core GIPC, GEE, and RIPE components, and reviewed its existence in the historical context of GLU as well as the follow-up design and development iterations. A particular level of detail was given to the GIPC and GEE frameworks with the goals of flexibility (data types, languages, replaceable components) and scalability (multi-tier DMS) in mind, important for the Forensic Lucid project presented in this thesis, especially its compiler and extended run-time system design and development described in Chapter 8.

Figure 39: GIPSY WebEditor interface [264]
Part II
Methodology
Chapter 7
Forensic Lucid Design and
Specification
This chapter discusses the core contribution of the thesis—the Forensic Lucid language.
This includes the methodology for the formalization of the necessary forensic constructs, the syntax, the semantics, and the augmentation with reference to the background work in Part I.
This chapter is an extended compilation of a number of the published and unpublished
works [269, 290, 300, 304, 307, 312] detailing various aspects of Forensic Lucid construction
and case studies including a number of supporting works.
Forensic Lucid [269, 300, 304, 309, 310] is a forensic case specification language for
automatic deduction and event reconstruction in digital crime incidents. The language itself
is general enough to specify any events (in terms of their properties and description), duration,
as well as the context-oriented system model [277, 308, 321]. Forensic Lucid is based on
Lucid [24, 25, 26, 379, 509] and its various dialects that allow natural expression of various
phenomena, inherently parallel, and most importantly, context-aware, i.e., the notion of
context is specified as a first-class value [365, 473, 513]. Forensic Lucid is also significantly
influenced by, and is meant to be a more usable improvement of, the work of Gladyshev et al. on formal forensic analysis and event reconstruction using finite state automata (FSA) to model incidents and reason about them [136, 137], by also including trustworthiness and credibility factors into the equation using the Dempster–Shafer theory [277, 308, 321].
As previously mentioned (Chapter 1), the first formal approach to cybercrime investigation was the finite-state automata (FSA) approach by Gladyshev et al . [136, 137] (Section 2.2,
page 29). Their approach, however, is unduly complex to use and to understand for investigators without a theoretical computer science or equivalent background [300, 311]. The aim of Forensic Lucid is to alleviate those difficulties, to be expressive and usable, and to be able to specify credibility.
Thus, this chapter presents the summary of the requirements, design, and formalization of the syntax and semantics of our proposed language.
That includes an overview
(Section 7.1), design and requirements considerations of the Forensic Lucid language (Section 7.2), including higher-order contexts (Section 7.2.2), syntax (Section 7.3) and semantics
(Section 7.4), and discussion on related topics in Section 7.5, such as mapping of the Forensic Lucid concepts to the background work on intensional logic (cf. Section 3.2, page 59),
Gladyshev’s formalisms (cf. Section 2.4, page 56), and the Dempster–Shafer evidence theory
(cf. Section 3.3.2, page 66) in Section 7.5.1.
7.1
Approach Overview
As the reader may recall, Gladyshev created a finite-state-automata (FSA) model [136, 137]
to encode the evidence and witness accounts (Section 2.2, page 29) in order to combine
them into an evidential statement, then model the FSA of a particular case, and given the
FSA verify whether a certain claim agrees with the evidential statement or not (via backtracing and the event reconstruction algorithm) and, if it does, what the possible event sequences are that explain that claim [274, 313]. Based on the formal parameters and terms defined in
that work [135, 136, 137], we likewise model various pieces of evidence and witnesses telling
their own stories of an incident. The goal is to put them together to make the description
of the incident as precise as possible [307, 312]. To demonstrate that a certain claim may
be true, an investigator has to show that there are some explanations of the evidence that
agree with the claim. To disprove the claim, the investigator has to show there are no
explanations of evidence that agree with the claim [136, 137]. On the other hand, the
work by Dempster–Shafer and others [153, 422] defined a mathematical theory of evidence
(Section 3.3.2, page 66), where factors like credibility and trustworthiness play a role in the
evaluation of mathematical expressions [274, 313]. Thirdly, a body of work on intensional
logics and programming (Section 3.2, page 59) provided a formal model that throughout
years of development placed the context as a first-class value in logical and programming
expressions in the Lucid family (Chapter 4) of languages [274, 313].
Thus, in this work we augment Gladyshev’s formalization with the credibility weight
and other properties derived from the mathematical theory of evidence and we encode it
as an evidential context in the Forensic Lucid language for forensic case management,
evaluation, and event reconstruction.
7.2
The Forensic Lucid Language Requirements and
Design Considerations
This section presents concepts and considerations in the design of the Forensic Lucid language. The end goal is to define our Forensic Lucid language where its syntactic constructs
and expressions concisely model cyberforensic evidence and stories told by witnesses as a context of evaluations, which normally correspond to the initial state of the case (e.g., initial
printer state when purchased from the manufacturer, as in [137], see further Section 9.3), towards what we have actually observed (corresponding to the final state in Gladyshev's FSM, e.g., when an investigator finds the printer with two queue entries (B_deleted, B_deleted) in
Section 2.2.5.1, page 44). The implementing system (i.e., GIPSY [366], Chapter 6, page 128)
is designed to backtrace intermediate results in order to provide the corresponding event
reconstruction path, if it exists. The fundamental result of a Forensic Lucid expression
in its basic form is either true or false, i.e., “guilty” or “not guilty” given the evidential
evaluation context per explanation with the backtrace(s). There can be multiple backtraces that correspond to the explanation of the evidence (or lack thereof) [300, 304, 305, 312].
7.2.1
Core Language Properties, Features, and Requirements
We define and use Forensic Lucid to model the evidential statements and other expressions
representing the evidence and observations as a higher-order hierarchical context of evaluation [304, 307, 312]. One of the goals is to be able to “script” the evidence and the stories
as expressions and run that “script” through an evaluation system that provides the results
of such an evaluation [305]. An execution trace of a running Forensic Lucid program is
designed to expose the possibility of the proposed claim being true, with the reconstructed event sequence backtrace from the final observed event back to the beginning of the events. Intensionally, this means the initial possible world q_0 is accessible from the final possible world q_final via one or more accessibility relations and possibly other worlds (states) (cf. Section 3.2, page 59). Forensic Lucid capitalizes in its design on aggregating the features and semantics of the multiple Lucid dialects mentioned in Chapter 4 needed for these tasks, along
with its own extensions [300, 304, 305, 312].
Lucx’s context calculus with contexts as first-class values [513] and operators on simple
contexts and context sets (union, intersection, etc., defined further) are augmented to
manipulate hierarchical contexts in Forensic Lucid (Section 7.3.3, page 183). Additionally,
Forensic Lucid inherits the properties (described subsequently in detail in this chapter) of
MARFL (see Appendix C), Objective Lucid and JOOIP (see Section 4.3.2, page 92) and
their comprising dialects for the arrays and structural representation of data for modeling
the case data structures such as events, observations, and groupings and correlation of the
related data [300, 304, 305]. Hierarchical contexts in Forensic Lucid follow the example
of MARFL [272] using a dot operator and by overloading both @ and # (defined further in
Section 7.3, page 166) to accept different types as their left and right arguments [300, 304,
307, 312] (cf. Table 15, page 203).
One of the basic requirements in Forensic Lucid design is that the final language is
a conservative extension of the previous Lucid dialects (valid programs in those languages
are valid programs in Forensic Lucid). This is helpful when complying with the compiler
(GIPC) and the run-time subsystem (GEE) frameworks within the implementing system, the
General Intensional Programming System (GIPSY) (cf., Section 6.1) [304, 307, 312, 366, 370].
The partial translation rules (Section 7.3.2.2) are provided when implementing the language compiler within GIPSY, such that the run-time environment (General Eduction Engine, or GEE) can execute it with minimal changes to GEE's implementation [305, 312].
7.2.2
Higher Order Context
Higher-order contexts (HOCs) represent essentially nested contexts, e.g., as conceptually
shown in Figure 40 modeling evidential statement for forensic specification evaluation. Using
the ⟨dimension : tag⟩ representation, a HOC looks like:
{es : {os1 : {o1 : (P1, min, max), o2 : (P2, min, max), o3 : (P3, min, max)}, os2 : {...}, os3 : {...}}}
The early notion and specification of nested contexts first appeared in Swoboda's works [452, 454, 455], but there the evaluation took place only at the leaf context nodes. Another,
more recent work on the configuration specification as a context in the intensional manner
was the MARFL language (Appendix C) of the author [269, 272], allowing evaluation at
arbitrary nesting of the configuration context with some defaults in the intermediate and leaf
context nodes [302].
Figure 40: Nested context hierarchy example for cyberforensic investigation [300, 304]
Forensic Lucid is context-oriented: a crime scene model comprises a state machine of evaluation, and the forensic evidence and witness accounts comprise the context for
its possible worlds. Formally, the basic context entities comprise an observation o (in Equation 7.2.2.1), observation sequence os (in Equation 7.2.2.2), and the evidential statement (in
Equation 7.2.2.3). These terms are inherited from Gladyshev’s work [136, 137] (Section 2.1.4,
page 29) and represent the forensic context of evaluation in Forensic Lucid. An observation of a property P has a duration between [min, min + max] per Section 2.1.4, page 29.
This original definition of o is extended with w to amend each observation with a weight factor (to indicate probability, credibility, or belief mass) to model the trustworthiness of observations, evidence, and witness accounts in accordance with the Dempster–Shafer mathematical theory of evidence [422]. t is an optional timestamp, as in a forensic log, for that property, used to refine event correlation in time for temporal data [277, 308, 321].
An observation sequence os represents a partial description of an incident told by evidence or a witness account (electronic or human). It is formally a chronologically ordered
collection of observations representing a story witnessed by someone or something (e.g., a
human witness, a sensor, or a logger). It may also encode a description of any digital or
physical evidence found. All these "stories" (observation sequences) together represent an evidential statement about an incident (its knowledge base). The evidential statement es is
an unordered collection of observation sequences. The property P itself can encode anything
of interest—an element of any data type or even another Forensic Lucid expression, a
nested object instance hierarchy, or an event [277, 308, 321].
o = (P, min, max, w, t)        (7.2.2.1)
os = {o_1, . . . , o_n}        (7.2.2.2)
es = {os_1, . . . , os_m}        (7.2.2.3)
Having constructed the context in the form of the evidential statement, one needs to build
a transition function ψ and its inverse Ψ⁻¹ to describe the "crime scene" at the center of
the incident (or the incident scene to generalize the term a little bit to include accidents,
malfunctions, etc. that are not necessarily criminal or malicious in nature).
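As a purely illustrative aid (and not part of the Forensic Lucid syntax defined later in this chapter), the following Java sketch mirrors Equations 7.2.2.1–7.2.2.3 as plain data structures with a generic property type; all class and field names here are hypothetical.

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// o = (P, min, max, w, t): a property P observed for a duration [min, min + max],
// with a belief mass w (Dempster–Shafer) and an optional wall-clock timestamp t.
final class Observation<P> {
    final P property;
    final long min;
    final long max;
    final double weight;       // belief mass / credibility in [0, 1]
    final Long timestamp;      // optional; null if not available
    Observation(P property, long min, long max, double weight, Long timestamp) {
        this.property = property;
        this.min = min;
        this.max = max;
        this.weight = weight;
        this.timestamp = timestamp;
    }
}

// os = {o_1, ..., o_n}: a chronologically ordered story told by one witness or one piece of evidence.
final class ObservationSequence<P> {
    final List<Observation<P>> observations = new ArrayList<>();
}

// es = {os_1, ..., os_m}: an unordered collection of all the stories about the incident.
final class EvidentialStatement<P> {
    final Set<ObservationSequence<P>> sequences = new LinkedHashSet<>();
}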
7.2.3
Formal Syntax and Semantics
The Forensic Lucid system incorporates the necessary syntax and semantics constructs
[313] as detailed in the following sections.
7.2.3.1
Definitions and Terms
This section defines common terminology used subsequently in the chapter throughout the
formal definition of Forensic Lucid syntax and semantics and their description.
7.2.3.1.1
Structural Operational Semantics.
We use structural operational semantics (see examples in Chapter 4) to describe the semantics of Forensic Lucid. We justify
this choice for a number of reasons.
• Plotkin proposed the notion in 1981 [381]. While in 1979–1982 Ashcroft and Wadge
argued for prescriptive denotational semantics for their language [28], Faustini in his
thesis the same year established the equivalence of a denotational and an operational
semantics of pure dataflows [108] (refer to Section 4.1.2, page 82 for information on
dataflows).
• Mosses in 2001 [326] discussed the differences between denotational and operational
semantics (among others). In the same year, Degano and Priami argued operational
semantics is close to intuition, mathematically simple, and allows easy and early prototyping [86].
• Paquet [361] gave a structural operational semantics of GIPL in 1999 (recited earlier in Figure 23, page 82). The subsequently produced work on Lucx [473, 513], JOOIP [528], and previous material by the author Mokhov have used this semantics style consistently ever since.
• A lot of Java-related publications use structural operational semantics. GIPSY (Chapter 6) is implemented in Java, so it is easier to relate to Java semantics, and GEE's main interpreter follows this style for GIPL programs. The same holds for the hybrid Lucid-Java approaches, such as those of Objective Lucid and JOOIP mentioned earlier, that need to specify joint semantics.
• PRISM’s language semantics is also described in the structural operational semantics
style ([190], Section 8.2.2.1, page 215).
The basic operational semantics description has rules of the form

    Premises
    ─────────────
    Conclusions

with Premises defined as possible computations which, if they take place, ensure that the Conclusions also take place [86, 513].
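For instance, a rule of this shape for evaluating a constant identifier, written in the D, P ⊢ E : v notation defined below and mirroring the corresponding GIPL-style rule, reads roughly as follows (shown here only to illustrate the format):

    D(id) = (const, c)
    ──────────────────
    D, P ⊢ id : c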
Following the steps presented in Section 4.1.1.2, page 78, [361, 513] we thus define the
following:
Table 13: Forensic Lucid identifier types in D

type                      form
dimension                 (dim)
constant                  (const, c)
operator                  (op, f)
variable                  (var, E)
function                  (func, id_i, E)
class                     (class, cid, cdef)
member variable           (classV, cid.cvid, vdef)
member method             (classF, cid.cfid, fdef)
free procedure            (freefun, ffid, ffdef)
context operator          (cop, f)
context set operator      (sop, f)
observation               (odim, E, min, max, w, t)
observation sequence      (osdim, ordered odim_i)
evidential statement      (esdim, unordered osdim_i)
forensic operator         (fop, f)
Definition environment: The definition environment D stores the definitions of all of the
identifiers that appear in a Forensic Lucid program. As in Equation 4.1.1.3, it is a
partial function
D : Id → IdEntry        (7.2.3.1)
where Id is the set of all possible identifiers and IdEntry, summarized in Table 13, lists the possible kinds of values for each identifier kind. (This table is an extended version
of Table 1 with additional entries we define further.) Per Equation 4.1.1.1, D ⊢ E : v
means the expression E evaluates to the value v under D [361].
Evaluation context: The current evaluation context P (sometimes also referred to as a
point in the context space) is an additional constraint put on evaluation, so per Equation 4.1.1.2, D, P ⊢ E : v specifies that given D, and in P, expression E evaluates
to v [361].
Identifier kinds: The following identifier kinds can be found in D:
1. Dimensions define simple coordinate pairs, which one can query and navigate with
the # and @ operators. Their IdEntry is simply (dim) [361].
2. Constants (IdEntry = (const, c), where c is the value of the constant) are external entities that provide a single value independent of the context of evaluation.
Examples are integers, strings, and Boolean values, etc. [361].
3. Data operators are external entities that provide memoryless functions. Examples
are the arithmetic and Boolean functions. The constants and data operators are
said to define the basic algebra of the language. Their IdEntry is (op, f ), where
f is the function itself [361].
4. Variables carry the multidimensional streams (described in Section 4.1.2, page 82).
Their IdEntry is (var, E), where E is the Forensic Lucid expression defining
the variable. Variable names are unique [361]. Finite variable streams can be
bound by the special beginning-of-data (bod) and end-of-data (eod) markers (eod
was introduced in [360]). Scoping and nesting is resolved by renaming and via the
dot “.” operator.
5. Functions (func, idi , E) are Forensic Lucid user-defined functions, where the idi
are the formal parameters to the function and E is the body of the function [361].
6. Classes (class, cid, cdef) in our work are user-defined Java classes whose members can be accessed from Forensic Lucid (a facility inherited from Objective Lucid and JOOIP) to support Java-Lucid hybrid intensional-imperative programming [264, 528]. This integration is needed, e.g., for the self-forensics project (Appendix D). The same applies to member variables and methods described further. In general, the Classes entry does not need to be restricted to Java classes, but can be expanded to cover any language that supports the notion and whose run-time supports hybridification. cid is the class identifier, and cdef is a class definition in the imperative language (Java in our case, so cdef = JavaCDef).
7. Member variables (classV, cid.cvid, vdef) are user-defined Java data members that can be accessed from Forensic Lucid (originally from Objective Lucid and JOOIP) to support Java-Lucid hybrid intensional-imperative programming [264, 528]. cid.cvid is the variable identifier in the class cid, and vdef is a member variable definition in the imperative language (Java in our case, so vdef = JavaVDef).
8. Member methods (classF, cid.cfid, fdef) are user-defined Java methods (originally from Objective Lucid and JOOIP) to support Java-Lucid hybrid intensional-imperative programming [264, 528]. cid.cfid is the method identifier in the class cid, and fdef is a member method definition in the imperative language (Java in our case, so fdef = JavaFDef).
9. Free procedures (also known as free functions in the previous work [264, 528]) (freefun, ffid, ffdef) are user-defined Java methods written "freely", without an explicit container class, directly in a Forensic Lucid program (the notion comes from JLucid) [264]. (It is the responsibility of the compiler to generate a wrapper class containing any such procedures [264].)
10. Context operators (cop, f) are of the Lucx-style simple context operator type [473, 513]. These operators help us manipulate context components in set-like operations, such as union, intersection, difference, and the like. Both simple context and context set operators are defined further.
11. Context set operators (sop, f) are of the Lucx-style context set operator type [473, 513]. Both simple context and context set operators are defined further.
12. Observations (odim, E, min, max, w, t) are Forensic Lucid dimensions each encoding an observation of a property defined by a Forensic Lucid expression
E, duration [min, min + max], basic belief mass assignment w, and a wall-clock
timestamp t. This is a basic unit of a forensic context. One can manipulate these
with the context operators.
13. Observation sequences (osdim, odimi ) are Forensic Lucid dimensions encoding
each observation sequence, as an ordered collection of observations odimi . This is
a part of a forensic context. One can manipulate these with the context operators.
14. Evidential statements (esdim, osdimi ) are Forensic Lucid dimensions encoding
an unordered collection of observation sequences osdimi . This is a part of a forensic
context. One can manipulate these with the context operators.
15. Forensic operators (fop, f ) are Forensic Lucid context operators [473, 513]
defined further in Section 7.3.4, page 191.
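To make the identifier kinds listed above more concrete, the following is a minimal, hedged sketch of a program fragment that would populate D with a dimension, a variable, a function, and an observation. The identifiers d, N, inc, x, and o are hypothetical, the declaration forms follow the concrete syntax given later in Section 7.3 (Figures 41–43), and Java-style // comments are assumed here purely for annotation:

    inc(N) @ [d:0]
    where
      dimension d;                          // (dim)
      N = 42;                               // (var, E)
      inc(x) = x + 1;                       // (func, idi, E)
      observation o = ("A printed", 1, 0);  // (odim, E, min, max, w, t), default w = 1 and undefined t
    end;

Under the resulting D and an initial context P, the evaluation D, P ⊢ inc(N) @ [d:0] : v would yield v = 43.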
7.2.3.1.2 Context Types.
A general definition of context is [473, 513]:
Definition 1. Context: A context c is a finite subset of the relation: c ⊂ {(d, x)|d ∈ DIM ∧
x ∈ T }, where DIM is the set of all possible dimensions, and T is the set of all possible
tags [473, 513].
Definition 2. Tag: Tags are the indices to mark positions in dimensions [473, 513].
Definition 3. Simple context: A simple context is a collection of ⟨dimension : tag⟩ pairs,
where there are no two such pairs having the same dimension component. Conceptually,
a simple context represents a point in the context space, i.e., this is the traditional GIPL
context P. A simple context having only one ⟨dimension : tag⟩ pair is called a micro context.
It is the building block for all the context types [473, 513].
Syntactically, simple context is [E : E, ..., E : E] [473, 513], i.e., [Ed1 : Et1 , ..., Edn : Etn ]
where Edi evaluate to dimensions and Eti evaluate to tags.
Definition 4. Context set: A context set (also known as “non-simple context”) is a set of
simple contexts. Context sets represent regions of the context space, which can be seen as a
set of points, considering that the context space is discrete. Formally speaking, context set is
a set of ⟨dimension : tag⟩ mappings that are not defined by a function [473, 513].
Syntactically, context set is {E, ..., E}, where E → [E : E, ..., E : E].
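As a small illustrative example (with hypothetical dimensions x and y ranging over the default N tag set), a simple context and a context set could be written as

    [x:1, y:2]
    {[x:1, y:2], [x:1, y:3], [x:2, y:2]}

where the former denotes a single point in the context space and the latter a region consisting of three such points.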
7.2.3.1.3 Forensic Context.
Definition 5. Forensic context: Forensic context represents an evidential statement of a
case under consideration. It’s a hierarchical encoding of evidence in terms of the evidential
statement, observation sequence, and observation constructs.
Forensic contexts form the basis of cyberforensic reasoning in Forensic Lucid and are
elaborated further in detail in terms of syntax and semantics.
7.2.3.1.4 Context Calculus.
Definition 6. Context calculus: Context calculus is a calculus defining representation and
types of contexts and operators on them [473, 513].
In Lucx terms, context calculus defines a set of context operators on simple contexts
and context sets [513] presented earlier. In Forensic Lucid terms, the context calculus is
extended to support forensic contexts, additional operators, and the lifting of Lucx contexts
into forensic contexts. All the new constructs are defined in the sections that follow.
7.2.3.1.5 Tag Set Types.
Definition 7. Tag set: A tag set T is a collection of all possible tags attached to a dimension [473].
Traditionally, in many earlier Lucid dialects T was always an infinite ordered set of
natural numbers N. However, it is not always convenient in some application domains [473],
such as forensic computing. In Lucx’s compiler implementation by Tong [473] flexible tag
set types were designed to allow more application domains. Tags, therefore, can be of any
given desired type. It is up to the particular semantic analyzer to determine validity of tag
types [473]. By default each dimension’s tag set is both ordered and infinite, to maintain
backward compatibility [473].
Definition 8. Ordered Tag Set: A tag set on which a relation R satisfies the following three properties [237, 473]:
1. Reflexive: for any a ∈ S, we have aRa
2. Antisymmetric: if aRb and bRa, then a = b
3. Transitive: if aRb and bRc, then aRc
which is essentially a partially ordered set (poset) [431, Chapter 4]. An ordered tag set can be either finite or infinite [473].
Definition 9. Unordered Tag Set: A tag set which is not ordered is called an unordered set. An unordered tag set can be either finite or infinite [473].
Definition 10. Finite Tag Set: A tag set I is called finite (more strictly, inductive) if there exists a positive integer n such that I contains exactly n members. A finite tag set can be either ordered or unordered [473].
Definition 11. Infinite Tag Set: A tag set which is not finite is called an infinite set. An infinite tag set can be either ordered or unordered [473].
Definition 12. Periodic Tag Set: A tag set whose ordered subset represents a period repeated in a pattern two or more times. It can be combined with the (un)ordered, finite, and infinite types [473].
Definition 13. Non-periodic Tag Set: A tag set that is not periodic. It can be combined with the (un)ordered, finite, and infinite types [473].
Syntactically, ordered, unordered, finite, infinite, periodic, and nonperiodic are defined in the language to declare a desired combination of tag set types alongside a dimension declaration [473]. Forensic Lucid inherits these definitions from Lucx. Certain combinations are not of much practical use (e.g., unordered infinite tag set types), but they are provided for completeness by Tong [473]. In Forensic Lucid, ordered and unordered finite tag sets are the most commonly used.
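For instance, using the dimension declaration forms given later in Figure 42, a finite unordered tag set of string tags could be attached to a dimension as in the following hedged sketch (this color dimension reappears in the discussion of @ in Section 7.3.2.1):

    dimension color : unordered finite {"red", "green", "blue"};

In the absence of such a declaration, a dimension's tag set remains the default ordered infinite N.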
Following the above fundamental definitions, Section 7.3 is devoted to the complete syntax specification and Section 7.4 is devoted to the operational semantics specification of
Forensic Lucid.
7.3 Concrete Forensic Lucid Syntax
The concrete syntax of the Forensic Lucid language is summarized in several figures. It
is influenced by the productions from Lucx [513, 515] (see Section 4.3.1, page 92), JLucid
and Objective Lucid [264], GIPL and Indexical Lucid [361] (see Section 4.1, page 76),
and the hierarchical contexts from MARFL [272] (see Appendix C for details).
In Figure 41 are common top-level syntax expressions E of the language from identifiers,
to function calls, operators, arrays, the where clause, dot-notation, and so on. In Figure 42
are the Q where productions containing various types of declarations, including the common
Lucx dimensions and their tag sets as well as the new constructs for forensic dimension
types. In Figure 43 we elaborate on the syntactical details on the hierarchical forensic
context specification of the observations, observation sequences, and evidential statements
including different observation types. In Figure 44 is a syntax specification of different types
of operators, such as arithmetic, logic, and intensional operators, in their unary and binary
forms.
The embed operator, the notion of which is inherited from JLucid (Section 4.3.2.1, page 92), is adapted in Forensic Lucid primarily to include large evidential specifications from sources external to the main program, such as encoded log files.
7.3.1 Syntactical Extension for Observations
The syntactical notation of the unit of observation in Forensic Lucid extends the observation context definition with the optional w and t components. w is always defined and
by default is 1, and t can be left undefined (set to eod) as it is not always needed. When
defined, t is a wall-clock timestamp of the beginning of the observation, which can be used
to be mapped to from log files or other real-time sensitive data and to present in the event
reconstruction reports. t can also be used in reactive systems with clocks and self-forensics,
but this aspect is not discussed here. Thus, the following would be an actual expression in
the language given all the symbols are defined [313] and expressions declared.
observation o = (P, min, max, w, t);
or more concretely:
observation o = ("A printed", 1, 0, 0.85);
where t is omitted, w = 85% confidence/belief, and the rest as in the original definition by
Gladyshev (cf., Section 2.2). Next,
observation o = P;
P = "A printed";
is equivalent to
E ::= id                                           (7.3.0.2)
   | E(E, ..., E)                                  (7.3.0.3)
   | E[E, ..., E](E, ..., E)                       (7.3.0.4)
   | if E then E else E fi                         (7.3.0.5)
   | #E                                            (7.3.0.6)
   | E @ E                                         (7.3.0.7)
   | E⟨E, . . . , E⟩                                (7.3.0.8)
   | select(E, E)                                  (7.3.0.9)
   | Box[E, . . . , E|E]                            (7.3.0.10)
   | E where Q end; (see Figure 42)                (7.3.0.11)
   | [E : E, ..., E : E]                           (7.3.0.12)
   | E bin-op E                                    (7.3.0.13)
   | un-op E                                       (7.3.0.14)
   | E i-bin-op E | E i-bin-op_w E                 (7.3.0.15)
   | i-un-op E | i-un-op_w E                       (7.3.0.16)
   | E cxtop E                                     (7.3.0.17)
   | bounds                                        (7.3.0.18)
   | embed(URI, METHOD, E, E, ...)                 (7.3.0.19)
   | E[E, ..., E]                                  (7.3.0.20)
   | [E, ..., E]                                   (7.3.0.21)
   | E.E                                           (7.3.0.22)
   | E.E(E, ..., E)                                (7.3.0.23)

bounds ::= eod | bod | +INF | -INF                 (7.3.0.24)

Figure 41: Concrete Forensic Lucid syntax (E) [304, 305]
observation o = (P, 1, 0, 1.0);
P in the example can be any Forensic Lucid expression. If the wall-clock timestamp is
desired, it can be produced as a string or an integer in one of the standard date/time formats,
e.g., “Mon Jul 29 11:58:16 EDT 2013” or 16581129611312091 (as a number of seconds from
the epoch). Internally, they are uniformly represented. These can be supplied by human
Q ::= dimension id, ..., id;                                                       (7.3.0.25)
   | dimension id : ordered finite [periodic | nonperiodic] {E, . . . , E}          (7.3.0.26)
   | dimension id : ordered finite [periodic | nonperiodic] {E to E [step E]}       (7.3.0.27)
   | dimension id : ordered infinite [periodic | nonperiodic] {E to E [step E]}     (7.3.0.28)
   | dimension id : ordered infinite = E                                            (7.3.0.29)
   | dimension id : unordered finite [periodic | nonperiodic] {E, . . . , E}        (7.3.0.30)
   | dimension id : unordered infinite = E                                          (7.3.0.31)
   | evidential statement [unordered [finite]] id [= ES];                           (7.3.0.32)
   | observation sequence [ordered [finite]] id [= OS];                             (7.3.0.33)
   | observation id [= O];                                                          (7.3.0.34)
   | id = E;                                                                        (7.3.0.35)
   | id(id, ..., id) = E;                                                           (7.3.0.36)
   | E.id = E;                                                                      (7.3.0.37)
   | Q Q                                                                            (7.3.0.38)

Figure 42: Concrete Forensic Lucid syntax (Q) [304, 305]
investigators, but more often by Forensic Lucid-generating logging facilities, such as that of the MAC Spoofer Investigation (Section 9.5, page 257). In cases when the wall-clock timestamp is not needed, it is null.
observation o = ("A printed", 1, 0, 0.85, "Mon Jul 29 11:58:16 EDT 2013");
See Section 7.4 for the discussion on the semantic aspects of these declarations.
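Putting the observation-related declarations together, a minimal hedged sketch of a forensic context expressed in this syntax (the property strings and identifiers are hypothetical; the declaration forms follow Figures 42 and 43) could look as follows:

    observation o1 = ("A printed", 1, 0, 0.85);
    observation o2 = ("B deleted", 1, 0);
    observation sequence os = {o1, o2};
    evidential statement es = {os};

Here o2 carries the default belief mass w = 1 and no timestamp, os groups the two observations into an ordered story, and es collects the (possibly conflicting) stories of a case.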
7.3.2 Core Operators
The basic set of the classic intensional operators [361] is extended with similar operators that are inverted in one of their aspects: either negation of truth or reversal of the direction of navigation (or both). While some such operators can be defined using the existing operators, just as we define them further, their introduction as syntactic sugar allows for
ES ::= {OS, ..., OS}        // evidential statement                (7.3.0.39)
    | OS                                                           (7.3.0.40)
    | E                                                            (7.3.0.41)

OS ::= {O, ..., O}          // observation sequence                (7.3.0.42)
    | O                                                            (7.3.0.43)
    | E                                                            (7.3.0.44)

O ::= (E, E, E, E, E)       // observation (P, min, max, w, t)     (7.3.0.45)
   | (E, E, E, E)           // (P, min, max, w)                    (7.3.0.46)
   | (E, E, E)              // (P, min, max)                       (7.3.0.47)
   | (E, E)                 // (P, min)                            (7.3.0.48)
   | E                      // P                                   (7.3.0.49)
   | $                      // no-observation (Ct, 0, INF+)        (7.3.0.50)
   | \0(E)                  // zero-observation (P, 0, 0)          (7.3.0.51)

Figure 43: Concrete Forensic Lucid syntax (ES, OS, O) [304, 305]
better expressibility. First, we provide a definition of these new operators alongside the classical ones to familiarize the reader with what they do.
7.3.2.1 Definitions of Core Operators
The operators are defined below to give a more complete picture. The classical operators first, next, fby, wvr, upon, and asa were previously defined in [361] and earlier works (see [131, 350] and papers and references therein). The other complementary, inverse, and negation operators are added in this work (although most of these seem obvious to have in any Lucid dialect); they were defined and revised from [303, 304]. For this list of operators, especially the reverse ones, we make the assumption that the streams we are working with are finite, which is sufficient for our tasks. Thus, our streams of tags (context values) can be bounded between the bod and eod constructs (similarly to intensional databases [360, 367]) and be of a finite tag set type. For a summary of the application of the
bin-op               ::= arith-op | logical-op | bitwise-op              (7.3.0.52)
un-op                ::= + | -                                           (7.3.0.53)
arith-op             ::= + | - | * | / | % | ^                           (7.3.0.54)
logical-op           ::= < | > | >= | <= | == | in | && | || | !         (7.3.0.55)
bitwise-op           ::= | | & | ~ | !| | !&                             (7.3.0.56)
i-bin-op | i-bin-op_w ::= @ | i-bin-op-forw | i-bin-op-back              (7.3.0.57)
                       | i-logic-bitwise-op | i-forensic-op              (7.3.0.58)
i-bin-op-forw        ::= fby | upon | asa | wvr                          (7.3.0.59)
                       | nfby | nupon | nasa | nwvr                      (7.3.0.60)
i-bin-op-back        ::= pby | rupon | ala | rwvr                        (7.3.0.61)
                       | npby | nrupon | nala | nrwvr                    (7.3.0.62)
i-logic-bitwise-op   ::= and | or | xor                                  (7.3.0.63)
                       | nand | nor | nxor                               (7.3.0.64)
                       | band | bor | bxor                               (7.3.0.65)
i-un-op | i-un-op_w  ::= # | i-bin-un-forw | i-bin-un-back               (7.3.0.66)
i-bin-un-forw        ::= first | next | iseod                            (7.3.0.67)
                       | second | nnext | neg | not                      (7.3.0.68)
i-bin-un-back        ::= last | prev | isbod                             (7.3.0.69)
                       | prelast | nprev                                 (7.3.0.70)
cxtop                ::= \isSubContext | \difference                     (7.3.0.71)
                       | \intersection | \projection                     (7.3.0.72)
                       | \hiding | \override | \union | \in              (7.3.0.73)
i-forensic-op        ::= combine | product | bel | pl                    (7.3.0.74)

Figure 44: Concrete Forensic Lucid syntax (operators) [304, 305]
operators defined further, with examples [300, 305], please refer to Section 7.3.2.3, page 183.
An augmented version of the core intensional operators is added to account for the belief mass assignments w in observations (denoted as opw in the examples, but syntactically expressed identically to the classical versions in a Forensic Lucid program). These augmented operator versions act just as the normal operators would, but with the additional requirement w ≥ 1/2. Such behavior is only defined over forensic contexts, when a run-time determination is made of the context type used. When w < 1/2, eod is returned. When w = 1, such operators behave in their classical sense. It is possible to mix forensic and non-forensic contexts; in such a case the non-forensic contexts are lifted to forensic contexts (see the semantics in Section 7.4) before the application of the operator.
Definition 14. Let X be a Forensic Lucid stream of natural numbers (Section 4.1.2,
page 82) and OX is a stream of observations of X (effectively, lifting every element of X to
an observation; additionally a stream of observations is in effect an observation sequence as
we shall see), where min = 1, max = 0, w = 1.0. Let Y be another stream of Boolean values;
true is cast to 1 and false to 0 when used together with X in one stream. OY is a stream of
observations of Y .
X = (x0 , x1 , . . . , xi , . . .)
= (0, 1, . . . , i, . . .)
Y = (y0 , y1 , . . . , yi , . . .)
= (true, false, . . . , true, . . .)
OX = (o0 (x0 , 1, 0, 1.0), o1 (x1 , 1, 0, 1.0), . . . , oi (xi , 1, 0, 1.0), . . .)
= (o0 (0, 1, 0, 1.0), o1 (1, 1, 0, 1.0), . . . , oi (i, 1, 0, 1.0), . . .)
OY = (o0 (y0 , 1, 0, 1.0), o1 (y1 , 1, 0, 1.0), . . . , oi (yi , 1, 0, 1.0), . . .)
Definition 15. first: a stream of the first element of the argument stream.
first X = (x0 , x0 , . . . , x0 , . . .)
firstw OX = first OX && o0 .w ≥ 1/2 = (o0 , o0 , . . . , o0 , . . .)
Definition 16. second: a stream of the second element of the argument stream.
second X = (x1 , x1 , . . . , x1 , . . .)
secondw OX = second OX && o1 .w ≥ 1/2 = (o1 , o1 , . . . , o1 , . . .)
Definition 17. last: a stream of the last element of the argument stream.
last X = (xn , xn , . . . , xn , . . .)
lastw OX = (on , on , . . . , on , . . .)
This informal definition of the last operator relies on the earlier stated assumption that many of our streams can be explicitly finite in the language we designed. This affects some of the follow-up operators that rely on that fact as well. last works with finite tag sets.
Definition 18. prelast: a stream of elements one before the last one of the argument
stream.
prelast X = (xn−1 , xn−1 , . . . , xn−1 , . . .)
prelastw OX = (on−1 , on−1 , . . . , on−1 , . . .)
Definition 19. next: a stream of elements of the argument stream after the first.
next X = (x1 , x2 , . . . , xi+1 , . . .)
second X = first next X
nextw OX = second OX && oi .w ≥ 1/2 = (o1 , o2 , . . . , on , . . .)
secondw OX = firstw nextw OX
Definition 20. prev: a stream of elements of the argument stream before the last.
prev X = (xn−1 , . . . , xi+1 , xi , xi−1 , . . .)
prelast X = first prev X
prevw OX = (on−1 , . . . , oi+1 , oi , oi−1 , . . .)
prelastw OX = firstw prevw OX
Definition 21. fby: the first element of the first argument stream followed by all of the
second argument stream.
X fby Y = (x0 , y0 , y1 , . . . , yi−1 , . . .)
OX fbyw Y = OX fby (oi (Y, 1, 0, 1.0))
OX fbyw OY = OX fby (OY wvr OY .w ≥ 1/2)
X fbyw OY = (o0 (X, 1, 0, 1.0)) fby (OY .P wvr OY .w ≥ 1/2)
In the definitions above, mixing OX and Y shows the lifting of Y to a default observation dimension per the semantics described in Section 7.4, page 192. A similar technique applies to all Forensic Lucid intensional operators, so we omit the repetition in subsequent definitions.
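As a small worked example of the w ≥ 1/2 requirement, suppose OY = (o0(true, 1, 0, 0.4), o1(false, 1, 0, 0.9), . . .). Then first OY = (o0, o0, . . .) as usual, whereas firstw OY yields eod because o0.w = 0.4 < 1/2; had o0.w been, say, 0.9, firstw OY would coincide with first OY.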
Definition 22. pby: the first element of the first argument preceded by all of the second.
X pby Y = (y0 , y1 , ..., yi−1 , ..., yn , x0 )
OX pbyw OY = OX pby (OY .P wvr OY .w ≥ 1/2)
... = ...
Definition 23. neg: a stream of negated arithmetic values of the argument.
neg X = (-x0 , -x1 , -x2 , ..., -xi+1 , ...)
Definition 24. not: a stream of inverted truth values of the argument.
not X = (!x0 , !x1 , !x2 , ..., !xi+1 , ...)
Definition 25. and: a logical AND stream of truth values of its arguments.
X and Y = (x0 && y0 , x1 && y1 , x2 && y2 , . . . , xi+1 && yi+1 , . . .)
Definition 26. or: a logical OR stream of truth values of its arguments.
X or Y = (x0 || y0 , x1 || y1 , x2 || y2 , . . . , xi+1 || yi+1 , . . .)
Definition 27. xor: a logical XOR stream of truth values of its arguments.
X xor Y = (x0 ⊕ y0 , x1 ⊕ y1 , x2 ⊕ y2 , . . . , xi+1 ⊕ yi+1 , . . .)
Definition 28. wvr (stands for whenever): wvr chooses from its left-hand-side operand only
values in the current dimension where the right-hand-side evaluates to true.
X wvr Y =
if first Y ̸= 0
then X fby (next X wvr next Y )
else (next X wvr next Y )
OX wvrw OY = OX wvr (OY .P && OY .w ≥ 1/2)
... = ...
Definition 29. rwvr (stands for retreat whenever): rwvr chooses from its left-hand-side
operand backwards only values in the current dimension where the right-hand-side evaluates
to true.
X rwvr Y =
if last Y ̸= 0
then X pby (prev X rwvr prev Y )
else (prev X rwvr prev Y )
OX rwvrw OY = OX rwvr (OY .P && OY .w ≥ 1/2)
... = ...
Definition 30. nwvr (stands for not whenever): nwvr chooses from its left-hand-side operand
only values in the current dimension where the right-hand-side evaluates to false.
X nwvr Y = X wvr not Y =
if first Y == 0
then X fby (next X nwvr next Y )
else (next X nwvr next Y )
OX nwvrw OY = OX nwvr (OY .P && OY .w ≥ 1/2)
... = ...
Definition 31. nrwvr (stands for not to retreat whenever): nrwvr chooses from its left-hand-side operand backwards only values in the current dimension where the right-hand-side
evaluates to false.
X nrwvr Y = X rwvr not Y =
if last Y == 0
then X pby (prev X nrwvr prev Y )
else (prev X nrwvr prev Y )
OX nrwvrw OY = OX nrwvr (OY .P && OY .w ≥ 1/2)
... = ...
Definition 32. asa (stands for as soon as): asa returns the value of its left-hand-side as a
first point in that stream as soon as the right-hand-side evaluates to true.
X asa Y = first (X wvr Y )
OX asaw OY = firstw (OX wvrw OY )
... = ...
Definition 33. ala (stands for as late as (or reverse of as soon as)): ala returns the value
of its left-hand-side as the last point in that stream when the right-hand-side evaluates to true
for the last time.
X ala Y = last (X wvr Y )
OX alaw OY = lastw (OX wvrw OY )
... = ...
Definition 34. nasa (stands for not as soon as): nasa returns the value of its left-hand-side
as a first point in that stream as soon as the right-hand-side evaluates to false.
X nasa Y = first (X nwvr Y )
OX nasaw OY = firstw (OX nwvrw OY )
... = ...
Definition 35. nala (stands for not as late as (or reverse of not as soon as)): nala returns the value of its left-hand-side as the last point in that stream when the right-hand-side evaluates to false for the last time.
X nala Y = last (X nwvr Y )
OX nalaw Y = lastw (OX nwvrw OY )
... = ...
Definition 36. upon (stands for advances upon): unlike asa, upon switches context of its
left-hand-side operand if the right-hand side is true.
X upon Y = X fby (
if first Y ̸= 0
then (next X upon next Y )
else (X upon next Y ))
OX uponw OY = OX upon (OY .P && OY .w ≥ 1/2)
... = ...
Definition 37. rupon (stands for retreats upon): rupon switches context backwards of its
left-hand-side operand if the right-hand side is true.
X rupon Y = X pby (
if last Y ̸= 0
then (prev X rupon prev Y )
else (X rupon prev Y ))
OX ruponw OY = OX rupon (OY .P && OY .w ≥ 1/2)
... = ...
Definition 38. nupon (stands for not advances upon, or, rather advances otherwise): nupon
switches context of its left-hand-side operand if the right-hand side is false.
X nupon Y = X upon not Y = X fby (
if first Y == 0
then (next X nupon next Y )
else (X nupon next Y ))
OX nuponw OY = OX nupon (OY .P && OY .w ≥ 1/2)
... = ...
Definition 39. nrupon (stands for not retreats upon): nrupon switches context backwards
of its left-hand-side operand if the right-hand side is false.
X nrupon Y = X rupon not Y = X pby (
if last Y == 0
then (prev X nrupon prev Y )
else (X nrupon prev Y ))
OX nruponw OY = OX nrupon (OY .P && OY .w ≥ 1/2)
... = ...
Definition 40. “.” (dot): is a scope membership navigation operator.
The “.” operator is employed in multiple uses, so we include it into our syntax and
semantics.
• From indexical expressions [351, 361] with the operators defined in the preceding and
following sections it facilitates operations on parts of the evaluation context to denote
the specific dimensions of interest to work with.
• From JOOIP/Objective Lucid expressions, . allows nested class/object membership access.
• Additional use in Forensic Lucid (inspired from MARFL) is the contextual depth
navigation, similar in a way to the OO membership navigation, such as ES.OS.O.P
(see summary in Table 15).
Definition 41. #: queries the current context of evaluation.
Traditionally ([361], Section 4.1.3, page 85) # is defined as:
# = 0 fby (# + 1)
for the tag set N, which is infinite and ordered, essentially being akin to array indices into the current dimension X, where #X returned the current implicit index within X. Subsequently,

    #.E = 0 fby.E (# + 1)
    #C = C
    #S = S
    #F = F
    #.F = 0 fby.F.(# + 1)
    #w ≡ #

#.E was introduced to allow querying any part of the current multidimensional context, where E evaluates to a dimension, a part of the context. C and S are Lucx simple contexts and context sets.
Since the tag set types were first augmented in Lucx to contain arbitrary enumerable tag
set values (cf. Section 7.2.3.1.5, page 165, [473]), such as strings, or non-numerically ordered
sequences, # is augmented to support such tag sets by returning the current tag of any type in
the current dimension indexing them in the order of their declaration in a Forensic Lucid
program.
In Forensic Lucid, # is augmented to support querying hierarchical forensic contexts
F. The indexing mechanism is the same as in Lucx. Thus, for an evidential statement ES, # ES
returns the current observation sequence OS, which is a tag in ES; likewise # OS returns the
current observation O, which is a tag in OS; # O returns the current tuple ⟨P, min, max, w, t⟩.
When combined with the Forensic Lucid dot . operator: ES.# returns the current OS,
ES.#.# returns the current O, ES.#.#.P returns the current O.P , and so on. #w is equivalent
to # as it is a query and not a navigation operator, so it does not take w into account when
querying the current context.
Definition 42. @ : switches the current context of evaluation.
Traditionally ([361], Section 4.1.3, page 85) @ is defined as:
X @ Y = if Y = 0 then first X else (next X) @ (Y − 1)
for the N tag set in a single dimension. Subsequently,

    X @ .dE
    X @ C
    X @ S
    X @w F
    X @w F.F

are also provided. X @ .dE is the indexical way to work with parts of the context [361]; C and S are the simple contexts and context sets [513], and F are forensic contexts.
Augmentation to Lucx’s implementation extension of the tag sets [473] has the same
underlying principles as # in the preceding section: the order of declaration of the tags in
a tag set becomes an implicit index in that tag collection, and a corresponding mapping
of custom tags produces the desired effect with @ . For example, if an unordered finite tag
set of a dimension color has three strings it ranges over, {”red”, ”green”, ”blue”}, it is
legal to say, e.g., c@[color : ”green”], where internally, ”green” is indexed 1 in the order
of declaration in the tag set, and a mapping exists ”green” ↔ 1. “Unordered” in this sense
simply means the order in which the tags are declared is not important, but such a mapping
is necessary for operators defined earlier (first, next, fby, etc.) to work intuitively and
consistently with the classical N ones. (As an analogy, this is similar to the SQL standard and
the way results are returned in a SELECT query without the ORDER BY clause for subsequent
processing by the application.)
@ is likewise adapted to work with forensic contexts, yielding different types of forensic contexts depending on the types of its arguments, as, e.g., illustrated in Table 15. Unlike #w, @w does take w into account when switching forensic contexts:
X @w O = X @ (O && O.w ≥ 1/2),
similarly to the previously presented operators. For observation sequences and evidential
statements, @w is more elaborate:
X @w OS = X @ (OS && bel OS ≥ 1/2)
X @w ES = X @ (ES && bel ES ≥ 1/2)
where the belief bel is defined in Section 7.3.4, page 191.
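For example, given the earlier observation o = ("A printed", 1, 0, 0.85), the navigation X @w o behaves exactly like X @ o since o.w = 0.85 ≥ 1/2; had the belief mass been, e.g., 0.3, the guard would fail and eod would be returned, consistently with the other w-aware operators.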
7.3.2.2 Definition of Core Operators by Means of @ and #
This step follows the same tradition as most of the SIPLs in GIPSY (Chapter 6), which
are translated into GIPL (Section 4.1.1.2, page 78). The existing rewrite rules as well as
the new rules are applied here for the majority of our operators to benefit from the existing
interpretation available in GEE (Section 6.2.2, page 142) for evaluation purposes. This translation may also be helpful to the developers of similar systems other than GIPSY. Following
the steps similar to Paquet’s [361], we further represent the definition of the operators via @
and # (this includes the @w and #w interpretations of the operators). Again, there is a mix
of classical operators that were previously defined in [350, 361], such as first, next, fby,
wvr, upon, and asa as well as the new operators from this work [304]. The collection of the
translated operators is summarized in Figure 45.
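As a small worked illustration of how these identities compose, consider next (X fby Y) at the initial context # = 0 in some dimension: by (7.3.2.3) it is (X fby Y) @ 1, and by (7.3.2.5), since 1 ≠ 0, this reduces to Y @ (1 − 1) = Y @ 0, i.e., first Y by (7.3.2.1), matching the intuitive reading of both operators.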
For the same reasons of (im)practicality as in the PoC implementation of Lucx's parser and semantic analyzer [473], however, we do not translate some Forensic Lucid constructs. Additionally, not all translations are currently possible, such as those dealing with the credibility factors. Therefore, for these we define a new set of operational semantic rules presented further in Section 7.4, page 192. Instead, dedicated interpreter plug-ins are designed to work
directly with the untranslated constructs, such as the forensic contexts and their operators.
This is useful, e.g., for various interpretation backends, such as eductive vs. PRISM, and so
on (Chapter 8).
The primitive operators are the building blocks used to construct more complex case-specific functions that represent a particular investigation case, as well as more elaborate forensic operators [300].
7.3.2.3 Summary of the Core Operators' Examples
Here we illustrate a few basic examples of the application of the Forensic Lucid operators (both the classical Lucid operators and the newly introduced ones). Assume we have two bounded (between bod and eod) streams X and Y of ten elements. The X stream is just an ordered sequence of natural numbers between 1 and 10. If queried for values below 1, a beginning-of-data (bod) marker is returned; similarly, if queried beyond 10, the end-of-data marker (eod) is returned. The Y stream is a sequence of ten truth values (which can be replaced with 0 for "false" and 1 for "true"). The operators applied to these streams may return bounded or unbounded streams of the same or different length than the original, depending on the definition of a particular operator. Also assume the current tag is 0 (#X = 0, #Y = 0). The resulting table showing the application of the classical and the new operators is Table 14 [300, 305].
7.3.3 Context Calculus Operators
What follows are the definitions of the core context calculus operators: \isSubContext,
\difference, \intersection, \projection, \hiding, \override, and \union based on
Lucx and its compiler implementation [473, 513] and expanded to include forensic contexts
and tag sets through lifting. The context and context set operations defined below are
the Lucx original definitions [473, 513] provided for completeness. Their forensic context
and tag set versions are newly defined. While the concept of tag sets (Section 7.2.3.1.5,
page 165) is explored in detail at the implementation level by Tong [473], the operators in this section were never defined on them. We define these operators to compose a complete
picture. This is achieved through the concept of lifting introduced by Guha [148] applied to
Table 14: Example of application of Forensic Lucid operators to bounded streams [305]. (The table tabulates, for stream indices −1 through 11, the streams X and Y and the results of first X, last X, next X, prev X, X fby Y, X pby Y, X wvr Y, X rwvr Y, X nwvr Y, X nrwvr Y, X asa Y, X nasa Y, X ala Y, X nala Y, X upon Y, X rupon Y, X nupon Y, X nrupon Y, neg X, not Y, X and Y, X or Y, and X xor Y, with bod and eod marking the stream bounds.)
contextual knowledge to allow the use of known formulas from one context type or its portion
to another. We also add the convenient \in operator. Through lifting, simple contexts are converted to observations and context sets to observation sequences when required.
Definition 43. \isSubContext
• If C1 and C2 are simple contexts and every micro context of C1 is also a micro context
of C2 , then C1 \isSubContext C2 returns true. An empty simple context is the subcontext of any simple context [473]. As in the concept of subset in set theory, C1 ⊆
C2 [473].
• If S1 and S2 are context sets, then S1 \isSubContext S2 returns true if every simple
context of S1 is also a simple context of S2 . An empty context set is the sub-context of
any context set [473]. Likewise, S1 ⊆ S2 .
• If F1 and F2 are forensic contexts, then F1 \isSubContext F2 returns true if every
nested context of F1 is also a nested context of F2 .
Definition 44. \difference
• If C1 and C2 are simple contexts, then C1 \difference C2 returns a simple context
that is a collection of all micro contexts that are members of C1 , but not members of
C2: C1 \difference C2 = {mi | mi ∈ C1 ∧ mi ∉ C2} [473]. If C1 \isSubContext C2
is true, then the returned simple context is empty. \difference is also valid if two
simple contexts have no common micro context; the returned simple context is simply
C1 [473].
• If S1 and S2 are context sets, \difference returns a context set S, where every simple
context C ∈ S is computed as C1 \difference C2 : S = S1 \difference S2 = {C |
C = C1 \difference C2 ∧ C ̸= ∅ ∧ C1 ∈ S1 ∧ C2 ∈ S2 } ∨ S = ∅ [473]. If for every
C1 and C2 , C1 \difference C2 = ∅, then S1 \difference S2 = ∅ [473]. However, if
there’s at least one pair of C1 and C2 where C1 \difference C2 ̸= ∅, then the result
is not empty [473].
• If T1 and T2 are finite tag sets, then T1 \difference T2 returns a tag set that is a collection of all tags that are members of T1 , but not members of T2 : T1 \difference T2 =
{ti | ti ∈ T1 ∧ ti ∉ T2}. If T1 \in T2 is true (\in is defined further on page 190),
then the returned tag set is empty. \difference is also valid if two tag sets have no
common tag; the returned tag set is then T1 .
• If F1 and F2 are forensic contexts, \difference returns F , where every nested context
C ∈ F is computed as C1 \difference C2 : F = F1 \difference F2 = {C | C =
C1 \difference C2 ∧ C ̸= ∅ ∧ C1 ∈ F1 ∧ C2 ∈ F2 } ∨ F = ∅.
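As a small worked example over hypothetical dimensions x, y, and z: [x:1, y:2] \difference [y:2, z:3] = [x:1], since only the micro context y:2 is shared; conversely, [x:1, y:2] \difference [x:5, z:3] = [x:1, y:2], because the two simple contexts share no micro context.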
first X      =  X @ 0                                                   (7.3.2.1)
last X       =  X @ (# @ (#iseod(#) − 1))                               (7.3.2.2)
next X       =  X @ (# + 1)                                             (7.3.2.3)
prev X       =  X @ (# − 1)                                             (7.3.2.4)
X fby Y      =  if # = 0 then X else Y @ (# − 1)                        (7.3.2.5)
X pby Y      =  if iseod # then X else Y @ (# + 1)                      (7.3.2.6)
X wvr Y      =  X @ T where                                             (7.3.2.7)
                  T = U fby U @ (T + 1)
                  U = if Y then # else next U
                end
X rwvr Y     =  X @ T where                                             (7.3.2.8)
                  T = U pby U @ (T − 1)
                  U = if Y then # else prev U
                end
X nwvr Y     =  X @ T where                                             (7.3.2.9)
                  T = U fby U @ (T + 1)
                  U = if Y == 0 then # else next U
                end
X rnwvr Y    =  X @ T where                                             (7.3.2.10)
                  T = U pby U @ (T − 1)
                  U = if Y == 0 then # else prev U
                end
X asa Y      =  first (X wvr Y)                                         (7.3.2.11)
X nasa Y     =  first (X nwvr Y)                                        (7.3.2.12)
X ala Y      =  last (X rwvr Y)                                         (7.3.2.13)
X nala Y     =  last (X nrwvr Y)                                        (7.3.2.14)
X upon Y     =  X @ W where                                             (7.3.2.15)
                  W = 0 fby (if Y then (W + 1) else W)
                end
X rupon Y    =  X @ W where                                             (7.3.2.16)
                  W = 0 pby (if Y then (W − 1) else W)
                end
X nupon Y    =  X @ W where                                             (7.3.2.17)
                  W = 0 fby (if Y == 0 then (W + 1) else W)
                end
X nrupon Y   =  X @ W where                                             (7.3.2.18)
                  W = 0 pby (if Y == 0 then (W − 1) else W)
                end
neg X        =  −X                                                      (7.3.2.19)
not X        =  if X then !X else X                                     (7.3.2.20)
X and Y      =  X && Y                                                  (7.3.2.21)
X or Y       =  X || Y                                                  (7.3.2.22)
X xor Y      =  not((X and Y) or not (X or Y))                          (7.3.2.23)

Figure 45: Operators translated to GIPL-compatible definitions [304, 305]
Definition 45. \intersection
• If C1 and C2 are simple contexts, then C1 \intersection C2 returns a new simple
context, which is the collection of those micro contexts that belong to both C1 and C2 :
C1 \intersection C2 = {mi | mi ∈ C1 ∧ mi ∈ C2 }. If C1 and C2 have no common
micro context, the result is an empty simple context [473].
• If S1 and S2 are context sets, then the resulting intersection context set is defined as
S = S1 \intersection S2 = {C | C = C1 \intersection C2 ∧ C ̸= ∅ ∧ C1 ∈
S1 ∧ C2 ∈ S2 } ∨ S = ∅. If for every C1 and C2 , C1 \intersection C2 = ∅, then
S1 \intersection S2 = ∅. However, if there’s at least one pair of C1 and C2 where
C1 \intersection C2 ̸= ∅, the result is not empty [473].
• If T1 and T2 are finite tag sets, then T1 \intersection T2 returns a new unordered tag
set, which is the collection of tags that belong to both T1 and T2 : T1 \intersection T2 =
{ti | ti ∈ T1 ∧ ti ∈ T2 }. If T1 and T2 have no common tag, the result is an empty tag
set.
• If F1 and F2 are forensic contexts, then the resulting intersection of the nested contexts
F = F1 \intersection F2 = {C | C = C1 \intersection C2 ∧F ̸= ∅∧C1 ∈ F1 ∧C2 ∈
F2 } ∨ F = ∅.
Definition 46. \projection
• If C is a simple context and D is a set of dimensions, this operator filters only those
micro contexts in C that have their dimensions in set D : C \projection D = {m |
m ∈ C ∧ dim(m) ∈ D}. The result is empty, if there’s no micro context having the
same dimension as in the dimension set. dim(m) returns the dimension of micro context
m [473].
• The projection of a dimension set onto a context set is a context set, which is a collection of all simple contexts having a \projection on the dimension set [473]. If
S is a context set, D is a dimension set; S ′ = S \projection D = {n | n =
C \projection D ∧ n ̸= ∅ ∧ C ∈ S} ∨ S ′ = ∅.
• If C is a simple context and T is a finite set of tags, this operator filters only those
micro contexts in C that have their tags in set T : C \projection T = {m | m ∈
C ∧ tag(m) ∈ T }. The result is empty, if there’s no micro context having the same tag
as in the tag set. tag(m) returns the tag of micro context m. The projection of a tag set
onto a context set is a context set, which is a collection of all simple contexts having a
\projection on the tag set. If S is a context set, S ′ = S \projection T = {n | n =
C \projection T ∧ n ̸= ∅ ∧ C ∈ S} ∨ S ′ = ∅.
• Projection of a dimension set onto a forensic context is a forensic context containing all
nested contexts filtered through the \projection on the dimension set. If F is a forensic
context, D is a dimension set; F ′ = F \projection D = {n | n = C \projection D∧
n ̸= ∅ ∧ C ∈ F } ∨ F ′ = ∅.
Definition 47. \hiding
• If C is a simple context and D is a dimension set, this operator removes all the micro
contexts in C whose dimensions are in D: C \hiding D = {m | m ∈ C ∧ dim(m) ∉ D}.
If D contains all the dimensions appearing in C, the result is an empty simple context. Additionally, (C \projection D) \union (C \hiding D) = C [473].
• For context set S, and dimension set D, the \hiding operator constructs a context set
S ′ where S ′ is obtained by \hiding each simple context in S on the dimension set D:
S ′ = S \hiding D = {n | n = C \hiding D ∧ n ̸= ∅ ∧ C ∈ S} ∨ S ′ = ∅.
• If C is a simple context and T is a finite tag set, this operator removes all the micro
contexts in C whose tags are in T: C \hiding T = {m | m ∈ C ∧ tag(m) ∉ T}.
If T contains all the tags appearing in C, the result is an empty simple context. For
context set S, the \hiding operator constructs a context set S ′ where S ′ is obtained by
\hiding each simple context in S on the tag set T : S ′ = S \hiding T = {n | n =
C \hiding T ∧ n ̸= ∅ ∧ C ∈ S} ∨ S ′ = ∅.
• For forensic context F and dimension set D, \hiding constructs a forensic context F ′
by hiding each nested context in F on dimension set D: F ′ = F \hiding D = {n |
n = C \hiding D ∧ n ̸= ∅ ∧ C ∈ F } ∨ F ′ = ∅.
Definition 48. \override
• If C1 and C2 are simple contexts, then C1 \override C2 returns a new simple context
C, which is the result of the conflict-free union of C1 and C2 , as defined as follows:
C = C1 \override C2 = {m | (m ∈ C1 ∧ dim(m) ∉ dim(C2)) ∨ m ∈ C2} [473].
• For every pair of context sets S1 , S2 , \override returns a set of contexts S, such that
every context C ∈ S is computed as C1 \override C2 ; C1 ∈ S1 , C2 ∈ S2 : S =
S1 \override S2 = {C | C = C1 \override C2 | C1 ∈ S1 ∧ C2 ∈ S2 ∧ C ̸= ∅} ∨ S =
∅ [473].
• For every pair of forensic contexts F1 , F2 , \override returns a nested context F : C1 ∈
F1 , C2 ∈ F2 : F = F1 \override F2 = {C | C = C1 \override C2 | C1 ∈ F1 ∧ C2 ∈
F2 ∧ C ̸= ∅} ∨ F = ∅.
Definition 49. \union
• If C1 and C2 are simple contexts, then C1 \union C2 returns a new simple context C,
for every micro context m in C: m is an element of C1 or m is an element of C2 :
C1 \union C2 = {m | m ∈ C1 ∨ (m ∈ C2 ∧ m ∉ C1)} [473]. If there is at least one
pair of micro contexts in C1 and C2 sharing the same dimension and these two micro
contexts are not equal then the result is a non-simple context translated into a context
set: for a non-simple context C, construct the set Y = {yd = C \projection {d} |
d ∈ dim(C)}. Denoting the elements of set Y as y1 , . . . , yp , construct the set S(C)
of simple contexts: S(C) = {m1 \override m2 \override . . . \override mp | m1 ∈
y1 ∧ m2 ∈ y2 ∧ . . . ∧ mp ∈ yp}. The non-simple context is viewed as the set S(C), such
that S(C) = {s ∈ S | dim(s) = dim(C) ∧ s ⊂ C} [473].
• If C1 and C2 are context sets, then C = C1 \union C2 is computed as follows [473]:
1. D1 = {dim(m) | m ∈ C1}, D2 = {dim(m) | m ∈ C2}, D3 = D1 ∩ D2
2. Compute X1: X1 = {mi \union (mj \hiding D3) | mi ∈ C1 ∧ mj ∈ C2}
3. Compute X2: X2 = {mj \union (mi \hiding D3) | mi ∈ C1 ∧ mj ∈ C2}
4. Finally: C = X1 \union X2
This procedure ensures context sets always have the same well-defined context structure [473, 513].
• If T1 and T2 are finite tag sets of tags of the same type, T is their non-order-preserving union: T = T1 \union T2 = {t | t ∈ T1 ∨ (t ∈ T2 ∧ t ∉ T1)}.
• If the forensic contexts are observations O1 and O2, their union is an observation sequence OS if the observations are ordered in wall-clock time: OS = O1 \union O2 = {O1, O2 | O1.t < O2.t}. If the observations are conflicting in time (equal, or undefined), they are lifted to an evidential statement containing two constructed observation sequences that have one observation each: ES = O1 \union O2 = {O1 ∈ OS1, O2 ∈ OS2}. If the forensic contexts are observation sequences OS1 and OS2, their union is OS = OS1 \union OS2, i.e., a fusion merge of the two observation sequences, if OS1 and OS2 contain non-conflicting observations Oi, ordered chronologically by Oi.t and not sharing a common t. If observations are conflicting or Oi.t are undefined in any of OS1 or OS2, then the union of such observation sequences is an evidential statement ES = OS1 \union OS2 = {OS1, OS2}. If the forensic contexts are evidential statements ES1 and ES2, their union is a simple union of all observation sequences from both statements: ES = ES1 \union ES2 = {os | os ∈ ES1 ∨ (os ∈ ES2 ∧ os ∉ ES1)}. We admit conflicting observation sequences as they do happen in real-life investigations.
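For instance, two observations O1 and O2 with defined timestamps O1.t < O2.t combine as O1 \union O2 into the observation sequence {O1, O2} in that chronological order; if instead O1.t = O2.t or either timestamp is undefined, the result is an evidential statement with two singleton observation sequences, {{O1}, {O2}}, reflecting the unresolved ordering between them.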
Definition 50. \in
• If C1 and C2 are simple contexts, then C1 \in C2 = C1 \isSubContext C2 .
• If S1 and S2 are context sets, then S1 \in S2 = S1 \isSubContext S2 .
• If F1 and F2 are forensic contexts, then F1 \in F2 = F1 \isSubContext F2 .
• If D1 ⊂ DIM and D2 ⊂ DIM are dimension sets, then the operator D1 \in D2 returns
true if D1 ⊂ D2 .
• If T1 and T2 are finite tag sets, then the operator T1 \in T2 returns true if T1 ⊆ T2 .
If T1 is infinite and T2 is finite, \in always returns false. If T2 is infinite and T1 is
either finite or infinite, the set membership can be determined by a function. This last case is of little further interest to us.
While this operator has not appeared at the syntax level for any context types or their parts, at the implementation level Tong provided isInTagSet() while developing the Lucx parser and semantic analyzer [473].
7.3.4 Forensic Context Operators
While the operators presented so far in the preceding sections are designed to support forensic contexts in addition to their traditional behavior, the operators presented here were originally designed to work only with the forensic contexts specific to Forensic Lucid and lift any non-forensic contexts if used.
The operators presented here are based on the discussion of the combination function [137] and others, to support the required case implementation [304]. Gladyshev's comb() operator [136, 137], discussed earlier (see Section 2.2.4.6.2, page 42), needs to be realized in a general manner in Forensic Lucid for combining our analogies of multiple partitioned runs (MPRs) [136, 137], which in our case are higher-level contexts and context sets, in the new dimension types [300, 304, 305, 312] (see Table 13).
Definition 51. combine corresponds to the comb function as originally described by Gladyshev ([137], Section 2.2.4.6.2, page 42).
Definition 52. product corresponds to the cross-product [137] of contexts.
Belief and plausibility operators bel and pl are introduced to deal with evidence evaluation in Dempster–Shafer terms (Section 3.3.2, page 66). The basic argument to these operators is an observation o; in this case they simply evaluate to the o.w component of o. bel and pl of any two observations o1 and o2 of the same property (o1.P = o2.P) can be computed according to o1.w and o2.w. The latter is a building block to compute bel and pl of any number of observations relating to that property.
Definition 53. bel corresponds to belief computation based on DSTME (Section 3.3.2, page 66) adapted to forensic contexts as follows: given an observation O, bel O = O.w; if O = $, bel $ = 1; if O = \0, bel \0 = 0; if C is a simple context or S is a context set, bel C = bel S = 1; given an observation sequence OS and its observations Oi, bel OS = Σ_{Oi ∈ OS} Oi.w / i; given an evidential statement ES and its subset of interest of observation sequences ESB, bel ES = Σ_{ESB ⊆ ES} w(ESB).
Thus, the credibility values of OS and ES are derived from the beliefs assigned to the contained observations.
Definition 54. pl corresponds to plausibility computation based on DSTME adapted to forensic contexts as follows: given an observation O, pl O = bel O; given an observation sequence OS and its observations Oi, pl OS = bel OS; given an evidential statement ES and its subset of interest of observation sequences ESB, pl ES = Σ_{ESB ∩ ES ≠ ∅} w(ESB) = 1 − bel(ES).
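As a simple worked example, for the earlier observation o = ("A printed", 1, 0, 0.85) we have bel o = pl o = 0.85; for the no-observation $, bel $ = 1; and for a zero-observation \0, bel \0 = 0.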
7.4 Operational Semantics
Like the syntax, the operational semantics of Forensic Lucid capitalizes on the semantic rules of GIPL [361] (Section 4.1.1.2, page 78), Indexical Lucid [109], Objective Lucid [264], Lucx [513], and JOOIP [528], with an inspiration from MARFL (Figure 84, page 357), augmented with the new operators, probabilities, and definitions [300, 304, 312]. We specify the resulting semantic definitions of Forensic Lucid along with an explanation of the rules and the notation. The new rules of the operational semantics of Forensic Lucid primarily cover the operators, including the reverse and logical stream operators as well as the forensic-specific operators [300, 304, 305]. We use the same notation as the referenced languages to maintain consistency in defining our rules [300, 304].
7.4.1 General Semantic Rules
The rules are grouped in several figures: the basic core rules are in Figure 46, the core
context-related rules are in Figure 47, the hybrid Forensic Lucid-Java interaction rules
are in Figure 48, the rules related to observations, observation sequences, and evidential
statements are in Figure 49, Figure 51, Figure 52, Figure 53, and in Table 15 respectively.
What follows are notes on the additional details of rules of interest.
1. The evaluation context environment P defines a point in the multidimensional context
space at which an expression E is to be evaluated [361, 526]. P changes when the
@ operator is evaluated, or a dimension is declared in a where clause. It is a set of
⟨dimension : tag⟩ mappings, associating each dimension with a tag index over this
dimension. P is thus a partial function:
P : EI → T
(7.4.1.1)
where EI is a set of possible dimension identifiers, declared statically or computed
dynamically. T is a tag set.
In traditional Lucid semantics, EI = Id ([361, 513], Section 4.1.1.2, page 78). In our case, the extended EI includes Id and E.Id, where the latter form allows nested OO-like identifiers, either declared explicitly or resulting from the evaluation of E into a dynamic identifier, allowing for an intuitive representation of context hierarchies.
In traditional Lucid semantics, the tag set T = N, i.e., a set of natural numbers [361].
In Lucx's extension, T = U is any enumerable tag set [513], which may include various orderings of numerical or string tag sets, both finite and infinite ([473], cf. Section 7.2.3.1.5, page 165). In Forensic Lucid, T = Uf, which includes N, U, as well as tag values associated with observations, their sequences, and evidential statements.
2. The initial definition environment D0 includes the predefined operators, the constants,
and P0 defines the initial context of evaluation [361, 513]. Thus, D0 , P0 ⊢ E : v
represents the computation of any Forensic Lucid expression E resulting in value v.
3. The semantic operator † represents the addition of a mapping in the definition environment D, associating an identifier with its corresponding semantic record, here
represented as a tuple [361, 528].
4. Each type of identifier (cf. Table 13, page 161) can only be used in the appropriate
situations. Identifiers of type op, func, and dim evaluate to themselves (Figure 46,
rules 7.4.1.3, 7.4.1.19, 7.4.1.4).
5. Constant identifiers (const) evaluate to the corresponding constant (Figure 46, rule
7.4.1.2).
6. Function calls, resolved by the Efct rule (Figure 46, rule 7.4.1.14), require the renaming
of the formal parameters into the actual parameters (as represented by E′[idi ← Ei]; see the worked example after this list).
7. Arrays are resolved by the Earray rule (Figure 46, rule 7.4.1.6); an array is a collection of expressions Ei evaluating into their values vi under the current context. Evaluating an array @ a certain context evaluates each element of the array at that context. When needed, an array can be lifted to a tuple (via the Etuple rule described further).
8. The function P′ = P†[id ↦ v′′] specifies that P′(x) is v′′ if x = id, and P(x) otherwise.
9. The rule for the where clause, Ew (Figure 46, rule 7.4.1.15), which corresponds to the
syntactic expression E where Q, evaluates E using the definitions Q therein.
10. The additions to the definition environment D and context of evaluation P made by
the Q rules (Figure 47, rule 7.4.1.29; Figure 46, rules 7.4.1.16, 7.4.1.17) are local to the
current where clause. This is represented by the fact that the Ew rule returns neither
D nor P.
11. The Qdim rule adds a dimension to the definition environment and (as a default convention for N tags maintained for GIPL compatibility), adds this dimension to the context
of evaluation with the tag 0 (Figure 47, rule 7.4.1.29). This allows us to maintain compatibility with the predecessor dialects. For the tag set types other than N where the
first tag is either non-zero or not a number, that first tag is added to the context of
evaluation with its underlying collection index of 0 (cf. Definition 42, page 181) (where
the collection at the implementation level is marf.util.FreeVector in our case).
12. The Qid and Qfid simply add variable and function identifiers along with their definition
to the definition environment (Figure 46, rules 7.4.1.16, 7.4.1.17) [300, 304, 361].
13. The semantic rule Etuple (7.4.1.26) evaluates a tuple as a finite stream whose dimension
is explicitly indicated as E in the corresponding syntax rule ⟨E1 , . . . , En ⟩id. Accordingly, the semantic rule Eselect (7.4.1.27) picks up one element indexed by E from the
tuple E ′ [513].
14. The evaluation rule for the navigation operator @ , Eat(cxt) (7.4.1.24), which corresponds to the syntactic expression E @ E ′ , evaluates E in context E ′ [513].
15. The evaluation rule for the set navigation operator @ , Eat(set) (7.4.1.28), which corresponds to the syntactic expression E @ E ′ , evaluates E in a set of contexts C. Therefore,
the evaluation result is a collection of results of evaluating E at each element of C [513].
16. The semantic rule Econstruction(cxt) (7.4.1.23, Figure 47) evaluates [Ed1 : Ei1 , . . . , Edn :
Ein ] to a simple context [513]. It specifically creates a context as a semantic item and
returns it as a context P that can then be used by the rule 7.4.1.24 to navigate to this
context by making it override the current context.
17. The semantic rule Econstruction(set) (7.4.1.31) correspondingly constructs {E1 , . . . , Em }
as a context set [513].
18. The semantic rule 7.4.1.13 is valid for the definition of the context operators, where the
actual parameters evaluate to values vi that are contexts Pi .
19. The semantic rule 7.4.1.22 expresses that the # symbol evaluates to the current context.
When used as a parameter to the context calculus operators, this allows for the generation of contexts relative to the current context of evaluation [300, 304, 361, 513, 515].
20. When intensional operators are used and one of their arguments happens to be of the types (odim), (osdim), or (esdim), their opw version is assumed (taking w into account as exemplified in Section 7.3.2, page 169). When binary intensional operators are used and one of the arguments is of a forensic context type and the other is a Lucx context type, the latter is lifted (similar to type casting in classical imperative languages) to the forensic context as described in Section 7.4.2, page 197, and in subsequent sections.
21. TG (E) corresponds to mapping of Forensic Lucid and Java data types by the GIPSY
Type System (Appendix B).
22. For hybrid intensional-imperative glue for Forensic Lucid and Java we define the
rules in Figure 48. This set of rules enables Forensic Lucid to interact with Java
(and vice versa), for example, to enable the use of Forensic Lucid in JOOIP for
purposes of self-forensics (Appendix D).
JCDef semantically identifies a Java class, associates this class declaration with the identifier cid, and stores it in the definition environment D. A class can contain member variables (JavaVDef) and member functions (JavaFDef). These are processed in a similar manner by the two following semantic rules [526, 528], and the rules after those define the semantics of their evaluation.
23. JVDef specifies a Java member variable in a Java class JavaCDef by the syntactical specification: public type vid ... inside the class’ declaration. The specification
(classV, cid.vid, JavaVDef) is used to represent a Java class data member vid declared inside a class declaration JavaVDef for the class cid [526, 528].
24. JFDef specifies a Java member method in a Java class JavaCDef from the syntactic
specification: public ft fid(fpt1 fp1 , . . . , fptn fpn ){. . .}. Likewise, the specification
(classF, cid.fid, JavaFDef) is used to represent a Java member method fid declared
inside a class declaration JavaCDef for the class cid [526, 528].
25. JFFDef specifies the JLucid notion of “free Java function”, which is a method not
explicitly defined in a given class by a Forensic Lucid programmer. Its syntactical specification is: ft ffid(fpt1 fp1 , . . . , fptn fpn ){. . .}. The semantic specification
(freefun, ffid, JavaFreeFDef) represents the “free Java function” ffid, i.e., a function that is directly available in the Forensic Lucid program, and that is not a
member of any class [526]. Since free functions are not allowed in standard Java, in
terms of implementation, these “free functions” are all put inside a generated wrapper
class by the compiler to be part of the GEER of the execution engine as originally
defined in [264].
26. LobjV specifies the semantics of the evaluation of a reference to a class data member by
a Forensic Lucid expression using the object-oriented dot operator (Section 7.3.2,
page 169, page 179). The rule's premises ensure that, in E.vid: (1) the Forensic
Lucid expression E evaluates to a value v that is an object of type cid, as being
associated in the definition environment D with the tuple (class, cid, JavaCDef); (2) the
variable vid is a public member of the class cid. Once this is established,
the Java Virtual Machine can be called upon to evaluate v.vid (noted as JVM[[v.vid]]),
to yield a value vr [526, 528].
27. LobjF specifies the evaluation of a reference to a class member method by a Forensic Lucid expression using the dot operator. The rule's premises ensure that, in
E.fid(E1, ..., En): (1) the Forensic Lucid expression E evaluates to a value v
that is an object of type cid, as being associated in the definition environment D
with the tuple (class, cid, JavaCDef); (2) the method fid is a public member of the
class cid. Once this is established, all actual parameters are evaluated to
values v1, ..., vn, and the JVM can be called upon to evaluate v.fid(v1, ..., vn) (denoted as
JVM[[v.fid(v1, ..., vn)]]), to yield a value vr [526, 528].
28. LFF specifies the evaluation of free Java functions. The rule is a simpler version
of LobjF with no class type identifiers present and no object to compute upon. As
mentioned earlier, Java does not have free functions, so all free functions are wrapped
in a "free function wrapper" class at compilation, with all free functions inserted in it
as static functions [145, 264], which are then processed by the JVM [526, 528]. The
JFFDef rule inserts all the free functions into this wrapper class, which we call
ffw. Upon calling such a "free function", this rule is invoked; it assumes that the
"free functions" have been wrapped as static functions into the ffw class and then calls the
appropriate function [526, 528].
7.4.2 Extended Observation
We augment the notion of observation based on Equation 7.2.2.1 to formalize it as illustrated
in Figure 49. In general, all components of an observation are stored with the assumption
Ecid (7.4.1.2):     D(id) = (const, c)  ⟹  D, P ⊢ id : c

Eopid (7.4.1.3):    D(id) = (op, f)  ⟹  D, P ⊢ id : id

Efid (7.4.1.4):     D(id) = (func, idi, E)  ⟹  D, P ⊢ id : id

Evid (7.4.1.5):     D(id) = (var, E);  D, P ⊢ E : v  ⟹  D, P ⊢ id : v

Earray (7.4.1.6):   D, P ⊢ E : [E1, ..., Em];  D, P ⊢ Ei : vi  ⟹  D, P ⊢ [E1, ..., Em] : [v1, ..., vm]

EcT (7.4.1.7):      D, P ⊢ E : true;  D, P ⊢ E′ : v′  ⟹  D, P ⊢ if E then E′ else E″ : v′

EcF (7.4.1.8):      D, P ⊢ E : false;  D, P ⊢ E″ : v″  ⟹  D, P ⊢ if E then E′ else E″ : v″

E∞+ (7.4.1.9):      D, P ⊢ E : INF+  ⟹  D, P ⊢ INF+ : TG(Long.MAX_VALUE)

E∞− (7.4.1.10):     D, P ⊢ E : INF-  ⟹  D, P ⊢ INF- : TG(Long.MIN_VALUE)

Eeod (7.4.1.11):    D, P ⊢ E : eod  ⟹  D, P ⊢ eod : null

Ebod (7.4.1.12):    D, P ⊢ E : bod  ⟹  D, P ⊢ bod : null

Eop (7.4.1.13):     D, P ⊢ E : id;  D(id) = (op, f);  D, P ⊢ Ei : vi  ⟹  D, P ⊢ E(E1, ..., En) : f(v1, ..., vn)

Efct (7.4.1.14):    D, P ⊢ E : id;  D(id) = (func, idi, E′);  D, P ⊢ E′[idi ← Ei] : v  ⟹  D, P ⊢ E(E1, ..., En) : v

Ew (7.4.1.15):      D, P ⊢ Q : D′, P′;  D′, P′ ⊢ E : v  ⟹  D, P ⊢ E where Q : v

Qid (7.4.1.16):     D, P ⊢ id = E : D†[id ↦ (var, E)], P

Qfid (7.4.1.17):    D, P ⊢ id(id1, ..., idn) = E : D†[id ↦ (func, idi, E)], P

QQ (7.4.1.18):      D, P ⊢ Q : D′, P′;  D′, P′ ⊢ Q′ : D″, P″  ⟹  D, P ⊢ Q Q′ : D″, P″

Figure 46: Operational semantics rules of Forensic Lucid: E and Q Core
of default values when not all of the comprising components were specified, similarly to the
allowed syntactical constructs in Figure 43, where P = E, with w being the credibility or
trustworthiness weight of that observation and t being an optional wall-clock timestamp.
With w = 1, the observation o is equivalent to the original model proposed by Gladyshev [313].
The simple contexts and context sets are lifted (cf. Section 7.3.3, page 183) to observations
via P as illustrated by the rule Cop(flift) (7.4.2.13, in Figure 50) (with the defaults assigned
Edid (7.4.1.19):               D(id) = (dim)  ⟹  D, P ⊢ id : id

EE.did (7.4.1.20):             D(E.id) = (dim)  ⟹  D, P ⊢ E.id : id.id

Etag (7.4.1.21):               D, P ⊢ E : id;  D(id) = (dim)  ⟹  D, P ⊢ #E : P(id)

E#(cxt) (7.4.1.22):            D, P ⊢ # : P

Econstruction(cxt) (7.4.1.23): D, P ⊢ Edj : idj;  D, P ⊢ Eij : vj;  D(idj) = (dim);
                               P′ = P0 † [id1 ↦ v1] † ... † [idn ↦ vn]
                               ⟹  D, P ⊢ [Ed1 : Ei1, ..., Edn : Ein] : P′

Eat(cxt) (7.4.1.24):           D, P ⊢ E′ : P′;  D, P † P′ ⊢ E : v  ⟹  D, P ⊢ E @ E′ : v

Edot (7.4.1.25):               D, P ⊢ E2 : id2;  D(id2) = (dim)  ⟹  D, P ⊢ E1.E2 : tag(E1 ↓ {id2})

Etuple (7.4.1.26):             D, P ⊢ E : id;  D † [id ↦ (dim)];  P † [id ↦ 0];  D, P ⊢ Ei : vi
                               ⟹  D, P ⊢ ⟨E1, E2, ..., En⟩E : v1 fby.id v2 fby.id ... vn fby.id eod

Eselect (7.4.1.27):            E = [d : v′];  E′ = ⟨E1, ..., En⟩d;  P′ = P † [d ↦ v′];  D, P′ ⊢ E′ : v
                               ⟹  D, P ⊢ select(E, E′) : v

Eat(set) (7.4.1.28):           D, P ⊢ C : {P1, ..., Pm};  D, Pi:1...m ⊢ E : vi  ⟹  D, P ⊢ E @ C : {v1, ..., vm}

Qdim (7.4.1.29):               D, P ⊢ dimension id : D†[id ↦ (dim)], P†[id ↦ 0]

Cconstruction(box) (7.4.1.30): D, P ⊢ Edi : idi;  D(idi) = (dim);  {E1, ..., En} = dim(P1) = ... = dim(Pm);
                               E′ = fp(tag(P1), ..., tag(Pm));  D, P ⊢ E′ : true
                               ⟹  D, P ⊢ Box [E1, ..., En | E′] : {P1, ..., Pm}

Cconstruction(set) (7.4.1.31): D, P ⊢ Ei:1...m : Pi  ⟹  D, P ⊢ {E1, ..., Em} : {P1, ..., Pm}

Cop(cxt) (7.4.1.32):           D, P ⊢ E : id;  D(id) = (cop, f);  D, P ⊢ Ci : vi
                               ⟹  D, P ⊢ E(C1, ..., Cn) : f(v1, ..., vn)

Csop(set) (7.4.1.33):          D, P ⊢ E : id;  D(id) = (sop, f);  D, P ⊢ Ci : {vi1, ..., vik}
                               ⟹  D, P ⊢ E(C1, ..., Cn) : f({v11, ..., v1s}, ..., {vn1, ..., vnm})

Figure 47: Operational semantics rules of Forensic Lucid: E and Q Core Context
to min, max, w, t per the rules in Figure 49).
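To make the default-filling behaviour of rules Qodim1–Qodim5 concrete, the following minimal Java sketch models an observation (P, min, max, w, t) with the same defaults (min = 1, max = 0, w = 1.0, and t = eod when the corresponding components are omitted); the class and its names are illustrative only and are not the GIPSY type system's actual observation implementation.

// Illustrative sketch only: a hypothetical Observation model mirroring rules
// Qodim1..Qodim5 (defaults: min = 1, max = 0, w = 1.0, t = eod when omitted).
public final class Observation {
    public static final String EOD = "eod"; // end-of-data marker stand-in

    public final Object property; // the observed property P (an evaluated expression)
    public final int min;         // minimum duration
    public final int max;         // optional additional duration, 0 by default
    public final double w;        // credibility/trustworthiness weight, 1.0 by default
    public final Object t;        // optional wall-clock timestamp, eod by default

    // observation id = (E, min, max, w, t)
    public Observation(Object property, int min, int max, double w, Object t) {
        this.property = property; this.min = min; this.max = max; this.w = w; this.t = t;
    }
    // observation id = (E, min, max, w)  -- t defaults to eod
    public Observation(Object property, int min, int max, double w) {
        this(property, min, max, w, EOD);
    }
    // observation id = (E, min, max)     -- w defaults to 1.0
    public Observation(Object property, int min, int max) {
        this(property, min, max, 1.0);
    }
    // observation id = (E, min)          -- max defaults to 0
    public Observation(Object property, int min) {
        this(property, min, 0);
    }
    // observation id = E                 -- min defaults to 1
    public Observation(Object property) {
        this(property, 1);
    }

    @Override
    public String toString() {
        return "(" + property + ", " + min + ", " + max + ", " + w + ", " + t + ")";
    }

    public static void main(String[] args) {
        // With w = 1 an observation is equivalent to Gladyshev's original (P, min, max).
        System.out.println(new Observation("P"));            // (P, 1, 0, 1.0, eod)
        System.out.println(new Observation("P", 1, 2, 0.8)); // (P, 1, 2, 0.8, eod)
    }
}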
7.4.3 Observation Sequence
An observation sequence is an ordered collection of observations; the related semantic rules
are in Figure 51. The rule 7.4.3.3 corresponds to the most common case of declaration
of a witness account. The rule 7.4.3.4 serves to initialize the empty observation sequence
dimension; such a sequence will need to be populated at the program execution time via
LobjV (7.4.1.34):   D, P ⊢ E : v;  TG(v) = D(cid) = (class, cid, JavaCDef);
                    D, P ⊢ vid : vid;  D(cid.vid) = (classV, cid.vid, JavaVDef);
                    D, P ⊢ JVM[[v.vid]] : vr
                    ⟹  D, P ⊢ E.vid : vr

LobjF (7.4.1.35):   D, P ⊢ E : v;  TG(v) = D(cid) = (class, cid, JavaCDef);
                    D, P ⊢ fid : fid;  D(cid.fid) = (classF, cid.fid, JavaFDef);
                    D, P ⊢ E1, ..., En : v1, ..., vn;  D, P ⊢ JVM[[v.fid(v1, ..., vn)]] : vr
                    ⟹  D, P ⊢ E.fid(E1, ..., En) : vr

LFF (7.4.1.36):     D(ffid) = (freefun, ffid, JavaFFDef);  D, P ⊢ E1, ..., En : v1, ..., vn;
                    D, P ⊢ JVM[[ffw.ffid(v1, ..., vn)]] : vr
                    ⟹  D, P ⊢ ffid(E1, ..., En) : vr

JCDef (7.4.1.37):   JavaCDef = class cid {...}
                    ⟹  D, P ⊢ JavaCDef : D†[cid ↦ (class, cid, JavaCDef)], P

JVDef (7.4.1.38):   JavaCDef = class cid {... JavaVDef ...};  JavaVDef = public type vid ...;
                    ⟹  D, P ⊢ JavaVDef : D†[cid.vid ↦ (classV, cid.vid, JavaVDef)], P

JFDef (7.4.1.39):   JavaCDef = class cid {... JavaFDef ...};  JavaFDef = public ft fid(fpt1 fp1, ..., fptn fpn){...}
                    ⟹  D, P ⊢ JavaFDef : D†[cid.fid ↦ (classF, cid.fid, JavaFDef)], P

JFFDef (7.4.1.40):  JavaFFWCDef = class ffw {... JavaFFDef ...};  JavaFFDef = ft ffid(fpt1 fp1, ..., fptn fpn){...}
                    ⟹  D, P ⊢ JavaFFDef : D†[ffid ↦ (freefun, ffid, JavaFFDef)], P

Figure 48: Operational semantics of Forensic Lucid (OO)
declaration of observations or observation expressions using context operators. The other
rules are defined as usual. As described in Section 7.3.3, page 183, a context set is lifted
to an observation sequence (rule 7.4.2.14 in Figure 50), where each set element is lifted to an
observation as per the preceding section.
7.4.4 Evidential Statement
An evidential statement is a collection of observation sequences where, unlike within observation
sequences, ordering is not important. Its semantics is in Figure 52. The expression and
declaration rules are defined as usual, including the common declaration and the empty evidential
statement. The rule Qesdim(×) in particular is designed to handle generic observation
sequences, promoting them to the evidential statement forensic context type (Section 7.2.3.1.3,
page 164) when any of their contained observations have max > 0.
Furthermore, Table 15 lists the results of applying different arguments to @, #, and "."
(dot), starting from the evidential statement all the way down to the observation components
Eodid (7.4.2.1):    D(id) = (odim)  ⟹  D, P ⊢ id : id

Eovid (7.4.2.2):    D(id) = (odim, E, min, max, w, t);  D, P ⊢ E : ov  ⟹  D, P ⊢ id : ov

Qodim1 (7.4.2.3):   D, P ⊢ E : ov  ⟹
                    D, P ⊢ observation id = (E, min, max, w, t) :
                    D†[id ↦ (odim, E, min, max, w, t)],
                    P†[id.P ↦ ov, id.min ↦ min, id.max ↦ max, id.w ↦ w, id.t ↦ t]

Qodim2 (7.4.2.4):   D, P ⊢ E : ov  ⟹
                    D, P ⊢ observation id = (E, min, max, w) :
                    D†[id ↦ (odim, E, min, max, w)],
                    P†[id.P ↦ ov, id.min ↦ min, id.max ↦ max, id.w ↦ w, id.t ↦ eod]

Qodim3 (7.4.2.5):   D, P ⊢ E : ov  ⟹
                    D, P ⊢ observation id = (E, min, max) :
                    D†[id ↦ (odim, E, min, max)],
                    P†[id.P ↦ ov, id.min ↦ min, id.max ↦ max, id.w ↦ 1.0, id.t ↦ eod]

Qodim4 (7.4.2.6):   D, P ⊢ E : ov  ⟹
                    D, P ⊢ observation id = (E, min) :
                    D†[id ↦ (odim, E, min)],
                    P†[id.P ↦ ov, id.min ↦ min, id.max ↦ 0, id.w ↦ 1.0, id.t ↦ eod]

Qodim5 (7.4.2.7):   D, P ⊢ E : ov  ⟹
                    D, P ⊢ observation id = E :
                    D†[id ↦ (odim, E)],
                    P†[id.P ↦ ov, id.min ↦ 1, id.max ↦ 0, id.w ↦ 1.0, id.t ↦ eod]

Figure 49: Operational semantics of Forensic Lucid: an observation
Eat(o) (7.4.2.8):       D, P ⊢ E′ : O;  D, P†O ⊢ E : ov  ⟹  D, P ⊢ E @ E′ : ov.⟨P, min, max, w, t⟩

Eat(os) (7.4.2.9):      D, P ⊢ OS : {O1, ..., Om};  D, Oi:1...m ⊢ E : ovi  ⟹  D, P ⊢ E @ OS : ordered {ov1, ..., ovm}

Eat(es) (7.4.2.10):     D, P ⊢ ES : {OS1, ..., OSm};  D, OSi:1...m ⊢ E : osvi  ⟹  D, P ⊢ E @ ES : {osv1, ..., osvm}

Cfop(os) (7.4.2.11):    D, P ⊢ E : id;  D(id) = (fop, f);  D, P ⊢ Oi : ovi
                        ⟹  D, P ⊢ E(O1, ..., On) : f(ov1, ..., ovn)

Cfop(es) (7.4.2.12):    D, P ⊢ E : id;  D(id) = (fop, f);  D, P ⊢ OSi : {ovi1, ..., ovik}
                        ⟹  D, P ⊢ E(OS1, ..., OSn) : f({ov11, ..., ov1s}, ..., {ovn1, ..., ovnm})

Cop(flift) (7.4.2.13):  D, P ⊢ E : id;  D(id) = (cop, f);  D, P ⊢ C : v;  D, P ⊢ O : ov
                        ⟹  D, P ⊢ E(O) : f(ov.P = v)

Csop(flift) (7.4.2.14): D, P ⊢ E : id;  D(id) = (sop, f);  D, P ⊢ Ci : {vi1, ..., vik};  D, P ⊢ OS : {ovi}
                        ⟹  D, P ⊢ E(O1, ..., On) : f(ov1.P = {v11, ..., v1s}, ..., ovn.P = {vn1, ..., vnm})

Figure 50: Operational semantics of Forensic Lucid: forensic operators and lifting
Eosdid (7.4.3.1):    D(id) = (osdim)  ⟹  D, P ⊢ id : id

Eosvid (7.4.3.2):    D(id) = (osdim);  D, P ⊢ E : osv  ⟹  D, P ⊢ id : osv

Qosdim (7.4.3.3):    D, P ⊢ Ei : oi;  D, P ⊢ E : osv  ⟹
                     D, P ⊢ observation sequence id = {E1, ..., En} :
                     D†[id ↦ (osdim, E1, ..., En)],  P†[id ↦ ordered⟨id.ov1, ..., id.ovn⟩]

Qosdim(∅) (7.4.3.4): D, P ⊢ E : osv  ⟹
                     D, P ⊢ observation sequence id : D†[id ↦ (osdim)],  P†[id ↦ ∅]

Figure 51: Operational semantics of Forensic Lucid: an observation sequence
following the definitions in Section 7.3.2, page 169. In the cases of #ES and #OS, the angle brackets
⟨...⟩ simply denote the tuple of either observation sequences or observations of which the
current one is actually returned, of type OS or O respectively. While observation sequences
in an ES are not, strictly speaking, ordered, as mentioned, the order of declaration and of updates
by context operators is used in the underlying implementation.
7.4.5 Belief and Plausibility
The semantics of the bel and pl forensic operators is in Figure 53. The semantics follows the
details explained in the operator definitions in Section 7.3.4, page 191. Belief and plausibility
Eesdid (7.4.4.1):    D(id) = (esdim)  ⟹  D, P ⊢ id : id

Eesvid (7.4.4.2):    D(id) = (esdim);  D, P ⊢ E : esv  ⟹  D, P ⊢ id : esv

Qesdim (7.4.4.3):    D, P ⊢ Ei : osi;  D, P ⊢ E : esv  ⟹
                     D, P ⊢ evidential statement id = {E1, ..., En} :
                     D†[id ↦ (esdim, E1, ..., En)],  P†[id ↦ ⟨id.osv1, ..., id.osvn⟩]

Qesdim(∅) (7.4.4.4): D, P ⊢ E : esv  ⟹
                     D, P ⊢ evidential statement id : D†[id ↦ (esdim)],  P†[id ↦ ∅]

Qesdim(×) (7.4.4.5): ∃ D, P ⊢ Ei : osi . oi.max > 0;  D, P ⊢ Ei : osi;  D, P ⊢ E : esv  ⟹
                     D, P ⊢ evidential statement id = {E1, ..., En} :
                     D†[id ↦ (esdim, E1, ..., En)],  P†[id ↦ ⟨id.osv1 × id.osvi⟩]

Figure 52: Operational semantics of Forensic Lucid: an evidential statement
Table 15: Types of context operators' arguments and resulting types

  Left Type   Operator     Right Type   Resulting Type
  ---------   ---------    ----------   ---------------------
  O           @            OS           O
  O           @            ES           O
  OS          combine      OS           OS
  OS          combine      ES           ES
  ES          combine      ES           ES
  OS          product      OS           ES
              #            ES           ⟨os1, ..., osn⟩
              #            OS           ⟨o1, ..., on⟩
              #            O            (P, min, max, w, t)
              #            O.P          E
              #            O.w          FLOAT
              #            O.min        INTEGER
  ES          .            OS           OS
  OS          .            O            O
  O           fby          O            OS
  O           stream-op    O            OS
of the no-observation $ basically say that everything is believable and plausible (like the Any catch-all case in the DSTME examples). Belief and plausibility of the zero-observation \0, therefore,
correspond to the null hypothesis, so they are set to 0.0.
7.5 Discussion
This section provides additional insight into the just-defined syntax and semantics and
how they fit together with the material presented in the background (Part I).
Ebel (7.4.5.1):     D, P ⊢ E : id;  D(id) = (fop, bel);  D, P ⊢ E′ : id′;  D, P ⊢ E′ : ov;
                    D†[id′ ↦ (odim)];  P†[id′ ↦ id′.w]
                    ⟹  D, P ⊢ bel(E′) : ov.w

Ebel($) (7.4.5.2):  D, P ⊢ E : id;  D(id) = (fop, bel);  D, P ⊢ E′ : id′;  D, P ⊢ E′ : $;
                    D†[id′ ↦ (odim)]
                    ⟹  D, P ⊢ bel(E′) : 1.0

Ebel(\0) (7.4.5.3): D, P ⊢ E : id;  D(id) = (fop, bel);  D, P ⊢ E′ : id′;  D, P ⊢ E′ : \0;
                    D†[id′ ↦ (odim)]
                    ⟹  D, P ⊢ bel(E′) : 0.0

Ebel(EE) (7.4.5.4): D, P ⊢ E : id;  D(id) = (fop, bel);
                    D, P ⊢ E′ : id′;  D, P ⊢ E′ : o′v;  D, P ⊢ E″ : id″;  D, P ⊢ E″ : o″v;
                    D†[id′ ↦ (odim)];  P†[id′ ↦ id′.w];  D†[id″ ↦ (odim)];  P†[id″ ↦ id″.w];
                    o′v.P = o″v.P
                    ⟹  D, P ⊢ bel(E′, E″) : o′v.w + o″v.w

Epl($) (7.4.5.5):   D, P ⊢ E : id;  D(id) = (fop, pl);  D, P ⊢ E′ : id′;  D, P ⊢ E′ : $;
                    D†[id′ ↦ (odim)]
                    ⟹  D, P ⊢ pl(E′) : 1.0

Epl(\0) (7.4.5.6):  D, P ⊢ E : id;  D(id) = (fop, pl);  D, P ⊢ E′ : id′;  D, P ⊢ E′ : \0;
                    D†[id′ ↦ (odim)]
                    ⟹  D, P ⊢ pl(E′) : 0.0

Figure 53: Operational semantics of Forensic Lucid: belief and plausibility
7.5.1 Mapping Forensic Lucid, Gladyshev, and Dempster–Shafer Theories
We illustrate how the Forensic Lucid computations fit in the context of formalization
introduced by Gladyshev (Section 2.2.2, page 30) as well as the use of the Dempster–Shafer
theory (Section 3.3.2, page 66).
1. Gladyshev’s finite collection of all possible events I (see Section 2.2.2) forms a tag set
to denote all possible computations c.
2. Every q in Gladyshev’s state machine Q is a possible world in our view. q is the current
contextual point in space P.
In the case of the Printer Case (Section 2.2.5.1.3, page 48), the notion of the printer is
the intension; its various possible world extensions include the states q of the printer
queue, where following the events in I makes transitions from world to world. Likewise,
in the Blackmail Case (Section 2.2.5.2, page 51), the intension of the disk cluster is
instantiated in different extensions of the letter fragment states.
3. Gladyshev’s runs and their partitionings have a length. Since Lucid streams are generally infinite we do not have a length, we denote finite streams end with the eod
(end-of-data) and bod (beginning-of-data) stream markers as well as the query operators such as iseod and isbod (see the corresponding syntax productions: 7.3.0.24,
7.3.0.67, and 7.3.0.69).
4. For convenience and consistency of expression we define max ≡ opt and infinitum ≡ INF+
in Gladyshev's formalization of observation (Section 2.2.4, page 32).
5. os = o simply means an observation sequence os containing a single observation
o, as in os = {o}.
6. The no-observation in Section 2.2.4, $ = (CT, 0, infinitum), is like a catch-all case
in the Dempster–Shafer theory in Section 3.3.2 to complement belief mass assignment. CT
is a finite tag set, and infinitum (a constant longer than the longest run in Gladyshev's formalism) is INF+ in our case, which maps to Java's Long.MAX_VALUE within the
GIPSYInteger type in the GIPSY Type System.
7. In general, o = (E1, E2, E3, E4, E5); in practice, min = E2 and max = E3 evaluate
to integers, w = E4 to a floating point number in [0 ... 1], and t = E5 is an optional
date/time expression of when an activity began (consistently aligned to some epoch, e.g.,
a Unix timestamp, which is a number of seconds since the epoch, so it is technically
an integer as well, but standard string timestamps are also syntactically accepted to
facilitate encoding of log data).
8. We use fbyw for the probabilistic fby, whose syntax is the same as that of the regular fby,
but o's w is taken into account; when w = 1, fbyw behaves like the regular fby. The
same holds for wvr: X is whenever Y's w is sufficiently high (Y.w > 1/2), and similarly for the
other operators defined in Section 7.3.2, page 169. w puts an additional constraint
on the transitions that use it to avoid taking unreliable evidence into account.
Having a low w may effectively force an eod in a stream of observations earlier than
the actual end of the stream, discarding the evidence after a low-credibility observation as
non-reliable and therefore non-admissible. It is certainly possible for a given observation
sequence (testimony) to contain alternating sub-sequences of high- and low-credibility
observations; therefore, a full stop at the first encounter of o.w < 1/2 may not be
desirable. In this case the investigator has to review the observation sequence and
break it up into individual sequences containing series of credible and non-credible
observations. This manual aspect can be automated if the observation sequences come
from software tools, logs, services, or monitoring agents, or by a preprocessing tool or
script run before the Forensic Lucid program.
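As a rough illustration of this credibility gating (a simplified sketch under the assumptions just stated, not the actual GEE operator implementation; the 1/2 threshold mirrors the wvr example above), the following Java fragment cuts an observation stream at the first low-credibility observation, producing the "early eod" effect described here.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the "early eod" effect of a low-credibility observation:
// a stream of (property, w) pairs is cut at the first w below the threshold, so a
// weight-aware fby only ever sees the credible prefix of the stream.
public final class CredibilityGate {
    record Obs(String property, double w) { }

    static List<Obs> truncateAtLowCredibility(List<Obs> stream, double threshold) {
        List<Obs> credible = new ArrayList<>();
        for (Obs o : stream) {
            if (o.w() < threshold) {
                break; // acts like an early eod marker in the observation stream
            }
            credible.add(o);
        }
        return credible;
    }

    public static void main(String[] args) {
        List<Obs> os = Arrays.asList(
                new Obs("P1", 1.0), new Obs("P2", 0.9),
                new Obs("P3", 0.3),   // low-credibility observation forces the cut
                new Obs("P4", 1.0));
        // Only P1 and P2 survive with threshold 1/2.
        System.out.println(truncateAtLowCredibility(os, 0.5));
    }
}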
7.5.2 Forward Tracing vs. Back-tracing
Naturally, the GEE (Section 6.2.2, page 142) makes demands in the demand-driven evaluation
in the order the tree (AST) of an intensional program is traversed. Tracing the demand
requests in this case yields "forward tracing" (e.g., see the top half of Figure 25,
page 87). Such tracing is useful for debugging and visual verification, but it is less useful than
the mentioned back-tracing, performed when demands are resolved, when dealing with back-tracing
in a forensic investigation that attempts to reconstruct events from the final-state observations
back to the initial state. Back-tracing is also naturally present when demands are computed
and return results, which corresponds to the second half of Figure 25, page 87. The latter may
not be sufficient for forensic evaluation, so a set of reverse counterparts of next, fby, asa,
etc. was needed.
7.5.3 Constructing Forensic Context of Evaluation
We need to provide an ability to encode the "stories" told by the evidence and witnesses.
These constitute the primary context of evaluation that gives meaning to the case under
investigation. The more complete "return value" of the forensic expression evaluation is a
(possibly empty) collection of backtraces, which contain the "paths of fuzzy truth". If a
given path trace contains values considered true, it is an explanation of a story. If there
is no such path (i.e., the trace is empty), there is not enough supporting evidence for the
entire claim to be true [300, 304, 305, 307, 312]. The backtraces are augmented with
plausibility values using the Dempster–Shafer approach (Section 3.3.2, page 66), computed
from the belief mass assignments to the contained observations (in the equivalent Gladyshev
backtraces, the plausibility and belief are always 1.0).
The context spaces (Section 3.2.2, page 64) are finite and can be navigated in
all directions along the dimension indexes. The finiteness is not a stringent requirement
(as normal Lucid tag sets are infinite in nature), but in a cyberforensic investigation
there is always a finite number of elements in the stories told by witnesses and the evidence,
such that the investigator can process them in humanly reasonable time and arrive at some
conclusion [305].
We, therefore, defined streams of observations (i.e., observation sequences osi). In fact,
in Forensic Lucid we defined higher-order dimensions and lower-order dimensions [300,
304]. The highest-level one is the evidential statement es, which is a finite unordered
set of observation sequences os. The observation sequence os is a finite ordered set of
observations o. The observation o is an "eyewitness" of a particular property P along with the
duration of the said observation [300, 304]. As mentioned before (Section 7.3.2, page 181), we
admit navigating unordered tag sets with operators like next, prev, fby, @, and dependent
operators by using the underlying collection index in the order the elements were stored, meaning that
the order of declaration is not important, but it is sequential. The order may change when using
context calculus operators (Section 7.2.3.1.4, page 165), but this is done prior to navigation.
This is a Forensic Lucid extension to Lucx's context types [473]. Thus, it is not
important in which order the investigator lists the observation sequences in the evidential
statement. It is important, however, that within the observation sequences the observations are
strictly ordered.
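The containment hierarchy just described can be summarized with the following minimal Java sketch (hypothetical names, not the GIPSY type system classes): an evidential statement is an unordered collection of observation sequences, each an ordered list of observations.

import java.util.List;
import java.util.Set;

// Minimal sketch of the forensic context hierarchy described above:
// es (unordered) -> os (ordered) -> o = (P, min, max, w, t).
public final class ForensicContexts {
    record Observation(String property, int min, int max, double w, String t) { }

    // An observation sequence: a finite ordered list of observations (one "story").
    record ObservationSequence(String name, List<Observation> observations) { }

    // An evidential statement: a finite unordered set of observation sequences.
    record EvidentialStatement(Set<ObservationSequence> sequences) { }

    public static void main(String[] args) {
        ObservationSequence witness = new ObservationSequence("witness_account",
                List.of(new Observation("P1", 1, 0, 0.9, "eod"),
                        new Observation("P2", 1, 0, 0.9, "eod")));
        EvidentialStatement es = new EvidentialStatement(Set.of(witness));
        System.out.println(es.sequences().size() + " observation sequence(s) in the statement");
    }
}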
7.5.4 Transition Function
A transition function (described in [136, 137], Section 2.2.2, page 30), derived from Gladyshev et al. [135, 136, 137], determines how the context of evaluation changes during forensic
computation. It represents, in part, the modeling of the case's crime scene as a possible world state.
A transition function ψ is investigation-case-specific (i.e., depends on a particular forensic
case being modeled, and, therefore, is not readily available). In general, it is to be provided
by the investigator, just like in Gladyshev’s Common Lisp implementation. In the FSA
approach, the transition function is the labeled graph itself ([137], Figure 12, page 44).
In general, we already have basic intensional operators to query and navigate from one
possible world state to another (see Chapter 4, page 76). These operators represent the basic
“built-in” transition functions in themselves (the intensional operators such as @, #, iseod,
first, next, fby, wvr, upon, and asa as well as their inverse operators [304] defined earlier).
However, a specific problem being modeled requires a more specific transition function than
just plain intensional operators. In this case the transition function is a Forensic Lucid
function where the matching state transition is modeled through a sequence of invocations
of intensional operators [300, 304, 305, 307, 312]. That is, each state is a possible world, and
intensional operators enable transitions between the states.
See the ACME Printing Case and Blackmail Case remodeled in Forensic Lucid in
Section 9.3 and Section 9.4 respectively in Chapter 9. There we provide the first Forensic
Lucid implementation of ψ, Ψ−1 , and the “main()” (program entry point) in Listing 9.5,
Listing 9.6, and Listing 9.4 respectively [303, 304] from Gladyshev’s cases.
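For the FSA view referred to above, a case-specific transition function can be sketched as a labeled graph; the following Java fragment is illustrative only, and the state and event names are hypothetical placeholders rather than the actual Printing Case encoding of Chapter 9.

import java.util.HashMap;
import java.util.Map;

// Sketch of a case-specific transition function psi as a labeled graph:
// (state, event) -> next state, in the spirit of the FSA approach.
public final class TransitionFunctionSketch {
    private final Map<String, Map<String, String>> graph = new HashMap<>();

    void addTransition(String state, String event, String nextState) {
        graph.computeIfAbsent(state, s -> new HashMap<>()).put(event, nextState);
    }

    // psi: returns the next state, or null when the event is not enabled in the state.
    String psi(String state, String event) {
        Map<String, String> edges = graph.get(state);
        return edges == null ? null : edges.get(event);
    }

    public static void main(String[] args) {
        TransitionFunctionSketch psi = new TransitionFunctionSketch();
        psi.addTransition("queue_empty", "add_job", "job_pending");   // hypothetical events
        psi.addTransition("job_pending", "take_job", "queue_empty");
        System.out.println(psi.psi("queue_empty", "add_job")); // job_pending
    }
}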
7.5.5 Generic Observation Sequences
The generic observation sequence context contains observations whose properties' duration is
not fixed to the min value alone (as in (P, min, 0), omitting w, t). The third position in the observation, max, is non-zero in the generic observation and, as a result, in the containing observation
sequence (e.g., os = (P1, 1, 2)(P2, 1, 1)). (Please refer to Section 2.2.4.5 and [135, 136, 137] for
a more detailed example of a generic observation sequence [304, 305, 312].) We adopt a simple
way of modeling generic observation sequences [137] by lifting the observation sequence type
to an evidential statement type, enumerating all variants in the min = min + max arguments
for all possible max in the original, and setting max = 0 in the generated set via the semantic
rule Qesdim(×) in Figure 52, page 203.
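One plausible reading of this enumeration can be sketched as follows (illustrative Java only; the authoritative definition is the rule Qesdim(×) itself): each generic observation (P, min, max) with max > 0 is expanded into the fixed-duration variants (P, min + k, 0) for k = 0, ..., max.

import java.util.ArrayList;
import java.util.List;

// Sketch of lifting a generic observation (P, min, max) with max > 0 into the set
// of fixed-duration variants (P, min + k, 0), k = 0..max, as used when an
// observation sequence is promoted to an evidential statement.
public final class GenericObservationLifting {
    record Obs(String property, int min, int max) { }

    static List<Obs> variants(Obs o) {
        List<Obs> result = new ArrayList<>();
        for (int k = 0; k <= o.max(); k++) {
            result.add(new Obs(o.property(), o.min() + k, 0));
        }
        return result;
    }

    public static void main(String[] args) {
        // os = (P1, 1, 2)(P2, 1, 1) from the example above: enumerate each element.
        System.out.println(variants(new Obs("P1", 1, 2))); // (P1,1,0) (P1,2,0) (P1,3,0)
        System.out.println(variants(new Obs("P2", 1, 1))); // (P2,1,0) (P2,2,0)
    }
}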
7.5.6 Conservative Extension
We claim Forensic Lucid is a conservative extension [253] of GIPL and other comprising
Lucid dialects despite its comparatively large feature base. That is, programs written in those
dialects are also valid programs (but not necessarily forensically interesting) in Forensic
Lucid, i.e., any virtual machine interpreting Forensic Lucid programs during evaluation
should be capable of interpreting GIPL, Indexical Lucid, JLucid, Objective Lucid,
Lucx, and JOOIP programs.
The extension covers three main aspects: the additional IdEntry’s in Table 13, syntax, and semantics. However, the extensions do not alter the meaning of the programs in
predecessor Lucid systems.
1. Additional IdEntry’s:
Wan already established that Lucx [513] is a conservative extension of GIPL with the
simple context operators and context set operators. The Java member extensions were
added by Objective Lucid et al., again conservatively to GIPL and Indexical
Lucid (this was never explicitly stated, but we may as well do so here). The new forensic
entries covering the forensic contexts and operators are likewise additional entries.
2. Syntax extensions:
The reverse, forensic, probabilistic, and dot operators, as well as the observation, observation sequence, and evidential statement constructs, are additional entities that did
not exist in the comprising dialects; as such, they do not affect the consistency of the original
programs.
3. Semantics extensions:
We augment the semantics of the intensional operators with the credibility weight factor
w as well as the ability to navigate forensic hierarchical contexts. If w = 1, the operators
behave according to their nominal definitions. Any operation on the forensic contexts
is new to Forensic Lucid and does not affect the predecessor dialects.
Thus, we can conclude that Forensic Lucid does not introduce additional inconsistencies into
the predecessor dialects.
7.6 Summary
Forensic Lucid is a contribution that fuses intensional programming and the DSTME.
It offers a new formalization of observations and of their impact on observation sequences and
evidential statements. This formalization addresses the shortcoming in Gladyshev's theory
by providing a more realistic evidence representation with credibility assessment [313], one that is also more scalable and accessible to a wider audience due to the simpler nature of Lucid-based languages.
From the logic perspective, it was shown one can model computations as logic [222].
When armed with context and a demand-driven model adopted in the implementation of the
Lucid family of languages that limits the scope of evaluation in a given set of dimensions
and their tags, we come to the intensional programming artifact. In essence, we formalize
our forensic computation units in an intensional manner. We see a lot of potential for this
work to be successful and beneficial for cyberforensics as well as intensional programming
communities.
The authors of the FSA approach did a proof-of-concept implementation of the proposed
algorithms in CMU Common Lisp (cf. [137, Appendix]), whose usability we improve
by re-writing it in Forensic Lucid in Section 9.3, page 250 and Section 9.4, page 254.
Forensic Lucid’s software architecture design aspects within GIPSY are discussed further
in Chapter 8, page 211 and various use-cases and scenarios are discussed in Chapter 9,
page 244.
Chapter 8
Software Architecture Design
This chapter discusses the proposed software design and implementation aspects behind
Forensic Lucid (presented in the preceding chapter). This includes specific contributions
to GIPSY in terms of the redesign of its GIPC and GEE frameworks to support the Forensic
Lucid compilation and run-time. The architectural design centers around the Forensic
Lucid parser and semantic analyzer, various re-design details of GEE to support the AspectJ
and PRISM backends and the multi-evaluation-backend framework in general, as well as
the production of various data-to-Forensic Lucid encoders. We also discuss the related background work where applicable.
We present the necessary architectural design concepts,
frameworks, and some of their PoC implementation. While our main target evaluation platform is GIPSY (Chapter 6, page 128), the design is meant to be general enough for any
Forensic Lucid-implementing system.
We review, where appropriate, related work that
was not mentioned in earlier chapters and that impacts the proposed design decisions in this
chapter as well as in Chapter 9.
8.1 Forensic Lucid Compiler
The general design approach (Section 6.2.1, page 139) for adding a new SIPL compiler
calls for implementing the IIntensionalCompiler interface augmented with a specific IPL
parser, semantic analyzer, and optionally a translator. Accordingly, we add the corresponding new Forensic Lucid compiler framework to GIPC of GIPSY. One needs to create a
JavaCC [506] grammar and Forensic Lucid-to-GIPL translation rules where applicable
(e.g., see Section 7.3.2.2, page 182) and a possible Java-based implementation of some of the
new operators [305] and constructs that were not translated, such that GEE can evaluate
them at run time. We likewise introduce the FORENSICLUCID FormatTag to handle Forensic
Lucid-specific constructs. These constitute annotations of the AST nodes (similar to the previously introduced ImperativeNodes in [264]) that allow GEE's evaluation engines to deal
appropriately with them when an interpreter encounters such nodes. The FORENSICLUCID
annotation allows the appropriate operators from the GIPSY type system to be invoked at run time.
8.1.1 Forensic Lucid Parser
Following the tradition of many GIPC’s parsers, Forensic Lucid’s grammar (in accordance
with its syntax presented in Section 7.3, page 166) is specified in a particular grammar format.
We use Java Compiler Compiler (JavaCC) [506] to generate the parser for Forensic Lucid.
The resulting current grammar specification is in GIPSY’s CVS repository [366].
8.1.2 Forensic Lucid Semantic Analyzer
Forensic Lucid’s semantic analyzer’s design calls for it to be primarily an extension of
the Lucx’s semantic analyzer [473], primarily because Lucx is not fully translated into
GIPL and because Forensic Lucid adds new constructs, such as forensic contexts and the
DSTME that don’t have yet any known translation algorithm into GIPL. Thus, the programming artifact ForensicLucidSemanticAnalyzer is created to account for the new node types
in AST corresponding to the extensions (primarily the forensic and Lucx context types and
context calculus operators presented in Section 7.2.3.1.3, page 164 and Section 7.3.3, page 183
respectively). ForensicLucidSemanticAnalyzer capitalizes on the earlier semantic analyzers implemented by Tong [473] and Wu [527].
ForensicLucidSemanticAnalyzer’s responsibility is not ensure the static compiled AST
and the Dictionary of identifiers adhere to the declarative aspects of the Forensic Lucid
language’s operational semantics described in Section 7.4, page 192. The semantic analyzer
212
traverses the AST that came out of the JJTree tool of JavaCC top-down/depth-first to do
the static type-checking, identifier scope and definition checks, initial rank analysis, and
other typical semantic analysis tasks [527]. The differences from the traditional GIPL- and
Indexical Lucid-based dialects, additional type checks are done for Lucx-derived simple
contexts, context sets, and specifically tag sets [473].
Since not all of the semantic checks can be done at compile time, the run-time evaluation engines perform the remaining checks during program execution. However, the majority of
the QQ rule's declarations in Forensic Lucid (rule 7.4.1.18, Figure 46, page 198) that correspond to the evidential context specification are usually all statically specified, either from
log files or manually by the investigator.
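To illustrate the kind of static checks involved (a hypothetical sketch, not the actual ForensicLucidSemanticAnalyzer code), a check over an observation declaration might verify that min and max are non-negative integers and that the credibility w is a floating-point value within [0, 1].

// Hypothetical sketch of static checks a ForensicLucidSemanticAnalyzer-like pass
// could apply to an observation declaration (E, min, max, w, t): component types
// and ranges are validated before run-time evaluation.
public final class ObservationDeclarationCheck {
    static void check(String id, int min, int max, double w) {
        if (min < 0 || max < 0) {
            throw new IllegalArgumentException(
                    "observation " + id + ": min and max must be non-negative integers");
        }
        if (w < 0.0 || w > 1.0) {
            throw new IllegalArgumentException(
                    "observation " + id + ": credibility w must be in [0, 1]");
        }
    }

    public static void main(String[] args) {
        check("o1", 1, 0, 1.0);       // passes: a default-like observation
        try {
            check("o2", 1, 0, 1.5);   // fails: w out of range
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}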
8.2 Forensic Lucid Run-time System Design
In the Forensic Lucid run-time, GEE is instantiated to spawn concurrent threads of evaluation
of the same Forensic Lucid GEER by any or all of the designed backends by default, as detailed
further below. AspectJ (--aspectj, implies --flucid) is for forward tracing, potentially as an
optimization for just tracing the normal flow of execution ψ instead of backtracing. The
PRISM backend is invoked with --prism in addition to, or to skip, the normal backend. A
regular eductive ForensicLucidInterpreter for traditional GIPL-like evaluation is made
available via ForensicGEE (Figure 61, page 225). What follows is the detailed design description surrounding these components.
8.2.1 AspectJ Backend Design
Aspect-oriented programming (AOP) has ties with the intensional programming paradigm
when it comes to the notion of context [95]. Specifically, Du previously proposed [95] a
relationship between the two programming paradigms in 2005. The AspectJ language [29]
is an extension of the Java language that adds AOP capabilities to Java programs. While
AOP's implementation in AspectJ is mostly a software-engineering-practices-oriented tool, it
can help resolve implementation issues with tracing the forensic evaluation process.
The AOP paradigm has also been applied to systematic security hardening aspects in
software systems design patterns by Laverdière, who proposed a related Security Hardening
Language (SHL) in 2007 [228].
8.2.1.1 Tracing Program Execution
The GEE (Section 6.2.2, page 142) of GIPSY is the intensional demand-driven evaluation
engine. When it executes a Lucid-dialect program, it is natural to trace it. The trace can
be that of the intensional program itself as well as of its hybrid components, e.g., those written
in Java. The proposed design covers the implementation of an eductive system in Java
and AspectJ. The designed example of an execution trace of an Objective Lucid [264]
program is in Figure 25, page 87. Such a trace is, for example, very useful for the proposed
cyberforensic evaluation [267, 291].
8.2.1.2 AspectJ Run-time Environment
The GEE’s executor’s (gipsy.GEE.Executor) design is adjusted to produce the required
traces similarly to the debug mode. Additionally, the AspectJ’s joint points [29], like before
and after triggers are specifically of use, of which the GEE itself is unaware, and such an
execution trace can be used as the explanation of a story told by a the evidence and witnesses.
The exact place of putting such tracing is in the implementation of intensional operators. Due
to the GIPL engine’s fine granularity of the execution (only @ and # intensional operators
along with context set operators and classical operators are executed), another instance
(extension) of the engine has to be made, implemented in AspectJ, that is capable of doing
tracing on the higher-level forensic operators to provide a more meaningful trace. Its design
also accommodates for the new forensic context types [291, 305].
An AspectJ wrapper is applied to the GEE, specifically around the engine’s implementation of the run-time system’s core that does the actual evaluation. A problem was identified
with the outlined designed approach is that the GEE’s interpreter implementation either too
coarse-grained when it comes to procedural demands too fine grained when it comes to all
the intensional operators translated to @ and #. Therefore, the design of the new intensional backend is undertaken for better tracing of forensic evaluation and its presentation.
However, the advantage of using AspectJ for this task is that an AspectJ wrapper of the
214
engine can be written without much, if any, alteration of any current engine running.
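As a sketch of this wrapping idea (annotation-style AspectJ expressed in Java syntax, assuming the AspectJ runtime on the classpath; the pointcut expression and package are hypothetical placeholders, not actual GEE classes), an aspect can intercept invocations of forensic operator implementations and log them before and after evaluation to build such a trace.

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Hypothetical annotation-style AspectJ sketch: wrap evaluations of forensic
// operator implementations to log a trace before and after each evaluation,
// leaving the engine code untouched. The pointcut expression and package name
// are illustrative placeholders, not actual GEE classes.
@Aspect
public class ForensicTraceAspect {

    @Around("execution(* gipsy.GEE.forensic.operators..*.evaluate(..))")
    public Object traceOperator(ProceedingJoinPoint jp) throws Throwable {
        System.out.println("demand: " + jp.getSignature().toShortString());
        Object result = jp.proceed(); // run the wrapped operator evaluation
        System.out.println("result: " + jp.getSignature().toShortString() + " -> " + result);
        return result;
    }
}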
8.2.2 Probabilistic Model Checking Backend Design
Model checking has to do with defining a model of a system or component or concept usually
as a state machine, and then stating a hypothesis or a claim to see if it agrees with the
model [35]. Model checking was applied in various areas, including specifically autonomic
computing [502], security metrics for network security [123, 457], and UML/SysML models [190]. Probabilistic model checking is an extension that takes into account probabilities
associated with the model and its transitions.
Ritchley was one of the first to use a model checker to analyze network vulnerabilities [402].
The network is modeled as a graph, along with goal states (representing the desired attack
target), initial conditions, and other necessary details. Then, an assumption is made
that there is no way an attacker can reach a goal state. If indeed there is no such path, the
model checker responds with true. If there is a path, the model checker in Ritchley's
case responds by giving a single path that an attacker can follow to reach the goal.
Subsequently, Sheyner et al.'s [427] methods improved on Ritchley's work by altering the model
checker such that instead of giving only a single counterexample, the model checker gives
all the paths leading to the goal. The main critique of these model-checking methods was the
eventual state explosion problem for large and complex problems (networks), and the methods
were "abandoned". The forensic computing approach presented in Gladyshev's work may suffer
from the same issues. The attack graph community in the above examples switched to
DAGs and graph search techniques to avoid such problems. We believe, however, that the eductive context-oriented manner of computation can address some of these problems by computing
only the paths that are needed by the context specification, in a demand-driven manner, with
the resulting values cached in the scalable DST for later use, even if the underlying problem
is complex. (This may bring the model-checking aspect back to the attack graph security
researchers as well.)
8.2.2.1 PRISM
PRISM [467] is a probabilistic model checker with a simple input language syntax and
semantics. Internally, it implements various modules for decision processes, such as the
Markov Decision Process (MDP), among others. The basic core syntax of PRISM is in Figure 54;
it is in essence a series of declarations of variables with the associated actions and guards
(similar to UML state diagrams), the associated probabilities pi, and a reward/cost
specification [190]. In Figure 55, Figure 56, and Figure 57 are the formal syntax and operational
semantics of PRISM recited from Jarraya's specification [190].
Figure 54: PRISM input language basic syntax [190]
Figure 55: PRISM input language syntax (1) [190]
8.2.2.2 Model-Checking with PRISM
Once the probabilistic model in Forensic Lucid is built (the evidential statement and the transition
functions), it is verified (concurrently with the general evaluation) by checking the model with
the probabilistic model-checking tool PRISM [467]. As a result, the design of one of the
GIPSY evaluation engine backends of a Forensic Lucid program generates translated
PRISM code, which is then passed on to the PRISM tool itself [313].
We considered and decided against another possibility of translating directly from the
Forensic Lucid specification to PRISM at compile time (i.e., in GIPC). That translation
does not necessarily retain the operational semantics and prohibits the traditional eductive
and parallel evaluation of Forensic Lucid programs that would normally be available in
Figure 56: PRISM input language syntax (2) [190]
Figure 57: PRISM input language operational semantics [190]
GEE [313]. It is also a factor that GEE deals with the core intensional operators (Section 7.3.2, page 169) translated into @w and #w (Section 7.3.2.3, page 183), which are
uniformly translatable to much simpler PRISM constructs.
The design of the PRISMWrapper backend includes interpretation of the Forensic Lucid
semantics in the compiled GEER while matching it to the PRISM input language operational
semantics (recited in Figure 57 as fully introduced and specified by Jarraya in [190]). As the
Forensic Lucid interpreter descends down the Forensic Lucid AST (which follows the
Forensic Lucid operational semantics), it generates the appropriate PRISM commands, taking
into account the w of the observations involved. This translation would not be of much use to
non-Forensic Lucid programs since it concerns only the forensic contexts that have the w
assigned to be used in the PRISM models.
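To make the intended translation more concrete, the following simplified Java sketch (not the PRISMWrapper design itself; the emitted model is schematic and the mapping is an assumption for illustration) turns the credibility weights of a sequence of observations into transition probabilities of PRISM guarded commands of the general form shown in Figure 54.

import java.util.List;
import java.util.Locale;

// Simplified sketch of emitting a PRISM DTMC module from a sequence of observation
// credibility weights: each step advances with probability w and falls into a
// "discard" sink state with probability 1 - w. This is schematic and is not the
// actual PRISMWrapper translation.
public final class PrismEmitterSketch {
    static String emit(List<Double> weights) {
        int sink = weights.size() + 1;
        StringBuilder sb = new StringBuilder("dtmc\n\nmodule evidence\n");
        sb.append("  s : [0..").append(sink).append("] init 0;\n");
        for (int i = 0; i < weights.size(); i++) {
            double w = weights.get(i);
            sb.append(String.format(Locale.ROOT,
                    "  [] s=%d -> %.2f : (s'=%d) + %.2f : (s'=%d);%n",
                    i, w, i + 1, 1.0 - w, sink));
        }
        sb.append("endmodule\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        // Credibility weights of three observations in an observation sequence.
        System.out.print(emit(List.of(0.9, 0.85, 0.6)));
    }
}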
8.3 Updates to GIPSY's Frameworks' Design
There were a number of design changes planned and carried out in various GIPSY frameworks, including GIPC and GEE, to support Forensic Lucid (among other systems, such
as MARFCAT). This includes the type system and algebras support (Appendix B), intensional demands (Chapter 6), and substantial refactoring. GEE's Lucid interpretation module
(Interpreter) had to be upgraded to be GIPSYContext-aware and to use the other GIPSY Type
System's types consistently (its previous iteration [241] used arrays of integers to
represent the context of evaluation and did not support any of the new Lucx constructs).
Along with other GIPSY R&D team members, we have undertaken the standardization of multi-tier behavioral aspects to
enable scalable Forensic Lucid evaluation, work on the type system, and the development of the physical infrastructure and configuration (GIPSY cluster lab setup, Section 8.6).
The author has been responsible for the major physical software architecture re-design and
core API specification and unification, along with partial implementation of the mentioned
frameworks, with Tong [473], Han [160], and Ji [191] providing complete implementations of their
related parts on the Lucx compiler and the multi-tier DMS in their master's theses, and Wu integrating the JOOIP work on syntax and semantic analysis in her PhD thesis [528].
Figure 58: Forensic Lucid compilation and evaluation flow in GIPSY [322]
In Figure 58 [322] is a general conceptual design overview of the Forensic Lucid compilation and evaluation process involving various components and systems. Of main interest
to this work are the inputs to the compiler—the Forensic Lucid fragments (hierarchical
observation sequence contexts representing the encoded evidence and witness accounts) and
programs (descriptions of the crime scenes as transition functions)—which can come from different
sources. The evidential knowledge of the case and the crime scene model are combined to form a Forensic Lucid program. The complete specification is then processed by
the compiler depicted as GIPC on the image (the General Intensional Program Compiler)
through the invocation of the Forensic Lucid SIPL compiler that is aware of the Forensic Lucid constructs, such as the forensic contexts, their properties along with credibility
weights, etc. and operators detailed in Section 7.3, page 166. The compiler produces an
intermediate version of the compiled program as an AST and a contextual dictionary of all
identifiers (e.g., observations among other things), encapsulated in a GEER that evaluation
engines (under the GEE component) understand. GEER linking is optionally needed if either
hybrid compilation of, e.g., JOOIP and Forensic Lucid takes place, or a multi-language
program is compiled where each compiled fragment has its own AST and they need to be
liked together as a single compiled program (see Section 6.2.1, page 139). GEER linking for
a pure Forensic Lucid program is a no-op. The compiled GEER and engine configuration
(either a default, or a custom file) are fed to GEE for processing by one or more evaluation engines. The configuration tells GEE which engines and their run-time setting to use.
The said Forensic Lucid evaluation engines are designed to use the traditional eduction,
AspectJ-based tracing, and probabilistic model checking with PRISM [467] to cover the
comprehensive subset of computations and provide a backtrace of event reconstruction [311]
and can run concurrently processing the same GEER.
In Figure 59 is the updated high-level structure of GIPC (cf. Figure 33, page 138 in
Chapter 6). This design incorporates a new framework for semantic analyzers that may
behave differently for different SIPLs (Section 4.1.1.2, page 78). The modifications (detailed
further) comprise the fact that SIPLs may produce an AST that is not necessarily translated
(fully, or partially) into GIPL (as is the case of Lucx and Forensic Lucid) bypassing the
SIPLtoGIPLtranslator. For such cases, the specific semantic analyzers are created to work
Figure 59: Updated high-level structure of the GIPC framework
specifically with such ASTs. The specific semantic analyzers are aware of any additional
types in the type system particular to the language in question and rely on its validating
methods, such as, e.g., statically declared tag set types, observation’s components are proper
types (credibility w is a float, min and max are integers, etc.). In the figure, the semantic analyzers,
translators, and GEER linkers are conceptually represented as one class simply because these
modules work together closely. At the implementation level, they are separate Java classes.
At present, the Semantic Analyzers Framework is represented by the ISemanticAnalyzer
Figure 60: Semantic analyzers framework
interface and has three concrete instances implementing it: SemanticAnalyzer, which is the
general analyzer that is originally capable of handling of classical GIPL, Indexical Lucid,
JLucid, Objective Lucid, and JOOIP programs (combined with the procedural code
linking for the hybrid dialects in ProcedureClassGenerator) contributed to by Wu [527,
528] and Mokhov [264]. Then follow LucxSemanticAnalyzer, produced by Tong [473], and
the ForensicLucidSemanticAnalyzer for the work presented here on Forensic Lucid
following the inheritance hierarchy for convenience and code re-use for many of the semantic
rules. All semantic analyzers adhere to the same API and produce a GEER, corresponding to
their syntactical and semantic annotations. In Figure 60 is the class diagram corresponding
to the just described framework. ForensicLucidSemanticAnalyzer primarily inherits from
LucxSemanticAnalyzer to work with tag set, simple context, and context set type checks
as well as context calculus operators. Additional checks are added to work with forensic
contexts and operators. Any class implementing ISemanticAnalyzer can now be added to
the GIPC framework as a plug-in, should there be a need to do so for future extensions,
dialects, and experimentation.
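The plug-in idea can be sketched as follows (hypothetical interface and method names; the actual ISemanticAnalyzer API lives in the GIPC code base): every analyzer consumes an AST and produces a GEER-like result, so a new dialect only needs to supply another implementation of the shared interface, reusing its parent's checks through inheritance.

// Hypothetical sketch of the semantic-analyzer plug-in idea: one shared interface,
// several dialect-specific implementations reusing each other through inheritance.
// Type and method names are illustrative, not the actual GIPC API.
interface SemanticAnalyzerPlugin {
    Object analyze(Object ast); // returns a GEER-like compiled artifact
}

class GeneralAnalyzer implements SemanticAnalyzerPlugin {
    @Override
    public Object analyze(Object ast) {
        // checks common to GIPL, Indexical Lucid, JLucid, Objective Lucid, JOOIP
        return ast;
    }
}

class LucxAnalyzer extends GeneralAnalyzer {
    @Override
    public Object analyze(Object ast) {
        // additional checks: tag sets, simple contexts, context sets
        return super.analyze(ast);
    }
}

class ForensicLucidAnalyzer extends LucxAnalyzer {
    @Override
    public Object analyze(Object ast) {
        // additional checks: forensic contexts (o, os, es) and their operators
        return super.analyze(ast);
    }
}

public final class AnalyzerPluginDemo {
    public static void main(String[] args) {
        SemanticAnalyzerPlugin analyzer = new ForensicLucidAnalyzer();
        System.out.println(analyzer.analyze("compiled AST placeholder"));
    }
}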
8.3.1 Engine Redesign
Semantically speaking, the Lucid program interpretation happens at the generator components, DGTs (Section 6.2.2.1.6, page 146), because this is where the eductive evaluation
engines reside. Then it can take various routes: be local, multi-threaded, or distributed via
the multi-tier DMS (Section 6.2.2.1, page 143). The aforementioned compiled GIPSYProgram
(a GEER instance) containing the annotated AST as well as the Dictionary of identifiers
(corresponding to the initial D0 ), and definitions, is passed to GEE (Figure 58, page 219).
GEE then hands them over to an interpreter. Traditionally, there was only a single interpreter, with compiled GIPL as its only input language. Then this design morphed into a
more general architecture to allow more than one interpreter and demand generator types.
Thus, to add interpretation non-translated-to-GIPL dialects (e.g., Lucx and Forensic Lucid), one first and foremost has to replace the Interpreter component in the DGT based
on the either a command-line option or the annotated node tag in the GEER’s AST (e.g.,
a FormatTag [264] of FORENSICLUCID in our case). In this process the DST, the primary
demand propagation machinery, does not change. However, the worker side of DWT may occasionally change, depending on the type of procedural demands (Section 6.2.2.1.8, page 147)
to be processed (such as in the case of MARFCAT DWT).
Framework support for this was needed and was provided to allow multiple evaluation engines to interpret new input language material. In the case of Forensic Lucid, we
override DGT's Executor to be able to spawn one of the three evaluation engines, and the
eductive ForensicLucidInterpreter overrides Interpreter. As a side benefit, this also
includes support for so-called "null"-languages of, e.g., applications (found in gipsy.apps)
such as MARFCAT (gipsy.apps.MARFCAT) and MARFPCAT (gipsy.apps.MARFPCAT) from
Chapter 5, OCTMARF [319] (gipsy.apps.OCTMARF), or the genome sequence alignment application
(gipsy.apps.memocode.genome), which do not currently have a compilable Lucid-family language front-end but could still use the multi-tier DMS middleware of GEE for distributed
computation. For the mentioned applications, problem-specific (PS) DGTs and DWTs were created, wrapping the functionality of the applications into the demand-driven model, e.g.,
MARFCATDGT, MARFCATDWT, and the like.
Thus, non-GIPL engine software layer differentiation became necessary for problem-specific tiers as well as non-GIPL languages to be able to take advantage of the GEE's
distributed middleware infrastructure, by improving the GEE framework and making it more
flexible.
That allows various programming language paradigms to be included into GIPSY that are
not necessarily compatible with GIPL, or for which the translation effort to undertake is too great to be
effective, error-free, or efficient [473]. This can allow, for example, pLucid, TransLucid,
and MARFL run-times. However, the specific interpreter run-times need to be inserted
into the GEE framework by implementing the documented interfaces and the operational
semantics of these languages.
8.3.1.1 Evaluation Engines' Components
The following is the design related to the various evaluation engine components. The invocation
of the desired evaluation can be configured or directly invoked by the user via the GMT or its
RIPE front-end, allowing selection of the evaluation engines from the GUI or via command-line options, of which currently the latter is implemented. In Figure 61 is the overall design
connecting the various related GEE components.
• IEvaluationEngine—all run-time evaluation engines within the GEE framework are
to implement this API. It also allows for external plug-ins.
• IDemandGenerator is the interface annotating any component that can participate in
demand generation activity and DemandGenerator its generic implementation. All Lucid interpreters are by default meant to generate demands for eductive execution (local
non-eductive implementations simply implement no-op demand generation functionality). DemandGenerator also connects interpreters with the multi-tier DMS (discussed
further).
• Executor + LegacyInterpreter—compose the classical GIPL AST interpreter refitted back into the new framework. LegacyEductiveInterpreter is an update to
LegacyInterpreter to actually benefit from the demand generation as the classical
LegacyInterpreter did everything locally.
• Interpreter is a classical interpreter rewritten to use directly the GIPSY Type System.
• ForensicLucidInterpreter is forensic-context and context-operators aware extension
of the Interpreter.
• ForensicGEE is simply an extension of the Executor designed to work by default with
the ForensicLucidInterpreter.
• AspectGEE—a designed AOP extension stub of the Executors.
• PRISMWrapper—a designed non-eductive stub for evaluation of the GIPSYProgram such
that its “execution” yields a PRISM program that eventually is fed to the PRISM
itself [467].
Figure 61: Evaluation engines class diagram
Thus, the Forensic Lucid GIPSYProgram GEER, depending on the configuration, is
simultaneously passed to ForensicGEE and PRISMWrapper independent evaluation backends;
with AspectGEE optionally observing ForensicGEE for tracing purposes in our design.
8.3.1.2 Backtracing
The design of the backtracing in the ForensicGEE interpreting backend includes the execution
of the reverse Forensic Lucid operators, which is an extension of the classical Executor
backend (Figure 61).
8.3.1.3 Refactoring Demand Generators vs. Dispatchers within DGT
To enable the different evaluation engines and interpreters presented in the previous sections and in Figure 61, a refactoring re-design effort had to be undertaken [193] to allow Forensic
Lucid computation (and, as a by-product, the problem-specific application DGTs mentioned earlier). The illustration of some of the concepts described in this section is in Figure 62. As
a part of the refactoring work, we discerned the relationship between and clearly defined
the roles of the demand generators and the middle-ware specific demand dispatchers within
the DGT. Since generators do not necessarily use the distributed DMS’s middleware (Jini
and JMS) in their interpreters, the dispatching logic was redefined to be invoked by the
generators when needed to make a demand by the interpreter. The notion of the dispatcher
was originally designed by Vassev [501] and had the roles of the dispatcher and generator
intertwined and reversed requiring decoupling of the generators from the knowledge of the
specific middleware technology used.
In accordance with this refined design, a Demand Generator in DGT maintains a local list
of demands, “My pending demands” for Forensic Lucid program identifiers. The demands
on that list are said to be generated (“issued”), but have not come back yet with results,
i.e., they preserve their PENDING state, in case this list ever to be re-issued to another DST.
Issuing of the demands is generally non-blocking, but is subject to the program semantics
incorporated into the GEER’s AST. An AYDY (are-you-done-yet) thread in the Generator
periodically checks if any results have come back for the demands on the local pending list
with the Dispatcher. They get removed from the list when the result comes back and returned
to the user.
The local demand list in the Generator is expected to be most frequently a size of one
(at a time) for a single Forensic Lucid program, i.e., the root node of the AST. This
may change, however, if the Generator picks up intensional demands (effectively subtrees) of
its own or others' programs that are in the associated store. It is technically also possible to issue
demands for identifiers without data dependencies; e.g., for a demand for B + C, a
demand for B and a demand for C could both be in the pending demands list of the generator.
The Generator (ForensicLucidInterpreter in our case) is aware of the Forensic Lucid program semantics and the AST structure, but is unaware of the transport agents (TAs),
DSTs, communication mechanisms, and other lower-level middleware technological aspects
(Section 6.2.2.1, page 143). The Dispatcher is unaware of the program's overall structure and
semantics (except for any annotations pertinent to scheduling not described here), but knows
Figure 62: Demand Generator and Dispatcher relationship
about TAs, communication protocols, and scheduling strategies. The Generator runs at
the CPU speed; the Dispatcher runs at the communication speed, including all the latencies and
marshaling/demarshaling costs. Thus, the Generator–Dispatcher pair of the evaluation engine is analogous to the two halves of a device driver: the general upper half is device-independent,
runs at the CPU speed, and is known to all, while the lower half is device-dependent and driven by communication events from the device. The two halves are synchronized via a
buffer. In this analogy, with the device replaced by the communication technology, the selection of
strategy and the absorption of the inherent latency are handled by the Dispatcher, while the upper
half, which concerns itself more with the semantics of the GIPSY program (e.g., a Forensic Lucid
application), is the Generator.
The Generator issues the demands to the Dispatcher via the Dispatcher’s API. The Dispatcher may be aware of some preliminary scheduling strategies and the associated communication TAs for the demand transfer to the DST. The Dispatcher always maintains a link
to the local DST via a Dummy TA [191, 501]. Additionally, it keeps its own “Local demand
pool” [160, 362] as a buffer, into which the PENDING demands from the Generator are deposited
via the Dispatcher’s API. The TAs then, in their separate threads of execution, work with
that local pool buffer to deliver the demands lazily to a DST via an appropriate TA. The
pool is an input/output buffer that also harvests the evaluation results from the DST before
the Generator’s AYDY thread picks them up for delivery to the main application.
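To make the Generator-Dispatcher interplay described above more concrete, the following minimal Java sketch illustrates the pending-demands list, the Dispatcher's local demand pool buffer, and the AYDY polling loop. The class and method names (Demand, dispatch(), pollResult(), aydyLoop()) are hypothetical illustrations and not the actual GIPSY API.

    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical sketch of the Generator-Dispatcher pair; not the actual GIPSY API.
    class Demand { final String id; Object result; Demand(String id) { this.id = id; } }

    class Dispatcher {
        // "Local demand pool": an input/output buffer between the Generator and the TAs.
        private final BlockingQueue<Demand> pool = new LinkedBlockingQueue<>();
        private final Map<String, Object> results = new ConcurrentHashMap<>();

        void dispatch(Demand d) { pool.offer(d); }                  // deposit a PENDING demand
        Object pollResult(String id) { return results.remove(id); } // result harvested from the DST

        // A transport agent (TA) thread would take demands from the pool, deliver them to a DST,
        // and deposit the results into the results map (the communication-speed side).
    }

    class DemandGenerator {
        private final Map<String, Demand> myPendingDemands = new ConcurrentHashMap<>();
        private final Dispatcher dispatcher = new Dispatcher();

        // Non-blocking issue of a demand for a Forensic Lucid identifier (the CPU-speed side).
        void issue(String identifier) {
            Demand d = new Demand(identifier);
            myPendingDemands.put(identifier, d);
            dispatcher.dispatch(d);
        }

        // AYDY ("are-you-done-yet") thread body: periodically check with the Dispatcher.
        void aydyLoop() throws InterruptedException {
            while (!myPendingDemands.isEmpty()) {
                for (String id : myPendingDemands.keySet()) {
                    Object r = dispatcher.pollResult(id);
                    if (r != null) {
                        myPendingDemands.remove(id).result = r; // deliver to the application
                    }
                }
                Thread.sleep(100); // polling interval
            }
        }
    }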
If the Generator dies, its local pending demands list goes with it. However,
assuming the demands were successfully dispatched, stored, and computed, and the results deposited
in the DST, a subsequent restart of the Generator by the user for the same program would
just return the results cached in the DST. As a result, a crash of the Generator and the loss
of its state are tolerable, which makes the forensic computation more reliable. If the
Dispatcher thread dies and its local demand pool buffer goes with it, then, assuming everything else
is healthy, the AYDY thread in the Generator will, upon a timeout, re-issue the Generator’s
pending demands, refilling the Dispatcher’s local demand pool, which will later be filled in
with the results from the DST the Dispatcher communicates with via TAs. The local DST
store also harbors system demands for the Generator-Dispatcher unit within their DGT
container.
This design was a necessary refinement of Paquet’s conceptual multi-tier design and
of Mokhov’s integration of that design into the actual code base, with some implementation,
followed by the detailed implementation efforts of Han [160] and Ji [191] and further refinement
of the multi-tier aspects of the design. Without it, the Forensic Lucid backends would
have to be unjustifiably aware of Jini, JMS, RMI, or any new communication middleware
technology added to GEE later. It also benefits non-Forensic Lucid backends.
The adoption of such an architecture for Forensic Lucid programs is particularly important for scalable processing of large evidential statements, including avoiding the computational
processing of any duplicate forensic contexts, which may happen frequently in large logs.
8.3.2 Configuration Management
Following the ideas of marf.Configuration and MARFL, gipsy.Configuration was created to contain name-value pairs of the currently running, updatable, nested configuration of
GEE. In a Forensic Lucid run-time instance, the configuration tells the main GEE which
backends to start and any of their specific configuration details. This includes configuring the
multi-tier DMS (e.g., picking the Jini or JMS middleware and their settings) to evaluate large
evidential data collections, the configuration of the DST, and the overall GIPSY network the investigator anticipates is needed to evaluate the problem at hand efficiently. In Figure 58,
page 219, the configuration is provided to GEE by the investigators (along with the compiled
GEER) according to their requirements and the size of the case problem to evaluate. There
are default settings that are otherwise used for local non-distributed evaluation.
• oConfigurationSettings is a Properties type of object encapsulated by the class.
It is properly synchronized and Serializable.
• Some of the Properties’ API is mirrored onto Configuration, so it can also be serialized as a text name=value file or XML, conveniently provided by Java.
• Upon initialization it sets initial defaults for the root path of the classes for dynamic
discovery as well as policy configuration files for Jini and JMS.
• For Forensic Lucid, it sets ca.concordia.cse.gipsy.GEE.backends.flucid to
all by default, causing the startup of the three evaluation backends presented. The options
aspectj, prism, and eductive are also individually available and can be specified as
comma-separated values. Thus, all is a shorthand for aspectj,prism,eductive in
our design.
Ji later made use of this design and elaborated its concrete implementation further as
required for his scalability study [191].
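As a rough illustration of the design just described, a minimal Java sketch of a Properties-backed configuration wrapper follows. The names mirror the description above, but the sketch is an assumption-laden simplification and not the actual gipsy.Configuration code.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.io.Serializable;
    import java.util.Properties;

    // Illustrative sketch only; the real gipsy.Configuration differs in detail.
    public class Configuration implements Serializable {
        private static final long serialVersionUID = 1L;

        // Encapsulated, synchronized name-value settings (Properties is thread-safe).
        private final Properties oConfigurationSettings = new Properties();

        public Configuration() {
            // Default analogous to the description above: backends to start for Forensic Lucid.
            // "all" is a shorthand for "aspectj,prism,eductive".
            oConfigurationSettings.setProperty("ca.concordia.cse.gipsy.GEE.backends.flucid", "all");
        }

        // Part of the Properties API mirrored onto Configuration.
        public synchronized String getProperty(String name) {
            return oConfigurationSettings.getProperty(name);
        }

        public synchronized void setProperty(String name, String value) {
            oConfigurationSettings.setProperty(name, value);
        }

        // Serialization as a text name=value file or as XML, as conveniently provided by Java.
        public synchronized void store(OutputStream out, String comment) throws IOException {
            oConfigurationSettings.store(out, comment);
        }

        public synchronized void storeToXML(OutputStream out, String comment) throws IOException {
            oConfigurationSettings.storeToXML(out, comment);
        }
    }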
Figure 63: Configuration UML class diagram
8.3.3 Compile-Time Annotations
Compile-time annotations were mentioned as a means to annotate imperative nodes, Lucx nodes, or
Forensic Lucid nodes in the AST. Imperative nodes were originally introduced to denote
coarse-grained units of computation, such as Java methods (which become procedural demands in GIPSY’s multi-tier DMS presented in Section 6.2.2.1, page 143). The author
originally introduced the annotations framework’s design and the corresponding interface to
allow different scheduling strategies and metrics within the GEE run-time, to implement the
ideas in Ben-Hamed’s thesis [159] for follow-up scalability investigations that are a part of the
future work (Section 10.4.3, page 289). The annotations became useful in the Forensic Lucid
(and Lucx) run-time design to invoke the appropriate evaluation engine and type system
components that are capable of processing untranslated (to GIPL) Forensic Lucid nodes.
• IAnnotation, in the package gipsy.interfaces
• NodeAnnotation (of the AST nodes), as per Figure 64
• An example annotation from [264] is the ImperativeNode, instances of which are procedural demand-related.
Figure 64: Annotations class diagram
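As a hedged illustration of the annotation framework outlined above, the following minimal Java sketch shows what such an annotation interface and an AST node annotation might look like; the member names and the payload field are illustrative assumptions, not the exact gipsy.interfaces code.

    // Illustrative sketch of the annotation framework; simplified from the description above.
    interface IAnnotation {
        String getName(); // e.g., "ImperativeNode" or "ForensicLucidNode"
    }

    // An annotation attached to an AST node, carrying, e.g., scheduling or backend hints.
    class NodeAnnotation implements IAnnotation {
        private final String name;
        private final Object payload; // e.g., a scheduling metric or a backend selection hint

        NodeAnnotation(String name, Object payload) {
            this.name = name;
            this.payload = payload;
        }

        public String getName() { return name; }
        public Object getPayload() { return payload; }
    }

    // Example usage: marking a node as procedural-demand-related (cf. ImperativeNode in [264]).
    class AnnotationExample {
        NodeAnnotation imperative = new NodeAnnotation("ImperativeNode", "java-method:foo()");
    }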
8.3.4 Type System
Implementation-wise, Forensic Lucid relies on the GIPSY Type System (Appendix B) first
introduced in [264] and refined in [301, 315, 365, 473]. The new context types are encoded
correspondingly to the subclasses of GIPSYContext, Dimension, and TagSet types (defined
in [473]). These include the dimension types observation (Observation), observation
sequence (ObservationSequence), and evidential statement (EvidentialStatement)
that all extend Dimension. Likewise, GIPSYForensicContext is provided. The instances
of the type GIPSYOperator are designed to hold the definitions of the new operators [305],
including, e.g., bel and pl. While Appendix B discusses the GIPSY type system in detail, the
forensic extensions are illustrated in Figure 65. The package gipsy.lang.context.forensic
is added to contain the physical classes corresponding to the forensic contexts.
Lucx’s TagSet and its derivatives are updated to use marf.util.FreeVector, which is an
extension of java.util.Vector that allows vectors of theoretically infinite length (bounded
by the memory available to the JVM), so it is possible to set or get an element of the vector
beyond its current physical bounds [264]. Getting an element beyond the boundaries returns
null, as if the object at that index was never set [264]. Setting an element beyond the bounds
Figure 65: Forensic extensions to the GIPSY Type System
automatically grows the vector to that element. In GIPSY, marf.util.FreeVector
is used as a base for Dictionary [264] and as a collection of Dimensions with their tag
sets in GIPSYContext. It is the index within an instance of this class that determines
the behavior of getting the next, previous, etc. elements of finite tag sets, regardless of whether
they are declared ordered or unordered in Forensic Lucid.
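A minimal Java sketch of the growable-vector behavior described above (getting beyond the bounds returns null, setting beyond the bounds grows the vector) follows; it only approximates marf.util.FreeVector and is not its actual implementation.

    import java.util.Vector;

    // Approximate sketch of the marf.util.FreeVector behavior described above.
    public class GrowableVector<E> extends Vector<E> {

        // Getting an element beyond the current bounds returns null,
        // as if the object at that index was never set.
        @Override
        public synchronized E get(int index) {
            return index < size() ? super.get(index) : null;
        }

        // Setting an element beyond the current bounds grows the vector to that element.
        @Override
        public synchronized E set(int index, E element) {
            if (index >= size()) {
                setSize(index + 1); // pads the intermediate slots with null
            }
            return super.set(index, element);
        }
    }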
8.4 Forensic Lucid Component Testing
The testing framework design for Forensic Lucid largely follows the testing principles used
by the previous GIPSY-related projects: modules in the tests package (including plain tests,
source input files, and JUnit-based tests) test-drive the main code. We also update the
Regression application to automate some of the non-shell-scripting aspects of testing. The
following tests are designed for the Forensic Lucid compiler:
• Test all GIPL examples still parse
• Test all GIPL examples still pass semantic check
• Test all Lucx examples still parse
• Test all Lucx examples still pass semantic check
• Test all Forensic Lucid examples still parse
• Test all Forensic Lucid examples still pass semantic check
Other tests are designed according to the examples presented for the operators that work
on bound streams; e.g., Table 14 serves as an input for testing any implementation.
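As a hedged sketch of how the “still parse” items above could be automated, a JUnit-style test might look as follows; the parser hook, the example directory, and the file extension are placeholders rather than the actual test code in the repository.

    import static org.junit.Assert.assertTrue;

    import java.io.File;
    import java.util.function.Predicate;
    import org.junit.Test;

    // Sketch only: the example directory, file extension, and parser hook are placeholders.
    public class ForensicLucidParseTest {
        private static final String EXAMPLES_DIR = "tests/forensic-lucid"; // placeholder path

        // In the real suite this predicate would invoke the generated Forensic Lucid parser (GIPC).
        private final Predicate<File> parses = file -> true; // placeholder hook

        @Test
        public void allForensicLucidExamplesParse() {
            File[] examples = new File(EXAMPLES_DIR)
                    .listFiles((dir, name) -> name.endsWith(".flucid")); // placeholder extension
            if (examples == null) {
                return; // nothing to test in this checkout
            }
            for (File example : examples) {
                assertTrue("Failed to parse " + example, parses.test(example));
            }
        }
    }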
8.5 Forensic Lucid Encoders
The purpose of such encoders is to extract evidential information from various sources and
encode it in Forensic Lucid, ready for use in investigations.
Specifically, exporting and encoding are necessary to present the evidence in the common
format (Forensic Lucid) when the systems are at least partially automated.
Some software systems can be taught to log their data structure states directly in Forensic Lucid, or log entries can be transcribed into the Forensic Lucid format after the fact. This is
more evident in Chapter 9’s MAC Spoofer Investigation, where we generate and export a
collection of observation sequences from various sources. This section illustrates
some of the encoded evidence for the software systems mentioned, to be exported during their
operation; should a fault, incident, bug, or other problem occur, the forensic log (encoded in
Forensic Lucid) can be used in a subsequent investigation. This is particularly useful in
the self-forensics applications and application domains detailed in Appendix D.
Figure 66: Conceptual illustration of Forensic Lucid encoders
A number of exporters were proposed to encode the evidence data from various
data sources (some conceptualized in Figure 66) (Section 9.1.1, page 244) [269, 310, 321], as
well as in the aforementioned MAC spoofer investigations in Section 9.5.
8.5.1 MARF
MARF is one of the potent evidential data sources mentioned in Figure 66. MARF, as
a pattern recognition system, along with its applications described in Chapter 5, is able to
provide classification data (for vulnerable or malicious code, or biometric classification tasks) and
to assign a confidence in the reliability of its classification results. These data can complement
other evidence in the investigator’s possession and automate some of the process.
8.5.1.1 Classical MARF’s Evidence
The evidence extracted from the classification analysis results of MARF comes from several internal data structures [269], namely Result, ResultSet, TrainingSet (the stored trained-on data), and the MARF instance’s Configuration (Chapter 5). The Result consists of tuples
containing ⟨ID, outcome⟩, which are the properties of a single classification result. The result
set, ResultSet, is a collection of such tuples. Processed samples (e.g., utterances, text, imagery, code; stored as feature vectors or clusters of doubles, see Chapter 5 for details), alongside
the training file names and IDs, comprise the training set data, and the configuration is a
collection of processing settings that led to the current results given the training set [269]. We
used this way of modeling the forensic context and extended it to our other case studies [322]
(Appendix D).
The property P is specified in the three main categories in MARF: the configuration, the training set, and
the result set. Its observed duration is set to a default of (1, 0), as the notion of
real duration varies with the configuration details; in this case we are interested in how we arrive from
the given configuration and training set to the results [269]. The finer-grained
details, including the actual duration, may be specified inside P. An observation syntactically written as o = P
is therefore equivalent to o = (P, 1, 0), as mentioned in Chapter 7. The
observation sequence os is defined as a sequence of three observations, one observation per
category. The observations are ordered as (1) the configuration configo, (2) the training set tseto,
and (3) the classification result resulto. The meaning of this observation sequence is that,
given some MARF configuration settings and the existing training set, the system produces
the classification result. During training, the observation sequence is slightly different,
but also has three observations: the configuration, the incoming sample, and the resulting training set, which are encoded accordingly, as all the necessary primitives for that are already
defined [269]. The complete exportable Forensic Lucid expression is a three-observation sequence as presented in Figure 67. With a simplifying assumption, the (1, 0) syntactical
constructs can be dropped, keeping only the P, which in this case is a higher-order context
specification, as shown in Figure 68 [269]. Forensic Lucid inherited such a contextual specification of MARF internals
from the MARFL language [272, 322].
MARFos = { confo, tseto, resulto } =
{
    ([ sample loader      : WAV [ channels: 2, bitrate: 16, encoding: PCM, f: 8000 ],
       preprocessing      : LOW_PASS_FFT_FILTER [ cutoff: 2024, windowsize: 2048 ],
       feature extraction : LPC [ poles: 40, windowsize: 2048 ],
       classification     : MINKOWSKI_DISTANCE [ r: 6 ]
     ], 1, 0),
    ([ data: {[5.2,3.5,7.5],[3.6,2.5,5.5,6.5]}, files: ["/foo/bar.wav", "/bar/foo.wav"] ], 1, 0),
    ([ ID: 5, outcome: 1.5 ], 1, 0)
}
Figure 67: Example of a three-observation sequence context exported from MARF to Forensic Lucid [322]
MARFos = { confo, tseto, resulto } =
{
    [ sample loader      : WAV [ channels: 2, bitrate: 16, encoding: PCM, f: 8000 ],
      preprocessing      : LOW_PASS_FFT_FILTER [ cutoff: 2024, windowsize: 2048 ],
      feature extraction : LPC [ poles: 40, windowsize: 2048 ],
      classification     : MINKOWSKI_DISTANCE [ r: 6 ]
    ],
    [ data: {[5.2,3.5,7.5],[3.6,2.5,5.5,6.5]}, files: ["/foo/bar.wav", "/bar/foo.wav"] ],
    [ ID: 5, outcome: 1.5 ]
}
Figure 68: Example of a simplified three-observation sequence context exported from MARF
to Forensic Lucid [322]
8.5.2 MARFCAT and MSA Encoders
Other encoder examples include MARFCAT (Section 5.4.1, page 111), shown in Listing 8.1
(see [314, Appendix] for an extensive example); MARFPCAT; and the MAC Spoofer Analyzer
(MSA) modules (Section 9.5, page 257). Listing 8.2, Listing 8.3, and Listing 8.4 are examples
of the switch management, Argus netflow, and arp-related evidence encoded in Forensic
Lucid as observation sequences.
// ...
weakness_25 @ [ id:25, tool_specific_id:25, cweid:20, cwename: "Input Validation (CWE20)" ]
where
    dimension id, tool_specific_id, cweid, cwename;
    observation sequence weakness_25 = (locations_wk_25, 1, 0, 0.001495003320843141);
    locations_wk_25 = locations @ [ tool_specific_id:25, cweid:20, cwename: "Input Validation (CWE20)" ];
    observation location_id_2012 ([ line => 89, path => "wireshark-1.2.0/plugins/docsis/packet-bpkmreq.c" )
        textoutput = "";
    observation grade = ([ severity => 1, tool_specific_rank => 668.8948352542972 ], 1, 0,
        0.001495003320843141);
end;
// ...
Listing 8.1: MARFCAT encoded evidence fragment example
While the modi operandi of MSA are detailed in Section 9.5, page 257, we briefly describe
the encoding process used by its modules here. Effectively, there are two types of encoders in
MSA: the ones that work with the typical log data from services of interest (“dead forensics”)
and the ones that work with real-time probes (“live forensics”) (Section 2.1, page 24).
// ...
// 'swm' evidence, encoded: Tue Jul 13 15:37:22 2013
observation sequence swm_os = o_swm_switch fby os_swm_entries;
observation o_swm_switch = ([ switch: "switch1" ], 1, 0);
observation sequence os_swm_entries =
{
    ([ port: "Fa0/1", port-state: "up/up", mac: "00:1b:63:b5:f8:0f", hostname: "flucid-44.encs.concordia.ca" ]
        => "STATIC 666 secured, restrict, pfast # [Auto]")
};
// end of 'swm' evidence
// ...
Listing 8.2: swm encoded evidence example
The log-based encoders usually work with relatively structured log files, which begin
with a timestamp, the service, the event logged, and the details pertinent to each particular event, e.g.,
an IP address, a MAC address, a username, etc. These data are first filtered for the data
of potential interest (e.g., using grep and Perl regexps [403]). Each log line corresponds
to an observation o. The duration in such cases is commonly (min, max) = (1, 0), indicating
the presence of the observed property P with reliability w = 1.0, and t is set to the timestamp
extracted from the log file. P is then a collection of ⟨dimension : tag⟩ pairs encoding the
details found on the log line. An observation sequence is an ordered finite collection of such
observations of log lines of interest from the same log file. Listing 8.3 is an example of the log-based encoded evidence. In MSA there are several log files, so the encoding is made uniform
via the EncodingUtils module, which formats and normalizes the data that have more than
one legitimate representation in use (the most common examples are timestamps, MAC
addresses, and hostnames).
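To illustrate the log-based encoding convention just described, the following minimal Java sketch turns pre-filtered log lines into Forensic Lucid observations with duration (1, 0) and reliability w = 1.0; the field values reuse those from the listings above for illustration, but the class, method names, and output format are simplified assumptions rather than the actual MSA or EncodingUtils code.

    import java.util.ArrayList;
    import java.util.List;

    // Simplified sketch of a log-based encoder: each filtered log line becomes one observation.
    public class LogEncoderSketch {

        // Turn one pre-filtered log line (timestamp plus key/value details) into an observation.
        // Duration (1, 0) and reliability w = 1.0, per the convention described above.
        static String encodeLine(String timestamp, String[][] details) {
            StringBuilder props = new StringBuilder("[");
            for (int i = 0; i < details.length; i++) {
                if (i > 0) props.append(", ");
                props.append(details[i][0]).append(": \"").append(details[i][1]).append("\"");
            }
            props.append("]");
            return "(" + props + ", 1, 0, 1.0, \"" + timestamp + "\")";
        }

        public static void main(String[] args) {
            List<String> observations = new ArrayList<>();
            // Hypothetical filtered log entry (values reused from the listings above).
            observations.add(encodeLine("Tue Jul 13 11:24:53 2013",
                    new String[][] { { "mac", "00:1b:63:b5:f8:0f" }, { "ipaddr", "132.205.44.252" } }));

            // Emit an ordered observation sequence over the lines of interest.
            System.out.println("observation sequence log_os =\n{");
            System.out.println("    " + String.join(",\n    ", observations));
            System.out.println("};");
        }
    }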
The live-probe encoders are more dynamic and challenging, as they often work with less
structured data than that of log entries. The data are also time-sensitive due to the liveness
aspect and are subject to what the possible spoofer is doing on the network right now. As
a result, the live probes can come up with a different amount of output collected with each run.
A set of the most pertinent details was identified from the most complete experimental runs
by a human expert. Thus, each live-probe encoder type tries to extract as much evidence
as possible based on the criteria set by the expert. Depending on how complete the
extracted data are, the encoder places different values of the reliability weight w in
its encoded observations. Typically, each line of a live-probe encoder’s collector output also corresponds to an
observation in Forensic Lucid terms, but the data can sometimes span more than one
line of text pertaining to a single observation. P is then encoded similarly to the log-based
encoders, with ⟨dimension : tag⟩ pairs carrying the data of potential interest. Listing 8.2 and
Listing 8.4 are examples of the live-probe encoded evidence.
No-observations ($) are also encoded when the log search or probe comes up empty for
whatever reason. Partial observations also occur when only some of the data are available;
in that case w is often set to a value less than 1.0, based on expert human estimates.
The encoders follow good common design and development practices and
are reviewed by more than one person.
// ...
// Argus evidence, encoded: Tue Jul 13 15:37:21 2013
observation sequence argus_os =
{
    netflow_o_1,
    netflow_o_2,
    netflow_o_3,
    // ...
    netflow_o_1019
};
observation netflow_o_1 = ([ flow-start: "Tue Jul 13 11:24:53 2013", flow-end: "Tue Jul 13 11:24:53 2013",
    protocol: "tcp", src-mac: "02:5f:f8:93:80:00", dst-mac: "02:29:bb:29:d9:1a",
    src-ipaddr: "aaa.aa.aa.aaa", src-port: 136, direction: "->", dst-ipaddr: "132.205.44.252",
    dst-port: 252, packets: 2, src-bytes: 120, dst-bytes: 0, state: "REQ" ],
    1, 0, 1.0, "Tue Jul 13 11:24:53 2013");
observation netflow_o_2 = ([ flow-start: "Tue Jul 13 11:49:05 2013", flow-end: "Tue Jul 13 11:49:05 2013",
    protocol: "tcp", src-mac: "02:5f:f8:93:80:00", dst-mac: "02:29:bb:29:d9:1a",
    src-ipaddr: "bb.bb.bb.bbb", src-port: 212, direction: "->", dst-ipaddr: "132.205.44.252",
    dst-port: 252, packets: 2, src-bytes: 124, dst-bytes: 0, state: "REQ" ],
    1, 0, 1.0, "Tue Jul 13 11:49:05 2013");
observation netflow_o_3 = ([ flow-start: "Tue Jul 13 12:27:34 2013", flow-end: "Tue Jul 13 12:27:34 2013",
    protocol: "tcp", src-mac: "02:5f:f8:93:80:00", dst-mac: "02:29:bb:29:d9:1a",
    src-ipaddr: "cc.ccc.ccc.ccc", src-port: 152, direction: "->", dst-ipaddr: "132.205.44.252",
    dst-port: 252, packets: 2, src-bytes: 120, dst-bytes: 0, state: "REQ" ],
    1, 0, 1.0, "Tue Jul 13 12:27:34 2013");
// ...
Listing 8.3: Argus encoded evidence example
// ...
// arp log evidence, encoded: Tue Aug 13 15:37:13 2013
observation sequence arp_os =
{
    arp_o_1
};
observation arp_o_1 = ([ ipaddr: "132.205.44.252", mac: "00:1b:63:b5:f8:0f" ], 1, 0, 1.0);
// end of arp log evidence
// ...
Listing 8.4: arp encoded evidence example
8.6 GIPSY Cluster Lab Setup
8.6.1 Experimental Setup and Script Design
Here we design our initial experimental setup, modeling and simulation, and the testbed
environments.
We describe the GIPSY cluster environment design, hardware and software, to manage
and run the experiments, as well as to script our testing environment around GIPSY, MARFCAT, and related platforms.
8.6.1.1 Hardware
1. 16 hosts (Dell Precision 390), running a mix of Linux and Windows
2. 2 Cisco Catalyst 2950 switches
3. Local DNS server (following the principles of [31])
4. Local router/firewall
5. Local DHCP server
6. GIPSY run-time deployed across the hosts
8.6.1.2 Software
The software tools included in the GIPSY cluster design are listed below. iptables and
BIND are the most essential; they are installed on the routers to manage the local DNS names and the network address translation between the physical GIPSY network (192.168.88.0/24, VLAN88)
and the ENCS network (132.205.8.224/28, VLAN629) to the outside world.
1. MRTG [345]
2. Argus [390]
3. iptables [389, 398]
4. BIND [8]
8.6.2 Topological Setup
The physical network design consists of two 48-port Cisco Catalyst 2950 switches, a machine
running the DNS and DHCP servers, a machine running iptables for firewalling and routing, a
monitoring machine running MRTG, Argus, Snort, and other tools of our own, and the uplink to
the Internet to which all the clients would want to connect. The purpose is to run scalable
GIPSY networks for forensic and data mining computations. The
network schematic is in Figure 69, and the corresponding IP configuration is in Table 16.
Table 16: GIPSY cluster IP assignment

Node   DNS name                     IP
Sp     storm.gipsy.private          192.168.88.1
S2p    storm2.gipsy.private         192.168.88.2
Dp     detroit.gipsy.private        192.168.88.250
Ep     elsinore.gipsy.private       192.168.88.251
N      newton.encs.concordia.ca     132.205.8.230
S      storm.encs.concordia.ca      132.205.8.238
α      alpha.gipsy.private          192.168.88.65
β      beta.gipsy.private           192.168.88.66
γ      gamma.gipsy.private          192.168.88.67
δ      delta.gipsy.private          192.168.88.68
ϵ      epsilon.gipsy.private        192.168.88.69
ζ      zeta.gipsy.private           192.168.88.70
η      eta.gipsy.private            192.168.88.71
θ      theta.gipsy.private          192.168.88.72
ι      iota.gipsy.private           192.168.88.73
κ      kappa.gipsy.private          192.168.88.74
λ      lambda.gipsy.private         192.168.88.75
µ      mu.gipsy.private             192.168.88.76
ν      nu.gipsy.private             192.168.88.77
ξ      xi.gipsy.private             192.168.88.78
o      omicron.gipsy.private        192.168.88.79
Additionally, we include forensic and log-analysis tools with machine learning, data
mining, and statistical analysis capabilities. GIPSY’s DSTs are typically designed to be
located on Storm and Storm2, two powerful servers; however, when the Storms are unavailable,
any node can host DSTs. The Linux machines running Scientific Linux 5.x have an
additional VMware virtual machine to support more hosts, as well as the SATE experiment test bed
for MARFCAT code analysis described in Section 5.4, page 110. Each of the Precision 390 nodes, or its
virtual machine, is typically a worker node (DWT). A portion of the cluster was allocated to Rabah to
run Xen virtual machines for virtual network experiments [392].
Figure 69: GIPSY cluster network topology
8.7 Summary
We described architectural design details for the evaluating system of GIPSY along with the
related work and PoC implementation aspects. This includes the necessary updates, modifications, and contributions to the design of the GIPSY Type System; compiler (GIPC) and
the semantic analyzers framework; the run-time system (GEE) and the additional engines for
Forensic Lucid: intensional/eductive backend (traditional evaluation), AspectJ (whose
primary role here is tracing and human-readable trace presentation), and the PRISM model-checking backend to represent a probabilistic state machine similar to Gladyshev's; and the
testing framework (to automate compilation and semantic analysis tests of the new Forensic Lucid and old Lucx components and JUnit tests to make sure they remain valid within
the Forensic Lucid system). We provided examples of encoders that translate
data structures (or log entries or probes, in the case of MSA in Section 9.5, page 257) into
the Forensic Lucid format in order to do follow-up investigations either manually or automatically (e.g., in self-forensics first briefly introduced in Section 1.2, page 4 and Section 2.1,
page 24 and detailed in Appendix D). The JavaCC grammar specification of the Forensic
Lucid parser ForensicLucid.jjt is in GIPSY’s source code repository. Its generated compiler sources are integrated in gipsy.GIPC.intensional.SIPL.ForensicLucid. Finally, we
described the GIPSY compute cluster physical design and network architecture to support
larger scale experiments.
Part III
Conclusion
Chapter 9
Evaluation Applications
This chapter presents example applications of Forensic Lucid based on the pertinent case
studies discussed earlier (Section 2.2.5, page 43) and beyond. We first present an overview
(Section 9.1), including the background description of the data sources used (Section 9.1.1)
and the concept of misuse cases (use cases with hostile intent) in Section 9.1.2 followed by
various examples demonstrating the use of Forensic Lucid constructs. We then summarize
our findings in Section 9.6.
9.1 Overview
After the data and misuse case descriptions, we present the DSTME illustrative example encoded in Forensic Lucid in Section 9.2. We then rewrite Gladyshev’s Common Lisp implementations of the ACME Printing Case and the Blackmail Investigation as the Forensic
Lucid programs presented in Section 9.3, page 250 and Section 9.4, page 254.
Additional cases beyond these two classical ones are also presented, including the design of the
elaborate live MAC spoofer analysis and investigation in Section 9.5, page 257.
9.1.1 Data Sources and Data Description
In this section we provide some background information on the data and data sources used
for some of the case-related experiments in this thesis. These data complement the research
carried out for the data mining-related aspects presented in Chapter 5, including aspects that
have to do with gathering the data (such as the classification data from the classifiers,
network and file data, log data, or data taken with the live probes) and encoding them
(e.g., from MARF and MARFCAT in Section 8.5.1.1, Section 9.5, Appendix D.4.6) into
Forensic Lucid for the purposes of subsequent analysis, automated reasoning, and
evaluation. Data are also something we make inferences from when representing knowledge
and what we visualize for comprehension and management purposes (Appendix E).
9.1.1.1 Data Sets
9.1.1.1.1 MARFCAT.
For the MARFCAT-based (Section 5.4, page 110) investigations, we use the SAMATE data set; this includes the source code samples and a number of
associated CVEs or CWEs [284, 287]. The SAMATE reference data set contains C/C++,
Java, and PHP language tracks comprising CVE-selected cases as well as stand-alone cases
and the large generated synthetic C and Java test cases (CWE-based, with a lot of variants
of different known weaknesses). SATE IV expanded some cases from SATE2010 by increasing
the version number, and dropped some other cases (e.g., Chrome) [314].
The C/C++ and Java test cases of various client and server OSS software are compilable
into binary and object code, while the synthetic C and Java cases generated for various
CWE entries are provided for greater scalability testing (and are also compilable). The CVE-selected
cases had a vulnerable version of the software in question with a list of CVEs attached to it, as
well as the latest known fixed version within the same minor revision number. One of the goals for
the CVE-based cases is to detect the known weaknesses outlined in the CVEs using static code
analysis and also to verify whether they were really fixed in the “fixed version” [348]. The cases
with known CVEs and CWEs were used as the training models described in the methodology.
The summary below is a union of the data sets from SATE2010 and SATE IV [314].
The initial list of the CVEs to locate in the test cases were collected from the NVD
[340, 348] for Wireshark 1.2.0, Dovecot, Tomcat 5.5.13, Jetty 6.1.16, and Wordpress 2.0 [314].
The specific test cases with versions and language included CVE-selected [314]:
• C: Wireshark 1.2.0 (vulnerable) and Wireshark 1.2.18 (fixed, up from Wireshark 1.2.9
in SATE2010)
• C: Dovecot (vulnerable) and Dovecot (fixed)
• C++: Chrome 5.0.375.54 (vulnerable) and Chrome 5.0.375.70 (fixed)
• Java: Tomcat 5.5.13 (vulnerable) and Tomcat 5.5.33 (fixed, up from Tomcat 5.5.29 in
SATE2010)
• Java: Jetty 6.1.16 (vulnerable) and Jetty 6.1.26 (fixed)
• PHP: Wordpress 2.0 (vulnerable) and Wordpress 2.2.3 (fixed)
originally non-CVE selected in SATE2010 [314]:
• C: Dovecot
• Java: Pebble 2.5-M2
Synthetic CWE cases produced by the SAMATE team:
• C: Synthetic C covering 118 CWEs and ≈ 60K files
• Java: Synthetic Java covering ≈ 50 CWEs and ≈ 20K files
9.1.1.1.2 Other Data Sources.
Some experimental, empirical, real, and meta data from the Faculty of ENCS network are
used, some of which are described in detail in [31]. Various
other data sources for encoding and experiments include:
1. Netflows and Argus [390] logs for various network-based investigations.
2. For the MAC Spoofer Investigation case (Section 9.5, page 257) in particular, the data
sources used include probes of the live hosts as well as central logs of host activity, switch
activity, services like DHCP, and those of monitoring tools [31] and Nessus [460]
scans.
3. Data sources include the faculty-wide installations of Linux /encs/bin binary software.
The use of these data works in conjunction with the MARFCAT (Section 5.4, page 110)
data described earlier in a pseudo-hypothetical scenario of vulnerable software investigations.
4. The network-enabled malware investigations used in MARFPCAT (see Section 5.5,
page 121), including very recent closely related work [49], used the GFI SandBox [132] (since
March 2013 known as ThreatAnalyzer [470]). It supplies an extensive feed of malware
organized into a database and some preliminary analysis reports combined with pcap
data traces captured in the sandbox. The database at the time comprised more than
1.5 million malware samples that serve as a good source for machine learning in search
of insight and for flagging suspicious traffic. The malware was run in a controlled environment,
yielding ≈ 100000 pcap files labeled with the hashes of the malware [49].
5. ForensicDataSniffer-based investigations (the tool is an offshoot of the fileType (Section 5.3, page 107) and MARFCAT applications) rely on the common data sets provided by NIST and others, such as collections of publicly released government documents,
to detect such documents and file types on a hard disk and to encode the resulting findings
to compose an evidential statement in Forensic Lucid. The rest of the
data sources in file type analysis come from typical and atypical OS installations to
provide additional input data in an investigation.
9.1.2 Misuse Cases
In 2002, Ian Alexander [9, 10, 521] introduced the notion of Misuse Cases into software requirements specification [233, 482], extending Use Case (UC) diagrams to visualize
adversarial cases that threaten the normal functionality of the innocent use cases. The misuse cases make the non-functional requirements more apparent, telling why and where they
occur and how and where they can be mitigated.
We make use of the Misuse Case in our
UC context diagrams in the MAC Spoofer Investigation case study further in Section 9.5.
9.2 Toy DSTME Examples in Forensic Lucid
[ bel(es), pl(es) ]
where
    evidential statement es = { betty, sally };
    observation sequence betty = { oBetty };
    observation sequence sally = { oSally };
    observation oBetty = ("limb on my car", 1, 0, 0.99);
    observation oSally = ("limb on my car", 1, 0, 0.99);
end
Listing 9.1: Limb Example
Here for illustrative purposes we translate the previously presented DSTME example of
Shafer in Section 3.3.2.1.2, page 69 in Listing 9.1. The result of this program is an array of
two values containing the belief and plausibility of the Betty-Sally evidential statement.
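For intuition on the arithmetic behind such a result, the following Java sketch shows one standard way two simple support functions focused on the same proposition (each witness supporting "limb on my car" with mass 0.99) combine under Dempster's rule; it is only an illustration of the numbers involved, not the GIPSY evaluation code, and the exact values returned by bel and pl are defined by the Forensic Lucid semantics.

    // Illustration of Dempster's rule for two simple support functions focused on the same
    // proposition; not GIPSY code and not necessarily the exact bel/pl operator semantics.
    public class DstmeSketch {
        public static void main(String[] args) {
            double w1 = 0.99; // Betty's reliability
            double w2 = 0.99; // Sally's reliability

            // Both sources support the same proposition, so there is no conflict to normalize.
            double mProposition = 1.0 - (1.0 - w1) * (1.0 - w2); // 0.9999
            double mUnknown = (1.0 - w1) * (1.0 - w2);           // 0.0001 left on the whole frame

            double bel = mProposition;            // belief in "limb on my car"
            double pl = mProposition + mUnknown;  // nothing supports the negation, so pl = 1.0

            System.out.printf("bel = %.4f, pl = %.4f%n", bel, pl);
        }
    }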
In Listing 9.2 is a quick raining example written in Forensic Lucid roughly corresponding to the raining example in Figure 19, page 63. It illustrates several Forensic Lucid
concepts, such as the use of Lucx simple contexts and tag sets combined with the forensic
contexts, and lifting in the use of the \intersection operator. The forensic contexts illustrate
either a full explicit specification of a raining observation or a “plain” raining observation. The
observed property itself here is a simple context as well. The result of this program is an
entire observation.
In Listing 9.3 the same raining example is rewritten to make use of the observation’s
w component to represent the fact of whether it rained or not. In that listing we dropped the optional
timestamps and the keywords for the tag sets, which are otherwise redundant to specify when it is
unambiguous what tag set type to use. This way of representing the “raining” condition can
be used to express rain chances other than 0.0 or 1.0 in future weather
predictions (e.g., a 50% rain chance would be 0.5), the certainty of the sensors that captured the
data in the past, or the fact that it rained only for a part of the day.
This representation can be used with belief and plausibility computations accordingly.
raining @ [ city: "Montreal", month: "Sep", day: 4 ]
where
    dimension city : unordered finite nonperiodic { "Montreal", "Ottawa", "Quebec" };
    dimension day : ordered finite nonperiodic { 1 to 31 };

    evidential statement es_raining = { os_montreal_raining, os_ottawa_raining, os_quebec_raining };

    observation sequence os_montreal_raining =
    {
        ([ city: "Montreal", month: "Sep", day: 1, raining: true ], 1, 0, 1.0, "September 1, 2013"),
        [ city: "Montreal", month: "Sep", day: 2, raining: false ],
        [ city: "Montreal", month: "Sep", day: 3, raining: true ],
        [ city: "Montreal", month: "Sep", day: 4, raining: false ],
        [ city: "Montreal", month: "Sep", day: 5, raining: true ],
        [ city: "Montreal", month: "Sep", day: 6, raining: false ],
        [ city: "Montreal", month: "Sep", day: 7, raining: true ],
        [ city: "Montreal", month: "Sep", day: 8, raining: false ],
        [ city: "Montreal", month: "Sep", day: 9, raining: true ]
    };

    observation sequence os_quebec_raining =
    {
        ([ city: "Quebec", month: "Sep", day: 1, raining: false ], 1, 0, 1.0, "September 1, 2013"),
        [ city: "Quebec", month: "Sep", day: 2, raining: false ],
        [ city: "Quebec", month: "Sep", day: 3, raining: true ],
        [ city: "Quebec", month: "Sep", day: 4, raining: true ],
        [ city: "Quebec", month: "Sep", day: 5, raining: true ],
        [ city: "Quebec", month: "Sep", day: 6, raining: false ],
        [ city: "Quebec", month: "Sep", day: 7, raining: false ],
        [ city: "Quebec", month: "Sep", day: 8, raining: false ],
        [ city: "Quebec", month: "Sep", day: 9, raining: true ]
    };

    observation sequence os_ottawa_raining =
    {
        ([ city: "Ottawa", month: "Sep", day: 1, raining: false ], 1, 0, 1.0, "September 1, 2013"),
        [ city: "Ottawa", month: "Sep", day: 2, raining: true ],
        [ city: "Ottawa", month: "Sep", day: 3, raining: true ],
        [ city: "Ottawa", month: "Sep", day: 4, raining: true ],
        [ city: "Ottawa", month: "Sep", day: 5, raining: true ],
        [ city: "Ottawa", month: "Sep", day: 6, raining: true ],
        [ city: "Ottawa", month: "Sep", day: 7, raining: false ],
        [ city: "Ottawa", month: "Sep", day: 8, raining: false ],
        [ city: "Ottawa", month: "Sep", day: 9, raining: false ]
    };

    // The result is an observation "([ city: "Montreal", month: "Sep", day: 4, raining: true ], 1, 0, 1.0)"
    raining = ([ city: #city, month: #month, day: #day ] \intersection es_raining);
end
Listing 9.2: Raining Example
raining @ [ city: "Montreal", month: "Sep", day: 4 ]
where
    dimension city : { "Montreal", "Ottawa", "Quebec" };
    dimension day : { 1 to 31 };

    evidential statement es_raining = { os_montreal_raining, os_ottawa_raining, os_quebec_raining };

    observation sequence os_montreal_raining =
    {
        ([ city: "Montreal", month: "Sep", day: 1 ], 1, 0, 1.0),
        ([ city: "Montreal", month: "Sep", day: 2 ], 1, 0, 0.0),
        ([ city: "Montreal", month: "Sep", day: 3 ], 1, 0, 1.0),
        ([ city: "Montreal", month: "Sep", day: 4 ], 1, 0, 0.0),
        ([ city: "Montreal", month: "Sep", day: 5 ], 1, 0, 1.0),
        ([ city: "Montreal", month: "Sep", day: 6 ], 1, 0, 0.0),
        ([ city: "Montreal", month: "Sep", day: 7 ], 1, 0, 1.0),
        ([ city: "Montreal", month: "Sep", day: 8 ], 1, 0, 0.0),
        ([ city: "Montreal", month: "Sep", day: 9 ], 1, 0, 1.0)
    };

    observation sequence os_quebec_raining =
    {
        ([ city: "Quebec", month: "Sep", day: 1 ], 1, 0, 0.0),
        ([ city: "Quebec", month: "Sep", day: 2 ], 1, 0, 0.0),
        ([ city: "Quebec", month: "Sep", day: 3 ], 1, 0, 1.0),
        ([ city: "Quebec", month: "Sep", day: 4 ], 1, 0, 1.0),
        ([ city: "Quebec", month: "Sep", day: 5 ], 1, 0, 1.0),
        ([ city: "Quebec", month: "Sep", day: 6 ], 1, 0, 0.0),
        ([ city: "Quebec", month: "Sep", day: 7 ], 1, 0, 0.0),
        ([ city: "Quebec", month: "Sep", day: 8 ], 1, 0, 0.0),
        ([ city: "Quebec", month: "Sep", day: 9 ], 1, 0, 1.0)
    };

    observation sequence os_ottawa_raining =
    {
        ([ city: "Ottawa", month: "Sep", day: 1 ], 1, 0, 0.0),
        ([ city: "Ottawa", month: "Sep", day: 2 ], 1, 0, 1.0),
        ([ city: "Ottawa", month: "Sep", day: 3 ], 1, 0, 1.0),
        ([ city: "Ottawa", month: "Sep", day: 4 ], 1, 0, 1.0),
        ([ city: "Ottawa", month: "Sep", day: 5 ], 1, 0, 1.0),
        ([ city: "Ottawa", month: "Sep", day: 6 ], 1, 0, 1.0),
        ([ city: "Ottawa", month: "Sep", day: 7 ], 1, 0, 0.0),
        ([ city: "Ottawa", month: "Sep", day: 8 ], 1, 0, 0.0),
        ([ city: "Ottawa", month: "Sep", day: 9 ], 1, 0, 0.0)
    };

    // The result is an observation "([ city: "Montreal", month: "Sep", day: 3 ], 1, 0, 1.0)"
    raining = ([ city: #city, month: #month, day: #day ] \intersection es_raining);
end
Listing 9.3: DSTME Raining Example
9.3 ACME Printing Case in Forensic Lucid
9.3.1 Investigative Analysis
The investigative analysis follows the same principles used in Gladyshev’s FSA/LISP approach described in Section 2.2.5.1, page 44 (cf., the Common Lisp implementation in [137]).
We remodel the case in Forensic Lucid as further illustrated in this section. The resulting
Forensic Lucid program fragments corresponding to this case are in Listing 9.5 modeling
ψ, Listing 9.6 modeling Ψ−1, and Listing 9.4 modeling the “main” program with all the initial
evidential declarations.
9.3.2 Sample Forensic Lucid Specification
The simulated printer case is specified in Forensic Lucid as follows. ψ is implemented in
Listing 9.5. We then provide the implementation of Ψ−1 [304] in Listing 9.6. Finally, the
“main program” is modeled in Listing 9.4, which sets up the context hierarchy and invokes
Ψ−1. This specification is a translation of the Common Lisp implementation by Gladyshev
described earlier [137] and in Section 2.2.5.1 in semi-structured English [312].
alice_claim @ es
where
    // Consistent evidential statement
    evidential statement es = { printer, manuf };

    observation sequence printer = F;
    observation sequence manuf = { Oempty, $ };
    observation sequence alice = { Oalice, F };

    observation F = ("B_deleted", 1, 0);
    observation Oalice = (P_alice, 0, INF+);
    observation Oempty = ("empty", 1, 0);

    // No "add_A" event per claim from Alice
    dimension P_alice : unordered finite nonperiodic { "add_B", "take" };
    dimension S : unordered finite nonperiodic { "empty", "A_deleted", "B_deleted" };

    alice_claim = invpsiacme [S](es \union alice);

    // ... function declarations
end
Listing 9.4: Developing the Printing Case: “main” [304, 305, 312]
The “Main Program”
In Listing 9.4 is where the computation begins in our Forensic Lucid example. This is
an equivalent of main(), or the program entry point, in other (mainstream) languages like
Java, C, or C++. The goal of this fragment is to set up (declare) the context of evaluation, which is core to the case: the evidential statement es. The es is the highest level
dimension in Lucid parlance, and it is hierarchical. This is an unordered list (set) of stories and witness accounts of the incident (themselves known as observation sequences); their
ordering is arbitrary. The es is a context where in ⟨dimension : tag⟩ pairs dimensions are
denoted by their identifiers alice, printer, and manuf, and their tags are declared nested
values, i.e., expanding the declarations, we arrive at alice:{Oalice, F}, printer:{F}, and
manuf:{Oempty, $}. The stories relevant to the incident are those of Alice (alice), the evidence of the printer’s final state as found by the investigator (printer), and the “expert
testimony” by the manufacturer of how the printer works. These observation sequences are
in turn defined as ordered collections of observations nesting one level deeper into the context. The printer’s final state dimension F is the only observation for the printer found by
the investigator, which is an observation of the property of the printer’s queue “Bob’s job
deleted last” syntactically written as “B deleted” as inherited from Gladyshev’s notation. Its
duration ((..., 1, 0)) merely indicates that it was simply present. The manuf observation
sequence is dictated by the manufacturer’s specification that the printer’s queue state was
empty initially for an undetermined period of time ($) when the printer was delivered. These
are two observations, following sequentially in time. Alice’s timeline (also two observations)
is that from the beginning Alice did not perform any actions signified by the computation
properties P, such as “add_B” or “take”, implying the computation “add_A” never happened (a duration from 0 to “infinity”, i.e., until the investigator examined the printer). This
is effectively Alice’s claim. Thus, alice_claim is a collection of Boolean results for possible
explanations, or the lack thereof, for Alice’s claim in this case in the context of all the declared
evidence, as subsequently evaluated by invpsiacme (Ψ−1). If Alice’s claim were to validate, the results would be “true”, and “false” otherwise [312]. As we know, her claim was
not consistent with the evidential statement (Figure 13, page 51).
Modeling the Forward Transition Function ψ
In Listing 9.5 is ψ illustrating the normal forward flow of operations to model the scene. It
is also a translation from the Common Lisp implementation by Gladyshev [137] using the
Forensic Lucid syntax and operators described earlier ([304], Section 7.3, page 166). The
function is modeled per manufacturer specifications of the printer queue. “A” corresponds
to “Alice” and “B” to “Bob” along with their prompted queue actions to add or delete print
jobs. The code is a rather straightforward translation of the FSA/LISP code from [137]. S
is a collection of all possible computational properties observed. c is a “computation” action
to add or take print jobs by the printer’s spooler [312].
acmepsi [ S ]( c , s )
where
d1 = first s ;
d2 = second s ;
// Add a print job from Alice
if c == " add_A "
then
if d1 == " A " || d2 == " A " then s
else
// d1 in S
if ( d1 \ in S ) then " A " fby d2
else
if ( d2 \ in S ) then d1 fby " A "
else s fi ;
fi ;
fi ;
// Add a print job from Bob
else if c == " add_B " then
if ( d1 == " B " ) || ( d2 == " B " ) then s
else
if ( d1 \ in S ) then " B " fby d2
else
if ( d2 \ in S ) then d1 fby " B "
else s fi ;
// Printer takes the job per manufacturer specification
else if c == " take "
if d1 == " A " then " A_deleted " fby d2
else
if d1 == " B " then " B " fby d2
else
if d2 == " A " then d1 fby " A_deleted " ;
else
if d2 == " B " then d1 fby " B_deleted " ;
else s fi
// Done
else s fby eod fi ;
end ; // psi
Listing 9.5: Transition function ψ in Forensic Lucid for the ACME Printing Case
Modeling the Inverse Transition Function Ψ−1
In Listing 9.6 is the inverse Ψ−1 backtracking implementation with the purpose of event
reconstruction, also translated from Common Lisp to Forensic Lucid like the preceding
fragments using the Forensic Lucid operators. It is naturally more complex than ψ due
to a possibility of choices (non-determinism) when going back in time, so these possibilities
have to be explored. This backtracking, if successful for any claim, would provide
Gladyshev’s “explanation” of that claim, i.e., the claim attains its meaning and is validated
within the provided evidential statement. Ψ−1 is based on the traversal from F to the initial
observation of the printer’s queue as defined in “main”. If such a path were to exist, then
Alice’s claim would have had an explanation. pby (preceded by) is the Forensic Lucid
inverse operator of classical Lucid’s fby (followed by). backtraces is an array of event
backtracing computations identified with variables; their number and definitions depend on
the crime scene and are derived from the state machine of Gladyshev [312].
The transition functions presented have to be written by the investigator. While they
are significantly shorter and more manageable than the Common Lisp implementation by
Gladyshev [137], the usability of the language can still be improved by allowing visual DFG
composition of these functions, resorting to the normal ψ flow, as a part of the future work (Appendix E).
Ψ−1 is defined by the investigator exploring known possibilities from the final state to the
possible preceding states; if the graph is available, following the transitions back from the
final state.
9.4 Blackmail Case in Forensic Lucid
The blackmail case example of the implementation steps (Section 2.2.5.2, page 51) modeled in
Forensic Lucid is in Listing 9.7. At the top of the example we construct the hierarchical
context representing the evidential statement and comprising observations. The syntax is
made to relate to the mathematical description of Gladyshev’s FSA, but with the semantics of
Forensic Lucid. Any event property can also be mapped to a human-readable description
via => that can be printed out in a trace. invtrans corresponds to Ψ−1; given all states, the
evidential statement, and Mr. A’s claim as an argument, it attempts to find possible backtrace
explanations within the disk cluster model. The function trans corresponds to ψ [307].
invpsiacme [ S ]( s )
where
backtraces = [A , B , C , D , E , F , G , H , I , J , K , L , M ];
d1 = first s ;
d2 = second s ;
A = if d1 == " A_deleted "
then d2 pby " A " pby " take " else bod fi ;
B = if d1 == " B_deleted "
then d2 pby " B " pby " take " else bod fi ;
C = if ( d2 == " A_deleted " ) && ( d1 != " A " ) && ( d2 != " B " )
then d1 pby " A " pby " take " else bod fi ;
D = if ( d2 == " B_deleted " ) && ( d1 != " A " ) && ( d2 != " B " )
then d1 pby " B " pby " take " else bod fi ;
// d1 in S and d2 in S
E = if ( d1 \ in S ) && ( d2 \ in S )
then s pby " take " else bod fi ;
F = if ( d1 == " A " ) && ( d2 != " A " )
then
[ d2 pby " empty " pby " add_A " ,
d2 pby " A_deleted " pby " add_A " ,
d2 pby " B_deleted " pby " add_A " ]
else bod fi ;
G = if ( d1 == " B " ) && ( d2 != " B " )
then
[ d2 pby " empty " pby " add_B " ,
d2 pby " A_deleted " pby " add_B " ,
d2 pby " B_deleted " pby " add_B " ]
else bod fi ;
H = if ( d1 == " B " ) && ( d2 == " A " )
then
[ d1 pby " empty " pby " add_A " ,
d1 pby " A_deleted " pby " add_A " ,
d1 pby " B_deleted " pby " add_A " ]
else bod fi ;
I = if ( d1 == " A " ) && ( d2 == " B " )
then
[ d1 pby " empty " pby " add_B " ,
d1 pby " A_deleted " pby " add_B " ,
d1 pby " B_deleted " pby " add_B " ]
else bod fi ;
J = if ( d1 == " A " ) || ( d2 == " A " )
then s pby " add_A " else bod fi ;
K = if ( d1 == " A " ) && ( d2 == " A " )
then s pby . d " add_B " else bod fi ;
L = if ( d1 == " B " ) && ( d2 == " A " )
then s pby " add_A " else bod fi ;
M = if ( d1 == " B " ) || ( d2 == " B " )
then s pby " add_B " else bod fi ;
end ; // inv
Listing 9.6: Inverse transition function Ψ−1 in Forensic Lucid for the ACME Printing
Case
os_mra @ es_blackmail
where
// Core context of evaluation
evidential statement es_blackmail = { os_final , os_unrelated };
// List of all possible dimension tags
dimension W : unordered finite nonperiodic
{
" ( u ) " , " ( t1 ) " , " ( o1 ) " ,
" (u , t2 ) " , " (u , o2 ) " ,
" ( t1 , t2 ) " , " ( t1 , o2 ) " ,
" ( o1 , t2 ) " , " ( o1 , o2 ) "
};
dimension Wd : unordered finite nonperiodic
{
" d (u , t2 ) " , " d (u , o2 ) " , " d ( o1 ) " , " d ( t1 , t2 ) " ,
" d ( t1 , o2 ) " , " d ( o1 , t2 ) " , " d ( o1 , o2 ) "
};
I = W \ union Wd ;
// Mr. A's story
observation sequence os_mra = { $ , o_unrelated_clean , $ , o_blackmail , $ };
// Crime scene description
observation sequence os_final = { $ , o_final };
observation sequence os_unrelated = { $ , o_unrelated , $ , \0( I ) , $ };
observation o_final = ("(u, t2)" => "threats in slack of unrelated letter", 1);
observation o_unrelated_clean = ("(u, o1)", 1);
backtraces = invtrans [ I ]( es_blackmail \ union os_mra ) ;
trans [ I ]( c , s )
where
if ( c == " ( u ) " && s .#.# == ( " ( o1 , o2 ) " , 0) )
then ( " (u , o2 ) " , 1) fby trans [ next I ]( next s .#) ;
else if ( c == " (u , t2 ) " && s .#.# == ( " (u , o2 ) " , 1) )
then ( " (u , t2 ) " , 2) fby trans [ next I ]( next s .#) ;
else if ( c == " d (u , t2 ) " && s .#.# == ( " (u , o2 ) " , 1) )
then ( " (u , t2 ) " , 1) fby eod ;
else if ( c == " ( u ) " && s .#.# == ( " (u , t2 ) " , 2) )
then ( " (u , t2 ) " , 1) fby eod ;
end ;
invtrans [ I ]( es )
where
backtraces = [ A , B ];
A = o_final
pby [ es .# , I : " ( u ) " ]
( " (u , t2 ) " , 2) pby [ es .# , I : " (u , t2 ) " ]
( " (u , o2 ) " , 1) pby [ es .# , I : " ( u ) " ]
( " ( o1 , o2 ) " , 0) ;
B = o_final pby [ es .# , I : " d (u , t2 ) " ] ( " (u , o2 ) " , 1) pby [ es .# , I : " ( u ) " ] ( " ( o1 , o2 ) " , 0) ;
invtrans [ next I ]( next es .#]) ;
end ;
end
Listing 9.7: Blackmail Case modeling in Forensic Lucid
9.5 MAC Spoofer Investigation
There are programs that can be used to spoof an address but to physically change
it is hard. The address is coded into the [network] device on the computer. If
you changed it to another one, there is a chance of conflict with a valid address
on some other device. Should you get a major hub device, you will be getting
a service disconnect as soon as they trace it. Depending on what you are doing
when detected, you may get visited by people driving plain looking cars, wearing
nice suits with bulges under the coats. —Yahoo! Answers user “Laid back”1
The case presented here is designed to work in an operational environment and covers a
significant number of the Forensic Lucid features in encoding the related evidence from the
realistic data. The author Mokhov designed this case and its resulting programming artifact
of gathering and encoding of the evidence as a part of his work in the Faculty network
administration group (NAG).
A Media Access Control (MAC) address [180]2 spoofer3 analysis and investigation has
to do with the automated processing and reasoning about the possible MAC spoofer alerts
on the Faculty of ENCS network [31]. It is fairly easy to spoof a MAC address by changing
the network card settings on a PC or a laptop, via a virtual machine in most operating
systems, or on most house-grade routers, which allow setting an arbitrary MAC address for a
variety of legitimate needs [329]. However, on the analyst-managed subnets a MAC spoofer
presents a great security concern if a MAC address of a MAC-based port security setting of
a tightly secured desktop in a managed lab or office gets spoofed by a visiting laptop (which
the user is in full control of) as it may potentially gain unauthorized access to restricted
services/resources and/or increase the Faculty’s liability if, for example, the laptop is infested
with malware.
The existing MAC Spoofer Watcher tool, msw, presently does a lot of work in detecting
such possibilities and alerting the network group’s security RT (RT: Request Tracker [504]4 )
queue (along with cellphone notifications) of a possible ongoing incident after a series of
1 http://answers.yahoo.com/question/index?qid=20080322213119AAY8yUf
2 http://en.wikipedia.org/wiki/MAC_address
3 http://en.wikipedia.org/wiki/MAC_spoofing
4 http://www.bestpractical.com/?rt=4.0.8
probes and checks. It goes out of its way to filter out the likely false positives (identified commonly through several iterations of the deployed tool in daily network operations).
However, despite all the filtering, more often than not, a false positive report is generated.
Analysts have to manually (and sequentially) double-check if it is a real spoofer or a false
positive right after the alert is generated to the RT ticket as well as to the analysts’ cellphones
at any time of the day or night. A possible slew of such events can overwhelm the limited
human resources dealing with such events while attending to many other duties.
The msw’s processing is triggered by switch port events registered in the switchlog that
msw observes, when LINK-UP events occur at Layer 2 (of the OSI model [79]). If all of
its preliminary false-positiveness tests incrementally fail, msw generates the RT email alert
at the time. This email is designed to trigger a follow-up investigation by the MSA tool
presented in this section to automate the process and establish the likelihood of a genuine
spoofer or a false positive with high confidence to assist the network security crew in their
daily operations by supplying the results of the analysis and event reconstruction back to the
same RT ticket.
This follow-up investigation includes two types of digital forensics: live and “dead” (passive) [59, 252, 373], simultaneously (see also Section 2.1, page 24). The process re-examines
the evidence used by msw, as well as performs additional live probes, as explained further on.
The live forensics includes re-examining the switch port status, ARP [382]5 cache, ping, and
finger replies, etc. In parallel, the past logs on the switch port link activity, user activity
syslog, DHCP logs, netflows surrounding the time of the incident are also examined. These
tools, daemons, and their logs are the witnesses and their witness accounts of the relevant
events are a part of the evidential statement in our case. All evidence is gathered in the common format of Forensic Lucid, and is fed to the “analysis engine” (i.e., GIPSY presented
earlier in Chapter 6).
It is possible that more than one RT ticket/alert is generated for the same host in a short
period of time—currently that means all “offences” will be processed possibly concurrently
with the gradually growing evidence with each offence. That means the current and the recent past evidence will have a disadvantage of multiple handlings of logically the same case;
5 http://en.wikipedia.org/wiki/Address_Resolution_Protocol
the subsequent runs, however, would gather all the previous and the new evidence, thereby
potentially increasing the confidence in the subsequent analyses. Currently, there is no synchronization between the multiple concurrently running investigations working independently
on the same logical case; however, the “phase 2” design calls to leverage GIPSY’s implementation of the eductive computation model (Chapter 6), where the already-computed event
reconstruction results at the same forensic contexts (Section 7.2.3.1.3, page 164) are cached
in the DST (Section 6.2.2.1.7, page 147), speeding up later runs. This implies that a GIPSY
instance should be running on standby, waiting to process such requests, whereas currently
we spawn a GIPSY instance.
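A minimal Java sketch of the intended caching behavior (memoizing event reconstruction results keyed by their forensic context so that repeated offences on the same logical case reuse earlier computations) follows; the class and key names are illustrative assumptions, not the actual DST implementation.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Illustrative sketch of caching event-reconstruction results by forensic context;
    // the real DST-backed store in GIPSY is distributed and far more elaborate.
    public class ReconstructionCacheSketch {
        private final Map<String, Object> cache = new ConcurrentHashMap<>();

        // contextKey would canonically encode the evidential statement (e.g., host plus evidence hash).
        public Object evaluate(String contextKey, Function<String, Object> reconstruct) {
            // Already-computed results at the same forensic context are reused, speeding up later runs.
            return cache.computeIfAbsent(contextKey, reconstruct);
        }
    }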
9.5.1 MAC Spoofer RT Alert
In essence, when a MAC spoofer report/RT ticket arrives (see, e.g., a shortened version in Figure 70), it constitutes a primary claim "there is a MAC spoofer" with some initial facts and evidence from the ticket (encoded in Forensic Lucid as exemplified in Listing 9.9). We want to automatically verify its true- or false-positiveness to some degree of certainty by gathering all the supporting evidence from various log files (e.g., switchlog, activity log, argus/netflow data, etc.) in the one common format of Forensic Lucid and reasoning about it. Subsequently, we send a follow-up report message with the analysis and detailed event reconstruction results. In the case of a likely true positive, other things may be automatically verified or established, such as who the likely perpetrator is and their malignity, whether NFS access was attempted, etc.
The proposed analyzer of such reports is designed to work complementary to the aforementioned MAC Spoofer Watcher (msw) tool [31].
9.5.2 Speculative Claim Evaluation of a Possible MAC Spoofer
Here we present what we call a two-claim solution to the possible MAC spoofer analysis
problem. The automation assumes two main parallel routes of evaluation of a possible MAC
spoofer by evaluating two derived claims speculatively and simultaneously: (1) “there is a
MAC spoofer" and (2) "the spoofer claim is a false positive". The expected outcomes to be proved with the event reconstruction trace are either (a) "(1) is true and (2) is false", or (b) "(1) is false and (2) is true". (a) or (b), if successful, are then said to be consistent, in essence cross-validating and supplementing each other. Such a parallel evaluation can also give preliminary partial results faster and is novel in comparison to traditional approaches [135, 136, 137].
From: [email protected]
Subject: [encs.concordia.ca #259835] Possible MAC spoofer: flucid.encs.concordia.ca (switch1 Fa0/16)
Date: Wed, 1 Aug 2012 07:07:07 -0400
Wed Aug 01 07:07:07 2012: Request 259835 was acted upon.
Transaction: Ticket created by [email protected]
Queue: EX-Security
Subject: Possible MAC spoofer: flucid.encs.concordia.ca (switch1 Fa0/16)
Owner: Nobody
Requestors: [email protected]
Status: new
123.45.67.89
flucid.encs.concordia.ca
Room: H123
Jack: 22
Switch Port: switch1 Fa0/16
PID of the testing macspoofwatch flucid.encs.concordia.ca [123.45.67.89] (switch1 Fa0/16): 22814
Test data:
result refused
Figure 70: Example RT ticket alerting of a possible MAC spoofing attempt
There are, of course, the pathological high-conflict cases (c) and (d), where (1) and (2) are both true or both false respectively, leading to a contradiction (the claim is true and it is a false positive, or the claim is false and it is not a false positive). This means either the evidence is insufficient or the "crime scene" modeled is not correct. Both (1) and (2) work with the same evidence (analogous to the cases built by the prosecution and the defence in a court of law), but their starting claims take opposing views.
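To make the two-claim evaluation concrete, the following is a minimal illustrative sketch (in Python, which is not the implementation language of MSA; the function name is hypothetical) of how the four outcome combinations (a)-(d) can be classified once both claims have been evaluated:

def classify_two_claim(spoofer_claim, false_positive_claim):
    """Classify the outcome of evaluating the two derived claims:
    (1) "there is a MAC spoofer" and (2) "the spoofer claim is a false positive"."""
    if spoofer_claim and not false_positive_claim:
        return "(a) consistent: likely a genuine spoofer"
    if not spoofer_claim and false_positive_claim:
        return "(b) consistent: likely a false positive"
    # (c)/(d): both true or both false -- a contradiction; the evidence is
    # insufficient or the modeled "crime scene" is incorrect.
    return "(c)/(d) conflict: re-examine the evidence and the model"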
9.5.3 Report Analysis by Human Experts
A human expert manually examining the evidence traditionally goes through the following steps sequentially, occasionally taking shortcuts by omitting some of the steps when something is obvious, or performing additional ones to increase confidence in the analysis results:
1. Check the switch port is still up
2. Check the host responds to telnet with memory and hardware commands (the custom
commands are designed to return expected values)
3. Check switchlog for LINK-UP events regarding the switch port in question
4. Delete ARP entry for the host from the ARP cache with arp (for further checks)
5. Check the host responds to ping, finger (both should respond as expected, else the
firewall configuration is non-standard)
(a) If ping answers, check the TTL values for likely Linux (64) or Windows (128) hosts6, or someone plugged in a router (63 or 127, i.e., decremented by 1 at each hop) [39, 456]
(b) No answer to ping nor ARP cache entry re-appears, likely machine no longer up
due to a quick reboot or ghosting; often leading to a false-positive
(c) No answer to ping, but ARP query returns successfully. A good indicator of a
likelihood of a real MAC spoofer (the machine acquired an IP, is up, but does not
reply as it should)
6. Check the host responds to nbtscan for Windows hosts (should return one of the known
strings)
7. Attempt to ssh to the host (should accept certain logins and respond appropriately)
8. Check activity log for boot and login events for the host
9. Optionally check for swpvios for this port in the recent past
10. Check argus/netflow logs for connections made from the host after the spoofer report
for potential illicit network activity
6 http://www.kellyodonnell.com/content/determining-os-type-ping
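As an illustration of the TTL heuristic in step 5(a), the following hedged sketch (Python, with Linux-style ping flags; it is not part of msw or MSA) parses a single ping reply and guesses the host type:

import re
import subprocess

def ping_ttl(host):
    """Return the TTL seen in one ping reply, or None if there is no answer."""
    out = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                         capture_output=True, text=True).stdout
    match = re.search(r"ttl=(\d+)", out, re.IGNORECASE)
    return int(match.group(1)) if match else None

def guess_from_ttl(ttl):
    # 64 -> likely Linux, 128 -> likely Windows; 63/127 suggest one extra hop,
    # i.e., a router or NAT device plugged into the wall jack.
    if ttl == 64:
        return "direct host, likely Linux"
    if ttl == 128:
        return "direct host, likely Windows"
    if ttl in (63, 127):
        return "possible router/NAT in front of the host"
    return "unusual TTL: inspect manually"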
9.5.4 Augmented Parallel Algorithm
We propose to automate and enhance the manual process presented in the preceding section with a parallel algorithm for MSA that does extra verification, evidence gathering, and subsequent reporting, all in near-real time.
The augmented checking/investigation algorithm for the automated solution is given in Algorithm 3. In the general case, it applies both live-forensics (probing and checking the accused MAC spoofer while it is active on the network) and dead-forensics (examining some of the logs after the fact) techniques simultaneously. It should be noted that, while the algorithm is depicted in the traditional sequential manner, many of the steps related to both live and dead forensics data gathering are designed to be done concurrently, which commonly yields a performance improvement over a human expert doing the same.
Additional evidence is gathered with an nmap scan to fingerprint the OS and open ports. The DHCP requests are then gathered to see whether the offending laptop (suddenly) queries DHCP for the IP by default, while not long before the legitimate host made its own DHCP requests (at known intervals or, if rebooted frequently, with the additional log entries "booted successfully in OS"). This will not work if the laptop has been intentionally configured with a static IP, but it can provide additional confidence if DHCP was used (which is the default in many cases), as sketched below. An additional nbtscan query for further evidence, à la msw, is also made.
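A rough sketch of that DHCP-timing check follows (Python for illustration only; the event representation is an assumption, not the actual DHCPLogForensicLucidEncoder format):

from datetime import timedelta

def dhcp_burst_after_linkup(dhcp_events, linkup_time, window_minutes=10):
    """Return True if a DHCPREQUEST appears shortly after the LINK-UP event,
    which adds confidence when the legitimate host normally renews its lease
    at known intervals. dhcp_events is a list of (timestamp, message) pairs."""
    window = timedelta(minutes=window_minutes)
    return any("DHCPREQUEST" in message
               and linkup_time <= timestamp <= linkup_time + window
               for timestamp, message in dhcp_events)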
9.5.5 Use and Misuse Cases
The UML Use Case diagram in Figure 71 depicts the most common use and misuse cases of the ENCS network in the context of MAC spoofer investigations. Misuse cases are in gray, following Alexander's notion of Misuse Cases for requirements specification ([9], Section 9.1.2, page 247) to visualize the threat and mitigation scenarios as well as the primary and supporting actors involved. A lot of investigative work is shared between the network administrator and the MAC Spoofer Analyzer (MSA), where the latter automatically gathers all the evidence in one place, encodes it, invokes the analysis computation via GIPSY, and then generates two reports: the summarized evidence and the designed upcoming automated reasoning report.
The MAC spoofer misuse cases illustrate the typical actions a person with a UM laptop would do to spoof a MAC address. While presently MSA does not shut down the switch port, the design includes this use case for completeness; this action depends on the result of the analysis and the confidence in it. The corresponding sequence diagram (Figure 73) is described in Section 9.5.8.
1  begin
2      Bootstrap by an RT alert notification of a possible MAC spoofer via a procmail handler;
       // In the below, the live and dead forensics processing in the begin-end blocks is done in parallel processes, including the individual checks.
       // Each check/evidence collector, as a product, encodes its output in Forensic Lucid as an observation sequence. All such sequences are later combined into an evidential statement for further analysis.
       // Live network MAC spoofer forensics
3      begin
4          Check the switch port is still up;
           // SL5 and Windows 7 only should be there
5          Check host OS and ports open with nmap;
6          Check how the host responds to telnet with memory and hardware commands;
           // Should respond as expected, else the firewall configuration is non-standard
7          Delete ARP entry for the host from the ARP cache with arp;
8          Check how the host responds to ping, finger;
9          begin
10             if ping answers then
11                 Check the TTL values for likely Linux (64) or Windows (128) hosts, or someone plugged in a router (63 or 127);
12             end
13             else
14                 No answer to ping nor ARP cache entry re-appears: likely the machine is no longer up due to a quick reboot or ghosting; often leading to a false positive;
15             end
               // May increase confidence in false-positiveness
16             begin
17                 Reaffirm possible boot and patching in the activity log;
18             end
19             No answer to ping, but the ARP query returns successfully: a good indicator of the likelihood of a real MAC spoofer (the machine acquired an IP, is up, but does not reply as it should);
20         end
21         Check how the host responds to nbtscan;
           // Should be allowed internally
22         Attempt to ssh to the host;
23     end
       // "Dead" network MAC spoofer forensics
24     begin
25         Check switchlog for LINK-UP events regarding the switch port in question;
26         Check activity log for boot and login events for the host;
27         Optionally check for swpvios [31] for this port in the recent past;
28         Check argus/netflow logs for connections made from the host after the spoofer report for potential illicit network activity;
29         Check the DHCP requests for the host;
30     end
       // Analysis
31     begin
32         Gather evidence from all the checks and encode it in Forensic Lucid;
33         Invoke analysis engine;
34     end
35     Generate a report;
36 end
Algorithm 3: Augmented MAC spoofer checking/investigation algorithm
Figure 71: Use and Misuse Cases in MAC spoofer investigations
9.5.6 False Positives
The typical false-positive spoofer detections include:
1. Unusually slow-booting Windows hosts (often patching right afterward and rebooting).
2. Older, outdated OS images that were rarely booted and left unpatched/not re-imaged for a long time.
3. Accidental user-managed (UM) host (with the UM image) on a trusted virtual LAN (VLAN) [31, 70] (either mistaken OS image/IP address or switch port configuration settings).
4. Host in the middle of ghosting or in the ghosting console without a recognized NetBIOS
string on a trusted VLAN.
5. Exceptional printers on the trusted VLAN.
6. Malfunctioning host (hard drive or memory going bad).
While not all can be handled in an automated manner easily, the proposed MSA tool is
designed to be able to identify cases 1, 2, and 6 in its follow-up investigation.
9.5.7 Components
The MSA system’s design consists of the following components/modules acting together in
unison (sequentially or in parallel):
1. Procmail [481] handler
2. RT ticket email to Forensic Lucid encoder
3. Live and log data collector/Forensic Lucid encoder (activity, switchlog, argus [390],
DHCP, and others)
4. (under design consideration) Network database query component for previously known
information on the hosts, locations, switch ports, and MAC addresses
5. Forensic Lucid processor (GIPSY) invocator
6. Output/conclusion report generator
9.5.7.1 Procmail Handler
This component consists of two parts. The first is the .procmailrc recipe for procmail [481] (http://en.wikipedia.org/wiki/Procmail) (see Figure 72), which looks for Possible MAC spoofer in the Subject: and a From: matching the regular expression .*(nobody|root(\+[a-z\d-]+)?|nag(db)?)@ [403], and hands over a copy of that RT ticket email to the handler (the variable $HOME is appropriately set to the installation directory). The second is the handler, a script that (1) parses the Subject: header for the RT ticket number, host, switch, and port number, (2) checks in its internal storage whether this ticket has already been handled, to avoid duplicate or useless handling of replies or comments to the ticket by people in case the From: did not filter enough in (1), (3) if this is a new spoofer report claim, saves the RT email into a .rt file, a text file named after the RT ticket number and containing the RT email, e.g., 123456.rt (see Figure 70 for an example), and (4) connects via ssh to the primary compute and control host and starts the Collector/Encoder component there.
:0c:mac-spoofer-analyzer.lock
* ^Subject: .*Possible MAC spoofer
* ^From: .*(nobody|root(\+[a-z\d-]+)?|nag(db)?)@
| $HOME/bin/mac-spoofer-alert-handler | ...
Figure 72: Procmail handler trigger recipe (rule)
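For illustration, a sketch of step (1) of the handler, parsing the Subject: header into the ticket number, host, switch, and port (Python here; the real handler is a separate script and the exact regular expression is an assumption):

import re

SUBJECT_RE = re.compile(
    r"\[.*? #(?P<ticket>\d+)\] Possible MAC spoofer: "
    r"(?P<host>\S+) \((?P<switch>\S+) (?P<port>\S+)\)")

def parse_subject(subject):
    """Extract ticket, host, switch, and port from an RT alert Subject: line."""
    match = SUBJECT_RE.search(subject)
    return match.groupdict() if match else None

# Applied to the Subject: of Figure 70, this yields:
# {'ticket': '259835', 'host': 'flucid.encs.concordia.ca',
#  'switch': 'switch1', 'port': 'Fa0/16'}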
9.5.7.2 Collector/Encoder
The collector/encoder script consists of multiple modules and subcomponents that collect information from various data sources (mostly logs or probes on different hosts) and convert it into an evidential statement (es) context format encoded in Forensic Lucid. The script, mac-spoofer-evidence-collector, is a multiprocess Perl application that calls upon various modules (listed below) to gather the corresponding available evidence.
There are three primary API methods that the main collector/encoder dynamically discovers and calls for each of the following modules: collect(), encode(), and getType(). The collect() method implements a specific data-gathering mechanism (from a log file on some host, or a direct live probe, around the initial contextual point of interest in meta-Lucid terms, i.e., as a P (Section 7.2.3, page 159) analogy in mac-spoofer-evidence-collector in Perl, used as the initial filtering criteria) and stores the gathered data (RT-number.collector-type). The encode() methods selectively encode the collected data into Forensic Lucid with some preprocessing and filtering following Occam's razor principles and normalization (for easier correlation of timestamps and other data items of the same type, such as MAC addresses). Anything that is not encoded or not recognized by the encoder for a particular data log is still preserved, either as a simple observation in an observation sequence or in a comment section of the corresponding Forensic Lucid fragment, such that a human investigator can review and fine-tune it later. getType() is simply a way for the main collector/encoder to tell which module was called, and its value is used as a file extension. All encoded files have an additional extension of .ctx, denoting context files (containing observation sequences of interest). The main script then check-sums all the collected and encoded data with sha1sum. A sketch of the module contract appears below.
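The following sketch mirrors the described three-method module contract (the production collector is a multiprocess Perl application; this Python rendering, its file paths, and the encoding details are assumptions for illustration only):

class SwitchLogCollector:
    """An example evidence collector/encoder module following the described API."""

    def getType(self):
        # Lets the main collector/encoder tell modules apart; also used as the
        # file extension for the stored data (e.g., RT-number.switchlog).
        return "switchlog"

    def collect(self, context):
        # Gather raw data around the initial contextual point of interest
        # (host, switch, port, t_ar); here we simply filter a log file.
        with open("/var/log/switchlog", errors="ignore") as log:
            return [line for line in log
                    if context["switch"] in line and context["port"] in line]

    def encode(self, raw_lines):
        # Selectively encode the collected lines into a Forensic Lucid
        # observation sequence (.ctx); unrecognized data would be preserved
        # in a comment section for later human review.
        body = ",\n  ".join("([msg:%r], 1, 0)" % line.strip() for line in raw_lines)
        return "observation sequence switchlog_os =\n{\n  " + body + "\n};\n"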
1. RTForensicLucidEncoder is the module that is the very first to be invoked to parse
and encode the initial RT ticket into a claim (see Listing 9.9 as an example). In
particular, it extracts the date, hostname, IP, switch, and port context point data that
are used for subsequent lookups in the related logs, live data, and database entries that
follow to gather all relevant contextual evidence surrounding the initial RT report. The
other concrete collector/encoder modules search for data temporally near this context
point.
2. SwitchLogForensicLucidEncoder uses the date, switch, and port information to look
for related events in the switchlog and encode the events of interest such as link state
changes, swpvios, etc.
3. ActivityLogForensicLucidEncoder uses the date and host information to look for
host bootups, shutdowns, patching, and user logins/logouts.
4. MSWForensicLucidEncoder encodes log entries produced by msw. The older msw did not produce non-STDERR log file entries, only errors and email notifications. To be able to replay some of the (normally real-time) data from the past, we need an event trace/decision log. This is especially relevant for development and testing on past cases while no actual spoofing is going on. Thus, as a part of this work, msw was itself augmented with extra logging functionality and, while at it, made more Forensic Lucid-friendly to simplify extraction, by logging sub-expressions directly in the Forensic Lucid context format. This approach compensates somewhat
for the fact that real-time data for nbtscan and other probes and the like are not
available in the offline playback. Furthermore, those data serve as additional evidential
observations to improve the confidence in the subsequent analysis.
5. DHCPLogForensicLucidEncoder uses the date and host information to look for the
host’s DHCP requests and the corresponding DHCP protocol exchange messages.
6. ARPPingForensicLucidEncoder uses arp and ping to gather the live host presence
evidence, which is especially useful in the presence of a firewall on the spoofing host.
7. NmapForensicLucidEncoder uses nmap to gather the live OS and open ports evidence.
8. FingerForensicLucidEncoder uses finger to gather the live OS and login evidence.
9. SSHForensicLucidEncoder uses ssh to gather the live “genuineness” test evidence.
10. ArgusForensicLucidEncoder uses Argus commands to gather netflow summaries, etc., as evidence of network activity prior to the report arrival time t_ar, going back at least to t_ar − t_linkup, primarily because a number of probes are done between the LINK-UP event [68, 70, 343, 344] and the ticket arrival time to weed out the majority of false positives.
11. NbtscanForensicLucidEncoder uses nbtscan to gather the live NetBIOS evidence
(for Windows hosts) for work groups and the MAC address.
12. SWMForensicLucidEncoder uses the switch management swm [31] to check the live
current port status at the time of the check if it is up.
13. EncodingUtils was developed as a helper module for many of the above, to uniformly encode data such as timestamps, MAC addresses, and hostnames, which sometimes vary in their lexical representation across different logs or probes. It therefore has timestampformat(), macformat(), and hostnameformat() methods. For example, timestamps formatted similarly to "Jul 7 15:10:18", "2013-07-07 16:24 EDT", "2013-07-07 13:10:55497751", or "201307020450" become "Tue Jul 9 08:56:56 2013" in the human-readable form for reporting, and a long epoch integer internally. Likewise, MAC addresses have different legal lexical representations used by different vendors, like 00bb335588ff in raw form, 0:bb:33:55:88:ff in Argus (which strips leading zeroes), 00-BB-33-55-88-FF by Microsoft, and 00bb.3355.88ff by Cisco; these are folded into the DHCP ethers format 00:bb:33:55:88:ff (see the sketch below). Hosts are simply formatted into fully-qualified domain names (FQDNs).
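A sketch of the MAC normalization performed by macformat() (Python for illustration; the real EncodingUtils module is Perl, and only the input formats quoted above are considered):

import re

def macformat(mac):
    """Fold vendor-specific MAC notations into the DHCP ethers format."""
    if ":" in mac or "-" in mac:
        # Argus (leading zeroes stripped) or Microsoft dash-separated style.
        parts = [p.zfill(2) for p in re.split(r"[:-]", mac)]
    else:
        # Raw 00bb335588ff or Cisco dotted 00bb.3355.88ff style.
        digits = mac.replace(".", "")
        parts = [digits[i:i + 2] for i in range(0, len(digits), 2)]
    if len(parts) != 6 or not all(re.fullmatch(r"[0-9a-fA-F]{2}", p) for p in parts):
        raise ValueError("not a MAC address: %r" % mac)
    return ":".join(p.lower() for p in parts)

# macformat("00bb.3355.88ff") == macformat("00-BB-33-55-88-FF")
#                             == macformat("0:bb:33:55:88:ff") == "00:bb:33:55:88:ff"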
9.5.7.2.1 Encoding the RT Ticket Claim. The RT ticket claim's evidence is primarily the following:
• Ticket arrival timestamp t_ar, e.g., Wed, 1 Aug 2012 07:07:07 -0400, serves as a contextual point in time from which to move forward and backward to extract relevant events from other evidential sources.
• ipaddr – the possible spoofer's IP address; a filter for logs that use IP addresses only. This works well if the spoofer uses the same IP address as the legitimate host, via DHCP or statically. It partially breaks if the spoofer changes the IP address to that of another host on the same subnet after checking which IPs are unused. (Such behavior is a sign of extreme deliberation and possibly malice by someone who knows what they are doing, as opposed to "scriptkiddie" tools used for mere "Internet access" with a laptop.) This case can still be caught via the other evidential sources and logs that do not use IPs, primarily host/switch/port-based checks. The extra arp checks, which investigate whether the MAC address in DHCP matches the expected IP address talking on the port, will confidently tell whether the spoofer is genuine.
• hostname – possible spoofer’s DNS name; filter for logs that use hostnames.
• switch – uplink switch DNS name where the host is/was connected to.
• port – uplink port on the switch where the host is/was connected to.
• mac – the MAC address being spoofed; for lookups in DHCP, switch, database, and
other sources that have it.
// ...
observation msw_o = (Pspoofer, 1, 0, 1.0, t_ar);
observation sequence msw_claim = { $, msw_o };
// ...
observation sequence msw_counter_claim = { $, not(msw_o), $ };
Pspoofer =
{
  [ host:
    {
      [ hostname: "flucid.encs.concordia.ca" ],
      [ IP: "123.45.67.89" ],
      [ mac: "00:aa:22:bb:44:cc" ],
      [ room: "H123" ],
      [ jack: "22" ]
    }
  ],
  [ switch: "switch1" ],
  [ port: "Fa0/16" ],
  [ t_ar: "Wed, 1 Aug 2012 07:07:07 -0400" ]
};
// ...
Listing 9.9: MAC Spoofer Analyzer’s RT evidential statement context encoding example
The secondary context, encoded hierarchically (primarily for reporting to humans and possibly other uses), includes the room number, the jack number on the wall, and (possibly in the future) extracted hardware/memory/OS information.
The primary context point is used to construct the two-claim solution (see Section 9.5.2) and to gather the evidence from the log files surrounding that context, further filtered by t_ar ± 24 hrs, ipaddr, hostname, switch, port, and mac (a sketch of this filter follows). The RT two-claim is made indirectly by msw, acting as a "prosecution witness".
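A minimal sketch of the ±24-hour window filter around the primary context point (Python for illustration; the field names follow the bullets above and the parsing format is an assumption):

from datetime import datetime, timedelta

RT_FORMAT = "%a, %d %b %Y %H:%M:%S %z"

def within_window(entry_time, t_ar, hours=24):
    """Keep only log entries that fall within t_ar +/- `hours` of the RT alert."""
    return abs(entry_time - t_ar) <= timedelta(hours=hours)

t_ar = datetime.strptime("Wed, 1 Aug 2012 07:07:07 -0400", RT_FORMAT)
entry = datetime.strptime("Wed, 1 Aug 2012 03:00:00 -0400", RT_FORMAT)
assert within_window(entry, t_ar)  # inside the 24-hour window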
9.5.7.2.2 Encoding Log and Probe Evidence. Collection of the other pertinent evidence depends on the context of the initial report described in the preceding section. The modules described earlier collect both live and dead evidence from probes and logs. Not everything possible is collected, to avoid unnecessary complexity in encoding and evaluation; only information that is likely to be helpful in the decision making is kept. The data selection follows the typical criteria a human expert would use to select data for the same task. Following the examples presented earlier in Section 8.5.2, page 236, we illustrate some examples of the encoded evidence in Forensic Lucid for the modules mentioned. Context calculus operators (Section 7.3.3, page 183) help with additional filtering of data deemed unnecessary during computation.
Listing 9.10 shows the msw evidence encoding example. Listing 9.11 shows the empty (no-observation) activity log example. In regular cases, the activity log features the operating system booted and the users logged on. In the presence of users, perp_o can be set to the most likely perpetrator in a follow-up investigation. The finger and ssh live evidence supplements the activity log evidence with similar information on expected banners and possible users, if available. No-observations are recorded similarly.
// ...
// msw evidence, encoded: Jul 22 09:17:31 2013
observation sequence msw_os =
{
  msw_encsldpd_o_1,
  msw_ghost_o_2,
  msw_arp_o_3
};
observation msw_encsldpd_o_1 = ([ switch: "switch1", port: "FastEthernet0/1", ipaddr: "132.205.44.252", hostname: "flucid-44.encs.concordia.ca", encsldpd: false ], 1, 0, 1.0, "Jul 13 14:33:37 2013");
observation msw_ghost_o_2 = ([ switch: "switch1", port: "FastEthernet0/1", ipaddr: "132.205.44.252", hostname: "flucid-44.encs.concordia.ca", ghost: false ], 1, 0, 1.0, "Jul 13 14:33:38 2013");
observation msw_arp_o_3 = ([ switch: "switch1", port: "FastEthernet0/1", ipaddr: "132.205.44.252", hostname: "flucid-44.encs.concordia.ca", arp: true ], 1, 0, 1.0, "Jul 13 14:33:39 2013");
// end of msw evidence
// ...
Listing 9.10: msw encoded evidence example
// ...
// activity log evidence, encoded: Jul 22 09:17:25 2013
observation perp_o = $;
observation sequence activity_os =
{
  activity_o_1
};
observation activity_o_1 = $;
// end of activity log evidence
// ...
Listing 9.11: activity encoded no-observation evidence example
Live probes by nmap (complemented similarly by nbtscan) give a list of open ports and other aspects that are compared to the minimum expected ports and values. Listing 9.12 shows an example of captured and encoded nmap evidence. Samples of ignored lines are kept in the comment section; they play no role in the evaluation but are recorded anyway in case the investigator wants to include some of that data later on.
Listing 9.13 shows an example of the evidence encoded from the DHCP logs to supplement the investigation and provide visibility into the situation.
// ...
// 'nmap' evidence, encoded: Jul 14 21:09:23 2013
observation sequence nmap_os = os_nmap_entries;
observation sequence os_nmap_entries =
{
  ([ protocol_port: 135, protocol: "tcp" ], 1, 0),
  ([ protocol_port: 139, protocol: "tcp" ], 1, 0),
  ([ protocol_port: 445, protocol: "tcp" ], 1, 0),
  ([ protocol_port: 49152, protocol: "tcp" ], 1, 0),
  ([ protocol_port: 49157, protocol: "tcp" ], 1, 0),
  ([ protocol_port: 6002, protocol: "tcp" ], 1, 0),
  ([ protocol_port: 49153, protocol: "tcp" ], 1, 0),
  ([ protocol_port: 49154, protocol: "tcp" ], 1, 0),
  ([ protocol_port: 49156, protocol: "tcp" ], 1, 0),
  ([ protocol_port: 7001, protocol: "tcp" ], 1, 0),
  ([ protocol_port: 7002, protocol: "tcp" ], 1, 0),
  ([ protocol_port: 49155, protocol: "tcp" ], 1, 0),
  ([ protocol_port: 135, protocol: "tcp" ] => "open msrpc Microsoft Windows RPC", 1, 0),
  ([ protocol_port: 139, protocol: "tcp" ] => "open netbios-ssn", 1, 0),
  ([ protocol_port: 445, protocol: "tcp" ] => "open netbios-ssn", 1, 0),
  ([ protocol_port: 6002, protocol: "tcp" ] => "open http SafeNet Sentinel License Monitor httpd 7.3", 1, 0),
  ([ protocol_port: 7001, protocol: "tcp" ] => "open tcpwrapped", 1, 0),
  ([ protocol_port: 7002, protocol: "tcp" ] => "open hbase-region Apache Hadoop Hbase 1.3.0 (Java Console)", 1, 0),
  ([ protocol_port: 49152, protocol: "tcp" ] => "open msrpc Microsoft Windows RPC", 1, 0),
  ([ protocol_port: 49153, protocol: "tcp" ] => "open msrpc Microsoft Windows RPC", 1, 0),
  ([ protocol_port: 49154, protocol: "tcp" ] => "open msrpc Microsoft Windows RPC", 1, 0),
  ([ protocol_port: 49155, protocol: "tcp" ] => "open msrpc Microsoft Windows RPC", 1, 0),
  ([ protocol_port: 49156, protocol: "tcp" ] => "open msrpc Microsoft Windows RPC", 1, 0),
  ([ protocol_port: 49157, protocol: "tcp" ] => "open msrpc Microsoft Windows RPC", 1, 0),
  ([ mac: "00:13:72:xx:xx:xx" ] => "(Dell)", 1, 0),
  ([ os: "Microsoft Windows 7|2008" ], 1, 0),
  ([ hops: 1 ], 1, 0)
};
// Unencoded data
/*
Starting Nmap 6.25 ( http://nmap.org ) at 2013-07-14 21:08 EDT
NSE: Loaded 106 scripts for scanning.
NSE: Script Pre-scanning.
Initiating ARP Ping Scan at 21:08
Scanning xxx.xxx.xx.xx [1 port]
Completed ARP Ping Scan at 21:08, 0.00s elapsed (1 total hosts)
Initiating SYN Stealth Scan at 21:08
Scanning xxx.xxx.xx.xx [1000 ports]
*/
// end of 'nmap' evidence
// ...
Listing 9.12: nmap encoded evidence example
// ...
// DHCP evidence, encoded: Jul 14 21:08:13 2013
observation sequence dhcpd_os =
{
  dhcp_log_o_1,
  dhcp_log_o_2,
  dhcp_log_o_3,
  dhcp_log_o_4,
  dhcp_log_o_5
};
observation dhcp_log_o_1 = ([ dhcpmsg: "DHCPOFFER", direction1: "on", ipaddr: "xxx.xxx.xx.xx", direction2: "to", mac: "xx:xx:xx:xx:xx:xx", via: "xxx.xxx.xx.x" ], 1, 0, 1.0, "Jul 14 11:58:03 2013");
observation dhcp_log_o_2 = ([ dhcpmsg: "DHCPREQUEST", direction1: "for", ipaddr: "xxx.xxx.xx.xx", dhcpd: "xxx.xxx.xx.xxx", direction2: "from", mac: "xx:xx:xx:xx:xx:xx", via: "xxx.xxx.xx.x" ], 1, 0, 1.0, "Jul 14 11:58:03 2013");
observation dhcp_log_o_3 = ([ dhcpmsg: "DHCPACK", direction1: "on", ipaddr: "xxx.xxx.xx.xx", direction2: "to", mac: "xx:xx:xx:xx:xx:xx", via: "xxx.xxx.xx.x" ], 1, 0, 1.0, "Jul 14 11:58:03 2013");
observation dhcp_log_o_4 = ([ dhcpmsg: "DHCPINFORM", direction1: "from", ipaddr: "xxx.xxx.xx.xx", via: "xxx.xxx.xx.x" ], 1, 0, 1.0, "Jul 14 11:58:07 2013");
observation dhcp_log_o_5 = ([ dhcpmsg: "DHCPACK", direction1: "to", ipaddr: "xxx.xxx.xx.xx", mac: "xx:xx:xx:xx:xx:xx", via: "bond0" ], 1, 0, 1.0, "Jul 14 11:58:07 2013");
// end of DHCP evidence
// ...
Listing 9.13: dhcp log encoded evidence example
9.5.7.2.3 Collection/Encoding Summary. All the encoded evidence, e.g., for ticket RT12345, is saved into the appropriate files: 12345.rt.ctx (e.g., as in Listing 9.9),
12345.switchlog.ctx, 12345.activity.ctx, 12345.nmap.ctx, and others, plus the incident-modeling transition functions ψ (forward tracing) and Ψ−1 (optimized backtracing; see Section 7.5.4 for their description and definition) in the file mac-spoofer-transition.ipl. These are assembled into the case file 12345.spoofer.ipl for the primary claim "there is a spoofer" and 12345.notspoofer.ipl for the "defence" claim that "this is a false positive", for parallel evaluation.
In the case of reasonable true-positiveness, the design calls for subclaims to be created and evaluated as well: 12345.perp.ipl and 12345.nfs.ipl, to determine who the perpetrator is (attribution) and how malicious they are, based on the previously extracted evidence (e.g., via activity and Argus logs).
At the end of its operation, Collector/Encoder (after checksumming everything) passes all
the collected and encoded data to the Forensic Lucid processor (see the following section)
to do the actual reasoning and event reconstruction computations. A GIPSY instance is
spawned per claim to be evaluated.
9.5.7.3 Forensic Lucid Processor
The Forensic Lucid Processor presently includes the mac-spoofer-flucid-processor script, which feeds the encoded evidence to GIPSY (see Section 6.1, page 129, and Chapter 8), which in turn has a Forensic Lucid parser and a distributed run-time system implemented in Java. This component is designed to do the heavyweight computation. The MSA design includes a provision to act on the results of the analysis if the confidence is high, e.g., by shutting down the switch port or quarantining the IP address.
9.5.7.4 Output/Conclusion Generator
In MSA, this corresponds to the mac-spoofer-reporter, which reports the findings to system administrators. Presently, stage 1 below is already in production; stage 2 reports only in the development environment.
• Decision tree, findings, conclusions; mail with the proper RT subject (under active development).
• Multi-stage reporting mailings as data become available:
1. The gathered evidence first, grouped together in one place. (This is already in operation and is of help early on to any active network security people watching in.)
2. The analysis, which may computationally take a longer time, is delivered in a follow-up analysis update.
9.5.8 Sequence Diagram
The UML sequence diagram shown in Figure 73 depicts the design of a series of synchronous and asynchronous calls by the modules involved. It is based on the components mentioned earlier in Section 9.5.7.2 and onward, their API, and Algorithm 3. All the encoders work asynchronously as child processes (fork()) started by mac-spoofer-evidence-collector. This is because no particular ordering of their execution is required, except that the RT ticket encoding must run first (the other modules have a data dependency on the context point it produces) and the live forensics modules are preferentially started first so that their probes run sooner. All modules produce intermediate raw files and encoded timestamped files; the latter are subsequently collected into one Forensic Lucid script when all the modules return, which is then fed to GIPSY to compile and evaluate for reasoning purposes. When GIPSY returns, the final analysis report is generated. A rough sketch of this process structure follows.
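The following Python sketch only illustrates that process structure (the production collector uses Perl fork(); the module names are those of Section 9.5.7.2, and everything else here is an assumption):

from multiprocessing import Pool

def run_module(name_and_context):
    name, context = name_and_context
    # Each module collects and encodes its own evidence into a .ctx file.
    return "%s.%s.ctx" % (context["ticket"], name)

def collect_all(context):
    # The RT ticket is encoded first: it produces the context point on which
    # all the other modules depend.
    ctx_files = [run_module(("rt", context))]
    live = ["nmap", "arpping", "finger", "ssh", "nbtscan", "swm"]
    dead = ["switchlog", "activity", "dhcpd", "argus", "msw"]
    with Pool() as pool:  # live modules are listed first to start probing sooner
        ctx_files += pool.map(run_module, [(m, context) for m in live + dead])
    return ctx_files  # later merged into one Forensic Lucid script for GIPSY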
9.5.9 Challenges and Limitations
• Short-lived spoofer sessions, so not much live forensic data are available.
• Need to measure true negatives, false negatives, false positives.
• Sysadmin mistakes (assigning a managed VLAN, or replacing the OS image from managed to UM without updating the VLAN), which are nearly impossible to tell apart from genuine spoofing.
9.5.10 Related Work
There is hardly any related work of this exact nature. However, sysadmins and the security community alike naturally tend to produce tools with various degrees of automation.
One such open-source tool is mac-san [3] from Andrew Adams at the Pittsburgh Supercomputing Center, designed to scan hosts on subnets and VLANs for vulnerabilities. It uses both SNMP (to poll switches) and ICMP (via nmap) to determine the active hosts on the network, and then scans them with Nessus [460]. It uses a database for historical security scans as well as for previous associations between MACs and IPs. It also has notification machinery for when the scan fails in the last two scans or so, and it checks for MAC/IP tuple mismatches between scans. It is roughly equivalent to a combination of the tools we use from our script set [31], including Nessus with daily scans of the last active hosts, which are recorded regularly into the database. Like us, they allow for scan exceptions for a given MAC/IP address.
A commercial equivalent of what we are doing includes the recently emerged FireSIGHT
with a FireAMP agent from SourceFire that is normally installed on all client computers.
Figure 73: MAC spoofer analyzer UML sequence diagram
We are considering a possible integration of FireAMP with our toolset: defining an appropriate security policy to mandate FireAMP's presence on all machines, covering the user-managed territory when connecting to our network, and denying the connection if, after a certain grace period, FireAMP is not responsive from a given UM IP.
9.5.11 Ongoing and Future Work
MSA is already a part of the nagtools set [31] in production, which is being actively designed, developed, and maintained in iterative builds. This ongoing process includes the Forensic Lucid reasoning aspects as well as the immediate future plans that follow:
• Provide a direct hook for msw to invoke the reasoner directly rather than via RT. In this case, one can group the relevant .ctx and related files by hostname, IP, or MAC address instead of RT ticket numbers. Some adjustments will need to be made for duplicate handling of the same host across possible prior investigations.
• Provide a hook to swm to shut down the switch port in question in case of high confidence in the detection of spoofing activity.
• Provide a hook to account management to autoblock the offender's account in case of high severity and high confidence (a long-term future goal).
9.6 Summary
We presented a few application examples of Forensic Lucid to show the use of some of its intensional and probabilistic constructs.
The toy examples at the beginning, encoding simple cases or statements, are followed by the more concrete, detailed case of the MAC Spoofer Analyzer (MSA) tool design, where numerous evidence sources are encoded into the Forensic Lucid format, exhibiting both intensional and probabilistic DSTME features.
The code size reduction from Common Lisp ([137]) to Forensic Lucid (Section 2.2.5.1.3, page 48) is approximately 4-fold (545 lines of Common Lisp, not including the DOT Graphviz visualization output of the event reconstruction, vs. 113 lines of Forensic Lucid). Forensic Lucid code is arguably more readable and easier to maintain.
We briefly summarized the data sources of various types used in the investigation experiments and tests.
As of March 2013, the implementation of automatic gathering and alerting of the related evidence in real time in one place received positive feedback from NAG, as it avoids a lot of the manual lookups and data gathering outlined in Section 9.5.3.
Chapter 10
Concluding Remarks and Future Work
This thesis addresses the notion of digital forensics investigation over swaths of digital and non-digital evidence, case and crime scene modeling, and event reconstruction of a crime scene involving computing equipment, incorporating credibility and trustworthiness factors of the evidence and the witnesses into the equation. In a crime investigation, the investigators focus on reconstructing what has possibly happened after arriving at the scene, gathering evidence, and interviewing witnesses. Digital investigation bears some similarities with the traditional process, including proper handling of the evidence (storage, logs), reports, etc. However, the problem with digital evidence is that there are too many devices and too much data to analyze humanly when attempting to reconstruct what has happened, especially in a crime involving computer and network equipment. Forensic Lucid is provided, along with a proposed design of the evaluating GIPSY platform, to alleviate those difficulties based on science and formal methods (to be credible in a court of law): to represent the knowledge of the case, including the evidential description and witness accounts, to assign credibility values to them, all in a common shape and form, and then to validate hypothesis claims against that evidential data.
We likewise presented an in-depth background setting for this work in Part I to provide the necessary historical context on the tools and techniques used, the data sources, and where the approach comes from, as well as to acknowledge the work of predecessors and to provide a reasonably self-contained reference resource to the readers. We also included a complete definition of the syntax and semantics of Forensic Lucid and further explored relevant case studies and design and implementation aspects.
10.1 Objectives Achieved
In reference to the primary thesis objectives stated in Section 1.5.1, we have designed the Forensic Lucid incident specification/scripting language, its formal syntax and semantics, and the related case studies. Specifically:
• Forensic Lucid (Chapter 7)
– Forensic Lucid syntax (Section 7.3, page 166)
– Forensic Lucid operational semantics (Section 7.4, page 192)
– Hierarchical higher-order context specification (Section 7.2.2, page 158)
– Operators and transition functions (Section 7.3.4, page 191)
– Observation specification with credibility (Section 7.4.2, page 197)
• Forensic Lucid parser and semantic analyzer (Section 8.1, page 211)
• Design of the run-time evaluation framework and Forensic Lucid encoders (Section 8.2, page 213)
• Example cases (see Chapter 9, page 244)
10.2 Summary
Through the series of discussions, the definitions of the syntax, operators, and semantics, and some examples of Forensic Lucid applications, we reached a milestone showing the benefits of the intensional approach over the FSA one. As far as the implementing system is concerned, it has the advantage of parallelizing the computation, and it introduces the notions of context and eduction that are absent in the FSA/LISP approach of Gladyshev et al. [135, 136, 137]. Context and eduction allow, respectively, a better expression of (or constraints on) the evidence and its more optimized evaluation. We took advantage of the existing concepts, syntax and semantic rules, and constructs from intensional programming and the Lucid family of languages [161, 304, 305, 362].
We presented the fundamentals of Forensic Lucid, its concepts, ideas, and dedicated purpose—to model, specify, and evaluate digital forensics cases. The process of doing so is simpler and more manageable than with the previously proposed FSA model and its Common Lisp realization (Section 9.6, page 278). We re-wrote in Forensic Lucid (Section 9.3, page 250) one of the sample cases initially modeled by Gladyshev in the FSA and Common Lisp to show that the specification is indeed more manageable and comprehensible than the original [137, 312]. We likewise modeled a specification of the example blackmail case in Forensic Lucid (Section 9.4, page 254). The notions and notations used in Forensic Lucid re-use some of those used by Gladyshev ([136], Section 2.2.5.2, page 51), achieving uniformity and comparability of the approaches [307]. We have likewise written a few simple DSTME-oriented examples illustrating concrete uses of the various language constructs together, matching the background-set examples in Section 9.2, page 248.
We implemented a number of encoders and encoding techniques to automatically represent the evidence from various data sources in the Forensic Lucid format as a case knowledge base for the designed case studies. The most complete of these are the MAC Spoofer Analyzer case, detailed in Section 9.5, page 257, and MARFCAT (Section 5.4, page 110).
Our language is based on more than 35 years of research on the correctness and soundness of programs [25] and the corresponding mathematical foundations of the Lucid language (Chapter 4), which is a significant factor should a Forensic Lucid-based analysis be presented in court in the future. The logical formalisms behind Forensic Lucid date even further back in time (see Chapter 3).
We likewise presented a modular intensional programming research platform, GIPSY, for reasoning tasks over HOIL expressions. The concept of context as a first-class value is central to the programming paradigms GIPSY is built to explore, such as the family of Lucid programming languages. At the time of this writing, GIPSY has various degrees of support for compilation of GIPL, Indexical Lucid, Lucx, JLucid, Objective Lucid, JOOIP, and now Forensic Lucid [302]. We augmented the GIPSY Type System to support forensic contexts. We also substantially refactored its GIPC and GEE frameworks to support multiple semantic analyzers and evaluation backends, respectively, and designed the API for the new backends based on Forensic Lucid, PRISM, and AspectJ in Chapter 8.
10.3 Limitations
We realize, by looking at the presented examples, that the usability aspect still needs to be improved further for easier adoption by investigators, especially when modeling ψ and Ψ−1; this prompts one of the future work items to address it further [312] using visualization techniques (Appendix E).
In Section 3.3.2.1.3, page 72, we mentioned points of criticism of Dempster's rule of combination as a belief fusion operator in the DSTME. That rule favors a shared belief between observations of the same property and ignores the others in the evidential statement from different observation sequences. We partially address this with an averaging fusion operator within an observation sequence and cumulative fusion between any two observations. This appears appropriate enough for our purposes, but a more formal investigation is needed into the selection of the most appropriate fusion operator. Thus, we defined the total credibility of an observation sequence as the average of all the weights in that observation sequence ([313], Definition 53, page 192):
W_{avg} = \frac{\sum_i w_i}{n} \qquad (10.3.0.1)
A less naive way of calculating the weights is to use pre-existing functions. What comes to mind are the activation functions used in artificial neural networks (ANNs), e.g. [313]:
W_{ANN} = \frac{1}{1 + e^{-n w_i}} \qquad (10.3.0.2)
The witness stories or evidence with higher scores of W have higher credibility; lower scores, therefore, indicate less credibility or more tainted evidence [313]. Such a representation would coincide well with machine learning and data mining techniques, where the credibility weights can be learned and updated automatically throughout the investigative run, as done for ANNs.
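As a quick numeric illustration of the two weighting schemes above (a sketch only, not part of the GIPSY run-time):

import math

def w_avg(weights):
    """Total credibility of an observation sequence: the average weight (10.3.0.1)."""
    return sum(weights) / len(weights)

def w_ann(weights):
    """Sigmoid-style per-observation alternative, per Equation (10.3.0.2)."""
    n = len(weights)
    return [1.0 / (1.0 + math.exp(-n * w)) for w in weights]

weights = [0.9, 0.75, 0.5]
print(w_avg(weights))   # 0.7166... -- the sequence's averaged credibility
print(w_ann(weights))   # higher w_i map to scores closer to 1.0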
Another potential limitation that has emerged is the hardcoded notion of ≥ 1/2 from [87], meaning to us "credible enough", as used in the opw operator types. There may be forensic applications and investigations that require higher thresholds, suggesting 1/2 should be an investigator-settable parameter. A possible solution is to introduce such a parameter, e.g., a reserved dimension type w_DSTME, by default evaluating D0, P0 ⊢ w_DSTME : 1/2, that can be altered at run-time to the desired value.
The credibility/belief weight assignment has inherent limitations in the reliability of the investigators assigning the weights, or of the tools' encoders assigning them based on expert estimates. As a corollary, the Forensic Lucid language, with any of its implementing toolsets at this stage, forms a forensic assistant to an investigator or an expert, not an absolute proof system.
The credibility weight assignment also raises usability issues for an investigator deciding between assigning, e.g., 93% or 97%. A possible solution is to add syntactically pre-defined keywords available to investigators for consistent use, e.g., as presented in the research by Kent at the CIA in 1964 [206], quantifying uncertainty with ranges: almost certain (93% ± 6%), probable (75% ± 12%), even chances (50% ± 10%), probably not (30% ± 10%), almost certainly not (7% ± 6%). The ranges can be adapted to be handled with the observation's min and max parameters in this case.
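A sketch of how such pre-defined keywords could populate an observation's min and max parameters (Python for illustration; the numeric bounds are derived directly from the ranges quoted above, the rest is an assumption):

# Qualitative credibility keywords mapped to (low, high) probability bounds,
# which could in turn populate an observation's (min, max) parameters.
KENT_RANGES = {
    "almost certain":       (0.87, 0.99),  # 93% +/- 6%
    "probable":             (0.63, 0.87),  # 75% +/- 12%
    "even chances":         (0.40, 0.60),  # 50% +/- 10%
    "probably not":         (0.20, 0.40),  # 30% +/- 10%
    "almost certainly not": (0.01, 0.13),  #  7% +/- 6%
}

def credibility_bounds(keyword):
    """Return the (min, max) bounds for a qualitative credibility keyword."""
    return KENT_RANGES[keyword]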
It is important to mention that, per Section 10.1, page 280, at this stage we provide the static syntax and semantic analysis and the type system within the GIPSY system, with the run-time completion being a part of the near-future work. Likewise, a complete formal mechanization of the language in a proof assistant (started in [300]) is beyond the scope of this thesis and is also a part of the near-future work, to establish a high degree of confidence in the correctness of the semantics of Forensic Lucid.
Finally, we have not completed the complexity analysis and the resulting metrics and measurements for the presented work's run-time and scalability. Such analysis and evaluation are multifold. In the Lucid family, one such analysis is rank analysis (the dimensionality of expressions), which helps in run-time optimization and was performed in several research publications in the past (see references in Chapter 4). The scalability of GIPSY's distributed GEE run-time has been extensively studied by Ji and colleagues (see references in Chapter 6). The event reconstruction (ERA, including ψ and Ψ−1) implementation within the Forensic Lucid backend is the other major aspect. We began working on the metrics framework within the GIPSY run-time (where Metric itself is designed as an annotation type, part of the GIPSY type system) to study such aspects as well as to optimize the run-time scheduling of GIPSY networks. Thus, rank analysis, run-time scalability analysis, and the ERA complexity analysis (including the individual context operators) form the various facets of complexity analysis to be completed in future work.
10.4 Future Work
Future work will focus on resolving the limitations presented in the preceding section as well as on ongoing related projects.
In general, the proposed practical approach in the cyberforensics field can also be used to model and evaluate the normal investigation process for crimes not necessarily associated with information technology, or any other type of incident investigation [304, 307], as long as there is a way to encode the evidence, the hypotheses, and the incident scene. Combined with an expert system (e.g., implemented in CLIPS [400]), it can also be used to train new staff in investigation techniques [305, 307, 312].
There are, therefore, a number of topics of interest to investigate in subsequent research endeavors. R&D in digital forensics and forensic computing is ever more relevant in the digital age and digital investigation. The need for standards and automation in investigation processes to aid human investigators is paramount. There is still a lot of research to be done before the formal concepts mature and become standard practice in the field. A particular aspect under consideration is the idea of self-forensics with the proposed Forensic Lucid language standard for autonomic software and hardware systems in all kinds of vehicles, craft, and on the Internet (details of those project propositions are in Appendix D).
10.4.1 Tool Integration
There are a number of tools that the Forensic Lucid approach can be integrated with for
better incident investigation. While MSA (Section 9.5, page 257) is already integrated, there
are a number of others where it makes sense to integrate or develop plug-ins for:
• Augment common software packages and custom scripts to log centrally (e.g., via syslog) directly in the Forensic Lucid format as an option. This includes apache,
Tomcat, identd/authd, sshd, iptables, Event Log, etc. Additionally, a plug-in for
Splunk [439] to be Forensic Lucid-aware in its searches.
• A natural integration with the HIVE (an open infrastructure for malware collection
and analysis) [62] as well as with an incident analysis system NICTER and its engines
based on data mining techniques [104, 183].
• Forensic Lucid is also a knowledge representation language. Translation from other
knowledge representation languages, such as Vassev’s KnowLang is also envisioned for
the purposes of a common format for investigations.
• Forensic context specification with Forensic Lucid for case analysis, evaluation, and
event reconstruction in ubiquitous computing platforms.
• Integration with Ftklipse [225], GATE [462] and JPF ([21, 22, 85], Section 2.1.3,
page 28).
• Integration with JDSF ([271, 294], Section D.3.3, page 363).
• Design and implement the Forensic Data Flow Graph tool as an extension to RIPE in GIPSY to improve the usability of the system. The activity of programming a transition function, presented in Section 7.5.4, may be tedious for experts who are not computer programmers, but a Lucid program can be modeled as a data-flow graph. GIPSY had an implementation of a DFG tool [89] that allowed basic programming in Indexical Lucid using a graphical DFG representation, providing bidirectional translation from a DFG to Indexical Lucid code and back. We propose to extend the tool to include Forensic Lucid constructs and make it more convenient for non-programmers to use. This would enable an investigator to construct the transition functions graphically more easily [305].
Appendix E is dedicated to further discussion and progress on this aspect.
10.4.2 More on Formalization
• Refine the semantics of Lucx’s context sets and their operators to be more sound,
including Box, so some of our formalisms can be based on it.
• not applied to a transition event e ∈ I (where I is the set of all events) would mean not e ⇔ I \ e, i.e., all the other events currently in I except e, for easier specification. For example, if I = {"add A", "add B", "take"} and e = "add A", then not e would mean {"add B", "take"}, using the set difference \. I itself is fully specified in the program by the investigator, but could be made to be learned dynamically as the Forensic Lucid program executes.
• Complete an RFC1 on Forensic Lucid and release a full standard Forensic Lucid
specification (similar to that of Java [141]).
• Transition Function as an Operator Stream.
In order to model composite transition functions, we will define a stream of operators of
application in a scenario. Thus, each path in the graph is a finite stream of application
of Forensic Lucid operators. The operator streams can be derived from the corresponding DFGs modeled by the investigator. Since TransLucid [378] (mentioned
earlier in Section 4.1, page 76) does have notions of function streams and ranges, this
opens a possible area of collaboration.
• cluster operator.
Discrete statistical contextual points are of interest to define and navigate in order to represent median and mean clusters (e.g., as in signal processing in MARF).
1 http://www.ietf.org/rfc.html, http://www.rfc-editor.org/
• HOIFL.
With the resulting combination of logics and theories presented here, we may arrive at something we call Higher-Order Intensional Fuzzy Logic (HOIFL). The founding formalisms we borrow are instantiated in Forensic Lucid (see Chapter 7). More specifically, the HOIFL variant in this thesis combines intensionality and its concrete instantiation in the GIPSY type system (Appendix B), Dempster–Shafer belief functions, the joined and modified rules of weighted combination of computations in Gladyshev's approach, and intensional programming in Lucid.
While giving a complete rigorous treatment of HOIFL is beyond the scope of this thesis, we can roughly sketch that HOIFL includes the necessary intensional logic constructs presented earlier, belief mass assignment and probability operators, hierarchical contexts, first-order intensional logic (FOIL), and a type system, in addition to the standard higher-order extensional logic constructs. To visualize HOIFL as a conservative minimal extension:
min(HOL, FOIL, Theory of Types, HOIL+DSTME) ⊆ HOIFL
A sketch (including Figure 74) of the formalism thus would begin as:
– Model M = ⟨Q, I, R, D, D_I, m⟩.
– Q is the set of possible worlds (states).
– I is the set of statements or events.
– D is a set of intensional objects ([508], Section 3.2, page 59); D_I is the set of their extensions.
– R is a set of accessibility relations D_I → D_I (such as transitions, transition functions, or operators, e.g., next, fby, ψ, Ψ, ... ∈ R) between states q ∈ Q.
– m is the belief mass interpretation assigned to each object in the domain, w = m(I, Γ), Γ ∈ Q.
– V is a valuation function assigning propositions from m to each world q.
A complete formulation of the axiomatization is to be done in future work beyond this thesis. We can capitalize on a number of earlier and recent referenced works [112].
M, \Gamma \models^w_v P(x_1, \ldots) \Leftrightarrow V(\langle v(x_1), \ldots \rangle, \Gamma) = w \qquad (10.4.2.1)
M, \Gamma \models^w_v X \wedge Y \Leftrightarrow M, \Gamma \models^w_v X \mbox{ and } M, \Gamma \models^w_v Y \qquad (10.4.2.2)
\ldots \Leftrightarrow \ldots
M, \Gamma \models^w_v \Box X \Leftrightarrow M, \Gamma \models^w_v X \mbox{ for every } \Delta \in Q \mbox{ with } \Gamma R \Delta \qquad (10.4.2.3)
M, \Gamma \models^w_v \Diamond X \Leftrightarrow M, \Gamma \models^w_v X \mbox{ for some } \Delta \in G \mbox{ with } \Gamma R \Delta \qquad (10.4.2.4)
\ldots \Leftrightarrow \ldots
M, \Gamma \models^w_v next(X) \Leftrightarrow M, \Gamma \models^w_v \lambda n . X(X + 1) \mbox{ and } w \ge 1/2 \qquad (10.4.2.5)
\ldots \Leftrightarrow \ldots
M, \Gamma \models^w_v P(X) \ge P(Y) \Leftrightarrow M, \Gamma \models^w_v X \mbox{ at least as probable as } Y \qquad (10.4.2.6)
M, \Gamma \models^w_v P(X) \ge \top \Leftrightarrow X \mbox{ has a probability of } 1.0 \qquad (10.4.2.7)
M, \Gamma \models^w_v P(X) \ge \neg P(X) \Leftrightarrow X \mbox{ has a probability of at least } 1/2 \qquad (10.4.2.8)
\ldots \Leftrightarrow \ldots
M, \Gamma \models^w_v bel(X) \Leftrightarrow M, \Gamma \models^w_v X \mbox{ has a belief of } \sum_{B \mid B \subseteq X} m(B) \le P(X) \qquad (10.4.2.9)
M, \Gamma \models^w_v pl(X) \Leftrightarrow M, \Gamma \models^w_v X \mbox{ has a plausibility of } \sum_{B \mid B \subseteq X} m(B) \le P(X) \qquad (10.4.2.10)
\ldots \Leftrightarrow \ldots
Figure 74: HOIFL initial sketch formulation
For example, Muskens proposed intensional models for the Theory of Types [17, 328] in 2007. Our logic axiomatization will draw from the recent (2003–2011) existing axiomatizations and semantic formalizations of FOIL [113], probabilistic logics [87, 155], ADM process logic [4], and Halpern and Pucella's logic for reasoning about evidence [158].
• Isabelle/HOL [339, 372, 518] is a proof assistant that works with Higher-Order Logics (HOLs). The author is using it to assist with formalizing and proving Forensic Lucid constructs, derived from the semantic rules, operators, and core language constructs. The end goal of this exercise is to establish a solid, formally provable base and avoid the mistakes of an equivalent manual exercise [300]. Thus, we will explore the theoretical results, e.g., the soundness and completeness of Forensic Lucid, by formally proving the correctness of the semantic rules of Forensic Lucid using Isabelle/HOL [372] and their equivalence to the FSA approach through push-down systems. (As a side benefit, if the proofs are done for core Forensic Lucid, they will automatically extend to GIPL, Indexical Lucid, Lucx, and others [305].) Additionally, the culmination of this aspect is to publish the Forensic Lucid proof theory (ForensicLucid.thy, in progress, along with other Lucid dialects) in the Archive of Formal Proofs [209].
10.4.3 Distributed Systems, Network and Security Research, and Forensics
• Cf. Section 5.1.2, page 99, it is important to handle digital forensics of IPv6 traffic,
including evidence encoding and case analysis with Forensic Lucid. As IPv6 is
becoming more prominent, IPv6-based security investigations with a lot of unknowns
and recently identified security problems [120, 255] are necessary to cope with.
• Similarly to IPv6, the digital investigation, evidence modeling, and case specification
for wireless, mobile and ad-hoc networks is another major research area to follow up
on.
• Extend the work to digital evidence modeling and investigation in botnets.
• Similarly to the MSA tool, automate Faculty network scanning investigations, where
the network is undergoing port-scanning attacks.
• Ben Hamed [159] and Du proposed a scheduler and the corresponding heuristics, graph annotation, and analysis, as well as their implementation under the scheduler, for a distributed intensional evaluation engine. Paquet et al., on the other hand, proposed GEE within GIPSY with the baseline language and a set of algebras in the engines. This idea generalizes the notion of Ben Hamed's scheduler and proposes a compile-time and run-time design within GIPSY that includes the scheduler component for various optimization analyses, e.g., rank and graph analyses, with generic annotation support that can be used for various dialects and their intermediate representations.
10.5 Acknowledgments
This research work took about 6-7 years to reach this point. Throughout the years the
author had the pleasure to work with a lot of people on the various facets of the research, get
inspiration from, and who provided diverse support and encouragement. While the author
is an inclusionist, please forgive if he forgot to mention someone as there are many people to
acknowledge.
First of all, an enormous thank you goes to my supervisors, Drs. Joey Paquet and Mourad Debbabi, for their extraordinary support and extreme patience and tolerance in allowing me to explore various research aspects in this thesis and beyond. Their contribution to the success of this thesis is immense, with their combined expertise in intensional programming, distributed systems and frameworks, security, and cyberforensics. Thanks to Joey Paquet for extensive in-depth reviewing and comments, with vast amounts of patience in doing so. Thanks to Mourad Debbabi for introducing me to digital investigation, PRISM, realistic data and test cases, and for being an excellent teacher during my M.Eng. in Information Systems Security, where I discovered my topic around 2006.
A lot of thanks go to Dr. Peter Grogono, who has been very supportive of my work all the way back from my master’s studies, and for recommending the use of the Dempster–Shafer theory. I would like to thank the additional examining committee members who agreed to devote their valuable time to judge this thesis: Drs. Weichang Du, Terry Fancott, and Amr Youssef. Thanks to Dr. Patrice Chalin for being there in the past with his critical, to-the-point reviews of my master’s thesis, the doctoral thesis proposal, and the follow-up seminar. I would like to also thank Dr. John Plaice for very thorough and helpful reviews of
Chapter 1, Chapter 4, and Chapter 6 in this thesis.
Thanks to Dr. Aiman Hanna for long-time support and advice back from my undergraduate days up to the presentation of this very thesis. Thanks to Dr. Ching Y. Suen whose course
back in 2002 inspired the work on MARF and its applications that are still around today
(Chapter 5). Another token of acknowledgment goes to Dr. Benjamin Fung for comments
on Chapter 5. Thanks to Drs. John McKay, Lingyu Wang, A. Ben Hamza, Chadi Assi, and René Witte for supporting my work in various ways: through courses, publications, advice, moral support, or funding. Thanks to Drs. Sabine Bergler and Leila Kosseim and the CLaC lab for the journey
through the internals of natural language processing related to the intensional side of this
work. Some of the practical implementation resulting from those courses went into MARF
and its applications as well (e.g., [283]). Thanks to Frank Rudzicz [410] as well. Thanks to
Dr. Brigitte Jaumard for contributing the node hardware and a rack for the GIPSY cluster
design (Section 8.6, page 238).
Big thanks go to my professional Academic IT Services (AITS) unit in the Faculty of ENCS. A special thanks goes to Michael J. Assels, the manager of Networks and Security, who approved and thoroughly reviewed the MAC Spoofer Analyzer material (Section 9.5, page 257) in several iterations as a part of my work on this project and my thesis. It’s an immense pleasure to work with him as a colleague on the professional AITS side of my duties. Thanks to the rest of the AITS crew! It’s been a pleasure to work with all
the members of NAG (Michael Spanner and Manny Taveroff), Faculty Information Systems
(FIS) group, Desktop Operations Group (DOG), System Administration Group (SAG), User
Services and Service Desk (USG and SD). Thanks to Joel Krajden, Stan Swiercz, Francois
Carrière, Sigmund Lam, Paul Gill and Frank Maselli among others.
Thanks to many of the past and current GIPSY R&D team members for their valuable
contributions, collaboration, suggestions, and reviews, especially Sleiman Rabah, and including Touraj Laleh, Arash Khodadadi, Yi Ji, Bin Han, Dr. Aihua Wu, Dr. Emil Vassev, Xin
Tong, Amir Pourteymour, Dr. Kaiyu Wan, Chun Lei Ren, Dr. Bo Lu, Yimin Ding, Lei Tao.
The author would like to acknowledge the people previously involved with the MARF and
JDSF projects, namely: Stephen Sinclair, Ian Clement, Dimitrios Nicolacopoulos, Lee Wei
Huynh, Jian Li, Farid Rassai, and others.
Thanks go out to the Computer Security Lab team members: Amine Boukhtouta, Claude
Fachkha, Hamad Binsalleeh, Sujoy Ray, Nour-Eddine Lakhdari, William Nzoukou, and many
others for their strong team work and support. Thanks to Andrei Soeanu for presenting the
work on the initial ACME Printing Case in Forensic Lucid [312] at ICDF2C’2011 on
behalf of the author.
This research and development work was funded in part by NSERC, the Faculty of Engineering and Computer Science (ENCS), Concordia University (Montreal, Canada), the NCFTA Canada, and the NIST SAMATE group (a MARFCAT lecture). AITS donated two switches and provided gigabit connectivity to the GIPSY cluster (Section 8.6, page 238) within ENCS. As a visiting scholar at the Visualization and Graphics Lab, Department of Computer
Science and Technology, Tsinghua University, the author was a recipient of the Canada-China
Scholars’ Exchange Program (CCSEP) scholarship.
Thanks to the open-source communities that produce quality open-source software and inspired the author to do the same. Thanks to the communities of Eclipse, Apache, Open/LibreOffice, Linux (and Scientific Linux specifically), various LaTeX distributions and tools, compilers, and many other OSS contributors. Likewise, thanks to various open-access resources, such as Wikipedia, TeX.SE, and Scholarpedia among others.
Of course, to conclude this section, it is important to mention my family again, who were there for me and endured my long absences from family life while supporting me and my wife with home chores and with our wonderful kids Deschy and Timmy (sorry, daddy was very busy and absent for long hours). Thank you, my parents-in-law and brother-in-law, Fang Liu, Lin Song, and Liu Song; without your support I could not have completed this work.
Bibliography
[1] A. F. Abdelnour and I. W. Selesnick. Nearly symmetric orthogonal wavelet bases. In Proc.
IEEE Int. Conf. Acoust., Speech, Signal Processing (ICASSP), May 2001.
[2] AccessData. FTK – Forensic Toolkit. [online], 2008–2013. http://www.accessdata.com/
products/digital-forensics/ftk.
[3] A. K. Adams. mac-scan – scan hosts on a VLAN or network for vulnerabilities. [online], Pittsburgh Supercomputing Center, 2009–2013. http://www.psc.edu/index.php/networking/
647-mac-scan.
[4] K. Adia, M. Debbabi, and M. Mejri. A new logic for electronic commerce protocols. Int J
Theor Comput Sci (TCS), 291(3):223–283, Jan. 2003.
[5] I. Agi. GLU for multidimensional signal processing. In Orgun and Ashcroft [350]. ISBN:
981-02-2400-1.
[6] D. Agrawal et al. Autonomic computing expressing language. Technical report, IBM Corporation, 2005.
[7] V. S. Alagar, J. Paquet, and K. Wan. Intensional programming for agent communication.
In J. Leite, A. Omicini, P. Torroni, and P. Yolum, editors, Declarative Agent Languages and
Technologies II, volume 3476 of Lecture Notes in Computer Science, pages 239–255. Springer
Berlin Heidelberg, 2005.
[8] P. Albitz and C. Liu. DNS and BIND. O’Reilly, 3 edition, 1998. ISBN: 1-56592-512-2.
[9] I. Alexander. Misuse Cases: Use Cases with hostile intent. [online], Nov. 2002. http://wwwdse.doc.ic.ac.uk/Events/BCS-RESG/Aleksander.pdf.
[10] I. Alexander and L. Beus-Dukic. Discovering Requirements. Wiley, 2009.
[11] J. Allard, V. Chinta, S. Gundala, and G. G. R. III. JINI meets UPnP: An architecture
for JINI/UPnP interoperability. In Proceedings of the 2003 International Symposium on
Applications and the Internet 2003. SAINT, 2003.
[12] B. Allen. Monitoring hard disks with SMART. Linux Journal, 117, Jan. 2004. http://www.
linuxjournal.com/article/6983, last viewed May 2012.
[13] G. Allwein and J. Barwise, editors. Logical reasoning with diagrams. Oxford University Press,
Inc., New York, NY, USA, 1996.
[14] R. Alshammari and A. N. Zincir-Heywood. Investigating two different approaches for encrypted traffic classification. In Proceedings of the Sixth Annual Conference on Privacy,
Security and Trust (PST’08), pages 156–166. IEEE Computer Society, Oct. 2008.
[15] R. Alshammari and A. N. Zincir-Heywood. Machine learning based encrypted traffic classification: Identifying SSH and Skype. In Proceedings of the IEEE Symposium on Computational
Intelligence for Security and Defense Applications (CISDA 2009), pages 1–8. IEEE, July
2009.
[16] R. A. Alshammari. Automatically Generating Robust Signatures Using a Machine Learning
Approach To Unveil Encrypted VOIP Traffic Without Using Port Numbers, IP Addresses and
Payload Inspection. PhD thesis, Dalhousie University, Halifax, Nova Scotia, Canada, May
2012.
[17] P. Andrews. Church’s type theory. In E. N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Stanford, spring 2009 edition, 2009. http://plato.stanford.edu/archives/spr2009/entries/type-theory-church/.
[18] I. Androutsopoulos. Temporal meaning representation in a natural language front-end. In Gergatsoulis and Rondogiannis [131], pages 197–213.
[19] S. Anson, S. Bunting, R. Johnson, and S. Pearson. Mastering Windows Network Forensics and Investigation. Sybex, 2 edition, June 2012.
[20] Apache River Community. Apache River. [online], 2010. http://river.apache.org/index.html.
[21] A. R. Arasteh and M. Debbabi. Forensic memory analysis: From stack and code to execution history. Digital Investigation Journal, 4(1):114–125, Sept. 2007.
[22] A. R. Arasteh, M. Debbabi, A. Sakha, and M. Saleh. Analyzing multiple logs for forensic evidence. Digital Investigation Journal, 4(1):82–91, Sept. 2007.
[23] E. A. Ashcroft. Multidimensional program verification: Reasoning about programs that deal with multidimensional objects. In Orgun and Ashcroft [350], pages 30–41. Invited Contribution.
[24] E. A. Ashcroft, A. A. Faustini, R. Jagannathan, and W. W. Wadge. Multidimensional Programming. Oxford University Press, London, Feb. 1995. ISBN: 978-0195075977.
[25] E. A. Ashcroft and W. W. Wadge. Lucid – a formal system for writing and proving programs. SIAM J. Comput., 5(3), 1976.
[26] E. A. Ashcroft and W. W. Wadge. Erratum: Lucid – a formal system for writing and proving programs. SIAM J. Comput., 6(1):200, 1977.
[27] E. A. Ashcroft and W. W. Wadge. Lucid, a nonprocedural language with iteration. Communications of the ACM, 20(7):519–526, July 1977.
[28] E. A. Ashcroft and W. W. Wadge. R for semantics. ACM Transactions on Programming Languages and Systems, 4(2):283–294, Apr. 1982.
[29] AspectJ Contributors. AspectJ: Crosscutting Objects for Better Modularity. eclipse.org, 2007. http://www.eclipse.org/aspectj/.
[30] M. J. Assels. The logic of global conventionalism. Master’s thesis, Department of Philosophy, Concordia University, Montreal, Canada, Mar. 1985. Online at http://spectrum.library.concordia.ca/5204/.
[31] M. J. Assels, D. Echtner, M. Spanner, S. A. Mokhov, F. Carrière, and M. Taveroff. Multifaceted faculty network design and management: Practice and experience. In B. C. Desai, A. Abran, and S. Mudur, editors, Proceedings of C3S2E’11, pages 151–155, New York, USA, May 2010–2011. ACM. Short paper; full version online at http://www.arxiv.org/abs/1103.5433.
[32] AT&T Labs Research and Various Contributors. The DOT language. [online], 1996–2012. http://www.graphviz.org/pub/scm/graphviz2/doc/info/lang.html.
[33] AT&T Labs Research and Various Contributors. Graphviz – graph visualization software. [online], 1996–2012. http://www.graphviz.org/.
[34] F. Baader and H. J. Ohlbach. A multi-dimensional terminological knowledge representation language. Journal of Applied Non-Classical Logics, 5(2), 1995.
[35] C. Baier and J.-P. Katoen. Principles of Model Checking. Massachusetts Institute of Technology, 2008. ISBN: 978-0-262-02649-9.
[36] M. Bailey, J. Oberheide, J. Andersen, Z. M. Mao, F. Jahanian, and J. Nazario. Automated classification and analysis of Internet malware. Technical report, University of Michigan, Apr. 2007. http://www.eecs.umich.edu/techreports/cse/2007/CSE-TR-530-07.pdf.
[37] C. D. Ball. Helping lawyers master technology. [online], blog, column, publications, 2006–2013.
http://www.craigball.com/Ball_Technology.
[38] R. Bardohl, M. Minas, G. Taentzer, and A. Schürr. Application of graph transformation to
visual languages. In Handbook of Graph Grammars and Computing by Graph Transformation:
Applications, Languages, and Tools, volume 2, pages 105–180. World Scientific Publishing Co.,
Inc., River Edge, NJ, USA, 1999.
[39] R. Bejtlich. The Tao of Network Security: Beyond Intrusion Detection. Addison-Wesley,
2005. ISBN: 0-321-24677-2.
[40] J. Bennett. A Philosophical Guide to Conditionals. Oxford: Clarendon Press, 2003.
[41] T. Berners-Lee, R. Fielding, U. C. Irvine, and L. Masinter. RFC 2396: Uniform Resource
Identifiers (URI): Generic Syntax. [online], Aug. 1998. http://www.ietf.org/rfc/rfc2396.
txt, viewed in November 2007.
[42] L. Besnard, P. Bourani, T. Gautier, N. Halbwachs, S. Nadjm-Tehrani, and A. Ressouche.
Design of a multi-formalism application and distribution in a data-flow context: An example.
In Gergatsoulis and Rondogiannis [131], pages 149–167.
[43] H. Binsalleeh, T. Ormerod, A. Boukhtouta, P. Sinha, A. M. Youssef, M. Debbabi, and
L. Wang. On the analysis of the Zeus botnet crimeware toolkit. In Eighth Annual Conference on Privacy, Security and Trust, PST 2010, August 17-19, 2010, Ottawa, Ontario,
Canada, pages 31–38. IEEE, 2010.
[44] E. Bloedorn, A. D. Christiansen, W. Hill, C. Skorupka, L. M. Talbot, and J. Tivel. Data
mining for network intrusion detection: How to get started. Technical report, The MITRE
Corporation, 2001.
[45] F. Boccuni. A theory of Fregean abstract objects. In Quinon and Antonutti [391]. Online at
http://www.fil.lu.se/index.php?id=18879.
[46] A. B. Bondi. Characteristics of scalability and their impact on performance. In Proceedings
of the 2nd international workshop on Software and performance, pages 195–203, 2000.
[47] G. Booch, J. Rumbaugh, and I. Jacobson. The Unified Modeling Language User Guide.
Addison-Wesley, 1999. ISBN: 0201571684.
[48] C. Borgelt, M. Steinbrecher, and R. R. Kruse. Graphical Models: Representations for Learning, Reasoning and Data Mining. Wiley, second edition, Sept. 2009.
[49] A. Boukhtouta, N.-E. Lakhdari, S. A. Mokhov, and M. Debbabi. Towards fingerprinting
malicious traffic. In Proceedings of ANT’13, volume 19, pages 548–555. Elsevier, June 2013.
[50] J. R. Boyd. Destruction and creation. [online], U.S. Army Command and General
Staff College, Sept. 1976. http://www.goalsys.com/books/documents/DESTRUCTION_AND_
CREATION.pdf.
[51] J. R. Boyd. The essence of winning and losing. [online], June 1995. A five slide set by Boyd;
http://www.danford.net/boyd/essence.htm.
[52] M. Bozorgi, L. K. Saul, S. Savage, and G. M. Voelker. Beyond heuristics: Learning to classify
vulnerabilities and predict exploits. In Proceedings of the 16th ACM SIGKDD international
conference on Knowledge Discovery and Data Mining, KDD’10, pages 105–114, New York,
NY, USA, 2010. ACM.
[53] M. Bunge. Interpretation. In Semantics II: Interpretation and Truth, volume 2 of Treatise on
Basic Philosophy, pages 1–41. Springer Netherlands, 1974.
[54] S. Bunting. EnCase Computer Forensics – The Official EnCE: EnCase Certified Examiner
Study Guide. Sybex, 3 edition, Sept. 2012.
[55] L. Burdy et al. An overview of JML tools and applications. International Journal on Software
Tools for Technology Transfer, 7(3):212–232, 2005.
[56] J. Cao, L. Fernando, and K. Zhang. Programming distributed systems based on graphs. In
Orgun and Ashcroft [350], pages 83–95.
[57] R. Carnap. Meaning and Necessity: a Study in Semantics and Modal Logic. University of Chicago Press, Chicago, USA, 1947.
[58] R. Carnap. Introduction to Symbolic Logic and its Applications. Dover Publications, June 1958.
[59] B. D. Carrier. Risks of live digital forensic analysis. Communications of the ACM, 49(2):57–61, Feb. 2006. http://www.d.umn.edu/~schw0748/DigitalForensics/p56-carrier.pdf.
[60] B. D. Carrier. Autopsy forensic browser. [online], 2006–2013. http://www.sleuthkit.org/autopsy/.
[61] B. D. Carrier. The Sleuth Kit. [online], 2006–2013. http://www.sleuthkit.org/sleuthkit/.
[62] D. Cavalca and E. Goldoni. HIVE: an open infrastructure for malware collection and analysis. In Proceedings of the 1st Workshop on Open Source Software for Computer and Network Forensics, pages 23–34, 2008.
[63] P. R. Cavalin, R. Sabourin, and C. Y. Suen. Dynamic selection of ensembles of classifiers using contextual information. In Multiple Classifier Systems, LNCS 5997, pages 145–154, Mar. 2010.
[64] H. Chen. Intelligence and Security Informatics for International Security: Information Sharing and Data Mining. Integrated Series in Information Systems. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
[65] S. Chen and P. Greenfield. QoS evaluation of JMS: An empirical approach. In Proceedings of the 37th Hawaii International Conference on System Sciences, 2004.
[66] Y. Chen and W.-T. Tsai. Service-Oriented Computing and Web Data Management: from Principle to Development. Kendall Hunt Publishing Company, 2 edition, 2008.
[67] Z. Chlondowski. S.M.A.R.T. Linux: Attributes reference table. [online], S.M.A.R.T. Linux project, 2007–2012. http://smartlinux.sourceforge.net/smart/attributes.php, last viewed May 2012.
[68] Cisco Systems, Inc. Catalyst 2950 Switch Hardware Installation Guide, Oct. 2003.
[69] A. Clark and E. Dawson. Optimisation heuristics for the automated cryptanalysis of classical ciphers. Journal of Combinatorial Mathematics and Combinatorial Computing, 1998.
[70] K. Clark and K. Hamilton. Cisco LAN Switching. Cisco Press, 1999. ISBN: 1-57870-094-9.
[71] CNN. California bus crash. [online], Apr. 2009. http://www.cnn.com/2009/US/04/29/california.crash/index.html.
[72] CNN. ‘Catastrophic failure’ caused North Sea copter crash. [online], Apr. 2009. http://www.cnn.com/2009/WORLD/europe/04/11/scotland.helicopter.crash.failure/index.html.
[73] G. Coulouris, J. Dollimore, and T. Kindberg. Distributed Systems: Concepts and Design. Addison-Wesley, 4 edition, 2005. ISBN: 0-321-26354-5.
[74] Cycling ’74. Jitter 1.5. [online], 2005. http://www.cycling74.com/products/jitter.html.
[75] Cycling ’74. Max/MSP. [online], 2005. http://www.cycling74.com/products/maxmsp.html.
[76] K. Dahbur and B. Mohammad. The anti-forensics challenge. In Proceedings of the 2011 International Conference on Intelligent Semantic Web-Services and Applications (ISWSA’11), pages 14:1–14:7, New York, NY, USA, Apr. 2011. ACM.
[77] I. F. Darwin, J. Gilmore, G. Collyer, R. McMahon, G. Harris, C. Zoulas, C. Lowth, E. Fischer, and Various Contributors. file – determine file type, BSD General Commands Manual, file(1). BSD, Jan. 1973–2007. man file(1).
[78] I. F. Darwin, J. Gilmore, G. Collyer, R. McMahon, G. Harris, C. Zoulas, C. Lowth, E. Fischer, and Various Contributors. file – determine file type. [online], Mar. 1973–2008. ftp://ftp.astron.com/pub/file/, last viewed April 2008.
[79] J. D. Day. The (un)revised OSI reference model. SIGCOMM Comput. Commun. Rev.,
25(5):39–55, 1995.
[80] J. D. Day and H. Zimmermann. The OSI reference model. In Proceedings of the IEEE,
volume 71, pages 1334–1340, Washington, DC, USA, Dec. 1983. IEEE Computer Society.
[81] T. Dean, J. Allen, and Y. Aloimonos, editors. Artificial Intelligence: Theory and Practice.
Benjamin/Cummings, 1995. ISBN 0-8053-2547-6.
[82] W. Dean. Soundness, reflection, and intensionality. In Quinon and Antonutti [391]. Online
at http://www.fil.lu.se/index.php?id=18879.
[83] M. Debbabi. INSE 6150: Lecture 6: Formal analysis (II). Concordia Institute for Information
Systems Engineering, Concordia University, Montreal, Canada, 2006. http://www.ciise.
concordia.ca/~debbabi.
[84] M. Debbabi. INSE 6150: Lecture notes. Concordia Institute for Information Systems Engineering, Concordia University, Montreal, Canada, 2006.
[85] M. Debbabi, A. R. Arasteh, A. Sakha, M. Saleh, and A. Fry. A collection of JPF forensic plugins. Computer Security Laboratory, Concordia Institute for Information Systems Engineering,
2007–2008.
[86] P. Degano and C. Priami. Enhanced operational semantics: A tool for describing and analyzing concurrent systems. ACM Computing Surveys, 33(2):135–176, 2001.
[87] L. Demey, B. Kooi, and J. Sack. Logic and probability. In E. N. Zalta, editor, The Stanford
Encyclopedia of Philosophy. Stanford, spring 2013 edition, 2013. http://plato.stanford.
edu/archives/spr2013/entries/logic-probability/.
[88] A. P. Dempster. A generalization of Bayesian inference. Journal of the Royal Statistical
Society, 30:205–247, 1968.
[89] Y. Ding. Automated translation between graphical and textual representations of intensional programs in the GIPSY. Master’s thesis, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, June 2004. http://newton.cs.
concordia.ca/~paquet/filetransfer/publications/theses/DingYiminMSc2004.pdf.
[90] G. Ditu. The Programming Language TransLucid. PhD thesis, University of New South
Wales, Australia, 2007.
[91] D. Dowty, R. Wall, and S. Peters. Introduction to Montague Semantics. D. Reidel, Dordrecht,
The Netherlands, 1981.
[92] W. Du. Indexical Parallel Programming. PhD thesis, Department of Computer Science,
Victoria University, Canada, 1991.
[93] W. Du. Object-oriented implementation of intensional language. In Proceedings of the 7th
International Symposium on Lucid and Intensional Programming, pages 37–45, Menlo Park,
California, USA, Sept. 1994. SRI International.
[94] W. Du. Toward an intensional model for programming large scale distributed systems. In
Gergatsoulis and Rondogiannis [131], pages 244–258.
[95] W. Du. On the relationship between AOP and intensional programming through context,
July 2005. Keynote talk at the Intensional Programming Session of PLC’05.
[96] L. Duboc, D. S. Rosenblum, and T. Wicks. A framework for characterization and analysis of
software system scalability. In I. Crnkovic and A. Bertolino, editors, ESEC/SIGSOFT FSE,
pages 375–384. ACM, Sept. 2007.
[97] E. Dulaney. CompTIA Security+ Study Guide: Exam SY0-301. Sybex, 5 edition, June 2011.
[98] M. Duží, B. Jespersen, and P. Materna. Procedural Semantics for Hyperintensional Logic:
Foundations and Applications of Transparent Intensional Logic, volume 17 of Logic, Epistemology, and the Unity of Science. Springer Science + Business Media, Inc., 2010.
[99] Eclipse contributors et al. Eclipse Platform. eclipse.org, 2000–2013. http://www.eclipse.
org, last viewed January 2013.
[100] S. A. Edwards. MEMOCODE 2012 hardware/software codesign contest: DNA sequence aligner. [online], 2012. Online at http://memocode.irisa.fr/2012/2012memocode-contest.pdf; Reference implementation at http://memocode.irisa.fr/2012/2012-memocode-contest.tar.gz.
[101] R. Eggen and M. Eggen. Efficiency of distributed parallel processing using Java RMI, sockets, and CORBA. In Proceedings of the 2001 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA’01). PDPTA, June 2001.
[102] M. Eissele, M. Kreiser, and T. Erd. Context-controlled flow visualization in augmented reality. In Proceedings of Graphics Interface 2008 (GI’08), pages 89–96, Windsor, ON, Canada, May 2008. Canadian Human-Computer Communications Society.
[103] M. Endrei, J. Ang, A. Arsanjani, et al. Patterns: Service-Oriented Architecture and Web Services. IBM, 2004. IBM Red Book; online at http://www.redbooks.ibm.com/abstracts/sg246303.html.
[104] M. Eto, D. Inoue, J. Song, J. Nakazato, K. Ohtaka, and K. Nakao. NICTER: a large-scale network incident analysis system: case studies for understanding threat landscape. In Proceedings of the First Workshop on Building Analysis Datasets and Gathering Experience Returns for Security, BADGERS’11, pages 37–45, New York, NY, USA, 2011. ACM.
[105] M. Eto, K. Sonoda, D. Inoue, K. Yoshioka, and K. Nakao. A proposal of malware distinction method based on scan patterns using spectrum analysis. In Proceedings of the 16th International Conference on Neural Information Processing: Part II, ICONIP’09, pages 565–572, Berlin, Heidelberg, 2009. Springer-Verlag.
[106] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. Reasoning About Knowledge. MIT Press, Cambridge, MA, USA, 2003.
[107] W. Farmer. The seven virtues of simple type theory. Journal of Applied Logic, 6(3):267–286, Sept. 2008.
[108] A. A. Faustini. The Equivalence of a Denotational and an Operational Semantics of Pure Dataflow. PhD thesis, University of Warwick, Computer Science Department, Coventry, United Kingdom, 1982.
[109] A. A. Faustini and R. Jagannathan. Multidimensional problem solving in Lucid. Technical Report SRI-CSL-93-03, SRI International, 1993.
[110] A. A. Faustini and W. W. Wadge. An eductive interpreter for the language Lucid. SIGPLAN Not., 22(7):86–91, 1987.
[111] M. Fisher and T. Kakoudakis. Flexible agent grouping in executable temporal logic. In Gergatsoulis and Rondogiannis [131], pages 93–105.
[112] M. Fitting. Intensional logic. In E. N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Stanford, winter 2012 edition, 2012. http://plato.stanford.edu/archives/win2012/entries/logic-intensional/.
[113] M. C. Fitting. FOIL axiomatized. [online], Aug. 2005. http://comet.lehman.cuny.edu/fitting/bookspapers/pdf/papers/FOILAxioms.pdf.
[114] R. Flenner. Jini and JavaSpaces Application Development. Sams, 2001.
[115] J. Folina. Mathematical intensions and intensionality in mathematics. In Quinon and Antonutti [391]. Online at http://www.fil.lu.se/index.php?id=18879.
[116] B. Fonseca. Shuttle Columbia’s hard drive data recovered from crash site. [online], May 2008. http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9083718.
[117] N. Foster, M. J. Freedman, A. Guha, R. Harrison, N. P. Katta, C. Monsanto, J. Reich,
M. Reitblatt, J. Rexford, C. Schlesinger, A. Story, and D. Walker. Languages for software-defined networks. In Proceedings of IEEE COMS’13. IEEE, 2013.
[118] G. Fourtounis, P. C. Ölveczky, and N. Papaspyrou. Formally specifying and analyzing a parallel virtual machine for lazy functional languages using Maude. In Proceedings of the 5th International Workshop on High-Level Parallel Programming and Applications, HLPP’11, pages 19–26, New York, NY, USA, 2011. ACM.
[119] A. A. Fraenkel. Abstract set theory. North-Holland Pub. Co., New York, Amsterdam, 4th revised edition, 1976.
[120] S. Frankel, R. Graveman, J. Pearce, and M. Rooks. Guidelines for the secure deployment of IPv6. Technical Report Special Publication 800-119, NIST, Dec. 2010. http://csrc.nist.gov/publications/nistpubs/800-119/sp800-119.pdf.
[121] E. Freeman, E. Freeman, K. Sierra, and B. Bates. Head First Design Patterns. O’Reilly & Associates, Inc., first edition, Oct. 2004. http://www.oreilly.com/catalog/hfdesignpat/toc.pdf, http://www.oreilly.com/catalog/hfdesignpat/chapter/index.html.
[122] B. Freeman-Benson. Lobjcid: Objects in Lucid. In Proceedings of the 1991 Symposium on Lucid and Intensional Programming, pages 80–87, Menlo Park, California, USA, Apr. 1991. SRI International.
[123] M. Frigault and L. Wang. Measuring network security using bayesian network-based attack graphs. In COMPSAC, pages 698–703, 2008.
[124] T. Frühwirth. Constraint solving with constraint handling rules. In Gergatsoulis and Rondogiannis [131], pages 14–30. Tutorial.
[125] M. Gabbay. Making sense of maths: a formalist theory of mathematical intension. In Quinon and Antonutti [391]. Online at http://www.fil.lu.se/index.php?id=18879.
[126] J.-R. Gagné and J. Plaice. Demand-driven real-time computing. In Gergatsoulis and Rondogiannis [131], pages 168–181. ISBN: 981-02-4095-3.
[127] D. Gallin. Intensional and Higher-Order Modal Logic: With Applications to Montague Semantics. North-Holland, Amsterdam, The Netherlands, 1975.
[128] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995. ISBN: 0201633612.
[129] E. Garcia. Cosine similarity and term weight tutorial. [online], 2006. http://www.miislita.com/information-retrieval-tutorial/cosine-similarity-tutorial.html.
[130] P. Gärdenfors. Qualitative probability as an intensional logic. Journal of Philosophical Logic, 4:171–185, 1975.
[131] M. Gergatsoulis and P. Rondogiannis, editors. Proceedings of ISLIP’99, volume Intensional Programming II. World Scientific, June 1999. ISBN: 981-02-4095-3.
[132] GFI, Inc. Malware analysis with GFI SandBox (formerly CWSandbox). [online], 2010–2012. http://www.threattracksecurity.com/enterprise-security/sandbox-software.aspx.
[133] S. Ghosh. Distributed Systems – An Algorithmic Approach. CRC Press, 2007. ISBN: 978-158488-564-1.
[134] J.-Y. Girard. Linear logic. Theoretical Computer Science, 50(1):1–101, 1987.
[135] P. Gladyshev. Formalising Event Reconstruction in Digital Investigations. PhD thesis, Department of Computer Science, University College Dublin, Aug. 2004. Online at http://www.formalforensics.org/publications/thesis/index.html.
[136] P. Gladyshev. Finite state machine analysis of a blackmail investigation. International Journal of Digital Evidence, 4(1), 2005.
[137] P. Gladyshev and A. Patel. Finite state machine approach to digital event reconstruction. Digital Investigation Journal, 2(1), 2004.
[138] O. Göbel, S. Frings, D. Günther, J. Nedon, and D. Schadt, editors. IT-Incidents Management
& IT-Forensics - IMF 2008, Conference Proceedings, September 23-25, 2008, Mannheim, Germany, volume 140 of LNI. GI, 2008.
[139] O. Göbel, S. Frings, D. Günther, J. Nedon, and D. Schadt, editors. IT-Incidents Management & IT-Forensics - IMF 2009, Conference Proceedings, September 15-17, 2009, Stuttgart, Germany. IEEE Computer Society, 2009.
[140] S. Goodman and A. Hunter. Feature extraction algorithms for pattern classification. In Proceedings of Ninth International Conference on Artificial Neural Networks, volume 2, pages 738–742, 1999.
[141] J. Gosling, B. Joy, G. Steele, and G. Bracha. Java Language Specification. Addison-Wesley Professional, 3 edition, 2005. ISBN 0321246780.
[142] D. Green. Trail: Java reflection API. [online], 2001–2012. http://docs.oracle.com/javase/tutorial/reflect/index.html.
[143] P. Grogono. GIPC increments. Technical report, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, Apr. 2002.
[144] P. Grogono. Intensional programming in Onyx. Technical report, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, Apr. 2004.
[145] P. Grogono, S. Mokhov, and J. Paquet. Towards JLucid, Lucid with embedded Java functions in the GIPSY. In Proceedings of the 2005 International Conference on Programming Languages and Compilers (PLC 2005), pages 15–21. CSREA Press, June 2005.
[146] D. Grune, B. Berliner, D. D. Z. Zuhn, J. Polk, L. Jones, D. R. Price, M. D. Baushke, B. Murphy, C. T. Pino, F. U. M. ao, J. Hyslop, and J. Meyering. Concurrent Versions System (CVS). [online], 1989–2012. http://savannah.nongnu.org/projects/cvs/.
[147] P. D. Grünwald and J. Y. Halpern. When ignorance is bliss. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, UAI’04, pages 226–234, Arlington, Virginia, United States, 2004. AUAI Press.
[148] R. V. Guha. Contexts: A Formalization and Some Applications. PhD thesis, Stanford University, Feb. 1995.
[149] Guidance Software. EnCase. [online], 2006–2013. http://www.encase.com/.
[150] C. Gundabattula and V. G. Vaidya. Building a state tracing Linux kernel. In O. Göbel, S. Frings, D. Günther, J. Nedon, and D. Schadt, editors, Proceedings of the IT Incident Management and IT Forensics (IMF’08), LNI140, pages 173–196, Sept. 2008.
[151] V. Haarslev, Y. Lu, and N. Shiri. ONTOXPL – intelligent exploration of OWL ontologies. In 2004 IEEE/WIC/ACM International Conference on Web Intelligence (WI 2004), pages 624–627. IEEE Computer Society, Sept. 2004.
[152] R. Hadjidj, M. Debbabi, H. Lounis, F. Iqbal, A. Szporer, and D. Benredjem. Towards an integrated e-mail forensic analysis framework. Digital Investigation, 5(3-4):124–137, 2009.
[153] R. Haenni, J. Kohlas, and N. Lehmann. Probabilistic argumentation systems. Technical report, Institute of Informatics, University of Fribourg, Fribourg, Switzerland, Oct. 1999.
[154] R. Haenni and N. Lehmann. Probabilistic argumentation systems: a new perspective on the Dempster-Shafer theory. International Journal of Intelligent Systems, 18(1):93–106, 2003.
[155] R. Haenni, J.-W. Romeijn, G. Wheeler, and J. Williamson. Probabilistic Logics and Probabilistic Networks, volume 350 of Synthese Library: Studies in Epistemology, Logic, Methodology, and Philosophy of Science. Springer Science+Business Media B.V., 2011.
[156] J. Y. Halpern. Reasoning about Uncertainty. MIT Press, Cambridge, MA, USA, 2003.
[157] J. Y. Halpern, R. Fagin, Y. Moses, and M. Y. Vardi. Reasoning About Knowledge. MIT Press, 1995. http://www.cs.rice.edu/~vardi/papers/book.pdf.
[158] J. Y. Halpern and R. Pucella. A logic for reasoning about evidence. J. Artif. Int. Res., 26(1):1–34, May 2006.
[159] K. M. B. Hamed. Multidimensional Programs on Distributed Parallel Computers: Analysis
and Implementation. PhD thesis, Computer Science, the University of New Brunswick, Feb.
2008.
[160] B. Han. Towards a multi-tier runtime system for GIPSY. Master’s thesis, Department of
Computer Science and Software Engineering, Concordia University, Montreal, Canada, 2010.
[161] B. Han, S. A. Mokhov, and J. Paquet. Advances in the design and implementation of a multitier architecture in the GIPSY environment with Java. In Proceedings of SERA 2010, pages
259–266. IEEE Computer Society, 2010. Online at http://arxiv.org/abs/0906.4837.
[162] J. Han, M. Kamber, and J. Pei. Data Mining: Concepts and Techniques. Morgan Kaufmann
Publishers Inc., San Francisco, CA, USA, 3rd edition, 2011.
[163] A. Hanna, H. Z. Ling, X. Yang, and M. Debbabi. A synergy between static and dynamic
analysis for the detection of software security vulnerabilities. In R. Meersman, T. S. Dillon,
and P. Herrero, editors, OTM Conferences (2), volume 5871 of Lecture Notes in Computer
Science, pages 815–832. Springer, 2009.
[164] M. Hapner, R. Burridge, R. Sharma, J. Fialli, and K. Stout. Java(TM) Message Service API
Tutorial and Reference. Prentice Hall PTR, 2002. ISBN 0201784726.
[165] D. Harrington, R. Presuhn, and B. Wijnen. RFC 2571: An Architecture for Describing SNMP
Management Frameworks. [online], Apr. 1999. http://www.ietf.org/rfc/rfc2571.txt,
viewed in January 2008.
[166] J. Heylen. Peano numerals as buck-stoppers. In Quinon and Antonutti [391]. Online at
http://www.fil.lu.se/index.php?id=18879.
[167] M. G. Hinchey, J. L. Rash, W. Truszkowski, C. Rouff, and R. Sterritt. Autonomous and
autonomic swarms. In Software Engineering Research and Practice, pages 36–44. CSREA
Press, 2005.
[168] R. Hinden, B. Carpenter, and L. Masinter. RFC 2732: Format for Literal IPv6 Addresses in
URL’s. [online], Dec. 1999. http://www.ietf.org/rfc/rfc2732.txt, viewed in November
2007.
[169] H. Hiss. Checking the satisfiability of XML-specifications. Technical Report 2008-1-Ait Mohamed, Department of Electrical and Computer Engineering, Concordia University, Montreal,
Canada, Aug. 2008. In Theorem Proving in Higher Order Logics (TPHOLs2008): Emerging
Trends Proceedings.
[170] N. Hnatiw, T. Robinson, C. Sheehan, and N. Suan. Pimp my PE: Parsing malicious and
malformed executables. In H. Martin, editor, Proceedings of the 17th Virus Bulletin International Conference, pages 9–17, Vienna, Austria: The Pentagon, Abingdon, OX143YP,
England, Sept. 2007.
[171] J. R. Hobbs and S. J. Rosenschein. Making computational sense of Montague’s intensional logic. Artificial Intelligence, 9:287–306, 1978. http://www.stanford.edu/class/
linguist289/hobbs78.pdf.
[172] Honeynet Project. Honeynet forensics project scans. [online], 2002–2013. http://honeynet.
org/scans.
[173] P. Horn. Autonomic computing: IBM’s perspective on the state of information technology.
Technical report, IBM T. J. Watson Laboratory, Oct. 2001.
[174] W. hua Xu, X. yan Zhang, J. min Zhong, and W. xiu Zhang. Attribute reduction in ordered
information systems based on evidence theory. Knowl Inf Syst, 25:169–184, Sept. 2009.
[175] G. Hunter. Metalogic: An Introduction to the Metatheory of Standard First-Order Logic.
University of California Press, 1971.
[176] K. Hwang and D. Jung. Anti-malware expert system. In H. Martin, editor, Proceedings of the
17th Virus Bulletin International Conference, pages 9–17, Vienna, Austria: The Pentagon,
Abingdon, OX143YP, England, Sept. 2007.
[177] IBM, BEA Systems, Microsoft, SAP AG, and Siebel Systems. Business Process Execution Language for Web Services version 1.1. [online], IBM, Feb. 2007. http://www.ibm.com/developerworks/library/specification/ws-bpel/.
[178] IBM Corporation. An architectural blueprint for autonomic computing. Technical report, IBM Corporation, 2006.
[179] IBM Tivoli. Autonomic computing policy language. Technical report, IBM Corporation, 2005.
[180] IEEE. 802-1990: IEEE standards for local and metropolitan networks: Overview and architecture. [online], Sept. 2004. http://grouper.ieee.org/groups/802/802overview.pdf.
[181] IEEE. 754-2008: IEEE standard for floating-point arithmetic. [online], Aug. 2008. http://ieeexplore.ieee.org/servlet/opac?punumber=4610933.
[182] E. C. Ifeachor and B. W. Jervis. Speech Communications. Prentice Hall, New Jersey, USA, 2002.
[183] D. Inoue, K. Yoshioka, M. Eto, M. Yamagata, E. Nishino, J. Takeuchi, K. Ohkouchi, and K. Nakao. An incident analysis system NICTER and its analysis engines based on data mining techniques. In Proceedings of the 15th International Conference on Advances in Neuro-Information Processing – Volume Part I, ICONIP’08, pages 579–586, Berlin, Heidelberg, 2009. Springer-Verlag.
[184] Internet Assigned Numbers Authority (IANA). Private enterprise numbers: SMI network management private enterprise codes. [online], IANA, June 2013. http://www.iana.org/assignments/enterprise-numbers.
[185] G. M. Jackson. Predicting Malicious Behavior: Tools and Techniques for Ensuring Global Security. Wiley, 1 edition, June 2012.
[186] P. Jackson, editor. Introduction to Expert Systems. Addison-Wesley, third edition, 1995. ISBN 0-201-87686-8.
[187] R. Jagannathan. Intensional and extensional graphical models for GLU programming. In Orgun and Ashcroft [350], pages 63–75.
[188] R. Jagannathan and C. Dodd. GLU programmer’s guide. Technical report, SRI International, Menlo Park, California, 1996.
[189] R. Jagannathan, C. Dodd, and I. Agi. GLU: A high-level system for granular data-parallel programming. In Concurrency: Practice and Experience, volume 1, pages 63–83, 1997.
[190] Y. Jarraya. Verification and Validation of UML and SysML Based Systems Engineering Design Models. PhD thesis, Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada, Feb. 2010.
[191] Y. Ji. Scalability evaluation of the GIPSY runtime system. Master’s thesis, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, Mar. 2011.
[192] Y. Ji, S. A. Mokhov, and J. Paquet. Design for scalability evaluation and configuration management of distributed components in GIPSY. Unpublished, 2010–2013.
[193] Y. Ji, S. A. Mokhov, and J. Paquet. Unifying and refactoring DMF to support concurrent Jini and JMS DMS in GIPSY. In B. C. Desai, S. P. Mudur, and E. I. Vassev, editors, Proceedings of the Fifth International C* Conference on Computer Science and Software Engineering (C3S2E’12), pages 36–44, New York, NY, USA, June 2010–2013. ACM. Online e-print http://arxiv.org/abs/1012.2860.
[194] Jini Community. Jini network technology. [online], Sept. 2007. http://java.sun.com/developer/products/jini/index.jsp.
[195] R. Johnsonbaugh. Advanced Engineering Mathematics. Pearson Prentice Hall, 7th edition,
2009. ISBN: 978-0-13-159318-3.
[196] H. F. Jordan and G. Alaghband. Fundamentals of Parallel Processing. Pearson Education, Inc., 2003. ISBN 0-13-901158-7.
[197] R. Joshi. A comparison and mapping of Data Distribution Service (DDS) and Java Message Service (JMS). Real-Time Innovations, Inc., 2006.
[198] A. Jøsang, J. Diaz, and M. Rifqi. Cumulative and averaging fusion of beliefs. Information Fusion, 11(2):192–200, 2010.
[199] A. Jøsang and R. Hankin. Interpretation and fusion of hyper opinions in subjective logic. In Proceedings of the 15th International Conference on Information Fusion (FUSION), pages 1225–1232, 2012.
[200] A. Jøsang and S. Pope. Dempster’s rule as seen by little colored balls. Computational Intelligence, 28(4), May 2012.
[201] D. S. Jurafsky and J. H. Martin. Speech and Language Processing. Prentice-Hall, Inc., Pearson Higher Education, Upper Saddle River, New Jersey 07458, 2000. ISBN 0-13-095069-6.
[202] G. Kahn. The semantics of a simple language for parallel processing. In Proceedings of the IFIP Congress ’74, pages 471–475, Amsterdam, 1974. Elsevier North-Holland.
[203] G. Kahn and D. B. MacQueen. Coroutines and networks of parallel processes. In Proceedings of the IFIP Congress ’77, pages 993–998, Amsterdam, 1977. Elsevier North-Holland.
[204] F. O. Karray and C. de Silva. Soft Computing and Intelligent Systems Design: Theory, Tools, and Applications. Pearson Education Ltd. / Addison Wesley, 2004. ISBN: 0-321-11617-8.
[205] M. Kaufmann and J. S. Moore. An ACL2 tutorial. In Mohamed et al. [259], pages 17–21.
[206] S. Kent. Words of estimative probability. [online], CIA, 1964. https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/sherman-kent-and-the-board-of-national-estimates-collected-essays/6words.html.
[207] J. O. Kephart and D. M. Chess. The vision of autonomic computing. IEEE Computer, 36(1):41–50, 2003.
[208] M. Khalifé. Examining orthogonal concepts-based micro-classifiers and their correlations with noun-phrase coreference chains. Master’s thesis, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, 2004.
[209] G. Klein, T. Nipkow, and L. C. Paulson. The archive of formal proofs. SourceForge.net, 2008. http://afp.sourceforge.net/, last viewed: April 2008.
[210] D. Koenig. Web services business process execution language (WS-BPEL 2.0): The standards landscape. Presentation, IBM Software Group, 2007.
[211] M. Kokare, P. K. Biswas, and B. N. Chatterji. Texture image retrieval using new rotated complex wavelet filters. IEEE Transaction on Systems, Man, and Cybernetics-Part B: Cybernetics, 6(35):1168–1178, 2005.
[212] M. Kokare, P. K. Biswas, and B. N. Chatterji. Rotation-invariant texture image retrieval using rotated complex wavelet filters. IEEE Transaction on Systems, Man, and Cybernetics-Part B: Cybernetics, 6(36):1273–1282, 2006.
[213] Y. Kong, Y. Zhang, and Q. Liu. Eliminating human specification in static analysis. In Proceedings of the 13th International Conference on Recent Advances in Intrusion Detection, RAID’10, pages 494–495, Berlin, Heidelberg, 2010. Springer-Verlag.
[214] C. D. Koutras and C. Nomikos. On the computational complexity of stratified negation in linear-time temporal logic programming. In Gergatsoulis and Rondogiannis [131], pages 106–117.
[215] T. Kremenek, K. Ashcraft, J. Yang, and D. Engler. Correlation exploitation in error ranking. In Foundations of Software Engineering (FSE), 2004.
[216] T. Kremenek and D. Engler. Z-ranking: Using statistical analysis to counter the impact of
static analysis approximations. In SAS 2003, 2003.
[217] T. Kremenek, P. Twohey, G. Back, A. Ng, and D. Engler. From uncertainty to belief: Inferring
the specification within. In Proceedings of the 7th Symposium on Operating System Design
and Implementation, 2006.
[218] S. A. Kripke. A completeness theorem in modal logic. Journal of Symbolic Logic, 31(2):276–
277, 1966.
[219] S. A. Kripke. Semantical considerations on modal logic. Journal of Symbolic Logic, 34(3):501,
1969.
[220] P. Krishnan. An asynchronous calculus based on absence of actions. In Orgun and Ashcroft
[350], pages 234–248.
[221] P. Kropf and J. Plaice. Intensional objects. In International symposium on Languages for
Intensional Programming, pages 37–45, Athens, Greece, June 1999. Demokrits Institute.
[222] R. Lalement. Computation as Logic. Prentice Hall, 1993. C.A.R. Hoare Series Editor. English
translation from French by John Plaice.
[223] P. J. Landin. The next 700 programming languages. Communications of the ACM, 9(3):157–
166, 1966.
[224] C. Larman. Applying UML and Patterns: An Introduction to Object-Oriented Analysis and
Design and Iterative Development. Pearson Education, third edition, Apr. 2006. ISBN:
0131489062.
[225] M.-A. Laverdière, S. A. Mokhov, D. Bendredjem, and S. Tsapa. Ftkplipse – Forensic Toolkits
Eclipse Plug-ins. SourceForge.net, 2005–2008. http://ciisesec.svn.sourceforge.net/
viewvc/ciisesec/forensics, last viewed April 2008.
[226] M.-A. Laverdière, S. A. Mokhov, S. Tsapa, and D. Benredjem. Ftklipse–design and implementation of an extendable computer forensics environment: Software requirements specification
document, 2005–2009. http://arxiv.org/abs/0906.2446.
[227] M.-A. Laverdière, S. A. Mokhov, S. Tsapa, and D. Benredjem. Ftklipse–design and implementation of an extendable computer forensics environment: Specification design document,
2005–2009.
[228] M.-A. Laverdière-Papineau. Towards systematic software security hardening. Master’s thesis,
Concordia University, 2007. ISBN: 9780494344446; http://spectrum.library.concordia.
ca/975561/.
[229] G. T. Leavens. The Java modeling language (JML). [online], 2007. http://www.jmlspecs.
org/.
[230] G. T. Leavens and Y. Cheon. Design by contract with JML. Technical report, Formal Systems
Laboratory (FSL) at UIUC, 2006.
[231] H. C. Lee. The utilization of forensic evidence in IED incidents. In Proceedings of the European
Intelligence and Security Informatics Conference (EISIC) 2012, pages 1–2, Aug. 2012.
[232] W. Lee, S. J. Stolfo, and K. W. Mok. Adaptive intrusion detection: A data mining approach.
Artificial Intelligence Review, 14:533–567, 2000.
[233] D. Leffingwell and D. Widrig. Managing Software Requirements: A Use Case Approach.
Addison-Wesley, 2 edition, 2003. ISBN: 0-321-12247-X.
[234] R. Li, O.-J. Xi, B. Pang, J. Shen, and C.-L. Ren. Network application identification based
on wavelet transform and k-means algorithm. In Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS2009), volume 1, pages 38–41,
Nov. 2009.
[235] M. Ligh, S. Adair, B. Hartstein, and M. Richard. Malware Analyst’s Cookbook and DVD:
Tools and Techniques for Fighting Malicious Code. Wiley, 1 edition, Nov. 2010.
[236] K. Limthong, F. Kensuke, and P. Watanapongse. Wavelet-based unwanted traffic time series
analysis. In 2008 International Conference on Computer and Electrical Engineering, pages
445–449. IEEE Computer Society, 2008.
[237] S. Lipschutz. Schaum’s Outline of Theory and Problems of Set Theory and Related Topics.
Schaum’s Outlines. McGraw-Hill, New York, 2nd edition, 1998. ISBN: 978-0070381599.
[238] Y. Liu and J. Staples. Building logic constructs into procedural programming languages. In
Orgun and Ashcroft [350], pages 96–109.
[239] C. Livadas, R. Walsh, D. E. Lapsley, and W. T. Strayer. Using machine learning techniques to
identify botnet traffic. In LCN, pages 967–974, Washington, DC, USA, 2006. IEEE Computer
Society.
[240] K. C. Louden. Compiler Construction: Principles and Practice. PWS Publishing Company,
1997. ISBN 0-564-93972-4.
[241] B. Lu. Developing the Distributed Component of a Framework for Processing Intensional
Programming Languages. PhD thesis, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, Mar. 2004.
[242] B. Lu, P. Grogono, and J. Paquet. Distributed execution of multidimensional programming
languages. In Proceedings of the 15th IASTED International Conference on Parallel and
Distributed Computing and Systems (PDCS 2003), volume 1, pages 284–289. International
Association of Science and Technology for Development, Nov. 2003.
[243] Y. Lu, I. Cohen, X. S. Zhou, and Q. Tian. Feature selection using principal feature analysis.
In Proceedings of the 15th International Conference on Multimedia, pages 301–304, Augsburg,
Germany, 2007. ACM.
[244] W. Ma and M. A. Orgun. Verifying MULTRAN programs with temporal logic. In Orgun and
Ashcroft [350], pages 186–206.
[245] Q. H. Mamoud. Getting started with JavaSpaces technology: Beyond conventional distributed programming paradigms. [online], July 2005. http://java.sun.com/developer/
technicalArticles/tools/JavaSpaces/.
[246] B. Mancilla and J. Plaice. Possible worlds versioning. Mathematics in Computer Science,
2(1):63–83, 2008.
[247] K. Mandia, C. Prosise, and M. Pepe. Incident Response and Computer Forensics. McGrawHill, 2nd edition, 2003.
[248] C. D. Manning and H. Schutze. Foundations of Statistical Natural Language Processing. MIT
Press, 2002.
[249] D. Mares. Software links for forensics investigative tasks. [online], 2006. http://www.dmares.
com/maresware/SITES/tasks.htm.
[250] E. Mark et al. Lucid (PoC in Haskell). [online], HaskellWiki, 2006–2011. http://www.
haskell.org/haskellwiki/Lucid.
[251] D. R. Mauro and K. J. Schmidt. Essential SNMP. O’Reilly, 2001. ISBN: 0-596-00020-00.
[252] M. McDougal. Live forensics on a Windows system: Using Windows Forensic Toolchest (WFT). [online], 2003–2006. http://www.foolmoon.net/downloads/Live_Forensics_
Using_WFT.pdf.
[253] E. Mendelson. Introduction to Mathematical Logic. Chapman & Hall, 4 edition, 1997.
[254] W. J. Meng, J. Rilling, Y. Zhang, R. Witte, S. Mudur, and P. Charland. A context-driven
software comprehension process model. In Proceedings of the IEEE Software Evolvability
Workshop (SE’06), Sept. 2006.
[255] M. Messner. Pen testing on IPv6 networks: In through the back door. Linux Magazine, 143:16–
20, Oct. 2012. http://www.linux-magazine.com/Online/Features/IPv6-Pen-Testing.
[256] W. Metcalf. Snort in-line. [online], 2011. http://snort-inline.sourceforge.net/.
[257] B. Meyer. On formalism in specifications. IEEE Software, 2(1):6–26, 1985.
[258] N. G. Miller. A Diagrammatic Formal System for Euclidean Geometry. PhD thesis, Cornell
University, U.S.A, 2001.
[259] O. A. Mohamed, C. A. Muñoz, and S. Tahar, editors. Theorem Proving in Higher Order
Logics, 21st International Conference, TPHOLs 2008, Montreal, Canada, August 18-21, 2008,
volume 5170 of LNCS. Springer, 2008.
[260] S. Mokhov, I. Clement, S. Sinclair, and D. Nicolacopoulos. Modular Audio Recognition
Framework. Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, 2002–2003. Project report, http://marf.sf.net, last viewed April
2012.
[261] S. Mokhov and J. Paquet. General imperative compiler framework within the GIPSY. In
Proceedings of the 2005 International Conference on Programming Languages and Compilers
(PLC 2005), pages 36–42. CSREA Press, June 2005.
[262] S. Mokhov and J. Paquet. Objective Lucid – first step in object-oriented intensional programming in the GIPSY. In Proceedings of the 2005 International Conference on Programming
Languages and Compilers (PLC 2005), pages 22–28. CSREA Press, June 2005.
[263] S. A. Mokhov. Lucid, the intensional programming language and its semantics in PVS.
Department of Computer Science and Software Engineering, Concordia University, Montreal,
Canada, Apr. 2004. Semantics of Programming Languages Course Project Report.
[264] S. A. Mokhov. Towards hybrid intensional programming with JLucid, Objective Lucid, and
General Imperative Compiler Framework in the GIPSY. Master’s thesis, Department of
Computer Science and Software Engineering, Concordia University, Montreal, Canada, Oct.
2005. ISBN 0494102934; online at http://arxiv.org/abs/0907.2640.
[265] S. A. Mokhov. On design and implementation of distributed modular audio recognition
framework: Requirements and specification design document. [online], Aug. 2006. Project
report, http://arxiv.org/abs/0905.2459, last viewed April 2012.
[266] S. A. Mokhov. Intensional Cyberforensics – a PhD Proposal. Department of Computer Science
and Software Engineering, Concordia University, Montreal, Canada, Dec. 2007.
[267] S. A. Mokhov. Intensional forensics – the use of intensional logic in cyberforensics. Technical report, Concordia Institute for Information Systems Engineering, Concordia University,
Montreal, Canada, Jan. 2007. ENGR6991 Technical Report.
[268] S. A. Mokhov. Choosing best algorithm combinations for speech processing tasks in machine
learning using MARF. In S. Bergler, editor, Proceedings of the 21st Canadian AI’08, LNAI
5032, pages 216–221, Berlin Heidelberg, May 2008. Springer-Verlag.
[269] S. A. Mokhov. Encoding forensic multimedia evidence from MARF applications as Forensic
Lucid expressions. In T. Sobh, K. Elleithy, and A. Mahmood, editors, Novel Algorithms and
Techniques in Telecommunications and Networking, proceedings of CISSE’08, pages 413–416,
University of Bridgeport, CT, USA, Dec. 2008. Springer. Printed in January 2010.
[270] S. A. Mokhov. Study of best algorithm combinations for speech processing tasks in machine
learning using median vs. mean clusters in MARF. In B. C. Desai, editor, Proceedings of
C3S2E’08, pages 29–43, Montreal, Quebec, Canada, May 2008. ACM.
[271] S. A. Mokhov. Towards security hardening of scientific distributed demand-driven and
pipelined computing systems. In Proceedings of the 7th International Symposium on Parallel and Distributed Computing (ISPDC’08), pages 375–382. IEEE Computer Society, July
2008.
[272] S. A. Mokhov. Towards syntax and semantics of hierarchical contexts in multimedia processing
applications using MARFL. In Proceedings of the 32nd Annual IEEE International Computer
Software and Applications Conference (COMPSAC), pages 1288–1294, Turku, Finland, July
2008. IEEE Computer Society.
[273] S. A. Mokhov. WriterIdentApp – Writer Identification Application. Unpublished, 2008–2013.
[274] S. A. Mokhov. Enhancing the formal cyberforensic approach with observation modeling with credibility factors and mathematical theory of evidence. [online], also in ;login: vol. 34, no. 6, p. 101, Dec. 2009. Presented at WIPS at USENIX Security’09, http://www.usenix.org/events/sec09/wips.html.
[275] S. A. Mokhov. Java Data Security Framework (JDSF) and its applications: API design refinement. [online], also in ;login: vol. 34, no. 6, p. 93, Dec. 2009. Poster at USENIX Security’09, http://www.usenix.org/events/sec09/poster.html.
[276] S. A. Mokhov. The role of self-forensics modeling for vehicle crash investigations and event reconstruction simulation. In J. S. Gauthier, editor, Proceedings of the Huntsville Simulation Conference (HSC’09), pages 342–349. SCS, Oct. 2009. Online at http://arxiv.org/abs/0905.2449.
[277] S. A. Mokhov. Towards improving validation, verification, crash investigations, and event reconstruction of flight-critical systems with self-forensics. [online], June 2009. A white paper submitted in response to NASA’s RFI NNH09ZEA001L, http://arxiv.org/abs/0906.1845, mentioned in http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20100025593_2010028056.pdf.
[278] S. A. Mokhov. Combining and comparing multiple algorithms for better learning and classification: A case study of MARF. In S. Jabin, editor, Robot Learning, chapter 2, pages 17–42. InTech, Aug. 2010. ISBN: 978-953-307-104-6, online at http://www.intechopen.com/download/pdf/pdfs_id/12131.
[279] S. A. Mokhov. Complete complimentary results report of the MARF’s NLP approach to the DEFT 2010 competition. [online], June 2010. http://arxiv.org/abs/1006.3787.
[280] S. A. Mokhov. Cryptolysis: A Framework for Verification of Optimization Heuristics for the Automated Cryptanalysis of Classical Ciphers and Natural Language Word Segmentation. In Proceedings of SERA 2010, pages 295–302. IEEE Computer Society, May 2010.
[281] S. A. Mokhov. Evolution of MARF and its NLP framework. In Proceedings of C3S2E’10, pages 118–122. ACM, May 2010.
[282] S. A. Mokhov. Hybrid Intensional Computing in GIPSY: JLucid, Objective Lucid and GICF. LAP - Lambert Academic Publishing, Mar. 2010. ISBN 978-3-8383-1198-2.
[283] S. A. Mokhov. L’approche MARF à DEFT 2010: A MARF approach to DEFT 2010. In Proceedings of the 6th DEFT Workshop (DEFT’10), pages 35–49. LIMSI / ATALA, July 2010. DEFT 2010 Workshop at TALN 2010; online at http://deft.limsi.fr/actes/2010/pdf/2_clac.pdf.
[284] S. A. Mokhov. The use of machine learning with signal- and NLP processing of source code to fingerprint, detect, and classify vulnerabilities and weaknesses with MARFCAT. [online], Oct. 2010. Online at http://arxiv.org/abs/1010.2511.
[285] S. A. Mokhov. MARFCAT – MARF-based Code Analysis Tool. Published electronically within the MARF project, http://sourceforge.net/projects/marf/files/Applications/MARFCAT/, 2010–2013. Last viewed April 2012.
[286] S. A. Mokhov. Review of “Attribute reduction in ordered information systems based on evidence theory. Xu W., Zhang X., Zhong J., Zhang W. Knowledge and Information Systems 25(1): 169-184, 2010”. In Computing Reviews [174]. CR138828 (1109-0972); online at http://computingreviews.com/review/review_review.cfm?review_id=138828.
[287] S. A. Mokhov. The use of machine learning with signal- and NLP processing of source code to fingerprint, detect, and classify vulnerabilities and weaknesses with MARFCAT. Technical Report NIST SP 500-283, NIST, Oct. 2011. Report: http://www.nist.gov/manuscript-
publication-search.cfm?pub_id=909407, online e-print at http://arxiv.org/abs/1010.2511.
S. A. Mokhov. Review of “The Anti-Forensics Challenge. Dahbur K., Mohammad B.,
ISWSA’11, ACM. April 18–20, 2011, Amman, Jordan”. In Computing Reviews [76].
CR139793 (1207-0756); online at http://computingreviews.com/review/review_review.
cfm?review_id=139793.
S. A. Mokhov. MARFPCAT – MARF-based PCap Analysis Tool. Published electronically
within the MARF project, 2012–2013. http://sourceforge.net/projects/marf/files/
Applications/MARFCAT/.
S. A. Mokhov and M. Debbabi. File type analysis using signal processing techniques and
machine learning vs. file unix utility for forensic analysis. In O. Goebel, S. Frings, D. Guenther, J. Nedon, and D. Schadt, editors, Proceedings of the IT Incident Management and IT
Forensics (IMF’08), LNI140, pages 73–85. GI, Sept. 2008.
S. A. Mokhov et al. Intensional Programming for AOP Tasks for Intensional Programming.
Unpublished, 2008.
S. A. Mokhov, L. W. Huynh, and J. Li. Managing distributed MARF with SNMP. Concordia
Institute for Information Systems Engineering, Concordia University, Montreal, Canada, Apr.
2007. Project report. Hosted at http://marf.sf.net and http://arxiv.org/abs/0906.
0065, last viewed February 2011.
S. A. Mokhov, L. W. Huynh, and J. Li. Managing distributed MARF’s nodes with SNMP. In
Proceedings of PDPTA’2008, volume II, pages 948–954, Las Vegas, USA, July 2008. CSREA
Press.
S. A. Mokhov, L. W. Huynh, J. Li, and F. Rassai. A Java Data Security Framework (JDSF) for
MARF and HSQLDB. Concordia Institute for Information Systems Engineering, Concordia
University, Montreal, Canada, Apr. 2007. Project report. Hosted at http://marf.sf.net,
last viewed April 2008.
S. A. Mokhov, L. W. Huynh, and L. Wang. The integrity framework within the Java Data
Security Framework (JDSF): Design refinement and implementation. In T. Sobh, K. Elleithy,
and A. Mahmood, editors, Novel Algorithms and Techniques in Telecommunications and Networking, Proceedings of CISSE’08, pages 449–455. Springer, Dec. 2008. Printed in January
2010.
S. A. Mokhov and R. Jayakumar. Distributed Modular Audio Recognition Framework
(DMARF) and its applications over web services. In T. Sobh, K. Elleithy, and A. Mahmood, editors, Proceedings of TeNe’08, pages 417–422, University of Bridgeport, CT, USA,
Dec. 2008. Springer. Printed in January 2010.
S. A. Mokhov, M.-A. Laverdière, and D. Benredjem. Taxonomy of Linux kernel vulnerability
solutions. In Innovative Techniques in Instruction Technology, E-learning, E-assessment, and
Education, pages 485–493, 2007. Proceedings of CISSE/SCSS’07.
S. A. Mokhov, M.-A. Laverdière, N. Hatami, and A. Benssam. Cryptolysis v.0.0.1 – a framework for automated cryptanalysis of classical ciphers. [online], 2005–2013. Project report;
http://arxiv.org/abs/1101.1075.
S. A. Mokhov, J. Li, and L. Wang. Simple dynamic key management in SQL randomization.
In Proceedings of NTMS’09, pages 458–462. IEEE, Dec. 2009. ISBN: 978-1-4244-4765-7.
S. A. Mokhov and J. Paquet. Formally specifying and proving operational aspects of
Forensic Lucid in Isabelle. Technical Report 2008-1-Ait Mohamed, Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada, Aug. 2008. In
Theorem Proving in Higher Order Logics (TPHOLs2008): Emerging Trends Proceedings.
Online at: http://users.encs.concordia.ca/~tphols08/TPHOLs2008/ET/76-98.pdf and
http://arxiv.org/abs/0904.3789.
S. A. Mokhov and J. Paquet. A type system for higher-order intensional logic support for variable bindings in hybrid intensional-imperative programs in GIPSY. In T. Matsuo, N. Ishii,
and R. Lee, editors, 9th IEEE/ACIS International Conference on Computer and Information Science, IEEE/ACIS ICIS 2010, pages 921–928. IEEE Computer Society, May 2010.
Presented at SERA 2010; online at http://arxiv.org/abs/0906.3919.
S. A. Mokhov and J. Paquet. Using the General Intensional Programming System (GIPSY)
for evaluation of higher-order intensional logic (HOIL) expressions. In Proceedings of SERA
2010, pages 101–109. IEEE Computer Society, May 2010. Online at http://arxiv.org/abs/
0906.3911.
S. A. Mokhov, J. Paquet, and M. Debbabi. Designing a language for intensional cyberforensic
analysis. Unpublished, 2007.
S. A. Mokhov, J. Paquet, and M. Debbabi. Formally specifying operational semantics and
language constructs of Forensic Lucid. In O. Göbel, S. Frings, D. Günther, J. Nedon, and
D. Schadt, editors, Proceedings of the IT Incident Management and IT Forensics (IMF’08),
LNI140, pages 197–216. GI, Sept. 2008. Online at http://subs.emis.de/LNI/Proceedings/
Proceedings140/gi-proc-140-014.pdf.
S. A. Mokhov, J. Paquet, and M. Debbabi. Designing Forensic Lucid – an intensional specification language for automated cyberforensic reasoning. Submitted for publication to J.MCS,
2008–2013.
S. A. Mokhov, J. Paquet, and M. Debbabi. Reasoning about a simulated printer case investigation with Forensic Lucid. In J. S. Gauthier, editor, Proceedings of the Huntsville
Simulation Conference (HSC’09), page 45. SCS, Oct. 2009. Abstract, fully online at
http://arxiv.org/abs/0906.5181.
S. A. Mokhov, J. Paquet, and M. Debbabi. Towards automated deduction in blackmail
case analysis with Forensic Lucid. In J. S. Gauthier, editor, Proceedings of the Huntsville
Simulation Conference (HSC’09), pages 326–333. SCS, Oct. 2009. Online at http://arxiv.
org/abs/0906.0049.
S. A. Mokhov, J. Paquet, and M. Debbabi. Towards formal requirements specification of
self-forensics for autonomous systems. Submitted for review to J. Req. Eng., 2009–2013.
S. A. Mokhov, J. Paquet, and M. Debbabi. The need to support of data flow graph visualization of Forensic Lucid programs, forensic evidence, and their evaluation by GIPSY. [online],
Sept. 2010. Poster at VizSec’10; online at http://arxiv.org/abs/1009.5423.
S. A. Mokhov, J. Paquet, and M. Debbabi. Towards automatic deduction and event reconstruction using Forensic Lucid and probabilities to encode the IDS evidence. In S. Jha,
R. Sommer, and C. Kreibich, editors, Proceedings of RAID’10, LNCS 6307, pages 508–509.
Springer, Sept. 2010.
S. A. Mokhov, J. Paquet, and M. Debbabi. On the need for data flow graph visualization of
Forensic Lucid programs and forensic evidence, and their evaluation by GIPSY. In Proceedings
of the Ninth Annual International Conference on Privacy, Security and Trust (PST), 2011,
pages 120–123. IEEE Computer Society, July 2011. Short paper; full version online at http:
//arxiv.org/abs/1009.5423.
S. A. Mokhov, J. Paquet, and M. Debbabi. Reasoning about a simulated printer case investigation with Forensic Lucid. In P. Gladyshev and M. K. Rogers, editors, Proceedings of
ICDF2C’11, number 0088 in LNICST, pages 282–296. Springer, Oct. 2011. Submitted in
2011, appeared in 2012; online at http://arxiv.org/abs/0906.5181.
S. A. Mokhov, J. Paquet, M. Debbabi, and P. Grogono. Enhancing the formal cyberforensic approach with observation modeling with credibility factors and mathematical theory of
evidence. Unpublished, 2008–2013.
S. A. Mokhov, J. Paquet, M. Debbabi, and Y. Sun. MARFCAT: Transitioning to binary and
larger data sets of SATE IV. [online], May 2012. Submitted for publication to JSS; online at
http://arxiv.org/abs/1207.3718.
S. A. Mokhov, J. Paquet, and X. Tong. A type system for hybrid intensional-imperative
programming support in GIPSY. In Proceedings of C3S2E’09, pages 101–107, New York, NY,
USA, May 2009. ACM.
S. A. Mokhov, F. Rassai, L. W. Huynh, and L. Wang. The authentication framework within
the Java data security framework (JDSF): Design refinement and implementation. In T. Sobh,
K. Elleithy, and A. Mahmood, editors, Novel Algorithms and Techniques in Telecommunications and Networking, Proceedings of CISSE’08, pages 423–429. Springer, Dec. 2008. Printed
in January 2010.
S. A. Mokhov, S. Sinclair, I. Clement, D. Nicolacopoulos, and the MARF Research & Development Group. SpeakerIdentApp – Text-Independent Speaker Identification Application.
Published electronically within the MARF project, http://marf.sf.net, 2002–2013. Last
viewed February 2010.
S. A. Mokhov, M. Song, and C. Y. Suen. Writer identification using inexpensive signal
processing techniques. In T. Sobh and K. Elleithy, editors, Innovations in Computing Sciences
and Software Engineering; Proceedings of CISSE’09, pages 437–441. Springer, Dec. 2009.
ISBN: 978-90-481-9111-6, online at: http://arxiv.org/abs/0912.5502.
S. A. Mokhov and Y. Sun. OCT segmentation survey and summary reviews and a novel 3D
segmentation algorithm and a proof of concept implementation. [online], 2011–2013. Online
at http://arxiv.org/abs/1204.6725.
S. A. Mokhov and E. Vassev. Autonomic specification of self-protection for Distributed MARF
with ASSL. In Proceedings of C3S2E’09, pages 175–183, New York, NY, USA, May 2009.
ACM.
S. A. Mokhov and E. Vassev. Self-forensics through case studies of small to medium software
systems. In Proceedings of IMF’09, pages 128–141. IEEE Computer Society, Sept. 2009.
S. A. Mokhov, E. Vassev, J. Paquet, and M. Debbabi. Towards a self-forensics property in
the ASSL toolset. In Proceedings of C3S2E’10, pages 108–113. ACM, May 2010.
M. Monroe, R. Lan, J. M. del Olmo, B. Shneiderman, C. Plaisant, and J. Millstein. The
challenges of specifying intervals and absences in temporal queries: a graphical language approach. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems,
CHI’13, pages 2349–2358, New York, NY, USA, 2013. ACM.
R. Montague. Pragmatics and intensional logic. Synthese, 22(1-2):68–94, 1970.
B. Moolenaar and Contributors. Vim the editor – Vi Improved. [online], 2009. http:
//www.vim.org/.
P. D. Mosses. The varieties of programming language semantics and their uses. In 4th International Andrei Ershov Memorial Conference on Perspective of System Informatics (PSI’01),
volume 2244 of LNCS, pages 165–190, Berlin, 2001. Springer-Verlag.
R. Murch. Autonomic Computing: On Demand Series. IBM Press, Prentice Hall, 2004.
R. Muskens. Intensional models for the theory of types. Journal of Symbolic Logic, 72:98–118,
2007.
My Digital Life Editorial Team. How to change or spoof MAC address in Windows XP, Vista,
Server 2003/2008, Mac OS X, Unix and Linux. [online], June 2008.
V. P. Nair, H. Jain, Y. K. Golecha, M. S. Gaur, and V. Laxmi. MEDUSA: MEtamorphic
malware dynamic analysis using signature from API. In Proceedings of the 3rd International
Conference on Security of Information and Networks, SIN’10, pages 263–269, New York, NY,
USA, 2010. ACM.
NASA. Hubble status report. [online], Dec. 2008. http://www.nasa.gov/mission_pages/
hubble/servicing/SM4/news/status_rpt_20081230.html.
NASA. Hubble status report. [online], Dec. 2008. http://www.nasa.gov/mission_pages/
hubble/servicing/SM4/news/status_rpt_20081219.html.
NASA. Hubble status report #3: Hubble science operations deferred while engineers examine
new issues. [online], Oct. 2008. http://www.nasa.gov/mission_pages/hubble/servicing/
SM4/news/status_update_20081017.html.
NASA. Hubble status report #4. [online], Oct. 2008. http://www.nasa.gov/mission_
pages/hubble/servicing/SM4/news/status_rpt_4_20081017.html.
NASA. Mars rover team diagnosing unexpected behavior: Mars exploration rover mission
status report. [online], Jan. 2009. http://www.nasa.gov/mission_pages/mer/news/mer20090128.html.
NASA. Kepler mission manager update: December 22 2010 safe mode event investigation. [online], Dec. 2010. http://www.nasa.gov/mission_pages/kepler/news/keplerm20101230.html.
S. Negri. The intensional side of algebraic-topological representation theorems. In Quinon
and Antonutti [391]. Online at http://www.fil.lu.se/index.php?id=18879.
NetBeans Community. NetBeans Integrated Development Environment. [online], 2004–2013.
http://www.netbeans.org.
T. Nipkow, L. C. Paulson, and M. Wenzel. Isabelle/HOL: A Proof Assistant for HigherOrder Logic, volume 2283. Springer-Verlag, Nov. 2007. http://www.in.tum.de/~nipkow/
LNCS2283/, last viewed: December 2007.
NIST. National Vulnerability Database. [online], 2005–2013. http://nvd.nist.gov/.
NIST. National Vulnerability Database statistics. [online], 2005–2013. http://web.nvd.
nist.gov/view/vuln/statistics.
T. S. Obuchowicz. It’s Only VHDL (But I Like It). Pearson Custom Publishing, 2005. ISBN
0-536-10092-6.
W. Odom. CCENT/CCNA ICND1: 640-822 Official Cert Guide. Cisco Press, 3 edition,
2012. ISBN: 978-1-58720-425-8.
W. Odom. CCNA ICND2: 640-816 Official Cert Guide. Cisco Press, 3 edition, 2012. ISBN:
978-1-58720-435-7.
T. Oetiker, D. Rand, and the MRTG Community. Tobi oetiker’s MRTG – the Multi Router
Traffic Grapher. [online], 2008–2011. http://oss.oetiker.ch/mrtg/.
Y. Okada, S. Ata, N. Nakamura, Y. Nakahira, and I. Oka. Comparisons of machine learning algorithms for application identification of encrypted traffic. In Proceedings of the 10th
International Conference on Machine Learning and Applications and Workshops (ICMLA),
volume 2, pages 358–361, Dec. 2011.
V. Okun, A. Delaitre, P. E. Black, and NIST SAMATE. Static Analysis Tool Exposition
(SATE) 2010. [online], 2010. See http://samate.nist.gov/SATE2010Workshop.html.
V. Okun, A. Delaitre, P. E. Black, and NIST SAMATE. Static Analysis Tool Exposition
(SATE) IV. [online], Mar. 2012. See http://samate.nist.gov/SATE.html.
OpenESB Contributors. BPEL service engine. [online], 2009. https://open-esb.dev.java.
net/BPELSE.html.
M. A. Orgun and E. A. Ashcroft, editors. Proceedings of ISLIP’95, volume Intensional Programming I. World Scientific, May 1995. ISBN: 981-02-2400-1.
M. A. Orgun and W. Du. Multi-dimensional logic programming: Theoretical foundations.
Theoretical Computer Science, 185(2):319–345, 1997.
[352] M. A. Orgun, C. Liu, and A. C. Nayak. Knowledge representation, reasoning and integration
using temporal logic with clocks. Mathematics in Computer Science, 2(1):143–163, 2008.
[353] D. O’Shaughnessy. Speech Communications. IEEE, New Jersey, USA, 2000.
[354] C. B. Ostrum. The Luthid 1.0 Manual. Department of Computer Science, University of
Waterloo, Ontario, Canada, 1981.
[355] H. Otrok, J. Paquet, M. Debbabi, and P. Bhattacharya. Testing intrusion detection systems in
MANET: A comprehensive study. In CNSR’07: Proceedings of the Fifth Annual Conference
on Communication Networks and Services Research, pages 364–371, Washington, DC, USA,
2007. IEEE Computer Society.
[356] D. Ottenheimer and M. Wallace. Securing the Virtual Environment: How to Defend the
Enterprise Against Attack. Wiley, 1 edition, May 2012. Included DVD.
[357] G. Palmer (Editor). A road map for digital forensic research, report from first digital forensic
research workshop (DFRWS). Technical report, DFRWS, 2001.
[358] T. Panayiotopoulos. Temporal reasoning with TRL. In Gergatsoulis and Rondogiannis [131],
pages 133–148.
[359] N. S. Papaspyrou and I. T. Kassios. GLU# embedded in C++: a marriage between multidimensional and object-oriented programming. Softw., Pract. Exper., 34(7):609–630, 2004.
[360] J. Paquet. Relational databases as multidimensional dataflows. Master’s thesis, Departement
d’Informatique, Université Laval, Québec, Canada, 1995.
[361] J. Paquet. Scientific Intensional Programming. PhD thesis, Department of Computer Science,
Laval University, Sainte-Foy, Canada, 1999.
[362] J. Paquet. Distributed eductive execution of hybrid intensional programs. In Proceedings of the
33rd Annual IEEE International Computer Software and Applications Conference (COMPSAC’09), pages 218–224, Seattle, Washington, USA, July 2009. IEEE Computer Society.
[363] J. Paquet and P. Kropf. The GIPSY architecture. In Proceedings of Distributed Computing
on the Web, Quebec City, Canada, 2000.
[364] J. Paquet and S. A. Mokhov. Furthering baseline core Lucid. [online], 2011–2013. http:
//arxiv.org/abs/1107.0940.
[365] J. Paquet, S. A. Mokhov, and X. Tong. Design and implementation of context calculus in
the GIPSY environment. In Proceedings of the 32nd Annual IEEE International Computer
Software and Applications Conference (COMPSAC), pages 1278–1283, Turku, Finland, July
2008. IEEE Computer Society.
[366] J. Paquet, S. A. Mokhov, E. I. Vassev, X. Tong, Y. Ji, A. H. Pourteymour, K. Wan, A. Wu,
S. Rabah, B. Han, B. Lu, L. Tao, Y. Ding, C. L. Ren, and The GIPSY Research and Development Group. The General Intensional Programming System (GIPSY) project. Department
of Computer Science and Software Engineering, Concordia University, Montreal, Canada,
2002–2014. http://newton.cs.concordia.ca/~gipsy/, last viewed January 2014.
[367] J. Paquet and J. Plaice. The intensional relation. In Orgun and Ashcroft [350], pages 214–227.
[368] J. Paquet and J. Plaice. The semantics of dimensions as values. In Gergatsoulis and Rondogiannis [131], pages 259–273.
[369] J. Paquet, A. Wu, and P. Grogono. Towards a framework for the General Intensional Programming Compiler in the GIPSY. In Proceedings of the 19th Annual ACM Conference on
Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA 2004), pages
164–165, New York, NY, USA, Oct. 2004. ACM.
[370] J. Paquet and A. H. Wu. GIPSY – a platform for the investigation on intensional programming
languages. In Proceedings of the 2005 International Conference on Programming Languages
and Compilers (PLC 2005), pages 8–14. CSREA Press, June 2005.
[371] M. Parashar and S. Hariri, editors. Autonomic Computing: Concepts, Infrastructure and
Applications. CRC Press, Dec. 2006.
L. C. Paulson, T. Nipkow, and M. Wenzel. Isabelle: A generic proof assistant. [online],
University of Cambridge and Technical University of Munich, 2007–2013. http://isabelle.
in.tum.de/, last viewed March 2013.
C. Pearce. Computing forensics: a live analysis. [online], Apr. 2005. http://www.linux.
org.au/conf/2005/security_miniconf/presentations/crpearce-lca2005.pdf.
C. Pearce. Helix: Open-source forensic toolkit. [online], Apr. 2005. http://www.e-fense.
com/helix.
M. Peralta, S. Mukhopadhyay, and R. Bharadwaj. Automatic synthesis and deployment of
intensional kahn process networks. In D. Ślȩzak, T. hoon Kim, S. S. Yau, O. Gervasi, and B.-H.
Kang, editors, Grid and Distributed Computing, volume 63 of Communications in Computer
and Information Science, pages 73–87. Springer Berlin Heidelberg, 2009.
J. Plaice. Particle in-cell simulation with Lucid. In Orgun and Ashcroft [350], pages 149–161.
J. Plaice. Cartesian programming. Technical Report UNSW-CSE-TR-1101, University of
Grenoble, France, Jan. 2011. Habilitation Thesis, online at ftp://ftp.cse.unsw.edu.au/
pub/doc/papers/UNSW/1101.pdf.
J. Plaice, B. Mancilla, and G. Ditu. From Lucid to TransLucid: Iteration, dataflow, intensional
and Cartesian programming. Mathematics in Computer Science, 2(1):37–61, 2008.
J. Plaice, B. Mancilla, G. Ditu, and W. W. Wadge. Sequential demand-driven evaluation
of eager TransLucid. In Proceedings of the 32nd Annual IEEE International Computer Software and Applications Conference (COMPSAC), pages 1266–1271, Turku, Finland, July 2008.
IEEE Computer Society.
J. Plaice and J. Paquet. Introduction to intensional programming. In Orgun and Ashcroft
[350], pages 1–14. Tutorial.
G. D. Plotkin. A structural approach to operational semantics. Lecture Notes DAIMI FN19,
Department of Computer Science, University of Aarhus, 1981.
D. C. Plummer. RFC 826: An Ethernet Address Resolution Protocol. [online], Nov. 1982.
http://tools.ietf.org/html/rfc826, viewed in December 2012.
A. H. Pourteymour. Comparative study of Demand Migration Framework implementation
using JMS and Jini. Master’s thesis, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada, Sept. 2008.
A. H. Pourteymour, E. Vassev, and J. Paquet. Towards a new demand-driven message-oriented
middleware in GIPSY. In Proceedings of PDPTA 2007, pages 91–97. PDPTA, CSREA Press,
June 2007.
A. H. Pourteymour, E. Vassev, and J. Paquet. Design and implementation of demand migration systems in GIPSY. In Proceedings of PDPTA 2009. CSREA Press, June 2008.
N. Provos. Steganography detection with stegdetect. [online], 2004. http://www.outguess.
org/detection.php.
N. Provos, D. McNamee, P. Mavrommatis, K. Wang, and N. Modadugu. The ghost in the
browser analysis of web-based malware. http://www.usenix.org/events/hotbots07/tech/
fullpapers/provos/provos.pdf, 2007.
M. Puckette and PD Community. Pure Data. [online], 2007–2013. http://puredata.org.
G. N. Purdy. Linux iptables: Pocket Reference. O’Reilly, 2004. ISBN: 978-0-596-00569-6.
QoSient, LLC. Argus: Auditing network activity. [online], 2000–2013. http://www.qosient.
com/argus/.
P. Quinon and M. Antonutti, editors. Workshop on Intensionality in Mathematics, May 2013.
Online at http://www.fil.lu.se/index.php?id=18879.
[392] S. Rabah. A broker-based resource publication and discovery framework for network virtualization environment. Master’s thesis, Department of Computer Science and Software
Engineering, Concordia University, Montreal, Canada, Jan. 2014.
[393] S. Rabah, S. A. Mokhov, and J. Paquet. An interactive graph-based automation assistant:
A case study to manage the GIPSY’s distributed multi-tier run-time system. In C. Y. Suen,
A. Aghdam, M. Guo, J. Hong, and E. Nadimi, editors, Proceedings of the ACM Research in
Adaptive and Convergent Systems (RACS 2013), pages 387–394, New York, NY, USA, Oct.
2011–2013. ACM. Pre-print: http://arxiv.org/abs/1212.4123.
[394] A. Raffaetà and T. Frühwirth. Two semantics for temporal annotated constraint logic. In
Gergatsoulis and Rondogiannis [131], pages 78–92.
[395] A. N. M. E. Rafiq and Y. Mao. A novel approach for automatic adjudification of new malware.
In N. Callaos, W. Lesso, C. D. Zinn, J. Baralt, J. Boukachour, C. White, T. Marwala, and
F. V. Nelwamondo, editors, Proceedings of the 12th World Multi-Conference on Systemics,
Cybernetics and Informatics (WM-SCI’08), volume V, pages 137–142, Orlando, Florida, USA,
June 2008. IIIS.
[396] T. Rahilly and J. Plaice. A multithreaded implementation for TransLucid. In Proceedings of the 32nd Annual IEEE International Computer Software and Applications Conference
(COMPSAC), pages 1272–1277, Turku, Finland, July 2008. IEEE Computer Society.
[397] A. Ranganathan and R. H. Campbell. A middleware for context-aware agents in ubiquitous
computing environments. In M. Endler and D. Schmidt, editors, Proceedings of Middleware
2003, volume 2672 of Lecture Notes in Computer Science, pages 143–161. Springer Berlin
Heidelberg, 2003.
[398] M. Rash. Linux Firwalls: Attack Detection and Response with iptables, psad, and fwsnort.
No Starch Press, Inc., San Francisco, 3 edition, 2007. ISBN: 978-1-59327-141-1.
[399] C. L. Ren. General intensional programming compiler (GIPC) in the GIPSY. Master’s thesis,
Department of Computer Science and Software Engineering, Concordia University, Montreal,
Canada, 2002.
[400] G. Riley. CLIPS: A tool for building expert systems. [online], 2007–2011. http://
clipsrules.sourceforge.net/, last viewed May 2012.
[401] J. Rilling, W. J. Meng, and O. Ormandjieva. Context driven slicing based coupling measure.
In ICSM, page 532, 2004.
[402] R. W. Ritchey and P. Ammann. Using model checking to analyze network vulnerabilities.
In Proceedings of the 2000 IEEE Symposium on Security and Privacy (SP’00), pages 156–,
Washington, DC, USA, 2000. IEEE Computer Society.
[403] RJK. Regexp syntax summary. http://www.greenend.org.uk/rjk/2002/06/regexp.html,
last viewed May 2008, June 2002.
[404] Y. Rogers, H. Sharp, and J. Preece. Interaction Design: Beyond Human - Computer Interaction. Wiley Publishing, 3rd edition, 2011. Online resources: id-book.com.
[405] P. Rondogiannis. Higher-Order Functional Languages and Intensional Logic. PhD thesis,
Department of Computer Science, University of Victoria, Victoria, Canada, 1994.
[406] P. Rondogiannis. Adding multidimensionality to procedural programming languages. In
Gergatsoulis and Rondogiannis [131], pages 274–291.
[407] P. Rondogiannis. Adding multidimensionality to procedural programming languages. Software: Practice and Experience, 29(13):1201–1221, 1999.
[408] P. Rondogiannis and W. W. Wadge. Extending the intensionalization algorithm to a broader
class of higher-order programs. In Orgun and Ashcroft [350], pages 228–233.
[409] P. Rubin, A. Robbins, J. Kingdon, D. MacKenzie, and R. Smith. touch – change file timestamps, touch(1). GNU coreutils 8.10, Feb. 2011. man file(1).
[410] F. Rudzicz and S. A. Mokhov. Towards a heuristic categorization of prepositional phrases in
English with WordNet. [online], 2003–2013. http://arxiv.org/abs/1002.1095.
[411] S. J. Russell and P. Norvig, editors. Artificial Intelligence: A Modern Approach. Prentice
Hall, New Jersey, USA, 1995. ISBN 0-13-103805-2.
[412] B. Sarnecka. The first few numbers: How children learn them and why it matters. In Quinon
and Antonutti [391]. Online at http://www.fil.lu.se/index.php?id=18879.
[413] B. Sateli, E. Angius, S. S. Rajivelu, and R. Witte. Can text mining assistants help to improve
requirements specifications? In Proceedings of the Mining Unstructured Data (MUD’12), Oct.
2012. http://sailhome.cs.queensu.ca/mud/res/sateli-mud2012.pdf.
[414] B. Sateli and R. Witte. Natural language processing for mediawiki: The semantic assistants
approach. In Proceedings of the 8th International Symposium on Wikis and Open Collaboration
(WikiSym’12). ACM, Aug. 2012.
[415] D. Savage, J. Harrington, J. Yembrick, M. Curie, and NASA. NASA to discuss Hubble
anomaly and servicing mission launch delay. [online], Sept. 2008. http://www.nasa.gov/
home/hqnews/2008/sep/HQ_M08-187_HST_Telecon.html.
[416] U. Schöning. Logic for Computer Scientists. Birkhäuser Boston, 2008.
[417] M. C. Schraefel, B. Mancilla, and J. Plaice. Intensional hypertext. In Gergatsoulis and
Rondogiannis [131], pages 40–54.
[418] R. Schreiber. MATLAB. Scholarpedia, 2(6):2929, 2007. http://www.scholarpedia.org/
article/MATLAB.
[419] M. G. Schultz, E. Eskin, E. Zadok, and S. J. Stolfo. Data mining methods for detection of new
malicious executables. In Proceedings of IEEE Symposium on Security and Privacy, pages
38–49, Oakland, 2001.
[420] I. Selesnick, S. Cai, K. Li, L. Sendur, and A. F. Abdelnour. MATLAB implementation of
wavelet transforms. Technical report, Electrical Engineering, Polytechnic University, Brooklyn, NY, 2003. Online at http://taco.poly.edu/WaveletSoftware/.
[421] K. Sentz and S. Ferson. Combination of evidence in dempster-shafer theory. Technical Report SAND 2002-0835, Sandia, Apr. 2002. http://www.sandia.gov/epistemic/Reports/
SAND2002-0835.pdf.
[422] G. Shafer. The Mathematical Theory of Evidence. Princeton University Press, 1976.
[423] G. Shafer. Perspectives on the theory and practice of belief functions. International Journal
of Approximate Reasoning, 3:1–40, 1990.
[424] G. Shafer. Dempster-Shafer theory. [online], 2002. http://www.glennshafer.com/assets/
downloads/articles/article48.pdf.
[425] G. Shafer and J. Pearl, editors. Readings in Uncertain Reasoning. Morgan Kaufmann, 1990.
[426] V. Sharma, J. G. Muers, and S. Lewis. Continual feature selection: a cost effective method
to enhancing the capabilities. In H. Martin, editor, Proceedings of the 17th Virus Bulletin
International Conference, pages 9–17, Vienna, Austria: The Pentagon, Abingdon, OX143YP,
England, Sept. 2007.
[427] O. Sheyner, J. Haines, S. Jha, R. Lippmann, and J. M. Wing. Automated generation and
analysis of attack graphs. In Proceedings of the 2002 IEEE Symposium on Security and
Privacy, pages 273–, Washington, DC, USA, 2002. IEEE Computer Society.
[428] L. Shi, S. Lu, T. Sun, and D. Ouyang. A hybrid system combining intuitionistic fuzzy description logics with intuitionistic fuzzy logic programs. In Proceedings of the Eighth International
Conference on Fuzzy Systems and Knowledge Discovery (FSKD’11), volume 1, pages 60–64,
2011.
[429] B. Shishkov, J. Cordeiro, and A. Ranchordas, editors. ICSOFT 2009 - Proceedings of the 4th
International Conference on Software and Data Technologies, volume 1. INSTICC Press, July
2009.
G. J. Simon, H. Xiong, E. Eilertson, and V. Kumar. Scan detection: A data mining approach.
In Proceedings of SDM 2006, pages 118–129, 2006. http://www.siam.org/meetings/sdm06/
proceedings/011simong.pdf.
D. A. Simovici and C. Djeraba. Mathematical Tools for Data Mining: Set Theory, Partial
Orders, Combinatorics. Springer Publishing Company, Incorporated, 1st edition, 2008. ISBN:
9781848002005.
M. P. Singh and M. N. Huhns. Service-Oriented Computing: Semantics, Processes, Agents.
John Wiley & Sons, Ltd, West Sussex, England, 2005.
E. Skoudis and L. Zelster. Malware: Fighting Malicious Code. Computer Networking and
Distributed Systems. Prentice Hall, 2004.
M. G. Solomon, K. Rudolph, E. Tittel, N. Broom, and D. Barrett. Computer Forensics
JumpStart. Sybex, 2 edition, Mar. 2011.
D. Song. BitBlaze: Security via binary analysis. [online], 2010. Online at http://bitblaze.
cs.berkeley.edu.
D. Song. WebBlaze: New techniques and tools for web security. [online], 2010. Online at
http://webblaze.cs.berkeley.edu.
M. Song. Computer-Assisted Interactive Documentary and Performance Arts in Illimitable
Space. PhD thesis, Special Individualized Program/Computer Science and Software Engineering, Concordia University, Montreal, Canada, Dec. 2012. Online at http://spectrum.
library.concordia.ca/975072 and http://arxiv.org/abs/1212.6058.
Sourcefire. Snort: Open-source network intrusion prevention and detection system (IDS/IPS).
[online], 1999–2013. http://www.snort.org/.
Splunk Inc. Splunk: Search and analysis engine for IT data. [online], 2005–2012. http:
//www.splunk.com/.
W. Stallings. SNMP, SNMPv2, SNMPv3, and RMON 1 and 2. Addison-Wesley, 3 edition,
1999. ISBN: 0-201-48534-6.
N. Stankovic, M. A. Orgun, W. Cai, and K. Zhang. Visual Parallel Programming, chapter 6,
pages 103—129. World Scientific Publishing Co., Inc., 2002.
B. Sterns. The Java Native Interface (JNI). [online], 2001–2005. http://java.sun.com/
developer/onlineTraining/Programming/JDCBook/jni.html.
W. R. Stevens and S. A. Rago. Advanced Programming in the UNIX Environment. Pearson
Education, Inc., second edition, June 2005. ISBN 0-201-43307-9.
J. M. Stewart, M. Chapple, and D. Gibson. CISSP: Certified Information Systems Security
Professional Study Guide. Sybex, 6 edition, July 2012.
M. Suenaga. Virus linguistics – searching for ethnic words. In H. Martin, editor, Proceedings of
the 17th Virus Bulletin International Conference, pages 9–17, Vienna, Austria: The Pentagon,
Abingdon, OX143YP, England, Sept. 2007.
Sun Microsystems, Inc. Java IDL. Sun Microsystems, Inc., 2004. http://java.sun.com/
j2se/1.5.0/docs/guide/idl/index.html.
Sun Microsystems, Inc. The Java web services tutorial (for Java Web Services Developer’s Pack, v2.0). [online], Feb. 2006. http://download.oracle.com/docs/cd/E17802_
01/webservices/webservices/docs/2.0/tutorial/doc/.
Sun Microsystems, Inc. Class URI. Sun Microsystems, Inc., 2007. http://java.sun.com/
j2se/1.5.0/docs/api/java/net/URI.html, Viewed in November, 2007.
Sun Microsystems, Inc. Java Message Service (JMS). [online], Sept. 2007. http://java.sun.
com/products/jms/.
Sun Microsystems, Inc. NetBeans 6.7.1. [online], 2009–2010. http://netbeans.org/
downloads/6.7.1/index.html.
A. H. Sung, J. Xu, P. Chavez, and S. Mukkamala. Static analyzer of vicious executables
(SAVE). In Proceedings of 20th Annual of Computer Security Applications Conference, pages
326–334, Dec. 2004.
P. Swoboda. A Formalisation and Implementation of Distributed Intensional Programming.
PhD thesis, The University of New South Wales, Sydney, Australia, 2004.
P. Swoboda and J. Plaice. An active functional intensional database. In F. Galindo, editor,
Advances in Pervasive Computing, pages 56–65. Springer, 2004. LNCS 3180.
P. Swoboda and J. Plaice. A new approach to distributed context-aware computing. In
A. Ferscha, H. Hoertner, and G. Kotsis, editors, Advances in Pervasive Computing. Austrian
Computer Society, 2004. ISBN 3-85403-176-9.
P. Swoboda and W. W. Wadge. Vmake and ISE general tools for the intensionalization of
software systems. In Gergatsoulis and Rondogiannis [131], pages 310–320. ISBN: 981-02-40953.
A. S. Tanenbaum and D. J. Wetherall. Computer Networks. Prentice Hall, fifth edition, 2011.
ISBN: 978-0-13-212695-3.
W. N. Tankou. A unified framework for measuring a network’s mean time-to-compromise.
Master’s thesis, Concordia Institute for Information Systems Engineering, Concordia University, Montreal, Canada, June 2013.
C. Tao, K. Wongsuphasawat, K. Clark, C. Plaisant, B. Shneiderman, and C. G. Chute. Towards event sequence representation, reasoning and visualization for EHR data. In Proceedings
of the 2nd ACM SIGHIT International Health Informatics Symposium, IHI’12, pages 801–806,
New York, NY, USA, 2012. ACM.
L. Tao. Warehouse and garbage collection in the GIPSY environment. Master’s thesis,
Department of Computer Science and Software Engineering, Concordia University, Montreal,
Canada, 2004.
Tenable Network Security. Nessus: the network vulnerability scanner. [online], 2002–2013.
http://www.nessus.org/nessus/.
The Coroner’s Toolkit Project. The coroner’s toolkit (TCT). [online], 1999. http://www.
porcupine.org/forensics/tct.html.
The GATE Team. General Architecture for Text Engineering (GATE). [online], 1995–2013.
http://gate.ac.uk/, last viewed April 2012.
The GIPSY Research and Development Group. The General Intensional Programming System
(GIPSY) project. Department of Computer Science and Software Engineering, Concordia
University, Montreal, Canada, 2002–2013. http://newton.cs.concordia.ca/~gipsy/, last
viewed March 2013.
The Honeynet Project. Know Your Enemy. Honeynet, 2nd edition, 2004.
The MARF Research and Development Group. The Modular Audio Recognition Framework
and its Applications. [online], 2002–2013. http://marf.sf.net and http://arxiv.org/
abs/0905.1235, last viewed April 2012.
The PlanetLab Project. PlanetLab – an open platform for developing, deploying, and accessing
planetary-scale services. [online], 2007–2013. http://www.planet-lab.org/.
The PRISM Team. PRISM: a probabilistic model checker. [online], 2004–2013. http://www.
prismmodelchecker.org/, last viewed January 2013.
The Sphinx Group at Carnegie Mellon. The CMU Sphinx group open source speech recognition engines. [online], 2007–2012. http://cmusphinx.sourceforge.net.
The Weka Project. Weka 3: Data mining with open source machine learning software in Java.
[online], 2006–2013. http://www.cs.waikato.ac.nz/ml/weka/.
[470] ThreatTrack Security. ThreadAnalyzer: Dynamic sandboxing and malware analysis (formerly GFI SandBox). [online], 2013. http://www.threattracksecurity.com/enterprisesecurity/sandbox-software.aspx.
[471] C. Thuen. Understanding counter-forensics to ensure a successful investigation. [online],
University of Idaho, 2007. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.
1.138.2196.
[472] S. Tlili. Automatic detection of safety and security vulnerabilities in open source software.
PhD thesis, Concordia Institute for Information Systems Engineering, Concordia University,
Montreal, Canada, 2009. ISBN: 9780494634165.
[473] X. Tong. Design and implementation of context calculus in the GIPSY. Master’s thesis,
Department of Computer Science and Software Engineering, Concordia University, Montreal,
Canada, Apr. 2008.
[474] X. Tong, J. Paquet, and S. A. Mokhov. Complete context calculus design and implementation
in GIPSY. [online], 2007–2008. http://arxiv.org/abs/1002.4392.
[475] K. Trinidad, K. Herring, S. Hendrix, and E. C. NASA. NASA sets target shuttle launch
date for Hubble servicing mission. [online], Dec. 2008. http://www.nasa.gov/home/hqnews/
2008/dec/HQ_08-320_Hubble_May2009.html.
[476] W. Truszkowski, M. Hinchey, J. Rash, and C. Rouff. NASA’s swarm missions: The challenge
of building autonomous software. IT Professional, 6(5):47–52, 2004.
[477] D. G. Ullman. Making Robust Decisions: Decision Management For Technical, Business, and
Service Teams. Victoria: Trafford, 2007. ISBN: 1-4251-0956-X, http://www.stsc.hill.af.
mil/CrossTalk/2007/04/0704Ullman.html.
[478] T. Uustalu and V. Vene. The essence of dataflow programming. In K. Yi, editor, Proceedings
of APLAS’05, volume 3780 of Lecture Notes in Computer Science, pages 2–18. Springer Berlin
Heidelberg, 2005.
[479] U. Vahalia. UNIX Internals: The New Frontiers. Prentice Hall, Inc., second edition, 1996.
ISBN 0-13-101908-2.
[480] J. van Benthem. A Manual of Intensional Logic. CSLI Publications, Stanford and The
University of Chicago Press, 1988.
[481] S. R. van den Berg and P. A. Guenther. procmail v3.22. [online], Sept. 2001. http://www.
procmail.org/.
[482] A. van Lamsweerde. Requirements Engineering: From System Goals to UML Models to Software Specifications. Wiley, 2009. ISBN: 978-0-470-01270-3.
[483] Various Contributors. fstat, fstat64, lstat, lstat64, stat, stat64 – get file status, BSD
System Calls Manual, stat(2). BSD, Apr. 1994. man stat(2).
[484] Various contributors and MITRE. Common Weakness Enumeration (CWE) – a communitydeveloped dictionary of software weakness types. [online], 2006–2013. See http://cwe.mitre.
org.
[485] Various Contributors and the GNU Project. GNU Compiler Collection (GCC). [online],
1988–2009. http://gcc.gnu.org/onlinedocs/gcc/.
[486] E. Vassev. ASSL: Autonomic System Specification Language – A Framework for Specification
and Code Generation of Autonomic Systems. LAP Lambert Academic Publishing, Nov. 2009.
ISBN: 3-838-31383-6.
[487] E. Vassev and M. Hinchey. ASSL: A software engineering approach to autonomic computing.
IEEE Computer, 42(6):90–93, 2009.
[488] E. Vassev and M. Hinchey. ASSL specification model for the image-processing behavior in
the NASA Voyager mission. Technical report, Lero - The Irish Software Engineering Research
Center, 2009.
[489] E. Vassev, M. Hinchey, and A. J. Quigley. A self-adaptive architecture for autonomic systems
developed with ASSL. In Shishkov et al. [429], pages 163–168.
[490] E. Vassev, M. Hinchey, and A. J. Quigley. Towards model checking with Java PathFinder
for autonomic systems specified and generated with ASSL. In Shishkov et al. [429], pages
251–256.
[491] E. Vassev, M. G. Hinchey, and J. Paquet. Towards an ASSL specification model for NASA
swarm-based exploration missions. In Proceedings of the 23rd Annual ACM Symposium on
Applied Computing (SAC 2008) - AC Track, pages 1652–1657. ACM, 2008.
[492] E. Vassev, H. Kuang, O. Ormandjieva, and J. Paquet. Reactive, distributed and autonomic
computing aspects of AS-TRM. In J. Filipe, B. Shishkov, and M. Helfert, editors, ICSOFT
(1), pages 196–202. INSTICC Press, Sept. 2006.
[493] E. Vassev and S. A. Mokhov. An ASSL-generated architecture for autonomic systems. In
Proceedings of C3S2E’09, pages 121–126, New York, NY, USA, May 2009. ACM.
[494] E. Vassev and S. A. Mokhov. Self-optimization property in autonomic specification of Distributed MARF with ASSL. In B. Shishkov, J. Cordeiro, and A. Ranchordas, editors, Proceedings of ICSOFT’09, volume 1, pages 331–335, Sofia, Bulgaria, July 2009. INSTICC Press.
[495] E. Vassev and S. A. Mokhov. Towards autonomic specification of Distributed MARF with
ASSL: Self-healing. In Proceedings of SERA 2010 (selected papers), volume 296 of SCI, pages
1–15. Springer, 2010.
[496] E. Vassev and S. A. Mokhov. Developing autonomic properties for distributed patternrecognition systems with ASSL: A Distributed MARF case study. LNCS Transactions on
Computational Science, Special Issue on Advances in Autonomic Computing: Formal Engineering Methods for Nature-Inspired Computing Systems, XV(7050):130–157, 2012. Accepted
in 2010; appeared February 2012.
[497] E. Vassev, O. Ormandjieva, and J. Paquet. ASSL specification of reliability self-assessment
in the AS-TRM. In J. Filipe, B. Shishkov, and M. Helfert, editors, Proceedings of the 2nd
International Conference on Software and Data Technologies (ICSOFT 2007), volume SE,
pages 198–206. INSTICC Press, July 2007.
[498] E. Vassev and J. Paquet. A general architecture for demand migration in a demand-driven
execution engine in a heterogeneous and distributed environment. In Proceedings of the 3rd
Annual Communication Networks and Services Research Conference (CNSR 2005), pages
176–182. IEEE Computer Society, May 2005.
[499] E. Vassev and J. Paquet. A generic framework for migrating demands in the GIPSY’s demanddriven execution engine. In Proceedings of the 2005 International Conference on Programming
Languages and Compilers (PLC 2005), pages 29–35. CSREA Press, June 2005.
[500] E. Vassev and J. Paquet. Towards autonomic GIPSY. In Proceedings of the Fifth IEEE
Workshop on Engineering of Autonomic and Autonomous Systems (EASE 2008), pages 25–
34. IEEE Computer Society, 2008.
[501] E. I. Vassev. General architecture for demand migration in the GIPSY demand-driven execution engine. Master’s thesis, Department of Computer Science and Software Engineering,
Concordia University, Montreal, Canada, June 2005. ISBN 0494102969.
[502] E. I. Vassev. Towards a Framework for Specification and Code Generation of Autonomic
Systems. PhD thesis, Department of Computer Science and Software Engineering, Concordia
University, Montreal, Canada, 2008.
[503] L. Verdoscia. ALFA fine grain dataflow machine. In Orgun and Ashcroft [350], pages 110–134.
[504] J. Vincent, D. Rolsky, D. Chamberlain, R. Foley, and R. Spier. RT Essentials. O’Reilly
Media, Inc., Aug. 2005.
[505] P. C. Vinh and J. P. Bowen. On the visual representation of configuration in reconfigurable
computing. Electron. Notes Theor. Comput. Sci., 109:3–15, 2004.
S. Viswanadha and Contributors. Java compiler compiler (JavaCC) - the Java parser generator. [online], 2001–2008. https://javacc.dev.java.net/.
W. W. Wadge. Possible WOOrlds. In Orgun and Ashcroft [350], pages 56–62. Invited
Contribution.
W. W. Wadge. Intensional logic in context. In Gergatsoulis and Rondogiannis [131], pages
1–13. Tutorial.
W. W. Wadge and E. A. Ashcroft. Lucid, the Dataflow Programming Language. Academic
Press, London, 1985.
W. W. Wadge, G. Brown, M. C. Schraefel, and T. Yildirim. Intensional HTML. In 4th
International Workshop PODDP’98, Mar. 1998.
W. W. Wadge and M. C. Schraefel. Putting the hyper back in hypertext. In Gergatsoulis and
Rondogiannis [131], pages 31–39.
W. W. Wadge and A. Yoder. The Possible-World Wide Web. In Orgun and Ashcroft [350],
pages 207–213.
K. Wan. Lucx: Lucid Enriched with Context. PhD thesis, Department of Computer Science
and Software Engineering, Concordia University, Montreal, Canada, 2006.
K. Wan, V. Alagar, and J. Paquet. A context theory for intensional programming. In Workshop on Context Representation and Reasoning (CRR05), July 2005.
K. Wan, V. Alagar, and J. Paquet. Lucx: Lucid enriched with context. In Proceedings of
the 2005 International Conference on Programming Languages and Compilers (PLC 2005),
pages 48–14. CSREA Press, June 2005.
L. Wang, S. Jajodia, and D. Wijesekera. Preserving Privacy in On-line Analytical Processing
(OLAP). Springer, Berlin, 2007. ISBN: 0-387-46273-2.
T. D. Wang, A. Deshpande, and B. Shneiderman. A temporal pattern search algorithm for
personal history event visualization. IEEE Trans. on Knowl. and Data Eng., 24(5):799–812,
May 2012.
M. Wenzel, L. C. Paulson, and T. Nipkow. The Isabelle framework. In Mohamed et al. [259],
pages 33–38.
Wikipedia. S.M.A.R.T. — Wikipedia, the free encyclopedia. [Online; accessed 2-May-2012],
2012. http://en.wikipedia.org/w/index.php?title=S.M.A.R.T.&oldid=489447839.
Wikipedia. Dempster–Shafer theory — Wikipedia, The Free Encyclopedia. [Online; accessed
15-July-2013], 2013. http://en.wikipedia.org/w/index.php?title=Dempster-Shafer_
theory&oldid=558643087.
R. Witte. SOEN 6481: Systems requirements specification, lecture notes. Department of
Computer Science and Software Engineering, Concordia University, Montreal, Canada, 2012.
Fall 2012.
R. Witte, Q. Li, Y. Zhang, and J. Rilling. Text mining and software engineering: An integrated
source code and document analysis approach. IET Software Journal, Special Issue on Natural
Language in Software Development, 2(1):3–16, Feb. 2008. http://www.semanticsoftware.
info/system/files/witte_etal_iet2008.pdf.
A. Wollrath and J. Waldo. Java RMI tutorial. Sun Microsystems, Inc., 1995–2005. http:
//java.sun.com/docs/books/tutorial/rmi/index.html.
F. Wolter and M. Zakharyaschev. Multi-dimensional description logics. In T. Dean, editor, Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence,
(IJCAI 1999), volume 2, pages 104–109. Morgan Kaufmann, 1999. http://ijcai.org/
PastProceedings/IJCAI-99-VOL-1/PDF/016.pdf.
S. W. Wood. A forensic computing framework to fit any legal system. In O. Göbel, S. Frings,
D. Günther, J. Nedon, and D. Schadt, editors, Proceedings of the IT Incident Management and IT Forensics (IMF’08), LNI140, pages 41–54, Sept. 2008.
A. Wu, J. Paquet, and S. A. Mokhov. Object-oriented intensional programming: Intensional
Java/Lucid classes. In Proceedings of SERA 2010, pages 158–167. IEEE Computer Society,
2010. Online at: http://arxiv.org/abs/0909.0764.
A. H. Wu. Semantic checking and translation in the GIPSY. Master’s thesis, Department of
Computer Science and Software Engineering, Concordia University, Montreal, Canada, 2002.
A. H. Wu. OO-IP Hybrid Language Design and a Framework Approach to the GIPC. PhD
thesis, Department of Computer Science and Software Engineering, Concordia University,
Montreal, Canada, 2009.
A. H. Wu and J. Paquet. Object-oriented intensional programming in the GIPSY: Preliminary investigations. In Proceedings of the 2005 International Conference on Programming
Languages and Compilers (PLC 2005), pages 43–47. CSREA Press, June 2005.
A. H. Wu, J. Paquet, and P. Grogono. Design of a compiler framework in the GIPSY system.
In Proceedings of the 15th IASTED International Conference on Parallel and Distributed
Computing and Systems (PDCS 2003), volume 1, pages 320–328. International Association of
Science and Technology for Development, Nov. 2003.
M.-D. Wu and S. D. Wolfthusen. Network forensics of partial SSL/TLS encrypted traffic
classification using clustering algorithms. In O. Göbel, S. Frings, D. Günther, J. Nedon, and
D. Schadt, editors, Proceedings of the IT Incident Management and IT Forensics (IMF’08),
LNI140, pages 157–172, Sept. 2008.
H. Yahyaoui, M. Debbabi, and N. Tawbi. A denotational semantic model for validating
JVML/CLDC optimizations under Isabelle/HOL. In Proceedings of the 7th International
Conference on Quality Software (QSIC’07), pages 348–355, 2007.
S. Yamane. Real-time object-oriented specification and verification. In Orgun and Ashcroft
[350], pages 162–185.
Y. Yan. Description language and formal methods for web service process modeling. In
Business Process Management: Concepts, Technologies and Applications, volume Advances
in Management Information Systems. M.E. Sharpe Inc., 2008.
Y. Yan and X. Zheng. A planning graph based algorithm for semantic web service composition.
In CEC/EEE 2008, pages 339–342, 2008.
L. A. Zadeh. On the validity of Dempster’s rule of combination of evidence. Technical Report
79/24, University of California, Berkely, CA, 1979.
L. A. Zadeh. Fuzzy logic. Scholarpedia, 3(3):1766, 2008. http://www.scholarpedia.org/
article/Fuzzy_logic.
Y. Zhang, J. Rilling, and V. Haarslev.
An ontology based approach to software
comprehension–reasoning about security concerns in source code. In 30th IEEE COMPSAC
2006, Chicago, Sept. 2006.
Q. Zhao. Implementation of an object-oriented intensional programming system. Master’s
thesis, Department of Computer Science, University of New Brunswick, Canada, 1997.
C. Zheng and J. R. Heath. Simulation and visualization of resource allocation, control, and
load balancing procedures for a multiprocessor architecture. In MS’06: Proceedings of the 17th
IASTED international conference on Modelling and simulation, pages 382–387, Anaheim, CA,
USA, 2006. ACTA Press.
Part IV
Appendices
Appendix A
Glossary
ACL — access control list
ACL2 — A Computational Logic for Applicative Common Lisp [205]
ADMARF — Autonomic DMARF
ADM — ADM (Adia–Debbabi–Mejri) process logic [4]
ACME — fictitious company name (http://en.wikipedia.org/wiki/Acme_Corporation)
AGIPSY — Autonomic GIPSY [500]
AOP — aspect-oriented programming
API — application programming interface (http://en.wikipedia.org/wiki/API)
ARP — Address Resolution Protocol [382] (http://en.wikipedia.org/wiki/Address_Resolution_Protocol)
AE — autonomic element
AS — autonomic system
AS-TRM — autonomic system-time reactive model [492, 497]
ASSL — Autonomic System Specification Language [502]
AST — abstract syntax tree
AYDY — are-you-done-yet polling
BIOS — Basic Input/Output System
BPEL — Business Process Execution Language [177, 210, 349]
CAD — computer-aided design
CAF — computer anti-forensics
CCD — charge-coupled device
CF — computer forensics
CFG — control-flow graph
Cisco IOS — originally Internetwork Operating System
CIFS — Common Internet File System
CLI — command-line interface
CLIPS — C Language Integrated Production System [400]
codecessor — a (technically) co-descendant notion or concept, not necessarily a close
sibling, among like concepts that emerged around the same timeframe from the same or
similar predecessor concept(s) [300, 304, 312]
CORBA — Common Object Request Broker Architecture
CPU — central processing unit
CSV — comma-separated values
CVE — Common Vulnerabilities and Exposures
CVS — Concurrent Versions System [146]
CWE — Common Weakness Enumeration [484]
DAG — directed acyclic graph
DBMS — database management system
DBSCAN — Density-Based Spatial Clustering of Applications with Noise [531]
DFG — data-flow graph
DFRWS — Digital Forensics Research Workshop [357]
DGT — Demand Generator Tier
DIR — directory
DHCP — Dynamic Host Configuration Protocol
DL — description logic
DMARF — Distributed MARF
DMF — Demand Migration Framework
DMS — Demand Migration System
DNS — Domain Name System
DSTME — Dempster–Shafer theory of mathematical evidence
DST — Demand Store Tier
DWT — Demand Worker Tier
EHR — Electronic Health Records [458]
ENCS — Faculty of Engineering and Computer Science, Concordia University
ERA — event reconstruction algorithm (Section 2.2.4.4)
FOIL — first-order intensional logic [113]
FFT — Fast Fourier Transform
FSA — finite-state automata
FTK — Forensic Toolkit [2]
FTP — File Transfer Protocol
GATE — General Architecture for Text Engineering [462]
GC — Global Conventionalism [30]
GCC — GNU Compiler Collection [485]
GEE — General Eduction Engine
GEER — GEE Resources
GICF — General Imperative Compiler Framework [261]
GIPC — General Intensional Program Compiler, Figure 32, [370, 463]
GIPL — General Intensional Programming Language [361, 370, 463]
GIPSY — General Intensional Programming System [370, 463]
GLU — Granular Lucid [188, 189, 361]
GMT — General Manager Tier
GGMT — graphical (graph-based) GMT [393]
GUI — graphical user interface
HCI — human-computer interaction
HOIFL — higher-order intensional fuzzy logic
HOIL — higher-order intensional logic
HOL — higher-order logic
I/O — input/output
IC — identified-context (class) [241]
ICMP — Internet Control Message Protocol
ICT — information and communications technology
IDE — interactive development environment
IDS — intrusion detection system
iHTML — Intensional HTML [510, 511]
IME — interactive modeling environment
IRSNET — intelligent RSNET
IODEF — the Incident Object Description Exchange Format (www.ietf.org/rfc/rfc5070.txt)
IP — depending on context of a particular chapter, may refer to
• Intensional Programming [131, 350] (Chapter 1, Section 3.2)
• Internet Protocol Layer-3 address [79, 80] (Section 9.5)
IPL — Intensional Programming Language (e.g., Forensic Lucid, GLU, Lucid, Indexical Lucid, JLucid, Tensor Lucid, Objective Lucid, Onyx [144], GIPL,
TransLucid)
ISS — Illimitable Space System [437]
JavaCC — Java Compiler Compiler [506]
JDSF — Java Data Security Framework
JMS — Java Messaging Service
JOOIP — Java-based object-oriented intensional programming language
JPF — Java Plug-in Framework
JSON — JavaScript Object Notation
LAN — local area network
LOD — level of detail
LPC — linear predictive coding
MAC — Media Access Control address [180] (http://en.wikipedia.org/wiki/MAC_address)
MARF — Modular Audio Recognition Framework [465]
MARFCAT — MARF-based Code Analysis Tool [285, 287]
MARFL — MARF Language [272]
MARFPCAT — MARF-based PCap Analysis Tool
MRTG — the Multi Router Traffic Grapher [345]
NAG — Network Administration Group (of the Faculty of ENCS, Concordia)
NICTER — Network Incident analysis Center for Tactical Emergency Response [104]
MDP — Markov Decision Process
ME — managed element
MPR — map of partitioned runs (see Section 2.2.4.4.4)
MSPR — map of sequence of partitioned runs (see Section 2.2.4.6.2)
MSA — MAC Spoofer Analyzer (Section 9.5)
msw — MAC Spoof Watcher
NFS — Network File System [443, 479]
NLP — natural language processing
NVD — National Vulnerability Database [340]
OO — object-oriented
OODA — Observe-Orient-Decide-Act loop (http://en.wikipedia.org/wiki/OODA_loop)
OpenGL — Open Graphics Library
OSI — Open Systems Interconnection model [79, 80] (http://en.wikipedia.org/wiki/OSI_reference_model)
OSS — open source software
OWL — Web Ontology Language
PC — personal computer
PCM — pulse-code modulation
pcap — packet capture data
PDS — push down system
PA — Peano arithmetic
PID — process identifier [443, 479]
PL — programming language
PoC — proof-of-concept
PR — depending on context of a particular chapter, may refer to
• partitioned run (Section 2.2.2)
• processing resource (as in GATE, Chapter 5)
PRISM — probabilistic model checker [467]
PS — problem-specific
QoS — quality of service
R&D — research and development
RIPE — Run-time Intensional Programming Environment (“IDE”)
RMI — Remote Method Invocation
RPC — Remote Procedure Call
RSNET — road-side network
RT — RT: Request Tracker [504] (http://www.bestpractical.com/?rt=4.0.8)
SAMATE — Software Assurance Metrics And Tool Evaluation NIST project (http://samate.nist.gov)
SATE — Static Analysis Tool Exposition (http://samate.nist.gov/SATE.html)
SAVE — Static Analysis of Vicious Executables framework [451]
SDWT — separating discrete wavelet transform
self-CHOP — self-(configuring, healing, optimizing, and protecting) properties
SFAP — self-forensics autonomic property (Appendix D)
SIPL — Specific IPL (e.g., Indexical Lucid, JLucid, Tensor Lucid, Objective Lucid, Onyx)
SHL — Security Hardening Language [228]
S.M.A.R.T. — Self-Monitoring, Analysis, and Reporting Technology [12, 67, 519]
SNMP — Simple Network Management Protocol [440]
SMI — Structure of Management Information [184] (http://en.wikipedia.org/wiki/Structure_of_Management_Information)
SLO — service-level objective
SOA — service-oriented architecture
SSD — solid state drive
SSH — secure shell
SSL — secure socket layer
STDERR — standard error stream [443, 479]
STDIN — standard input stream [443, 479]
STDOUT — standard output stream [443, 479]
STGT — Simple Theory of GIPSY Types, an STT extension (Appendix B)
STT — Mendelson’s Simple Theory of Types [253]
swpvio — switch port violation (see Section 9.5)
SYN — “synchronize” packet
SysML — Systems Modeling Language
TA — transport agent
TIL — Transparent Intensional Logic [98]
TTL — time-to-live
TCT — The Coroner’s Toolkit [461]
TLS — transport layer security
UEFI — Unified Extensible Firmware Interface (http://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface)
UC — Use Case
UM — user-managed
UML — Unified Modeling Language [47, 224, 482]
URI — Uniform Resource Identifier
VANET — vehicular ad hoc network
VCN — vehicular network
ViM — Vi Improved [325]
VLAN — virtual LAN [31, 70]
WAV — wave audio file format
WFT — Windows Forensic Toolkit [252]
XML — Extensible Markup Language
Appendix B
A Type System Theory for Higher-Order Intensional Logic Support in Hybrid Intensional-Imperative Programs in GIPSY
We describe a type system and its theory for the General Intensional Programming System (GIPSY), designed to support intensional programming languages (built upon higher-order intensional logic) and their imperative counterparts for the eductive execution model. We extend Mendelson's simple theory of types (STT) [17, 107, 253] by adding the intensionality axiom to it. The intensionality principle covers language expressions that explicitly take into account a multidimensional context space of evaluation, with the context being a first-class value, and serves a number of applications that need the notion of context to proceed. The theory is complemented with the software engineering design and implementation study of the GIPSY type system [301]. In GIPSY, the type system glues the static and dynamic typing between intensional and imperative languages in its compiler and run-time environments to support intensional expressions and their evaluation written in various dialects of the intensional programming language Lucid. We, therefore, describe and discuss the properties of such a type system and the related type theory, as well as the particularities of the semantics, design, and implementation of the GIPSY type system [315].
B.1 Overview
This work is primarily a combination of previous results describing the concrete GIPSY types specification and implementation in [315] as well as the theoretical foundations behind the GIPSY Type System in [301].
The General Intensional Programming System (GIPSY) (see Chapter 6 for an in-depth
background discussion) has been built around the Lucid family of intensional programming languages (see Chapter 4) that rely on the higher-order intensional logic (HOIL, see
Section 4.4) to provide context-oriented multidimensional reasoning about intensional expressions. HOIL combines functional programming with various intensional logics to allow explicit context expressions to be evaluated as first-class values that can be passed as parameters to functions and returned as results, with an appropriate set of operators defined on contexts [302]. GIPSY's frameworks are implemented in Java as a collection of replaceable components for the compilers of various Lucid dialects and for the demand-driven eductive evaluation engine that can run in a distributed manner. GIPSY provides support for hybrid programming models that couple intensional and imperative languages for a variety of needs. Explicit context expressions limit the scope of evaluation of mathematical expressions (effectively, a Lucid program is a mathematics or physics expression constrained by the context) in tensor physics, regular math in multiple dimensions, etc., and support cyberforensic reasoning as one of the use cases of interest (Chapter 7). Thus, GIPSY is a support testbed for HOIL-based languages, some of which enable such reasoning, as in formal cyberforensic case analysis with event reconstruction. Its type system is designed to support all these hybrid paradigms: the standard Lucid algebra (i.e., its types and operators) is extremely fine-grained and can hardly benefit from parallel evaluation of its operands (cf. Chapter 4), and adding granularity to the data elements manipulated by Lucid inevitably comes through the addition of coarser-grained data types and their corresponding operators. Since Lucid semantics is defined as typeless, a solution to this problem consists in adding a hybrid counterpart to Lucid that allows an external language to define an algebra of coarser-grained types and operators. In turn, this solution raises the need for an elaborate type system to bridge the implicit types of Lucid semantics with the explicit types and operators (i.e., functions) defined in the hybrid counterpart language [315]. This chapter, therefore, presents such a type system used in GIPSY at compile time for static type checking, as well as at run time for dynamic type checking [301, 315].
B.1.1 Organization
After the problem statement and a short presentation of our proposed solution in Section B.1.2, a concise description of the GIPSY Type System subproject of the GIPSY research and development effort follows in Section B.2.1. Having briefly covered this introductory material, we move on to the actual definition of the Simple Theory of GIPSY Types (STGT) as an extension of the classical Simple Theory of Types (STT) in Section B.3. Further, we describe the properties of our STGT via various categories of types and their applicability to our system in Section B.5 to illustrate where STGT stands [301]. Then, we present the complete GIPSY type system as used by the compiler (the General Intensional Programming Compiler—GIPC, see Section 6.2.1) and the run-time system (the General Eduction Engine—GEE, see Section 6.2.2) when producing and executing a binary GIPSY program [264, 282] (called General Eduction Engine Resources—GEER), respectively [161, 301, 315, 362]. Finally, we conclude in Section B.6, describing limitations and future work to address them [301]. (We do not overview the GIPSY project here as it and the related work are detailed in Chapter 6.)
B.1.2 Summary of the Problem and the Proposed Solution
The beginnings of the GIPSY Type System were bootstrapped by a number of related works [264, 282, 301, 315] in GIPSY to support hybrid and object-oriented intensional programming [526, 528], as well as Lucx's context type extension, known as the context calculus implementation [365, 473], which allows contexts to act as first-class values [301].
Problem. Data types are implicit in Lucid (as well as in its dialects and in many functional languages). As such, type declarations normally never appear in Lucid programs at the syntactical level. The data type of a value is inferred from the result of evaluation of an expression. In most imperative languages, such as Java and C++, the types are explicit and the programmers must declare the types of variables, function parameters, and return values before they are used in evaluation [315]. In GIPSY, we needed to allow any Lucid dialect to uniformly invoke functions and methods written in imperative languages (and the other way around), to perform semantic analysis in the form of type assignment and checking, either statically at compile time or dynamically at run time, to perform any type conversion if needed, and to evaluate such hybrid HOIL expressions. At the same time, we need to allow a programmer to specify, or declare, the types of variables, parameters, and return values for both intensional and imperative functions as a binding contract between inter-language invocations, despite the fact that Lucid is not explicitly typed [315]. Thus, we need a general type system, well specified and founded in a sound theory, designed and implemented to support such scenarios [301, 315].
Proposed Solution. The unique particularity of the type system presented here is that it sits above a specific programming language model, be it of the Lucid family of languages or of the imperative languages. It is designed to bridge programming language paradigms, the two most general of which are the intensional and imperative paradigms. GIPSY has a collection of frameworks designed to support a common run-time environment and co-existence of the intensional and imperative languages [315]. Thus, the type system is that of a generic GIPSY program that can include code segments written in a theoretically arbitrary number of intensional and imperative dialects supported by the system, rather than being a type system for a specific language [315]. What follows are the details of the proposed solution and the specification of the simple GIPSY type system and its theory [301, 315].
B.2 The GIPSY Type System
B.2.1 Introduction to the GIPSY Type System
The introduction of JLucid, Objective Lucid, and the General Imperative Compiler Framework (GICF) [264, 282] prompted the development of the GIPSY Type System as implicitly understood by the Lucid language and its incarnation within GIPSY, in order to handle types in a more general manner as glue between the imperative and intensional languages within the system. Further evolution of different Lucid variants, such as Lucx introducing contexts as first-class values, and JOOIP [526, 528] (the Java Object-Oriented Intensional Programming language), highlighted the need for further development of the type system to accommodate the more general properties of the intensional and hybrid languages [301, 315].
The type system is also required to extend the higher-order intensional logic (HOIL) [302] support onto the imperative dialects in the hybrid intensional-imperative programs. This in particular includes the notion of context added to the imperative programs, as well as dynamic variable binding and assignment upon intensional type discovery when evaluating intensional expressions and, for example, assigning their result to a variable declared in a Java class. The same applies to function and method parameters as well as their return results. This further necessitates type matching rules between Lucid and other languages, similar to the ones defined, for example, for Java in Table 17 per earlier works [264, 282, 301, 315, 365, 526, 528].
B.2.2 Matching Lucid and Java Data Types
Here we present a case of interaction between Lucid and Java [141]. Allowing Lucid to call
Java methods brings a set of issues related to the data types, especially when it comes to type
checks between Lucid and Java parts of a hybrid program. This is pertinent when Lucid
variables or expressions are used as parameters to Java methods and when a Java method
returns a result to be assigned to a Lucid variable or used in an intensional expression. The
sets of types in both cases are not exactly the same. The basic set of Lucid data types as
defined by Grogono [143] is int, bool, double, string, and dimension. Lucid’s int is of
the same size as Java’s long. GIPSY and Java double, boolean, and String are equivalent.
Lucid string and Java String are simply mapped internally through StringBuffer; thus,
one can think of the Lucid string as a reference when evaluated in the intensional program.
Based on this fact, the lengths of a Lucid string and Java String are the same. Java
String is also an object in Java; however, at this point, a Lucid program has no direct
access to any String’s properties (though internally we do and we may expose it later to the
programmers). We also distinguish the float data type for single-precision floating point
operations. The dimension index type is said to be an integer or string (as far as its dimension
tag values are concerned), but might be of other types eventually, as discussed in [365, 473].
Therefore, we perform data type matching as presented in Table 17. Additionally, we allow
the void Java return type, which will always be matched to a Boolean expression true in
Lucid, as an expression always has to evaluate to something. As of now, our type mappings and restrictions are as per Table 17. This is the mapping table for the Java-to-IPL-to-Java
type adapter. Such a table would exist for mapping between any imperative-to-intensional
language and back, e.g., the C++-to-IPL-to-C++ type adapter [315].
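To make the mapping of Table 17 a bit more concrete, the sketch below shows how a bidirectional lookup of this kind could be organized in Java. It is a hypothetical illustration written for this appendix, not the actual GIPSY type adapter: the class and method names (LucidJavaTypeAdapter, toJava, toLucid) are invented, and only a few of the Table 17 rows are encoded.

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a Java-to-IPL-to-Java style type adapter:
// it records a few of the Table 17 rows and answers lookup queries
// in both directions. Names are illustrative only.
public class LucidJavaTypeAdapter {

    // Lucid parameter type -> corresponding Java parameter type
    private final Map<String, String> lucidToJava = new HashMap<>();

    // Java return type -> corresponding Lucid expression type
    private final Map<String, String> javaToLucid = new HashMap<>();

    public LucidJavaTypeAdapter() {
        lucidToJava.put("string", "String");
        lucidToJava.put("float", "float");
        lucidToJava.put("double", "double");
        lucidToJava.put("int", "int");
        lucidToJava.put("bool", "boolean");

        javaToLucid.put("long", "int");
        javaToLucid.put("boolean", "bool");
        javaToLucid.put("String", "string");
        // void is matched to the Lucid constant true, as described above
        javaToLucid.put("void", "bool::true");
    }

    public String toJava(String lucidType) {
        String java = lucidToJava.get(lucidType);
        if (java == null) {
            throw new IllegalArgumentException("no Java mapping for Lucid type: " + lucidType);
        }
        return java;
    }

    public String toLucid(String javaReturnType) {
        String lucid = javaToLucid.get(javaReturnType);
        if (lucid == null) {
            throw new IllegalArgumentException("no Lucid mapping for Java type: " + javaReturnType);
        }
        return lucid;
    }

    public static void main(String[] args) {
        LucidJavaTypeAdapter adapter = new LucidJavaTypeAdapter();
        System.out.println(adapter.toJava("bool"));   // boolean
        System.out.println(adapter.toLucid("void"));  // bool::true
    }
}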
B.2.3 Design and Implementation of the Type System
While the main language of GIPSY, Lucid, is polymorphic and does not have explicit types, co-existing with other languages necessitates the definition of GIPSY types and their mapping to a particular language being embedded. Figure 75 presents the detailed design of the GIPSY Type System [315].
Each class is prefixed with GIPSY to avoid possible confusion with similar definitions in
the java.lang package. The GIPSYVoid type always evaluates to the Boolean true, as
described earlier in Section B.2.2. The other types wrap around the corresponding Java
Figure 75: The GIPSY type system [315]
Table 17: Matching data types between Lucid and Java [301, 315]

Parameter Types Used in Lucid | Corresponding Java Types | Internal GIPSY Types
string                        | String                   | GIPSYString
float                         | float                    | GIPSYFloat
double                        | double                   | GIPSYDouble
int                           | int                      | GIPSYInteger
dimension                     | int, String              | Dimension
bool                          | boolean                  | GIPSYBoolean
class                         | Object                   | GIPSYObject
URI                           | Object                   | GIPSYEmbed
[]                            | []                       | GIPSYArray
operator                      | Method                   | GIPSYOperator
function                      | Method                   | GIPSYFunction

Return Types of Java Methods  | Types of Lucid Expressions | Internal GIPSY Types
int, byte, long               | int, dimension             | GIPSYInteger
float                         | float                      | GIPSYFloat
double                        | double                     | GIPSYDouble
boolean                       | bool                       | GIPSYBoolean
char                          | char                       | GIPSYCharacter
String                        | string, dimension          | GIPSYString
Method                        | function                   | GIPSYFunction
Object                        | class                      | GIPSYObject
Object                        | URI                        | GIPSYEmbed
void                          | bool::true                 | GIPSYVoid
object wrapper classes for the primitive types, such as Long, Float, etc. Every class keeps a
lexeme (a lexical representation) of the corresponding type in a GIPSY program and overrides
toString() to show the lexeme and the contained value. These types are extensively used
by the Preprocessor, imperative and intensional compilers, and SemanticAnalyzer for the
general GIPSY program processing, and by the GEE's Executor [315].
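As a rough illustration of the wrapper idea described above, the following sketch shows what a GIPSYInteger-like class could look like: a lexeme, an enclosed java.lang.Long value, and a toString() that shows both. This is a simplified stand-in written for this example, not the actual GIPSY implementation.

// Simplified sketch of a GIPSY-style wrapper type: it keeps the lexeme
// (the lexical representation seen in the GIPSY program) together with
// the enclosed Java value. Illustrative only; the real GIPSYInteger
// class is richer than this.
public class SimpleGIPSYInteger {

    private final String lexeme;   // e.g. "42" as it appeared in the source
    private final Long value;      // enclosed java.lang.Long value

    public SimpleGIPSYInteger(String lexeme, long value) {
        this.lexeme = lexeme;
        this.value = value;
    }

    public Object getEnclosedTypeObject() {
        return Long.class;         // the Java type this wrapper encloses
    }

    public Long getValue() {
        return value;
    }

    @Override
    public String toString() {
        // Show both the lexeme and the contained value, as the text describes.
        return "int(" + lexeme + ") = " + value;
    }

    public static void main(String[] args) {
        SimpleGIPSYInteger i = new SimpleGIPSYInteger("42", 42L);
        System.out.println(i);     // int(42) = 42
    }
}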
The other special types that have been created are either experimental or do not correspond to a wrapper of a primitive type. The GIPSYIdentifier type corresponds to a declaration of some sort of an identifier in a GIPSY program to be put into the dictionary, be it a variable or a function name, with the reference to its definition. Constants and conditionals may be anonymous and thereby not have a corresponding explicit identifier. GIPSYEmbed is another special type that encapsulates embedded code via a URI parameter and is later exploded into multiple types corresponding to procedural demands (Java or any other language methods or functions) [264]. GIPSYFunction and its descendant GIPSYOperator correspond to the function types for regular operators and user-defined functions. A GIPSYFunction can either encapsulate an ordinary Lucid function (which is immutable as in functional programming) or a procedure (e.g., a Java method), which may often be mutable (i.e., with side effects). These four types (identifier, embed, function, and operator) are not directly exposed
to a GIPSY programmer and at this point are managed internally. An operation is usually
mapped by the actual operators like addition, subtraction, and others via the corresponding
algebras. GIPSYContext and Dimension are a new addition to the type system implementation since its first inception [264]. They represent context-as-first-class-values in the context
calculus defined by Wan in [513] and refined and implemented by Tong [473]. The rest of the
type system is exposed to the GIPSY programmers in the preamble of a GIPSY program,
i.e., the #funcdecl and #typedecl segments, which result in the embryo of the dictionary for
linking, semantic analysis, and execution. Once imperative compilers of procedural demands
return, the type data structures (return and parameter types) declared in the preamble are
matched against what was discovered by the compilers and if the match is successful, the link
is made. By capturing types such as identifier, embed, function, operator, context, and dimension, the GIPSY type system lays down the fundamentals of the higher-order intensional logic (HOIL) support that combines functional programming, intensional logic, context calculus, and, in some instances, hybrid paradigm support, and the corresponding types [315].
We further describe various properties of the concrete GIPSY types and their more detailed specification in Section B.4 and Section B.5. There we detail the inner workings of each type [315] as well as describe some of their properties through the notions of existential, union, intersection, and linear types.
B.2.4 Use of the GIPSY Type System
Some of the types are in use only in hybrid programs; that is where they are mostly visible to a programmer. Another subset of types is used internally by GIPSY in its compiler (GIPC) and run-time environment (GEE) frameworks, primarily for dynamic discovery and semantic checking of the types [315].
Relationship to GIPC. The GIPC uses the type system and classes defined for it in the
compilation process to do static type checking and type annotation of literal values in source
code programs as well as storing the type annotations with the parameter or return values in
hybrid dialects to indicate expected data types to be passed or returned when calling a Java
method from a Lucid dialect and back. Some of the static type declarations come from the
Preprocessor after parsing the #funcdecl and #typedecl sections [315].
Relationship to GEE. The GEE uses the type system at run time to do dynamic type checking as well as to perform the actual evaluation of arithmetic, context set, and object operators. When the execution of a given GIPSY program reaches an ImperativeNode [264] indicating a call to an imperative procedure, the type annotations and expressions are used to validate the parameter and return types, to check whether they match, and to indicate an error if they do not [264, 315].
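A hedged sketch of the kind of run-time check just described: declared parameter types (as they would come from the program preamble) are compared against the actual argument values before an imperative procedure is invoked. The names used here (ProcedureCallChecker, checkCall, TypeMismatchException) are invented for the example and do not come from the GIPSY code base.

import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of dynamic parameter-type validation before an
// imperative (procedural-demand) call, in the spirit of the GEE check
// described above. Names are illustrative, not the GIPSY API.
public class ProcedureCallChecker {

    public static class TypeMismatchException extends RuntimeException {
        public TypeMismatchException(String message) {
            super(message);
        }
    }

    // Declared parameter types come from the preamble; actual arguments
    // are the evaluated intensional expressions passed to the procedure.
    public static void checkCall(String name, List<Class<?>> declared, List<Object> actual) {
        if (declared.size() != actual.size()) {
            throw new TypeMismatchException(name + ": expected " + declared.size()
                    + " arguments, got " + actual.size());
        }
        for (int i = 0; i < declared.size(); i++) {
            Object arg = actual.get(i);
            if (arg == null || !declared.get(i).isInstance(arg)) {
                throw new TypeMismatchException(name + ": argument " + i
                        + " expected " + declared.get(i).getSimpleName()
                        + " but got " + (arg == null ? "null" : arg.getClass().getSimpleName()));
            }
        }
    }

    public static void main(String[] args) {
        // Matches: a (Long, String) declaration with matching argument values.
        checkCall("writeRecord", Arrays.asList(Long.class, String.class),
                Arrays.asList(42L, "sample.wav"));

        // Mismatch: a Double where a Long was declared raises an error.
        try {
            checkCall("writeRecord", Arrays.asList(Long.class, String.class),
                    Arrays.asList(3.14, "sample.wav"));
        } catch (TypeMismatchException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}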
B.3 Simple Theory of GIPSY Types
B.3.1 Simple Theory of Types
Our simple theory of the GIPSY types (STGT) is based on the “Simple theory of types”
(STT) by Mendelson [253]. The theoretical and practical considerations are described in the
sections that follow. The STT partitions the qualification domain into an ascending hierarchy
of types with every individual value assigned a type. The type assignment is dynamic for the
intensional dialects as the resulting type of a value in an intensional expression may not be
known at compile time. The assignment of the types of constant literals is done at compile time, however. In the hybrid system, which is mostly statically typed at the code-segment
boundaries, the type assignment also occurs at compile-time. On the boundary between the
intensional programming languages (IPLs) and imperative languages, prior to an imperative
procedure being invoked, the type assignment to the procedure’s parameters from IPL’s
expression is computed dynamically and matched against a type mapping table similar to
that of Table 17. Subsequently, when the procedure call returns back to the IPL, the type of
the imperative expression is matched back through the table and assigned to the intensional
expression that expects it. The same table is used when the call is made by the procedure
to the IPL and back, but in the reverse order [301].
Further in STT, all quantified variables range over only one type, making first-order logic applicable as the underlying logic for the theory. This also means that all elements in the domain and all co-domains are of the same type. The STT states there is an atomic (lowest) type whose individuals have no member elements of their own, and the members of the next higher type are sets of those atomic individuals. Each type has a next higher type, similarly to succ (≻) in Peano arithmetic (PA) and the next operator in Lucid. This is also consistent with describing composite types, such as arrays and objects, as they can be recursively decomposed (or “flattened”, see [264, 282]) into the primitive types to which the STT applies [301].
Symbolically, the STT uses primed and unprimed variables and the infix set notation of
∈. The formulas Φ(x) rely on the fact that the unprimed variables are all of the same type.
This is similar to the notion of a Lucid stream with the point-wise elements of the stream
having the same type. The primed variables (x′ ) in STT range over the next higher type.
There are two atomic formulas in STT of the form of identity, x = y, and set membership,
y ∈ x′ [301].
B.3.2 GIPSY Types Axioms
The STT [253] defines the four basic axioms for the variables and the types they can range
over: Identity, Extensionality, Comprehension, and Infinity. In STGT, we add Intensionality as the fifth axiom [301]. The variables in the definition of the Identity relationship
and in the Extensionality and Comprehension axioms typically range over the elements of
one of the two nearby types. In the set membership [119, 237], only the unprimed variables
that range over the lower type in the hierarchy can appear on the left of ∈; conversely, the
primed ones that range over higher types can only appear on the right of ∈ [253]. The axioms
are defined as:
1. Identity: x = y ↔ ∀z ′ [x ∈ z ′ ↔ y ∈ z ′ ]
2. Extensionality: ∀x[x ∈ y ′ ↔ x ∈ z ′ ] → y ′ = z ′
3. Comprehension: ∃z ′ ∀x[x ∈ z ′ ↔ Φ(x)]. This covers objects and arrays, as any collection of elements here may form an object of the next, higher type. The STT states the comprehension axiom is schematic with respect to Φ(x) (which is a first-order formula with x as a free variable) and the types [253]. Φ(x) works with type hierarchies and is not arbitrary, unlike in Church's [17], Russell's, and other type theories, thereby avoiding Russell's paradox.
4. Infinity: ∀x, y[x ̸= y → [xRy ∨ yRx]]. There exists a non-empty binary relation R over the elements of the atomic type that is transitive, irreflexive, and strongly connected.
5. Intensionality: the intensional types and operators (cf. Table 13, page 161) are based
on the intensional logic (Section 3.2, page 59) and context calculus (Section 7.2.3.1.4,
page 165). These are extensively described in Chapter 3, Chapter 4, Chapter 6,
Chapter 7 and the previously cited works [113, 131, 350, 361, 365, 473, 513]. This
present type system accommodates the two in a common hybrid execution environment of the GIPSY. A context c is a finite subset of the relation (Section 7.2.3.1.4,
page 165, [473, 516]): c ⊂ {(d, x) | d ∈ DIM ∧ x ∈ T }, where DIM is the set of all
possible dimensions, and T is the set of all possible tags [301] (indices). Therefore,
∀x @ c[(x @ c) ∈ y ′ ↔ (x @ c) ∈ z ′ ] → y ′ = z ′ .
B.4 Concrete Specification of the GIPSY Types
The following sections describe the behavior, equivalence, implementation, valid range of
values, and supported operators of every type in the GIPSY system. We define the lexical
specification if applicable, its visibility to (explicit ability to use by) the programmers, and
the type implementation details in the GIPSY environment [315].
Integer.
• Lexeme: int
• Visibility to programmers: explicitly visible only in the hybrid GIPSY programs.
• The valid range of the type is the same as the corresponding value of the type long in
Java. That covers all the Java byte, int, and long types.
• Internal implementation: the GIPSYInteger class corresponds to this type. Internally,
the class encapsulates java.lang.Long, thereby making all operations available to that type in Java also available to GIPSY.
• Operators supported: arithmetic, logic, equality
Single-Precision Floating Point Number.
• Lexeme: float
• Visibility to programmers: explicitly visible only in the hybrid GIPSY programs.
• The valid range of the type is the same as the corresponding value of the type float
of Java.
• Internal implementation: the GIPSYFloat class corresponds to this type. Internally,
the class encapsulates java.lang.Float, thereby making all operations available to that type in Java also available to GIPSY.
• Operators supported: arithmetic, logic, equality
Double-Precision Floating Point Number.
• Lexeme: double
• Visibility to programmers: explicitly visible only in the hybrid GIPSY programs.
• The valid range of the type is the same as the corresponding value of the type double
of Java.
• Internal implementation: the GIPSYDouble class corresponds to this type. Internally,
the class encapsulates java.lang.Double, thereby making all operations available to that type in Java also available to GIPSY.
• Operators supported: arithmetic, logic, equality
Character String.
• Lexeme: string
• Visibility to programmers: explicitly visible only in the hybrid GIPSY programs. Implicitly defined in Lucid programs as enclosed into either single (’) or double (”)
quotation marks. In the case of single (’) quote, it has to be more than one character
to be considered a string; otherwise it is treated as the Character type.
• The valid range of the type is the same as the corresponding value of the type String
of Java.
• Internal implementation: the GIPSYString class corresponds to this type. Internally,
the class encapsulates an instance of java.lang.StringBuffer, thereby making all operations available to that type in Java also available to GIPSY.
• Operators supported: concatenation, substrings, subscript, dot, equality
Character.
• Lexeme: char
• Visibility to programmers: explicitly visible only in the hybrid GIPSY programs. Implicitly defined in Lucid programs as a single character enclosed in single (’) quotation marks. It is assignable to the String type (in a way similar to an integer being assignable to a double) and can participate in concatenation to form character strings.
• The valid range of the type is the same as that of the Java char type, i.e., the 2-byte Unicode character set.
• Internal implementation: the GIPSYCharacter class corresponds to this type. Internally, the class encapsulates an instance of java.lang.Character, thereby making all operations available to that type in Java also available to GIPSY.
• Operators supported: concatenation, logic, equality
Void.
• Lexeme: void
• Visibility to programmers: explicitly visible only in the hybrid GIPSY programs when
declaring the procedure prototypes in the preamble of a GIPSY program’s source code.
Can only be a “return type”.
• The valid range of the type is a Boolean constant true as seen as a return value from
an intensional language.
• Internal implementation: the GIPSYVoid class that extends the GIPSYBoolean class
with the truth value always set to true.
• Operators supported: equality
Dimension.
• Lexeme: dimension
• Visibility to programmers: explicitly visible in intensional-only and hybrid GIPSY
programs when declaring the dimensions in a Lucid dialect program or listing the
dimension-type variables, function parameters or return types in the preamble of the
hybrid GIPSY programs. The dimension point-wise or set-wise ⟨dimension : tag⟩ mappings also appear in the intensional expressions setting up the context of evaluation
and are, as such, a part of the GIPSYContext type.
• The valid range of the type was traditionally the natural numbers [508]. In Lucx
and its context calculus [365, 473, 513] the dimension’s tag values internally can be
represented as integers or strings; thus, the dimension type implementation includes
a general opaque GIPSYType to contain the actual type of tags that this dimension
represents. This design keeps the door open for addition of other tag value types to
dimensions (e.g., floating-point or objects, but these dimension tag types have their set
of issues and are not discussed here).
• Internal implementation: the Dimension class.
• Operators supported: equality, switch, query
Context and Context Set.
• Lexeme: in the general case a set of ⟨dimension : tag⟩ mappings in the syntactical form
of {[d:v,...], ...}. For more details, see [365, 473, 513].
• Visibility to programmers: explicitly visible in intensional-only GIPSY programs in the
intensional expressions setting up the context of evaluation.
• The valid range of the type is determined by the constraints set forth by the underlying
tag set types specified in [473].
• Internal implementation: the GIPSYContext class (depends on Dimension and TagSet
types). According to the definitions of simple context and context set, we apply the
Composite design pattern [121, 128, 224] to organize the context class’ implementation.
The context calculus operators have been specified via the GIPSYContext’s public interface. Due to the recursive nature of the composition, the definition of these operators
is also recursive in the case of context sets.
• Operators supported: simple context and context set operators (@, #, and others),
equality
Boolean.
• Lexeme: bool
• Visibility to programmers: explicitly visible only in the hybrid GIPSY programs.
• The valid range of the type is determined by the constants true and false.
• Internal implementation: the GIPSYBoolean class corresponds to this type. Internally,
the class encapsulates an instance of java.lang.Boolean, thereby making all operations available to that type in Java also available to GIPSY.
• Operators supported: logical, equality
Array.
• Lexeme: []
• Visibility to programmers: explicitly visible in hybrid and pure intensional GIPSY
programs with array support via the [] notation.
• The valid range of the type is determined by the types of the elements of the array and
array’s length for the array’s index. Array is a collection of elements of the same type
and has a length.
• Internal implementation: the GIPSYArray class corresponds to this type. Internally, the class extends GIPSYObject; as stated in [264], objects and arrays are treated similarly, with the exception that arrays are a collection of elements of the same type vs. objects being a “collection” of elements of potentially different types.
• Operators supported: equality, dot, subscript, set
Object.
• Lexeme: class
• Visibility to programmers: explicitly visible in the hybrid GIPSY programs that support classes and class-like structures and types (e.g., struct in C).
• The valid range of the type is determined by the public operations defined on the
objects in their particular language.
• Internal implementation: the GIPSYObject class corresponds to this type. Internally,
the class encapsulates an instance of java.lang.Object to hold the value of the class.
The value does not need to be a Java object; it can be, for example, a serialized binary version of a C++ class or its source code (to be lazily compiled on demand, potentially on another host and platform).
• Operators supported: equality, dot
Identifier.
• Lexeme: a non-reserved word user-defined identifier
• Visibility to programmers: explicitly visible in all GIPSY programs.
• The valid range of the type is the same as specified by a legal subset of lexically valid
character strings in GIPSY program identifiers. These are currently UNICODE enabled
strings that begin with a letter.
• Internal implementation: the GIPSYIdentifier class corresponds to this type. Internally, the class encapsulates the identifier lexeme as java.lang.String.
• Operators supported: assignment, equality
Function.
• Lexeme: a non-reserved word user-defined identifier that syntactically corresponds to
the function prototype declaration or definition, plus the keyword immutable that refers
to a function definition that is free of side-effects.
• Visibility to programmers: explicitly visible in all GIPSY programs.
• The valid range of the type is the same as specified by a legal subset of lexically and
syntactically valid function definitions. Functions have types (ranging over “functional” (as in functional programming) or “procedural” (sequential threads written in imperative languages)) and states (ranging over “immutable” and “volatile”). The states are
primarily for the “procedural” functions as the intensional functions are automatically
“immutable”.
• Internal implementation: the GIPSYFunction class corresponds to this type. Internally, the class encapsulates the state and the type mentioned earlier along with the
FunctionItem (transformed from [527]) class instance that represents the function entry in the dictionary and the AST and the dictionary of the compiled GIPSYProgram.
That, in turn, encapsulates function parameters, return types, and the function identifier, thereby making available all the operations on those components.
• Operators supported: equality, evaluation
Operator.
• Lexeme: multiple; primarily the classical arithmetic, logic, bitwise, set, and intensional
depending on the types applied
• Visibility to programmers: explicitly visible in all GIPSY programs.
• The valid range of the type is the same as specified by a legal subset of lexically and
syntactically valid operator kinds definitions.
• Internal implementation: the GIPSYOperator class corresponds to this type. Internally,
the class extends GIPSYFunction. Operators internally are said to be immutable and
functional by default.
• Operators supported: equality, evaluation
Embedded Payload.
• Lexeme: embed() in JLucid and Objective Lucid [264] and a specific syntactically
declared function prototype with a URI, function identifier, and arguments in the
preamble of a hybrid GIPSY program [264], see Listing 6.1 for the example.
• Visibility to programmers: explicitly visible in JLucid and Objective Lucid (“standalone”) and general hybrid GIPSY programs.
• The valid range of the type is the same as specified by a legal subset of lexically and
syntactically valid URI and argument identifier definitions [41, 168, 448].
• Internal implementation: the GIPSYEmbed class corresponds to this type. Internally,
the class encapsulates java.lang.Object to hold any value of the embedded code and
data and its URI using java.net.URI.
• Operators supported: equality, evaluation
B.5 Describing Some GIPSY Types’ Properties
To demonstrate the most pertinent properties of the GIPSY types and to put them in perspective, so that readers get a better and more complete understanding of the spectrum of their behavior, we cover them in light of types of types and compare them to existential, union, intersection, and linear types [301].
B.5.1 Types of Types
Types of types are generally referred to as kinds. Kinds categorize types of a similar nature. While some type systems provide kinds as first-class entities available to programmers, in GIPSY we do not expose this functionality in our type system at this point. However, at the implementation level there are provisions to do so that we may later decide to expose for the use of programmers. Internally, we define several broad kinds of types, presented in the sections that follow [301].
B.5.1.1 Numeric Kind
The primitive types under this category are numerical values, which are represented by
GIPSYInteger, GIPSYFloat, and GIPSYDouble. They provide implementation of the common
arithmetic operators, such as addition, multiplication and so on, as well as logical comparison
operators of ordering and equality. Thus, for a numerical type T , the following common
operators are provided. The resulting type of any arithmetic operator is the largest of the two operand types in terms of length (e.g., the range of double, of length say k, covers the range of int, of length say m; if both appear as arguments to the operator, then the resulting value's type is the one that preserves information without loss, i.e., the larger in length, double). The result of the logical comparison operators is always Boolean B regardless of the length of the left-hand-side and right-hand-side numerical types [301].
1. Tmax : T1 ≥ T2 → T1 | T1 < T2 → T2
2. Tmultiply : Tk × Tm → Tmax(k,m)
3. Tdivide : Tk /Tm → Tmax(k,m)
4. Tadd : Tk + Tm → Tmax(k,m)
5. Tsubtract : Tk − Tm → Tmax(k,m)
6. Tmod : Tk %Tm → Tmax(k,m)
7. Tpow : Tk ˆ Tm → Tmax(k,m)
8. T> : T > T → B
9. T< : T < T → B
10. T≥ : T ≥ T → B
11. T≤ : T ≤ T → B
12. T= : T = T → B
Figure 76: Example of provider interfaces and comparators [301]
A generalized implementation of the arithmetic operators is done by realization of the interface called IArithmeticOperatorsProvider and its concrete implementation developed in the general delegate class GenericArithmeticOperatorsDelegate. This design and implementation not only allow further exposure of kinds as first-class values later on, after several iterations of refinement, but also will allow operator and type overloading, or replacement of the type handling implementation altogether, if some researchers wish to do so. The implementation of the engine, GEE, is thus changed to only refer to the interface type implementation when dealing with these operators. Equivalently, for the logic comparison operators we have the ILogicComparisonOperatorsProvider interface and the GenericLogicComparisonOperatorsDelegate class. The latter relies on the comparator implemented for the numerical kind, such as NumericComparator. Using comparators (that is, classes that implement the standard Java Comparator interface) allows Java to use and to optimize its built-in sorting and searching algorithms for collections of user-defined types. In our case, GenericLogicComparisonOperatorsDelegate is the implementation of the delegate class that also relies on it. The example for the numeric types for the described design is in Figure 76 [301].
It is important to mention that grouping integers and floating-point numbers under the numeric kind does not violate the IEEE 754 standard [181], as these kinds, implementation-wise, wrap the corresponding Java types (which are also grouped under a numeric kind) and their semantics, including Java's implementation of IEEE 754, in accordance with the Java Language Specification [141, 301].
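The following sketch illustrates the provider-interface/delegate idea and the “largest operand type wins” promotion rule discussed above, using plain Java. The interface and class names mirror the spirit of IArithmeticOperatorsProvider and GenericArithmeticOperatorsDelegate but are simplified stand-ins written for this example, not the GIPSY classes.

// Simplified sketch of the provider/delegate idea for arithmetic operators:
// an interface describes the operations, a generic delegate implements them
// over java.lang.Number with the "widest operand wins" promotion rule.
// Illustrative only; not the actual GIPSY classes.
public class ArithmeticDelegateDemo {

    interface ArithmeticOperatorsProvider {
        Number add(Number lhs, Number rhs);
        Number multiply(Number lhs, Number rhs);
    }

    static class GenericArithmeticDelegate implements ArithmeticOperatorsProvider {

        // Promote to the "largest" of the two operand types: if either side is
        // floating point, the result is a Double, otherwise it stays a Long.
        private static boolean isFloating(Number n) {
            return n instanceof Double || n instanceof Float;
        }

        @Override
        public Number add(Number lhs, Number rhs) {
            if (isFloating(lhs) || isFloating(rhs)) {
                return lhs.doubleValue() + rhs.doubleValue();
            }
            return lhs.longValue() + rhs.longValue();
        }

        @Override
        public Number multiply(Number lhs, Number rhs) {
            if (isFloating(lhs) || isFloating(rhs)) {
                return lhs.doubleValue() * rhs.doubleValue();
            }
            return lhs.longValue() * rhs.longValue();
        }
    }

    public static void main(String[] args) {
        ArithmeticOperatorsProvider ops = new GenericArithmeticDelegate();
        System.out.println(ops.add(2L, 3L));        // 5    (Long + Long -> Long)
        System.out.println(ops.add(2L, 0.5));       // 2.5  (Long + Double -> Double)
        System.out.println(ops.multiply(4L, 2.0));  // 8.0  (promotion to Double)
    }
}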
B.5.1.2 Logic Kind
Similarly to numeric types, the primitive type GIPSYBoolean fits the category of the types
that can be used in Boolean expressions. The operations the type provides expect the arguments to be of the same type—Boolean. The following set of operators on the logic type B is provided in the GIPSY type system [301]:
1. Band : B ∧ B → B
2. Bor : B ∨ B → B
3. Bnot : ¬B → B
4. Bxor : B ⊕ B → B
5. Bnand : ¬(B ∧ B) → B
6. Bnor : ¬(B ∨ B) → B
Note that the logical XOR operator (denoted as ⊕) is distinct from the corresponding bitwise XOR operator in Section B.5.1.3, in a way similar to how the logical vs. bitwise AND and OR are respectively distinct. Again, similarly to the generalized implementation of arithmetic operators, logic operator providers implement the ILogicOperatorsProvider interface, with the most general implementation of it in GenericLogicOperatorsDelegate [301].
Figure 77: Composite types [301]
B.5.1.3 Bitwise Kind
The bitwise kind covers all the types that can support bitwise operations on the entire bit length of a particular type T. Types in this category include the numerical and logic kinds described earlier in Section B.5.1.1 and Section B.5.1.2. The parameters on both sides of the operators and the resulting type are always the same. There are no implicit compatible type casts performed, unlike for the numeric kind [301].
1. Tbit−and : T &T → T
2. Tbit−or : T |T → T
3. Tbit−not : !T → T
4. Tbit−xor : T ˆ T → T
5. Tbit−nand : !(T &T ) → T
6. Tbit−nor : !(T |T ) → T
The implementation of this kind's operators is done through the IBitwiseOperatorsProvider interface, which is in turn generically implemented in the GenericBitwiseOperatorsDelegate class [301].
B.5.1.4 Composite Kind
As the name suggests, the composite kind types consist of compositions of other types,
possibly basic types. The examples of this kind are arrays, structs, and abstract data types
and their realization such as objects and collections. In the studied type system these are
GIPSYObject, GIPSYArray, and GIPSYEmbed. This kind is characterized by constructors and the dot operator, used to access members, invoke member methods, and define equality.
The design of these types, just like the entire type system, adheres to the Composite design
pattern [121, 128, 224]. The most prominent examples of this kind are in Figure 77, including
GIPSYContext which is composed of Dimensions and indirectly of the TagSets [301].
B.5.1.5 Intensional Kind
The intensional kind of types primarily has to do with encapsulating dimensionality, context information, and their operators. These are represented by the GIPSYContext, Dimension, and TagSet (not detailed here; please refer to [365, 473]) types. The common operators on these types include the context switching and querying operators @ and # as well as the context calculus operators. Additional operators can be included depending on the intensional dialect used, but the mentioned operators are said to be the baseline operators that any intensional language can be translated to use. Implementation-wise, there is an IContextSetOperatorsProvider interface and its general implementation in GenericContextSetOperatorsDelegate. The context calculus operators on simple contexts include standard set operators, such as \union, \difference, \intersection, \in, and Lucx-specific ones, such as \isSubContext, \projection, \hiding, and \override [301, 365, 473, 513].
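To give a feel for the simple-context operators named above (union, intersection, isSubContext, etc.), the sketch below models a simple context as a map from dimension names to tags and implements a few of the operators over it. This is a toy model written for this example, not the Lucx/GIPSY context calculus implementation.

import java.util.HashMap;
import java.util.Map;

// Toy model of a simple context as a map of <dimension : tag> pairs,
// with a few set-like operators in the spirit of the context calculus.
// Illustrative only; not the GIPSYContext implementation.
public class SimpleContextDemo {

    // Union: the right-hand context overrides dimensions it shares with the left.
    static Map<String, Integer> union(Map<String, Integer> a, Map<String, Integer> b) {
        Map<String, Integer> result = new HashMap<>(a);
        result.putAll(b);
        return result;
    }

    // Intersection: keep only <dimension : tag> pairs present in both contexts.
    static Map<String, Integer> intersection(Map<String, Integer> a, Map<String, Integer> b) {
        Map<String, Integer> result = new HashMap<>();
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            if (e.getValue().equals(b.get(e.getKey()))) {
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }

    // isSubContext: every pair of the candidate also occurs in the enclosing context.
    static boolean isSubContext(Map<String, Integer> sub, Map<String, Integer> sup) {
        return sup.entrySet().containsAll(sub.entrySet());
    }

    public static void main(String[] args) {
        Map<String, Integer> c1 = new HashMap<>();
        c1.put("d", 1);
        c1.put("e", 2);

        Map<String, Integer> c2 = new HashMap<>();
        c2.put("e", 2);
        c2.put("f", 3);

        System.out.println(union(c1, c2));        // e.g. {d=1, e=2, f=3} (order may vary)
        System.out.println(intersection(c1, c2)); // {e=2}
        System.out.println(isSubContext(c2, union(c1, c2))); // true
    }
}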
B.5.1.6 Function Kind
The function types represent either “functional” functions, imperative procedures, binary
and unary operators that, themselves, can be passed as parameters or returned as results. In
our type system these are represented by GIPSYFunction, GIPSYOperator, and GIPSYEmbed.
The common operators on these include equality and evaluation [301].
B.5.2 Existential Types
All of the presented GIPSY types are existential types because they represent and encapsulate
modules and abstract data types and separate their implementation from the public interface
specified by the abstract GIPSYType. These are defined as shown in Figure 78. They are
implemented concretely as shown in Figure 79 [301].
T = ∃GIPSYType{Object a; Object getEnclosedTypeObject(); Object getValue(); }
Figure 78: Abstract GIPSY existential types [301].
boolT {Boolean a; Object getEnclosedTypeObject(); Boolean getValue(); }
intT {Long a; Object getEnclosedTypeObject(); Long getValue(); }
doubleT {Double a; Object getEnclosedTypeObject(); Double getValue(); }
functionT {FunctionItem a; Object getEnclosedTypeObject(); FunctionItem getValue(); }
...
Figure 79: Concrete GIPSY existential types [301].
All of these are subtypes of the more abstract and general existential type T. Assuming a value t ∈ T, then t.getEnclosedTypeObject() and t.getValue() are well typed regardless of what the actual GIPSYType may be [301].
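The existential-type reading above can be mirrored in Java by programming against an abstract supertype: client code relies only on getEnclosedTypeObject() and getValue(), whatever the concrete subtype. The sketch below is a minimal stand-in for the abstract type of Figures 78 and 79, with invented, simplified classes rather than the real GIPSY ones.

// Minimal sketch of the existential-type view: clients only see the
// abstract interface; concrete wrappers hide what they enclose.
// Simplified stand-ins for the classes in Figures 78 and 79.
public class ExistentialTypesDemo {

    interface AbstractGipsyType {
        Object getEnclosedTypeObject();   // the enclosed implementation type
        Object getValue();                // the wrapped value itself
    }

    static class IntType implements AbstractGipsyType {
        private final Long value;
        IntType(long value) { this.value = value; }
        public Object getEnclosedTypeObject() { return Long.class; }
        public Object getValue() { return value; }
    }

    static class BoolType implements AbstractGipsyType {
        private final Boolean value;
        BoolType(boolean value) { this.value = value; }
        public Object getEnclosedTypeObject() { return Boolean.class; }
        public Object getValue() { return value; }
    }

    // Well typed for any t regardless of the concrete subtype behind it.
    static void describe(AbstractGipsyType t) {
        System.out.println(t.getEnclosedTypeObject() + " -> " + t.getValue());
    }

    public static void main(String[] args) {
        describe(new IntType(42));      // class java.lang.Long -> 42
        describe(new BoolType(true));   // class java.lang.Boolean -> true
    }
}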
B.5.3 Union Types
A union of two types produces another type whose valid range of values is valid in either of the two; however, the operators defined on the union type must be those that are valid for both of the types in order to remain type-safe. A classical example of that in C or C++: the range of signed char is −128 . . . 127 and the range of unsigned char is 0 . . . 255, thus:
signed char ∪ unsigned char = −128 . . . 255
In C and C++ there is a union type that roughly corresponds to this notion, but it does not enforce that the operations possible on the union type must be possible in both the left and right types of the uniting operator. In a class hierarchy, such as GIPSY's, the union type of a type and its parent is the parent class; thus, in our specific type system the following holds [301]:
∀T ∈ G : T ∪ GIPSYType = GIPSYType
GIPSYArray ∪ GIPSYObject = GIPSYObject
GIPSYVoid ∪ GIPSYBoolean = GIPSYBoolean
GIPSYOperator ∪ GIPSYFunction = GIPSYFunction
where T is any concrete GIPSY type and G is a collection of types in the GIPSY type system
we are describing. Equivalently, the union of the two sibling types is their common parent
class in the inheritance hierarchy. Interestingly enough, while we do not explicitly expose
kinds of types, we still are able to have union type relationships defined based on the kind of
operators they provide as siblings under a common interface, as shown in Figure 80, where T is
any concrete GIPSY type and A is a collection of types that provide arithmetic operators, L—
logic operators providers, B—bitwise operators providers, C—context operators providers,
D—composite operator providers, and F —function operator providers. Thus, resulting in
types shown in Figure 81 [301]. Another particularity of the GIPSY type system is the
union of the string and integer types under dimension:
GIPSYInteger ∪ GIPSYString = Dimension
∀T ∈ A : T ∪ IArithmeticOperatorProvider = IArithmeticOperatorProvider
∀T ∈ L : T ∪ ILogicOperatorProvider = ILogicOperatorProvider
∀T ∈ B : T ∪ IBitwiseOperatorProvider = IBitwiseOperatorProvider
∀T ∈ C : T ∪ IContextOperatorProvider = IContextOperatorProvider
∀T ∈ D : T ∪ ICompositeOperatorProvider = ICompositeOperatorProvider
∀T ∈ F : T ∪ IFunctionOperatorProvider = IFunctionOperatorProvider
Figure 80: Concrete GIPSY union types (providers) [301]
{GIPSYInteger, GIPSYFloat, GIPSYDouble} ∈ A
{GIPSYBoolean} ∈ L
{GIPSYInteger, GIPSYFloat, GIPSYDouble, GIPSYBoolean} ∈ B
{GIPSYContext, Dimension} ∈ C
{GIPSYObject, GIPSYArray, GIPSYEmbed, GIPSYString} ∈ D
{GIPSYFunction, GIPSYOperator, GIPSYEmbed} ∈ F
Figure 81: Concrete GIPSY union types [301]
and this is because we allow dimension tag values to be either integers or strings. While this is not a very common union in the majority of type systems, the two do share a common set of tag set operators defined in [365, 473] for ordered finite tag sets (e.g., next(), etc.) [301].
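In a class hierarchy, the union of two types is their closest common ancestor, as the identities above state. The sketch below computes that ancestor with plain Java reflection; it is a generic illustration of the rule, not GIPSY code (the GIPSY classes named in the identities are not available here, so standard JDK classes are used in the demo).

// Computes the union type of two classes as their nearest common
// superclass, illustrating the "union of siblings is the common parent"
// rule from the text. Generic illustration, not GIPSY code.
public class UnionTypeDemo {

    static Class<?> unionType(Class<?> a, Class<?> b) {
        // Walk up from a until we find a class that b is assignable to.
        Class<?> current = a;
        while (current != null && !current.isAssignableFrom(b)) {
            current = current.getSuperclass();
        }
        return current == null ? Object.class : current;
    }

    public static void main(String[] args) {
        // A type united with its parent yields the parent...
        System.out.println(unionType(java.util.ArrayList.class, java.util.AbstractList.class));
        // ...and two siblings unite into their common ancestor.
        System.out.println(unionType(Integer.class, Double.class)); // class java.lang.Number
        System.out.println(unionType(Integer.class, String.class)); // class java.lang.Object
    }
}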
B.5.4 Intersection Types
An intersection type of two given types is a range where their sets of valid values overlap. Such types are safe to pass to methods and functions that expect either of the types, as intersection types are more restrictive and compatible with both original types. A classical example of an intersection type, if it were implemented in C or C++, would be:
signed char ∩ unsigned char = 0 . . . 127
Intersection types are also useful in describing overloaded functions. Sometimes they are called refinement types. In a class hierarchy, the intersection between the parent and child classes is the most derived type, and the intersection of sibling classes is empty. While the functionality offered by intersection types is promising, it is not currently explicitly or implicitly considered in the GIPSY type system, but it is planned for future work [301].
B.5.5 Linear Types
Linear (or “uniqueness”) types are based on linear logic [134]. The main idea of these types
is that values assigned to them have one and only one reference to them throughout. These
types are useful to describe immutable values like strings or hybrid intensional-imperative
objects (see [526] for details). These are useful because most operations on such an object
“destroy” it and create a similar object with the new values, and, therefore, can be optimized
in the implementation for in-place mutation. Implicit examples of such types in the GIPSY type system are GIPSYString, which internally relies on Java's StringBuffer that does something very similar, the immutable GIPSYObject in JOOIP [526], and the immutable GIPSYFunction. Since we either copy or retain the values in the warehouse (DST, see Section 6.2.2), we do not violate referential transparency or create side effects and, at the same time, we are more efficient as there is no need to worry about synchronization overhead [301].
B.6 Summary
Through a series of discussions, specification, design, and implementation details we presented
a type system and its theory used by the GIPSY project for static and dynamic type checking
and evaluation of intensional and hybrid intensional-imperative HOIL expressions potentially
written in multiple languages. We highlighted the particularities of the system, which is not tied to a particular specific language, as most type systems traditionally are, but to an entire set of languages and hybrid paradigms that are linked through the type system [315].
This is a necessary contribution to GIPSY-like systems to have a homogeneous environment
to statically and dynamically type-check and evaluate intensional and hybrid programs that
are based on a sound theory [301, 315].
The type system described in this chapter has been implemented to a large extent in the
GIPSY. However, our implementation still needs thorough testing using complex program
examples testing the limits of the type system. Additional endeavours, noted in the previous
sections, include [301, 315]:
• Exposing kinds as first-class entities, allowing programs to have more explicit manipulation of types.
• Allowing custom user-defined types and extension of the existing operators and operator
overloading.
• Exposing more operators for composite types to Lucid code segments.
• Adding intersection types for more flexibility in the future development of the type
system, allowing more type casting possibilities at the programming level.
Appendix C
MARFL
This edited chapter is primarily based on the corresponding article [272] presented at the
first SECASA workshop at COMPSAC 2008. We focus on defining context expressions in
terms of initial syntax and semantics for an intensional MARF (Modular Audio Recognition
Framework, see Section 5.2) Language, MARFL. Its purpose is to allow scripting of MARF-based applications as context-aware applications, where the notion of context represents the coarse-grained and fine-grained configuration details of a given MARF instance, and a set of overloaded context operators borrowed from the Generic Intensional Programming Language (GIPL), @ and #, helps with the task. This is preliminary research on MARFL that has considerable practical implications for the usability of MARF's resources and beyond. In this chapter we
focus exclusively on the context specification for multimedia pattern recognition tasks and
available MARF resources for its applications [272].
C.1 Overview
C.1.1 Motivation
The Modular Audio Recognition Framework (MARF, introduced in Section 5.2) has good potential for multimedia pattern recognition research and for the comparison of various pattern-recognition algorithms. Due to its generality, it can also serve as a re-usable library in applications that deal with audio, natural language text, or image processing. The existing example applications include speaker, gender, accent, emotion, writer, and language detection and identification; authentication; and others. While the designers of MARF made every effort to keep MARF and its example applications simple and accessible to a wider audience of users, they still require relatively skilled Java developers and a relatively good understanding of MARF's design architecture. Often, scientists who just need to run their experiments and simulations do not fall into this category. Thus, to make this possible, we state a requirement to be able to conveniently “script” applications like SpeakerIdentApp [317], FileTypeIdentApp [290], WriterIdentApp [273, 318], MARFCATApp [285], etc., by specifying parts or all of the configuration context parameters. The syntax of the scripting language should be simpler and more natural than that of Java [272].
C.1.2 Approach
We introduce syntactical and semantic constructs of context definitions for the configuration into this new MARF language, which inherits some properties from the intensional programming languages, e.g., GIPL and Lucx (see Chapter 4), as well as their context operators, such as @ and #, to switch and query the current context. Intensionality and a Lucid-compatible or near-similar syntax and semantics make it more accessible to the scientific community, and it can be easily implemented in intensional evaluation platforms like GIPSY (see Chapter 6). Through examples of the syntactical expressions in MARFL, we build the initial syntax, semantics, and a brief type system to express the MARF configuration-as-context in the new language. We show how we deal with the resulting context hierarchies occurring in our solution from the level-of-detail (LOD) point of view. We further re-write one of the MARF applications to show how it would look in MARFL. We discuss the more general implications of our design. We propose to either interpret the MARFL scripts directly or “compile” them and generate equivalent Java code (or even code in another language) for a MARF-based application. This can integrate well with plug-in frameworks for application development and verification like the Java Plug-in Framework (JPF), Eclipse [99], and existing cyberforensic toolkits (see Section 2.1) [272].
C.2 Theoretical Framework
This is a theoretical framework with practical considerations for the existing system. The
said framework focuses very specifically on the initial syntactical and semantic specification
of the context in the MARFL language, the issues that come up with it, and how to address
them, followed by implications of this work for future work in this and other areas, such
as common media pattern recognition applications and cyberforensics. We leave out other
aspects (such as storage, statistics gathering, input/output in general) of MARFL unless
they are absolutely necessary for illustrative purposes [272].
Dealing with MARF-specific contexts needs to be put into a perspective of the inner
workings of MARF and its applications (cf., Section 5.2). This gives the needed requirements
specification, syntax, and semantics of the contextual expressions. The rich parameterization
of the MARF contexts necessitates a context hierarchy, i.e., contexts, sub-contexts, and subsub-contexts and so-on for the different levels of detail in the configuration settings. This
contextuality translates into a specific configuration instance with the needed settings. If
some dimensions are omitted in the scripted MARFL specification, they are filled in with
the values that would come from the specific module instance defaults backing the system.
The modules represent higher-order dimensions and can be added, updated, and removed as
the system evolves to support change [380]. We do not support user-defined variables, identifiers, or functions in this case, so the corresponding rules have been excluded from the syntax and semantics. All our present dimension identifiers are preset, dictated by the available modules of MARF as a precompiled dictionary. As a result, they make up the reserved words of MARFL [272].
We borrow two prime operators of GIPL [361, 365], @ to switch the context and # to query
for the current context [361]. We treat them as functions, and we overload their definitions to
accept different types of arguments of their expressions and as a result return different types
of dimension values. We add the dot operator for the dimensions allowing us to navigate into
the depth of the higher-order dimension objects [272].
We build our theory by examples to capture the desired properties of MARFL. A comprehensive example of a context for processing a WAV file in the MARF’s pipeline can be
modeled as shown in Figure 82. In that example we illustrate a complex hierarchical context
expression where several nested dimensions are explicitly specified. In general, MARFL’s
context follows the definition of a simple context in Lucx [365, 513], where it is defined as
a collection of ⟨dimension : tag⟩ pairs. What we do differently from Lucx (Section 4.3.1,
page 92) is that a single pair may not necessarily be an atomic micro context in Lucx terminology, but may contain sub-contextual elements. The inner-most context is always simple
and atomic as in Lucx and typically has dimensions of primitive types, such as integer, IEEE
754 floating point value [181], or a string. The outer layers of context hierarchy are composite
objects. Thus, a [sample loader:WAV] denotes a dimension of type sample loader with
its higher-order tag value WAV. The WAV dimension value can further be decomposed into an atomic simple context if needed, which contains dimensions of primitive types [272].
In a way, the described theory of higher-order context is so far similar to equivalent definitions described by Swoboda, Wadge, et al. in [452, 453, 454, 455, 511], where a tree of contexts is defined for languages like iHTML [510] (with nested tags) and the depth is used for LOD, functional intensional databases annotated with XML, and so on. While very similar in nature, the authors there suggest that they traverse their context tree all the way to the “leaf” simple contexts before doing evaluation and do not do an actual evaluation at the higher-order contextual entities, which we allow here. We are also very detailed about the specification and its operators, including semantics, in this work [272].
[
  sample loader      : WAV [ channels: 2, bitrate: 16, encoding: PCM, f: 8000 ],
  preprocessing      : LOW-PASS-FFT-FILTER [ cutoff: 2024, windowsize: 1024 ],
  feature extraction : LPC [ poles: 20, windowsize: 1024 ],
  classification     : MINKOWSKI-DISTANCE [ r: 5 ]
]
Figure 82: Example of hierarchical context specification for an evaluation configuration of MARF [272]
The sample loader dimension can vary over the tags WAV, SINE, MP3, MIDI, OGG, TXT, JPG, TIFF, etc., each of which in turn can vary over channels, bitrate, and sampling frequency (these can even apply to text, natural or programming, and to images, depending on the context of interpretation). The channels dimension usually varies over 1 (mono) and 2 (stereo), but theoretically can have other valid tags. Similarly, the bitrate is commonly 8 or 16 data storage bits per frequency sample, and the frequency sampling rate f is typically 4, 8, 16, or 44 kHz, but is not restricted to those values. The latter three are finer-grained context examples, while WAV and the other loader tags are coarse-grained. A MARFL-written application should be able to alter any of these or switch between them, and we should be able to easily express all that [272].
We need to extend the semantics of @ and # from GIPL to deal with the hierarchy of contexts, i.e., to be able to switch sub-contexts and query them arbitrarily. This implies the argument types and the resulting type may not necessarily be the same. This is where we define the overloading of the context operators. Table 18 presents an incomplete collection of operator overloading rule examples. The meaning of the mappings in that table is as follows: a file
type F or a data vector type V can be evaluated at the higher-order dimension of type sample
loader LD producing a “loaded sample” (that maps to an internal data structure). Evaluating
a directory with sample files D produces a collection of samples S. S can be evaluated at
dimension types preprocessing PR and feature extraction FE , per MARF’s specification,
though the latter usually means S has already been preprocessed, which is why the return types are internally different in their content. The feature data vector V can be used for training or classification, resulting in either a new or updated training set TR or a result set RS, respectively. The subdivision of dimension types can go on further until the primitive tag values are reached, e.g., the RS contains a collection of pairs of an integer ID and a double outcome, and their index (not illustrated in the table). The unary operator # does not have a left argument, and the evaluation of its right argument is similar to that of @. The highest-order dimension we define is MARF itself, which, if queried upon, returns the higher-order collection of the main processing dimensions, and we can go all the way down the configuration context hierarchy. The dot operator provides a syntactical OO-like way of traversing the depth of the higher-level dimensions to the lowest ones. Though this may look like we make up OO dimensions, it is not presently so, as we do not provide any true direct member access (including method objects); moreover, behind a single dimension there may not necessarily be a single implementing class [272].
A few small examples:

• #MARF.sl would return WAV, and #MARF.WAV.channels would then return 2.

• WAV @ [channels:1] switches the WAV loader's configuration dimension channels to mono, essentially returning a loader with the new configuration setting.

• WAV @ [channels:1, bitrate:8, f:4000] would conceptually return (configure) a WAV sample loader object ready to load mono WAV samples, 8 bits per sample, with the sampling frequency of 4000 Hz.

• ‘‘sample1.wav’’ @ WAV @ [channels:1, bitrate:8, f:4000] would load the specified sample given the configured loader object, producing an interpreted (loaded and decoded) double-stream of amplitude values, an array, encoded in the sample object [272].
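To make this navigation more concrete, the following is a minimal Java sketch of the underlying idea (illustrative only: the Context class and its query and switchTo methods are assumptions for this sketch and are not part of MARF's actual API). It models a higher-order context as a nested dictionary, with query playing the role of # with dot-navigation and switchTo the role of @ with a single ⟨dimension : tag⟩ pair.

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: a higher-order MARFL-like context as a nested dictionary.
// Each dimension maps either to a primitive tag (Integer, String, ...) or to a nested Context.
final class Context {
    private final Map<String, Object> dimensions = new LinkedHashMap<>();

    // Adds a <dimension : tag> pair (the tag may itself be a nested Context).
    Context with(String dimension, Object tag) {
        dimensions.put(dimension, tag);
        return this;
    }

    // Plays the role of the # query with dot-navigation, e.g., query("sample loader", "channels").
    Object query(String... path) {
        Object current = this;
        for (String dimension : path) {
            current = ((Context) current).dimensions.get(dimension);
        }
        return current;
    }

    // Plays the role of @ with a single <dimension : tag> pair: returns a new
    // configuration with that dimension switched, leaving the original untouched.
    Context switchTo(String dimension, Object tag) {
        Context copy = new Context();
        copy.dimensions.putAll(this.dimensions);
        copy.dimensions.put(dimension, tag);
        return copy;
    }

    public static void main(String[] args) {
        Context wav  = new Context().with("channels", 2).with("bitrate", 16).with("f", 8000);
        Context marf = new Context().with("sample loader", wav);

        System.out.println(marf.query("sample loader", "channels")); // 2, like #MARF.WAV.channels
        Context mono = marf.switchTo("sample loader", wav.switchTo("channels", 1)); // like WAV @ [channels:1]
        System.out.println(mono.query("sample loader", "channels")); // 1
    }
}

The copy-on-switch behavior mirrors the intensional reading of @: switching a dimension yields a new configuration rather than mutating the original one.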
The preprocessing dimension’s higher-order valid tag values can range over tags like
RAW-PREPROCESSING, DUMMY, FFT-LOW-PASS-FILTER, ENDPOINTING, and others, each of which
may have common sub-dimensions and algorithm-specific sub-dimensions. For example, most
FFT-based filters have a cutoff frequency dimension but endpointing doesn’t, so trying to
refer to the sub-context dimension of cutoff in endpointing would return an error as it is
not a part of its sub-context. The feature extraction dimension is currently allowed to
vary along the values FFT, LPC, MINMAX, RANDOM-FEATURE-EXTRACTION, RAW and others. The
classification dimension is allowed to vary along the abstracted classification modules
similarly to the preprocessing and others (see MARF pipeline in Section 5.2, page 102).
Labeling the dimension identifiers as a part of the reserved keywords necessitates making them a part of the syntax, which is illustrated in part in Figure 83. If the finer
details of the higher-level dimensions are omitted, they assume the defaults, defined by the
semantics of the implemented modules. For example, the default for LPC’s poles dimension
is 20 and windowsize is 128. Here we do not describe how the defaults are picked and what
they mean for every single module; for such details please refer to the related work [268, 270]
and Section 5.2 [272].
Valid dimension tag values are defined by the MARF’s currently implemented modules
E     ::=   #E                              (1)
        |   [E : E, ..., E : E]             (2)
        |   {E, ..., E}                     (3)
        |   E where Q                       (4)
        |   E @ E                           (5)
        |   E.did                           (6)
        |   E cxtop E                       (7)

Q     ::=   sample loader SLid              (8)
        |   preprocessing PRid              (9)
        |   feature extraction FEid         (10)
        |   classification CLid             (11)
        |   did = E                         (12)
        |   Q Q                             (13)

SLid  ::=   WAV                             (14)
        |   MP3                             (15)
        |   SINE                            (16)
        |   MIDI                            (17)
        |   ...                             (18)
        |   TEXT                            (19)

PRid  ::=   FFT-LOW-PASS-FILTER             (20)
        |   FFT-HIGH-PASS-FILTER            (21)
        |   FFT-BAND-PASS-FILTER            (22)
        |   FFT-BAND-STOP-FILTER            (23)
        |   CFE-LOW-PASS-FILTER             (24)
        |   ...                             (25)
        |   DUMMY-PREPROCESSING             (26)
        |   RAW-PREPROCESSING               (27)

FEid  ::=   FFT                             (28)
        |   LPC                             (29)
        |   MINMAX                          (30)
        |   ...                             (31)
        |   RANDOM-FEATURE-EXTRACTION       (32)

CLid  ::=   CHEBYSHEV-DISTANCE              (33)
        |   EUCLIDEAN-DISTANCE              (34)
        |   MINKOWSKI-DISTANCE              (35)
        |   DIFF-DISTANCE                   (36)
        |   HAMMING-DISTANCE                (37)
        |   COSINE-SIMILARITY-MEASURE       (38)
        |   ...                             (39)
        |   RANDOM-CLASSIFICATION           (40)

cxtop ::=   train                           (41)
        |   classify                        (42)

Figure 83: MARFL context syntax [272]
and the list can grow. Consequently, we need to be able to automatically accommodate this growth where possible, making MARFL more adaptable and sustainable. All modules in MARF are plug-ins. When a new plug-in is registered, it can “deposit” the valid operational characteristics of itself into the “pool” (D0) of valid dimension identifiers and their tag ranges. The identifiers are largely predefined in this work; they symbolize reserved words from MARF's resource collection and are not user-defined variables at this time [272].
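A minimal Java sketch of this “deposit” step follows (purely illustrative: DimensionPool and registerPlugin are assumed names, not MARF's actual plug-in API); each newly registered module contributes its dimension identifier and the tag values it supports to the pool of valid dimensions.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch only: the "pool" D0 of valid dimension identifiers and their tag ranges.
final class DimensionPool {
    private final Map<String, List<String>> validTags = new LinkedHashMap<>();

    // A newly registered plug-in "deposits" its dimension identifier and the
    // tag values it can operate under (its operational characteristics).
    void registerPlugin(String dimensionId, List<String> tags) {
        validTags.computeIfAbsent(dimensionId, k -> new ArrayList<>()).addAll(tags);
    }

    boolean isValid(String dimensionId, String tag) {
        return validTags.getOrDefault(dimensionId, List.of()).contains(tag);
    }

    public static void main(String[] args) {
        DimensionPool pool = new DimensionPool();
        pool.registerPlugin("feature extraction", List.of("FFT", "LPC", "MINMAX"));
        // A later plug-in extends the same dimension without any change to the grammar.
        pool.registerPlugin("feature extraction", List.of("RANDOM-FEATURE-EXTRACTION"));

        System.out.println(pool.isValid("feature extraction", "LPC")); // true
        System.out.println(pool.isValid("classification", "LPC"));     // false
    }
}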
Syntactically and semantically, we need to separate the operators for training and classification for the last stage as we need to know how to evaluate a feature vector V at the
classification module CL. V @CL is ambiguous as we do not know whether we want to train
CL on V or identify V . As a result, we create the train and classify operators to resolve
the @ ambiguity. The dimension type system is thus becoming more apparent: we have
object-like higher-level dimension types as well as primitive numerical types [272].
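The same disambiguation can be sketched at the implementation level in Java (a hedged illustration only: the interface and type names below are assumptions, not MARF's API); instead of a single ambiguous apply-style entry point, the classification dimension exposes separate train and classify operations with distinct result types, mirroring the train and classify context operators.

import java.util.ArrayList;
import java.util.List;

// Sketch only: why V @ CL is ambiguous, and how separate train/classify operators resolve it.
record TrainingSet(List<double[]> vectors) {}      // TS: new or updated training set
record ResultSet(int id, double outcome) {}        // RS: an <ID, outcome> pair

interface ClassificationModule {
    TrainingSet train(double[] featureVector);     // like: V train CL    -> TS
    ResultSet classify(double[] featureVector);    // like: V classify CL -> RS
}

final class DummyDistanceClassifier implements ClassificationModule {
    private final List<double[]> stored = new ArrayList<>();

    public TrainingSet train(double[] v) {         // updates the training set
        stored.add(v);
        return new TrainingSet(stored);
    }

    public ResultSet classify(double[] v) {        // produces a result set
        // Stand-in outcome; a real module would compute, e.g., a Minkowski distance.
        return new ResultSet(stored.isEmpty() ? -1 : 0, 0.0);
    }
}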
Table 18: Common example types of context operators’ arguments and resulting types [272]

Left Type   Operator      Right Type     Resulting Type in D
F           @             LD             S
V           @             LD             S
D           @             LD             [S, ..., S]
S           @             PR             S
S           @             FE             V
V           train(@)      CL             TS
V           classify(@)   CL             RS
(none)      #             MARF           [LD : TAG, PR : TAG, FE : TAG, CL : TAG]
(none)      #             LD             [channels : TAG, bitrate : TAG, f : TAG]
(none)      #             channels       INTEGER
(none)      #             LD.channels    INTEGER
(none)      #             LD.f           FLOAT
MARF        .             LD             LD
MARF        .             PR             PR
MARF        .             FE             FE
MARF        .             CL             CL
LD          .             channels       INTEGER
CL          .             TS             TS
CL          .             RS             RS
RS          .             ID             INTEGER
RS          .             outcome        FLOAT
Semantically speaking, for our context definitions we draw a lot from Lucx and GIPL (Chapter 4), as shown in Figure 84. The definition environment, D, is internally implemented by the marf.Configuration and marf.MARF classes, which encapsulate all the higher- and lower-level configuration's contextual information P in their data members. The semantic rules other than (C.2.4) come from GIPL and Lucx and depict identifier definitions of constants (C.2.1), operators (C.2.2), and regular Lucid dimensions (C.2.3), then the operators themselves (C.2.5), dimension tag values (C.2.6), the classical operator @ (C.2.7), the language where clause that we use for declarative configuration customization (C.2.8), the current context query with # (C.2.9), a simple context (collection of ⟨dimension : tag⟩ pairs) construction rule (C.2.10) and its navigation with @ (C.2.11), and re-definition of any dimension type in the syntactic where clause (C.2.12). The rule (C.2.4) is introduced by MARFL to allow object-like dot-notation for dimension identifiers. This is a preliminary research semantics specification of MARFL. Notice also that we entirely omit the issues of storage management of the intermediate training sets and of the returning of the results other than the sample loading aspect (and not even recording). MARF internally maintains a local database or intermediate cache of the processed utterances stored as training sets and returns the results (usually some sort of measure and rank) as a result set. The training sets internally can be plain serializable Java objects, CSV text data, XML data, or relational SQL data to varying degrees, depending on the options. None of this is covered in this work at present; we defer it to future work [272].
E_cid :            D(id) = (const, c)
                   ------------------
                     D, P ⊢ id : c                                              (C.2.1)

E_opid :           D(id) = (op, f)
                   ----------------
                    D, P ⊢ id : id                                              (C.2.2)

E_did :            D(id) = (dim)
                   --------------
                   D, P ⊢ id : id                                               (C.2.3)

E_E.did :          D(E.id) = (dim)
                   ----------------
                  D, P ⊢ E.id : id.id                                           (C.2.4)

E_op :    D, P ⊢ E : id    D(id) = (op, f)    D, P ⊢ E_i : v_i
          ----------------------------------------------------
              D, P ⊢ E(E_1, ..., E_n) : f(v_1, ..., v_n)                        (C.2.5)

E_tag :        D, P ⊢ E : id    D(id) = (dim)
               -------------------------------
                     D, P ⊢ #E : P(id)                                          (C.2.6)

E_at :    D, P ⊢ E' : id   D(id) = (dim)   D, P ⊢ E'' : v''   D, P†[id ↦ v''] ⊢ E : v
          ----------------------------------------------------------------------------
                                 D, P ⊢ E @E' E'' : v                           (C.2.7)

E_w :          D, P ⊢ Q : D', P'    D', P' ⊢ E : v
               ------------------------------------
                     D, P ⊢ E where Q : v                                       (C.2.8)

E_#(cxt) :           D, P ⊢ # : P                                               (C.2.9)

E_construction(cxt) :
          D, P ⊢ E_dj : id_j    D(id_j) = (dim)    D, P ⊢ E_ij : v_j
          P' = P_0†[id_1 ↦ v_1]† ... †[id_n ↦ v_n]
          -----------------------------------------------------------
          D, P ⊢ [E_d1 : E_i1, E_d2 : E_i2, ..., E_dn : E_in] : P'              (C.2.10)

E_at(cxt) :    D, P ⊢ E' : P'    D, P†P' ⊢ E : v
               ----------------------------------
                     D, P ⊢ E @ E' : v                                          (C.2.11)

Q_dim :   D, P ⊢ dimension id : D†[id ↦ (dim)], P†[id ↦ 0]                      (C.2.12)

Figure 84: MARFL context semantic rules [272]
C.3 Applications

MARFL and context-oriented intensional programming can be generalized and extended to other similar pattern-recognition and multimedia processing systems, including mobile applications. A possible PoC example of SpeakerIdentApp [317] rewritten in MARFL is in Figure 85. It is much more concise and clear than its raw Java equivalent, which is roughly a hundred times longer (15 lines in MARFL vs. about 1700 in Java, modulo all the omitted functionality and comments). Other possible applications include the aforementioned cyberforensic analysis and others where nested context definitions are applicable [272].
[ train /var/training-samples && classify /var/testing-samples ] @ MARF [sl, pr, fe, cl]
where
    // Our dimension types with the tag values we are interested to evaluate over
    sample loader      sl = WAV;
    preprocessing      pr = { FFT-LOW-PASS, FFT-BAND-STOP };
    feature extraction fe = { FFT, LPC, MINMAX, RANDOM };
    classification     cl = { CHEBYSHEV-DISTANCE, MINKOWSKI-DISTANCE, NEURAL-NETWORK };

    // Custom configuration for LPC and Minkowski distance
    // other than the defaults
    where
        LPC = { poles:40, windowsize:256 };
        MINKOWSKI-DISTANCE = { r:5 };
    end;
end

Figure 85: Initial SpeakerIdentApp re-written in MARFL [272]
C.4 Summary

We presented an initial context definition for the syntax and semantics of MARFL, the MARF Language, which is built around practical concerns of scripting context-aware MARF-based applications. The context in MARF is a hierarchical configuration that we manipulate using two intensional operators. We override the definitions of @ and # from GIPL and introduce the object membership operator (dot-notation) to accept hierarchies of contexts. This makes MARF applications considerably more understandable, allowing non-programmers, e.g., scientists, to benefit from the MARF-provided resources, and makes the applications significantly more re-usable and maintainable from the software engineering point of view [272].

In the presented application, all contexts are always finite. While the theoretical framework could support potentially infinite contexts (e.g., very long continuous audio streams coming from a recorder or microphone that is normally never turned off), it is generally not very practical [272].
Appendix D
Self-Forensics
This chapter introduces the concept of self-forensics in addition to the standard autonomic
self-* properties (also known as self-CHOP : self-configuration, self-healing, self-optimization,
and self-protection in autonomic computing [327, 371]) of self-managed systems, specified in
the Forensic Lucid language. We argue that self-forensics, with the digital forensics aspect
taken out of the cybercrime domain, is applicable to “self-dissection” of autonomous software
and hardware systems for automated incident and anomaly analysis and event reconstruction
for the benefit of the engineering teams in a variety of incident scenarios [277, 308].
We focus on the core concepts and fundamentals of self-forensics for ICT using Forensic Lucid to specify the evidence. Self-forensics can be roughly understood as the application of autonomic or semi-autonomic self-diagnostics combined with reasoning. This includes forensic logging by the software/hardware sensors, evidence encoding, and case modeling. The process can be automated or interactive, “after-the-fact”, with the investigator present. Thus, we take the Forensic Lucid language out of the cybercrime context and apply it to any autonomic software or hardware system (e.g., vehicles or distributed software systems) as an example of self-forensic case specification [277, 308, 321]. The earlier
presented properties in Chapter 7 make Forensic Lucid an interesting and appropriate
choice for self-forensic computing in self-managed systems to complement the existing self-*
properties [277, 308, 321].
The proposition in this chapter is in large part a synthesis of the author's published and unpublished works and proposals of various applications [269, 276, 277, 308, 321, 322] of the self-forensics concept, as well as work towards improving the safety and incident investigation of autonomous systems.
D.1
Overview
In this work we introduce a new concept and aggregate our previous findings in requirements specification for autonomous systems, flight-critical integrated software and hardware systems, etc., so that they can analyze themselves forensically on demand as well as keep forensic data for further automated analysis in cases of reported anomalies, failures, and crashes. We insist this should be a part of the protocol for each such system (not only space missions or flight systems, but any large and/or critical self-managed system [277, 308]).

We review some of the related work that these ideas are built upon prior to describing the requirements for self-forensics-enabled components. We subsequently describe the general requirements as well as limitations and advantages [277].
The property of self-forensics [276, 277] was introduced to encompass and formally apply
to or be included in the design of not only autonomic hardware and/or software systems,
which are inherently complex and hard to analyze in general when incidents happen, but also
as an optional requirement for smaller-scale projects [322].
Self-forensics in a nutshell includes a dedicated module or modules observing the regular modules in some way, logging the observations and the events that led to them in a predefined format suitable for automatic forensic processing and deduction, and event reconstruction in case of incidents. The modules can optionally have the capability to automatically diagnose themselves based on the collected evidence and make more complex and complete decisions after the analysis than ad-hoc binary decisions. In a sense, the self-forensic modules can be seen as smart “blackboxes” like in aircraft, but can be included in spacecraft, road vehicles, large and small software systems, etc., to assist with incident analysis. Human experts can also be trained in investigation techniques based on the forensic data sets collected during different incidents. In a traditional sense, one can argue that any self-diagnostics, as well as traditional logging, hardware or software, are necessary parts of self-forensics that have been around for a long time [322].
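As a rough, hedged illustration of such an observer module (the ForensicSensor class and its observe method below are hypothetical names, not taken from any of the systems discussed), a self-forensic sensor can wrap calls into a functional unit and append timestamped observations, including anomalies, in a machine-parsable form that can later be re-encoded as Forensic Lucid context.

import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch of a self-forensic "sensor" observing a functional unit.
final class ForensicSensor {
    private final String unitName;
    private final List<String> log = new ArrayList<>(); // stand-in for durable forensic storage

    ForensicSensor(String unitName) {
        this.unitName = unitName;
    }

    // Wraps a call into the observed unit and records the observation and its outcome.
    <T> T observe(String event, Supplier<T> action) {
        log.add(String.format("[unit:%s, event:%s, begin:%s]", unitName, event, Instant.now()));
        try {
            T result = action.get();
            log.add(String.format("[unit:%s, event:%s, outcome:ok]", unitName, event));
            return result;
        } catch (RuntimeException e) {
            // Anomalies are logged too, so incidents can later be reconstructed.
            log.add(String.format("[unit:%s, event:%s, outcome:%s]", unitName, event, e));
            throw e;
        }
    }

    List<String> evidence() {
        return List.copyOf(log);
    }
}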
We compile examples of Forensic Lucid specifications for self-forensics of a few software projects as case studies. The specifications are there to be built into the systems for the
purpose of tracing and understanding complex relationships and events within some components of the said systems, especially the distributed ones. In this work, we first reasonably
narrowly focus on “exporting” the states of the systems as data structures in chronological
encoded order as Forensic Lucid contexts using its syntax in accordance with the grammar for compiling the cases [322] and try to generalize the concept to a wider application
spectrum.
We further proceed to describe the background and the related work, followed by the
example specifications of the core data structures and dataflows in Forensic Lucid of
the case studies, such as the Distributed Modular Audio Recognition Framework (DMARF,
Section 5.2, page 102), General Intensional Programming System (GIPSY, Chapter 6), Java
Data Security Framework (JDSF, Section D.3.3, page 363), and Cryptolysis—an automated
cryptanalysis framework for classical ciphers. These are primarily academic/open-source
systems with the first two being distributed and a lot more complex than the last two. After
their specification, we generalize to other domains, such as cloud forensics, and we conclude
and list a few near future work items [322].
Additionally, this preliminary conceptual work discusses a notion of self-forensics as an autonomic property to augment the Autonomic System Specification Language (ASSL) framework of formal specification tools for autonomic systems. The core of the proposed methodology leverages existing designs, theoretical results, and implementing systems to enable
rapid completion and validation of the experiments and their results initiated in this
work. Specifically, we leverage the ASSL toolkit to add the self-forensics autonomic property
(SFAP) to enable generation of the Java-based Object-Oriented Intensional Programming
(JOOIP, Section 4.3.2.3, page 94) language code laced with traces of Forensic Lucid to
encode contextual forensic evidence and other expressions [322].
The notion and definition of self-forensics was introduced by the author (Mokhov) circa 2009 to encompass software and hardware capabilities for autonomic and other systems to record their own states, events, and other data encoded in a forensic form suitable for (potentially
automated) forensic analysis, evidence modeling and specification, and event reconstruction
for various system components. For self-forensics, “self-dissection” is possible for analysis
using a standard language and decision making if the system includes such a self-forensic
subsystem. The self-forensic evidence is encoded in a cyberforensic investigation case and
event reconstruction language, Forensic Lucid (Chapter 7). The encoding of the stories
depicted by the evidence comprise a context as a first-class value of a Forensic Lucid
“program”, after which an investigator models the case describing relationships between
various events and pieces of information. It is important to get the context right for the
case to have a meaning and the proper meaning computation. As PoC case studies, some small-to-medium (distributed or not), primarily academic, open-source software systems are examined. In this work, for the purpose of implementing the small self-forensic modules
for the data structures and event flow, we specify the requirements of what the context should
be for those systems. The systems share in common the base programming language—Java,
so our self-forensic logging of the Java data structures and events as Forensic Lucid
context specification expressions is laid out ready for an investigator to examine and model
the case [322].
D.2
Motivation
We motivate this work through examples that include enhancement of, and help with, the blackboxes like in aircraft, or where self-forensics would have been helpful to analyze anomalies, say in spacecraft, when the Mars Exploration Rovers behaved strangely [335], or even when one is doing a hard disk recovery, such as from the shuttle Columbia [116], or automatically as well as interactively reasoning about events, possibly speeding up the analysis of anomalies in subsystems following a sound methodology. Another example is when the Hubble Space Telescope was switched from “side A” of its instruments to the redundant “side B”. Self-forensics units would have helped Hubble to analyze the problem and self-heal later. Of course, the cost of such self-forensic units would not be negligible; however, it may be well under the costs of postponing missions, as, e.g., happened with the Hubble Space Telescope Servicing Mission 4 (SM4) and the corresponding shuttle processing delay and the costs of moving shuttles around [331, 332, 333, 334, 415, 475] and others [277, 308].
Furthermore, the concept of self-forensics would be an even greater enhancement and help with flight-critical systems (e.g., included with the blackboxes in aircraft) to help with crash investigations [72, 277]. Similar examples can be made for a bus crash in California in 2009 [71] or the Kepler spacecraft entering safe mode in 2010 [336]. Any large-scale software systems, distributed or not, web services, operating systems, and embedded systems are a natural environment for self-forensic analysis, prevention, and reaction just as well. Thus, we insist that self-forensics, if included early in the design and development of spacecraft, vehicles, and other systems, not only helps during validation and verification, but also a posteriori, during day-to-day operations, for incident investigation and for instrumenting corrections by the system itself when it is unreachable by personnel, or by human investigators after the incident if the system is not recoverable [277].
D.2.1
Self-Forensic Computing
Many ideas in this work come from computer forensics and forensic computing. Computer forensics has traditionally been associated with computer crime investigations (see Section 2.1, page 24). We show the approach is useful in autonomic systems [308]. Additionally, the approach is useful as an aid for validation and verification during design, testing, and simulation of such systems, as well as during the actual operations and any kind of incident investigation. We argued earlier [276, 308] that if new technologies are built with the self-forensics components and toolkits from the start, it would help a wide spectrum of various autonomous and autonomic software and hardware industries [277].
Existing self-diagnostics, computer BIOS/UEFI-like reports with name:value attributes, Cisco IOS states/log reports (e.g., see Section 9.5), and S.M.A.R.T. [12, 67, 519] reporting for hard drives, as well as many other devices, are a good source of such data. The idea is to be more forensics-friendly and provide appropriate interfaces for self-forensic analysis and investigation, as well as to allow engineering teams to extract, analyze, and reconstruct events using such data [277, 308, 322].
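As a small illustration of such a forensics-friendly interface (a sketch under assumed names only; no particular device's report format is implied), name:value attribute reports can be parsed into a dimension-to-tag map that is straightforward to re-encode as forensic context later.

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: turning a name:value self-diagnostics report into a dimension-to-tag map.
final class AttributeReportParser {
    static Map<String, String> parse(String report) {
        Map<String, String> attributes = new LinkedHashMap<>();
        for (String line : report.split("\\R")) {         // one "name: value" attribute per line
            int separator = line.indexOf(':');
            if (separator > 0) {
                attributes.put(line.substring(0, separator).trim(),
                               line.substring(separator + 1).trim());
            }
        }
        return attributes;
    }

    public static void main(String[] args) {
        String report = "Temperature_Celsius: 38\nReallocated_Sector_Ct: 0";
        System.out.println(parse(report)); // {Temperature_Celsius=38, Reallocated_Sector_Ct=0}
    }
}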
D.2.2
Problem and Proposed Solution
The emerging concept of self-forensics and the idea of its proposed realization within ASSL
and GIPSY are described through their core founding related work. These preliminary findings and discussions are currently at the conceptual level, but the author provides the complete requirements, design, and implementation of the concept described here by leveraging
the resources provided by the previous research work. To the author’s knowledge there is no preceding work, other than the author’s own, that attempts something similar [322].
First, we give a brief overview of the founding background work on ASSL and self-forensics in Section D.3.6 and Section D.4.1. Then, we describe the core principles and ideas of the methodology for realization of the self-forensics autonomic property (SFAP) within the ASSL framework in Section D.4.5. We provide a quick notion of the syntactical notation of SFAP and where it fits within the generating toolset of ASSL. We conclude in Section D.5 with the merits of and future endeavors for the developments in this direction [322].
D.3
Related Work
There are a number of inter-related works of relevance listed below.
D.3.1
Available Related Technologies and Tools
Some of these have been previously mentioned:
• S.M.A.R.T. technologies [12, 67, 519]
• Recent Cisco netflow uplink modules running at 10G speeds, allowing collection of the pcap netflow data. Forensic Lucid rules, like ACLs in Cisco equipment, would run at line speed. Local data can be kept on SD storage until transmitted later to the control and maintenance centers for analysis.
• Gigamon span-port splitters for sorting out the pcap netflow traffic.
• Linux state tracing kernel [150].
• AspectJ [29] (Section 8.2.1) can be used to create forensics-relevant aspects for Java
programs.
• Various software based logging mechanisms provided by operating systems and services
(syslog, Event Log, etc.).
D.3.2
Expert Systems
The AT&T early diagnosing expert system for troubleshooting their phone line networks
with a corresponding knowledge base is a good early example [186] of relevance to us. There
are many of such examples [186] where self-forensics would be useful.
D.3.3
Java Data Security Framework
JDSF [275, 294, 295, 299, 316] has been proposed to allow security researchers working with
several types of data storage instances or databases in Java to evaluate different security
algorithms and methodologies in a consistent environment. The JDSF design aims at the
following aspects of data storage security: confidentiality (data are private), integrity (data
are not altered in transit), origin authentication (data are coming from a trusted source), and
SQL randomization (for relational databases only, not discussed here any further) [271, 322].
JDSF also provides an abstraction of the common essential cryptographic primitives. The
abstraction exposes a common API proxied to the common Java open-source implementations of the cryptographic algorithms for encryption, hashing, digital signatures, etc. The
higher-level JDSF design summary is illustrated in several UML specification diagrams documented in the related works [295, 299, 316]. The design presented in those works illustrates
all the necessary main subpackages of JDSF and its configuration, the design of securityenhanced storage, the authentication subframework, privacy subframework, integrity subframework, and the abstraction API over the concrete cryptographic primitives [271, 322].
JDSF is convenient to use in the scope of the presented research, and, like GIPSY and
DMARF, it is implemented in Java, and is open-source [271, 322].
D.3.4
Cryptolysis
Cryptolysis [298] is a small framework that includes a collection of automated attacks on the classical ciphers using a set of heuristic algorithm implementations in Java. The algorithms primarily come from the related work on the same subject [69]. Cryptolysis also
features additional algorithms that are wrappers around classification and signal processing
tasks of MARF [465] for additional imprecise and spectral reasoning. Cryptolysis, among
other things, also performs some NLP parsing that segments the deciphered whole-block text
and inserts spaces in-between the word boundaries automatically for the ease of readability.
Cryptolysis, like the others, is an open-source project [322].
D.3.5
Autonomic Computing and Self-Management Properties
The common aspects of self-managing systems, such as self-healing, self-protection, self-optimization, and the like (self-CHOP), are now fairly well understood in the literature and R&D [167, 178, 207, 327, 371, 476, 491, 500]. We formally augment that list with the self-forensics autonomic property, which we would like to be a part of the standard list of autonomic systems requirements and design specification [277, 308].

The self-forensics property is meant to embody and formalize all existing and future aspects of self-analysis, self-diagnostics, related data collection and storage, software and hardware components (“sensors”), and automated decision making that were not formalized as such, and to define a well-established category in the industry and academia [277, 308]. In that view, self-forensics encompasses self-diagnostics, blackbox-like recording, traditional logs, web services, S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) reporting [12, 67, 519], and the encoding of this information in the analyzable standard form of Forensic Lucid [304, 310] for later automated analysis and event reconstruction using the corresponding expert system tool [277, 308]. Optional parallel logging of the forensic events during the normal operation of the autonomous and semi-autonomous systems, especially during the “blackout” periods (when a given remote system is unreachable by operators), will further enhance the durability of the live forensics data logged from the system to a nearby loghost (a remote logging facility), some sort of receiving station, etc. [277, 308].
D.3.6
ASSL Formal Specification Toolset
The ASSL (Autonomic System Specification Language) framework [486, 493, 502] takes as an
input a specification of properties of autonomic systems [6, 173, 178, 179, 207, 327, 371], does
formal syntax and semantics checks of the specifications, and if the checks pass, it generates
a collection of Java classes and interfaces corresponding to the specification. Subsequently,
a developer has to fill in some overridden interface methods corresponding to the desired
autonomic policies in a proxy implementation within the generated Java skeleton application
or map them to the existing legacy application [322, 486, 493, 502].
Similarly to GIPSY (Section 6.2, page 136), the ASSL framework [502] includes the autonomic multi-tier system architecture (AS), including formal language constructs to specify service-level objectives (SLOs), the core self-CHOP (i.e., self-configuration, self-healing, self-optimization, and self-protection) autonomic properties, the corresponding architecture, allowed actions, events, and metrics to aid the self-management aspect of the system. It also specifies the interaction protocols between the AS’s managed autonomic elements, including specification of the messages exchanged and how they are communicated. Finally, it provides for specification of the autonomic element (AE) architecture; like the whole system, each element is subject to the SLOs, self-CHOP policies, behavior, actions, metrics, and interaction protocols, the summary of all of which is enumerated in Figure 86, page 365 [322].
I. Autonomic System (AS)
* AS Service-level Objectives
* AS Self-managing Policies
* AS Architecture
* AS Actions
* AS Events
* AS Metrics
II. AS Interaction Protocol (ASIP)
* AS Messages
* AS Communication Channels
* AS Communication Functions
III. Autonomic Element (AE)
* AE Service-level Objectives
* AE Self-managing Policies
* AE Friends
* AE Interaction Protocol (AEIP)
- AE Messages
- AE Communication Channels
- AE Communication Functions
- AE Managed Elements
* AE Recovery Protocol
* AE Behavior Models
* AE Outcomes
* AE Actions
* AE Events
* AE Metrics
Figure 86: Vassev’s ASSL multi-tier model [322]
ASSL formal modeling, specification, and model checking [489, 490] has been applied to a number of open-source, academic, and research software system specifications, such as
Voyager imagery processing [488]; the Distributed Modular Audio Recognition Framework
(DMARF) [320, 494, 495]; the General Intensional Programming System (GIPSY) [500];
reliability of self-assessment, distributed, and other autonomic aspects of the autonomic
system-time reactive model (AS-TRM) [492, 497]; self-adapting properties of NASA swarm
missions [167, 476, 491], and others [322, 487].
D.4 Self-Forensics Methodology Overview

D.4.1 Self-Forensics Concept
The study of self-forensics [276, 277, 321] is an additional property the author is investigating throughout this topic, with contextual forensic logging and case specification in Forensic Lucid [269, 300, 304, 306, 307]. As previously mentioned, Forensic Lucid is an intensional context-oriented forensic case specification, modeling, and evaluation language (see Chapter 7 for details). Forensic Lucid was initially proposed for specification, automatic deduction, and event reconstruction in the cybercrime domain of digital forensics [357]. It has been proposed to extend its use onto other domains, such as various vehicle crash investigations and incidents in autonomous software and hardware
systems. Its primary feature inherited from the Lucid family of languages is to be able to
specify and work with context [365, 473, 513] as a first-class value, and the context represents
the evidence and witness accounts [322].
Forensic Lucid’s primary experimental platform for compilation (Forensic Lucid
compiler is a member of the General Intensional Programming Compiler (GIPC) framework,
Section 6.2.1, page 139) and evaluation is the General Intensional Programming System
(GIPSY, Chapter 6) [161, 302, 362]. GIPSY’s run-time system, the General Eduction Engine
(GEE, Section 6.2.2, page 142), is designed to be flexible to allow various modes of execution,
including the use of the evaluation by the PRISM- [467] and AspectJ-based [29] backends
as illustrated in Figure 32, page 137 [315] and Figure 58, page 219 [322] (Chapter 6 and
Chapter 1).
D.4.2
Generalities
As we encode the observation sequences for our case study systems, we observe some general
tendencies. It is possible to have gaps in the evidence stories when some of the expected data
structures were not instantiated or some events did not occur. To recap, we model them
as no-observations [136, 137], $. A no-observation is an observation $ = (C_T, 0, infinitum) that puts no restrictions on computations (Section 2.2, page 29). The infinitum is an integer constant that is greater than the length of any finite computation that may have happened, and C_T is the set of all possible computations in the system T [137, 321] (Chapter 2).
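A minimal Java sketch of this encoding (the Observation record and INFINITUM constant are assumed names used only for illustration) represents an observation as a (property, min, opt) triple and the no-observation $ as a triple that places no restriction on the computations.

// Sketch only: the (property, min, opt) observation triple used in the evidence encoding.
// INFINITUM stands in for an integer larger than the length of any finite computation.
record Observation(Object property, int min, int opt) {
    static final int INFINITUM = Integer.MAX_VALUE;

    // The no-observation $ = (C_T, 0, infinitum): it places no restriction on computations.
    static Observation noObservation(Object allComputations) {
        return new Observation(allComputations, 0, INFINITUM);
    }
}

An observation sequence is then simply an ordered list of such triples, with the no-observation filling the gaps where a data structure was never instantiated or an event did not occur.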
All systems’ high-level evidence follows a similar traditional blackbox evidential pattern:
given an instance of a configuration followed by the initial “input” data followed by one or
more computations on the data followed by a result or a result set. This contextual pattern
can be exploded at the context level to the very fine-grained details to the lowest level of core
data structures and variables. The computations can be arbitrarily complex and nesting of
the context expressions, as evidential logs, can be arbitrarily deep, depending on the desired
level-of-detail (LOD) [321].
Inherently, a parallel evaluation can also be captured as variable-length generic observations [136, 137] (Section 2.2.4.5, page 39, Section 7.5.5, page 208), forming a multidimensional
box [513], where each element can be evaluated concurrently. All elements should then complete the evaluation prior to transiting to the final result or result set observation [321].
Subsequently we describe some possible example applications of self-forensics and its requirements that must be made formal, in the industry, if the property is to be used in a wider context and scope and included in the design of the various systems early on [277, 308, 321].
Given this initial discussion we further proceed with some of the contextual Forensic
Lucid specifications of our studied systems, specifically DMARF in Section D.4.6.1, GIPSY
in Section D.4.6.2, JDSF in Section D.4.6.3, and Cryptolysis in Section D.4.6.4 [321].
D.4.3
Requirements
Based on the previously cited work on different aspects of self-forensics, we define the requirements scope for the autonomic self-forensics property. This is a mix of mostly high-level functional and some non-functional requirements, aggregated as an initial “check-list” to be used in real system requirements specifications in much more elaborate detail, and expected to be carried over into the design specification.

The rough high-level requirements scope for the autonomic self-forensics property is as follows:
• Should be optional if affected by the severe budget constraints for less critical system
components, but easily “pluggable”. Must not be optional for mission critical and
safety-critical systems and blackboxes [277, 308].
• Must be included in the design specification at all times using some formal method of
specification (e.g., Forensic Lucid, ASSL, and Isabelle) [277, 308].
• Must cover all types of the self-diagnostics events (e.g., S.M.A.R.T.-like capabilities
and others.) [277, 308].
• Must have a formal specification (that is what makes it different from various typical self-diagnostics and other aspects of self-monitoring) [277, 308].
• Must have tools for automated reasoning and reporting about incident analysis matching the specification, real-time or a posteriori during investigations. The tools may
accompany the system in question if resources allow, and also be remotely available for
independent parallel analysis and investigation [277, 308].
• Context should be specified in the terms of system specification involving the incidents,
e.g., parts and the software and hardware engineering design specification should be
formally encoded (e.g., in Forensic Lucid) in advance, during the design and manufacturing. These are the static forensic data. The dynamic forensic-relevant data are
recorded in real-time during the system operation [277, 308].
• Preservation of forensic evidence must be atomic, reliable, robust, and durable [277,
308].
• The forensic data must be able to include any or all related not-necessarily-forensic
data for analysis when needed in case it can be a source of a problem (incident). Depending on the system, e.g., the last image or application-specific data passing through
reconnaissance or science images for military, exploration, and scientific aircraft taken
by a camera, measurements done around the incident by an instrument and/or protocol
states around the time of the incident. The observing modules may include the entire
trace of a lifetime of a system or system components logged somewhere for automated
analysis and event reconstruction [277, 308].
• Levels of forensic logging and detail should be optionally configurable in collaboration
with other design requirements in order not to hog other activities’ resources, create significant overhead, or fill in the network bandwidth of downlinks or to preserve
power [277, 308].
• Self-forensics components should optionally be duplicated in case they themselves also
fail [277].
• Event co-relation mechanisms optionally should be specified, e.g., as in time-reactive
models and systems [277, 308], or combined within Forensic Lucid.
• Probabilistic trustworthiness and credibility factors need to be specified if corrupt and/or unreliable (evidential) data is detected, so that its credibility weight is reduced during self-investigation to reduce the risk of “wrong” decisions.
• Some forensic analysis can be automatically done by the autonomous system itself
(provided having enough resources to do so), e.g., when it cannot communicate with
system administrators, or with flight controllers, or the engineering crew (e.g., during
solar conjunction for a week or two the robotic spacecraft are left unattended) [277, 308].
D.4.4
Example Scenario
Below is a high-level description of the self-forensic process described as a scenario that
is to become a refined algorithm. Terminology: an autonomous system is any software
and/or hardware system exhibiting self-management properties, e.g., distributed systems,
operating systems, cloud computing platforms, road-side networks (RSNETs), any time of
autonomous craft or vehicle. A sensor is a software or hardware component observing events
of interest and doing forensic logging. A functional unit is a software/hardware component
or instrument under observation.
1. Self-forensic “sensors” observe functional units (e.g., instruments and subsystems of
spacecraft, aircraft in flight, systems of IRSNET, subsystems of a cloud, or software
systems) [277, 308].
2. Every event of interest (engineering or scientific) is logged using Forensic Lucid [277].
3. Each forensic sensor observes one or several components or instruments (hardware or
software).
4. Each sensor composes a witness testimony in the form of Forensic Lucid observational sequence os about a component, a subsystem or instrument it observes [277].
5. A collection of witness accounts from multiple sensors, properly encoded, represent the
evidential statement, es forming the local knowledge base either during the verification
or actual operation [277].
6. If an incident (simulated or real) happens, systems and engineers define and encode
theories (or hypotheses) in Forensic Lucid about what happened. The theories are
likewise encoded as observation sequences os. When evaluated against the collected
forensic evidence in es, they are added to es. Then any appropriate evaluating expert
system (e.g., based on GIPSY with extensions in our case studies) can automatically
verify their theory against the context of the evidential statement es: if the theory T
agrees with the evidence, meaning this theory has an explanation within the given evidential context (and the amount of evidence can be too large for humans to “eyeball”), then the theory is likely a possible explanation of what has happened.
It is possible to have multiple explanations and multiple theories agreeing with the
evidence. In the latter case usually the “longer” (in the amount of reconstructed events
and observations involved) theory is preferred or the one that has a higher cumulative credibility weight w. Given the previously collected and accumulated knowledge
base of Forensic Lucid facts, some of the analysis and events reconstructed can be
done automatically. The transition function and its inverse in this case are assumed
to have already been incorporated and available from the time the system was designed. Its Forensic Lucid specification is simply processed with the es that was
received [277, 308].
7. The incident is also processed in a somewhat similar fashion by the autonomous system
in question if resources allow. It may have actions defined on what to do based on the
results of the self-investigation. The specification of what the potential claims may be
in this case can be pre-defined in the hardware or software for expected incidents. The
unexpected incidents may be more difficult to deal with, but over time those can be
either machine-learned or added by the engineering team over time after simulations
with the test systems.
D.4.5
Self-Forensics Autonomic Property (SFAP) in ASSL
In the PoC ASSL specification of self-forensics we add a notion of a SELF FORENSICS policy
specification for the AS tier and AE, just like it is done for the self-CHOP properties. The
proposed property introduction consists of two major parts: (1) adding the syntax and
semantic support to the lexical analyzer, parser, and semantic checker of ASSL as well as
(2) adding the appropriate code generator for JOOIP and Forensic Lucid to translate
forensic events. The JOOIP code is mostly Java with embedded fragments of Forensic
Lucid-encoded evidence [269, 321, 322].
We use ASSL’s managed-element (ME) specification of AE to encode any module or
subsystem of any software system under study to increase or reduce the amount of forensic
evidence logged as Forensic Lucid events depending on the criticality of faults (that can
be expressed as ASSL metrics) [322].
A very high-level example of the generic self-forensic specification is in Listing D.1,
page 371. Many details are presently omitted due to the preliminary work on this new
369
concept and will be provided in our subsequent future publications [322].
Wu and the GIPSY R&D team came up with a hybrid intensional OO language, JOOIP
([526, 528], Section 4.3.2.3, page 94), to allow mixing Java and Lucid code by placing Lucid
fragments nearly anywhere within Java classes (as data members or within methods). As
a part of this conceptual research work, we propose that the ASSL toolset in this instance
be augmented with a code-generation plug-in that generates JOOIP [526, 528] code laced
with Forensic Lucid contextual expressions for forensic analysis. The evaluation of the
JOOIP+Forensic Lucid code further is to be performed by the GIPSY’s general eduction
engine (GEE), described in detail in [161, 302, 322, 362] (see also Section 6.2.2, page 142).
Furthermore, in this proposed prototype the EVENTS members are the basic building
blocks of the contextual specification of the Forensic Lucid observation sequences. The
INITIATED BY and TERMINATED BY clauses correspond to the beginning and end-of-datastream Lucid operators bod and eod. ASSL fluents map to the Lucid streams of the
observation sequences where each stream is a witness account of systems behavior. All fluents constitute an evidential statement. The mapping and actions correspond to the handling
of the anomalous states within the JOOIP’s Java code [322].
In the proposed design, once JOOIP code with Forensic Lucid fragments is generated by the ASSL toolset, it is passed on to the hybrid compiler of GIPSY, the GIPC, to properly compile the JOOIP and Forensic Lucid specifications and link them together into executable code inside the GEE engine resources (GEER), which then has three choices for its evaluation: the traditional eduction model of GEE, the AspectJ-based eduction model, and probabilistic model checking with the PRISM backend [322] (see Chapter 8).
D.4.6
Example Applications
We illustrate some simple examples of how a Forensic Lucid specification of a self-forensics context begins, in the form of a scripting template. Then, we follow with some more concrete examples in the subsequent subsections. Having Forensic Lucid helps scripting the forensic events in the autonomous systems. The real blackboxes can contain the forensic data encoded in any form, including Forensic Lucid expressions, XML, or just compressed binary, with an external tool converting it to a Forensic Lucid specification at a later time. Below is a high-level preamble of any typical specification of the hardware or software system in question.
invtrans(T @ system_es)
where
evidential statement system_es = { ... };
// T is a theory or hypothesis of what has transpired
observation sequence T = { ... };
end
The default binary Forensic Lucid specification corresponds to a serialized AST (as in the earlier presented GEER) for more optimal evaluation and retrieval. The evidential statement es is a dictionary of the evidential observation sequences. It may contain definitions that are not necessarily used by the modeled case program. The following template can be remodeled, for
AS ADMARF {
    TYPES { MonitoredElement }
    ASSELF_MANAGEMENT {
        SELF_FORENSICS {
            FLUENT inIntensiveForensicLogging {
                INITIATED_BY { EVENTS.anomalyDetected }
                TERMINATED_BY {
                    EVENTS.anomalyResolved,
                    EVENTS.anomalyFailedToResolve
                }
            }
            MAPPING {
                CONDITIONS { inIntensiveForensicLogging }
                DO_ACTIONS { ACTIONS.startForensicLogging }
            }
        }
    }
    ACTIONS {
        ACTION startForensicLogging {
            GUARDS { ASSELF_MANAGEMENT.SELF_FORENSICS.inIntensiveForensicLogging }
            VARS { Boolean ... }
            DOES {
                ...
                FOREACH member in AES {
                    ...
                };
            }
            ONERR_DOES {
                // if error then log it too
                ...
            }
        }
    } // ACTIONS
    EVENTS { // these events are used in the fluents specification
        EVENT anomalyDetected {
            ACTIVATION { SENT { ASIP.MESSAGES.... } }
        }
        ...
    } // EVENTS
    METRICS {
        METRIC thereIsInsecurePublicMessage {
            METRIC_TYPE { CREDIBILITY }
            DESCRIPTION { "sets event's trustworthiness/credibility AE" }
            VALUE { ... }
            ...
        }
    }
} // AS ADMARF
// ...
MANAGED_ELEMENTS
{
    MANAGED_ELEMENT STAGE_ME
    {
        INTERFACE_FUNCTION logForensicEvent
        {
            PARAMETERS { ForensicLucidEvent poEvent }
            RETURNS { Boolean }
        }
    }
}

Listing D.1: The prototype syntactical specification of the SELF FORENSICS in ASSL for ADMARF [322]
example, to fit network equipment, such as a switch or a router, based on the much greater level of detail exemplified in the MAC spoofer investigations (see Section 9.5, page 257).
invtrans(T @ switch_es)
where
evidential statement switch_es = { ... };
// T is a theory of what has happened, e.g., with a switch port
observation sequence T = { ... };
end
Such functions can be precompiled as “stored procedures” (using the DBMS terminology) for very well-known manufacturer specifications of components as well as for validated software, possibly with Petri nets or PRISM state machines [136, 137].

As application examples, the following subsections illustrate some context encoding examples in Forensic Lucid for small-to-medium software systems, followed by the IRSNETs and cloud computing systems as larger-scale project proposals where it is sensible to implement self-forensics.
D.4.6.1
DMARF
The distributed aspect of DMARF (Section 5.2, page 102), as opposed to the classical MARF, makes for gathering more forensic data due to the intrinsically more complex communication between modules, which are potentially remote. In addition to the earlier specification recited
in Figure 68, page 235, we need to capture the configuration data related to the connection
settings, protocols, and any other properties related to the distributed systems [321].
The main difference is that now, given the same configuration, it is possible to have multiple distributed/parallel training sets computed as well as multiple results produced on different nodes, and the configuration object extension is designed to include the communication protocols and the related information. Equation D.4.1 gives a complete high-level specification of a DMARF run of observations. dconf_o is an observation of the initial DMARF configuration, tset_{o_i} is the observation of the X × Y training sets, where X and Y form a box of all possible training sets that can be available in that run, and result_{o_i} is an observation of possible corresponding results of the pipelines’ runs. There could be multiple pipelines formed through the lifetime of a DMARF network of the computing stages and nodes [321].
(dconf_o, 1, 0), (tset_{o_i}, X, Y), ..., (result_{o_i}, X, Y)    (D.4.1)
A system instance that has not produced a result (e.g., due to a crash at the classification
stage) would still have a 3-observation sequence, with the last one being a no-observation, as
shown in Equation D.4.2.
(dconf_o, 1, 0), (tset_{o_i}, X, Y), ..., $    (D.4.2)
It is theoretically possible to have no-observations for the configuration or training set data components, say, in case such real evidential data were lost or are not trustworthy due to poor handling or an improper chain of custody. In such a case, only partial reasoning can be
performed on the evidence [321].
Note that at this point we do not cover the typical advanced distributed systems features, such as replication, write-ahead logging, and load balancing, and their related data structures, communication, etc., which deserve a separate large treatment; for now we focus only on the normal business operation of the system [321].
D.4.6.2
GIPSY
In GIPSY (Chapter 6), the core data structure, connecting the compiler GIPC (Section 6.2.1,
page 139) and the execution engine GEE (Section 6.2.2, page 142) is the GEER, represented
by the GIPSYProgram class. This is the initial observation as far as GEE is concerned. It
causes the engine to produce a number of demands in the order of the abstract syntax tree
(AST) traversal, abstracted by the Demand class and the corresponding IDemand interface.
Demands live and die inside some sort of a store (that acts like a cache) and are transported
by the transport agents (TAs) using various communication technologies mentioned earlier
in the previous section. IDemand is used for both demands and their results. When the
result of computation reaches the top of the evaluation tree, the final result of the program
is computed—it is an instance of some subclass of the GIPSYType from the GIPSY Type
System [315] (Appendix B). Thus, the high-level observation sequence is encoded as [321]:
(p, 1, 0), (d_i, X, Y), ..., (r, 1, 0)    (D.4.3)
where GIPSYProgram p is the initial program (GEER), IDemand d_i ranges over the cross product X × Y of all possible demands that may be observed, and GIPSYType r is the result of the computation. X and Y form a cross product of all possible demands that can be made throughout the program execution, whether they need to be computed or retrieved from a cache, an intensional data warehouse. An example of the observation sequence expression is in Figure 87, page 374.
The AST corresponds to the abstract syntax tree of a compiled GIPSY program, followed
by a dictionary of identifiers and the format of the compiled language. Then, at run-time
demands are generated for the two identifiers at the context d : 1 as well as their sum with
the + operator yielding a resulting value r [321].
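As a hedged illustration of the demand-store idea only (DemandStore and DemandKey below are assumed names and do not correspond to GIPSY's actual classes), demand results can be memoized by identifier and evaluation context, so a repeated demand is retrieved from the cache/warehouse instead of being recomputed.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch of a demand store acting as a cache for demand results.
record DemandKey(String identifier, Map<String, Integer> context) {}

final class DemandStore {
    private final Map<DemandKey, Object> cache = new ConcurrentHashMap<>();

    // Returns the cached result if this demand was already computed; otherwise
    // computes it (possibly remotely, via a transport agent) and caches it.
    Object demand(DemandKey key, Supplier<Object> computation) {
        return cache.computeIfAbsent(key, k -> computation.get());
    }
}

For instance, store.demand(new DemandKey("x", Map.of("d", 1)), () -> evaluateX()) would evaluate the identifier x at the context d:1 once and serve subsequent identical demands from the cache.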
The high-level encoding does not include the compilation data structures used in the
GIPC compiler family (Section 6.2.1, page 139) as we are more concerned with the run-time
execution at this point; however, one has to plan for certified compilation eventually, which
is becoming more important these days, and that’s what we will recommend to the GIPSY
designers and developers for the future work. Once certified compilation is in place, the
corresponding compilation evidence can be encoded for further offline or run-time analysis.
Likewise, as in DMARF, we avoid talking about typical distributed system properties that
ensure availability of the services, replication, etc., but in the final self-forensics modules
design all the distributed middleware also needs to be covered [321].
D.4.6.3
JDSF
JDSF’s core data structures comprise secure data beans amended with security information
layered as needed [275, 294, 295, 299, 316] (Section D.3.3, page 363). Thus, most of the
observation sequence GIPSYos = { p, d1, d2, di, r } =
{
(
[
AST        : ..., // abstract syntax tree
dictionary : [id1:’x’, id2:’y’],
formattag : COMPILED_GEER
], 1, 0
),
([identifier:id1, tag:d, tagval:1], 1, 0),
([identifier:id2, tag:d, tagval:1], 1, 0),
([identifier:+, tag:d, tagval:1], 1, 0),
([result:X], 1, 0)
}
Figure 87: Example of observations in GIPSY [321]
observations comprise these data structures (the beans), and the observation sequence, using strict ordering, gives away the order in which the given security information was applied, e.g., signing happened before encryption and so on. Another observation is the configuration
object instance that provides the details of all the module configuration parameters. The final observation is the secure object after application of all the security levels. As a result, the
high-level sequence of events captures the observation context of given data and configuration
that result in a secured bean. Therefore, the data structures covered by the Forensic Lucid
context include the base SecureBean object type, and its derivatives AuthenticatedObject,
EncryptedObject, IntegrityAddedObject. The basic forensic context structure is, therefore [321]:
bean: { value:X, security_info:Y }
The dimensions value (i.e., payload content) and security info comprise the concrete
security-enhanced payload data X and the details about the said security technique applied
Y . On top of the actual data beans, we have the corresponding configuration object that is
a part of the system’s context. The security Configuration object’s context varies in three
basic dimensions [321]:
configuration:
[
confidentiality:C,
integrity:I,
authentication:A
]
observation sequence JDSFos = { config, data, bean } =
{
(
ordered [
confidentiality : RSA [ key:12345, keylength:1024 ],
integrity       : MD5,
authentication : DSA [ key:23456, keylength:256 ]
], 1, 0
),
([1,2,3,4], 1, 0),
([value:[3,1,4,2], security_info:Y], 1, 0)
}
Figure 88: Example of observations in JDSF [321]
These dimensions’ tag values evaluate to integers enumerating the concrete algorithms used
for each of these stages. The context also determines the order in which they were applied, which
may be an important factor, e.g., in public-key encryption schemes where the order of
signing and encryption matters. Figure 88, page 375, shows an example of the Forensic
Lucid context instance of a particular configuration [321].
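For illustration, a minimal Java sketch of how the ordered configuration context of Figure 88 could be captured follows. The class and field names are hypothetical and chosen for readability rather than taken from the actual JDSF code base, and the algorithm parameters simply echo the figure.

import java.util.ArrayList;
import java.util.List;

// Hypothetical encoding of a JDSF-style security configuration; not the actual JDSF API.
final class SecurityStage {
    final String dimension;  // confidentiality, integrity, or authentication
    final String algorithm;  // e.g., RSA, MD5, DSA
    final String parameters; // e.g., key and key length
    SecurityStage(String dimension, String algorithm, String parameters) {
        this.dimension = dimension;
        this.algorithm = algorithm;
        this.parameters = parameters;
    }
    @Override public String toString() {
        return dimension + " : " + algorithm + (parameters.isEmpty() ? "" : " [ " + parameters + " ]");
    }
}

public final class JdsfConfigurationExample {
    public static void main(String[] args) {
        // The list is ordered: its iteration order records the order in which
        // the security techniques were applied (e.g., signing before encryption).
        List<SecurityStage> configuration = new ArrayList<>();
        configuration.add(new SecurityStage("confidentiality", "RSA", "key:12345, keylength:1024"));
        configuration.add(new SecurityStage("integrity", "MD5", ""));
        configuration.add(new SecurityStage("authentication", "DSA", "key:23456, keylength:256"));

        configuration.forEach(System.out::println);
    }
}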
D.4.6.4
Cryptolysis
Cryptolysis [280, 298], being a small system, does not have a lot of deep contextual complexity.
While it has no explicit configuration object, there is implicit configuration state that impacts
the results of the system’s runs. There are several key operational evidential context pieces:
the cryptographic Key, the plain-text and cipher-text data, the cipher algorithm type, the
cryptanalysis algorithm type, the statistics, and a result. The Key k in the forward encryption
process is a part of the input configuration, along with the cipher algorithm r to use and
the input plain-text data t, and the result is the cipher text c. In the reverse process, which
is the main mode of operation for Cryptolysis, the configuration consists of the cryptanalysis
algorithm type a followed by the input cipher-text data c; the primary result is the
guessed encryption Key k, and the secondary results are the associated accuracy statistics s
built along the way and the recovered plain text t, as shown in Equation D.4.4 and Equation D.4.5. The two
equations denote these two scenarios; the latter is exemplified in Figure 89, page 376 [321].
(k, 1, 0), (r, 1, 0), (t, 1, 0), (c, 1, 0)    (D.4.4)
(a, 1, 0), (c, 1, 0), (k, 1, 0), (s, 1, 0), (t, 1, 0)    (D.4.5)
There is also a natural language processing (NLP) aspect of Cryptolysis, covered in [280],
that deals with natural language word boundary detection in the deciphered text
observation sequence Cryptolysis_os = { a, c, k, s, t } =
{
  ([algorithm:GENETIC_ALGORITHM], 1, 0),
  (['k','h','o','o','r'], 1, 0),
  ([key:[xyzabc...], length:26], 1, 0),
  ([accuracy:0.95], 1, 0),
  (['h','e','l','l','o'], 1, 0)
}

Figure 89: Example of observations for the second equation in Cryptolysis [321]
that has no punctuation or spaces [321].
D.4.6.5
Hardware/Software Systems
Various types of craft (space or air) usually have many instruments and sensors on board,
including high-end computers. All, or at least the most important, of these “normal” operational
units can have their design augmented (in hardware and software) with additional
functional units that observe the instruments and components, both hardware and software,
for anomalies and log them appropriately for forensic purposes [277, 308]. Given enough
on-board processing resources for the forensic log data, the system can automatically perform
self-investigation of any related anomalies and institute corrective measures whenever possible
while remote operator access is unavailable.
For example, the moment when a spacecraft is “safing” itself due to, say, a cosmic ray hit
is a critical and appropriate time to log some forensic data before and after (as well as during,
if possible) the event, to help analyze more quickly what happened and bring the spacecraft
out of the safe mode sooner. Presently the engineering teams spend a lot of effort on such
an investigation after receiving the telemetry data (which would be, e.g., impossible while a
craft is behind a planetary body or the Sun and is unreachable for a period of time) [308].
Finally, self-forensic investigation engines could run continuously in aircraft and automotive vehicles,
employing remote secure logging while performing in-situ self-investigation of problems, and
reporting to the pilots or drivers or taking corrective action.
D.4.6.6
Complex, Distributed Software Systems and Services
This topic was explored in some detail earlier [321], covering example distributed software
systems as case studies, such as the Autonomic Distributed Modular Audio Recognition
Framework (ADMARF) [496] and the Autonomic General Intensional Programming System (AGIPSY) [500], among others. The principles and concepts are very similar to the ones
described in the preceding section, except applied to the software components only and the
corresponding middleware technologies used for communication. The MAC Spoofer Investigation presented in Section 9.5, page 257 also fits these criteria.
D.4.6.7
Self-Forensics for Intelligent Roadside Networks
In [276] we have already discussed the idea of self-forensics for road vehicles (and similar ideas
proposed in a white paper responding to NASA’s call regarding spacecraft and aircraft safety and
autonomous operation of distributed software systems [277, 321]), but there was almost
no mention of the backbone and other support in the roadside networks (RSNETs), which
were argued to aid road safety and incident investigations. Self-forensics is important for
diagnostics in remote highway areas where human monitors and operators may not be
available. We would like to introduce the term self-forensics into the vocabulary of roadside
networks. RSNETs are also on the receiving end of a vehicle undergoing an accident; it is
therefore important to capture all the relevant data for investigation by expert systems
and human investigators, with the intent of event reconstruction later on. Roadside networks
have to be autonomous (e.g., for self-protection) in the modern age, to sustain and self-manage
themselves to the maximum possible extent. As a result, the corresponding equipment and intelligent
software support must be designed accordingly.
Main Objectives. The main objectives of this topic include:
• Standardize the terminology and requirements of roadside networks for intelligent self-management and self-forensic support for roadside backbones specified for VCNs.
• Define the R&D activities in roadside networks for the future test fields for self-forensics.
• Define and formalize self-monitoring and specification aspects for the design and development of the roadside networks to monitor and react in case of intrusions, power
failures, incidents, and other faults, with a blueprint of autonomous recovery strategies.
• Formalize context specification for the internetworking of VANETs and roadside networks, mobility, ubiquity, etc., reducing the management traffic overhead and improving
incident response.
• Address relevant QoS and security aspects and related protocols as solutions to performance
and safety aspects of self-forensics in IRSNETs.
D.4.6.8
Self-Forensics for Cloud Computing
This represents another project topic: carrying over the self-forensics techniques previously
proposed for both manual and autonomous security monitoring, event reconstruction, incident response, and investigation in distributed software systems, or information systems in
general, onto cloud computing and its platforms. We argue that self-forensics is applicable to the “self-dissection”
of autonomous software in clouds for automated incident and anomaly
analysis and event reconstruction, including after-the-fact analysis by the investigators in a
variety of incident scenarios. The idea, however, is the same: to construct and analyze the
relevant formalizations, terminology, and standards to be included in the design of cloud
computing platforms and tools to facilitate their management and digital investigations, since
such systems are large and complex.
The concepts presented in the preceding sections (self-forensics for autonomic operation of distributed software systems [321], multimedia information systems forensic specification, and ASSL-based specification [322]) include the notion of self-protection and other
self-management aspects, as well as forensics and incident investigations, which we intend to
bring into the vocabulary of cloud computing as well.
This project is set to cover the following topics, to a certain degree, in the context of cloud
computing, with self-forensics as a part of the self-management properties that cross-cut a
number of aspects and topics.
• Similarly to the other scenarios, standardize the terminology and requirements of
cloud computing platforms for intelligent self-management and self-forensic support,
including self-forensic sensors deployed in the cloud servers, logging forensic contexts in
the Forensic Lucid format.
• As for IRSNETs, define and formalize self-monitoring and specification aspects for
the design and development of clouds to monitor and react in case of intrusions, attacks, other kinds of cybercrime incidents, and other faults, with a blueprint of
autonomous recovery strategies.
D.5
Summary
We introduced the new concept of self-forensics with Forensic Lucid to aid validation and
verification of critical systems, to be used during design, manufacturing, and testing, as
well as continuously during the actual operation [277].
We outlined some of the background work, including the forensic case specification language, Forensic Lucid, which we adapted from the cybercrime investigations domain to
aid the validation and verification of the autonomic subsystems’ design, by proposing to log
the forensic data in the Forensic Lucid context format, available for manual/interactive
analysis on the ground as well as in real time by a corresponding expert system [277].
We drafted some of the requirements for such a property to be included in the design, as well as
its most salient limitations today [277]. Additionally, we laid out
some preliminary groundwork of requirements to formally implement the self-forensics autonomic property within the ASSL toolset, in order to allow the self-forensics property to be added to legacy small-to-medium open-source and academic software
systems [322]. We note that all studied software systems are remarkably similar in their evidence structure, suggesting that a reusable model can be built to observe a large number of
similar systems for self-forensic evidence [321]. Using an AspectJ-based implementation for
these software-based systems, component- or even method-level external observations can
yield both fine-grained and coarse-grained observation sequences and allow modeling not
only the contexts in the form of observation sequences of data structures and dataflows, but
also automatically building the state transition function ψ in Forensic Lucid, ready to
be exported for use by the investigators in order to model Ψ−1 for event reconstruction.
This approach can also be used for automated debugging of complex systems, allowing
the developers and quality assurance teams to trace a problem and recreate the sequence of
events back to what may have been the cause, better than a mere stack trace and the resulting core dump from individual modules. Moreover, mobile devices these days increasingly
use Java, so the said forensic data can be created on the fly with AspectJ observation of
mobile device OS components [321].
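As a rough illustration of the AspectJ-based observation idea, the following annotation-style aspect logs before and after method executions in selected packages. It is a minimal sketch only: the ForensicLog sink and the chosen pointcut expression are hypothetical, and a real self-forensics module would emit Forensic Lucid context terms rather than plain strings.

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import java.util.Arrays;

// Minimal annotation-style AspectJ sketch; ForensicLog is a hypothetical sink,
// and the pointcut expression is only an example of method-level observation.
@Aspect
public class SelfForensicsObserver {

    // Observe every method execution in the (hypothetical) observed packages.
    private static final String OBSERVED = "execution(* com.example.observed..*(..))";

    @Before(OBSERVED)
    public void beforeCall(JoinPoint jp) {
        ForensicLog.record("before", jp.getSignature().toLongString(), Arrays.toString(jp.getArgs()));
    }

    @After(OBSERVED)
    public void afterCall(JoinPoint jp) {
        ForensicLog.record("after", jp.getSignature().toLongString(), "");
    }
}

// Hypothetical append-only log of (event, signature, details) observations.
final class ForensicLog {
    static synchronized void record(String event, String signature, String details) {
        System.out.println("(" + event + " " + signature + " " + details + ", 1, 0)");
    }
}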
We argued that a self-forensics property, formally encompassing self-monitoring, self-analyzing,
and self-diagnosing systems along with a decision engine and a common forensic
logging data format, can standardize the design and development of the hardware and software of
airborne vehicles, road vehicles, spacecraft, and marine and submarine vehicles, to improve the safety
and autonomicity of the ever more complex software and hardware systems in
such vehicles, and to further their analysis when incidents happen [277].
After the high-level evidential observation sequences have been specified, each self-forensic
module designer needs to specify the complete details, as required and deemed important. Then,
during the normal operation of the systems, the self-forensic modules are turned on and
continuously log data in the Forensic Lucid format (note that the forensic logs
themselves must of course be of high integrity and confidentiality, and there should be enough external storage to accommodate them; these problems, however, are not specific to this research and are
addressed for other needs elsewhere, as for any type of logging or continuous data collection). The format, assuming it becomes a standard, would be processable by Forensic Lucid-enabled expert systems, either fully automatically (appropriate for autonomic systems) or interactively, with the investigators making their case and analyzing the gathered evidence [321].
In summary, such a self-forensic specification is useful not only for cybercrime investigations, incident analysis, and response, but also for training new (software and other)
engineers on a team, flight controllers, and others involved in data analysis, to avoid potentially overlooking facts and relationships in the data and making incorrect ad-hoc decisions
when performing an investigation.
In a Forensic Lucid-based expert system (see Section D.3.2, page 363) on the “human side” of the investigation (which was the original purpose of Forensic Lucid in cybercrime investigations in the
first place), one can accumulate a number of contextual facts
from the self-forensic evidence to form the knowledge base, and the trainees can formally
construct their theories of what happened and see whether their theories agree with the evidential
data. Over time (unlike in most cybercrime investigations), it is possible to accumulate a
general enough contextual knowledge base of encoded facts that can be analyzed across cases,
deployed systems, flights, missions, or networks via sharing and firmware updates, locally or
globally and on the deep web, when multiple agencies and aircraft manufacturers collaborate
for scalability [277, 308]. Such data can be shared across law enforcement, military, and
research institutions [321], similarly to the Security Information Exchange (SIE) project of
ISC (isc.org).
For the purposes of portability or model checking, depending on the evaluating engine,
the Forensic Lucid constructs comprising the context of the encoded self-forensic evidence can
be translated into XML specifications (which can be unsatisfiable [169] and less compact, but
perhaps more portable) for embedded systems and the like, or translated into
the PRISM probabilistic model checker’s language [467] for model validation [321].
D.5.1
Advantages of Forensic Lucid for Self-Forensics
Forensic Lucid (Chapter 7) is context-enabled, built upon intensional logic and the Lucid
language family, which have existed in the literature and mathematics for more than 35 years and
served the initial purpose of program verification [25, 26]. Forensic Lucid and its evaluation engine provide the ability for event reconstruction by validating claims against the
available evidence [308], including when the evidence is incomplete or imprecise. It is, therefore,
a natural choice for self-forensics.
D.5.2
Limitations of Self-Forensics
The self-forensics autonomic property is essential for the automated analysis of simulated
(testing and verification) and real incidents in autonomic or semi-autonomic hardware and
software systems, but it cannot be mandated as absolutely required due to a number of
limitations it creates. However, whenever monetary, time, and other resources allow,
it should be included in the design and development of autonomous spacecraft, military
equipment, or software systems [277, 308]. What follows is a summary of the limitations.
1. The cost of the overall autonomic systems will increase [277, 308]. The cost can be
offset if the data are logged centrally and processed by central controllers.
2. If built into software, the design and development requires functional long-term storage
and CPU power [277, 308].
3. The required communications will likely increase the bandwidth requirements (e.g., for
scientific and exploratory aircraft, if the science data are duplicated in the forensic stream;
if the forensic data are mirrored into the scientific stream, more than twice the bandwidth and
storage are used) [277, 308].
4. There is an overall overhead when collecting forensic data continuously (the data can, e.g., be offloaded
along with the usual science and control data down to the flight control towers or
centers periodically) [277, 308].
5. The self-forensics logging and analysis software should ideally reside in ROM or a similarly
durable flash type of memory or an SSD, but should still allow firmware and software
upgrades [277, 308].
6. We do not tackle other autonomic requirements of the system assuming their complete
coverage and presence in the system from the earlier developments and literature, such
as self-healing, protection, etc. [277, 308].
7. A transition function modeling the hardware and software specification has to be produced by the qualified engineering team throughout the design phase and encoded in
Forensic Lucid. To simplify this task, a data-flow graph (DFG) [89, 311] interactive
development environment (IDE), akin to a CAD application, is to be made available [308].
D.5.3
Future Directions
As a part of the future work in this direction we plan to complete the implementation
of the self-forensics modules to generate evidence according to the specifications presented
earlier. Specifically, our priority is to implement the notion of self-forensics in the GIPSY
and DMARF systems. Then, we plan to gather performance and storage overhead statistics
when the self-forensics modules are turned on. We also need to improve the granularity of
the event collection, correlation, and encoding with AspectJ-based tracing on the level of
the method calls with the before and after triggers. Further, we want to explore events with
probabilities, credibility, trustworthiness factors and the like using a probabilistic approach
when exact reasoning is not possible [321].
We also need to explore and specify how malicious and determined attackers would
be dealt with by the self-forensics methods and techniques in order to retain valuable, trustworthy
forensic evidence. (In a nutshell, the issue is similar to remote logging in the specified
format, perhaps offsite, over a maintained secure channel, but this deserves a separate, complete
discussion; additionally, an already investigated concept, such as self-protection for DMARF [320], can be
employed as one of the assurance techniques for self-forensics) [321].
D.5.3.1
Expected Outputs
Summarizing the expected outputs of this self-forensics project: a formal, mathematically sound
and complete model of self-forensics, for validation and verification, is to be finalized. It is
to be followed by a concrete implementation, first in the two mentioned research
distributed agent systems, GIPSY and DMARF. Then we move on to larger-scale
commercial and open-source software systems with the companies listed, further extending it to
actual networked devices. This would allow gathering overhead and other metrics and
refining the model and its scalability, as well as the user tools for analysis and investigation by
the human investigators (an expert system with an updatable forensic knowledge base). The further
outcomes would be robotic and research craft prototypes (based on whatever we get access
to, or even a mini research nano-satellite we build, e.g., as a part of the ServerSky.org project),
and self-forensic modules and sensors in a car and a plane.
D.5.3.2
Expected Outcomes
Standardized self-forensic hardware and software components in robotic devices, road vehicles, aircraft, spacecraft, marine vessels, operating systems, and distributed autonomous
software systems alike. The corresponding industries are the audience, as well as law enforcement agencies and any type of human investigators involved in incident analysis, event
reconstruction, and response.
Appendix E
Graph-based Representation and
Visualization
This chapter is formed from previously published and unpublished discussions on the
need for, and ideas about, a variety of visualization aspects, techniques, and tools pertaining to
this research. The purpose is to determine how the findings can be applied to Forensic
Lucid and investigation case management. It is also natural to want a convenient and usable
evidence visualization, its semantic linkage, and the reasoning machinery behind it. We present
some of these deliberations as a future work item, with a detailed review of the related work.
Visualization requirements in Forensic Lucid have to do with different levels of case
knowledge abstraction, representation, and aggregation, as well as the operational aspects, as the
final long-term goal of this proposal. They encompass anything from the finer-detailed representation of hierarchical contexts (Section 7.2.2, page 158), to Forensic Lucid programs,
to the documented evidence and its management and linkage to programs, to evaluation, and
to the management of GIPSY software networks. This includes an ability to arbitrarily switch
between these views, combined with usable multimodal interaction.
E.1
Related Work
There are a number of items and proposals in graph-based visualization and the corresponding
languages. In 1982, Faustini proved that any Indexical Lucid program can be represented
as a DFG [108]. In 1995, Jagannathan defined various graphical intensional and extensional
models for GLU programming, arguably one of the first practical graph-based visualization
proposals for Lucid programs [187]. Paquet subsequently expanded on this in 1999 for
multidimensional intensional programs, as exemplified in Figure 90 [361]. Stankovic, Orgun,
et al. proposed the idea of visual parallel programming in 2002 [441].
The GIPSY R&D team’s own work in the area (Chapter 6) includes the theoretical foundation and initial practical implementation of the DFGs by Ding [89, 361]. Ding provided
the first implementation of Paquet’s notion within the GIPSY project in 2004 [89], using
Graphviz’s lefty GUI and dot languages [32, 33], along with bi-directional translation
between GIPL’s or Indexical Lucid’s abstract syntax trees (ASTs) and dot’s [311]. Additionally, an idea was proposed for visualization and control of communication patterns
and load balancing by having a “3D editor” within RIPE [282]. The concept behind such
an editor is to render as graphs in 3D space the current communication patterns of a GIPSY
program in execution, or to replay them back, and to allow the user to visually redistribute demands
if the workload between workers becomes unbalanced. It is designed as a kind of “virtual
3D remote control” with a mini expert system, input from which can be used to teach the
planning, caching, and load-balancing algorithms to perform more efficiently the next time a
similar GIPSY application is run. Related research work on visualization of load balancing,
configuration, formal systems for diagrammatic modeling, and visual languages and the corresponding graph systems was also presented by several authors in [13, 38, 258, 505, 540]. They
defined a number of key concepts that are relevant to our visualization mechanisms within
GIPSY [311] and its corresponding General Manager Tier (GMT) [191], for which Rabah
provided the initial PoC visualization and basic configuration management of the GIPSY
nodes and tiers with the graphical GMT [393] (e.g., Figure 38, page 151).
More recently (2012), another very interesting work of relevance was proposed by Tao et
al. on visual representation of event sequences, reasoning, and visualization [458] (in their case,
of EHR data), and around the same time Wang et al. proposed a temporal search algorithm
for personal history event visualization [517]. Monroe et al. note the challenges of specifying
intervals and absences in temporal queries and approach those with the use of a graphical
language [323]. This could be particularly useful for no-observations (Section 7.5.1, page 204)
in our case. A recent novel HCI concept of documentary knowledge visual representation and
gesture- and speech-based interaction in the Illimitable Space System (ISS) was introduced
by Song [437] in 2012. A multimodal case management interaction system, called Vispol
Tangible Interface: An Interactive Scenario Visualization1, was proposed for the German police.
We propose to build upon those works to represent the nested evidence and the crime scene as
a 2D or even 3D DFG, as well as the reconstructed event flow upon evaluation. Such a feature
is designed to support the work on intensional forensic computing, evidence modeling and
encoding, Forensic Lucid [304, 310, 312, 321, 322] (Chapter 7), and MARFL ([269,
272], Appendix C), in order to aid the investigators’ tasks of building and evaluating digital forensic
cases [311].
E.2
Visualization Example Prototypes
From the related work, a conceptual example of a 2D DFG corresponding to a simple Lucid
program is shown in Figure 90, produced by Paquet [361]. The actual rendering of such graphs
currently in the GIPSY environment is exemplified in Figure 91, by Ding [89] in 2004 [311].
Figure 40, page 158, shows the conceptual model of hierarchical nesting of the evidential
statement es context elements, such as observation sequences os and their individual observations o (consisting of the properties being observed (P, min, max, w, t), details of which are
discussed in the referenced related works and in Chapter 7). These 2D conceptual visualizations are proposed to be renderable at least in 2D, or in 3D via an interactive interface,
to allow modeling complex crime scenes and multidimensional evidence on demand. The
end result is envisioned to look like either expanding or “cutting out” nodes or complex-type
1
http://www.youtube.com/watch?v=_2DywsIPNDQ
Figure 90: Canonical example of a 2D dataflow graph-based program [361]
results, as exemplified in Figure 92 [311].
Figure 91: Example of an actual rendered 2D DFG-based program with Graphviz [89]
E.3
Visualization of Forensic Lucid
To aid investigators in modeling the scene and evaluating it, we propose to extend the design
and implementation of Lucid DFG programming to Forensic Lucid case modeling
and specification, to enhance the usability of the language and the system and its behavior
2
cutout image credit is that of Europa found on Wikipedia http://en.wikipedia.org/wiki/File:
PIA01130_Interior_of_Europa.jpg from NASA
Figure 92: Modified conceptual example of a 2D DFG with 3D elements [311]
Figure 93: Conceptual example of linked 3D observation nodes [311]
in 3D. With the ongoing advances, the visualization project was proposed to further enhance
the usability of the discussed language and system and the related tools, following good
interaction design practices [404].
E.3.1
3 Dimensions
The need to represent forensic cases, evidence, and other specification components visually
is obvious for usability and other reasons. Placing the “program” (specification) and the case
in 3D space can help arrange and structure the case better in a virtual
environment, with the evidence items encapsulated in 3D spheres akin to Russian dolls,
which can be navigated in depth to any level of detail, e.g., via clicking (see Figure 93) [311].
The depth and complexity of the operational semantics and the demand-driven (eductive) execution model are better represented and comprehended visually in 3D, especially when doing
event reconstruction. Ding’s implementation allows navigation from a graph to a subgraph
by expanding more complex nodes to their definitions, e.g., more elaborate operators such as
whenever (wvr) or advances upon (upon), their reverse operators, forensic operators, and
others [311] found in Section 7.3, page 166.
E.3.2
Requirements Summary
Based on the preceding discussion, some immediate requirements to realize the envisioned
DFG visualization of Forensic Lucid programs and their evaluation are summarized below [311]:
• Visualization of the hierarchical evidential statements (potentially deeply nested contexts), see Figure 40, page 158 [311].
• Placement of hybrid intensional-imperative nodes into the DFGs, such as mixing Java
and Lucid program fragments [311]. The GIPSY Research and Development group’s
previous research did not deal with how to augment Ding’s DFGAnalyzer and
DFGGenerator to support hybrid GIPSY programs. This can be addressed by
adding an “unexpandable” (one cannot click one’s way through its depth) imperative
DFG node to the graph. To make it more useful, i.e., expandable, so that it is possible to
generate the GIPSY code off the DFG or reverse it back, it should be possible to leverage
recent additions to Graphviz and GIPSY [311]. The newer versions of Graphviz support
additional features that are more usable for our needs at present. Moreover, with the
advent of JOOIP ([528], Section 4.3.2.3, page 94), the Java 5 ASTs are made available
along with the embedded Lucid fragments, which can be tapped into when generating the
dot code’s AST [311].
• There has to be a Java-based wrapper for the DFG Editor of Ding [89] to enable its
native use within the Java-based GIPSY and plug-in IDE environments like Eclipse or
NetBeans [311].
• Leveraging visualization and control of communication patterns and load balancing for
the task in Euclidean space with the GGMT of Rabah [393].
• Ability to switch between different views (control, evidence, DFG, etc.).
E.3.3
Selection of the Visualization Languages and Tools
One of the goals of this work is to find the optimal technique, with soundness, completeness,
and formal specifications, along with ease of implementation and better
usability. We began by gathering insights and requirements for selecting a technique, or a
combination of techniques, with the most plausible outcome [311]. The current design allows
any of the implementation paths to be chosen [311].
E.3.3.1
Graphviz
First, the most obvious choice is Ding’s [89] basic DFG implementation within GIPSY, as it is already
a part of the project and was done for the two predecessor Lucid dialects: GIPL and Indexical
Lucid. Additionally, the modern version of Graphviz now has some integration with
Eclipse [99], so GIPSY’s IDE—RIPE (Run-time Interactive Programming Environment)—
may very well become an Eclipse-based plug-in [311].
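To give a flavor of the Graphviz route, here is a small, self-contained Java sketch that emits a dot description of a toy evidential statement nested into observation sequences and observations. The node names and the nesting shown are purely illustrative and are not produced by Ding's actual DFG code.

// Emits a Graphviz dot graph for a toy nested evidential statement.
// The structure (es -> os -> o) and the names are illustrative only.
public final class EvidenceDotExample {
    public static void main(String[] args) {
        StringBuilder dot = new StringBuilder();
        dot.append("digraph evidential_statement {\n");
        dot.append("  rankdir=LR;\n");
        dot.append("  es [label=\"es\", shape=doublecircle];\n");

        String[][] sequences = {
            {"os_alice", "o_claim", "o_printout"},
            {"os_printer", "o_add_job", "o_delete_job"}
        };
        for (String[] seq : sequences) {
            String os = seq[0];
            dot.append("  ").append(os).append(" [label=\"").append(os).append("\", shape=box];\n");
            dot.append("  es -> ").append(os).append(";\n");
            for (int i = 1; i < seq.length; i++) {
                String o = seq[i];
                // Each observation node carries a (property, min, max) annotation in its label.
                dot.append("  ").append(o).append(" [label=\"").append(o).append(" (P,1,0)\"];\n");
                dot.append("  ").append(os).append(" -> ").append(o).append(";\n");
            }
        }
        dot.append("}\n");
        System.out.println(dot); // pipe the output into `dot -Tpng` to render it
    }
}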
E.3.3.2
PureData
Puckette came up with the PureData [388] language and its commercial offshoots (Jitter/Max/MSP [74]), which also employ DFG-like programming, with boxes and inlets and
outlets of arbitrary data types graphically placed and connected as “patches”, allowing for
sub-graphs and external implementations of inlets in procedural languages. Puckette’s original design targeted signal processing for electronic music as well as video processing and
production for interactive artistic and performative processes, but has since outgrown that
notion. The PureData externals allow deeper media visualizations in OpenGL, video, etc.,
thereby potentially enhancing the whole aspect of the process significantly [311]. Curiously,
Lucid, as a dataflow language, is said to have influenced PureData3.
E.3.3.3
BPEL
BPEL (the Business Process Execution Language) and its visual realization within NetBeans [338, 450] for SOA (service-oriented architectures) and web services is another good
model for inspiration [177, 210, 349]; it has recently undergone a lot of research and
development, including flows, picking structures, faults, and parallel/asynchronous and sequential activities. More importantly, BPEL notations have a backing formalism based
upon Petri nets. (See, e.g., a visual BPEL graph in the BPEL Designer (which first came with
the NetBeans IDE) that illustrates two flows and three parallel activities in each flow, as well
as the modeling of asynchrony and timeouts.) BPEL specifications actually translate to executable
Java web services composition code [311].
E.3.3.4
Illimitable Space System
We explore the idea of scalable management, visualization, and evaluation of digital evidence through modifications to the interactive 3D documentary subsystem of the Illimitable
Space System (ISS) [437], to represent and semantically link the evidence and to provide a usable interface to
digital investigators. As we stated in Chapter 1, that work may scale, when properly reengineered and enhanced, to act as an interactive “3D window” into the evidential knowledge
base, grouped into semantically linked “bubbles” visually representing the documented
evidence. By moving such a contextual window, or rather navigating within the theoretically illimitable space, an investigator can sort out and re-organize the knowledge items as
needed prior to launching the reasoning computation. The interaction design aspect would be
particularly useful to open up the documented case knowledge, link the relevant
witness accounts, and group the related knowledge together. This is a proposed solution to
the problem of visualizing large volumes of “scrollable” evidence that does not
need to be visualized all at once, but rather acts like a snapshot of a storage depot.
Stills from the actual ISS installation, hosting multimedia data (documentary videos) whose contents
users can call out by voice or gestures to examine, are shown in Figure 94. We propose to reorganize the latter into more structured spaces, linked together by the investigators grouping
the relevant evidence semantically, instead of the data-containing bubbles floating around
randomly.
3
4
http://en.wikipedia.org/wiki/Lucid_(programming_language)
http://vimeo.com/51329588
Figure 94: Interactive documentary illimitable space visualization and management [437]
E.4
Summary
With the goal of having a visual DFG-based tool to model Forensic Lucid case specifications,
we deliberate on the possible choices of visualization languages and paradigms within
today’s technologies and their practicality, and attempt to build upon previous sound work in
this area. The main choices identified so far include a Ding-derived Graphviz-based implementation,
a PureData-based or BPEL-like approach, or the ISS. All the languages are more or less industry standards and
have some formal backing; the ones that are not may require additional work to formally
specify their semantics and to prove the correctness and soundness of the translation to and from
Forensic Lucid [311].
The main problem with PureData and Graphviz’s dot is that their languages do not have
formally specified semantics, only some semantic notes and lexical and grammatical structures
(e.g., see dot’s [32]). If we use any or all of these, we will have to provide translation rules,
together with their semantics and equivalence to the original Forensic Lucid specification, similarly
to what was done by Jarraya between the UML 2.0/SysML state/activity diagrams
and probabilities in [190] when translating to PRISM [311]. The ISS seems the most scalable
approach, one that can aggregate all the others, but it requires a significant number of modifications.
Thus, this aspect of the work is at the active research stage. Given the author’s familiarity with the languages and systems mentioned, the final choice may end up being an
intermediate form or a combination of inter-translatable forms [311] of those languages and
systems.
Index
Forensic Lucid
Back-tracing, 206
Context, 207
Features, 157
Forward tracing, 206
Generic Observation Sequences, 208
Properties, 157
Requirements, 157
Syntax, 166
Transition Function, 208
#, 80, 85, 86, 162, 179–182, 195, 202, 214, 218
@, 80, 81, 85, 86, 162, 181, 182, 193–195, 207,
214, 218, 337
., 179, 180
ADMARF, 323, 371, 376
advances upon, 85
advances upon, 178
AGIPSY, 133, 136, 323, 376
an observation, 31
an observation sequence, 31
API, 77, 95, 96, 136, 351–356, 358
[], 334, 341
#, 77, 81, 95, 96, 136, 351–354, 356, 358
$, 252
$HOME, 266
ActivityLogForensicLucidEncoder, 267
alice, 252
alice claim, 252
ArgusForensicLucidEncoder, 268
ARPPingForensicLucidEncoder, 268
AspectGEE, 225
AST, 373
AuthenticatedObject, 374
backtraces, 254
bitrate, 353
bod, 170, 183, 205, 370
bool, 332, 334, 340
bool::true, 334
boolean, 332, 334
389
byte, 334, 337
ByteArrayFileReader, 108
c, 253
ca.concordia.cse.gipsy.GEE.backends.flucid, 229
channels, 353, 354
char, 334, 339
class, 341
classification, 354
classify, 355
cluster, 286
CodeSegment, 140
collect(), 266
comb(), 51, 191
Comparator, 345
Configuration, 229, 234, 374
cutoff, 354
DEFT2010App, 101
Demand, 373
DemandGenerator, 224
DFGAnalyzer, 386
DFGGenerator, 386
DHCPLogForensicLucidEncoder, 268
Dictionary, 212, 223, 232
dim(m), 187
Dimension, 231, 232, 334, 335, 340, 347
dimension, 332, 334, 339
double, 106, 234, 332, 334, 338, 343
DUMMY, 354
embed(), 342
empty, 185, 187, 188
encode(), 266
EncodingUtils, 237, 268
EncryptedObject, 374
ENDPOINTING, 354
eod, 170, 183, 205, 370
es, 251, 252
EVENTS, 370
evidential statement, 231
EvidentialStatement, 231
Executor, 223–225, 334
extern, 140
F, 252, 254
f, 353
false, 191, 340
fby, 254
feature extraction, 354
FFT, 354
FFT-LOW-PASS-FILTER, 354
FileItem, 117, 118
FileTypeIdentApp, 104, 109, 351
FileTypesIdentApp, 108
FingerForensicLucidEncoder, 268
Float, 334
float, 332, 334, 338
ForensicGEE, 213, 224, 225
FORENSICLUCID, 212, 223
ForensicLucidInterpreter, 213, 223, 224, 226
ForensicLucidSemanticAnalyzer, 212, 222
fork(), 274
FormatTag, 212, 223
FunctionItem, 342
GEE, 213
GenericArithmeticOperatorsDelegate, 344
GenericBitwiseOperatorsDelegate, 346
GenericContextSetOperatorsDelegate, 347
GenericLogicComparisonOperatorsDelegate,
345
GenericLogicOperatorsDelegate, 345
getType(), 266, 267
GIPC, 140
gipsy.Configuration, 229
gipsy.GEE, 128
gipsy.GEE.Executor, 214
gipsy.GIPC, 128
gipsy.interfaces, 231
gipsy.lang, 139
gipsy.RIPE, 128
GIPSYArray, 334, 341, 347
GIPSYBoolean, 334, 339, 340, 345
GIPSYCharacter, 334, 339
GIPSYContext, 218, 231, 232, 335, 339, 340,
347
GIPSYDouble, 334, 338, 343
GIPSYEmbed, 334, 343, 347
GIPSYFloat, 334, 338, 343
GIPSYForensicContext, 231
GIPSYFunction, 334, 342, 347, 350
GIPSYIdentifier, 334
390
GIPSYIndentifier, 341
GIPSYInteger, 205, 334, 337, 343
GIPSYObject, 334, 341, 347, 350
GIPSYOperator, 231, 334, 342, 347
GIPSYProgram, 139, 222, 225, 342, 373
GIPSYString, 334, 338, 350
GIPSYType, 340, 347, 348, 373
GIPSYVoid, 332, 334, 339
hostnameformat(), 268
IAnnotation, 231
IArithmeticOperatorsProvider, 344
IBitwiseOperatorsProvider, 346
IContextSetOperatorsProvider, 347
IDemand, 373
IDemandGenerator, 224
IEvaluationEngine, 224
IIntensionalCompiler, 211
ILogicComparisonOperatorsProvider, 345
ILogicOperatorsProvider, 345
immutable, 342
ImperativeNode, 212, 231, 335
INITIATED BY, 370
InputStream, 108
int, 106, 332, 334, 337, 343
IntegrityAddedObject, 374
Interpreter, 218, 223, 224
invpsiacme, 252
invtans, 254
ISemanticAnalyzer, 221, 222
isInTagSet(), 191
java.lang, 332
java.lang.Boolean, 340
java.lang.Character, 339
java.lang.Double, 338
java.lang.Float, 338
java.lang.Long, 337
java.lang.Object, 341, 343
java.lang.String, 341
java.lang.StringBuffer, 338
java.net.URI, 343
java.util.Vector, 231
JPG, 353
Key, 375
LegacyEductiveInterpreter, 224
LegacyInterpreter, 224
Long, 334
long, 332, 334, 337
Long.MAX VALUE, 198, 205
Long.MIN VALUE, 198
LPC, 354
LucxSemanticAnalyzer, 222
m, 187, 188
macformat(), 268
main(), 251
manuf, 252
MARF, 106
marf.Configuration, 229, 356
marf.MARF, 356
marf.util.FreeVector, 194, 231, 232
MARFCATApp, 351
MARFCATDGT, 147, 223
MARFCATDWT, 223
Method, 334
Metric, 284
MIDI, 353
MINMAX, 354
MP3, 353
MSWForensicLucidEncoder, 267
NbtscanForensicLucidEncoder, 268
next(), 349
NmapForensicLucidEncoder, 268
NodeAnnotation, 231
null, 169
NumericComparator, 345
Object, 334
Observation, 231
observation, 231
observation sequence, 231
ObservationSequence, 231
oConfigurationSettings, 229
OCTMARF, 223
OGG, 353
OpenGL, 326, 387
Packages
gipsy.apps, 223
gipsy.apps.MARFCAT, 223
gipsy.apps.MARFPCAT, 223
gipsy.apps.memocode.genome, 223
gipsy.apps.OCTMARF, 223
gipsy.GIPC.intensional.SIPL.ForensicLucid,
242
gipsy.lang.context.forensic, 231
GMT, 145
RIPE, 145
pby, 254
PENDING, 226, 228
391
perp o, 271
place, 62
poles, 354
preprocessing, 354
Preprocessor, 132, 139–141, 334, 335
printer, 252
PRISMWrapper, 218, 225
Properties, 229
RANDOM-FEATURE-EXTRACTION, 354
RAW, 354
RAW-PREPROCESSING, 354
Reader, 108
Regression, 232
Result, 234
ResultSet, 106, 117, 234
RTForensicLucidEncoder, 267
S, 253
sample loader, 353
SecureBean, 374
security info, 374
SELF FORENSICS, 369, 371
SemanticAnalyzer, 222, 334
Serializable, 229
signed char, 348
SINE, 353
SpeakerIdentApp, 104, 108, 351, 357, 358
SSHForensicLucidEncoder, 268
stat(), 107, 109
String, 332, 334, 338, 339
string, 332, 334, 338
StringBuffer, 332, 350
struct, 341
succ, 336
SwitchLogForensicLucidEncoder, 267
SWMForensicLucidEncoder, 268
synchronized, 229
tag(m), 188
TagSet, 231, 340, 347
TERMINATED BY, 370
tests, 232
TIFF, 353
time, 62
timestampformat(), 268
toString(), 334
train, 355
TrainingSet, 234
trans, 254
true, 184, 185, 190, 191, 332, 339, 340
CLIPS, 5, 284, 324
Common Lisp, 1, 6–9, 43, 46, 84, 208, 210, 244,
250–254, 278, 281
Context, 164, 207
context, 64, 76, 85
Context calculus, 165
Context set, 164
context set, 340
CORBA, 106, 324
CVE
CVE-2006-7195, 120
CVE-2006-7196, 120
CVE-2006-7197, 120
CVE-2007-0450, 120
CVE-2007-1355, 120
CVE-2007-1858, 120
CVE-2007-2449, 120
CVE-2007-2450, 120
CVE-2007-3382, 120
CVE-2007-3385, 120
CVE-2007-3386, 120
CVE-2007-5333, 120
CVE-2007-5342, 120
CVE-2007-5461, 120
CVE-2007-6286, 120
CVE-2008-0128, 120
CVE-2008-1232, 120
CVE-2008-1947, 120
CVE-2008-2370, 120
CVE-2008-2938, 120
CVE-2008-4308, 120
CVE-2008-5515, 120
CVE-2008-5519, 120
CVE-2009-0033, 120
CVE-2009-0580, 120
CVE-2009-0781, 120
CVE-2009-0783, 120
CVE-2009-2693, 120
CVE-2009-2901, 120
CVE-2009-2902, 120
CVE-2009-3548, 120
CVE-2010-2295, 119
CVE-2010-2297, 119
CVE-2010-2298, 119
CVE-2010-2299, 119
CVE-2010-2300, 119
CVE-2010-2301, 119
CVE-2010-2302, 119
Back-tracing, 206
Background, 24
basic belief assignment, 68
belief, 68
Blackmail Investigation, 51
BPEL, 323, 387, 388
C, 111, 114–116, 126, 130, 140–142, 245, 246, 251, 324, 341, 348, 349
C++, 90, 91, 111, 126, 130, 132, 140, 142, 245, 246, 251, 331, 332, 341, 348, 349
C#, 126
Case
Printer, 44
Forensic Lucid, 250
Cases
Blackmail, 51
Chrome
5.0.375.54, 111, 119, 246
5.0.375.70, 111, 246
TXT, 353
union, 348
unsigned char, 348
upon, 385
value, 374
void, 332, 334, 339
Warning, 117
WAV, 353, 354
WebEditor, 152
where, 81, 89, 193, 194, 356
windowsize, 354
WriterIdentApp, 101, 351
wvr, 385
Architecture, 211
as soon as, 85
as late as, 177
as soon as, 177
asa, 85
Ashcroft, E. A., 82
AspectJ, 26, 27, 89, 211, 213, 214, 220, 242, 282,
363, 366, 370, 378, 379, 381
ASSL, 136, 323, 360, 362, 364, 365, 367, 369–371,
378
AST, 139, 144, 206, 212, 213, 218, 220, 221, 223,
224, 226, 230, 323, 342, 370, 373, 382,
386
autonomous system, 368
392
CVE-2010-2303, 119
CVE-2010-2304, 119
Cyberforensic
Finite State Automata, 29
Cyberforensics, 1
explanation, 33
explanation model, 31
explanation of an evidential statement, 34
explanation of observation sequence, 34
data types
matching Lucid and Java, 332
dataflow
pipeline, 85
pipelined, 83
definition environment, 79
Dempster–Shafer, 2, 3, 14, 58, 65–68, 72–74, 154,
155, 159, 191, 192, 203–205, 207, 210,
212, 244, 248, 250, 278, 281, 282, 287,
290, 324
Design, 211
DFG, 10, 11, 254, 285, 286, 324, 380, 382–388
DGT, 97, 146, 147, 222, 223, 225, 226, 228, 324
digital forensics, 1
dimensions, 64
DMARF
Autonomic, 323, 371, 376
DMF, 10, 132, 134, 135, 143, 324
DMS, 13, 132–134, 142, 143, 148, 149, 152, 218,
223, 224, 226, 229, 230, 324
Dovecot 1.2.0, 245, 246
Dovecot 1.2.17, 246
Dovecot 2.0-beta6, 111
Dovecot 2.0.beta6.20100626, 246
DST, 117, 147, 149, 215, 223, 226, 228, 229, 240,
259, 324, 350
duration, 207
DWT, 97, 147, 223, 240, 324
eduction, 64, 86
evaluation context, 79, 161
event reconstruction algorithm, 36
Events
LINK-UP, 258, 261, 263, 268
evidence
Dempster–Shafer, 2, 3, 14, 58, 65–68, 72–74,
154, 155, 159, 191, 192, 203–205, 207,
210, 212, 244, 248, 250, 278, 281, 282,
287, 290, 324
evidential statement, 34
Expert systems
CLIPS, 5, 284, 324
fby, 84
FFT, 324
Files
.ctx, 267, 277
.procmailrc, 265
.rt, 266
12345.activity.ctx, 273
12345.nfs.ipl, 273
12345.nmap.ctx, 273
12345.notspoofer.ipl, 273
12345.perp.ipl, 273
12345.rt.ctx, 272
12345.spoofer.ipl, 273
12345.switchlog.ctx, 273
123456.rt, 266
activity, 263
ForensicLucid.jjt, 242
ForensicLucid.thy, 289
mac-spoofer-transition.ipl, 273
switchlog, 258, 259, 261, 263, 267
Finite State Automata for Cyberforensic Investigation, 29
finite state machine, 30
Finite Tag Set:, 166
first, 84
fixed-length observation, 33
fixed-length observation sequences, 36
FOIL, 61, 288, 324
followed by, 84
Forensic Computing, 24
Forensic context, 164
Forensic Lucid, iii, 1–14, 18–20, 29, 43, 46, 51,
57, 63, 65, 73, 74, 77, 84, 86, 91, 92,
95, 97, 100–102, 106, 111, 118, 121, 122,
125–127, 133, 135, 136, 152, 154–172,
174, 179, 180, 182–184, 191–193, 196–
204, 206–213, 216, 218–220, 222, 223,
225, 226, 228–233, 235–237, 242, 244,
245, 247, 248, 250–259, 263, 265–267,
270, 273–275, 277–289, 291, 325, 359–
361, 363, 364, 366–370, 372, 374, 375,
378–380, 382–384, 386, 388, 389, 392,
397
393
., 170
#, 170
ala, 170
and, 170
asa, 170
fby, 170
first, 170
last, 170
Methodology, 154
nala, 170
nasa, 170
neg, 170
New Operators, 169
next, 170
not, 170
nrupon, 170
nrwvr, 170
nupon, 170
nwvr, 170
Operational Semantics, 192
Operators, 169
and #, 182
or, 170
pby, 170
prev, 170
rupon, 170
rwvr, 170
upon, 170
wvr, 170
xor, 170
forensic operators, 183
Fortran, 130, 140, 142
Forward tracing, 206
Frameworks
Autonomic Distributed MARF, 323, 371, 376
Cryptolysis, 360, 363, 364, 367, 375, 376
Distributed MARF, 104, 106, 323, 324, 360,
363, 365, 367, 372, 373, 381
DMF, 10, 132, 134, 135, 143, 324
GEE, 10, 13, 91, 128, 132, 133, 136, 138,
139, 142, 143, 151, 152, 157, 158, 160,
182, 206, 211, 212, 214, 218, 220, 223,
224, 228–230, 242, 282, 284, 289, 325,
330, 334, 335, 345, 366, 370, 373
GICF, 132–134, 141, 142, 325, 331
GIPC, 128, 131–133, 137–139, 142, 144, 151,
152, 157, 211, 212, 216, 218, 220–222,
242, 282, 325, 330, 335, 366, 370, 373
GIPSY Network, 151
GIPSY Type System, 331
JDSF, 285, 291, 326, 360, 363, 367, 373, 375
MARF, 95, 97–99, 101–104, 106–111, 115,
116, 121, 122, 127, 133, 234, 235, 245,
286, 290, 291, 324, 326, 351–356, 358,
364, 372
RIPE, 128, 132–134, 150, 151, 224, 285, 327,
383, 386
FTP, 325
functional unit, 368
GEE, 10, 13, 91, 128, 132, 133, 136, 138, 139,
142, 143, 151, 152, 157, 158, 160, 182,
206, 211, 212, 214, 218, 220, 223, 224,
228–230, 242, 282, 284, 289, 325, 330,
334, 335, 345, 366, 370, 373
GEER, 137–139, 141, 144, 146–148, 196, 213,
218, 220–223, 225, 226, 229, 325, 330,
370, 373
generic observation, 33, 39
Generic Observation Sequences, 208
generic observation sequences, 36
GICF, 132–134, 141, 142, 325, 331
GIPC, 128, 131–133, 137–139, 142, 144, 151, 152,
157, 211, 212, 216, 218, 220–222, 242,
282, 325, 330, 335, 366, 370, 373
Preprocessor, 139
Structure, 138, 221
GIPL, iii, 4, 9, 77–79, 81, 82, 92, 93, 131, 132,
136, 137, 144, 150, 160, 164, 166, 182,
186, 192, 194, 209, 212–214, 220, 222–
224, 231, 232, 281, 289, 325, 351–353,
356, 358, 382, 386
Syntax, 78
GIPSY, 2–4, 7, 9, 10, 13, 17–20, 57, 65, 78, 92–
95, 97, 102–104, 106, 112, 117, 121, 128–
144, 148–152, 156–158, 160, 182, 210–
212, 214, 216, 218, 219, 223, 224, 228–
230, 232, 238–240, 242, 258, 259, 262,
265, 273–275, 279, 281, 283–285, 287,
289, 291, 323, 325, 329–332, 335, 337–
340, 343, 348–350, 352, 360, 362–367,
369, 370, 373, 374, 381–383, 386
Structure, 137
Type System, 331
Types, 331
GIPSY Network, 151
394
140, 142, 144, 160, 162, 163, 192, 196,
GIPSY Program
197, 205, 209, 212–214, 221, 229, 230,
Segments, 140
245, 246, 251, 274, 286, 326, 330–332,
GIPSY program, 330
334, 335, 337–341, 345, 350–352, 357,
GIPSY Type System, 331
360, 361, 363, 364, 369, 370, 379, 386,
GLU, 77, 90, 98, 129–131, 135, 137, 141, 142,
387
147, 152, 325, 382
Jetty
GLU#, 90, 94, 95, 130
6.1.16, 245, 246
GMT, 145, 148, 150, 224, 325, 383
6.1.26, 246
Graphviz, 150, 278, 382, 384, 386, 388
Jini, 117, 132, 133, 135, 143, 149, 226, 228, 229
Header
Jitter, 89, 387
From:, 265, 266
JLucid, 92, 93, 132, 134, 141, 142, 163, 166, 167,
Subject:, 266
196, 209, 222, 281, 325, 327, 331, 342
Higher-order context, 158
JMS, 117, 132, 133, 135, 143, 149, 226, 228, 229,
Higher-Order Intensional Fuzzy Logic, 287
326
higher-order intensional fuzzy logic, 14
joint mass, 72
Higher-order Intensional Logic, 64
JOOIP, iii, 9, 10, 77, 94, 95, 133, 135, 136, 142,
HOIFL, 7, 14, 287, 288, 325
157, 160, 162, 163, 179, 192, 196, 209,
HOIL, 7, 9, 10, 13, 14, 17, 64, 76, 88, 96, 133,
218, 220, 222, 281, 326, 331, 350, 360,
281, 287, 325, 329–332, 335, 350
369, 370, 386
identifier-context, 64
IDS
Snort, 5
if then else, 85, 86, 181
iHTML, 74, 95, 325, 353
Implementation, 211
Indexical Lucid, iii, 9, 62, 77, 78, 89, 93, 130–
132, 150, 166, 192, 209, 213, 222, 281,
285, 286, 289, 325, 327, 382, 386
Infinite Tag Set:, 166
intensional
logic, 59, 64
operators, 64
programming, 59, 85
intensional logic, 64
Intensional operators, 64
Intensional Programming, 6
Intensional programming, 64
intensional value warehouse, 90
Intensionality, 336
intensionality, 329
Introduction, 1
IPL, 336
Isabelle, 15, 110, 288, 367
ISWIM, 85
Kahn, G., 83
Landin, P. J., 85
Libraries
Autonomic Distributed MARF, 323, 371, 376
Distributed MARF, 104, 106, 323, 324, 360,
363, 365, 367, 372, 373, 381
JDSF, 285, 291, 326, 360, 363, 367, 373, 375
MARF, 95, 97–99, 101–104, 106–111, 115,
116, 121, 122, 127, 133, 234, 235, 245,
286, 290, 291, 324, 326, 351–356, 358,
364, 372
OpenGL, 326, 387
LISP, 1–3, 6–8, 10, 12, 13, 44, 84, 250, 253, 280
Common Lisp, 1, 6–9, 43, 46, 84, 208, 210,
244, 250–254, 278, 281
Live forensics, 26
Logic
Intensional
First-order, 61, 288, 324
Higher-order, 7, 9, 10, 13, 14, 17, 64, 76,
88, 96, 133, 281, 287, 325, 329–332, 335,
350
logic
intensional, 64
non-intensional, 64
Java, 10, 17, 27, 29, 92–95, 98, 99, 102, 104, 108,
temporal, 62
111, 115, 116, 126–129, 131–133, 135,
395
Lucid, iii, 2–4, 6–9, 12, 13, 16, 17, 19, 61, 64,
65, 74, 76–80, 82–86, 88–96, 98, 128–
130, 132–137, 140, 142, 144, 146, 154,
156, 157, 160, 162, 163, 165, 170, 183,
193, 205, 207, 209, 210, 214, 218, 222–
224, 252, 254, 266, 281, 284, 285, 287,
289, 325, 329–332, 334–336, 338, 339,
350, 352, 356, 366, 370, 380, 382–384,
386, 387
Abstract Syntax, 78
Lucx, iii, 9, 74, 77, 78, 81, 83, 92, 96, 132, 133,
136, 157, 160, 163, 165–167, 180–183,
191–193, 195, 207, 209, 212, 213, 218,
220, 223, 230–233, 242, 248, 281, 286,
289, 331, 340, 347, 352, 353, 356
Luthid, 86
MacQueen, D. B., 83
map of a sequence of partitioned runs, 42
map of partitioned runs, 39
MARF, 95, 97–99, 101–104, 106–111, 115, 116,
121, 122, 127, 133, 234, 235, 245, 286,
290, 291, 324, 326, 351–356, 358, 364,
372
Applications
MARFCAT, 97, 101, 102, 110–118, 120,
121, 125–127, 133, 151, 218, 223, 236,
238, 240, 245–247, 281, 292, 326
MARFPCAT, 97, 99, 102, 121, 122, 127,
133, 223, 236, 247, 326
Autonomic, 323, 371, 376
Distributed, 104, 106, 323, 324, 360, 363,
365, 367, 372, 373, 381
Distributed Pipeline, 107
MARFCAT, 97, 101, 102, 110–118, 120, 121,
125–127, 133, 151, 218, 223, 236, 238,
240, 245–247, 281, 292, 326
MARFL, iii, 77, 95, 98, 102, 106, 133, 157, 158,
166, 179, 192, 224, 229, 235, 326, 351–
353, 355–358, 383
MARFPCAT, 97, 99, 102, 121, 122, 127, 133,
223, 236, 247, 326
Max/MSP, 89, 387
meaning of evidential statement, 36
meaning of observation sequence, 34
Methodology, 154
Forensic Lucid, 154
micro context, 164, 353
Misuse Cases, 247
ML, 15
Model-checking
PRISM, 4, 7, 14, 160, 183, 211, 213, 215–
218, 220, 225, 242, 282, 290, 327, 366,
370, 372, 379, 388
next, 84
NFS, 326
no-observation, 33, 366
no-observations, 34
Non-periodic Tag Set:, 166
not advances upon, 178
not as late as, 177
not as soon as, 177
not retreats upon, 179
not to retreat whenever, 176
not whenever, 176
observation, 32, 207
observation sequence, 33, 207
Onyx, 91, 132, 325, 327
OpenGL, 326, 387
Operational Semantics, 192
Forensic Lucid, 192
Indexical Lucid, 192
Lucx, 192
Operators
Forensic Lucid, 169
Options
–aspectj, 213
–flucid, 213
–prism, 213
-binary, 123–125
-cheb, 119, 120, 123–125
-cos, 119, 120, 123–125
-diff, 119, 120, 123–125
-dynaclass, 123–125
-eucl, 119, 120, 123–125
-fft, 119, 120, 123–125
-flucid, 123–125
-hamming, 119, 120, 123–125
-low, 125
-mink, 119, 120, 123–125
-nopreprep, 119, 120, 123–125
-raw, 119, 120, 123
-sdwt, 124
all, 229
396
#<INTENSIONALLANG>, 140
#CPP, 140
#FORENSICLUCID, 140
#FORTRAN, 140
#GIPL, 140
#INDEXICALLUCID, 140
Packages
#JAVA, 140
gipsy.apps, 223
#JLUCID, 140
gipsy.apps.MARFCAT, 223
#JOOIP, 140
gipsy.apps.MARFPCAT, 223
#LUCX, 140
gipsy.apps.memocode.genome, 223
#OBJECTIVELUCID, 140
gipsy.apps.OCTMARF, 223
#ONYX, 140
gipsy.GIPC.intensional.SIPL.ForensicLucid, 242 #PERL, 140
gipsy.lang.context.forensic, 231
#PYTHON, 140
GMT, 145
#TENSORLUCID, 140
RIPE, 145
#TRANSLUCID, 140
partitioned run, 31
#funcdecl, 140, 335
partitioning, 31
#typedecl, 140, 335
Pebble, 111, 115, 116, 246
self-forensic computing, 26
Periodic Tag Set:, 166
self-forensics, 26, 359, 364
Perl, 122, 126, 140, 142, 236, 266
sensor, 368
PHP, 111, 126, 245, 246
set of all partitioned runs, 31
plausibility, 68
set of all runs, 31
pLucid, 86, 224
Simple context, 164
possible world semantics, 60
simple context, 340, 353
post, 165
SIPL, 78, 182, 211, 220, 327
Preprocessor, 139
Snort, 5
GIPC, 139
stream, 77, 83, 85
Printer Case, 44
swpvio, 261, 263, 267
Forensic Lucid, 250
Syntax
PRISM, 4, 7, 14, 160, 183, 211, 213, 215–218,
Forensic Lucid, 166
220, 225, 242, 282, 290, 327, 366, 370,
GIPL, 78
372, 379, 388
Proofs
Tag, 164
Isabelle, 15, 110, 288, 367
Tag set, 165
PureData, 86, 89, 387, 388
Tensor Lucid, iii, 9, 77, 86, 87, 91, 93, 94, 132,
Python, 140, 142
134–136, 141, 142, 157, 160, 162, 163,
166, 179, 192, 209, 214, 222, 281, 325,
retreat whenever, 175
327, 331, 342
retreats upon, 178
Test cases
RIPE, 128, 132–134, 150, 151, 224, 285, 327, 383,
Chrome 5.0.375.54, 111, 119, 246
386
Chrome 5.0.375.70, 111, 246
RMI, 106, 132, 134, 143, 228, 327
Dovecot 1.2.0, 245, 246
RPC, 135, 327
Dovecot 1.2.17, 246
run, 30
Dovecot 2.0-beta6, 111
run of computation, 31
Dovecot 2.0.beta6.20100626, 246
Jetty 6.1.16, 245, 246
Segments
Jetty 6.1.26, 246
#<IMPERATIVELANG>, 140
aspectj, 229
aspectj,prism,eductive, 229
eductive, 229
prism, 229
Ordered Tag Set:, 165
397
Pebble 2.5-M2, 111, 115, 116, 246
Tomcat 5.5.13, 111, 115, 116, 120, 245, 246
Tomcat 5.5.29, 111, 246
Tomcat 5.5.33, 115, 116, 246
Wireshark 1.2.0, 111, 245
Wireshark 1.2.18, 245
Wireshark 1.2.9, 111, 245
Wordpress 2.0, 245, 246
Wordpress 2.2.3, 246
Theory
Dempster–Shafer, 2, 3, 14, 58, 65–68, 72–74,
154, 155, 159, 191, 192, 203–205, 207,
210, 212, 244, 248, 250, 278, 281, 282,
287, 290, 324
Tomcat
5.5.13, 111, 115, 116, 120, 245, 246
5.5.29, 111, 246
5.5.33, 115, 116, 246
Tools
arp, 236, 238, 261, 263, 268, 269
CLIPS, 5, 284, 324
dot, 382, 386, 388
file, 104, 107–110
fileType, 97, 107–109, 247
finger, 258, 261, 263, 268, 271
Graphviz, 150, 278, 382, 384, 386, 388
grep, 236
iptables, 239
Isabelle, 15, 110, 288, 367
JavaCC, 132, 212, 213, 242, 326
Jitter, 89, 387
JUnit, 232, 242
lefty, 382
libintense, 92, 95
libpcap, 121
mac-san, 275
mac-spoofer-evidence-collector, 266, 274
mac-spoofer-flucid-processor, 274
mac-spoofer-reporter, 274
MARFCAT, 97, 101, 102, 110–118, 120, 121,
125–127, 133, 151, 218, 223, 236, 238,
240, 245–247, 281, 292, 326
MARFPCAT, 97, 99, 102, 121, 122, 127, 133,
223, 236, 247, 326
Max/MSP, 89, 387
msw, 257–259, 262, 267, 270, 271, 277
nbtscan, 261–263, 268, 271
nmap, 262, 263, 268, 271, 272, 275
ping, 258, 261, 263, 268
PRISM, 4, 7, 14, 160, 183, 211, 213, 215–
218, 220, 225, 242, 282, 290, 327, 366,
370, 372, 379, 388
procmail, 263, 265
PureData, 86, 89, 387, 388
RT: Request Tracker, 257, 327
sha1sum, 267
Snort, 5
ssh, 99, 261, 263, 266, 268, 271
stegdetect, 29
swm, 236, 268, 277
telnet, 261, 263
touch, 28
vmake, 92
training sets, 106
transition, 30
Transition Function, 208
transition function, 159, 208
TransLucid, 77, 131, 224, 286, 325
two-claim solution, 259
Types, 331
Unordered Tag Set:, 165
upon, 85
URI, 328
Wadge, W. W., 82
Web Services
BPEL, 323, 387, 388
whenever, 175
whenever, 85
Wireshark
1.2.0, 111, 245
1.2.18, 245
1.2.9, 111, 245
Wordpress
2.0, 245, 246
2.2.3, 246
wvr, 85
zero-observation, 33
398
| 1 |
arXiv:1804.03926v2 [math.PR] 13 Apr 2018
Weighted Poincaré inequalities, concentration inequalities and tail
bounds related to the behavior of the Stein kernel in dimension one
Adrien Saumard
CREST-ENSAI, Université Bretagne Loire
April 16, 2018
Abstract
We investigate the links between the so-called Stein’s density approach in dimension one and
some functional and concentration inequalities. We show that measures having a finite first
moment and a density with connected support satisfy a weighted Poincaré inequality with the
weight being the Stein kernel. Furthermore we prove asymmetric Brascamp-Lieb type inequalities
related to the Stein kernel. We also show that existence of a uniformly bounded Stein kernel is
sufficient to ensure a positive Cheeger isoperimetric constant. Then we derive new concentration
inequalities. In particular, we prove generalized Mills’ type inequalities when the Stein kernel
is uniformly bounded and sub-gamma concentration for Lipschitz functions of a variable with
sub-linear Stein kernel. When some exponential moments are finite, a general concentration
inequality is then expressed in terms of Legendre-Fenchel transform of the Laplace transform of
the Stein kernel. Along the way, we prove a general lemma for bounding the Laplace transform of a
random variable, that should be very useful in many other contexts when deriving concentration
inequalities. Finally, we provide density and tail formulas as well as tail bounds, generalizing
previous results that were obtained in the context of Malliavin calculus.
1 Introduction
Since its introduction by Charles Stein ([Ste72, Ste86]), the so-called Stein's method is a corpus of
techniques that has proved very successful in studying probability approximation and convergence
in law (see for instance [CGS11, Cha14, LRS17] and references therein). Much less is known regarding the interplay between Stein's method and functional inequalities. Recently, a series of papers
([LNP15, LNP17, FN17, CFP17]) undertook to fill this gap.
More precisely, Ledoux et al. [LNP15] provide some improvement of the log-Sobolev inequality
and Talagrand’s quadratic transportation cost inequality through the use of the Stein kernel and in
particular, the Stein discrepancy that measures the closeness of the Stein kernel to identity. In a
second paper [LNP17], these authors also provide a lower bound of the deficit in the Gaussian log-Sobolev inequality in terms of Stein's characterization of the Gaussian distribution. Recently, Fathi and
Nelson [FN17] also consider a free Stein kernel and use it to improve the free log-Sobolev inequality.
Finally, Courtade et al. [CFP17] proved that the existence of a reversed weighted Poincaré inequality
is sufficient to ensure existence of a Stein kernel. To do so, they use an elegant argument based on
the Lax-Milgram theorem. They also provide bounds on the Stein discrepancy and application to a
quantitative central limit theorem.
The present paper aims at pursuing investigations about the relations between Stein’s method
- especially the Stein kernel - and some functional inequalities, together with some concentration
inequalities. We prove that a measure having a finite first moment and a density with connected
support satisfies a weighted Poincaré inequality in the sense of [BL09b], with the weight being the
Stein kernel. This allows us to recover by different techniques some weighted Poincaré inequalities
previously established in [BL14] for the Beta distribution or in [BJM16a] for the generalized Cauchy
distribution and to highlight new ones, considering for instance Pearson’s class of distributions. We
also derive asymmetric Brascamp-Lieb type inequalities related to the Stein kernel and show that
existence of a uniformly bounded Stein kernel is sufficient to ensure a positive Cheeger isoperimetric
constant.
There is also a growing literature, initiated by Chatterjee ([Cha07]), about the links between
Stein’s method and concentration inequalities. Several approaches are considered, from the method
of exchangeable pairs ([Cha07, CD10, MJC+ 14, PMT16]), to the density approach coupled with
Malliavin calculus ([NV09, Vie09, EV13, TM15]), size biased coupling ([GG11b, GG11a, GGR11,
CGJ18]), zero bias coupling ([GIs14]) or more general Stein couplings ([BRW18]). As emphasized
for instance in the survey by Chatterjee [Cha14], one major strength of Stein-type methodologies
applied to concentration of measure is that it often allows to deal with dependent and complex system
of random variables, finding for instance applications in statistical mechanics or in random graph
theory.
In the present work, we investigate relationships between the Stein kernel and concentration
of measure by building upon ideas and exporting techniques about the use of covariance identities
for Gaussian concentration from Bobkov, Götze and Houdré [BGH01]. We also put to emphasis a
covariance identity first obtained by Menz and Otto [MO13] and further studied by the author and
Wellner ([SW14, SW18a, SW18b]). It actually appears that Menz and Otto’s covariance identity,
which is in fact a consequence of an old result by Hoeffding (see the discussion in [SW18b]), is
essentially equivalent to the so-called generalized Stein covariance identity ([LRS17]).
As a result, we obtain new concentration inequalities related to the behavior of the Stein kernel
in dimension one. Considering first that the Stein kernel is uniformly bounded, we recover the well-known fact that the associated random variable admits a sub-Gaussian behavior. But we also prove in
this setting some refined concentration inequalities, that we call generalized Mills’ type inequalities,
in reference to the classical Mills’ inequality for the normal distribution (see for instance [Düm10]).
Strongly log-concave variables are known to have a bounded Stein kernel and our concentration
inequalities improve on the previously best known bounds for this important class of measures. Beta
distributions also have a bounded Stein kernel and our concentration inequalities again improve on
previously best known concentration inequalities for Beta distributions, recently due to Bobkov and
Ledoux [BL14].
Furthermore, we consider the situation where the Stein kernel has a sub-linear behavior, recovering and extending in this case sub-Gamma concentration previously established by Nourdin and
Viens [NV09]. More generally, we prove that the Fenchel-Legendre dual of the Laplace transform of
the Stein kernel controls the Laplace transform of a Lipschitz function taken on the related distribution. It is worth noting that to prove such a result, we state a generic lemma allowing to bound the
Laplace transform of a random variable. We believe that this lemma has an interest by itself, as it
may be very convenient when dealing with Chernoff’s method in general.
We also obtain lower tail bounds without the need of Malliavin calculus, thus extending previous
results due to Nourdin and Viens [NV09] and Viens [Vie09].
The paper is organized as follows. In Section 2 we introduce some background material, by
discussing some well-known and new formulas for the Stein kernel and more generally for Stein
factors, in connection with Menz and Otto’s covariance identity. Then we prove in Section 3 some
(weighted) functional inequalities linked to the behavior of the Stein kernel. In Section 4 we make
use of some covariance inequalities to derive various concentration inequalities for Lipschitz functions
of a random variable having a Stein kernel. Finally, we prove some density, tail formulas and tail
bounds related to the behavior of the Stein kernel in Section 5.
2 On covariance identities and the Stein kernel
Take a real random variable X of distribution ν with density p with respect to the Lebesgue measure
on R and cumulative distribution function F . Assume that the mean of the distribution ν exists and
denote it by µ = E [X]. Denote also Supp (ν) = {x ∈ R : p (x) > 0} the support of the measure ν
and assume that this support is connected, that is Supp (ν) is an interval, possibly unbounded. We
denote by a ∈ R ∪ {−∞}, b ∈ R ∪ {+∞}, a < b, the edges of Supp(ν). The distribution ν is said to have
a Stein kernel τ ν , if the following identity holds true,
E[(X − µ) ϕ(X)] = E[τ_ν(X) ϕ′(X)] ,

with ϕ being any differentiable test function such that the functions x ↦ (x − µ) ϕ(x) and x ↦ τ_ν(x) ϕ′(x) are ν-integrable and [τ_ν p ϕ]_a^b = 0. It is well-known [LNP15, CFP17, LRS17] that under our assumptions the Stein kernel τ_ν exists, is unique up to sets of ν-measure zero and a version of the latter is given by the following formula,

τ_ν(x) = (1/p(x)) ∫_x^{+∞} (y − µ) p(y) dy ,    (1)
for any x in the support of ν. Formula (1) comes from a simple integration by parts. Notice that τ ν
is almost surely positive on the support of ν.
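As a quick numerical illustration (a sketch added for the reader, not part of the original argument), formula (1) and the defining identity can be checked with standard scientific-computing tools. The snippet below uses NumPy/SciPy and the standard exponential distribution, whose Stein kernel is known to be τ(x) = x; the test function ϕ(x) = x² is an arbitrary choice.

# Sketch: check formula (1) and E[(X - mu) phi(X)] = E[tau(X) phi'(X)]
# for the standard exponential, whose Stein kernel is tau(x) = x.
import numpy as np
from scipy import integrate, stats

dist = stats.expon()          # density p(x) = e^{-x} on (0, +inf), mean mu = 1
mu = dist.mean()

def tau_via_formula_1(x):
    # tau(x) = (1/p(x)) * int_x^inf (y - mu) p(y) dy, i.e. formula (1)
    val, _ = integrate.quad(lambda y: (y - mu) * dist.pdf(y), x, np.inf)
    return val / dist.pdf(x)

for x in [0.3, 1.0, 2.5]:
    print(x, tau_via_formula_1(x))        # should be close to x

# Defining identity with the test function phi(x) = x^2, phi'(x) = 2x.
rng = np.random.default_rng(0)
X = dist.rvs(size=2_000_000, random_state=rng)
lhs = np.mean((X - mu) * X**2)            # E[(X - mu) phi(X)]
rhs = np.mean(X * 2 * X)                  # E[tau(X) phi'(X)] with tau(x) = x
print(lhs, rhs)                           # both close to 4 up to Monte Carlo error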
Although we will focus only on dimension one, it is worth noting that the definition of a Stein
kernel extends to higher dimension, where it is matrix-valued. The question of existence of the Stein
kernel for a particular multi-dimensional measure ν is nontrivial and only a few general results are
known related to this problem (see for instance [LNP15] and [CFP17]). In particular, [CFP17] proves
that the existence of a Stein kernel is ensured whenever a (converse weighted) Poincaré inequality is
satisfied for the probability measure ν.
In this section, that essentially aims at stating some background results that will be instrumental
for the rest of the paper, we will among other things recover Identity (1) and introduce a new
formula for the Stein kernel by means of a covariance identity recently obtained in [MO13] and
further developed in [SW18a].
We define a non-negative and symmetric kernel k_ν on R² by

k_ν(x, y) = F(x ∧ y) − F(x) F(y) ,  for all (x, y) ∈ R² .    (2)

For any p ∈ [1, +∞], we denote by L^p(ν) the space of measurable functions f such that ‖f‖_p^p = ∫ |f|^p dν < +∞ for p ∈ [1, +∞) and ‖f‖_∞ = ess sup_{x∈R} |f(x)| < +∞ for p = +∞. If f ∈ L^p(ν), g ∈ L^q(ν), p^{−1} + q^{−1} = 1, we also write

Cov(f, g) = ∫ ( f − ∫ f dν ) g dν

the covariance of f and g with respect to ν. For f ∈ L²(ν), we write Var(f) = Cov(f, f) the variance of f with respect to ν. For a random variable X of distribution ν, we will also write E[h(X)] = E[h] = ∫ h dν.
Proposition 1 (Corollary 2.2, [SW18a]) If g and h are absolutely continuous and g ∈ L^p(ν), h ∈ L^q(ν) for some p ∈ [1, ∞] and p^{−1} + q^{−1} = 1, then

Cov(g, h) = ∫∫_{R²} g′(x) k_ν(x, y) h′(y) dx dy .    (3)
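The covariance identity (3) can also be checked numerically. The following sketch (added for illustration; it assumes the standard normal distribution, g(x) = sin x, h(x) = x, and a truncation of the double integral to [−8, 8]²) compares both sides, which equal e^{−1/2} ≈ 0.6065.

# Sketch: numerical check of the covariance identity (3) for N(0, 1).
import numpy as np
from scipy import integrate, stats

F, p = stats.norm.cdf, stats.norm.pdf

def k(x, y):
    # k_nu(x, y) = F(min(x, y)) - F(x) F(y), the kernel of (2)
    return F(np.minimum(x, y)) - F(x) * F(y)

# Left-hand side: Cov(sin X, X) = E[X sin X] = exp(-1/2) for X ~ N(0, 1)
lhs, _ = integrate.quad(lambda x: x * np.sin(x) * p(x), -np.inf, np.inf)

# Right-hand side: double integral of g'(x) k(x, y) h'(y), truncated to [-8, 8]^2
rhs, _ = integrate.dblquad(lambda y, x: np.cos(x) * k(x, y) * 1.0,
                           -8, 8, lambda x: -8, lambda x: 8)

print(lhs, rhs)   # both close to 0.6065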
Remark 2 In the context of goodness-of-fit tests, Liu et al. [LLJ16] introduce the notion of kernelized Stein discrepancy as follows. If K(x, y) is a kernel on R², p and q are two densities and (X, Y) is a pair of independent random variables distributed according to p, then the kernelized Stein discrepancy S_K(p, q) between p and q related to K is

S_K(p, q) = E[ δ_{q,p}(X) K(X, Y) δ_{q,p}(Y) ] ,

where δ_{q,p}(x) = (log q(x))′ − (log p(x))′ is the difference between the scores of p and q. This notion is in fact presented in [LLJ16] in higher dimension and is used as an efficient tool to assess the proximity of the laws p and q. From formula (3), we see that if we take K_ν(x, y) = k_ν(x, y) p_ν(x)^{−1} p_ν(y)^{−1}, then we get the following formula,

S_{K_ν}(p, q) = Var_ν( log(p/q) ) .

In higher dimension, Bobkov et al. [BGH01] proved that if a measure ν satisfies a covariance identity of the same form as in (3), with derivatives replaced by gradients, then the measure ν is Gaussian. More precisely, let (X, Y) be a pair of independent normalized Gaussian vectors in R^d, let µ_α be the measure of the pair (X, αX + √(1 − α²) Y) and let p_N(x, y) be the density associated with the measure ∫_0^1 µ_α dα. Then we have

Cov( g(X), h(X) ) = ∫∫ ∇g(x)^T p_N(x, y) ∇h(y) dx dy .

This gives that for a kernel K_N(x, y) = p_N(x, y) ϕ^{−1}(x) ϕ^{−1}(y), where ϕ is the standard normal density on R^d, we also have

S_{K_N}(p, q) = Var( log(p/q)(X) ) .
The following formulas will also be useful. They can be seen as special instances of the previous
covariance representation formula.
Corollary 3 (Corollary 2.1, [SW18a]) For an absolutely continuous function h ∈ L¹(F),

∫_{−∞}^{z} h dν = F(z) ∫_R h dν − ∫_R k_ν(z, y) h′(y) dy    (4)

and

−(1 − F(z)) ∫_R h dν + ∫_{(z,∞)} h dν = ∫_R k_ν(z, y) h′(y) dy .    (5)

By combining Proposition 1 and Corollary 3, we get the following covariance identity.
Proposition 4 Let ν be a probability measure on R. If g and h are absolutely continuous and g ∈ L^p(ν), h ∈ L^q(ν), p^{−1} + q^{−1} = 1, then

Cov(g, h) = ∫_R g′(x) Lh(x) dx ,    (6)

where Lh(x) = ∫_x^{+∞} h dν − (1 − F(x)) ∫_R h dν = F(x) ∫_R h dν − ∫_{−∞}^{x} h dν for every x ∈ R. Furthermore, if ν has a density p with respect to the Lebesgue measure that has a connected support, then

Cov(g, h) = ∫_R g′(x) L̄h(x) p(x) dx = E[ g′(X) L̄h(X) ] ,    (7)

where, for every x ∈ Supp(ν),

L̄h(x) = p(x)^{−1} Lh(x) = (1/p(x)) ∫_x^{+∞} h dν − ((1 − F(x))/p(x)) E[h] .    (8)

Proof. Identity (6) is a direct consequence of Proposition 1 and Corollary 3. If ν has a density p with respect to the Lebesgue measure that has a connected support, then for every x outside the support we have Lh(x) = 0. Consequently, from Identity (6) we get, for g ∈ L^∞(ν), h ∈ L¹(ν) absolutely continuous,

Cov(g, h) = ∫_{Supp(ν)} g′(x) Lh(x) dx = ∫_{Supp(ν)} g′(x) L̄h(x) p(x) dx

and so Identity (7) is proved.
From Proposition 4, we can directly recover formula (1) for the Stein kernel. Indeed, by taking h(x) = x − µ, we have h ∈ L¹(ν) and differentiable and so, for any absolutely continuous function g ∈ L^∞(ν), applying Identity (7) yields

Cov(g, h) = ∫_R (x − µ) g(x) p(x) dx = ∫_R g′ · L̄h dν ,    (9)

where for convenience, we set L̄h(x) = 0 if x ∉ Supp(ν). Hence, a version of the Stein kernel τ_ν is given by L̄h, which is nothing but the right-hand side of Identity (1).
Following the nice recent survey [LRS17] related to the Stein method in dimension one, identity (7) is nothing but the so-called "generalized Stein covariance identity", written in terms of the inverse of the Stein operator rather than for the Stein operator itself. Indeed, it is easy to see that the inverse T_ν of the operator L̄ acting on integrable functions with mean zero is given by the following formula

T_ν f = ( (f p)′ / p ) 1_{Supp(ν)} ,

which is nothing else but the Stein operator (see Definition 2.1 of [LRS17]).
It is also well known, see again [LRS17], that the inverse of the Stein operator, that is L̄, is highly involved in deriving bounds for distances between distributions. From Corollary 3, we have the following seemingly new formula for this important quantity,

T_ν^{−1} h(x) = L̄h(x) = (1/p(x)) ∫_R k_ν(x, y) h′(y) dy .    (10)
Particularizing the latter identity with h(x) = x − µ, we obtain the following identity for the Stein kernel,

τ_ν(x) = (1/p(x)) ∫_R k_ν(x, y) dy .    (11)

A consequence of (11) that will be important in Section 3 when deriving weighted functional inequalities is that for any x ∈ Supp(ν) the function y ↦ k_ν(x, y) (p(x) τ_ν(x))^{−1} can be seen as the density - with respect to the Lebesgue measure - of a probability measure, since it is nonnegative and integrates to one.
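A short sketch (added for illustration) comparing (11) with (1), here for the Beta(2, 3) distribution, whose Stein kernel x(1 − x)/(α + β) is recalled in Section 3.1 below; the specific parameters are chosen only as an example.

# Sketch: formulas (1) and (11) give the same Stein kernel, Beta(2, 3) case.
import numpy as np
from scipy import integrate, stats

a, b = 2.0, 3.0
dist = stats.beta(a, b)
mu = dist.mean()

def tau_via_1(x):
    val, _ = integrate.quad(lambda y: (y - mu) * dist.pdf(y), x, 1.0)
    return val / dist.pdf(x)

def tau_via_11(x):
    k = lambda y: dist.cdf(min(x, y)) - dist.cdf(x) * dist.cdf(y)
    val, _ = integrate.quad(k, 0.0, 1.0)
    return val / dist.pdf(x)

for x in [0.2, 0.5, 0.8]:
    print(tau_via_1(x), tau_via_11(x), x * (1 - x) / (a + b))   # all three agree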
We also deduce from (10) the following upper bound,

|T_ν^{−1} h(x)| ≤ (‖h′‖_∞ / p(x)) ∫_R k_ν(x, y) dy = (‖h′‖_∞ / p(x)) ( F(x) ∫_R y dν(y) − ∫_{−∞}^{x} y dν(y) ) ,

which is exactly the formula given in Proposition 3.13(a) of [Döb15].
Let us denote ϕ = − log p on Supp(ν) and +∞ otherwise, the so-called potential of the density p. If on the interior of the support of ν, ϕ has a derivative ϕ′ ∈ L¹(ν) that is absolutely continuous, then Corollary 2.3 in [SW18a] gives

∫_R k_ν(x, y) ϕ″(y) dy = p(x) .

Using the latter identity together with (10), we deduce the following upper bound: if p is strictly log-concave (that is ϕ″ > 0 on Supp(ν)), then

‖T_ν^{−1} h‖_∞ ≤ sup_{x∈Supp(ν)} |h′(x)| / ϕ″(x) .    (12)

In particular, if p is c-strongly log-concave, meaning that ϕ″ ≥ c > 0 on R, then the Stein kernel is uniformly bounded and ‖τ_ν‖_∞ ≤ c^{−1}. For more about the Stein method related to (strongly) log-concave measures, see for instance [MG16].
Furthermore, by differentiating (10), we obtain for any x ∈ Supp(ν),

(T_ν^{−1} h)′(x) = ϕ′(x) T_ν^{−1} h(x) − h(x) + E[h(X)] ,

that is

(T_ν^{−1} h)′(x) − ϕ′(x) T_ν^{−1} h(x) = −h(x) + E[h(X)] .

This is nothing but the so-called Stein equation associated to the Stein operator.
3 Some weighted functional inequalities
Weighted functional inequalities appear naturally when generalizing Gaussian functional inequalities
such as Poincaré and log-Sobolev inequalities. They were put to emphasis for the generalized Cauchy
distribution and more general κ-concave distributions by Bobkov and Ledoux [BL09b, BL09a], also in
connection with isoperimetric-type problems, weighted Cheeger-type inequalities and concentration
of measure. Then several authors proved related weighted functional inequalities ([BCG08, BJ14,
BJM16a, BJM16b, CGGR10, CGW11, CEG17, Goz10]). In the following, we show the strong connection between the Stein kernel and the existence of weighted functional inequalities. Note that
a remarkable first result in this direction was recently established by Courtade et al. [CFP17] who
proved that a reversed weighted Poincaré inequality is sufficient to ensure the existence of a Stein
kernel in Rd , d ≥ 1.
3.1 Weighted Poincaré-type inequality
According to [BL09b], a measure ν on R is said to satisfy a weighted Poincaré inequality if there exists a nonnegative measurable weight function ω such that for any smooth function f ∈ L²(ν),

Var(f(X)) ≤ E[ ω(X) f′(X)² ] .    (13)
The following theorem shows that a probability measure having a finite first moment and density
with connected support on the real line satisfies a weighted Poincaré inequality, with the weight
being its Stein kernel.
Theorem 5 Take a real random variable X of distribution ν with density p with respect to the
Lebesgue measure on R. Assume that E [|X|] < +∞, p has a connected support and denote τ ν the
Stein kernel of ν. Take f ∈ L2 (ν), absolutely continuous. Then
Var(f(X)) ≤ E[ τ_ν(X) f′(X)² ] .    (14)

The preceding inequality is optimal whenever ν admits a finite second moment, that is E[X²] < +∞, since equality is reached for f = Id, by definition of the Stein kernel.
Proof. By Identity (7) and the Cauchy-Schwarz inequality, we have

Var(f(X)) = E[ f′(X) √τ_ν(X) · L̄f(X)/√τ_ν(X) ] ≤ √( E[ τ_ν(X) f′(X)² ] ) √( E[ τ_ν(X) ( L̄f(X)/τ_ν(X) )² ] ) .

By the use of Jensen's inequality, for any x ∈ Supp(ν),

( L̄f(x)/τ_ν(x) )² = ( ∫ f′(y) k_ν(x, y) / ( ∫ k_ν(x, z) dz ) dy )² ≤ ∫ f′(y)² k_ν(x, y) / ( ∫ k_ν(x, z) dz ) dy = ∫ f′(y)² k_ν(x, y) / ( τ_ν(x) p(x) ) dy .

Hence,

E[ τ_ν(X) ( L̄f(X)/τ_ν(X) )² ] ≤ ∫ τ_ν(x) p(x) ( ∫ f′(y)² k_ν(x, y) / ( τ_ν(x) p(x) ) dy ) dx = ∫∫ k_ν(x, y) f′(y)² dx dy = ∫ τ_ν(y) f′(y)² p(y) dy ,

which concludes the proof.
Let us detail some classical examples falling into the setting of Theorem 5.
The beta distribution B_{α,β}, α, β > 0, is supported on (0, 1), with density p_{α,β} given by

p_{α,β}(x) = x^{α−1} (1 − x)^{β−1} / B(α, β) ,  0 < x < 1 .    (15)

The normalizing constant B(α, β) is the classical beta function of two variables. The beta distribution has been for instance recently studied in [BL14] in connection with the analysis of the rates of convergence of the empirical measure on R for some Kantorovich transport distances. The Stein kernel τ_{α,β} associated to the Beta distribution is given by τ_{α,β}(x) = (α + β)^{−1} x(1 − x) for x ∈ (0, 1) (see for instance [LRS17]) and thus Theorem 5 allows to exactly recover Proposition B.5 of [BL14]. Our techniques are noticeably different since the weighted Poincaré inequality is proved in [BL14] by using orthogonal (Jacobi) polynomials.
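A Monte Carlo sketch of (14) for a Beta distribution is given below (added for illustration; the parameters and the test function f(x) = sin(3x) are arbitrary, and α < 1 is chosen deliberately outside the log-concave range).

# Sketch: weighted Poincare inequality (14) for Beta(alpha, beta).
import numpy as np
from scipy import stats

alpha, beta = 0.7, 1.5
rng = np.random.default_rng(1)
X = stats.beta(alpha, beta).rvs(size=2_000_000, random_state=rng)

tau = X * (1 - X) / (alpha + beta)            # Stein kernel of the Beta law
f, fprime = np.sin(3 * X), 3 * np.cos(3 * X)

print(np.var(f), np.mean(tau * fprime**2))    # Var(f) <= E[tau * f'^2]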
Note that considering Laguerre polynomials, that are eigenfunctions of the Laguerre operator for
which the Gamma distribution is invariant and reversible, one can also show an optimal weighted
Poincaré inequality for the Gamma distribution, which includes as a special instance the exponential
distribution (see [BL97] and also [BGL14], Section 2.7). Theorem 5 also gives an optimal weighted
Poincaré inequality for the Gamma distribution and more generally for Pearson’s class of distributions
(see below).
Note also that the beta distribution seems to be outside of the scope of the weighted Poincaré
inequalities described in [BJM16a] since it is assumed in the latter article that the weight of the
considered Poincaré-type inequalities is positive on R, which is not the case for the beta distribution.
Furthermore, [BL14] also provides some weighted Cheeger inequality for the Beta distribution, but
such a result seems outside the scope of our approach based on covariance identity (3). When
considering concentration properties of beta distributions in Section 4 below, we will however provide
some improvements compared to the results of [BL14].
It has also been noticed that the generalized Cauchy distribution satisfies a weighted Poincaré
inequality, which also implies in this case a reverse weighted Poincaré inequality (see [BL09b,
BJM16a]). In fact, [BL09b] shows that the generalized Cauchy distribution plays a central role when
considering functional inequalities for κ-concave measures, with κ < 0.
The generalized Cauchy distribution of parameter β > 1/2 has density p_β(x) = Z^{−1} (1 + x²)^{−β} for x ∈ R and normalizing constant Z > 0. Its Stein kernel τ_β exists for β > 3/2 and writes τ_β(x) = (1 + x²)/(2(β − 1)). This allows us in this case to recover the optimal weighted Poincaré inequality also derived in [BJM16a], Theorem 3.1. Note that Theorem 3.1 of [BJM16a] also provides the optimal constant in the weighted Poincaré inequality with a weight proportional to 1 + x² in the range β ∈ (1/2, 3/2].
Let us conclude this short list of examples by mentioning Pearson's class of distributions, for which the density p is a solution to the following differential equation,

p′(x)/p(x) = (α − x) / ( β_2 (x − λ)² + β_1 (x − λ) + β_0 ) ,    (16)

for some constants λ, α, β_j, j = 0, 1, 2. This class of distributions, which contains for instance Gaussian, Gamma, Beta and Student distributions, has been well studied in the context of Stein's method, see [LRS17] and references therein. In particular, if a density satisfies (16) with β_2 ≠ 1/2, then the corresponding distribution ν has a Stein kernel τ_ν(x) = (1 − 2β_2)^{−1} ( β_0 + β_1 x + β_2 x² ), for any x ∈ Supp(ν). Particularizing to the Student distribution t_α with density p_α proportional to (α + x²)^{−(1+α)/2} on R for α > 1, we get that for any smooth function f ∈ L²(t_α),

Var_{t_α}(f) ≤ (1/(α − 1)) ∫ (x² + α) f′²(x) dt_α(x) .
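The Pearson recipe can be checked against the integral formula (1); the sketch below (added for illustration) does so for the Student t distribution with 5 degrees of freedom, the coefficients β_j being read off from p′/p for this particular example.

# Sketch: Stein kernel from Pearson coefficients vs. formula (1), Student t(5).
import numpy as np
from scipy import integrate, stats

df = 5.0
dist = stats.t(df)            # density proportional to (df + x^2)^(-(1+df)/2)

def tau_integral(x):
    val, _ = integrate.quad(lambda y: y * dist.pdf(y), x, np.inf)   # mean is 0
    return val / dist.pdf(x)

def tau_pearson(x):
    # p'/p = -x / ((df + x^2)/(1 + df)), so beta2 = 1/(1+df), beta1 = 0, beta0 = df/(1+df)
    b2, b1, b0 = 1.0 / (1 + df), 0.0, df / (1 + df)
    return (b0 + b1 * x + b2 * x**2) / (1 - 2 * b2)    # = (df + x^2)/(df - 1)

for x in [0.0, 1.0, 2.0]:
    print(tau_integral(x), tau_pearson(x))             # should agree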
We will investigate concentration inequalities in Section 4. In fact, existence of a weighted
Poincaré inequality already implies concentration of measure. The following corollary is a direct
consequence of Theorem 4.1 and Corollary 4.2 in [BL09b], in light of Theorem 5 above.
Corollary 6 Take a real random variable X of distribution ν with density p with respect to the Lebesgue measure on R. Assume that E[|X|] < +∞, p has a connected support and denote τ_ν the Stein kernel of ν. Assume that √τ_ν has a finite rth moment, r ≥ 2. Then any Lipschitz function f on R has a finite rth moment. More precisely, if f is 1-Lipschitz and E[f] = 0, then

‖f‖_r ≤ (r/√2) ‖√τ_ν‖_r .

Furthermore, by setting t_1 = ‖√τ_ν‖_r e r, it holds

ν(|f| ≥ t) ≤ 2 e^{−t/(‖√τ_ν‖_r e)}  if 0 ≤ t ≤ t_1 ,   and   ν(|f| ≥ t) ≤ ( ‖√τ_ν‖_r r / (√2 t) )^r  if t ≥ t_1 .
Fathi et al. [CFP17] proved that if a probability measure on Rd , d ≥ 1, satisfies a converse
weighted Poincaré inequality, then the Stein kernel exists for this measure and the authors further
provide an estimate of the moment of order two of the Stein kernel under a moment assumption
involving the inverse of the weight.
Actually, a slight modification of the arguments provided in [CFP17] shows that, reciprocally to
Theorem 5, if a probability measure satisfies a (direct) weighted Poincaré inequality then the Stein
kernel also exists.
Theorem 7 Assume that a probability measure ν on R with mean zero satisfies a weighted Poincaré inequality (13) with weight ω, then ν admits a Stein kernel τ_ν, satisfying

∫ τ_ν² ω^{−1} dν ≤ ∫ x² dν .    (17)

Proof. The proof is a simple modification of the proof of Theorem 2.4 in [CFP17]. We give it for the sake of completeness. Denote by W^{1,2}_{ν,ω} the Sobolev space defined as the closure of all smooth functions in L²(ν) with respect to the Sobolev norm ( ∫ (f² + f′² ω) dν )^{1/2}. Also set W^{1,2}_{ν,ω,0} = W^{1,2}_{ν,ω} ∩ { f ∈ L²(ν) : ∫ f dν = 0 }. Then ∫ f′ g′ ω dν is a continuous bilinear form on W^{1,2}_{ν,ω,0} × W^{1,2}_{ν,ω,0} and is coercive by the weighted Poincaré inequality (13). So by the Lax-Milgram theorem, there exists a unique g_0 ∈ W^{1,2}_{ν,ω,0} such that

∫ f′ g_0′ ω dν = ∫ x f dν

for any f ∈ W^{1,2}_{ν,ω,0}. Hence, τ_ν = g_0′ ω is a Stein kernel for ν. Furthermore, we have

∫ g_0′² ω dν = ∫ x g_0 dν ≤ √( ∫ g_0² dν ) √( ∫ x² dν ) ≤ √( ∫ g_0′² ω dν ) √( ∫ x² dν ) .

Noticing that ω > 0 a.s. on Supp(ν), we have ∫ g_0′² ω dν = ∫ τ_ν² ω^{−1} dν, which gives (17).
Note that for ease of presentation, we stated Theorem 7 in dimension one, but it is seen from the
proof above that it directly extends to Rd , d ≥ 2, by considering the Hilbert-Schmidt scalar product
between matrices - since the Stein kernel is matrix-valued - just as in [CFP17].
3.2 Asymmetric Brascamp-Lieb-type inequalities
The celebrated Brascamp-Lieb inequality [BL76] states that if a measure π on R^d, d ≥ 1, is strictly log-concave - that is, ϕ = − ln p is convex and Hess(ϕ)(x) is a positive definite symmetric matrix for any x ∈ Supp(π) - then for any smooth function h,

Var_π(h) ≤ ∫ ∇h^T (Hess(ϕ))^{−1} ∇h dπ .

Considering the one dimensional case d = 1, Menz and Otto [MO13] proved that for any smooth g ∈ L¹(π), h ∈ L^∞(π),

|Cov_π(g, h)| ≤ ‖h′/ϕ″‖_∞ ‖g′‖_1 .    (18)
The authors call the latter inequality an asymmetric Brascamp-Lieb inequality. This inequality has then been generalized to higher dimension ([CCEL13]) and beyond the log-concave case ([ABJ16]).
Considering the covariance of two smooth functions, we will derive in the next proposition inequalities involving the derivatives of these functions as well as quantities related to the Stein kernel.
Proposition 8 Assume that τ_ν > 0 on the support of ν. Then for any p, q ∈ [1, +∞], p^{−1} + q^{−1} = 1,

|Cov(g, h)| ≤ ‖g′ τ_ν‖_p ‖L̄(h)/τ_ν‖_q .    (19)

Furthermore, if p = 1 and q = +∞, we have

|Cov(g, h)| ≤ ‖h′‖_∞ ‖g′ τ_ν‖_1 .    (20)

If p, q ∈ (1, +∞), we also have

|Cov(g, h)| ≤ ‖g′ τ_ν‖_p ‖h′ m^{1/q}‖_q ,    (21)

where m(x) = p(x)^{−1} ∫ k_ν(x, y) τ_ν^{−1}(y) dy on the support of ν. If in addition, τ_ν ≥ σ_min² > 0 a.s., then

|Cov(g, h)| ≤ σ_min^{−2/q} ‖g′ τ_ν‖_p ‖h′ τ_ν^{1/q}‖_q .    (22)

Note that if ν is strongly log-concave, meaning that ϕ″ ≥ c > 0 for ϕ = − ln p, then the asymmetric Brascamp-Lieb inequality (18) and the covariance inequality (20) both induce the following inequality,

|Cov(g, h)| ≤ c^{−1} ‖h′‖_∞ ‖g′‖_1 .
Proof. By (7) and Hölder's inequality, we have

|Cov(g, h)| = | ∫_R g′ τ_ν ( L̄h/τ_ν ) dν | ≤ ‖g′ τ_ν‖_p ‖L̄(h)/τ_ν‖_q .

So Inequality (19) is proved. To prove (20) it suffices to apply (19) with p = 1 and q = +∞ and remark that

‖L̄(h)/τ_ν‖_∞ = ‖ ∫ h′(y) k_ν(·, y) dy / ∫ k_ν(·, z) dz ‖_∞ ≤ ‖h′‖_∞ .

Now, to prove (21) simply note that

‖L̄(h)/τ_ν‖_q^q = ∫ | ∫ h′(y) k_ν(x, y) dy / ∫ k_ν(x, z) dz |^q dν(x) ≤ ∫∫ |h′(y)|^q k_ν(x, y) / τ_ν(x) dx dy = ‖h′ m^{1/q}‖_q^q .

To deduce (22) from (21), just remark that

m(x) = (1/p(x)) ∫ k_ν(x, y) τ_ν^{−1}(y) dy ≤ (1/σ_min²) (1/p(x)) ∫ k_ν(x, y) dy = τ_ν(x)/σ_min² .
3.3 Isoperimetric constant
Let us complete this section about functional inequalities linked to the Stein kernel by studying the isoperimetric constant. Recall that for a measure µ on R^d, an isoperimetric inequality is an inequality of the form

µ⁺(A) ≥ c min{ µ(A), 1 − µ(A) } ,    (23)

where c > 0, A is an arbitrary measurable set in R^d and µ⁺(A) stands for the µ-perimeter of A, defined to be

µ⁺(A) = lim inf_{h→0⁺} ( µ(A^h) − µ(A) ) / h ,

with A^h = { x ∈ R^d : ∃ a ∈ A, |x − a| < h } the h-neighborhood of A. The optimal value of c = Is(ν) in (23) is referred to as the isoperimetric constant of ν.
The next proposition shows that existence of a uniformly bounded Stein kernel is essentially
sufficient for guaranteeing existence of a positive isoperimetric constant.
Proposition 9 Assume that the probability measure ν has a connected support and a continuous density p with respect to the Lebesgue measure. Assume also that its Stein kernel τ_ν is uniformly bounded on Supp(ν), ‖τ_ν‖_∞ < +∞. Then ν admits a positive isoperimetric constant Is(ν) > 0.
A further natural question would be: does a measure having a Stein kernel satisfy a weighted isoperimetric-type inequality, with a weight related to the Stein kernel? So far, we could not give an answer to this question. Note that Bobkov and Ledoux [BL09a, BL09b] proved some weighted Cheeger and weighted isoperimetric-type inequalities for the generalized Cauchy and for κ-concave distributions.
Proof. Let F be the cumulative distribution function of ν, µ be its mean and let ε > 0 be such that [µ − ε, µ + ε] ⊂ Supp(ν). Recall ([BH97], Theorem 1.3) that the isoperimetric constant associated to ν satisfies

Is(ν) = ess inf_{a<x<b} p(x) / min{ F(x), 1 − F(x) } ,

where a < b are the edges of the support of ν. Take x ∈ Supp(ν) such that x − µ ≥ ε/2, then

τ_ν(x) = (1/p(x)) ∫_x^{+∞} (y − µ) p(y) dy ≥ ε (1 − F(x)) / (2 p(x)) ≥ (ε/2) min{ F(x), 1 − F(x) } / p(x) .

The same estimate holds for x ≤ µ − ε/2 since τ_ν(x) = p(x)^{−1} ∫_{−∞}^{x} (µ − y) p(y) dy. Hence,

ess inf_{x∈Supp(ν), |x−µ|≥ε/2} p(x) / min{ F(x), 1 − F(x) } ≥ ε / (2 ‖τ_ν‖_∞) > 0 .    (24)

Furthermore, we have

inf_{|x−µ|≤ε/2} p(x) / min{ F(x), 1 − F(x) } ≥ ( inf_{|x−µ|≤ε/2} p(x) ) / min{ F(µ − ε/2), 1 − F(µ + ε/2) } > 0 .    (25)

The conclusion now follows from combining (24) and (25).
4 Concentration inequalities
From Proposition 4, Section 2, we get the following proposition.

Proposition 10 Assume that ν has a finite first moment and a density p with respect to the Lebesgue measure that has a connected support. If g ∈ L^∞(ν) and h ∈ L¹(ν) are absolutely continuous and h is 1-Lipschitz, then

|Cov(g, h)| ≤ E[ |g′| τ_ν ] ,

where τ_ν is given in (1) and is the Stein kernel. Furthermore, if the Stein kernel is uniformly bounded, that is τ_ν ∈ L^∞(ν), then

|Cov(g, h)| ≤ ‖τ_ν‖_∞ E[ |g′| ] .    (26)

Recall that if ν is strongly log-concave, that is ϕ″ ≥ c > 0, then ν has a uniformly bounded Stein kernel satisfying ‖τ_ν‖_∞ ≤ c^{−1}.

Proof. Start from identity (7) and simply remark that

|L̄h(x)| = | (1/p(x)) ∫_R k_ν(x, y) h′(y) dy | ≤ (‖h′‖_∞ / p(x)) ∫_R k_ν(x, y) dy = ‖h′‖_∞ τ_ν(x) .
Applying techniques similar to those developed in [BGH01] for Gaussian vectors (see especially Theorem 2.2), we have the following Gaussian type concentration inequalities when the Stein kernel is uniformly bounded.

Theorem 11 Assume that ν has a finite first moment and a density p with respect to the Lebesgue measure that has a connected support. Assume also that the Stein kernel τ_ν is uniformly bounded, τ_ν ∈ L^∞(ν), and denote c = ‖τ_ν‖_∞^{−1}. Then the following concentration inequalities hold. For any 1-Lipschitz function g,

P( g ≥ ∫ g dν + r ) ≤ e^{−c r²/2} .    (27)

Furthermore, the function

T_g(h) = e^{c h²/2} E[ (g − Eg) 1_{g−Eg≥h} ]

is non-increasing in h ≥ 0. In particular, for all h > 0,

P( g − Eg ≥ h ) ≤ E[ (g − Eg)_+ ] e^{−c h²/2} / h ,    (28)

P( |g − Eg| ≥ h ) ≤ E[ |g − Eg| ] e^{−c h²/2} / h .    (29)

Inequality (27) is closely related to Chatterjee's Gaussian coupling for random variables with bounded Stein kernel [Cha12]. To our knowledge, refined concentration inequalities such as (28) and (29) are only available in the literature for Gaussian random variables. We refer to these inequalities as generalized Mills' type inequalities since taking g = Id in Inequality (29) allows to recover Mills' inequality (see for instance [Düm10]): if Z has the standard normal distribution, then for any t > 0,

P( |Z| > t ) ≤ √(2/π) e^{−t²/2} / t .

Here the setting of a bounded Stein kernel is much larger and includes for instance strongly log-concave variables.
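For the standard normal distribution τ ≡ 1, so c = 1 and (29) with g = Id reduces to Mills' inequality; the following sketch (added for illustration) checks it numerically at a few points.

# Sketch: generalized Mills bound (29) for Z ~ N(0, 1), g = Id, c = 1.
import numpy as np
from scipy import stats

E_abs = np.sqrt(2 / np.pi)                      # E|Z| for Z ~ N(0, 1)
for h in [0.5, 1.0, 2.0, 3.0]:
    lhs = 2 * stats.norm.sf(h)                  # P(|Z - EZ| >= h)
    rhs = E_abs * np.exp(-h**2 / 2) / h         # bound (29)
    print(h, lhs, rhs, lhs <= rhs)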
Note that Beta distributions B_{α,β} as defined in (15) are known to be log-concave of order α whenever α ≥ 1 and β ≥ 1, [BL14]. Using this fact, Bobkov and Ledoux [BL14], Proposition B.10, prove the following concentration inequality: for X a random variable with distribution B(α, β), α ≥ 1, β ≥ 1 and any r ≥ 0,

P( |X − E(X)| ≥ r ) ≤ 2 e^{−(α+β) r²/8} .

Actually, for any α, β > 0, the Beta distribution B(α, β) belongs to the Pearson class of distributions and its Stein kernel is given by a polynomial of degree 2, τ_{B(α,β)}(x) = (α + β)^{−1} x(1 − x) on [0, 1] (see [LRS17]). In particular, ‖τ_{B(α,β)}‖_∞ = 4^{−1} (α + β)^{−1} and Theorem 11 applies even in the case where α, β < 1, for which the B(α, β) distribution is not log-concave.
Corollary 12 Let α, β > 0. Take X a random variable with distribution B(α, β) and g a 1-Lipschitz function on [0, 1]. Then for all r > 0,

P( g(X) − E[g(X)] ≥ r ) ≤ exp( −2(α + β) r² ) .

Furthermore, for all r > 0,

P( g(X) − E[g(X)] ≥ r ) ≤ E[ (g − Eg)_+ ] e^{−2(α+β) r²} / r ,

P( |g(X) − E[g(X)]| ≥ r ) ≤ E[ |g − Eg| ] e^{−2(α+β) r²} / r    (30)

and, if α, β ≥ 1,

P( |g(X) − E[g(X)]| ≥ r ) ≤ ( C / √(α + β + 1) ) e^{−2(α+β) r²} / r    (31)

with the value C = 2.5, for which we always have C (α + β + 1)^{−1/2} < 2.
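A Monte Carlo sketch of the first bound of Corollary 12, added for illustration; it assumes Beta(1/2, 1/2) (a non-log-concave case) and the 1-Lipschitz function g(x) = |x − 1/2|, both chosen arbitrarily.

# Sketch: Monte Carlo check of the sub-Gaussian bound of Corollary 12.
import numpy as np
from scipy import stats

alpha, beta = 0.5, 0.5
rng = np.random.default_rng(2)
X = stats.beta(alpha, beta).rvs(size=4_000_000, random_state=rng)
g = np.abs(X - 0.5)
dev = g - g.mean()

for r in [0.1, 0.2, 0.3]:
    emp = np.mean(dev >= r)
    bound = np.exp(-2 * (alpha + beta) * r**2)
    print(r, emp, bound, emp <= bound)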
Proof. We only need to prove (31). Start from (30). It is sufficient to prove the following inequality,

E[ |g − Eg| ] ≤ 2.5 / √(α + β + 1) .

By Proposition B.7 of [LRS17], setting m the median of g(X), we have

E[ |g − m| ] ≤ (2.5/√(α + β + 1)) ∫_0^1 √(x(1 − x)) |g′(x)| dB_{α,β}(x) ≤ (2.5/√(α + β + 1)) ∫_0^1 √(x(1 − x)) dB_{α,β}(x) = 2.5 B(α + 1/2, β + 1/2) / ( √(α + β + 1) B(α, β) ) .

Now the conclusion follows from the basic inequalities B(α + 1/2, β + 1/2) ≤ B(α, β)/2 and E[|g − Eg|] ≤ 2 E[|g − m|].
Proof of Theorem 11. Take g to be 1-Lipschitz and mean zero with respect to ν, then for any λ ≥ 0,

E[ g e^{λg} ] = Cov( g, e^{λg} ) ≤ ‖τ_ν‖_∞ E[ |(e^{λg})′| ] ≤ (λ/c) E[ e^{λg} ] .

Define J(λ) = log E[ e^{λg} ], λ ≥ 0. We thus have the differential inequality J′(λ) ≤ λ/c. Since J(0) = 0, this implies that J(λ) ≤ λ²/(2c). Equivalently, E[ e^{λg} ] ≤ e^{λ²/(2c)}, which by the use of Chebyshev's inequality gives (27).
Now, assume that as a random variable g has a continuous positive density p on the whole real line. Take f = U(g) where U is a non-decreasing (piecewise) differentiable function on R. Applying (26), we get

E[ g U(g) ] ≤ E[ U′(g) ] / c .    (32)

Let G be the distribution function of g. Given h > 0 and ε > 0, applying (32) to the function U(x) = min( (x − h)_+, ε ) leads to

∫_h^{h+ε} x (x − h) dG(x) + ε ∫_{h+ε}^{+∞} x dG(x) ≤ ( G(h + ε) − G(h) ) / c .

Dividing by ε and letting ε tend to 0, we obtain, for all h > 0,

∫_h^{+∞} x dG(x) ≤ p(h) / c .

Thus, the function V(h) = ∫_h^{+∞} x dG(x) = ∫_h^{+∞} x p(x) dx satisfies the differential inequality V(h) ≤ −V′(h)/(c h), that is

( log V(h) )′ ≤ −c h ,

which is equivalent to saying that log V(h) + c h²/2 is non-increasing, and therefore the function T_g(h) is non-increasing.
We relax now the condition on the Stein kernel (which exists as soon as a spectral gap exists,
[CFP17]) by assuming that it is sub-linear. This condition is fulfilled by many important distributions, for instance by the Gaussian, Gamma or Beta distributions. We deduce a sub-Gamma
behavior.
Theorem 13 Assume that ν has a finite first moment and a density p with respect to the Lebesgue measure that has a connected support. Assume also that the Stein kernel τ_ν is sub-linear, that is τ_ν(x) ≤ a(x − µ) + b, where µ is the mean value of ν. Then the following concentration inequality holds. For any 1-Lipschitz function g,

P( g ≥ ∫ g dν + r ) ≤ e^{−r²/(2(ar + b))} .    (33)
When g = Id, inequality (33) was proved by Nourdin and Viens [NV09].
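As an illustration of the sub-linear case (a sketch, assuming the bound in the form stated in (33)): the Gamma(k, 1) distribution has Stein kernel τ(x) = x = (x − µ) + k, so that a = 1 and b = k.

# Sketch: Monte Carlo check of (33) for Gamma(k, 1), g = Id, a = 1, b = k.
import numpy as np
from scipy import stats

k = 3.0
a, b = 1.0, k
rng = np.random.default_rng(3)
X = stats.gamma(k).rvs(size=4_000_000, random_state=rng)
dev = X - X.mean()                       # g = Id is 1-Lipschitz

for r in [2.0, 4.0, 6.0]:
    emp = np.mean(dev >= r)
    bound = np.exp(-r**2 / (2 * (a * r + b)))
    print(r, emp, bound, emp <= bound)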
Proof. Take g to be 1-Lipschitz and mean zero with respect to ν, then for any λ ≥ 0,

E[ g e^{λg} ] = Cov( g, e^{λg} ) ≤ E[ (e^{λg})′ τ_ν ] ≤ λ E[ e^{λg} τ_ν ] .    (34)

Furthermore,

E[ e^{λg} τ_ν ] ≤ a E[ (X − µ) e^{λ g(X)} ] + b E[ e^{λg} ] = λ a E[ g′ e^{λg} τ_ν ] + b E[ e^{λg} ] ≤ λ a E[ e^{λg} τ_ν ] + b E[ e^{λg} ] .

If λ < 1/a, this gives

E[ e^{λg} τ_ν ] ≤ ( b / (1 − λa) ) E[ e^{λg} ] .    (35)

Combining (34) and (35), we obtain, for any λ < 1/a,

E[ g e^{λg} ] ≤ ( λ b / (1 − λa) ) E[ e^{λg} ] .

Define J(λ) = log E[ e^{λg} ], λ ≥ 0. We thus have the following differential inequality,

J′(λ) ≤ λ b / (1 − λa) .

Since J(0) = 0, this implies that J(λ) ≤ λ²b/(2(1 − λa)). Equivalently, E[ e^{λg} ] ≤ e^{λ²b/(2(1−λa))}, which by the use of Chebyshev's inequality gives (33).
Let us now state a more general theorem.

Theorem 14 Assume that ν has a finite first moment, a density p with respect to the Lebesgue measure that has a connected support and denote τ_ν its Stein kernel. Set X a random variable of distribution ν. Take f a 1-Lipschitz function with mean zero with respect to ν and assume that f has an exponential moment with respect to ν, that is, there exists a > 0 such that E[ e^{a f(X)} ] < +∞. Then for any λ ∈ (0, a),

E[ e^{λ f(X)} ] ≤ E[ e^{λ² τ_ν(X)} ] .    (36)

Consequently, if we denote ψ_τ(λ) = ln E[ e^{λ² τ_ν(X)} ] ∈ [0, +∞] and ψ*_τ(t) = sup_{λ∈(0,a)} { tλ − ψ_τ(λ) } the Fenchel-Legendre dual function of ψ_τ, then for any t > 0,

P( f(X) > t ) ∨ P( f(X) < −t ) ≤ exp( −ψ*_τ(t) ) .    (37)
Theorem 14 states that the concentration of Lipschitz functions taken on a real random variable with existing Stein kernel is controlled by the behavior of the exponential moments of the Stein kernel itself - if it indeed admits finite exponential moments. However, these exponential moments seem rather hard to estimate in general.
Let us now briefly detail how to recover from Theorem 14 some results of Theorems 11 and 13, although with less accurate constants. If ‖τ_ν‖_∞ < +∞, then inequality (36) directly implies

E[ e^{λ f(X)} ] ≤ e^{λ² ‖τ_ν‖_∞} ,

which gives

P( f(X) > t ) ∨ P( f(X) < −t ) ≤ exp( −t²/(4 ‖τ_ν‖_∞) ) .

The latter inequality takes the form of Inequality (27) of Theorem 11, although with a factor 1/2 in the argument of the exponential in the right-hand side of the inequality.
Assume now, as in Theorem 13, that the Stein kernel τ_ν is sub-linear, that is there exist a, b ∈ R_+ such that τ_ν(x) ≤ a(x − µ) + b, where µ is the mean value of ν. Inequality (36) implies in this case,

E[ e^{λ f(X)} ] ≤ E[ e^{a λ² (X−µ)} ] e^{b λ²} .    (38)

The latter inequality being valid for any f being 1-Lipschitz and centered with respect to ν, we can apply it for f(X) = X − µ. This gives

E[ e^{λ(X−µ)} ] ≤ E[ e^{a λ² (X−µ)} ] e^{b λ²} .

Now, considering λ < a^{−1}, we have by Hölder's inequality, E[ e^{a λ² (X−µ)} ] ≤ E[ e^{λ(X−µ)} ]^{aλ}. Plugging this estimate into the last inequality and rearranging the terms of the inequality gives

E[ e^{λ(X−µ)} ] ≤ e^{b λ²/(1−λa)} .

Going back to inequality (38), we obtain, for any λ ∈ (0, a^{−1}),

E[ e^{λ f(X)} ] ≤ E[ e^{λ(X−µ)} ]^{aλ} e^{b λ²} ≤ e^{b λ² ( λa/(1−λa) + 1 )} = e^{b λ²/(1−λa)} .

By the use of the Cramér-Chernoff method, this gives the result of Theorem 13, although with a constant 1/2 in the argument of the exponential term controlling the deviations.
Proof. First note that Inequality (37) is a direct consequence of Inequality (36) via the use of the Cramér-Chernoff method (see for instance Section 2.2 of [BLM13]). To prove Inequality (36), also note that by Lemma 15 below, it suffices to prove that for any λ ∈ (0, a),

E[ λ f(X) e^{λ f(X)} ] ≤ E[ λ² τ_ν(X) e^{λ f(X)} ] .    (39)

Take λ ∈ (0, a); it holds by identity (7),

E[ f(X) e^{λ f(X)} ] = Cov( f(X), e^{λ f(X)} ) = E[ λ f′(X) L̄f(X) e^{λ f(X)} ] .

Hence, we obtain

E[ λ f(X) e^{λ f(X)} ] ≤ λ² E[ |f′(X)| τ_ν(X) e^{λ f(X)} ] ≤ E[ λ² τ_ν(X) e^{λ f(X)} ] .

Inequality (39) is thus proved, which completes the proof.
Lemma 15 Take X a random variable on a measurable space (X, T). Take g and h two measurable functions from X to R such that

E[ g(X) e^{g(X)} ] ≤ E[ h(X) e^{g(X)} ] < +∞ .    (40)

Then it holds,

E[ e^{g(X)} ] ≤ E[ e^{h(X)} ] .    (41)

Lemma 15 summarizes the essence of the argument used in the proof of Theorem 2.3 of [BGH01]. We could not find a reference in the literature for Lemma 15. We point out that Lemma 15 may have an interest by itself as it should be very handy when dealing with concentration inequalities using the Cramér-Chernoff method. Its scope may thus go beyond our framework related to the behavior of the Stein kernel.

Proof. Note that if E[ e^{h(X)} ] = +∞ then Inequality (41) is satisfied. We assume now that E[ e^{h(X)} ] < +∞ and set β = ln E[ e^{h(X)} ]. By setting U = h(X) − β, we get E[ e^U ] = 1 and so, by the duality formula for the entropy (see for instance Theorem 4.13 in [BLM13]), we have

E[ U e^{g(X)} ] ≤ Ent( e^{g(X)} ) = E[ g(X) e^{g(X)} ] − E[ e^{g(X)} ] ln E[ e^{g(X)} ] .

Furthermore, by (40),

E[ g(X) e^{g(X)} ] − β E[ e^{g(X)} ] ≤ E[ U e^{g(X)} ] .

Putting the above inequalities together, we obtain β ≥ ln E[ e^{g(X)} ], which is equivalent to (41).
5 Density formula and tail bounds
The following formulas are available when considering a density with connected support.
Proposition 16 Assume that X is a random variable with distribution ν having a density p with
connected support with respect to the Lebesgue measure on R. Take h ∈ L1 (ν) with E [h (X)] = 0
and assume that the function L̄h defined in (8) is ν-almost surely strictly positive. We have, for any
x_0, x ∈ Supp(ν),

p(x) = ( E[ h(X) 1_{X≥x_0} ] / L̄h(x) ) exp( −∫_{x_0}^{x} h(y)/L̄h(y) dy ) .    (42)

Consequently, if E[X] = 0 then for any x ∈ Supp(ν),

p(x) = ( E[|X|] / (2 τ_ν(x)) ) exp( −∫_0^{x} y/τ_ν(y) dy ) .    (43)

By setting T_h(x) = exp( −∫_{x_0}^{x} h(y)/L̄h(y) dy ), if the function h is ν-almost surely positive, differentiable on [x, +∞) and if the ratio T_h(y)/h(y) tends to zero when y tends to infinity, then we have

P( X ≥ x ) = E[ h(X) 1_{X≥x_0} ] ( T_h(x)/h(x) − ∫_x^{+∞} ( h′(y)/h²(y) ) T_h(y) dy ) .    (44)
Formula (42) can also be found in [Döb15], Equation (3.11), under the assumption that h is decreasing and for a special choice of x_0. Since E[h(X)] = 0, it is easily seen through its definition (8) that if h ≠ 0 ν-a.s. then L̄h > 0 ν-a.s. When h = Id, formulas (43) and (44) were first proved respectively in [NV09] and [Vie09], although with the assumption that the random variable X belongs to the space D^{1,2} of square integrable random variables with the natural Hilbert norm of their Malliavin derivative also square integrable.
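A sketch (added for illustration) checking (43) on the standard normal, for which τ ≡ 1 and E|X| = √(2/π); the formula then returns the Gaussian density exactly.

# Sketch: density formula (43) recovers the standard normal density.
import numpy as np
from scipy import stats

def p_via_43(x):
    E_abs = np.sqrt(2 / np.pi)                  # E|X| for X ~ N(0, 1)
    return E_abs / 2.0 * np.exp(-x**2 / 2)      # (E|X| / (2 tau(x))) exp(-int_0^x y dy)

for x in [0.0, 1.0, 2.0]:
    print(p_via_43(x), stats.norm.pdf(x))       # identical up to rounding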
Proof. Begin with Identity (42). As x_0 ∈ Supp(ν) and the function Lh defined in (4) is ν-almost surely positive, we have for any x ∈ Supp(ν),

Lh(x) = Lh(x_0) exp( ∫_{x_0}^{x} (ln(Lh))′(y) dy ) .

To conclude, note that Lh(x_0) = E[ h(X) 1_{X≥x_0} ] and (ln(Lh))′ = −h/L̄h. To prove (43), simply remark that it follows from (42) by taking h = Id and x_0 = 0.
It remains to prove (44). We have from (42), p = E[ h(X) 1_{X≥x_0} ] T_h/L̄h and, by definition of T_h, T_h′ = −h T_h/L̄h. Hence, integrating between x and +∞ gives

P( X ≥ x ) = E[ h(X) 1_{X≥x_0} ] ∫_x^{+∞} T_h(y)/L̄h(y) dy
= −E[ h(X) 1_{X≥x_0} ] ∫_x^{+∞} T_h′(y)/h(y) dy
= −E[ h(X) 1_{X≥x_0} ] ( [ T_h/h ]_x^{+∞} + ∫_x^{+∞} h′(y) T_h(y)/h²(y) dy )
= E[ h(X) 1_{X≥x_0} ] ( T_h(x)/h(x) − ∫_x^{+∞} h′(y) T_h(y)/h²(y) dy ) .
In order to take advantage of formulas (42) and (44), one has to use some information about L̄h.
The most common choice is h = Id, which corresponds to the Stein kernel L̄ (Id) = τ ν .
In the following theorem, we establish lower tail bounds when the Stein kernel is uniformly bounded away from zero. In particular, we prove in this case that the support of the measure is R.

Theorem 17 Take a real random variable X of distribution ν with density p with respect to the Lebesgue measure on R. Assume that E[X] = 0, p has a connected support and denote τ_ν the Stein kernel of ν. If τ_ν ≥ σ_min² > 0 ν-almost surely, then the density p of ν is positive on R and the function

R(x) = e^{x²/(2σ_min²)} ∫_x^{+∞} y p(y) dy

is nondecreasing on R_+. In particular, for any x ≥ 0,

∫_x^{+∞} y p(y) dy ≥ E[ (X)_+ ] e^{−x²/(2σ_min²)} .    (45)

By symmetry, for any x ≤ 0,

−∫_{−∞}^{x} y p(y) dy ≥ E[ (X)_− ] e^{−x²/(2σ_min²)} .    (46)

Assume in addition that the function L(x) = x^{1+β} p(x) is nonincreasing on [s, +∞), s > 0. Then for all x ≥ s, it holds

P( X ≥ x ) ≥ ( 1 − 1/β ) ( E[(X)_+] / x ) exp( −x²/(2σ_min²) ) .    (47)

Alternatively, assume that there exists α ∈ (0, 2) such that lim sup_{x→+∞} x^{−α} log τ_ν(x) < +∞. Then for any δ ∈ (0, 2), there exist L, x_0 > 0 such that, for all x > x_0,

P( X ≥ x ) ≥ ( L/x ) exp( −x²/((2 − δ) σ_min²) ) .    (48)
The results presented in Theorem 17 can be found in [NV09] under the additional assumption,
related to the use of Malliavin calculus, that the random variable X ∈ D1,2 .
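For the standard normal distribution (σ_min² = 1, E[X_+] = 1/√(2π)) the lower bound (45) is attained with equality, since ∫_x^{+∞} y φ(y) dy = φ(x); a small sketch (added for illustration):

# Sketch: lower bound (45) is an equality for X ~ N(0, 1).
import numpy as np
from scipy import stats

for x in [0.5, 1.0, 2.0]:
    lhs = stats.norm.pdf(x)                             # = int_x^inf y phi(y) dy
    rhs = (1 / np.sqrt(2 * np.pi)) * np.exp(-x**2 / 2)  # E[X_+] exp(-x^2 / 2)
    print(x, lhs, rhs)                                  # equal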
Proof. For any smooth nondecreasing function ϕ,

E[ X ϕ(X) ] = E[ τ_ν(X) ϕ′(X) ] ≥ σ_min² E[ ϕ′(X) ] .

Take ϕ(x) = min( (x − c)_+, ε ), for some c ∈ R and ε > 0. Then

E[ X ϕ(X) ] = ∫_c^{c+ε} x (x − c) p(x) dx + ε ∫_{c+ε}^{+∞} x p(x) dx

and E[ ϕ′(X) ] = P( X ∈ (c, c + ε] ). Dividing the latter two terms by ε and letting ε tend to zero gives

∫_c^{+∞} x p(x) dx ≥ σ_min² p(c) .    (49)

Now set V(c) = ∫_c^{+∞} x p(x) dx. Inequality (49) writes, for any c ≥ 0,

( c/σ_min² ) V(c) ≥ −V′(c) .

Then define, for any c ≥ 0,

R(c) = V(c) exp( c²/(2σ_min²) ) .

We can differentiate R and we have

R′(c) = ( V′(c) + ( c/σ_min² ) V(c) ) exp( c²/(2σ_min²) ) ≥ 0 .
In particular R (c) ≥ R (0), which gives (45). As τ −X (x) = τ X (−x), we deduce by symmetry that
(46) also holds. The proof of inequalities (47) and (48) follows from the same arguments as in the
proof of points (ii) and (ii)’, Theorem 4.3, [NV09]. We give them, with slight modifications, for the
sake of completeness. By integration by parts, we have
V(c) = c P( X ≥ c ) + ∫_c^{+∞} P( X ≥ x ) dx .

We also have, for x > 0,

P( X ≥ x ) = ∫_x^{+∞} ( y^{1+β} p(y) / y^{1+β} ) dy ≤ x^{1+β} p(x) ∫_x^{+∞} dy / y^{1+β} = x p(x) / β .
Hence,

V(c) ≤ c P( X ≥ c ) + β^{−1} ∫_c^{+∞} x p(x) dx = c P( X ≥ c ) + V(c)/β ,

or equivalently,

P( X ≥ c ) ≥ ( 1 − 1/β ) V(c)/c .

The conclusion follows by combining the latter inequality with inequality (45). It remains to prove (48). By formula (43) applied with h(y) ≡ y - note that this is possible since by assumption τ_ν > 0 on R -, it holds

p(x) = ( E[|X|] / (2τ_ν(x)) ) exp( −∫_0^{x} y/τ_ν(y) dy ) ≥ ( E[|X|] / (2τ_ν(x)) ) exp( −x²/(2σ_min²) ) .

Let us fix ε > 0. By assumption on τ_ν, we get that there exists a positive constant C such that, for x large enough,

p(x) ≥ C exp( −x²/(2σ_min²) − x^α ) ≥ C exp( −x²/((2 − ε) σ_min²) ) .

Hence, for x large enough,

P( X ≥ x ) ≥ C ∫_x^{+∞} exp( −y²/((2 − ε) σ_min²) ) dy .

The conclusion now easily follows from the following classical inequality: ∫_x^{+∞} e^{−y²/2} dy ≥ ( x/(1 + x²) ) exp( −x²/2 ) .
In the following proposition, we give some further tail bounds under a variety of assumptions on the Stein kernel. We omit the proof as it follows directly from the same arguments as in the proof of Corollary 4.5 in [Vie09], where they are derived under the assumption that X ∈ D^{1,2}.

Proposition 18 Take a real random variable X of distribution ν with density p with respect to the Lebesgue measure on R. Assume that E[|X|] < +∞, p has a connected support and denote τ_ν the Stein kernel of ν. If there exist c ∈ (0, 1) and x_0 > 1 such that for all x > x_0, τ_ν(x) ≤ c x², then there exists a positive constant L such that, for all x > x_0,

P( X ≥ x ) ≥ ( L/x ) exp( −∫_0^{x} y/τ_ν(y) dy ) .

If in addition τ_ν ≥ σ_min² > 0 ν-almost surely, then

P( X ≥ x ) ≥ ( L/x ) exp( −x²/(2σ_min²) ) .

If there exists rather a positive constant c_− ∈ (0, c] such that τ_ν(x) ≥ c_− x² for all x > x_0, then there exists a positive constant K such that for all x > x_0,

P( X ≥ x ) ≥ K / x^{1+1/c_−} .

If there exist instead p ∈ (0, 2) and c_p > 0 such that, for all x > x_0, τ_ν(x) ≥ c_p x^p, then there exists a positive constant H such that, for all x > x_0,

P( X ≥ x ) ≥ ( H/x ) exp( −x^{2−p}/((2 − p) c_p) ) .

In the last two points, if the inequalities on τ_ν in the hypotheses are reversed, then the conclusions are also reversed, without changing any of the constants.
Acknowledgments
I owe thanks to Guillaume Poly for an instructive discussion about Stein’s method, to Yvik Swan for
a number of pointers to the literature and to Jon Wellner as well as Max Fathi for useful comments. I
also warmly thank Michel Bonnefont and Aldéric Joulin for nice discussions and insightful comments
related to some (weighted) functional inequalities.
References
[ABJ16]
M. Arnaudon, M. Bonnefont, and A. Joulin. Intertwinings and generalized
Brascamp-Lieb inequalities. arXiv preprint arXiv:1602.03836 (2016).
[BCG08]
D. Bakry, P. Cattiaux, and A. Guillin. Rate of convergence for ergodic continuous
Markov processes: Lyapunov versus Poincaré. J. Funct. Anal. 254(3), 727–759 (2008).
[BGH01]
S. G. Bobkov, F. Götze, and C. Houdré. On Gaussian and Bernoulli covariance
representations. Bernoulli 7(3), 439–451 (2001).
[BGL14]
D. Bakry, I. Gentil, and M. Ledoux. “Analysis and geometry of Markov diffusion
operators”, volume 348 of “Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]”. Springer, Berlin (2014).
[BH97]
S. G. Bobkov and C. Houdré. Isoperimetric constants for product probability measures. Ann. Probab. 25(1), 184–205 (1997).
[BJ14]
M. Bonnefont and A. Joulin. Intertwining relations for one-dimensional diffusions
and application to functional inequalities. Potential Anal. 41(4), 1005–1031 (2014).
[BJM16a] Michel Bonnefont, Aldéric Joulin, and Yutao Ma. A note on spectral gap and
weighted Poincaré inequalities for some one-dimensional diffusions. ESAIM Probab. Stat.
20, 18–29 (2016).
[BJM16b] Michel Bonnefont, Aldéric Joulin, and Yutao Ma. Spectral gap for spherically
symmetric log-concave probability measures, and beyond. J. Funct. Anal. 270(7), 2456–
2482 (2016).
[BL76]
H. J. Brascamp and E. H. Lieb. On extensions of the Brunn-Minkowski and Prékopa-Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation. J. Functional Analysis 22(4), 366–389 (1976).
[BL97]
S. Bobkov and M. Ledoux. Poincaré’s inequalities and Talagrand’s concentration
phenomenon for the exponential distribution. Probab. Theory Related Fields 107(3),
383–400 (1997).
[BL09a]
S. G. Bobkov and M. Ledoux. On weighted isoperimetric and Poincaré-type inequalities. In “High dimensional probability V: the Luminy volume”, volume 5 of “Inst. Math.
Stat. (IMS) Collect.”, pages 1–29. Inst. Math. Statist., Beachwood, OH (2009).
[BL09b]
S. G. Bobkov and M. Ledoux. Weighted Poincaré-type inequalities for Cauchy and
other convex measures. Ann. Probab. 37(2), 403–427 (2009).
[BL14]
S. Bobkov and M. Ledoux. One-dimensional empirical measures, order statistics, and
Kantorovich transport distances (2014). To appear in the Memoirs of the Amer. Math.
Soc.
[BLM13]
S. Boucheron, G. Lugosi, and P. Massart. “Concentration inequalities”. Oxford University Press, Oxford (2013). A nonasymptotic theory of independence, With a
foreword by Michel Ledoux.
[BRW18]
A.D. Barbour, N. Ross, and Y. Wen. Central moment inequalities using Stein’s
method. arXiv preprint arXiv:1802.10225 (2018).
[CCEL13] E. A. Carlen, D. Cordero-Erausquin, and E. H. Lieb. Asymmetric covariance
estimates of Brascamp-Lieb type and related inequalities for log-concave measures. Ann.
Inst. Henri Poincaré Probab. Stat. 49(1), 1–12 (2013).
[CD10]
S. Chatterjee and P. S. Dey. Applications of Stein’s method for concentration
inequalities. Ann. Probab. 38(6), 2443–2485 (2010).
[CEG17]
D. Cordero-Erausquin and N. Gozlan. Transport proofs of weighted Poincaré
inequalities for log-concave distributions. Bernoulli 23(1), 134–158 (2017).
[CFP17]
T. A. Courtade, M. Fathi, and A. Pananjady. Existence of Stein Kernels under a
Spectral Gap, and Discrepancy Bound. arXiv preprint arXiv:1703.07707 (2017).
[CGGR10] P. Cattiaux, N. Gozlan, A. Guillin, and C. Roberto. Functional inequalities for
heavy tailed distributions and application to isoperimetry. Electron. J. Probab. 15, no.
13, 346–385 (2010).
[CGJ18]
N. Cook, L. Goldstein, and T. Johnson. Size biased couplings and the spectral gap
for random regular graphs. Ann. Probab. 46(1), 72–125 (2018).
[CGS11]
L. H. Y. Chen, L. Goldstein, and Q.-M. Shao. “Normal approximation by Stein’s
method”. Probability and its Applications (New York). Springer, Heidelberg (2011).
[CGW11]
P. Cattiaux, A. Guillin, and L.-M. Wu. Some remarks on weighted logarithmic
Sobolev inequality. Indiana Univ. Math. J. 60(6), 1885–1904 (2011).
[Cha07]
S. Chatterjee. Stein’s method for concentration inequalities. Probab. Theory Related
Fields 138(1-2), 305–321 (2007).
[Cha12]
S. Chatterjee. A new approach to strong embeddings. Probab. Theory Related Fields
152(1-2), 231–264 (2012).
[Cha14]
S. Chatterjee. A short survey of Stein’s method. In “Proceedings of the International
Congress of Mathematicians—Seoul 2014. Vol. IV”, pages 1–24. Kyung Moon Sa, Seoul
(2014).
[Döb15]
C. Döbler. Stein’s method of exchangeable pairs for the beta distribution and generalizations. Electron. J. Probab. 20, no. 109, 34 (2015).
[Düm10]
Lutz Dümbgen. Bounding Standard Gaussian Tail Probabilities. Technical Report,
(2010).
[EV13]
R. Eden and F. Viens. General upper and lower tail estimates using Malliavin calculus
and Stein’s equations. In “Seminar on Stochastic Analysis, Random Fields and Applications VII”, volume 67 of “Progr. Probab.”, pages 55–84. Birkhäuser/Springer, Basel
(2013).
[FN17]
M. Fathi and B. Nelson. Free Stein kernels and an improvement of the free logarithmic
Sobolev inequality. Adv. Math. 317, 193–223 (2017).
[GG11a]
S. Ghosh and L. Goldstein. Applications of size biased couplings for concentration
of measures. Electron. Commun. Probab. 16, 70–83 (2011).
[GG11b]
S. Ghosh and L. Goldstein. Concentration of measures via size-biased couplings.
Probab. Theory Related Fields 149(1-2), 271–278 (2011).
[GGR11]
S. Ghosh, L. Goldstein, and M. Raič. Concentration of measure for the number of
isolated vertices in the Erdös–Rényi random graph by size bias couplings. Statist. Probab.
Lett. 81(11), 1565–1570 (2011).
[GIs14]
L. Goldstein and Ü. I¸slak. Concentration inequalities via zero bias couplings. Statist.
Probab. Lett. 86, 17–23 (2014).
[Goz10]
N. Gozlan. Poincaré inequalities and dimension free concentration of measure. Ann.
Inst. Henri Poincaré Probab. Stat. 46(3), 708–739 (2010).
[LLJ16]
Q. Liu, J. Lee, and M. Jordan. A kernelized Stein discrepancy for goodness-of-fit
tests. In “International Conference on Machine Learning”, pages 276–284 (2016).
[LNP15]
Michel Ledoux, Ivan Nourdin, and Giovanni Peccati. Stein’s method, logarithmic Sobolev and transport inequalities. Geom. Funct. Anal. 25(1), 256–306 (2015).
[LNP17]
M. Ledoux, I. Nourdin, and G. Peccati. A Stein deficit for the logarithmic Sobolev
inequality. Sci. China Math. 60(7), 1163–1180 (2017).
[LRS17]
Christophe Ley, Gesine Reinert, and Yvik Swan. Stein’s method for comparison
of univariate distributions. Probab. Surv. 14, 1–52 (2017).
[MG16]
L. Mackey and J. Gorham. Multivariate Stein factors for a class of strongly log-concave distributions. Electron. Commun. Probab. 21, Paper No. 56, 14 (2016).
[MJC+ 14] L. Mackey, M. I. Jordan, R. Y. Chen, B. Farrell, and J. A. Tropp. Matrix
concentration inequalities via the method of exchangeable pairs. Ann. Probab. 42(3),
906–945 (2014).
[MO13]
G. Menz and F. Otto. Uniform logarithmic sobolev inequalities for conservative spin
systems with super-quadratic single-site potential. Annals of Probability 41, 2182–2224
(2013).
[NV09]
I. Nourdin and F. G. Viens. Density formula and concentration inequalities with
Malliavin calculus. Electron. J. Probab. 14, no. 78, 2287–2309 (2009).
[PMT16]
D. Paulin, L. Mackey, and J. A. Tropp. Efron-Stein inequalities for random matrices. Ann. Probab. 44(5), 3431–3473 (2016).
[Ste72]
C. Stein. A bound for the error in the normal approximation to the distribution of
a sum of dependent random variables. In “Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif.,
1970/1971), Vol. II: Probability theory”, pages 583–602. Univ. California Press, Berkeley, Calif. (1972).
[Ste86]
C. Stein. “Approximate computation of expectations”, volume 7 of “Institute of Mathematical Statistics Lecture Notes—Monograph Series”. Institute of Mathematical Statistics, Hayward, CA (1986).
[SW14]
Adrien Saumard and Jon A. Wellner. Log-concavity and strong log-concavity: A
review. Statist. Surv. 8, 45–114 (2014).
[SW18a]
A. Saumard and J. A. Wellner. Efron’s monotonicity property for measures on R2 .
J. Multivariate Anal. 166C, 212–224 (2018).
[SW18b]
A. Saumard and J. A. Wellner. On the isoperimetric constant, covariance inequalities and Lp -poincaré inequalities in dimension one. Bernoulli (2018). to appear,
arXiv:1711.00668.
[TM15]
J. Treilhard and A.-R. Mansouri. Concentration inequalities via Malliavin calculus
with applications. Electron. Commun. Probab. 20, no. 36, 14 (2015).
[Vie09]
F. G. Viens. Stein’s lemma, Malliavin calculus, and tail bounds, with application to
polymer fluctuation exponent. Stochastic Process. Appl. 119(10), 3671–3698 (2009).
| 1 |
Artificial Intelligence Based Malware Analysis
Avi Pfeffera,∗, Brian Ruttenberga,∗, Lee Kellogga , Michael Howarda , Catherine Calla , Alison O’Connora ,
Glenn Takataa , Scott Neal Reillya , Terry Pattena , Jason Taylora , Robert Halla , Arun Lakhotiab , Craig
Milesb , Dan Scofieldc , Jared Frankc
arXiv:1704.08716v1 [cs.CR] 27 Apr 2017
a Charles River Analytics, 625 Mt. Auburn St., Cambridge, MA, 02138
b Software Research Lab, University of Louisiana at Lafayette, Lafayette, LA
c Assured Information Security, Rome, NY
Abstract
Artificial intelligence methods have often been applied to perform specific functions or tasks in the cyber–
defense realm. However, as adversary methods become more complex and difficult to divine, piecemeal
efforts to understand cyber–attacks, and malware–based attacks in particular, are not providing sufficient
means for malware analysts to understand the past, present and future characteristics of malware.
In this paper, we present the Malware Analysis and Attribution using Genetic Information (MAAGI)
system. The underlying idea behind the MAAGI system is that there are strong similarities between malware
behavior and biological organism behavior, and applying biologically inspired methods to corpora of malware
can help analysts better understand the ecosystem of malware attacks. Due to the sophistication of the
malware and the analysis, the MAAGI system relies heavily on artificial intelligence techniques to provide
this capability. It has already yielded promising results over its development life, and will hopefully inspire
more integration between the artificial intelligence and cyber–defense communities.
Keywords: cyber–defense, malware, probabilistic models, hierarchical clustering, prediction
1. Introduction
Artificial intelligence (AI) ideas and methods have been successfully applied in countless domains to
learn complex processes or systems, make informed decisions, or model cognitive or behavioral processes.
While there is a tremendous need for artificial intelligence applications that can automate tasks and supplant
human analysis, there is also still a place for artificial intelligence technology to enhance human analysis
of complex domains and help people understand the underlying phenomenon behind human processes. For
instance, artificial intelligence is often applied in the medical field to assist in diagnostic efforts by reducing
the space of possible diagnoses for a patient and helping medical professionals understand the complex
processes underlying certain diseases [1].
Cyber security and defense is one field that has tremendously benefited from the explosive growth of
artificial intelligence. Spam filters, commercial anti–malware software, and intrusion detection systems are just a few examples of applications of artificial intelligence to the cyber security realm [2]. However, as in the medical diagnosis field, human judgment and analysis are often needed to reason about complex processes in the cyber security domain. Nevertheless, artificial intelligence tools and concepts can be used to help people
∗ Corresponding authors.
Email addresses: [email protected] (Avi Pfeffer), [email protected] (Brian Ruttenberg)
Preprint submitted to Elsevier, May 1, 2017
discover, understand and model these complicated domains. Malware analysis is one task within the cyber
defense field that can especially benefit from this type of computer–based assistance.
Loosely defined, malware is just unwanted software that performs some non–benign, often nefarious,
operation on any system. Unfortunately, most modern computer users are all too familiar with the concept
of malware; McAfee reports almost 200 million unique malware binaries in their archive, increasing
at nearly 100,000 binaries per day [3]. As a result of this threat, there has been a concerted push within
the cyber defense community to improve the mitigation of malware propagation and reduce the number of
successful malware attacks by detailed analysis and study of malware tactics, techniques and procedures.
Principally, malware analysis is focused on several key ideas: understanding how malware operates; determining the relationship between similar pieces of malware; modeling and predicting how malware changes over time; attributing authorship; and understanding how new ideas and tactics propagate to new malware.
These malware analysis tasks can be very difficult to perform. First, the sheer number of new malware
generated makes it extremely hard to automatically track patterns of malware over time, let alone allow a
human to manually explore different malware binaries. Second, the relationships and similarities between
different pieces of malware can be very complex and difficult to find in a large corpus of malware. Finally,
malware authors also purposefully obfuscate their methods to prevent analysis, such as by encrypting their
binaries (also known as packing).
Due to these difficulties, malware analysis could tremendously benefit from artificial intelligence techniques, where automated systems could learn models of malware evolution over time, cluster similar types
of malware together, and predict the behavior or prevalence of future malware. Cyber defenders and malware analysts could use these AI–enabled malware analysis systems to explore the ecosystem of malware,
determine new and significant threats, or develop new defenses.
In this paper, we describe the Malware Analysis and Attribution Using Genetic Information (MAAGI)
system. This malware analysis system relies heavily on AI techniques to assist malware analysts in their
quest to understand, model and predict the malware ecosystem. The underlying idea behind the MAAGI
system is that there are strong similarities between malware behavior and biological organism behavior.
As such, the MAAGI system borrows analysis methods and techniques from the genomics, phylogenetics,
and evolution fields and applies them to malware ecosystems. Given the large amount of existing (and
future) malware, and the complexity of malware behavior, AI methods are a critical tool needed to apply
these biological concepts to malware analysis. While the MAAGI system is moving towards a production–level deployment, it has already yielded promising results over its development life and has the potential to greatly expand the capabilities of malware analysts.
2. Background and Motivation
The need for AI applications in malware analysis arises from the typical work flow of malware analysts,
and the limitations of the methods and tools they currently use to support that work flow. After malware
is collected, the analysis of that malware occurs in two main phases. The first is a filtering or triage stage,
in which malware are selected for analysis according to some criteria. Those criteria include whether or not they have been seen before, and if not, whether or not they are interesting in some way. If they can be labeled as a member of a previously analyzed malware family, a previous analysis of that family can be
utilized and compared against to avoid repetition of effort. If they are novel, and if initial inspection proves
them to be worthy of further analysis, they are passed to the second phase where a deep–dive analysis is
performed.
The triage phase typically involves some signature or hash–based filtering [4]. Unfortunately, this process
is only based on malware that has been explicitly analyzed before, and is not good at identifying previously–
seen malware that has been somehow modified or obfuscated to form a new variant. In these cases the
responsibility generally falls to a human analyst to recognize and label the malware, where it risks being
mistaken as truly novel malware and passed on for deep–dive analysis. When strictly relying on human–
driven processes for such recognition and classification tasks there are bound to be oversights due to the
massive volume and velocity with which new malware variants are generated and collected. In this case,
oversights lead to repetition of human effort, which exacerbates the problem, as well as missed opportunities
of recognizing and analyzing links between similar malware. Intelligent clustering and classification methods
are needed to support the triage process, not to replace the human analysts but to support them, by learning
and recognizing families of similar malware among very large collections where signatures and hashes fail,
suggesting classifications of incoming malware and providing evidence for those distinctions. A lot of work
has been done towards applying clustering techniques to malware, including approaches based on locality–
sensitive hashing [5], prototype–based hierarchical clustering and classification [6], and other incremental
hierarchical approaches [7, 8]. The question of whether a truly novel sample is interesting or not is also
an important one. With so many novel samples arriving every day it becomes important to prioritize
those samples to maximize the utilization of the best human analysts on the greatest potential threats.
Automated prioritization techniques based on knowledge learned from past data could provide a baseline
queue of incoming malware without any initial human effort required.
The deep–dive phase can also strongly benefit from AI applications. A typical point of interest investigated during this phase is to recognize high–level functional goals of malware; that is, to identify the purpose
of the malware, and the intention or motivation of its author. Currently this task is performed entirely by a
human expert through direct inspection of the static binary using standard reverse–engineering tools such
as a disassembler or a decompiler, and perhaps a debugger or a dynamic execution environment. This has
been a very effective means of performing such analysis. However, it relies heavily on the availability of
malware reverse–engineering experts. Also, characterizing complex behaviors and contextual motivation of
an attacker requires the ability of those experts to recognize sometimes complicated sequences of interrelated behaviors across an entire binary, which can be a very difficult task, especially in large or obfuscated
malware samples.
Besides such single–malware factors, other goals of the deep–dive phase could benefit from intelligent
cross–malware sample analysis. For example, the evolution of malware families is of great interest to analysts
since it indicates new capabilities being added to malware, new tactics or vulnerabilities being exploited, the
changing goals of malware authors, and other such dynamic information that can be used to understand and
perhaps even anticipate the actions of the adversary. While it is possible for a human analyst to manually
compare the malware samples in a small family and try to deduce simple single–inheritance patterns, for
example, it quickly becomes a very difficult task when large families or complex inheritance patterns are
introduced [9]. Research has been done towards automatically learning lineages of malware families [10],
and even smarter methods based on AI techniques could provide major improvements and time savings over
human analyst–driven methods. Besides the technical evolution of malware, other cross–family analyses
could benefit from automated intelligent techniques. For example, the life cycle patterns of malware, which
can be useful to understand and anticipate growth rates and life spans of malware families, can be difficult
for an analyst to accurately identify since the data sets are so large.
The goals of the malware analyst, and the various types of information about binaries gathered during the
triage and deep–dive phases, are broad. The level of threat posed by a piece of malware, the nature of that
threat, the motivation of the attacker, the novelty of the malware, how it fits into the evolution of a family
and other factors can all be of interest. Also, these factors may be of particular interest when combined with
one another. For example, combining the output of clustering, lineage and functional analyses could reveal
the method by which the authors of a particular malware family made the malware’s credential stealing
module more stealthy in recent versions, suggesting that other families may adopt similar techniques. For
this reason, an automated malware analysis tool should take all of these types of information into account,
and should have solutions for detecting all of them and presenting them to the analyst in a cohesive manner.
3. The Cyber Genome Program
The MAAGI system was developed under the US Defense Advanced Research Projects Agency's (DARPA)
Cyber Genome program. One of the core research goals of this program is the development of new analysis
techniques to automate the discovery, identification, and characterization of malware variants and thereby
accelerate the development of effective responses [11].
The MAAGI system provides the capabilities described in Sec. 2 based on two key insights, each of which
produces a useful analogy. The first is that malware is very rarely created de novo and in isolation from
the rest of the malware community. Attackers often reuse code and techniques from one malware product
to the next. They want to avoid having to write malware from scratch each time, but they also want to
avoid detection, so they try to hide similarities between new and existing malware. If we can understand
reuse patterns and recognize reused elements, we will be able to connect novel malware to other malware
from which it originated, create a database of malware with relationships based on similarity that analysts
can interactively explore, and predict and prepare to defend against future attacks.
This insight suggests a biological analogy, in which a malware sample is compared to a living organism.
Just like a living organism, the malware sample has a phenotype, consisting of its observed properties and
behavior (e.g. eye color for an organism, type of key–logger used by malware). The phenotype is not itself
inherited between organisms; rather it is the expression of the genotype which is inherited. Likewise, it is
the source code of the malware that is inherited from one sample to the next. Similar to biological evolution,
malware genes that are more "fit" (i.e., more successful) are more likely to be copied and propagated by
other malware authors. By reverse engineering the malware to discover its intrinsic properties, we can get
closer to the genetic features that are inherited.
The second insight is that the function of malware has a crucial role to play in understanding reuse
patterns. The function of malware, that is, what it is trying to accomplish, is harder to change than the details
of how it is accomplished. Therefore, analysis of the function of malware is a central component of our
program. In our analysis of function, we use a linguistic analogy: a malware sample is like a linguistic
utterance. Just like an utterance, malware has temporal structure, function, and context in which it takes
place. The field of functional linguistics studies utterances not only for their meaning, but also for the
function they are trying to accomplish and the context in which they occur.
The MAAGI system provides a complete malware analysis framework based on these biological and
linguistic analogies. While some of the ideas in biology and linguistics have been applied to malware analysis
in the past, the MAAGI system integrates all of these novel analysis techniques into a single workflow, where
the output of one analysis can be fed to another type of analysis. While biology and linguistics provide
the foundational theory for these malware analysis schemes, they could not be implemented or integrated
without using sophisticated artificial intelligence methods.
4. Overview of MAAGI system
The Malware Analysis and Attribution using Genetic Information (MAAGI) system is a complete, automated malware analysis system that applies AI techniques to support all phases of traditional malware
analysis work flows, including reverse–engineering, high–level functional analysis and attacker motivational
characterization, clustering and family classification of malware, evolutionary lineage analysis of malware
families, shared component identification, and predicting the future of malware families. An overview of the
system is shown in Fig. 1.
Malware is first uploaded to the system, then submitted for static reverse–engineering and dynamic execution and tracing, via HTTP requests. The static analysis is built on top of the Hex-Rays IDA disassembler.
In addition to other features of the binary including strings and API imports, it generates a semantic representation of the function of code blocks, called BinJuice [12]. This code abstraction has been shown to
be more robust to obfuscation, compiler changes, and other cross–version variation within malware families,
which improves performance on clustering, lineage, and other types of analysis that rely on measuring the
similarity between binaries.
As malware are reversed, they are fed into our clustering system where they are incrementally organized
into a hierarchy of malware families. The system first classifies the new malware into the hierarchy at the
leaves, then re–shapes sections of the hierarchical structure as needed to maintain an optimal organization
of the malware into families. Besides providing the basis for an efficient method of incremental clustering,
maintaining a database of malware as a hierarchy offers other benefits, such as the ability to perform analysis
tasks on nodes at any level of the hierarchy, which are essentially representatives of malware families at
various levels of granularity.
Figure 1: High level architecture of the MAAGI system.
The hierarchical clustering of malware can then be viewed and explored remotely through the MAAGI
user interface, our analysis and visualization tool. To understand malware of interest in the context of
their family, users can view and compare the features of various malware and their procedures, including
strings, API calls, code blocks, and BinJuice. Search functions are also available to allow users to find
related samples according to their features, including searches for specific features, text–based searches, and
searches for the most similar procedures in the database to a particular procedure, which uses MinHash [13]
as an approximation of the similarity between sets of BinJuice blocks.
From the user interface, users can initiate our various other analyses over subsets of the database. For
example, lineage analysis can be run over the set of malware currently selected within the tool. Lineage
analysis uses a probabilistic model of malware development, built using the Figaro probabilistic programming
language [14], to learn an inheritance graph over a set of malware. This directed graph represents the pattern
of code adoption and revision across a family of malware, and can include complex structures with branches,
merges, and multiple roots. The returned lineage is visualized in the user interface where the malware can
once again be explored and compared to one another. Learning a lineage over a family can assist malware
analysts in understanding the nature of the relationships between similar samples, and provide clues concerning
the strategy and motivation behind changes in code and functionality adopted during development of the
malware.
To further understand how code sharing occurs between malware samples, users can also run the component identification process, which identifies cohesive, functional code components shared across malware
binaries. Component identification uses a multi–step clustering process. In the first step it clusters together
similar procedures across the set of binaries using Louvain clustering, which is a greedy agglomerative clustering method [15]. In the second step it proceeds to cluster these resulting groups of procedures together
based on the samples in which they appear. This results in the specification of a number of components,
which are collections of procedures within and across malware binaries that together comprise a functional
unit.
MAAGI also includes a functional analysis capability that can determine the functional goals of malware,
or the motivational context behind the attack. When executed over any sample in the database, it analyzes
the static features of the malware using systemic functional grammars (SFGs). Systemic functional linguistics is an approach to the study of natural language that focuses on the idea of speech as a means to serve
a specific function. The application of SFGs in this case leverages the analogy of malware as an application
of the language of Win32 executable code for the purpose of exhibiting particular malicious behaviors. The
grammars themselves are created by subject matter experts and contain structured descriptions of the goals
of malicious software and the context in which such attacks occur. The result of MAAGI’s functional analysis
is a human–readable report describing the successful parses through the grammar, ranked according to the
level of confidence, along with the evidence found within the binary to justify those functional designations.
The final element of the MAAGI system is a trend analysis and prediction capability. This tool predicts
future attributes of malware families, given only the first malware sample discovered from that family. These
attributes include such factors as the number of distinct versions within the family, the amount of variability
within the family, and the length of time until the final variant will be seen. The regression models for these predictions are learned from data using machine learning techniques including support vector machines (SVM) [16].
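The paper does not give implementation details for these regressors; a generic scikit-learn sketch in the spirit of the SVM reference [16] might look like the following, with the feature and target values invented purely for illustration.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical training data: features extracted from the first sample seen in each
# known family, paired with the number of distinct versions that family eventually produced.
X_train = np.array([[120, 4, 0.31], [300, 9, 0.55], [80, 2, 0.12]])   # e.g. size, imports, entropy
y_train = np.array([5, 14, 2])                                        # versions ultimately observed

# Standardize features, then fit a support vector regressor (hyperparameters are placeholders).
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_train, y_train)

first_sample_of_new_family = np.array([[150, 5, 0.4]])
print(model.predict(first_sample_of_new_family))   # predicted eventual family size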
5. MAAGI Persistent Malware Hierarchy
While the focus of most malware analysis is to learn as much as possible about a particular malware
binary, the analysis is both facilitated and improved by first clustering incoming malware binaries into
families along with all previously seen malware. First, it assists in the triage process by immediately
associating new malware binaries with any similar, previously–seen malware. This allows those binaries
to be either assigned to analysts familiar with that family or lowered in priority if the family to which
they are assigned is deemed uninteresting, or if the binary is only superficially different from previously–
analyzed malware. Second, providing clustering results to an analyst improves the potential quality of the
information gathered by the deep–dive analysis process. Rather than looking at the malware in isolation, it
can be analyzed within the context of its family, allowing analysts to re–use previous analyses while revealing
potentially valuable evolutionary characteristics of the malware, such as newly–added functionality or novel
anti–analysis techniques.
To support clustering within a real–world malware collection and analysis environment, however, it is
important that the operation be performed in an online fashion. As new malware are collected and added to
the database, they need to be incorporated into the existing set of malware while maintaining the optimal
family assignments across the entire database. The reason for this is that clustering of high–dimensional
data such as malware binaries is by nature a highly time–complex operation. It is simply not reasonable to
cluster the entire database of malware each time a new binary is collected. To support the online clustering
of malware we propose a novel incremental method of maintaining a hierarchy of malware families. This
hierarchical representation of the malware database supports rapid, scalable online clustering; provides
a basis for efficient classification of new malware into the database; describes the complex evolutionary
relationships between malware families; and provides a means of performing analysis over entire families of
malware at different granularities by representing non–leaf nodes in the hierarchy as agglomerations of the
features of their descendants.
In the following sections, we will outline the hierarchical data structure itself, the online algorithm,
the specific choices made for our implementation of the algorithm, the results of some of the experiments
performed over actual malware data sets, and conclusions we have drawn based on our work. For a more
thorough description of our approach, see our paper titled Hierarchical Management of Large-Scale Malware
Data [? ], which has been submitted for publication, and from which much of this material originated.
5.1. Hierarchy
At the center of our approach is a hierarchical data structure. Malware are inserted into the hierarchy
as they are introduced to the system. The various steps of our incremental algorithm operate over this data
structure, and it has a number of properties and restrictions to support these operations.
The hierarchy is a tree structure with nodes and edges, and no restriction on the number of children a node
may have. There is a single root at the top and the malware themselves are at the leaves. All of the nodes
that are not leaves are called exemplars, and they act as representatives of all of the malware below them at
the leaves. Just as the binaries are represented in the system by some features, exemplars are represented
in the same feature space based on the binaries below them. Sec. 5.3 describes some methods of building
exemplars. This common representation of exemplars allows them to be compared to malware binaries, or
to one another, using common algorithms and distance functions. The exemplars that have binaries as their
children are called leaf exemplars and simultaneously define and represent malware families. Leaf exemplars
must only have binaries as children and not any other exemplars. There is no restriction that the hierarchy
must be balanced, and if the hierarchy is lopsided, binaries in different families may have very different path
distances from the root.
This hierarchy design is based on the concept of a dendrogram, which is the output of many hierarchical clustering algorithms. In a dendrogram, like our hierarchy, each node represents some subset of the dataset,
with the most general nodes at the top and the most specific nodes at the bottom. Each level represents some
clustering of the leaves. As you traverse up the tree from the leaf exemplars, the exemplars progressively
represent more and more families, and the cohesion between the binaries within those families becomes
weaker. Finally, the root represents the entire corpus of malware.
This structure provides some very important insight into the data. Specifically, distance in the hierarchy
reflects dissimilarity of the malware. For example, any given exemplar will typically be more similar to an
exemplar that shares its parent than an exemplar that shares its grandparent. This relationship holds for
binaries as well. We use this insight to our advantage to represent malware families as more binaries are
introduced.
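To make the structure concrete, here is a rough Python sketch of the tree of exemplars and binaries described above; the class and field names are our own illustration, not the MAAGI implementation.

from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in the malware hierarchy: a binary at a leaf, or an exemplar above the leaves."""
    features: set                                  # bag-of-words features, simplified to a set here
    children: list = field(default_factory=list)

    @property
    def is_exemplar(self):
        return bool(self.children)

    @property
    def is_leaf_exemplar(self):
        # A leaf exemplar defines a family: all of its children are binaries (childless nodes).
        return self.is_exemplar and all(not c.children for c in self.children)

    def descendants_at_depth(self, n):
        """Return C_i^n, the descendants of this node at depth n below it."""
        level = [self]
        for _ in range(n):
            level = [child for node in level for child in node.children]
        return level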
5.2. Online Operation
The hierarchical organization of malware is an effective approach to managing the volume of data, but
we also need to consider the velocity at which new binaries are discovered and added to the system. As new
malware binaries are rapidly encountered, we need a means to incorporate them into the hierarchy that is
fast, flexible and accurately places new binaries into new or existing families.
One naive approach to managing the influx of new binaries is to re–cluster the entire database upon arrival
of some minimum amount of new binaries to create a new hierarchy. This has the obvious disadvantage
that large amounts of the hierarchy may not change and unnecessary work is being performed at each re–
clustering. Another naive approach is to add new malware binaries to existing families in the hierarchy using
some classification algorithm. This too is problematic, as the families would be fixed. For example, two
malware binaries that are originally assigned to different families would never end up in the same family,
even if the influx of new binaries later suggested that their differences were relatively insignificant and that
they should in fact be siblings.
To alleviate these problems, we propose an incremental classification and clustering algorithm that is
run online as new binaries are ingested by the system. The main idea behind our incremental update is that
we only re–cluster portions of the hierarchy that have significantly changed since the last update, and we
have designed the re–clustering process so that no malware binary is ever fixed in a family. A summary of
the algorithm is illustrated in Fig. 2.
5.2.1. Algorithm Basics
Before we go into more detail on the incremental clustering algorithm operation, we first define some
basic terms and concepts. Let H be some hierarchy of malware binaries, where each node i in the hierarchy is represented by an exemplar $E_i$, and let $C_i^n = \{E_1, \dots, E_k\}$ denote the descendants of exemplar $E_i$ at depth n. Furthermore, let $F_i$ denote the features of $E_i$ in some feature space $\mathcal{F}$; $F_i^0$ are the original features of $E_i$ at its time of creation. We further define a function $\theta : \mathcal{F} \times \cdots \times \mathcal{F} \rightarrow \mathcal{F}$ that aggregates features from a set of exemplars into a single feature representation. We now define what it means for a function θ to be transitive on the hierarchy.
(a) Insertion: A new malware binary ($S_{new}$) is inserted into the hierarchy below $E_5$, the closest leaf exemplar according to the distance function δ. The exemplar has features $F_5$, and it is marked as modified. When all new binaries have been classified, a post–order traversal of the hierarchy is initiated.
(b) Comparison and Marking: When the modified parent $E_5$ is reached during the post–order traversal, its new features $F_5'$ are calculated using the θ function over its children. If the distance function δ determines that the node has changed enough since it was originally created, its $(d_r - 1)$th ancestor $E_2$ is marked for re-clustering.
(c) Re-clustering: When the post–order traversal reaches $E_2$, which is marked for re-clustering, its $d_r$th descendants are passed to the clustering algorithm, in this case its great–grandchildren. The old exemplars in between are removed.
(d) Sub-hierarchy Replacement: The clustering algorithm returns a sub-hierarchy, which is inserted back into the full hierarchy below $E_2$, replacing the old structure. There is no restriction on the depth of the returned sub-hierarchy. The features of $E_2$ are unchanged.
Figure 2: A diagram of the online hierarchical clustering algorithm operating over a portion of the hierarchy. In this case the re-clustering depth $d_r$ is set to 3.
Definition 1. Transitive Feature Function. A function θ is transitive on hierarchy H if for every exemplar $E_i$,
$$\theta(F_{1,n}, \dots, F_{k,n}) = \theta(F_{1,m}, \dots, F_{j,m}) \quad \text{where} \quad E_{1,n}, \dots, E_{k,n} \in C_i^n,\ \ E_{1,m}, \dots, E_{j,m} \in C_i^m.$$
That is, a transitive function on the hierarchy is one where application of θ to the features of an exemplar’s
children at depth n results in a new feature that is the same as the application of θ to children at depth
m. Two examples of transitive functions are the intersection function and the weighted average function,
described in Sec. 5.3.
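As a toy illustration of Definition 1 (our own example, with invented feature sets), the snippet below applies an intersection-based θ to the children of a small exemplar at depths 1 and 2 and checks that the results agree.

from functools import reduce

def theta_intersection(feature_sets):
    """Aggregate child features by intersection, one of the transitive choices in Sec. 5.3."""
    return reduce(set.intersection, feature_sets)

# Depth-2 children (binaries), grouped under two depth-1 exemplars.
group_a = [{"str:connect", "lib:ws2_32", "ng:ab"}, {"str:connect", "lib:ws2_32", "ng:ac"}]
group_b = [{"str:connect", "lib:ws2_32", "ng:zz"}]

depth1 = [theta_intersection(group_a), theta_intersection(group_b)]   # exemplar features
via_depth1 = theta_intersection(depth1)            # theta over children at depth 1
via_depth2 = theta_intersection(group_a + group_b) # theta over children at depth 2
assert via_depth1 == via_depth2                    # transitivity: same result from either depth
print(via_depth1)                                  # {'str:connect', 'lib:ws2_32'}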
In addition to θ, there are three additional parameters that are used in the clustering algorithm. The
first, dr , limits the portion of the hierarchy that is re–clustered when an exemplar has been marked for
re–clustering. δ is a distance function defined between the features Fi and Fj of two exemplars (or binaries).
Finally, τ specifies a distance threshold that triggers a re–clustering event at some exemplar.
5.2.2. Algorithm Detail
When a sufficient amount of new malware binaries have been collected, the algorithm is initiated to
integrate them into the hierarchy. Note that the timing of this process is subjective and different uses of
the hierarchy may require more or less initiations of the algorithm. The first thing the algorithm does is to
insert each new malware binary into an existing family in the hierarchy (Fig. 2(a)). As the existing families
in the hierarchy are defined as the leaf exemplars, the distance from each new malware binary to each leaf
exemplar is computed using the distance function δ, and a binary is added as a child to the minimum
distance exemplar. When a binary has been added to a leaf exemplar, the exemplar is marked as modified.
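A minimal sketch of this classification step, assuming a plain Jaccard distance for δ and a simple dictionary representation of leaf exemplars; the helper names are ours, not the system's.

def jaccard_distance(a, b):
    """A simple stand-in for the distance function delta over set-valued features."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def insert_new_binaries(new_binaries, leaf_exemplars):
    """Attach each new binary to the closest leaf exemplar and mark that family as modified.

    new_binaries: list of feature sets, one per binary.
    leaf_exemplars: list of dicts like {"features": set, "children": [], "modified": False}.
    """
    touched = []
    for features in new_binaries:
        closest = min(leaf_exemplars, key=lambda e: jaccard_distance(features, e["features"]))
        closest["children"].append(features)
        if not closest["modified"]:
            closest["modified"] = True
            touched.append(closest)
    return touched   # the families that the post-order traversal will revisit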
After the classification step, we perform a post–order traversal of the hierarchy. A post–order traversal is a depth–first pass in which each node is only visited after all of its children have been visited. At each node, we execute a function, Update, that updates the exemplars with new features and controls when to re–cluster portions of the hierarchy. The function first checks whether the exemplar has been modified, meaning it has had new malware inserted below it. If it has been modified, it marks its parent as modified as well. It then creates the new features of the exemplar, $F_i'$, by combining the child exemplars of the node using the θ function. If the distance between $F_i'$ and the original features of the exemplar, $F_i^0$, is greater than τ, then we mark the $(d_r - 1)$th ancestor as needing re–clustering (Fig. 2(b)).
Finally, we check if the exemplar has been marked for re–clustering. If so, we perform a clustering operation using $C_i^{d_r}$, the children at a depth of $d_r$ from $E_i$ (Fig. 2(c)). The clustering operation returns a sub–hierarchy $H'$, which has $E_i$ as its root and $C_i^{d_r}$ as its leaves. The old exemplars between $E_i$ and $C_i^{d_r}$ are discarded, and $H'$ is inserted back into H between $E_i$ and $C_i^{d_r}$ (Fig. 2(d)). There are no assumptions made about the operation and algorithm employed in the re–clustering step. We only enforce that the clustering operation returns a sub–hierarchy with $E_i$ as its root and $C_i^{d_r}$ as its leaves; the internal makeup of this sub–hierarchy is completely dependent upon the clustering algorithm employed. This means that the clustering algorithm is responsible for creating new exemplars between the leaves and the root of the sub–hierarchy (and hence we must pass θ to the algorithm). We also do not enforce that the depth of the new sub–hierarchy equals $d_r$. That is, after re–clustering, the distance from $E_i$ to any one of $C_i^{d_r}$ may no longer be $d_r$ – it could be more or less depending upon the clustering algorithm and the data.
There are several key properties of the algorithm that enable it to manage the large amounts of malware
effectively. First, the dr parameter controls how local changes in portions of the hierarchy are propagated
to the rest of the hierarchy. For instance, if dr = ∞, then any change in the hierarchy will trigger a re–
clustering event on all malware binaries in the hierarchy; such an operation can be extremely time consuming
with a large amount of binaries. Using a reasonable value of dr , however, lets us slowly propagate changes
throughout the hierarchy to regions where the impact of the changes will be felt the most. Clearly, higher
values of dr will allow modified exemplars to potentially cluster with a larger set of exemplars, but at the
cost of higher computational effort.
Another key property is the ability to create new structure in the hierarchy upon re–clustering a set of
exemplars. If we enforced that, upon re–clustering, the distance between $E_i$ and $C_i^{d_r}$ remained fixed at $d_r$,
then we would never add any new families to the hierarchy. This property is critical, as certain parts of the
hierarchy may contain more binaries which should be reflected in a more diverse hierarchical organization.
The detailed pseudocode for the algorithm is in Fig. 3. In the pseudocode, the applyPostOrder function applies the Update function to the exemplars in H in post–order fashion, passing along the arguments $d_r$, θ, δ, and τ (line 9). The ancestor($E_i$, n) function retrieves the nth ancestor of $E_i$ (lines 14, 17). The cluster($C_i^{d_r}$, δ, θ) function uses a standard hierarchical clustering algorithm to cluster the exemplars or binaries at a depth of $d_r$ below the exemplar $E_i$. Finally, the insert(H, $H'$, $E_i$, $d_r$) function inserts a sub-hierarchy $H'$ into the full hierarchy H, replacing the nodes between $E_i$ and $C_i^{d_r}$ (line 22).
5.2.3. Clustering Correctness
The hierarchy is a representation of a database of malware, and changes to the hierarchy should only
reflect changes in the underlying database. That is, an exemplar should be marked for re–clustering because
of some change in the composition of the malware database, and not due to the implementation of the
clustering algorithm. This means that when the clustering algorithm returns $H'$, the features of the leaves of
 1: M ← new malware
 2: procedure IncrementalCluster(M, H, dr, θ, δ, τ)
 3:     L ← leaf exemplars ∈ H
 4:     for m ∈ M do
 5:         Emin ← argmin_{Ei ∈ L} δ(m, Ei)
 6:         addChild(Emin, m)
 7:         mark Emin as modified
 8:     end for
 9:     applyPostOrder(H, dr, θ, δ, τ, Update)
10: end procedure
11:
12: procedure Update(Ei, dr, θ, δ, τ)
13:     if isModified(Ei) then
14:         mark ancestor(Ei, 1) as modified
15:         Fi′ ← θ(Ci^1)
16:         if δ(Fi′, Fi^0) > τ then
17:             mark ancestor(Ei, dr − 1) for re–clustering
18:         end if
19:     end if
20:     if needsReclustering(Ei) then
21:         H′ ← cluster(Ci^dr, δ, θ)
22:         H ← insert(H, H′, Ei, dr)
23:     end if
24: end procedure

Figure 3: The incremental classification and clustering algorithm.
$H'$ and the root (i.e., $E_i$ that initiated the re–clustering) must remain unchanged from before the clustering. For instance, if the clustering algorithm returns $H'$ with the root features changed, then we may need to mark one of $E_i$'s ancestors for re–clustering, but this re–clustering was triggered by how $C_i^{d_r}$ was clustered, and not because of any significant change in the composition of the malware below $E_i$. This idea leads us to Thm. 1 below.
Theorem 1. Let $F_i$ be the features of exemplar $E_i$ whose children at depth $d_r$ need to be re–clustered (i.e., the root of $H'$) and let $F_i'$ be the features of $E_i$ after re–clustering. If the re–clustering algorithm uses θ to generate the features of exemplars, and θ is a transitive feature function, then $F_i = F_i'$.
Proof. The proof follows naturally from the definition of a transitive feature function. Before re–clustering, let $F_i^{d_r} = \theta(C_i^{d_r})$ be the features computed for $E_i$ using the children of $E_i$ at depth $d_r$; after re–clustering, let $F_i'^{\,d}$ be the features computed for $E_i$ using its children at depth $d$ in the new sub–hierarchy, whose leaves are again $C_i^{d_r}$. By the definition of a transitive feature function,
$$F_i^{d_r} = F_i^{d_r - 1} = \cdots = F_i^{1},$$
$$F_i'^{\,d} = F_i'^{\,d - 1} = \cdots = F_i'^{\,1}.$$
Since the leaf set $C_i^{d_r}$ is unchanged by re–clustering, both chains start from the same value, so $F_i^{1} = F_i'^{\,1}$ and $F_i$ remains unchanged before and after clustering. Hence, if θ is a transitive function, then it is guaranteed that the insertion of the new sub–hierarchy into H will itself not initiate any new re–clustering.
5.2.4. Clustering Time Complexity
It would also be beneficial to understand how the theoretical time complexity of our incremental clustering
algorithm compares with the time complexity of the true clustering of the database. Without loss of
generality, we assume the hierarchy H has a branching factor b and let Hd indicate the total depth of the
hierarchy. We also assume that we have some clustering algorithm that runs in complexity $O(n^p)$, for some number of exemplars n and exponent p. For all of this analysis, we take a conservative approach and assume that each time the incremental clustering algorithm is invoked with a new batch of malware, the entire hierarchy is marked for re–clustering.
We first consider the basic case and compare creating a hierarchy from a single, large clustering operation
to creating one incrementally. This is not a realistic scenario, as the incremental algorithm is intended to
operate on batches of malware over time, but it serves to base our discussion of time complexity. With a
re–clustering depth of $d_r$, on the entire hierarchy we would perform $b^{H_d - d_r + 1}$ clustering operations. Each operation in turn would take $O(b^{p \cdot d_r})$ time. In contrast, clustering the entire database at once would take $O(b^{p \cdot H_d})$ time. So our incremental clustering algorithm is faster if:
$$b^{(H_d - d_r + 1)}\, b^{p d_r} < b^{p H_d}$$
$$(H_d - d_r + 1) + p d_r < p H_d \qquad (1)$$
$$d_r (p - 1) + 1 < H_d (p - 1)$$
So, for any value of p > 1.5 our method is faster if dr is at most Hd − 2, which is a very reasonable
assumption. In practice we typically use dr values of 2 or 4, and Hd might be on the order of 100 edges
deep when clustering 10,000 binaries collected from the wild, for example.
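As a quick sanity check of Eq. 1 (not part of the paper), the snippet below compares the exponents of b on both sides for the example values quoted above.

def incremental_exponent(H_d, d_r, p):
    # exponent of b for the incremental cost: b^(H_d - d_r + 1) clusterings, each costing b^(p * d_r)
    return (H_d - d_r + 1) + p * d_r

def single_batch_exponent(H_d, p):
    # exponent of b for clustering everything at once: b^(p * H_d)
    return p * H_d

H_d, p = 100, 2                      # hypothetical hierarchy depth and clustering exponent
for d_r in (2, 4):
    inc = incremental_exponent(H_d, d_r, p)
    one = single_batch_exponent(H_d, p)
    print(f"d_r={d_r}: b^{inc} (incremental) vs b^{one} (single batch)")
    assert inc < one                 # Eq. 1 holds for these values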
Of course, we don't perform a single clustering operation but perform incremental clustering online when a sufficient number of new malware binaries have been collected. For purposes of comparison, however, let us assume that there is a finite number of binaries comprising H. Let us further assume that the incremental clustering is broken up into S batches, so that each time the algorithm is run, it is incorporating $b^{H_d}/S$ new malware binaries into the database. That is, the first time incremental clustering is run, we perform $b^{H_d - d_r + 1}/S$ clustering operations, the second time $2\,b^{H_d - d_r + 1}/S$ clustering operations, and so forth. In this case, our method is faster if:
$$\frac{1}{S} b^{(H_d - d_r + 1)} b^{p d_r} + \frac{2}{S} b^{(H_d - d_r + 1)} b^{p d_r} + \cdots < b^{p H_d}$$
$$b^{(H_d - d_r + 1)} b^{p d_r} \left( \frac{1}{S} + \frac{2}{S} + \cdots \right) < b^{p H_d}$$
$$\frac{1}{S} b^{(H_d - d_r + 1)} b^{p d_r} \sum_{i=1}^{S} i < b^{p H_d}$$
$$\frac{1}{S} b^{(H_d - d_r + 1)} b^{p d_r} \frac{S(S + 1)}{2} < b^{p H_d}$$
$$\frac{(S + 1)}{2} b^{(H_d - d_r + 1)} b^{p d_r} < b^{p H_d}$$
$$\frac{\log \frac{S+1}{2}}{\log b} + d_r (p - 1) + 1 < H_d (p - 1) \qquad (2)$$
In this situation, we add a constant time operation to our method that depends on the number of incremental
clustering batches and the branching factor of the hierarchy. While it is difficult to find a closed form
expression of when our method is faster in this situation, we note that in Eqn. 2, the single clustering
method scales linearly with the overall depth of the hierarchy, whereas the incremental method scales
logarithmically with the number of batches (assuming a constant b). When all the other parameters are
known, we can determine the maximum value of S where our method is faster. This indicates that in an
operational system, we should not initiate the incremental clustering operation too often or we would incur
too much of a penalty.
5.3. Implementation
A number of details need to be specified for an actual implementation of the system, including a way to
represent the items in memory, a method of comparing those representations, a θ function for aggregating
them to create representations of exemplar nodes, and a hierarchical clustering algorithm.
5.3.1. Feature–Based Representation
The feature–based representation of the binaries and exemplars controls how they will be both compared and aggregated. For brevity, we provide only a brief overview of the feature generation and representation for malware binaries, and refer the reader to previous works for a complete description of malware
feature extraction and analysis [12, 17, 18].
While our system is defined to handle different types of features, a modified bag–of–words model of
representing an item’s distinct features provides the basis for our implementation. We extract three types
of features from the malware to use as the words. The first two are the strings found in a binary and the
libraries it imports. The third type of feature are n–grams over the Procedure Call Graph (PCG) of the
binary. The PCG is a directed graph extracted from a binary where the nodes are procedures in the binary
and the edges indicate that the source procedure made a call to the target procedure. Each n–gram is a set
of n procedure labels indicating a path through the graph of length n − 1. In our implementation, we used
2–grams, which captures the content and structure of a binary while keeping the complexity of the features
low.
The labels of the n–grams are based on MinHashing [13], a technique typically used to approximate the
Jaccard Index [19] between two sets. Each procedure is composed of a set of code blocks, and each block is
represented by its BinJuice [12], a representation of the semantic function of a code block that also abstracts
away registers and memory addresses. A hash function is used to generate a hash value for the BinJuice of
every block in a procedure, and the lowest value is selected as the label for that procedure. Agreement between procedures' hash values approximates the Jaccard index between the procedures, hence we use multiple hash functions (four) for a better approximation. As a result, we have multiple bags of n–grams,
one per hash function. So in our implementation a malware binary is represented by four bags of n–grams
over the PCG, along with a bag of its distinct strings, and another bag of its imported libraries.
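A simplified sketch of this labelling scheme; the hashing of BinJuice strings is stubbed out with SHA-1 and all inputs are assumed, so the details differ from the actual MAAGI pipeline.

import hashlib

def _hash(block_juice: str, seed: int) -> int:
    """One of several seeded hash functions applied to a block's BinJuice string."""
    return int.from_bytes(hashlib.sha1(f"{seed}:{block_juice}".encode()).digest()[:8], "big")

def minhash_labels(procedure_blocks, num_hashes=4):
    """Label a procedure by the minimum hash of its blocks under each hash function.

    procedure_blocks: iterable of BinJuice strings, one per code block in the procedure.
    Equal labels between two procedures approximate high Jaccard similarity of their block sets.
    """
    return tuple(min(_hash(b, seed) for b in procedure_blocks) for seed in range(num_hashes))

def pcg_2grams(call_edges, labels, hash_index=0):
    """2-grams over the Procedure Call Graph: pairs of procedure labels along each call edge."""
    return {(labels[src][hash_index], labels[dst][hash_index]) for src, dst in call_edges}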
5.3.2. Creating Exemplars and Defining Distance
We also must define methods to create exemplars from a set of binaries or other exemplars (e.g., θ).
As explained in Sec. 5.2, we must define θ as a transitive function for proper behavior of the algorithm.
We propose two methods in our implementation: intersection and weighted average. For intersection, θ is defined simply as $\theta(F_1, \dots, F_n) = F_1 \cap F_2 \cap \cdots \cap F_n$.
Slightly more complicated is the definition of average–based exemplars, as it is necessary to consider the number of malware binaries represented by each child of an exemplar. In this case, we must also assign a weight to each word in the bag–of–words model. We define θ for the weighted average method as
$$\theta(F_1, \dots, F_n) = \{(f, w_f)\}, \qquad w_f = \frac{\sum_{i=1}^{n} s_i \cdot w_{f,i}}{\sum_{i=1}^{n} s_i}$$
where $w_{f,i}$ is the weight of feature f in feature set $F_i$ (or 0 if $f \notin F_i$), and $s_i$ is the number of actual malware binaries represented by $F_i$.
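The two θ choices can be sketched over weighted bag-of-words features (dictionaries from feature to weight) roughly as follows; the function names and representation are our own.

def theta_intersection(feature_bags):
    """Keep only features present in every child; all weights are 1 in this mode."""
    common = set.intersection(*(set(bag) for bag in feature_bags))
    return {f: 1.0 for f in common}

def theta_weighted_average(feature_bags, sizes):
    """Weighted average of child weights; sizes[i] is the number of binaries under child i."""
    total = sum(sizes)
    merged = {}
    for bag, s in zip(feature_bags, sizes):
        for f, w in bag.items():
            merged[f] = merged.get(f, 0.0) + s * w   # absent features contribute weight 0
    return {f: w / total for f, w in merged.items()}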
The distance function, δ, is needed for classification, sub–hierarchy re-clustering, and the exemplar
distance threshold calculation. Since exemplars must be compared to the actual malware binaries, as well
as to other exemplars, the comparison method is also driven by the exemplar representation method. In
our implementation, we use a weighted Jaccard distance measure on the bag–of–words features [20]. The
weighted Jaccard distance is calculated by dividing the sum of the minimum weights in the intersection
(where it is assumed the lack of a feature is a weight of 0), by the sum of maximum weights in the union.
Note that in the case where θ is the intersection function, the weights of feature words are all one, and
hence the weighted Jaccard is the normal Jaccard distance. The distances between each of the three bags of features (i.e., n–grams, strings, and libraries) for two binaries/exemplars are computed using δ, and these
three distances are combined using a weighted average to determine the overall distance.
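A sketch of the weighted Jaccard distance over such weighted bags, together with an illustrative weighted average across the three per-bag distances; the combination weights shown are placeholders, since the paper does not specify them.

def weighted_jaccard_distance(bag_a, bag_b):
    """1 minus (sum of minimum weights) / (sum of maximum weights); absent features weigh 0."""
    features = set(bag_a) | set(bag_b)
    num = sum(min(bag_a.get(f, 0.0), bag_b.get(f, 0.0)) for f in features)
    den = sum(max(bag_a.get(f, 0.0), bag_b.get(f, 0.0)) for f in features)
    if den == 0:
        return 0.0
    return 1.0 - num / den

def overall_distance(item_a, item_b, weights=(0.5, 0.3, 0.2)):
    """Combine the n-gram, string, and library distances with an illustrative weighted average."""
    parts = [weighted_jaccard_distance(item_a[k], item_b[k]) for k in ("ngrams", "strings", "libs")]
    return sum(w * d for w, d in zip(weights, parts)) / sum(weights)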
5.3.3. Other Parameters
Beyond the preferred measure of distance and exemplar creation, the user is faced with additional
decisions in regard to the preferred clustering algorithm, along with its settable parameters. The Neighbor
Join [21] and Louvain methods [15] are both viable hierarchical clustering methods implemented in our
system. While both methods have proven to be successful, specific conditions or preferences may lead a user
to run one instead of the other.
Another adjustable parameter comes from the fact that the hierarchical clustering algorithms return a
hierarchy that extends down to each individual item, so no items share a parent. At this point every item
is in its own family. In order to produce a hierarchy that represents the optimal family assignments, the
hierarchy needs to be flattened by merging families from the bottom up to the point where the items are
optimally grouped under the same parent nodes. To flatten a hierarchy we perform a post–order traversal,
and at each exemplar a condition is applied that determines whether to delete its children and become the
parent of its grandchildren, placing them into the same family and flattening that part of the hierarchy by
one level (note this will also not change the exemplar features due to exemplar transitivity). The condition
we apply is to flatten whenever $\lambda \cdot \mathrm{cohesion}(C_i^2) < \mathrm{cohesion}(C_i^1)$, where $C_i^n$ are all the children at a depth n from $E_i$, and $\mathrm{cohesion}(C_i^n)$ is defined as the average of the distances between all pairs of exemplars in $C_i^n$. The other value in this condition, λ, is another parameter that can be adjusted in order to influence
the flatness of the hierarchy. This operation is performed at the end of the re–clustering step before a
sub–hierarchy is inserted back into the hierarchy.
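A sketch of the flattening test at a single exemplar, assuming nodes are plain dictionaries with "features" and "children" entries; λ and the distance function δ are user-supplied, and the default shown here is only a placeholder.

from itertools import combinations

def cohesion(nodes, delta):
    """Average pairwise distance between the given nodes (lower means a tighter group)."""
    pairs = list(combinations(nodes, 2))
    if not pairs:
        return 0.0
    return sum(delta(a["features"], b["features"]) for a, b in pairs) / len(pairs)

def maybe_flatten(exemplar, delta, lam=1.1):
    """Flatten one level below `exemplar` when lambda * cohesion(C_i^2) < cohesion(C_i^1)."""
    children = exemplar["children"]                                   # C_i^1
    grandchildren = [g for c in children for g in c["children"]]      # C_i^2
    if grandchildren and lam * cohesion(grandchildren, delta) < cohesion(children, delta):
        exemplar["children"] = grandchildren    # delete children, adopt grandchildren directly
        return True
    return False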
The other parameter worth discussing is the re-clustering depth parameter, dr , which controls the depth
of the sections of hierarchy modified during re-clustering. Initially, we performed re-clustering with dr = 2.
In more recent experiments, we have obtained higher clustering performance using a greater depth. These
experiments, among others, are discussed in the following section.
[Figure 4 charts omitted: (a) correctness evaluation scores (P, IP, ARI, JI) for the single-batch and incremental runs; (b) correctness run times in hours, broken down into total, clustering, distance matrix, classification, and other.]
Figure 4: Correctness tests: performance against the cluster evaluation metrics (Purity, Inverse Purity, Adjusted Rand Index, and Jaccard Index; higher is better) and run times (total run time, time spent clustering, calculating distance matrices, classifying new binaries, and other operations; lower is better).
5.4. Experiments
To test our system, we performed different types of experiments to evaluate its effectiveness in several
areas including the scalability of the system, the correctness of the families it produces, its sensitivity to
various parameters, and its ability to adapt to changes in implementation details and how these changes
affect accuracy and speed. Here we will describe the results of the correctness experiments.
5.4.1. Datasets and Metrics
For our correctness tests, we used a set of 8,336 malware binaries provided by MIT Lincoln Labs during
the Cyber Genome program’s Independent Verification and Validation (IV&V). The dataset included 128
clean-room binaries built by MIT Lincoln Labs comprising six distinct families of malware. These 128
binaries provided the ground truth. The others were collected from the wild and provided background for
the test.
The accuracy of a clustering was determined by running cluster evaluation metrics over the cluster
assignments for the 128 ground truth binaries. The metrics we used were Jaccard Index, Purity, Inverse
Purity, and Adjusted Rand Index. Jaccard Index is defined as the number of pairs of items of the same truth
family that were correctly assigned to the same cluster, divided by the number of pairs that belong to the
same truth family and/or assigned cluster. Purity is defined as the percentage of items that are members
of the dominant truth family in their assigned cluster, and Inverse Purity is the percentage of items that
were assigned to their truth family’s dominant cluster. Adjusted Rand Index is a version of the Rand Index
cluster accuracy measure that is adjusted to account for the expected success rate when clusters are assigned
at random.
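For reference (this is not the IV&V evaluation code), purity, inverse purity, and the pairwise Jaccard index can be computed along the following lines; the Adjusted Rand Index is available, for example, as sklearn.metrics.adjusted_rand_score.

from collections import Counter
from itertools import combinations

def purity(truth, assigned):
    """Fraction of items that belong to the dominant truth family of their assigned cluster."""
    clusters = {}
    for t, a in zip(truth, assigned):
        clusters.setdefault(a, []).append(t)
    return sum(Counter(members).most_common(1)[0][1] for members in clusters.values()) / len(truth)

def inverse_purity(truth, assigned):
    """Purity with the roles of truth families and assigned clusters swapped."""
    return purity(assigned, truth)

def pairwise_jaccard(truth, assigned):
    """Pairs grouped together in both labelings / pairs grouped together in at least one."""
    idx_pairs = list(combinations(range(len(truth)), 2))
    same_truth = {(i, j) for i, j in idx_pairs if truth[i] == truth[j]}
    same_clust = {(i, j) for i, j in idx_pairs if assigned[i] == assigned[j]}
    union = same_truth | same_clust
    return len(same_truth & same_clust) / len(union) if union else 1.0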
5.4.2. Base Accuracy Results
To evaluate the correctness of our incremental system, we compared the clustering produced by our
system to that produced by simply clustering the entire dataset at once, both using the Neighbor Join
algorithm for clustering. The incremental run used batches of 1,000 binaries and set dr to 6 (to put that
into perspective, a resulting cluster hierarchy over this dataset is typically on the order of 50-100 generations
deep at its maximum). Fig. 4(a) shows the evaluation metrics for both methods. Both score well compared
to the families on the 128 ground truth binaries, where the incremental method is only slightly worse than
the single batch version. The entire incremental result (all 8,336 binaries) was also scored using the single–
batch run as the truth, which shows the overall high similarity between the two cluster assignments for the
entire dataset, not just the clean room binaries.
The most obvious measurable difference between the two approaches is speed, shown in Fig. 4(b), where
the time spent on each task in the algorithm is detailed. Clustering the binaries in a single batch took over five
hours, whereas the incremental system completed in under an hour. In the single–batch run, the application
spent 21 minutes calculating the distance matrix, after which the vast majority of the time was spent on
clustering. In the incremental run, the application only spent a total of about 20 minutes clustering, which
occurred when the first batch was introduced to the system and whenever subsequent batches initiated
re–clustering. Neighbor Join is a polynomial time algorithm, so the many smaller clustering operations
performed during the incremental run finish orders of magnitude faster than a single clustering of the entire
dataset. Besides the time spent building the distance matrix prior to clustering, the incremental run also
spent some time classifying new binaries as they were introduced. In both runs, some other time was spent
persisting the database, manipulating data structures, initializing and updating exemplar features, and other
such overhead.
As can be seen, this hierarchical clustering approach can assist in malware analysts’ need to efficiently
and effectively group together and classify malware in an online fashion. In this way we can support both
the triage and deep–dive stages of the malware analysis work flow. Organizing malware into families also
provides a basis for performing some of MAAGI’s other types of analysis, including lineage generation and
component identification, which can provide insight into the specific relationships between malware binaries
within and across families.
6. MAAGI Analysis Framework
The MAAGI system incorporates several malware analysis tools that rely heavily on AI techniques to provide analysis capabilities to the user. Each of these techniques builds upon the malware hierarchy described in Sec. 5.1, which serves as the persistent organization of detected malware into families. We present the details
of each of the analyses in the following sections.
6.1. Component Identification
Malware evolution is often guided by the sharing and adaptation of functional components that perform
a desired purpose. Hence, the ability to identify shared components of malware is central to the problem of
determining malware commonalities and evolution. A component can be thought of as a region of binary
code that logically implements a “unit” of malicious operation. For example, the code responsible for key–
logging would be a component. Components are intent driven, directed at providing a specific malware
capability. Sharing of functional components across a malware corpus would thus provide an insight into
the functional relationships between the binaries in a corpus and suggest connection between their attackers.
Detecting the existence of such shared components is not a trivial task. The function of a component
is the same in each malware binary, but the instantiation of the component in each binary may vary.
Different authors or different compilers and optimizations may cause variations in the binary code of each
component, hindering detection of the shared patterns. In addition, the detection of these components is
often unsupervised, that is the number of common components, their size, and their functions may not be
known a priori.
The component identification analysis tool helps analysts find shared functional components in a malware corpus. Our approach to this problem is based on an ensemble of techniques from program analysis,
functional analysis, and artificial intelligence. Given a corpus of (unpacked) malware binaries, each binary is
reverse engineered and decomposed into a set of smaller functional units, namely procedures (approximately
corresponding to language functions). The code in each procedure is then analyzed and converted into a
representation of the code's semantics. Our innovation is in the development of an unsupervised method to
learn the components in a corpus. First, similar procedures from across the malware corpus are clustered.
The clusters created are then used as features for a second stage clustering where each cluster represents a
component.
Under supervision of DARPA, an independent verification and validation (IV&V) of our system was
performed by Massachusetts Institute of Technology Lincoln Laboratory (MITLL). Their objective was to
measure the effectiveness of the techniques under a variety of obfuscations used in real malware. The team
constructed a collection of malware, and experiments on this collection indicate that our method is very
effective in detecting shared components in malware repositories.
[Figure 5 diagram omitted: three malware samples composed of components, each component composed of procedures whose features approximately match across samples.]
Figure 5: Component generative process in malware. Instantiations of components in different malware binaries should have similar features.
6.1.1. Generative Process
The basic idea of the unsupervised learning task can be thought of as reverse engineering a malware
generative process, as shown in Fig. 5. A malware binary is composed of several shared components that
perform malicious operations. Each component in turn is composed of one or more procedures. Likewise,
we assume each procedure is represented by a set of features; in our system, features are extracted from the
code blocks in the procedure, but abstractly, can be any meaningful features from a procedure.
The main idea behind our method is that features from the procedures should be similar between instances of the same component found in different binaries. Due to authorship, polymorphism, and compiler
variation or optimizations, they may not be exact. However, we expect that two functionally similar procedures instantiated in different binaries should be more similar to each other than to a random procedure
from the corpus. This generative process provides the foundation for our learning approach to discovery
and identification of components.
6.1.2. Basic Algorithm
Building off of the generative process that underlies components, we develop a two–stage clustering
method to identify shared components in a set of malware binaries, outlined in Fig. 6. For the moment,
we assume that each procedure is composed of a set of meaningful features that describe the function of
the procedure. Details on the features we employ in our implementation and evaluation can be found in
Sections 6.1.4 and 6.1.6.
Given a corpus of malware binaries M = {M1 , . . . , M|M| }, we assume it contains a set of shared functional
components T = {T1 , . . . , T|T| }. However, we are only able to observe Ti,j , which is the instantiation of the
ith component in Mj . If the component is not part of the binary Mj , then Ti,j is undefined. We also
denote Ti,∗ as the set of all instantiations of the ith component in the entire corpus. Note that Ti,j may
not be an exact code replica of Ti,k , since the components could have some variation due to authorship and
compilation. Each Mj consists of a set of procedures pi,j , denoting the ith procedure in the j th binary.
Procedure-Based Clustering. The first stage of clustering is based on the notion that if Ti,j ∈ Mj and
Ti,k ∈ Mk , then at least one procedure in Mj must have high feature similarity to a procedure in Mk . Since
components are shared across a corpus and represent common functions, even among different authors and
compilers, it is likely that there is some similarity between the procedures. We first start out with a strong
assumption and assert that the components in the corpus satisfy what we term as the component uniqueness
property.
Figure 6: Two-stage clustering procedure to identify shared components in a malware corpus. Procedures are clustered based
on feature similarity, then the centroids are converted to the space of binaries and clustered again. The resulting clusters are
the shared functional components.
Definition 2. Component Uniqueness. A component satisfies the component uniqueness property if
the following relation holds true for all instantiations of Ti,∗ :
$$\forall\, p_{x,j} \in T_{i,j},\ \exists\, p_{a,k} \in T_{i,k} \;\text{such that}\; d(p_{x,j}, p_{a,k}) \ll d(p_{x,j}, p_{*,*}), \qquad \forall\, p_{*,*} \in T_{*,k},\; T_{i,j}, T_{i,k} \in T_{i,*}$$
where d(p∗,∗ , p∗,∗ ) is a distance function between the features of two procedures. Informally, this states that
all procedures in each instantiation of a component are much more similar to a single procedure from the
same component in a different binary than to all other procedures.
Given this idea, the first step in our algorithm is to cluster the entire set of procedures in a corpus.
These clusters represent the common functional procedures found in all the binaries, and by the component
uniqueness property, similar procedures in instantiations of the same component will tend to cluster together.
Of course, even with component uniqueness, we cannot guarantee that all like procedures in instantiations
of a component will be clustered together; this is partially a function of the clustering algorithm employed.
However, as we show in later sections, with appropriately discriminative distance functions and agglomerative
clustering techniques, this clustering result is highly likely.
These discovered clusters, however, are not necessarily the common components in the corpus. Components can be composed of multiple procedures, which may exhibit little similarity to each other (uniqueness
does not say anything about the similarity between procedures in the same component). In Fig. 5, for
example, Component 1 contains three procedures in binary 1 and binary 3. After clustering, three clusters
are likely formed, each with one procedure from binary 1 and 3. This behavior is often found in malware:
A component may be composed of a procedure to open a registry file and another one to compute a new
registry key. Such overall functionality is part of the same component, yet the procedures could be vastly
dissimilar based on the extracted features. However, based on component uniqueness, procedures that are
part of the same shared component should appear in the same subset of malware binaries.
Figure 7: Conversion of procedure clusters into a vector space representing the presence of a procedure from each binary in the
cluster, and then the subsequent clustering of the vector space representations. The clusters of vector space representations
are the shared components.
Binary-Based Clustering. Next, we perform a second step of clustering on the results from the first stage,
but first convert the clusters from the space of procedure similarity to what we denote as binary similarity.
Let Ci represent a procedure cluster, where px1 ,y1 , . . . , pxk ,yk ∈ Ci are the procedures from the corpus that
were placed into the cluster. We then represent Ci by a vector S~i , where
$$S_i[j] = \begin{cases} 1 & \text{if } \exists\, p_{x_k, y_k} \in C_i \text{ such that } y_k = j\\ 0 & \text{otherwise} \end{cases} \tag{3}$$
That is, Si [j] represents the presence of a procedure from Mj in the cluster. In this manner, each procedure
cluster is converted into a point in an |M|-dimensional space, where |M| is the number of malware binaries
in the corpus. Consider the example shown in Fig. 7. The procedures in the corpus have been clustered
into four unique procedure clusters. Cluster C1 contains procedures p2,1 from M1 , and p1,4 from binary M4 .
Hence, we convert this cluster into the point S~1 = [1, 0, 0, 1], and convert the other clusters into the vector
space representation as well.
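To make the conversion concrete, the following sketch builds the cluster vectors of Eq. (3). It is illustrative rather than the system's actual code: only C1's membership is taken from the Fig. 7 example, and the remaining cluster memberships are made up.

```python
import numpy as np

def clusters_to_binary_vectors(clusters, num_binaries):
    """Convert procedure clusters into binary-presence vectors (Eq. 3).

    `clusters` is a list of clusters, each a list of (procedure_id, binary_id)
    pairs; `num_binaries` is |M|.  Returns one |M|-dimensional 0/1 vector S_i
    per cluster.
    """
    vectors = np.zeros((len(clusters), num_binaries), dtype=int)
    for i, cluster in enumerate(clusters):
        for _proc_id, binary_id in cluster:
            vectors[i, binary_id] = 1   # a procedure from M_j is present in C_i
    return vectors

# Binary indices 0..3 stand for M1..M4; C1 matches the Fig. 7 example,
# the other clusters are invented for illustration.
clusters = [
    [("p2,1", 0), ("p1,4", 3)],   # C1 -> S1 = [1, 0, 0, 1]
    [("p1,1", 0), ("p2,3", 2)],   # C2 (illustrative)
    [("p1,2", 1), ("p2,2", 1)],   # C3 (illustrative)
    [("p3,1", 0), ("p2,4", 3)],   # C4 (illustrative)
]
print(clusters_to_binary_vectors(clusters, num_binaries=4))
```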
This conversion now allows us to group together clusters that appear in the same subset of malware
binaries. Using component uniqueness and a reasonable clustering algorithm in the first step, it is likely
that a px,j ∈ Ti,j has been placed in a cluster Cv with other like procedures from Ti,∗ . Similarly, it is also
likely that a different procedure in the same component instance, py,j , is found in cluster Cw with other
procedures from Ti,∗ . Since Cv and Cw both contain procedures from Ti,∗ , then they will contain procedures
from the same set of binaries, and therefore their vector representations S~v and S~w will be very similar. We
can state this intuition more formally as
$$d(\vec{S}_v, \vec{S}_w) \approx 0 \;\Rightarrow\; p_{x,j}, p_{y,k} \in T_{i,*} \quad \forall\, p_{x,j}, p_{y,k} \in \{C_v, C_w\}$$
Based on these intuitions, we then cluster the newly generated S~i together to yield our components. Looking
again at Fig. 7, we can see that when cluster C1 and C4 are converted to S1 and S4 , they contain the same
set of binaries. These two procedure groups therefore constitute a single component instantiated in two
binaries, and would be combined into a single cluster in the second clustering step, as shown.
Analysis. Provided the component uniqueness property holds for all components in the data set, then the
algorithm is very likely to discover all shared components in a malware corpus. However, if two components
in the corpus are found in the exact same subset of malware binaries, then they become indistinguishable
in the second stage of clustering; the algorithm would incorrectly merge them both into a single cluster.
Therefore, if each component is found in the corpus according to some prescribed distribution, we can
compute the probability that two components are found in the same subset of malware.
Let Ti be a random variable denoting the set of malware binaries that contain the ith component. If Ti
is distributed according to some distribution function, then for some t = {Mx , . . . , My } ⊆ M, we denote the
probability of the component being found in exactly the set t as P r(Ti = t). Assuming uniqueness holds,
we can now determine the probability that a component is detected in the corpus.
Theorem 2. The probability that the ith component is detected in a set of malware binaries is
$$\sum_{t\,\subseteq\,M} \Pr(T_k \neq t, \ldots, T_i = t, \ldots, T_k \neq t)$$
Proof. If Ti = tj for some tj ⊆ M, the component will be detected if no other component in the corpus
is found in the exact same subset. That is, Tk ≠ tj for all other components in the corpus. Assuming
nothing about component or binary independence, the probability of no other component sharing tj is the
joint distribution Pr(Tk ≠ tj, . . . , Ti = tj, . . . , Tk ≠ tj). Summing over all possible subsets of M then yields
Thm 2.
Thm. 2 assumes nothing about component and binary independence. However, if we do assume that
components are independent of each other and a component Ti appears independently in each binary with
probability pi , then Ti is distributed according to a binomial distribution. As such, we can compute a lower
bound for the probability of detection by ignoring equality between distribution sets as
$$\begin{aligned}
\Pr(\text{Detection of } T_i) &= \sum_{t\,\subseteq\,M} \Pr(T_k \neq t, \ldots, T_i = t, \ldots, T_k \neq t)\\
&= \sum_{t\,\subseteq\,M} \Pr(T_i = t) \prod_{k \neq i} \bigl(1 - \Pr(T_k = t)\bigr)\\
&\geq \sum_{x=0}^{|M|} \Pr(|T_i| = x) \prod_{k \neq i} \bigl(1 - \Pr(|T_k| = x)\bigr)\\
&= \sum_{x=0}^{|M|} \mathrm{Bin}(x, |M|, p_i) \prod_{k \neq i} \bigl(1 - \mathrm{Bin}(x, |M|, p_k)\bigr)
\end{aligned}$$
where Bin(·) is the binomial probability distribution function. This lower bound can provide us with
reasonable estimates on the probability of detection. For instance, even in a small data set of 20 binaries
with two components that both have a 20% chance of appearing in any binary, the probability of detection
is at least 0.85.
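As a quick sanity check on the quoted figure, the bound can be evaluated numerically. The sketch below simply implements the final expression above (independent components with binomial occurrence); it is not part of the MAAGI system.

```python
from math import comb

def binom_pmf(x, n, p):
    """Binomial probability mass function Bin(x; n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def detection_lower_bound(n_binaries, p_i, p_others):
    """Lower bound on Pr(detection of component i), assuming independence.

    Sums over the possible number x of binaries containing the component,
    requiring every other component to appear in a different number of
    binaries, as in the derivation above.
    """
    total = 0.0
    for x in range(n_binaries + 1):
        prob_others_differ = 1.0
        for p_k in p_others:
            prob_others_differ *= (1 - binom_pmf(x, n_binaries, p_k))
        total += binom_pmf(x, n_binaries, p_i) * prob_others_differ
    return total

# Two components, each appearing in any of 20 binaries with probability 0.2:
print(detection_lower_bound(20, 0.2, [0.2]))   # about 0.84, in the ballpark of the 0.85 bound quoted above
```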
Based on component uniqueness, the basic algorithm can easily locate the shared components in a set
of malware binaries. However, in practice, component uniqueness rarely holds in a malware corpus. That
is, it is likely that some procedures in different components are quite similar. This situation can be quite
problematic for the basic algorithm. In the next section, we relax the component uniqueness assumption
and detail a more sophisticated algorithm intended to identify components.
Figure 8: Basic algorithm without the component uniqueness property. There are two components in the set, yet the final clustering produces three clusters, one of which is a combination of two components.
6.1.3. Assumption Relaxation
When component uniqueness does not hold, the basic algorithm may not correctly identify components.
Consider the example shown in Fig. 8. There are two components in four binaries, each composed of two
procedures. Assuming component uniqueness does not hold, then the second procedure in each binary could
show high similarity to each other (it is possible they perform some basic function to set up malicious
behavior). After the first step, p2,1, p2,2, p2,3, and p2,4 are placed in the same cluster; this results in the creation of S~2, which does not resemble any other cluster vector. Hence, any clustering of S~2 with S~1 or S~3 will result
in a misidentification of the procedures in each component.
To remediate this error, we utilize an algorithm that “splits” clusters discovered in the first step of the
algorithm before the second stage of clustering is performed. This requires that we relax the component
uniqueness assumption in Def. 2 to what we term as procedure uniqueness, which states that only one procedure in an instantiation of a component must exhibit high similarity to a procedure in another instantiation.
The intuition is that after the first stage of clustering, there are clusters C′_1, . . . , C′_{|T|} that each contain the set
of procedures p′_{i,∗}. That is, from procedure uniqueness and a reasonable clustering algorithm, it is highly
likely that we get |T| clusters, one for each component in the data set. We do not discuss the detailed
algorithmic process and theoretic limits of splitting in this work, but refer the reader to Ruttenberg et
al. [18] for more details.
6.1.4. Implementation
Our component identification system is intended to discover common functional sections of binary code
shared across a corpus of malware. The number, size, function and distribution of these components is
generally unknown, hence our system architecture reflects a combination of unsupervised learning methods
coupled with semantic generalization of binary code. The system uses two main components:
1. BinJuice: To extract the procedures and generate suitable Juice features.
2. Clustering Engine: To perform the actual clustering based on the features.
We use Lakhotia et al.’s BinJuice system [12] to translate the code of each basic block into four types
of features: code, semantics, gen semantics, and gen code. Since the features are strings, they
may be represented using a fixed size hash, such as md5. As a procedure is composed of a set of basic
blocks, a procedure is represented as an unordered set of hashes on the blocks of the procedure. We measure
similarity between a pair of procedures using the Jaccard index [22] of their sets of features.
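As an illustration of this procedure representation and similarity measure, the sketch below hashes block-level feature strings and computes the Jaccard index between two toy procedures. The feature strings are invented placeholders, not actual BinJuice output.

```python
import hashlib

def block_hashes(blocks):
    """Hash each basic block's feature string so that a procedure becomes an
    unordered set of fixed-size digests."""
    return {hashlib.md5(b.encode("utf-8")).hexdigest() for b in blocks}

def jaccard(proc_a, proc_b):
    """Jaccard index between two procedures, each given as a set of hashes."""
    if not proc_a and not proc_b:
        return 1.0
    return len(proc_a & proc_b) / len(proc_a | proc_b)

# Two toy procedures sharing two semantically identical blocks:
p1 = block_hashes(["eax := ebx + 4", "mem[esp] := eax", "ret"])
p2 = block_hashes(["eax := ebx + 4", "ecx := 0", "ret"])
print(jaccard(p1, p2))   # 0.5 -> two of four distinct blocks are shared
```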
6.1.5. Clustering Engine
For the first stage of clustering, we choose to use a data driven clustering method. Even if we know the
number of shared components in a corpus, it is far less likely that we will know how many procedures are in
each component. Thus, it makes sense to use a method that does not rely on prior knowledge of the number
of procedure clusters.
We use Louvain clustering for the procedure clustering step [15]. Louvain clustering is a greedy agglomerative clustering method, originally formulated as a graph clustering algorithm that uses modularity
optimization [23]. We view procedures as nodes in a graph and the weights of the edges between the nodes
as the Jaccard index between the procedure features. Modularity optimization attempts to maximize the
modularity of a graph, which is defined as groups of procedures that have higher intra–group similarity and
lower inter–group similarity than would be expected at random. Louvain clustering iteratively combines
nodes together that increases the overall modularity of the graph until no more increase in modularity can
be attained.
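A minimal sketch of this first clustering stage is shown below, assuming a recent networkx release that ships louvain_communities. The min_sim threshold used to sparsify the similarity graph is our assumption, not a parameter of the published method.

```python
import itertools
import networkx as nx

def cluster_procedures(procedures, min_sim=0.3, seed=0):
    """First-stage clustering: build a weighted similarity graph over all
    procedures in the corpus and run Louvain community detection (modularity
    optimization) on it.

    `procedures` maps a (procedure_id, binary_id) pair to the procedure's set
    of block-feature hashes.
    """
    g = nx.Graph()
    g.add_nodes_from(procedures)
    for a, b in itertools.combinations(procedures, 2):
        fa, fb = procedures[a], procedures[b]
        sim = len(fa & fb) / len(fa | fb) if (fa or fb) else 1.0  # Jaccard index
        if sim >= min_sim:            # sparsify: ignore weakly similar pairs
            g.add_edge(a, b, weight=sim)
    # Louvain modularity optimization; requires networkx >= 2.8.
    communities = nx.algorithms.community.louvain_communities(
        g, weight="weight", seed=seed)
    return [list(c) for c in communities]
```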
For the second stage, we experimented with two different clustering methods: Louvain and K–means.
These methods represent two modalities of clustering, and various scenarios may need different methods of
extracting components. For instance, in situations where we know a reasonable upper bound on the number
of components in a corpus, we wanted to determine if traditional iterative clustering methods (i.e., K–means)
could outperform a data driven approach. In the second step of clustering, the L2 distance between vectors
was used for K–means, and since Louvain clustering operates on similarity (as opposed to distance), an
inverted and scaled version of the L2 distance was employed for the second stage Louvain clustering.
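For the K-means variant of the second stage, a sketch along the following lines groups the binary-presence vectors into components, with K = 50 as in the experiments below. It is an illustration rather than the system's implementation.

```python
from sklearn.cluster import KMeans

def second_stage_kmeans(binary_vectors, k=50, seed=0):
    """Second-stage clustering: group the |M|-dimensional cluster vectors
    (one per procedure cluster) so that procedure clusters appearing in the
    same subset of binaries end up in the same component.

    Uses K-means with Euclidean (L2) distance.  Returns, for each component,
    the indices of the procedure clusters assigned to it.
    """
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(binary_vectors)
    components = {}
    for cluster_idx, label in enumerate(km.labels_):
        components.setdefault(label, []).append(cluster_idx)
    return list(components.values())

# binary_vectors would come from clusters_to_binary_vectors(...) above.
```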
6.1.6. Experiments
Since it is quite a challenge to perform scientifically valid controlled experiments that estimate the
performance of a malware analysis system in the real world, MITLL was enlisted (by DARPA) to create a
collection of malware binaries with byte–labeled components. That is, for each component they know the
exact virtual memory addresses of each byte that is part of the component in every binary. We tested the
performance of the component identification algorithm using this data set.
In all tests, the algorithms do not have prior knowledge of the number of components in the data set.
For the K-means tests, we set a reasonable upper bound on the estimated number of components (K = 50).
We used two metrics to gauge the effectiveness of our method. The first is using the Jaccard index to
measure the similarity between the bytes identified as components in our algorithm and the ground truth
byte locations. We also used the Adjusted Rand Index [24] to measure how effective our algorithm is at
finding binaries with the same components.
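Both metrics are straightforward to compute; the sketch below shows one way to do so, using sklearn's adjusted_rand_score for the binary-level comparison. The labels shown are purely illustrative, not drawn from the MITLL data.

```python
from sklearn.metrics import adjusted_rand_score

def byte_jaccard(predicted_bytes, truth_bytes):
    """Jaccard index between the byte addresses assigned to a component by the
    algorithm and the ground-truth byte addresses for that component."""
    predicted_bytes, truth_bytes = set(predicted_bytes), set(truth_bytes)
    return len(predicted_bytes & truth_bytes) / len(predicted_bytes | truth_bytes)

# ARI over binaries: compare which component grouping each binary is assigned to.
truth_labels     = [0, 0, 1, 1, 2, 2]
predicted_labels = [0, 0, 1, 2, 2, 2]
print(adjusted_rand_score(truth_labels, predicted_labels))
```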
Results. We ran all of our tests using the two clustering algorithms (Louvain and K-means), and additionally
tested each method with and without splits to determine how much the relaxation of component uniqueness
helps the results. Note that no parameters (besides K) were needed for evaluation; we utilize a completely
data driven and unsupervised approach.
The results of the component identification are shown in Fig. 9, where each metric is shown for three
different test sets derived from the IV&V data (TC1, TC2 and TC3). The code and semantics features,
as expected, produced inferior results as compared to gen code and gen semantics features during initial
testing. Hence, subsequent testing on those features was discontinued.
In general, all four methods have fairly low ARIs regardless of the BinJuice feature. This indicates that in
our component identifications the false positives are distributed across the malware collection, as opposed to
concentrated in a few binaries. Furthermore, as indicated by the Jaccard index results, the misclassification
rate at the byte level is not too high. The data also shows that the Louvain method outperforms K-means on
all BinJuice features, though in some cases the splitting does not help significantly. Note that the difference
between Louvain with and without splitting is mainly in the binary ARI. Since Louvain without splitting
Figure 9: Byte Jaccard and binary ARI comparisons of the different methods on the IV&V data set, using the gen code (a) and gen semantics (b) BinJuice features.
is not able to break clusters up, it mistakenly identifies non–component code in the malware as part of
a real component; hence, it believes that binaries contain many more components than they actually do.
These results demonstrate the robustness of the Louvain method and the strength of the BinJuice generated
features. The data also shows that relaxing the component uniqueness property can improve the results in
real malware.
Study with Wild Malware. We also performed component identification on a small data set of wild malware
consisting of 136 wild malware binaries. We identified a total of 135 unique components in the data set. On
average, 13 components were identified per malware binary. Fig. 10(a) shows the histogram of the number
of components discovered per binary. As evident from the graph, the distribution is not uniform. Most
malware binaries have few components, though some can have a very large number of shared components.
In addition, we also show the number of binaries per identified component in Fig. 10(b). As can be seen,
most components are only found in a few binaries. For example, 25% of components are only found in a
single binary, and thus would most likely not be of interest to a malware analyst (as components must be
shared among malware binaries in our definition).
In general, many of the identified components are similar in size (bytes), as shown in Fig. 10(c). In
the figure, we plot the variance of the size of the instantiations in each of the 135 components against the
number of binaries that contain the component. As can be seen, many of the components have low variance
in their instantiation size, indicating that it is likely that many of the components represent the same
shared function (components with large variation in observed instantiation size are likely false positive
components). In addition, many of these low variance components are non-singleton components, meaning
that the component has been observed in many malware binaries. While further investigation is needed to
determine the exact function and purpose of these components, these results do indicate that our method
is capable of extracting shared components in a corpus of wild malware.
The component identification analysis in the MAAGI system is clearly capable of identifying shared
components across a corpus with high accuracy. As malware becomes more prevalent and sophisticated,
determining the commonalities between disparate pieces of malware will be key in thwarting attacks or
tracking their perpetrators. Key to the continued success of finding shared components in light of more
complex malware is the adoption of even more sophisticated AI methods and concepts to assist analyst
efforts.
6.2. Lineage
Many malware authors borrow source code from other authors when creating new malware, or will take
an existing piece of malware and modify it for their needs. As a result, malware within a family of malware
(i.e., malware that is closely related in function and structure) often exhibit strong parent–child relationships
(often with multiple parents and children). Determining the nature of these relationships within a family of malware can be a
Figure 10: Histograms of the number of components found in each binary and the number of binaries per identified component
for the wild malware. In addition, we also show the variation in component size as a function of the number of binaries
containing each component.
powerful tool towards understanding how malware evolves over time, which parts of malware are transferred
from parent to child, and how fast this evolution occurs.
Analyzing the lineage of malware is a common task for malware analysis, and one that has always relied
on artificial intelligence. For instance, Karim et al. used unsupervised clustering methods to construct
a lineage of a well–known worm [10]. Lindorfer et al. constructed lineages by relying on rule learning
systems to construct high–level behaviors of malware. Most recently, Jang et al. [25] also used unsupervised
clustering to infer the order and a subsequent straight line lineage of a set of malware. While all of these
methods have been shown to be effective, the MAAGI system relies heavily on AI techniques to mostly
automate the entire process of inferring lineages with diverse structure.
In the MAAGI system, a lineage graph of a set of malware binaries is a directed graph where the nodes of
the graph are the set of binaries input to the component, and the directed edge from binary A to B implies
that binary B evolved partly from sample A (and by implication, was created at a later time). Lineages
can contain multiple root binaries, in addition to binaries that have multiple parents. The lineage
component operates over a family of malware, defined by the persistent clustering data structure. That is,
all malware binaries in the same cluster constitute a family. While it is envisioned that a malware analyst
would generally be interested in lineage of a particular family, there is nothing that precludes one from
generating a lineage from binaries from multiple families.
6.2.1. Lineage as a Probabilistic Model
To determine the lineage of malware, it is essential to know the order in which binaries were generated.
Without this information, it would be hard to determine the direction of parent–child relationships. As
such, the lineage of a set of binaries is conditioned upon the creation times of each of the binaries. This
knowledge allows us to create a simple, high–level probabilistic model that represents (a small part of) the
generative process of malware. Fig. 11 shows the basic model.
The Lineage variable represents a distribution over the possible lineages that can be constructed from
a set of binaries, conditioned upon the Creation Times variable and the Malware Features variable. The
Figure 11: Lineage as a random variable conditioned upon the creation times of the malware and their features.
Figure 12: Learning a distribution over the creation times of each binary. We create a small probabilistic model for each
malware that infers a lineage–independent distribution of its creation time using time stamp and time seen as evidence.
Creation Times variable represents a distribution over the dates that each binary was created. Finally, the
Malware Features is a deterministic variable that contains extracted features of the malware that is used as
a soft constraint on the distribution of lineages. The more features that malware binaries share, the more
likely that an edge in the lineage exists between them; the actual parent–child assignment of the two nodes
depends upon the given creation times of the binaries. The lineage of a set of malware M is then defined as
$$\mathrm{Lineage}_M = \operatorname*{argmax}_{\mathrm{Lineage}_{M,i}} P(\mathrm{Lineage}_{M,i} \mid \mathrm{Features}_M, \mathrm{Times}_M) \tag{4}$$
The compiler timestamp in the header is an indication of the time at which malware is generated.
Unfortunately, it may be missing or purposely obfuscated by the malware author. However, it should not
be ignored completely, as it provides an accurate signal in cases where it is not obfuscated. Therefore, we
must infer the creation times of each malware binary from any available timestamp information, in addition
to any other external information we have on the provenance of a malware binary. Fortunately, detailed
malware databases exist that keep track of new occurrences of malware, and as a result we can also use the
date that the malware was first encountered in the wild as additional evidence for the actual creation times
of the binaries.
One of the key insights in this formulation of the lineage problem is that the lineage of a set of malware
and their creation times are joint processes that can inform each other; knowing the lineage can improve
inference of the creation times, and knowing the creation times can help determine the lineage. As such,
inferring the creation times and the lineage jointly can potentially produce better results than inferring the
creation times first and conditioning the lineage inference on some property of the inferred creation times
(e.g., such as the most likely creation times).
We break down the overall probabilistic model into two separate models, one to represent the creation
times and another to represent the lineage construction process. The lineage inference algorithm explained
in subsequent sections details how joint inference is performed using the two models.
Creation Time Model. The creation time model for a set of malware binaries is shown in Fig. 12. For a set
of N binaries, we instantiate N independent probabilistic models, one for each binary. While the creation
Figure 13: Lineage model given distributions of the creation times of each malware. The malware features are used as soft constraints on the inheritance relationships, and given the edges for each binary, constructing the lineage is deterministic.
times of each malware binary are not truly independent, the dependence between the creation times of
different binaries is enforced through the joint inference algorithm.
Each probabilistic model contains five variables. First, there is a variable to represent the actual creation
time of the malware. There is also a variable to represent the time the sample was first seen in the wild,
which depends upon the actual creation time of the sample. There is also a variable to represent the time
stamp of the sample (from the actual binary header). This variable depends upon the creation time, as well
as two additional variables that represent any obfuscation by the malware author to hide the actual creation
time; one variable determines if the time stamp is obfuscated, and the other represents how the time stamp
is obfuscated (either empty or some random value).
Evidence is posted to the time seen and time stamp variables and a distribution of each malware’s
creation time can be inferred. Note that the priors for the obfuscation variables and parameters for the
conditional distributions can be learned and are discussed later.
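To make the structure of this per-binary model concrete, here is a forward-sampling sketch. The real system expresses the model in Figaro and performs posterior inference over the creation time given the observed time seen and time stamp; all numeric parameters below are placeholders rather than learned values.

```python
import random

def sample_creation_time_model(rate=0.05, p_obfuscated=0.3, p_zeroed=0.5,
                               horizon=10_000):
    """Forward-sample one instance of the per-binary creation time model
    (the plate in Fig. 12).  All parameter values are illustrative; in the
    paper they are learned from data (Sec. 6.2.3).
    """
    creation_time = random.uniform(0, horizon)
    # Time first seen in the wild: the creation time plus some delay.
    time_seen = creation_time + random.expovariate(rate)
    # Compiler time stamp: either honest, zeroed out, or a random value.
    if random.random() < p_obfuscated:
        time_stamp = 0 if random.random() < p_zeroed else random.uniform(0, horizon)
    else:
        time_stamp = creation_time
    return creation_time, time_seen, time_stamp
```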
Lineage Model. The model for lineage is shown in Fig. 13. For each malware binary, we create a set of
variables that represent how the binary can be used in creating a lineage. First, there is a variable called
PossibleEdges_i, which represents the possible existence of edges between a binary i and all the other binaries.
This variable is deterministic conditioned upon the creation times of all the binaries; since a malware binary
can only inherit from prior binaries, the value of this variable is simply all of the binaries with earlier creation
times.
There are also two variables that control the number of edges (i.e., parents) for each binary, as well
as a variable that specifies whether a particular binary is a root in the lineage (and thus has no parents).
Finally, there is a variable that represents a set of actual lineage edges of a binary i which depends upon
the possible edges of the binary, the number of edges it has, and whether it is a root binary. By definition,
the values of the actual edges variable for all binaries defines the lineage over the set of malware (i.e., it can
be deterministically constructed from the edges and the creation times).
In addition, the conditional probability distribution of the actual edges variable is constrained by the
difference between the features of the binaries. That is, the higher the similarity between two binaries, the more
likely they are to have an edge between them. The similarity measure between binaries is based on the
similarity measures used in other parts of the MAAGI system (clustering, component identification), and
thus we do not discuss the measure in any further detail.
Figure 14: Inference algorithm to jointly infer the best lineage and creation times. Once the distribution of creation times
is inferred, we sample a creation time for each binary, and iteratively maximize the lineage and creation time inference. The
sampling process is repeated to avoid local maxima; the best lineage over all random starting points is selected as the algorithm
output.
6.2.2. Inference Algorithm
As shown in Eqn. 4, the lineage that is provided by the MAAGI system to the user is the maximal
probability lineage given the creation times and the malware features. Since the creation times are unknown,
we must infer both the lineage and the creation times jointly. To accomplish this, we employed an iterative
algorithm (similar to expectation–maximization) to jointly infer the most likely binary creation times and
lineage.
The algorithm runs as follows, and is also shown in Fig. 14:
• Infer a distribution over the creation time of each binary. Using the observable time stamp
and time seen in the wild information, we infer a distribution of the possible creation times of each
binary. This distribution is still conditioned upon the lineage; this process simply marginalizes out
some of the information not needed to compute a lineage (time stamps, first observations, obfuscation).
• Sample the creation times. We take a sample from the creation time distributions of the malware
binaries. This creates a fixed ordering over the binaries.
• Infer the most likely lineage. We infer the most likely lineage of the malware binaries given the
fixed creation times. That is, we compute the lineage described in Eqn. 4.
• Infer the most likely creation times. We now infer the most likely creation times of the malware
given the most likely lineage just inferred. The most likely creation times is defined as
$$\mathrm{Times}_M = \operatorname*{argmax}_{\mathrm{Times}_{M,i}} P(\mathrm{Times}_{M,i} \mid \mathrm{Features}_M, \mathrm{Lineage}_M) \tag{5}$$
Since we are conditioning on the previously computed lineage, we fix the inheritance relationship
between two binaries, but the direction of the edge can change depending on the inferred creation times.
This means that after finding the most likely creation times, some of the parent/child relationships in
the lineage may have reversed.
• Repeat steps 3 and 4 until convergence. We repeat the dual maximization process until convergence, which can be defined as either a fixed number of iterations or until there are no more changes
to both the most likely lineage and the creation times.
Figure 15: Learning the priors on the creation time model. The priors model the time between when a malware was created
and when it is first seen in the wild, and the levels of obfuscation that are seen in malware.
• Repeat steps 2–5 until enough samples have been collected. Since there is no guarantee that
the maximization process converges on the global maximum, we restart the process to increase the
likelihood that the global maximum is reached. Let LineagejM and T imejM be the lineage and creation
times computed on the j th restart of the maximization process. Then the lineage and creation times
returned to the user is
$$(\mathrm{Lineage}_M, \mathrm{Times}_M) = \operatorname*{argmax}_{\mathrm{Lineage}^j_M,\, \mathrm{Times}^j_M} P(\mathrm{Times}^j_M, \mathrm{Lineage}^j_M \mid \mathrm{Features}_M)$$
The algorithm is very similar to the expectation maximization algorithm, but we must re–sample several
initial malware creation times to ensure that we are finding a satisfactory maximum value. At each iteration,
we find the most probable lineage and then subsequently the most probable creation times. So every iteration
of the algorithm increases the joint probability of both lineage and creation times. At the end of the
algorithm, we select the lineage with the highest probability (from the multiple restarts of the algorithm)
as the lineage of the set of malware.
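The overall loop can be summarized by the following skeleton. The model-specific steps (creation-time sampling, the two maximizations, and the joint probability) are passed in as callables because, in the actual system, they are realized by Figaro models and Simulated Annealing (Sec. 6.2.4); this sketch only captures the control flow.

```python
def infer_lineage(binaries, sample_times, best_lineage, best_times, joint_prob,
                  n_restarts=10, max_iters=20):
    """Skeleton of the iterative joint inference over lineages and creation
    times described above.  Expected callables:
      sample_times(binaries)        -> sampled creation time per binary
      best_lineage(binaries, times) -> most likely lineage given times (Eq. 4)
      best_times(binaries, lineage) -> most likely times given lineage (Eq. 5)
      joint_prob(lineage, times)    -> score used to pick the winning restart
    """
    best = None
    for _ in range(n_restarts):                      # restarts to escape local maxima
        times = sample_times(binaries)               # steps 1-2
        lineage = None
        for _ in range(max_iters):                   # steps 3-5: coordinate ascent
            new_lineage = best_lineage(binaries, times)
            new_times = best_times(binaries, new_lineage)
            if new_lineage == lineage and new_times == times:
                break                                # converged
            lineage, times = new_lineage, new_times
        score = joint_prob(lineage, times)
        if best is None or score > best[0]:
            best = (score, lineage, times)
    return best[1], best[2]
```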
6.2.3. Learning
Inferring the creation times of the malware depends upon some variable parameters which are generally
unknown. For instance, we need to know the prior probability that a binary’s time stamp has been obfuscated, and any parameters of the conditional probability of the time the malware was first seen in the wild
given its creation time. To determine these parameters, we learn them on a training set of data, as shown
in Fig. 15.
We create three prior variables, each of which is a beta distribution. The obfuscation variables that use
the priors are Bernoulli random variables, whereas the conditional probability distribution between the time
seen and creation time is an exponential distribution (yet it still uses a beta distribution as its prior).
The learning process is as follows. We sample a value of the priors and compute the probability of the
evidence of the model given the prior values, where the evidence is the time seen and time stamp information
for each binary. This probability of evidence is used as a weighting factor in a maximization algorithm. That
is, we explore the space of the priors and eventually converge upon the value of the priors that produces
the highest probability of the observed evidence. Once the parameters are learned they can be used for any
lineage on the malware.
6.2.4. Implementation and Experiments
We implemented the creation time model, the lineage model, and the learning model in Figaro, an
open–source probabilistic programming language. All of the maximization used in the lineage analysis tool
(lineage inference and learning priors) used Simulated Annealing to compute the most likely values. The
Metropolis–Hastings inference algorithm was used to infer the distribution of creation times for the binaries.
Figure 16: Results of the creation time inference on the synthetic data set: (a) expected error; (b) probability of error; (c) reduction in error after lineage.
Inferring the Creation Time Distributions. The first step in our algorithm is to infer a distribution of the
creation time for each malware binary. Although this distribution is used in the lineage algorithm, it can be
computed and tested independently. Unfortunately, it is extremely difficult to determine the true creation
time of wild malware binaries, hence we resorted to using synthetic data.
For a set of N synthetic binaries, we first created ground–truth lineages from randomly generated Prufer
sequences [26]. We then generated synthetic creation times for each synthetic binary with respect to the
lineage constraints (i.e., parent time must be before child). We used a synthetic time scale from 1 to 10,000.
We then generated the time seen information from each binary by modeling the distance between the
creation time and time seen as either a geometric distribution or a uniform distribution. We also varied the
obfuscation of the synthetic time stamps from 5% of the data set to 95% to replicate real–world conditions.
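A sketch of one way to generate such ground-truth lineages and creation times is shown below, using networkx's Prufer-sequence decoder. The exponential parent-child gap with mean 15 is our illustrative choice; it matches the expected parent-child gap mentioned below, but the authors' exact generator is not specified.

```python
import random
import networkx as nx

def random_lineage(n_binaries, horizon=10_000, mean_gap=15, seed=0):
    """Generate a ground-truth lineage and creation times for synthetic
    binaries: a random tree decoded from a Prufer sequence, with each child
    created some time after its parent."""
    rng = random.Random(seed)
    prufer = [rng.randrange(n_binaries) for _ in range(n_binaries - 2)]
    tree = nx.from_prufer_sequence(prufer)      # undirected random tree
    root = 0
    lineage = nx.bfs_tree(tree, root)           # orient edges away from the root
    times = {root: rng.uniform(0, horizon / 10)}
    for parent, child in nx.bfs_edges(tree, root):
        times[child] = times[parent] + rng.expovariate(1 / mean_gap)
    return lineage, times

lineage, times = random_lineage(20)
print(lineage.number_of_edges(), min(times.values()), max(times.values()))
```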
We generated 50 random lineages, and computed the expected error between the inferred time stamps
and the actual time stamps. The results are shown in Fig. 16(a). As can be seen, as the data set becomes
less obfuscated, we are able to estimate the actual creation time of the binaries with higher accuracy. We
also see that the expected errors are very similar using either a geometric or uniform distribution to generate
the time seen information (and note that the algorithm has no knowledge of the generative model of time
seen from creation time).
To put this error in perspective, we also show the probability that the estimated creation time is within
15 time units of the actual creation time in Fig. 16(b). The number 15 is significant here as this is the expected
time between a parent to its child (this number was set during the random lineage generation). The uniform
model of time seen generation has less error in this case since it has lower variance than the geometric.
Finally, during the lineage phase of our algorithm, we iterate between improving the lineage and improving the creation time. So we wanted to determine the improvement of the creation time estimation
before and after the lineage portion of the algorithm is run, shown in Fig. 16(c). As can be seen, for most of
the tests, the error in the creation time estimation was reduced after running the lineage estimation. This
demonstrates that our initial lineage hypothesis is correct; lineage can inform the creation times as much as
creation times can inform the lineage.
Inferring Lineage. Next, we tested the actual lineage estimation algorithm. Since we do not have ground
truth wild malware, we tested our algorithm on 17 lineages generated by MITLL. Since the binaries in this
data set are clean room generated data (i.e., non–wild) they do not have data for the time they were first
seen in the wild. To test our algorithm on this data, we had to create an artificial time system that generated
new creation time, time stamp, and time seen data for the malware. This synthetic time generation was the
same process as described for the creation time testing. Although the time data was artificial, it was still
generated according to the ground–truth lineage. We also obfuscated a portion of the time stamps to make
it more realistic. Fig. 17 shows the results of the test.
The figure shows the accuracy of the lineage algorithm on four metrics. As can be seen, our lineage estimation algorithm can reconstruct lineages of malware with fairly high accuracy. These results demonstrate
that our lineage analysis tool is able to reconstruct the lineage of a set of malware, and can be successfully
used to analyze a set of wild malware.
Figure 17: Results of the lineage estimation on the MITLL data, showing parent correct, parent precision, parent recall, and parent F-measure.
As indicated by our experiments, AI tools and methods can be used to accurately estimate the lineage
of a set of malware. This type of tool will prove invaluable to malware analysts in their quest to understand
the provenance and evolution of malware. As malware authors increase the volume and sophistication of
their attacks, we see AI playing a large role in reconstructing increasingly complex lineages.
6.3. Trend Prediction
Currently, cyber defense is virtually always responsive. The attacker initiates an attack and the defenders
need to find a way to respond to the attack, mitigate its damage, and prevent it from recurring, a process
that invariably favors the attacker. Preemptive cyber defense has the potential to shift the advantage
away from the attacker and onto the defender. In preemptive defense, the defender tries to predict future
attacks and respond to them before they happen, which requires an effective capability to predict future cyber
developments.
To date, most malware trend analyses are retrospective analyses of existing malware [3], or are predictions
that reflect the opinions of human experts and have a broad, high level focus [27, 28]. This type of prediction
may not be of practical use to a malware analyst, as they are not able to quantify the threat level of a malware
family or trend. A number of tools and systems [29, 30, 31] have been proposed for detecting novelty or
anomaly among individual malware binaries. These do not include the notion of a malware family, do not
consider changes in the families over time, nor do they make long term predictions about the behavior of a
family over time.
The goal of this analysis tool is to predict how malware families will evolve, and to determine which
families will become the most harmful. Prediction is challenging since the space of possibilities is enormous
and attackers try to be unpredictable. Nevertheless, the MAAGI trend analysis tool uses modern machine
learning methods to make effective predictions even in difficult situations. As prediction about the behavior
of a large family of malware is generally not useful (most of the damage has already been done), this tool
only needs a single binary from a malware family to provide significant and actionable information about
the future behavior of the family, such as the overall prevalence of the family.
6.3.1. Prediction Problem
When a new malware binary is encountered (i.e., a binary not clustered into an existing family), the
MAAGI prediction tool attempts to predict various characteristics of the entire family from the single
instance of the family, as shown in Fig. 18. The tool first extracts a set of features from the binary
which are used as the input values for the prediction algorithm. Attributes at the family level are ultimately
derived from the features of the binaries they contain. Temporal aspects of the family are obtained from an
external source such as VirusTotal.
The list of features extracted from a malware binary is as follows:
1. The number of blocks, library imports, strings, and procedures contained in a binary.
2. The average length of a block in a binary.
Figure 18: Setup of prediction experiment: Families are ordered temporally based on their first-seen date, then the earliest
appearing sample in each family is used to predict its future characteristics
3. The number of rare and unique blocks, strings, imports and procedures contained in a binary. We
consider a rare feature to be one that appears in less than 10 other binaries.
4. The average procedure complexity of a binary, using a measure such as cyclomatic complexity.
5. Whether the compiler timestamp is plausible.
6. The similarity of a binary to representative binaries of existing malware families. We use the
similarity measure used in the hierarchical clustering technique described in Sec. 5.
We defined seven primary features of malware families that the prediction algorithm estimates from a
single malware:
1. Continued Growth is a binary value which indicates whether or not any binaries are added to the
family in the future.
2. The Future Size of the family is the number of binaries which will be added to the family in the future.
3. The Distinctiveness is the level of similarity between a family and all other families.
4. The Mobility of a family is the degree to which it changes over time. Computationally, this is the level
of similarity of the family in the present to itself in the future.
5. Sophistication is the degree to which the binaries in the family become more complex. This is computed
as the average number of blocks which the family contains.
6. The Time Till Stability is the time at which novel binaries are no longer being added to the family.
This is determined by looking at the first–seen date of the oldest binary in a family.
7. Search Engine Response is the number of hits that a major search engine (Bing) returns for a query
on the binary's MD5. This value is averaged across the family.
The MAAGI system can make predictions about these characteristics individually, or they can be combined together to define an impact score.
6.3.2. Prediction Methods and Training
The prediction framework consists of a set of prediction algorithms applied to a malware binary supplied
by the user. Before this can occur, however, we need to select a set of training data, a prediction model or
algorithm, and a testing set of data. We first use a feature aggregation program to extract features from
a feature database and create a dataset suitable as input to machine learning models. A set of families
and binaries are provided as input to the aggregation program, along with a date range and cutoff time.
Each model is trained on the input features of the earliest binary in each cluster (family) and predicts
the corresponding output characteristics of the full family. The models are trained separately against each
output.
We have applied a variety of machine learning methods to learn the output characteristics of families
from a single binary. This includes generalized linear models such as logistic[32] and Poisson regression[33];
genetic algorithms learning formulas of input features[34]; neural networks[35]; support vector machines
with sigmoid, polynomial, radial basis and linear kernels[16][36]; CN2 rules[37], and classification trees and
random forests[38]. Among the SVM models, we found that using the radial basis kernel exhibited the best
results.
Each of these machine learning algorithms is input to an ensemble learning process. This means that
after we trained a first set of ’tier–1’ models, we used the output values produced as inputs to a second tier
of models. We refer to the second set as the ‘tier–2’ models. There are temporal considerations here as well:
We sub-divide the training and testing data a second time to ensure that none of the models in either tier
are trained on families having data in the test set.
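A compact sketch of this two-tier setup is shown below, using a small set of scikit-learn models as stand-ins for the full model zoo listed above; the tier-1 and tier-2 data slices are assumed to be disjoint and temporally ordered, as described.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def two_tier_predictions(X_t1, y_t1, X_t2, y_t2, X_test):
    """Two-tier ensemble sketch: tier-1 models are trained on an earlier slice
    of the training data, and their predicted probabilities become the input
    features for a tier-2 model trained on a later, disjoint slice, so that
    neither tier is trained on families that appear in the test set."""
    tier1 = [
        LogisticRegression(max_iter=1000),
        SVC(kernel="rbf", probability=True),        # RBF kernel worked best per the text
        RandomForestClassifier(n_estimators=200),
    ]
    for model in tier1:
        model.fit(X_t1, y_t1)

    def tier1_features(X):
        # One column of predicted "continued growth" probability per tier-1 model.
        return np.column_stack([m.predict_proba(X)[:, 1] for m in tier1])

    tier2 = LogisticRegression(max_iter=1000)
    tier2.fit(tier1_features(X_t2), y_t2)
    return tier2.predict(tier1_features(X_test))
```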
In each experiment, the data is split into training and testing sets. The training set contains 70% of
the data set binaries, with the remaining 30% for testing. However, care must be taken that we do not
leak information about future binaries into the training data set. That is, we must ensure that all binaries
in the test data set appear temporally after the newest binary in the training data set. To ensure the temporal
correctness of the data, we mark the point in time after 70% of the binaries have appeared, and use families
first seen before this point in time for training, and the remaining binaries for testing. We must account for
changes in the feature space with respect to time. Because we consider features such as the number of
unique blocks or unique strings contained by a binary, we build the features incrementally. Features are
counted as unique if they were unique when the binary first appeared.
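A sketch of such a temporal split is shown below; field names such as first_seen and family are illustrative.

```python
def temporal_split(binaries, train_fraction=0.7):
    """Split binaries at a time cutoff rather than at random, so that families
    used for training were first seen before the cutoff and the remaining
    binaries are held out for testing.

    `binaries` is a list of dicts with at least 'first_seen' and 'family' keys.
    """
    ordered = sorted(binaries, key=lambda b: b["first_seen"])
    cutoff = ordered[int(len(ordered) * train_fraction) - 1]["first_seen"]
    train_families = {b["family"] for b in ordered if b["first_seen"] <= cutoff}
    train = [b for b in ordered if b["family"] in train_families]
    test = [b for b in ordered if b["family"] not in train_families]
    return train, test
```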
6.3.3. Experiments and Results
While the MAAGI system can make predictions about the features explained in Sec. 6.3.1, we only detail
the experiments that focused on the anticipated continued growth of the family. That is, these experiments
detail a classification problem: Given a single binary, will the family ever grow beyond one?
Data Sets. We were provided by DARPA with a large dataset totaling over a million malware binaries. To
obtain information about when a family was born, we used the family time stamps assigned by VirusTotal,
an online virus database. We assume that the first binary in the family is the one with the earliest ‘first-seen’
date. Many binaries did not have a ‘first-seen’ field and were discarded. The dataset contained families
which first appeared between 2006 and 2012, with the most binaries appearing in 2008 and 2009. The
data contained a considerable amount of benignware and many binaries were packed (and thus could not
be analyzed). For the experiments described here, we selected approximately 100k unpacked binaries with
temporal information and families and supplied them to our hierarchical clustering technique described in
Sec. 5.
Evaluation. We evaluated the models' accuracy, precision and recall. These metrics are all derived from the
classifier's true positive (TP), false positive (FP), true negative (TN), and false negative (FN) counts. Accuracy is the fraction of correct predictions
out of the total number of predictions. Precision is the fraction of actual positives out of the number
of predicted positives. Recall is the fraction of correctly predicted positives out of the total number of actual
positives. The calculations used to obtain these performance metrics are shown below:
Accuracy = (TP + TN)/(TP + FP + TN + FN)
Precision = TP/(TP + FP)
Recall = TP/(TP + FN)
Another useful metric for evaluation is the F-measure [39], which is defined as

F = 2 · (Precision · Recall)/(Precision + Recall)
F-measure assesses the accuracy of a model by considering the rate of true positives identified by the model. A
higher value of the F-measure indicates better performance.
We chose to target F-measure and recall as the most important metrics in our evaluation. Intuitively, we
believe that correctly identifying a highly threatening family is more important than correctly classifying a
sample as unimportant or benignware. We can tolerate a higher rate of false positives if the true positives
are correctly identified.
Figure 19: Evaluation of the prediction tool: (a) tier-1 models; (b) tier-2 models.
Results. To classify a family, from its single earliest binary, as either one with continued growth or not, we first train a tier–1
set of classifiers on the original data. The results from these classifiers were added to the test data to create
a new data set, which was used for ’tier–2’ learning to see if we could learn a way to combine these results
to perform better than any single learning algorithm.
As a baseline, we include the metrics obtained by a majority classifier which always assigned the most
commonly occurring value in the dataset. In this data, the most commonly occurring classification is that the
families remain singletons. Because the majority classifier always outputs the same value, it has a score of
0 for the metrics besides accuracy.
Figures 19(a) and 19(b) show the performance of the models. The ’tier–1’ classifiers are shown in Figure
19(a), followed by ’tier–2’ in Figure 19(b).
The basic classifiers were able to perform only slightly above the baseline majority classifier. However,
the difference between the first and second tier classifiers shows promising results. Using an ensemble
classifier based on several ’tier–1’ classifiers, the MAAGI system is able to obtain better results for all
metrics, and a significantly stronger value of recall using a genetic algorithm. In the malware defense
domain, it is much more important to correctly recognize a significant family than an unimportant one, and
so we feel this is a strong result.
The prediction analysis tool in the MAAGI system represents an important landmark towards cyber
prediction and pre-emptive malware defense. To our knowledge, this capability is the first of its kind
in malware analysis and defense; prediction of future malware developments can greatly improve cyber–
defensive efforts. The MAAGI system has only scratched the surface of malware prediction; integration of
more advanced artificial intelligence technology into the system could enable even more fruitful prediction
possibilities.
6.4. Functional Analysis
The goal of functional analysis (FA) is to analyze malware more thoroughly and deeply, characterizing its function and context. Functional analysis detects multiple purposes that exist simultaneously in a malware sample. It uses structured, contextual reasoning to infer complex purposes. For
example, credential theft is a complex purpose that involves credential capturing and data exfiltration, each
of which may be realized in many ways. Included in this reasoning are basic characterizations of the attack
and attacker, such as degree of sophistication.
Functional analysis uses only the results of static analysis. We have found that in many cases, a malware
sample does not exhibit its behaviors in a sandbox, so we cannot rely on dynamic analysis. However, using
only static analysis is more difficult, because API calls in the static code are not inherently ordered, so we
cannot assume ordering of API calls in the analysis.
Figure 20: Functional analysis workflow.
For functional analysis, we use parsing techniques derived from natural language processing (NLP).
Because we only use static analysis results, we cannot directly use NLP methods that depend on the
ordering of words in a sentence. We must treat a sentence using a bag of words model and try to identify
components of meaning out of it. Also, just like in language we can analyze writing competence based on
the sentence and vocabulary complexity, so in malware can we identify attack and attacker characteristics
based on the words used and their organization.
Our representation is based on Systemic Functional Grammars (SFGs), from Systemic Functional Linguistics, which organize language according to its function and context. They provide a functional hierarchy for meaning and
purpose, and an overlaid context hierarchy for context. So, it is possible to reason about the purpose and
context from the words. Similar mechanisms can be used to reason about function and characterizations of
malware. One nice feature of SFGs for our purposes is that unlike most NLP frameworks, which require a
fully ordered sentence, SFGs accommodate partial orderings by treating orderings as constraints. In domains
such as malware where no ordering is available, we can simply not specify any ordering constraints.
Our approach to functional analysis is as follows:
1. Static analysis creates the call graph and a set of searchable items, such as system API calls and
strings.
2. A lexification system extracts items of interest to functional analysis, such as particular strings that
are indicative of specific functions. These items are annotated with semantic tags, where relevant.
3. These unordered items are parsed using SFGs. This parsing process can produce multiple alternative
parses. Each parse represents a different way of accomplishing an attack.
4. We prioritize these alternative parses based on the minimum graph span in the call graph. This is
based on the heuristic that related behaviors will be taken at times that are close to each other; a sketch of this prioritization appears after the list.
5. We produce a human-readable report that summarizes the results.
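The following sketch illustrates the span-based prioritization from step 4 above. The exact span measure used by the system is not specified, so pairwise shortest-path distance in the call graph is used here as one plausible instantiation; the call graph is assumed to be a networkx graph whose nodes are procedures.

```python
import itertools
import networkx as nx

def parse_span(call_graph, parse_nodes):
    """Heuristic 'span' of a parse: the sum of pairwise shortest-path
    distances, in the undirected call graph, between the procedures whose API
    calls or strings support the parse.  Smaller spans mean the supporting
    evidence is concentrated in nearby code, which we treat as more plausible.
    """
    g = call_graph.to_undirected()
    total = 0
    for a, b in itertools.combinations(parse_nodes, 2):
        try:
            total += nx.shortest_path_length(g, a, b)
        except nx.NetworkXNoPath:
            total += g.number_of_nodes()   # heavy penalty for disconnected evidence
    return total

def prioritize_parses(call_graph, parses):
    """Order alternative parses by increasing span in the call graph."""
    return sorted(parses, key=lambda nodes: parse_span(call_graph, nodes))
```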
6.4.1. Functional Analysis Design
Fig. 20 describes the functional analysis workflow. Creation of a static analysis call graph (Step 1 above)
is transparently triggered whenever a sample is submitted for static analysis. Steps 2-5 (lexical analysis,
parsing, parse prioritization, and report generation) are initiated by a request from the user and occur
within the functional analysis software component (detailed in the next section).
As seen in Fig. 21, the system performs analysis in several distinct stages, each passing its results to
the next. The system takes as inputs three sets of data. The first two represent domain knowledge and the
inferences that the system is attempting to make. That is, the Systemic Functional Grammar which captures
Figure 21: Functional analysis workflow.
Figure 22: Functional analysis workflow.
Figure 23: Functional analysis workflow.
generalizations and characterizations of functions of malicious software, and the Preprocessor Definitions file,
which supplied preprocessing instructions and grammar metadata. These files represent domain knowledge
and tend to be static unless an analyst wishes to introduce new domain knowledge. The third input, a Static
Analysis Call Graph, is generated by the static analysis pipeline for each sample to be analyzed.
The output of the system is a report which summarizes the parses that were satisfied by the tokens found within the call graph. This report is human readable and is displayed in the MAAGI user interface; it can also be exported and loaded into external malware analysis tools (such as IDA, a disassembler). The current templates for report generation make use of IDA's context-aware console window to provide active content. Active content allows users to click within the console window and be directed to the binary location which contains the identified functions.
6.4.2. Grammar Representation
In the context of the MAAGI system, a grammar is composed of two interconnected strata: a grammatical stratum and a contextual stratum. The two strata are structurally similar, each being an instance of a system network. Each plays a different role in the overall representation, but both are built from the same set of components. We describe that structure below.
System Networks. A system network describes a hierarchy of systems, which represent exclusive alternatives
that might hold in the presence of a given feature in the input. For example, if a sentence contains a
noun, that noun can be countable or uncountable, common or proper, concrete or abstract, etc. These
are orthogonal properties that can each be represented by a system with noun as the entry condition. An
example system network is shown in Fig. 22.
As noted in the figure, orthogonal choices (indicated by curly braces) represent an "AND" relationship. In the case of Fig. 22, for example, this indicates that a choice must be selected in both the "SUBFUNCTION ONE" and "SUBFUNCTION TWO" systems. Exclusive choices (indicated by T-shaped branches) represent an exclusive choice or "XOR" relation. At most one feature class must be satisfied for each such instance in the grammar. A system network optionally defines forward chaining rules, or gates, which take multiple feature classes as entry conditions (described in the rules section below).
The grammar in the MAAGI system is composed of two system networks (grammatical and contextual),
as indicated above. We refer to these as strata, as they represent two distinct, but interrelated levels of
abstraction for viewing the input. The grammatical stratum is what many people mean when they use the
term grammar in SFG. It contains a hierarchy of functional choices. These functional choices are realized
when the constraints, i.e. realization rules, associated with them hold for a given set of input. A valid parse
is said to exist when there is a path to a leaf where all realization rules hold for every orthogonal choice in
the stratum. The contextual stratum uses the same notations as the grammatical stratum but it represents
a hierarchy of the salient contexts. These contexts may alter the likelihood or validity of particular choices
in the grammatical stratum.
Realization Rules. Realization rules are constraints which define which elements in the input correspond to
given functional roles, the mappings or equivalences between those roles and others, as well as the order in
which those roles must appear. Some example rules include:
• Lexification: Lexification draws a correspondence between a string present in the input and a textual
role.
• Insertion: Insertion requires that a particular role must be observed in the data. Typically the role is
one that is lexified directly, or one that is lexified indirectly via expansion. This is used as a mechanism
for representing expectation.
• Expansion: Expansion allows for groups of roles to be referred to collectively.
• Preselection: Preselection is used to map contextual classes to grammatical classes. In such cases,
preselections define a many-to-many mapping between contextual classes and grammatical classes.
• Gates: Gates are forward chaining rules that represent classes that stem from an arbitrary Boolean
combination of other classes. They are directly analogous to logic gates. For example, in Fig. 23, the
class "gate1" is inferred if alternative1.2 is chosen with either alternative2.1 or alternative2.2.
Other rules include definitions for Adjacency, Partition, and Conflation.
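As a toy illustration of these rule types, the sketch below encodes the two orthogonal systems and the "gate1" rule from the example in Fig. 23 as plain Python data. The representation is our own simplification and is not the MAAGI grammar format.

```python
# Minimal sketch: two orthogonal XOR systems plus a forward-chaining gate.

system_network = {
    "SUBFUNCTION ONE": ["alternative1.1", "alternative1.2"],  # exclusive (XOR) choices
    "SUBFUNCTION TWO": ["alternative2.1", "alternative2.2"],  # exclusive (XOR) choices
}

gates = {
    # gate1 fires if alternative1.2 is chosen with either alternative2.1 or alternative2.2
    "gate1": lambda chosen: "alternative1.2" in chosen
                            and ("alternative2.1" in chosen or "alternative2.2" in chosen),
}

def valid_selection(chosen):
    """A selection must pick exactly one feature per orthogonal system (AND of XORs)."""
    return all(sum(c in chosen for c in choices) == 1
               for choices in system_network.values())

chosen = {"alternative1.2", "alternative2.1"}
assert valid_selection(chosen)
print([g for g, rule in gates.items() if rule(chosen)])  # ['gate1']
```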
6.4.3. Parsing and Classification
In essence, SFG strata are analogous to decision trees, with realization rules serving as queries for
establishing which conditions (vis a vis features) are observed in a given set of input. Each permutation of
choices in the contextual stratum represents a possible context in a given domain, and the permutation of
choices in the grammatical stratum that remain after preselections from that context are applied represents
the space of possible interpretations of the data for the context. Conversely, given an interpretation that is
known to hold, we can infer the contexts that support it. We refer to the set of features in an interpretation
and its context as a traversal. The set of realization rules associated with a traversal are the set of queries
that must be satisfied in order for the features of that traversal to be valid for the input data. As noted
above, these rules take the form of constraints, and thus a traversal describes a Boolean expression over
features.
The MAAGI system classifies an attack by generating traversals through the grammatical stratum of the
grammar, constructing the associated expression from the queries associated with its features, and passing
the result to a constraint solver (Copris) (CITE). From the set of possible valid traversals, we can then infer
the possible contexts in which the associated attacks could have been carried out. Multiple realizations can
exist, however. For this reason, realizations are ordered by the average proximity of the binary’s API calls
lexified in the traversal. Applicable contexts to each are ordered heuristically.
6.4.4. Testing and Experimentation
The functional analysis in MAAGI has been able to reliably operate over a set of 283 unpacked Banker-family malware samples (CITE) while providing accurate characterizations of their command and control behavior, as well as their theft of personal information. After capturing a large number of alternative approaches for these and related malicious activities, we have been able to note some of the limits and successes of functional analysis in MAAGI.
Presently, the MAAGI Systemic Functional Grammar (SFG) parser and grammar representation language are capable of expressing an impressive variety of malware behaviors and actions. In our current environment, however, the lack of data-flow and temporal information can limit the variety of behaviors expressed. When two behaviors differ only with respect to data-flow or runtime temporal characteristics, the grammatical representation of these two behaviors is identical. Often, this information is the key discriminator between a malicious sample and a benign sample with the same capabilities. Currently, the system compensates for the lack of this data by using call graph distance as a proxy for temporal proximity. This issue should not be interpreted as a limitation of functional analysis, but rather as a limitation of using modern static analysis for producing the tokens and annotations later consumed by the parser. Temporal and data-flow features can absolutely be represented in the grammar and could be leveraged to better discriminate samples with meaningful differences with respect to those features. However, it is difficult or impossible to glean this information from static analysis alone.
Despite these limitations, the functional analysis tool will provide malware analysts with the ability to understand the behavior and function of the malware. Incorporating more advanced AI techniques, however, will allow the user to take full advantage of the wealth of information that can be extracted from malware using SFGs.
7. Conclusion
In this paper, we introduced the MAAGI malware analysis system. This system takes advantage of many
of the artificial intelligence advances over the past several decades to provide a comprehensive system for
malware analysts to understand the past, present and future characteristics of malware.
As a practical application of artificial intelligence, the MAAGI system will have an enormous impact on the cyber-defense community. While the cyber-defense community has always employed artificial intelligence methods to analyze malware, to the best of our knowledge, nobody has attempted to integrate these technologies into a single system where various analyses use the results of other analyses to model, learn and predict characteristics of malware. It is our hope that the MAAGI system also inspires other innovative applications of artificial intelligence in the cyber-defense community. Towards this goal, the MAAGI system is also expandable, and we envision that new and intelligent methods to analyze malware will be incorporated into the system in the future, giving cyber-defenders a powerful and continually evolving tool to combat malware-based attacks.
8. Acknowledgments
This work was supported by DARPA under US Air Force contract FA8750-10-C-0171, with thanks to
Mr. Timothy Fraser. The views expressed are those of the authors and do not reflect the official policy or
position of the Department of Defense or the U.S. Government.
| 2 |
Deep Convolutional Neural Networks
on Cartoon Functions
Philipp Grohs∗ , Thomas Wiatowski† , and Helmut Bölcskei†
arXiv:1605.00031v2 [cs.LG] 12 Feb 2018
∗ Dept.
Math., ETH Zurich, Switzerland, and Dept. Math., University of Vienna, Austria
† Dept. IT & EE, ETH Zurich, Switzerland,
∗ [email protected], † {withomas, boelcskei}@nari.ee.ethz.ch
Abstract—Wiatowski and Bölcskei, 2015, proved that deformation stability and vertical translation invariance of deep convolutional neural network-based feature extractors are guaranteed by
the network structure per se rather than the specific convolution
kernels and non-linearities. While the translation invariance
result applies to square-integrable functions, the deformation
stability bound holds for band-limited functions only. Many
signals of practical relevance (such as natural images) exhibit,
however, sharp and curved discontinuities and are, hence, not
band-limited. The main contribution of this paper is a deformation stability result that takes these structural properties into
account. Specifically, we establish deformation stability bounds
for the class of cartoon functions introduced by Donoho, 2001.
I. INTRODUCTION
Feature extractors based on so-called deep convolutional
neural networks have been applied with tremendous success
in a wide range of practical signal classification tasks [1].
These networks are composed of multiple layers, each of
which computes convolutional transforms, followed by the
application of non-linearities and pooling operations.
The mathematical analysis of feature extractors generated by
deep convolutional neural networks was initiated in a seminal
paper by Mallat [2]. Specifically, Mallat analyzes so-called
scattering networks, where signals are propagated through
layers that compute semi-discrete wavelet transforms (i.e., convolutional transforms with pre-specified filters obtained from
a mother wavelet through scaling operations), followed by
modulus non-linearities. It was shown in [2] that the resulting
wavelet-modulus feature extractor is horizontally translation-invariant [3] and deformation-stable, with the stability result
applying to a function space that depends on the underlying
mother wavelet.
Recently, Wiatowski and Bölcskei [3] extended Mallat’s
theory to incorporate convolutional transforms with filters
that are (i) pre-specified and potentially structured such as
Weyl-Heisenberg (Gabor) functions [4], wavelets [5], curvelets
[6], shearlets [7], and ridgelets [8], (ii) pre-specified and
unstructured such as random filters [9], and (iii) learned in a
supervised [10] or unsupervised [11] fashion. Furthermore, the
networks in [3] may employ general Lipschitz-continuous nonlinearities (e.g., rectified linear units, shifted logistic sigmoids,
hyperbolic tangents, and the modulus function) and pooling
through sub-sampling. The essence of the results in [3] is
that vertical translation invariance and deformation stability
are induced by the network structure per se rather than the
specific choice of filters and non-linearities. While the vertical
translation invariance result in [3] is general in the sense
of applying to the function space L2 (Rd ), the deformation
stability result in [3] pertains to square-integrable band-limited
functions. Moreover, the corresponding deformation stability
bound depends linearly on the bandwidth.
Many signals of practical relevance (such as natural images) can be modeled as square-integrable functions that are,
however, not band-limited or have large bandwidth. Large
bandwidths render the deformation stability bound in [3] void
as a consequence of its linear dependence on bandwidth.
Contributions. The question considered in this paper is
whether taking structural properties of natural images into
account can lead to stronger deformation stability bounds.
We show that the answer is in the affirmative by analyzing
the class of cartoon functions introduced in [12]. Cartoon
functions satisfy mild decay properties and are piecewise
continuously differentiable apart from curved discontinuities
along C 2 -hypersurfaces. Moreover, they provide a good model
for natural images such as those in the MNIST [13], Caltech-256 [14], and CIFAR-100 [15] datasets as well as for images
of geometric objects of different shapes, sizes, and colors
[16]. The proof of our main result is based on the decoupling
technique introduced in [3]. The essence of decoupling is that
contractivity of the feature extractor combined with deformation stability of the signal class under consideration—under
smoothness conditions on the deformation—establishes deformation stability for the feature extractor. Our main technical
contribution here is to prove deformation stability for the class
of cartoon functions. Moreover, we show that the decay rate
of the resulting deformation stability bound is best possible.
The results we obtain further underpin the observation made in
[3] of deformation stability and vertical translation invariance
being induced by the network structure per se.
Notation. We refer the reader to [3, Sec. 1] for the general
notation employed in this paper. In addition, we will need the
following notation. For x ∈ Rd , we set hxi := (1 + |x|2 )1/2 .
The Minkowski sum of sets A, B ⊆ Rd is (A + B) := {a +
b | a ∈ A, b ∈ B}. The indicator function of a set B ⊆ Rd
is defined as 1B (x) := 1, for x ∈ B, and 1B (x) := 0, for
$x \in \mathbb{R}^d\backslash B$. For a measurable set $B \subseteq \mathbb{R}^d$, we let $\mathrm{vol}^d(B) := \int_{\mathbb{R}^d} 1_B(x)\,dx = \int_B 1\,dx$.
Fig. 1: Network architecture underlying the feature extractor (2). The index $\lambda_n^{(k)}$ corresponds to the $k$-th atom $g_{\lambda_n^{(k)}}$ of the collection $\Psi_n$ associated with the $n$-th network layer. The function $\chi_n$ is the output-generating atom of the $n$-th layer.
II. DEEP CONVOLUTIONAL NEURAL NETWORK-BASED FEATURE EXTRACTORS
We set the stage by briefly reviewing the deep convolutional feature extraction network presented in [3], the basis of which is a sequence of triplets $\Omega := \big((\Psi_n, M_n, R_n)\big)_{n\in\mathbb{N}}$ referred to as module-sequence. The triplet $(\Psi_n, M_n, R_n)$, associated with the $n$-th network layer, consists of (i) a collection $\Psi_n := \{g_{\lambda_n}\}_{\lambda_n\in\Lambda_n}$ of so-called atoms $g_{\lambda_n} \in L^1(\mathbb{R}^d)\cap L^2(\mathbb{R}^d)$, indexed by a countable set $\Lambda_n$ and satisfying the Bessel condition $\sum_{\lambda_n\in\Lambda_n}\|f * g_{\lambda_n}\|_2^2 \le B_n\|f\|_2^2$, for all $f\in L^2(\mathbb{R}^d)$, for some $B_n > 0$, (ii) an operator $M_n : L^2(\mathbb{R}^d)\to L^2(\mathbb{R}^d)$ satisfying the Lipschitz property $\|M_n f - M_n h\|_2 \le L_n\|f-h\|_2$, for all $f, h\in L^2(\mathbb{R}^d)$, and $M_n f = 0$ for $f = 0$, and (iii) a sub-sampling factor $R_n \ge 1$. Associated with $(\Psi_n, M_n, R_n)$, we define the operator
$$U_n[\lambda_n]f := R_n^{d/2}\, M_n(f * g_{\lambda_n})(R_n\,\cdot), \qquad (1)$$
and extend it to paths on index sets $q = (\lambda_1, \lambda_2, \dots, \lambda_n) \in \Lambda_1\times\Lambda_2\times\cdots\times\Lambda_n := \Lambda_1^n$, $n\in\mathbb{N}$, according to
$$U[q]f = U[(\lambda_1,\lambda_2,\dots,\lambda_n)]f := U_n[\lambda_n]\cdots U_2[\lambda_2]U_1[\lambda_1]f,$$
where for the empty path $e := \emptyset$ we set $\Lambda_1^0 := \{e\}$ and $U[e]f := f$, for $f\in L^2(\mathbb{R}^d)$.
Remark 1. The Bessel condition on the atoms $g_{\lambda_n}$ is equivalent to $\sum_{\lambda_n\in\Lambda_n}|\widehat{g_{\lambda_n}}(\omega)|^2 \le B_n$, for a.e. $\omega\in\mathbb{R}^d$ (see [3, Prop. 2]), and is hence easily satisfied even by learned filters [3, Remark 2]. An overview of collections $\Psi_n = \{g_{\lambda_n}\}_{\lambda_n\in\Lambda_n}$ of structured atoms $g_{\lambda_n}$ (such as, e.g., Weyl-Heisenberg (Gabor) functions, wavelets, curvelets, shearlets, and ridgelets) and non-linearities $M_n$ widely used in the deep learning literature (e.g., hyperbolic tangent, shifted logistic sigmoid, rectified linear unit, and modulus function) is provided in [3, App. B-D].
For every $n\in\mathbb{N}$, we designate one of the atoms $\Psi_n = \{g_{\lambda_n}\}_{\lambda_n\in\Lambda_n}$ as the output-generating atom $\chi_{n-1} := g_{\lambda_n^*}$, $\lambda_n^*\in\Lambda_n$, of the $(n-1)$-th layer. The atoms $\{g_{\lambda_n}\}_{\lambda_n\in\Lambda_n\backslash\{\lambda_n^*\}}\cup\{\chi_{n-1}\}$ are thus used across two consecutive layers in the sense of $\chi_{n-1} = g_{\lambda_n^*}$ generating the output in the $(n-1)$-th layer, and the remaining atoms $\{g_{\lambda_n}\}_{\lambda_n\in\Lambda_n\backslash\{\lambda_n^*\}}$ propagating signals to the $n$-th layer according to (1), see Fig. 1. From now on, with slight abuse of notation, we write $\Lambda_n$ for $\Lambda_n\backslash\{\lambda_n^*\}$ as well.
The extracted features $\Phi_\Omega(f)$ of a signal $f\in L^2(\mathbb{R}^d)$ are defined as [3, Def. 3]
$$\Phi_\Omega(f) := \bigcup_{n=0}^{\infty}\{(U[q]f) * \chi_n\}_{q\in\Lambda_1^n}, \qquad (2)$$
where $(U[q]f) * \chi_n$, $q\in\Lambda_1^n$, is a feature generated in the $n$-th layer of the network, see Fig. 1. It is shown in [3, Thm. 2] that for all $f\in L^2(\mathbb{R}^d)$ the feature extractor $\Phi_\Omega$ is vertically translation-invariant in the sense of the layer depth $n$ determining the extent to which the features $(U[q]f) * \chi_n$, $q\in\Lambda_1^n$, are translation-invariant. Furthermore, under the condition
$$\max_{n\in\mathbb{N}}\max\{B_n, B_n L_n^2\} \le 1, \qquad (3)$$
referred to as weak admissibility condition in [3, Def. 4] and satisfied by a wide variety of module-sequences $\Omega$ (see [3, Sec. 3]), the following result is established in [3, Thm. 1]: The feature extractor $\Phi_\Omega$ is deformation-stable on the space of $R$-band-limited functions $L_R^2(\mathbb{R}^d)$ w.r.t. deformations $(F_\tau f)(x) := f(x - \tau(x))$, i.e., there exists a universal constant $C > 0$ (that does not depend on $\Omega$) such that for all $f\in L_R^2(\mathbb{R}^d)$ and all (possibly non-linear) $\tau\in C^1(\mathbb{R}^d,\mathbb{R}^d)$ with $\|D\tau\|_\infty \le \frac{1}{2d}$, it holds that
$$|||\Phi_\Omega(F_\tau f) - \Phi_\Omega(f)||| \le CR\|\tau\|_\infty\|f\|_2. \qquad (4)$$
Here, the feature space norm is defined as $|||\Phi_\Omega(f)|||^2 := \sum_{n=0}^{\infty}\sum_{q\in\Lambda_1^n}\|(U[q]f) * \chi_n\|_2^2$.
For practical classification tasks, we can think of the deformation $F_\tau$ as follows. Let $f$ be a representative of a certain signal class, e.g., $f$ is an image of the handwritten digit "8" (see Fig. 2, right). Then, $\{F_\tau f \mid \|D\tau\|_\infty < \frac{1}{2d}\}$ is a collection of images of the handwritten digit "8", where each $F_\tau f$ may be generated, e.g., based on a different handwriting style. The bound $\|D\tau\|_\infty < \frac{1}{2d}$ on the Jacobian matrix of $\tau$ imposes a quantitative limit on the amount of deformation tolerated, rendering the bound (4) to implicitly depend on $D\tau$. The deformation stability bound (4) now guarantees that the features corresponding to the images in the set $\{F_\tau f \mid \|D\tau\|_\infty < \frac{1}{2d}\}$ do not differ too much.
Fig. 2: Left: A natural image (image credit: [17]) is typically governed by areas of little variation, with the individual areas separated by edges that can be modeled as curved singularities. Right: An image of a handwritten digit.
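For readers who want to experiment with the module-sequence formalism, the following is a rough one-dimensional numerical sketch of the operators $U_n[\lambda_n]$ and the feature collection $\Phi_\Omega$. The Gabor-like atoms, the modulus non-linearity, and the sub-sampling factors are illustrative choices of our own and not the filters analyzed in [3] or in this paper.

```python
import numpy as np

def atom(center_freq, width, length=64):
    """Gabor-like atom g_lambda (illustrative parameters only)."""
    t = np.arange(length) - length // 2
    return np.exp(-t**2 / (2 * width**2)) * np.cos(center_freq * t)

def U(f, g, R, M=np.abs):
    """Discrete sketch of U_n[lambda_n]f = R^{1/2} M(f * g)(R .) for d = 1."""
    return np.sqrt(R) * M(np.convolve(f, g, mode="same"))[::R]

def feature_extractor(f, layers, chi):
    """Collect (U[q]f) * chi over all paths q up to the given depth."""
    features, signals = [np.convolve(f, chi, mode="same")], [f]  # n = 0 (empty path)
    for atoms, R in layers:
        signals = [U(s, g, R) for s in signals for g in atoms]
        features += [np.convolve(s, chi, mode="same") for s in signals]
    return features

rng = np.random.default_rng(0)
f = rng.standard_normal(512)
layers = [([atom(0.5, 4), atom(1.5, 2)], 2), ([atom(0.8, 3), atom(2.0, 2)], 2)]
chi = atom(0.0, 8)  # output-generating (low-pass) atom
print(len(feature_extractor(f, layers, chi)))  # 1 + 2 + 4 = 7 feature maps
```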
III. CARTOON FUNCTIONS
The bound in (4) applies to the space of square-integrable
R-band-limited functions. Many signals of practical significance (e.g., natural images) are, however, not band-limited
(due to the presence of sharp and possibly curved edges, see
Fig. 2) or exhibit large bandwidths. In the latter case, the
deformation stability bound (4) becomes void as it depends
linearly on R.
The goal of this paper is to take structural properties of
natural images into account by considering the class of cartoon
functions introduced in [12]. These functions satisfy mild
decay properties and are piecewise continuously differentiable
apart from curved discontinuities along C 2 -hypersurfaces.
Cartoon functions provide a good model for natural images
(see Fig. 2, left) such as those in the Caltech-256 [14] and
CIFAR-100 [15] data sets, for images of handwritten digits
[13] (see Fig. 2, right), and for images of geometric objects
of different shapes, sizes, and colors [16].
We will work with the following—relative to the definition
in [12]—slightly modified version of cartoon functions.
Definition 1. The function $f : \mathbb{R}^d \to \mathbb{C}$ is referred to as a cartoon function if it can be written as $f = f_1 + 1_B f_2$, where $B \subseteq \mathbb{R}^d$ is a compact domain whose boundary $\partial B$ is a compact topologically embedded $C^2$-hypersurface of $\mathbb{R}^d$ without boundary¹, and $f_i \in L^2(\mathbb{R}^d)\cap C^1(\mathbb{R}^d,\mathbb{C})$, $i = 1, 2$, satisfy the decay condition
$$|\nabla f_i(x)| \le C\langle x\rangle^{-d}, \quad i = 1, 2, \qquad (5)$$
for some $C > 0$ (not depending on $f_1$, $f_2$). Furthermore, we denote by
$$\mathcal{C}^K_{\mathrm{CART}} := \{f_1 + 1_B f_2 \mid f_i \in L^2(\mathbb{R}^d)\cap C^1(\mathbb{R}^d,\mathbb{C}),\ i = 1, 2,\ |\nabla f_i(x)| \le K\langle x\rangle^{-d},\ \mathrm{vol}^{d-1}(\partial B) \le K,\ \|f_2\|_\infty \le K\}$$
the class of cartoon functions of "size" $K > 0$.
¹ We refer the reader to [18, Chapter 0] for a review on differentiable manifolds.
We chose the term "size" to indicate the length $\mathrm{vol}^{d-1}(\partial B)$ of the hypersurface $\partial B$. Furthermore, $\mathcal{C}^K_{\mathrm{CART}} \subseteq L^2(\mathbb{R}^d)$, for all $K > 0$; this follows from the triangle inequality according to $\|f_1 + 1_B f_2\|_2 \le \|f_1\|_2 + \|1_B f_2\|_2 \le \|f_1\|_2 + \|f_2\|_2 < \infty$, where in the last step we used $f_1, f_2 \in L^2(\mathbb{R}^d)$. Finally, we note that our main results, presented in the next section, can easily be generalized to finite linear combinations of cartoon functions, but this is not done here for simplicity of exposition.
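As a toy illustration of Definition 1, the snippet below builds a discrete cartoon function $f = f_1 + 1_B f_2$ with a disc as the domain $B$ and translates it slightly. The grid, the choice of $f_1$, $f_2$, and the shift size are our own illustrative assumptions, not constructions from the paper.

```python
import numpy as np

# Toy cartoon function f = f1 + 1_B f2 on a grid over [-1, 1]^2 (illustrative only).
n = 256
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))

f1 = np.exp(-(x**2 + y**2))               # smooth background with rapid decay
f2 = 0.8 * np.cos(3 * x) * np.cos(2 * y)  # smooth, bounded component
B = (x - 0.2)**2 + (y + 0.1)**2 < 0.3**2  # compact domain with C^2 boundary (a disc)

f = f1 + B * f2                           # piecewise smooth, discontinuous across the circle

# A small translation tau(x) = s deforms f; the L2 error is dominated by the
# mass of the indicator near the boundary, in line with the sqrt(||tau||) behavior.
s = 4                                     # shift in pixels, i.e. ||tau||_inf = 2*s/n
f_shifted = np.roll(f, s, axis=1)
err = np.sqrt(np.mean((f - f_shifted)**2) * 4.0)  # discrete L2([-1,1]^2) norm
print(err)
```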
IV. MAIN RESULTS
We start by reviewing the decoupling technique introduced
in [3] to prove deformation stability bounds for band-limited
functions. The proof of the deformation stability bound (4) for
band-limited functions in [3] is based on two key ingredients.
The first one is a contractivity property of ΦΩ (see [3,
Prop. 4]), namely |||ΦΩ (f ) − ΦΩ (h)||| ≤ kf − hk2 , for
all f, h ∈ L2 (Rd ). Contractivity guarantees that pairwise
distances of input signals do not increase through feature
extraction. The second ingredient is an upper bound on the
deformation error kf −Fτ f k2 (see [3, Prop. 5]), specific to the
signal class considered in [3], namely band-limited functions.
Recognizing that the combination of these two ingredients
yields a simple proof of deformation stability is interesting as
it shows that whenever a signal class exhibits inherent stability
w.r.t. deformations of the form (Fτ f )(x) = f (x − τ (x)),
we automatically obtain deformation stability for the feature
extractor ΦΩ . The present paper employs this decoupling
technique and establishes deformation stability for the class
of cartoon functions by deriving an upper bound on the
deformation error $\|f - F_\tau f\|_2$ for $f \in \mathcal{C}^K_{\mathrm{CART}}$.
Proposition 1. For every $K > 0$, there exists a constant $C_K > 0$ such that for all $f \in \mathcal{C}^K_{\mathrm{CART}}$ and all (possibly non-linear) $\tau : \mathbb{R}^d \to \mathbb{R}^d$ with $\|\tau\|_\infty < \frac{1}{2}$, it holds that
$$\|f - F_\tau f\|_2 \le C_K\|\tau\|_\infty^{1/2}. \qquad (6)$$
Proof. See Appendix A.
The Lipschitz exponent $\alpha = \frac{1}{2}$ on the right-hand side (RHS) of (6) determines the decay rate of the deformation error $\|f - F_\tau f\|_2$ as $\|\tau\|_\infty \to 0$. Clearly, larger $\alpha > 0$ results in the deformation error decaying faster as the deformation becomes smaller. The following simple example shows that the Lipschitz exponent $\alpha = \frac{1}{2}$ in (6) is best possible, i.e., it cannot be larger. Consider $d = 1$ and $\tau_s(x) = s$, for a fixed $s$ satisfying $0 < s < \frac{1}{2}$; the corresponding deformation $F_{\tau_s}$ amounts to a simple translation by $s$ with $\|\tau_s\|_\infty = s < \frac{1}{2}$. Let $f = 1_{[-1,1]}$. Then, $f \in \mathcal{C}^K_{\mathrm{CART}}$ for some $K > 0$ and $\|f - F_{\tau_s}f\|_2 = \sqrt{2s} = \sqrt{2}\,\|\tau_s\|_\infty^{1/2}$.
Remark 2. It is interesting to note that in order to obtain bounds of the form $\|f - F_\tau f\|_2 \le C\|\tau\|_\infty^{\alpha}$, for $f \in \mathcal{C} \subseteq L^2(\mathbb{R}^d)$, for some $C > 0$ (that does not depend on $f$, $\tau$) and some $\alpha > 0$, we need to impose non-trivial constraints on the set $\mathcal{C} \subseteq L^2(\mathbb{R}^d)$. Indeed, consider, again, $d = 1$ and $\tau_s(x) = s$, for small $s > 0$. Let $f_s \in L^2(\mathbb{R}^d)$ be a function that has its energy $\|f_s\|_2 = 1$ concentrated in a small interval according to $\mathrm{supp}(f_s) \subseteq [-s/2, s/2]$. Then, $f_s$ and $F_{\tau_s}f_s$ have disjoint support sets and hence $\|f_s - F_{\tau_s}f_s\|_2 = \sqrt{2}$, which does not decay with $\|\tau\|_\infty^{\alpha} = s^{\alpha}$ for any $\alpha > 0$. More generally, the amount of deformation induced by a given function $\tau$ depends strongly on the signal (class) it is applied to. Concretely, the deformation $F_\tau$ with $\tau(x) = e^{-x^2}$, $x \in \mathbb{R}$, will lead to a small bump around the origin only when applied to a low-pass function, whereas the function $f_s$ above will experience a significant deformation.
We are now ready to state our main result.
Theorem 1. Let $\Omega = \big((\Psi_n, M_n, R_n)\big)_{n\in\mathbb{N}}$ be a module-sequence satisfying the weak admissibility condition (3). For every size $K > 0$, the feature extractor $\Phi_\Omega$ is deformation-stable on the space of cartoon functions $\mathcal{C}^K_{\mathrm{CART}}$ w.r.t. deformations $(F_\tau f)(x) = f(x - \tau(x))$, i.e., for every $K > 0$, there exists a constant $C_K > 0$ (that does not depend on $\Omega$) such that for all $f \in \mathcal{C}^K_{\mathrm{CART}}$, and all (possibly non-linear) $\tau \in C^1(\mathbb{R}^d, \mathbb{R}^d)$ with $\|\tau\|_\infty < \frac{1}{2}$ and $\|D\tau\|_\infty \le \frac{1}{2d}$, it holds that
$$|||\Phi_\Omega(F_\tau f) - \Phi_\Omega(f)||| \le C_K\|\tau\|_\infty^{1/2}. \qquad (7)$$
Proof. Applying the contractivity property $|||\Phi_\Omega(g) - \Phi_\Omega(h)||| \le \|g - h\|_2$ with $g = F_\tau f$ and $h = f$, and using (6) yields (7) upon invoking the same arguments as in [3, Eq. 58] and [3, Lemma 2] to conclude that $f \in L^2(\mathbb{R}^d)$ implies $F_\tau f \in L^2(\mathbb{R}^d)$ thanks to $\|D\tau\|_\infty \le \frac{1}{2d}$.
The strength of the deformation stability result in Theorem
1 derives itself from the fact that the only condition we
need to impose on the underlying module-sequence Ω is
weak admissibility according to (3), which as argued in [3,
Sec. 3], can easily be met by normalizing the elements in
Ψn , for all n ∈ N, appropriately. We emphasize that this
normalization does not have an impact on the constant CK
in (7), which is shown in Appendix A to be independent of
Ω. The dependence of CK on K does, however, reflect the
intuition that the deformation stability bound should depend
on the signal class description complexity. For band-limited
signals, this dependence is exhibited by the RHS in (4)
being linear in the bandwidth R. Finally, we note that the
vertical translation invariance result [3, Thm. 2] applies to all
$f \in L^2(\mathbb{R}^d)$ and, thanks to $\mathcal{C}^K_{\mathrm{CART}} \subseteq L^2(\mathbb{R}^d)$ for all $K > 0$, carries over to cartoon functions.
Remark 3. We note that thanks to the decoupling technique
underlying our arguments, the deformation stability bounds
(4) and (7) are very general in the sense of applying to every
contractive (linear or non-linear) mapping Φ. Specifically, the
identity mapping Φ(f ) = f also leads to deformation stability
on the class of cartoon functions (and the class of band-limited
functions). This is interesting as it was recently demonstrated
that employing the identity mapping as a so-called shortcut connection in a subset of layers of a very deep convolutional
neural network yields state-of-the-art classification performance on the ImageNet dataset [19]. Our deformation stability
result is hence general in the sense of applying to a broad class
of network architectures used in practice.
For functions that do not exhibit discontinuities along $C^2$-hypersurfaces, but otherwise satisfy the decay condition (5), we can improve the decay rate of the deformation error from $\alpha = \frac{1}{2}$ to $\alpha = 1$.
Corollary 1. Let $\Omega = \big((\Psi_n, M_n, R_n)\big)_{n\in\mathbb{N}}$ be a module-sequence satisfying the weak admissibility condition (3). For every size $K > 0$, the feature extractor $\Phi_\Omega$ is deformation-stable on the space $\mathcal{H}_K := \{f \in L^2(\mathbb{R}^d)\cap C^1(\mathbb{R}^d, \mathbb{C}) \mid |\nabla f(x)| \le K\langle x\rangle^{-d}\}$ w.r.t. deformations $(F_\tau f)(x) = f(x - \tau(x))$, i.e., for every $K > 0$, there exists a constant $C_K > 0$ (that does not depend on $\Omega$) such that for all $f \in \mathcal{H}_K$, and all (possibly non-linear) $\tau \in C^1(\mathbb{R}^d, \mathbb{R}^d)$ with $\|\tau\|_\infty < \frac{1}{2}$ and $\|D\tau\|_\infty \le \frac{1}{2d}$, it holds that
$$|||\Phi_\Omega(F_\tau f) - \Phi_\Omega(f)||| \le C_K\|\tau\|_\infty.$$
Proof. The proof follows that of Theorem 1 apart from employing (12) instead of (6).
APPENDIX A
PROOF OF PROPOSITION 1
The proof of (6) is based on judiciously combining deformation stability bounds for the components $f_1, f_2$ in $(f_1 + 1_B f_2) \in \mathcal{C}^K_{\mathrm{CART}}$ and for the indicator function $1_B$. The first bound, stated in Lemma 1 below, reads
$$\|f - F_\tau f\|_2 \le CD\|\tau\|_\infty, \qquad (8)$$
and applies to functions $f$ satisfying the decay condition (11), with the constant $C > 0$ as defined in (11) and $D > 0$ not depending on $f$, $\tau$ (see (14)). The bound in (8) requires the assumption $\|\tau\|_\infty < \frac{1}{2}$. The second bound, stated in Lemma 2 below, is
$$\|1_B - F_\tau 1_B\|_2 \le C_{\partial B}^{1/2}\|\tau\|_\infty^{1/2}, \qquad (9)$$
where the constant $C_{\partial B} > 0$ is independent of $\tau$. We now show how (8) and (9) can be combined to establish (6). For $f = (f_1 + 1_B f_2) \in \mathcal{C}^K_{\mathrm{CART}}$, we have
$$\|f - F_\tau f\|_2 \le \|f_1 - F_\tau f_1\|_2 + \|1_B(f_2 - F_\tau f_2)\|_2 + \|(1_B - F_\tau 1_B)(F_\tau f_2)\|_2 \qquad (10)$$
$$\le \|f_1 - F_\tau f_1\|_2 + \|f_2 - F_\tau f_2\|_2 + \|1_B - F_\tau 1_B\|_2\,\|F_\tau f_2\|_\infty,$$
where in (10) we used $(F_\tau(1_B f_2))(x) = (1_B f_2)(x-\tau(x)) = 1_B(x-\tau(x))\,f_2(x-\tau(x)) = (F_\tau 1_B)(x)(F_\tau f_2)(x)$. With the upper bounds (8) and (9), invoking properties of the class of cartoon functions $\mathcal{C}^K_{\mathrm{CART}}$ (namely, (i) $f_1, f_2$ satisfy (5) and thus, by Lemma 1, (8) with $C = K$, and (ii) $\|F_\tau f_2\|_\infty = \sup_{x\in\mathbb{R}^d}|f_2(x-\tau(x))| \le \sup_{y\in\mathbb{R}^d}|f_2(y)| = \|f_2\|_\infty \le K$), this yields
$$\|f - F_\tau f\|_2 \le 2KD\|\tau\|_\infty + KC_{\partial B}^{1/2}\|\tau\|_\infty^{1/2} \le \underbrace{2\max\{2KD,\, KC_{\partial B}^{1/2}\}}_{=:C_K}\,\|\tau\|_\infty^{1/2},$$
which completes the proof of (6).
It remains to show (8) and (9).
Lemma 1. Let $f \in L^2(\mathbb{R}^d)\cap C^1(\mathbb{R}^d,\mathbb{C})$ be such that
$$|\nabla f(x)| \le C\langle x\rangle^{-d}, \qquad (11)$$
for some constant $C > 0$, and let $\|\tau\|_\infty < \frac{1}{2}$. Then,
$$\|f - F_\tau f\|_2 \le CD\|\tau\|_\infty, \qquad (12)$$
for a constant $D > 0$ that does not depend on $f$, $\tau$.
Proof. We first upper-bound the integrand in $\|f - F_\tau f\|_2^2 = \int_{\mathbb{R}^d}|f(x) - f(x-\tau(x))|^2\,dx$. Owing to the mean value theorem [20, Thm. 3.7.5], we have
$$|f(x) - f(x-\tau(x))| \le \|\tau\|_\infty\sup_{y\in B_{\|\tau\|_\infty}(x)}|\nabla f(y)| \le \underbrace{C\|\tau\|_\infty\sup_{y\in B_{\|\tau\|_\infty}(x)}\langle y\rangle^{-d}}_{=:h(x)},$$
where the last inequality follows by assumption. The idea is now to split the integral $\int_{\mathbb{R}^d}|h(x)|^2\,dx$ into integrals over the sets $B_1(0)$ and $\mathbb{R}^d\backslash B_1(0)$. For $x\in B_1(0)$, the monotonicity of the function $x\mapsto\langle x\rangle^{-d}$ implies $h(x) \le C\|\tau\|_\infty\langle 0\rangle^{-d} = C\|\tau\|_\infty$, and for $x\in\mathbb{R}^d\backslash B_1(0)$, we have $(1-\|\tau\|_\infty) \le (1-\frac{\|\tau\|_\infty}{|x|})$, which together with the monotonicity of $x\mapsto\langle x\rangle^{-d}$ yields $h(x) \le C\|\tau\|_\infty\big\langle(1-\frac{\|\tau\|_\infty}{|x|})x\big\rangle^{-d} \le C\|\tau\|_\infty\big\langle(1-\|\tau\|_\infty)x\big\rangle^{-d}$. Putting things together, we hence get
$$\|f - F_\tau f\|_2^2 \le C^2\|\tau\|_\infty^2\Big(\mathrm{vol}^d(B_1(0)) + 2^d\int_{\mathbb{R}^d}\langle u\rangle^{-2d}\,du\Big) \qquad (13)$$
$$\le C^2\|\tau\|_\infty^2\underbrace{\big(\mathrm{vol}^d(B_1(0)) + 2^d\|\langle\cdot\rangle^{-d}\|_2^2\big)}_{=:D^2}, \qquad (14)$$
where in (13) we used the change of variables $u = (1-\|\tau\|_\infty)x$, together with $\frac{du}{dx} = (1-\|\tau\|_\infty)^d \ge 2^{-d}$, where the last inequality follows from $\|\tau\|_\infty < \frac{1}{2}$, which is by assumption. Since $\|\langle\cdot\rangle^{-d}\|_2 < \infty$, for $d\in\mathbb{N}$ (see, e.g., [21, Sec. 1]), and, obviously, $\mathrm{vol}^d(B_1(0)) < \infty$, it follows that $D^2 < \infty$, which completes the proof.
We continue with a deformation stability result for indicator functions $1_B$.
Lemma 2. Let $B\subseteq\mathbb{R}^d$ be a compact domain whose boundary $\partial B$ is a compact topologically embedded $C^2$-hypersurface of $\mathbb{R}^d$ without boundary. Then, there exists a constant $C_{\partial B} > 0$ (that does not depend on $\tau$) such that for all $\tau:\mathbb{R}^d\to\mathbb{R}^d$ with $\|\tau\|_\infty \le 1$, it holds that
$$\|1_B - F_\tau 1_B\|_2 \le C_{\partial B}^{1/2}\|\tau\|_\infty^{1/2}.$$
Proof. In order to upper-bound $\|1_B - F_\tau 1_B\|_2^2 = \int_{\mathbb{R}^d}|1_B(x) - 1_B(x-\tau(x))|^2\,dx$, we first note that the integrand $h(x) := |1_B(x) - 1_B(x-\tau(x))|^2$ satisfies $h(x) = 1$, for $x\in S$, where $S := \{x\in\mathbb{R}^d \mid x\in B \text{ and } x-\tau(x)\notin B\}\cup\{x\in\mathbb{R}^d \mid x\notin B \text{ and } x-\tau(x)\in B\}$, and $h(x) = 0$, for $x\in\mathbb{R}^d\backslash S$. Moreover, owing to $S\subseteq(\partial B + B_{\|\tau\|_\infty}(0))$, where $(\partial B + B_{\|\tau\|_\infty}(0))$ is a tube of radius $\|\tau\|_\infty$ around the boundary $\partial B$ of $B$, and [22, Lemma 2], there exists a constant $C_{\partial B} > 0$ such that $\mathrm{vol}^d(S) \le \mathrm{vol}^d(\partial B + B_{\|\tau\|_\infty}(0)) \le C_{\partial B}\|\tau\|_\infty$, for all $\tau:\mathbb{R}^d\to\mathbb{R}^d$ with $\|\tau\|_\infty \le 1$. We therefore have $\|1_B - F_\tau 1_B\|_2^2 = \int_{\mathbb{R}^d}|h(x)|^2\,dx = \int_S 1\,dx = \mathrm{vol}^d(S) \le C_{\partial B}\|\tau\|_\infty$, which completes the proof.
REFERENCES
Rd without boundary. Then, there exists a constant C∂B > 0
[1] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521,
pp. 436–444, 2015.
[2] S. Mallat, “Group invariant scattering,” Comm. Pure Appl. Math.,
vol. 65, no. 10, pp. 1331–1398, 2012.
[3] T. Wiatowski and H. Bölcskei, “A mathematical theory of deep convolutional neural networks for feature extraction,” arXiv:1512.06293, 2015.
[4] K. Gröchening, Foundations of time-frequency analysis. Birkhäuser,
2001.
[5] I. Daubechies, Ten lectures on wavelets. Society for Industrial and
Applied Mathematics, 1992.
[6] E. J. Candès and D. L. Donoho, “Continuous curvelet transform: II.
Discretization and frames,” Appl. Comput. Harmon. Anal., vol. 19, no. 2,
pp. 198–222, 2005.
[7] G. Kutyniok and D. Labate, Eds., Shearlets: Multiscale analysis for
multivariate data. Birkhäuser, 2012.
[8] E. J. Candès, “Ridgelets: Theory and applications,” Ph.D. dissertation,
Stanford University, 1998.
[9] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun, “What is the
best multi-stage architecture for object recognition?” in Proc. of IEEE
International Conference on Computer Vision (ICCV), 2009, pp. 2146–
2153.
[10] F. J. Huang and Y. LeCun, “Large-scale learning with SVM and
convolutional nets for generic object categorization,” in Proc. of IEEE
International Conference on Computer Vision and Pattern Recognition
(CVPR), 2006, pp. 284–291.
[11] M. A. Ranzato, F. J. Huang, Y. Boureau, and Y. LeCun, “Unsupervised
learning of invariant feature hierarchies with applications to object
recognition,” in Proc. of IEEE International Conference on Computer
Vision and Pattern Recognition (CVPR), 2007, pp. 1–8.
[12] D. L. Donoho, “Sparse components of images and optimal atomic
decompositions,” Constructive Approximation, vol. 17, no. 3, pp. 353–
382, 2001.
[13] Y. LeCun and C. Cortes, “The MNIST database of handwritten digits,”
http://yann.lecun.com/exdb/mnist/, 1998.
[14] G. Griffin, A. Holub, and P. Perona, “Caltech-256 object category
dataset,” http://authors.library.caltech.edu/7694/, 2007.
[15] A. Krizhevsky, “Learning multiple layers of features from tiny images,”
Master’s thesis, University of Toronto, 2009.
[16] “The baby AI school dataset,” http://www.iro.umontreal.ca/%7Elisa/
twiki/bin/view.cgi/Public/BabyAISchool, 2007.
[17] G. Kutyniok and D. Labate, “Introduction to shearlets,” in Shearlets:
Multiscale analysis for multivariate data, G. Kutyniok and D. Labate,
Eds. Birkhäuser, 2012, pp. 1–38.
[18] M. P. do Carmo, Riemannian geometry. Birkhäuser, 2013.
[19] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image
recognition,” arXiv:1512.03385, 2015.
[20] M. Comenetz, Calculus: The elements. World Scientific, 2002.
[21] L. Grafakos, Classical Fourier analysis, 2nd ed. Springer, 2008.
[22] T. Wiatowski, P. Grohs, and H. Bölcskei, “Energy propagation in deep
convolutional neural networks,” arXiv:1704.03636, 2017.
| 2 |
Estimating occupation time functionals∗
arXiv:1706.03418v2 [math.PR] 26 Jul 2017
Randolf Altmeyer
Institute of Mathematics
Humboldt-Universität zu Berlin
[email protected]
Abstract
We study the estimation of integral-type functionals $\int_0^t f(X_r)\,dr$ for a function f and a d-dimensional càdlàg process X with respect to discrete observations by a Riemann-sum estimator. Based on novel semimartingale approximations in the Fourier domain, central limit theorems are proved for L2-Sobolev
functions f with fractional smoothness and continuous Itô semimartingales
X. General L2 (P)-upper bounds on the error for càdlàg processes are given
under weak assumptions. These bounds combine and generalize all previously
obtained results in the literature and apply also to non-Markovian processes.
Several detailed examples are discussed. As application the approximation of
local times for fractional Brownian motion is studied. The optimality of the
L2 (P)-upper bounds is shown by proving the corresponding lower bounds in
case of Brownian motion.
1
Introduction
Let X = (Xt )0≤t≤T be an Rd -valued stochastic process with càdlàg paths on a
filtered probability space (Ω, F , (Ft )0≤t≤T , P). The goal of this paper is to estimate
occupation time functionals
$$\Gamma_t(f) = \int_0^t f(X_r)\,dr, \qquad 0 \le t \le T,$$
for a function f from discrete observations of X at tk = k∆n , where ∆n = T /n and
k = 0, . . . , n. Integral-type functionals of this form are important tools for studying
the properties of X and appear therefore in many fields (see e.g. Chesney et al.
(1997), Hugonnier (1999), Mattingly et al. (2010), Catellier and Gubinelli (2016)).
∗ Many thanks to Jakub Chorowski for helpful comments on an early draft of this manuscript. Support by the DFG Research Training Group 1845 “Stochastic Analysis with Applications in Biology, Finance and Physics” is gratefully acknowledged.
Key words and Phrases: Markov processes; integral functionals; occupation time; semimartingale; Sobolev spaces; fractional Brownian motion; lower bound.
AMS subject classification: Primary 60G99; 62M99; Secondary 60F05.
The most important case for applications is the occupation time $\Gamma_T(1_A)$ for a Borel
set A, which measures the time that the process spends in A. From a statistical point
of view, occupation time functionals are also used to study functionals with respect to the invariant measure µ of an ergodic process X, because $T^{-1}\Gamma_T(f) \to \int f\,d\mu$ as
T → ∞ by the ergodic theorem under appropriate regularity assumptions (Dalalyan
(2005), Mattingly et al. (2010)).
The natural estimator for discrete observations is the Riemann-sum estimator
$$\hat\Gamma_{n,t}(f) = \Delta_n\sum_{k=1}^{\lfloor t/\Delta_n\rfloor} f(X_{t_{k-1}}).$$
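For intuition, a minimal simulation of this estimator for a Brownian path is sketched below; the discretization sizes and the test function are illustrative assumptions of our own, and the fine-grid sum only serves as a proxy for the true occupation time functional.

```python
import numpy as np

def riemann_estimator(X, f, T):
    """Hat-Gamma_{n,T}(f) = Delta_n * sum_{k=1}^n f(X_{t_{k-1}}) from observations X_{t_0},...,X_{t_n}."""
    n = len(X) - 1
    return (T / n) * np.sum(f(X[:-1]))

rng = np.random.default_rng(1)
T, n, n_fine = 1.0, 100, 100_000

# Simulate Brownian motion on a fine grid and subsample to the observation grid.
W = np.concatenate([[0.0], np.cumsum(rng.standard_normal(n_fine) * np.sqrt(T / n_fine))])
f = lambda x: (x >= 0.0) & (x < 1.0)                   # occupation time of [0, 1)

gamma_true = (T / n_fine) * np.sum(f(W[:-1]))          # fine-grid proxy for Gamma_T(f)
gamma_hat = riemann_estimator(W[::n_fine // n], f, T)  # estimator from n observations
print(gamma_true, gamma_hat)
```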
It has been applied in the statistics literature, for instance, in order to estimate
the occupation time (Chorowski (2015)) or functionals of the local time of a diffusion process (Florens-Zmirou (1993), Jacod (1998)). The obtained error bounds for
$\Gamma_t(f) - \hat\Gamma_{n,t}(f)$ are often suboptimal and very specific to the problem at hand. The approximation error has to be determined also, if $\hat\Gamma_{n,t}(f)$ is used for simulating from
the law of Γt (f ). For this, the Xtk actually have to be approximated by some Xtnk ,
obtained for example by an Euler-scheme (Mattingly et al. (2010)). The increasing
availability of exact simulation methods, however, alleviates this problem to some
extent (Beskos and Roberts (2005)). Jacod et al. (2003) considered the Riemann-sum estimator for $f(x) = x$ in order to find the rate of convergence of the integrated error $\int_0^t (X_r - X_{\lfloor r/\Delta_n\rfloor\Delta_n})\,dr$ for semimartingales with jump discontinuities, because
in this case the error Xt − X⌊t/∆n ⌋∆n does not converge to zero in the Skorokhod
sense. Estimation of occupation time functionals, where the process is not observed
directly, has been considered for example by Li et al. (2013), when X is the volatility
of an Itô semimartingale.
The theoretical properties of $\hat\Gamma_{n,t}(f)$ have been studied systematically only in
few works and only for rather specific processes X and functions f . Consistency as
∆n → 0 follows from Riemann approximation already under weak assumptions. A
central limit theorem for Itô semimartingales and f ∈ C 2 (Rd ) was proven in the
monograph of Jacod and Protter (2011, Chapter 6) with rate of convergence ∆n .
This is much faster than the $\Delta_n^{1/2}$-rate when approximating $f(X_t)$ by $f(X_{\lfloor t/\Delta_n\rfloor\Delta_n})$
for continuous X. Interestingly, the weak limit depends only on ∇f and therefore
it seems that the CLT might also hold for C 1 (Rd )-functions. The proof, however,
works only for f ∈ C 2 (Rd ), using Itô’s formula.
For less smooth functions no CLT has been obtained so far. Instead, several
authors considered L2(P)-bounds for the estimation error $\Gamma_t(f) - \hat\Gamma_{n,t}(f)$. For α-Hölder functions f and 0 ≤ α ≤ 1 the rate of convergence $\Delta_n^{(1+\alpha)/2}$, up to log factors, has been obtained by Malliavin calculus for one-dimensional diffusions (Kohatsu-Higa et al. (2014)) and by assuming heat kernel bounds on the transition densities for Markov processes in Rd (Ganychenko (2015); Ganychenko and Kulik (2014)). The only result for indicator functions, which is of high importance for applications, is the surprising rate $\Delta_n^{3/4}$ for one-dimensional Brownian motion and
indicators f = 1[a,b) , a < b (see Ngo and Ogawa (2011)). Interestingly, this corresponds to the Hölder-rate for α = 1/2. A partial explanation combining the different
rates was given by Altmeyer and Chorowski (2016) which considered f in fractional
L2-Sobolev spaces using a specific analysis with respect to stationary Markov pro-
cesses. It is not clear if similar results hold generally in higher dimensions or for
different processes. Note that all studied processes until now are Markov processes.
In this work we study the estimation of occupation time functionals from several different points of view. Related to the classical work of Geman and Horowitz (1980) on occupation densities, a central idea is to rewrite the error $\Gamma_t(f) - \hat\Gamma_{n,t}(f)$ as
$$(2\pi)^{-d}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\int \mathcal F f(u)\int_{t_{k-1}}^{t_k}\big(e^{-i\langle u, X_r\rangle} - e^{-i\langle u, X_{t_{k-1}}\rangle}\big)\,dr\,du$$
by inverse Fourier transform under suitable regularity assumptions. Together with
a pathwise analysis of the exponentials e−ihu,Xr i and with functions f having sufficiently regular Fourier transforms this is just the right idea to control the estimation
error. The pathwise analysis is inspired by the one-step Euler approximations of
Fournier and Printems (2008). These ideas allow us in Section 2 to extend the central limit theorem of Jacod and Protter (2011) to L2 -Sobolev functions f ∈ H 1 (Rd )
and non-degenerate continuous Itô semimartingales with the same rate of convergence ∆n . The proof is based on tight bounds for the Itô-correction term in Itô’s
formula. Note that a function f ∈ H 1 (Rd ) is not necessarily continuous for d > 1.
For less smooth functions it is in general not possible to prove central limit theorems, because the bias becomes degenerate asymptotically. Instead, Section 3 provides non-asymptotic upper bounds for the L2(P)-error $\Gamma_t(f) - \hat\Gamma_{n,t}(f)$ and general
d-dimensional càdlàg processes X under weak assumptions. Only the smoothness
of the bivariate distributions of (Xh , Xr ) in 0 ≤ h < r ≤ T is required, i.e. either
the joint densities or the characteristic functions are differentiable in h and r. This
allows us to prove the rate $\Delta_n^{(1+s)/2}$
for a large class of d-dimensional processes and
L2 -Sobolev functions with fractional smoothness 0 ≤ s ≤ 1. In particular, this covers
the previous results for Hölder and indicator functions. We therefore obtain a unifying mathematical explanation for the different rates. Several examples demonstrate
the wide applicability of these upper bounds, for example to Markov processes, but
also to fractional Brownian motion. These results are used to prove, to the best of
our knowledge, unknown rates of convergence for approximating the local times of
fractional Brownian motion. Note that the L2 (P)-bounds also yield improved bounds
for the so-called weak approximations $\mathrm E[\Gamma_t(f) - \hat\Gamma_{n,t}(f)]$, which are of key importance
in Monte-Carlo simulations (cf. Gobet and Labart (2008)).
Rate optimality is addressed in Section 4. We prove the corresponding lower
bounds for the L2 (P)-error in case of L2 -Sobolev functions and d-dimensional Brownian motion. In this case we can even conclude the efficiency of the Riemann-sum
estimator in terms of its asymptotic variance.
We want to emphasize that the L2 (P)-bounds are not only optimal and explicit
with respect to their dependence on ∆n, but also with respect to T. This allows for approximating functionals $\int f\,d\mu$ in an ergodic setting with respect to the invariant measure µ at the optimal rate $T^{-1/2}$ by the estimator $T^{-1}\hat\Gamma_{n,T}(f)$, independent of ∆n being fixed or ∆n → 0. We therefore believe that our results may be instrumental
in bridging the gap between results in statistics obtained for high-frequency and
low-frequency observations.
In fact, the results in Section 3 have been crucial for approximating $\int_0^t 1_{[a,b)}(X_r)\,dr$, $a < b$, with respect to a one-dimensional stationary diffusion X in an effort to find a universal estimator for the volatility process which is minimax optimal at high and low frequency (cf. Chorowski (2015)). Moreover, it is well-known that, under suitable regularity assumptions, $T^{-1}\Gamma_T(f)$ converges to $\int f\,d\mu$ at the rate $T^{-1/2}$. This is the same rate as for $T^{-1}\hat\Gamma_{n,T}(f)$. This suggests that
our results can also be applied to transfer results obtained in statistics for continuous
observations to discrete observations by approximating the corresponding integral
functionals.
Proofs can be found in the appendix. Let us first introduce some notation. k·k
and k·k∞ always denote the Euclidean and sup norms on Rd (or Rd×d ), while k·kLp
and k·kLp (P) for p ≥ 1 are the Lp norms on Lp (Rd ) and Lp (P). Cc∞ (Rd ) is the
space of smooth functions with compact support and S(Rd ) is the space of Schwartz
functions which decay rapidly at infinity. Denote by C s (Rd ) for s ≥ 0 the space of
⌊s⌋-times differentiable functions whose partial derivatives of order ⌊s⌋ are (s −⌊s⌋)Hölder continuous. D([0, T ], Rd) is the Skorokhod space on [0, T ]. Moreover, C and
Cp always denote positive absolute constants which may change from line to line.
We write a . b for a ≤ Cb. Zn = OP (an ) means for a sequence of random variables
$(Z_n)_{n\ge 1}$ and real numbers $(a_n)_{n\ge 1}$ that $a_n^{-1}Z_n$ is tight, while $Z_n = o_P(a_n)$ means that $a_n^{-1}Z_n \xrightarrow{P} 0$ in probability.
2
Central limit theorems
We will derive in this section central limit theorems for the error $\Gamma_t(f) - \hat\Gamma_{n,t}(f)$ as
∆n → 0 with 0 ≤ t ≤ T and T fixed. We assume that (Ω, F , (Ft)0≤t≤T , P) satisfies
the usual conditions and that X is a d-dimensional continuous Itô semimartingale
of the form
$$X_t = X_0 + \int_0^t b_r\,dr + \int_0^t \sigma_r\,dW_r, \qquad 0 \le t \le T, \qquad (2.1)$$
where X0 is F0 -measurable, (Wt )0≤t≤T is a standard d-dimensional Brownian motion,
b = (bt )0≤t≤T is a locally bounded Rd -valued process and σ = (σt )0≤t≤T is a càdlàg
Rd×d -valued process, all adapted to (Ft )0≤t≤T .
The central limit theorems are based on the concept of stable convergence
(Rényi (1963)), which we recall now. For more details and examples refer to
Jacod and Shiryaev (2013) or Podolskij and Vetter (2010). Let (Yn )n≥1 be a sequence of random variables on a probability space (Ω, F , P) with values in a Polish
space (E, E). We say that $Y_n$ converges stably to Y, written $Y_n \xrightarrow{st} Y$, if Y is defined on an extension $(\Omega', \mathcal F', \mathrm P')$ of the original probability space and if $(Y_n, U) \xrightarrow{d} (Y, U)$
for all F -measurable random variables U. Stable convergence implies convergence in
distribution and allows for standardizing estimators when the parameter of interest
is random (cf. Remark 2.2). If $Z_n$ and Z are stochastic processes on [0, T], we further write $(Z_n)_t \xrightarrow{ucp} Z_t$ for $\sup_{0\le t\le T}\|(Z_n)_t - Z_t\| \xrightarrow{P} 0$. Proving stable convergence
with respect to stochastic processes is generally quite difficult. Our main tool will
be Theorem 7.28 of Jacod and Shiryaev (2013).
2.1
CLT for C 2-functions
We first review the basic situation when f ∈ C 2 (Rd ). The following is a special case
of Theorem 6.1.2 of Jacod and Protter (2011) for continuous X.
Theorem 2.1. Let $f \in C^2(\mathbb R^d)$. Then we have the stable convergence
$$\Delta_n^{-1}\big(\Gamma_t(f) - \hat\Gamma_{n,t}(f)\big) \xrightarrow{st} \frac{f(X_t) - f(X_0)}{2} + \frac{1}{\sqrt{12}}\int_0^t \big\langle \nabla f(X_r), \sigma_r\, d\widetilde W_r\big\rangle \qquad (2.2)$$
as processes on $D([0, T], \mathbb R^d)$, where $\widetilde W$ is a d-dimensional Brownian motion defined on an independent extension of $(\Omega, \mathcal F, (\mathcal F_t)_{0\le t\le T}, \mathrm P)$.
In order to explain the main ideas of the proof consider the decomposition $\Gamma_t(f) - \hat\Gamma_{n,t}(f) = M_{n,t}(f) + D_{n,t}(f)$, where
$$M_{n,t}(f) = \sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\int_{t_{k-1}}^{t_k}\big(f(X_r) - \mathrm E[f(X_r)\mid \mathcal F_{t_{k-1}}]\big)\,dr, \qquad (2.3)$$
$$D_{n,t}(f) = \sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\int_{t_{k-1}}^{t_k}\mathrm E\big[f(X_r) - f(X_{t_{k-1}})\mid \mathcal F_{t_{k-1}}\big]\,dr. \qquad (2.4)$$
By the martingale structure of Mn,t (f ) and Itô’s formula it is easy to check using
Theorem 7.28 of Jacod and Shiryaev (2013) that
$$\Delta_n^{-1} M_{n,t}(f) \xrightarrow{st} \frac{1}{2}\int_0^t\big\langle\nabla f(X_r), \sigma_r\,dW_r\big\rangle + \frac{1}{\sqrt{12}}\int_0^t\big\langle\nabla f(X_r), \sigma_r\,d\widetilde W_r\big\rangle \qquad (2.5)$$
holds for $n \to \infty$ as processes on $D([0, T], \mathbb R^d)$, where $\widetilde W$ is a d-dimensional Brownian motion defined on an independent extension of $(\Omega, \mathcal F, (\mathcal F_t)_{0\le t\le T}, \mathrm P)$. In fact, here $f \in C^1(\mathbb R^d)$ is sufficient (for a proof see Proposition A.4). With respect to $D_{n,t}(f)$ it can be shown by Itô's formula that
$$\Delta_n^{-1} D_{n,t}(f) \xrightarrow{ucp} \frac{f(X_t) - f(X_0)}{2} - \frac{1}{2}\int_0^t\big\langle\nabla f(X_r), \sigma_r\,dW_r\big\rangle. \qquad (2.6)$$
In particular, $\Delta_n^{-1}D_{n,t}(f)$ is not negligible asymptotically. Summing up $\Delta_n^{-1}M_{n,t}(f)$ and $\Delta_n^{-1}D_{n,t}(f)$ as well as the corresponding limits yields the theorem. It is interesting to note that the CLT implies the stable convergence of $\Delta_n^{-1}(\Gamma_t(f) - \hat\Theta_{n,t}(f))$ to $\frac{1}{\sqrt{12}}\int_0^t\langle\nabla f(X_r), \sigma_r\,d\widetilde W_r\rangle$, where
$$\hat\Theta_{n,t}(f) = \Delta_n\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\frac{f(X_{t_{k-1}}) + f(X_{t_k})}{2}$$
is the trapezoid rule estimator. Therefore $\hat\Theta_{n,t}(f)$ is actually the more natural estimator for $\Gamma_t(f)$. In particular, $\hat\Gamma_{n,t}(f)$ and $\hat\Theta_{n,t}(f)$ have the same rate of convergence. This is not true generally for deterministic integrands. We will see in Section 4 that both estimators are rate optimal and that the asymptotic variance in (2.2) is efficient.
Remark 2.2. From a statistical point of view Theorem 2.1 can be exploited to obtain a feasible central limit theorem. More precisely, the estimator $\widehat{\mathrm{AVAR}}_T(f) = \frac{1}{12}\sum_{k=1}^{n}\langle\nabla f(X_{t_{k-1}}), X_{t_k} - X_{t_{k-1}}\rangle^2$ converges in probability to $\frac{1}{12}\int_0^T\|\sigma_r^\top\nabla f(X_r)\|^2\,dr$, which is equal to $\mathrm{Var}\big(\frac{1}{\sqrt{12}}\int_0^T\langle\nabla f(X_r), \sigma_r\,d\widetilde W_r\rangle\big)$. The stable convergence and the continuous mapping theorem therefore yield $\Delta_n^{-1}(\widehat{\mathrm{AVAR}}_T(f))^{-1/2}\big(\Gamma_T(f) - \hat\Theta_{n,T}(f)\big) \xrightarrow{d} N(0, 1)$. This can be used to derive asymptotic confidence intervals for $\hat\Theta_{n,T}(f)$.
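A rough Monte Carlo check of this feasible CLT, for one-dimensional Brownian motion and the smooth test function f(x) = sin(x), could look as follows; all simulation parameters are illustrative assumptions of our own and the fine grid is only used as a proxy for $\Gamma_T(f)$.

```python
import numpy as np

# Sketch: standardized errors should be approximately standard normal.
rng = np.random.default_rng(2)
T, n, n_fine, reps = 1.0, 200, 20_000, 2000
f, df = np.sin, np.cos

z = []
for _ in range(reps):
    W = np.concatenate([[0.0], np.cumsum(rng.standard_normal(n_fine) * np.sqrt(T / n_fine))])
    gamma = (T / n_fine) * np.sum(f(W[:-1]))                  # fine-grid proxy for Gamma_T(f)
    X = W[::n_fine // n]                                      # n + 1 discrete observations
    theta = (T / n) * np.sum((f(X[:-1]) + f(X[1:])) / 2)      # trapezoid rule estimator
    avar = np.sum(df(X[:-1])**2 * np.diff(X)**2) / 12         # estimated asymptotic variance
    z.append((n / T) * (gamma - theta) / np.sqrt(avar))       # Delta_n^{-1} standardized error
print(np.mean(z), np.std(z))  # should be roughly 0 and 1
```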
2.2
CLT for Fourier-Lebesgue functions
Interestingly, the weak limit in (2.2) is also well-defined for less smooth functions.
The argument above, however, cannot be applied, since it relies on Itô’s formula. In
order to study the limit of $\Delta_n^{-1}D_{n,t}(f)$ for more general f, note that we can write
$$f(X_r) - f(X_{t_{k-1}}) = (2\pi)^{-d}\int \mathcal F f(u)\big(e^{-i\langle u, X_r\rangle} - e^{-i\langle u, X_{t_{k-1}}\rangle}\big)\,du \qquad (2.7)$$
for sufficiently regular f, where $\mathcal F f(u) = \int f(x)e^{i\langle u, x\rangle}\,dx$ is the Fourier transform of
f . In principle, we can now study e−ihu,Xr i − e−ihu,Xtk−1 i instead of f (Xr ) − f (Xtk−1 ).
The error can be calculated exactly, if the characteristic functions of the marginals
Xr are known. For the general Itô semimartingale X in (2.1), however, this is a
difficult issue. Instead, the key idea is to replace the marginals Xr for some ε =
ε(u, n) by the close approximations Xr−ε + br−ε (r − ε) + σr−ε (Wr − Wr−ε ), whose
distributions are Gaussian conditional on Fr−ε . This idea is inspired by the one-step
Euler approximation of Fournier and Printems (2008). For this σ needs to be nondegenerate and the approximation error has to be sufficiently small. We therefore
work under the following Assumption.
Assumption 2.3 (SM-α-β). Let $0\le\alpha,\beta\le1$. There exists a constant $C$ and a sequence of stopping times $(\tau_K)_{K\ge1}$ with $\tau_K\to\infty$ as $K\to\infty$ such that
$$\mathbb E\Big[\sup_{0\le r\le t}\|\sigma_{(s+r)\wedge\tau_K}-\sigma_{s\wedge\tau_K}\|^2\Big]\le Ct^{2\alpha},\qquad\mathbb E\Big[\sup_{0\le r\le t}\|b_{(s+r)\wedge\tau_K}-b_{s\wedge\tau_K}\|^2\Big]\le Ct^{2\beta}$$
for all $0\le s,t\le T$ with $s+t\le T$. Moreover, the process $((\sigma_t\sigma_t^\top)^{-1})_{0\le t\le T}$ is almost surely bounded.
The smoothness assumptions on $\sigma$ and $b$ are rather general and appear frequently in the literature (see e.g. Jacod and Mykland (2015), Jacod and Protter (2011, Section 2.1.5)). They exclude fixed times of discontinuity, but allow for non-predictable jumps. The assumptions are satisfied if $\sigma$ and $b$ are themselves Itô semimartingales (with $\alpha=1/2$ or $\beta=1/2$) or if their paths are Hölder continuous with regularity $\alpha$ or $\beta$. In particular, they hold with $\alpha=\beta=1/2$ if $X$ is a diffusion process such that $\sigma_t=\tilde\sigma(X_t)$, $b_t=\tilde b(X_t)$ with Lipschitz continuous functions $\tilde\sigma$, $\tilde b$.

The right-hand side in (2.7) shows that it is natural to assume that the Fourier transform of $f$ is integrable, which leads to the Fourier-Lebesgue spaces. They appear in the form below for example in Catellier and Gubinelli (2016).
Definition 2.4. Let $s\in\mathbb R$, $p\ge1$ and denote by $\mathcal{FL}^{s,p}(\mathbb R^d):=\{f\in L^p(\mathbb R^d):\|f\|_{\mathcal{FL}^{s,p}}<\infty\}$ the Fourier-Lebesgue spaces of order $(s,p)$ with norm $\|f\|_{\mathcal{FL}^{s,p}}=\big(\int|\mathcal F f(u)|^p(1+\|u\|)^{sp}\,du\big)^{1/p}$. Denote by $\mathcal{FL}^{s,p}_{loc}(\mathbb R^d)$ the localized Fourier-Lebesgue spaces which contain all functions $f$ such that $f\varphi\in\mathcal{FL}^{s,p}(\mathbb R^d)$ for all $\varphi\in C_c^\infty(\mathbb R^d)$.

This definition assumes implicitly for $f\in\mathcal{FL}^{s,p}(\mathbb R^d)$ that the Fourier transform $\mathcal F f$ exists as a function in $L^p(\mathbb R^d)$. For $p=1$ we just write $\mathcal{FL}^{s}(\mathbb R^d)$ (or $\mathcal{FL}^{s}_{loc}(\mathbb R^d)$) and $\|f\|_{\mathcal{FL}^{s}}$. For $p=2$ the spaces $H^s(\mathbb R^d):=\mathcal{FL}^{s,2}(\mathbb R^d)$ (or $H^s_{loc}(\mathbb R^d):=\mathcal{FL}^{s,2}_{loc}(\mathbb R^d)$) are the fractional $L^2$-Sobolev spaces of order $s$ with norm $\|\cdot\|_{H^s}:=\|\cdot\|_{\mathcal{FL}^{s,2}}$. In particular, a function $f\in H^s(\mathbb R^d)$ is $\lfloor s\rfloor$-times weakly differentiable. By properties of the Fourier transform it can be shown for $s\ge0$ that $\mathcal{FL}^{s}_{loc}(\mathbb R^d)\subset C^s(\mathbb R^d)$, $C^s(\mathbb R^d)\subset H^{s-\varepsilon}_{loc}(\mathbb R^d)$ for any $\varepsilon>0$ and $H^{s}_{loc}(\mathbb R^d)\subset\mathcal{FL}^{s'}_{loc}(\mathbb R^d)$, if $s>s'+d/2$.

Note that we can gain in regularity for some functions by considering larger $p$. For example, the Fourier transforms of the indicator functions $1_{[a,b]}$, $a<b$, decay as $|u|^{-1}$ for $|u|\to\infty$ and thus $1_{[a,b]}\in\mathcal{FL}^{0-}(\mathbb R)$, but also $1_{[a,b]}\in H^{1/2-}(\mathbb R)$. Similarly, $x\mapsto e^{-|x|}$ lies in $\mathcal{FL}^{1-}(\mathbb R)$ and in $H^{3/2-}(\mathbb R)$. For another example of negative regularity see Theorem 3.14. More details on these spaces can be found in Adams and Fournier (2003), Di et al. (2012) and Triebel (2010).
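As a routine verification of the indicator example above (a standard computation, included here only for illustration), one can compute the Fourier transform explicitly: for $a<b$,
$$\mathcal F 1_{[a,b]}(u)=\int_a^b e^{iux}\,dx=\frac{e^{iub}-e^{iua}}{iu},\qquad|\mathcal F 1_{[a,b]}(u)|\le\min\Big(b-a,\,\frac{2}{|u|}\Big),$$
so $\int|\mathcal F 1_{[a,b]}(u)|(1+|u|)^{s}\,du<\infty$ for every $s<0$, while $\int|\mathcal F 1_{[a,b]}(u)|^2(1+|u|)^{2s}\,du<\infty$ for every $s<1/2$. This gives exactly $1_{[a,b]}\in\mathcal{FL}^{0-}(\mathbb R)$ and $1_{[a,b]}\in H^{1/2-}(\mathbb R)$ as claimed above.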
If $f\in\mathcal{FL}^s_{loc}(\mathbb R^d)$ for $s\ge1$, then $f\in C^1(\mathbb R^d)$ such that (2.5) remains true. Moreover, we will prove for sufficiently smooth $\sigma$ and $b$ that also the limit for $\Delta_n^{-1}D_{n,t}(f)$ in (2.6) remains valid. This yields the wanted CLT. For a concise statement we use the trapezoid rule estimator from the last section.
Theorem 2.5. Assume (SM-α-β) for $0\le\alpha,\beta\le1$. Let $s>2-2\alpha$, $s\ge1$, $s+\beta>1$. Then we have for $f\in\mathcal{FL}^s_{loc}(\mathbb R^d)$ the stable convergence
$$\Delta_n^{-1}\big(\Gamma_t(f)-\hat\Theta_{n,t}(f)\big)\xrightarrow{st}\frac{1}{\sqrt{12}}\int_0^t\big\langle\nabla f(X_r),\sigma_r\,d\widetilde W_r\big\rangle$$
as processes on $D([0,T],\mathbb R^d)$, where $\widetilde W$ is a $d$-dimensional Brownian motion defined on an independent extension of $(\Omega,\mathcal F,(\mathcal F_t)_{0\le t\le T},\mathbb P)$. The feasible central limit theorem of Remark 2.2 remains valid.
This result is remarkable since it is only based on regularity assumptions for $f$, $\sigma$ and $b$. In particular, for smoother coefficients the conditions on $f$ can be relaxed. For $\alpha>1/2$, $f\in\mathcal{FL}^1_{loc}(\mathbb R^d)$ is allowed. For $\alpha\le1/2$ there is a trade-off between the regularities of $f$ and $\sigma$. The theorem also extends to $L^2$-Sobolev functions of sufficiently large regularity, because $H^s_{loc}(\mathbb R^d)\subset\mathcal{FL}^{s'}_{loc}(\mathbb R^d)$, if $s>s'+d/2$.
Remark 2.6. We want to emphasize that, as the proof of Theorem 2.5 reveals, it is not possible to argue as in Section 2.1 by using a more general Itô formula for $f\in C^1(\mathbb R^d)$, for example by Russo and Vallois (1996).
2.3 CLT for $L^2$-Sobolev functions
The proof of Theorem 2.5 does not apply to all $C^1(\mathbb R^d)$-functions. The weak limit, however, is also well-defined for $f\in H^1_{loc}(\mathbb R^d)$. A minor issue in this case is that the random variables $f(X_r)$ depend on the version of $f$ that we choose in its equivalence class in $L^2_{loc}(\mathbb R^d)$. This problem disappears if $f$ is continuous or if $X_r$ has a density. Note that $H^1(\mathbb R^d)\subset C(\mathbb R^d)$ only for $d=1$. Interestingly, it can be shown by the methods of Romito (2017), which are in turn also inspired by Fournier and Printems (2008), under Assumption (SM-α-β) that the marginals $X_r$ have Lebesgue densities $p_r$ for $r>0$.
In order to extend the central limit theorem to $f\in H^1_{loc}(\mathbb R^d)$, we need to make the following stronger assumption.

Assumption (X0). $X_0$ is independent of $(X_t-X_0)_{0\le t\le T}$ and has Lebesgue density $\mu$. Either $\mathcal F\mu\in L^1(\mathbb R^d)$, or $\mathcal F\mu$ is non-negative and $\mu$ is bounded.
This assumption can be understood in two ways. First, the independence and the boundedness of $\mu$ imply that the marginals $X_r$ have uniformly bounded Lebesgue densities. Second, $f$ itself becomes more regular, as by independence $\mathbb E[\Gamma_t(f)\,|\,(X_r-X_0)_{0\le r\le t}]=\int_0^t(f*\tilde\mu)(X_r-X_0)\,dr$ with $\tilde\mu(x)=\mu(-x)$. Unfortunately, this property cannot be used directly in the proof.
We can show under this assumption that (2.5) remains true for $f\in H^1_{loc}(\mathbb R^d)$. Moreover, for $f\in H^s_{loc}(\mathbb R^d)$ and sufficiently large $s\ge1$ we can prove that $\Delta_n^{-1}D_{n,T}(f)$ converges to (2.6) in probability. This convergence is not uniform in $0\le t\le T$ anymore. Therefore the weak convergence is not functional and holds only at the fixed time $T$.
Theorem 2.7. Assume (SM-α-β) for $0\le\alpha,\beta\le1$ and (X0). Let $s>2-2\alpha$, $s\ge1$, $s+\beta>1$. Then we have for $f\in H^s_{loc}(\mathbb R^d)$ the stable convergence
$$\Delta_n^{-1}\big(\Gamma_T(f)-\hat\Theta_{n,T}(f)\big)\xrightarrow{st}\frac{1}{\sqrt{12}}\int_0^T\big\langle\nabla f(X_r),\sigma_r\,d\widetilde W_r\big\rangle,$$
where $\widetilde W$ is a $d$-dimensional Brownian motion defined on an independent extension of $(\Omega,\mathcal F,(\mathcal F_t)_{0\le t\le T},\mathbb P)$. The feasible central limit theorem of Remark 2.2 remains valid.
Because of independence, Assumption (X0) can be relaxed by randomizing the
initial condition and a coupling argument. This yields the following corollary.
Corollary 2.8. Assume (SM-α-β) for $0\le\alpha,\beta\le1$. Let $s>2-2\alpha$, $s\ge1$, $s+\beta>1$. For any function $f\in H^s_{loc}(\mathbb R^d)$ there exists a set $E\subset\mathbb R^d$ such that $\mathbb R^d\setminus E$ has Lebesgue measure 0 and such that the stable convergence in Theorem 2.7 holds for all $X_0=x_0\in E$.
This result generalizes Theorem 2.1 considerably. The set E depends in general
on the function f , i.e. it can change if we consider a different function f˜ with f = f˜
almost everywhere. If f has a bit more regularity, then the CLT holds for all initial
values.
Corollary 2.9. Assume (SM-α-β) for $0\le\alpha,\beta\le1$. Let $s>2-2\alpha$, $s>1$. Then we have the stable convergence in Theorem 2.7 for any $f\in C^s(\mathbb R^d)$ and all initial values $X_0=x_0\in\mathbb R^d$.

Note that here $s$ is strictly larger than 1. In a way this generalizes Theorem 2.5, because $\mathcal{FL}^s(\mathbb R^d)\subset C^s(\mathbb R^d)$ for $s\ge1$. On the other hand, the stable convergence in Theorem 2.5 is functional, while Corollary 2.9 proves stable convergence at a fixed time.
Remark 2.10. In some cases it is possible to derive similar CLTs for $f\in H^s_{loc}(\mathbb R^d)$ with $0\le s<1$. For example, we have $f=1_{[a,\infty)}\in H^{1/2-}_{loc}(\mathbb R)$ and the proof of Theorem 2.7 implies a CLT for $\Delta_n^{-3/4}\big(\Gamma_T(f_\varepsilon)-\hat\Gamma_{n,T}(f_\varepsilon)\big)$, where $f_\varepsilon=f*\varphi_\varepsilon$ with $\varphi\in C_c^\infty(\mathbb R^d)$, $\varphi_\varepsilon=\varepsilon^{-1}\varphi(\varepsilon^{-1}(\cdot))$ and $\varepsilon=\Delta_n^{1/2}$. The limiting distribution is similar to Corollary 3.4 of Ngo and Ogawa (2011) and involves local times of $X$. The rate $\Delta_n^{3/4}$ will be explained in the next section. It is not possible to extend this to a CLT for $\Delta_n^{-3/4}\big(\Gamma_T(f)-\hat\Gamma_{n,T}(f)\big)$, as the error $\Gamma_T(f-f_\varepsilon)-\hat\Gamma_{n,T}(f-f_\varepsilon)$ is only of order $O_P(\Delta_n^{3/4})$.
3 Upper bounds for less smooth functions
The aim of this section is to derive finite sample upper bounds on $\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|_{L^2(\mathbb P)}$ with explicit dependence on $\Delta_n$, $T$ and $f$. The function $f$ is possibly much rougher than in the last section. It is therefore not possible to use arguments based on Taylor's theorem such as Itô's formula. Except for special cases, it is impossible to prove central limit theorems for $\Gamma_T(f)-\hat\Gamma_{n,T}(f)$ in this case (cf. Remark 2.10). Instead of using martingale arguments, the results here are based on direct calculations with respect to the distribution of $X$. The following is inspired by the proof of Ganychenko (2015, Theorem 1).
We always assume that $X=(X_t)_{0\le t\le T}$ is a càdlàg process with respect to $(\Omega,\mathcal F,(\mathcal F_t)_{0\le t\le T},\mathbb P)$, not necessarily a semimartingale or a Markov process. Then
$$\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|^2_{L^2(\mathbb P)}=\sum_{k,j=1}^{n}\int_{t_{k-1}}^{t_k}\int_{t_{j-1}}^{t_j}\mathbb E\Big[\big(f(X_r)-f(X_{t_{k-1}})\big)\big(f(X_h)-f(X_{t_{j-1}})\big)\Big]\,dh\,dr.$$
Assume that the bivariate distributions of $(X_a,X_b)$, $a<b$, have Lebesgue densities $p_{a,b}$. Under suitable regularity assumptions the expectation in the last display can be written as
$$\int_{t_{k-1}}^{r}\int f(x)f(y)\big(\partial_b p_{h,b}(x,y)-\partial_b p_{t_{j-1},b}(x,y)\big)\,d(x,y)\,db=\int_{t_{k-1}}^{r}\int_{t_{j-1}}^{h}\int f(x)f(y)\,\partial^2_{ab}p_{a,b}(x,y)\,d(x,y)\,da\,db. \qquad(3.1)$$
From this we can obtain general upper bounds on $\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|^2_{L^2(\mathbb P)}$. Their structure reflects that the distributions of $(X_a,X_b)$ degenerate for $a=b$, therefore requiring a different argument.
Proposition 3.1. Assume that the joint densities $p_{a,b}$ of $(X_a,X_b)$ exist for all $0<a<b\le T$.

(i) Assume that $b\mapsto p_{a,b}(x,y)$ is differentiable for all $x,y\in\mathbb R^d$, $0<a<b<T$, with locally bounded derivatives $\partial_b p_{a,b}$. Then there exists a constant $C$ such that for all bounded $f$ with compact support
$$\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|^2_{L^2(\mathbb P)}\le C\Delta_n\int(f(y)-f(x))^2\Big(\sum_{k=1}^{n}\int_{t_{k-1}}^{t_k}p_{t_{k-1},r}(x,y)\,dr+\sum_{k-1>j\ge2}\int_{t_{k-1}}^{t_k}\int_{t_{j-1}}^{t_j}\big(|\partial_r p_{h,r}(x,y)|+|\partial_r p_{t_{j-1},r}(x,y)|\big)\,dh\,dr\Big)\,d(x,y).$$
(ii) In addition, assume that $a\mapsto\partial_b p_{a,b}(x,y)$ is differentiable for all $x,y\in\mathbb R^d$ and $0<a<b<T$, with locally bounded derivatives $\partial^2_{ab}p_{a,b}$. Then we also have
$$\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|^2_{L^2(\mathbb P)}\le C\Delta_n^2\int(f(y)-f(x))^2\Big(\Delta_n^{-1}\sum_{k=1}^{n}\int_{t_{k-1}}^{t_k}p_{t_{k-1},r}(x,y)\,dr+\sum_{k-1>j\ge2}\int_{t_{k-1}}^{t_k}\int_{t_{j-1}}^{t_j}\big|\partial^2_{hr}p_{h,r}(x,y)\big|\,dh\,dr\Big)\,d(x,y).$$
Concrete upper bounds can be obtained from this by combining the smoothness of $f$ with bounds on $\partial_b p_{a,b}$ and $\partial^2_{ab}p_{a,b}$. Another way for getting upper bounds comes from formally applying the Plancherel theorem to (3.1). Denote by $\varphi_{a,b}=\mathcal F p_{a,b}$ the characteristic function of $(X_a,X_b)$. Under sufficient regularity conditions (3.1) is equal to
$$(2\pi)^{-2d}\int_{t_{k-1}}^{r}\int_{t_{j-1}}^{h}\int\mathcal F f(u)\,\mathcal F f(v)\,\partial^2_{ab}\varphi_{a,b}(u,v)\,d(u,v)\,da\,db.$$
This yields the following version of the last proposition.
Proposition 3.2. Let $\varphi_{a,b}$ be the characteristic functions of $(X_a,X_b)$ for $0\le a,b\le T$ with $\varphi_{a,a}(u,v)=\varphi_a(u+v)$ for $u,v\in\mathbb R^d$.

(i) Assume that $b\mapsto\varphi_{a,b}(u,v)$ is differentiable for $0<a<b<T$, $u,v\in\mathbb R^d$, with locally bounded derivatives $\partial_b\varphi_{a,b}$. Then there exists a constant $C$ such that for all $f\in\mathcal S(\mathbb R^d)$
$$\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|^2_{L^2(\mathbb P)}\le C\Delta_n\int|\mathcal F f(u)||\mathcal F f(v)|\Big(\sum_{k=1}^{n}\int_{t_{k-1}}^{t_k}g_{t_{k-1},r}(u,v)\,dr+\sum_{k-1>j\ge2}\int_{t_{k-1}}^{t_k}\int_{t_{j-1}}^{t_j}\big(|\partial_r\varphi_{h,r}(u,v)|+|\partial_r\varphi_{t_{j-1},r}(u,v)|\big)\,dh\,dr\Big)\,d(u,v),$$
with $g_{t_{k-1},r}(u,v)=|\varphi_{r,r}(u,v)|+|\varphi_{t_{k-1},r}(u,v)|+|\varphi_{t_{k-1},t_{k-1}}(u,v)|$.
(ii) In addition, assume that $a\mapsto\partial_b\varphi_{a,b}(u,v)$ is differentiable for all $u,v\in\mathbb R^d$ and $0<a<b<T$, with locally bounded derivatives $\partial^2_{ab}\varphi_{a,b}$. Then we also have
$$\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|^2_{L^2(\mathbb P)}\le C\Delta_n^2\int|\mathcal F f(u)||\mathcal F f(v)|\Big(\Delta_n^{-1}\sum_{k=1}^{n}\int_{t_{k-1}}^{t_k}\int_{t_{k-1}}^{r}\big(|\partial_r\varphi_{h,r}(u,v)|+|\partial_r\varphi_{t_{k-1},r}(u,v)|\big)\,dh\,dr+\sum_{k-1>j\ge2}\int_{t_{k-1}}^{t_k}\int_{t_{j-1}}^{t_j}\big|\partial^2_{hr}\varphi_{h,r}(u,v)\big|\,dh\,dr\Big)\,d(u,v).$$
The second proposition is useful if the characteristic functions ϕa,b are explicitly
known, while the densities pa,b are not. This is true for many Lévy or affine processes.
Moreover, it can be easier to find upper bounds on characteristic functions than for
the respective densities. Note that the second proposition does not require the joint
densities pa,b to exist. This is relevant, for instance, when studying jump processes
without marginal densities (cf. Example 3.12). In some cases both propositions apply
and the results can differ as we will see in the next section.
We will now study several concrete examples of processes X and function spaces
for f and derive explicit upper bounds.
3.1 Markov processes

Let $X$ be a continuous-time Markov process on $\mathbb R^d$ with transition densities $\xi_{h,r}$, $0\le h<r\le T$, such that $\mathbb E[g(X_r)\,|\,X_h=x]=\int g(y)\xi_{h,r}(x,y)\,dy$ for $x\in\mathbb R^d$ and all continuous, bounded functions $g$. Denote by $\mathbb P_{x_0}$ the law of $X$ conditional on $X_0=x_0$. The joint density of $(X_h,X_r)$, conditional on $X_0=x_0$, is $p_{h,r}(x,y;x_0)=\xi_{0,h}(x_0,x)\xi_{h,r}(x,y)$. The necessary differentiability conditions on $p_{h,r}$ from Proposition 3.1 translate to assumptions on $\xi_{h,r}$. The following heat kernel bounds are similar to the ones in Ganychenko (2015).
Assumption 3.3. The transition densities $\xi_{h,r}$ for $0\le h<r<T$ satisfy one of the following conditions:

(A) The function $r\mapsto\xi_{h,r}(x,y)$ is continuously differentiable for all $x,y\in\mathbb R^d$ and there exist probability densities $q_r$ on $\mathbb R^d$ satisfying
$$\sup_{x,y\in\mathbb R^d}\frac{|\xi_{h,r}(x,y)|}{q_{r-h}(y-x)}\le1,\qquad\sup_{x,y\in\mathbb R^d}\frac{|\partial_r\xi_{h,r}(x,y)|}{q_{r-h}(y-x)}\le\frac{1}{r-h}. \qquad(3.2)$$

(B-γ) Let $0<\gamma\le2$. In addition to (A), the function $h\mapsto\partial_r\xi_{h,r}(x,y)$ is continuously differentiable for all $x,y\in\mathbb R^d$ and the $q_h$ satisfy
$$\sup_{x,y\in\mathbb R^d}\frac{|\partial^2_{hr}\xi_{h,r}(x,y)|}{q_{r-h}(y-x)}\le\frac{1}{(r-h)^2}. \qquad(3.3)$$
Moreover, if $\gamma<2$, then $\sup_{x\in\mathbb R^d}\|x\|^{2s+d}q_h(x)\lesssim h^{2s/\gamma}$ for $0<s\le\gamma/2$, while for $\gamma=2$, $\int\|x\|^{2s}q_h(x)\,dx\lesssim h^{s}$ for $0<s\le\gamma/2$.
These conditions are satisfied in case of elliptic diffusions with Hölder continuous coefficients with $q_h(x)=c_1h^{-d/2}e^{-c_2\|xh^{-1/2}\|^2}$ and $\gamma=2$ for some constants $c_1,c_2$. They are also satisfied for many Lévy driven diffusions with $q_h(x)=c_1h^{-d/\gamma}(1+\|xh^{-1/\gamma}\|^{\gamma+d})^{-1}$ and $0<\gamma<2$ (Ganychenko et al. (2015)). Different upper bounds in (3.2), (3.3) are possible, yielding different results below.

Based on Proposition 3.1 we recover the main results of Ganychenko (2015) and Ganychenko et al. (2015). For $0\le s\le1$ denote by $\|f\|_{C^s}$ the Hölder seminorm $\sup_{x\ne y}\frac{\|f(x)-f(y)\|}{\|x-y\|^s}$.
Theorem 3.4. Let $n\ge2$ and $x_0\in\mathbb R^d$. Let $X$ be a Markov process with transition densities $\xi_{a,b}$.

(i) Assume (A). There exists a constant $C$ such that for every bounded $f$
$$\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|_{L^2(\mathbb P_{x_0})}\le C\|f\|_\infty T^{1/2}\Delta_n^{1/2}(\log n)^{1/2}.$$

(ii) Assume (B-γ) for $0<\gamma\le2$. There exists a constant $C$ such that for $f\in C^s(\mathbb R^d)$ with $0\le s\le\gamma/2$
$$\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|_{L^2(\mathbb P_{x_0})}\le C\|f\|_{C^s}T^{1/2}\begin{cases}\Delta_n^{\frac{1+2s/\gamma}{2}},&2s/\gamma<1,\\[1mm]\Delta_n(\log n)^{1/2},&2s/\gamma=1.\end{cases}$$
Up to log factors the rate of convergence (for fixed $T$) is $\Delta_n^{(1+2s/\gamma)/2}$ for $f\in C^s(\mathbb R^d)$, interpolating between the worst-case rate $\Delta_n^{1/2}$ and the "best" rate $\Delta_n$. Interestingly, smaller $\gamma$ means faster convergence for the same smoothness $s$.
Remark 3.5. The $T^{1/2}$-term in the upper bound is optimal and appears in almost every other example below (observe, however, Theorem 3.13). If $X$ is ergodic with invariant measure $\mu$, then this can be used to estimate functionals $\int f\,d\mu$ with respect to $\mu$ by the estimator $T^{-1}\hat\Gamma_{n,T}(f)$ with optimal rate $T^{-1/2}$, independent of any condition on the discretization order $\Delta_n$, i.e. there is essentially no difference between the high and the low frequency setting. This generalizes Theorem 2.4 of Altmeyer and Chorowski (2016) considerably, since stationarity is not required.
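To make the ergodic-average use of the estimator concrete, the following minimal simulation sketch (an illustration only, not part of the formal results; the Ornstein-Uhlenbeck model, the Euler scheme used purely to generate synthetic observations, and all tuning constants are choices made here) compares $T^{-1}\hat\Gamma_{n,T}(f)$ with the invariant-measure integral $\int f\,d\mu$.

import numpy as np

rng = np.random.default_rng(1)
T, n = 500.0, 5000                 # long horizon, deliberately coarse grid Delta_n = 0.1
dn = T / n
X = np.zeros(n + 1)
for k in range(n):                 # Euler scheme for dX_t = -X_t dt + dW_t (invariant law N(0, 1/2))
    X[k + 1] = X[k] - X[k] * dn + np.sqrt(dn) * rng.standard_normal()

f = lambda x: x**2
ergodic_average = dn * np.sum(f(X[:-1])) / T    # T^{-1} * hat Gamma_{n,T}(f)
print(ergodic_average, 0.5)                     # target: int f dmu = 1/2

Despite the coarse observation grid, the ergodic average fluctuates around the target at the $T^{-1/2}$ scale, which is the point of the remark.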
Theorem 3.4 yields for $f=1_{[a,b]}$, $a<b$, only the rate $\Delta_n^{1/2}(\log n)^{1/2}$. This cannot explain the $\Delta_n^{3/4}$-rate obtained for Brownian motion in Ngo and Ogawa (2011). In order to find a unifying view consider now $f\in H^s(\mathbb R^d)$, $0\le s\le1$.
Theorem 3.6. Let $X$ be a Markov process with transition densities $\xi_{a,b}$ and bounded initial density $\mu$.

(i) Assume (A). There exists a constant $C$ such that for $f\in L^2(\mathbb R^d)$
$$\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|_{L^2(\mathbb P)}\le C\|\mu\|_\infty^{1/2}\|f\|_{L^2}T^{1/2}\Delta_n^{1/2}(\log n)^{1/2}.$$

(ii) Assume (B-γ) for $0<\gamma\le2$. There exists a constant $C$ such that for $f\in H^s(\mathbb R^d)$ with $0\le s\le\gamma/2$
$$\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|_{L^2(\mathbb P)}\le C\|\mu\|_\infty^{1/2}\|f\|_{H^s}T^{1/2}\begin{cases}\Delta_n^{\frac{1+2s/\gamma}{2}},&\gamma<2,\ 2s/\gamma<1,\\[1mm]\Delta_n^{\frac{1+s}{2}}(\log n)^{1/2},&\gamma=2,\\[1mm]\Delta_n(\log n)^{1/2},&2s/\gamma=1.\end{cases}$$
While the regularity of $f$ is now measured in the $L^2$-Sobolev sense, we still obtain the interpolating rate $\Delta_n^{(1+2s/\gamma)/2}$ up to log factors. Since $C^s(K)\subset H^{s-}(\mathbb R^d)$ for compacts $K\subset\mathbb R^d$ and because $f=1_{[a,b]}\in H^{1/2-}(\mathbb R)$, this theorem also yields the rates $\Delta_n^{(1+2s/\gamma)/2-}$ for $s$-Hölder functions on compacts and $\Delta_n^{3/4-}$ (up to log factors) for indicators. By an explicit interpolation as in Theorems 3.5 and 3.6 of Altmeyer and Chorowski (2016) this can be improved to $\Delta_n^{(1+2s/\gamma)/2}$ and $\Delta_n^{3/4}$, respectively. By considering $L^2$-Sobolev spaces we therefore unify the different rates obtained for Markov processes. The log factors in Theorem 3.6 can be removed in many cases (cf. Section 3.2).
Remark 3.7. (i) The role of $\mu$ in the proof of Theorem 3.6 is to ensure that the marginals have uniformly bounded densities $p_h$, i.e. $\sup_{0\le h\le T}\|p_h\|_\infty\le\|\mu\|_\infty$. This is necessary, because the bounds in Assumption 3.3 degenerate at 0. Otherwise it is not even clear that $\|\Gamma_T(f)\|_{L^2(\mathbb P)}<\infty$ for $f\in L^2(\mathbb R^d)$. If $\sup_{x\in\mathbb R^d}\int_0^T\xi_{0,r}(x)\,dr<\infty$, then the initial distribution can be arbitrary. This holds, for instance, when $d=1$ and $q_h(x)=c_1h^{-1/2}e^{-c_2\|xh^{-1/2}\|^2}$.

(ii) A different possibility for removing the initial condition is to wait until $T_0>0$ such that $X_{T_0}$ has dispersed enough to have a bounded Lebesgue density. The proof of Theorem 3.6 can then be applied to $\|\int_{T_0}^Tf(X_r)\,dr-\hat\Gamma_{n,T_0,T}(f)\|_{L^2}$, where $\hat\Gamma_{n,T_0,T}(f)$ is a Riemann-sum estimator taking only observations in $[T_0,T]$ into account.

(iii) A similar argument as in the proof of Corollary 2.8 shows $\Gamma_T(f)-\hat\Gamma_{n,T}(f)=O_{\mathbb P_{x_0}}(a_n)$ for almost all initial conditions $X_0=x_0\in\mathbb R^d$, where $a_n$ corresponds to the rates in Theorem 3.6 up to an additional log factor.
3.2 Additive processes

Let $Y=(Y_t)_{0\le t\le T}$ be an additive process on $\mathbb R^d$ with $Y_0=0$ and local characteristics $(\sigma_t^2,F_t,b_t)$, where $\sigma^2=(\sigma_t^2)_{0\le t\le T}$ is a continuous $\mathbb R^{d\times d}$-valued function such that $\sigma_t^2$ is symmetric for all $t$, $b=(b_t)_{0\le t\le T}$ is a locally integrable $\mathbb R^d$-valued function and $(F_t)_{0\le t\le T}$ is a family of positive measures on $\mathbb R^d$ with $F_t(\{0\})=0$ and $\sup_{0\le t\le T}\{\int(\|x\|^2\wedge1)\,dF_t(x)\}<\infty$ (cf. Tankov (2003, Section 14.1)). The increments $Y_r-Y_h$, $0\le h<r\le T$, are independent and have infinitely divisible distributions. In particular, the corresponding characteristic functions are $e^{\psi_{h,r}(u)}$, $u\in\mathbb R^d$, by the Lévy-Khintchine representation (Tankov (2003, Theorem 14.1)), where the characteristic exponents $\psi_{h,r}(u)$ are given by
$$\psi_{h,r}(u)=i\int_h^r\langle u,b_t\rangle\,dt-\frac12\int_h^r\|\sigma_t^\top u\|^2\,dt+\int_h^r\int\big(e^{i\langle u,x\rangle}-1-i\langle u,x\rangle 1_{\{\|x\|\le1\}}\big)\,dF_t(x)\,dt.$$
Applying Proposition 3.2 yields the following result. The independence in (X0) is always satisfied, because $Y$ has independent increments.
always satisfied, because Y has independent increments.
Theorem 3.8. Let T ≥ 1. Consider the process Xt = X0 + Yt , where Y = (Yt )0≤t≤T
is an additive process with local characteristics (σt2 , Ft , bt ) as above and such that X0
satisfies (X0).
13
(i) Let 0 < γ ≤ 2 and assume that |∂r ψh,r (v)| ≤ c(1 + kvk)γ+βr and |eψh,r (v) | ≤
γ
ce−ckvk (r−h) for a constant c and all 0 ≤ h < r ≤ T , v ∈ Rd and some
0 ≤ βr ≤ β ∗ ≤ γ/2 with 0 < γ + βr ≤ 2. Then there exists a constant Cµ such
that for f ∈ H s (Rd ) with β ∗ /2 ≤ s ≤ γ/2 + β ∗
1
bn,T (f )kL2 (P) ≤ Cµ kf kH s T ∆n2 +
kΓT (f ) − Γ
s−β ∗ /2
γ−β ∗
1/2
.
1/2
If F µ ∈ L1 (Rd ), then Cµ = CkF µkL1 and otherwise Cµ = Ckµk∞ . If even
|∂r ψh,r (v)| ≤ ckvkγ+βr , then the same upper bound holds with T 1/2 instead of
T.
(ii) If |∂r ψh,r (v)| ≤ c, then we have for f ∈ L2 (Rd )
b n,T (f )kL2 (P) ≤ Cµ kf kL2 T ∆n .
kΓT (f ) − Γ
The same upper bound holds with T 1/2 instead of T , if c1 ≤ ρ(v) ≤ ∂r ψh,r (v)≤
c2 ρ(v) ≤ 0 for a bounded function v 7→ ρ(v) and constants c1 ≤ c2 .
By the comments before Remark 3.7 we can obtain from this upper bounds for Hölder and indicator functions. The condition $|\partial_r\psi_{h,r}(v)|\le c(1+\|v\|)^{\gamma+\beta_r}$ gives an additional degree of freedom in order to account for time-inhomogeneity (cf. Example 3.11). Note that there are no log terms as compared to Theorem 3.6. The smaller $\gamma/2+\beta^*$, the less smoothness is necessary for $f$ to achieve the rate $\Delta_n$.
Remark 3.9. In some situations it is sufficient to consider directly $X_t=Y_t$. This is true, for instance, if $d=1$ and $\gamma>1$ (cf. Remark B.3). For different $d$ or $\gamma$ it follows in (i) that $Y_{T_0}$ for any $T_0>0$ has a density $p_{T_0}$ with $\mathcal F p_{T_0}\in L^1(\mathbb R^d)$. Similarly to Remark 3.7(ii) the proof of Theorem 3.8 can then be applied to $\|\int_{T_0}^Tf(X_r)\,dr-\hat\Gamma_{n,T_0,T}(f)\|_{L^2}$. For $O_{\mathbb P}$ bounds and almost all initial values $X_0=x_0\in\mathbb R^d$ refer to Remark 3.7(iii).
We study now a few examples.
Example 3.10 (Non-vanishing Gaussian part). Assume that $Y$ has local characteristics $(\sigma_t^2,F_t,0)$ with $\sup_{0\le r\le T}\|(\sigma_r\sigma_r^\top)^{-1}\|<\infty$. Then $\gamma=2$, $\beta^*=0$ and $|\partial_r\psi_{h,r}(v)|\lesssim\|v\|^2$ (cf. Sato (1999, Equation (8.9))). Part (i) of Theorem 3.8 therefore yields up to a constant the upper bound $\|f\|_{H^s}T^{1/2}\Delta_n^{(1+s)/2}$ for $f\in H^s(\mathbb R^d)$ with $0\le s\le1$, thus improving on Theorem 3.6.
Example 3.11 (γ-stable processes). Let $\psi_{h,r}(v)=-c\int_h^r\|v\|^{\gamma+\beta_t}\,dt$ with $c,\gamma,\beta_r$ as in Theorem 3.8. A process with these characteristic exponents exists if $\beta$ is continuous. $X$ is a generalized symmetric γ-stable process with stability index $\gamma+\beta_r$ changing in time. For $d=1$ it is a multistable Lévy motion (cf. Example 4.1 in Falconer and Liu (2012)). If $\beta^*=0$, then $X$ is just a symmetric γ-stable process and Theorem 3.8 yields the upper bound $\|f\|_{H^s}T^{1/2}\Delta_n^{1/2+s/\gamma}$ for $f\in H^s(\mathbb R^d)$ and $0\le s\le\gamma/2$. Again, this improves on Theorem 3.6.
Example 3.12 (Compound Poisson process). Let $X$ be a compound Poisson process. Then $\psi_{h,r}(v)=(r-h)\int(e^{i\langle v,x\rangle}-1)\,dF(x)$ for all $0\le h<r\le T$ and a finite measure $F$. Observe that the marginals $X_r$ do not have Lebesgue densities unless $X_0$ does. Since $\rho(v):=\partial_r\psi_{h,r}(v)=\int(e^{i\langle v,x\rangle}-1)\,dF(x)$ is bounded in $v$, part (ii) of the theorem yields the upper bound $\|f\|_{L^2}T\Delta_n$ for all $f\in L^2(\mathbb R^d)$. The improved bound applies if $F$ is symmetric (cf. Section 3.1 of Altmeyer and Chorowski (2016)).
3.3 Fractional Brownian motion

Let $B=(B_t)_{0\le t\le T}$ be a fractional Brownian motion in $\mathbb R^d$ with Hurst index $0<H<1$. This means that the $d$ component processes $(B_t^{(m)})_{0\le t\le T}$ for $m=1,\dots,d$ are independent and centered Gaussian processes with covariance function $c(h,r):=\mathbb E[B_h^{(m)}B_r^{(m)}]=\frac12(r^{2H}+h^{2H}-(r-h)^{2H})$, $0\le h\le r\le T$. If $H=1/2$, then $B$ is just a Brownian motion. For $H\ne1/2$ it is an important example of a non-Markovian process which is also not a semimartingale.
Theorem 3.13. Let $T\ge1$, $n\ge2$. Consider the process $X_t=X_0+B_t$, where $(B_t)_{0\le t\le T}$ is a fractional Brownian motion with Hurst index $0<H<1$ and where $X_0$ satisfies (X0). Then there exists a constant $C_\mu$ as in Theorem 3.8 such that for any $f\in H^s(\mathbb R^d)$ and $0\le s\le1$
$$\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|_{L^2(\mathbb P)}\le C_\mu\|f\|_{H^s}\begin{cases}T^H\Delta_n^{\frac{1+s}{2}},&H\ge1/2,\\[1mm]T^{1/2}\Delta_n^{\frac{1+2sH}{2}},&H<1/2.\end{cases}$$
Again, from this we can obtain upper bounds for Hölder and indicator functions (cf. comments before Remark 3.7). It is interesting that the rate remains unchanged but the dependency on $T$ differs for $H>1/2$, while this effect is reversed for $H<1/2$. The dependency on $H$ is optimal. Indeed, if $f$ is the identity, then for some constant $C$
$$\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|_{L^2(\mathbb P)}\ge C\begin{cases}T^H\Delta_n,&H>1/2,\\[1mm]T^{1/2}\Delta_n^{\frac{1+2H}{2}},&H<1/2.\end{cases} \qquad(3.4)$$
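For orientation, the boundary case $H=1/2$ of this computation (standard, recorded here only to illustrate where such bounds come from) reads as follows: for $f(x)=x$ and $X=W$ a Brownian motion,
$$\Gamma_T(f)-\hat\Gamma_{n,T}(f)=\sum_{k=1}^{n}\int_{t_{k-1}}^{t_k}(W_r-W_{t_{k-1}})\,dr$$
is a sum of independent centered Gaussian random variables, each with variance $\int_{t_{k-1}}^{t_k}\int_{t_{k-1}}^{t_k}(\min(r,h)-t_{k-1})\,dr\,dh=\Delta_n^3/3$, so that $\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|_{L^2(\mathbb P)}=(T/3)^{1/2}\Delta_n$, matching the order $T^{1/2}\Delta_n$ at $H=1/2$.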
Remark 3.9 applies here as well in order to relax the assumption on $X_0$. In particular, we can directly consider $X_t=B_t$ if $d=1$. Comparing the theorem (at least for $H<1/2$) to Example 3.11 suggests that there is a more general result for self-similar processes with self-similarity index $\alpha$ and upper bound $\|f\|_{H^s}T^{1/2}\Delta_n^{1/2+\alpha s}$. The key idea in the proof is that fractional Brownian motion is locally nondeterministic. There are many more processes (and random fields) with this property. In principle, the proof of the theorem will apply in these cases as well, as long as the time derivatives of $\Phi_{h,r}(u,v)$ can be controlled. This holds, for instance, for multifractional Brownian motion with time varying Hurst index $H(t)$ (cf. Boufoussi et al. (2007)) and for stochastic differential equations driven by fractional Brownian motion (cf. Lou and Ouyang (2017)).
We will now apply Theorem 3.13 to approximate local times from discrete data. Let $d=1$ and let $(L^a_T)_{a\in\mathbb R}$ be the family of local times of $B$ until $T$, which satisfies the occupation time formula $\int_0^Tg(B_r)\,dr=\int_{\mathbb R}g(x)L^x_T\,dx$ for every continuous and bounded function $g$ (cf. Nualart (1995, Chapter 5)). We can write $L^a_T=\delta_a(L_T)$ for $a\in\mathbb R$, where $\delta_a$ is the Dirac delta function. Note that $\delta_a\in H^{-1/2-}(\mathbb R)$ has negative regularity. Theorem 3.13 therefore suggests the rate $T^{1/2}\Delta_n^{1/4}$ (for $H=1/2$). This turns out to be almost correct.
Theorem 3.14. Let $T\ge1$, $n\ge2$, $d=1$. Let $X_t=B_t$, where $(B_t)_{0\le t\le T}$ is a fractional Brownian motion with Hurst index $0<H<1$. Consider $f_{a,\varepsilon}(x)=(2\varepsilon)^{-1}1_{(a-\varepsilon,a+\varepsilon)}(x)$ for $x,a\in\mathbb R$ and $\varepsilon=\Delta_n^{\alpha_H}$ with $\alpha_H=\frac32\cdot\frac{H}{1+H}-\rho$ when $H\ge1/2$ and $\alpha_H=H-\rho$ when $H<1/2$, for any small $\rho>0$. Then we have for some constant $C$, independent of $a$, that
$$\|L^a_T-\hat\Gamma_{n,T}(f_{a,\varepsilon})\|_{L^2(\mathbb P)}\le C\begin{cases}T^H\Delta_n^{\frac34\cdot\frac{1-H}{1+H}-\rho},&H\ge1/2,\\[1mm]T^{1/2}\Delta_n^{\frac{1-H}{2}-\rho},&H<1/2.\end{cases}$$
For Brownian motion the rate $\Delta_n^{1/4}$ is already contained in Jacod (1998) and the corresponding $L^2(\mathbb P)$-bound in Kohatsu-Higa et al. (2014, Theorem 2.6). For $H$ close to 1 the rate of convergence becomes arbitrarily slow, because the paths of $B$ are almost differentiable and the occupation measure becomes more and more singular with respect to the Lebesgue measure.
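As an illustration of how Theorem 3.14 can be used in practice, the following minimal simulation sketch (an illustration only; the "reference" value is itself just a fine-grid approximation of $L^a_T$) estimates the local time of a Brownian motion at level $a$ from coarse observations via $\hat\Gamma_{n,T}(f_{a,\varepsilon})$, with the bandwidth $\varepsilon=\Delta_n^{\alpha_H}$ and $\alpha_H=\frac32\cdot\frac{H}{1+H}=\frac12$ for $H=1/2$ (taking $\rho\approx0$).

import numpy as np

rng = np.random.default_rng(2)
T, n_fine, n, a = 1.0, 2**18, 2**10, 0.0
dt_fine = T / n_fine
B = np.concatenate(([0.0], np.cumsum(np.sqrt(dt_fine) * rng.standard_normal(n_fine))))

def occupation_estimate(path, dt, eps):
    # Riemann-sum estimator of int_0^T (2 eps)^{-1} 1_{(a-eps, a+eps)}(B_r) dr
    return dt * np.sum(np.abs(path[:-1] - a) < eps) / (2.0 * eps)

dn = T / n
coarse = occupation_estimate(B[::n_fine // n], dn, dn**0.5)       # eps = Delta_n^{1/2}
reference = occupation_estimate(B, dt_fine, dt_fine**0.5)         # fine-grid proxy for L_T^a
print(coarse, reference)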
4 Lower bounds
We will now address the important question whether the upper bounds for $\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|_{L^2(\mathbb P)}$ derived in the last two sections are optimal. Optimality here means that the upper bounds cannot be improved uniformly for all $f$ belonging to a given class of functions. For this it is sufficient to find a candidate $f$ where the error $\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|_{L^2(\mathbb P)}$ matches the upper bound up to an absolute constant. The only explicit lower bound in the literature has been established by Ngo and Ogawa (2011) for Brownian motion in $d=1$ and indicator functions $f=1_{[a,b]}$, matching the upper bound $\Delta_n^{3/4}$.
Apart from optimality with respect to the Riemann-sum estimator, it is interesting from a statistical point of view to ask for optimality across all possible estimators. Note that $\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|_{L^2(\mathbb P)}$ is bounded from below by
$$\inf_g\|\Gamma_T(f)-g\|_{L^2(\mathbb P)}=\|\Gamma_T(f)-\mathbb E[\Gamma_T(f)\,|\,\mathcal G_n]\|_{L^2(\mathbb P)}, \qquad(4.1)$$
where $\mathcal G_n=\sigma(X_{t_k}:0\le k\le n)$ and where the infimum is taken over all $\mathcal G_n$-measurable random variables. If $f$ is the identity, then it is well known that $\mathbb E[\Gamma_T(f)\,|\,\mathcal G_n]=\hat\Theta_{n,T}(f)$, where $\hat\Theta_{n,T}(f)$ is the trapezoid rule estimator from Section 2.1 (see e.g. Diaconis (1988)). If $f\in H^1(\mathbb R^d)$, then this still holds approximately. The methods from Section 2 allow for identifying the limit of the right-hand side in (4.1) as $n\to\infty$, yielding an explicit lower bound valid for all $f\in H^1(\mathbb R^d)$. For the $L^2$-Sobolev spaces $H^s(\mathbb R^d)$ with $0<s<1$ such a universal result is not possible. Instead, we derive a lower bound for an explicit candidate matching the upper bound established in Example 3.10.
Theorem 4.1. Let $T\ge1$ and let $X_t=X_0+W_t$, where $(W_t)_{0\le t\le T}$ is a $d$-dimensional Brownian motion and where $X_0$ satisfies (X0).

(i) We have for any $f\in H^1(\mathbb R^d)$ the asymptotic lower bound
$$\liminf_{n\to\infty}\Delta_n^{-1}\|\Gamma_T(f)-\hat\Gamma_{n,T}(f)\|_{L^2(\mathbb P)}\ge\liminf_{n\to\infty}\Delta_n^{-1}\inf_g\|\Gamma_T(f)-g\|_{L^2(\mathbb P)}=\Big(\frac{1}{12}\,\mathbb E\Big[\int_0^T\|\nabla f(X_r)\|^2\,dr\Big]\Big)^{1/2},$$
where the infimum is taken over all $\mathcal G_n$-measurable random variables.
(ii) Let $f_\alpha\in L^2(\mathbb R^d)$, $0<\alpha<1$, be the $L^2(\mathbb R^d)$ function with Fourier transform $\mathcal F f_\alpha(u)=(1+\|u\|)^{-\alpha-d/2}$, $u\in\mathbb R^d$. Then $f_\alpha\in H^s(\mathbb R^d)$ for all $0\le s<\alpha$, but $f_\alpha\notin H^\alpha(\mathbb R^d)$. Moreover, $f_\alpha$ satisfies for all $0\le s<\alpha$ the asymptotic lower bound
$$\liminf_{n\to\infty}\Delta_n^{-\frac{1+s}{2}}\|\Gamma_T(f_\alpha)-\hat\Gamma_{n,T}(f_\alpha)\|_{L^2(\mathbb P)}\ge\liminf_{n\to\infty}\Delta_n^{-\frac{1+s}{2}}\inf_g\|\Gamma_T(f_\alpha)-g\|_{L^2(\mathbb P)}>0.$$
For $d=1$ the lower bounds also hold for $X_t=W_t$ (cf. Remark 3.9). Interestingly, the asymptotic lower bound in (i) corresponds exactly to the asymptotic variance obtained for the CLTs in Section 2. This proves the asymptotic efficiency of $\hat\Gamma_{n,T}(f)$ and $\hat\Theta_{n,T}(f)$ for $f\in H^1(\mathbb R^d)$. Note that Brownian motion is a major example for the upper bounds derived in the last section.

The key step in the proof is to calculate the conditional expectation $\mathbb E[\Gamma_T(f)\,|\,\mathcal G_n]$, which reduces to Brownian bridges interpolating between the observations. The same calculations hold when $X$ is a Lévy process with finite first moments (cf. Jacod and Protter (1988, Theorem 2.6)) and similarly when $X$ belongs to a certain class of Markov processes (cf. Chaumont and Uribe Bravo (2011)).
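To illustrate the Brownian-bridge reduction in the simplest case (a standard computation, not specific to this paper): for $f(x)=x$ and $X=W$, conditionally on $\mathcal G_n$ the path on $[t_{k-1},t_k]$ is a Brownian bridge whose mean is the linear interpolation of $W_{t_{k-1}}$ and $W_{t_k}$, so
$$\mathbb E\Big[\int_{t_{k-1}}^{t_k}W_r\,dr\,\Big|\,\mathcal G_n\Big]=\int_{t_{k-1}}^{t_k}\Big(W_{t_{k-1}}+\tfrac{r-t_{k-1}}{\Delta_n}\big(W_{t_k}-W_{t_{k-1}}\big)\Big)\,dr=\Delta_n\,\frac{W_{t_{k-1}}+W_{t_k}}{2},$$
and summing over $k$ recovers $\mathbb E[\Gamma_T(f)\,|\,\mathcal G_n]=\hat\Theta_{n,T}(f)$, the identity quoted after (4.1).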
Appendix A: Proofs of Section 2
In the following, T is fixed and ∆n → 0 as n → ∞. Consider first the following
preliminary observations.
A.1 Localization
By a well-known localization procedure (cf. Jacod and Protter (2011, Lemma 4.4.9)) and Assumption (SM-α-β) it is sufficient to prove the central limit theorems under the following stronger assumption.

Assumption (H-α-β). Let $0\le\alpha,\beta\le1$. There exists a constant $K$ such that almost surely
$$\sup_{0\le t\le T}\big(\|X_t\|+\|b_t\|+\|\sigma_t\|+\|(\sigma_t\sigma_t^\top)^{-1}\|\big)\le K$$
and such that for all $0\le s,t\le T$ with $s+t\le T$
$$\mathbb E\Big[\sup_{0\le r\le t}\|\sigma_{s+r}-\sigma_s\|^2\Big]\le Ct^{2\alpha},\qquad\mathbb E\Big[\sup_{0\le r\le t}\|b_{s+r}-b_s\|^2\Big]\le Ct^{2\beta}.$$

In this case we only have to consider $f$ with compact support. Indeed, if $f\in\mathcal{FL}^s_{loc}(\mathbb R^d)$ (respectively $f\in H^s_{loc}(\mathbb R^d)$) is replaced by $\tilde f=f\varphi$ for a smooth cutoff function $\varphi$ with compact support in a ball $B_{K+\varepsilon}=\{x\in\mathbb R^d:\|x\|\le K+\varepsilon\}$ of radius $K+\varepsilon$, $\varepsilon>0$, and $\varphi=1$ on $B_K$, then $\tilde f=f$ on $B_K$ and $\tilde f\in\mathcal{FL}^s(\mathbb R^d)$ (respectively $\tilde f\in H^s(\mathbb R^d)$).
A.2 Preliminary estimates
We will use different approximations for $X$. For $\varepsilon>0$ and $t\ge0$ let $t_\varepsilon=\max(\lfloor t/\varepsilon\rfloor\varepsilon-2\varepsilon,0)$ and define the processes
$$X_t(\varepsilon)=\int_0^t b_{\lfloor r/\varepsilon\rfloor\varepsilon}\,dr+\int_0^t\sigma_{\lfloor r/\varepsilon\rfloor\varepsilon}\,dW_r,\qquad\tilde X_t(\varepsilon)=X_{t_\varepsilon}+b_{t_\varepsilon}(t-t_\varepsilon)+\sigma_{t_\varepsilon}(W_t-W_{t_\varepsilon}).$$
Then the following estimates hold by the Burkholder-Davis-Gundy inequality. The reason for introducing $\tilde X(\varepsilon)$ instead of $X(\varepsilon)$ is the first inequality in (iii), which improves on the second.

Proposition A.1. Let $p\ge1$. Assume (H-α-β) for $0\le\alpha,\beta\le1$. Then the following holds for some absolute constant $C_p$ and all $0\le s,t\le T$ with $s+t\le T$:

(i) $\mathbb E[\|Z_{s+t}-Z_s\|^p]\le C_pt^{p/2}$ for $Z=X,X(\varepsilon),\tilde X(\varepsilon)$,

(ii) $\mathbb E[\|X_{s+t}-X_s-(X_{s+t}(\varepsilon)-X_s(\varepsilon))\|^p]\le C_pt^{p/2}\varepsilon^{\alpha p}$,

(iii) $\mathbb E[\|X_t-\tilde X_t(\varepsilon)\|^p]\le C_p(\varepsilon^{(\beta+1)p}+\varepsilon^{(\alpha+1/2)p})$, $\quad\mathbb E[\|X_t-X_t(\varepsilon)\|^p]\le C_p(\varepsilon^{\beta p}+\varepsilon^{\alpha p})$,

(iv) $\mathbb E[\|X_{s+t}-X_s-(\tilde X_{s+t}(\varepsilon)-\tilde X_s(\varepsilon))\|^p]\le C_pt^{p/2}(\varepsilon^{(\beta+1/2)p}+\varepsilon^{\alpha p})$.
The main estimates for the proofs of Theorems 2.5 and 2.7 are collected in the following lemma.

Lemma A.2. Assume (H-α-β) for $0\le\alpha,\beta\le1$ and let either $f\in C^1(\mathbb R^d)$ have compact support, or assume (X0) in addition and let $f\in H^1(\mathbb R^d)$ have compact support. Then it follows with $\kappa_f=\|\nabla f\|_\infty$ or $\kappa_f=\|f\|_{H^1}$, respectively, for $k=1,\dots,n$ and $t_{k-1}\le r\le t_k$, uniformly in $r$ and $k$:

(i) $\mathbb E[\|\nabla f(X_r)\|^2]=O(\kappa_f^2)$,

(ii) $\mathbb E[\langle\nabla f(X_{t_{k-1}}),X_r-X_r(\Delta_n)\rangle^2]=o(\Delta_n\kappa_f^2)$,

(iii) $\mathbb E[(f(X_r)-f(X_{t_{k-1}})-\langle\nabla f(X_{t_{k-1}}),X_r-X_{t_{k-1}}\rangle)^2]=o(\Delta_n\kappa_f^2)$,

(iv) $\mathbb E[\|\nabla f(X_r)-\nabla f(X_{t_{k-1}})\|^2]=o(\kappa_f^2)$,

(v) $\mathbb E\big[\sup_t\big|\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\int_{t_{k-1}}^{t_k}(t_k-r-\Delta_n/2)\,\mathbb E[\langle\nabla f(X_r),b_r\rangle\,|\,\mathcal F_{t_{k-1}}]\,dr\big|\big]=o(\Delta_n\kappa_f^2)$.
Proof. For $f\in C^1(\mathbb R^d)$ we only prove (v). The other statements follow from the boundedness of $\nabla f$ and Proposition A.1. (v) follows immediately for bounded and continuous $b$, because $\langle\nabla f(X_r),b_r\rangle$ can be approximated uniformly at the left endpoint by $\langle\nabla f(X_{t_{k-1}}),b_{t_{k-1}}\rangle$. For bounded $b$ let $g_\varepsilon$ be continuous and adapted processes such that $\sup_{0\le t\le T}\|g_{\varepsilon,t}\|<\infty$ uniformly in $\varepsilon$ and $\mathbb E[\int_0^T\|b_h-g_{\varepsilon,h}\|\,dh]\to0$ as $\varepsilon\to0$. Then (v) holds for $g_\varepsilon$ and by approximation also for $b$.

For $f\in H^1(\mathbb R^d)$ we argue differently. Under (X0) the marginals $X_r$ have uniformly bounded Lebesgue densities $p_r$. Hence (i) follows from
$$\mathbb E\big[\|\nabla f(X_r)\|^2\big]=\sum_{m=1}^{d}\int(\partial_mf(x))^2p_r(x)\,dx\lesssim\|f\|_{H^1}^2. \qquad(\mathrm{A.1})$$
With respect to (ii) consider first $f\in\mathcal S(\mathbb R^d)$. By inverse Fourier transform and $\mathcal F(\nabla f)(u)=iu\,\mathcal F f(u)$, $u\in\mathbb R^d$, it follows that $\langle\nabla f(X_{t_{k-1}}),X_r-X_r(\Delta_n)\rangle^2$ is equal to
$$\Big((2\pi)^{-d}\int\mathcal F f(u)\,i\langle u,X_r-X_r(\Delta_n)\rangle\,e^{-i\langle u,X_{t_{k-1}}-X_0\rangle}e^{-i\langle u,X_0\rangle}\,du\Big)^2$$
$$=-(2\pi)^{-2d}\int\mathcal F f(u)\mathcal F f(v)\,\langle u,X_r-X_r(\Delta_n)\rangle\langle v,X_r-X_r(\Delta_n)\rangle\,e^{-i\langle u+v,X_{t_{k-1}}-X_0\rangle}e^{-i\langle u+v,X_0\rangle}\,d(u,v).$$
As $X_0$ and $(X_t-X_0)_{0\le t\le T}$ are independent, $\mathbb E[\langle\nabla f(X_{t_{k-1}}),X_r-X_r(\Delta_n)\rangle^2]$ is up to a constant bounded by
$$\int|\mathcal F f(u)||\mathcal F f(v)|\,\|u\|\|v\|\,|\mathcal F\mu(u+v)|\,d(u,v)\;\mathbb E\big[\|X_r-X_r(\Delta_n)\|^2\big],$$
which is of order $o(\Delta_n\|f\|^2_{H^1})$ by Lemma A.3 (see below) and Proposition A.1. This yields (ii) for $f\in\mathcal S(\mathbb R^d)$. For $f\in H^1(\mathbb R^d)$ consider a sequence $(f_m)_{m\ge1}\subset\mathcal S(\mathbb R^d)$ converging to $f$ with respect to $\|\cdot\|_{H^1}$. Then $\|X_r-X_r(\Delta_n)\|\le\|X_r\|+\|X_r(\Delta_n)\|\lesssim1+\|W_r-W_{t_{k-1}}\|$. Independence yields
$$\big|\|\langle\nabla f(X_{t_{k-1}}),X_r-X_r(\Delta_n)\rangle\|_{L^2(\mathbb P)}-\|\langle\nabla f_m(X_{t_{k-1}}),X_r-X_r(\Delta_n)\rangle\|_{L^2(\mathbb P)}\big|\lesssim\mathbb E\big[\|\nabla(f-f_m)(X_{t_{k-1}})\|^2\big]^{1/2}\,\mathbb E\big[(1+\|W_r-W_{t_{k-1}}\|)^2\big]^{1/2}\lesssim\|f-f_m\|_{H^1}\to0,$$
as $m\to\infty$. Hence (ii) also holds for $f\in H^1(\mathbb R^d)$. With respect to (iii) consider again first $f\in\mathcal S(\mathbb R^d)$. Arguing by inverse Fourier transform, the left-hand side is, because of Taylor's theorem, bounded by
$$\int_0^1\mathbb E\Big[\big\langle\nabla f\big(X_{t_{k-1}}+t(X_r-X_{t_{k-1}})\big)-\nabla f(X_{t_{k-1}}),\,X_r-X_{t_{k-1}}\big\rangle^2\Big]\,dt\lesssim\int|\mathcal F f(u)||\mathcal F f(v)|\,\|u\|\|v\|\,\mathbb E[g_n(u)g_n(v)]\,|\mathcal F\mu(u+v)|\,d(u,v)\cdot\mathbb E\big[\|X_r-X_{t_{k-1}}\|^4\big]^{1/2},$$
where $g_n(u)=\sup_{r,h:|r-h|\le\Delta_n}\int_0^1|1-e^{-it\langle u,X_r-X_h\rangle}|^2\,dt$ and where we applied the Cauchy-Schwarz inequality twice. Lemma A.3 together with $\mathbb E[\|X_r-X_{t_{k-1}}\|^4]^{1/2}=O(\Delta_n)$ shows that the left-hand side in (iii) is for $f\in\mathcal S(\mathbb R^d)$ up to a constant bounded by
$$\Delta_n\int|\mathcal F f(u)|^2\|u\|^2\,\mathbb E\big[g_n^2(u)\big]^{1/2}\,du.$$
A similar approximation argument as for (ii) yields the same bound for $f\in H^1(\mathbb R^d)$. $g_n(u)$ is bounded in $n$ and $u$ and converges $\mathbb P$-almost surely to 0 as $n\to\infty$ for any $u\in\mathbb R^d$. By dominated convergence the last display is thus of order $o(\Delta_n)$. This yields (iii). (iv) is proved similarly. For (v) and bounded and continuous $b$ the claim follows from
$$\langle\nabla f(X_r),b_r\rangle-\langle\nabla f(X_{t_{k-1}}),b_{t_{k-1}}\rangle=\langle\nabla f(X_r),b_r-b_{t_{k-1}}\rangle+\langle\nabla f(X_r)-\nabla f(X_{t_{k-1}}),b_{t_{k-1}}\rangle,$$
part (iv) and (A.1). For bounded $b$ argue as in part (v) for $f\in C^1(\mathbb R^d)$.
Lemma A.3. Let $\xi\in L^1(\mathbb R^d)\cap L^2(\mathbb R^d)$ and let $\mu$ be a probability density on $\mathbb R^d$.

(i) If $\mathcal F\mu\in L^1(\mathbb R^d)$, then
$$\int|\xi(u)\xi(v)|\,|\mathcal F\mu(u+v)|\,d(u,v)\lesssim\|\mathcal F\mu\|_{L^1}\|\xi\|^2_{L^2}.$$

(ii) If $\mathcal F\mu$ is non-negative and $\mu$ is bounded, then the upper bound is instead $\|\mu\|_\infty\|\xi\|^2_{L^2}$.

(iii) If $\mu$ is the density of the $N(0,I_d)$-distribution, then
$$\int\prod_{j=1}^{p}|\xi(u_j)|\,\mathcal F\mu\Big(\sum_{j=1}^{p}u_j\Big)\,d(u_1,\dots,u_p)\lesssim\|\xi\|^p_{L^2}.$$
Proof. By a density argument we can assume that $\xi,\mu\in\mathcal S(\mathbb R^d)$ and that $\mathcal F\mu$ is non-negative in (ii). Let $g,h\in L^2(\mathbb R^d)$ with $\mathcal Fg(u)=|\xi(u)|$, $\mathcal Fh(u)=|\mathcal F\mu(u)|$ such that the $d(u,v)$ integral is equal to
$$\int\mathcal Fg(u)\mathcal Fg(v)\mathcal Fh(u+v)\,d(u,v)=\int\mathcal Fg(u-v)\mathcal Fg(v)\mathcal Fh(u)\,d(u,v)=\int(\mathcal Fg*\mathcal Fg)(u)\mathcal Fh(u)\,du=\int\mathcal F(g^2)(u)\mathcal Fh(u)\,du=C\int g^2(u)h(u)\,du, \qquad(\mathrm{A.2})$$
where we used the Plancherel theorem in the last equality. If $\mathcal F\mu\in L^1(\mathbb R^d)$, then the last expression is bounded by
$$C\|g\|^2_{L^2}\|h\|_\infty\lesssim\|\xi\|^2_{L^2}\sup_{u\in\mathbb R^d}\Big|\int\mathcal Fh(x)e^{i\langle u,x\rangle}\,dx\Big|\lesssim\|\mathcal F\mu\|_{L^1}\|\xi\|^2_{L^2}.$$
If, on the other hand, $\mathcal F\mu$ is non-negative, then $h(u)=\mathcal F\mathcal Fh(-u)=\mu(-u)$ and therefore (A.2) is bounded by
$$C\|g\|^2_{L^2}\|h\|_\infty\lesssim\|\mu\|_\infty\|\xi\|^2_{L^2}.$$
This shows (i) and (ii). With respect to (iii) the left-hand side of the claimed inequality can be written as $\int(\mathcal Fg*\cdots*\mathcal Fg)(u)\mathcal F\mu(u)\,du$, where $\mathcal Fg*\cdots*\mathcal Fg$ is the $p$-fold convolution product. Since $\mathcal Fg*\cdots*\mathcal Fg=\mathcal F(g^p)$, this is also equal to
$$\int\mathcal F(g^p)(u)\,\mathcal F\mu(u)\,du=C\int g^p(u)\,\mu(u)\,du=C'\int\Big(\mathcal F\big(\mathcal Fg*\mathcal F\mu^{1/p}\big)(u)\Big)^p\,du\lesssim\Big(\int\big|\big(\mathcal Fg*\mathcal F\mu^{1/p}\big)(u)\big|^{p/(p-1)}\,du\Big)^{p-1}\lesssim\|\mathcal Fg\|^p_{L^2}=\|\xi\|^p_{L^2},$$
where we applied in the first equality the Plancherel theorem and for the last two inequalities the Hausdorff-Young and Young inequalities, because $\mathcal Fg\in L^2(\mathbb R^d)$ and $\mathcal F\mu^{1/p}\in L^q(\mathbb R^d)$ for any $q>0$.
A.3 Proof of Theorem 2.5

It is enough to show the CLT in (2.2) for $f\in\mathcal{FL}^s_{loc}(\mathbb R^d)$, which immediately yields the claim in terms of $\Gamma_t(f)-\hat\Theta_{n,t}(f)$. Recall the decomposition $\Gamma_t(f)-\hat\Gamma_{n,t}(f)=M_{n,t}(f)+D_{n,t}(f)$ with $M_{n,t}(f)$ and $D_{n,t}(f)$ as in (2.3) and (2.4). By the localization argument in A.1 and because $\mathcal{FL}^s(\mathbb R^d)\subset C^1(\mathbb R^d)$ for $s\ge1$ the proof follows from the following two propositions.
Proposition A.4. Assume (H-α-β) for $0\le\alpha,\beta\le1$. Let $f\in C^1(\mathbb R^d)$ have compact support. Then we have the stable convergence
$$\Delta_n^{-1}M_{n,t}(f)\xrightarrow{st}\frac12\int_0^t\langle\nabla f(X_r),\sigma_r\,dW_r\rangle+\frac{1}{\sqrt{12}}\int_0^t\langle\nabla f(X_r),\sigma_r\,d\widetilde W_r\rangle$$
as processes on $D([0,T],\mathbb R^d)$, where $\widetilde W$ is a $d$-dimensional Brownian motion defined on an independent extension of $(\Omega,\mathcal F,(\mathcal F_t)_{0\le t\le T},\mathbb P)$.
Proposition A.5. Assume (H-α-β) for $0\le\alpha,\beta\le1$. Let $s>2-2\alpha$, $s\ge1$, $s+\beta>1$. Then we have for $f\in\mathcal{FL}^s(\mathbb R^d)$ with compact support that
$$\Delta_n^{-1}D_{n,t}(f)\xrightarrow{ucp}\frac12\big(f(X_t)-f(X_0)\big)-\frac12\int_0^t\langle\nabla f(X_r),\sigma_r\,dW_r\rangle. \qquad(\mathrm{A.3})$$
We note in the proofs precisely where Lemma A.2 is used. This will allow us later to deduce Theorem 2.7 by small modifications.

Proof of Proposition A.4. We write $M_{n,t}(f)=\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}Z_k$ and $\tilde M_{n,t}(f)=\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\tilde Z_k$ for random variables
$$Z_k=\int_{t_{k-1}}^{t_k}\big(f(X_r)-\mathbb E[f(X_r)\,|\,\mathcal F_{t_{k-1}}]\big)\,dr, \qquad(\mathrm{A.4})$$
$$\tilde Z_k=\int_{t_{k-1}}^{t_k}\big\langle\nabla f(X_{t_{k-1}}),\,X_r(\Delta_n)-\mathbb E[X_r(\Delta_n)\,|\,\mathcal F_{t_{k-1}}]\big\rangle\,dr. \qquad(\mathrm{A.5})$$
$\tilde Z_k$ "linearizes" $Z_k$ with respect to $f$. The proof is based on the following statements for $0\le t\le T$:
$$\Delta_n^{-1}\sup_{0\le t\le T}\big|M_{n,t}(f)-\tilde M_{n,t}(f)\big|\xrightarrow{P}0, \qquad(\mathrm{A.6})$$
$$\Delta_n^{-2}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\mathbb E\big[\tilde Z_k^2\,\big|\,\mathcal F_{t_{k-1}}\big]\xrightarrow{P}\frac13\int_0^t\|\sigma_r^\top\nabla f(X_r)\|^2\,dr, \qquad(\mathrm{A.7})$$
$$\Delta_n^{-2}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\mathbb E\big[\tilde Z_k^2\,1_{\{|\tilde Z_k|>\varepsilon\}}\,\big|\,\mathcal F_{t_{k-1}}\big]\xrightarrow{P}0,\quad\text{for all }\varepsilon>0, \qquad(\mathrm{A.8})$$
$$\Delta_n^{-1}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\mathbb E\big[\tilde Z_k\big(W_{t_k}-W_{t_{k-1}}\big)^\top\,\big|\,\mathcal F_{t_{k-1}}\big]\xrightarrow{P}\frac12\int_0^t\nabla f(X_r)^\top\sigma_r\,dr, \qquad(\mathrm{A.9})$$
$$\Delta_n^{-1}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\mathbb E\big[\tilde Z_k\big(N_{t_k}-N_{t_{k-1}}\big)\,\big|\,\mathcal F_{t_{k-1}}\big]\xrightarrow{P}0, \qquad(\mathrm{A.10})$$
where (A.10) holds for all bounded ($\mathbb R$-valued) martingales $N$ which are orthogonal to all components of $W$. (A.6) yields $M_{n,t}(f)=\tilde M_{n,t}(f)+o_{ucp}(\Delta_n)$. The claim follows thus from the remaining statements (A.7) through (A.10) and Theorem 7.28 of Jacod and Shiryaev (2013).
We prove now the five statements above. $M_{n,t}(f)-\tilde M_{n,t}(f)$ is a discrete martingale such that by the Burkholder-Davis-Gundy inequality
$$\mathbb E\Big[\sup_{0\le t\le T}\big|M_{n,t}(f)-\tilde M_{n,t}(f)\big|^2\Big]\lesssim\sum_{k=1}^{n}\mathbb E\big[(Z_k-\tilde Z_k)^2\big].$$
Decompose any such $Z_k-\tilde Z_k$ into
$$\int_{t_{k-1}}^{t_k}\big(A_{k,r}-\mathbb E[A_{k,r}\,|\,\mathcal F_{t_{k-1}}]\big)\,dr \qquad(\mathrm{A.11})$$
$$+\int_{t_{k-1}}^{t_k}\big\langle\nabla f(X_{t_{k-1}}),\,X_r-X_r(\Delta_n)-\mathbb E[X_r-X_r(\Delta_n)\,|\,\mathcal F_{t_{k-1}}]\big\rangle\,dr, \qquad(\mathrm{A.12})$$
where $A_{k,r}=f(X_r)-f(X_{t_{k-1}})-\langle\nabla f(X_{t_{k-1}}),X_r-X_{t_{k-1}}\rangle$. The second moment of (A.12) is bounded by $2\Delta_n\int_{t_{k-1}}^{t_k}\mathbb E[\langle\nabla f(X_{t_{k-1}}),X_r-X_r(\Delta_n)\rangle^2]\,dr=o(\Delta_n^3)$ using Lemma A.2(ii). The same order follows for the second moment of (A.11) from Lemma A.2(iii). This yields (A.6). In order to prove the remaining statements observe first by the (stochastic) Fubini theorem that $\tilde Z_k$ is equal to
$$\int_{t_{k-1}}^{t_k}\big\langle\nabla f(X_{t_{k-1}}),\,(t_k-r)\big(b_r-\mathbb E[b_r\,|\,\mathcal F_{t_{k-1}}]\big)\big\rangle\,dr+\Big\langle\nabla f(X_{t_{k-1}}),\,\sigma_{t_{k-1}}\int_{t_{k-1}}^{t_k}(t_k-r)\,dW_r\Big\rangle.$$
The first term is of smaller order than the second one. By the Itô isometry, because $\sigma$ is càdlàg and from Lemma A.2(i), (iv), it therefore follows that the left-hand side in (A.7) is equal to
$$\frac{\Delta_n}{3}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\|\sigma_{t_{k-1}}^\top\nabla f(X_{t_{k-1}})\|^2+o_P(1)=\frac13\int_0^t\|\sigma_r^\top\nabla f(X_r)\|^2\,dr+o_P(1).$$
With respect to (A.8) note that $|\tilde Z_k|>\varepsilon$ implies $\|\int_{t_{k-1}}^{t_k}(t_k-r)\,dW_r\|>\varepsilon'$ for some $\varepsilon'>0$ and sufficiently large $n$ because of the Cauchy-Schwarz inequality. Consequently, it follows from Lemma A.2(i) and independence that
$$\mathbb E\big[\tilde Z_k^2\,1_{\{|\tilde Z_k|>\varepsilon\}}\big]\lesssim\mathbb E\big[\|\nabla f(X_{t_{k-1}})\|^2\big]\Big(\Delta_n^4+\mathbb E\Big[\big\|\int_{t_{k-1}}^{t_k}(t_k-r)\,dW_r\big\|^4\Big]\Big),$$
which is of order $O(\Delta_n^4)$, thus implying (A.8). The left-hand side of (A.9), on the other hand, is equal to $R_n+\frac{\Delta_n}{2}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\nabla f(X_{t_{k-1}})^\top\sigma_{t_{k-1}}$ with $\mathbb E[\|R_n\|]=o(1)$ by the Itô isometry (applied coordinatewise). (A.9) follows then from $\sigma$ being càdlàg and A.2(iv). The same argument shows that the left-hand side in (A.10) is of order $o_P(1)$.
Proof of Proposition A.5. Lemma A.6(i) below shows
$$D_{n,t}(f)=\frac{\Delta_n}{2}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\mathbb E\big[f(X_{t_k})-f(X_{t_{k-1}})\,\big|\,\mathcal F_{t_{k-1}}\big]+o_{ucp}(\Delta_n). \qquad(\mathrm{A.13})$$
In order to find the limit of this sum, write it as
$$\frac{\Delta_n}{2}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\mathbb E[A_k\,|\,\mathcal F_{t_{k-1}}]+\frac{\Delta_n}{2}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\mathbb E[B_k\,|\,\mathcal F_{t_{k-1}}], \qquad(\mathrm{A.14})$$
where $A_k=f(X_{t_k})-f(X_{t_{k-1}})-B_k$ and $B_k=\langle\nabla f(X_{t_{k-1}}),X_{t_k}-X_{t_{k-1}}\rangle$. Note that by the Burkholder-Davis-Gundy inequality
$$\mathbb E\Big[\sup_{0\le t\le T}\Big|\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\big(\mathbb E[A_k\,|\,\mathcal F_{t_{k-1}}]-A_k\big)\Big|^2\Big]\lesssim\sum_{k=1}^{n}\mathbb E\big[A_k^2\big],$$
which is of order $o(\Delta_n)$ by Lemma A.2(iii). Therefore, (A.14) is up to an error of order $o_{ucp}(\Delta_n)$ equal to
$$\frac{\Delta_n}{2}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\big(f(X_{t_k})-f(X_{t_{k-1}})\big)+\frac{\Delta_n}{2}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\big(\mathbb E[B_k\,|\,\mathcal F_{t_{k-1}}]-B_k\big).$$
The first sum is just $\frac{\Delta_n}{2}\big(f(X_{t_{\lfloor t/\Delta_n\rfloor}})-f(X_0)\big)=\frac{\Delta_n}{2}\big(f(X_t)-f(X_0)\big)+o_{ucp}(\Delta_n)$, while the second one is equal to
$$\frac{\Delta_n}{2}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\int_{t_{k-1}}^{t_k}\big\langle\nabla f(X_{t_{k-1}}),\,\mathbb E[b_r\,|\,\mathcal F_{t_{k-1}}]-b_r\big\rangle\,dr-\frac{\Delta_n}{2}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\int_{t_{k-1}}^{t_k}\big\langle\nabla f(X_{t_{k-1}}),\,\sigma_r\,dW_r\big\rangle.$$
This is equal to $-\frac{\Delta_n}{2}\int_0^{\lfloor t/\Delta_n\rfloor\Delta_n}\langle\nabla f(X_r),\sigma_r\,dW_r\rangle+o_{ucp}(\Delta_n)$ and the claim follows. In the second line use Lemma A.2(iv) and for the first line note that it is a discrete martingale of order $o_{ucp}(\Delta_n)$ by the Burkholder-Davis-Gundy inequality and Lemma A.2(i).
We now state and prove the lemmas used above.

Lemma A.6. Assume (H-α-β) for $0\le\alpha,\beta\le1$. Let $s>2-2\alpha$, $s\ge1$, $s+\beta>1$. Then we have for $f\in\mathcal{FL}^s(\mathbb R^d)$ with compact support that
$$D_{n,t}(f)-\frac{\Delta_n}{2}\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\mathbb E\big[f(X_{t_k})-f(X_{t_{k-1}})\,\big|\,\mathcal F_{t_{k-1}}\big]=o_{ucp}(\Delta_n).$$
Proof. Consider first $f\in\mathcal S(\mathbb R^d)$. By applying Itô's formula and the Fubini theorem the left-hand side in the statement is equal to $D_{n,t}(1,f)+D_{n,t}(2,f)$, where $D_{n,t}(1,f)$ and $D_{n,t}(2,f)$ are defined by
$$D_{n,t}(1,f)=\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\int_{t_{k-1}}^{t_k}\Big(t_k-r-\frac{\Delta_n}{2}\Big)\,\mathbb E\big[\langle\nabla f(X_r),b_r\rangle\,\big|\,\mathcal F_{t_{k-1}}\big]\,dr,$$
$$D_{n,t}(2,f)=\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\int_{t_{k-1}}^{t_k}\Big(t_k-r-\frac{\Delta_n}{2}\Big)\,\mathbb E\Big[\frac12\sum_{l,m=1}^{d}\partial^2_{lm}f(X_r)\,\big(\sigma_r\sigma_r^\top\big)^{(l,m)}\,\Big|\,\mathcal F_{t_{k-1}}\Big]\,dr.$$
We will show that
$$\mathbb E\Big[\sup_{0\le t\le T}\big|D_{n,t}(1,f)+D_{n,t}(2,f)\big|\Big]\lesssim o\big(\Delta_n\|f\|_{\mathcal{FL}^s}\big)+\Delta_n\int|\mathcal F f(u)|(1+\|u\|)^{s}g_n(u)\,du, \qquad(\mathrm{A.15})$$
with $g_n$ as in Lemma A.7 below. Choose now any sequence $(f_m)\subset\mathcal S(\mathbb R^d)$ converging to $f\in\mathcal{FL}^s(\mathbb R^d)$ with respect to $\|\cdot\|_{\mathcal{FL}^s}$. This means, in particular, that $f_m$ converges to $f$ uniformly. Therefore (A.15) also holds for $f$. The properties of $g_n$ and dominated convergence therefore imply the claim.

In order to show (A.15) note first that $\mathbb E[\sup_{0\le t\le T}|D_{n,t}(1,f)|]=o(\Delta_n\|f\|_{\mathcal{FL}^s})$ follows already from Lemma A.2(v). With respect to $D_{n,t}(2,f)$ write $\Sigma_t=\sigma_t\sigma_t^\top$ and fix $l,m=1,\dots,d$. For $f\in\mathcal S(\mathbb R^d)$ it is always justified to exchange integrals in the following calculations. We can write $\partial^2_{lm}f(X_r)=-(2\pi)^{-d}\int\mathcal F f(u)\,u_lu_m\,e^{-i\langle u,X_r\rangle}\,du$ such that
$$D_{n,t}(2,f)=-(2\pi)^{-d}\int\mathcal F f(u)\,u_lu_m\,Q_{n,t}(u)\,du,$$
where
$$Q_{n,t}(u)=\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\int_{t_{k-1}}^{t_k}\Big(t_k-r-\frac{\Delta_n}{2}\Big)\,\mathbb E\big[e^{-i\langle u,X_r\rangle}\,\Sigma_r^{(l,m)}\,\big|\,\mathcal F_{t_{k-1}}\big]\,dr.$$
Consequently, because of
$$\mathbb E\Big[\sup_{0\le t\le T}\Big|\int\mathcal F f(u)\,u_lu_m\,Q_{n,t}(u)\,du\Big|\Big]\le\int|\mathcal F f(u)|\,\|u\|^2\,\mathbb E\Big[\sup_{0\le t\le T}|Q_{n,t}(u)|\Big]\,du,$$
the remaining part of (A.15) follows from Lemma A.7.
The following lemma is stronger than necessary here. This will become useful for Theorem 2.7 and Corollary 2.9.

Lemma A.7. Assume (H-α-β) for $0\le\alpha,\beta\le1$. Let $s>2-2\alpha$, $s\ge1$, $s+\beta>1$. Then we have for any $p\ge1$ and uniformly in $u\in\mathbb R^d$ that
$$\big\|\sup_{0\le t\le T}Q_{n,t}(u)\big\|_{L^p(\mathbb P)}\le C_p\Delta_n(1+\|u\|)^{s-2}g_n(u),$$
where $\sup_{n\ge1}\sup_{u\in\mathbb R^d}|g_n(u)|<\infty$ and $g_n(u)\to0$ for all $u\in\mathbb R^d$ as $n\to\infty$.
Proof. The proof is separated into five steps.

Step 1. Let $0<\varepsilon\le1$ and define $t_\varepsilon=\max(\lfloor t/\varepsilon\rfloor\varepsilon-2\varepsilon,0)$ for $0\le t\le T$. $t_\varepsilon$ projects $t$ to the grid $\{0,\varepsilon,2\varepsilon,\dots,\lceil T/\varepsilon\rceil\varepsilon\}$ such that $t-t_\varepsilon\le3\varepsilon$ and $t-t_\varepsilon\ge\min(2\varepsilon,t)$. Later, we will choose $\varepsilon$ depending on $n$ and $u$, i.e. $\varepsilon=\varepsilon(u,n)$. Define the process $\tilde X_t(\varepsilon)=X_{t_\varepsilon}+b_{t_\varepsilon}(t-t_\varepsilon)+\sigma_{t_\varepsilon}(W_t-W_{t_\varepsilon})$. Assumption (H-α-β) implies $\mathbb E[(\Sigma_t^{(l,m)}-\Sigma_{t_\varepsilon}^{(l,m)})^p]\lesssim\varepsilon^{\alpha p}$ and Proposition A.1 yields $\mathbb E[\|X_t-\tilde X_t(\varepsilon)\|^p]\lesssim(\varepsilon^{(\beta+1)p}+\varepsilon^{(\alpha+1/2)p})$. Define
$$Q_{n,t}(\varepsilon,u)=\sum_{k=1}^{\lfloor t/\Delta_n\rfloor}\int_{t_{k-1}}^{t_k}\Big(t_k-r-\frac{\Delta_n}{2}\Big)\,\mathbb E\big[e^{-i\langle u,\tilde X_r(\varepsilon)\rangle}\,\Sigma_{r_\varepsilon}^{(l,m)}\,\big|\,\mathcal F_{t_{k-1}}\big]\,dr.$$
The Lipschitz continuity of $x\mapsto e^{ix}$ therefore yields
$$\big\|\sup_{0\le t\le T}\big(Q_{n,t}(u)-Q_{n,t}(\varepsilon,u)\big)\big\|_{L^p(\mathbb P)}\lesssim\Delta_n\int_0^T\mathbb E\Big[\big|e^{-i\langle u,X_r\rangle}\Sigma_r^{(l,m)}-e^{-i\langle u,\tilde X_r(\varepsilon)\rangle}\Sigma_{r_\varepsilon}^{(l,m)}\big|^p\Big]^{1/p}\,dr\lesssim\Delta_n(1+\|u\|)^{s-2}g_{n,1}(u),$$
with $g_{n,1}(u)=(1+\|u\|)^{2-s}\big(\varepsilon^\alpha+\|u\|\varepsilon^{1+\beta}+\|u\|\varepsilon^{1/2+\alpha}\big)$. We study now $Q_{n,t}(\varepsilon,u)$.
Step 2. With respect to the new grid $\{0,\varepsilon,2\varepsilon,\dots,\lceil T/\varepsilon\rceil\varepsilon\}$ and $0\le t\le T$ let
$$I_j(t)=\{k=1,\dots,\lfloor t/\Delta_n\rfloor:(j-1)\varepsilon<t_k\le j\varepsilon\},\qquad1\le j\le\lceil T/\varepsilon\rceil,$$
be the set of blocks $k\le\lfloor t/\Delta_n\rfloor$ with right endpoints $t_k\le t$ inside the interval $((j-1)\varepsilon,j\varepsilon]$. Then $Q_{n,t}(\varepsilon,u)=\sum_{j=1}^{\lceil T/\varepsilon\rceil}R_{j,t}(u)+\sum_{j=1}^{\lceil T/\varepsilon\rceil}\mathbb E[A_{j,t}(u)\,|\,\mathcal F_{(j-1)\varepsilon}]$ for $R_{j,t}(u)=A_{j,t}(u)-\mathbb E[A_{j,t}(u)\,|\,\mathcal F_{(j-1)\varepsilon}]$ and where
$$A_{j,t}(u)=\sum_{k\in I_j(t)}\int_{t_{k-1}}^{t_k}\Big(t_k-r-\frac{\Delta_n}{2}\Big)\,\xi_{r,k}\,dr,\qquad\xi_{r,k}=\mathbb E\big[e^{-i\langle u,\tilde X_r(\varepsilon)\rangle}\,\Sigma_{r_\varepsilon}^{(l,m)}\,\big|\,\mathcal F_{t_{k-1}}\big],$$
such that $A_{j,t}(u)$ is $\mathcal F_{j\varepsilon}$-measurable for fixed $u$. We want to show that $\sup_{0\le t\le T}|\sum_{j=1}^{\lceil T/\varepsilon\rceil}R_{j,t}(u)|$ is negligible. Note first that $I_j(t)=\emptyset$ for $t\le(j-1)\varepsilon$ and $I_j(t)=I_j(T)$ for $t>j\varepsilon$. Therefore, $A_{j,t}(u)=0$ for $t\le(j-1)\varepsilon$ and $A_{j,t}(u)=A_{j,T}(u)$ for $t>j\varepsilon$. Denote by $j^*$ the unique $j\in\{1,\dots,\lceil T/\varepsilon\rceil\}$ with $(j-1)\varepsilon<t\le j\varepsilon$. Then $\sum_{j=1}^{\lceil T/\varepsilon\rceil}R_{j,t}(u)=B_{j^*-1}(u)+R_{j^*,t}(u)$, where $B_m(u)=\sum_{j=1}^{m}R_{j,T}(u)$ defines a complex-valued martingale $(B_m(u))_{m=0,\dots,\lceil T/\varepsilon\rceil}$ with respect to the filtration $(\mathcal F_{m\varepsilon})_{m=0,\dots,\lceil T/\varepsilon\rceil}$. The Burkholder-Davis-Gundy inequality then yields
$$\mathbb E\Big[\sup_{0\le t\le T}\Big|\sum_{j=1}^{\lceil T/\varepsilon\rceil}R_{j,t}(u)\Big|^p\Big]\lesssim\mathbb E\Big[\sup_{m\in\{0,\dots,\lceil T/\varepsilon\rceil\}}|B_m(u)|^p\Big]+\mathbb E\Big[\sup_{0\le t\le T}|R_{j^*,t}(u)|^p\Big]\lesssim\mathbb E\Big[\Big(\sum_{j=1}^{\lceil T/\varepsilon\rceil}|A_{j,T}(u)|^2\Big)^{p/2}\Big]+\mathbb E\Big[\sup_{0\le t\le T}|A_{j^*,t}(u)|^p\Big].$$
If $\varepsilon<\Delta_n$, then each $I_j(t)$ contains at most one block $k$ and for $t_{k-1}\le r\le t_k\le j\varepsilon$ we have necessarily $t_{k-1}\le(j-1)\varepsilon=r_\varepsilon$. Hence, $|\xi_{r,k}|\lesssim\big|\mathbb E\big[e^{-i\langle u,\sigma_{r_\varepsilon}(W_r-W_{r_\varepsilon})\rangle}\,\big|\,\mathcal F_{r_\varepsilon}\big]\big|\le e^{-\frac{\|u\|^2}{2K}\varepsilon}$ by Assumption (H-α-β) and thus $|A_{j,t}(u)|\lesssim\Delta_n^2e^{-\frac{\|u\|^2}{2K}\varepsilon}$. Moreover, there are clearly at most $\Delta_n^{-1}$ many non-empty $I_j(t)$. Consequently, in this case the last display is up to a constant bounded by $\Delta_n^{3p/2}e^{-\frac{\|u\|^2}{2K}\varepsilon p}$.

Assume in the following that $\varepsilon\ge\Delta_n$. Then $I_j(t)$ contains at most $C\varepsilon\Delta_n^{-1}$ many blocks $k$ and therefore
$$\mathbb E\Big[\sup_{0\le t\le T}|A_{j^*,t}(u)|^p\Big]\lesssim\Delta_n^p\varepsilon^p. \qquad(\mathrm{A.16})$$
Moreover,
$$\mathbb E\Big[\Big(\sum_{j=1}^{\lceil T/\varepsilon\rceil}|A_{j,T}(u)|^2\Big)^{p/2}\Big]^2\le\mathbb E\Big[\Big(\sum_{j=1}^{\lceil T/\varepsilon\rceil}|A_{j,T}(u)|^2\Big)^{p}\Big]\lesssim\Delta_n^{2p}\sum_{j_1,\dots,j_p=1}^{\lceil T/\varepsilon\rceil}\sum_{k_1,k_1'\in I_{j_1}(T)}\cdots\sum_{k_p,k_p'\in I_{j_p}(T)}\int_{t_{k_1-1}}^{t_{k_1}}\int_{t_{k_1'-1}}^{t_{k_1'}}\cdots\int_{t_{k_p-1}}^{t_{k_p}}\int_{t_{k_p'-1}}^{t_{k_p'}}\big|\mathbb E\big[\xi_{r_1k_1}\xi_{r_1'k_1'}\cdots\xi_{r_pk_p}\xi_{r_p'k_p'}\big]\big|\,d\big(r_1,r_1',\dots,r_p,r_p'\big).$$
Fix $j$ and $k_1,k_1',\dots,k_p,k_p'$, $r_1,r_1',\dots,r_p,r_p'$. Let $r$ and $h$ be the largest and second largest indices in the set $\{r_l,r_l':1\le l\le p\}$ with corresponding blocks $k,\tilde k$ such that $t_{k-1}\le r\le t_k$, $t_{\tilde k-1}\le h\le t_{\tilde k}$. Without loss of generality assume $h\le r$. If $t_{k-1}\le r_\varepsilon<t_k$, then
$$\big|\mathbb E\big[\xi_{r_1k_1}\xi_{r_1'k_1'}\cdots\xi_{r_pk_p}\xi_{r_p'k_p'}\big]\big|\lesssim\mathbb E\big[|\xi_{r,k}|\big]\lesssim e^{-\frac{\|u\|^2}{2K}\varepsilon}.$$
If, on the other hand, $h\le r_\varepsilon<t_{k-1}\le r<t_k$, then
$$\big|\mathbb E\big[\xi_{r_1k_1}\cdots\xi_{r_p'k_p'}\big]\big|\lesssim\mathbb E\big[\big|\mathbb E[\xi_{r,k}\,|\,\mathcal F_{r_\varepsilon}]\big|\big]\lesssim e^{-\frac{\|u\|^2}{2K}\varepsilon}.$$
In the two cases $r_\varepsilon<t_{k-1}\le h\le r<t_k$ and $r_\varepsilon<h<t_{k-1}\le r<t_k$ conditioning on $\mathcal F_h$ instead gives
$$\big|\mathbb E\big[\xi_{r_1k_1}\cdots\xi_{r_p'k_p'}\big]\big|\lesssim\mathbb E\Big[\big|\mathbb E\big[e^{-i\langle u,\sigma_{r_\varepsilon}(W_r-W_h)\rangle}\,\big|\,\mathcal F_h\big]\big|\Big]\lesssim e^{-\frac{\|u\|^2}{2K}|r-h|}.$$
As $\varepsilon\ge\Delta_n$, it follows that $\sum_{k\in I_j(T)}\int_{t_{k-1}}^{t_k}1\,dr\le\varepsilon$. In all, we conclude that $\mathbb E[(\sum_{j=1}^{\lceil T/\varepsilon\rceil}|A_{j,T}(u)|^2)^{p/2}]^2$ is up to a constant bounded by
$$\Delta_n^{2p}\Big(\varepsilon^pe^{-\frac{\|u\|^2}{2K}\varepsilon}+\varepsilon^{p-1}\sum_{j=1}^{\lceil T/\varepsilon\rceil}\sum_{k,\tilde k\in I_j(T)}\int_{t_{\tilde k-1}}^{t_{\tilde k}}\int_{t_{k-1}}^{t_k}e^{-\frac{\|u\|^2}{2K}|r-h|}\,dr\,dh\Big).$$
By symmetry in $r,h$ we find for $u\ne0$ that
$$\sum_{j=1}^{\lceil T/\varepsilon\rceil}\sum_{k,\tilde k\in I_j(T)}\int_{t_{\tilde k-1}}^{t_{\tilde k}}\int_{t_{k-1}}^{t_k}e^{-\frac{\|u\|^2}{2K}|r-h|}\,dr\,dh\le2\sum_{j=1}^{\lceil T/\varepsilon\rceil}\sum_{\tilde k\in I_j(T)}\int_{t_{\tilde k-1}}^{t_{\tilde k}}\int_h^{j\varepsilon}e^{-\frac{\|u\|^2}{2K}(r-h)}\,dr\,dh\lesssim\sum_{j=1}^{\lceil T/\varepsilon\rceil}\sum_{\tilde k\in I_j(T)}\int_{t_{\tilde k-1}}^{t_{\tilde k}}1\,dh\;\|u\|^{-2}\Big(1-e^{-\frac{\|u\|^2}{2}(\varepsilon+\Delta_n)}\Big)\lesssim\|u\|^{-2}\Big(1-e^{-\frac{\|u\|^2}{2}(\varepsilon+\Delta_n)}\Big),$$
because $1-e^{-\frac{\|u\|^2}{2}(j\varepsilon-h)}\le1-e^{-\frac{\|u\|^2}{2}(\varepsilon+\Delta_n)}$ for $t_{\tilde k-1}\le h\le j\varepsilon$ and $\tilde k\in I_j(T)$. Combining the estimates for $\varepsilon<\Delta_n$ and $\varepsilon\ge\Delta_n$, in all we have shown in this step that
$$\big\|\sup_{0\le t\le T}Q_{n,t}(\varepsilon,u)\big\|_{L^p(\mathbb P)}\lesssim\Delta_n(1+\|u\|)^{s-2}g_{n,2}(u)+\big\|\sup_{0\le t\le T}\sum_{j=1}^{\lceil T/\varepsilon\rceil}\mathbb E\big[A_{j,t}(u)\,\big|\,\mathcal F_{(j-1)\varepsilon}\big]\big\|_{L^p(\mathbb P)}$$
with
$$g_{n,2}(u)=(1+\|u\|)^{2-s}\Big(\Delta_n^{1/2}e^{-\frac{\|u\|^2}{2K}\varepsilon}+\varepsilon^{1/2-1/(2p)}\|u\|^{-1/p}\Big(1-e^{-\frac{\|u\|^2}{2}(\varepsilon+\Delta_n)}\Big)^{1/(2p)}+\varepsilon\Big).$$
Step 3. We need to use two martingale decompositions. Write
$$\sum_{j=1}^{\lceil T/\varepsilon\rceil}\mathbb E\big[A_{j,t}(u)\,\big|\,\mathcal F_{(j-1)\varepsilon}\big]=\sum_{j=1}^{\lceil T/\varepsilon\rceil}R_{j,t}^{(1)}(u)+\sum_{j=1}^{\lceil T/\varepsilon\rceil}R_{j,t}^{(2)}(u)+\sum_{j=1}^{\lceil T/\varepsilon\rceil}\mathbb E\big[A_{j,t}(u)\,\big|\,\mathcal F_{(j-3)\varepsilon}\big],$$
where $R_{j,t}^{(1)}(u)=\mathbb E[A_{j,t}(u)\,|\,\mathcal F_{(j-1)\varepsilon}]-\mathbb E[A_{j,t}(u)\,|\,\mathcal F_{(j-2)\varepsilon}]$ and $R_{j,t}^{(2)}(u)=\mathbb E[A_{j,t}(u)\,|\,\mathcal F_{(j-2)\varepsilon}]-\mathbb E[A_{j,t}(u)\,|\,\mathcal F_{(j-3)\varepsilon}]$. The arguments in step 2 can be applied to $\sum_{j=1}^{\lceil T/\varepsilon\rceil}R_{j,t}^{(1)}(u)$ and $\sum_{j=1}^{\lceil T/\varepsilon\rceil}R_{j,t}^{(2)}(u)$ instead of $\sum_{j=1}^{\lceil T/\varepsilon\rceil}R_{j,t}(u)$. Moreover, for $r\le3\varepsilon$ observe that $r_\varepsilon=0$. Hence $\mathbb E[A_{j,t}(u)\,|\,\mathcal F_{(j-3)\varepsilon}]$ is for $j\in\{1,2,3\}$ up to a constant bounded by
$$\sum_{k\in I_j(t)}\int_{t_{k-1}}^{t_k}\Big|t_k-r-\frac{\Delta_n}{2}\Big|\,e^{-\frac{\|\sigma_0^\top u\|^2}{2}r}\,dr\lesssim\Delta_n\int_0^\varepsilon e^{-\frac{\|u\|^2}{2K}r}\,dr\le\Delta_n\varepsilon.$$
We conclude that
$$\Big\|\sup_{0\le t\le T}\sum_{j=1}^{\lceil T/\varepsilon\rceil}\mathbb E\big[A_{j,t}(u)\,\big|\,\mathcal F_{(j-1)\varepsilon}\big]\Big\|_{L^p(\mathbb P)}\lesssim\Delta_n(1+\|u\|)^{s-2}g_{n,2}(u)+\Big\|\sup_{0\le t\le T}\sum_{j=4}^{\lceil T/\varepsilon\rceil}\mathbb E\big[A_{j,t}(u)\,\big|\,\mathcal F_{(j-3)\varepsilon}\big]\Big\|_{L^p(\mathbb P)}.$$
Step 4. For $t_{k-1}\le r\le t_k$ and $k\in I_j(t)$, $j\ge4$, note that $r_\varepsilon=(j-3)\varepsilon$. Hence $\mathbb E[\xi_{r,k}\,|\,\mathcal F_{(j-3)\varepsilon}]=Y_kV_{r,k}$, where
$$V_{r,k}=e^{-i\langle u,b_{(j-3)\varepsilon}(r-t_{k-1})\rangle-\frac{\|\sigma_{(j-3)\varepsilon}^\top u\|^2}{2}(r-t_{k-1})},$$
$$Y_k=e^{-i\langle u,X_{(j-3)\varepsilon}+b_{(j-3)\varepsilon}(t_{k-1}-(j-3)\varepsilon)\rangle-\frac{\|\sigma_{(j-3)\varepsilon}^\top u\|^2}{2}(t_{k-1}-(j-3)\varepsilon)}\,\Sigma_{(j-3)\varepsilon}^{(l,m)}.$$
Since also $t_{k-1}-(j-3)\varepsilon>\varepsilon$, it follows that $|Y_k|\lesssim e^{-\frac{\|u\|^2}{2K}\varepsilon}$. Moreover, $\int_{t_{k-1}}^{t_k}(t_k-r-\frac{\Delta_n}{2})\,Y_kV_{t_k,k}\,dr=0$. We therefore conclude that $\sum_{j=4}^{\lceil T/\varepsilon\rceil}\mathbb E[A_{j,t}(u)\,|\,\mathcal F_{(j-3)\varepsilon}]$ is bounded by
$$\sum_{j=4}^{\lceil T/\varepsilon\rceil}\sum_{k\in I_j(t)}\int_{t_{k-1}}^{t_k}\Delta_n\,|Y_k|\,|V_{r,k}-V_{t_k,k}|\,dr\lesssim\Delta_n^2(1+\|u\|)^2e^{-\frac{\|u\|^2}{2K}\varepsilon}.$$
Consequently, it follows with $g_{n,3}(u)=\Delta_n(1+\|u\|)^{4-s}e^{-\frac{\|u\|^2}{2K}\varepsilon}$ that
$$\Big\|\sup_{0\le t\le T}\sum_{j=4}^{\lceil T/\varepsilon\rceil}\mathbb E\big[A_{j,t}(u)\,\big|\,\mathcal F_{(j-3)\varepsilon}\big]\Big\|_{L^p(\mathbb P)}\lesssim\Delta_n(1+\|u\|)^{s-2}g_{n,3}(u).$$
Step 5. The four previous steps combined show that $\|\sup_{0\le t\le T}Q_{n,t}(u)\|_{L^p(\mathbb P)}$ is up to a constant bounded by $\Delta_n(1+\|u\|)^{s-2}g_n(u)$ with $g_n(u)=g_{n,1}(u)+g_{n,2}(u)+g_{n,3}(u)$. Set $\varepsilon=\varepsilon(u,n):=\min(\nu_n\|u\|^{-2},1)$ for $\nu_n=2K\log(1+\|u\|^3\Delta_n^{1/2})$. Hence $0<\varepsilon\le1$ and $\varepsilon\to0$ for fixed $u$. Choose $C\ge1$ large enough to ensure that $\varepsilon(u,n)<1$ for $\|u\|>C$ and $n=1$ (and thus for all $n$). For $\|u\|\le C$ this means $\varepsilon\le\nu_n\|u\|^{-2}\lesssim\Delta_n^{1/2}$ and $\sup_{u:\|u\|\le C}g_n(u)=o(1)$. For $\|u\|>C$, on the other hand, it follows that
$$g_{n,1}(u)\lesssim\|u\|^{2-s}\big(\|u\|^{-1-2\beta}\nu_n^{1+\beta}+\|u\|^{-2\alpha}\nu_n^{1/2+\alpha}\big),$$
$$g_{n,2}(u)\lesssim\|u\|^{2-s}\Big(\Delta_n^{1/2}\big(1+\|u\|^3\Delta_n^{1/2}\big)^{-1}+\|u\|^{-1}\nu_n^{1/2-1/(2p)}\Big(1-e^{-\frac{\|u\|^2}{2}(\varepsilon+\Delta_n)}\Big)^{1/(2p)}+\|u\|^{-2}\nu_n\Big),$$
$$g_{n,3}(u)\lesssim\|u\|^{4-s}\Delta_n\big(1+\|u\|^3\Delta_n^{1/2}\big)^{-1}.$$
The assumptions that $2-s-2\alpha<0$, $s\ge1$, $s+\beta>1$ and the fact that $\nu_n$ grows in $u$ only logarithmically imply that $\sup_{\|u\|>C}g_n(u)$ is bounded in $n$. Consequently, $\sup_{n\ge1}\sup_{u\in\mathbb R^d}g_n(u)$ is bounded. Moreover, for fixed $u$ with $\|u\|>C$ it follows that $g_n(u)\to0$ as $n\to\infty$. This proves the claim.
A.4 Proof of Theorem 2.7

Similar to Theorem 2.5 it is sufficient to prove the following two propositions for $f\in H^s(\mathbb R^d)$.
Proposition A.8. Assume (H-α-β) for $0\le\alpha,\beta\le1$ and (X0). Then we have for $f\in H^1(\mathbb R^d)$ the stable convergence
$$\Delta_n^{-1}M_{n,t}(f)\xrightarrow{st}\frac12\int_0^t\langle\nabla f(X_r),\sigma_r\,dW_r\rangle+\frac{1}{\sqrt{12}}\int_0^t\langle\nabla f(X_r),\sigma_r\,d\widetilde W_r\rangle \qquad(\mathrm{A.17})$$
as processes on $D([0,T],\mathbb R^d)$, where $\widetilde W$ is a $d$-dimensional Brownian motion defined on an independent extension of $(\Omega,\mathcal F,(\mathcal F_t)_{0\le t\le T},\mathbb P)$.
Proposition A.9. Assume (H-α-β) for $0\le\alpha,\beta\le1$ and (X0). Let $s>2-2\alpha$, $s\ge1$, $s+\beta>1$. Then we have for $f\in H^s(\mathbb R^d)$ that
$$\Delta_n^{-1}D_{n,T}(f)\xrightarrow{P}\frac12\big(f(X_T)-f(X_0)\big)-\frac12\int_0^T\langle\nabla f(X_r),\sigma_r\,dW_r\rangle. \qquad(\mathrm{A.18})$$
Note that the convergence in the second proposition is not functional as compared to Proposition A.5. Since the weak limit in (A.17) is a continuous process, convergence with respect to the Skorokhod topology and thus the stable convergence also hold at $t=T$ (Billingsley (2013)). This yields the CLT in (2.2) for $f\in H^s(\mathbb R^d)$ and at the fixed time $T$.
Proof of Proposition A.8. The proof of Proposition A.4 can be repeated word by word, since Lemma A.2 applies also to $f\in H^1(\mathbb R^d)$. We only have to argue differently for (A.8), because $\nabla f(X_r)$ may not be bounded.

As $\int_{t_{k-1}}^{t_k}(t_k-r)\,dW_r$ is independent of $\mathcal F_{t_{k-1}}$, it follows from the Cauchy-Schwarz inequality that $\mathbb E[\tilde Z_k^2\,1_{\{|\tilde Z_k|>\varepsilon\}}\,|\,\mathcal F_{t_{k-1}}]$ is up to a constant bounded by $\|\nabla f(X_{(k-1)\Delta_n})\|^2\,\mathbb E\big[(\Delta_n^4+\Delta_n^3Y_k^2)\,1_{\{\|\nabla f(X_{(k-1)\Delta_n})\|\Delta_n^{3/2}(1+|Y_k|)>\varepsilon'\}}\,\big|\,\mathcal F_{t_{k-1}}\big]$ for $\varepsilon'>0$ and with $Y_k\sim N(0,1)$ independent of $\mathcal F_{t_{k-1}}$. Since the marginals have uniformly bounded Lebesgue densities (uniformly in time), it follows that the first moment of the left-hand side in (A.8) is up to a constant bounded by
$$\int\|\nabla f(x)\|^2\,\mathbb E\Big[\big(\Delta_n+Y_1^2\big)\,1_{\{\|\nabla f(x)\|\Delta_n^{3/2}(1+|Y_1|)>\varepsilon'\}}\Big]\,dx.$$
This converges to 0 by dominated convergence, implying (A.8).
Proof of Proposition A.9. The proof follows as the one of Proposition A.5, because Lemma A.2 applies also to $f\in H^1(\mathbb R^d)$. We only have to use Lemma A.10 instead of Lemma A.6, while also replacing all $o_{ucp}$-expressions by the respective $o_P$-terms.
Lemma A.10. Assume (H-α-β) for $0\le\alpha,\beta\le1$ and (X0). Let $s>2-2\alpha$, $s\ge1$, $s+\beta>1$. Then we have for $f\in H^s(\mathbb R^d)$ with compact support that
$$D_{n,T}(f)-\frac{\Delta_n}{2}\sum_{k=1}^{n}\mathbb E\big[f(X_{t_k})-f(X_{t_{k-1}})\,\big|\,\mathcal F_{t_{k-1}}\big]=o_P(\Delta_n).$$
Proof. Using the notation from Lemma A.6 we only have to show for $f\in\mathcal S(\mathbb R^d)$ that
$$\mathbb E\big[|D_{n,T}(1,f)+D_{n,T}(2,f)|\big]\lesssim o\big(\Delta_n\|f\|_{H^s}\big)+\Delta_n\Big(\int|\mathcal F f(u)|^2(1+\|u\|)^{2s}g_n^2(u)\,du\Big)^{1/2}, \qquad(\mathrm{A.19})$$
with $g_n$ as in Lemma A.7. This can be extended to $f\in H^s(\mathbb R^d)$ by an approximation argument as in Lemma A.6.

$\mathbb E[|D_{n,T}(1,f)|]=o(\Delta_n\|f\|_{H^s})$ follows from Lemma A.2(v). With respect to $D_{n,T}(2,f)$ write
$$D_{n,T}(2,f)=-(2\pi)^{-d}\int\mathcal F f(u)\,u_lu_m\,e^{-i\langle u,X_0\rangle}\,\tilde Q_{n,T}(u)\,du$$
with
$$\tilde Q_{n,T}(u)=\sum_{k=1}^{n}\int_{t_{k-1}}^{t_k}\Big(t_k-r-\frac{\Delta_n}{2}\Big)\,\mathbb E\big[e^{-i\langle u,X_r-X_0\rangle}\,\Sigma_r^{(l,m)}\,\big|\,\mathcal F_{t_{k-1}}\big]\,dr.$$
This corresponds to $Q_{n,T}(u)$ in Lemma A.7 with $X_r-X_0$ instead of $X_r$. Consequently, the independence from (X0) shows that $\mathbb E[|D_{n,T}(2,f)|^2]$ is equal to
$$(2\pi)^{-2d}\int\mathcal F f(u)\mathcal F f(v)\,\mathcal F\mu(u+v)\,u_lu_mv_lv_m\,\mathbb E\big[\tilde Q_{n,T}(u)\tilde Q_{n,T}(v)\big]\,d(u,v)\lesssim\int|\mathcal F f(u)|^2\|u\|^4\,\mathbb E\big[|\tilde Q_{n,T}(u)|^2\big]\,du,$$
by Lemma A.3. The remaining part of (A.19) follows therefore from Lemma A.7.
A.5 Proof of Corollary 2.8
Proof. Without loss of generality we can assume in the following that $\mathcal F$ and the corresponding extensions are separable. In fact, it is enough to prove stable convergence for separable $\mathcal F$, essentially because the $\sigma$-fields generated by $X$, $b$ and $\sigma$ are separable (see Jacod and Shiryaev (2013, Theorem IX.7.3) for details). Assume first that $X_0=0$. On a suitable extension as in Theorem 2.7, denoted by $(\Omega',\mathcal F',(\mathcal F_t')_{0\le t\le T},\mathbb P')$, let $F_n(X,x_0)$ be defined as the random variables
$$F_n(X,x_0)=\Delta_n^{-1}\Big(\int_0^Tf(X_r+x_0)\,dr-\Delta_n\sum_{k=1}^{n}\frac12\big(f(X_{t_{k-1}}+x_0)+f(X_{t_k}+x_0)\big)\Big)$$
and let $F(X,\sigma,\widetilde W,x_0)=\sqrt{1/12}\int_0^T\langle\nabla f(X_r+x_0),\sigma_r\,d\widetilde W_r\rangle$, where $F_n$ and $F$ are measurable functions and $x_0\in\mathbb R^d$. The stable convergence in the claim is equivalent to $\mathbb E[Ug(F_n(X,x_0))]\to\mathbb E[Ug(F(X,\sigma,\widetilde W,x_0))]$ for any continuous bounded function $g:\mathbb R\to\mathbb R$ and any bounded $\mathcal F$-measurable random variable $U$ (cf. Podolskij and Vetter (2010)). We have to show that this holds for almost all $x_0\in\mathbb R^d$.

Let $(\Omega'',\mathcal F'',(\mathcal F_t'')_{0\le t\le T},\mathbb P'')$ be another extension of $(\Omega,\mathcal F,(\mathcal F_t)_{0\le t\le T},\mathbb P)$ such that there is a random variable $Y\sim N(0,I_d)$, with the $d$-dimensional identity matrix $I_d$, which is independent of $\mathcal F$ and such that $Y$ is $\mathcal F_0''$-measurable. On this space the process $(X_t+Y)_{0\le t\le T}$ satisfies Assumption (X0). Without loss of generality $(\Omega',\mathcal F',(\mathcal F_t')_{0\le t\le T},\mathbb P')$ also extends this space. Theorem 2.7 yields $\mathbb E[Ug(F_n(X,Y))]\to\mathbb E[Ug(F(X,\sigma,\widetilde W,Y))]$ for all continuous and bounded $g$ and all $\mathcal F''$-measurable random variables $U$. By independence of $Y$ and $\mathcal F$ this holds in particular for all $\mathcal F$-measurable $U$ independent of $Y$.

By a coupling argument (cf. Kallenberg (2002, Corollary 6.12)) there are (again on another extension of the probability space) $\tilde X,\tilde Y,\tilde\sigma,B,\tilde U$ with $(X,\sigma,\widetilde W,Y,U)\overset{d}{=}(\tilde X,\tilde\sigma,B,\tilde Y,\tilde U)$ such that $\tilde Y$ is independent of $(\tilde X,\tilde\sigma,B,\tilde U)$ and $(F_n(\tilde X,\tilde Y),\tilde U)\to(F(\tilde X,\tilde\sigma,B,\tilde Y),\tilde U)$ almost surely. By conditioning on $\tilde Y=x_0$ and using independence this implies that $\mathbb E[Ug(F_n(X,x_0))]\to\mathbb E[Ug(F(X,\sigma,\widetilde W,x_0))]$ for $\mathbb P^{\tilde Y}$-almost all $x_0$ (by dominated convergence for conditional expectations, cf. Kallenberg (2002, Theorem 6.1)). Since $\tilde Y\overset{d}{=}Y\sim N(0,I_d)$, this holds for almost all $x_0$. In particular, this holds for all $g\in C_c(\mathbb R^d)$, i.e. all continuous functions with compact support. Since this space is separable and because $\mathcal F$ is separable, this implies the claim (cf. Kallenberg (2002, Theorem 5.19)).
A.6 Proof of Corollary 2.9

As in Theorem 2.7 we only have to consider the CLT for the Riemann-sum estimator. Let $S_n(f,x_0)=\Delta_n^{-1}\big(\int_0^Tf(X_r+x_0)\,dr-\Delta_n\sum_{k=1}^{n}f(X_{t_{k-1}}+x_0)\big)$ for $x_0\in\mathbb R^d$ and $S(f,x_0)=\frac12\big(f(X_T)-f(X_0)\big)+\sqrt{1/12}\int_0^T\langle\nabla f(X_r+x_0),\sigma_r\,d\widetilde W_r\rangle$. The dependence on $X$, $\sigma$ and $\widetilde W$ is suppressed. Consider first the following lemma.
Lemma A.11. Assume (H-α-β) for $0\le\alpha,\beta\le1$ and $X_0\sim N(0,I_d)$ independent of $(X_t-X_0)_{0\le t\le T}$ such that (X0) holds. Let $s>2-2\alpha$, $s\ge1$. Then we have for $f\in C^s(\mathbb R^d)$ with compact support that $\|S_n(f,X_0)\|_{L^p(\mathbb P)}\le C_p(\|f\|_\infty+\|\nabla f\|_\infty)$.
Proof. Recall the decomposition $S_n(f,x_0)=\Delta_n^{-1}M_{n,T}(f)+\Delta_n^{-1}D_{n,T}(f)$ from Theorem 2.7. Similarly as in the proof of Proposition A.4 it follows that
$$\mathbb E\big[|M_{n,T}(f)|^p\big]\lesssim\mathbb E\big[|M_{n,T}(f)-\tilde M_{n,T}(f)|^p\big]+\mathbb E\big[|\tilde M_{n,T}(f)|^p\big]\lesssim\Delta_n^p\|\nabla f\|_\infty^p.$$
For this use the bounds on (A.11), (A.12) and slightly modified statements of Lemma A.2. With respect to $D_{n,T}(f)$ we argue as in Lemma A.10 (note that $f\in H^{1+\varepsilon}(\mathbb R^d)$ for some small $\varepsilon>0$).

Write $D_{n,T}(f)$ as $\tilde D_{n,T}(1,f)+\tilde D_{n,T}(2,f)$, where $\tilde D_{n,T}(1,f)$ and $\tilde D_{n,T}(2,f)$ are defined just as $D_{n,T}(1,f)$ and $D_{n,T}(2,f)$ in Lemma A.6, but with $t_k-r-\frac{\Delta_n}{2}$ replaced by $t_k-r$. It follows similarly to Lemma A.2(v) that $\|\tilde D_{n,T}(1,f)\|_{L^p(\mathbb P)}\lesssim\Delta_n\|\nabla f\|_\infty$. Moreover, if $f\in\mathcal S(\mathbb R^d)$, then
$$\tilde D_{n,T}(2,f)=-(2\pi)^{-d}\int\mathcal F f(u)\,u_lu_m\,e^{-i\langle u,X_0\rangle}\,\tilde Q_{n,T}(u)\,du$$
with $\tilde Q_{n,T}(u)$ as in Lemma A.10, but also with $t_k-r$ instead of $t_k-r-\frac{\Delta_n}{2}$. Assume first that $p\ge2$ is even. Then we find from independence via (X0), with $\mu$ being the density of $N(0,I_d)$, that $\|\tilde D_{n,T}(2,f)\|^p_{L^p(\mathbb P)}$ is bounded by
$$(2\pi)^{-pd}\int\prod_{j=1}^{p}|\mathcal F f(u_j)|\,\|u_j\|^2\,\mathcal F\mu\Big(\sum_{j=1}^{p}u_j\Big)\,\Big|\mathbb E\Big[\prod_{j=1}^{p}\tilde Q_{n,T}(u_j)\Big]\Big|\,d(u_1,\dots,u_p).$$
Because of $|\mathbb E[\prod_{j=1}^{p}\tilde Q_{n,T}(u_j)]|\le\prod_{j=1}^{p}\|\tilde Q_{n,T}(u_j)\|_{L^p(\mathbb P)}$, Lemmas A.7 and A.3, this is up to a constant bounded by $\|f\|_{H^1}^p\lesssim(\|f\|_\infty+\|\nabla f\|_\infty)^p$. If $p$ is not even or $p=1$, then we have instead $\|\tilde D_{n,T}(2,f)\|_{L^p(\mathbb P)}\le\|\tilde D_{n,T}(2,f)\|_{L^{2p}(\mathbb P)}\lesssim\Delta_n^{1/2}(\|f\|_\infty+\|\nabla f\|_\infty)$. This is the claimed bound for $\|\tilde D_{n,T}(2,f)\|_{L^p(\mathbb P)}$ if $f\in\mathcal S(\mathbb R^d)$. For $f\in C^s(\mathbb R^d)$ use a density argument. Together with the bounds on $\|M_{n,T}(f)\|_{L^p(\mathbb P)}$ and $\|\tilde D_{n,T}(1,f)\|_{L^p(\mathbb P)}$ this yields the claim.
Proof. As in the proof of Proposition 2.8 assume X_0 = 0 such that X + x_0 has initial value x_0 ∈ R^d. Observe further that f ∈ C^s(R^d) ⊂ H^1_loc(R^d) and S_n(f, X + x_0) = S_n(f_{x_0}, X) with f_{x_0}(x) = f(x + x_0). It is sufficient to prove the claim under Assumption (H-α-β) for f with compact support (cf. Section A.1).

The claim is equivalent to (S_n(f, x_0), U) → (S(f, x_0), U) in distribution for all x_0 ∈ R^d and any F-measurable real-valued random variable U. For this note that, since f is continuous, g_n(x_0) = (S_n(f, Y + x_0), U) defines a sequence of continuous stochastic processes (g_n(x_0))_{x_0∈R^d}. Similarly, g(x_0) = (S(f, Y + x_0), U) defines a continuous stochastic process (g(x_0))_{x_0∈R^d}. We will show below that (g_n(x_0))_{x_0∈R^d} → (g(x_0))_{x_0∈R^d} in distribution with respect to the sup norm on R^d. By a coupling argument as in the proof of Corollary 2.8 this means that (S_n(f, y + x_0), U)_{x_0∈R^d} → (S(f, y + x_0), U)_{x_0∈R^d} in distribution for almost all y ∈ R^d. Since point evaluations are continuous with respect to the sup norm, and because y + x_0 runs through all of R^d, this implies the claim of the corollary.

In order to show (g_n(x_0))_{x_0∈R^d} → (g(x_0))_{x_0∈R^d} in distribution let Y ∼ N(0, I_d) be defined on an appropriate extension of (Ω, F, (F_t)_{0≤t≤T}, P) as in Corollary 2.8. The process X + Y + x_0 then satisfies Assumption (X0) for any x_0 ∈ R^d. By linearity of f ↦ S_n(f, Y + x_0), the convergence of the finite dimensional distributions of (g_n(x_0))_{x_0∈R^d} follows from Theorem 2.7 and the Cramér-Wold Theorem (Kallenberg (2002, Corollary 5.5)). With respect to tightness, observe for any x_0, y_0 ∈ R^d by linearity and the last lemma that

‖g_n(x_0) − g_n(y_0)‖_{L^p(P)} ≲ ‖S_n(f, Y + x_0) − S_n(f, Y + y_0)‖_{L^p(P)} ≲ ‖f_{x_0} − f_{y_0}‖_∞ + ‖∇f_{x_0} − ∇f_{y_0}‖_∞ ≲ ‖x_0 − y_0‖^s,

because ∇f is (1 − s)-Hölder continuous and has compact support. Choose p ≥ 1 such that ps > d. From the Kolmogorov-Chentsov criterion for tightness on C(R^d) (Kallenberg (2002, Corollary 16.9)) we therefore obtain the tightness of (g_n(x_0))_{x_0∈R^d} and thus the claimed weak convergence (g_n(x_0))_{x_0∈R^d} → (g(x_0))_{x_0∈R^d} in distribution.
Appendix B: Proofs of Section 3
Observe first the following lemma, which will be used frequently.
Lemma B.1. Let 0 < a < b ≤ T and α, β ≤ 2. It follows that

Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} (b − a)^{−α} a^{−β} da db ≲
  log² n,             α = 1, β = 1,
  T^{1−α} log n,      α < 1, β = 1,
  T^{1−α} ∆_n^{1−β},  α < 1, β > 1,
  T^{2−α−β},          α < 1, β < 1.

The same holds when α and β are switched.

Proof. The sum is equal to ∆_n^{2−α−β} Σ_{k−1>j≥2} (k − 1 − j)^{−α} ∫_{j−1}^{j} a^{−β} da, which is bounded by ∆_n^{2−α−β} ( Σ_{k=1}^n k^{−α} ) ( ∫_1^n a^{−β} da ). If α = 1, then the sum is of order log n, while it is of order n^{1−α} when α < 1 and just finite when α > 1. The same statements hold for the integral, depending on β. Considering all possible combinations yields the claim.
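As a quick numerical companion to Lemma B.1, the following sketch (not part of the proof; the choice T = 1, the grid sizes and the midpoint quadrature are assumptions made purely for illustration) evaluates the double sum for α = β = 1 and checks that it grows no faster than log² n.

```python
import numpy as np

def double_sum(n, T=1.0, alpha=1.0, beta=1.0, m=20):
    # Midpoint approximation of
    #   sum_{k-1>j>=2} int_{t_{k-1}}^{t_k} int_{t_{j-1}}^{t_j} (b-a)^{-alpha} a^{-beta} da db
    # on the grid t_i = i*T/n.
    dn = T / n
    total = 0.0
    for k in range(4, n + 1):               # k-1 > j >= 2  means  k >= 4
        b = (k - 1) * dn + (np.arange(m) + 0.5) * dn / m
        for j in range(2, k - 1):
            a = (j - 1) * dn + (np.arange(m) + 0.5) * dn / m
            vals = (b[:, None] - a[None, :]) ** (-alpha) * a[None, :] ** (-beta)
            total += vals.mean() * dn * dn   # midpoint rule over the cell
    return total

for n in [50, 100, 200]:
    s = double_sum(n)
    print(n, s, s / np.log(n) ** 2)          # the last column should stay bounded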
B.1 Proof of Proposition 3.1
Proof. Write ‖Γ_T(f) − Γ̂_{n,T}(f)‖²_{L²(P)} = A_1 + A_2 + A_3, where A_1 = Σ_{|k−j|≤1} M_{k,j}, A_2 = 2 Σ_{k−1>j≥2} M_{k,j} and A_3 = 2 Σ_{k>2} M_{k,1} and where

M_{k,j} = ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} E[ ( f(X_r) − f(X_{t_{k−1}}) )( f(X_h) − f(X_{t_{j−1}}) ) ] dh dr.

Applying the Cauchy-Schwarz inequality several times yields A_1 + A_2 + A_3 ≲ S_1 + S_2, where S_1 = ∆_n Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} E[ ( f(X_r) − f(X_{t_{k−1}}) )² ] dr and S_2 = Σ_{k−1>j≥2} |M_{k,j}|. It follows that

S_1 = ∆_n ∫ ( f(y) − f(x) )² ( Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} p_{t_{k−1},r}(x, y) dr ) d(x, y).

The following idea generalizes Equation (8) of Ganychenko (2015) to arbitrary processes. For (i) consider t_{j−1} < h < t_j < t_{k−1} < r < t_k and let g_{h,t_{j−1},b}(x, y) = p_{h,b}(x, y) − p_{t_{j−1},b}(x, y). The Fubini theorem implies for bounded f with compact support that M_{k,j} is equal to

∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} ∫_{t_{k−1}}^{r} ∫ f(x) f(y) ∂_b g_{h,t_{j−1},b}(x, y) d(x, y) db dh dr.

By interchanging integration and differentiation the inner integral is equal to ∂_b( ∫ f(x) f(y) g_{h,t_{j−1},b}(x, y) d(x, y) ). Observe that ∫ g_{h,t_{j−1},b}(x, y) dy is independent of b. Consequently, ∂_b( ∫ f²(x) g_{h,t_{j−1},b}(x, y) d(x, y) ) = 0. This holds similarly if f²(x) is replaced by f²(y), because ∫ g_{h,t_{j−1},b}(x, y) dx = 0. It follows that M_{k,j} is equal to

−(1/2) ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} ∫_{t_{k−1}}^{r} ∫ ( f(y) − f(x) )² ∂_b g_{h,t_{j−1},b}(x, y) d(x, y) db dh dr

and S_2 is up to a constant bounded by

∆_n ∫ ( f(y) − f(x) )² ( Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} |∂_r g_{h,t_{j−1},r}(x, y)| dh dr ) d(x, y).

Together with the bound for S_1 this yields (i). For (ii) it follows similarly that M_{k,j} is equal to

−(1/2) ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} ∫_{t_{k−1}}^{r} ∫_{t_{j−1}}^{h} ∫ ( f(y) − f(x) )² ∂²_{ab} p_{a,b}(x, y) d(x, y) da db dh dr.

(ii) follows from the bound on S_1 and because S_2 is up to a constant bounded by

∆_n² ∫ ( f(y) − f(x) )² ( Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} |∂²_{hr} p_{h,r}(x, y)| dh dr ) d(x, y).

B.2 Proof of Proposition 3.2
Proof. As in the proof of Proposition 3.1 it is sufficient to bound S_1 + S_2. For f ∈ S(R^d) we can write f(X_r) = (2π)^{−d} ∫ F f(u) e^{−i⟨u,X_r⟩} du for all 0 < r < T. It follows that E[ ( f(X_r) − f(X_{t_{k−1}}) )( f(X_h) − f(X_{t_{j−1}}) ) ] is equal to

(2π)^{−2d} ∫ F f(u) F f(v) E[ ( e^{−i⟨v,X_r⟩} − e^{−i⟨v,X_{t_{k−1}}⟩} )( e^{−i⟨u,X_h⟩} − e^{−i⟨u,X_{t_{j−1}}⟩} ) ] d(u, v).

With ϕ_{h,h}(u, v) = E[ e^{i⟨u+v,X_h⟩} ] the expectation is for all h, r, t_{k−1}, t_{j−1} equal to

ϕ_{h,r}(u, v) − ϕ_{t_{j−1},r}(u, v) − ϕ_{h,t_{k−1}}(u, v) + ϕ_{t_{j−1},t_{k−1}}(u, v).    (B.1)

For (i) this implies by symmetry in u, v that S_1 is up to a constant bounded by

∆_n ∫ |F f(u)| |F f(v)| ( Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} |g_{t_{k−1},r}(u, v)| dr ) d(u, v)    (B.2)

with g_{t_{k−1},r}(u, v) as in the statement. Let g̃_{h,t_{j−1},b}(u, v) = ∂_b ϕ_{h,b}(u, v) − ∂_b ϕ_{t_{j−1},b}(u, v). Then (B.1) is for t_{j−1} < h < t_j < t_{k−1} < r < t_k equal to ∫_{t_{k−1}}^{r} g̃_{h,t_{j−1},b}(u, v) db. Therefore S_2 is up to a constant bounded by

∆_n ∫ |F f(u)| |F f(v)| ( Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} |g̃_{h,t_{j−1},r}(u, v)| dh dr ) d(u, v).

This yields (i). With respect to (ii) note that the last argument also applies to r = h, k = j such that (B.2) is bounded by

∆_n ∫ |F f(u)| |F f(v)| ( Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} ∫_{t_{k−1}}^{t_k} |g̃_{h,t_{k−1},r}(u, v)| dh dr ) d(u, v),

giving a bound on S_1. For S_2 note that (B.1) is equal to ∫_{t_{k−1}}^{r} ∫_{t_{j−1}}^{h} ∂²_{ab} ϕ_{a,b}(u, v) da db. This yields (ii), because S_2 is up to a constant bounded by

∆_n² ∫ |F f(u)| |F f(v)| ( Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} |∂²_{hr} ϕ_{h,r}(u, v)| dh dr ) d(u, v).

B.3 Proof of Theorem 3.4
Proof. If f is bounded, then f_m(x) = f(x) 1{‖x‖≤m} defines a sequence of bounded functions with compact support converging to f pointwise with ‖f_m‖_∞ ≤ ‖f‖_∞ for all m. If f is Hölder-continuous, then we can similarly find a sequence (f_m)_{m≥1} ⊂ C_c^∞(R^d) converging to f pointwise with ‖f_m‖_{C^s} ≤ ‖f‖_{C^s}. In both cases it follows P_{x_0}-almost surely that Γ_T(f_m) − Γ̂_{n,T}(f_m) → Γ_T(f) − Γ̂_{n,T}(f) as m → ∞ by dominated convergence. The lemma of Fatou implies

‖Γ_T(f) − Γ̂_{n,T}(f)‖²_{L²(P_{x_0})} ≤ lim inf_{m→∞} ‖Γ_T(f_m) − Γ̂_{n,T}(f_m)‖²_{L²(P_{x_0})}.

It is therefore sufficient to prove the theorem for bounded f with compact support. Conditional on x_0 the random variables (X_h, X_r), h ≠ r, have the joint densities p_{h,r}(x, y; x_0) = ξ_{0,h}(x_0, x) ξ_{h,r}(x, y), x, y ∈ R^d. Moreover, the heat kernel bounds in Assumption 3.3 imply

|p_{h,r}(x, y; x_0)| ≤ q_{r−h}(y − x) q_h(x − x_0),
|∂_r p_{h,r}(x, y; x_0)| ≤ (1/(r − h)) q_{r−h}(y − x) q_h(x − x_0),
|∂²_{hr} p_{h,r}(x, y; x_0)| ≤ ( 1/(r − h)² + 1/((r − h)h) ) q_h(x − x_0) q_{r−h}(y − x).

Then ∫ ( Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} p_{t_{k−1},r}(x, y; x_0) dr ) d(x, y) = T and Lemma B.1 yields

Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} ∫ ( |∂_r p_{h,r}(x, y; x_0)| + |∂_r p_{t_{j−1},r}(x, y; x_0)| ) d(x, y) dh dr ≲ Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} (r − h)^{−1} dh dr ≲ T log n.

Applying Proposition 3.1(i) to p_{h,r}(·; x_0) yields the claim in (i) for bounded f. For (ii), on the other hand, the moment conditions on q_a imply that ∫ ‖y − x‖^{2s} q_a(x − x_0) q_{b−a}(y − x) d(x, y) ≲ (b − a)^{2s/γ} for 0 < s ≤ γ/2. Consequently, Lemma B.1 yields for ∆_n^{−1} ∫ ‖y − x‖^{2s} ( Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} p_{t_{k−1},r}(x, y; x_0) dr ) d(x, y) up to a constant the upper bound ∆_n^{−1} Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} (r − t_{k−1})^{2s/γ} dr and also

∫ ‖y − x‖^{2s} ( Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} |∂²_{hr} p_{h,r}(x, y; x_0)| dh dr ) d(x, y) ≲ Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} ( (r − h)^{2s/γ−2} + (r − h)^{2s/γ−1} h^{−1} ) dh dr.

For 2s/γ < 1 Lemma B.1 implies for the sum of these two upper bounds the order O(T ∆_n^{2s/γ−1} + T^{2s/γ} log n), while it is O(T log n) for 2s/γ = 1. In the first case note that

T^{2s/γ} log n = T ∆_n^{2s/γ−1} T^{2s/γ−1} ∆_n^{1−2s/γ} log n ≤ T ∆_n^{2s/γ−1} (log n) / n^{1−2s/γ},

which is of order O(T ∆_n^{2s/γ−1}), i.e. there is no additional log n-term, and together with the ∆_n- and ∆_n²-prefactors from Proposition 3.1 this gives the claimed bound of order O(T ∆_n^{1+2s/γ}). This implies (ii) for f ∈ C^s(R^d).

B.4 Proof of Theorem 3.6
Proof. Note that L²(R^d) = H^0(R^d). For f ∈ H^s(R^d), 0 ≤ s ≤ 1, let (f_m)_{m≥1} ⊂ C_c^∞(R^d) be a sequence of functions converging to f with respect to ‖·‖_{H^s} with ‖f_m‖_{H^s} ≤ ‖f‖_{H^s}. Then ‖Γ_T(f) − Γ̂_{n,T}(f)‖²_{L²(P)} is bounded by

2 ‖Γ_T(f − f_m) − Γ̂_{n,T}(f − f_m)‖²_{L²(P)} + 2 ‖Γ_T(f_m) − Γ̂_{n,T}(f_m)‖²_{L²(P)}.

Then ‖Γ_T(f − f_m)‖_{L²(P)} ≲ ∫_0^T E[ ∫ ( f(x) − f_m(x) )² p_r(x) dx ]^{1/2} dr, where the marginal densities p_r satisfy sup_{0≤r≤T} |p_r(x)| = sup_{0≤r≤T} | ∫ ξ_{0,r}(x_0, x) µ(x_0) dx_0 | ≤ ‖µ‖_∞. It follows that ‖Γ_T(f − f_m)‖_{L²(P)} is up to a constant bounded by ‖f − f_m‖_{L²}, which converges to 0 as m → ∞. A similar argument shows ‖Γ̂_{n,T}(f − f_m)‖_{L²(P)} → 0 as m → ∞. It is therefore sufficient to prove the theorem for f ∈ C_c^∞(R^d).

The random variables (X_h, X_r), h ≠ r, have the joint densities p_{h,r}(x, y) = p_h(x) ξ_{h,r}(x, y), x, y ∈ R^d, and the heat kernel bounds in Assumption 3.3 imply

|p_{h,r}(x, y)| ≤ ‖µ‖_∞ q_{r−h}(y − x),
|∂_r p_{h,r}(x, y)| ≤ ‖µ‖_∞ (1/(r − h)) q_{r−h}(y − x),
|∂²_{hr} p_{h,r}(x, y)| ≤ ‖µ‖_∞ ( 1/(r − h)² + 1/((r − h)h) ) q_{r−h}(y − x).

Then ∫ f²(x) ( Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} p_{t_{k−1},r}(x, y) dr ) d(x, y) ≲ ‖µ‖_∞ ‖f‖²_{L²} T and it follows by Lemma B.1 that

∫ f²(x) ( Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} ( |∂_r p_{h,r}(x, y)| + |∂_r p_{t_{j−1},r}(x, y)| ) dh dr ) d(x, y) ≲ ‖µ‖_∞ ‖f‖²_{L²} Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} (r − h)^{−1} dh dr ≲ ‖µ‖_∞ ‖f‖²_{L²} T log n.

By symmetry the same holds with f²(y) instead of f²(x). Applying Proposition 3.1(i) along with the trivial bound ( f(x) − f(y) )² ≤ 2 f(x)² + 2 f(y)² therefore yields (i). For (ii) we distinguish the cases γ < 2 and γ = 2. Let first 0 < s ≤ γ/2 < 1. In this case, the L²-Sobolev norm defined via the Fourier transform is equivalent to the Slobodeckij-norm

‖f‖_{H̃^s} = ( ‖f‖²_{L²} + ∫ ( f(x) − f(y) )² / ‖x − y‖^{2s+d} d(x, y) )^{1/2},    (B.3)

i.e. ‖f‖_{H̃^s} ≲ ‖f‖_{H^s} (cf. Di et al. (2012) for more details). Similar to the proof of Theorem 3.4 the moment conditions on q_a imply for 0 < s ≤ γ/2 that

∆_n^{−1} ∫ ( f(y) − f(x) )² ( Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} p_{t_{k−1},r}(x, y) dr ) d(x, y)
≤ ‖f‖²_{H^s} ∆_n^{−1} Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} sup_{x,y∈R^d} ( ‖y − x‖^{2s+d} p_{t_{k−1},r}(x, y) ) dr
≲ ‖f‖²_{H^s} ∆_n^{−1} Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} (r − t_{k−1})^{2s/γ} dr,

and

∫ ( f(y) − f(x) )² ( Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} |∂²_{hr} p_{h,r}(x, y)| dh dr ) d(x, y)
≤ ‖f‖²_{H^s} sup_{x,y∈R^d} ( ‖y − x‖^{2s+d} Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} |∂²_{hr} p_{h,r}(x, y)| dh dr )
≲ ‖f‖²_{H^s} Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} ( (r − h)^{2s/γ−2} + (r − h)^{2s/γ−1} h^{−1} ) dh dr.

We surprisingly recover the same upper bounds as in the proof of Theorem 3.4. This yields the claim in (ii) for 0 < s ≤ γ/2 < 1. Consider now γ = 2 and 0 < s ≤ 1. Unfortunately, the Slobodeckij-norm is not equivalent to the ‖·‖_{H^s}-norm when s = 1. We already know from (i) that the operator Γ_T − Γ̂_{n,T} is a continuous linear operator from L²(R^d) to L²(P). It is therefore sufficient to show that it is also a continuous linear operator from H^1(R^d) to L²(P). Indeed, as the Sobolev spaces H^s(R^d) for 0 ≤ s ≤ 1 form interpolation spaces, the general claim is obtained by interpolating the operator norms of Γ_T − Γ̂_{n,T} for s = 0 and s = 1 (cf. Adams and Fournier (2003, Theorem 7.23)). For s = 1 we have f(y) − f(x) = ∫_0^1 ⟨∇f(x + t(y − x)), y − x⟩ dt. It follows for any 0 < h < r < T that

∫ ( f(y) − f(x) )² q_{r−h}(y − x) d(x, y)
≤ ∫_0^1 ∫ ‖∇f(x + t(y − x))‖² ‖y − x‖² q_{r−h}(y − x) d(x, y) dt
= ∫ ‖∇f(x + tz)‖² ‖z‖² q_{r−h}(z) d(x, z) ≲ ‖f‖²_{H^1} (r − h),

using ∫ ‖x‖² q_a(x) dx ≲ a. Proposition 3.1(ii) therefore implies

‖Γ_T(f) − Γ̂_{n,T}(f)‖²_{L²(P)} ≲ ‖µ‖_∞ ‖f‖²_{H^1} ( ∆_n Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} (r − t_{k−1}) dr + ∆_n² Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} ( (r − h)^{−1} + h^{−1} ) dh dr ).

Using the bounds from above yields the claim in (ii) for s = 1.

B.5 Proof of Theorem 3.8
Y is independent of F_0 and thus of X_0. Therefore the characteristic function of (X_h, X_r) at (u, v) ∈ R^{2d} for 0 ≤ h < r ≤ T is ϕ_{h,r}(u, v) = ϕ̃_{h,r}(u, v) F µ(u + v), where ϕ̃_{h,r}(u, v) = e^{ψ_{h,r}(v) + ψ_{0,h}(u+v)} is the characteristic function of (Y_h, Y_r). ψ_{h,r}(u) is for almost all r differentiable with

∂_r ψ_{h,r}(u) = i⟨u, b_r⟩ − (1/2)‖σ_r^⊤ u‖² + ∫ ( e^{i⟨u,x⟩} − 1 − i⟨u, x⟩ 1{‖x‖≤1} ) dF_r(x),

and also ∂²_{hr} ψ_{h,r}(u) = 0. Hence

∂_r ϕ_{h,r}(u, v) = ∂_r ψ_{h,r}(v) ϕ̃_{h,r}(u, v) F µ(u + v),    (B.4)
∂²_{hr} ϕ_{h,r}(u, v) = ( ∂_h ψ_{h,r}(v) + ∂_h ψ_{0,h}(u + v) ) ∂_r ψ_{h,r}(v) ϕ̃_{h,r}(u, v) F µ(u + v).

ϕ_{h,r} as well as the derivatives ∂_r ϕ_{h,r} and ∂²_{hr} ϕ_{h,r} satisfy the assumptions of Proposition 3.2(i) and (ii). Consider first the following lemma.

Lemma B.2. Fix u, v ∈ R^d such that v ≠ 0 and ‖u + v‖ ≠ 0 and let

U_n = Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} ( |ϕ̃_{h,r}(u, v)| + |ϕ̃_{t_{j−1},r}(u, v)| ) dh dr,
V_n = Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} ∫_{t_{k−1}}^{t_k} ( |ϕ̃_{h,r}(u, v)| + |ϕ̃_{t_{k−1},r}(u, v)| ) dh dr.

Then we have the following under the assumptions of Theorem 3.8(i):

(i) (1 + ‖v‖)^{γ+β*} U_n ≲ T² (1 + ‖v‖)^{β*/2} (1 + ‖u‖)^{β*/2}.
(ii) (1 + ‖v‖)^{γ+β*} V_n ≲ T ∆_n (1 + ‖v‖)^{γ/2+β*} (1 + ‖u‖)^{γ/2+β*}.
(iii) ( (1 + ‖v‖)^{2γ+2β*} + (1 + ‖v‖)^{γ+β*} (1 + ‖u + v‖)^{γ+β*} ) U_n ≲ T² (1 + ‖v‖)^{γ/2+β*} (1 + ‖u‖)^{γ/2+β*}.
Proof. Observe first the following estimates:

Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} |ϕ̃_{h,r}(u, v)| dh dr ≲ ∫_0^T ∫_0^T e^{−c‖v‖^γ |r−h| − c‖u+v‖^γ (h∧r)} dh dr ≲ ‖v‖^{−γ} ∫_0^T ( 1 − e^{−c‖v‖^γ (T−h)} ) e^{−c‖u+v‖^γ h} dh ≲ min{ ‖v‖^{−γ} T, ‖v‖^{−γ} ‖u + v‖^{−γ} }.    (B.5)

The same holds when ϕ̃_{h,r}(u, v) is replaced by ϕ̃_{t_{j−1},r}(u, v). Let first ‖v‖, ‖u + v‖ ≥ 1. Then (1 + ‖v‖)^{γ+β*} ≤ ‖v‖^{γ+β*/2} ‖u‖^{β*/2} + ‖v‖^{γ+β*/2} ‖u + v‖^{β*/2} and the last display, together with T ≥ 1 and β*/2 ≤ γ, yields (i). The same is true if ‖u + v‖ ≤ 1, as ‖u + v‖^{β*/2} ≤ 1. If ‖v‖ < 1, then (i) holds trivially, because |ϕ̃_{h,r}(u, v)| ≤ 1. Observe next that V_n is bounded by

2 ∆_n² Σ_{k=1}^n e^{−c‖u+v‖^γ t_{k−1}} ≲ ∆_n ∫_0^T e^{−c‖u+v‖^γ h} dh ≲ min{ T ∆_n, ∆_n ‖u + v‖^{−γ} }.    (B.6)

Let first ‖u + v‖ ≥ 1. Then (1 + ‖v‖)^{γ+β*} ≤ ‖v‖^{γ/2+β*/2} ‖u‖^{γ/2+β*/2} + ‖v‖^{γ/2+β*/2} ‖u + v‖^{γ/2+β*/2}. The last display, together with T ≥ 1 and β*/2 ≤ γ, yields (ii). Again, this remains true if ‖u + v‖ < 1. With respect to (iii) let ‖v‖, ‖u + v‖ ≥ 1. Then it follows from ‖u + v‖^{β*} ≲ ‖u‖^{β*} + ‖v‖^{β*} that

‖v‖^{2γ+2β*} + ‖v‖^{γ+β*} ‖u + v‖^{γ+β*} ≤ ‖v‖^{3γ/2+β*} ‖u‖^{γ/2+β*} + ‖v‖^{3γ/2+β*} ‖u + v‖^{γ/2+β*} + ‖v‖^{γ+2β*} ‖u + v‖^{γ} + ‖u + v‖^{γ} ‖u‖^{β*} ‖v‖^{γ+β*}.

(B.5) together with β*/2 ≤ γ implies (iii). The same holds when ‖u + v‖ < 1 as before. For ‖v‖ < 1 the trivial bound from above applies.
Proof of Theorem 3.8. Since Y and X_0 are independent, the marginals X_r have uniformly bounded densities p_r(x) ≤ ‖µ‖_∞, x ∈ R^d, even if the distributions of Y_r have no densities. By the argument at the beginning of the proof of Theorem 3.6 it is therefore enough to show the claim for f ∈ S(R^d).

Consider first the claim in (i). We only have to show it for s = β*/2 and s = γ/2 + β*. As in the proof of Theorem 3.6, the general claim for β*/2 ≤ s ≤ γ/2 + β* follows by interpolation. Let u, v ∈ R^d. Then for any 0 ≤ h, r ≤ T it holds |g_{h,r}(u, v)| ≲ |F µ(u + v)| with g from Proposition 3.2(i). Moreover, by assumption |∂_r ψ_{h,r}(v)| ≤ c(1 + ‖v‖)^{γ+β*}. Lemma B.2(i) and Proposition 3.2(i) therefore imply that ‖Γ_T(f) − Γ̂_{n,T}(f)‖²_{L²(P)} is up to a constant bounded by

T² ∆_n ∫ |F f(u)| |F f(v)| (1 + ‖u‖)^{β*/2} (1 + ‖v‖)^{β*/2} |F µ(u + v)| d(u, v).    (B.7)

Lemma A.3 shows for this the upper bound T² ∆_n ‖f‖²_{H^s}, implying the claim for s = β*/2. With respect to s = γ/2 + β* it follows similarly by Lemma B.2(ii) and (iii), Proposition 3.2(ii) and Lemma A.3 that ‖Γ_T(f) − Γ̂_{n,T}(f)‖²_{L²(P)} is up to a constant bounded by T² ∆_n ‖f‖²_{H^s}. This is the claimed bound for s = γ/2 + β*. To see that the improved bound holds note that |∂_r ψ_{h,r}(v)| ≤ c‖v‖^{γ+β*} simplifies the calculations in Lemma B.2, since there is no need to distinguish the cases ‖v‖ ≥ 1 or ‖v‖ < 1.

At last, consider (ii). From |∂_r ψ_{h,r}(v)| ≲ 1 it follows immediately that ϕ_{h,r}(u, v) and the time derivatives ∂_r ϕ_{h,r}(u, v), ∂²_{hr} ϕ_{h,r}(u, v) are bounded by T² |F µ(u + v)|. As T ≥ 1, Proposition 3.2(ii) and Lemma A.3 imply the claim. If c_1 ρ(v) ≤ ∂_r ψ_{h,r}(v) ≤ c_2 ρ(v) ≤ 0 for all 0 ≤ h, r ≤ T, then |ϕ̃_{h,r}(u, v)| ≤ e^{−c c_2 ρ(v)|r−h| − c c_2 ρ(u+v)(r∧h)} and Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} ∫_{t_{k−1}}^{t_k} |∂_r ϕ̃_{h,r}(u, v)| dh dr is up to a constant bounded by

Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} ∫_h^{t_k} ( −ρ(v) ) e^{−c c_2 ρ(v)(r−h)} dr dh ≲ Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} ( e^{−c c_2 ρ(v)(t_k−h)} − 1 ) dh,

and similarly for ∂_r ϕ̃_{t_{k−1},r}(u, v), while Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} |∂²_{hr} ϕ̃_{h,r}(u, v)| dh dr is up to a constant bounded by ∫_0^T ∫_h^T ( −ρ(v) ) e^{−c c_2 ρ(v)(r−h)} dr dh. The first expression is of order O(T ∆_n) and the second one of order O(T). Again, the claim follows from Proposition 3.2(ii) and Lemma A.3.

Remark B.3. If d = 1 and γ > 1, β* = 0, then the proof applies to X_t = Y_t. Indeed, replace T by ∫_0^T e^{−c‖u+v‖^γ h} dh in (B.5) and (B.6). Together with a slightly different argument for ‖v‖ < 1 this yields e.g. instead of (B.7) the bound

T ∆_n ∫_0^T ∫ |F f(u)| |F f(v)| e^{−c‖u+v‖^γ h} d(u, v) dh ≤ T ∆_n ∫_0^T ∫ |F f(u)|² e^{−c‖u+v‖^γ h} d(u, v) dh ≲ ‖f‖²_{H^s} T² ∆_n.

This works, because u ↦ e^{−c‖u‖^γ h} is integrable and because ∫_0^T h^{−1/γ} dh is finite.

B.6 Proof of Theorem 3.13
The characteristic function of (X_h, X_r) at (u, v) ∈ R^{2d} for 0 ≤ h < r ≤ T is ϕ_{h,r}(u, v) = ϕ̃_{h,r}(u, v) F µ(u + v), where ϕ̃_{h,r}(u, v) is the characteristic function of (B_h, B_r). As B is a Gaussian process, it follows that ϕ̃_{h,r}(u, v) is equal to e^{−(1/2)Φ_{h,r}(u,v)} with Φ_{h,r}(u, v) = ‖u‖² h^{2H} + ‖v‖² r^{2H} + 2⟨u, v⟩ c(h, r). Since fractional Brownian motion is locally nondeterministic (cf. Pitt (1978, Proposition 7.2)), there exists a constant c > 0 independent of u, v, r, h such that

Φ_{h,r}(u, v) = Var( ⟨v, B_r⟩ + ⟨u, B_h⟩ ) = Var( ⟨v, B_r − B_h⟩ + ⟨u + v, B_h⟩ ) ≥ c ( ‖v‖² (r − h)^{2H} + ‖u + v‖² h^{2H} ).

Consequently, ϕ̃_{h,r}(u, v) ≤ e^{−c‖v‖²(r−h)^{2H} − c‖u+v‖²h^{2H}}. Moreover,

∂_r ϕ_{h,r}(u, v) = −(1/2) ∂_r Φ_{h,r}(u, v) ϕ_{h,r}(u, v),
∂²_{hr} ϕ_{h,r}(u, v) = ( −(1/2) ∂²_{hr} Φ_{h,r}(u, v) + (1/4) ∂_r Φ_{h,r}(u, v) ∂_h Φ_{h,r}(u, v) ) ϕ_{h,r}(u, v),
∂_r Φ_{h,r}(u, v) = 2H( ‖v‖² + ⟨u, v⟩ ) r^{2H−1} − 2H ⟨u, v⟩ (r − h)^{2H−1},
∂²_{hr} Φ_{h,r}(u, v) = 2H(2H − 1) ⟨u, v⟩ (r − h)^{2H−2}.

We first prove a lemma. Denote for any function (r, h) ↦ g(r, h) and fixed u, v ∈ R^d by U_n(g) the sum Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} g(r, h) ϕ̃_{h,r}(u, v) dh dr.

Lemma B.4. Let T ≥ 1 and assume (X0). Fix u, v ∈ R^d \ {0} and let 0 < H < 1, H ≠ 1/2. Consider for 0 < h < r < T the functions g_1(r, h) = (r − h)^{2H−1}, g_2(r, h) = h^{2H−1}, g_3(r, h) = (r − h)^{4H−2}, g_4(r, h) = (r − h)^{2H−2}, g_5(r, h) = (r − h)^{2H−1} h^{2H−1} and g_6(r, h) = r^{2H−1} h^{2H−1}. Then we have the following estimates with absolute constants:

(i) ( ‖v‖² + ‖v‖‖u + v‖ )( U_n(g_1) + U_n(g_2) ) ≲ T,
(ii) ( ‖v‖² + ‖v‖‖u + v‖ )( U_n(g_3) + U_n(g_4) ) ≲ T^{2H} or ≲ T ∆_n^{2H−1} when H > 1/2 or H < 1/2,
(iii) ( ‖v‖ + ‖u + v‖ )²( U_n(g_5) + U_n(g_6) ) ≲ T^{2H} or ≲ T ∆_n^{2H−1} when H > 1/2 or H < 1/2,
(iv) (1 + ‖v‖) Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} ∫_h^{t_k} ( r^{2H−1} + (r − h)^{2H−1} ) ϕ̃_{h,r}(u, v) dr dh ≲ T^{2H} ∆_n.
Proof. We need to bound the integrals in U_n(g_i) in several different ways. Observe for 0 ≤ a < b ≤ T and q = 2H − 1, 4H − 2, 1 the following estimates for R^{(q)}_{a,b,v} := ∫_a^b r^q e^{−(1/2)‖v‖² r^{2H}} dr:

R^{(2H−1)}_{a,b,v} ≲ ‖v‖^{−2} ( e^{−(1/2)‖v‖² a^{2H}} − e^{−(1/2)‖v‖² b^{2H}} ) ≲ min{ ‖v‖^{−2}, ‖v‖^{−1} (b^{2H} − a^{2H})^{1/2}, b^{2H} − a^{2H} },    (B.8)
R^{(4H−2)}_{a,b,v} ≲ min{ ‖v‖^{−2} ∫_a^b r^{2H−2} dr, ‖v‖^{−1} ∫_a^b r^{3H−2} dr } ≲ min{ ‖v‖^{−2} (b^{2H−1} − a^{2H−1}), ‖v‖^{−1} (b^{3H−1} − a^{3H−1}) },    (B.9)
R^{(1)}_{a,b,v} ≲ min{ ‖v‖^{−1} ∫_a^b r^{−H} dr, b − a } ≲ min{ ‖v‖^{−1} (b^{1−H} − a^{1−H}), b − a },    (B.10)

where we used that sup_{v∈R^d} ‖v‖^p r^{pH} e^{−(1/2)‖v‖² r^{2H}} = sup_{x≥0} x^p e^{−(1/2)x²} < ∞ for any p ≥ 0. It follows from (B.8) and (B.10) that U_n(g_1) is bounded by

∫_0^T ∫_h^T (r − h)^{2H−1} e^{−c_2( ‖v‖² (r−h)^{2H} + ‖u+v‖² h^{2H} )} dr dh ≲ min{ T ‖v‖^{−2}, ‖v‖^{−1} ‖u + v‖^{−1} }.

The estimate for g_2 follows in the same way. For g_3 and H > 1/2 it follows similarly from (B.9), (B.10), T ≥ 1 and Lemma B.1 that

U_n(g_3) ≲ Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} min{ ‖v‖^{−2} (r − h)^{2H−2}, ‖v‖^{−1} ‖u + v‖^{−1} (r − h)^{3H−2} h^{−H} } dh dr ≲ T^{2H} min{ ‖v‖^{−2}, ‖v‖^{−1} ‖u + v‖^{−1} },

while for H < 1/2

U_n(g_3) ≲ T ∆_n^{2H−1} min{ ‖v‖^{−2}, ‖v‖^{−1} ‖u + v‖^{−1} }.

The estimates for g_4 follow similarly (they are even easier). With respect to g_5 the integrals decompose and (B.8) and (B.10) yield for U_n(g_5) the bound

R^{(2H−1)}_{0,T,v} R^{(2H−1)}_{0,T,u+v} ≲ T^{2H} min{ ‖v‖^{−2}, ‖v‖^{−1} ‖u + v‖^{−1}, ‖u + v‖^{−2} }.    (B.11)

For U_n(g_6), on the other hand, the same equations imply for H > 1/2 the upper bound

∫_0^T R^{(2H−1)}_{h,T,v} h^{2H−1} e^{−c_2 ‖u+v‖² h^{2H}} dh ≲ T^{2H} min{ ‖u + v‖^{−2}, ‖v‖^{−1} ‖u + v‖^{−1} },

and for H < 1/2 by r^{2H−1} h^{2H−1} ≤ h^{4H−2} and Lemma B.1

U_n(g_6) ≲ Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} min{ ‖u + v‖^{−2} h^{2H−2}, ‖u + v‖^{−1} ‖v‖^{−1} (r − h)^{−H} h^{3H−2}, ‖v‖^{−2} r^{−2H} h^{4H−2} } dh dr ≲ T ∆_n^{2H−1} min{ ‖u + v‖^{−2}, ‖v‖^{−1} ‖u + v‖^{−1}, ‖v‖^{−2} },

because T ≥ 1 and because 1 ≲ log n ≲ ∆_n^{2H−1}. Observe that we did not prove any bound on ‖v‖² U_n(g_6) for H > 1/2. For this, we need a different upper bound on ϕ̃_{h,r}(u, v). If ‖u + v‖ ≥ ‖v‖, then ϕ̃_{h,r}(u, v) ≤ e^{−c_2 ‖v‖² (r−h)^{2H} − c_2 ‖u+v‖² h^{2H}} is clearly bounded by e^{−c_2 ‖v‖² h^{2H}}. As r^{2H−1} h^{2H−1} ≲ (r − h)^{2H−1} h^{2H−1} + h^{4H−2} for H > 1/2, it thus follows from (B.11) and Lemma B.1 that

U_n(g_6) ≲ U_n(g_5) + ‖v‖^{−2} Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} h^{4H−2} dh dr ≲ T^{2H} ‖v‖^{−2}.

If ‖u + v‖ < ‖v‖, however, then ϕ̃_{h,r}(u, v) ≤ e^{−c_2( ‖v‖² r^{2H} + ‖u‖² h^{2H} )}. To see why this holds note that in this case necessarily ⟨u, v⟩ ≥ 0 by elementary geometric considerations. But then Φ_{h,r}(u, v) ≥ ‖u‖² h^{2H} + ‖v‖² r^{2H}, since also c(h, r) = E[(Y_r − Y_h)Y_h] + h^{2H} ≥ 0 (recall that increments of fractional Brownian motion are positively correlated when H > 1/2). From the new bound and (B.8) follows immediately that

U_n(g_6) ≲ ∫_0^T R^{(2H−1)}_{h,T,v} h^{2H−1} e^{−c_2 ‖u‖² h^{2H}} dh ≲ T^{2H} ‖v‖^{−2}.

Finally, with respect to (iv), (B.8) yields

Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} ∫_h^{t_k} ( r^{2H−1} + (r − h)^{2H−1} ) ϕ̃_{h,r}(u, v) dr dh ≲ T^{2H} ∆_n.

Arguing as for U_n(g_6) with the different upper bounds for ϕ̃_{h,r}(u, v), it follows that the left hand side is bounded by ‖v‖^{−1} T^{2H} ∆_n. This yields (iv).
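As a quick sanity check of the local nondeterminism bound Φ_{h,r}(u, v) ≥ c( ‖v‖²(r−h)^{2H} + ‖u+v‖²h^{2H} ) used above, the following sketch (restricting to d = 1 and searching over random (u, v, h, r) are assumptions made purely for illustration) evaluates Φ_{h,r}(u, v) = Var(vB_r + uB_h) from the fractional Brownian motion covariance and reports an empirical value for the constant c.

```python
import numpy as np

def fbm_cov(s, t, H):
    # Covariance of fractional Brownian motion: E[B_s B_t].
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))

def phi(u, v, h, r, H):
    # Var(v*B_r + u*B_h) for scalar u, v and 0 < h < r.
    return (v ** 2 * fbm_cov(r, r, H) + u ** 2 * fbm_cov(h, h, H)
            + 2 * u * v * fbm_cov(h, r, H))

rng = np.random.default_rng(0)
for H in [0.3, 0.7]:
    ratios = []
    for _ in range(20000):
        u, v = rng.normal(size=2) * 5
        h = rng.uniform(0.01, 1.0)
        r = h + rng.uniform(0.01, 1.0)
        lower = v ** 2 * (r - h) ** (2 * H) + (u + v) ** 2 * h ** (2 * H)
        ratios.append(phi(u, v, h, r, H) / lower)
    print(H, min(ratios))   # empirical estimate of the constant c
```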
Proof of Theorem 3.13. As in the proof of Theorem 3.8 it is sufficient to prove the claim for f ∈ S(R^d) and s ∈ {0, 1}. The conclusion follows by interpolation. We consider only H ≠ 1/2, since the case H = 1/2 corresponds to Brownian motion and is already covered by Example 3.10.

Let 0 ≤ h < r ≤ T and u, v ∈ R^d. From ‖u‖ ≤ ‖v‖ + ‖u + v‖ it follows that |∂_r Φ_{h,r}(u, v)| ≲ ( ‖v‖² + ‖v‖‖u + v‖ )( (r − h)^{2H−1} + h^{2H−1} ). Lemma B.4(i) therefore implies that Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} ( |∂_r ϕ_{h,r}(u, v)| + |∂_r ϕ_{t_{j−1},r}(u, v)| ) dh dr is of order O(T). Moreover, |g_{t_{k−1},r}(u, v)| ≲ |F µ(u + v)| for all 1 ≤ k ≤ n and t_{k−1} ≤ r < t_k with g from Proposition 3.2(i). Applying Proposition 3.2(i) and Lemma A.3 shows that ‖Γ_T(f) − Γ̂_{n,T}(f)‖²_{L²(P)} is up to a constant bounded by C_µ T ∆_n ‖f‖²_{L²}. With T ≤ T^{2H} for H > 1/2 this yields the claimed bound for s = 0. With respect to s = 1 note first that

|∂_r Φ_{h,r}(u, v)| ≲ (1 + ‖u‖)(1 + ‖v‖)(‖v‖ + 1)( r^{2H−1} + (r − h)^{2H−1} ),
|∂_r Φ_{h,r}(u, v) ∂_h Φ_{h,r}(u, v)| ≲ (1 + ‖u‖)(1 + ‖v‖)(‖v‖ + ‖u + v‖)²( r^{2H−1} h^{2H−1} + (r − h)^{2H−1} h^{2H−1} ) + (1 + ‖u‖)(1 + ‖v‖)( ‖v‖² + ‖v‖‖u + v‖ )(r − h)^{4H−2},
|∂²_{hr} Φ_{h,r}(u, v)| ≲ (1 + ‖u‖)(1 + ‖v‖)(r − h)^{2H−2}.

Lemma B.4(ii), (iii) and (iv) imply

∆_n^{−1} Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} ∫_{t_{k−1}}^{t_k} ( |∂_r ϕ_{h,r}(u, v)| + |∂_r ϕ_{t_{k−1},r}(u, v)| ) dh dr ≲ (1 + ‖u‖)(1 + ‖v‖)(‖v‖ + 1) T^{2H} |F µ(u + v)|,

Σ_{k−1>j≥2} ∫_{t_{k−1}}^{t_k} ∫_{t_{j−1}}^{t_j} |∂²_{hr} ϕ_{h,r}(u, v)| dh dr ≲ (1 + ‖u‖)(1 + ‖v‖) |F µ(u + v)| · ( T^{2H} for H > 1/2, or T ∆_n^{2H−1} for H < 1/2 ).

This yields the claim for s = 1 by applying Proposition 3.2(ii) and Lemma A.3 as above.

B.7 Proof of Theorem 3.14
Proof. f_{a,ε} ∈ H^{1/2−ρ}(R) for any small ρ > 0 with ‖f_{a,ε}‖_{H^{1/2−ρ}} ≲ ε^{−1+ρ}. By the triangle inequality and Theorem 3.13 (Assumption (X0) can be removed for d = 1, cf. Remark 3.9) ‖L^a_T − Γ̂_{n,T}(f_{a,ε})‖_{L²(P)} is bounded by

‖L^a_T − Γ_T(f_{a,ε})‖_{L²(P)} + ‖Γ_T(f_{a,ε}) − Γ̂_{n,T}(f_{a,ε})‖_{L²(P)} ≲ ‖L^a_T − Γ_T(f_{a,ε})‖_{L²(P)} + ε^{−1+ρ} · ( T^H ∆_n^{3/4−ρ/2} for H ≥ 1/2, or T^{1/2} ∆_n^{(1+H)/2−ρH} for H < 1/2 ).

By the occupation time formula (cf. Geman and Horowitz (1980)) and ∫ f_{a,ε}(x) dx = 1 it follows that ‖L^a_T − Γ_T(f_{a,ε})‖²_{L²(P)} is equal to E[( L^a_T − ∫ f_{a,ε}(x) L^x_T dx )²] = E[( (1/2) ∫_{−1}^1 ( L^a_T − L^{εx+a}_T ) dx )²]. Equation Pitt (1978, (4.1)) implies (together with the proof of Pitt (1978, Theorem 4)) that E[( L^a_T − L^b_T )²] ≲ (a − b)^{2ξ} for all 0 < ξ < (1 − H)/(2H). Consequently, ‖L^a_T − Γ_T(f_{a,ε})‖_{L²(P)} ≲ ε^{(1−H)/(2H)−ρ}. Optimizing in ε yields the claim.
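To make the estimator in Theorem 3.14 concrete, the following sketch is purely illustrative: it assumes the uniform kernel f_{a,ε}(x) = (2ε)^{−1} 1{|x − a| ≤ ε} (consistent with ∫ f_{a,ε} = 1 above), standard Brownian motion (H = 1/2), and ad hoc parameter choices. It approximates the local time L^a_T by the Riemann-sum estimator Γ̂_{n,T}(f_{a,ε}) = ∆_n Σ_k f_{a,ε}(X_{t_{k−1}}) on a coarse grid and compares it with a much finer reference grid.

```python
import numpy as np

rng = np.random.default_rng(1)
T, a = 1.0, 0.0
N_fine = 2 ** 18                      # fine reference grid
dt = T / N_fine
X = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.normal(size=N_fine))])

def gamma_hat(X, n, eps, a=0.0, T=1.0):
    # Riemann-sum estimator Delta_n * sum_k f_{a,eps}(X_{t_{k-1}})
    # with the uniform kernel f_{a,eps}(x) = 1{|x-a|<=eps} / (2*eps).
    step = len(X) // n                # subsample the fine path to roughly n points
    Xn = X[:-1:step][:n]
    dn = T / n
    return dn * np.sum(np.abs(Xn - a) <= eps) / (2 * eps)

reference = gamma_hat(X, N_fine, eps=1e-3)   # proxy for the local time L_T^a
for n in [200, 1000, 5000]:
    eps = (T / n) ** 0.25                    # an illustrative bandwidth shrinking with Delta_n
    print(n, gamma_hat(X, n, eps), reference)
```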
Appendix C: Proof of Theorem 4.1
Consider first the following two lemmas.
Lemma C.1. Assume (X0). For f ∈ H^1(R^d) we have

‖Γ_T(f) − E[Γ_T(f) | G_n]‖²_{L²(P)} = (∆_n²/12) E[ ∫_0^T ‖∇f(X_r)‖² dr ] + o( ∆_n² ‖f‖²_{H^1} ).

In particular, ∆_n^{−2} ‖Γ_T(f) − E[Γ_T(f) | G_n]‖²_{L²(P)} converges to E[ (1/12) ∫_0^T ‖∇f(X_r)‖² dr ] as n → ∞.
Proof. By independence of X_0 and (X_r − X_0)_{0≤r≤T} the σ-algebra G_n is also generated by X_0 and the increments X_{t_k} − X_{t_{k−1}}, 1 ≤ k ≤ n. The independence of increments and the Markov property then imply for t_{k−1} ≤ r ≤ t_k that E[f(X_r)|G_n] = E[f(X_r)|X_{t_{k−1}}, X_{t_k}]. The same argument shows that the random variables Y_k = ∫_{t_{k−1}}^{t_k} ( f(X_r) − E[f(X_r)|G_n] ) dr are uncorrelated. Therefore

‖Γ_T(f) − E[Γ_T(f) | G_n]‖²_{L²(P)} = Σ_{k=1}^n E[ Y_k² ] = Σ_{k=1}^n E[ Var_k( ∫_{t_{k−1}}^{t_k} f(X_r) dr ) ],

where Var_k(Z) is the conditional variance of a random variable Z with respect to the σ-algebra generated by X_{t_{k−1}} and X_{t_k}. In order to linearize f, note that the random variable Var_k( ∫_{t_{k−1}}^{t_k} f(X_r) dr ) = Var_k( ∫_{t_{k−1}}^{t_k} ( f(X_r) − f(X_{t_{k−1}}) ) dr ) can be written as

Var_k( ∫_{t_{k−1}}^{t_k} ⟨∇f(X_{t_{k−1}}), X_r − X_{t_{k−1}}⟩ dr ) + κ_n + Var_k( ∫_{t_{k−1}}^{t_k} ( f(X_r) − f(X_{t_{k−1}}) − ⟨∇f(X_{t_{k−1}}), X_r − X_{t_{k−1}}⟩ ) dr ),

where κ_n is the corresponding crossterm of the decomposition. From Lemma A.2(ii) and (iii) it follows that the first and the last term are of order o( ∆_n³ ‖f‖²_{H^1} ) and O( ∆_n³ a_n(f) ) = O( ∆_n³ ‖f‖²_{H^1} ), respectively, and thus by the Cauchy-Schwarz inequality κ_n = o( ∆_n³ ‖f‖²_{H^1} ). Hence, ‖Γ_T(f) − E[Γ_T(f) | G_n]‖²_{L²(P)} is equal to

Σ_{k=1}^n E[ Var_k( ∫_{t_{k−1}}^{t_k} ⟨∇f(X_{t_{k−1}}), X_r⟩ dr ) ] + o( ∆_n² ‖f‖²_{H^1} ).

Conditional on X_{t_{k−1}}, X_{t_k}, the process (X_r)_{t_{k−1}≤r≤t_k} is a Brownian bridge starting from X_{t_{k−1}} and ending at X_{t_k}. In particular, E[X_r | X_{t_{k−1}}, X_{t_k}] = X_{t_{k−1}} + ((r − t_{k−1})/∆_n)(X_{t_k} − X_{t_{k−1}}) (see e.g. Karatzas and Shreve (1991, 6.10)). The stochastic Fubini theorem and Itô isometry thus imply that the last display is equal to

Σ_{k=1}^n E[ ⟨∇f(X_{t_{k−1}}), ∫_{t_{k−1}}^{t_k} ( t_k − r − ∆_n/2 ) dX_r⟩² ] + o( ∆_n² ‖f‖²_{H^1} )
= (∆_n³/12) Σ_{k=1}^n E[ ‖∇f(X_{t_{k−1}})‖² ] + o( ∆_n² ‖f‖²_{H^1} )
= (∆_n²/12) E[ ∫_0^T ‖∇f(X_r)‖² dr ] + o( ∆_n² ‖f‖²_{H^1} ),

where the last line follows from Lemma A.2(iv).
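The key constant in the proof above is that, conditionally on its endpoints, Var( ∫_{t_{k−1}}^{t_k} X_r dr ) = ∆_n³/12 for a Brownian bridge. The following Monte Carlo sketch (the discretisation, sample size and dimension one are illustrative assumptions) checks this constant numerically.

```python
import numpy as np

rng = np.random.default_rng(2)
delta, m, n_paths = 0.5, 2000, 20000
t = np.linspace(0.0, delta, m + 1)

# Brownian bridges from 0 to 0 on [0, delta]; the variance of the time
# integral does not depend on the prescribed endpoint.
dW = np.sqrt(delta / m) * rng.normal(size=(n_paths, m))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
bridge = W - (t / delta) * W[:, -1][:, None]

integrals = np.trapz(bridge, t, axis=1)
print(integrals.var(), delta ** 3 / 12)   # the two numbers should be close
```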
Lemma C.2. Assume (X0). Fix 0 ≤ s < α and let ϕ(x) = (2π)^{−d/2} e^{−‖x‖²/2} for x ∈ R^d. Consider the approximations f_{α,ε} = f_α ∗ ϕ_ε, where ϕ_ε = ε^{−d} ϕ(ε^{−1}(·)) and ε = ∆_n^{(1/2)·(1−s)/(1−α)}. Then the following statements hold as n → ∞:

(i) ‖Γ_T(f_α − f_{α,ε}) − E[Γ_T(f_α − f_{α,ε}) | G_n]‖²_{L²(P)} = o( ∆_n^{1+s} ),
(ii) ‖Γ_T(f_{α,ε}) − E[Γ_T(f_{α,ε}) | G_n]‖²_{L²(P)} = O( ∆_n² ‖f_{α,ε}‖²_{H^1} ) = O( ∆_n^{1+s} ),
(iii) lim inf_{n→∞} ( ε^{2−2α} E[ (1/12) ∫_0^T ‖∇f_{α,ε}(X_r)‖² dr ] ) > 0.
Proof. Applying (4.1) from right to left and Theorem 3.13 for the function f = f_α − f_{α,ε} ∈ L²(R^d) shows that the left hand side of the equation in (i) is up to a constant bounded by ∆_n ‖f_α − f_{α,ε}‖²_{L²}. The Plancherel theorem and F ϕ_ε(u) = F ϕ(εu) yield that this is equal to

(2π)^{−d} ∆_n ‖F f_α (1 − F ϕ_ε)‖²_{L²} ≲ ∆_n ε^{2α} ∫ ‖u‖^{−2α−d} ( 1 − e^{−‖u‖²/2} )² du.

The du-integral is finite and therefore the last line is of order O( ∆_n ε^{2α} ) = o( ∆_n^{1+s} ), because α > s, implying (i). Similarly, applying (4.1) from right to left and Theorem 3.13 for the function f = f_{α,ε} ∈ H^1(R^d), the left hand side of the equation in (ii) is up to a constant bounded by ∆_n² ‖f_{α,ε}‖²_{H^1}. As above this can be bounded from the Plancherel theorem by

(2π)^{−d} ∆_n² ∫ |F f_α(u)|² |F ϕ(εu)|² (1 + ‖u‖)² du ≲ ∆_n² ε^{2α−2} ∫ (ε + ‖u‖)^{2−2α−d} e^{−‖u‖²/2} du ≲ ∆_n² ε^{2α−2} ∫_0^∞ (ε + r)^{1−2α} e^{−r²/2} dr.

As α < 1, the dr-integral is finite for ε = 0 and thus the last line is of order O( ∆_n² ε^{2α−2} ) = O( ∆_n^{1+s} ). This is the claimed order in (ii). Finally, with respect to (iii), denote by p_r the marginal density of X_r. Then we have by the Plancherel theorem, applied componentwise, for any T_0 > 0 that E[ (1/12) ∫_0^T ‖∇f_{α,ε}(X_r)‖² dr ] is bounded from below up to a constant by

∫_{T_0}^T ∫ ‖∇f_{α,ε}(x) p_r^{1/2}(x)‖² dx dr = (2π)^{−2d} ∫_{T_0}^T ∫ ‖ ∫ F f_α(u − y) F ϕ(ε(u − y)) (u − y) h_r(y) dy ‖² du dr,

where h_r(y) = 2^{d/2} (2π)^{d/4} r^{d/4} e^{−‖y‖² r} is the Fourier transform of p_r^{1/2}. The substitution εu ↦ u then yields that the du-integral above is equal to

ε^{2α−2} ∫ ‖ ∫ ν_ε(u − εy) h_r(y) dy ‖² du = ε^{2α−2} ∫ ‖ (ν_ε ∗ h_{r,ε})(u) ‖² du,

for h_{r,ε}(u) = ε^{−d} h_r(ε^{−1} u) and ν_ε(u) = u (ε + ‖u‖)^{−α−d/2} e^{−‖u‖²/2}. Interestingly, ν_ε ∈ L¹(R^d) ∩ L²(R^d) for all ε ≥ 0 as α < 1. As also h_{r,ε} ∈ L¹(R^d), Young's inequality, also applied componentwise, implies that

∫ ‖ ((ν_ε − ν_0) ∗ h_{r,ε})(u) ‖² du ≤ ‖h_{r,ε}‖²_{L¹} ∫ ‖ (ν_ε − ν_0)(u) ‖² du.

Since ‖h_{r,ε}‖²_{L¹} ≲ r^{−d/2}, ‖ν_ε(u)‖ ≤ ‖ν_0(u)‖ and ν_ε(u) → ν_0(u) for any u ∈ R^d we therefore conclude by dominated convergence that the last line is of order o( r^{−d/2} ). Moreover, it follows again by the Plancherel theorem with F h_{r,ε}(x) = (2π)^d p_r^{1/2}(εx) that ∫ ‖ (ν_0 ∗ h_{r,ε})(u) ‖² du = (2π)^d ∫ ‖F ν_0(x)‖² p_r(εx) dx. Letting ε → 0 yields the convergence to (2π)^{d/2} r^{−d/2} ∫ ‖F ν_0(x)‖² dx. By Pythagoras we thus find for any r > T_0 > 0 that also ∫ ‖ (ν_ε ∗ h_{r,ε})(u) ‖² du → c r^{−d/2} for some constant 0 < c < ∞. Consequently,

lim inf_{n→∞} ε^{2−2α} E[ (1/12) ∫_0^T ‖∇f_{α,ε}(X_r)‖² dr ] ≳ ∫_{T_0}^T r^{−d/2} dr,

which is bounded from below as T_0 > 0.
Now we prove the theorem.

Proof of Theorem 4.1. The first and the second inequality in (i) are clear. The limit in the last equality follows from Lemma C.1. With respect to (ii) observe that

‖Γ_T(f_α) − E[Γ_T(f_α) | G_n]‖²_{L²(P)} = ‖Γ_T(f_α − f_{α,ε}) − E[Γ_T(f_α − f_{α,ε}) | G_n]‖²_{L²(P)} + κ_n + ‖Γ_T(f_{α,ε}) − E[Γ_T(f_{α,ε}) | G_n]‖²_{L²(P)},

where κ_n is the crossterm of the expansion. From Lemma C.2 it follows that the first term is of order o( ∆_n^{1+s} ), while the third one is of order O( ∆_n^{1+s} ). Therefore, the crossterm is via the Cauchy-Schwarz inequality itself of order o( ∆_n^{1+s} ). Hence, Lemma C.1 implies that lim inf_{n→∞} ∆_n^{−(1+s)} ‖Γ_T(f_α) − E[Γ_T(f_α) | G_n]‖²_{L²(P)} is equal to

lim inf_{n→∞} ∆_n^{−(1+s)} ‖Γ_T(f_{α,ε}) − E[Γ_T(f_{α,ε}) | G_n]‖²_{L²(P)}
= lim inf_{n→∞} ∆_n^{1−s} E[ (1/12) ∫_0^T ‖∇f_{α,ε}(X_r)‖² dr ] + lim inf_{n→∞} o( ∆_n^{−(1+s)} ∆_n² ‖f_{α,ε}‖²_{H^1} ).

From part (ii) of Lemma C.2 it follows that the last term is 0, while part (iii) implies the wanted lower bound for the first term, as ∆_n^{1−s} ε^{2α−2} = 1.
References
Adams, R. and Fournier, J. (2003). Sobolev Spaces. Pure and Applied Mathematics.
Elsevier Science.
Altmeyer, R. and Chorowski, J. (2016). Estimation error for occupation time functionals of stationary Markov processes. arXiv preprint arXiv:1610.05225.
Beskos, A. and Roberts, G. O. (2005). Exact simulation of diffusions. The Annals
of Applied Probability, 15(4):2422–2444.
Billingsley, P. (2013). Convergence of Probability Measures. Wiley Series in Probability and Statistics. Wiley.
Boufoussi, B., Dozzi, M., and Guerbaz, R. (2007). Sample path properties of the
local time of multifractional Brownian motion. Bernoulli, 13(3):849–867.
Catellier, R. and Gubinelli, M. (2016). Averaging along irregular curves and regularisation of ODEs. Stochastic Processes and their Applications, 126(8):2323–2366.
Chaumont, L. and Uribe Bravo, G. (2011). Markovian bridges: Weak continuity and
pathwise constructions. The Annals of Probability, 39(2):609–647.
Chesney, M., Jeanblanc-Picqué, M., and Yor, M. (1997). Brownian Excursions and
Parisian Barrier Options. Advances in Applied Probability, 29(29).
Chorowski, J. (2015). Nonparametric volatility estimation in scalar diffusions: Optimality across observation frequencies. arXiv preprint arXiv:1507.07139.
Dalalyan, A. (2005). Sharp adaptive estimation of the drift function for ergodic
diffusions. The Annals of Statistics, 33(6):2507–2528.
Di, N., Palatucci, G., and Valdinoci, E. (2012). Hitchhiker's guide to the fractional Sobolev spaces. Bulletin des Sciences Mathématiques, 136:521–573.
Diaconis, P. (1988). Bayesian numerical analysis. Statistical decision theory and
related topics IV, 1:163–175.
Falconer, K. and Liu, L. (2012). Multistable Processes and Localizability. Stochastic
Models, 28(3):503–526.
Florens-Zmirou, D. (1993). On Estimating the Diffusion Coefficient from Discrete
Observations. Journal of Applied Probability, 30(4):790.
Fournier, N. and Printems, J. (2008). Absolute continuity for some one-dimensional
processes. Bernoulli, 16(2):343–360.
Ganychenko, I. (2015). Fast L2-approximation of integral-type functionals of Markov processes. Modern Stochastics: Theory and Applications, 2:165–171.
Ganychenko, I., Knopova, V., and Kulik, A. (2015). Accuracy of discrete approximation for integral functionals of Markov processes. Modern Stochastics: Theory
and Applications, 2(4):401–420.
Ganychenko, I. and Kulik, A. (2014). Rates of approximation of non-smooth integral
type functionals of Markov processes. Modern Stochastics: Theory and Applications, 1(2):117–126.
Geman, D. and Horowitz, J. (1980). Occupation Densities. The Annals of Probability, 8(1):1–67.
Gobet, E. and Labart, C. (2008). Sharp estimates for the convergence of the density
of the Euler scheme in small time. Electronic Communications in Probability,
13:352–363.
Hugonnier, J.-N. (1999). The Feynman–Kac formula and pricing occupation time
derivatives. International Journal of Theoretical and Applied Finance, 2(02):153–
178.
Jacod, J. (1998). Rates of convergence to the local time of a diffusion. Annales de
l’Institut Henri Poincare (B) Probability and Statistics, 34(4):505–544.
Jacod, J., Jakubowski, A., and Mémin, J. (2003). On asymptotic errors in discretization of processes. The Annals of Probability, 31(2):592–608.
Jacod, J. and Mykland, P. (2015). Microstructure noise in the continuous case:
Approximate efficiency of the adaptive pre-averaging method. Stochastic Processes
and their Applications.
Jacod, J. and Protter, P. (1988). Time Reversal on Levy Processes. The Annals of
Probability, 16(2):620–641.
Jacod, J. and Protter, P. (2011). Discretization of Processes. Stochastic Modelling
and Applied Probability. Springer Berlin Heidelberg.
Jacod, J. and Shiryaev, A. (2013). Limit Theorems for Stochastic Processes.
Grundlehren der mathematischen Wissenschaften. Springer Berlin Heidelberg.
Kallenberg, O. (2002). Foundations of Modern Probability. Probability and Its
Applications. Springer New York.
Karatzas, I. and Shreve, S. (1991). Brownian Motion and Stochastic Calculus.
Springer.
Kohatsu-Higa, A., Makhlouf, R., and Ngo, H.-L. (2014). Approximations of nonsmooth integral type functionals of one dimensional diffusion processes. Stochastic
Processes and their Applications, 124(5):1881–1909.
Li, J., Todorov, V., and Tauchen, G. (2013). Volatility occupation times. The Annals
of Statistics, 41(4):1865–1891.
Lou, S. and Ouyang, C. (2017). Local times of stochastic differential equations driven
by fractional Brownian motions. Stochastic Processes and their Applications.
Mattingly, J. C., Stuart, A. M., and Tretyakov, M. V. (2010). Convergence of
Numerical Time-Averaging and Stationary Measures via Poisson Equations. SIAM
Journal on Numerical Analysis, 48(2):552–577.
Ngo, H.-L. and Ogawa, S. (2011). On the discrete approximation of occupation time
of diffusion processes. Electronic Journal of Statistics, 5:1374–1393.
Nualart, D. (1995). The Malliavin Calculus and Related Topics. Probability and its
applications : a series of the applied probability trust. Springer-Verlag.
Pitt, L. (1978). Local times for Gaussian vector fields. Indiana University Mathematics Journal, 27(2):309–330.
Podolskij, M. and Vetter, M. (2010). Understanding limit theorems for semimartingales: a short survey. Statistica Neerlandica, 64(3):329–351.
Rényi, A. (1963). On Stable Sequences of Events. Sankhya: The Indian Journal of
Statistics, Series A, 25(3):293–302.
Romito, M. (2017). A simple method for the existence of a density for stochastic
evolutions with rough coefficients. arXiv preprint arXiv:1707.05042.
Russo, F. and Vallois, P. (1996). Ito formula for C1-functions of semimartingales.
Probability Theory and Related Fields, 104(1):27–41.
Sato, K. (1999). Lévy Processes and Infinitely Divisible Distributions. Cambridge
Studies in Advanced Mathematics. Cambridge University Press.
Tankov, P. (2003). Financial Modelling with Jump Processes. Chapman and Hall/CRC Financial Mathematics Series. CRC Press.
Triebel, H. (2010). Theory of Function Spaces. Modern Birkhäuser Classics. Springer
Basel.
| 0 |
arXiv:1711.09656v1 [math.MG] 27 Nov 2017
TWO-DIMENSIONAL SYSTOLIC COMPLEXES SATISFY
PROPERTY A
NIMA HODA
Department of Mathematics and Statistics, McGill University
Burnside Hall, Room 1005
805 Sherbrooke Street West
Montreal, QC, H3A 0B9, Canada
DAMIAN OSAJDA
Instytut Matematyczny, Uniwersytet Wroclawski
pl. Grunwaldzki 2/4, 50–384 Wroclaw, Poland
Institute of Mathematics, Polish Academy of Sciences
Śniadeckich 8, 00-656 Warszawa, Poland
Abstract. We show that 2-dimensional systolic complexes are quasi-isometric
to quadric complexes with flat intervals. We use this fact along with the weight
function of Brodzki, Campbell, Guentner, Niblo and Wright [5] to prove that
2-dimensional systolic complexes satisfy Property A.
1. Introduction
Property A is a quasi-isometry invariant of metric spaces introduced by Guoliang
Yu in his study of the Baum-Connes conjecture [16]. It may be thought of as a
non-equivariant version of amenability. As is the case for amenability, Property A
has plenty of equivalent formulations. In particular, for finitely generated groups,
it is equivalent to the exactness of the reduced C ∗ –algebra of the group and also
to the existence of an amenable action on a compact space [10, 14]. Property A
implies coarse embeddability into Hilbert space and hence the coarse Baum-Connes
Conjecture and the Novikov Conjecture. Classes of groups for which Property A
holds include Gromov hyperbolic groups [1], CAT(0) cubical groups [7], and uniform
lattices in affine buildings [6], but it is an open question whether it holds for all
CAT(0) groups.
In the current article we prove the following.
E-mail addresses: [email protected], [email protected].
Date: November 28, 2017.
2010 Mathematics Subject Classification. 20F65, 20F69, 57M20.
Key words and phrases. systolic complex, CAT(0) triangle complex, property A, boundary
amenability, exact group.
Main Theorem. Two-dimensional systolic complexes satisfy Property A.
A 2-dimensional systolic complex can be defined as a 2-dimensional simplicial
complex which is CAT(0) when equipped with a metric in which each triangle
is isometric to an equilateral Euclidean triangle. The class of isometry groups
of such complexes is vast. It contains many Gromov hyperbolic groups. It also
contains lattices in Ã2 buildings, which were already proved to satisfy Property A
by Campbell [6]. Some of these groups satisfy interesting additional properties such
as Kazhdan’s property (T). Notably, there are numerous well developed techniques
for constructing groups acting on 2-dimensional systolic complexes (with various
additional features) making them a rich source of examples. For instance, given a
finite group F and a generating set S ⊆ F \ {1} whose Cayley graph Γ(F, S) has
girth at least 6, Ballmann and Świa̧tkowski [2, Theorem 2 and Section 4] construct
a canonical infinite 2-dimensional systolic complex X whose oriented triangles are
labeled by elements of S ∪ S −1 and in which the link of every vertex is isomorphic
to Γ(F, S) with labels induced from the triangles. The labeled automorphisms of X
act simply transitively on the oriented 1-simplices of X and if (F, S) has a relation
of the form (st)3 with s, t ∈ S then X has flat planes and so is not hyperbolic.
This construction is a particular case of a development of a complex of groups as
described in Bridson and Haefliger [4, Example 4.19(2) of Chapter III.C].
We prove the Main Theorem by showing that 2-dimensional systolic complexes
are quasi-isometric to quadric complexes whose intervals with respect to a basepoint are CAT(0) square complexes (Theorem 3.1, Theorem 3.2 and Theorem 3.3).
This allows us to apply the weight function and uniform convergence argument
of Brodzki, Campbell, Guentner, Niblo and Wright [5] in their proof that finite-dimensional CAT(0) cube complexes satisfy Property A (Theorem 2.6).
Acknowledgements. N.H. was partially funded by an NSERC CGS M. D.O.
was partially supported by (Polish) Narodowe Centrum Nauki, grant no. UMO2015/18/M/ST1/00050. Parts of this research were carried out while D.O. was
visiting McGill University and while N.H. was visiting the University of Wroclaw.
The authors would like to thank both institutions for their hospitality.
2. Preliminaries
For basic algebraic topological notions such as those of CW complexes and simple connectedness we refer the reader to Hatcher [9]. A combinatorial map X → Y
between CW complexes is one whose restriction to each cell of X is a homeomorphism onto a cell of Y . All graphs considered in this paper are simplicial. We
consider metrics only on the 0-skeleton X 0 of a cell complex X as induced by
the shortest path metric on the 1-skeleton X 1 . The metric ball Br (v) (respectively,
metric sphere Sr (v)) of radius r centered at a vertex v in a complex is the subgraph
induced in the 1-skeleton by the set of vertices of distance at most (respectively,
exactly) r from v. The girth of a graph is the length of the embedded shortest
cycle. The link of a vertex v in a 2-dimensional simplicial complex is the graph
whose vertices are the neighbours of v and where two vertices are joined by an edge
if they span a triangle together with v.
2.1. Property A. Rather than defining Property A in full generality, we give the
characterization for graphs of Brodzki, Campbell, Guentner, Niblo and Wright [5,
Proposition 1.5]. A graph Γ satisfies Property A iff there exists a sequence of
Figure 1. Replacement rules (a) and (b) for quadric complexes.
constants (Cn )n∈N and a family of functions fn,v : Γ0 → N indexed by N × Γ0 such
that the following conditions hold.
(1) fn,v is supported on BCn (v).
(2) ‖f_{n,v} − f_{n,v′}‖₁ / ‖f_{n,v}‖₁ → 0 uniformly over all pairs of vertices (v, v′) joined by an edge.
2.2. Two-dimensional systolic complexes. A 2-dimensional systolic complex
is a simply connected 2-dimensional simplicial complex in which the girth of the
link of every vertex is at least 6.
The following are well known properties of systolic complexes. See Chepoi [8,
Theorem 8.1] and Januszkiewicz and Świa̧tkowski [12, Lemma 7.7].
Lemma 2.1. Let Y be a 2-dimensional systolic complex.
(1) Metric spheres in Y are triangle-free.
(2) Let u, v ∈ Y 0 . Let w, x ∈ Y 0 be neighbours of v that are closer to u than is
v. Then w and x are joined by an edge.
(3) (Triangle Condition) Let u, v, w ∈ Y 0 such that v and w are joined by an
edge and they are equidistant to u. Then there exists x ∈ Y 0 adjacent to v
and w and closer to u than are v and w.
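The defining girth-6 condition on vertex links is easy to check by machine on a finite complex. The sketch below (the toy complex and the use of the networkx library are assumptions made purely for illustration) computes the link of each vertex of a 2-dimensional simplicial complex given by its triangles and reports the girth of each link.

```python
import itertools
import networkx as nx

def girth(G):
    # Length of a shortest cycle (infinity if the graph is a forest).
    best = float("inf")
    for u, w in list(G.edges()):
        G.remove_edge(u, w)
        try:
            best = min(best, nx.shortest_path_length(G, u, w) + 1)
        except nx.NetworkXNoPath:
            pass
        G.add_edge(u, w)
    return best

def link_girths(triangles):
    # For each vertex v, the link is the graph of edges opposite to v
    # in the triangles containing v.
    vertices = set(itertools.chain.from_iterable(triangles))
    out = {}
    for v in vertices:
        link = nx.Graph()
        for tri in triangles:
            if v in tri:
                a, b = [w for w in tri if w != v]
                link.add_edge(a, b)
        out[v] = girth(link)
    return out

# Hypothetical toy complex: six triangles around the vertex 0; its link is a
# 6-cycle, so the girth condition of a 2-dimensional systolic complex holds there.
tris = [(0, i, i % 6 + 1) for i in range(1, 7)]
print(link_girths(tris))
```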
2.3. Quadric complexes. A square complex is a 2-dimensional CW complex with
a simplicial 1-skeleton such that the attaching maps of 2-cells are injective combinatorial maps from 4-cycles. The closed 2-cells of a square complex are called
squares. We assume that no two squares of a square complex are glued to the same
4-cycle of the 1-skeleton. A quadric complex is a simply connected square complex
in which, for any subcomplex on the left-hand side of Figure 1, there exists a replacement on the right-hand side. In other words, quadric complexes are simply
connected generalized (4, 4)-complexes, as defined by Wise [15], that are built out
Figure 2. Two geodesics α and β between a pair of vertices u
and v of a quadric complex along with a geodesic γ joining αm ∈ α
and βk ∈ β. Used in the proof of Theorem 2.4.
of squares. They were first studied in depth by Hoda [11]. A quadric complex is a
CAT(0) square complex if and only if its 1-skeleton is K2,3 -free.
Theorem 2.2 (Hoda [11]). Let X be a quadric complex. Metric balls are isometrically embedded in X 1 .
Lemma 2.3 (Quadrangle Condition). Let X be a quadric complex. Let u, v, w, x ∈ X^0 such that v and w are adjacent to x and v and w are closer to u than is x. Then there exists y ∈ X^0 adjacent to v and w and closer to u than are v and w.
Lemma 2.3 follows from the fact that the 1-skeleta of quadric complexes are
precisely the hereditary modular graphs. This characterization is due to the result
of Hoda that the 1-skeleta are the bi-bridged graphs [11] and the theorem of Bandelt
that a graph is bi-bridged iff it is hereditary modular [3, Theorem 1].
Let X be a quadric complex. The interval I(u, v) between a pair of vertices u
and v in X 0 is the full subcomplex induced by the union of the geodesics between
u and v.
Theorem 2.4. Let X be a quadric complex and let u, v ∈ X^0. The 1-skeleton of I(u, v) is isometrically embedded in X^1.

Proof. Suppose not. Then there are geodesics (α_i)_{i=0}^ℓ and (β_i)_{i=0}^ℓ from u to v and indices m and k such that no geodesic (γ_i)_{i=0}^n from α_m to β_k is contained in I(u, v). Choose (α_i)_i, (β_i)_i, m and k so as to minimize n = d(α_m, β_k). Without loss of generality, m ≤ k.

By Theorem 2.2 we may assume that (γ_i)_i is contained in B_k(u). Hence, since X^1 is bipartite, d(u, γ_{n−1}) = k − 1. Let (δ_i)_{i=0}^{k−1} be a geodesic from u to γ_{n−1}. Let (β′_i)_{i=0}^ℓ be the concatenation of the sequences (δ_i)_{i=0}^{k−1} and (β_i)_{i=k}^ℓ. Then (β′_i)_i is a geodesic from u to v. By the minimality of ((α_i)_i, (β_i)_i, m, k), there is a geodesic (γ′_i)_{i=0}^{n−1} from α_m to β′_{k−1} = γ_{n−1} such that (γ′_i)_i is contained in I(u, v). But then appending γ_n to (γ′_i)_i we obtain a geodesic from α_m to β_k that is contained in I(u, v). This is a contradiction.
□
Corollary 2.5. Intervals in quadric complexes are quadric.
Proof. Quadric complexes are characterized by metric properties of their 1-skeleta
[11], so an isometrically embedded full subcomplex of a quadric complex is quadric.
□
Fix a basepoint ∗ ∈ X 0 . If, for all v ∈ X 0 , the interval I(∗, v) is a CAT(0) square
complex then (X, ∗) has flat intervals. By Corollary 2.5, (X, ∗) has flat intervals if
and only if every I(∗, v) is K2,3 -free.
We now describe how we apply the results of Brodzki et al. [5] in the special case
of 2-dimensional CAT(0) cube complexes to our present situation.
Let (X, ∗) be a based quadric complex with flat intervals. Let Z_v = I(∗, v) be a CAT(0) square complex interval in a quadric complex. We will describe the weight function f_{n,v} of Brodzki et al. [5] for v in Z_v. For w ∈ Z_v^0, let ρ(w) be the number of neighbours of w in Z_v^1 that lie on geodesics from w to ∗. The deficiency of w ∈ Z_v^0 is defined as follows.

δ(w) = 2 − ρ(w)

Define f_{n,v} : Z_v^0 → N as follows.

f_{n,v}(w) = 0                                       if d(w, v) > n,
f_{n,v}(w) = 1                                       if d(w, v) ≤ n and δ(w) = 0,
f_{n,v}(w) = n − d(w, v) + 1                         if d(w, v) ≤ n and δ(w) = 1,
f_{n,v}(w) = (1/2)(n − d(w, v) + 2)(n − d(w, v) + 1) if d(w, v) ≤ n and δ(w) = 2.
We extend fn,v by zeroes to all of X 0 . Note that if v ′ is a neighbour of v, then
Zv′ ⊆ Zv or Zv ⊆ Zv′ , say the latter, and that Zv and Zv′ are both intervals of ∗
in Zv′ , which by flatness is a CAT(0) square complex. So we may apply the results
of Brodzki et al. that
‖f_{n,v}‖₁ = (1/2)(n + 2)(n + 1)
[5, Proposition 3.10] and for a neighbour v ′ of v,
||fn,v − fn,v′ ||1 = 2(n + 1)
[5, Proposition 3.11] and so we have the following.
Theorem 2.6. Let X be a quadric complex. If there exists ∗ ∈ X 0 such that (X, ∗)
has flat intervals then X satisfies Property A.
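The following sketch (the grid-graph example and the helper names are ours, introduced purely for illustration) computes the weight function f_{n,v} above on a plain square grid, which is the interval between opposite corners of a CAT(0) square complex, and prints ‖f_{n,v}‖₁ next to the value (1/2)(n + 2)(n + 1) quoted from [5].

```python
import networkx as nx

def weight_function(Z, base, v, n):
    """f_{n,v} on the 1-skeleton Z of the interval I(base, v)."""
    d_base = nx.single_source_shortest_path_length(Z, base)
    d_v = nx.single_source_shortest_path_length(Z, v)
    f = {}
    for w in Z.nodes():
        rho = sum(1 for x in Z[w] if d_base[x] < d_base[w])   # neighbours towards the basepoint
        delta = 2 - rho                                       # deficiency
        k = d_v[w]
        if k > n:
            f[w] = 0
        elif delta == 0:
            f[w] = 1
        elif delta == 1:
            f[w] = n - k + 1
        else:
            f[w] = (n - k + 2) * (n - k + 1) // 2
    return f

# Interval between opposite corners of an a-by-b grid (take n <= min(a, b)).
a, b, n = 6, 5, 4
Z = nx.grid_2d_graph(a + 1, b + 1)
f = weight_function(Z, base=(0, 0), v=(a, b), n=n)
print(sum(f.values()), (n + 2) * (n + 1) // 2)
```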
3. The squaring of 2-dimensional systolic complexes
Let (Y, ∗) be a 2-dimensional systolic complex with a basepoint ∗ ∈ Y^0. Let X_Y^1 be the subgraph of Y^1 given by the union of all edges whose endpoints are not equidistant to ∗. Note that X_Y^1 is bipartite. The squaring of (Y, ∗) is the based square complex (X_Y, ∗) obtained from X_Y^1 by attaching a unique square along its boundary to each embedded 4-cycle of X_Y^1.
Theorem 3.1. Let (Y, ∗) be a based 2-dimensional systolic complex and let (XY , ∗)
be the squaring of (Y, ∗). Then Y 1 is quasi-isometric to XY1 .
Proof. Applying Lemma 2.1(3) to ∗ and an edge e of Y 1 that is not contained in
XY1 gives us a triangle, one of whose edges is e and whose remaining edges are
contained in XY1 . This ensures that distances in XY1 increase by at most a factor
of two relative to distances in Y 1 .
□
Theorem 3.2. Let (Y, ∗) be a based 2-dimensional systolic complex. The squaring
(XY , ∗) of (Y, ∗) is quadric.
Figure 3. A K_{2,2} spanned by the vertices a_0, a_1, b_0 and b_1 and embedded in a particular way in the interval I(a, b) of a graph. Such an embedding is not possible in the squaring of a based 2-dimensional systolic complex. Used in the proof of Theorem 3.3.
Proof. We need to show that XY is simply connected and that, for every subgraph
of XY1 as in the left-hand side of Figure 1b, a pair of antipodal vertices in the outer
6-cycle is joined by an edge.
To show that XY is simply connected it suffices to show that, for every embedded
cycle α of XY1 , there is a 2-dimensional square complex D′ homeomorphic to a
2-dimensional disk and a combinatorial map D′ → XY whose restriction to the
boundary ∂D′ of D′ is α. By the van Kampen Lemma [13, Proposition 9.2 of
Section III.9] since Y is simply connected, there exists a 2-dimensional simplicial
complex D homeomorphic to a 2-disk D and a combinatorial map D → Y which
restricts to α on the boundary. Choose such D → Y so as to minimize the number
of triangles of D. By Lemma 2.1(1), each triangle of D has a unique edge e that
is not contained in XY1 . Then e is contained in the interior of D and, by the
minimality of D → Y , the star of e is embedded in Y . Let D′ be the result of
deleting all such e from D1 and then spanning a square on each embedded 4-cycle.
Since every embedded 4-cycle of XY spans a square, we may extend (D′ )1 → (XY )1
to D′ → XY . This proves that XY is simply connected.
Let W be a subgraph of XY1 as in the left-hand side of Figure 1b and with the
same vertex labels. By 6-largeness of Y , each of the embedded 4-cycles of W have
a diagonal. Since the girth of the link of u is at least 6, these diagonals must join u
to each of the bi . Hence u is adjacent to every vertex in the outer 6-cycle C of W .
Let v be a furthest vertex of C from ∗. By Lemma 2.1(2), the neighbours of
v in C are joined by an edge. But then there is a 5-cycle in the link of u which
contradicts the 2-dimensional systolicity of Y .
□
Theorem 3.3. Let (Y, ∗) be a based 2-dimensional systolic complex. The squaring
(XY , ∗) of (Y, ∗) has flat intervals.
Proof. Suppose there is a K2,3 in an interval I(∗, v) of XY . Let {a0 , a1 }⊔{b0 , b1 , b2 }
be the bipartition of the K2,3 . Some pair of vertices of {b0 , b1 , b2 } are equidistant
to ∗, say {b0 , b1 }.
Consider the case where a0 and a1 are equidistant to ∗. Let a be the closer of
∗ and v to a0 and let b be the closer of ∗ and v to b0 . Let a′ and b′ be obtained
by applying Lemma 2.3 to a0 , a1 and a and to b0 , b1 and b, as in Figure 3. By
6-largeness of Y , the 4-cycle (a0 , a′ , a1 , b0 ) has a diagonal in Y 1 . Since (a′ , a0 , b0 )
is a geodesic, the diagonal must join a0 and a1 . Similarly, b0 and b1 are joined
by edge in Y 1 and hence, by flagness, {a0 , a1 , b0 , b1 } spans a 3-simplex in Y . This
contradicts the 2-dimensionality of Y .
In the remaining case a0 and a1 are not equidistant to ∗. Then the bi must all be
equidistant to ∗ with the (a0 , bi , a1 ) all geodesics. Applying a similar argument as
in the previous case to the 4-cycles (a0 , bi , a1 , bj ) we see that the bi span a triangle
in Y and so, together with a0 , they span a 3-simplex contradicting, again, the
2-dimensionality of Y .
□
As an immediate consequence of Theorem 3.1, Theorem 3.2, Theorem 3.3 and
Theorem 2.6 we have the Main Theorem.
References
[1] S. Adams. Boundary amenability for word hyperbolic groups and an application to smooth
dynamics of simple groups. Topology, 33(4):765–783, 1994.
[2] W. Ballmann and J. Świa̧tkowski. On L2 -cohomology and property (T) for automorphism
groups of polyhedral cell complexes. Geom. Funct. Anal., 7(4):615–645, 1997.
[3] H.-J. Bandelt. Hereditary modular graphs. Combinatorica, 8(2):149–157, 1988.
[4] M. Bridson and A. Haefliger. Metric spaces of non-positive curvature, volume 319. Springer,
1999.
[5] J. Brodzki, S. J. Campbell, E. Guentner, G. A. Niblo, and N. J. Wright. Property A and
CAT(0) cube complexes. J. Funct. Anal., 256(5):1408–1431, 2009.
[6] S. J. Campbell. Property A and affine buildings. J. Funct. Anal., 256(2):409–431, 2009.
[7] S. J. Campbell and G. A. Niblo. Hilbert space compression and exactness of discrete groups.
J. Funct. Anal., 222(2):292–305, 2005.
[8] V. Chepoi. Graphs of some CAT(0) complexes. Adv. in Appl. Math., 24(2):125–179, 2000.
[9] A. Hatcher. Algebraic topology. Cambridge University Press, Cambridge, 2002.
[10] N. Higson and J. Roe. Amenable group actions and the Novikov conjecture. J. Reine Angew.
Math., 519:143–153, 2000.
[11] N. Hoda. Quadric complexes. Preprint, arXiv:1711.05844, 2017.
[12] T. Januszkiewicz and J. Świa̧tkowski. Simplicial nonpositive curvature. Publ. Math. Inst.
Hautes Études Sci., 104(1):1–85, 2006.
[13] R. C. Lyndon and P. E. Schupp. Combinatorial group theory. Classics in Mathematics.
Springer-Verlag, Berlin, 2001. Reprint of the 1977 edition.
[14] N. Ozawa. Amenable actions and exactness for discrete groups. C. R. Acad. Sci. Paris Sér.
I Math., 330(8):691–695, 2000.
[15] D. T. Wise. Sixtolic complexes and their fundamental groups. Preprint, 2003.
[16] G. Yu. The coarse Baum-Connes conjecture for spaces which admit a uniform embedding
into Hilbert space. Invent. Math., 139(1):201–240, 2000.
| 4 |
GLOBAL TESTING AGAINST SPARSE ALTERNATIVES
UNDER ISING MODELS
By Rajarshi Mukherjee‡ , Sumit Mukherjee§ , and Ming Yuan¶
arXiv:1611.08293v2 [math.ST] 5 Oct 2017
University of California, Berkeley‡ and Columbia University
§¶
In this paper, we study the effect of dependence on detecting
sparse signals. In particular, we focus on global testing against sparse
alternatives for the means of binary outcomes following an Ising
model, and establish how the interplay between the strength and
sparsity of a signal determines its detectability under various notions
of dependence. The profound impact of dependence is best illustrated
under the Curie-Weiss model where we observe the effect of a “thermodynamic” phase transition. In particular, the critical state exhibits
a subtle “blessing of dependence” phenomenon in that one can detect much weaker signals at criticality than otherwise. Furthermore,
we develop a testing procedure that is broadly applicable to account
for dependence and show that it is asymptotically minimax optimal
under fairly general regularity conditions.
1. Introduction. Motivated by applications in a multitude of scientific
disciplines, statistical analysis of “sparse signals” in a high dimensional setting, be it large-scale multiple testing or screening for relevant features, has
drawn considerable attention in recent years. For more discussions on sparse
signal detection type problems see, e.g., Addario-Berry et al. (2010); AriasCastro, Donoho and Huo (2005); Arias-Castro and Wang (2015); AriasCastro et al. (2008); Cai and Yuan (2014); Donoho and Jin (2004); Hall
and Jin (2010); Ingster, Tsybakov and Verzelen (2010); Mukherjee, Pillai
and Lin (2015), and references therein. A critical assumption often made in
these studies is that the observations are independent. Recognizing the potential limitation of this assumption, several recent attempts have been made
to understand the implications of dependence in both theory and methodology. See, e.g., Arias-Castro, Candès and Plan (2011); Hall and Jin (2008,
2010); Jin and Ke (2014); Wu et al. (2014). These earlier efforts, setting in
the context of Gaussian sequence or regression models, show that it is important to account for dependence among observations, and under suitable
§
The research of Sumit Mukherjee was supported in part by NSF Grant DMS-1712037.
The research of Ming Yuan was supported in part by NSF FRG Grant DMS-1265202,
and NIH Grant 1-U54AI117924-01.
AMS 2000 subject classifications: Primary 62G10, 62G20, 62C20
Keywords and phrases: Detection Boundary, Ising Models, Phase Transitions, Sparse
Signals
¶
1
2
conditions, doing so appropriately may lead to tests that are as powerful as
if the observations were independent. However, it remains largely unknown
how the dependence may affect our ability to detect sparse signals beyond
Gaussian models. The main goal of the present work is to fill in this void.
In particular, we investigate the effect of dependence on detection of sparse
signals for Bernoulli sequences, a class of problems arising naturally in many
genomics applications (e.g., Mukherjee, Pillai and Lin, 2015).
Let X = (X1 , . . . , Xn )> ∈ {±1}n be a random vector such that P(Xi =
+1) = pi . In a canonical multiple testing setup, we want to test collectively that H0 : pi = 1/2, i = 1, 2, . . . , n. Of particular interest here is the
setting when Xi ’s may be dependent. A general framework to capture the
dependence among a sequence of binary random variables is the so-called
Ising models, which have been studied extensively in the literature (Ellis
and Newman, 1978; Ising, 1925; Majewski, Li and Ott, 2001; Mezard and
Montanari, 2009; Onsager, 1944; Stauffer, 2008). An Ising model specifies
the joint distribution of X as:
1
1 >
>
(1) PQ,µµ (X = x) :=
exp
x Qx + µ x ,
∀x ∈ {±1}n ,
Z(Q, µ )
2
where Q is an n × n symmetric and hollow matrix, µ := (µ1 , . . . , µn )> ∈ Rn ,
and Z(Q, µ ) is a normalizing constant. Throughout the rest of the paper,
the expectation operator corresponding to (1) will be analogously denoted
by EQ,µµ . It is clear that the matrix Q characterizes the dependence among
the coordinates of X, and Xi ’s are independent if Q = 0. Under model (1),
the relevant null hypothesis can be expressed as µ = 0. More specifically,
we are interested in testing it against a sparse alternative:
H0 : µ = 0
(2)
vs
H1 : µ ∈ Ξ(s, B),
where
µ)| = s, and
Ξ(s, B) := µ ∈ Rn : |supp(µ
min
µ)
i∈supp(µ
µi ≥ B > 0 ,
and
µ) := {1 ≤ i ≤ n : µi 6= 0}.
supp(µ
Our goal here is to study the impact of Q in doing so.
To this end, we adopt an asymptotic minimax framework that can be
traced back at least to Burnashev (1979); Ingster (1994, 1998). See Ingster
and Suslina (2003) for further discussions. Let a statistical test for H0 versus
H1 be a measurable {0, 1} valued function of the data X, with 1 indicating
DETECTION THRESHOLDS FOR ISING MODELS
3
rejecting the null hypothesis H0 and 0 otherwise. The worst case risk of a
test T : {±1}n → {0, 1} can be given by
(3)
Risk(T, Ξ(s, B), Q) := PQ,0 (T (X) = 1) +
sup
PQ,µµ (T (X) = 0) ,
µ ∈Ξ(s,B)
where PQ,µµ denotes the probability measure as specified by (1). We say
that a sequence of tests T indexed by n corresponding to a sequence of
model-problem pair (1) and (3), to be asymptotically powerful (respectively
asymptotically not powerful) against Ξ(s, B) if
(4)
lim sup Risk(T, Ξ(s, B), Q) = 0 (respectively lim inf Risk(T, Ξ(s, B), Q) > 0).
n→∞
n→∞
The goal of the current paper is to characterize how the sparsity s and
µ) jointly determine if there is a powerful test, and
strength B of the signal (µ
how the behavior changes with Q. In particular,
• for a general class of Ising models, we provide tests for detecting arbitrary sparse signals and show that they are asymptotically rate optimal
for Ising models on regular graphs in the high temperature regime;
• for Ising models on the cycle graph, we establish rate optimal results
for all regimes of temperature, and show that the detection thresholds
are the same as the independent case;
• for the Curie-Weiss model (Kac, 1969; Nishimori, 2001), we provide
sharp asymptotic detection thresholds for detecting arbitrarily sparse
signals, which reveal an interesting phenomenon at the thermodynamic
phase transition point of a Curie-Weiss magnet.
Our tools for analyzing the rate optimal tests depend on the method of
exchangeable pairs (Chatterjee, 2007b), which might be of independent interest.
The rest of the paper is organized as follows. In Section 2 we study in detail
the optimal detection thresholds for the Curie-Weiss model and explore the
effects of the presence of a “thermodynamic phase transition” in the model.
Section 3 is devoted to developing and analyzing testing procedures in the
context of more general Ising models where we also show that under some
conditions on Q, the proposed testing procedure is indeed asymptotically
optimal. Finally we conclude with some discussions in Section 5. The proof
of the main results is relegated to Section 6. The proof of additional technical
arguments can be found in Mukherjee, Mukherjee and Yuan (2017).
2. Sparse Testing under Curie-Weiss Model. In most statistical
problems, dependence reduces effective sample size and therefore makes inference harder. This, however, turns out not necessarily to be the case in
4
our setting. The effect of dependence on sparse testing under Ising model is
more profound. To make this more clear we first consider one of the most
popular examples of Ising models, namely the Curie-Weiss model. In the
Curie-Weiss model,
n
X
X
1
θ
Pθ,µµ (X = x) :=
xi xj +
exp
µi xi ,
(5)
Z(θ, µ )
n
i=1
1≤i<j≤n
where in this section, with slight abuse of notation, we rename PQ,µµ , EQ,µµ ,
and Z(Q, µ ) by Pθ,µµ , Eθ,µµ and Z(θ, µ ) respectively, for brevity. The CurieWeiss model is deceivingly simple and one of the classical examples that
exhibit the so-called “thermodynamic” phase transition at θ = 1. See, e.g.,
Kac (1969); Nishimori (2001). It turns out that such a phase transition
directly impacts how well a sparse signal can be detected. Following the
convention, we shall refer to θ = 1 as the critical state, θ > 1 the low
temperature states and θ < 1 the high temperature states.
2.1. High temperature states. We consider first the high temperature
case i.e. 0 ≤ θ < 1. It is instructive to begin with the case when θ = 0,
that is, X1 , . . . , Xn are independent Bernoulli random variables. By Central
Limit Theorem
!
!
n
n
√
1X
1X
n X̄ −
tanh(µi ) →d N 0,
sech2 (µi ) ,
n
n
i=1
i=1
where
n
X̄ =
1X
Xi .
n
i=1
In particular, under the null hypothesis,
√
nX̄ →d N (0, 1) .
√
This immediately suggests a test that rejects H0 if and only nX̄ ≥ Ln
for a diverging sequence Ln = o(n−1/2 s tanh(B)) is asymptotic powerful,
in the sense of (4), for testing (2) whenever s tanh(B) n1/2 . This turns
out to be the best one can do in that there is no powerful test for testing
(2) if s tanh(B) = O(n1/2 ). See, e.g., Mukherjee, Pillai and Lin (2015). An
immediate question of interest is what happens if there is dependence, that
is 0 < θ < 1. This is answered by Theorem 1 below.
DETECTION THRESHOLDS FOR ISING MODELS
5
Theorem 1. Consider testing (2) based on X following the Curie-Weiss
model (5) with 0 ≤ θ < 1. If s tanh(B) n1/2 , then the test that rejects H0 if
√
and only if nX̄ ≥ Ln for a diverging Ln such that Ln = o(n−1/2 s tanh(B))
is asymptotically powerful for (2). Conversely, if s tanh(B) = O(n1/2 ), then
there is no asymptotically powerful test for (2).
Theorem 1 shows that, under high temperature states, the sparse testing
problem (2) behaves similarly to the independent case. Not only the detection limit remains the same, but also it can be attained even if one neglects
the dependence while constructing the test.
2.2. Low temperature states. Now consider the low temperature case
√
when θ > 1. The naı̈ve test that rejects H0 whenever nX̄ ≥ Ln is no
longer asymptotically powerful in these situations. In particular, X̄ con√
centrates around the roots of x = tanh(θx) and nX̄ is larger than any
Ln = O(n1/2 ) with a non-vanishing probability, which results in an asymptotically strictly positive probability of Type I error for a test based on
√
rejecting H0 if nX̄ ≥ Ln .
To overcome this difficulty, we shall consider a slightly modified test statistic:
n
X
1 X
θ
X̃ =
Xj ,
Xi − tanh
n
n
i=1
Note that
j6=i
X
θ
tanh
Xj = Eθ,0 (Xi |Xj : j 6= i)
n
j6=i
is the conditional mean of Xi given {Xj : j 6= i} under the Curie-Weiss model
with µ = 0. In other words, we average after centering each observation
Xi by its conditional mean, instead of the unconditional mean, under H0 .
The idea of centering by the conditional mean is similar in spirit to the
pseudo-likelihood estimate of Besag (1974, 1975). See also Bhattacharya
and Mukherjee (2015); Chatterjee (2007a); Guyon (1995).
√
We can then proceed to reject H0 if and only if nX̃ ≥ Ln . The next
theorem shows that this procedure is indeed optimal with appropriate choice
of Ln .
Theorem 2. Consider testing (2) based on X following the Curie-Weiss
model (5) with θ > 1. If s tanh(B) n1/2 , then the test that rejects H0 if
√
and only if nX̃ ≥ Ln for a diverging Ln such that Ln = o(n−1/2 s tanh(B))
6
is asymptotically powerful for (2). Conversely, if s tanh(B) = O(n1/2 ), then
there is no asymptotically powerful test for (2).
Theorem 2 shows that the detection limits for low temperature states
remain the same as that for high temperature states, but a different test is
required to achieve it.
2.3. Critical state. The situation however changes at the critical state
θ = 1, where a much weaker signal could still be detected. This is made
precise by our next theorem, where we show that detection thresholds, in
terms of s tanh(B), for the corresponding Curie-Weiss model at criticality
scales as n−3/4 instead of n−1/2 as in either low or high temperature states.
Moreover, it is attainable by the test that rejects H0 whenever n1/4 X̄ ≥ Ln
for appropriately chosen Ln .
Theorem 3. Consider testing (2) based on X following the Curie-Weiss
model (5) with θ = 1. If s tanh(B) n1/4 , then a test that rejects H0 if
and only if n1/4 X̄ ≥ Ln for a suitably chosen diverging sequence Ln , is
asymptotically powerful for (2). Conversely, if s tanh(B) = O(n1/4 ), then
there is no asymptotically powerful test for (2).
A few comments are in order about the implications of Theorem 3 in
contrast to TheoremP1 and 2. Previously, the distributional limits for the
total magnetization ni=1 Xi has been characterized in all the three regimes
of high (θ < 1), low (θ > 1), and critical (θ = 1) temperatures (Ellis and
Newman, 1978) when µ = 0. More specifically, they show that
√
d
nX̄ → N 0,
d
√
n1/4 X̄ → W
d
n(X̄ − m(θ))|X̄ > 0
→ N 0,
1
1−θ
if θ < 1,
if θ = 1,
1
1 − θ(1 − m(θ)2 )
if θ > 1,
4
where W is a random variable on R with density proportional to e−x /12
with respect to Lebesgue measure, and m(θ) is the unique positive root of
the equation z = tanh(θz) for θ > 1. A central quantity of their analysis is studying the roots of this equation. Our results demonstrate parallel behavior in terms of detection of sparse external magnetization µ . In
particular, if the vector µ = (B, · · · , B, 0, 0, · · · , 0) with the number of
nonzero components equal to s, we obtain the fixed point equation z =
p tanh(θz + B) + (1 − p) tanh(θz), where p := s/n. One can get an informal
DETECTION THRESHOLDS FOR ISING MODELS
7
explanation of the detection boundary for the various cases from this fixed
point equation. As for example in the critical case when θ = 1, we get the
equation
z = p tanh(z +B)+(1−p) tanh(z) ⇒ z −tanh(z) = p[tanh(z +B)−tanh(z)].
The LHS of the second equality is of order z 3 for z ≈ 0, and the RHS is
of order p tanh(B). This gives the relation z 3 ∼ (p tanh B), which gives the
asymptotic order of the mean of X̄ under the alternative as z ∼ (p tanh B)1/3 .
Since under H0 the fluctuation of X̄ is n−1/4 , for successful detection we need
n−1/4 (p tanh B)1/3 , which is equivalent to s tanh(B) n1/4 on recalling
that s = np. Similar heuristic justification holds for other values of θ as well.
Interestingly, both below and above phase transition the detection problem considered here behaves similar to that in a disordered system of i.i.d.
random variables, in spite having different asymptotic behavior of the total magnetization in the two regimes. However, an interesting phenomenon
continues to emerge at θ = 1 where one can detect a much smaller signal or
external magnetization (magnitude of s tanh(B)). In particular, according to
√
Theorem 1 and Theorem 2, no signal is detectable of sparsity s n, when
θ 6= 1. In contrast, Theorem 3 establishes signals satisfying s tanh(B) n1/4
√
is detectable for n1/4 . s n, where an . bn means an = O(bn ). As mentioned before, it is well known the Curie-Weiss model undergoes a phase
transition at θ = 1. Theorem 3 provides a rigorous verification of the fact
that the phase transition point θ = 1 can reflect itself in terms of detection
problems, even though θ is a nuisance parameter. In particular, the detection is easier than at non-criticality. This is interesting in its own right since
the concentration of X̄ under the null hypothesis is weaker than that for
θ < 1 (Chatterjee et al., 2010) and yet a smaller amount of signal enables us
to break free of the null fluctuations. We shall make this phenomenon more
transparent in the proof of the theorem.
3. Sparse Testing under General Ising Models. As we can see
from the previous section, the effect of dependence on sparse testing under Ising models is more subtle than the Gaussian case. It is of interest
to investigate to what extent the behavior we observed for the Curie-Weiss
model applies to the more general Ising model, and whether there is a more
broadly applicable strategy to deal with the general dependence structure.
To this end, we further explore the idea of centering by the conditional mean
we employed to treat low temperature states under Curie-Weiss model, and
argue that it indeed works under fairly general situations.
8
3.1. Conditional mean centered tests. Note that under the Ising model
(1),
EQ,0 (Xi |Xj : j 6= i) = tanh(mi (X))
where
mi (X) =
n
X
Qij Xj .
j=1
Following the same idea as before, we shall consider a test statistic
n
X̃ =
1X
[Xi − tanh(mi (X))],
n
i=1
√
and proceed to reject H0 if and only if nX̃ ≥ Ln . The following result shows
that the same detection limit s tanh(B) n1/2 can be achieved by this test
as long as kQk`∞ →`∞ = Op (1), where kQk`p →`q = maxkxk`p ≤1 kQxk`q for
p, q > 0.
Theorem 4. Let X follow an Ising model (1) with Q such that kQk`∞ →`∞ =
O(1). Consider testing hypotheses about µ as described by (2). If s tanh(B)
√
n1/2 , then the test that rejects H0 if and only if nX̃ ≥ Ln for any Ln → ∞
such that Ln = o(n−1/2 s tanh(B)) is asymptotically powerful.
The condition kQk`∞ →`∞ = O(1) is a regularity condition which holds for
many common examples of the Ising model in the literature. In particular, Q
oftentimes can be associated with a certain graph G = (V, E) with vertex set
V = [n] := {1, . . . , n} and edge set E ⊆ [n] × [n] so that Q = (nθ)G/(2|E|),
where G is the adjacency matrix for G, |E| is the cardinality of E, and
θ ∈ R is a parameter independent of n deciding the degree of dependence in
the spin-system. Below we provide several more specific examples that are
commonly studied in the literature.
Dense Graphs:. Recall that
kQk`∞ →`∞ = max
1≤i≤n
n
X
j=1
|Qij | ≤
n2 |θ|
.
2|E|
If the dependence structure is guided by densely labeled graphs so that
|E| = Θ(n2 ), then kQk`∞ →`∞ = O(1).
DETECTION THRESHOLDS FOR ISING MODELS
9
Regular Graphs:. When the dependence structure is guided by a regular
graph of degree dn , we can write Q = θG/dn . Therefore,
kQk`∞ →`∞ = max
1≤i≤n
n
X
|Qij | =
j=1
|θ|
· dn = |θ|,
dn
and again obeying the condition kQk`∞ →`∞ = O(1).
Erdös-Rényi Graphs:. Another example is the Erdös-Rényi graph where an
edge between each pair of nodes is present with probability pn independent
of each other. It is not hard to derive from Chernoff bound and union bounds
that the maximum degree dmax and the totally number of edges |E| of an
Erdös-Rényi graph satisfy with high probability:
dmax ≤ npn (1 + δ),
and
|E| ≥
n(n − 1)
pn · (1 − δ)
2
for any δ ∈ (0, 1), provided that npn log n. This immediately implies that
kQk`∞ →`∞ = Op (1).
In other words, the detection limit established in Theorem 4 applies to
all these types of Ising models. In particular, it suggests that, under Curie√
Weiss model, the nX̃ based test can detect sparse external magnetization
µ ∈ Ξ(s, B) if s tanh(B) n1/2 , for any θ ∈ R, which, in the light of
Theorems 1 and 2, is optimal in both high and low temperature states.
3.2. Optimality. The detection limit presented in Theorem 4 matches
those obtained for independent Bernoulli sequence model. It is of interest to
understand to what extent the upper bounds in Theorem 4 are sharp. The
answer to this question might be subtle. In particular, as we see in the CurieWeiss case, the optimal rates of detection thresholds depend on the presence
of thermodynamic phase transition in the null model. To further illustrate
the role of criticality, we now consider an example of the Ising model without
phase transition and the corresponding behavior of the detection problem
(2) in that case. Let
θ
Qi,j = I{|i − j| = 1
2
mod n}
so that the corresponding Ising model can be identified with a cycle graph
of length n. Our next result shows that the detection threshold remains the
same for any θ, and is the same as the independent case i.e. θ = 0.
10
Theorem 5. Suppose X ∼ PQ,µµ , where Q is the scaled adjancency matrix of the cycle graph of length n, that is, Qi,j = 2θ 1{|i − j| = 1 mod n}
√
for some θ ∈ R. If s tanh(B) ≤ C n for some C > 0, then no test is
asymptotically powerful for the testing problem (2).
In view of Theorem 4, if s tanh(B) n1/2 , then the test that rejects H0 if
√
and only if nX̃ ≥ Ln for any Ln → ∞ such that Ln = o(n−1/2 s tanh(B))
is asymptotically powerful for the testing problem (2). Together with Theorem 5, this shows that for the Ising model on the cycle graph of length n,
which is a physical model without thermodynamic phase transitions, the detection thresholds mirror those obtained in independent Bernoulli sequence
problems (Mukherjee, Pillai and Lin, 2015).
The difference between these results and those for the Curie-Weiss model
demonstrates the difficulty of a unified and complete treatment to general
Ising models. We offer here, instead, a partial answer and show that the test
described earlier in the section (Theorem 4) is indeed optimal under fairly
general weak dependence for reasonably regular graphs.
Theorem 6. Suppose X ∼ PQ,µµ as in (1) and consider testing hypotheses about µ as described by (2). Assume Qi,j ≥ 0 for all (i, j) such that
√
kQk`∞ →`∞ ≤ ρ < 1 for some constant ρ > 0, kQk2F = O( n), and
Q1 −
1> Q1
1
n
2
= O(1).
√
If s tanh(B) ≤ C n for some constant C > 0, then no test is asymptotically
powerful for (2).
Theorem 6 provides rate optimal lower bound to certain instances pertaining to Theorem 4. One essential feature of Theorem 6 is the implied impos√
sibility result for the s n regime. More precisely, irrespective of signal
strength, no tests are asymptotically powerful when the number of signals
√
drop below n in asymptotic order. This is once again in parallel to results
in Mukherjee, Pillai and Lin (2015), and provides further evidence that low
dependence/high temperature regimes (as encoded by kQk`∞ →`∞ ≤ ρ < 1)
resemble independent Bernoulli ensembles. Theorem 6 immediately implies
the optimality of the conditional mean centered tests for a couple of common
examples.
DETECTION THRESHOLDS FOR ISING MODELS
11
High Degree Regular Graphs:. When the dependence structure is guided
by a regular graph, that is Q = dθn G, it is clear that
Q1 −
If 0 ≤ θ < 1 and dn &
Theorem 6 since
√
1> Q1
1
n
2
= 0.
n, then one can easily verify the conditions of
kQk`∞ →`∞ = θ < 1,
and
kQk2F = nθ2 /dn .
Dense Erdös-Rényi Graphs:. When the dependence structure is guided by
a Erdös-Rényi graph on n vertices with parameter pn , that is Q = θ/(npn )G
with Gi,j ∼ Bernoulli(pn ) independently for all 1 ≤ i < j ≤ n, we can also
verify that the conditions of Theorem 6 holds with probability tending to
one if 0 ≤ θ < 1 and pn bounded away from 0. As before, by Chernoff
bounds, we can easily derive that with probability tending to one,
kQk`∞ →`∞ = θ
dmax
θ(1 + δ)npn
≤
= θ(1 + δ),
npn
npn
and
kQk2F =
θ2
n2 p2n
X
Gi,j ≤
1≤i<j≤n
θ2
θ2 (1 + δ)n(n − 1)pn
≤
(1 + δ),
2n2 p2n
2pn
for any δ > 0. Finally, denote by di the degree of the ith node, then
Q1 −
1> Q1
n
2
1
=
θ2
n
X
n2 p2n
i=1
2
n
n
X
θ2 X
di − 1
dj
≤ 2 2
(di − (n − 1)pn )2 = Op (1),
n
n pn
j=1
i=1
by Markov inequality and the fact that
" n
#
X
E
(di − (n − 1)pn )2 = n(n − 1)pn (1 − pn ).
i=1
4. Simulation Results. We now present results from a set of numerical experiments to further demonstrate the behavior of the various tests in
finite samples. To fix ideas, we shall focus on the Curie-Weiss model since
12
it exhibits the most interesting behavior in terms of the effect of thermodynamic phase transitions reflecting itself on the detection thresholds for
the presence of sparse magnetization. In order to demonstrate the detection
thresholds cleanly in the simulation, we parametrized sparsity s as s = n1−α
for α ∈ (0, 1). In this parametrization, the theoretical detection thresholds
obtained for the Curie-Weiss model can be restated as follows. For θ 6= 1,
Theorem 1 and Theorem 2 suggest that the critical signal strength equals
1
tanh(B) ∼ n−( 2 −α) . In particular if tanh(B) = n−r , then no test is asymptotically powerful when r > 21 − α; whereas the test based on conditionally centered magnetization is asymptotically powerful when r < 21 − α.
Moreover, for α > 1/2, all tests are asymptotically powerless irrespective
of the amount of signal strength. However θ = 1, Theorem 3 demonstrates
3
that the critical signal strength equals tanh(B) ∼ n−( 4 −α) . In particular if
tanh(B) = n−r , then no test is asymptotically powerful when r > 43 − α;
whereas the test based on total magnetization is asymptotically powerful
when r < 34 − α. Moreover, for α > 3/4, all tests are asymptotically powerless irrespective of the amount of the signal strength. The simulation presented below is designed to capture the different scenarios where non-trivial
detection is possible i.e. α ≤ 1/2 for θ 6= 1 and α ≤ 3/4 for θ = 1.
We evaluated the power of the two tests, based on total magnetization
and the conditionally centered magnetization respectively, at the significance level of 5% and sample size n = 1000. We generated the test statistics 500 times under the null and take the 95%-quantile as the critical
value. The power against different alternatives are then obtained empirically from 500 repeats each. The simulation from a Curie-Weiss model
in the presence of magnetization is done using the Gaussian trick or the
auxiliary variable approach as demonstrated by Lemma 3. In particular
for a given θ and µ in the simulation parameter set, we generated a random variable Z (using package rstan in R) with density proportional to
Pn
2
fn,µµ (z) := nθz
i=1 log cosh(θz + µi ). Next, given this realization of
2 −
Z = z we generated each component of X = (X1 , . . . , Xn ) independently
e(µi +zθ)xi
taking values in ±1 with Pθ,µµ (Xi = xi ) = eµi +zθ
. Thereafter Lemma
+e−µi −zθ
3 guarantees the joint distribution of X indeed follows a Curie-Weiss model
with temperature parameter θ and magnetization µ . We believe that this
method is much faster than the one-spin at a time Glauber dynamics which
updates the whole chain X one location at a time. We have absorbed all
issues regarding mixing time in the simulation of Z, which being a one dimensional continuous random variable behaves much better in simulation.
In figure 1, we plot the power of both tests for θ = 0.5 (high temperature, conditionally centered magnetization), θ = 1 (critical temperature,
DETECTION THRESHOLDS FOR ISING MODELS
13
total magnetization), and θ = 1.5 (low temperature, conditionally centered
magnetization). Each plot was produced by repeating the experiment for
a range of equally spaced signal sparsity-strength pairs (α, r) with an increment of size 0.05. In addition, we plot in red the theoretical detection
boundary given by r = 1/2 − α for non-critical temperature (θ 6= 1) and
r = 3/4 − α for critical temperature (θ = 1). These simulation results agree
very well with our theoretical development.
5. Discussions. In this paper we study the asymptotic minimax rates
of detection for arbitrary sparse signals in Ising Models, considered as a
framework to study dependency structures in binary outcomes. We show
that the detection thresholds in Ising models might depend on the presence
of a “thermodynamic” phase transition in the model. In the context of a
Curie- Weiss Ising model, the presence of such a phase transition results
in substantial faster rates of detection of sparse signals at criticality. On
the other hand, lack of such phase transitions, in the Ising model on the
line graph, yields results parallel to those in independent Bernoulli sequence
models, irrespective of the level of dependence. We further show that for
Ising models defined on graphs enjoying certain degree of regularity, detection thresholds parallel those in independent Bernoulli sequence models in
the low dependence/high temperature regime. It will be highly interesting
to consider other kinds of graphs left out by Theorem 6 in the context of
proving matching lower bounds to Theorem 4. This seems highly challenging
and might depend heavily on the sharp asymptotic behavior of the partition function of more general Ising model under low-magnetization regimes.
The issue of unknown dependency structure Q, and especially the estimation of unknown temperature parameter θ for Ising models defined on given
underlying graphs, is also subtle as shown in Bhattacharya and Mukherjee
(2015). In particular, the rate of consistency of an estimator of θ under the
null model (i.e. µ = 0) depends crucially on the position of θ with respect to
the point of criticality and in particular high temperature regimes (i.e. low
positive values of θ) may preclude the existence of any consistent estimator.
The situation becomes even more complicated in presence of external magnetization (i.e. µ 6= 0). Finally, this paper opens up several interesting avenues
of future research. In particular, investigating the effect of dependence on
detection of segment type structured signals deserves special attention.
6. Proof of Main Results. In this section we collect the proofs of
our main results. It is convenient to first prove the general results, namely
the upper bound given by Theorem 4 and lower bound by Theorem 6, and
then consider the special cases of the Ising model on a cycle graph, and
14
Fig 1. The power of testing procedures in the dense signal setup. (a) shows the power of the
conditionally centered magnetization test for θ = 0.5, (b) shows the power of the total magnetization test for θ = 1 (c) shows the power of the conditionally centered magnetization
test for θ = 1.5 . The theoretical detection threshold is drawn in red.
15
DETECTION THRESHOLDS FOR ISING MODELS
Curie-Weiss model.
6.1. Proof of Theorem 4. The key to the proof is the tail behavior of
n
n
X
1X
1 X
fQ,µµ (X) :=
[Xi − EQ,µµ (Xi |Xj : j 6= i)] =
Xi − tanh
Qij Xj + µj ,
n
n
i=1
i=1
j6=i
where EQ,µµ means the expectation is taken with respect to the Ising model
(1). In particular, we shall make use of the following concentration bound
for fQ,µµ (X).
Lemma 1. Let X be a random vector following the Ising model (1). Then
for any t > 0,
nt2
PQ,µµ (|fQ,µµ (X)| ≥ t) ≤ 2 exp −
.
4 (1 + kQk`∞ →`∞ )2
Lemma 1 follows from a standard application of Stein’s Method for concentration inequalities (Chatterjee, 2005, 2007b; Chatterjee et al., 2010). We
defer the detailed proof to the Appendix.
We are now in position to prove Theorem 4. We first consider the Type I
error. By Lemma 1, there exists a constant C > 0 such that
√
PQ,0 ( nX̃ ≥ Ln ) ≤ 2 exp(−CL2n ) → 0.
It remains to consider the Type II error. Note that
n
X
X
1 X
X̃ − fQ,µµ (X) =
tanh
Qij Xj + µi − tanh
Qij Xj
n
i=1
j6=i
j6=i
X
X
X
1
tanh
=
Qij Xj + µi − tanh
Qij Xj
n
j6=i
j6=i
µ)
i∈supp(µ
X
X
X
1
tanh
Qij Xj + B − tanh
Qij Xj ,
≥
n
µ)
i∈supp(µ
j6=i
j6=i
where the inequality follows from the monotonicity of tanh.
Observe that for any x ∈ R and y > 0,
(6) tanh(x+y)−tanh(x) =
[1 − tanh2 (x)] tanh(y)
≥ [1−tanh(x)] tanh(y),
1 + tanh(x) tanh(y)
16
where the inequality follows from the fact that | tanh(x)| ≤ 1. Thus,
X
X
tanh(B)
1 − tanh
X̃ − fQ,µµ (X) ≥
Qij Xj .
n
j6=i
µ)
i∈supp(µ
Because
X
Qij Xj ≤ kQk`∞ →`∞ ,
j6=i
we get
X̃ − fQ,µµ (X) ≥
s tanh(B)
[1 − tanh (kQk`∞ →`∞ )] .
n
Therefore,
√
nX̃ −
√
nfQ,µµ (X) ≥
s tanh(B)
√
[1 − tanh (kQk`∞ →`∞ )] Ln .
n
This, together with another application of Lemma 1, yields the desired claim.
6.2. Proof of Theorem 6. The proof is somewhat lengthy and we break
it into several steps.
6.2.1. Reduction to magnetization. We first show that a lower bound can
be characterizing the behavior of X̄ under the alternative. To this end, note
that for any test T and a distribution π over Ξ(s, B), we have
Risk(T, Ξs,B , Q) = PQ,0 (T (X) = 1) +
sup
PQ,µµ (T (X) = 0)
µ ∈Ξ(s,B)
Z
≥ PQ,0 (T (X) = 1) +
µ).
PQ,µµ (T (X) = 0) dπ(µ
The rightmost hand side is exactly the risk when testing H0 against a simple
alternative where X follows a mixture distribution:
Z
µ)
Pπ (X = x) := PQ,µµ (X = x) dπ(µ
By Neymann-Pearson Lemma, this can be further lower bounded by
Z
µ),
Risk(T, Ξ(s, B), Q) ≥ PQ,0 (Lπ (X) > 1) + PQ,µµ (Lπ (X) ≤ 1) dπ(µ
where
Lπ (X) =
Pπ (X)
PQ,0 (X)
17
DETECTION THRESHOLDS FOR ISING MODELS
is the likelihood ratio.
We can now choose a particular prior distribution π to make Lπ a monotone function of X̄. To this end, let π be supported over
e B) = {µ
µ ∈ {0, B}n : |supp(µ
µ)| = s} ,
Ξ(s,
so that
µ) ∝ Z(Q, µ ),
π(µ
e B).
µ ∈ Ξ(s,
∀µ
It is not hard to derive that, with this particular choice,
!
X
Lπ (X) ∝
µ> X) = ES exp B
exp(µ
X
Xi
,
i∈S
e
µ ∈Ξ(s,B)
where ES means expectation over S, a uniformly sampled subset of [n] of
size s. It is clear, by symmetry, that the rightmost hand side is invariant
to the permutation of the coordinates of X. In addition, it is an increasing
function of
!
n
X
1
|{i ∈ [n] : Xi = 1}| =
n+
Xi ,
2
i=1
and hence an increasing function of X̄.
The observation that Lπ (X) is an increasing function of X̄ implies that
there exists a sequence κn such that
Z
µ)
Risk(T, Ξ(s, B), Q) ≥ PQ,0 (Lπ (X) > 1) + PQ,µµ (Lπ (X) ≤ 1) dπ(µ
!
!
Z
n
n
X
X
µ)
= PQ,0
Xi > κn + PQ,µµ
Xi ≤ κn dπ(µ
≥ PQ,0
i=1
n
X
i=1
!
Xi > κn
+
inf
e
µ ∈Ξ(s,B)
i=1
PQ,µµ
It now remains to study the behavior of X̄.
In particular, it suffices to show that, for any fixed x > 0,
)
( n
X
√
(7)
lim inf PQ,0
Xi > x n > 0,
n→∞
i=1
and for any xn → ∞,
(8)
lim sup
sup
n→∞ µ∈Ξ(s,B)
e
PQ,µµ
n
X
i=1
√
Xi > xn n
!
= 0.
n
X
i=1
!
Xi ≤ κn
.
18
Assuming (7) holds, then for any test T to be asymptotic powerful, we need
√
κn n to ensure that
( n
)
X
PQ,0
Xi > κn → 0.
i=1
But, in the light of (8), this choice necessarily leads to
( n
)
X
inf PQ,µµ
Xi ≤ κn → 1,
e
µ ∈Ξ(s,B)
i=1
so that
Risk(T, Ξ(s, B), Q) → 1.
In other words, there is no asymptotic powerful test if both (7) and (8) hold.
We now proceed to prove them separately.
P
6.2.2. Proof of (8):. Recall that mi (X) = nj=1 Qij Xj and assume µ ∈
√
Ξ̃(s, B) with s tanh(B) ≤ C n. Also let r = (r1 , . . . , rn )> where r = r(Q) :=
Q1. We split the proof into two cases, depending on whether B ≤ 1 or B > 1.
The case of B ∈ [0, 1] :.
n
X
n
X
Xi =
i=1
Write
[Xi − tanh(mi (X) + µi )] +
i=1
+
n
X
n
X
[tanh(mi (X) + µi ) − tanh(mi (X))]
i=1
n
X
[tanh(mi (X)) − mi (X)] +
i=1
mi (X).
i=1
Observe that,
n
X
mi (X) = 1> QX =
i=1
n
X
ri Xi = ρ∗
i=1
n
X
Xi +
i=1
n
X
(ri − ρ∗ )Xi ,
i=1
where ρ∗ = n1 1> r = n1 1> Q1. Thus,
(1 − ρ∗ )
n
X
i=1
Xi
=
n
X
[Xi − tanh(mi (X) + µi )] +
i=1
+
n
X
n
X
[tanh(mi (X) + µi ) − tanh(mi (X))]
i=1
n
X
[tanh(mi (X)) − mi (X)] +
i=1
=: ∆1 + ∆2 + ∆3 + ∆4 .
(ri − ρ∗ )Xi .
i=1
DETECTION THRESHOLDS FOR ISING MODELS
It is clear that
( n
)
4
X
X
√
PQ,µµ
Xi > xn n ≤
PQ,µµ ∆j >
i=1
j=1
19
√
1
xn n .
4(1 − ρ∗ )
We now argue that for any xn → ∞,
√
1
(9)
sup PQ,µµ ∆j >
xn n → 0,
4(1 − ρ∗ )
e
µ ∈Ξ(s,B)
j = 1, . . . , 4.
>
2
The case for ∆4 follows from our assumption ( Q1 − 1 nQ1 1 = O(1)) upon
Cauchy-Schwarz inequality. The case ∆1 follows immediately from Lemma
1. On the other hand, we note that
n
X
[tanh(mi (X) + µi ) − tanh(mi (X))] ≤
i=1
≤
n
X
i=1
n
X
|tanh(mi (X) + µi ) − tanh(mi (X))|
tanh(µi ) = s tanh(B),
i=1
where the second inequality follows from the subadditivity of tanh. The
√
bound (9) for ∆2 then follows from the fact that s tanh(B) = O( n).
We now consider ∆3 . Recall that |x − tanh(x)| ≤ x2 . It suffices to show
that, as xn → ∞,
( n
)
X
1 √
2
(10)
sup PQ,µµ
mi (X) > xn n → 0,
4
e
µ∈Ξ(s,B)
i=1
which follows from Markov inequality and the following lemma.
Lemma 2. Let X be a random vector following the Ising model (1). Assume that Qi,j ≥ 0 for all (i, j) such that kQk`∞ →`∞ ≤ ρ for some constant
√
ρ < 1, and kQk2F = O( n). Then for any fixed C > 0,
!
n
X
1
√ EQ,µµ
lim sup
sup
m2i (X) < ∞.
n
n→∞ P µ ∈[0,1]n :√
n µ ≤C
i=1 i
n
i=1
The proof of Lemma 2 is deferred to the Appendix in Mukherjee, Mukherjee and Yuan (2017).
20
√
√
The case of B > 1 :. In this case s tanh(B) ≤ P
C n implies s ≤ C 0 n,
where C 0 := C/ tanh(1). Also, since the statistic ni=1 Xi is stochastically
non-decreasing in B, without loss of generality it suffices to show that, for
a fixed S ⊂ [n] obeying |S| = s,
(
)
X
√
lim sup lim sup lim sup sup PQ,µµ
(11)
Xi > K n = 0.
n→∞
K→∞
B→∞
µ ∈Ξ̃(s,B):
µ)=S
supp(µ
i∈S c
Now, for i ∈ S we have for µ ∈ Ξ̃(s, B)
PQ,µµ (Xi = 1|Xj = xj , j 6= i) =
eB+mi (x)
1
1
=
≥
,
2−2B
B+m
(x)
−B−m
(x)
−2m
(x)−2B
i
i
i
1
+
e
e
1+e
+e
and so limB→∞ PQ,µµ (Xi = 1, i ∈ S) = 1 uniformly in µ ∈ Ξ̃(s, B) with
√
s ≤ C 0 n. Also note that for any configuration (xj , j ∈ S c ) we have
(12)
X
X
1
PQ,µµ (Xi = xi , i ∈ S c |Xi = 1, i ∈ S) ∝ exp
xi xj Qij +
xi µ̃S,i ,
2
c
c
i,j∈S
where µ̃S,i :=
(13)
P
j∈S
n
X
i=1
i∈S
Qij ≤ kQk`∞ →`∞ ≤ ρ. Further we have
µS,i =
µ̃
n X
X
i=1 j∈S
Qij =
n
XX
√
Qij ≤ C 0 ρ n.
j∈S i=1
We shall refer to the distribution in (12) as PQ̃S ,µ̃µ where Q̃S is the (n −
S
s) × (n − s) principle matrix of Q by restricting the index in S c . Therefore
we simply need to verify that Q̃S satisfy the conditions for Q in Theorem 6.
Trivially Q̃ij ≥ 0 for all i, j and kQ̃k`∞ →`∞ ≤ kQk`∞ →`∞ ≤ ρ. For verifying
the third condition, i.e.
1> Q̃1
Q̃1 −
1
n
2
= O(1),
21
DETECTION THRESHOLDS FOR ISING MODELS
note that
2
1> Q1
O(1) = Q1 −
1
n
n
1 X
(ri (Q) − rj (Q))2
=
2n
i,j=1
1 X
(ri (Q) − rj (Q))2
≥
2n
c
i,j∈S
X
n−s
1
=
×
(ri (Q) − rj (Q))2
n
2(n − s)
c
i,j∈S
2
n−s
1> Q̃1
≥
Q̃1 −
1
n
n
.
Therefore with oB (1) denoting a sequence of real numbers that converges to
0 uniformly over µ ∈ Ξ̃(s, B),
(
)
X
√
lim sup sup PQ,µµ
Xi > K n
B→∞
µ ∈Ξ̃(s,B):
µ)=S
supp(µ
i∈S c
(
≤ lim sup
B→∞
sup
= lim sup
B→∞
PQ,µµ
µ ∈Ξ̃(s,B):
µ)=S
supp(µ
sup
µ ∈Ξ̃(s,B):
µ)=S
supp(µ
≤ sup
P
i∈S c
Xi > K n|Xj = 1, j ∈ S
)
+ oB (1)
i∈S c
!
√
X
PQ̃S ,µ̃µ
Xi > K n
S
i∈S c
sup
S⊂[n]
!
√
X
µS :
µ̃
√
µ̃S,i ≤C 0 ρ n
PQ̃S ,µ̃µ
X
S
√
!
Xi > K n ,
i∈S c
where the last line follows from (13). The proof of the claim (11) thereafter
follows using the same argument as that for the case when B < 1 since
µ̃S,i ≤ ρ < 1 for each i ∈ S c .
6.2.3. Proof of (7):. It is clear that, by symmetry,
(14)
PQ,0
n
X
n
X
√
√
Xi | > K n = 2PQ,0
Xi > K n .
i=1
i=1
In establishing (8), we essentially proved that
(15)
lim sup lim sup
K→∞
sup
n→∞ µ∈Ξ̃(s,B)
PQ,µµ
n
X
i=1
√
Xi > K n
!
= 0.
22
By choosing K large enough, we can make the right hand side of (14) less
than 1/2. This gives
X
X
>
>
(16)
ex Qx/2 ≤ 2
ex Qx/2 ,
x∈{−1,1}n
x∈Dn,K
n P
P
√ o
where Dn,K := | ni=1 Xi | ≤ K n . Then, setting Cn := { ni=1 Xi >
√
λ n}, for any K > λ we have
0
P
x∈Cn ∩Dn,K
PQ,0 (Cn ) ≥ PQ,0 (Cn ∩ Dn,K ) = P
x∈{−1,1}n
ex Qx/2
ex0 Qx/2
P
x0 Qx/2
1 x∈Cn ∩Dn,K e
P
≥
x0 Qx/2
2
x∈Dn,K e
P
P
x0 Qx/2+ √tn n
i=1 xi
e−2Kt x∈Cn ∩Dn,K e
P
≥
x0 Qx/2
2
x∈Dn,K e
=
e−2Kt PQ,µµ(t) (Cn ∩ Dn,K ) Z(Q, µ (t))
2
PQ,0 (Dn,K )
Z(Q, 0)
≥
e−2Kt
PQ,µµ(t) (Cn ∩ Dn,K ),
2
where µ (t) = tn−1/2 1. In the last inequality we use the fact that the function
t 7→ Z(Q, µ(t)) is non-increasing in t on [0, ∞), as
n
n
i=1
i=1
X
X
∂
1
1
Z(Q, µ (t)) = √ EQ,µµ(t)
Xi ≥ √ EQ,0
Xi = 0.
∂t
n
n
To show (7), it thus suffices to show that there exists K large enough and
t > 0 such that
lim inf PQ,µµ(t) (Cn ∩ Dn,K ) > 0.
n→∞
To this end, it suffices to show that for any λ > 0 there exists t such that
(17)
n
X
√
lim inf PQ,µµ(t) (
Xi > λ n) > 0.
n→∞
i=1
If (17) holds, then there exists t > 0 such that
lim inf PQ,µµ(t) (Cn ) > 0.
n→∞
DETECTION THRESHOLDS FOR ISING MODELS
23
It now suffices to show that for any t fixed one has
c
lim sup lim sup PQ,µµ(t) (Dn,K
) = 0,
K→∞
n→∞
which follows from (15).
It now remains to show (17). To begin, note that for h > 0,
h
EQ,µµ(h) Xi = EQ,µµ(h) tanh mi (X) + √
n
tanh(mi (X)) + tanh √hn
= EQ,µµ(h)
1 + tanh(mi (X)) tanh √hn
h
1
E µ tanh(mi (X)) + tanh √
≥
2 Q,µ (h)
n
1
h
.
≥
tanh √
2
n
In the last inequality we use Holley inequality (e.g., Theorem 2.1 of Grimmett, 2006) for the two probability measures PQ,0 and PQ,µµ(h) to conclude
EQ,µµ(h) tanh(mi (X) ≥ EQ,0 tanh(mi (X)) = 0,
in the light of (2.7) of Grimmett (2006). Adding over 1 ≤ i ≤ n gives
Fn0 (h)
(18)
√
n
X
1
n
h
= √ EQ,µµ(h)
tanh √
,
Xi ≥
2
n
n
i=1
where Fn (h) is the log normalizing constant for the model PQ,µµ(h) . Thus,
using Markov’s inequality one gets
!
n
√1 Pn
X
√
X
−
PQ,µµ(t)
Xi ≤ λ n =PQ,µµ(t) e n i=1 i ≥ e−λ ≤ exp {λ + Fn (t − 1) − Fn (t)} ,
i=1
Using (18), the exponent in the rightmost hand side can be estimated as
√
Z t
n
t−1
0
tanh √
,
λ + Fn (t − 1) − Fn (t) = λ −
Fn (h)dh ≤ λ −
2
n
t−1
which is negative and uniformly bounded away from 0 for all n large for
t = 4λ + 1, from which (17) follows.
24
P
6.3. Proof of Theorem 5. We set mi (X) = nj=1 Qij Xj and assume µ ∈
√
Ξ̃(s, B) with s tanh(B) ≤ C n. By the same argument as that of Section
6.2.1, it suffices to show that there does not exist a sequence of positive reals
{Ln }n≥1 such that
!
!
n
n
X
X
PQ,0
Xi > Ln + PQ,µµ
Xi < Ln → 0.
i=1
i=1
Suppose, to the contrary, that there exists such a sequence. For any t ∈ R
we have
(
)
t
n
√
Z
Q,
1
t X
t n
t n
n
EQ,0 exp √
+ λ2 √
,
Xi =
= λ1 √
Z (Q, 0)
n
n
n
i=1
where
p
eθ cosh(t) + (−1)i+1 e2θ sinh(t)2 + e−2θ
λi (t) :=
.
eθ + e−θ
This computation for the normalizing constants for the Ising model on the
cycle graph of length n is standard (Ising, 1925). By a direct calculation we
have
λ1 (0) = 1 > λ2 (0) = tanh(θ),
λ01 (0) = λ02 (0) = 0,
c(θ) := λ001 (0) > 0,
and so
EQ,0 e
√t
n
Pn
i=1
Xi
2
t n
t n
n→∞ c(θ)t
= λ1 √
+ λ2 √
→ e 2) .
n
n
This implies that under H0
n
1 X
d
√
Xi → N (0, c(θ)),
n
i=1
which for any λ > 0 gives
lim inf PQ,0
n→∞
n
X
√
Xi > λ n
!
> 0.
i=1
√
Therefore, Ln n. Now invoking Lemma 1, for any K > 0 we have
( n
)
X
√
2
2
PQ,µµ
(Xi − tanh(mi (X) + µi ) > K n ≤ 2e−K /4(1+θ) .
i=1
25
DETECTION THRESHOLDS FOR ISING MODELS
On this set we have for a universal constant C < ∞
n
n
X
X
(Xi − tanh(mi (X)) ≤
(Xi − tanh(mi (X) + µi ))
i=1
i=1
n
X
+
(tanh(mi (X) + µi ) − tanh(mi (X)))
i=1
√
≤K n + C
n
X
tanh(µi )
i=1
√
≤K n + Cs tanh(B),
and so
(19)
(
PQ,µµ
n
X
√
(Xi − tanh(mi (X))) > K n + Cs tanh(B)
)
≤ 2e−K
2 /4(1+θ)2
.
i=1
Also, setting g(t) := t/θ − tanh(t), we get
n
n
X
X
(Xi − tanh(mi (X)) =
g(mi (X)) = {Qn (X) − Rn (X)}g(θ),
i=1
i=1
where
Qn (X) := |{1 ≤ i ≤ n : mi (X) = θ}| ,
Rn (X) := |{1 ≤ i ≤ n : mi (X) = −θ}| .
Indeed, this holds, as in this case mi (X) can take only three values {−θ, 0, θ},
and g(.) is an odd function. Thus using (19) gives
√
K n + Cs tanh(B)
2
2
≤ 2e−K /4(1+θ) .
PQ,µµn |Qn (X) − Rn (X)| >
g(θ)
But then we have
( n
)
( n
)
X
X
PQ,µµn
Xi > Ln =PQ,µµ
mi (X) > θLn
i=1
i=1
=PQ,µµ {Qn (X) − Rn (X) > Ln } ≤ 2e−K
as
√
K n + Cs tanh(B)
.
Ln
g(θ)
This immediately yields the desired result.
2 /4(1+θ)2
,
26
6.4. Proof of Theorem 1. By Theorem 6, there is no asymptotically powerful test if s tanh(B) = O(n1/2 ). It now suffices to show that the naı̈ve test
is indeed asymptotically powerful. To this end, we first consider the Type I
error. By Theorem 2 of Ellis and Newman (1978),
√
1
,
nX̄ →d N 0,
1−θ
which immediately implies that Type I error
√
Pθ,0
nX̄ ≥ Ln → 0.
Now consider Type II error. Observe that
n
X
X
1
tanh
Qij Xj + µi
X̄ − fQ,µµ (X) =
n
i=1
n
X
j6=i
=
1
n
=
n
1X
tanh θX̄ + µi + O(n−1 ),
n
tanh θX̄ + µi − θXi /n
i=1
i=1
where the last equality follows from the fact that tanh is Lipschitz. In addition,
n
1
1X
tanh θX̄ + µi = tanh θX̄ +
n
n
i=1
≥ tanh θX̄ +
1
n
X
tanh θX̄ + µi − tanh θX̄
µ)
i∈supp(µ
X
tanh θX̄ + B − tanh θX̄
µ)
i∈supp(µ
s tanh(B)
1 − tanh θX̄ ,
n
s tanh(B)
≥ tanh θX̄ +
[1 − tanh (θ)] ,
n
≥ tanh θX̄ +
where the second to last inequality follows from (6). In other words,
√
n(X̄ − tanh(θX̄)) −
√
nfQ,µµ (X) ≥
s tanh(B)
√
[1 − tanh (θ)] .
n
Since supx∈R x−tanh(θx)
< ∞, an application of Lemma 1, together with the
x
−1/2
fact that Ln = o(n
s tanh(B)) yields
√
Pθ,µµ
nX̄ ≥ Ln → 1.
DETECTION THRESHOLDS FOR ISING MODELS
27
6.5. Proof of Theorem 2. The proof of attainability follows immediately
from Theorem 4. Therefore here we focus on the proof of the lower bound.
As before, by the same argument as those following Section 6.2.1, it suffices
to show that there does not exist a sequence of positive reals {Ln }n≥1 such
that
!
!
n
n
X
X
PQ,0
Xi > Ln + PQ,µµ
Xi < Ln → 0.
i=1
i=1
From the proof of Lemma 1 and the inequality | tanh(x) − tanh(y)| ≤
e B) we have
|x − y|, for any fixed t < ∞ and µ ∈ Ξ(s,
t2
s
n−s
θ
t
Pθ,µµ X̄ > tanh(θX̄ + B) +
tanh(θX̄) + + √
≤ 2e− 2nan ,
n
n
n
n
where
an :=
2 2θ 2θ
+
+ 2.
n
n
n
Also note that
s
n−s
s
tanh(θX̄ + B) +
tanh(θX̄) ≤ tanh(θX̄) + C tanh(B),
n
n
n
for some constant C < ∞. Therefore
s
θ
t
Pθ,µµ X̄ − tanh(θX̄) > C tanh(B) + + √
≤ 2 exp −t2 /2nan .
n
n
n
Since s tanh(B) = O(n1/2 ), we have
C(t)
sup Pθ,µµ X̄ − tanh(θX̄) > √
≤ 2 exp −t2 /2nan
(20)
n
e
µ ∈Ξ(s,B)
for some finite positive constant C(t). Now, invoking Theorem 1 of Ellis and
Newman (1978), under H0 : µ = 0 we have
√
1 − m2
d
n(X̄ − m)|X̄ > 0 → N 0,
,
1 − θ(1 − m2 )
where m is the unique positive root of m = tanh(θm). The same argument
as that from Section 6.2.1 along with the requirement to control the Type I
error then imply that without loss of generality one can assume the test φn
rejects if X̄ > m + Ln , where Ln n−1/2 .
Now, note that g(x) = x − tanh(θx) implies that g 0 (x) is positive and
increasing on the set [m, ∞), and therefore
g(x) ≥ g(m) + (x − m)g 0 (m).
28
This gives
C(t)
Pθ,µµ X̄ > m + Ln , X̄ − tanh(θX̄) ≤ √
n
C(t)
√
≤Pθ,µµ X̄ > m + Ln , X̄ − m ≤ 0
,
g (m) n
which is 0 for all large n, as Ln n−1/2 . This, along with (20) gives
lim inf
inf
n→∞ µ ∈Ξ(s,B)
e
Eθ,µµ (1 − φn ) ≥ 1,
thus concluding the proof.
6.6. Proof of Theorem 3. The proof of Theorem 3 is based on an auxiliary variable approach known as Kac’s Gaussian transform (Kac, 1959),
2
which basically says that the moment generating function of N (0, 1) is et /2 .
This trick has already been used in computing asymptotics of log partition
functions (Comets and Gidas, 1991; Mukherjee, 2013; Park and Newman,
2004).
In particular, the proof relies on the following two technical lemmas. The
proof to both lemmas is relegated to the Appendix in Mukherjee, Mukherjee
and Yuan (2017) for brevity.
Lemma 3. Let X follow a Curie-Weiss model of (5) with θ > 0. Given
X = x let Zn be a normal random variable with mean x̄ and variance 1/(nθ).
Then
(a) Given Zn = z the random variables (X1 , · · · , Xn ) are mutually independent, with
Pθ,µµ (Xi = xi ) =
e(µi +zθ)xi
,
eµi +zθ + e−µi −zθ
where xi ∈ {−1, 1}.
(b) The marginal density of Zn is proportional to e−fn,µµ (z) , where
n
(21)
nθz 2 X
fn,µµ (z) :=
−
log cosh(θz + µi ).
2
i=1
(c)
sup
µ ∈[0,∞)n
Eθ,µµ
n
X
i=1
2
(Xi − tanh(µi + θZn )) ≤ n.
DETECTION THRESHOLDS FOR ISING MODELS
29
While the previous lemma applies to all θ > 0, the next one specializes
to the case θ = 1 and gives crucial estimates which will be used in proving
Theorem 3.
For any µ ∈ (R+ )n set
n
µ) :=
A(µ
1X
µi ).
tanh(µ
n
i=1
This can be thought of as the total amount of signal present in the parameter
µ . In particular, note that for µ ∈ Ξ(s, B) we have
µ) ≥
A(µ
s tanh(B)
,
n
µ) =
A(µ
s tanh(B)
.
n
and for µ ∈ Ξ̃(s, B) we have
In the following we abbreviate s tanh(B)/n := An .
Lemma 4.
(a) If θ = 1, for any µ ∈ Ξ(s, B) the function fn,µµ (·) defined by (21) is strictly convex, and has a unique global minimum
mn ∈ (0, 1], such that
µ)).
m3n = Θ(A(µ
(22)
(b)
lim sup lim sup Pθ,µµ (Zn − mn > Kn−1/4 ) = 0.
K→∞
n→∞
(c) If An n−3/4 then there exists δ > 0 such that
lim sup sup Pθ,µµ Zn ≤ δmn = 0.
n→∞ µ :A(µ
µ)≥An
The proof of Lemma 4 can be found in the Appendix in Mukherjee,
Mukherjee and Yuan (2017). We now come back to the proof of Theorem
3. To establish the upper bound, define a test function φn by φn (X) = 1
1/3
if X̄ > 2δAn , and 0 otherwise, where δ is as in part (c) of Lemma 4. By
Theorem 1 of Ellis and Newman (1978), under H0 : µ = 0 we have
(23)
d
n1/4 X̄ → Y,
30
where Y is a random variable on R with density proportional to e−y
Since An n−3/4 we have
4 /12
.
Pθ,0 (X̄ > 2δAn1/3 ) = o(1),
and so it suffices to show that
(24)
sup
µ)≥An
µ :A(µ
Pθ,µµ (X̄ ≤ 2δAn1/3 ) = o(1).
To this effect, note that
n
X
n
n
X
X
Xi =
(Xi − tanh(µi + Zn )) +
tanh(µi + Zn )
i=1
i=1
≥
n
X
i=1
(Xi − tanh(µi + Zn )) + n tanh(Zn )
i=1
Now by Part (c) of Lemma 3 and Markov inequality,
|
n
X
(Xi − tanh(µi + Zn ))| ≤ δnAn1/3
i=1
with probability converging to 1 uniformly over µ ∈ [0, ∞)n . Thus it suffices
to show that
sup Pθ,µµ (nZn ≤ 3δnAn1/3 ) = o(1).
µ)≥An
µ :A(µ
But this follows on invoking Parts (a) and (c) of Lemma 4, and so the proof
of the upper bound is complete.
To establish the lower bound, by the same argument as that from Section
6.2.1, it suffices to show that there does not exist a sequence of positive reals
{Ln }n≥1 such that
!
!
n
n
X
X
PQ,0
Xi > Ln + PQ,µµ
Xi < Ln → 0.
i=1
i=1
If limn→∞ n−3/4 Ln < ∞, then (23) implies
lim inf Eθ,0 φn > 0,
n→∞
DETECTION THRESHOLDS FOR ISING MODELS
31
and so we are done. Thus assume without loss of generality that n−3/4 Ln →
∞. In this case we have
n
X
Xi =
i=1
≤
n
X
i=1
n
X
(Xi − tanh(µi + Zn )) +
(Xi − tanh(µi + Zn )) +
i=1
n
X
i=1
n
X
tanh(µi + Zn )
tanh(µi ) + n|Zn |,
i=1
and so
Pθ,µµ
≤Pθ,µµ
n
X
!
Xi > Ln
i=1
( n
X
|
)
Xi − tanh(µi + Zn )| > Ln /3
+ Pθ,µµ {nZn > Ln /3} + Pθ,µµ {nZn < −Ln /3}
i=1
where we use the fact that
n
X
tanh(µi ) = O(n1/4 ) Ln .
i=1
Now by Part (c) of Lemma 3 and Markov inequality, the first term above
converges to 0 uniformly over all µ . Also by Parts (a) and (b) of Lemma
µ) =
4, Pθ,µµ {nZn > Ln /3} converges to 0 uniformly over all µ such that A(µ
−3/4
O(n
). Finally note that the distribution of Zn is stochastically increasing
in µ , and so
Pθ,µµ {nZn < −Ln /3} ≤ Pθ,0 {nZn < −Ln /3} ,
which converges to 0 by (23). This completes the proof of the lower bound.
Acknowledgments. The authors thank the Associate Editor and two
anonymous referees for numerous helpful comments which substantially improved the content and presentation of the paper. The authors also thank
James Johndrow for helping with package rstan.
SUPPLEMENTARY MATERIAL
Supplement to ”Global Testing Against Sparse Alternatives under Ising Models”
(doi: COMPLETED BY THE TYPESETTER; .pdf). The supplementary
material contain the proofs of additional technical results.
32
References.
Addario-Berry, L., Broutin, N., Devroye, L., Lugosi, G. et al. (2010). On combinatorial testing problems. The Annals of Statistics 38 3063–3092.
Arias-Castro, E., Candès, E. J. and Plan, Y. (2011). Global testing under sparse
alternatives: ANOVA, multiple comparisons and the higher criticism. The Annals of
Statistics 39 2533–2556.
Arias-Castro, E., Donoho, D. L. and Huo, X. (2005). Near-optimal detection of geometric objects by fast multiscale methods. Information Theory, IEEE Transactions on
51 2402–2425.
Arias-Castro, E. and Wang, M. (2015). The sparse Poisson means model. Electronic
Journal of Statistics 9 2170–2201.
Arias-Castro, E., Candès, E. J., Helgason, H. and Zeitouni, O. (2008). Searching
for a trail of evidence in a maze. The Annals of Statistics 1726–1757.
Besag, J. (1974). Spatial interaction and the statistical analysis of lattice systems. Journal
of the Royal Statistical Society. Series B (Methodological) 192–236.
Besag, J. (1975). Statistical analysis of non-lattice data. The statistician 179–195.
Bhattacharya, B. B. and Mukherjee, S. (2015). Inference in Ising models. arXiv
preprint arXiv:1507.07055.
Burnashev, M. (1979). On the minimax detection of an inaccurately known signal in a
white Gaussian noise background. Theory of Probability & Its Applications 24 107–119.
Cai, T. T. and Yuan, M. (2014). Rate-Optimal Detection of Very Short Signal Segments.
arXiv preprint arXiv:1407.2812.
Chatterjee, S. (2005). Concentration inequalities with exchangeable pairs (PhD thesis).
arXiv preprint math/0507526.
Chatterjee, S. (2007a). Estimation in spin glasses: A first step. The Annals of Statistics
1931–1946.
Chatterjee, S. (2007b). Steins method for concentration inequalities. Probability theory
and related fields 138 305–321.
Chatterjee, S., Dey, P. S. et al. (2010). Applications of Steins method for concentration
inequalities. The Annals of Probability 38 2443–2485.
Comets, F. and Gidas, B. (1991). Asymptotics of maximum likelihood estimators for
the Curie-Weiss model. The Annals of Statistics 557–578.
Donoho, D. L. and Jin, J. (2004). Higher criticism for detecting sparse heterogeneous
mixtures. The Annals of Statistics 32 962–994.
Ellis, R. S. and Newman, C. M. (1978). The Statistics of Curie-Weiss Models. Journal
of Statistical Physics 19 149-161.
Grimmett, G. R. (2006). The random-cluster model 333. Springer Science & Business
Media.
Guyon, X. (1995). Random fields on a network: modeling, statistics, and applications.
Springer Science & Business Media.
Hall, P. and Jin, J. (2008). Properties of higher criticism under strong dependence. The
Annals of Statistics 381–402.
Hall, P. and Jin, J. (2010). Innovated higher criticism for detecting sparse signals in
correlated noise. The Annals of Statistics 1686–1732.
Ingster, Y. I. (1994). Minimax detection of a signal in lp metrics. Journal of Mathematical Sciences 68 503–515.
Ingster, Y. I. (1998). Minimax Detection of a Signal for ln -Balls. Mathematical Methods
of Statistics 7 401–428.
Ingster, Y. I. and Suslina, I. A. (2003). Nonparametric goodness-of-fit testing under
33
DETECTION THRESHOLDS FOR ISING MODELS
Gaussian models 169. Springer.
Ingster, Y. I., Tsybakov, A. B. and Verzelen, N. (2010). Detection boundary in
sparse regression. Electronic Journal of Statistics 4 1476–1526.
Ising, E. (1925). Beitrag zur theorie des ferromagnetismus. Zeitschrift für Physik A
Hadrons and Nuclei 31 253–258.
Jin, J. and Ke, T. (2014). Rare and weak effects in large-scale inference: methods and
phase diagrams. arXiv preprint arXiv:1410.4578.
Kac, M. (1959). On the Partition Function of a One-Dimensional Gas. The Physics of
Fluids 2 8–12.
Kac, M. (1969). Mathematical Mechanisms of Phase Transitions. Technical Report, Rockefeller Univ., New York.
Majewski, J., Li, H. and Ott, J. (2001). The Ising model in physics and statistical
genetics. The American Journal of Human Genetics 69 853–862.
Mezard, M. and Montanari, A. (2009). Information, physics, and computation. Oxford
University Press.
Mukherjee, S. (2013). Consistent estimation in the two star Exponential Random Graph
Model. arXiv preprint arXiv:1310.4526.
Mukherjee, R., Mukherjee, S. and Yuan, M. (2017). Supplement to “Global Testing
Against Sparse Alternatives under Ising Models”.
Mukherjee, R., Pillai, N. S. and Lin, X. (2015). Hypothesis testing for highdimensional sparse binary regression. The Annals of statistics 43 352.
Nishimori, H. (2001). Statistical physics of spin glasses and information processing: an
introduction 111. Clarendon Press.
Onsager, L. (1944). Crystal statistics. I. A two-dimensional model with an order-disorder
transition. Physical Review 65 117.
Park, J. and Newman, M. E. (2004). Solution of the two-star model of a network.
Physical Review E 70 066146.
Stauffer, D. (2008). Social applications of two-dimensional Ising models. American Journal of Physics 76 470–473.
Wu, Z., Sun, Y., He, S., Cho, J., Zhao, H., Jin, J. et al. (2014). Detection boundary
and Higher Criticism approach for rare and weak genetic effects. The Annals of Applied
Statistics 8 824–851.
APPENDIX – PROOF OF AUXILIARY RESULTS
Proof of Lemma 1. This is a standard application of Stein’s Method
for concentration inequalities (Chatterjee, 2005). The details are included
here for completeness. One begins by noting that
EQ,µµ (Xi |Xj , j 6= i) = tanh (mi (X) + µi ) ,
mi (X) :=
n
X
Qij Xj .
j=1
0
Now let X be drawn from (1) and let X is drawn by moving one step
in the Glauber dynamics, i.e. let I be a random variable which is discrete
uniform on {1, 2, · · · , n}, and replace the I th coordinate of X by an element
drawn from the conditional distribution of the I th coordinate given the rest.
0
It is not difficult to see that (X, X ) is an exchangeable pair of random
34
n
n
vectors. Further
Pn define an anti-symmetric function F : R × R → R as
F (x, y) = i=1 (xi − yi ), which ensures that
n
1X
0
EQ,µµ F (X, X )|X =
Xj − tanh (mj (X) + µj ) = fµ (X).
n
j=1
Denoting Xi to be X with Xi replaced by −Xi , by Taylor’s series we have
tanh(mj (Xi ) + µj ) − tanh(mj (X) + µj )
1
=(mj (Xi ) − mj (X))g 0 (mj (X)) + (mj (Xi ) − mj (X))2 g 00 (ξij )
2
= − 2Qij Xi g 0 (mj (X)) + 2Q2ij g 00 (ξij )
for some {ξij }1≤i,j≤n , where g(t) = tanh(t). Thus fµ (X) − fµ (Xi ) can be
written as
n
o
1 Xn
2Xi
+
tanh mj Xi + µj − tanh (mj (X) + µj )
fµ (X) − fµ (X ) =
n
n
i
j=1
=
n
n
2Xi 2Xi X
2 X 2 00
−
Qij g 0 (mj (X)) +
Qij g (ξij )
n
n
n
j=1
j=1
Now setting pi (X) := PQ,µµ (Xi0 = −Xi |Xk , k 6= i) we have
1
v(X) := EQ,µµ |fµ (X) − fµ (X0 )k(XI − XI0 )| X
2
n
1X
=
|fµ (X) − fµ (Xi )|pi (X)
n
i=1
≤
+
≤
n
n
2 X
2 X
p
(X)
−
|Qij pi (X)g 0 (mj (X))|
i
n2
n2
2
n2
i=1
n
X
i,j=1
Q2ij g 00 (ξij )2 Xi pi (X)
i,j=1
2
2
+ 2
n n
sup
u,v∈[0,1]n
|u0 Qv| +
n
2 X 2
Qij ,
n2
i,j=1
where in the last line we use the fact that max(|g 0 (t)|, |g 00 (t)|) ≤ 1. The
proof of the Lemma is then completed by an application of Theorem 3.3 of
Chatterjee (2007b).
35
DETECTION THRESHOLDS FOR ISING MODELS
Proof of Lemma 2. Let Y := (Y_1, ..., Y_n) be i.i.d. random variables on {−1, 1} with P(Y_i = ±1) = 1/2, and let W := (W_1, ..., W_n) be i.i.d. N(0, 1). Also, for any t > 0 let Z(tQ, µ) denote the normalizing constant of the p.m.f.
\[
\frac{1}{Z(tQ, \mu)} \exp\Big(\frac{1}{2}\, x^\top (tQ)\, x + \mu^\top x\Big).
\]
Thus we have
\[
2^{-n} Z(tQ, \mu) = E\exp\Big(\frac{t}{2}\, Y^\top Q Y + \sum_{i=1}^n \mu_i Y_i\Big) \le E\exp\Big(\frac{t}{2}\, W^\top Q W + \sum_{i=1}^n \mu_i W_i\Big),
\]
where we use the fact that E Y_i^k ≤ E W_i^k for all positive integers k. Using a spectral decomposition, write Q = P^\top \Lambda P and set \nu := P\mu, \widetilde{W} = PW, to note that
\[
E\exp\Big(\frac{t}{2}\, W^\top Q W + \sum_{i=1}^n \mu_i W_i\Big)
= E\exp\Big(\frac{t}{2}\sum_{i=1}^n \lambda_i \widetilde{W}_i^2 + \sum_{i=1}^n \nu_i \widetilde{W}_i\Big)
= \prod_{i=1}^n \frac{e^{\nu_i^2 / (2(1 - t\lambda_i))}}{\sqrt{1 - t\lambda_i}}.
\]
Combining, for any t > 1 we have the bounds
\[
(25)\qquad 2^n \prod_{i=1}^n \cosh(\mu_i) = Z(0, \mu) \le Z(Q, \mu) \le Z(tQ, \mu) \le 2^n \prod_{i=1}^n \frac{e^{\nu_i^2/(2(1 - t\lambda_i))}}{\sqrt{1 - t\lambda_i}},
\]
where the lower bound follows on noting that log Z(tQ, µ) is monotone non-decreasing in t, using results about exponential families. Thus, invoking convexity of the function t ↦ log Z(tQ, µ), we have
\[
\frac{1}{2} E_{Q,\mu}\, X^\top Q X = \frac{\partial \log Z(tQ, \mu)}{\partial t}\bigg|_{t=1}
\le \frac{\log Z(tQ, \mu) - \log Z(Q, \mu)}{t - 1}
\le \sum_{i=1}^n \Big\{\frac{\nu_i^2}{2(1 - t\lambda_i)} - \log\cosh(\mu_i)\Big\} - \frac{1}{2}\sum_{i=1}^n \log(1 - t\lambda_i),
\]
where we use the bounds obtained in (25). Proceeding to bound the rightmost hand side above, set t = (1+\rho)/(2\rho) > 1 and note that
\[
|t \lambda_i| \le \frac{1 + \rho}{2} < 1.
\]
For x ∈ ½[−(1+ρ), (1+ρ)] ⊂ (−1, 1) there exists a constant γ_ρ < ∞ such that
\[
\frac{1}{1 - x} \le 1 + x + 2\gamma_\rho x^2, \qquad -\log(1 - x) \le x + 2\gamma_\rho x^2.
\]
Also, a Taylor's expansion gives
\[
-\log\cosh(x) \le -\frac{x^2}{2} + x^4,
\]
where we have used the fact that \|(\log\cosh(x))^{(4)}\|_\infty \le 1. These, along with the observations that
\[
\sum_{i=1}^n \lambda_i = \mathrm{tr}(Q) = 0, \qquad \sum_{i=1}^n \nu_i^2 = \|P\mu\|^2 = \|\mu\|^2,
\]
give the bound
\[
\sum_{i=1}^n \Big\{\frac{\nu_i^2}{2(1 - t\lambda_i)} - \log\cosh(\mu_i)\Big\} - \frac{1}{2}\sum_{i=1}^n \log(1 - t\lambda_i)
\le \Big\{\frac{1}{2}\sum_{i=1}^n \nu_i^2 + \frac{t}{2}\sum_{i=1}^n \nu_i^2\lambda_i + t^2\gamma_\rho \sum_{i=1}^n \nu_i^2\lambda_i^2\Big\}
+ \Big\{-\frac{1}{2}\sum_{i=1}^n \mu_i^2 + \sum_{i=1}^n \mu_i^4\Big\} + \gamma_\rho t^2 \sum_{i=1}^n \lambda_i^2
\]
\[
= \frac{t}{2}\,\mu^\top Q\mu + t^2\gamma_\rho\, \mu^\top Q^2\mu + \sum_{i=1}^n \mu_i^4 + \gamma_\rho t^2 \sum_{i,j=1}^n Q_{ij}^2
\le \frac{t}{2}\, C\rho\sqrt{n} + t^2\gamma_\rho C_\rho^2\sqrt{n} + C\sqrt{n} + \gamma_\rho t^2 D\sqrt{n},
\]
where D > 0 is such that \sum_{i,j=1}^n Q_{ij}^2 \le D\sqrt{n}. This along with (25) gives
\[
\Big[\frac{1}{2}C(1 + t\rho) + t^2\gamma_\rho C_\rho^2 + C + \gamma_\rho t^2 D\Big]\sqrt{n}
\ge \frac{1}{2} E_{Q,\mu}\, X^\top Q X
= \frac{1}{2} E_{Q,\mu}\sum_{i=1}^n X_i\, m_i(X).
\]
But, for some random (ξ_i, i = 1, ..., n),
\[
\frac{1}{2} E_{Q,\mu}\sum_{i=1}^n X_i\, m_i(X)
= \frac{1}{2} E_{Q,\mu}\sum_{i=1}^n \tanh(m_i(X) + \mu_i)\, m_i(X)
= \frac{1}{2} E_{Q,\mu}\sum_{i=1}^n \tanh(m_i(X))\, m_i(X) + \frac{1}{2} E_{Q,\mu}\sum_{i=1}^n \mu_i\, m_i(X)\, \mathrm{sech}^2(\xi_i).
\]
Now,
\[
\frac{1}{2} E_{Q,\mu}\sum_{i=1}^n \tanh(m_i(X))\, m_i(X) \ge \frac{\eta}{2}\, E_{Q,\mu}\sum_{i=1}^n m_i(X)^2,
\qquad \text{where } \eta := \inf_{|x| \le 1} \frac{\tanh(x)}{x} > 0.
\]
The desired conclusion of the lemma follows by noting that
\[
E_{Q,\mu}\sum_{i=1}^n \mu_i\, m_i(X)\, \mathrm{sech}^2(\xi_i) \le C\sqrt{n}.
\]
Proof of Lemma 3. We begin with Part (a). By simple algebra, the p.m.f. of X can be written as
\[
P_{\theta,\mu}(X = x) \propto \exp\Big\{\frac{n\theta}{2}\,\bar{x}^2 + \sum_{i=1}^n x_i\mu_i\Big\}.
\]
Consequently, the joint density of (X, Z_n) with respect to the product measure of counting measure on {−1, 1}^n and Lebesgue measure on R is proportional to
\[
\exp\Big\{\frac{n\theta}{2}\,\bar{x}^2 + \sum_{i=1}^n x_i\mu_i - \frac{n\theta}{2}(z - \bar{x})^2\Big\}
= \exp\Big\{-\frac{n\theta}{2}\,z^2 + \sum_{i=1}^n x_i(\mu_i + z\theta)\Big\}.
\]
Part (a) follows from the expression above.

Now consider Part (b). Using the joint density of Part (a), the marginal density of Z_n is proportional to
\[
\sum_{x\in\{-1,1\}^n} \exp\Big\{-\frac{n\theta}{2}\,z^2 + \sum_{i=1}^n x_i(\mu_i + z\theta)\Big\}
= \exp\Big\{-\frac{n\theta}{2}\,z^2 + \sum_{i=1}^n \log\cosh(\mu_i + z\theta)\Big\} = e^{-f_{n,\mu}(z)},
\]
thus completing the proof of Part (b).

Finally, consider Part (c). By Part (a), given Z_n = z the random variables (X_1, ..., X_n) are independent, with
\[
P_{\theta,\mu}(X_i = 1 \mid Z_n = z) = \frac{e^{\mu_i + \theta z}}{e^{\mu_i + \theta z} + e^{-\mu_i - \theta z}},
\]
and so
\[
E_{\theta,\mu}(X_i \mid Z_n = z) = \tanh(\mu_i + \theta z), \qquad \mathrm{Var}_{\theta,\mu}(X_i \mid Z_n = z) = \mathrm{sech}^2(\mu_i + \theta z).
\]
Thus for any µ ∈ [0, ∞)^n we have
\[
E_{\theta,\mu}\Big\{\sum_{i=1}^n \big(X_i - \tanh(\mu_i + \theta Z_n)\big)^2\Big\}
= E_{\theta,\mu}\Big[E_{\theta,\mu}\Big\{\sum_{i=1}^n \big(X_i - \tanh(\mu_i + \theta Z_n)\big)^2 \,\Big|\, Z_n\Big\}\Big]
= E_{\theta,\mu}\sum_{i=1}^n \mathrm{sech}^2(\mu_i + \theta Z_n) \le n.
\]
Proof of Lemma 4. We begin with Part (a). Since
\[
f''_{n,\mu}(z) = \sum_{i=1}^n \tanh^2(z + \mu_i)
\]
is strictly positive for all but at most one z ∈ R, the function z ↦ f_{n,µ}(z) is strictly convex with f_{n,µ}(±∞) = ∞; it follows that z ↦ f_{n,µ}(z) has a unique minimum m_n, which is the unique root of the equation f'_{n,µ}(z) = 0. The fact that m_n is positive follows on noting that
\[
f'_{n,\mu}(0) = -\sum_{i=1}^n \tanh(\mu_i) < 0, \qquad f'_{n,\mu}(+\infty) = \infty.
\]
Also, f'_{n,µ}(m_n) = 0 gives
\[
m_n = \frac{1}{n}\sum_{i=1}^n \tanh(m_n + \mu_i) \le 1,
\]
and so m_n ∈ (0, 1]. Finally, f'_{n,µ}(m_n) = 0 can be written as
\[
m_n - \tanh(m_n) = \frac{s}{n}\big[\tanh(m_n + B) - \tanh(m_n)\big] \ge C\,\frac{s}{n}\tanh(B),
\]
for some C > 0, which proves Part (a).

Now consider Part (b). A Taylor's series expansion around m_n, together with the fact that f''_n(z) is strictly increasing on (0, ∞), gives
\[
f_n(z) \ge f_n(m_n) + \tfrac{1}{2}(z - m_n)^2 f''_n(m_n + Kn^{-1/4}) \quad \text{for all } z \in [m_n + Kn^{-1/4}, \infty),
\]
\[
f_n(z) \le f_n(m_n) + \tfrac{1}{2}(z - m_n)^2 f''_n(m_n + Kn^{-1/4}) \quad \text{for all } z \in [m_n, m_n + Kn^{-1/4}].
\]
Setting b_n := f''_n(m_n + Kn^{-1/4}), this gives
\[
P_{\theta,\mu}(Z_n > m_n + Kn^{-1/4})
= \frac{\int_{m_n + Kn^{-1/4}}^{\infty} e^{-f_n(z)}\,dz}{\int_{\mathbb{R}} e^{-f_n(z)}\,dz}
\le \frac{\int_{m_n + Kn^{-1/4}}^{\infty} e^{-\frac{b_n}{2}(z - m_n)^2}\,dz}{\int_{m_n}^{m_n + Kn^{-1/4}} e^{-\frac{b_n}{2}(z - m_n)^2}\,dz}
= \frac{P\big(N(0,1) > Kn^{-1/4}\sqrt{b_n}\big)}{P\big(0 < N(0,1) < Kn^{-1/4}\sqrt{b_n}\big)},
\]
from which the desired conclusion will follow if we can show that \liminf_{n\to\infty} n^{-1/2} b_n > 0. But this follows on noting that
\[
n^{-1/2} b_n = n^{-1/2} f''_n(m_n + Kn^{-1/4}) \ge \sqrt{n}\,\tanh^2(Kn^{-1/4}) = K^2\,\Theta(1).
\]
Finally, let us prove Part (c). By a Taylor's series expansion about δm_n and using the fact that f_n(·) is convex with a unique global minimum at m_n, we have
\[
f_n(z) \ge f_n(m_n) + (z - \delta m_n)\, f'_n(\delta m_n), \qquad \forall z \in (-\infty, \delta m_n].
\]
Also, as before, we have
\[
f_n(z) \le f_n(m_n) + \tfrac{1}{2}(z - m_n)^2 f''_n(m_n), \qquad \forall z \in [m_n, 2m_n].
\]
Thus, with c_n := f''_n(2m_n), for any δ > 0 we have
\[
(26)\qquad P_{\theta,\mu}(Z_n \le \delta m_n)
= \frac{\int_{-\infty}^{\delta m_n} e^{-f_n(z)}\,dz}{\int_{\mathbb{R}} e^{-f_n(z)}\,dz}
\le \frac{\int_{-\infty}^{\delta m_n} e^{-(z - \delta m_n) f'_n(\delta m_n)}\,dz}{\int_{m_n}^{2m_n} e^{-\frac{c_n}{2}(z - m_n)^2}\,dz}
= \frac{\sqrt{c_n/(2\pi)}}{|f'_n(\delta m_n)|\, P(0 < Z < m_n\sqrt{c_n})}.
\]
To bound the rightmost hand side of (26), we claim that the following estimates hold:
\[
(27)\qquad c_n = \Theta(n m_n^2),
\]
\[
(28)\qquad n m_n^3 = O\big(|f'_n(\delta m_n)|\big).
\]
Given these two estimates, we immediately have
\[
(29)\qquad m_n\sqrt{c_n} = \Theta(m_n^2\sqrt{n}) \ge \Theta(A_n^{2/3}\sqrt{n}) \to \infty,
\]
as A_n \gg n^{-3/4} by assumption. Thus the rightmost hand side of (26) can be bounded by
\[
\frac{m_n\sqrt{n}}{n m_n^3} = \frac{1}{m_n^2\sqrt{n}} \to 0,
\]
where the last conclusion uses (29). This completes the proof of Part (c).

It thus remains to prove the estimates (27) and (28). To this effect, note that
\[
f''_n(2m_n) = \sum_{i=1}^n \tanh^2(2m_n + \mu_i)
\le \sum_{i=1}^n \big[\tanh(2m_n) + C_1\tanh(\mu_i)\big]^2
\le 2n\tanh^2(2m_n) + 2C_1^2\sum_{i=1}^n \tanh^2(\mu_i)
\lesssim n m_n^2 + nA(\mu_n) \lesssim n m_n^2,
\]
where the last step uses Part (a), and C_1 < ∞ is a universal constant. This gives the upper bound in (27). For the lower bound of (27) we have
\[
f''_n(2m_n) = \sum_{i=1}^n \tanh^2(2m_n + \mu_i) \ge n\tanh^2(2m_n) \gtrsim n m_n^2.
\]
Turning to prove (28), we have
\[
|f'_n(\delta m_n)| = \Big|\sum_{i=1}^n \tanh(\delta m_n + \mu_i) - n\delta m_n\Big|
= \Big|\sum_{i=1}^n \big[\tanh(\delta m_n + \mu_i) - \tanh(\delta m_n)\big] - n\big[\delta m_n - \tanh(\delta m_n)\big]\Big|
\ge C_2\, nA(\mu_n) - C_3\, n\delta^3 m_n^3 \gtrsim n m_n^3,
\]
where δ is chosen small enough, and C_2 > 0, C_3 < ∞ are universal constants. This completes the proof of (28), and hence completes the proof of the lemma.
Division of Biostatistics,
Haviland Hall, Berkeley, CA 94720.
E-mail: [email protected]

Department of Statistics,
1255 Amsterdam Avenue,
New York, NY 10027.
E-mail: [email protected]
E-mail: [email protected]
| 1 |
A method for computing the acoustic radiation of complex structures

Marianne Viallet*,** — Gérald Poumérol* — Olivier Dessombz** — Louis Jézéquel**

* PSA Peugeot Citroën, Direction des Plateformes Techniques et des Achats
Centre Technique de la Garenne Colombes — Case courrier LG098
18, rue des Fauvelles, 92256 La Garenne Colombes cedex
[email protected]

** Laboratoire de Tribologie et Dynamique des Systèmes, École Centrale de Lyon
CNRS UMR 5513
36, avenue Guy de Collongues, 69134 Ecully cedex
RÉSUMÉ. In the automotive industry, predicting the radiated noise is an important step of the design process. To solve acoustic problems, there are mainly two families of methods: Finite Element Methods (FEM) and Boundary Element Methods (BEM). To compute acoustic radiation in the free field, boundary elements are generally preferred. However, these methods can introduce singularities and are therefore less easy to use than finite elements, which are better suited to the study of bounded domains. The method described in this article, the SDM, takes advantage of both approaches by using each one where it performs best. A new finite-element-based method is also presented, which advantageously replaces boundary elements for the treatment of the exterior problem. The efficiency of the SDM coupled with this new method is discussed.

ABSTRACT. In the automotive industry, predicting noise during the design cycle is a necessary step. Well-known methods exist to address this issue in the low-frequency domain. Among these, Finite Element Methods, adapted to closed domains, are quite easy to implement, whereas Boundary Element Methods are better suited to infinite domains but may induce singularity problems. In this article, the described method, the SDM, allows both methods to be used in their best application domain. A new method is also presented to solve the SDM exterior problem. Instead of using Boundary Element Methods, an original use of Finite Elements is made. The efficiency of this new version of the Substructure Deletion Method is discussed.

KEYWORDS: SDM, FEM, BEM, acoustic radiation, geometrically complex structures, cloning algorithm, infinite substructuring.
1. Presentation of the SDM method, or subtractive substructuring

In the automotive industry, predicting the radiated noise is a necessary step during the design cycle. Indeed, in addition to regulatory standards, vehicles must meet the expectations of future passengers. To this end, and in order to reduce rework costs, which are higher at the end of the design process than at the beginning, it is useful to have methods suited to the preliminary design stage. We have therefore proposed to use the Substructure Deletion Method (SDM), also called subtractive substructuring. This method, introduced at the end of the 1970s for civil engineering, computes the dynamic response of a buried foundation under seismic loading by decomposing the problem into two simpler ones (Dasgupta, 1980) (Betti et al., 1994). To solve this problem, one computes the impedance of the non-excavated soil at the location of the foundation, and then the impedance associated with the volume of soil to be removed in order to place the foundation. By recombining the two matrices thus obtained, one computes the impedance associated with the foundation, from which the displacements induced by the seismic excitation can be recovered. The principle is the same in acoustics: the complex geometry is enclosed within a regular surface, and the two resulting problems are then treated separately (Viallet et al., 2006a) (Viallet et al., 2006b).

Figure 1. Decomposition of a problem with complex geometry into two simpler problems
2. Presentation of the cloning algorithm

In order to replace the use of boundary elements for solving the infinite problem, we propose a new iterative method: a cloning algorithm, also called infinite substructuring. The concept was introduced for civil engineering at the end of the 1970s (Dasgupta, 1982), (Dasgupta, 1984) and taken up again in the early 1990s (Benembarek, 1993) (Wolf, 2003).

The method consists in dividing the infinite space into an infinite number of finite sub-elements. These cells grow larger as one moves towards infinity, following a similarity ratio g, as illustrated in Figure 2. Only the first cell Ω₁ needs to be studied in order to compute the impedance of the boundary ∂Ω₁ and, consequently, to solve the exterior problem.

Consider the domain Ω₁. The Helmholtz equation in this volume, written with the Finite Element formulation, gives the following expression:
\[
S^{(1)} p^{(1)} = \rho_0\, C\, v_n^{(1)} = C\, v^{(1)}, \qquad \text{with } S^{(1)} = K^{(1)} - \omega^2 M^{(1)} \qquad [1]
\]
\[
\begin{bmatrix} S_{11}(\omega) & S_{12}(\omega) \\ S_{21}(\omega) & S_{22}(\omega) \end{bmatrix}
\begin{Bmatrix} p_1 \\ p_2 \end{Bmatrix}
=
\begin{Bmatrix} C\, v_i^{(i)} \\ C\, v_{i+1}^{(i)} \end{Bmatrix}
\qquad [2]
\]

Figure 2. Division of space into similar finite elements

We seek the impedance matrix at infinity D^{(1)} such that v_{n1}^{(1)} = D^{(1)} p_1 on ∂Ω₁. Similarly, D^{(2)} is defined by v_{n2}^{(2)} = −v_{n2}^{(1)} = D^{(2)} p_2. System [2] then becomes:
\[
S_{11}(\omega)\, p_1 + S_{12}(\omega)\, p_2 = D^{(1)} p_1, \qquad
S_{21}(\omega)\, p_1 + S_{22}(\omega)\, p_2 = -D^{(2)} p_2 \qquad [3]
\]
Let K be the dimension of the space, K = 2 in 2D and K = 3 in 3D. With suitable changes of variables that make the various quantities dimensionless, one obtains:
\[
Z_{11}(\omega)\, P_1 + g^{-1} Z_{12}(\omega)\, P_2 = X^{(1)} P_1, \qquad
g\, Z_{21}(\omega)\, P_1 + Z_{22}(\omega)\, P_2 = -g^{K-2} X^{(2)} P_2 \qquad [4]
\]
The similarity ratio is chosen such that g ≈ 1, which allows one to write X^{(1)} ≈ X^{(2)}; by a linear combination of the two equations of [4], one arrives at the following equation, whatever the dimensionless pressures P₁ and P₂:
\[
X = Z_{11} - Z_{12}\big[g^{K-2} X + Z_{22}\big]^{-1} Z_{21} \qquad [5]
\]
A second change of variables leads to the new relation:
\[
Y\,Y - Q\,Y + R = [0] \qquad [6]
\]
We set Y ψᵢ = λᵢ ψᵢ, where λᵢ and ψᵢ are respectively the eigenvalues and eigenvectors of the matrix Y. This leads to the quadratic equation [7], which is solved by passing to state space:
\[
\lambda_i^2 \psi_i + \lambda_i\, Q\, \psi_i + R\, \psi_i = \{0\} \qquad [7]
\]
One obtains a set of 2n values λᵢ and 2n vectors ψᵢ. Only half of them have a physical meaning, and only those that satisfy the boundary conditions of the problem must be selected. One then recovers the matrix Y, then the matrix X, and finally the matrix D^{(1)}.
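As an illustration of the state-space step used to solve [7], the following Python sketch (ours, not part of the original article) linearizes the quadratic eigenvalue problem λ²ψ + λQψ + Rψ = 0 into a standard eigenvalue problem for a 2n × 2n companion matrix; the matrices Q and R below are random placeholders, and the physical selection of the n admissible eigenpairs is only indicated in a comment.

```python
import numpy as np

def quadratic_eig(Q, R):
    """Solve lambda^2 psi + lambda Q psi + R psi = 0 by companion linearization.

    The companion matrix acts on the stacked vector [psi, lambda*psi]:
        [ 0   I ] [ psi        ]            [ psi        ]
        [-R  -Q ] [ lambda*psi ] = lambda * [ lambda*psi ]
    """
    n = Q.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-R,               -Q]])
    lam, V = np.linalg.eig(A)
    psi = V[:n, :]              # upper block holds the eigenvectors psi
    return lam, psi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 4
    Q = rng.standard_normal((n, n))   # placeholders for the dimensionless blocks
    R = rng.standard_normal((n, n))
    lam, psi = quadratic_eig(Q, R)
    # Residual check: each eigenpair should satisfy the quadratic pencil.
    res = max(np.linalg.norm((l ** 2) * v + l * (Q @ v) + R @ v)
              for l, v in zip(lam, psi.T))
    print("number of eigenvalues (2n):", lam.size, "max residual:", res)
    # In the cloning algorithm, only the n eigenpairs compatible with the
    # radiation boundary conditions (e.g. decaying solutions) would be kept.
```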
3. Results

3.1. Validation of the cloning algorithm on a simple case

The method was applied in 2D to a very simple OPS test (One Point Source test). The principle is to place a source inside the volume bounded by the surface ∂Ω₁. The pressure and the pressure gradient at any point of space are then easily computed from the Green's solution. The Green pressure gradient ∇G is then used as the boundary condition of the problem, and one attempts to recover the analytically computed Green pressure G. The cloning algorithm was tested and validated on a 2D circular structure of radius R = 55 cm. Only the first volume around the structure needs to be meshed to solve the acoustic radiation problem. Several similarity ratios g were tested, as can be seen in Figure 3. The closer g is to 1, the closer the results are to the analytical response.

Figure 3. OPS test results for different values of gamma
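For reference, the analytical quantities used in such an OPS test can be evaluated directly. The short Python sketch below (ours, and assuming the usual e^{-iωt} time convention) computes the 2D free-field Green's function G(r) = (i/4) H₀⁽¹⁾(kr) of the Helmholtz equation together with its radial gradient, which would serve respectively as the reference pressure and the imposed boundary condition.

```python
import numpy as np
from scipy.special import hankel1

def green_2d(k, r):
    """2D free-field Green's function of the Helmholtz equation, G = (i/4) H0^(1)(k r)."""
    return 0.25j * hankel1(0, k * r)

def green_2d_radial_gradient(k, r):
    """Radial derivative dG/dr = -(i k / 4) H1^(1)(k r), since d/dz H0^(1)(z) = -H1^(1)(z)."""
    return -0.25j * k * hankel1(1, k * r)

if __name__ == "__main__":
    c0 = 340.0                      # speed of sound [m/s]
    f = 500.0                       # frequency [Hz]
    k = 2 * np.pi * f / c0          # wavenumber
    r = np.linspace(0.55, 5.0, 5)   # distances from the point source [m]
    print("|G| (reference pressure):       ", np.abs(green_2d(k, r)))
    print("|dG/dr| (imposed boundary data):", np.abs(green_2d_radial_gradient(k, r)))
```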
3.2. Use of the cloning algorithm within the SDM

The SDM is applied to a simple case, namely a rectangular contour whose radiation is to be determined. This contour is enclosed within a circular contour. A sinusoidal excitation is applied to the nodes of the fluid mesh; it is used to simulate mode shapes of the structure.

Figure 4. Pressure level, in dB, at 6 points distributed over the structure, as a function of frequency

The results of the SDM using boundary elements for the exterior problem are compared with those of the SDM using the cloning algorithm. Overall, the behaviour is reproduced fairly well. Peaks appear which correspond to the eigenmodes of the ring and which pollute the response.
4. Conclusion

We have presented a method for computing the acoustic radiation of complex structures. It is particularly well suited to comparing different geometries with the same overall dimensions, in which case it saves a considerable amount of time: instead of redoing the whole computation each time, it suffices to keep the impedance at infinity of the enclosing volume and to study the remaining volume by finite elements. The more architectures are compared, the more advantageous the method becomes.

Moreover, the use of the cloning algorithm reduces the computation time compared with a classical BEM.

These advantages make the SDM, used together with the cloning algorithm, a method very well suited to preliminary design studies.
5. Bibliography

Benembarek A., Réponse dynamique d'une fondation à géométrie arbitraire dans un milieu semi-infini et contrôle actif des structures avec prise en compte du couplage sol-structure, doctoral thesis, École Centrale de Lyon, 1993.
Betti R., Abdel-Ghaffar A.M., "Analysis of embedded foundations by Substructure Deletion Method", Journal of Engineering Mechanics, 120(6): 1283-1303, 1994. American Society of Civil Engineers.
Dasgupta G., "Foundation impedance matrix by substructure deletion", Journal of the Engineering Mechanics Division, 106(3): 517-524, 1980. American Society of Civil Engineers.
Dasgupta G., "A finite element formulation for unbounded homogeneous continua", Journal of Applied Mechanics, 49(1): 136-140, 1982.
Dasgupta G., "Computation of exterior potential fields by infinite substructuring", Computer Methods in Applied Mechanics and Engineering, 46(3): 295-305, 1984.
Viallet M., Poumérol G., Dessombz O., Jézéquel L., "Hybrid acoustical method for solving exterior problems", Proceedings of Euronoise, Tampere, May 30 - June 1, 2006.
Viallet M., Poumérol G., Sauvage O., Dessombz O., Jézéquel L., "Acoustical method for radiation calculation of complex structures", Proceedings of ISMA, Leuven, September 18-20, 2006.
Wolf J.P., The Scaled Boundary Finite Element Method, Hoboken, NJ: J. Wiley, 2003.
| 5 |
Empirical priors for targeted posterior concentration rates

arXiv:1604.05734v3 [math.ST] 1 May 2017

Ryan Martin
Department of Statistics
North Carolina State University
[email protected]

Stephen G. Walker
Department of Mathematics
University of Texas at Austin
[email protected]

May 2, 2017
Abstract

In high-dimensional Bayesian applications, choosing a prior such that the corresponding posterior distribution has desired asymptotic concentration properties can be challenging. This paper develops a general strategy for constructing empirical or data-dependent priors whose corresponding posterior distributions have targeted, often optimal, concentration rates. In essence, the idea is to place a prior which has sufficient mass on parameter values for which the likelihood is suitably large. This makes the asymptotic properties of the posterior less sensitive to the shape of the prior which, in turn, allows users to work with priors of convenient forms while maintaining the desired posterior concentration rates. General results on both adaptive and non-adaptive rates based on empirical priors are presented, along with illustrations in density estimation, nonparametric regression, and high-dimensional structured normal models.

Keywords and phrases: Adaptation; data-dependent prior; density estimation; empirical Bayes; nonparametric regression.
1 Introduction

1.1 Background

Current theoretical research on Bayesian methods is largely concerned with finding posterior concentration rates. To set the scene, if Πn denotes a posterior distribution for some parameter θ in a metric space (Θ, d), with true value θ⋆, the goal is to find the most rapidly vanishing sequence εn such that, for a constant M > 0,
\[
E_{\theta^\star}\big[\Pi^n(\{\theta : d(\theta^\star, \theta) > M\varepsilon_n\})\big] \to 0, \qquad n \to \infty. \tag{1}
\]
The traditional setting involves independent and identically distributed (i.i.d.) observations and θ is a density function with d being the Hellinger or L1 metric; see Ghosal et al. (2000) and Walker et al. (2007). Results for the non-i.i.d. case are developed in Ghosal and van der Vaart (2007).
In the classical Bayesian framework, especially in high- or infinite-dimensional models, the prior must be controlled very carefully—roughly, the prior tails can be neither too fat nor too thin—because it completely determines the attainable concentration rate. One idea of current interest is the use of generic data-dependent measures. These are probability measures driven by the data and not necessarily the result of a Bayesian prior-to-posterior construction; see, e.g., Belitser (2016). Here our focus is on data-dependent measures arising from an empirical Bayes approach, where the posterior is obtained by passing an empirical or data-dependent prior through the likelihood function via Bayes's formula. The classical empirical Bayes approach starts with a family of priors indexed by a parameter γ, i.e., Π(dθ | γ), and then estimates γ based on the data. This is typically done by finding γ̂ to maximize the marginal likelihood, ∫ Ln(θ) Π(dθ | γ), where Ln(θ) denotes the likelihood function. The corresponding posterior has a simple form, namely, Πn(dθ) ∝ Ln(θ) Π(dθ | γ̂), but demonstrating that it has desirable asymptotic concentration properties is a non-trivial exercise (e.g., Donnet et al. 2014; Rousseau and Szabó 2016). For more on empirical Bayes, see Efron (2014).

There is no particularly compelling justification for this classical empirical Bayes approach; so why not consider an alternative where the choice of data-dependent prior is motivated specifically by the properties one wants the posterior to have? Hence, our goal here is to redefine the idea of empirical Bayes, and we propose a more poignant choice of empirical prior designed specifically so that the corresponding posterior distribution achieves the desired concentration rate properties.
1.2 Main contributions
Martin and Walker (2014) and Martin et al. (2015) recently employed a new type of empirical Bayes procedure in two structured high-dimensional Gaussian linear models and obtained optimal minimax posterior concentration rates. Their main idea is to suitably center the prior around a good estimator of the parameter, a relatively straightforward task for these normal linear models. An important practical consequence is that the computationally convenient normal priors, which have been shown to be suboptimal in these problems in a classical Bayesian context (e.g., Castillo and van der Vaart 2012, Theorem 2.8), do actually meet the conditions for optimality in this new empirical Bayes context. It is not clear, however, if this strategy of prior centering can be applied to cases beyond these normal linear models. In this paper, we develop a general framework for this new kind of empirical Bayes approach, with supporting theory.

A benefit of this general framework is its simplicity and versatility, i.e., that the conditions are relatively easy to check for standard prior types and that the same techniques can be used for a wide range of models and/or true parameters. For example, the proposed approach can handle models that involve mixtures of light- and heavy-tailed kernels (Section 4.3), something that apparently the existing Bayesian nonparametric machinery currently cannot do (e.g., Kruijer et al. 2010, p. 1229). Shape-restricted problems, such as monotone density estimation, discussed in Salomond (2014), are another situation where the standard Bayesian machinery is not fully satisfactory, but the methods presented herein can be applied directly.
To motivate the use of our empirical priors in particular, it helps to recall one of the essential parts of proofs of posterior concentration rates for standard Bayesian posteriors. Suppose we have data X^n with a joint density p^n_θ, depending on a parameter θ; high- and infinite-dimensional parameters are our main focus in this paper but, to keep the present discussion simple, we consider θ to be finite-dimensional. If εn is the target concentration rate, then it is typical to consider a "neighborhood" of the true θ⋆ of the form
\[
\big\{\theta : K(p^n_{\theta^\star}, p^n_\theta) \le n\varepsilon_n^2,\; V(p^n_{\theta^\star}, p^n_\theta) \le n\varepsilon_n^2\big\}, \tag{2}
\]
where K is the Kullback–Leibler divergence and V is the corresponding second moment; see Section 1.4. A crucial step in proving that the posterior attains the target concentration rate is to demonstrate that the prior allocates a sufficient amount of mass to the set in (2). If the prior could be suitably centered at θ⋆, then this prior concentration would be trivial. The difficulty, of course, is that θ⋆ is unknown, so care is needed to construct a prior satisfying this prior concentration property uniformly on a sufficiently wide range of θ⋆. In fact, this placement of prior mass is known to be problematic in the usual Bayesian proofs and is one key reason why a number of examples, such as heavy-tailed density estimation, are particularly challenging.

As an alternative, consider the following "empirical version" of (2),
\[
L_n = \{\theta \in \Theta : L_n(\theta) \ge e^{-n\varepsilon_n^2} L_n(\hat\theta_n)\},
\]
where Ln(θ) is the likelihood function based on X^n and θ̂n is a maximum likelihood estimator. This is effectively a neighborhood of θ̂n, which is known, so it is straightforward to construct a prior to assign a sufficient amount of mass to Ln. The catch is that a prior satisfying this mass condition would be data-dependent, or empirical, since it must be appropriately centered at θ̂n. One can proceed to construct a corresponding empirical Bayes posterior by combining this empirical prior with the likelihood via Bayes's formula. If θ̂n behaves badly, then the empirical prior-to-posterior update can correct for it, provided certain conditions are satisfied. Our key observation is that an empirical prior that allocates a sufficient amount of mass to Ln is easy to arrange in practice (see Section 4) and is a significant step towards proving concentration rate results for the corresponding empirical Bayes posterior.

Our aim is to put sufficient mass around the maximum likelihood estimator in the prior, in fact the maximum allowed up to a constant, which ensures the target rate for the posterior. Future work will look at how to set the constant in order to match posterior credible regions with confidence regions, for example.
While the attainable posterior concentration rate is determined by the prior, the targeted rate depends on the true value θ⋆ of the parameter in some way. For example, in a nonparametric regression problem, the optimal rate will depend on the smoothness of the true regression function. If this smoothness is known, then it is possible to tune the prior so that the attainable and targeted rates agree. However, if the smoothness is unknown, as is often the case, the prior cannot make direct use of this information, so one needs to make the prior more flexible so that it can adapt to the unknown concentration rate. Adaptive posterior concentration rate results have received considerable attention in the recent literature, see van der Vaart and van Zanten (2009), Kruijer et al. (2010), Arbel et al. (2013), Scricciolo (2015), and Shen and Ghosal (2015); the common denominator in all this work is that the prior should be a mixture over an appropriate model complexity index. The empirical prior approach described above can readily handle this modification, and we provide general sufficient conditions for adaptive empirical Bayes posterior concentration.
1.3 Outline of the paper
In Section 2 we introduce the notion of an empirical prior and present the conditions
needed for the corresponding posterior distribution to concentrate at the true parameter
value at a particular rate. This discussion is split into two parts, depending on whether the
target rate is known or unknown. A toy example is given that shows the conditions of the
theorems are not unreasonable. Section 3 presents the proofs of the two main theorems,
and a take-away point is that the arguments are quite straightforward, suggesting that
the particular empirical prior construction is indeed very natural. Several examples are
presented in Section 4, starting from a relatively simple parametric problem and ending
with a challenging adaptive nonparametric density estimation problem. We conclude, in
Section 5, with a brief discussion. Details for the examples are in the Appendix.
1.4 Notation
Suppose that data X n , indexed by n ≥ 1, not necessarily independent or i.i.d., have
a joint distribution with density pnθ , indexed by a parameter θ ∈ Θ, possibly high- or
infinite-dimensional. Write Ln (θ) = pnθ (X n ) for the likelihood function. If Πn is a prior
distribution, possibly depending on data, supported on a subset Θn ⊆ Θ, and having a
density πn with respect to some non-data-dependent dominating measure νn on Θn , then
Bayes’s formula gives the posterior distribution
\[
\Pi^n(A) = \frac{\int_A L_n(\theta)\,\pi_n(\theta)\,\nu_n(d\theta)}{\int_{\Theta_n} L_n(\theta)\,\pi_n(\theta)\,\nu_n(d\theta)}, \qquad A \subseteq \Theta_n.
\]
Typically, νn would be Lebesgue or counting measure, depending on the structure of
Θn . For our theoretical analysis, if θ⋆ is the true value of the parameter, then it will be
convenient to rewrite the posterior distribution as
\[
\Pi^n(A) = \frac{N_n(A)}{D_n} = \frac{\int_A R_n(\theta)\,\pi_n(\theta)\,\nu_n(d\theta)}{\int_{\Theta_n} R_n(\theta)\,\pi_n(\theta)\,\nu_n(d\theta)}, \qquad A \subseteq \Theta_n, \tag{3}
\]
where Rn (θ) = Ln (θ)/Ln (θ⋆ ) is the likelihood ratio, and Nn (·) and Dn denote the numerator and denominator of the ratio, respectively. Some minor modification to this familiar
form will be considered in Section 2.2.
Our results below will establish convergence rates for the posterior Πn relative to the
Hellinger distance on the set of joint densities {pnθ : θ ∈ Θ}. Recall that the Hellinger
distance between two densities, say, f and g, with dominating measure µ, is given by
\[
H^2(f, g) = \int (f^{1/2} - g^{1/2})^2\, d\mu.
\]
We say that the posterior distribution has (Hellinger) concentration rate (at least) εn at
θ⋆ if Eθ⋆ {Πn (AM εn )} → 0 as n → ∞, where
\[
A_{M\varepsilon_n} = \big\{\theta : H^2(p^n_{\theta^\star}, p^n_\theta) > 1 - e^{-M^2 n\varepsilon_n^2}\big\}
\]
and M > 0 is a sufficiently large constant. This particular set can be related to some
other more familiar types of neighborhoods in certain cases; see Section 4 for details.
For example, consider the typical i.i.d. setup, so that pnθ is just a product of marginal
densities pθ . In this case,
\[
H^2(p^n_{\theta^\star}, p^n_\theta) = 1 - \{1 - H^2(p_{\theta^\star}, p_\theta)\}^n,
\]
so H(p_{θ⋆}, p_θ) > Mεn implies H²(p^n_{θ⋆}, p^n_θ) > 1 − e^{−M²nε_n²}. Therefore, in the i.i.d. case, if
Πn (AM εn ) vanishes, then so does the posterior probability of {θ : H(pθ⋆ , pθ ) > Mεn }, so
that εn is the usual Hellinger concentration rate.
In addition to the Hellinger distance, we will also have a need for the Kullback–Leibler
divergence, K, and the corresponding second moment, V , given by
\[
K(f, g) = \int \log(f/g)\, f\, d\mu \qquad \text{and} \qquad V(f, g) = \int \log^2(f/g)\, f\, d\mu.
\]
Sieves will play an important role in our prior construction and analysis. According to
Grenander (1981) and Geman and Hwang (1982), a sieve is simply an increasing sequence
of (finite-dimensional) subsets of the parameter space. We will denote these generically
by Θn . Care is needed in choosing the sieves to have the appropriate approximation
properties; see Conditions S1 and S2 in Section 2 and the examples in Section 4. We will
let θ̂n denote a sieve maximum likelihood estimator (MLE), i.e., θ̂n = arg maxθ∈Θn Ln (θ).
An important subset of Θn is the "neighborhood" of the sieve MLE alluded to above, i.e.,
\[
L_n = \big\{\theta \in \Theta_n : L_n(\theta) \ge e^{-d n\varepsilon_n^2} L_n(\hat\theta_n)\big\}, \qquad d > 0. \tag{4}
\]
Finally, we write $\Delta(S) = \{(\theta_1, \ldots, \theta_S) : \theta_s \ge 0, \sum_{s=1}^S \theta_s = 1\}$ for the S-dimensional probability simplex, 1(·) for the indicator function, "$\lesssim$" for inequality up to a universal constant, |A| for the cardinality of a finite set A, and, for a number p > 1, we say that q = p/(p − 1) is the Hölder conjugate of p in the sense that $p^{-1} + q^{-1} = 1$.
2 Empirical priors and posterior concentration

2.1 Known target rate
For our first case, suppose that the target rate, εn , is known. That is, the feature of θ⋆
that determines the desired rate, e.g., the smoothness of the true regression function, is
known. In such cases, we can make use of the known target rate to design an appropriate
sieve on which to construct an empirical prior. Condition S1 below concerns the sieve’s
approximation properties, and is familiar in the posterior concentration literature.
Condition S1. There exists an increasing sequence of subsets Θn of Θ and a deterministic
sequence θ† = θn† in Θn such that
\[
\max\big\{K(p^n_{\theta^\star}, p^n_{\theta^\dagger}),\; V(p^n_{\theta^\star}, p^n_{\theta^\dagger})\big\} \le n\varepsilon_n^2, \qquad \text{all large } n.
\]
Remark 1. The sequence θ† = θn† in Condition S1 can be interpreted as “pseudo-true”
parameter values in the sense that n−1 K(pnθ⋆ , pnθ† ) → 0. In the case that the sieves
eventually contain θ⋆ , then θ† is trivial. However, there will be examples where the model
does not include the true distribution, in which case, identifying θ† is more challenging.
Fortunately, appropriate sieves are already known in many of the key examples.
Remark 2. An important consequence of Condition S1, that will be used in the proof
of our main theorems, is a bound on the likelihood ratio Rn (θ̂n ) at the sieve MLE, in
particular, there exists a constant c > 1 such that $R_n(\hat\theta_n) \ge e^{-cn\varepsilon_n^2}$ with Pθ⋆-probability
converging to 1. See Lemma 8.1 in Ghosal et al. (2000).
The next two conditions, namely, Conditions LP1 and GP1 below, are conditions on
the prior. The first, a local prior condition, formally describes how the empirical prior
Πn should concentrate on the “neighborhoods” Ln in (4). On one hand, this is similar to
the standard local prior support conditions in Ghosal et al. (2000), Shen and Wasserman
(2001), and Walker et al. (2007) but, on the other hand, the neighborhood’s dependence
on the data is our chief novelty and is the main driver of our empirical prior construction.
The second, a global prior condition, effectively controls the tails of the empirical prior
density πn , i.e., how heavy can the tails be and still achieve the desired rate.
Condition LP1. For Ln in (4), there exists C > 0 such that the prior Πn satisfies
\[
\liminf_{n\to\infty}\, e^{Cn\varepsilon_n^2}\, \Pi_n(L_n) > 0, \qquad \text{with } P_{\theta^\star}\text{-probability 1}. \tag{5}
\]
Condition GP1. There exist constants K > 0 and p > 1 such that the density function πn of the empirical prior Πn satisfies
\[
\int \big[E_{\theta^\star}\{\pi_n(\theta)^p\}\big]^{1/p}\, \nu_n(d\theta) \le e^{Kn\varepsilon_n^2},
\]
where, again, νn is a non-data-dependent dominating measure on Θn.
Theorem 1. If Conditions S1, LP1, and GP1 hold for εn , where εn → 0 and nε2n → ∞,
then there exists a constant M > 0 such that Eθ⋆ {Πn (AM εn )} → 0 as n → ∞. If
εn = n−1/2 , then Eθ⋆ {Πn (AMn εn )} → 0 for any Mn → ∞.
Proof. See Section 3.
We have claimed that it is relatively simple to construct an empirical prior to satisfy
Conditions LP1 and GP1 above. In fact, in many cases, we can take Πn to be a normal
prior with mean θ̂n and suitable variance. Details of examples like this, as well as others
for which a normal prior is not appropriate, are given in Section 4. Here, to show that
the conditions are quite reasonable, we provide a simple illustration.
Toy Example. Consider X1, . . . , Xn i.i.d. N(θ, 1), so that θ̂n = X̄, the sample mean, and νn is constant, equal to Lebesgue measure on R. The target rate is εn = n−1/2. We can take the sieve Θn to be fixed at Θ = R and set θ† = θ⋆, so that Conditions S1 and R1 hold trivially. Next, for Ln in (4), with d = 1, we have
\[
L_n = \{\theta : \bar X - \sqrt{2}\,\varepsilon_n < \theta < \bar X + \sqrt{2}\,\varepsilon_n\}.
\]
If we take Πn = N(X̄, s²), then it can be shown that Condition LP1 holds if we take the prior standard deviation s as
\[
s = \frac{\sqrt{2/n}}{\Phi^{-1}\big(\tfrac{1}{2}(1 + e^{-C})\big)}. \tag{6}
\]
We claim that Condition GP1 also holds with this choice. To see this, for the prior density πn(θ) = N(θ | X̄, s²) and constant p > 1, we have
\[
\pi_n(\theta)^p \propto s^{-p} e^{-\frac{p}{2s^2}(\theta - \bar X)^2} \propto s^{-(p-1)}\, N\Big(\theta \,\Big|\, \bar X, \frac{s^2}{p}\Big);
\]
the omitted proportionality constant here and below depends on p but not on s or X̄. Then the familiar property of normal convolutions gives
\[
E_{\theta^\star}\{\pi_n(\theta)^p\} \propto s^{-(p-1)}\, N\Big(\theta \,\Big|\, \theta^\star, \frac{s^2}{p} + \frac{1}{n}\Big),
\]
where θ⋆ is the true mean, so the integral of the $p^{-1}$ power is
\[
\int \big[E_{\theta^\star}\{\pi_n(\theta)^p\}\big]^{1/p}\, d\theta \propto s^{-(p-1)/p}\, \frac{(s^2 + p/n)^{1/2}}{(s^2/p + 1/n)^{1/(2p)}}.
\]
Given the form for s, the right-hand side is bounded as n → ∞ and, therefore, Condition GP1 is satisfied with εn = n−1/2.
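The toy example is easy to check numerically. The following Python sketch (ours, not part of the paper) forms the empirical prior N(X̄, s²) with s as in (6); since both the prior and the likelihood are centered at X̄, the posterior is the conjugate normal N(X̄, (n + s⁻²)⁻¹), and the sketch reports how little posterior mass falls outside a ball of radius M n^{-1/2} around the true mean.

```python
import numpy as np
from scipy.stats import norm

def empirical_prior_posterior(x, C=1.0):
    """Empirical prior N(xbar, s^2) with s from (6); with the N(theta, 1) model the
    posterior is conjugate normal, centered at xbar with precision n + 1/s^2."""
    n = len(x)
    xbar = x.mean()
    s = np.sqrt(2.0 / n) / norm.ppf(0.5 * (1.0 + np.exp(-C)))
    post_sd = 1.0 / np.sqrt(n + 1.0 / s**2)
    return xbar, post_sd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta_star, M = 2.0, 3.0
    for n in [100, 1000, 10000]:
        x = rng.normal(theta_star, 1.0, size=n)
        mean, sd = empirical_prior_posterior(x)
        eps = M / np.sqrt(n)    # radius M * n^{-1/2}
        outside = norm.cdf(theta_star - eps, mean, sd) + norm.sf(theta_star + eps, mean, sd)
        print(f"n={n:6d}  posterior sd={sd:.4f}  mass outside ball: {outside:.3f}")
```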
2.2 Unknown target rate
As discussed in Section 1, the target rate depends on some features of the unknown θ⋆ .
In this case, care is needed to construct a prior which is adaptive in the sense that it still
leads to posterior concentration at the desired rate. Towards adaptivity, we will make two
adjustments to the prior described in Section 2.1. The first step is to introduce a mixture
element into the prior, and the second step, for regularization purposes, incorporates data
in the prior again but in a different way than the prior centering step.
The starting point, again, is with the selection of a suitable sieve. Let Θ be the full
parameter space, and let Θn be an increasing sequence of finite-dimensional subsets. Express the parameter θ as an infinite vector θ = (θ1 , θ2 , . . .), e.g., θ could be the coefficients
attached to a basis expansion of a regression function or a log-density function, so that
the event “θj = 0” means that feature j is “turned off” and, therefore, the corresponding
θ is less complex. This suggests that we define the sieves as
\[
\Theta_n = \bigcup_{S : |S| \le T_n} \Theta_S,
\]
where S is a finite subset of {1, 2, . . . , Tn }, with Tn increasing with n, and
\[
\Theta_S = \{\theta = (\theta_1, \theta_2, \ldots) : \theta_j = 0, \; j \notin S\}.
\]
As in Section 2.1, we will need four conditions in order to establish our adaptive posterior concentration rate result: two conditions on the sieve and sieve MLE, a local prior
condition, and a global prior condition. Since we are seeking a stronger adaptive rate
result, naturally, the conditions here are stronger.
Condition S2. There exists Sn⋆ such that |Sn⋆ | ≤ nε2n and a θ† = θn† in ΘSn⋆ such that
\[
\max\big\{K(p^n_{\theta^\star}, p^n_{\theta^\dagger}),\; V(p^n_{\theta^\star}, p^n_{\theta^\dagger})\big\} \le n\varepsilon_n^2, \qquad \text{all large } n.
\]
In some examples, it is known that the true parameter belongs to one of the sieve
sets, so that θ† can be taken as θ⋆ , and Condition S2 is trivial. In other cases, θ⋆ may
not belong to any sieve, so approximation-theoretic results on the sieve will be needed to
establish this. Examples of both types are presented in Section 4. In any case, the set of
indices Sn⋆ acts like the “true model” and θ† is a deterministic sequence of “pseudo-true”
parameter values; see Remark 1.
Towards writing down the empirical prior, it will help if we express the infinitedimensional vector θ as (S, θS ), i.e., as a pair consisting of the indices of its non-zero
entries and the corresponding non-zero values, then it is natural to introduce a prior for
θ in a hierarchical way, with a prior for wn for S and a conditional prior Πn,S for θS ,
given S. Write πn,S for the density function of Πn,S with respect to a non-data-dependent
dominating measure νn,S on ΘS . Technically, the conditional prior as a distribution for
the infinite-dimensional θ such that the components with indices in S have density πn,S
and the remaining components have point-mass distributions at 0; in other words, the
conditional prior is a product measure made up of Πn,S and a product of point masses.
To summarize, so far, the proposed empirical prior for θ is a mixture of the form
\[
\Pi_n(d\theta) = \sum_{S : |S| \le T_n} w_n(S)\, \Pi_{n,S}(d\theta). \tag{7}
\]
Next, similar to what we did in Section 2, let us define the sets
\[
L_{n,S} = \big\{\theta \in \Theta_S : L_n(\theta) \ge e^{-d|S|} L_n(\hat\theta_{n,S})\big\}, \qquad S \in S, \; |S| \le T_n, \qquad d > 0,
\]
which are effectively neighborhoods in ΘS centered around θ̂n,S . Then we have the following versions of the local and global prior conditions, suitable for the adaptive case.
Condition LP2. There exist constants A > 0 and C > 0 such that
\[
\liminf_{n\to\infty}\, e^{C|S_n^\star|}\, \Pi_{n,S_n^\star}(L_{n,S_n^\star}) > 0, \qquad \text{with } P_{\theta^\star}\text{-probability 1},
\]
and
\[
w_n(S_n^\star) \gtrsim e^{-An\varepsilon_n^2}, \qquad \text{all large } n.
\]
Condition GP2. There exists a constant K ≥ 0 such that
\[
\sum_{S} w_n(S) \int_{\Theta_S} \big[E_{\theta^\star}\{\pi_{n,S}(\theta)^p\}\big]^{1/p}\, \nu_{n,S}(d\theta) \lesssim e^{Kn\varepsilon_n^2}, \qquad \text{all large } n. \tag{8}
\]
In certain examples, such as those in Sections 4.4–4.5, it can be shown that the integral
in Condition GP2 above is bounded by eκ|S| for some constant κ. Then the condition
is satisfied with K = 0 if the prior wn for S is such that the marginal prior for |S|
has exponential tails (e.g., Arbel et al. 2013; Shen and Ghosal 2015). However, for other
examples, such as density estimation (Section 4.6), a bit more care is required.
To achieve adaptive posterior concentration rates, we propose a slight modification
to the previous approach, one that incorporates data into the prior in two ways: one
for prior centering, like before, and another for suitable regularization. That is, for an
α ∈ (0, 1) to be specified, if Πn is the empirical prior described above, then we consider
a double empirical prior defined as
e n (dθ) ∝ Πn (dθ) .
Π
Ln (θ)1−α
(9)
Dividing by a portion of the likelihood has the effect of penalizing those parameter values
that “track the data too closely” (Walker and Hjort 2001), hence regularization. Then
the corresponding double empirical Bayes posterior has two equivalent forms:
e n (dθ) or Πn (dθ) ∝ Ln (θ)α Πn (dθ).
Πn (dθ) ∝ Ln (θ) Π
Of the two expressions, the former is more intuitive from an “empirical Bayes” perspective, while the latter is easier to work with in our theoretical analysis. The latter
also resembles some recent uses of a power likelihood in, e.g., Grünwald and van Ommen
(2016), Bissiri et al. (2016), Holmes and Walker (2017), Syring and Martin (2016), and
others, for the purpose of robustness.
To identify an appropriate power α ∈ (0, 1), take p > 1 as in Condition GP2, and let
q > 1 be the Hölder conjugate. Then we propose to take α such that αq = 1/2, i.e.,
\[
\alpha = \tfrac{1}{2}(1 - p^{-1}). \tag{10}
\]
To summarize, in its most convenient form, the posterior distribution based on a doubly
empirical prior Πn in (7) is
R
R (θ)α Πn (dθ)
n
A n
R
Π (A) =
.
(11)
R (θ)α Πn (dθ)
Θ n
Theorem 2. Let εn = εn (θ⋆ ) be a target rate corresponding to the true θ⋆ , and assume
that Conditions S2, LP2, and GP2 hold for this εn . Then there exists a constant M > 0
and α of the form (10) such that Πn in (11) satisfies Eθ⋆ {Πn (AM εn )} → 0 as n → ∞.
Proof. See Section 3.
3 Proofs

3.1 Proof of Theorem 1
The dependence of the prior on data requires some modification of the usual arguments.
In particular, in Lemma 1, the lower bound on the denominator Dn in (3) is obtained quite
simply thanks to the data-dependent prior, formalizing the motivation for this empirical
Bayes approach described in Section 1, while Lemma 2 applies Hölder’s inequality to get
an upper bound on the numerator Nn (AM εn ).
9
2
Lemma 1. Dn ≥ e−dnεn Rn (θ̂n ) Πn (Ln ).
Proof. The denominator Dn can be trivially lower-bounded as follows:
Z
Z
Ln (θ)
πn (θ) dθ.
Rn (θ)πn (θ) dθ = Rn (θ̂n )
Dn ≥
Ln Ln (θ̂n )
Ln
Now use the definition of Ln to complete the proof.
Lemma 2. Assume Condition GP1 holds for εn with constants (K, p), and let q > 1 be
the Hölder conjugate of p. Then
(
)
Nn (AM εn )
2
Eθ⋆
≤ e−Gnεn ,
1
1− 2q
Rn (θ̂n )
where G =
M2
q
− K.
Proof. Start with the following simple bound:
Z
Z
1
1− 2q
Rn (θ)πn (θ) νn (dθ) ≤ Rn (θ̂n )
Nn (AM εn ) =
AM εn
1
Rn (θ) 2q πn (θ) νn (dθ).
AM εn
1
Dividing both sides by Rn (θ̂n )1− 2q , and taking expectations (with respect to Pθ⋆ ), moving
this expectation inside the integral, and applying Hölder’s inequality, gives
) Z
(
p1
1 1
Nn (AM εn )
p
q
⋆ {πn (θ) }
⋆ {Rn (θ) 2 }
E
νn (dθ).
E
≤
Eθ⋆
θ
θ
1
AM εn
Rn (θ̂n )1− 2q
A standard argument (e.g., Walker and Hjort 2001) shows that the first expectation on
the right hand side above equals 1 − H 2 (pnθ⋆ , pnθ ) and, therefore, is upper bounded by
2
2
e−M nεn , uniformly in θ ∈ AM εn . Under Condition GP1, the integral of the second
2
expectation is bounded by eKnεn . Combining these two bounds proves the claim.
Proof of Theorem 1. To start, set
2
an = e−cnεn
2
and bn = c0 e−(C+d)nεn Rn (θ̂n ),
where the constants (C, c, d) are as in Condition LP1, Remark 2, and Equation (4), respectively, and c0 is another sufficiently small constant. Also, abbreviate Nn = Nn (AM εn )
and Rn = Rn (θ̂n ). Then
Πn (AM εn ) =
Nn
Nn
1(Rn ≥ an , Dn > bn ) +
1(Rn < an or Dn < bn )
Dn
Dn
1−
1
Rn 2q
≤
bn
Nn
1− 1
Rn 2q
(C+d)nε2n
≤
e
1(Rn ≥ an ) + 1(Rn < an ) + 1(Dn < bn )
Nn
1
2q
an
c
1
1− 2q
+ 1(Rn < an ) + 1(Dn < bn )
Rn
2
= e(C+ 2q +d)nεn
Nn
1
1− 2q
+ 1(Rn < an ) + 1(Dn < bn ),
Rn
10
Taking expectation and applying Lemma 2, we get
c
2
2
Eθ⋆ {Πn (AM εn )} ≤ e(C+ 2q +d)nεn e−Gnεn + Pθ⋆ (Rn < an ) + Pθ⋆ (Dn < bn ).
(12)
The second and third terms are o(1) by Remark 2 and Condition LP1, respectively. If
we take G > C + 2qc + d or, equivalently, M 2 > q(K + C + 2qc + d), then the first term is
o(1) as well, completing the proof of the first claim.
For the second claim, when nε2n is bounded, the conclusion (12) still holds, and the
latter two terms are still o(1). The first term in the upper bound is decreasing in G or,
equivalently, in M, so the upper bound vanishes for any Mn → ∞.
3.2
Proof of Theorem 2
Write the posterior probability Πn (AM εn ) as a ratio Nn (AM εn )/Dn , where
Z
X
wn (S)
Rn (θ)α πn,S (θ) νn,S (dθ)
Nn (AM εn ) =
AM εn ∩ΘS
S:|S|≤Tn
and
Dn =
X
S:|S|≤Tn
wn (S)
Z
Rn (θ)α πn,S (θ) νn,S (dθ).
ΘS
The strategy of the proof here is similar to that of Theorem 1. In particular, the empirical
nature of the prior makes getting the lower bound on Dn very simple.
⋆
Lemma 3. Dn ≥ e−d|Sn | wn (Sn⋆ )Rn (θ̂n,Sn⋆ )α Πn,s⋆n (Ln,Sn⋆ ).
Proof. Almost identical to the proof of Lemma 1.
Lemma 4. Assume Condition GP2 holds with constants (K, p), let q > 1 be the Hölder
conjugate of p, and let α be determined by (10). Then
2
Eθ⋆ {Nn (AM εn )} . e−Gnεn ,
where G =
M2
q
− K.
Proof. Taking expectation of Nn (AM εn ), moving expectation inside the integral, and
applying Hölder’s inequality, we get
Z
X
1
1 1
wn (S)
Eθ⋆ {Rn (θ) 2 } q Eθ⋆ {πn,S (θ)p } p νn,S (dθ).
Eθ⋆ {Nn (AM εn )} ≤
S:|S|≤Tn
AM εn ∩ΘS
2
2
The first expectation on the right hand side above is upper bounded by e−M nεn , uniformly
in θ ∈ AM εn ∩ ΘS and in S, so
Z
X
1
−(M 2 /q)nε2n
wn (S)
Eθ⋆ {πn,S (θ)p } p νn,S (dθ).
Eθ⋆ {Nn (AM εn )} ≤ e
AM εn ∩ΘS
S:|S|≤Tn
Under Condition GP2, the summation on the right-hand side above is bounded by a
2
constant times eKnεn and the claim now follows immediately.
11
Proof of Theorem 2. Under the stated conditions, by Lemma 3,
⋆
2
⋆
Dn ≥ e−d|Sn | e−Anεn Rn (θ̂n,Sn⋆ )α e−C|Sn | .
2
An argument similar to that in Remark 2 shows that Rn (θ̂n,Sn⋆ ) ≥ e−cnεn for some c >
1, with Pθ⋆ -probability converging to 1. Since |Sn⋆ | ≤ nε2n , this lower bound for the
denominator can be combined with the upper bound in the numerator from Lemma 4
using an argument very similar to that in the proof of Theorem 1, to get
Eθ⋆ {Πn (AM εn )} ≤ e−{
M2
−(K+A+C+cα+d)}nε2n
q
+ o(1).
So, for M sufficiently large, the upper bound vanishes, proving the claim.
4 Examples

4.1 Fixed finite-dimensional parameter estimation
Suppose that the parameter space Θ is a fixed subset of Rd , for a fixed d < ∞. Under
the usual regularity conditions, the log-likelihood ℓn = log Ln is twice continuously differentiable, its derivative ℓ̇n satisfies ℓ̇n (θ̂n ) = 0 at the (unique) global MLE θ̂n , and the
following expansion holds:
ℓn (θ) − ℓn (θ̂n ) = − 12 (θ − θ̂n )⊤ Σ̂n (θ − θ̂n ) + o(nkθ − θ̂n k2 ),
(13)
where Σ̂n = −ℓ̈n (θ̂n ). Then the set Ln can be expressed as
Ln = θ : (θ − θ̂n )⊤ Σn (θ − θ̂n ) < anε2n .
For rate εn = n−1/2 , this suggests an empirical prior of the form:
Πn = Nd θ̂n , n−1 Ψ−1 ,
(14)
for some fixed matrix Ψ in order to ensure S1. The proposition below states that this
empirical prior yields a posterior that concentrates at the parametric rate εn = n−1/2 .
Proposition 1. Assume that each component θj in the d-dimensional parameter θ are
on (−∞, ∞), and that the quadratic approximation (13) holds. Then Conditions LP1
and GP1 hold for the empirical prior (14) with εn = n−1/2 . Therefore, the posterior
concentrates at the rate εn = n−1/2 relative to any metric on Θ.
Proof. Similar to the toy example; see the Appendix for details.
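As a small illustration of the construction (14) in a one-dimensional regular model, the Python sketch below (ours; the Exponential model and the scalar ψ standing in for the matrix Ψ are illustrative choices) centers a normal empirical prior at the MLE with variance of order n⁻¹.

```python
import numpy as np

def empirical_prior_exponential(x, psi=1.0):
    """Empirical prior in the spirit of (14) for the rate of an Exponential(theta)
    model: N(theta_hat, (n * psi)^{-1}), with theta_hat the MLE 1 / xbar.
    The scalar psi is an illustrative stand-in for the fixed matrix Psi."""
    n = len(x)
    theta_hat = 1.0 / x.mean()
    prior_sd = 1.0 / np.sqrt(n * psi)
    return theta_hat, prior_sd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.exponential(scale=1.0 / 2.5, size=500)   # true rate theta_star = 2.5
    theta_hat, prior_sd = empirical_prior_exponential(x)
    draws = rng.normal(theta_hat, prior_sd, size=5)  # draws from the empirical prior
    print("MLE:", round(theta_hat, 3), "prior sd:", round(prior_sd, 4),
          "prior draws:", np.round(draws, 3))
```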
4.2 Density estimation via histograms
Consider estimation of a density function, p, supported on the compact interval [0, 1],
based on i.i.d. samples X1 , . . . , Xn . A simple approach to develop a Bayesian model
for this problem is a random histogram prior (e.g., Scricciolo 2007, 2015). That is, we
12
consider a partition of the interval [0, 1] into S bins of equal length, i.e., [0, 1] =
, Ss ), s = 1, . . . , S. For a given S, write the model
where Es = [ s−1
S
pθ (x) =
S
X
s=1
θs Unif(x | Es ),
SS
s=1
Es ,
x ∈ [0, 1],
consisting of mixtures of uniforms, i.e., piecewise constant densities, where the parameter
θ is a vector in the S-dimensional probability simplex, ∆(S). That is, pθ is effectively a
histogram with S bins, all of the same width, S −1 , and the height of the sth bar is S −1 θs ,
s = 1, . . . , S. Here, assuming the regularity of the true density is known, we construct
an empirical prior for the vector parameter θ such that, under conditions on the true
density, the corresponding posterior on the space of densities has Hellinger concentration
rate within a logarithmic factor of the minimax rate. More sophisticated models for
density estimation will be presented in Sections 4.3 and 4.6.
Let S = Sn be the number of bins, specified below. This defines a sieve Θn = ∆(Sn )
and, under the proposed histogram model, the data can be treated as multinomial, so
the (sieve) MLE is θ̂n = (θ̂n,1 , . . . , θ̂n,S ), where θ̂n,s is just the proportion of observations
in the sth bin, s = 1, . . . , S. Here we propose a Dirichlet prior Πn for θ, namely,
θ ∼ Πn = DirS (α̂),
α̂s = 1 + c θ̂n,s ,
s = 1, . . . , S,
which is centered on the sieve MLE in the sense that the mode of the empirical prior
density is θ̂n ; the factor c = cn will be specified below. Finally, this empirical prior for θ
determines an empirical prior for the density via the mapping θ 7→ pθ .
Proposition 2. Suppose that the true density, p⋆ , is uniformly bounded away from 0
and is Hölder continuous with smoothness parameter β, where β ∈ (0, 1] is assumed to be
known. Then the target rate is εn = n−κ logκ n, where κ = β/(2β + 1). For the empirical
prior Πn described above, if S = Sn = nε2n (log n)−1 and c = cn = nε−2
n , then there exists
n
M > 0 such that the corresponding posterior Π satisfies
Ep⋆ Πn ({θ : H(p⋆, pθ ) > Mεn }) → 0.
Proof. See the Appendix.
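The histogram construction is simple to implement because the empirical Dirichlet prior is conjugate to the multinomial bin counts. The Python sketch below (ours, not from the paper) chooses S and c as in Proposition 2 for a known smoothness β and draws one set of posterior histogram heights; the Beta(2, 5) data-generating density is just an illustrative choice.

```python
import numpy as np

def empirical_histogram_posterior(x, beta=1.0):
    """Empirical Dirichlet prior Dir(1 + c * theta_hat) for a histogram density on
    [0, 1], centered at the bin proportions; conjugacy gives a Dirichlet posterior."""
    n = len(x)
    kappa = beta / (2.0 * beta + 1.0)
    eps2 = n ** (-2 * kappa) * np.log(n) ** (2 * kappa)   # eps_n^2, Proposition 2
    S = max(1, int(n * eps2 / np.log(n)))                 # number of bins S_n
    c = n / eps2                                          # centering strength c_n
    counts = np.histogram(x, bins=S, range=(0.0, 1.0))[0]
    theta_hat = counts / n
    alpha_post = (1.0 + c * theta_hat) + counts           # Dirichlet-multinomial update
    return S, alpha_post

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.beta(2.0, 5.0, size=2000)            # a smooth true density on [0, 1]
    S, alpha_post = empirical_histogram_posterior(x)
    theta_draw = rng.dirichlet(alpha_post)       # one posterior draw of bin probabilities
    heights = S * theta_draw                     # piecewise-constant density heights
    print("bins:", S, "posterior density heights:", np.round(heights, 2))
```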
4.3 Mixture density estimation
Let X1 , . . . , Xn be i.i.d. samples from a density pθ of the form
Z
pθ (x) = k(x | µ) θ(dµ),
(15)
where k(x | µ) is a known kernel and the mixing distribution θ is unknown. Here we
focus on the normal mixture case, where k(x | µ) = N(x | µ, σ 2 ), where σ is known, but
see Remark 3. The full parameter space Θ, which contains the true mixing distribution
θ⋆ , is the set of all probability measures on the µ-space, but we consider here a finite
mixture model of the form
θ = (ω, µ) 7→ pθ (·) =
13
S
X
s=1
ωs k(· | µs ),
(16)
for an integer S, a vector ω = (ω1 , . . . , ωS ) in the simplex ∆(S), and a set of distinct
support points µ = (µ1 , . . . , µS ). For fixed S, let θ̂ = (ω̂, µ̂) be the MLE for the mixture
weights and locations, respectively, where the optimization is restricted so that |µ̂s | ≤ B,
where B = Bn is to be determined. We propose to “center” an empirical prior on the
S-specific MLE as follows:
• ω and µ are independent;
• the vector ω is DirS (α̂) like in Section 4.2, where α̂s = 1 + c ω̂s , s = 1, . . . , S;
• the components (µ1 , . . . , µS ) of µ are independent, with
µs ∼ Unif(µ̂s − δn , µ̂s + δn ),
s = 1, . . . , S,
where δn is a sequence of positive constants to be determined.
To summarize, we have an empirical prior Πn for θ = (ω, µ), supported on the sieve
Θn = ∆(S) × RS , where S = Sn will be specified, with density function
πn (θ) = DirS (ω | α̂) ×
S
Y
s=1
Unif(µs | µ̂s − δn , µ̂s + δn ).
This determines an empirical prior for the density function through the mapping (16).
Proposition 3. Suppose that the true mixing distribution θ⋆ in (15) has compact support.
Let the target rate be εn = (log n)n−1/2 . If Sn ∝ nε2n (log n)−1 = log n, Bn ∝ log1/2 (ε−1
n ),
2
2
n
=
n
/(log
n)
,
and
δ
∝
ε
,
then
the
posterior
Π
corresponding
to
the
empircn = nε−2
n
n
n
ical prior described above satisfies
Eθ⋆ [Πn ({θ ∈ Θn : H(pθ⋆ , pθ ) > Mεn })] → 0.
Proof. See the Appendix.
Remark 3. The proof of Proposition 3 is not especially sensitive to the choice of kernel.
More specifically, the local prior support condition, LP1, can be verified for kernels other
than normal, the key condition being Equation (22) in the Appendix. For example, that
condition can be verified for the Cauchy kernel
k(x | µ) =
1 n
(x − µ)2 o−1
1+
,
σπ
σ2
where σ is a fixed scale parameter. Therefore, using the same empirical prior formulation
as for the normal case, the same argument in the proof of Proposition 3 shows that the
Cauchy mixture posterior achieves the target rate εn = (log n)n−1/2 when the true density
p⋆ = pθ⋆ is a finite Cauchy mixture. To our knowledge, this mixture of heavy-tailed kernels
has yet to be considered in Bayesian nonparametrics literature (cf., Kruijer et al. 2010,
p. 1229), but it fits quite easily into our general setup proposed here.
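To make the construction of Section 4.3 concrete, the following Python sketch (ours) computes the given-S sieve MLE by a small EM routine for a location mixture with known σ and then draws once from the empirical prior, a Dirichlet centered at the fitted weights and uniforms of half-width δ around the fitted locations; the proportionality constants in S, c and δ are illustrative choices rather than the paper's.

```python
import numpy as np

def em_fixed_sigma(x, S, sigma, n_iter=200, seed=0):
    """EM for a location mixture of S normal kernels with known common sigma,
    returning the sieve MLE (weights, locations)."""
    rng = np.random.default_rng(seed)
    w = np.full(S, 1.0 / S)
    mu = rng.choice(x, size=S, replace=False)
    for _ in range(n_iter):
        logk = -0.5 * ((x[:, None] - mu[None, :]) / sigma) ** 2   # kernel up to a constant
        r = w * np.exp(logk)
        r /= r.sum(axis=1, keepdims=True)                          # responsibilities
        w = r.mean(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    return w, mu

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(-2, 1, 600), rng.normal(2, 1, 400)])
    n = len(x)
    eps = np.log(n) / np.sqrt(n)              # target rate of Proposition 3
    S = max(2, int(np.log(n)))                # S_n of order log n
    w_hat, mu_hat = em_fixed_sigma(x, S, sigma=1.0)
    c = n / eps**2                            # Dirichlet centering strength c_n
    delta = eps**2                            # half-width of the uniform location prior
    w_draw = rng.dirichlet(1.0 + c * w_hat)                       # empirical-prior draw
    mu_draw = rng.uniform(mu_hat - delta, mu_hat + delta)
    print("EM weights:", np.round(w_hat, 2), "prior draw of locations:", np.round(mu_draw, 2))
```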
4.4 Estimation of a sparse normal mean vector
Consider inference on the mean vector θ = (θ1 , . . . , θn )⊤ of a normal distribution, Nn (θ, In ),
based on a single sample X = (X1 , . . . , Xn ). That is, Xi ∼ N(θi , 1), for i = 1, . . . , n, independent. The mean vector is assumed to be sparse in the sense that most of the
components, θi , are zero, but the locations and values of the non-zero components are
unknown. This problem was considered by Martin and Walker (2014) and they show
that a version of the double empirical Bayes posterior contracts at the optimal minimax
rate. Here we propose an arguably simpler empirical prior and demonstrate the same
asymptotic optimality of the posterior based on the general results in Section 2.2.
Write the mean vector θ as a pair (S, θS ), where S ⊆ {1, 2, . . . , n} identifies the nonzero entries of θ, and θS is the |S|-vector of non-zero values. Assume that the true mean
vector θ⋆ has |Sn⋆ | = s⋆n such that s⋆n = o(n). The sieves ΘS are subsets of Rn that
constrain the components of the vectors corresponding to indices in S c to be zero; no
constraint on the non-zero components is imposed. Note that we can trivially restrict
to subsets S of cardinality no more than Tn = n. Furthermore, Condition S2 is trivially
satisfied because θ⋆ belongs to the sieve Sn⋆ by definition, so we can take θ† = θ⋆ .
For this model, the Hellinger distance for joint densities satisfies
1
H 2 (pnθ⋆ , pnθ ) = 1 − e− 8 kθ−θ
⋆ k2
,
where k · k is the usual ℓ2 -norm on Rn . In this sparse setting, as demonstrated by
Donoho et al. (1992), the ℓ2 -minimax rate of convergence is s⋆n log(n/s⋆n ); we set this rate
equal to nε2n , so that ε2n = (s⋆n /n) log(n/s⋆n ). Therefore, if we can construct a prior such
that Conditions LP2 and GP2 hold for this εn , then it will follow from Theorem 2 that
the corresponding empirical Bayes posterior concentrates at the optimal minimax rate.
Let the prior distribution wn for S be given by
−1
n
wn (S) ∝
e−g(|S|)|S| , S ⊆ {1, 2, . . . , n},
|S|
where g(s) is a non-decreasing slowly varying function as s → ∞, which includes the case
where g(s) ≡ B for a sufficiently large constant B; see the proof of the proposition. For
the conditional prior for θS , given S, based on the intuition from the toy example, we let
θS | S ∼ N|S| (θ̂n,S , γ −1 I|S| ),
γ < 1,
where the sieve MLE is θ̂n,S = XS = (Xi : i ∈ S).
Proposition 4. Suppose the normal mean vector θ⋆ is s⋆n -sparse in the sense that only
s⋆n = o(n) of the entries in θ⋆ are non-zero. For the empirical prior described above, if
γ is sufficiently small, then there exists a constant M > 0 such that the corresponding
posterior distribution Πn satisfies
Eθ⋆ Πn ({θ : kθ − θ⋆ k2 > Ms⋆n log(n/s⋆n )}) → 0.
Proof. See the Appendix.
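For very small n, the empirical-Bayes posterior over configurations S in this example can be computed exactly by enumeration, since the fractional likelihood integrates against the conditional prior N(X_S, γ⁻¹I) in closed form. The Python sketch below (ours) does this; the values of γ, α and the constant B in the prior w_n are illustrative choices, and the closed-form marginal used is stated in the code comment.

```python
import numpy as np
from math import comb
from itertools import combinations
from scipy.special import logsumexp

def sparse_means_posterior(x, gamma=0.1, alpha=0.25, B=1.0):
    """Posterior weights over models S by brute-force enumeration (small n only).
    With conditional prior N(x_S, gamma^{-1} I) and fractional likelihood L_n^alpha,
        m(S) propto w_n(S) * (gamma/(gamma+alpha))^{|S|/2} * exp(-alpha/2 * sum_{i not in S} x_i^2),
    where w_n(S) propto C(n, |S|)^{-1} exp(-B |S|)."""
    n = len(x)
    models, logw = [], []
    for s in range(n + 1):
        for S in combinations(range(n), s):
            out = np.setdiff1d(np.arange(n), S)
            lw = (-np.log(comb(n, s)) - B * s
                  + 0.5 * s * np.log(gamma / (gamma + alpha))
                  - 0.5 * alpha * np.sum(x[out] ** 2))
            models.append(S)
            logw.append(lw)
    logw = np.array(logw) - logsumexp(logw)
    return models, np.exp(logw)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta = np.array([5.0, -4.0] + [0.0] * 6)     # 2-sparse truth, n = 8
    x = theta + rng.standard_normal(8)
    models, w = sparse_means_posterior(x)
    for k in np.argsort(w)[::-1][:3]:
        print("model", models[k], "posterior prob", round(float(w[k]), 3))
```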
4.5 Regression function estimation
Consider a nonparametric regression model
Yi = f (ti ) + σzi ,
i = 1, . . . , n,
where z1 , . . . , zn are i.i.d. N(0, 1), t1 , . . . , tn are equi-spaced design points in [0, 1], i.e.,
ti = i/n, and f is an unknown function. FollowingPArbel et al. (2013), we consider a
Fourier basis expansion for f = fθ , so that f (t) = ∞
j=1 θj φj (t), where θ = (θ1 , θ2 , . . .)
and (φ1 , φ2, . . .) are the basis coefficients and functions, respectively. They give conditions
such that their Bayesian posterior distribution for f , induced by a prior on the basis
coefficients θ, concentrates at the true f ⋆ at the minimax rate corresponding to the
unknown smoothness of f ⋆ . Here we derive a similar result, with a better rate, for the
posterior derived from an empirical prior.
Following the calculations in Section 4.4, the Hellinger distance between the joint
distribution of (Y1 , . . . , Yn ) for two different regression functions, f and g, satisfies
n
2
H 2 (pnf , png ) = 1 − e− 8σ2 kf −gkn ,
P
where kf k2n = n−1 ni=1 f (ti )2 is the squared L2 -norm corresponding to the empirical
distribution of the covariate t. So, if the conditions of Theorem 2 are satisfied, then we
get a posterior concentration rate result relative to the metric k · kn .
Suppose that the true regression function f ⋆ is in a Sobolev space P
of index β > 21 .
⋆2 2β
. 1.
That is, there is an infinite coefficient vector θ⋆ such that f ⋆ = fθ⋆ and ∞
j=1 θj j
⋆
This implies that the coefficients θj for large j are of relatively small magnitude and
suggests a particular formulation of the model and empirical prior. As before, we rewrite
the infinite vector θ as (S, θS ), but this time S is just an integer in {1, 2, . . . , n}, and
θS = (θ1 , . . . , θS , 0, 0, . . .) is an infinite vector with only the first S terms non-zero. That
is, we will restrict our prior to be supported on vectors whose tails vanish in this sense.
For the prior wn for the integer S, we take
wn (s) ∝ e−g(s)s ,
s = 1, . . . , n,
where g(s), is a non-decreasing slowly varying function, which includes the case of g(s) ≡
B for B sufficiently large; see the proof of the proposition. Next, for the conditional prior
for θS , given S, note first that the sieve MLE is a least-squares estimator
−1 ⊤
θ̂n,S = (Φ⊤
S ΦS ) ΦS Y,
where ΦS is the n × |S| matrix determined by the basis functions at the observed covariates, i.e., ΦS = (φj (ti ))ij , i = 1, . . . , n and j = 1, . . . , |S|. As in Martin et al. (2015), this
suggests a conditional prior of the form
−1
θS | S ∼ N|S| θ̂n,S , γ −1 (Φ⊤
,
S ΦS )
where γ < 1 is sufficiently small. This empirical prior for θ ≡ (S, θS ) induces a corresponding empirical prior for f through the mapping θ 7→ fθ .
16
Proposition 5. Suppose that the true regression function f ⋆ is in a Sobolev space of
index β > 21 . For the empirical prior described above, if γ is sufficiently small, then there
exists a constant M > 0 such that the corresponding posterior distribution Πn satisfies
E_{θ⋆} Π_n({θ : ‖f_θ − f⋆‖_n > M n^{−β/(2β+1)}}) → 0.
Proof. See the Appendix.
Note that the rate obtained in Proposition 5 is exactly the optimal minimax rate, i.e.,
there are no additional logarithmic factors. This is mainly due to the covariance structure
in the prior for θS , given S, which is very natural in the present framework. A similar
result, without the additional logarithmic terms, is given in Gao and Zhou (2016).
4.6 Nonparametric density estimation
Consider the problem of estimating a density p supported on the real line. Like in
Section 4.3, we propose a normal mixture model and demonstrate the asymptotic concentration properties of the posterior based on an empirical prior, but with the added
feature that the rate is adaptive to the unknown smoothness of the true density function.
Specifically, as in Kruijer et al. (2010), we assume that data X1 , . . . , Xn are i.i.d. from a
true density p⋆ , where p⋆ satisfies the conditions C1–C4 in their paper; in particular, we
assume that log p⋆ is Hölder with smoothness parameter β. They propose a fully Bayesian
model—one that does not depend on the unknown β—and demonstrate that the posterior concentration rate, relative to the Hellinger distance, is ε_n = (log n)^t n^{−β/(2β+1)} for a suitable constant t > 0, which is within a logarithmic factor of the optimal minimax rate.
Here we extend the approach presented in Section 4.3 to achieve adaptation by incorporating a prior for the number of mixture components, S, as well as the S-specific
kernel variance σ²_S, as opposed to fixing their values. For the prior w_n for S, we let
w_n(S) ∝ e^{−D(log S)^r S},   S = 1, . . . , n,
where r > 1 and D > 0 are specified constants. Given S, we consider a mixture model with S components of the form
p_{S,θ_S}(·) = Σ_{s=1}^S ω_{s,S} N(· | μ_{s,S}, λ_S^{−1}),
where θS = (ωS , µS , λS ), ωS = (ω1,S , . . . , ωS,S ) is a probability vector in ∆(S), µS =
(µ1,S , . . . , µS,S ) is a S-vector of mixture locations, and λS is a precision (inverse variance)
that is the same in all the kernels for a given S. We can fit this model to data using,
say, the EM algorithm, and produce a given-S sieve MLE: ω̂S = (ω̂1,S , . . . , ω̂S,S ), µ̂S =
(µ̂1 , . . . , µ̂S ), and λ̂S . Following our approach in Section 4.3, consider an empirical prior
for ωS obtained by taking
ωS | S ∼ DirS (α̂S )
where α̂s,S = 1 + cω̂s,S and c = cS is to be determined. The prior for µS follows the same
approach as in Section 4.3, i.e.,
μ_{S,s} ∼ Unif(μ̂_{S,s} − δ, μ̂_{S,s} + δ),   s = 1, . . . , S,   independent,
where δ = δS is to be determined. The prior for λS is also uniform,
λS ∼ Unif(λ̂S (1 − ψ), λ̂S (1 + ψ)),
where ψ = ψS is to be determined. Also, as with µ̂S being restricted to the interval
(−B, +B), we restrict the λ̂S to lie in (Bl , Bu ), to be determined. Then we get a prior
on the density function through the mapping (S, θS ) 7→ pS,θS . For this choice of empirical prior, the following proposition shows that the corresponding posterior distribution
concentrates around a suitable true density p⋆ at the optimal minimax rate, up to a
logarithmic factor, exactly as in Kruijer et al. (2010).
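A minimal sketch, illustrative rather than taken from the paper, of the given-S fit that anchors this empirical prior: a short EM loop produces (ω̂_S, μ̂_S, λ̂_S) for a fixed S, from which the Dirichlet parameter α̂_S = 1 + cω̂_S and the uniform prior ranges for μ_S and λ_S can be formed. The data, S, and the constants c, δ, ψ below are placeholders, not the tuned choices of the proposition.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.concatenate([rng.normal(-2, 0.7, 300), rng.normal(2, 0.7, 200)])
n, S = len(X), 3

# EM for an S-component normal mixture with one common precision lambda
w = np.full(S, 1.0 / S)
mu = np.quantile(X, np.linspace(0.1, 0.9, S))
lam = 1.0 / X.var()
for _ in range(200):
    dens = np.sqrt(lam / (2 * np.pi)) * np.exp(-0.5 * lam * (X[:, None] - mu) ** 2)
    resp = w * dens
    resp /= resp.sum(axis=1, keepdims=True)               # E-step responsibilities
    Nk = resp.sum(axis=0)
    w, mu = Nk / n, (resp * X[:, None]).sum(axis=0) / Nk   # M-step weights and locations
    lam = n / (resp * (X[:, None] - mu) ** 2).sum()        # common precision update

# Empirical-prior hyperparameters built from the sieve MLE (constants are illustrative)
c, delta, psi = n ** 2 / S, np.sqrt(S) * n ** -2.0, S / n
alpha_hat = 1 + c * w                                  # Dirichlet_S(alpha_hat) for omega_S
mu_ranges = [(m - delta, m + delta) for m in mu]       # uniform ranges for the locations
lam_range = (lam * (1 - psi), lam * (1 + psi))         # uniform range for the precision
print(np.round(w, 3), np.round(mu, 3), round(lam, 3))
```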
Proposition 6. Suppose that the true density p⋆ satisfies Conditions C1–C4 in Kruijer et al.
(2010), in particular, log p⋆ is Hölder continuous with smoothness parameter β. For the
empirical prior described above, if B = (log n)², B_l = n^{−1}, B_u = n^{b−2}, and, for each S, c = c_S = n² S^{−1}, δ = δ_S = S^{1/2} n^{−(b+3/2)}, and ψ = ψ_S = S n^{−1}, for a sufficiently large b > 2, then there exist constants M > 0 and t > 0 such that the corresponding posterior distribution Π_n satisfies
E_{θ⋆} Π_n({θ : H(p⋆, p_θ) > M (log n)^t n^{−β/(2β+1)}}) → 0.
Proof. See the Appendix.
5
Conclusion
This paper considers the construction of an empirical or data-dependent prior such that,
when combined with the likelihood via Bayes’s formula, gives a posterior distribution
with desired asymptotic concentration properties. The details vary a bit depending on
whether the targeted rate is known to the user or not (Sections 2.1–2.2), but the basic
idea is to first choose a suitable sieve and then center the prior for the sieve parameters
on the sieve MLE. This makes establishing the necessary local prior support condition
and lower-bounding the posterior denominator straightforward, which is a major obstacle
in the standard Bayesian nonparametric setting. Having the data involved in the prior
complicates the usual argument to upper-bound the posterior numerator, but compared
to the usual global prior conditions involving entropy, here we only need to suitably
control the spread of the empirical prior. The end result is a data-dependent measure
that achieves the targeted concentration rate, adaptively, if necessary.
The approach presented here is quite versatile, so there are many potential applications beyond the examples studied here, for example, high-dimensional generalized linear models, sparse precision matrix estimation, shape-restricted function estimation, and time series. A more general question to be considered in follow-up work, one
that has attracted a lot of attention in the Bayesian nonparametric community recently,
concerns the coverage probability of credible regions derived from our empirical Bayes
posterior distribution. Having suitable concentration rates is an important step in the
right direction, but pinning down the constants will require some new insights.
Acknowledgments
The authors are extremely grateful to the associate editor and three anonymous referees
for their detailed comments and suggestions which allowed us to improve the paper.
This work is partially supported by the U. S. National Science Foundation, grants DMS–
1506879 and DMS–1507073.
A Details for the examples
A.1 Proof of Proposition 1
For Condition LP1, under the proposed normal prior, we have
Π_n(L_n) = ∫_{n(θ−θ̂_n)^⊤ Ψ(θ−θ̂_n) < a} N_d(θ | θ̂_n, n^{−1}Ψ^{−1}) dθ.
Making a change of variable, z = n^{1/2}Ψ^{1/2}(θ − θ̂_n), the integral above can be rewritten as
Π_n(L_n) = ∫_{‖z‖² < a} (2π)^{−d/2} e^{−‖z‖²/2} dz,
and, therefore, Π_n(L_n) is lower-bounded by a constant not depending on n, so Π_n(L_n) is bounded away from zero; hence Condition LP1 holds with ε_n = n^{−1/2}. For Condition GP1, we can basically proceed as outlined in the toy example above. So, writing the prior as θ ∼ N_d(θ̂_n, n^{−1}Ψ^{−1}), and the asymptotic distribution of the MLE as θ̂_n ∼ N_d(θ⋆, n^{−1}Σ⋆^{−1}), where Σ⋆ is the asymptotic covariance matrix, i.e., the Fisher information matrix evaluated at θ⋆, we have
π_n(θ)^p ∝ |pnΨ|^{−1/2} |nΨ|^{p/2} N_d(θ | θ̂_n, (pnΨ)^{−1}).
Thus
E_{θ⋆}{π_n(θ)^p} ∝ |pnΨ|^{−1/2} |nΨ|^{p/2} N_d(θ | θ⋆, (pnΨ)^{−1} + n^{−1}Σ⋆^{−1})
and so
∫ [E_{θ⋆}{π_n(θ)^p}]^{1/p} dθ ∝ |I_d + pΨΣ⋆^{−1}|^{1/2 − 1/(2p)}.
As long as Ψ is non-singular, the right-hand side above does not depend on n and is finite, which implies we can take ε_n = n^{−1/2}. It follows from Theorem 1 that the Hellinger rate is ε_n = n^{−1/2} and, since all metrics on the finite-dimensional Θ are equivalent, the same rate obtains for any other metric.
We should highlight the result that the integral involved in checking Condition GP1 is at most exponential in the dimension of the parameter space:
∫ [E_{θ⋆}{π_n(θ)^p}]^{1/p} dθ ≤ e^{κd},   κ > 0.   (17)
This result will be useful in the proof of some of the other propositions.
A.2 Proof of Proposition 2
We start by verifying Condition LP1. Towards this, note that, for those models in the
support of the prior, the data are multinomial, so the likelihood function is
L_n(θ) = S^n θ_1^{n_1} · · · θ_S^{n_S},
where (n_1, . . . , n_S) are the bin counts, i.e., n_s = |{i : X_i ∈ E_s}|, s = 1, . . . , S. Taking expectation with respect to θ ∼ Dir_S(α̂) gives
E(θ_1^{n_1} · · · θ_S^{n_S}) = [Γ(c + S)/Γ(c + S + n)] ∏_{s=1}^S (1 + cθ̂_s)(2 + cθ̂_s) · · · (n_s + cθ̂_s)
 ≥ [Γ(c + S)/Γ(c + S + n)] ∏_{s=1}^S (1 + cθ̂_s)^{n_s}
 ≥ [Γ(c + S)/Γ(c + S + n)] c^n ∏_{s=1}^S θ̂_s^{n_s}.
Therefore,
E{L_n(θ)} ≥ [Γ(c + S) c^n / Γ(c + S + n)] L_n(θ̂).
(18)
Next, a simple “reverse Markov inequality” says, for any random variable Y ∈ (0, 1),
P(Y > a) ≥ [E(Y) − a] / (1 − a),   a ∈ (0, 1).   (19)
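As a quick numerical sanity check, not part of the proof, inequality (19) can be verified by simulation for any random variable taking values in (0, 1); the distribution and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
Y = rng.beta(2.0, 5.0, size=200_000)           # any Y supported on (0, 1)
for a in (0.1, 0.3, 0.5):
    lhs = (Y > a).mean()                       # Monte Carlo estimate of P(Y > a)
    rhs = (Y.mean() - a) / (1 - a)             # reverse Markov lower bound
    assert lhs >= rhs - 1e-3                   # holds up to Monte Carlo error
    print(f"a={a}: P(Y>a)={lhs:.3f} >= {rhs:.3f}")
```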
Recall that L_n = {θ ∈ Θ_n : L_n(θ) > e^{−dnε²_n} L_n(θ̂)} as in (4), so we can apply (19) to get
Π_n(L_n) ≥ [E{L_n(θ)}/L_n(θ̂) − e^{−dnε²_n}] / [1 − e^{−dnε²_n}].
Then it follows from (18) that
Π_n(L_n) ≥ Γ(c + S) c^n / Γ(c + S + n) − e^{−dnε²_n}
and, therefore, Condition LP1 is satisfied if
Γ(c + S + n) / [Γ(c + S) c^n] ≤ e^{dnε²_n}.   (20)
Towards this, we have
Γ(c + S + n) / [Γ(c + S) c^n] = ∏_{j=1}^n (1 + (S + j − 1)/c) ≤ (1 + (S + n)/c)^n.
So, if c = nε_n^{−2} as in the proposition statement, then the right-hand side above is upper-bounded by e^{nε²_n(1+S/n)}. Since S ≤ n, (20) holds for, say, d ≥ 2, hence Condition LP1.
Towards Condition GP1, note that the Dirichlet component for θ satisfies
Dir_S(θ | α̂) ≤ Dir_S(θ̂ | α̂) ≈ (c + S)^{c+S+1/2} ∏_{s=1, n_s>0}^S θ̂_s^{cθ̂_s} / (1 + cθ̂_s)^{cθ̂_s+1/2},
where the “≈” is by Stirling’s formula, valid for all n_s > 0 due to the value of c. This has a uniform upper bound:
Dir_S(θ | α̂) ≤ (c + S)^{c+S+1/2} / c^c,   ∀ θ ∈ ∆(S).
Then Condition GP1 holds if we can bound this by e^{Knε²_n} for a constant K > 0. Using Stirling’s formula again, and the fact that c/S → ∞, we have
(c + S)^{c+S+1/2} / [c^c Γ(S)] = (1 + S/c)^c (1 + c/S)^{S+1/2} S^{S+1/2} / Γ(S) ≤ e^{K′S log(1+c/S)},   K′ > 0.
We need S log(1 + c/S) ≤ nε²_n. Since c/S ≪ n², the logarithmic term is ≲ log n. But we assumed that S ≤ nε²_n (log n)^{−1}, so the product is ≲ nε²_n, proving Condition GP1.
It remains to check Condition S1. A natural candidate for the pseudo-true parameter
θ† in Condition S1 is one that sets θs equal to the probability assigned by the true density
p⋆ to E_s. Indeed, set
θ†_s = ∫_{E_s} p⋆(x) dx,   s = 1, . . . , S.
It is known (e.g., Scricciolo 2015, p. 93) that, if p⋆ is β-Hölder, with β ∈ (0, 1], then the
sup-norm approximation error of pθ† is
‖p⋆ − p_{θ†}‖_∞ ≲ S^{−β}.
Since p⋆ is uniformly bounded away from 0, it follows from Lemma 8 in Ghosal and van der Vaart
(2007) that both K(p⋆ , pθ† ) and V (p⋆ , pθ† ) are upper-bounded by (a constant times)
H²(p⋆, p_{θ†}), which, in turn, is upper-bounded by S^{−2β} by the above display. Therefore, we need S = S_n to satisfy S^{−β} ≤ ε_n, and this is achieved by choosing S = nε²_n (log n)^{−1}
as in the proposition. This establishes Condition S1, completing the proof.
A.3 Proof of Proposition 3
We start by verifying Condition LP1. Towards this, we first note that, for mixtures in
the support of the prior, the likelihood function is
L_n(θ) = ∏_{i=1}^n Σ_{s=1}^S ω_s k(X_i | μ_s),   θ = (ω, μ),
which can be rewritten as
L_n(θ) = Σ_{(n_1,...,n_S)} ω_1^{n_1} · · · ω_S^{n_S} Σ_{(s_1,...,s_n)} ∏_{s=1}^S ∏_{i: s_i=s} k(X_i | μ_s),   (21)
where the first sum is over all S-tuples of non-negative integers (n_1, . . . , n_S) that sum to n, the second sum is over all n-tuples of integers 1, . . . , S with (n_1, . . . , n_S) as the corresponding frequency table, and k(x | μ) = N(x | μ, σ²) for known σ². We also take the convention that, if n_s = 0, then the product ∏_{i: s_i=s} is identically 1. Next, since the prior has ω and μ independent, we only need to bound
E(ω_1^{n_1} · · · ω_S^{n_S})   and   E{∏_{s=1}^S ∏_{i: s_i=s} k(X_i | μ_s)}
for a generic (n1 , . . . , nS ). The first expectation is with respect to the prior for ω and
can be handled exactly like in the proof of Proposition 2. For the second expectation,
which is with respect to the prior for the µ, since the prior has the components of µ
independent, we have
E{∏_{s=1}^S ∏_{i: s_i=s} k(X_i | μ_s)} = ∏_{s=1}^S E{∏_{i: s_i=s} k(X_i | μ_s)},
so we can work with a generic s. Writing out the product of kernels, we get
E{∏_{i: s_i=s} k(X_i | μ_s)} = (2πσ²)^{−n_s/2} e^{−(1/(2σ²)) Σ_{i: s_i=s}(X_i − X̄)²} E{e^{−(n_s/(2σ²))(μ_s − X̄)²}}.
By Jensen’s inequality, i.e., E(e^Z) ≥ e^{E(Z)}, the expectation on the right-hand side is lower-bounded by
e^{−(n_s/(2σ²)) E(μ_s − X̄)²} = e^{−(n_s/(2σ²)){v_n + (μ̂_s − X̄)²}},
where v_n = δ²_n/3 is the variance of μ_s ∼ Unif(μ̂_s − δ_n, μ̂_s + δ_n). This implies
E{∏_{s=1}^S ∏_{i: s_i=s} k(X_i | μ_s)} ≥ e^{−nv_n/(2σ²)} ∏_{s=1}^S ∏_{i: s_i=s} k(X_i | μ̂_s).   (22)
Putting the two expectations back together, from (21) we have that
E{L_n(θ)} ≥ [Γ(c + S) c^n / Γ(c + S + n)] e^{−nv_n/(2σ²)} L_n(θ̂),   (23)
where now the expectation is with respect to both priors. Recall that L_n = {θ ∈ Θ_n : L_n(θ) > e^{−dnε²_n} L_n(θ̂)} as in (4), and define L′_n = {θ ∈ L_n : L_n(θ) ≤ L_n(θ̂_n)}. Since L_n ⊇ L′_n and, for θ ∈ L′_n, we have L_n(θ)/L_n(θ̂_n) ≤ 1, we can apply the reverse Markov inequality (19) again to get
Π_n(L_n) ≥ [E{L_n(θ)}/L_n(θ̂) − e^{−dnε²_n}] / [1 − e^{−dnε²_n}].
Then it follows from (23) that
Π_n(L_n) ≥ [Γ(c + S) c^n / Γ(c + S + n)] e^{−nv_n/(2σ²)} − e^{−dnε²_n}
and, therefore, Condition LP1 is satisfied if
nv_n/(2σ²) ≤ bnε²_n   and   Γ(c + S + n) / [Γ(c + S) c^n] ≤ e^{anε²_n},   (24)
where a + b < d. The first condition is easy to arrange; it requires that
v_n ≤ 2bσ²ε²_n ⟺ δ_n ≤ (6bσ²)^{1/2} ε_n,
which holds by assumption on δ_n. The second condition holds with a = 2 by the argument presented in the proof of Proposition 2. Therefore, Condition LP1 holds.
Towards Condition GP1, putting together the bound on the Dirichlet density function in the proof of Proposition 2 and the following bound on the uniform densities,
∏_{s=1}^S Unif(μ_s | μ̂_s − δ_n, μ̂_s + δ_n) ≤ (1/(2δ_n))^S ∏_{s=1}^S I_{[−B_n−δ_n, B_n+δ_n]}(μ_s),
we have that, for any p > 1,
∫_{Θ_n} [E_{θ⋆}{π_n(θ)^p}]^{1/p} dθ ≤ [(c + S)^{c+S+1/2} / (c^c Γ(S))] · [(1/(2δ_n)) · 2(B_n + δ_n)]^S.
Then Condition GP1 holds if we can make both terms in this product to be like e^{Knε²_n} for a constant K > 0. The first term in the product, coming from the Dirichlet part, is handled just like in the proof of Proposition 2 and, for the second factor, we have
[(1/(2δ_n)) · 2(B_n + δ_n)]^S ≤ e^{S log(1 + B_n/δ_n)}.
Since δ_n ∝ ε_n and B_n ∝ log^{1/2}(ε_n^{−1}), we have B_n/δ_n ∝ n^{1/2}, so the exponent above is ≲ S log n ≲ nε²_n. This takes care of the second factor, proving Condition GP1.
Finally, we refer to Section 4 in Ghosal and van der Vaart (2001) where they show
that there exists a finite mixture, characterized by θ† , with S components and locations
in [−B_n, B_n], such that max{K(p_{θ⋆}, p_{θ†}), V(p_{θ⋆}, p_{θ†})} ≤ ε²_n. This θ† satisfies our Condition S1, so the proposition follows from Theorem 1.
In the context of Remark 3, when the normal kernel is replaced by a Cauchy kernel, we need to verify (22) in order to meet LP1. To this end, let us start with
E exp{−log ∏_{i: s_i=s} [1 + (X_i − μ_s)²/σ²]},
where the expectation is with respect to the prior for the μ_s and the σ is assumed known. This expectation is easily seen to be lower-bounded by
exp{−Σ_{i: s_i=s} log[1 + E(X_i − μ_s)²/σ²]} = exp{−Σ_{i: s_i=s} log[1 + (X_i − μ̂_s)²/σ² + v_n/σ²]}.
The right-hand term can be written as
∏_{i: s_i=s} [1 + (X_i − μ̂_s)²/σ²]^{−1} · ∏_{i: s_i=s} [1 + (v_n/σ²)/(1 + (X_i − μ̂_s)²/σ²)]^{−1},
and the second factor here is lower-bounded by exp(−n_s v_n/σ²). Therefore, Condition LP1 holds with the same ε_n as in the normal case.
Condition GP1 in this case does not depend on the form of the kernel, whether it
be normal or Cauchy. And S1 is satisfied if we assume the true density p⋆ = pθ⋆ is a
finite mixture of densities, for example, the Cauchy. This proves the claim in Remark 3,
namely, that the empirical Bayes posterior, based on a Cauchy kernel, concentrates at
the rate εn = (log n)n−1/2 when the true density is a finite Cauchy mixture.
A.4 Proof of Proposition 4
The proportionality constant depends on n (and g) but it is bounded away from zero
and infinity as n → ∞ so can be ignored in our analysis. Here we can check the second part of Condition LP2. Indeed, for the true model S⋆_n of size s⋆_n, using the inequality \binom{n}{s} ≤ (en/s)^s, we have
w_n(S⋆_n) ∝ \binom{n}{s⋆_n}^{−1} e^{−Bs⋆_n} ≥ e^{−[B+1+log(n/s⋆_n)]s⋆_n}
and, since nε²_n = s⋆_n log(n/s⋆_n), the second condition in Condition LP2 holds for all large
n with A > 1. Next, for Condition GP2, note that the prior wn given above corresponds
to a hierarchical prior for S that starts with a truncated geometric prior for |S| and then
a uniform prior for S, given |S|. Then it follows directly that Condition GP2 on the
marginal prior for |S| is satisfied.
For Condition LP2, we first write the likelihood ratio for a generic θ ∈ Θ_S:
L_n(θ) / L_n(θ̂_{n,S}) = e^{−(1/2)‖θ_S − θ̂_{n,S}‖²}.
Therefore, L_{n,S} = {θ ∈ Θ_S : (1/2)‖θ − θ̂_{n,S}‖² < |S|}. This is just a ball in R^{|S|} so we can bound the Gaussian measure assigned to it. Indeed,
Π_n(L_{n,S}) = ∫_{‖z‖² < 2|S|} (2π)^{−d/2} γ^{d/2} e^{−(γ/2)‖z‖²} dz
 > (2π)^{−|S|/2} γ^{|S|/2} e^{−γ|S|} · π^{|S|/2}(2|S|)^{|S|/2} / Γ(|S|/2 + 1)
 = γ^{|S|/2} e^{−γ|S|} |S|^{|S|/2} / Γ(|S|/2 + 1).
Stirling’s formula gives an approximation of the lower bound:
e^{−γ|S|} γ^{|S|/2} 2^{|S|/2} e^{|S|/2} / (π|S|)^{1/2}.
For moderate to large |S|, the above display is ≳ exp{(1 − 2γ + log γ + log 2)|S|/2} and, therefore, plugging in S⋆_n for the generic S above, we see that Condition LP2 holds if 1 − 2γ + log γ + log 2 < 0. For Condition GP2, the calculation is similar to that in
the finite-dimensional case handled in Proposition 1. Indeed, the last part of the proof
showed that, for a d-dimensional normal mean model with covariance matrix Σ^{−1} and a normal empirical prior with mean θ̂_n and covariance matrix proportional to Σ^{−1}, the integral specified in the second part of Condition GP2 is exponential in the dimension d. In the present case, we have that
∫_{Θ_S} [E_{θ⋆}{π_{n,S}(θ)^p}]^{1/p} dθ = e^{κ|S|}
for some κ > 0 and then, clearly, Condition GP2 holds with K = κ. If we take B in the prior w_n for S to be larger than this K, then the conditions of Theorem 2 are met with ε²_n = (s⋆_n/n) log(n/s⋆_n).
A.5 Proof of Proposition 5
By the choice of marginal prior for S and the normal form of the conditional prior
for θS , given S, Conditions LP2 and GP2 follow immediately or almost exactly like in
Section 4.4. Indeed, the second part of Condition GP2 holds with K the same as was
derived in Section 4.4. Therefore, we have only to check Condition S2. Let pθ denote the
density corresponding to regression function f = fθ . If θ⋆ is the coefficient vector in the
basis expansion of f ⋆ , then it is easy to check that
K(p^n_{θ⋆}, p^n_{θ⋆_S}) = (n/(2σ²)) ‖θ⋆ − θ⋆_S‖² = (n/(2σ²)) Σ_{j>|S|} θ⋆_j².
If f⋆ is smooth in the sense that it belongs to a Sobolev space indexed by β > 1/2, i.e., the basis coefficient vector θ⋆ satisfies Σ_{j=1}^∞ θ⋆_j² j^{2β} ≲ 1, then it follows that
K(p^n_{θ⋆}, p^n_{θ⋆_S}) ≲ n|S|^{−2β}.
So, if we take ε_n = n^{−β/(2β+1)} and |S⋆_n| = ⌊nε²_n⌋ = ⌊n^{1/(2β+1)}⌋, then a candidate θ† in Condition S2 is θ† = θ⋆_S. That the desired bound on the Kullback–Leibler second moment V also holds for this θ† follows similarly, as in Arbel et al. (2013, p. 558). This establishes Condition S2, so the conclusion of the proposition follows from Theorem 2.
A.6 Proof of Proposition 6
Write ε_n = (log n)^t n^{−β/(2β+1)} for a constant t > 0 to be determined. For Condition S2, we appeal to Lemma 4 in Kruijer et al. (2010), which states that there exists a finite normal mixture, p†, having S⋆_n components, with
S⋆_n ≲ n^{1/(2β+1)} (log n)^{k−t} = nε²_n (log n)^{k−3t},
such that max{K(p⋆, p†), V(p⋆, p†)} ≤ ε²_n, where k = 2/τ_2 and τ_2 is related to the tails
of p⋆ in their Condition C3. So, if t is sufficiently large, then our Condition S2 holds.
For Condition GP2, we first note that, by a straightforward modification of the argument given in the proof of Proposition 3, we have
∫_{∆(S)×R^S×R_+} [E_{p⋆}{π_{n,S}(θ)^p}]^{1/p} dθ ≤ e^{bS log n} (1 + B/δ)^S [B_u(1 + ψ) − B_l(1 − ψ)] / (2ψB_l),
for some b > 0. The logarithmic term appears in the first product because, as in the proof of Proposition 3, the exponent can be bounded by a constant times S log(1 + c/S) ≲ S log n since c/S = n²/S² < n². To get the upper bound in the above display to be exponential in S, we can take
δ ≳ B/n^b   and   ψ ≳ [(B_u − B_l)/(2B_l)] · {e^{bS log n} − (B_l + B_u)/(2B_l)}^{−1}.
With these choices, it follows that the right-hand side in the previous display is upper
bounded by e3b log n , independent of S. Therefore, trivially, the summation in (8) is also
upper bounded by e3b log n . Since log n ≤ nε2n , we have that Condition GP2 holds.
Condition LP2 has two parts to it. For the first part, which concerns the prior
concentration on Ln , we can follow the argument in the proof of Proposition 3. In
particular, with the additional prior on λ, the corresponding version of (23) is
E L_n(θ_S) ≥ [Γ(c + S) c^n / Γ(c + S + n)] e^{−(1/6)nδ²λ̂} e^{−nzψ} L_n(θ̂_S)
for some z ∈ (0, 1). This is based on the result that if λ ∼ Unif(λ̂(1 − ψ), λ̂(1 + ψ)) then Eλ = λ̂ and E log λ > log λ̂ − zψ for some z ∈ (0, 1). With c = n²S^{−1} as proposed, the argument in the proof of Proposition 2 shows that the first term on the right-hand side of the above display is lower-bounded by e^{−CS} for some C > 0. To make the other terms lower-bounded by something of the order e^{−C′S}, we need δ and ψ to satisfy
δ² ≲ (1/B²_u)(S/n)   and   ψ ≲ S/n.
Given these constraints and those coming from checking Condition GP2 above, we require
B/n^b ≲ (1/B_u)(S/n)^{1/2}   and   n^{bS} − (1/2)(1 + B_u/B_l) ≳ B_u n / B_l.
From Lemma 4 in Kruijer et al. (2010), we can deduce that the absolute values of the locations for p† are smaller than a constant times log ε_n^{−β}. Hence, we can take B = (log n)². Also, we need B_l ≲ ε^β_n, which is met by taking B_l = n^{−1}. To meet our constraints, we can take B_u = n^{b−2}, so we need b ≥ 2. These conditions on (B, B_l, B_u, δ, ψ) are met
by the choices stated in the proposition. For the second part of Condition LP2, which
concerns the concentration of wn around Sn⋆ , we have
w_n(S⋆_n) ≥ e^{−D(log S⋆_n)^r S⋆_n} ≳ e^{−Dnε²_n(log n)^{k+r−3t}}.
So, just like in Kruijer et al. (2010), as long as 3t > k + r, we get w_n(S⋆_n) ≥ e^{−Dnε²_n} as
required in Condition LP2.
References
Arbel, J., Gayraud, G., and Rousseau, J. (2013). Bayesian optimal adaptive estimation
using a sieve prior. Scand. J. Stat., 40(3):549–570.
Belitser, E. (2016). On coverage and local radial rates of DDM credible sets. Ann. Statist.,
to appear, arXiv:1407.5232.
Bissiri, P. G., Holmes, C. C., and Walker, S. G. (2016). A general framework for updating
belief distributions. J. R. Stat. Soc. Ser. B. Stat. Methodol., 78(5):1103–1130.
Castillo, I. and van der Vaart, A. (2012). Needles and straw in a haystack: posterior
concentration for possibly sparse sequences. Ann. Statist., 40(4):2069–2101.
Donnet, S., Rivoirard, V., Rousseau, J., and Scricciolo, C. (2014). Posterior concentration
rates for empirical Bayes procedures, with applications to Dirichlet process mixtures.
Unpublished manuscript, arXiv:1406.4406.
Donoho, D. L., Johnstone, I. M., Hoch, J. C., and Stern, A. S. (1992). Maximum entropy
and the nearly black object. J. Roy. Statist. Soc. Ser. B, 54(1):41–81. With discussion
and a reply by the authors.
Efron, B. (2014). Two modeling strategies for empirical Bayes estimation. Statist. Sci.,
29(2):285–301.
Gao, C. and Zhou, H. H. (2016). Rate exact Bayesian adaptation with modified block
priors. Ann. Statist., 44(1):318–345.
Geman, S. and Hwang, C.-R. (1982). Nonparametric maximum likelihood estimation by
the method of sieves. Ann. Statist., 10(2):401–414.
Ghosal, S., Ghosh, J. K., and van der Vaart, A. W. (2000). Convergence rates of posterior
distributions. Ann. Statist., 28(2):500–531.
Ghosal, S. and van der Vaart, A. W. (2001). Entropies and rates of convergence for
maximum likelihood and Bayes estimation for mixtures of normal densities. Ann.
Statist., 29(5):1233–1263.
Ghosal, S. and van der Vaart, A. W. (2007). Posterior convergence rates of Dirichlet
mixtures at smooth densities. Ann. Statist., 35(2):697–723.
Grenander, U. (1981). Abstract Inference. John Wiley & Sons, Inc., New York. Wiley
Series in Probability and Mathematical Statistics.
Grünwald, P. and van Ommen, T. (2016). Inconsistency of Bayesian inference for misspecified linear models and a proposal for repairing it. Unpublished manuscript, arXiv:1412.3730.
Holmes, C. and Walker, S. G. (2017). Setting a value for the power likelihood in a general Bayesian model. Biometrika, to appear.
Kruijer, W., Rousseau, J., and van der Vaart, A. (2010). Adaptive Bayesian density
estimation with location-scale mixtures. Electron. J. Stat., 4:1225–1257.
Martin, R., Mess, R., and Walker, S. G. (2015). Empirical Bayes posterior concentration
in sparse high-dimensional linear models. Bernoulli, to appear; arXiv:1406.7718.
Martin, R. and Walker, S. G. (2014). Asymptotically minimax empirical Bayes estimation
of a sparse normal mean vector. Electron. J. Stat., 8(2):2188–2206.
Rousseau, J. and Szabó, B. (2016). Asymptotic behavior of the empirical Bayes posteriors
associated to maximum marginal likelihood estimator. Ann. Statist., to appear.
Salomond, J.-B. (2014). Concentration rate and consistency of the posterior distribution
for selected priors under monotonicity constraints. Electron. J. Stat., 8(1):1380–1404.
Scricciolo, C. (2007). On rates of convergence for Bayesian density estimation. Scand. J.
Statist., 34(3):626–642.
Scricciolo, C. (2015). Bayesian adaptation. J. Statist. Plann. Inference, 166:87–101.
Shen, W. and Ghosal, S. (2015). Adaptive Bayesian procedures using random series
priors. Scand. J. Stat., 42(4):1194–1213.
Shen, X. and Wasserman, L. (2001). Rates of convergence of posterior distributions. Ann.
Statist., 29(3):687–714.
Syring, N. and Martin, R. (2016). Calibrating general posterior credible regions. Unpublished manuscript, arXiv:1509.00922.
van der Vaart, A. W. and van Zanten, J. H. (2009). Adaptive Bayesian estimation using
a Gaussian random field with inverse gamma bandwidth. Ann. Statist., 37(5B):2655–
2675.
Walker, S. and Hjort, N. L. (2001). On Bayesian consistency. J. R. Stat. Soc. Ser. B
Stat. Methodol., 63(4):811–821.
Walker, S. G., Lijoi, A., and Prünster, I. (2007). On rates of convergence for posterior
distributions in infinite-dimensional models. Ann. Statist., 35(2):738–746.
arXiv:1607.05819v2 [cs.CR] 22 Oct 2016
THE STATUS OF POLYCYCLIC GROUP-BASED
CRYPTOGRAPHY: A SURVEY AND OPEN PROBLEMS
JONATHAN GRYAK AND DELARAM KAHROBAEI
Abstract. Polycyclic groups are natural generalizations of cyclic groups but
with more complicated algorithmic properties. They are finitely presented and
the word, conjugacy, and isomorphism decision problems are all solvable in
these groups. Moreover, the non-virtually nilpotent ones exhibit an exponential growth rate. These properties make them suitable for use in group-based
cryptography, which was proposed in 2004 by Eick and Kahrobaei [10].
Since then, many cryptosystems have been created that employ polycyclic
groups. These include key exchanges such as non-commutative ElGamal, authentication schemes based on the twisted conjugacy problem, and secret sharing via the word problem. In response, heuristic and deterministic methods
of cryptanalysis have been developed, including the length-based and linear
decomposition attacks. Despite these efforts, there are classes of infinite polycyclic groups that remain suitable for cryptography.
The analysis of algorithms for search and decision problems in polycyclic
groups has also been developed. In addition to results for the aforementioned
problems we present those concerning polycyclic representations, group morphisms, and orbit decidability. Though much progress has been made, many
algorithmic and complexity problems remain unsolved; we conclude with a
number of them. Of particular interest is to show that cryptosystems using
infinite polycyclic groups are resistant to cryptanalysis on a quantum computer.
Contents
1. Introduction
2. Algorithmic Problems in Polycyclic Groups
2.1. Representations of Polycyclic Groups
2.2. Growth Rate
2.3. Decision Problems
2.4. The Conjugacy Search Problem and its Variations
2.5. Properties of Automorphism Groups
2.6. Quantum Algorithms
3. Cryptosystems
3.1. The Anshel-Anshel-Goldfeld Key-Exchange Protocol
3.2. Ko-Lee Key Exchange Protocol
3.3. Non-Commutative ElGamal Key-Exchange
3.4. Non-Commutative Digital Signature
3.5. A Key Exchange Using the Subgroup Membership Search Problem
3.6. An Authentication Scheme Based on the Twisted Conjugacy Problem
3.7. Authentication Schemes Based on Semigroup Actions
3.8. Secret Sharing Schemes Based on the Word Problem
4. Cryptanalysis and Attacks
4.1. Length-Based Attack
4.2. Linear Decomposition Attack
4.3. Field-Based Attack
4.4. Quotient Attack
4.5. Linear Centralizer Attack
5. Conclusion
Acknowledgements
References
1. Introduction
In cryptography, many of the most common key exchange protocols, including
RSA and Diffie-Hellman, rely upon hardness assumptions related to integer factorization and discrete logarithms for their security. While there are no known efficient
algorithms for performing the above operations on conventional computers, Peter
Shor devised a quantum algorithm [39] that solves both of these problems in polynomial time. This has motivated the search for alternative methods for constructing
cryptosystems. One such methodology is non-commutative cryptography, which
unlike the aforementioned conventional systems does not operate over the integers.
Instead, non-commutative cryptographic systems are built upon groups and other
algebraic structures whose underlying operations are non-commutative.
In 1999, Anshel, Anshel, and Goldfeld [1] and Ko, Lee, et al. [25] introduced key
exchange protocols whose security is based in part on the conjugacy search problem:
for a group G, given that u, v ∈ G are conjugate, find an x in G such that ux = v.
Though braid groups were the suggested platform for both protocols, other classes
of groups can be employed. In general, groups suitable for use in non-commutative
cryptography must be well-known and possess the following properties: a solvable
word problem, a computationally difficult group-theoretic problem, a “fast” word
growth rate, and the namesake non-commutativity [33].
In 2004, Eick and Kahrobaei [10] investigated the algorithmic properties of polycyclic groups. In particular, they explored how the time complexity of the word
and conjugacy problems varied with respect to a group’s Hirsch length. Their experiments showed that the while the time complexity of the conjugacy problem
grew exponentially with increased Hirsch length, the word problem remained efficiently solvable. These results suggested the suitability of polycyclic groups for use
in cryptography, and stimulated research into cryptosystems based on these groups
and their underlying algorithmic problems.
In this paper, we survey the development of group-based cryptography over polycyclic and metabelian groups. In section 2 we discuss the algorithmic properties of
polycyclic groups. Polycyclic groups and their intrinsic presentations are defined,
as well as several other representations. A number of group-theoretic decision
problems are introduced, including the word, conjugacy, and isomorphism decision
problems. Note that in every polycyclic group, the three aforementioned problems
are solvable. Moreover, the word problem can be solved efficiently in most cases by
using a collection algorithm.
In section 3 we describe a number of cryptosystems that have been built around
these groups. These include additional key exchanges along with schemes for secret sharing, authentication, and digital signatures. This variety of cryptosystems
evinces the flexibility and utility of polycyclic groups in non-commutative cryptography.
As new cryptosystems are created, so too are their dual in the form of cryptanalyses and attacks. In section 4 we discuss the length-based attack, a heuristic
technique that was the first to break the AAG protocol over braid groups. Other
attacks exploit the linear representation that all polycyclic groups admit. Some,
such as the field-based attack, are specific to a subclass of polycyclic groups. A
more general approach is the linear decomposition attack, but its feasibility is dependent upon the size of a group’s representation.
We conclude the paper with the current status of polycyclic group-based cryptography.
We also include a list of open problems, which we hope will guide researchers who
wish to work in this exciting field.
2. Algorithmic Problems in Polycyclic Groups
The nature of polycyclic groups enables them to be represented in several ways.
These approaches give rise to complementary algorithms for solving search and
decisions problems, with varying degrees of computational complexity. Due to this
flexibility, we begin our study of the algorithmic problems in polycyclic groups by
examining these representations.
2.1. Representations of Polycyclic Groups.
2.1.1. Polycyclic Sequences and Hirsch Length. A group G is said to be polycyclic
if it has a subnormal series G = G1 ⊲ · · · ⊲ Gn+1 = {1} such that the quotient groups
Gi /Gi+1 are cyclic. This series is called a polycyclic series. The Hirsch length of a
polycyclic group G is the number of infinite groups in its polycyclic series. Though
a polycyclic group can have more than one polycyclic series, as a consequence of
the Schreier Refinement Theorem, its Hirsch length is independent of the choice of
series.
2.1.2. Polycyclic Presentations. Every polycyclic group can be described by a polycyclic presentation:
⟨g_1, . . . , g_n | g_j^{g_i} = u_{ij} for 1 ≤ i < j ≤ n,  g_j^{g_i^{−1}} = v_{ij} for 1 ≤ i < j ≤ n,  g_i^{r_i} = w_{ii} for i ∈ I⟩,
where uij , vij , wii are words in the generators gi+1 , . . . , gn and I is the set of indices
i ∈ {1, . . . , n} such that ri = [Gi : Gi+1 ] is finite.
This special type of finite presentation reveals the polycyclic structure of the underlying group, see [20, Chapter 10] for details. Unlike general finite presentations,
a polycyclic presentation enables the word problem to be solved using an algorithm
called collection. The collection algorithm is generally effective in practical applications, but its precise computational complexity remains unknown. For finite
groups, collection from the left was shown to be polynomial by Leedham-Green and
Soicher [27]. For infinite groups, the complexity of the collection algorithm and a
modified version were analyzed by Gebhardt [16]. The resultant worst-case bound
is in terms of the absolute values of all exponents occurring during the collection
process, rather than the exponents of the input word. Thus a global complexity
analysis of the collection algorithm remains elusive.
2.1.3. Polycyclic Presentations with a Malcev Basis. It has been shown by Assmann and Linton [2] that the efficacy of the collection algorithm can be improved
significantly by exploiting the Malcev structure of the underlying group. This approach determines a large nilpotent normal subgroup of the given group and then
exploits the Malcev correspondence for the normal subgroup. There is no known
complexity analysis for this methodology.
2.1.4. Polycyclic Presentations with Multiplication Polynomials. Du Sautoy [8] proved
that every polycyclic group has a normal subgroup of finite index such that multiplication in this subgroup can be achieved by evaluating certain multiplication
polynomials. This extends the well-known result by Hall [19] for torsion-free nilpotent polycyclic groups. If such multiplication polynomials are available the performance of collection in the considered group improves significantly. Additionally, it
provides a basis for the complexity analysis of multiplication in polycyclic groups;
it must be noted however that the index of the normal subgroup can be arbitrarily
large.
2.1.5. Matrix Groups. It is well-known that every polycyclic group can be embedded into GL(n, Z) for some n ∈ N. For groups that are additionally torsion-free
and nilpotent, a matrix representation can be computed. The algorithm of Lo and
Ostheimer [28] can be applied to a polycyclic presentation, while for multiplication
polynomials the technique by Nickel [35] can be utilized. Multiplication of group
elements in their matrix form is polynomial in the dimension n of the representation.
2.2. Growth Rate.
Let G be a finitely generated group. The growth rate of a group is specified by
its growth function γ : N −→ R defined as γ(n) = #{w ∈ G : l(w) ≤ n}, where
l(w) is the length of w as a word in the generators of G. As words are used as keys
in group-based cryptography, there is a natural relationship between the growth
rate of a group and the key space, the set of all possible keys. A fast growth rate
engenders a large key space, making an exhaustive search of this space intractable.
A large class of polycyclic groups are known to have an exponential growth rate
(namely those which are not virtually nilpotent, see Wolf [46] and Milnor [30]).
Consequently, these polycyclic groups are potentially good candidates for use as
platform groups.
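For intuition, and not as material from the survey itself, the growth function of a concrete non-virtually-nilpotent polycyclic group can be estimated by breadth-first enumeration of balls in the word metric. The sketch below uses G = Z² ⋊_A Z for the hyperbolic matrix A = [[2,1],[1,1]], with elements stored directly as pairs (v, n); the generating set and radius are illustrative choices.

```python
import numpy as np

A = np.array([[2, 1], [1, 1]])                 # hyperbolic matrix: G = Z^2 semidirect Z
A_inv = np.array([[1, -1], [-1, 2]])           # A^{-1}, still integral since det A = 1

def mat_pow(n):
    M = A if n >= 0 else A_inv
    out = np.eye(2, dtype=int)
    for _ in range(abs(n)):
        out = out @ M
    return out

def mul(g, h):
    """(v, n) * (w, m) = (v + A^n w, n + m) in Z^2 semidirect_A Z."""
    (v, n), (w, m) = g, h
    return (tuple(np.array(v) + mat_pow(n) @ np.array(w)), n + m)

gens = [((1, 0), 0), ((-1, 0), 0), ((0, 1), 0), ((0, -1), 0),
        ((0, 0), 1), ((0, 0), -1)]
identity = ((0, 0), 0)

ball, frontier = {identity}, {identity}
for r in range(1, 7):
    frontier = {mul(g, s) for g in frontier for s in gens} - ball
    ball |= frontier
    print(f"gamma({r}) = {len(ball)}")         # ball sizes grow quickly with r
```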
2.3. Decision Problems.
In 1911, Max Dehn introduced [7] three decision problems on finitely presented
groups - the word problem, the conjugacy problem, and the isomorphism problem.
In the definitions below, let G be a finitely presented group:
• Word Decision Problem - For any g ∈ G, determine if g = 1G , the identity
element of G.
• Single Conjugacy Decision Problem - Determine for any u, v ∈ G if u is
conjugate to v (denoted u ∼ v).
• Isomorphism Decision Problem - Given groups G and G′ with respective
finite presentations hX | Ri and hX ′ | R′ i, determine if G is isomorphic to
G′ .
For polycyclic groups all three of the above problems are decidable. The conjugacy decision problem for polycyclic groups is decidable by the results of Remeslennikov [36] and Formanek [13]. That the word problem is decidable can be observed
from its formulation as a special case of the conjugacy decision problem (where
g = u, v = 1G ), or by observing that every word has a unique normal form induced
by a polycyclic presentation. The isomorphism decision problem for polycyclic
groups is solvable by a result of Segal [38].
An additional decision problem called the subgroup membership decision problem (alternatively the generalized word decision problem) asks for any g ∈ G and
subgroup H ≤ G, determine if g ∈ H. Malcev in [29] showed that this problem is
indeed solvable for polycyclic groups.
2.4. The Conjugacy Search Problem and its Variations.
Once the solvability of a group-theoretic decision problem is affirmed, the subsequent task is to produce elements (or morphisms, etc.) that are solutions to
particular instances of it. The seminal protocols of non-commutative cryptography, Ko-Lee and AAG, are based in part on the conjugacy search problem (CSP).
Their example spurred the development of many other protocols whose security is
based on some variant of the CSP. In this section we explore these variations and
the methods designed to solve them.
2.4.1. Conjugacy Search Problem. Let G be a group and a1 , . . . , an , b1 , . . . , bn elements of it, with ai ∼ bi . The problem of finding a c ∈ G such that for all i,
a_i^c = b_i is called the (single) conjugacy search problem for i = 1 and the multiple
conjugacy search problem for 1 < i ≤ n. In polycyclic groups, the multiple conjugacy search problem for n elements reduces to n independent solutions of single
conjugacy search [10]. We will therefore speak only of the conjugacy search problem
without signifying arity.
For any finitely presented group (polycyclic groups included) the conjugacy
search problem can be solved exhaustively by recursively enumerating the conjugates of the element in question [40]. There are other approaches to solving the
conjugacy search problem, many of which can solve it efficiently. However, the applicability of these methods and their relative efficiency is contingent upon addition
restrictions on the group’s properties, as well as the manner is which the polycyclic
group is specified.
2.4.2. CSP Using Polycyclic Presentations. For infinite polycyclic groups the algorithm proposed by Eick and Ostheimer [11] is applicable. This algorithm uses a
variety of ideas: it exploits finite orbit and stabilizer computations, calculations in
number fields, and linear methods for polycyclic groups. The algorithm has been
implemented and seems to be efficient for groups of small Hirsch length. An analysis of the algorithm’s complexity is hindered by there being no bound on the length
of the finite orbits that may occur in the computation.
The restriction of the applicability of the above algorithm to groups of small
Hirsch length is supported by the experimental evidence provided by Eick and
Kahrobaei in [10]. They compared the performance of the Eick-Ostheimer algorithm
for the CSP against the collection algorithm for polycyclic groups of the form G =
OK ⋊ UK , where OK and UK are respectively the maximal order and group of units
of an algebraic number field K. In the table below, the column H(G) is the Hirsch
length of the group G, with the collection and conjugation entries representing the
average running time over 100 trials using random words (respectively, random
conjugate pairs) from G:
H(G)   Collection   Conjugation
  2    0.00 sec      9.96 sec
  6    0.01 sec     10.16 sec
 14    0.05 sec     > 100 hr
These results suggest that while collection remains efficient as the Hirsch length
increases, the Eick-Ostheimer algorithm becomes impractical. Presently there are no known efficient algorithms for the conjugacy search problem in infinite polycyclic groups of high Hirsch length. Such
groups remain suitable for use as platform groups.
2.4.3. CSP Using Multiplication Polynomials. Suppose that G instead is given by
a polycyclic presentation with multiplication polynomials. Let g1 , . . . , gk be the
polycyclic generating set of the presentation and consider a generic element g = g_1^{x_1} · · · g_k^{x_k} of G. Then g is a solution to the multiple conjugacy search problem if and only if a_i g = g b_i for 1 ≤ i ≤ k. If a_i = g_1^{l_{i1}} · · · g_k^{l_{ik}} and b_i = g_1^{m_{i1}} · · · g_k^{m_{ik}}, with f_1, . . . , f_k denoting the multiplication polynomials for G, then a_i g = g b_i if and only if
f_j(l_i, x) = f_j(x, m_i) for 1 ≤ i, j ≤ k.
If f1 , . . . , fk are given as explicit polynomials over an extension field of Q and li , mi
are integer vectors, then the CSP is equivalent to determining an integer solution
for a set of k polynomials in k indeterminates. Thus the CSP can also be considered
from the perspective of algebraic geometry.
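As a concrete illustration, not from the survey itself, the discrete Heisenberg group has global multiplication polynomials: identifying an element with the integer triple (a, b, c) of the unitriangular matrix [[1, a, c], [0, 1, b], [0, 0, 1]], the product is (a1+a2, b1+b2, c1+c2+a1·b2). The sketch below, with illustrative element values, uses sympy to form the system f_j(l, x) = f_j(x, m) whose integer solutions are exactly the conjugators.

```python
import sympy as sp

def mul(u, v):
    """Multiplication polynomials of the discrete Heisenberg group."""
    a1, b1, c1 = u
    a2, b2, c2 = v
    return (a1 + a2, b1 + b2, c1 + c2 + a1 * b2)

def inv(u):
    a, b, c = u
    return (-a, -b, a * b - c)

x1, x2, x3 = sp.symbols("x1 x2 x3", integer=True)
g = (x1, x2, x3)                 # unknown conjugator, a generic element
a = (1, 2, 0)
t = (3, 5, 7)                    # "secret" conjugator
b = mul(mul(inv(t), a), t)       # b = t^-1 a t, so a and b are conjugate

# a g = g b  iff the multiplication polynomials agree componentwise.
exprs = [sp.expand(lhs - rhs) for lhs, rhs in zip(mul(a, g), mul(g, b))]
print(exprs)                              # a linear system in x1, x2, x3
print(sp.linsolve(exprs, (x1, x2, x3)))   # solutions form a coset containing t
```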
2.4.4. Power Conjugacy Search Problem. The key exchange presented in Section
3.3.2 makes use of the power conjugacy search problem, where if it is known for
some a, b ∈ G and n ∈ N that a^n = b^g for some g ∈ G, the task is to find one such n and g. Note that for n = 1 this reduces to the standard CSP, whereas if g = 1_G
this reduces to the power search problem.
Just as the conjugacy search problem is solvable by enumeration, so is the power
conjugacy search variant, but no efficient algorithm is known.
2.4.5. Twisted Conjugacy Search Problem. Twisted conjugacy arises in Nielsen theory, where the number of twisted conjugacy classes is related to number of fixed
points of a mapping. The twisted conjugacy search problem is to find, given a group G and an endomorphism φ, an element a ∈ G such that t = a^{−1}wφ(a), provided that at least one such a exists.
The standard CSP can be seen as a special case of the twisted version where
φ(x) = x, the identity automorphism. The protocol by Shpilrain and Ushakov in
Section 3.6 uses the double twisted conjugacy variant, in which the above definitions
is modified to include an additional endomorphism α and the task is to then find
an element a ∈ G such that t = α(a^{−1})wφ(a).
The twisted conjugacy decision problem was proven to be decidable by Roman’kov [37]. Both the single and doubly twisted conjugacy search problems are
solvable by the same method of enumeration as in the case of the standard conjugacy search problem. However, no efficient algorithm is known.
2.5. Properties of Automorphism Groups. The automorphism group Aut(G)
and its subgroups have been extensively studied for polycyclic groups G. Like
polycyclic groups themselves, Aut(G) is finitely presented [3], and the outer automorphism group Out(G) is isomorphic to a linear group [45].
A decision problem related to Aut(G) is the orbit decision problem. Given elements g, h ∈ G and a subset A ⊆ Aut(G), determine if there exists α ∈ A such that
g = α(h). Note that if A = Inn(G) this problem reduces to the standard conjugacy
decision problem. When G is polycyclic all cyclic subgroups A ≤ Aut(G) are orbit
decidable [5].
For groups G in the larger class of polycyclic-by-finite (or virtually polycyclic)
groups, the conjugacy decision problem is decidable in Aut(G) [38]. Additionally,
Aut(G) is either virtually polycyclic or it contains a non-abelian free subgroup [9].
2.6. Quantum Algorithms.
As mentioned in the introduction, the introduction of non-commutative cryptography was spurred by the publication of Shor’s algorithm. The algorithm enables a
discrete logs in polynomial time, as opposed to in exponential time on a conventional computer.
From a group-theoretic perspective, Shor’s algorithm can be seen as solving the
hidden subgroup problem in finite cyclic groups. A subgroup H ≤ G is considered
hidden by a function f from G to a set X if it constant over all cosets of H. A
2003 paper by [4] by Batty, et al. explores this and other applications of quantum
algorithms to group theory, including an algorithm by Watrous that determines the
order of a finite solvable group. Bonanome showed [6] that a modified version of
Grover’s algorithm can solve the automorphism and conjugacy decision problems
in finite groups, as well as determine fixed points. The algorithm by Ivanyos, et
al [22] solves the hidden subgroup problem for finite nilpotent groups of class 2.
There are also partial results to solving the power conjugacy problem [12].
Despite these developments in the use of quantum algorithms for finite groups,
there are no known quantum algorithms that are applicable to infinite groups.
3. Cryptosystems
For the systems described below, the chosen platform group G should be suitable
for cryptography as delineated in the introduction. Let G be finitely presented and
non-abelian. Group operations (products, inverses) and solving the word problem
must be efficient. Additional criteria for each protocol or scheme are stated in their
respective descriptions. Note that the precise definitions of each algorithmic search
or decision problem can be found in Section 2.
3.1. The Anshel-Anshel-Goldfeld Key-Exchange Protocol. In their 1999
paper [1], Anshel, Anshel, and Goldfeld introduced the commutator key exchange
protocol, which is also referred to as AAG key exchange or Arithmetica. The groupbased version of the key exchange described below is in the style of [31]. Prior to the
key exchange, the protocol parameters N1 , N2 , L1 , L2 , L ∈ N, with 1 ≤ L1 ≤ L2 ,
are chosen and made public:
(1) Alice chooses a set Ā = {a1 , . . . , aN1 }, with Bob choosing B̄ = {b1 , . . . , bN2 },
where ai , bj ∈ G are words of length in [L1 , L2 ]. Note that Ā and B̄ both
generate subgroups of G. These sets are then exchanged publicly with each
other.
(2) Alice constructs her private key as A = a_{s_1}^{ε_1} · · · a_{s_L}^{ε_L}, with a_{s_k} ∈ Ā and ε_k ∈ {−1, 1}. Similarly, Bob computes as his private key B = b_{t_1}^{δ_1} · · · b_{t_L}^{δ_L}, with b_{t_k} ∈ B̄ and δ_k ∈ {−1, 1}.
(3) Alice then computes b′_j = A^{−1} b_j A for 1 ≤ j ≤ N_2 and sends this collection to Bob, while Bob computes and sends Alice a′_i = B^{−1} a_i B for 1 ≤ i ≤ N_1.
(4) Alice and Bob can now compute a shared key κ = A^{−1}B^{−1}AB, which is the commutator of A and B, denoted [A, B]. Alice computes (using only the a′_i which correspond to some s_i of her private key):
κ_A = A^{−1} (a′_{s_1})^{ε_1} · · · (a′_{s_L})^{ε_L}
   = A^{−1} B^{−1} a_{s_1}^{ε_1} B · · · B^{−1} a_{s_L}^{ε_L} B
   = A^{−1} B^{−1} a_{s_1}^{ε_1} (BB^{−1}) a_{s_2}^{ε_2} B · · · B^{−1} a_{s_{L−1}}^{ε_{L−1}} (BB^{−1}) a_{s_L}^{ε_L} B
   = A^{−1} B^{−1} a_{s_1}^{ε_1} a_{s_2}^{ε_2} · · · a_{s_{L−1}}^{ε_{L−1}} a_{s_L}^{ε_L} B
   = A^{−1} B^{−1} A B.
Analogously, Bob computes κ_B = B^{−1}A^{−1}BA. The shared secret is then κ = κ_A = κ_B^{−1}.
As noted in [41], the security of AAG is based on both the simultaneous conjugacy
search problem and the subgroup membership search problem.
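To show the bookkeeping in one place, here is a minimal simulation of the exchange over a toy platform, with permutations composed as Python tuples; the platform, parameter sizes, and fixed private words are illustrative assumptions only and are far too small to be secure.

```python
import random
from functools import reduce

N = 8                                                    # degree of the permutations
def compose(p, q):                                       # group multiplication p*q
    return tuple(p[q[i]] for i in range(N))
def inverse(p):
    out = [0] * N
    for i, v in enumerate(p):
        out[v] = i
    return tuple(out)
conj = lambda g, x: compose(inverse(x), compose(g, x))   # g^x = x^-1 g x

random.seed(0)
rand_perm = lambda: tuple(random.sample(range(N), N))

A_bar = [rand_perm() for _ in range(4)]                  # Alice's public tuple
B_bar = [rand_perm() for _ in range(4)]                  # Bob's public tuple

word_A, word_B = [0, 2, 1], [3, 1]                       # indices of the private words
A = reduce(compose, (A_bar[i] for i in word_A))          # Alice's private key
B = reduce(compose, (B_bar[i] for i in word_B))          # Bob's private key

b_prime = [conj(b, A) for b in B_bar]                    # Alice -> Bob
a_prime = [conj(a, B) for a in A_bar]                    # Bob -> Alice

# Alice rebuilds B^-1 A B from the conjugated generators of her own word
kappa_A = compose(inverse(A), reduce(compose, (a_prime[i] for i in word_A)))
# Bob does the same with his word
kappa_B = compose(inverse(B), reduce(compose, (b_prime[i] for i in word_B)))
assert kappa_A == inverse(kappa_B)                       # shared key [A, B]
print(kappa_A)
```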
3.2. Ko-Lee Key Exchange Protocol. Originally specified by Ko, Lee, et al.
[25] using braid groups, their non-commutative analogue of Diffie-Hellman key exchange can be generalized to work over other platform groups. Let G be a finitely
presented group, with A, B ≤ G such that all elements of A and B commute.
An element g ∈ G is chosen, and g, G, A, B are made public. A shared secret
can then be constructed as follows:
• Alice chooses a random element a ∈ A and sends g^a to Bob.
• Bob chooses a random element b ∈ B and sends g^b to Alice.
• The shared key is then g^{ab}, as Alice computes (g^b)^a, which is equal to Bob’s computation of (g^a)^b as a and b commute.
The security of Ko-Lee rests upon solving the conjugacy search problem within
the subgroups A, B.
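A minimal simulation follows; it is illustrative only, with A and B taken as permutations moving disjoint sets of points so that every a ∈ A commutes with every b ∈ B, which is an assumption of this sketch and not a secure instantiation.

```python
import random

N = 10
compose = lambda p, q: tuple(p[q[i]] for i in range(N))
def inverse(p):
    out = [0] * N
    for i, v in enumerate(p):
        out[v] = i
    return tuple(out)
conj = lambda g, x: compose(inverse(x), compose(g, x))   # g^x = x^-1 g x

random.seed(1)
def perm_on(points):              # random permutation supported on `points` only
    img = random.sample(points, len(points))
    p = list(range(N))
    for src, dst in zip(points, img):
        p[src] = dst
    return tuple(p)

g = tuple(random.sample(range(N), N))                    # public element
a = perm_on([0, 1, 2, 3, 4])                             # Alice's secret, in A
b = perm_on([5, 6, 7, 8, 9])                             # Bob's secret, in B
assert compose(a, b) == compose(b, a)                    # disjoint supports commute

g_a = conj(g, a)        # Alice -> Bob
g_b = conj(g, b)        # Bob -> Alice
assert conj(g_b, a) == conj(g_a, b)                      # both parties obtain g^(ab)
print(conj(g_a, b))
```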
3.3. Non-Commutative ElGamal Key-Exchange. In the 2006 paper by Kahrobaei
and Khan [23], the authors proposed two adaptations of the ElGamal asymmetric
key encryption algorithm for use in non-commutative groups. Let S, T be finitely
generated subgroups such that all elements of S and T commute. In any exchange,
the triple hG, S, T i is made public.
3.3.1. Non-Commutative Key Exchange Using Conjugacy Search.
• Bob chooses s ∈ S as his private key, a random element b ∈ G, and publishes as his public key the tuple ⟨b, c⟩, with c = b^s.
• To create a shared secret x ∈ G, Alice chooses x and a t ∈ T. Using Bob’s public key, she publishes ⟨h, E⟩, with h = b^t and E = x^{c^t}.
• To recover x, Bob first computes h^s, which, as elements of S and T commute, yields
h^s = (b^t)^s = (b^s)^t = c^t.
Bob can then calculate x = E^{(c^t)^{−1}}.
The security of this scheme relies upon the conjugacy search problem in G.
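A toy end-to-end run of this exchange is sketched below; the permutation platform and the commuting subgroups S, T built on disjoint supports are illustrative assumptions, not a secure choice of parameters.

```python
import random

N = 10
compose = lambda p, q: tuple(p[q[i]] for i in range(N))
def inverse(p):
    out = [0] * N
    for i, v in enumerate(p):
        out[v] = i
    return tuple(out)
conj = lambda g, x: compose(inverse(x), compose(g, x))   # g^x = x^-1 g x

random.seed(2)
def perm_on(points):
    img = random.sample(points, len(points))
    p = list(range(N))
    for src, dst in zip(points, img):
        p[src] = dst
    return tuple(p)

# Bob: private s in S, public (b, c) with c = b^s
s = perm_on([0, 1, 2, 3, 4])                 # S acts on the first five points
b = tuple(random.sample(range(N), N))
c = conj(b, s)

# Alice: secret x, ephemeral t in T (commutes with S), publishes (h, E)
x = tuple(random.sample(range(N), N))
t = perm_on([5, 6, 7, 8, 9])
h, E = conj(b, t), conj(x, conj(c, t))       # h = b^t, E = x^(c^t)

# Bob: h^s = (b^t)^s = (b^s)^t = c^t, then undo that conjugation to recover x
ct = conj(h, s)
x_recovered = compose(ct, compose(E, inverse(ct)))       # E^((c^t)^-1)
assert x_recovered == x
print(x_recovered)
```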
3.3.2. Non-Commutative Key Exchange Using Power Conjugacy Search. By imposing the additional requirement that the conjugacy search problem is efficiently
solvable in G, we can now describe a variation of the previous protocol:
• Bob chooses s ∈ S and n ∈ Z as his private key, as well as a random element g ∈ G, and publishes as his public key ⟨v, w⟩, with v = g^n and w = s^{−1}gs. Note that w^n = (s^{−1}gs)^n = s^{−1}g^n s = s^{−1}vs.
• Alice chooses a shared secret x ∈ G, along with m ∈ Z and t ∈ T, and publishes ⟨h, E⟩, with h = t^{−1}w^m t and E = x^{−1}t^{−1}v^m tx.
• To recover x, Bob first computes E′ = sh^n s^{−1} = st^{−1}s^{−1}g^{mn}sts^{−1}, which, as elements of S and T commute, yields
E′ = t^{−1}v^m t.
Knowing that E = x^{−1}E′x, Bob can then solve the conjugacy search problem to obtain the shared secret x.
The security of this scheme rests upon the power conjugacy search problem in
G.
3.4. Non-Commutative Digital Signature. The following digital signature scheme
was proposed in a paper by Kahrobaei and Koupparis [24]. The platform group G
must be infinite. The scheme uses two functions: f : G → {0, 1}∗, which encodes
elements of the group as binary strings; and H : {0, 1}∗ → G, a collision-resistant
hash function. Using these functions (which are made public along with G), a
message can be signed and verified as follows:
• Key Generation: The signer first chooses an element g ∈ G, whose centralizer, the set of elements that commute with g, contains 1G and powers of g
exclusively. The private key consists of s ∈ G and n ∈ N, where n is chosen
to be highly composite. The public key x = g^{ns} is then published.
• Signing Algorithm: To sign a message m, the signer chooses a random element t ∈ G and a random factorization n_i n_j of n, and computes the following (with || denoting concatenation):
y = g^{n_i t}
h = H(m||f(y))
α = t^{−1}shy
The signature σ = ⟨y, α, n_j⟩ and the message m are then sent to the message recipient.
• Verification: To verify, the recipient computes h′ = H(m||f(y)), and accepts the message as authentic if and only if the following equality holds:
y^{n_j} α = x^{h′y}.
hash function, the conjugacy search problem in G, and the Diffie-Hellman assumption. Moreover, Alice must maintain a public list of previously used factors of n,
and regenerate s and n after a few uses.
3.5. A Key Exchange Using the Subgroup Membership Search Problem.
In [43], Shpilrain and Zapata proposed a public key exchange protocol over relatively free groups. Given a free group Gn of rank n and R E Gn , the quotient
group Gn = Gn /R is relatively free if for any endomorphism ψ of Gn , ψ(R) ≤ R.
The protocol utilizes two types of automorphisms:
• Let {x_1, . . . , x_n} be the generators of G_n. The Nielsen automorphisms are defined as:
α_j(x_i) = x_i^{−1} if i = j, and x_i if i ≠ j;   β_{jk}(x_i) = x_i x_j if i = k, and x_i if i ≠ k.
• For relatively free groups like Gn , the Nielsen automorphisms form a subgroup of Aut(Gn ) under composition. Elements in this subgroup are called
tame automorphisms. In constructing a private key, the protocol uses both
tame and non-tame automorphisms.
In the key exchange below, let F_n and F_{n+m} denote the relatively free groups of rank n and n + m, with respective generating sets {x_1, . . . , x_n} and {x_1, . . . , x_n, x_{n+1}, . . . , x_{n+m}}. Moreover, let F_j^i = ∏_i F_j denote the direct product of i instances of the relatively free group of rank j. Finally, let z(x_1, . . . , x_{n+m}) denote a word z written in the alphabet {x_1, . . . , x_{n+m}}. The exchange then proceeds
as follows:
(1) Alice chooses an automorphism φ ∈ Aut(Fn+m ), where φ = τ1 ◦ · · · ◦ τk , a
composition of Nielsen automorphisms and non-tame automorphisms which
are readily invertible. Alice uses φ^{−1} = τ_k^{−1} ◦ · · · ◦ τ_1^{−1} as her private
key. For each generator xi of Fn+m , Alice computes the word φ(xi ) =
yi (x1 , . . . , xn+m ). She then computes ŷi , which is the restriction of each
yi to a word in the generators of Fn . The tuple hŷ1 , . . . , ŷn+m i is then
published as the public key.
(2) Bob chooses a word w in the subgroup S of F_{n+m}^{n+m} consisting of words of the form v = (v_1(x_1, . . . , x_n), . . . , v_n(x_1, . . . , x_n), 1, . . . , 1). Thus S ≅ F_n^n, and w = (w_1(x_1, . . . , x_n), . . . , w_n(x_1, . . . , x_n)). Using the components of
the public key, Bob encrypts w by replacing each instance of xi in ŷj by wi .
The encrypted tuple φ̂(w) = hŷ1 (w1 , . . . , wn ), . . . , ŷn (w1 , . . . , wn )i is then
sent to Alice.
(3) Alice applies φ^{−1} (restricted to F_n^n) component-wise to φ̂(w) to recover w′,
a unique normal form of w. This w′ is the shared key.
The security of the protocol is two-fold. Decrypting a particular message φ̂(w)
is equivalent to solving the subgroup membership search problem in the subgroup
generated by the public key. To recover the private key, an attacker must recover
the automorphism φ and its inverse from the public image of the generators ŷi ,
restricted to the subgroup Fn . Shpilrain and Zapata claim there is no known
method of accomplishing this outside of an exhaustive search of Aut(Fn+m ).
The authors suggest free metabelian groups of rank r (with r = 10, n = 8, m = 2)
as platform groups for their protocol. Aside from meeting the standard criteria for
platform groups, these groups have the requisite supply of non-tame automorphisms
and the subgroup membership search problem is known to be super-polynomial in
these groups.
3.6. An Authentication Scheme Based on the Twisted Conjugacy Problem. In [42], Shpilrain and Ushakov introduced a non-commutative authentication
scheme based on the Fiat-Shamir scheme. The platform group G can in fact be a
semigroup, provided that an antihomomorphism ∗ : G → G, i.e., (ab)∗ = b∗ a∗ , exists. The endomorphism group of G should also be sufficiently large to preclude an
exhaustive search. In the simulation of the protocol below, Alice is authenticating
herself to Bob:
(1) Alice chooses s ∈ G as her private key. She then chooses w, t ∈ G and
endomorphisms φ, ψ such that t = ψ(s∗ )wφ(s). The public key hφ, ψ, w, ti
is then published.
(2) The commitment/verification exchange proceeds as follows:
(a) Alice chooses an r ∈ G and computes the commitment u = ψ(r∗ )tφ(r),
sending it to Bob.
(b) Bob chooses a random bit c and sends it to Alice.
(c) Alice replies with v = r if c = 0, and v = sr otherwise.
(d) Bob verifies the commitment u by computing u′ , and accepts if u = u′ :
If c = 0, Bob computes u′ = ψ(v ∗ )tφ(v) = ψ(r∗ )tφ(r).
If c = 1, Bob computes u′ = ψ(v∗)wφ(v), where
u′ = ψ((sr)∗)wφ(sr) = ψ(r∗)ψ(s∗)wφ(s)φ(r) = ψ(r∗)tφ(r).
Note that the commitment/verification steps must be performed k times to yield a probability of successful forgery less than 1/2^k. The security of the scheme is based
on the apparent hardness of the double twisted conjugacy search problem.
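A minimal round of the scheme is sketched below under illustrative assumptions: the platform is the semigroup of 2×2 integer matrices mod p, the antihomomorphism * is the transpose, and φ = ψ are taken to be the identity, which collapses double twisted conjugacy to an ordinary double-coset relation; a real instantiation would use nontrivial endomorphisms and a carefully chosen platform.

```python
import numpy as np

p = 10007
rng = np.random.default_rng(3)
rand_mat = lambda: rng.integers(0, p, size=(2, 2))
star = lambda M: M.T % p                      # antihomomorphism: (AB)* = B* A*
phi = psi = lambda M: M % p                   # identity endomorphisms (toy choice)
mul = lambda *Ms: np.linalg.multi_dot(Ms) % p

# Alice's keys: private s; public (phi, psi, w, t) with t = psi(s*) w phi(s)
s, w = rand_mat(), rand_mat()
t = mul(psi(star(s)), w, phi(s))

# One commitment / challenge / response round
r = rand_mat()
u = mul(psi(star(r)), t, phi(r))              # commitment sent to Bob
for c in (0, 1):                              # Bob's challenge bit
    v = r if c == 0 else mul(s, r)            # Alice's response
    base = t if c == 0 else w                 # verify against t (c = 0) or w (c = 1)
    u_check = mul(psi(star(v)), base, phi(v))
    assert np.array_equal(u_check, u)
print("both challenges verified")
```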
3.7. Authentication Schemes Based on Semigroup Actions. Drawing inspiration from the zero-knowledge proof by Feige, Fiat, and Shamir, Grigoriev and Shpilrain [17] introduced two generic protocol schemes based upon (semi)group actions and provided several concrete examples.
3.7.1. An Authentication Scheme Based on the Endomorphism Problem. One such
instance of their second protocol is based upon the endomorphism problem. While
this scheme can be used with a semigroup or some other algebraic structure, the
structure S must meet several criteria:
• An algorithm exists to determine if a function over S is an endomorphism. If S is specified by a presentation, this criterion is satisfied by S having an efficiently solvable word problem.
• An algorithm exists to determine if a function over S is an automorphism of S.
• The endomorphism search problem in S should be demonstrably NP-hard.
As before, in the protocol exchange below Alice is authenticating herself to Bob:
(1) Alice chooses an endomorphism φ : S → S as her private key. Alice then
chooses elements s, t ∈ S such that t = φ(s). The public key hS, s, ti is then
published.
(2) The commitment/verification exchange proceeds as follows:
(a) Alice chooses an automorphism ψ and computes the commitment u =
ψ(t), sending it to Bob.
(b) Bob chooses a random bit c and sends it to Alice.
(c) Alice replies with v = ψ if c = 0, and v = ψ ◦ φ otherwise.
(d) Bob verifies the commitment u by computing u′ :
If c = 0, Bob computes u′ = ψ(t) and accepts if u = u′ and ψ is an
automorphism.
If c = 1, Bob computes u′ = (ψ ◦ φ)(s) and accepts if u = u′ and ψ ◦ φ
is an endomorphism.
3.7.2. An Authentication Scheme Based on the Group Isomorphism Problem. The
following is a new instance of the first protocol, which requires a class of finitely
presented groups C with the following algorithmic properties:
• The class C must have an efficiently solvable isomorphism decision problem.
• The isomorphism search problem in C should be demonstrably NP-hard.
The protocol exchange is as follows:
(1) Alice chooses two isomorphic groups G1 and G2 from C. Alice then chooses
an isomorphism α : G1 → G2 as her private key, and publishes hG1 , G2 i.
(2) The commitment/verification exchange proceeds as follows:
(a) Alice chooses a group G ∈ C and an isomorphism β : G → G1 , sending
the commitment G to Bob.
(b) Bob chooses a random bit c and sends it to Alice.
(c) Alice replies with γ = α if c = 0, and γ = α ◦ β otherwise.
(d) Bob verifies the commitment G by computing G′ = γ(G):
If c = 0, Bob accepts if G′ ≅ G1.
If c = 1, Bob accepts if G′ ≅ G2.
For both of the above authentication schemes, the commitment/verification steps
must be performed multiple times to yield a low probability of successful forgery.
3.8. Secret Sharing Schemes Based on the Word Problem. Habeeb, Kahrobaei,
and Shpilrain [18] proposed two secret sharing schemes for groups whose presentations satisfy small cancellation conditions. In a (t, n) scheme, the threshold t is the
number of participants that are required to recover the shared secret (created and
disseminated by the “dealer”), with n the total number of participants.
In both schemes, the dealer wishes to share a k-bit integer x that will be represented as a column vector C ∈ B^k. Prior to initiating the secret sharing, the dealer chooses groups G_j given by the presentations ⟨X | R_j⟩, where X is a common generating set and R_j a unique set of relators for each participant P_j. The generating
set X is then made public. Note that both schemes require secure communication
channels between both the dealer and participants and between the participants
themselves. These secure channels can be achieved using any preferred public key
exchange protocol.
3.8.1. An (n, n)-threshold Scheme. In this scheme, all n participants are required
to reproduce the secret x:
(1) The dealer sends each participant P_j their unique relator set R_j.
(2) The dealer decomposes C into n vectors C_j ∈ B^k such that C = Σ_j C_j.
(3) Each entry c_kj of C_j is then encoded as a word w_kj ∈ G_j, such that w_kj ≡ 1_{G_j} if c_kj = 1 and w_kj ≢ 1_{G_j} otherwise. The w_kj's are then sent to P_j using an open channel.
(4) For each w_kj, participant P_j solves the word problem in G_j and reconstructs C_j.
(5) The participants can then recover C by summing over all the C_j's. Note that a secure sum protocol can be employed so that the C_j's need not be divulged to the other participants.
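The share-splitting and recombination logic of this (n, n) scheme can be sketched by abstracting the group-theoretic transport away: the dealer's words w_kj and the participants' word-problem solutions are replaced by direct access to the bit vectors, and the sum C = Σ_j C_j is read here as addition modulo 2 (an XOR split), which is one natural interpretation; all parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def deal_shares(secret_bits, n):
    """Split a k-bit secret C into n shares C_j with C = sum_j C_j (mod 2)."""
    shares = [rng.integers(0, 2, size=secret_bits.shape) for _ in range(n - 1)]
    last = secret_bits.copy()
    for s in shares:
        last = last ^ s                # XOR accumulates the mod-2 sum
    shares.append(last)
    return shares

def recombine(shares):
    out = np.zeros_like(shares[0])
    for s in shares:
        out ^= s
    return out

C = np.array([1, 0, 1, 1, 0])          # hypothetical 5-bit secret
shares = deal_shares(C, n=4)
assert np.array_equal(recombine(shares), C)
```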
3.8.2. A (t, n)-threshold Scheme. In this scheme, t participants are required to reproduce the secret x. As in Shamir’s secret sharing, x must be an element in Zp
with p prime, and a polynomial f of degree t − 1 must be chosen by the dealer such
that f (0) = x. The dealer must also choose k-bit integers yj ≡ f (j) (mod p).
(1) The dealer sends each participant Pj their unique relator set Rj .
(2) Each yj has its bits bkj encoded as words wkj ∈ Gj as in the previous
scheme.
(3) For each wkj , participant Pj solves the word problem in Gj , yielding yj .
(4) The participants can then perform polynomial interpolation using the yj s
to recover f . The shared secret x is then revealed by evaluating f (0). If
t ≥ 3, Lagrange interpolation can be employed so that the y_j's need not be
divulged to the other participants.
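The polynomial portion of this (t, n) scheme is ordinary Shamir sharing over Z_p. The sketch below shows only the dealer's polynomial evaluation and the participants' Lagrange interpolation at 0, again abstracting away the word-problem transport of the y_j values; the prime, threshold, and secret are illustrative placeholders.

```python
import random

def deal(secret, t, n, p):
    """Shares y_j = f(j) mod p for a random degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    return {j: sum(c * pow(j, i, p) for i, c in enumerate(coeffs)) % p
            for j in range(1, n + 1)}

def reconstruct(shares, p):
    """Lagrange interpolation at 0 from any t shares {j: y_j}."""
    secret = 0
    for j, y in shares.items():
        num, den = 1, 1
        for m in shares:
            if m != j:
                num = (num * (-m)) % p
                den = (den * (j - m)) % p
        secret = (secret + y * num * pow(den, -1, p)) % p
    return secret

p, t, n = 2**13 - 1, 3, 5                    # p = 8191 is prime; parameters are illustrative
x = 4242                                     # the dealer's secret
shares = deal(x, t, n, p)
subset = {j: shares[j] for j in (1, 3, 5)}   # any t participants suffice
assert reconstruct(subset, p) == x
```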
The security of these schemes is contingent upon the relators R_j being kept secret.
4. Cryptanalysis and Attacks
In this section we present a number of attacks against group-based cryptosystems, with an emphasis on those that are applicable to polycyclic groups.
4.1. Length-Based Attack. The length-based attack (LBA) is an incomplete, local search that attempts to solve the conjugacy search problem (or its generalized
version) by using the length of a word as a heuristic. It was first introduced by
Hughes and Tannenbaum [21] as a means to attack the AAG key exchange protocol
over braid groups. In [15], Garber, Kaplan, Teicher, Tsaban, and Vishne explored
the use of length functions based on the Garside normal form of braid group elements. They demonstrated experimentally that the length-based attack in this
context could break the AAG protocol, albeit inefficiently.
As the length-based attack is an iterative improvement search, it is susceptible to
failing at peaks and plateaux in the search space. In [31], Myasnikov and Ushakov
identified when these peaks occur and were able to make successive refinements to
the algorithm to yield a high success rate.
More recently, the authors of [14] analyzed the LBA against AAG over polycyclic
groups. They found that the success rate of the LBA decreased as the Hirsch length
of the platform group increased. Their version of the LBA, essentially a local beam
search, is presented below:
Algorithm 1 LBA with Memory 2
Initialize S = {(|b′|, b′, 1_G)}.
while not time out do
  for (|c|, c, x) ∈ S do
    Remove (|c|, c, x) from S
    Compute c^{a_i^ε} for all i ∈ {1, ..., N_1} and ε = ±1
    if c^{a_i^ε} = b then output the inverse of a_i^ε x and stop
    Save (|c^{a_i^ε}|, c^{a_i^ε}, a_i^ε x) in S′
  end for
  After all conjugation attempts, sort S′ by the first element of every tuple
  Copy the smallest M elements into S and delete the rest of S′
end while
Otherwise, output FAIL

Note that the a_i, b′, b̄′, and N_1 are from the AAG protocol exchange in Section 3.1, while c̄′ is a candidate conjugator set. The length of a conjugator set c̄ = (c_1, ..., c_j) is defined as Σ_j |c_j|.
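A schematic Python restatement of Algorithm 1 follows. The platform group is hidden behind assumed callables — `length` (the word-length heuristic), `conj(t, g) = g^{-1} t g` applied component-wise, and `inverse` — and the beam width and time budget are placeholders, so this is a sketch of the search structure rather than an attack implementation.

```python
import itertools, time

def lba_with_memory(b_conj, target, generators, length, conj, inverse,
                    beam_width=100, time_budget=60.0):
    """Local-beam-search sketch of the length-based attack (Algorithm 1).

    b_conj     : the conjugated tuple b' from the AAG exchange
    target     : the tuple the attacker tries to reach
    generators : the public elements a_1, ..., a_{N_1}
    Returns the list of applied conjugating elements on success (the sought
    conjugator is the inverse of their product), or None on timeout (FAIL).
    """
    beam = [(length(b_conj), b_conj, [])]
    deadline = time.time() + time_budget
    while time.time() < deadline:
        candidates = []
        for _, c, word in beam:
            for a, eps in itertools.product(generators, (+1, -1)):
                g = a if eps == +1 else inverse(a)
                c_new = conj(c, g)
                if c_new == target:
                    return word + [g]
                candidates.append((length(c_new), c_new, word + [g]))
        candidates.sort(key=lambda item: item[0])   # shortest tuples first
        beam = candidates[:beam_width]              # keep the M best ("memory")
    return None
```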
4.2. Linear Decomposition Attack. In [32], Miasnikov and Roman’kov introduced the linear decomposition attack. The attack is a general framework for the
cryptanalysis of a number of group-theoretic analogues of Diffie-Hellman key exchange. For a protocol to be susceptible to the attack its platform groups must
admit a linear representation. Moreover, the algorithmic security assumption of
the protocol must be equivalent to commutative linear transformations. Note that
the AAG protocol is not susceptible to this attack.
Given the linear structure V and subsets W ≤ V and U ≤ End(V), the attack first computes a basis for the span of all vectors of the form wu, with w ∈ W and u ∈ ⟨U⟩. This can be done in polynomial time with respect to the dimension of V
and the sizes of W and U . This calculation can be performed offline if the platform
group for a particular protocol is fixed. The public group elements transmitted
during the key exchange can then be decomposed using this basis to reconstruct
the shared secret without discovering the private information of each party, negating
the need for an attacker to solve the underlying security problem.
The attack requires the platform group to be specified by either its linear representation V (as a vector space or an associative algebra) or by a presentation
coupled with a faithful embedding into GL(V ). Moreover, the linear space into
which the group is embedded must be of sufficiently small dimension to make the
attack tractable. While the dimension of the smallest linear embeddings of finite groups and some classes of infinite groups such as torsion-free nilpotent and
polycyclic-by-finite are known, the authors concede that no such bounds are known
for other linear groups, including general polycyclic groups and metabelian groups.
4.3. Field-Based Attack. Kotov and Ushakov [26] investigated the security of
the AAG key-exchange protocol used with certain polycyclic groups of the form
G_F = O_F ⋊ U_F, where O_F is the maximal order and U_F is the unit group of an algebraic number field F generated by an irreducible polynomial. In the semidirect product, U_F acts on O_F by right multiplication. These groups were the original
polycyclic platform groups suggested by Eick and Kahrobaei in [10]. In [14], Garber,
Kahrobaei, and Lam showed that such groups were resistant to the length-based
attack, with the attack’s success decreasing as the Hirsch length of the group GF
increased.
Contrary to these results, the field-based attack devised by the authors is able to
recover the shared key regardless of the group’s Hirsch length. Using a deterministic, polynomial time algorithm, the key is recovered by solving a linear system of
conjugacy equations over the field F . If the group GF is specified as a semidirect
product and F is given in matrix form, the attack can be directly applied. However, if GF is given by a polycyclic presentation, the authors construct a linear
representation from the presentation prior to recovering the shared key.
While the field-based attack is successful in these particular groups, the authors
concede that their attack does not preclude other polycyclic groups from consideration for the AAG protocol. We claim that there are other classes of polycyclic
groups that are resistant to such an attack. Such platform groups would be specified by their polycyclic presentations and have matrix representations that are not
readily computable.
4.4. Quotient Attack. In attempting to recover the shared secret from the public
information of the AAG protocol, the length-based attack (LBA) operates as if the
platform group G is a free group. The success of the LBA on non-free groups
motivated Miasnikov and Ushakov in [34] to investigate the asymptotic properties
of the given platform groups. Ultimately they determined that the LBA is successful
for groups in which a random choice of elements is very likely to generate a free
subgroup of G.
These investigations led to a new form of attack for the AAG key exchange
protocol and others that use some variation of the membership or conjugacy search
problems. Dubbed the quotient attack, the algorithms solve the search problems
in a quotient group G/N . If G/N possesses the exponentially-generic free basis
property the solution in the quotient will yield one in the original group. The time
complexity of the attack is contingent upon the particular class of platform groups.
For pure braid groups P Bn the authors prove that the complexity is O(n2 ).
As polycyclic groups do not possess the free basis property nor any free subgroups, this attack is not applicable.
4.5. Linear Centralizer Attack. Tsaban [44] devised the linear centralizer attack
against AAG over the original braid group platform. The attack exploits a faithful
linear representation of a braid group Bn . Using this representation, the algorithm
computes a basis for the double centralizer of the public subsets of the AAG protocol
(which are contained in their respective double centralizers). This process produces
one half of the shared key, after which random elements are tested to find an inverse
that yields the other half. The algorithm runs in expected polynomial time with
respect to n, but is impractical for even modest values of n.
The applicability of the linear centralizer attack to other platform groups is
limited to those whose faithful representations are known and whose linear representations are sufficiently small. As mentioned previously with respect to the linear
decomposition attack, these aspects of polycyclic groups are currently unknown.
5. Conclusion
In this paper we have presented a survey of over ten years of research related to polycyclic group-based cryptography. We began with a study of the algorithmic properties of polycyclic groups. Polycyclic groups admit a number of representations, including polycyclic presentations, multiplication polynomials, and matrix representations. In addition to the decidability of the classic decision problems of word, conjugacy, and isomorphism, the twisted conjugacy and orbit problems are also decidable. Moreover, the conjugacy decision problem for the automorphism group Aut(G) of a polycyclic group G is decidable.
We have seen that there are a variety of key exchanges, digital signature systems, and secret sharing schemes for which a polycyclic group is an appropriate choice of platform group. These schemes use several different computational problems in polycyclic groups, as listed in the paper, going beyond the use of the conjugacy search problem.
While there has been considerable research activity concerning polycyclic groups and their attendant cryptosystems over the last decade, many computational complexity and algorithmic questions remain unanswered. We have collected these outstanding problems below, with the hope of stimulating interest in their solutions:
(1) What is the complexity of the isomorphism search problem in polycyclic
groups?
(2) What is the complexity of the twisted conjugacy search problem in polycyclic groups?
(3) What is the complexity of the power conjugacy problem in polycyclic
groups?
(4) What is the complexity of the geodesic length problem in polycyclic groups?
(5) What is the complexity of the n-root problem in polycyclic groups?
(6) What is the complexity of finding a matrix representation of a polycyclic group?
(7) What is the complexity of the conjugacy problem in the automorphism group of a polycyclic group?
(8) What is the complexity of the endomorphism (automorphism) search problem in polycyclic groups?
(9) What is the complexity of the homomorphism problem in polycyclic groups?
(10) Are polycyclic group-based cryptosystems resistant to quantum algorithms?
(11) What is the complexity of the subgroup membership search problem in
polycyclic groups?
Acknowledgements
We would like to thank Bettina Eick for her contributions regarding polycyclic
groups and their algorithmic properties. Delaram Kahrobaei is partially supported
by a PSC-CUNY grant from the CUNY Research Foundation, the City Tech Foundation, and ONR (Office of Naval Research) grant N00014-15-1-2164. Delaram
Kahrobaei has also been partially supported by an NSF travel grant CCF-1564968 to IHP in Paris.
References
[1] I. Anshel, M. Anshel, and D. Goldfeld. An algebraic method for public-key cryptography.
Math. Res. Let., 6:287–291, 1999.
[2] B. Assmann and S. Linton. Using the Mal’cev correspondence for collection in polycyclic
groups. J. Algebra, 316(2):828–848, 2007.
[3] L. Auslander. The automorphism group of a polycyclic group. Annals of Mathematics, pages
314–322, 1969.
[4] M. Batty, S. Rees, S. Braunstein, and A. Duncan. Quantum algorithms in group theory.
Technical report, 2003.
[5] O. Bogopolski, A. Martino, and E. Ventura. Orbit decidability and the conjugacy problem for
some extensions of groups. Transactions of the American Mathematical Society, 362(4):2003–
2036, 2010.
[6] M. Bonanome. Quantum Algorithms in Combinatorial Group Theory. PhD dissertation, City
University of New York, 2007.
[7] M. Dehn. Über unendliche diskontinuierliche gruppen. Mathematische Annalen, 71(1):116–
144, 1911.
[8] M. du Sautoy. Polycyclic groups, analytic groups and algebraic groups. Proc. London Math.
Soc. (3), 85(1):62–92, 2002.
[9] B. Eick. When is the automorphism group of a virtually polycyclic group virtually polycyclic?
Glasgow Mathematical Journal, 45(03):527–533, 2003.
[10] B. Eick and D. Kahrobaei. Polycyclic groups: a new platform for cryptography, preprint
arxiv: math.gr/0411077. Technical report, 2004.
[11] B. Eick and G. Ostheimer. On the orbit-stabilizer problem for integral matrix actions of
polycyclic groups. Math. Comp., 72(243):1511–1529 (electronic), 2003.
[12] A. Fesenko. Vulnerability of cryptographic primitives based on the power conjugacy search
problem in quantum computing. Cybernetics and Systems Analysis, 50(5):815–816, 2014.
[13] E. Formanek. Conjugate separability in polycyclic groups. Journal of Algebra, 42(1):1–10,
1976.
[14] D. Garber, D. Kahrobaei, and H. T. Lam. Length-based attack for polycyclic groups. Journal
of Mathematical Cryptology, De Gruyter, pages 33–44, 2015.
[15] D. Garber, S. Kaplan, M. Teicher, B. Tsaban, and U. Vishne. Length-based conjugacy search
in the braid group. Contemp. Math. 418, pages 75–87, 2006.
[16] V. Gebhardt. Efficient collection in infinite polycyclic groups. J. Symbolic Comput.,
34(3):213–228, 2002.
[17] D. Grigoriev and V. Shpilrain. Zero-knowledge authentication schemes from actions on
graphs, groups, or rings. Ann. Pure Appl. Logic, 162:194–200, 2010.
[18] M. Habeeb, D. Kahrobaei, and V. Shpilrain. A secret sharing scheme based on group presentations and the word problem. Contemp. Math., Amer. Math. Soc., 582:143–150, 2012.
[19] P. Hall. The Edmonton notes on nilpotent groups. Queen Mary College Mathematics Notes.
Mathematics Department, Queen Mary College, London, 1969.
[20] D. F. Holt, B. Eick, and E. A. O’Brien. Handbook of computational group theory. Discrete
Mathematics and its Applications (Boca Raton). Chapman & Hall/CRC, Boca Raton, FL,
2005.
[21] J. Hughes and A. Tannenbaum. Length-based attacks for certain group based encryption rewriting systems. In Workshop SECI02: Sécurité de la Communication sur Internet, 2002.
[22] G. Ivanyos, L. Sanselme, and M. Santha. An efficient quantum algorithm for the hidden
subgroup problem in nil-2 groups. In LATIN 2008: Theoretical Informatics, pages 759–771.
Springer, 2008.
[23] D. Kahrobaei and B. Khan. Nis05-6: A non-commutative generalization of ElGamal key
exchange using polycyclic groups. In IEEE Globecom 2006, pages 1–5, Nov 2006.
[24] D. Kahrobaei and C. Koupparis. Non-commutative digital signatures using non-commutative
groups. Groups, Complexity, Cryptology, 4:377–384, 2012.
[25] K. H. Ko, S. J. Lee, J. H. Cheon, J. W. Han, J. Kang, and C. Park. New public-key cryptosystem using braid groups. Advances in cryptology, CRYPTO 2000 (Santa Barbara, CA),
LNCS, vol. 1880, pages 166–183, 2000.
[26] M. Kotov and A. Ushakov. Analysis of a certain polycyclic-group-based cryptosystem. Journal
of Mathematical Cryptology, 9(3):161–167, 2015.
[27] C. R. Leedham-Green and L. H. Soicher. Collection from the left and other strategies. J.
Symbolic Comput., 9(5-6):665–675, 1990. Computational group theory, Part 1.
[28] E. Lo and G. Ostheimer. A practical algorithm for finding matrix representations for polycyclic groups. J. Symbolic Comput., 28(3):339–360, 1999.
[29] A. Mal’cev. On homomorphisms onto finite groups. Trans. Amer. Math. Soc, 119:67–79, 1983.
[30] J. Milnor. Growth of finitely generated solvable groups. J. Differential Geom., 2(4):447–449,
1968.
[31] A. D. Myasnikov and A. Ushakov. Length-based attack and braid groups: cryptanalysis of
Anshel-Anshel-Goldfeld key-exchange protocol. PKC 2007, LNCS 4450, pages 76–88, 2007.
[32] A. G. Myasnikov and V. Roman’kov. A linear decomposition attack. Groups Complexity
Cryptology, 7(1):81–94, 2015.
[33] A. G. Myasnikov, V. Shpilrain, A. Ushakov, and N. Mosina. Non-commutative cryptography
and complexity of group-theoretic problems, volume 177. American Mathematical Society
Providence, RI, USA, 2011.
[34] A. G. Myasnikov and A. Ushakov. Random subgroups and analysis of the length-based and
quotient attacks. Journal of Mathematical Cryptology 2(1), pages 29–61, 2008.
[35] W. Nickel. Matrix representations for torsion-free nilpotent groups by Deep Thought. J.
Algebra, 300(1):376–383, 2006.
[36] V. Remeslennikov. Conjugacy in polycyclic groups. Algebra and Logic, 8(6):404–411, 1969.
[37] V. Roman’kov. The twisted conjugacy problem for endomorphisms of polycyclic groups. Journal of Group Theory, 13(3):355–364, 2010.
[38] D. Segal. Decidable properties of polycyclic groups. Proc. London Math. Soc, 3:61–497, 1990.
[39] P. Shor. Algorithms for quantum computation: Discrete logarithms and factoring. In Foundations of Computer Science, 1994 Proceedings., 35th Annual Symposium on, pages 124–134.
IEEE, 1994.
[40] V. Shpilrain. Search and witness problems in group theory. Groups–Complexity–Cryptology,
2(2):231–246, 2010.
[41] V. Shpilrain and A. Ushakov. The conjugacy search problem in public key cryptography:
unnecessary and insufficient. Applicable Algebra in Engineering, Communication and Computing, 17(3-4):285–289, 2006.
[42] V. Shpilrain and A. Ushakov. An authentication scheme based on the twisted conjugacy
problem. In Applied Cryptography and Network Security, pages 366–372. Springer, 2008.
[43] V. Shpilrain and G. Zapata. Using the subgroup membership search problem in public key
cryptography. Contemporary Mathematics, 418:169, 2006.
[44] B. Tsaban. Polynomial-time solutions of computational problems in noncommutative-algebraic cryptography. Journal of Cryptology, 28:601–622, 2015.
[45] B. Wehrfritz. Two remarks on polycyclic groups. Bulletin of the London Mathematical Society, 26(6):543–548, 1994.
[46] J. Wolf. Growth of finitely generated solvable groups and curvature of Riemannian manifolds.
Journal of Differential Geometry, pages 421–446, 1968.
Jonathan Gryak, CUNY Graduate Center, PhD Program in Computer Science, City
University of New York
E-mail address: [email protected]
Delaram Kahrobaei, CUNY Graduate Center, PhD Program in Computer Science
and NYCCT, Mathematics Department, City University of New York
E-mail address: [email protected]
| 4 |
Path Integral Networks:
End-to-End Differentiable Optimal Control
arXiv:1706.09597v1 [cs.AI] 29 Jun 2017
Masashi Okada, Takenobu Aoshima
AI Solutions Center, Panasonic Corporation.
{okada.masashi001,aoshima.takenobu}@jp.panasonic.com
Luca Rigazio
Panasonic Silicon Valley Laboratory,
Panasonic R&D Company of America
[email protected]
Abstract: In this paper, we introduce Path Integral Networks (PI-Net), a recurrent network representation of the Path Integral optimal control algorithm. The
network includes both system dynamics and cost models, used for optimal control
based planning. PI-Net is fully differentiable, learning both dynamics and cost
models end-to-end by back-propagation and stochastic gradient descent. Because
of this, PI-Net can learn to plan. PI-Net has several advantages: it can generalize
to unseen states thanks to planning, it can be applied to continuous control tasks,
and it allows for a wide variety of learning schemes, including imitation and reinforcement learning. Preliminary experimental results show that PI-Net, trained by
imitation learning, can mimic control demonstrations for two simulated problems;
a linear system and a pendulum swing-up problem. We also show that PI-Net is
able to learn dynamics and cost models latent in the demonstrations.
Keywords: Path Integral Optimal Control, Imitation Learning
1
Introduction
Recently, deep architectures such as convolutional neural networks have been successfully applied to
difficult control tasks such as autonomous driving [2], robotic manipulation [13] and playing games
[15]. In these settings, a deep neural network is typically trained with reinforcement learning or imitation learning to represent a control policy which maps input states to control sequences. However,
as already discussed in [5, 21], the resulting networks and encoded policies are inherently reactive,
thus unable to execute planning to decide following actions, which may explain poor generalization
to new or unseen environments. Conversely, optimal control algorithms utilize specified models of
system dynamics and a cost function to predict future states and future cost values. This allows to
compute control sequences that minimize expected cost. Stated differently, optimal control executes
planning for decision making to provide better generalization.
The main practical challenge of optimal control is specifying system dynamics and cost models.
Model-based reinforcement learning [19, 7] can be used to estimate system dynamics by interacting with the environment. However in many robotic applications, accurate system identification is
difficult. Furthermore, predefined cost models accurately describing controller goals are required.
Inverse optimal control or inverse reinforcement learning estimates cost models from human demonstrations [17, 1, 30], but require perfect knowledge of system dynamics. Other inverse reinforcement
learning methods such as [3, 11, 8] do not require system dynamics perfect knowledge, however,
they limit the policy or cost model to the class of time-varying linear functions.
In this paper, we propose a new approach to deal with these limitations. The key observation is
that control sequences resulting from a specific optimal control algorithm, the path integral control
algorithm [27, 26], are differentiable with respect to all of the controller internal parameters. The
controller itself can thus be represented by a special kind of recurrent network, which we call path
integral network (PI-Net). The entire network, which includes dynamics and cost models, can then
be trained end-to-end using standard back-propagation and stochastic gradient descent with fully
specified or approximated cost models and system dynamics. After training, the network will then
execute planning by effectively running path integral control utilizing the learned system dynamics
and cost model. Furthermore, the effect of modeling errors in learned dynamics can be mitigated by
end-to-end training because cost model could be trained to compensate the errors.
We demonstrate the effectiveness of PI-Net by training the network to imitate optimal controllers
of two control tasks: linear system control and pendulum swing-up task. We also demonstrate that
dynamics and cost models, latent in demonstrations, can be adequately extracted through imitation
learning.
2
Path Integral Optimal Control
Path integral control provides a framework for stochastic optimal control based on Monte-Carlo
simulation of multiple trajectories [12]. This framework has generally been applied to policy improvements for parameterized policies such as dynamic movement primitives [23, 11]. Meanwhile
in this paper, we focus on a state-of-the-art path-integral optimal control algorithm [27, 26] developed for model predictive control (MPC; a.k.a. receding horizon control). In the rest of this section,
we briefly review this path integral optimal control algorithm.
Let x_{t_i} ∈ R^n denote the state of a dynamical system at discrete time t_i, and u_{t_i} ∈ R^m a control input for the system. This paper supposes that the dynamics take the following form:

    x_{t_{i+1}} = f(x_{t_i}, u_{t_i} + δu_{t_i}),                                                    (1)

where f : R^n × R^m → R^n is a dynamics function and δu_{t_i} ∈ R^m is a Gaussian noise vector with deviation σ. The stochastic optimal control problem here is to find the optimal control sequence {u^∗_{t_i}}_{i=0}^{N-1} which minimizes the expected value of the trajectory cost function S(τ_{t_0}):

    J = E[S(τ_{t_0})] = E[ φ(x_{t_N}) + Σ_{i=0}^{N-1} ( q(x_{t_i}) + (1/2) u_{t_i}^T R u_{t_i} ) ],   (2)

where E[·] denotes an expectation value with respect to the trajectories τ_{t_0} = {x_{t_0}, u_{t_0}, x_{t_1}, ..., x_{t_{N-1}}} generated by Eq. (1). φ : R^n → R and q : R^n → R are the terminal and running cost, respectively; they are arbitrary state-dependent functions. R ∈ R^{m×m} is a positive definite weight matrix of the quadratic control cost. In [27], a path integral algorithm has been derived to solve this optimization problem, which iteratively improves {u_{t_i}} to {u^∗_{t_i}} by the following update law:

    u^∗_{t_i} ← u_{t_i} + E[ exp(−S̃(τ_{t_i})/λ) · δu_{t_i} ] / E[ exp(−S̃(τ_{t_i})/λ) ],             (3)

where S̃(τ_{t_i}) is a modified trajectory cost function:

    S̃(τ_{t_i}) = φ(x_{t_N}) + Σ_{j=i}^{N-1} q̃(x_{t_j}, u_{t_j}, δu_{t_j}),                          (4)

    q̃(x, u, δu) = q(x) + (1/2) u^T R u + ((1 − ν^{-1})/2) δu^T R δu + u^T R δu.                      (5)

Eq. (3) can be implemented on digital computers by approximating the expectation values with the Monte Carlo method as shown in Alg. 1.

Different from other general optimal control algorithms, such as the iterative linear quadratic regulator (iLQR) [24], path integral optimal control requires neither a first- or second-order approximation of the dynamics nor a quadratic approximation of the cost model, naturally allowing for non-linear system dynamics and cost models. This flexibility allows us to use general function approximators, such as neural networks, to represent dynamics and cost models in the most general possible form.
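As a concrete illustration, the following is a minimal NumPy sketch of one Monte-Carlo iteration of the update law in Eq. (3), following the structure of Alg. 1. The function and parameter names are placeholders, and the per-timestep baseline subtraction is only a standard numerical-stability trick, not part of the algorithm as stated.

```python
import numpy as np

def path_integral_update(x0, U, f, q, phi, R, sigma, lam, nu, K):
    """One Monte-Carlo iteration of the update law in Eq. (3) (cf. Alg. 1).

    x0: (n,) state; U: (N, m) nominal controls; f(x, u) -> next state;
    q(x), phi(x): running/terminal state costs; R: (m, m) control weight;
    sigma, lam, nu, K: noise std., temperature, noise scaling, sample count.
    """
    N, m = U.shape
    dU = sigma * np.random.randn(K, N, m)               # delta u^{(k)}_{t_i}
    q_step = np.zeros((K, N + 1))
    for k in range(K):
        x = np.array(x0, dtype=float)
        for i in range(N):
            u, du = U[i], dU[k, i]
            # modified running cost q~ of Eq. (5)
            q_step[k, i] = (q(x) + 0.5 * u @ R @ u
                            + 0.5 * (1.0 - 1.0 / nu) * du @ R @ du
                            + u @ R @ du)
            x = f(x, u + du)                            # dynamics of Eq. (1)
        q_step[k, N] = phi(x)                           # terminal cost
    # cost-to-go S~(tau_{t_i}) of Eq. (4): sum of step costs from i to N
    S = np.cumsum(q_step[:, ::-1], axis=1)[:, ::-1][:, :N]
    S = S - S.min(axis=0)                               # baseline for numerical stability
    w = np.exp(-S / lam)
    w /= w.sum(axis=0, keepdims=True)
    return U + np.einsum('ki,kim->im', w, dU)           # Eq. (3)
```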
[Figure 1: Architecture of PI-Net: (a) Path Integral Network (top level), (b) PI-Net Kernel, (c) Monte Carlo Simulator, (d) Control Sequence Updater. Labels with ℓ indicate corresponding line numbers in Alg. 1. Block arrows in (c, d) indicate multiple signal flow with respect to the K trajectories.]
3
Path Integral Networks
3.1
Architecture
We illustrate the architecture of PI-Net in Figs. 1 (a)–(d). The architecture encodes Alg. 1 as a fully differentiable recurrent network representation. Namely, the forward pass of this network completely imitates the iterative execution of Alg. 1.

The top-level architecture of PI-Net is illustrated in Fig. 1 (a). This network processes the input current state x_{t_0} and the initial control sequence {u_{t_i}} to output a control sequence {u^∗_{t_i}} improved by the path integral algorithm. In order to execute the algorithm, a dynamics model f and cost models q̃, φ are also given and embedded into the network. We suppose f, q̃, and φ are respectively parameterized by α, (β, R), and γ, which are what we intend to train. We remark that the PI-Net architecture allows the use of both approximated parameterized models (e.g., neural networks) and explicit models for the system dynamics and the cost models. In the network, the PI-Net Kernel module is recurrently connected, representing the iterative execution of Alg. 1. The number of recurrences U is set to be sufficiently large for convergence.

Algorithm 1 Path Integral Optimal Control
input: K, N: number of trajectories and timesteps; x_{t_0}: state; {u_{t_i}}: initial control sequence; {δu^{(k)}_{t_i}}: Gaussian noise; f, q, φ, R: dynamics and cost models; λ, ν: hyper-parameters
output: {u^∗_{t_i}}: improved control sequence
1: for k ← 0 to K − 1 do
2:   x^{(k)}_{t_0} ← x_{t_0}
3:   for i ← 0 to N − 1 do
4:     q^{(k)}_{t_i} ← q̃(x^{(k)}_{t_i}, u_{t_i}, δu^{(k)}_{t_i})
5:     x^{(k)}_{t_{i+1}} ← f(x^{(k)}_{t_i}, u_{t_i} + δu^{(k)}_{t_i})
6:   end for
7:   q^{(k)}_{t_N} ← φ(x^{(k)}_{t_N})
8: end for
9: for k ← 0 to K − 1 do
10:   for i ← 0 to N do
11:     S̃^{(k)}_{τ_{t_i}} ← Σ_{j=i}^{N} q^{(k)}_{t_j}
12:   end for
13: end for
14: for i ← 0 to N − 1 do
15:   u^∗_{t_i} ← u_{t_i} + [Σ_{k=0}^{K−1} exp(−S̃^{(k)}_{τ_{t_i}}/λ) · δu^{(k)}_{t_i}] / [Σ_{k=0}^{K−1} exp(−S̃^{(k)}_{τ_{t_i}}/λ)]
16: end for

The PI-Net Kernel in Fig. 1 (b) contains three modules: the Noise Generator, the Monte-Carlo Simulator, and the Control Sequence Updater. First, the Noise Generator procures K × N Gaussian noise vectors δu^{(k)}_{t_i} sampled from N(0, σ). The noise vectors are then input to the Monte-Carlo Simulator along with x_{t_0} and {u_{t_i}}, which estimates the running- and terminal-cost values (denoted q^{(k)}_{t_i}) of K different trajectories. Finally, the estimated cost values are fed into the Control Sequence Updater to improve the initial control sequence.

The Monte Carlo Simulator in Fig. 1 (c) contains the system dynamics f and the cost models q̃, φ, which are responsible for predicting future states and costs over K trajectories. The simulations of the K trajectories are conducted in parallel, and the prediction sequence over the time horizon is realized by network recurrence. In the i-th iteration (i ∈ {0, 1, ..., N − 1}), the current K states x^{(k)}_{t_i} and the perturbed controls u_{t_i} + δu^{(k)}_{t_i} are input to the dynamics model f to predict the next K states x^{(k)}_{t_{i+1}}, which are fed back for the next iteration. In the same iteration, these inputs are also fed into the cost model q̃ to compute running-cost values. Only in the last iteration (i = N − 1), the predicted terminal states x^{(k)}_{t_N} are input to the cost model φ to compute terminal-cost values. This module is a recurrent network within a recurrent network, making the entire PI-Net a nested, or double-looped, recurrent network.

The Control Sequence Updater in Fig. 1 (d) updates the input control sequence based on the equations appearing in lines 9–16 of Alg. 1. Since all equations in these loops can be computed in parallel, no recurrence is needed for this module.

Footnote 1: In the rest of this paper, {u_{t_i}} denotes the sequence {u_{t_i}}_{i=0}^{N−1}.
Footnote 2: β is a parameter of the state-dependent cost model q; see Eqs. (2) and (5).
3.2
Learning schemes
We remark that all the nodes in the computational graph of PI-Net are differentiable. We can therefore employ the chain rule to differentiate the network end-to-end, concluding that PI-Net is fully
differentiable. If an objective function with respect to the network control output, denoted as Lctrl , is
defined, then we can differentiate the function with the internal parameters (α, β, R, γ). Therefore,
we can tune the parameters by optimizing the objective function with gradient descent methods. In
other words, we can train internal dynamics f and/or cost models q, φ, R end-to-end through the optimization. For the optimization, we can re-use all the standard Deep Learning machinery, including
back-propagation and stochastic gradient descent, and a variety of Deep Learning frameworks. We
implemented PI-Net with TensorFlow [6]. Interestingly, all elemental operations of PI-Net can be
described as TensorFlow nodes, allowing to utilize automatic differentiation.
A general use case of PI-Net is imitation learning to learn dynamics and cost models latent in experts’ demonstrations. Let us consider an open loop control setting and suppose that a dataset
D 3 (x?t0 , {u?ti }) is available; x?t0 is a state observation and {u?ti } is a corresponding control sequence generated by an expert. In this case, we can supervisedly train the network by optimizing
Lctrl , i.e., the errors between the expert demonstration {u?ti } and the network output {u∗ti }. For
closed loop infinite time horizon control setting, the network can be trained as an MPC controller.
If we have a trajectory by an expert {x?t0 , u?t0 , x?t1 , · · · , }, we can construct a dataset D 3 (x?ti , u?ti )
and then optimize the estimation errors between the expert control u?ti and the first value of output
control sequence output. If sparse reward function is available, reinforcement learning could be
introduced to train PI-Net. The objective function here is expected return which can be optimized
by policy gradient methods such as REINFORCE [28].
Loss functions In addition to Lctrl , we can append other loss functions to make training faster and
more stable. In an MPC scenario, we can construct a dataset in another form D 3 (x?ti , u?ti , x?ti+1 ).
In this case, a loss function Ldyn with respect to internal dynamics output can be introduced;
i.e., state prediction errors between f (x?ti , u?ti ) and x?ti+1 . Furthermore, we can employ loss functions Lcost regarding cost models. In many cases on control domains, we know goal states xg in
prior and we can assume cost models q, φ have optimum points at xg . Therefore, loss functions,
which penalize conditions of q(xg ) > q(x), can be employed to help the cost models have such
property. This is a useful approach when we utilize highly expressive approximators (e.g., neural networks) to cost models. In the later experiments, mean squared error (MSE) was used for
Lctrl,dyn . Lcost was defined as ϕ(q(xg ) − q(x)), where ϕ is the ramp function. The sum of these
losses can be jointly optimized in a single optimization loop. Of course, dynamics model can be
pre-trained independently by optimizing Ldyn .
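As a schematic of how these terms combine, the following sketch computes the three losses described above with NumPy. The argument names are placeholders and this is not the paper's implementation (which is built in TensorFlow); q is assumed to accept a batch of states.

```python
import numpy as np

def imitation_losses(u_out, u_expert, x_pred, x_next, q, x_batch, x_goal):
    """Sketch of the loss terms: MSE for L_ctrl and L_dyn, ramp penalty for L_cost."""
    L_ctrl = np.mean((u_out - u_expert) ** 2)                   # match expert controls
    L_dyn = np.mean((x_pred - x_next) ** 2)                     # dynamics prediction error
    L_cost = np.mean(np.maximum(q(x_goal) - q(x_batch), 0.0))   # penalize q(x_goal) > q(x)
    return L_ctrl, L_dyn, L_cost
```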
3.3
Discussion of computational complexity
In order to conduct back-propagation, we must store all values fed into computational graph nodes
during the preceding forward pass. Let B be the mini-batch size; then the internal dynamics f and running-cost model q̃ in PI-Net are evaluated U × N × K × B times during the forward pass; this value can
grow very fast, making optimization memory hungry. For instance, the experiments of Sect. 5 used
over 100GB of RAM, forcing us to train on CPU instead of GPU.
The complexity can be alleviated by a data-parallel approach, in which a mini-batch is divided and processed in parallel on distributed computers, reducing the batch size B processed on a single computer. Another possible approach is to reduce U, the recurrence number of
the PI-Net Kernel module. In the experiment, initial control sequence is filled with a constant value
(i.e., zero) and U is set to be large enough (e.g., U = 200). In our preliminary experiment, we
found that inputting desired output (i.e., demonstrations) as initial sequences and training PI-Net
with small U did not work; the trained PI-Net just passed through the initial sequence, resulting in
poor generalization performance. In the future, a scheme to determine good initial sequences, which
reduces U while achieving good generalization, must be established.
Note that the memory problem arises only during the training phase, because the network does not need to store input values during the control phase; in addition, the mini-batch size is obviously B = 1 in that phase. Further, in MPC scenarios we can employ warm-start settings to reduce U, under which output control sequences are re-used as initial sequences at the next timestep. For instance, in [26, 27] real-time path integral control has been realized by utilizing GPU parallelization.
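To give a concrete sense of scale, the following short calculation uses the recurrence and sample counts reported later in the appendix for the pendulum task (U = 200, N = 30, K = 100); the mini-batch size B is a hypothetical value.

```python
U, N, K = 200, 30, 100        # values used for the pendulum experiment
B = 16                        # hypothetical mini-batch size
per_sample = U * N * K        # 600,000 evaluations of f and q per sample per forward pass
print(per_sample, per_sample * B)
```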
4
Related Work
Conceptually PI-Net is inspired by the value iteration network (VIN) [21], a differentiable network
representation of the value iteration algorithm designed to train internal state-transition and reward
models end-to-end. The main difference between VIN and PI-Net lies in the underlying algorithms:
the value iteration, generally used for discrete Markov Decision Process (MDP), or path integral optimal control, which allows for continuous control. In [21], VIN was applied to 2D navigation task,
where 2D space was discretized to grid map and a reward function was defined on the discretized
map. In addition, action space was defined as eight directions to forward. The experiment showed
that adequately estimated reward map can be utilized to navigate an agent to goal states by not only
discrete control but also continuous control. Let us consider a more complex 2D navigation task
on continuous control, in which velocity must be taken into reward function3 . In order to design
such the reward function with VIN, 4D state space (position and velocity) and 2D control space
(vertical and horizontal accelerations) must be discretized. This kind of discretization could cause
combinatorial explosion especially for higher dimensional tasks.
A generally used optimal controller, the linear quadratic regulator (LQR), is also differentiable, and Ref. [22] employs this insight to re-shape original cost models to improve short-term MPC performance. The main advantage of path integral control over (iterative) LQR is that it does not require a linear approximation of the non-linear dynamics or a quadratic approximation of the cost model. In order to
differentiate iLQR with non-linear models by back-propagation, we must iteratively differentiate
the functions during preceding forward pass, making the backward pass very complicated.
Policy Improvement with Path Integrals [23] and Inverse Path Integral Inverse Reinforcement Learning [11] are policy search approaches based on the path integral control framework, which train a
parameterized control policy via reinforcement learning and imitation learning, respectively. These
methods have succeeded in training policies for complex robotics tasks; however, they assume a trajectory-centric policy representation such as dynamic movement primitives [9], and such a policy is less generalizable to unseen settings (e.g., different initial states).
Since PI-Net is a policy representation of optimal control, trainable end-to-end by standard backpropagation, a wide variety of learning to control approaches may be applied, including:
Imitation learning: DAGGER [18] and PLATO [10].
Reinforcement learning: Deep Deterministic Policy Gradient [14], A3C [16], Trust Region Policy Optimization [20], Guided Policy Search [13], and Path Integral Guided Policy Search [4].
Footnote 3: Such as a task to control a mass point to trace a fixed track while forwarding it as fast as possible.
[Figure 2: Results in the linear system experiment: (a) loss convergence (training and test MSE over epochs); (b) state and cost trajectories predicted by the trained dynamics and cost models; (c) state and cost trajectories predicted by the teacher models.]
5
Experiments
We conducted experiments to validate the viability of PI-Net. These experiments are meant to test if
the network can effectively learn policies in an imitation learning setting. We did this by supplying
demonstrations generated by general optimal control algorithms, with known dynamics and cost
models (termed teacher models in the rest of this paper). Ultimately, in real application scenarios,
demonstrations may be provided by human experts.
5.1
Linear system
The objective of this experiment is to validate that PI-Net is trainable and it can jointly learn dynamics and cost models latent in demonstrations.
Demonstrations The teacher models in this experiment were a linear dynamics, f^⋆(x, u) = F^⋆ x + G^⋆ u, and a quadratic cost model, q^⋆(x) = φ^⋆(x) = x^T Q^⋆ x / 2, where x ∈ R^4, u ∈ R^2, F^⋆ ∈ R^{4×4}, G^⋆ ∈ R^{4×2}, Q^⋆ ∈ R^{4×4}, R^⋆ ∈ R^{2×2}. Starting from this fixed set of parameters, we produced training and test datasets D_train, D_test, which take the form D ∋ (x^⋆_{t_0}, {u^⋆_{t_i}}). LQR was used to obtain reference sequences {u^⋆_{t_i}} from randomly generated state vectors x^⋆_{t_0}. The size of each dataset was |D_train| = 950 and |D_test| = 50.
PI-Net settings The internal dynamics and cost models were also of linear and quadratic form, whose
initial parameters were different from the teacher models’. PI-Net was supervisedly trained by
optimizing Lctrl . We did not use Ldyn and Lcost in this experiment.
Results Fig. 2 shows the results of this experiment. Fig. 2 (a) illustrates the loss over training epochs, showing good convergence to a low value. This validates that PI-Net was indeed training well and that the trained policy generalized well to test samples. Fig. 2 (b) exemplifies state and cost trajectories predicted by the trained dynamics and cost models, generated by feeding a certain initial state and the corresponding optimal control sequence into the models; Fig. 2 (c) shows the trajectories produced by the teacher models. The state trajectories in Fig. 2 (b, c) approximate each other, indicating that the internal dynamics model learned to approximate the teacher dynamics. It is well known that different cost functions can result in the same controls [30], and indeed the cost trajectories in Fig. 2 (b, c) do not appear similar. However, this is not a problem as long as the learned controller generalizes well to unseen state inputs.
5.2
Pendulum swing up
Next, we tried to imitate demonstrations generated from non-linear dynamics and non-quadratic cost
models while validating the PI-Net applicability to MPC tasks. We also compared PI-Net with VIN.
Demonstrations The experiment focuses on the classical inverted pendulum swing-up task [25]. The teacher cost models were q^⋆(θ, θ̇) = φ^⋆(θ, θ̇) = (1 + cos θ)^2 + θ̇^2, R^⋆ = 5, where θ is the pendulum angle and θ̇ is the angular velocity. Under these model assumptions, we first conducted 40 s MPC simulations with iLQR to generate trajectories. We generated 50 trajectories in total and then produced
Table 1: Training and simulation results on the pendulum swing-up task. The trajectory cost shown here is the average over 10 trajectories.

Controllers      | MSE for Dtrain | MSE for Dtest | Success Rate | Traj. Cost S(τ) | # trainable params
Expert           | N/A            | N/A           | 100%         | 404.63          | N/A
Trained PI-Net   | 2.22 × 10^-3   | 1.65 × 10^-3  | 100%         | 429.69          | 242
Freezed PI-Net   | 1.91 × 10^-3   | 5.73 × 10^-3  | 100%         | 982.22          | 49
VIN (LCN)        | 6.44 × 10^-3   | 6.89 × 10^-3  | 0%           | 2409.29         | 330,768
VIN (CNN)        | 4.45 × 10^-3   | 4.72 × 10^-3  | 0%           | 1280.62         | 1,488
D_train ∋ (x^⋆_{t_i}, u^⋆_{t_i}, x^⋆_{t_{i+1}}), where the control u is the torque used to actuate the pendulum. We also generated 10 trajectories for D_test.
PI-Net settings We used a more general modeling scheme in which the internal dynamics and cost models were represented by neural networks, both with one hidden layer. The number of hidden nodes was 12 for the dynamics model and 24 for the cost model. First, we pre-trained the internal dynamics independently by optimizing L_dyn, and then the entire network was trained by optimizing L_ctrl + 10^{-3} · L_cost. In this final optimization, the dynamics model was frozen to focus on cost learning. The goal states used to define L_cost were x_g = (θ, θ̇) = (±π, 0). We prepared a model variant, termed freezed PI-Net, whose internal dynamics was the above-mentioned pre-trained one and whose cost model was the teacher model as is. The freezed PI-Net was not trained end-to-end.
VIN settings The VIN was designed to have 2D inputs for the continuous states and a 1D output for the continuous control. In order to define the reward map embedded in the VIN, we discretized the 2D continuous state into a 31 × 31 grid map. This map is cylindrical because the angle axis is cyclic. We also discretized the 1D control space into 7 actions, each of which corresponds to a different torque. We denote the reward map as R(s, a) ∈ R^{31×31×7}, where s and a respectively denote the discrete state and action. The reward map can be decomposed as the sum of R1(s) ∈ R^{31×31×1} and R2(a) ∈ R^{1×1×7}. The original VIN employed convolutional neural networks (CNNs) to represent state-transition kernels. However, this representation implies that the state-transition probability P(s′|s, a) can be simplified to P(s′ − s|a) (see footnote 4). Since this supposition is invalid for the pendulum system (see footnote 5), we alternatively employed locally connected neural networks (LCNs; i.e., CNNs without weight sharing) [29]. We also prepared a CNN-based VIN for comparison. The embedded reward maps and transition kernels were trained end-to-end by optimizing L_ctrl.
Results The results of training and MPC simulations with trained controllers are summarized in
Table 1. In the simulations, we observed success rates of task completion (defined as keeping the
pendulum standing up more than 5s) and trajectory cost S(τ ) calculated by the teacher cost. For
each network, ten 60-second simulations were conducted starting from different initial states. In
the table, freezed PI-Net showed less generalization performance although it was equipped with the
teacher cost. This degradation might result from the modeling errors of the learned dynamics. On
the other hand, trained PI-Net achieved the best performance both on generalization and control,
suggesting that an adequate cost model was trained to imitate the demonstrations while compensating for the dynamics errors. Fig. 3 visualizes the cost models; the cost map of the trained PI-Net resembles the teacher model well. The VIN failures most likely result from the difficulty of modeling continuous control tasks. A fine discretization of the state and action spaces would be necessary for a good approximation; however, this leads to an explosion in the number of parameters to be trained, making optimization difficult. A CNN representation of the transition kernels does not work because it is a very rough approximation for most control systems. Therefore, one can conclude that the use of PI-Net is more reasonable for continuous control, given the VIN modeling difficulty.
Footnote 4: Under this supposition, the probability of a relative state transition, with respect to a certain action, is invariant.
Footnote 5: See the time-evolution equation of this system, θ̈ = −sin θ + k·u (k: model parameter); the relative time variation of the angular velocity θ̇, with respect to a certain torque u, varies with the pendulum angle θ.
[Figure 3: Cost/Reward maps over the pendulum angle (rad) and angular velocity (rad/s). From left to right: the teacher cost q^⋆, the neural cost q trained by PI-Net, and the reward maps R1 trained by the LCN- and CNN-based VINs. Dense color indicates low cost (or high reward).]
6
Conclusion
In this paper, we introduced path integral networks, a fully differentiable end-to-end trainable representation of the path integral optimal control algorithm, which allows for optimal continuous control.
Because PI-Net is fully differentiable, it can rely on powerful Deep Learning machinery to efficiently
estimate in high dimensional parameters spaces from large data-sets. To the best of our knowledge,
PI-Net is the first end-to-end trainable differentiable optimal controller directly applicable to continuous control domains.
PI-Net architecture is highly flexible, allowing to specify system dynamics and cost models in an
explicit way, by analytic models, or in an approximate way by deep neural networks. Parameters
are then jointly estimated end-to-end, in a variety of settings, including imitation learning and reinforcement learning. This may be very useful for non-linear continuous control scenarios, such as
the “pixel to torques” scenario, and in situations where it’s difficult to fully specify system dynamics
or cost models. We postulate this architecture may allow to train approximate system dynamics and
cost models in such complex scenarios while still carrying over the advantages of optimal control
from the underlying path integral optimal control. We show promising initial results in an imitation
learning setting, comparing against optimal control algorithms with linear and non-linear system
dynamics. Future work includes a detailed comparison to other baselines, including other learn-to-control methods, as well as experiments in high-dimensional settings. To tackle high-dimensional
problems and to accelerate convergence we plan to combine PI-Net with other powerful methods,
such as guided cost learning [8] and trust region policy optimization [20].
References
[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In ICML,
2004.
[2] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, et al. End to end learning for
self-driving cars. arXiv:1604.07316, 2016.
[3] A. Boularias, J. Kober, and J. Peters. Relative entropy inverse reinforcement learning. In
AISTATS, 2011.
[4] Y. Chebotar, M. Kalakrishnan, A. Yahya, A. Li, S. Schaal, and S. Levine. Path integral guided
policy search. arXiv:1610.00529, 2016.
[5] C. Chen, A. Seff, A. Kornhauser, and J. Xiao. DeepDriving: Learning affordance for direct
perception in autonomous driving. In ICCV, 2015.
[6] J. Dean and R. Monga. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. Google Research Whitepaper, http://research.google.com/pubs/archive/45166.pdf, 2015.
[7] M. Deisenroth and C. E. Rasmussen. PILCO: A model-based and data-efficient approach to
policy search. In ICML, 2011.
[8] C. Finn, S. Levine, and P. Abbeel. Guided cost learning: Deep inverse optimal control via
policy optimization. In ICML, volume 48, 2016.
[9] A. J. Ijspeert, J. Nakanishi, and S. Schaal. Movement imitation with nonlinear dynamical
systems in humanoid robots. In ICRA, volume 2, 2002.
[10] G. Kahn, T. Zhang, S. Levine, and P. Abbeel. PLATO: Policy learning using adaptive trajectory
optimization. In ICRA, 2017.
[11] M. Kalakrishnan, P. Pastor, L. Righetti, and S. Schaal. Learning objective functions for manipulation. In ICRA, 2013.
[12] H. J. Kappen. Linear theory for control of nonlinear stochastic systems. Phys. Rev. Lett., 95
(20), 2005.
[13] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies.
JMLR, 17(39), 2016.
[14] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra.
Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015.
[15] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, et al. Human-level control through
deep reinforcement learning. Nature, 518(7540), 2015.
[16] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, et al. Asynchronous methods for
deep reinforcement learning. In ICML, 2016.
[17] A. Y. Ng, D. Harada, and S. Russell. Policy invariance under reward transformations: Theory
and application to reward shaping. In ICML, volume 99, 1999.
[18] S. Ross, G. J. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, volume 1, 2011.
[19] J. Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in
reactive environments. In IJCNN, 1990.
[20] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015.
[21] A. Tamar, S. Levine, P. Abbeel, Y. Wu, and G. Thomas. Value iteration networks. In NIPS,
2016.
[22] A. Tamar, G. Thomas, T. Zhang, S. Levine, and P. Abbeel. Learning from the hindsight plan–
episodic MPC improvement. arXiv:1609.09001, 2016.
[23] E. Theodorou, J. Buchli, and S. Schaal. A generalized path integral control approach to reinforcement learning. JMLR, 11, 2010.
[24] E. Todorov and W. Li. A generalized iterative LQG method for locally-optimal feedback
control of constrained nonlinear stochastic systems. In ACC, 2005.
[25] H. O. Wang, K. Tanaka, and M. F. Griffin. An approach to fuzzy control of nonlinear systems:
Stability and design issues. IEEE Trans. Fuzzy Syst., 4(1), 1996.
[26] G. Williams, P. Drews, B. Goldfain, J. M. Rehg, and E. A. Theodorou. Aggressive driving with
model predictive path integral control. In ICRA, 2016.
[27] G. Williams, A. Aldrich, and E. A. Theodorou. Model predictive path integral control: From
theory to parallel computation. J. Guid. Control Dyn., 2017.
[28] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4), 1992.
[29] R. Yentis and M. Zaghloul. Vlsi implementation of locally connected neural network for
solving partial differential equations. IEEE Trans. Circuits Syst. I, Fundam. Theory and Appl.,
43(8), 1996.
[30] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI Conference on Artificial Intelligence, volume 8, 2008.
A
Supplements of Experiments
A.1
Common settings
We used RMSProp for optimization. The initial learning rate was set to 10^{-3} and the rate was decayed by a factor of 2 when no improvement of the loss function was observed for five epochs. We set the hyper-parameters appearing in the path integral algorithm [27] to λ = 0.01, ν = 1500.
A.2
Linear System
Demonstrations The dynamics parameters F^⋆, G^⋆ were randomly determined by the following equations:

    F^⋆ = exp[Δt (A − A^T)],  with every entry a of A sampled from N(0, 1),                      (6)

    G^⋆ = [G_c ; O_{2,2}] (G_c stacked above the 2×2 zero block),  G_c ∈ R^{2×2},  with every entry g of G_c sampled from N(0, Δt),   (7)

where exp[·] denotes the matrix exponential and the time step size Δt is 0.01. The cost parameters Q^⋆, R^⋆ were deterministically given by

    Q^⋆ = I_{4,4} · Δt,  R^⋆ = I_{2,2} · Δt.                                                     (8)

The elements of a state vector x^⋆_{t_0} were sampled from N(0, 1) and the vector was then input to LQR to generate a control sequence {u^⋆_{t_i}} of length N = 200.
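A short NumPy/SciPy sketch of the teacher-model construction in Eqs. (6)–(8) follows. Whether N(0, Δt) denotes a standard deviation or a variance is not stated above; the standard-deviation reading is assumed here, and the random seed is arbitrary.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dt = 0.01

A = rng.standard_normal((4, 4))
F_star = expm(dt * (A - A.T))               # Eq. (6): exponential of a skew-symmetric matrix
G_c = rng.normal(0.0, dt, size=(2, 2))      # Eq. (7); N(0, dt) read as std.-dev. dt (assumption)
G_star = np.vstack([G_c, np.zeros((2, 2))])
Q_star = dt * np.eye(4)                     # Eq. (8)
R_star = dt * np.eye(2)
```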
PI-Net settings The internal dynamics parameters were initialized in the same manner as Eqs. (6, 7). As for the internal cost parameters, all values were sampled from $\mathcal{N}(0, \Delta t)$. The number of trajectories was K = 100 and the number of PI-Net kernel recurrences was U = 200. We used the standard deviation σ = 0.2 to generate the Gaussian noise δu.
Results Fig. 4 exemplifies control sequences for an unseen state input estimated by the trained PI-Net and by LQR.
[Figure 4: Control sequences. The two panels show u1 and u2 over discrete time (0 to 200), comparing Path Integral and LQR.]
A.3 Pendulum Swing-Up
Demonstrations We supposed that the system is governed by the time evolution $\ddot\theta = -\sin(\theta) + k \cdot u$, where k was set to 0.5. We discretized this continuous dynamics by using the fourth-order Runge-Kutta method, where the time step size was ∆t = 0.1.
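For illustration, the dynamics and its RK4 discretization can be sketched as follows (a minimal NumPy sketch of ours, not the authors' implementation):

```python
# Pendulum dynamics theta_ddot = -sin(theta) + k*u with k = 0.5,
# discretized with the classical fourth-order Runge-Kutta method, dt = 0.1.
import numpy as np

K_GAIN, DT = 0.5, 0.1

def f(state, u):
    theta, theta_dot = state
    return np.array([theta_dot, -np.sin(theta) + K_GAIN * u])

def rk4_step(state, u, dt=DT):
    k1 = f(state, u)
    k2 = f(state + 0.5 * dt * k1, u)
    k3 = f(state + 0.5 * dt * k2, u)
    k4 = f(state + dt * k3, u)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```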
MPC simulations were conducted under the following conditions. Initial pendulum angles θ were sampled uniformly at random from [−π, π], and initial angular velocities were sampled uniformly from [−1, 1]. iLQR output a control sequence of length N = 30, and only the first value in the sequence was used for actuation at each time step.
PI-Net settings A neural dynamics model with one hidden layer (12 hidden nodes) and one output node was used to approximate $\ddot\theta$; $\dot\theta$ and θ were estimated by the Euler method utilizing their time derivatives. The cost model is represented as the squared norm $\|q(\theta, \dot\theta)\|^2$, where q is a neural network with one hidden layer (12 hidden nodes) and 12 output nodes. In both neural networks, the hyperbolic tangent is used as the activation function for hidden nodes. Other PI-Net parameters were σ = 0.005, K = 100, U = 200 and N = 30.
VIN settings The VIN design for the pendulum task is described here. For the value iteration algorithm and the design methodology of VINs, the authors recommend reading Ref. [21]. The reward-propagation equation appearing in the algorithm, i.e., $Q(s, a) \leftarrow \sum_{s'} V(s') P(s'|s, a) + R(s, a)$, was represented by an LCN or CNN layer, where $Q(s, a)$, $V(s)$, and $R(s, a)$ are the action-value map, value map, and reward map, respectively. State-transition kernels $P(s'|s, a)$ were represented by 7 × 7 kernels of the LCN or CNN. This layer processes the input value map and creates a tensor ($\in \mathbb{R}^{31\times 31\times 7}$); this tensor is then summed with the reward map to compute the action-value map. The attention module outputs the selected action-value, i.e., $Q(s, \cdot) \in \mathbb{R}^{7}$, where s corresponds to the input continuous state. The reactive policy was a neural network with one hidden layer of 16 hidden nodes. This network has a 9D input (2D for the state and 7D for the output of the attention module) and a 1D output for the control. The activation function is the rectified linear function. The number of recurrences of the VIN module is 50.
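As a rough illustration of the reward-propagation step just described, the following sketch (NumPy/SciPy code of ours; the convolution routine, the placeholder random maps and the kernel normalization are assumptions) applies 7 × 7 transition kernels to a 31 × 31 value map, one kernel per action, and adds the reward map:

```python
# One VIN reward-propagation step: Q(s, a) <- sum_{s'} V(s') P(s'|s, a) + R(s, a),
# realized as a 7x7 convolution of the value map per action (CNN variant).
import numpy as np
from scipy.signal import convolve2d

H = W = 31                        # spatial size of the value map
A = 7                             # number of discrete actions
V = np.zeros((H, W))              # current value map
R = np.random.randn(H, W, A)      # reward map, one channel per action (placeholder)
P = np.random.rand(7, 7, A)       # transition kernels P(s'|s, a) (placeholder)
P /= P.sum(axis=(0, 1), keepdims=True)   # normalize each kernel to a distribution

Q = np.stack(
    [convolve2d(V, P[:, :, a], mode="same") for a in range(A)], axis=-1
) + R                             # shape (31, 31, 7)
V_next = Q.max(axis=-1)           # value iteration: maximize over actions
```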
LETTERPLACE AND CO-LETTERPLACE IDEALS OF POSETS
arXiv:1501.04523v3 [math.AC] 24 Jul 2016
GUNNAR FLØYSTAD, BJØRN MØLLER GREVE, AND JÜRGEN HERZOG
Abstract. To a natural number n, a finite partially ordered set P and a poset
ideal J in the poset Hom(P, [n]) of isotone maps from P to the chain on n
elements, we associate two monomial ideals, the letterplace ideal L(n, P ; J ) and
the co-letterplace ideal L(P, n; J ). These ideals give a unified understanding of
a number of ideals studied in monomial ideal theory in recent years. By cutting
down these ideals by regular sequences of variable differences we obtain: multichain ideals and generalized Hibi type ideals, initial ideals of determinantal ideals,
strongly stable ideals, d-partite d-uniform ideals, Ferrers ideals, edge ideals of
cointerval d-hypergraphs, and uniform face ideals.
Introduction
Monomial ideals arise as initial ideals of polynomial ideals. For natural classes of
polynomial ideals, like determinantal ideals of generic matrices, of generic symmetric
matrices, and of Pfaffians of skew-symmetric matrices, their Gröbner bases and
initial ideals [36], [26], [5] have been computed. In full generality one has the class
of Borel-fixed ideals (in characteristic 0 these are the strongly stable ideals), which
arise as initial ideals of any polynomial ideal after a generic change of coordinates.
This is a very significant class in computational algebra [2], [19].
Monomial ideals have also grown into an active research area in their own right. In particular one is interested in their resolutions, and in properties like shellability (implying
Cohen-Macaulayness), and linear quotients (implying linear resolution). Classes
that in particular have been studied are generalized Hibi ideals [14], d-uniform hypergraph ideals [33], Ferrers ideals [7], [33], uniform face ideals [6] and [20], and
cointerval d-hypergraph ideals [10].
This article presents a unifying framework for these seemingly varied classes of monomial ideals by introducing the classes of letterplace and co-letterplace ideals associated to a natural number n, a finite partially ordered set P , and a poset
ideal J in the poset Hom(P, [n]) of isotone maps P → [n], where [n] is the chain
on n elements. Many basic results of the abovementioned articles follow from the
general results we give here.
As it turns out, most of the abovementioned classes of ideals are not letterplace
and co-letterplace ideals. Rather they derive from these ideals by dividing out
by a regular sequence of variable differences. The main technical results of the
present paper are criteria for when such a sequence of variable differences is regular, Theorems 2.1, 2.2, 5.6, and 5.12. Thus we get classes of ideals with the
same homological properties as letterplace and co-letterplace ideals. Also we show
Date: July 26, 2016.
2000 Mathematics Subject Classification. Primary: 13F20, 05E40; Secondary: 06A11.
Key words and phrases. Monomial ideal, letterplace ideal, co-letterplace ideal, Alexander dual,
poset, regular sequence, determinantal ideal, strongly stable ideal, Ferrers ideal, hypergraph ideal,
face ideal, multichain ideal, generalized Hibi ideal.
that letterplace and co-letterplace ideals in themselves may not be derived from
other monomial ideals by cutting down by a regular sequence of variable differences, Lemma 2.8. These ideals therefore have the flavor of being "free" objects in the class of monomial ideals. We see this as accounting for many of their nice properties,
see [8] and [16].
In [14] V. Ene, F. Mohammadi, and the third author introduced the classes of generalized Hibi ideals, associated to natural numbers s ≤ n and a finite partially ordered set P , and investigated these ideals. This article both generalizes this, and provides a heightened understanding of these ideals. There is an extra parameter s
involved here, but we show that these ideals can be understood as the letterplace
ideal associated to the natural number n and the partially ordered set P ×[n−s+1],
after we divide this ideal out by a regular sequence of variable differences.
The n’th letterplace ideal associated to n and P is the monomial ideal, written
L(n, P ), generated by all monomials
x1,p1 x2,p2 · · · xn,pn
where p1 ≤ p2 ≤ · · · ≤ pn is a multichain in P . These ideals L(n, P ) were shown in
[14] to be Cohen-Macaulay (CM) ideals (actually shellable) of codimension equal to
the cardinality of P . These ideals are all generated in degree n. If P is the antichain
on d elements, then L(n, P ) is a complete intersection, and defines a quotient ring of multiplicity $n^d$. If P is the chain on d elements, then L(n, P ) is the initial ideal of the ideal of maximal minors of a generic n × (n + d − 1) matrix. The quotient ring has multiplicity $\binom{n+d-1}{d}$. These are respectively the maximal and minimal multiplicity of CM ideals of codimension d generated in degree n. As P varies among posets of
cardinality d we therefore get ideals interpolating between these extremal cases.
Its Alexander dual, the n’th co-letterplace ideal associated to n and P , written
L(P, n), is the ideal generated by all monomials
$$\prod_{p \in P} x_{p, i_p}$$
where p < q in P implies $i_p \le i_q$. This is an ideal with linear quotients [14], and therefore linear resolution.
The Cohen-Macaulay ideals L(n, P ) are all generated in a single degree n. To
obtain CM ideals with varying degrees of generators, we now add an extra layer of
structure. Given a poset ideal J in the poset Hom(P, [n]) of isotone maps P → [n],
we get first more generally the co-letterplace subideal L(P, n; J ) ⊆ L(P, n). This
ideal also has linear quotients, Theorem 5.1. When P is a chain on d elements,
all strongly stable ideals generated in degree d are regular quotients of these co-letterplace ideals, Example 6.1. As P varies we therefore get a substantial generalization of the class of strongly stable ideals generated in a single degree, and with
much the same nice homological behaviour.
The Alexander dual of L(P, n; J ) is denoted by L(n, P ; J ) and is explicitly described in Theorem 5.9. Since the former ideal has linear resolution, by [11] the
latter ideal is Cohen-Macaulay and it contains the letterplace ideal L(n, P ). Even
when P is the chain on d elements, all h-vectors of embedding dimension d of
graded Cohen-Macaulay ideals (in a polynomial ring) may be realized for such
ideals L(P, n; J ). This is therefore a very large class of Cohen-Macaulay ideals.
Dividing out a monomial ring S/I by a difference of variables xa − xb corresponds to setting the variables xa and xb equal in I to obtain a new monomial ideal J. In this article we therefore naturally introduce the notion of separation of variables, Definition 2.7, of a monomial ideal: I is obtained from J by a separation of variables,
if xa − xb is a regular element for S/I. Surprisingly this simple and natural notion
does not seem to have been a topic of study in itself before for monomial ideals, but
see [15]. In particular the behaviour of Alexander duality when dividing out by such
a regular sequence of variable differences is given in Proposition 7.2 and Theorems
2.10 and 7.3.
After the appearance of this article as a preprint, a number of further investigations have been carried out on letterplace and co-letterplace ideals. The article [27] studies more generally ideals L(P, Q) where both P and Q are finite partially ordered sets,
and [23] investigates Alexander duality of such ideals. In [9] resolutions of letterplace
ideals L(n, P ) are studied, in particular their multigraded Betti numbers are computed. [8] gives explicitly the linear resolutions of co-letterplace ideal L(P, n; J ),
thereby generalizing the Eliahou-Kervaire resolution for strongly stable ideals generated in a single degree. It computes the canonical modules of the Stanley-Reisner
rings of letterplace ideals L(n, P ; J ). They have the surprising property of being
multigraded ideals in these Stanley-Reisner rings. A related and remarkable consequence is that the simplicial complexes associated to letterplace ideals L(n, P ; J )
are triangulations of balls. Their boundaries are therefore triangulations of spheres,
and this class of sphere triangulations comprehensively generalizes the class of Bier
spheres [3]. The notion of separation is further investigated in [24] and in [1], which
shows that separation corresponds to a deformation of the monomial ideal, and identifies the deformation directions in the cotangent cohomology it corresponds to. In
[16] deformations of letterplace ideals L(2, P ) are computed when the Hasse diagram
has the structure of a rooted tree. The situation is remarkably nice. These ideals
are unobstructed, and the full deformation family can be explicitly computed. This
deformed family has a polynomial ring as a base ring, and the ideal of the full family
is a rigid ideal. In some simple example cases these are the ideals of 2-minors of a
generic 2 × n matrix, and the ideal of Pfaffians of a generic skew-symmetric 5 × 5
matrix.
The organization of the paper is as follows. In Section 1 we define ideals L(Q, P )
associated to pairs of posets Q and P . In particular for the totally ordered poset [n]
on n elements, we introduce the letterplace ideals L([n], P ) and co-letterplace ideals
L(P, [n]). We investigate how they behave under Alexander duality. In Section 2
we study when a sequence of variable differences is regular for letterplace and co-letterplace ideals. We also define the notion of separation. Section 3 gives classes
of ideals, including generalized Hibi ideals and initial ideals of determinantal ideals,
which are quotients of letterplace ideals by a regular sequence. Section 4 describes in
more detail the generators and facets of various letterplace and co-letterplace ideals.
Section 5 considers poset ideals J in Hom(P, [n]) and the associated co-letterplace
ideal L(P, n; J ). We show it has linear resolution, and compute its Alexander dual
L(n, P ; J ). Section 6 gives classes of ideals which are quotients of co-letterplace
ideals by a regular sequence. This includes strongly stable ideals, d-uniform d-partite hypergraph ideals, Ferrers ideals, and uniform face ideals. The last sections
7 and 8 contain proofs of basic results in this paper on when sequences of variable
differences are regular, and how Alexander duality behaves when cutting down by
such a regular sequence.
1. Letterplace ideals and their Alexander duals
If P is a partially ordered set (poset), a poset ideal J ⊆ P is a subset of P such
that q ∈ J and p ∈ P with p ≤ q, implies p ∈ J. The term order ideal is also much
used in the literature for this notion. If S is a subset of P , the poset ideal generated
by S is the set of all elements p ∈ P such that p ≤ s for some s ∈ S.
1.1. Isotone maps. Let P and Q be two partially ordered sets. A map φ : Q → P
is isotone or order preserving, if q ≤ q ′ implies φ(q) ≤ φ(q ′ ). The set of isotone
maps is denoted Hom(Q, P ). It is actually again a partially ordered set with φ ≤ ψ
if φ(q) ≤ ψ(q) for all q ∈ Q. The following will be useful.
Lemma 1.1. If P is a finite partially ordered set with a unique maximal or minimal
element, then an isotone map φ : P → P has a fixed point.
Proof. We show this in case P has a unique minimal element p = p0 . Then p1 = φ(p0 )
is ≥ p0 . If p1 > p0 , let p2 = φ(p1 ) ≥ φ(p0 ) = p1 . If p2 > p1 we continue. Since P
is finite, at some stage pn = pn−1 and since pn = φ(pn−1 ), the element pn−1 is a fixed point.
1.2. Alexander duality. Let k be a field. If R is a set, denote by k[xR ] the
polynomial ring in the variables xr where r ranges over R. If A is a subset of R
denote by mA the monomial Πa∈A xa .
Let I be a squarefree monomial ideal in a polynomial ring k[xR ], i.e. its generators
are monomials of the type mA . It corresponds to a simplicial complex ∆ on the
vertex set R, consisting of all S ⊆ R, called faces of ∆, such that mS ∉ I.
The Alexander dual J of I, written J = I A , may be defined in different ways.
Three definitions are the following.
1. The Alexander dual J is the monomial ideal in k[xR ] whose monomials are those
with nontrivial common divisor with every monomial in I.
2. The Alexander dual J is the ideal generated by all monomials mS where the S
are complements in R of faces of ∆.
3. If $I = \cap_{i=1}^{r} p_i$ is a decomposition into prime monomial ideals pi where pi is generated by the variables xa as a ranges over the subset Ai of R, then J is the ideal generated by the monomials mAi , i = 1, . . . , r. (If the decomposition is a minimal primary decomposition, the mAi form a minimal generating set of J.)
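To make definition 1 above concrete, here is a small brute-force Python sketch (an illustration of ours, not part of the paper): the generators of the Alexander dual are the minimal squarefree monomials whose support meets the support of every generator of I.

```python
# Brute-force Alexander dual of a squarefree monomial ideal, following definition 1:
# keep the minimal subsets of variables that meet the support of every generator.
from itertools import combinations

def alexander_dual(generators, variables):
    """generators: list of frozensets, the supports of the squarefree generators of I."""
    transversals = []
    for r in range(1, len(variables) + 1):
        for S in combinations(variables, r):
            S = frozenset(S)
            if all(S & g for g in generators):             # meets every generator
                if not any(t <= S for t in transversals):   # keep only minimal ones
                    transversals.append(S)
    return transversals

# Example: I = (x1*x2, x2*x3) in k[x1, x2, x3]; its Alexander dual is (x2, x1*x3).
gens = [frozenset({"x1", "x2"}), frozenset({"x2", "x3"})]
print(alexander_dual(gens, ["x1", "x2", "x3"]))
```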
1.3. Ideals from Hom-posets. To an isotone map φ : Q → P we associate its
graph Γφ ⊆ Q × P where Γφ = {(q, φ(q)) | q ∈ Q}. As φ ranges over Hom(Q, P ), the
monomials mΓφ generate a monomial ideal in k[xQ×P ] which we denote by L(Q, P ).
More generally, if S is a subset of Hom(Q, P ) we get ideals L(S) generated by mΓφ
where φ ∈ S.
If R is a subset of the product Q × P , we denote by Rτ the subset of P × Q we get
by switching coordinates. As L(Q, P ) is an ideal in k[xQ×P ], we may also consider
it as an ideal in k[xP ×Q ]. In cases where we need to be precise about this, we write
it then as L(Q, P )τ .
If Q is the totally ordered poset on n elements Q = [n] = {1 < 2 < · · · < n}, we
call L([n], P ), written simply L(n, P ), the n’th letterplace ideal of P . It is generated
by the monomials
x1,p1 x2,p2 · · · xn,pn with p1 ≤ p2 ≤ · · · ≤ pn .
This is precisely the same ideal as the multichain ideal In,n (P ) defined in [14] (but
with indices switched). The ideal L(P, [n]), written simply L(P, n), is the n’th co-letterplace ideal of P . In [14] it is denoted Hn (P ) and is called a generalized Hibi
type ideal. For some background on the letterplace notion, see Remark 1.7 at the
end of this section.
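As a small computational illustration (a sketch of ours, not part of the paper), the generators of L(n, P) and L(P, n) can be enumerated directly from these definitions; here for n = 2 and a three-element poset.

```python
# Generators of the letterplace ideal L(2, P) (multichains of length 2 in P) and of
# the co-letterplace ideal L(P, 2) (isotone maps P -> [2]) for a toy poset P.
from itertools import product

P = ["a", "b", "c"]                               # toy poset: a < b, a < c, b and c incomparable
leq = {(p, p) for p in P} | {("a", "b"), ("a", "c")}
n = 2

def is_multichain(chain):                          # p1 <= p2 <= ... <= pn in P
    return all((chain[i], chain[i + 1]) in leq for i in range(len(chain) - 1))

def is_isotone(phi):                               # phi: P -> [n] order preserving
    return all(phi[p] <= phi[q] for (p, q) in leq)

L_nP = [tuple((i + 1, p) for i, p in enumerate(chain))
        for chain in product(P, repeat=n) if is_multichain(chain)]

L_Pn = [tuple(sorted(phi.items()))
        for phi in (dict(zip(P, vals)) for vals in product(range(1, n + 1), repeat=len(P)))
        if is_isotone(phi)]

print(len(L_nP), "generators of L(2, P);", len(L_Pn), "generators of L(P, 2)")
```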
The following is Theorem 1.1(a) in [14], suitably reformulated. Since it is a very
basic fact, we include a proof of it.
Proposition 1.2. The ideals L(n, P ) and L(P, n)τ are Alexander dual in k[x[n]×P ].
Proof. Let L(n, P )A be the Alexander dual of L(n, P ). First we show L(P, n) ⊆
L(n, P )A . This is equivalent to: For any φ ∈ Hom([n], P ) and any ψ ∈ Hom(P, [n]),
the graphs Γφ and Γψ τ intersect in [n] × P . Let i be a fixed point for ψ ◦ φ. Then $i \xrightarrow{\phi} p \xrightarrow{\psi} i$, and so (i, p) is in both Γφ and Γψ τ .
Secondly, given a squarefree monomial m in L(n, P )A we show that it is divisible
by a monomial in L(P, n). This will show that L(n, P )A ⊆ L(P, n) and force equality
here. So let the monomial m correspond to the subset F of P × [n]. It intersects
all graphs Γφτ where φ ∈ Hom([n], P ). We must show it contains a graph Γψ
where ψ ∈ Hom(P, [n]). Given F , let Jn = P and let Jn−1 be the poset ideal of P
generated by all p ∈ Jn = P such that (p, n) ∉ F . Inductively let Ji−1 be the poset
ideal in Ji generated by all p in Ji with (p, i) not in F .
Claim 1. J0 = ∅.
Proof. Otherwise let p ∈ J0 . Then there is p ≤ p1 with p1 ∈ J1 and (p1 , 1) ∉ F . Since p1 ∈ J1 there is p1 ≤ p2 with p2 ∈ J2 such that (p2 , 2) ∉ F . We may continue
this and get a chain p1 ≤ p2 ≤ · · · ≤ pn with (pi , i) not in F . But this contradicts
F intersecting all graphs Γφ where φ ∈ Hom([n], P ).
We thus get a filtration of poset ideals
∅ = J0 ⊆ J1 ⊆ · · · ⊆ Jn−1 ⊆ Jn = P.
This filtration corresponds to an isotone map ψ : P → [n].
Claim 2. Γψ is a subset of F .
Proof. Let (p, i) ∈ Γψ. Then p ∈ Ji \Ji−1 and so p ∉ Ji−1 . Thus (p, i) ∈ F .
Remark 1.3. The case n = 2 was shown in [21], where the ideal HP generated by $\prod_{p\in J} x_p \prod_{q\in P\setminus J} y_q$ as J varies over the poset ideals in P , was shown to be Alexander
dual to the ideal generated by xp yq where p ≤ q. In [21] it is also shown that the
L(2, P ) are precisely the edge ideals of bipartite Cohen-Macaulay graphs.
Remark 1.4. That L(m, n) and L(n, m) are Alexander dual is Proposition 4.5 of
[17]. There the elements of these ideals are interpreted as paths in an m × n matrix
with generic linear forms (xij ) and the generators of the ideals are the products of
the variables in these paths.
1.4. Alexander dual of L(Q, P ). In general L(Q, P ) and L(P, Q) are not Alexander dual. This is easily checked if for instance Q and P are antichains of sizes ≥ 2.
However we have the following.
Proposition 1.5. Suppose Q has a unique maximal or minimal element. The least
degree of a generator of the Alexander dual L(Q, P )A and of L(P, Q) is in both cases d = |P |, and the degree d parts of these ideals are equal. In particular, since L(P, Q) is
generated in this degree d, it is contained in L(Q, P )A .
Note that the above is equivalent to saying that the minimal primes of L(Q, P ) of height ≤ |P | are precisely the
$$p_\psi = (\{x_{\psi(p),p} \mid p \in P\}),$$
where ψ ∈ Hom(P, Q).
Proof. We show that:
1. L(Q, P ) ⊂ pψ for all ψ ∈ Hom(P, Q).
2. pψ is a minimal prime of L(Q, P ).
3. Any minimal prime p of L(Q, P ) is p = pψ for some ψ.
This will prove the proposition.
1. Given φ ∈ Hom(Q, P ) and ψ ∈ Hom(P, Q), we have to show that $m_\phi = \prod_{q\in Q} x_{q,\phi(q)} \in p_\psi$. By Lemma 1.1, ψ ◦ φ has a fixed point q; let p = φ(q). Then
ψ(p) = q. Therefore, xq,p is a factor of mφ and a generator of pψ . This implies that
mφ ∈ pψ .
2. Next we show that pψ is a minimal prime ideal of L(Q, P ). Suppose this is not
the case. Then we may skip one of its generators, say xψ(p),p , to obtain the prime
ideal p ⊂ pψ with L(Q, P ) ⊂ p. Let φ ∈ Hom(Q, P ) be the constant isotone map with φ(q) = p for all q ∈ Q. Then $m_\phi = \prod_{q\in Q} x_{q,p} \in L(Q, P)$. Since no factor of
mφ is divisible by a generator of p, it follows that L(Q, P ) is not contained in p, a
contradiction.
3. Now let p be any minimal prime ideal of L(Q, P ). Since L(Q, P ) ⊂ p it follows
as in the previous paragraph that for each p ∈ P there exists an element ψ(p) ∈ Q
such that xψ(p),p ∈ p. This shows that height L(Q, P ) = |P |. Assume now that height p = |P |. Then $p = (\{x_{\psi(p),p} \mid p \in P\})$. It remains to be shown that ψ : P → Q is isotone. Suppose this is not the case. Then there exist p, p′ ∈ P such that p < p′ and ψ(p) ≰ ψ(p′ ). Let φ : Q → P be the map with φ(q) = p if q ≤ ψ(p′ ) and φ(q) = p′ if q ≰ ψ(p′ ). Then φ is isotone, and it follows that $m_\phi = \prod_{q \le \psi(p')} x_{q,p} \prod_{q \not\le \psi(p')} x_{q,p'}$
does not belong to p, a contradiction.
Remark 1.6. In [23] they determine precisely when L(P, Q) and L(Q, P ) are Alexander dual, for finite posets P and Q.
Remark 1.7. Let X = {x1 , . . . , xn } be an alphabet. The letterplace correspondence is a way to encode non-commutative monomials xi1 xi2 · · · xir in k⟨X⟩ by commutative polynomials xi1 ,1 · · · xir ,r in k[X × N]. It is due to G.-C. Rota, who again attributed it to a physicist, apparently Feynman. D. Buchsbaum has a survey article [4] on letterplace algebra and the use of these techniques in the resolution of Weyl modules. Recently [29] uses letterplace ideals in computations of non-commutative Gröbner bases.
2. Quotients of letterplace ideals
A chain c in the product of two posets Q × P is said to be left strict if for two
elements in the chain, (q, p) < (q ′ , p′ ) implies q < q ′ . Analogously we define right
strict. The chain is bistrict if it is both left and right strict.
An isotone map of posets φ : Q × P → R is said to have left strict chain fibers if
all its fibers φ−1 (r) are left strict chains in Q × P op . Here P op is the opposite poset
of P , i.e. p ≤op p′ in P op iff p′ ≤ p in P .
The map φ gives a map of linear spaces φ1 : ⟨xQ×P ⟩ → ⟨xR ⟩ (the brackets here
mean the k-vector space spanned by the set of variables). The map φ1 induces a
map of polynomial rings φ̂ : k[xQ×P ] → k[xR ]. In the following B denotes a basis
for the kernel of the map of degree one forms φ1 , consisting of differences xq,p − xq′ ,p′
with φ1 (q, p) = φ1 (q ′ , p′ ).
Theorem 2.1. Given an isotone map φ : [n] × P → R which has left strict chain
fibers. Then the basis B is a regular sequence of the ring k[x[n]×P ]/L(n, P ).
Theorem 2.2. Given an isotone map ψ : P × [n] → R which has left strict chain
fibers. Then the basis B is a regular sequence of the ring k[xP ×[n] ]/L(P, n).
We shall prove these in Section 8. For now we note that they require distinct
proofs, with the proof of Theorem 2.2 the most delicate.
In the setting of Theorem 2.1, we let Lφ (n, P ) be the ideal generated by the image
of the n’th letterplace ideal L(n, P ) in k[xR ], and in the setting of Theorem 2.2, we
let Lψ (P, n) be the ideal generated by the image of the n’th co-letterplace ideal
L(P, n) in k[xR ]. Note that Lφ (n, P ) is a squarefree ideal iff in the above the fibers
φ−1 (r) are bistrict chains in [n] × P op , and similarly Lψ (P, n) is a squarefree ideal
iff the fibers ψ −1 (r) are bistrict chains in P × [n]op .
We get the following consequence of the above Theorems 2.1 and 2.2.
Corollary 2.3. The quotient rings k[x[n]×P ]/L(n, P ) and k[xR ]/Lφ (n, P ) have the
same graded Betti numbers. Similarly for k[xP ×[n] ]/L(P, n) and k[xR ]/Lψ (P, n).
Proof. We prove the first statement. Let Lim φ (n, P ) be the image of L(n, P ) in
k[xim φ ], and S = R\im φ. Thus k[xim φ ]/Lim φ (n, P ) is a quotient of k[x[n]×P ]/L(n, P )
by a regular sequence, and k[xR ]/Lφ (n, P ) is k[xim φ ]/Lim φ (n, P ) ⊗k k[xS ].
For the poset P consider the multichain ideal I(n, P ) in k[xP ] generated by monomials xp1 xp2 · · · xpn where p1 ≤ p2 ≤ · · · ≤ pn is a multichain of length n in P . The
quotient k[xP ]/I(n, P ) is clearly artinian since xnp is in I(n, P ) for every p ∈ P .
Corollary 2.4. The ring k[xP ]/I(n, P ) is an artinian reduction of k[x[n]×P ]/L(n, P )
by a regular sequence. In particular L(n, P ) is a Cohen-Macaulay ideal. It is Gorenstein iff P is an antichain.
Proof. The first part is because the map [n] × P → P fulfills the criteria of Theorem
2.1 above. An artinian ideal is Gorenstein iff it is a complete intersection. Since all
xnp are in I(n, P ), this holds iff there are no more generators of I(n, P ), which means
precisely that P is an antichain.
This recovers part of Theorem 2.4 of [14] showing that L(n, P ) is Cohen-Macaulay.
The Gorenstein case above is Corollary 2.5 of [14].
Remark 2.5. The multigraded Betti numbers of the resolution of L(n, P ) are described
in [9], as well as other properties of this resolution.
Recall that a squarefree monomial ideal is bi-Cohen-Macaulay, [17], iff both the
ideal and its Alexander dual are Cohen-Macaulay ideals.
Corollary 2.6. L(n, P ) is bi-Cohen-Macaulay iff P is totally ordered.
Proof. Since L(n, P ) is Cohen-Macaulay, it is bi-Cohen-Macaulay iff it has a linear
resolution, [12]. Equivalently I(n, P ) in k[xP ] has a linear resolution. But since
I(n, P ) gives an artinian quotient ring, this is equivalent to I(n, P ) being the n’th
power of the maximal ideal. In this case every monomial $x_p^{n-1} x_q$ is in I(n, P ) and
so every pair p, q in P is comparable. Thus P is totally ordered. Conversely, if P is
totally ordered, then clearly I(n, P ) is the n’th power of the maximal ideal.
Definition 2.7. Let R′ → R be a surjective map of sets with R′ of cardinality one
more than R, and let r1 ≠ r2 in R′ map to the same element in R. Let I be a
monomial ideal in k[xR ]. A monomial ideal J in k[xR′ ] is a separation of I if i) I is
the image of J by the natural map k[xR′ ] → k[xR ], ii) xr1 occurs in some minimal
generator of J and similarly for xr2 , and iii) xr1 −xr2 is a regular element of k[xR′ ]/J.
The ideal I is separable if it has some separation J. Otherwise it is inseparable. If
J is obtained from I by a succession of separations, we also call J a separation of I.
We say that I is a regular quotient by variable differences of J, or simply a regular
quotient of J. If J is inseparable, then J is a separated model for I.
This notion also occurs in [15] where inseparable monomial ideals are called maximal. The canonical example of a separation of a non-squarefree monomial ideal is
of course its polarization.
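For illustration, polarization itself takes only a few lines (a sketch of ours, with a hypothetical helper name): each pure power x^e occurring in a generator is replaced by a product of e distinct new variables.

```python
# Polarization of a monomial ideal: the exponent x^e becomes x_1 x_2 ... x_e
# in new variables, yielding a squarefree (separated) ideal.
def polarize(generators):
    """generators: list of dicts {variable: exponent}; returns squarefree supports."""
    return [frozenset((v, j) for v, e in g.items() for j in range(1, e + 1))
            for g in generators]

# Example: I = (x^2, x*y) polarizes to (x1*x2, x1*y1).
print(polarize([{"x": 2}, {"x": 1, "y": 1}]))
```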
Lemma 2.8. Let I be an ideal generated by a subset of the generators of L(Q, P ).
Then I is inseparable.
Proof. Let R′ → Q × P be a surjective map with R′ of cardinality one more than
Q × P . Suppose there is a monomial ideal J in k[xR′ ] which is a separation of I.
Let a and b in R′ both map to (q, p). For any other element of R′ , we identify it
with its image in Q × P . Suppose m = xa m0 in J maps to a generator xq,p m0 of
L(Q, P ), and m′ = xb m1 maps to another generator of L(Q, P ). Then m0 does not
contain a variable xq,p′ with first index q, and similarly for m1 . Note that the least
common multiple m01 of m0 and m1 does not contain a variable with first index q.
Hence m01 is not in L(Q, P ) and so m01 is not in J. But (xb − xa )m01 is in J since
xb m01 and xa m01 are in J. By the regularity of xb − xa this implies m01 in J, a
contradiction.
As we shall see, many naturally occurring monomial ideals are separable and have
separated models which are letterplace ideals L(n, P ) or are generated by a subset
of the generators of co-letterplace ideals L(P, n).
Remark 2.9. In [15, Section 2] the first author shows that the separated models of the squarefree power $(x_1, \ldots, x_n)^{n-1}_{sq}$ are in bijection with trees on n vertices.
We now consider the Alexander dual of Lφ (n, P ).
Theorem 2.10. Let φ : [n] × P → R be an isotone map such that the fibers φ−1 (r) are bistrict chains in [n] × P op . Then the ideals $L^{\phi}(n, P)$ and $L^{\phi^{\tau}}(P, n)$ are Alexander dual.
We prove this in Section 7.
Remark 2.11. The Alexander dual of the squarefree power in Remark 2.9 is the
squarefree power $(x_1, \ldots, x_n)^{2}_{sq}$. Separations of this ideal are studied by H. Lohne,
[30]. In particular he describes how the separated models are also in bijection with
trees on n vertices.
3. Examples of regular quotients of letterplace ideals
The ideals which originally inspired this paper are the multichain ideals of [14].
3.1. Multichain ideals. Let Pm be P × [m] where m ≥ 1. Consider the surjective
map
[s] × Pm → P × [m + s − 1]
(i, p, a) ↦ (p, a + i − 1).
This map has left strict chain fibers. The image of L(s, Pm ) in k[xP ×[m+s−1] ]
is exactly the multichain ideal Im+s−1,s (P ) of [14]. This is the ideal generated by
monomials
xp1 ,i1 xp2 ,i2 · · · xps ,is
where
p1 ≤ · · · ≤ ps ,
1 ≤ i1 < · · · < is ≤ m + s − 1.
By Theorem 2.1 it is obtained from the s’th letterplace ideal L(s, Pm ) = L(s, P ×[m])
by cutting down by a regular sequence. Thus we recover the fact, [14, Thm. 2.4],
that these ideals are Cohen-Macaulay.
The Alexander dual of L(s, Pm ) is L(Pm , s). An element r of Hom(P × [m], [s])
may be represented by sequences
1 ≤ rp1 ≤ · · · ≤ rpm ≤ s
such that for each p ≤ q we have rpj ≤ rqj .
The element r gives the monomial generator in L(Pm , s)
$$m_r = \prod_{p \in P} \prod_{i=1}^{m} x_{p,i,r_{pi}}.$$
By Theorem 2.10, the Alexander dual of the multichain ideal Im+s−1,s (P ) is then
generated by
$$\prod_{p \in P} \prod_{i=1}^{m} x_{p,t_{pi}}, \qquad 1 \le t_{p1} < t_{p2} < \cdots < t_{pm} \le m + s - 1$$
(where tpi = rpi + i − 1) such that p < q implies tpj ≤ tqj . These are exactly the generators of the squarefree power ideal $L(P, s + m - 1)^{\langle m \rangle}$. This recovers Theorem
1.1(b) in [14].
3.2. Initial ideals of determinantal ideals: two-minors. We now let P = [n]
and s = 2. Let e, f ≥ 0. There are isotone maps
$[2] \times [n] \times [m] = [2] \times P_m \xrightarrow{\;\phi_{e,f}\;} [n + e] \times [m + f]$
(1, a, b) ↦ (a, b)
(2, a, b) ↦ (a + e, b + f )
These maps have left strict chain fibers and we get the ideal Lφe,f (2, Pm ).
• When (e, f ) = (0, 1) we are in the situation of the previous Subsection 3.1,
and we get the multichain ideal Im+1,2 ([n]).
• When (e, f ) = (1, 0) we get the multichain ideal In+1,2 ([m]).
• When (e, f ) = (1, 1) we get the ideal in k[x[n+1]×[m+1] ] generated by monomials xi,j xi′ ,j ′ where i < i′ and j < j ′ . This is precisely the initial ideal I of
the ideal of two-minors of a generic (n + 1) × (m + 1) matrix of linear forms
(xi,j ) with respect to a diagonal term order, [36].
In particular all of Im+1,2 ([n]), In+1,2 ([m]) and I have the same graded Betti numbers and the same h-vector, the same as L(2, [n] × [m]).
Particularly noteworthy is the following: The ideal of two-minors of the generic
(n + 1) × (m + 1) matrix is the homogeneous ideal of the Segre product of $\mathbb{P}^m \times \mathbb{P}^n$ in $\mathbb{P}^{nm+n+m}$. By J. Kleppe [28], any deformation of a generic determinantal ideal is
still a generic determinantal ideal. So if this Segre embedding is obtained from a
variety X in a higher dimensional projective space, by cutting it down by a regular
sequence of linear forms, this X must be a cone over the Segre embedding. Thus
we cannot “lift” the ideal of two minors to an ideal in a polynomial ring with more
variables than (n + 1)(m + 1). However its initial ideal may be separated to the
monomial ideal L(2, [n] × [m]) with 2nm variables.
Varying e and f , we get a whole family of ideals Lφe,f (2, [n] × [m]) with the same
Betti numbers as the initial ideal of the ideal of two-minors. When e = 0 = f we
get an artinian reduction, not of the initial ideal of the ideal of two-minors, but of
its separated model L(2, [n] × [m]). When e ≥ n + 1 and f ≥ m + 1, the map φe,f is
injective and Lφe,f (2, [n] ×[m]) is isomorphic to the ideal generated by L(2, [n] ×[m])
in a polynomial ring with more variables.
3.3. Initial ideals of determinantal ideals: higher minors. We may generalize
to arbitrary s and two weakly increasing sequences
e = (e1 = 0, e2 , . . . , es ),
f = (f1 = 0, f2 , . . . , fs ).
We get isotone maps
[s] × [n] × [m] −→ [n + es ] × [m + fs ]
(i, a, b) ↦ (a + ei , b + fi )
• When e = (0, . . . , 0) and f = (0, 1, . . . , s − 1) we get the multichain ideal
Im+s−1,s ([n]).
• When e = (0, 1, . . . , s − 1) and f = (0, . . . , 0) we get the multichain ideal
In+s−1,s ([m]).
• When e = (0, 1, . . . , s−1) and f = (0, 1, . . . , s−1) we get the ideal I generated
by monomials
xi1 ,j1 xi2 ,j2 · · · xis ,js
where i1 < · · · < is and j1 < · · · < js . This is the initial ideal I of the ideal
of s-minors of a general (n + s − 1) × (m + s − 1) matrix (xi,j ) with respect
to a diagonal term order, [36].
We thus see that this initial ideal I has a lifting to L(s, [n] × [m]) with snm
variables, in contrast to the (n + s − 1)(m + s − 1) variables which are involved in
the ideal of s-minors. We get maximal minors when, say m = 1. Then the initial
ideal I involves sn variables. So in this case the initial ideal I involves the same
number of variables as L(s, [n]), i.e. the generators of these two ideals are in one to
one correspondence by a bijection of variables.
3.4. Initial ideal of the ideal of two-minors of a symmetric matrix. Let
P = Hom([2], [n]). The elements here may be identified with pairs (i1 , i2 ) where
1 ≤ i1 ≤ i2 ≤ n. There is an isotone map
φ : [2] × Hom([2], [n]) → Hom([2], [n + 1])
(1, i1 , i2 ) ↦ (i1 , i2 )
(2, i1 , i2 ) ↦ (i1 + 1, i2 + 1).
This map has left strict chain fibers, and we get a regular quotient ideal Lφ (2, Hom([2], [n])),
generated by xi1 ,i2 xj1 ,j2 where i1 < j1 and i2 < j2 (and i1 ≤ i2 and j1 ≤ j2 ). This
is the initial ideal of the ideal generated by 2-minors of a symmetric matrix of size
n + 1, see [5, Sec.5].
3.5. Ladder determinantal ideals. Given a poset ideal J in [m] × [n]. This gives
the letterplace ideal L(2, J ). There is a map
φ : [2] × J → [m + 1] × [n + 1]
(1, a, b) ↦ (a, b)
(2, a, b) ↦ (a + 1, b + 1)
The poset ideal J is sometimes also called a one-sided ladder in [m] × [n]. The ideal
Lφ (2, J ) is the initial ideal of the ladder determinantal ideal associated to J , [34,
Cor.3.4]. Hence we recover the fact that these are Cohen-Macaulay, [26, Thm.4.9].
3.6. Pfaffians. Let T (n) be the poset in [n] × [n] consisting of all (a, b) with a + b ≤ n + 1. Then Lφ (2, T (n)) is the initial ideal of the ideal of 4-Pfaffians of a
skew-symmetric matrix of rank n + 3, [26, Sec.5]. It is also the initial ideal of the
Grassmannian G(2, n + 3), [13, Ch.6].
The poset T (2) is the V poset. The letterplace ideals L(n, T (2)) are the initial
ideals of the 2n-Pfaffians of a generic (2n + 1) × (2n + 1) skew-symmetric matrix, by
[26, Thm.5.1]. The variables Xi,2n+2−i in loc.cit. correspond to our variables xi,(1,1)
for i = 1, . . . , n, the variables Xi+1,2n+2−i correspond to the xi,(2,1) and the Xi,2n+1−i
to the xi,(1,2) .
4. Description of facets and ideals
As we have seen Hom(Q, P ) is itself a poset. The product P × Q makes the
category of posets Poset into a symmetric monoidal category, and with this internal Hom, it is a symmetric monoidal closed category [31, VII.7], i.e. there is an
adjunction of functors
$$-\times P : \mathbf{Poset} \rightleftarrows \mathbf{Poset} : \mathrm{Hom}(P,-)$$
so that
$$\mathrm{Hom}(Q \times P, R) \cong \mathrm{Hom}(Q, \mathrm{Hom}(P, R)).$$
This is an isomorphism of posets. Note that the distributive lattice D(P ) associated
to P , consisting of the poset ideals in P , identifies with Hom(P, [2]). In particular
[n + 1] identifies as Hom([n], [2]). The adjunction above gives isomorphisms between
the following posets.
1. Hom([m], Hom(P, [n + 1]))
2. Hom([m] × P, [n + 1]) = Hom([m] × P, Hom([n], [2]))
3. Hom([m] × P × [n], [2])
4. Hom([n] × P, Hom([m], [2])) = Hom([n] × P, [m + 1])
5. Hom([n], Hom(P, [m + 1]))
These Hom-posets normally give distinct letterplace or co-letterplace ideals associated to the same underlying (abstract) poset. There are natural bijections between
the generators. The degrees of the generators are normally distinct, and so they have
different resolutions.
Letting P be the one element poset, we get from 2.,3., and 4. above isomorphisms
(1) Hom([m], [n + 1]) ≅ Hom([m] × [n], [2]) ≅ Hom([n], [m + 1]).
An element φ in Hom([m], [n + 1]) identifies as a partition λ1 ≥ · · · ≥ λm ≥ 0
with m parts of sizes ≤ n, by φ(i) = λm+1−i + 1. The left and right side of the
isomorphisms above give the correspondence between a partition and its dual. This
poset is the Young lattice. In Stanley’s book [35], Chapter 6 is about this lattice,
there denoted L(m, n).
Letting m = 1 we get by 2., 3., and 5. isomorphisms:
Hom(P, [n + 1]) ≅ Hom(P × [n], [2]) ≅ Hom([n], D(P ))
and so we have ideals
L(P, n + 1),
L(P × [n], [2]),
L(n, D(P ))
whose generators are naturally in bijection with each other, in particular with elements of Hom([n], D(P )), which are chains of poset ideals in D(P ):
(2)
∅ = J0 ⊆ J1 ⊆ · · · ⊆ Jn ⊆ Jn+1 = P.
The facets of the simplicial complexes associated to their Alexander duals
L(n + 1, P ),
L(2, P × [n]),
L(D(P ), n),
are then in bijection with elements of Hom([n], D(P )).
For a subset A of a set R, let Ac denote its complement R\A.
1. The facets of the simplicial complex associated to L(n + 1, P ) identify as the
complements (Γφ)c of graphs of φ : P → [n + 1]. This is because these facets correspond to the complements of the set of variables in the generators in the Alexander
dual L(P, n + 1) of L(n + 1, P ).
For isotone maps α : [n + 1] × P → R having bistrict chain fibers, the associated simplicial complex of the ideal Lα (n + 1, P ) also has facets in one-to-one correspondence with φ : P → [n + 1], or equivalently φ′ : [n] → D(P ), but the precise description varies according to α.
2. The facets of the simplicial complex associated to L(2, P × [n]) identify as the complements (Γφ)c of the graphs of φ : P × [n] → [2]. Alternatively the facets identify as the graphs Γφ′ of φ′ : P × [n] → [2]op .
3. Let
α : [2] × P × [n] → P × [n + 1],
(a, p, i) ↦ (p, a + i − 1).
The ideal Lα (2, P × [n]) is the multichain ideal In+1,2 (P ). The generators of this ideal
are xp,i xq,j where p ≤ q and i < j. The facets of the simplicial complex associated
to this ideal are the graphs Γφ of φ : P → [n + 1]op .
5. Co-letterplace ideals of poset ideals
5.1. The ideal L(P, n; J ). Since Hom(P, [n]) is itself a partially ordered set, we can
consider poset ideals J ⊆ Hom(P, [n]) and form the subideal L(P, n; J ) of L(P, n)
generated by the monomials mΓφ where φ ∈ J . We call it the co-letterplace ideal of
the poset ideal J . For short we often write L(J ) and call it simply a co-letterplace
ideal. For the notion of linear quotients we refer to [22].
Theorem 5.1. Let J be a poset ideal in Hom(P, [n]). Then L(P, n; J ) has linear
quotients, and so it has linear resolution.
Proof. We extend the partial order ≤ on J to a total order, denoted ≤t , and define
an order on the generators of L(J ) by setting mΓψ ≥ mΓφ if and only if ψ ≤t φ. We
claim that L(J ) has linear quotients with respect to this total order of the monomial
generators of L(J ). Indeed, let mΓψ > mΓφ where ψ ∈ J . Then ψ <t φ, and hence
there exists p ∈ P such that ψ(p) < φ(p). We choose a p ∈ P which is minimal with
this property. Therefore, if q < p, then φ(q) ≤ ψ(q) ≤ ψ(p) < φ(p). We set
ψ′ (r) = ψ(r) if r = p, and ψ′ (r) = φ(r) otherwise.
Then ψ ′ ∈ Hom(P, n) and ψ ′ < φ for the original order on P . It follows that ψ ′ ∈ J ,
and mΓψ′ > mΓφ . Since (mΓψ′ ) : mΓφ = (xp,ψ(p) ) and since xp,ψ(p) divides mΓψ , the
desired conclusion follows.
Remark 5.2. One may fix a maximal element p ∈ P . The statement above still holds
if J in Hom(P, [n]) is a poset ideal for the weaker partial order ≤w on Hom(P, [n])
where φ ≤w ψ if φ(q) ≤ ψ(q) for q ≠ p and φ(p) = ψ(p). Just let the total order
still refine the standard partial order on the Hom(P, [n]). Then one deduces either
ψ ′ ≤w φ or ψ ′ ≤w ψ. In either case this gives ψ ′ ∈ J .
For an isotone map φ : P → [n], we define the set
(3)
Λφ = {(p, i) | φ(q) ≤ i < φ(p) for all q < p}.
It will in the next subsection play a role somewhat analogous to the graph Γφ. For
φ ∈ J we let Jφ be the ideal generated by all mΓψ with mΓψ > mΓφ , where we use
the total order in the proof of Theorem 5.1 above. In analogy to [14, Lemma 3.1]
one obtains:
Corollary 5.3. Let φ ∈ J . Then Jφ : mΓφ is {xp,i | (p, i) ∈ Λφ}.
Proof. The inclusion ⊆ has been shown in the proof of Theorem 5.1. Conversely, let
xp,i be an element of the right hand set. We set
ψ(r) = i if r = p, and ψ(r) = φ(r) otherwise.
Then mΓψ ∈ Jφ and (mΓψ ) : mΓφ = (xp,i ). This proves the other inclusion.
Corollary 5.4. The projective dimension of L(P, n; J ) is the maximum of the cardinalities |Λφ| for φ ∈ J .
Proof. This follows by the above Corollary 5.3 and Lemma 1.5 of [25].
Remark 5.5. By [14, Cor.3.3] the projective dimension of L(P, n) is (n − 1)s where
s is the size of a maximal antichain in P . It is not difficult to work this out as a
consequence of the above when J = Hom(P, [n]).
An explicit form of the minimal free resolution of L(P, n) is given in [14, Thm.
3.6], and this is generalized to L(P, n; J ) in [8].
5.2. Regular quotients of L(P, n; J ). We now consider co-letterplace ideals of
poset ideals when we cut down by a regular sequence of variable differences. The
following generalizes Theorem 2.2 and we prove it in Section 8.
Theorem 5.6. Given an isotone map ψ : P × [n] → R with left strict chain fibers.
Let J be a poset ideal in Hom(P, n). Then the basis B (as defined before Theorem
2.1) is a regular sequence for the ring k[xP ×[n] ]/L(P, n; J ).
5.3. Alexander dual of L(P, n; J ). We describe the Alexander dual of L(J ) =
L(P, n; J ) when J is a poset ideal in Hom(P, [n]). We denote this Alexander dual
ideal as L(J )A = L(n, P ; J ). Note that since L(P, n; J ) ⊆ L(P, n), the Alexander
dual L(n, P ; J ) contains the letterplace ideal L(n, P ), and since L(P, n; J ) has
linear resolution, the Alexander dual L(n, P ; J ) is a Cohen-Macaulay ideal, by [12].
Recall the set Λφ defined above (3), associated to a map φ ∈ Hom(P, [n]).
Lemma 5.7. Let J be a poset ideal in Hom(P, [n]). Let φ ∈ J and ψ be in the
complement J c . Then Λψ ∩ Γφ is nonempty.
Proof. There is some p ∈ P with ψ(p) > φ(p). Choose p to be minimal with this
property, and let i = φ(p). If (p, i) is not in Λψ, there must be q < p with ψ(q) >
i = φ(p) ≥ φ(q). But this contradicts p being minimal. Hence (p, i) = (p, φ(p)) is
both in Γφ and Λψ.
Lemma 5.8. Let S be a subset of P × [n] which is disjoint from Γφ for some φ in
Hom(P, [n]). If φ is a minimal such element w.r.t. the partial order on Hom(P, [n]),
then S ⊇ Λφ.
Proof. Suppose (p, i) ∈ Λφ and (p, i) is not in S. Define φ′ : P → [n] by
φ′ (q) = φ(q) if q ≠ p, and φ′ (q) = i if q = p.
By definition of Λφ we see that φ′ is an isotone map, and φ′ < φ. But since S is
disjoint from Γφ, we see that it is also disjoint from Γφ′ . This contradicts φ being
minimal. Hence every (p, i) ∈ Λφ is also in S.
For a subset S of Hom(P, [n]) define K(S) ⊆ k[x[n]×P ] to be the ideal generated
by the monomials mΛφτ where φ ∈ S.
Theorem 5.9. The Alexander dual L(n, P ; J ) is L(n, P ) + K(J c ). This is a Cohen-Macaulay ideal of codimension |P |.
Proof. It is Cohen-Macaulay, in fact shellable, since L(J ) has linear quotients by
Theorem 5.1. The facets of the simplicial complex corresponding to L(J )A are the
complements of the generators of L(J ). Hence these facets have codimension |P |.
To prove the first statement we show the following.
1. The right ideal is contained in the Alexander dual of the left ideal: Every monomial in L(n, P ) + K(J c ) has non-trivial common divisor with every monomial in
L(J ).
2. The Alexander dual of the left ideal is contained in the right ideal: If S ⊆ [n] × P
intersects every Γφτ where φ ∈ J , the monomial mS is in L(n, P ) + K(J c ).
1a. Let ψ ∈ Hom([n], P ). Since L(n, P ) and L(P, n) are Alexander dual, Γψ ∩ Γφτ
is non-empty for every φ ∈ Hom(P, [n]) and so in particular for every φ ∈ J .
1b. If ψ ∈ J c then Λψ ∩ Γφ is nonempty for every φ ∈ J by Lemma 5.7.
Suppose now S intersects every Γφτ where φ is in J .
2a. If S intersects every Γφτ where φ is in Hom(P, [n]), then since L(n, P ) is the
Alexander dual of L(P, n), the monomial mS is in L(n, P ).
2b. If S does not intersect Γφτ where φ ∈ J c , then by Lemma 5.8, for a minimal
such φ we will have S ⊇ Λφτ . Since S intersects Γφτ for all φ ∈ J , a minimal such
φ is in J c . Thus mS is divided by mΛφτ in K(J c ).
Remark 5.10. For a more concrete example, related to Stanley-Reisner ideals with whiskers, see the end of Subsection 6.4.
Remark 5.11. In [8, Thm.5.1] it is shown that the simplicial complex corresponding
to L(n, P ; J ) is a triangulation of a ball. Its boundary is then a triangulation of a
sphere. This gives a comprehensive generalization of Bier spheres, [3]. In [8, Sec.4]
there is also a precise description of the canonical module of the Stanley-Reisner
ring of L(n, P ; J ), as an ideal in this Stanley-Reisner ring.
5.4. Regular quotients of the Alexander dual L(n, P ; J ). We now take the
Alexander dual of L(P, n; J ) and cut it down by a regular sequence of variable
differences. We then get a generalization of Theorem 2.1 and we prove it in Section
8.
Theorem 5.12. Given an isotone map φ : [n] × P → R with left strict chain fibers.
Let J be a poset ideal in Hom(P, n). Then the basis B (as defined before Theorem
2.1) is a regular sequence for the ring k[x[n]×P ]/L(n, P ; J ).
6. Examples of regular quotients of co-letterplace ideals
We give several examples of quotients of co-letterplace ideals which have been
studied in the literature in recent years.
6.1. Strongly stable ideals: Poset ideals in Hom([d], [n]).
Elements of Hom([d], [n]) are in one to one correspondence with monomials in
k[x1 , . . . , xn ] of degree d: A map φ gives the monomial $\prod_{i=1}^{d} x_{\phi(i)}$. By this association,
the poset ideals in Hom([d], [n]) are in one to one correspondence with strongly stable
ideals in k[x1 , . . . , xn ] generated in degree d.
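This correspondence can be made explicit in a few lines of Python (an illustration of ours, not part of the paper): the weakly increasing sequences of length d in {1, ..., n} are exactly the isotone maps [d] → [n], and each is sent to the corresponding degree-d monomial.

```python
# Isotone maps phi: [d] -> [n] are weakly increasing sequences; each gives the
# degree-d monomial x_{phi(1)} * ... * x_{phi(d)}.
from itertools import combinations_with_replacement

d, n = 2, 3
isotone_maps = list(combinations_with_replacement(range(1, n + 1), d))

def monomial(phi):
    return "*".join(f"x{i}" for i in phi)

for phi in isotone_maps:
    print(phi, "->", monomial(phi))

# The poset ideal generated by phi = (1, 3) contains (1, 1), (1, 2), (1, 3),
# i.e. the strongly stable ideal (x1^2, x1*x2, x1*x3) generated in degree 2.
```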
Consider the projection $p_2 : [d] \times [n] \to [n]$. The following is a consequence of
Theorem 5.1 and Theorem 5.6.
Corollary 6.1. Let J be a poset ideal of Hom([d], [n]). Then L(J ) has linear
resolution. The quotient map
$$k[x_{[d]\times[n]}]/L(\mathcal{J}) \xrightarrow{\;\hat{p}_2\;} k[x_{[n]}]/L^{p_2}(\mathcal{J})$$
is a quotient map by a regular sequence, and Lp2 (J ) is the strongly stable ideal in
k[x1 , . . . , xn ] associated to J .
The ideals L(J ) are extensively studied by Nagel and Reiner in [33]. Poset ideals
J of Hom([d], [n]) are there called strongly stable d-uniform hypergraphs, [33, Def.
3.3]. If M is the hypergraph corresponding to J , the ideal L(J ) is the ideal I(F (M))
of the d-partite d-uniform hypergraph F (M) of [33, Def.3.4, Ex.3.5].
Furthermore the ideal Lp2 (J ) is the ideal I(M) of [33, Ex. 3.5]. The squarefree
ideal I(K) of [33, Ex.3.5] is the ideal Lφ (J ) obtained from the map:
φ : [d] × [n] → [d + n − 1]
(a, b) ↦ a + b − 1
Corollary 6.1 above is a part of [33, Thm. 3.13].
Given a sequence 0 = a0 ≤ a1 ≤ · · · ≤ ad−1 , we get an isotone map
α : [d] × [n] → [n + ad−1 ], (i, j) ↦ j + ai−1
having left strict chain fibers. The ideal Lα (J ) is the ideal coming from the strongly
stable ideal associated to J by the stable operator of S.Murai [32, p.707]. When
ai−1 < ai they are called alternative squarefree operators in [37, Sec. 4].
Remark 6.2. In [18] Francisco, Mermin and Schweig consider a poset Q with underlying set {1, 2, . . . , n} where Q is a weakening of the natural total order, and
study Q-Borel ideals. This is not quite within our setting, but adds extra structure:
Isotone maps φ : [d] → [n] use the total order on [n], but when studying poset ideals J the weaker poset structure Q is used on the codomain.
Let n be the poset which is the disjoint union of the one element posets {1}, . . . {n},
so any two distinct elements are incomparable. This is the antichain on n elements.
6.2. Ferrers ideals: Poset ideals in Hom(2, [n]). By (1) partitions λ1 ≥ · · · ≥ λn ≥
0 where λ1 ≤ n correspond to elements of:
Hom([n], [n + 1]) ≅ Hom([n] × [n], [2]).
Thus λ gives a poset ideal J in [n] × [n] = Hom(2, [n]). The Ferrers ideal Iλ of [7,
Sec. 2] is the ideal L(J ) in k[x2×[n] ]. In particular we recover the result from [7,
Cor.3.8] that it has linear resolution.
More generally, the poset ideals J of Hom(d, [n]) correspond to the d-partite d-uniform Ferrers hypergraphs F in [33, Def. 3.6]. That L(J ) has linear resolution is
[33, Thm. 3.13].
6.3. Edge ideals of cointerval d-hypergraphs. Let Homs (Q, P ) be strict isotone
maps φ, i.e. q < q ′ implies φ(q) < φ(q ′ ). There is an isomorphism of posets
(4) Hom([d], [n]) ≅ Homs ([d], [n + d − 1]),
by sending φ to φs given by φs (j) = φ(j) + j − 1.
Consider the weaker partial order ⪯ on Hom([d], [n]) where φ ⪯ ψ if φ(i) ≤ ψ(i) for i < d and φ(d) = ψ(d). Via the isomorphism (4) this gives a partial order ⪯s on Homs ([d], [n + d − 1]). The poset ideals for the partial order ⪯s correspond to the cointerval d-hypergraphs of [10, Def. 4.1] on the set {1, 2, . . . , n + d − 1}. Let Js be such a poset ideal for ⪯s . It corresponds to a poset ideal J in Hom([d], [n]) for ⪯.
Let
(5) φ : [d] × [n] → [d + n − 1], (a, b) ↦ a + b − 1
The ideal Lφ (J ) is the edge ideal of the cointerval hypergraph corresponding to
Js , see [10, Def. 2.1]. By Remarks 5.2 and 8.2, Theorems 5.6 and 5.1 still hold for the weaker partial order ⪯. Hence we recover the fact from [10, Cor. 4.7] that edge
ideals of cointerval hypergraphs have linear resolution. In the case d = 2 these ideals
are studied also in [7, Sec. 4] and [33, Sec. 2]. These are obtained by cutting down
by a regular sequence of differences of variables from a skew Ferrers ideal Iλ−µ .
The skewness implies the ideal comes from a poset ideal of Hom([2], [n]) rather than
Hom(2, [n]). Due to this we get the map (5) which has left strict chain fibers, and
so the ideal Iλ−µ , of [7, Sec. 4].
6.4. Uniform face ideals: Poset ideals in Hom(n, [2]). The uniform face ideal of
a simplicial complex ∆, introduced recently by D. Cook [6], see also [20], is the ideal generated by the monomials
$$\prod_{i\in F} x_i \cdot \prod_{i\notin F} y_i$$
as F varies among the faces of ∆. The Boolean poset on n elements is the distributive lattice D(n) = Hom(n, [2]). A simplicial complex ∆ on the set {1, 2, . . . , n}
corresponds to a poset ideal J of Hom(n, [2]), and the uniform face ideal of ∆
identifies as the subideal L(J ) of L(n, [2]).
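As a small illustration (a sketch of ours, not taken from [6]), the generators of the uniform face ideal can be listed directly from this description:

```python
# Uniform face ideal of a simplicial complex Delta on {1, ..., n}: one generator
# per face F, namely the product of x_i for i in F and y_i for i not in F.
def uniform_face_ideal(faces, n):
    gens = []
    for F in faces:
        xs = [f"x{i}" for i in sorted(F)]
        ys = [f"y{i}" for i in range(1, n + 1) if i not in F]
        gens.append("*".join(xs + ys))
    return gens

# Example: Delta = {{}, {1}, {2}, {1,2}} on three vertices.
print(uniform_face_ideal([set(), {1}, {2}, {1, 2}], 3))
```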
More generally Cook considers a set of vertices which is a disjoint union of k
ordered sets C1 ∪ · · · ∪ Ck , each Ci considered a colour class. He then considers
simplicial complexes ∆ which are nested with respect to these orders [6, Def.3.3,
Prop.3.4]. He associates to this the uniform face ideal I(∆, C), [6, Def. 4.2]. Let ci
be the cardinality of Ci and consider the poset which is the disjoint union ∪ki=1 [ci ].
Then such a ∆ corresponds precisely to a poset ideal J in Hom(∪k1 [ci ], [2]). In fact
J is isomorphic to the index poset P (∆, C) of [6, Def. 6.1]. The uniform face ideal
is obtained as follows: There are projection maps pi : [ci ] × [2] → [2] and so
∪k1 pi : (∪k1 [ci ]) × [2] → ∪k1 [2].
This map has left strict chain fibers and the ideal $L^{\cup_1^k p_i}(\cup_1^k [c_i], [2])$ is exactly the
uniform face ideal I(∆, C). In [6, Thm. 6.8] it is stated that this ideal has linear
resolution.
Returning again to the first case of the ideal L(J ) in L(n, [2]), its Alexander dual
is by Theorem 5.9:
L(J )A = L([2], n) + K(J c ).
Here L([2], n) is the complete intersection of x1j x2j for j = 1, . . . , n, while K(J c ) is generated by $\prod_{j\in G} x_{1j}$ where G is a nonface of ∆. Thus K(J c ) is the associated
ideal I∆ ⊆ k[x11 , . . . , x1n ]. This is [20, Thm.1.1]: L(J )A is the Stanley-Reisner ideal
I∆ with whiskers x1j x2j .
7. Proof concerning Alexander duality
In this section we prove Theorem 2.10 concerning the compatibility between
Alexander duality and cutting down by a regular sequence. The following lemma
holds for squarefree ideals. Surprisingly it does not hold for monomial ideals in
general, for instance for $(x_0^n, x_1^n) \subseteq k[x_0 , x_1 ]$.
Lemma 7.1. Let I ⊆ S be a squarefree monomial ideal and let f ∈ S be such
that x1 f = x0 f considered in S/I. Then for every monomial m in f we have
x1 m = 0 = x0 m in S/I.
Proof. Write $f = x_0^a f_a + \cdots + x_0 f_1 + f_0$ where each $f_i$ does not contain $x_0$. The terms in $(x_1 - x_0)f = 0$ of degree $a + 1$ in $x_0$ are $x_0^{a+1} f_a$, and so this is zero. Since $S/I$ is squarefree, $x_0 f_a$ is zero, and so $f = x_0^{a-1} f_{a-1} + \cdots$. We may continue and get $f = f_0$. But then again in $(x_1 - x_0)f = 0$ the terms of degree 1 in $x_0$ are $x_0 f_0$, and so
this is zero. The upshot is that x0 f = 0 = x1 f . But then each of the multigraded
terms of these must be zero, and this gives the conclusion.
Let S be the polynomial ring k[x0 , x1 , x2 , . . . , xn ] and I ⊆ S a squarefree monomial
ideal with Alexander dual J ⊆ S. Let S1 = k[x, x2 , . . . , xn ] and S → S1 be the map
given by xi ↦ xi for i ≥ 2 and x0 , x1 ↦ x.
Let I1 be the ideal of S1 which is the image of I, so the quotient ring of S/I by
the element x1 − x0 is the ring S1 /I1 . Similarly we define J1 . We now have the
following. Part c. below is Theorem 3.1 in the unpublished paper [30].
Proposition 7.2. a) If x1 − x0 is a regular element of S/I, then J1 is squarefree.
b) If I1 is squarefree then x1 − x0 is a regular element on S/J.
c) If both x1 − x0 is a regular element on S/I and I1 is squarefree, then J1 is the
Alexander dual of I1 .
Proof. The Alexander dual J of I consists of all monomials in S with non-trivial
common factor (ntcf.) with all monomials in I.
a) Let F be a facet of the simplicial complex of I. Let $m_F = \prod_{i\in F} x_i$. Suppose F
does not contain any of the vertices 0 and 1. Then x1 mF = 0 = x0 mF in S/I (since
F is a facet). Since x1 − x0 is regular we get mF = 0 in S/I, a contradiction. Thus every facet F contains either 0 or 1. The generators of J are $\prod_{i\in [n]\setminus F} x_i$, and so no
such monomial contains x0 x1 and therefore J1 will be squarefree.
b) Suppose (x1 − x0 )f = 0 in S/J. By the above for the monomials m in f ,
we have x1 m = 0 = x0 m in S/J. We may assume m is squarefree. So x0 m has
ntcf. with all monomials in I and the same goes for x1 m. If m does not have ntcf.
with some minimal monomial generator n of I, we must then have n = x0 x1 n′ . But
then the image of n in I1 would not be squarefree, contrary to the assumption. The
upshot is that m has ntcf. with every monomial in I and so is zero in S/J.
c) A monomial m in J has ntcf. with all monomials in I. Then its image m in S1
has ntcf. with all monomials in I1 , and so J1 is contained in the Alexander dual of
I1 .
Assume now m in S1 has ntcf. with all monomials in I1 . We must show that
m ∈ J1 . If m does not contain x then m has ntcf. with every monomial in I, and
so m ∈ J1 .
Otherwise m = xm′ . We will show that either x0 m′ or x1 m′ is in
J. If not, then x0 m′ has no common factor with some monomial x1 n1 in I (it must
contain x1 since m has ntcf. with every monomial in I1 ), and x1 m′ has no common
factor with some monomial x0 n0 in I. Let n be the least common multiple of n0
and n1 . Then x0 n and x1 n are both in I and so by the regularity assumption n ∈ I.
But n has no common factor with x0 m′ and x1 m′ , and so n ∈ I1 has no common
factor with m = xm′ . This is a contradiction. Hence either x0 m′ or x1 m′ is in J
and so m is in J1 .
We are ready to round off this section by the following extension of Theorem 2.10.
Theorem 7.3. Let φ : [n] × P → R be an isotone map such that the fibers φ−1 (r)
are bistrict chains in [n] × P op . Then the ideals $L^{\phi}(n, P ; \mathcal{J})$ and $L^{\phi^{\tau}}(P, n ; \mathcal{J})$ are
Alexander dual.
Proof. Since L(P, n) is squarefree, the subideal L(P, n; J ) is also. Furthermore it is
obtained by cutting down by a regular sequence of variable differences, by Theorem
5.12. Hence $L(n, P ; \mathcal{J})^{\phi}$ is the Alexander dual of $L(P, n ; \mathcal{J})^{\phi^{\tau}}$.
8. Proof that the poset maps induce regular sequences.
To prove Theorems 2.1, 2.2 and 5.6, we will use an induction argument. Let
φ : [n] × P → R be an isotone map. Let r ∈ R have inverse image by φ of cardinality
≥ 2. Choose a partition into nonempty subsets φ^{−1}(r) = R1 ∪ R2 such that (i, p) ∈ R1
and (j, q) ∈ R2 implies i < j. Let R′ be R\{r} ∪ {r1 , r2 }. We get the map
(6)    [n] × P →^{φ′} R′ → R
factoring φ, where the elements of Ri map to ri . For an element p′ of R′ , denote by
p̄′ its image in R. Let p′ , q′ be distinct elements of R′ . We define a partial order on
R′ by the following two types of strict inequalities:
a. p′ < q′ if p′ = r1 and q′ = r2 ,
b. p′ < q′ if p̄′ < q̄′ .
Lemma 8.1. This ordering is a partial order on R′ .
Proof. Transitivity: Suppose p′ ≤ q′ and q′ ≤ r′ . Then p̄′ ≤ q̄′ and q̄′ ≤ r̄′ and so
p̄′ ≤ r̄′ . If either p̄′ or r̄′ is distinct from r we conclude that p′ ≤ r′ . If both of them
are equal to r, then q̄′ = r also. Then either p′ = q′ = r′ or p′ = r1 and r′ = r2 , and
so p′ ≤ r′ .
Antisymmetry: Suppose p′ ≤ q′ and q′ ≤ p′ . Then p̄′ = q̄′ . If this is not r we get
p′ = q′ . If it equals r, then since we do not have r2 ≤ r1 , we must again have
p′ = q′ .
Proof of Theorem 2.1. We show this by induction on the cardinality of im φ. Assume
that we have a factorization (6), such that
(7)    k[xR′ ]/L^{φ′}(n, P )
is obtained by cutting down from k[x[n]×P ]/L(n, P ) by a regular sequence of variable
differences.
For (a, p) in [n] × P denote its image in R′ by a, p and its image in R by a, p. Let
(a, p) map to r1 ∈ R′ and (b, q) map to r2 ∈ R′ . We will show that xr1 − xr2 is a
regular element in the quotient ring (7). So let f be a polynomial of this quotient
ring such that f (xr1 − xr2 ) = 0. Then by Lemma 7.1, for any monomial m in f we
have mxr1 = 0 = mxr2 in the quotient ring (7). We may assume m is nonzero in
this quotient ring.
There is a monomial x1,p1 x2,p2 · · · xn,pn in L^{φ′}(n, P ) dividing mxa,p considered as
monomials in k[xR ]. Then we must have a, p = s, ps for some s. Furthermore there
is x1,q1 x2,q2 · · · xn,qn in L^{φ′}(n, P ) dividing mxb,q in k[xR ], and so b, q = t, qt for some
t.
In R we now get
s, ps = a, p = b, q = t, qt ,
so s = t would imply qt = ps since φ has left strict chain fibers. But then
r1 = a, p = s, ps = t, qt = b, q = r2
which is not so. Assume, say s < t. Then ps ≥ qt since φ has left strict chain fibers,
and so
pt ≥ ps ≥ qt ≥ qs .
1. Suppose ps > qt . Consider x1,q1 · · · xt−1,qt−1 . This will divide m since x1,q1 x2,q2 · · · xn,qn
divides mxt,qt . Similarly xs+1,ps+1 · · · xn,pn divides m. Choose s ≤ r ≤ t. Then
pr ≥ ps > qt ≥ qr and so r, qr < r, pr . Then x1,q1 · · · xr−1,qr−1 and xr,pr · · · xn,pn do not
have a common factor since
i, qi ≤ r, qr < r, pr ≤ j, pj
for i ≤ r ≤ j. Hence the product of these monomials will divide m and so m = 0 in
the quotient ring k[xR ]/Lφ (n, P ).
2. Assume ps = qt and qt > qs . Then s, ps > s, qs since φ has left strict chain
fibers. The monomials x1,q1 · · · xs,qs and xs+1,ps+1 · · · xn,pn then do not have any
common factor, and the product divides m, showing that m = 0 in the quotient ring
k[xR ]/Lφ (n, P ).
If pt > ps we may argue similarly.
3. Assume now that pt = ps = qt = qs , and denote this element as p. Note that
for s ≤ i ≤ t we then have ps ≤ pi ≤ pt , so pi = p, and the same argument shows
that qi = p for i in this range.
Since
s, p = s, ps = a, p ≠ b, q = t, qt = t, p
there is s ≤ r ≤ t such that
s, ps = · · · = r, pr < r + 1, pr+1 ≤ · · · ≤ t, pt .
This is the same sequence as
s, qs = · · · = r, qr < r + 1, qr+1 ≤ · · · ≤ t, qt .
Then x1,q1 · · · xr,qr and xr+1,pr+1 · · · xn,pn divide m and do not have a common factor,
and so m = 0 in the quotient ring k[xR ]/Lφ (n, P ).
Proof of Theorem 5.12. There is a surjection
(8)    k[x[n]×P ]/L(n, P ) → k[x[n]×P ]/L(J )A
and both are Cohen-Macaulay quotient rings of k[x[n]×P ] of codimension |P |. The
basis B is a regular sequence for the first ring by the previous argument. Hence
k[x[n]×P ]/(L(n, P ) + (B))
has codimension |P | + |B|. Since
k[x[n]×P ]/(L(J )A + (B))
is a quotient of this it must have codimension ≥ |P | + |B|. But then B must be a
regular sequence for the right side of (8) also.
Proof of Theorems 2.2 and 5.6. By induction on the cardinality of im φ. We assume
we have a factorization
P × [n] →^{φ′} R′ → R
analogous to (6), such that
(9)    k[xR′ ]/L^{φ′}(J )
is obtained by cutting down from k[xP ×[n] ]/L(J ) by a regular sequence of variable
differences.
Let (p0 , a) map to r1 ∈ R′ and (q0 , b) map to r2 ∈ R′ . We will show that xr1 − xr2
is a regular element in the quotient ring (9). So let f be a polynomial of this quotient
ring such that f (xr1 − xr2 ) = 0. Then by Lemma 7.1, for any monomial m in f we
have mxr1 = 0 = mxr2 in the quotient ring k[xR′ ]/L^{φ′}(J ). We may assume m is
nonzero in this quotient ring.
There is i ∈ J ⊆ Hom(P, n) such that the monomial mi = ∏_{p∈P} xp,ip in L^{φ′}(J )
divides mxp0 ,a , and similarly a j ∈ J such that the monomial mj = ∏_{p∈P} xp,jp
divides mxq0 ,b . Hence there are s and t in P such that s, is = p0 , a and t, jt = q0 , b.
In R we then get:
s, is = p0 , a = q0 , b = t, jt ,
so s = t would imply it = jt since φ has left strict chain fibers. But then
r1 = p0 , a = s, is = t, jt = q0 , b = r2
which is not so. Assume then, say s < t. Then is ≥ jt since φ has left strict chain
fibers, and so
(10)    it ≥ is ≥ jt ≥ js .
Now form the monomials
• mi>s = ∏_{p>s} xp,ip .
• mii>j = ∏_{ip>jp, not (p>s)} xp,ip .
• mii<j = ∏_{ip<jp, not (p>s)} xp,ip .
• mii=j = ∏_{ip=jp, not (p>s)} xp,ip .
Similarly we define mj∗ for the various subscripts ∗. Then
mi = mii=j · mii>j · mii<j · mi>s
divides xs,is m, and
mj = mji=j · mji>j · mji<j · mj>s
divides xt,jt m.
There is now a map ℓ : P → [n] defined by ℓ(p) = ip for p > s, and ℓ(p) = min(ip , jp ) for not (p > s).
This is an isotone map as is easily checked. Its associated monomial is
mℓ = mi=j · mji>j · mii<j · mi>s .
We will show that this divides m. Since the isotone map ℓ is ≤ the isotone map i,
this will prove the theorem.
Claim 3. mji>j is relatively prime to mii<j and mi>s .
Proof. Let xp,jp be in mji>j .
1. Suppose it equals the variable xq,iq in mii<j . Then p and q are comparable since
φ has left strict chain fibers. If p < q then jp ≥ iq ≥ ip , contradicting ip > jp . If
q < p then iq ≥ jp ≥ jq contradicting iq < jq .
2. Suppose xp,jp equals xq,iq in mi>s . Then p and q are comparable and so p < q
since q > s and we do not have p > s. Then jp ≥ iq ≥ ip contradicting ip > jp .
Let abc = mii=j · mii<j · mi>s which divides mxs,is and ab′ = mji=j · mji>j which
divides m since xt,jt is a factor of mj>s since t > s. Now if the product of monomials
abc divides the monomial n and ab′ also divides n, and b′ is relatively prime to bc,
then the least common multiple abb′ c divides n. We thus see that the monomial
associated to the isotone map ℓ
mℓ = mi=j · mji>j · mii<j · mi>s
divides mxs,is . We need now only show that the variable xs,is occurs to a power in
the above product for mℓ less than or equal to that of its power in m.
Claim 4. xs,is is not a factor of mji>j or mii<j .
Proof. 1. Suppose s, is = p, ip where ip < jp and not p > s. Since p and s are
comparable (they are both in a fiber of φ), we have p ≤ s. Since φ is isotone ip ≤ is
and since φ has left strict chain fibers ip ≥ is . Hence ip = is . By (10) js ≤ is and so
jp ≤ js ≤ is = ip . This contradicts ip < jp .
2. Suppose s, is = p, jp where jp < ip and not p > s. Then again p ≤ s and
ip ≤ is ≤ jp , giving a contradiction.
If now is > js then xs,is is a factor in mii>j but by the above, not in mji>j . Since
mℓ is obtained from mi by replacing mii>j with mji>j , we see that mℓ contains a lower
power of xs,is than mi and so mℓ divides m.
Claim 5. Suppose is = js . Then the power of xs,is in mi>s is less than or equal to
its power in mj>s .
Proof. Suppose s, is = p, ip where p > s. We will show that then ip = jp . This will
prove the claim.
The above implies p, ip = s, is = t, jt , so either s < p < t or s < t ≤ p. If the
latter holds, then since φ has left strict chain fibers, is ≥ jt ≥ ip and also is ≤ ip by
isotonicity, and so is = ip = jt . Thus
s, is ≤ t, jt ≤ p, ip
and since the extremes are equal, all three are equal contradicting the assumption
that the two first are unequal.
Hence s < p < t. By assumption on the fibre of φ we have is ≥ ip and by
isotonicity is ≤ ip and so is = ip . Also by (10) and isotonicity
is ≥ jt ≥ jp ≥ js .
Since is = js we get equalities everywhere and so ip = jp , as we wanted to prove.
In case is > js we have shown before Claim 5 that mℓ divides m. So suppose
is = js . By the above two claims, the xs,is in mℓ occurs only in mi=j · mi>s and
to a power less than or equal to that in mi=j · mj>s . Since mj divides mxt,jt and
s, is ≠ t, jt , the power of xs,is in mj is less than or equal to its power in m. Hence
the power of xs,is in mℓ is less or equal to its power in m and mℓ divides m.
Remark 8.2. Suppose P has a unique maximal element p. The above proof still holds
if J in Hom(P, [n]) is a poset ideal for the weaker partial order ≤w on Hom(P, [n])
where the isotone maps φ ≤w ψ if φ(q) ≤ ψ(q) for q < p, and φ(p) = ψ(p).
References
1. Klaus Altmann, Mina Bigdeli, Juergen Herzog, and Dancheng Lu, Algebraically rigid simplicial
complexes and graphs, Journal of Pure and Applied Algebra 220 (2016), no. 8, 2914–2935.
2. David Bayer and Michael Stillman, A criterion for detecting m-regularity, Inventiones mathematicae 87 (1987), no. 1, 1–11.
3. Anders Björner, Andreas Paffenholz, Jonas Sjöstrand, and Günter M Ziegler, Bier spheres and
posets, Discrete & Computational Geometry 34 (2005), no. 1, 71–86.
4. David A Buchsbaum, Selections from the Letter-Place Panoply, Commutative Algebra,
Springer, 2013, pp. 237–284.
5. Aldo Conca, Serkan Hoşten, and Rekha R. Thomas, Nice initial complexes of some classical
ideals, Algebraic and geometric combinatorics, Contemp. Math., vol. 423, Amer. Math. Soc.,
Providence, RI, 2006, pp. 11–42.
6. David Cook, The uniform face ideals of a simplicial complex, arXiv preprint arXiv:1308.1299
(2013).
7. Alberto Corso and Uwe Nagel, Specializations of Ferrers ideals, J. Algebraic Combin. 28
(2008), no. 3, 425–437.
8. Alessio D’Ali, Gunnar Fløystad, and Amin Nematbakhsh, Resolutions of co-letterplace ideals
and generalizations of bier spheres, arXiv preprint arXiv:1601.02793 (2016).
9. Alessio D’Ali, Gunnar Fløystad, and Amin Nematbakhsh, Resolutions of letterplace ideals of posets, arXiv preprint arXiv:1601.02792 (2016).
10. Anton Dochtermann and Alexander Engström, Cellular resolutions of cointerval ideals, Math.
Z. 270 (2012), no. 1-2, 145–163.
11. John A Eagon and Victor Reiner, Resolutions of Stanley-Reisner rings and Alexander duality,
Journal of Pure and Applied Algebra 130 (1998), no. 3, 265–275.
12. John A Eagon and Victor Reiner, Resolutions of Stanley-Reisner rings and Alexander duality, Journal of Pure and Applied Algebra 130 (1998), no. 3, 265–275.
13. Viviana Ene and Jürgen Herzog, Gröbner bases in commutative algebra, vol. 130, American
Mathematical Soc., 2012.
14. Viviana Ene, Jürgen Herzog, and Fatemeh Mohammadi, Monomial ideals and toric rings of
Hibi type arising from a finite poset, European Journal of Combinatorics 32 (2011), no. 3,
404–421.
15. Gunnar Fløystad, Cellular resolutions of Cohen-Macaulay monomial ideals, J. Commut. Algebra 1 (2009), no. 1, 57–89.
16. Gunnar Fløystad and Amin Nematbakhsh, Rigid ideals by deforming quadratic letterplace
ideals, arXiv preprint arXiv:1605.07417 (2016).
17. Gunnar Fløystad and Jon Eivind Vatne, (Bi-)Cohen-Macaulay Simplicial Complexes and Their
Associated Coherent Sheaves, Communications in algebra 33 (2005), no. 9, 3121–3136.
18. Christopher A Francisco, Jeffrey Mermin, and Jay Schweig, Generalizing the Borel property,
Journal of the London Mathematical Society 87 (2013), no. 3, 724–740.
19. Mark L Green, Generic initial ideals, Six lectures on commutative algebra, Springer, 1998,
pp. 119–186.
20. J. Herzog and T. Hibi, The face ideal of a simplicial complex, ArXiv e-prints.
21. Jürgen Herzog and Takayuki Hibi, Distributive lattices, bipartite graphs and alexander duality,
Journal of Algebraic Combinatorics 22 (2005), no. 3, 289–302.
22. Jürgen Herzog and Takayuki Hibi, Monomial ideals, Springer, 2011.
23. Jürgen Herzog, Ayesha Asloob Qureshi, and Akihiro Shikama, Alexander duality for monomial
ideals associated with isotone maps between posets, Journal of Algebra and Its Applications 15
(2016), no. 05, 1650089.
24. Jürgen Herzog and Ahad Rahimi, Bi-cohen-macaulay graphs, arXiv preprint arXiv:1508.07119
(2015).
25. Jürgen Herzog and Yukihide Takayama, Resolutions by mapping cones, Homology, Homotopy
and Applications 4 (2002), no. 2, 277–294.
26. Jürgen Herzog and NgôViêt Trung, Gröbner bases and multiplicity of determinantal and pfaffian ideals, Advances in Mathematics 96 (1992), no. 1, 1–37.
27. Martina Juhnke-Kubitzke, Lukas Katthän, and Sara Saeedi Madani, Algebraic properties of
ideals of poset homomorphisms, Journal of Algebraic Combinatorics (2015), 1–28.
28. Jan O Kleppe, Deformations of modules of maximal grade and the hilbert scheme at determinantal schemes, Journal of Algebra 407 (2014), 246–274.
29. Roberto La Scala and Viktor Levandovskyy, Letterplace ideals and non-commutative Gröbner
bases, Journal of Symbolic Computation 44 (2009), no. 10, 1374–1393.
30. Henning Lohne, The many polarizations of powers of maximal ideals, arXiv preprint
arXiv:1303.5780 (2013).
31. Saunders Mac Lane, Categories for the working mathematician, vol. 5, springer, 1998.
32. Satoshi Murai, Generic initial ideals and squeezed spheres, Adv. Math. 214 (2007), no. 2,
701–729.
33. Uwe Nagel and Victor Reiner, Betti numbers of monomial ideals and shifted skew shapes,
Electron. J. Combin. 16 (2009), no. 2, Special volume in honor of Anders Bjorner, Research
Paper 3, 59.
34. Himanee Narasimhan, The irreducibility of ladder determinantal varieties, Journal of Algebra
102 (1986), no. 1, 162–185.
35. Richard P Stanley, Algebraic Combinatorics: Walks, Trees, Tableaux and More, Undergraduate
Texts in Mathematics (2013).
36. Bernd Sturmfels, Gröbner bases and stanley decompositions of determinantal rings, Mathematische Zeitschrift 205 (1990), no. 1, 137–144.
37. Kohji Yanagawa, Alternative polarizations of Borel fixed ideals, Nagoya Math. J. 207 (2012),
79–93.
Matematisk institutt, University of Bergen
E-mail address: [email protected]
Matematisk institutt, University of Bergen
E-mail address: [email protected]
Matematisches Institut, Universität Essen
E-mail address: [email protected]
| 0 |
JOURNAL OF COMPUTER SCIENCE AND ENGINEERING, VOLUME 1, ISSUE 1, MAY 2010
An approach to find dynamic slice for C++
Program
Santosh Kumar Pani and Priya Arundhati
Abstract—Object-oriented programming has been considered a most promising method in program development and maintenance. An important feature of object-oriented programs (OOPs) is their reusability, which can be achieved through the inheritance of classes or reusable components. Dynamic program slicing is an effective technique for narrowing errors down to the relevant parts of a program when debugging. Given a slicing criterion, the dynamic slice contains only those statements that actually affect the variables in the slicing criterion. This paper proposes a method to dynamically slice object-oriented (OO) programs based on dependence analysis. It uses the control dependence graph of the object program and other static information to reduce the information to be traced during program execution. In particular, we present a method to find the dynamic slice of object-oriented programs for objects and in the presence of function overloading.
Index Terms—Program Slicing, Dynamic Slicing, Control Dependence Graph.
1 INTRODUCTION
Program slicing is an effective technique for narrowing the focus of attention to the relevant parts of a program. The slice of a program consists of those statements and predicates that may directly or indirectly affect the variables computed at a given program point [1]. The pair of a program point s and a variable set V, denoted <s, V>, is called a slicing criterion. The basic idea of program slicing is to remove irrelevant statements from the source code while preserving the semantics of the program, so that at the specified point each variable in V produces the same value as in the original program. Program slicing has been widely used in many software activities, such as program analysis, comprehension, debugging, testing, and maintenance [2, 3, 4]. Slicing algorithms can be classified according to whether they use only static information (static slicing) or also dynamic execution information for a specific program input (dynamic slicing). This paper focuses on dynamic slicing methods for finding slices for classes, objects, and overloaded functions.
2 REVIEW OF RELATED WORKS
Agrawal proposes a dynamic slicing method by marking
nodes or edges on a static program dependence graph
during execution [5]. The result is not precise, because
some dependencies might not hold in dynamic execution.
Agrawal also proposes a precise method based on the
dynamic dependence graph (DDG) [5], and Zhao applies
it to slice object-oriented programs [6]. The shortcoming
is that the size of the DDG is unbounded.
Korel [8], Song [7] and Tibor [9] propose forward dynamic slicing methods, and Song also proposes a method to slice OOPs using a dynamic object relationship diagram (DORD). In these methods, the dynamic slice for each statement is computed immediately after the statement is executed; after the last statement is executed, the dynamic slices of all executed statements have been obtained. However, only some special statements in loops need dynamic slices to be computed. In this paper we represent the object-oriented program as a control dependence graph (CDG).
3 BASIC CONCEPTS & DEFINITIONS
Slicing object-oriented programs is difficult because such programs comprise many features such as classes, objects, polymorphism, and inheritance. In this paper we find slices for objects and in the presence of function overloading.
In an object-oriented programming language it is possible to define more than one function sharing a common name; this feature is called function overloading. Our aim is to find the slice when the called function is overloaded. In that case the slicer must know which version of the function is called. For this, the slicer uses the following information about the function call:
1. the types of the arguments,
2. the number of arguments,
3. the name of the function.
After obtaining this information from the calling node, the unique matching function is identified and the slice of that function is computed in the usual way.
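The following small C++ sketch shows one way such call-site information could be organized. It is only an illustration of the three items above: the Signature type, the functionEntryNode map and the resolveCall helper are hypothetical names introduced here, not data structures prescribed by the paper.

#include <map>
#include <string>
#include <tuple>
#include <vector>

// A call signature: the function name plus the list of argument types.
// The number of arguments is the size of argTypes.
struct Signature {
    std::string name;
    std::vector<std::string> argTypes;
    bool operator<(const Signature& o) const {
        return std::tie(name, argTypes) < std::tie(o.name, o.argTypes);
    }
};

// Maps each signature to the CDG entry node of the corresponding function body,
// so that an overloaded call can be resolved to a unique function.
std::map<Signature, int> functionEntryNode;

int resolveCall(const Signature& callSite) {
    return functionEntryNode.at(callSite);  // unique, since signatures are unique
}

For the sample program of Section 5, the call at node 13 would match the signature (add, {test, test}) and the call at node 15 would match (add, {test, int}), which is how the two overloaded versions of add() are distinguished.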
The following data structures are used for this purpose.
Santosh Kumar Pani is with the School of Computer Engineering, KIIT
University, Bhubaneswar.
Priya Arundhati is with the Department of CSE, Nalanda Institute of
Technology, Bhubaneswar.
1. Control Dependence Graph: The control dependence graph (CDG) G of an object-oriented program P is a graph G = (N, E), where each node n ∈ N represents a statement of the program P. For any pair of nodes x and y, (x, y) ∈ E if node x is control dependent on node y.
2. Def(dvar): Let dvar be a variable in a program P. A node u of the CDG Gp is said to be a Def(dvar) node if u is a definition statement that defines the variable dvar.
3. Use(dvar): Let dvar be a variable in a program P. A node u of the CDG Gp is said to be a Use(dvar) node if the statement u uses the value of the variable dvar.
4. DefVarSet(u): Let u be a node in the CDG Gp of a program P. The set DefVarSet(u) = {dvar : dvar is a data variable in the program P, and u is a Def(dvar) node}.
5. UseVarSet(u): Let u be a node in the CDG Gp of a program P. The set UseVarSet(u) = {dvar : dvar is a data variable in the program P, and u is a Use(dvar) node}.
6. ActiveControlSlice(s): Let s be a test node (predicate statement) in the CDG Gp of a program P, and let UseVarSet(s) = {var1, var2, ..., vark}. Before execution of the program P, ActiveControlSlice(s) = Ø. After each execution of the node s in the actual run of the program, ActiveControlSlice(s) = {s} U ActiveDataSlice(var1) U ... U ActiveDataSlice(vark) U ActiveControlSlice(t), where t is the most recently executed successor node of s in Gp. If s is a loop control node and the present execution of the node s corresponds to an exit from the loop, then ActiveControlSlice(s) = Ø.
7. ActiveDataSlice(dvar): Let dvar be a data variable in the object-oriented program P. Before execution of the program P, ActiveDataSlice(dvar) = Ø. Let u be a Def(dvar) node, and let UseVarSet(u) = {var1, var2, ..., vark}. After each execution of the node u in the actual run of the program, ActiveDataSlice(dvar) = {u} U ActiveDataSlice(var1) U ... U ActiveDataSlice(vark) U ActiveControlSlice(t), where t is the most recently executed successor node of u in Gp. The same definition applies to a data member of an object, written ActiveDataSlice(obj.var).
/* For each data member of an object a separate slice is maintained in a shared memory which is accessible only to the member functions called on that object. */
8. DyanSlice(obj): Let obj be an object of a class in the program P. Before execution of the program P, DyanSlice(obj) = Ø. If the object contains the data members mem1, mem2, ..., memn, then DyanSlice(obj) = DyanSlice(obj.mem1) U DyanSlice(obj.mem2) U ... U DyanSlice(obj.memn).
9. DyanSlice(s,var): Let s be a node of the CDG of a program P, and let var be a variable in the set DefVarSet(s) U UseVarSet(s). Before execution of the program P, DyanSlice(s,var) = Ø. After each execution of the node s in an actual run, DyanSlice(s,var) = ActiveDataSlice(var) U ActiveControlSlice(t).
10. ActiveCallSlice: Let Gp represent the CDG of the multi-procedure program P. Before execution of the program P, ActiveCallSlice = Ø. If Uactive is an active call node, then ActiveCallSlice = {Uactive} U ActiveCallSlice U ActiveControlSlice(t).
11. CallSliceStack: This stack is maintained to keep track of the ActiveCallSlice during the actual run of the program.
12. Formal(x,var), Actual(x,var): Let P1 be a procedure of a program P having multiple procedures, and let x be a calling node to the procedure P1. Let f be a formal parameter of the procedure P1 and let a be its corresponding actual parameter at the calling node x. We define Formal(x,a) = f and Actual(x,f) = a.
4 AN ALGORITHM FOR FINDING DYNAMIC SLICES FOR OBJECT-ORIENTED PROGRAMS IN CASE OF FUNCTION OVERLOADING
Step 1: Start.
Step 2: The control dependence graph G of the object-oriented program P is statically constructed.
Step 3: Before each execution of the program, do Steps 4 to 7.
Step 4: For each node u of G, do Steps 5 and 6.
Step 5: If u is a test node, then ActiveControlSlice(u) = Ø.
Step 6: For each variable var ∈ DefVarSet(u) U UseVarSet(u), set DyanSlice(u,var) = Ø.
Step 7: CallSliceStack = NULL. ActiveCallSlice = Ø.
Step 7a: For each data variable dvar of the program, set ActiveDataSlice(dvar) = Ø.
Step 7b: If var is a data member of the class of an object obj, then set ActiveDataSlice(obj.var) = Ø for each data member declared within the class of the object.
Step 8: Run the program P with the given set of input values and repeat Steps 9 to 19 until the program terminates.
Step 9: Before execution of each call node u, do Steps 10 and 11.
Step 10a: Let u be a call node to a procedure Q; update CallSliceStack and ActiveCallSlice.
// In case of functions having the same name, the signature of the call is determined; since each function has a unique signature, the slicer knows which version of the function is called. //
Step 10b: For each actual parameter var in the call to procedure Q, do ActiveDataSlice(Formal(u,var)) = ActiveDataSlice(var) U ActiveCallSlice.
// When a member function of a class is called on an object, each actual parameter of the call is copied individually to the corresponding formal parameter of the member function. //
Step 11: Update ActiveReturnSlice before execution of the return node.
Step 12: After execution of each node u of the program P, do Steps 13 to 18.
Step 13: If u is a Def(var) node and not a call node, then update ActiveDataSlice(var).
// var can be a data variable or a data member of an object; ActiveDataSlice(var) is updated accordingly. //
Step 14: If u is a call node to a procedure Q, then for every formal reference parameter var in the procedure Q, do ActiveDataSlice(Actual(u,var)) = ActiveDataSlice(var).
Step 15: If u is a call node that is also a Def(var) node, then ActiveDataSlice(var) = ActiveReturnSlice.
Step 16a: For every variable var declared as an automatic local variable in the procedure Q, do ActiveDataSlice(var) = Ø.
// For local static variables and global variables, ActiveDataSlice remains unchanged. //
Step 16b: Update CallSliceStack and ActiveCallSlice, and set ActiveReturnSlice = Ø.
Step 17: For every variable var ∈ DefVarSet(u) U UseVarSet(u), update DyanSlice(u,var).
Step 18: If u is a test node, then update ActiveControlSlice(u).
Step 19: Exit.
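To make the run-time bookkeeping of Steps 7 to 18 concrete, a minimal C++ sketch of the underlying data structures is given below. It illustrates only the update performed when a Def(var) node is executed (Step 13 together with Step 17); the container and function names (Slice, unionOf, onDefNode) are ours, not part of the paper, and call handling, reference parameters and the call slice stack are omitted.

#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Statement numbers identify CDG nodes, as in the sample program of Section 5.
using Slice = std::set<int>;

std::map<std::string, Slice> ActiveDataSlice;            // per variable / per obj.member
std::map<int, Slice> ActiveControlSlice;                 // per test node
std::map<std::pair<int, std::string>, Slice> DyanSlice;  // per (node, variable)

static Slice unionOf(const Slice& a, const Slice& b) {
    Slice r = a;
    r.insert(b.begin(), b.end());
    return r;
}

// Update after executing a Def(var) node u (Step 13 / Definition 7):
// the new active data slice of var is {u} together with the active data slices
// of the variables used at u and the active control slice of the most recently
// executed test node t controlling u.
void onDefNode(int u, const std::string& var,
               const std::vector<std::string>& usedVars, int t) {
    Slice s = {u};
    for (const std::string& v : usedVars)
        s = unionOf(s, ActiveDataSlice[v]);
    s = unionOf(s, ActiveControlSlice[t]);
    ActiveDataSlice[var] = s;
    DyanSlice[{u, var}] = s;   // Step 17, for the defined variable
}

For instance, executing statement 17 (a = x) during the call T1.get(p, q), with ActiveDataSlice["x"] = {2, 5} and no enclosing test node, yields ActiveDataSlice["T1.a"] = {2, 5, 17}, matching the trace given in Section 5 below.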
5 SAMPLE PROGRAM AND WORKING OF THE ALGORITHM
Sample Program
The following object-oriented program, in which only the concepts of class and object are used, is discussed to illustrate our proposed algorithm.
class test
{
int a;
int b;
public:
void get(int x,int y)
{
17. a=x;
18. b=y;
}
void display()
{
19. cout<<a;
20. cout<<b;
}
test add(test tp1,test tp2)
{
21. a=tp1.a+tp2.a;
22. b=tp1.b+tp2.b;
}
test add(test tp3,int s)
{
23. a=tp3.a+s;
24. b=tp3.b+s;
}
};
void main()
{
test T1,T2,T3,T4;
int p,q;
1. cout<<"Enter the value of p";
2. cin>>p;
3. cout<<"Enter the value of q";
4. cin>>q;
5. T1.get(p,q);
6. T1.display();
7. cout<<"Enter the value of p";
8. cin>>p;
9. cout<<"Enter the value of q";
10. cin>>q;
11. T2.get(p,q);
12. T2.display();
13. T3.add(T1,T2);
14. T3.display();
15. T4.add(T3,5);
16. T4.display();
}
Control Dependency Graph (figure)
Working of the Algorithm
After Execution of node-2
ADS(p)={2}
DyanSlice(2,p)={2}
After Execution of node-4
ADS(q)={4}
DyanSlice(4,q)={4}
After Execution of node-5
Here T1 is an object of the test class, and get() is a member function of the test class. The formal parameters of get() are x and y, which are also local variables of get(); the actual parameters are p and q.
So, ADS(x)=ADS(p) U Active Call Slice
={2} U {5}={2,5}
ADS(y)=ADS(q) U Active Call Slice
={4} U {5}={4,5}
Since T1 is calling get(), a and b here are T1's data members:
ADS(T1.a)=ADS(x) U {17}={2,5} U {17}={2,5,17}
DyanSlice(17,T1.a)={2,5,17}
ADS(T1.b)=ADS(y) U {18}={4,5} U {18}={4,5,18}
DyanSlice(18,T1.b)={4,5,18}
After Execution of node-6
Now T1 is calling display():
DyanSlice(19,T1.a)=ADS(T1.a)={2,5,17}
DyanSlice(20,T1.b)=ADS(T1.b)={4,5,18}
DyanSlice(T1)=DyanSlice(T1.a) U DyanSlice(T1.b)
={2,5,17} U {4,5,18}
={2,4,5,17,18}
After Execution of node-8
ADS(p)={8}
DyanSlice(8,p)={8}
After Execution of node-10
ADS(q)={10}
DyanSlice(10,q)={10}
After Execution of node-11
ADS(x)=ADS(p) U Active Call Slice
={8} U{11}={8,11}
ADS(y)=ADS(q) U Active Call Slice
={10}U{11}={10,11}
ADS(T2.a)=ADS(x) U {17}
={8,11,17}
DyanSlice(17,T2.a)={8,11,17}
ADS(T2.b)=ADS(y) U {18}
={10,11} U {18}={10,11,18}
After Execution of node-12
DyanSlice(19,T2.a)=ADS(T2.a)={8,11,17}
DyanSlice(20,T2.b)=ADS(T2.b)={10,11,18}
DyanSlice(T2)=DyanSlice(T2.a) U DyanSlice(T2.b)
={8,11,17} U {10,11,18}
={8,10,11,17,18}
After Execution of node-13
Here T3 is calling add(), where the formal parameters are tp1 and tp2 and the objects T1 and T2 are passed as the actual parameters. There are two versions of the add() function. The slicer decides which function is called by checking the signature of the function: when this node is executed, control goes to the first add() function, whose arguments are objects.
So, ADS(tp1.a)=ADS(T1.a) U Active Call Slice
={2,5,17} U {13}
={2,5,13,17}
ADS(tp1.b)=ADS(T1.b) U Active Call Slice
={4,5,18} U {13}
={4,5,13,18}
ADS(tp1)=ADS(tp1.a) U ADS(tp1.b)={2,5,13,17} U {4,5,13,18}={2,4,5,13,17,18}
ADS(tp2.a)=ADS(T2.a) U Active Call Slice
={8,11,17} U {13}
={8,11,13,17}
ADS(tp2.b)=ADS(T2.b) U Active Call Slice
={10,11,18} U {13}
={10,11,13,18}
ADS(tp2)=ADS(tp2.a) U ADS(tp2.b)={8,11,13,17} U {10,11,13,18}={8,10,11,13,17,18}
ADS(T3.a)=ADS(tp1.a) U ADS(tp2.a) U {21}
={2,5,13,17} U {8,11,13,17} U {21}
={2,5,8,11,13,17,21}
ADS(T3.b)=ADS(tp1.b) U ADS(tp2.b) U {22}
={4,5,13,18} U {10,11,13,18} U {22}
={4,5,10,11,13,18,22}
DyanSlice(13,T3.a)=ADS(T3.a)={2,5,8,11,13,17,21}
DyanSlice(13,T3.b)=ADS(T3.b)={4,5,10,11,13,18,22}
After Execution of node-14
DyanSlice(19,T3.a)=ADS(T3.a)={2,5,8,11,13,17,21}
DyanSlice(20,T3.b)=ADS(T3.b)={4,5,10,11,13,18,22}
DyanSlice(T3)=DyanSlice(T3.a) U DyanSlice(T3.b)
={2,5,8,11,13,17,21} U {4,5,10,11,13,18,22}
={2,4,5,8,10,11,13,17,18,21,22}
After Execution of node-15
ADS(tp3.a)=ADS(T3.a) U Active Call Slice
={2,5,8,11,13,17,21} U {15}
={2,5,8,11,13,15,17,21}
ADS(tp3.b)=ADS(T3.b) U Active Call Slice
={4,5,10,11,13,18,22} U {15}
={4,5,10,11,13,15,18,22}
ADS(T4.a)=ADS(tp3.a) U {23}
={2,5,8,11,13,15,17,21} U {23}
={2,5,8,11,13,15,17,21,23}
ADS(T4.b)=ADS(tp3.b) U {24}
={4,5,10,11,13,15,18,22} U {24}
={4,5,10,11,13,15,18,22,24}
After Execution of node-16
DyanSlice(19,T4.a)=ADS(T4.a)
={2,5,8,11,13,15,17,21,23}
DyanSlice(20,T4.b)=ADS(T4.b)
={4,5,10,11,13,15,18,22,24}
DyanSlice(T4)=DyanSlice(T4.a) U DyanSlice(T4.b)
={2,5,8,11,13,15,17,21,23} U {4,5,10,11,13,15,18,22,24}
={2,4,5,8,10,11,13,15,17,18,21,22,23,24}
6 ANALYSIS OF THE ALGORITHM
The improved InterSlice algorithm also uses a collection of control dependence graphs as the intermediate program representation and computes precise dynamic slices. The space complexity of the improved InterSlice algorithm is O(n²). The time complexity of this algorithm is linear, as no extra data structures are required for this enhancement. This algorithm does not require storing the past execution history.
7 CONCLUSION
In this paper the dynamic slicing of object-oriented programs is discussed. Object-oriented programs have various features such as classes, objects, dynamic binding, polymorphism, and inheritance. The proposed algorithm addresses only classes, objects, and function overloading. Finding slices in the presence of inheritance, dynamic binding, and other features is left as future work of this research.
REFERENCES
[1] M. Weiser, Program Slicing, IEEE Trans. Software Engineering, 1984, 16(5): 498-509.
[2] H. Agrawal, et al. Debugging with Dynamic Slicing and Backtracking. Software - Practice and Experience, 1993, 23(6): 589-616.
[3] B. Korel. Application of Dynamic Slicing in Program Debugging. Third International Workshop on Automated Debugging,
1997: 59-74
[4] F. Tip. A Survey of Program Slicing Techniques, J. Programming Languages, 1995, 3(3): 121-189
[5] H. Agrawal, J. Horgan. Dynamic Program Slicing. Proceedings
of the ACM SIGPLAN’90 Conference on Programming Language Design and Implementation, 1990: 246-256
[6] J. Zhao, Dynamic Slicing of Object-Oriented Programs. Technical-Report SE-98-119, Information Processing Society of Japan, May 1998: 17-23
[7] Y. Song, D. Huynh, Forward Dynamic Object-Oriented Program Slicing, Application-Specific Systems and Software Engineering and Technology (ASSET '99), IEEE CS Press, 1999: 230 –
237
[8] B. Korel, S. Yalamanchili. Forward Derivation of Dynamic Slices. Proceedings of the International Symposium on Software
Testing and Analysis, 1994, pp. 66-79.
[9] T. Gyimóthy, et al. An Efficient Relevant Slicing Method for Debugging. Software Engineering (ESEC/FSE '99), Springer / ACM SIGSOFT Software Engineering Notes, 1999: 303-321.
[10] G.B. Mund, R. Mall, S. Sarkar,2002a. An efficient dynamic program slicing technique. Information and Software Technology
44(2),123-132.
| 1 |
arXiv:1611.02589v1 [cs.DS] 8 Nov 2016
An Optimal Ancestry Labeling Scheme
with Applications to XML Trees and Universal Posets∗
Pierre Fraigniaud
Amos Korman
CNRS and Univ. Paris Diderot
[email protected]
CNRS and Univ. Paris Diderot
[email protected]
Abstract
In this paper we solve the ancestry-labeling scheme problem which aims at assigning the
shortest possible labels (bit strings) to nodes of rooted trees, so that ancestry queries between
any two nodes can be answered by inspecting their assigned labels only. This problem was
introduced more than twenty years ago by Kannan et al. [STOC ’88], and is among the most
well-studied problems in the field of informative labeling schemes. We construct an ancestry-labeling scheme for n-node trees with label size log2 n + O(log log n) bits, thus matching the
log2 n + Ω(log log n) bits lower bound given by Alstrup et al. [SODA ’03]. Our scheme is based
on a simplified ancestry scheme that operates extremely well on a restricted set of trees. In
particular, for the set of n-node trees with depth at most d, the simplified ancestry scheme
enjoys label size of log2 n + 2 log2 d + O(1) bits. Since the depth of most XML trees is at
most some small constant, such an ancestry scheme may be of practical use. In addition, we
also obtain an adjacency-labeling scheme that labels n-node trees of depth d with labels of size
log2 n + 3 log2 d + O(1) bits. All our schemes assign the labels in linear time, and guarantee that
any query can be answered in constant time.
Finally, our ancestry scheme finds applications to the construction of small universal partially
ordered sets (posets). Specifically, for any fixed integer k, it enables the construction of a
universal poset of size Õ(nk ) for the family of n-element posets with tree-dimension at most k.
Up to lower order terms, this bound is tight thanks to a lower bound of nk−o(1) due to Alon
and Scheinerman [Order ’88].
∗ Preliminary results of this paper appeared in the proceedings of the 42nd ACM Symposium on Theory of Computing (STOC), 2010, the 21st ACM-SIAM Symposium on Discrete Algorithms (SODA), 2010, and the 21st ACM
Symposium on Parallel Algorithms and Architectures (SPAA), 2009, as part of [21, 22, 23]. This research is supported
in part by the ANR project DISPLEXITY, and by the INRIA project GANG.
1 Introduction
1.1 Background and motivation
How to represent a graph in a compact manner is a fundamental data structure question. In most
traditional graph representations, the names (or identifiers) given to the nodes serve merely as
pointers to entries in a data structure, and thus reveal no information about the graph structure
per se. Hence, in a sense, some memory space is wasted for the storage of content-less data. In
contrast, Kannan et al. [36] introduced the notion of informative labeling schemes, which involves
a mechanism for assigning short, yet informative, labels to nodes. Specifically, the goal of such
schemes is to assign labels to nodes in such a way that allows one to infer information regarding any
two nodes directly from their labels. As explicated below, one important question in the framework
of informative labeling schemes is how to efficiently encode the ancestry relation in trees. This is
formalized as follows.
The ancestry-labeling scheme problem: Given any n-node rooted tree T , label the nodes
of T using the shortest possible labels (bit strings) such that, given any pair of nodes u and v in T ,
one can determine whether u is an ancestor of v in T by merely inspecting the labels of u and v.
The following simple ancestry-labeling scheme was suggested in [36]. Given a rooted n-node
tree T , perform a DFS traversal in T starting at the root, and provide each node u with a DFS
number, dfs(u), in the range [1, n]. (Recall, in a DFS traversal, a node is visited before any
of its children, thus, the DFS number of a node is smaller than the DFS number of any of its
descendants). The label of a node u is simply the interval I(u) = [dfs(u), dfs(u′)], where u′ is
the descendant of u with the largest DFS number. An ancestry query then amounts to an interval
containment query between the corresponding labels: a node u is an ancestor of a node v if and
only if I(v) ⊆ I(u). Clearly, the label size, namely, the maximal number of bits in a label assigned
by this ancestry-labeling scheme to any node in any n-node tree, is bounded by 2 log n bits.¹
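As a concrete illustration of this classical interval scheme of [36] (not of the scheme developed later in this paper), the following C++ sketch assigns the pair (dfs(u), dfs(u′)) to every node by a single DFS traversal and answers ancestry queries by interval containment. The representation of the tree by children lists and all identifiers are our own choices for the example.

#include <utility>
#include <vector>

struct IntervalLabeler {
    const std::vector<std::vector<int>>& children;  // children[v] = children of node v
    std::vector<std::pair<int, int>> label;         // label[v] = [dfs(v), largest dfs in v's subtree]
    int counter = 0;

    explicit IntervalLabeler(const std::vector<std::vector<int>>& ch)
        : children(ch), label(ch.size()) {}

    // Assign DFS intervals to the subtree rooted at v (a node is visited before its children).
    void assign(int v) {
        label[v].first = ++counter;
        label[v].second = label[v].first;
        for (int c : children[v]) {
            assign(c);
            label[v].second = label[c].second;   // the last descendant has the largest DFS number
        }
    }
};

// u is an ancestor of v if and only if I(v) is contained in I(u).
bool isAncestor(const std::pair<int, int>& Iu, const std::pair<int, int>& Iv) {
    return Iu.first <= Iv.first && Iv.second <= Iu.second;
}

Each label stores two integers in [1, n], which is the source of the 2 log n bits bound just mentioned.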
The 2 log n bits scheme of [36] initiated an extensive research [1, 3, 11, 34, 35, 43] whose goal
was to reduce the label size of ancestry-labeling schemes as much as possible. The main motivation
behind these works lies in the fact that a small improvement in the label size of ancestry-labeling
schemes may contribute to a significant improvement in the performances of XML search engines.
Indeed, to implement sophisticated queries, XML documents are viewed as labeled trees, and
typical queries over the documents amount to testing relationships between document items, which
correspond to ancestry queries among the corresponding tree nodes [2, 18, 48, 49]. XML search
engines process such queries using an index structure that summarizes this ancestry information. To
allow good performances, a large portion of the XML index structure resides in the main memory.
Hence, the length of the labels is a main factor which determines the index size. Thus, due to the
enormous size of the Web data, even a small reduction in the label size may contribute significantly
to both memory cost reduction and performance improvement. A detailed explanation regarding
this application can be found in various papers on ancestry-labeling schemes (see, e.g., [1, 35]).
In [5], Alstrup et al. proved a lower bound of log n + Ω(log log n) bits for the label size of an
ancestry-labeling scheme. On the other hand, thanks to a scheme by Abiteboul et al. [1], the
1. All logarithms in this paper are taken in base 2.
current state of the art upper bound is log n + O(√log n) bits. Thus, a large gap is still left between
the best known upper and lower bounds on the label size. The main result of this paper closes the
gap. This is obtained by constructing an ancestry-labeling scheme whose label size matches the
aforementioned lower bound.
Our scheme is based on a simplified ancestry scheme that operates extremely well on a restricted
set of trees. In particular, for the set of n-node trees with depth at most d, the simplified ancestry
scheme enjoys label size of log2 n + 2 log2 d + O(1) bits. This result can be of independent interest
for XML search engines, as a typical XML tree has extremely small depth (cf. [14, 17, 39, 38]). For
example, by examining about 200,000 XML documents on the Web, Mignet et al. [38] found that
the average depth of an XML tree is 4, and that 99% of the trees have depth at most 8. Similarly,
Denoyer and Gallinari [17] collected about 650,000 XML trees taken from the Wikipedia collection2 ,
and found that the average depth of a node is 6.72.
In addition, our ancestry-labeling scheme on arbitrary trees finds applications in the context
of universal partially ordered sets (posets). Specifically, the bound on the label size translates to
an upper bound on the size of the smallest universal poset for the family of all n-element posets
with tree-dimension at most k (see Section 2 for the definitions). It is not difficult to show that the
smallest size of such a universal poset is at most n^{2k}. On the other hand, it follows from a result
by Alon and Scheinerman [12] that this size is also at least n^{k−o(1)}. As we show, it turns out that
the real bound is much closer to this lower bound than to the n^{2k} upper bound.
1.2 Related work
1.2.1 Labeling schemes
As mentioned before, following the 2 log n-bit ancestry-labeling scheme in [36], a considerable
amount of research has been devoted to improve the upper bound on the label size as much as
possible. Specifically, [3] gave a first non-trivial upper bound of (3/2) log n + O(log log n) bits. In [34],
a scheme with label size log n + O(d√log n) bits was constructed to detect ancestry only between
nodes at distance at most d from each other. An ancestry-labeling scheme with label size of
log n + O(log n / log log n) bits was given in [43]. The current state of the art upper bound of
log n + O( log n) bits was given in [11] (that scheme is described in detail in the journal publication [1] joint with [3]). Following the aforementioned results on ancestry-labeling schemes for
general rooted trees, [35] gave an experimental comparison of different ancestry-labeling schemes
over XML tree instances that appear in “real life”.
The ancestry relation is the transitive closure of the parenthood relation. Hence, the following
parenthood-labeling scheme problem is inherently related to the ancestry-labeling scheme problem:
given a rooted tree T , label the nodes of T in the most compact way such that one can determine
whether u is a parent of v in T by merely inspecting the corresponding labels. The parenthoodlabeling scheme problem was also introduced in [36], and a very simple parenthood scheme was
constructed there, using labels of size at most 2 log n bits. (Actually, [36] considered adjacencylabeling schemes in trees rather than parenthood-labeling schemes, however, such schemes are
2
XML trees taken from the Wikipedia collection have actually relatively larger depth compared to usual XML
trees [16].
2
equivalent up to a constant number of bits in the label size3 ). By now, the parenthood-labeling
scheme problem is almost completely closed thanks to Alstrup and Rauhe [10], who constructed a
parenthood scheme for n-node trees with label size log n + O(log∗ n) bits. In particular, this bound
indicates that encoding ancestry in trees is strictly more costly than encoding parenthood.
Adjacency labeling schemes where studied for other types of graphs, including, general graphs [9],
bounded degree graphs [4], and planar graphs [24]. Informative labeling schemes were also proposed for other graph problems, including distance [5, 26, 42], routing [20, 43], flow [31, 33], vertex
connectivity [27, 31, 32], and nearest common ancestor in trees [6, 8, 40].
Very recently, Dahlgaard et al. [15] and Alstrup et al. [7] claim to provide asymptotically
optimal schemes for the ancestry problem and the adjacency problem on trees, respectively.
1.2.2 Universal posets
When considering infinite posets, it is known that a countable universal poset for the family of all
countable posets exists. This classical result was proved several times [19, 29, 30] and, in fact, as
mentioned in [28], has motivated the whole research area of category theory.
We later give a simple relation between the label size of consistent ancestry-labeling schemes
and the size of universal posets for the family of all n-element posets with tree-dimension at most k
(see Section 2 for the corresponding definitions). The 2 log n-bit ancestry-labeling scheme of [36] is
consistent, and thus it provides yet another piece of evidence for the existence of a universal poset with n^{2k}
elements for the family of all n-element posets with tree-dimension at most k. It is not clear whether
the ancestry-labeling schemes in [3, 11, 34, 43] can be somewhat modified to be consistent and still
maintain the same label size. However, even if this is indeed the case, the universal poset for the
family of all n-element posets with
√ tree-dimension at most k that would be obtained from those
schemes, would be of size Ω(nk 2k log n ).
The lower bound of [5] implies a lower bound of Ω(n log n) for the number of elements in a
universal poset for the family of n-element posets with tree-dimension 1. As mentioned earlier,
for fixed k > 1, the result of Alon and Scheinerman [12] implies a lower bound of nk−o(1) for the
number of elements in a universal poset for the family of n-element posets with tree-dimension at
most k.
1.3 Our contributions
The main result of this paper provides an ancestry-labeling scheme for n-node rooted trees, whose
label size is log n + O(log log n) bits. This scheme assigns the labels to the nodes of any tree in
linear time and guarantees that any ancestry query is answered in constant time. By doing this,
we solve the ancestry-labeling scheme problem which is among the main open problems in the field
of informative labeling schemes.
3. To see this equivalence, observe that one can construct a parenthood-labeling scheme from an adjacency-labeling
scheme in trees, as follows. Given a rooted tree T , first label the nodes of T using the adjacency-labeling scheme
(which ignores the fact that T is rooted). Then, for each node u, in addition to the label given to it by the adjacencylabeling scheme, add two more bits, for encoding d(u), the distance from u to the root, calculated modulo 3. Now
the parenthood-labeling scheme follows by observing that for any two nodes u and v in a tree, u is a parent of v if
and only if u and v are adjacent and d(u) = d(v) − 1 modulo 3.
Our main scheme is based on a simplified ancestry scheme that is particularly efficient on a
restricted set of trees, which includes the set of n-node trees with depth at most d. For such trees,
the simplified ancestry scheme enjoys label size of log2 n + 2 log2 d + O(1) bits. A simple trick allows
us to use this latter ancestry-labeling scheme for designing a parenthood-labeling scheme for n-node
trees of depth at most d using labels of size log n + 3 log d + O(1) bits. Each of these two schemes
assigns the labels to the nodes of any tree in linear time. The schemes also guarantee that the
corresponding queries are answered in constant time.
Our schemes rely on two novel tree-decompositions. The first decomposition, called spine
decomposition, bears similarities with the classical heavy-path decomposition of Sleator and Tarjan [41]. It is used for the construction of our simplified ancestry-labeling scheme. Our main
ancestry-labeling scheme uses another tree-decomposition, called folding decomposition. The spine
decomposition of the folding decomposition of any tree has a crucial property, that is heavily
exploited in the construction of our main labeling scheme.
Finally, we establish a simple relation between compact ancestry-labeling schemes and small
universal posets. Specifically, we show that there exists a consistent ancestry-labeling scheme for
n-node forests with label size ℓ if and only if, for any integer k ≥ 1, there exists a universal poset
with 2^{kℓ} elements for the family of n-element posets with tree-dimension at most k. Using this
equivalence, and slightly modifying our ancestry-labeling scheme, we prove that for any integer k,
there exists a universal poset of size Õ(n^k) for the family of all n-element posets with tree-dimension
at most k. Up to lower order terms4 , this bound is tight.
1.4 Outline
Our paper is organized as follows. Section 2 provides the essential definitions, including the definition of the spine decomposition. In Section 3 we describe our labeling schemes designed for a
restricted family of trees, which includes trees of bounded depth. The main result regarding the
construction of the optimal ancestry-labeling scheme is presented in Section 4. Our result concerning small universal posets appears in Section 5. Finally, in Section 6, we conclude our work and
introduce some directions for further research on randomized labeling schemes.
2 Preliminaries
Let T be a rooted tree, i.e., a tree with a designated node r referred as the root of T . A rooted
forest is a forest consisting of several rooted trees. The depth of a node u in some (rooted) tree T
is defined as the smallest number of nodes on the path leading from u to the root. In particular,
the depth of the root is 1. The depth of a rooted tree is defined as the maximum depth over all its
nodes, and the depth of a rooted forest is defined as the maximum depth over all the trees in the
forest.
For two nodes u and v in a rooted tree T , we say that u is an ancestor of v if u is one of the
nodes on the shortest path in T connecting v and the root r. (An ancestor of v can be v itself;
Whenever we consider an ancestor u of a node v, where u 6= v, we refer to u as a strict ancestor
4. The Õ notation hides polylogarithmic terms.
of v). For two nodes u and v in some (rooted) forest F , we say that u is an ancestor of v in F if
and only if u and v belong to the same rooted tree in F , and u is an ancestor of v in that tree.
A node v is a descendant of u if and only if u is an ancestor of v. For every non-root node u, let
parent(u) denote the parent of u, i.e., the ancestor of u at distance 1 from it.
The size of T , denoted by |T |, is the number of nodes in T . The weight of a node u ∈ T , denoted
by weight(u), is defined as the number of descendants of u, i.e., weight(u) is the size of the subtree
hanging down from u. In particular, the weight of the root is weight(r) = |T |.
For every integer n, let T (n) denote the family of all rooted trees of size at most n, and let
F(n) denote the family of all forests of rooted trees, were each forest in F(n) has at most n nodes.
For two integers a ≤ b, let [a, b] denote the set of integers {a, a + 1, · · · , b}. (For a < b, we
sometimes use the notation [a, b) which simply denotes the set of integers {a, a + 1, · · · , b − 1}). We
refer to this set as an interval. For two intervals I = [a, b] and I 0 = [a0 , b0 ], we say that I ≺ I 0 if
b < a0 . The size of an interval I = [a, b] is |I| = b − a + 1, namely, the number of integers in I.
2.1 The spine decomposition
Our ancestry scheme uses a novel decomposition of trees, termed the spine decomposition (see
Figure 1). This decomposition bears similarities to the classical heavy-path decomposition of Sleator
and Tarjan [41]. Informally, the spine decomposition is based on a path called spine which starts
from the root, and goes down the tree along the heavy-path until reaching a node whose heavy
child has less than half the number of nodes in the tree. This is in contrast to the heavy-path which
goes down until reaching a node v whose heavy child has less than half the number of nodes in the
subtree rooted at v. Note that the length of a spine is always at most the length of the heavy-path
but can be considerably smaller. (For example, by augmenting a complete binary tree making it
slightly unbalanced, one can create a tree with heavy-path of length Ω(log n) while its spine is of
length O(1).)
Formally, given a tree T in some forest F , we define the spine of T as the following path S.
Assume that each node v holds its weight ω(v) (these weights can easily be computed in linear
time). We define the construction of S iteratively. In the ith step, assume that the path S contains
the vertices v1 , v2 , · · · vi , where v1 is the root r of T and vj is a child of vj−1 , for 1 < j ≤ i. If the
weight of a child of vi is more than half the weight of the root r then this child is added to S as
vi+1 . (Note, there can be at most one such child of vi .) Otherwise, the construction of S stops.
(Note that the spine may consist of only one node, namely, the root of T .) Let v1 , v2 , · · · , vs be the
nodes of the spine S (Node v1 is the root r, and vs is the last node added to the spine). It follows
from the definition that if 1 ≤ i < j ≤ s then vi is a strict ancestor of vj . The size of the spine S
is s. We split the nodes of the spine S to two types. Specifically, the root of T , namely v1 , is called
the apex node, while all other spine nodes, namely, v2 , v3 , · · · , vs , are called heavy nodes. (Recall
that the weight of each heavy node is larger than half the weight of the apex node).
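A minimal C++ sketch of this spine computation is given below, assuming the subtree sizes weight(v) have already been computed (as noted above, this takes linear time); the function name and the representation of the tree by children lists are ours, not the paper's.

#include <vector>

// Returns the spine v1, v2, ..., vs of the tree rooted at `root`: v1 is the root (the apex
// node), and each subsequent node is the unique child whose weight exceeds half the
// weight of the root. The remaining spine nodes are the heavy nodes.
std::vector<int> spine(int root,
                       const std::vector<std::vector<int>>& children,
                       const std::vector<int>& weight) {
    std::vector<int> s = {root};
    int v = root;
    for (;;) {
        int next = -1;
        for (int c : children[v]) {
            if (2 * weight[c] > weight[root]) { next = c; break; }  // at most one such child
        }
        if (next == -1) break;
        s.push_back(next);
        v = next;
    }
    return s;
}

Applying the same procedure recursively to the forests F1, ..., Fs that remain after removing the spine yields the full decomposition; as noted below (property P2), the sizes of the trees at least halve from one level of the recursion to the next.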
By removing the nodes in the spine S (and the edges connected to them), the tree T breaks
into s forests F1 , F2 , · · · , Fs , such that the following properties holds for each 1 ≤ i ≤ s:
• P1. In T , the roots of the trees in Fi are connected to vi ;
• P2. Each tree in Fi contains at most |T |/2 nodes;
Figure 1: Spine decomposition
• P3. The forests Fi are unrelated in terms of the ancestry relation in T .
The spine decomposition is constructed iteratively, where each level of the process follows the
aforementioned description. That is, given a forest F , after specifying the spine S of each tree T in
F , we continue to the next level of the process, operating in parallel on the forests F1 , F2 , · · · , Fs .
The recursion implies that each node is eventually classified as either apex or heavy. The depth of
the spine decomposition of a forest F , denoted Spine-Depth(F ) is the maximal size of a spine,
taken over all spines obtained in the spine decomposition of F . Note that Spine-Depth(F ) is
bounded from above by the depth of F .
For any two integers n and d, let F(n, d) denote the set of all rooted forests with at most n
nodes, whose spine decomposition depth is at most d.
2.2 Ancestry labeling schemes
An ancestry-labeling scheme (M, D) for a family F of forests of rooted trees is composed of the
following components:
1. A marker algorithm M that assigns labels (i.e., bit strings) to the nodes of all forests in F.
2. A decoder algorithm D that given any two labels `1 and `2 in the output domain of M, returns
a boolean value D(`1 , `2 ).
These components must satisfy that if L(u) and L(v) denote the labels assigned by the marker
algorithm to two nodes u and v in some rooted forest F ∈ F, then
D(L(u), L(v)) = 1 ⇐⇒ u is an ancestor of v in F .
It is important to note that the decoder algorithm D is independent of the forest F . That is,
given the labels of two nodes, the decoder algorithm decides the ancestry relationship between the
corresponding nodes without knowing to which forest in F they belong.
The most common complexity measure used for evaluating an ancestry-labeling scheme is the
label size, that is, the maximum number of bits in a label assigned by M, taken over all nodes
in all forests in F. When considering the query time of the decoder algorithm, we use the RAM
model of computation, and assume that the length of a computer word is Θ(log n) bits. Similarly to
previous works on ancestry-labeling schemes, our decoder algorithm uses only basic RAM operations
(which are assumed to take constant time). Specifically, the basic operations used by our decoder
algorithm are the following: addition, subtraction, multiplication, division, left/right shifts, lessthan comparisons, and extraction of the index of the least significant 1-bit.
Let F be a family of forests of rooted trees. We say that an ancestry-labeling scheme (M, D) for
F is consistent if the decoder algorithm D satisfies the following conditions, for any three pairwise
different labels `1 , `2 and `3 in the output domain of M:
• Anti-symmetry: if D(`1 , `2 ) = 1 then D(`2 , `1 ) = 0, and
• Transitivity: if D(`1 , `2 ) = 1 and D(`2 , `3 ) = 1 then D(`1 , `3 ) = 1.
Note that by the definition of an ancestry-labeling scheme (M, D), the decoder algorithm D
trivially satisfies the two conditions above if `i = L(ui ) for i = 1, 2, 3, and u1 , u2 and u3 are different
nodes belonging to the same forest in F.
2.3 Small universal posets
The size of a partially ordered set (poset) is the number of elements in it. A poset (X, ≤X ) contains
a poset (Y, ≤Y ) as an induced suborder if there exists an injective mapping φ : Y → X such that
for any two elements a, b ∈ Y : we have
a ≤Y b ⇐⇒ φ(a) ≤X φ(b).
A poset (X, ≤) is called universal for a family of posets P if (X, ≤) contains every poset in P as an
induced suborder. If (X, ≤) and (X, ≤0 ) are orders on the set X, we say that (X, ≤0 ) is an extension
of (X, ≤) if, for any two elements x, y ∈ X,
x ≤ y =⇒ x ≤0 y.
A common way to characterize a poset (X, ≤) is by its dimension, that is, the smallest number of
linear (i.e., total order) extensions of (X, ≤) the intersection of which gives rise to (X, ≤) [44]. The
following fact is folklore, and its proof straightforward (this proof is however stated for the sake of
completeness).
Fact 1 The smallest size of a universal poset for the family of n-element posets with dimension at
most k is at most nk .
Proof. Let ≤ be the natural total order defined on the set of integers. We present a universal
poset (U, ⪯) for the family of n-element posets with dimension at most k. The set of elements U is
U = [1, n]^k = {u = (u1 , u2 , · · · , uk ) | ui ∈ [1, n] for all i ∈ [1, k]},
and the relation ⪯ is defined for two elements u, v ∈ U by:
u ⪯ v ⇐⇒ ui ≤ vi , ∀i ∈ [1, k].
Clearly U has n^k elements. Now consider any n-element poset (X, E) with dimension at most k.
For i ∈ [1, k], let (Li , ≤i ) be the total orders the intersection of which gives rise to (X, E). By the
definition of intersection, there exists a collection of injective mappings ψi : X → Li such that for
any two elements x, y ∈ X, we have
x E y ⇐⇒ ψi (x) ≤i ψi (y), ∀i ∈ [1, k].
For every i ∈ [1, k], since (Li , ≤i ) is a total order, it is isomorphic to ([1, n], ≤), that is, there exists
an injective and onto mapping φi : Li → [1, n] such that for a, b ∈ Li , we have
a ≤i b ⇐⇒ φi (a) ≤ φi (b).
We define the mapping f : X → U so that for any x ∈ X, we have the ith coordinate f (x)i ∈ [1, n]
of f (x) be defined as f (x)i = φi ◦ ψi (x). The fact that f preserves the order E, i.e., the fact that,
for every x, y ∈ X,
x E y ⇐⇒ f (x) f (y).
is now immediate.
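To make the construction above concrete, the following small Python sketch (entirely ours, with invented names, purely for illustration) realizes the universal poset U = [1, n]^k under the componentwise order, and the embedding f obtained from k linear extensions of a given poset.

# Sketch (our own illustration): the universal poset U = [1, n]^k under the
# componentwise order, and the embedding of a poset given by k linear extensions.

def leq_U(u, v):
    # u, v are k-tuples with entries in [1, n]; u <= v iff u_i <= v_i for all i.
    return all(ui <= vi for ui, vi in zip(u, v))

def embed(elements, linear_extensions):
    # 'linear_extensions' is a list of k lists, each a total order on 'elements'
    # whose intersection is the original partial order.  Element x is mapped to
    # the k-tuple of its (1-based) positions in the k linear extensions.
    return {x: tuple(ext.index(x) + 1 for ext in linear_extensions)
            for x in elements}

# Tiny example: the 2-dimensional poset a < c, b < c with a, b incomparable.
elements = ["a", "b", "c"]
ext1, ext2 = ["a", "b", "c"], ["b", "a", "c"]   # their intersection gives the poset
f = embed(elements, [ext1, ext2])
assert leq_U(f["a"], f["c"]) and leq_U(f["b"], f["c"])
assert not leq_U(f["a"], f["b"]) and not leq_U(f["b"], f["a"])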
Another way of characterizing a poset (X, ≤) is by its tree-dimension. A poset (X, ≤) is a tree5
[13, 46] if, for every pair x and y of incomparable elements in X, there does not exist an element
z ∈ X such that x ≤ z and y ≤ z. (Observe that the Hasse diagram [44] of a tree poset is a forest of
rooted trees). The tree-dimension [13] of a poset (X, ≤) is the smallest number of tree extensions
of (X, ≤) the intersection of which gives rise to (X, ≤).
For any two positive integers n and k, let P(n, k) denote the family of all n-element (non-isomorphic) posets with tree-dimension at most k. The following fact follows rather directly from
previous work.
Fact 2 Fix an integer k and let M (n) denote the smallest size of a universal poset for P(n, k). We
have n^{k−o(1)} ≤ M(n) ≤ n^{2k}.
Proof. The fact that the smallest size of a universal poset for P(n, k) is at most n^{2k} follows
from Fact 1, and from the well-known fact that the dimension of a poset is at most twice its
tree-dimension6. For the other direction, Alon and Scheinerman showed that the number of non-isomorphic n-element posets with dimension at most k is at least n^{n(k−o(1))}/n! (this result is explicit
5 Note that the term “tree” for ordered sets is used in various senses in the literature, see e.g., [45].
6 This follows from the fact that a tree-poset T = (X, ⪯) has dimension at most 2. Indeed, consider the two linear
orders for T obtained as follows. We perform two DFS traversals over the Hasse diagram of T, which is a directed
forest F, starting from the root of each tree in F, so as to provide every element x with two DFS numbers, dfs1(x)
and dfs2(x). DFS1 is arbitrary, and DFS2 reverses the order in which the trees are considered in DFS1, and in which
the children are visited in DFS1; then x ⪯ y if and only if dfs1(x) ≤ dfs1(y) and dfs2(x) ≤ dfs2(y).
in the proof of Theorem 1 in [12]). Since the dimension of a poset is at least its tree-dimension,
this result of [12] yields also a lower bound on the number of non-isomorphic n-element posets with
tree-dimension at most k; specifically, we have
n^{n(k−o(1))}/n! ≤ |P(n, k)|.
On the other hand,
|P(n, k)| ≤ (M(n) choose n) ≤ M(n)^n / n!,
where the first inequality holds by definition of M(n) (every poset in P(n, k) is realized as the induced suborder on some n-element subset of a universal poset of size M(n)), and the second is immediate. Therefore, by combining the above two inequalities, it directly follows that
M(n) ≥ n^{k−o(1)}.
3 Labeling schemes for forests with bounded spine decomposition depth
In this section we construct an efficient ancestry-labeling scheme for forests with bounded spine
decomposition depth. Specifically, for forests with spine decomposition depth at most d, our scheme
enjoys label size of log n + 2 log d + O(1) bits. (Note that the same bound holds also for forests with
depth at most d.) Moreover, our scheme has O(1) query time and O(n) construction time.
3.1 Informal description
Let us first explain the intuition behind our construction. Similarly to the simple ancestry scheme
in [36], we map the nodes of forests to a set of intervals I, in a way that relates the ancestry relation
in each forest with the partial order defined on intervals through containment. I.e., a label of a
node is simply an interval, and the decoder decides the ancestry relation between two given nodes
using the interval containment test on the corresponding intervals. While the number of intervals
used for the scheme in [36] is O(n^2), we managed to show that, if we restrict our attention to forests
with spine decomposition depth bounded by d, then one can map the set of such forests to a set
of intervals I, whose size is only |I| = O(nd^2). Since a label is a pointer to an interval in I, the
bound of log n + 2 log d + O(1) bits for the label size follows. In fact, we actually manage to provide
an explicit description of each interval, still using log n + 2 log d + O(1) bits, so as to achieve
constant query time.
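As a minimal illustration of this decoding principle (the function below is our own sketch, not part of the formal scheme), the decoder only needs a containment test on half-open intervals; all of the difficulty lies in assigning the intervals.

# Sketch: ancestry decoding by interval containment.  A label is an interval
# [lo, hi); u is decoded as an ancestor of v iff the interval of v is
# contained in the interval of u.
def is_ancestor(label_u, label_v):
    lo_u, hi_u = label_u
    lo_v, hi_v = label_v
    return lo_u <= lo_v and hi_v <= hi_u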
3.1.1 Intuition
Let F(n, d) be the family of all forests with at most n nodes and spine decomposition depth at
most d. The challenge of mapping the nodes of forests in F(n, d) to a small set of intervals I
is tackled recursively, where the recursion is performed over the number of nodes. That is, for
k = 1, 2, · · · , log n, level k of the recursion deals with forests of size at most 2^k. When handling the
next level of the recursion, namely level k + 1, the difficult case is when we are given a forest F
containing a tree T of size larger than 2^k, i.e., 2^k < |T| ≤ 2^{k+1}. Indeed, trees in F of size at most
2^k are essentially handled at level k of the recursion. To map the nodes of tree T, we use the spine
decomposition (see Subsection 2.1).
Recall the spine S = (v1 , . . . , vs ) of T and the forests F1 , F2 , · · · , Fs , obtained by removing S
from T . Broadly speaking, Properties P2 and P3 of the spine decomposition give hope that the
forests Fi , i = 1, 2, · · · , s, could be mapped relying on the previous level k of the recursion. Once we
guarantee this, we map the s nodes of the spine S in a manner that respects the ancestry relations.
That is, the interval associated with a spine node vi must contain all intervals associated with
descendants of vi in T , which are, specifically, all the spine nodes vj , for j > i, as well as all nodes
in Fj , for j ≥ i. Fortunately, the number of nodes on the spine is s ≤ d, hence we need to deal
with only a few such nodes.
The intervals in I are classified into log n levels. These interval levels correspond to the levels of
the recursion in a manner to be described. Level k of the recursion maps forests (of size at most 2^k)
into Ik, the set of intervals of level at most k. In fact, even in levels of recursion higher than k,
the nodes in forests containing only trees of size at most 2^k are mapped into Ik. (In particular, a
forest consisting of n singleton nodes is mapped into I1.) Regarding level k + 1, a forest of size at
most 2^{k+1} contains at most one tree T, where 2^k < |T| ≤ 2^{k+1}. In such a case, the nodes on the
spine S of T are mapped to level-(k + 1) intervals, and the forests F1, F2, · · · , Fs are mapped to Ik.
As mentioned before, to have the ancestry relations in T correspond to the inclusion relations
in I, the level-(k + 1) interval I(vi ) to which some spine node vi is mapped, must contain the
intervals associated with nodes which are descendants of vi in T . In particular, I(vi ) must contain
all the intervals associated with the nodes in the forests Fi, Fi+1, · · · , Fs. Since the number of such
intervals is at least Σ_{j=i}^{s} |Fj| (note, this value can be close to 2^{k+1}), the length of I(vi) must be
relatively large. Moreover, since level-1 intervals are many (at least n, because they need to be
sufficiently many to handle a forest containing n singleton nodes), and since I contains all level-k
intervals, for log n values of k, we want the number of level-k intervals to decrease with k, so that
the total number of intervals in I will remain small (recall, we would like to have |I| = O(nd^2)).
Summing up the discussion above, in comparison with the set of level-k intervals, we would like
the set of level-(k + 1) intervals to contain fewer but wider intervals.
Example 1 Let us consider the example depicted in Figure 2. We have a tree T of size roughly
2^{k+1}, with two spine nodes v1 and v2 and two corresponding forests F1 and F2. We would like
to map v2 to some interval I(v2 ) and map all nodes in F2 to intervals contained in I(v2 ). In
addition, we would like to map v1 to some interval I(v1 ) containing I(v2 ), and map all nodes in F1
to intervals contained in I(v1 ) \ I(v2 ).
The mapping in the above example can be done using the method in [36]. Specifically, v1
is mapped to I(v1 ) = [1, n], and v2 is mapped to I(v2 ) = [n − |F2 | + 1, n]. The nodes of F2
are mapped to intervals, all of which are contained in I(v2 ), and the nodes of F1 are mapped to
intervals which are contained in I(v1 ) \ I(v2 ). Note that this scheme guarantees that all intervals
are contained in [1, n]. One of the crucial properties making this mapping possible is the fact
that the interval I(v2 ) = [n − |F2 | + 1, n] exists in the collection of intervals used in [36], for all
possible sizes of F2 . Unfortunately, this property requires many intervals of level k + 1, which is
undesirable (the scheme in [36] uses n^2 intervals in total). In a sense, restricting the number of
level-(k + 1) intervals costs us, for example, the inability to use an interval I(v2 ) that precisely
covers the set of intervals associated with F2. In other words, in some cases, I(v2) must strictly
contain I(F2) := ∪_{v∈F2} I(v).

Figure 2: Illustration of Example 1

Figure 3: A level-k interval Ik,a,b = [a xk, (a + b) xk) within [1, N]

In particular, we cannot avoid having |I(v2)| ≥ x + |I(F2)|, for some
(perhaps large) positive x. In addition, the nodes in F1 must be mapped to intervals contained
in some range that is outside of I(v2) (say, to the left of the interval I(v2)), and node v1 must be
mapped to an interval I(v1) that contains all these intervals, as well as I(v2). Hence, we cannot
avoid having |I(v1)| ≥ x + x′ + |I(F2)| + |I(F1)|, for positive integers x and x′. Therefore, the
total slack (in this case, coming from x and x′) does not only propagate over the s spine nodes,
but also propagates up the levels. One of the artifacts of this propagation is the fact that we can
no longer guarantee that all intervals are contained in [1, n] (as guaranteed by the scheme of [36]).
Somewhat surprisingly, we still manage to choose the parameters to guarantee that all intervals in
I are contained in the range [1, N ], where N = O(n).
Being slightly more formal, we introduce a hierarchy of intervals called bins. A bin J of level
k is an interval of length ck 2^k, i.e., |J| = ck 2^k, for some value ck to be described. Intuitively,
the length ck 2^k corresponds to the smallest length of a bin J for which our scheme enables the
proper mapping of any forest of size at most 2^k to J. It is important to note that this property
is shift-invariant, that is, no matter where in [1, N] this bin J is, the fact that its length is at
least ck 2^k should guarantee that it can potentially contain all intervals associated with a forest of
size at most 2^k. Because of the aforementioned unavoidable (non-negligible) slack that propagates
up the levels, we must allow ck to increase with k.
Figure 4: Overview of the bins and intervals assignment in level k + 1
3.1.2 The intuition behind the tuning of the parameters
We now describe the set of intervals I, and explain the intuition behind the specific choice of
parameters involved. Consider a level k, and fix a resolution parameter xk for interval-level k, to
be described later. Let Ak ≈ N/xk and Bk ≈ ck 2^k /xk. The level-k intervals are essentially all
intervals in [1, N] which are of the form:
Ik,a,b = [a xk, (a + b) xk), where a ∈ [1, Ak] and b ∈ [1, Bk].     (1)
See Figure 3. The resolution parameter xk is chosen to be monotonically increasing with k in a
manner that will guarantee fewer intervals of level k, as k is increasing. Moreover, the largest
possible length of an interval of level k is xk Bk = ck 2^k, which is the length of a bin sufficient to
accommodate the intervals of a tree of size at most 2^k. This length is monotonically increasing
with the level k, as desired.
Consider now a bin J of length ck+1 2^{k+1} located somewhere in [1, N]. This bin J should suffice
for the mapping of a tree T of size 2^{k+1}. By executing the spine decomposition, we obtain the spine
nodes v1 , v2 , · · · , vs and the forests F1 , F2 , · · · Fs (see Figure 1). We allocate a level-(k + 1) interval
I(vi ) to each spine node vi , and a bin Ji ⊆ J to each forest Fi , i = 1, . . . , s, in the same spirit as
we did in the specific Example 1 (see Figure 2).
The general allocation is illustrated in Figure 4. Since I(v1 ) is of the form Ik+1,a,b , and should
be included in Bin J, and since this interval I(v1 ) must contain all intervals assigned to nodes in F1 ,
Bin J1 is chosen to start at the leftmost multiple of xk+1 in Bin J. Note that F1 contains trees of
size at most 2^k each. Hence, by induction on k, each of these trees, T′, can be properly mapped to
any bin of size ck |T′|. Therefore, setting J1 of size ck |F1| suffices to properly map all trees in F1.
The bin J2, of size ck |F2|, is then chosen to start at the leftmost multiple of xk+1 to the right of the
end point of J1, in the bin J. And so on: we set Ji of size ck |Fi|, and place it in J so that it starts
at the leftmost multiple of xk+1 to the right of the end point of Ji−1 , 1 < i ≤ s. The level-(k + 1)
intervals associated to the spine nodes are then set as follows. For i = 1, . . . , s, the interval I(vi )
starts at the left extremity of Ji (which is a multiple of the resolution xk+1 ). All these intervals end
at the same point in [1, N ], which is chosen as the leftmost multiple of xk+1 to the right of Js , in J.
Putting the right end-point of J at the point where all the intervals of spine nodes end, suffices to
guarantee that J includes all the intervals I(vi ), and all the bins Ji , for i = 1, . . . , s.
Observe that the length of I(vs ) must satisfy |I(vs )| ≈ ck |Fs | + xk+1 , where the slack of xk+1
comes from the fact that the interval must start at a multiple of the resolution xk+1 . More generally,
for 1 ≤ i < s, the length of I(vi) must satisfy
|I(vi)| ≈ ck |Fi| + xk+1 + |I(vi+1)| ≈ ck · Σ_{j=i}^{s} |Fj| + (s − i + 1) · xk+1 .
Therefore, the length of I(v1) must satisfy |I(v1)| ≈ ck · Σ_{j=1}^{s} |Fj| + s · xk+1 ≈ ck · 2^{k+1} + s · xk+1 .
Now, since J may start at any point between two multiples of the resolution xk+1, we eventually get
that setting the bin J to be of length |J| ≈ ck · 2^{k+1} + (s + 1) · xk+1 suffices. Since s can be at most
the spine decomposition depth d, we must have |J| be approximately ck+1 2^{k+1} ≈ ck 2^{k+1} + d · xk+1.
To agree with the latter approximation, we choose the values of ck so that:
ck+1 − ck ≈ d · xk+1 / 2^{k+1} .     (2)
Ultimately, we would like to map the whole n-node forest to a bin of size c_{log n} · n. This bin must fit
into [1, N], hence the smallest value N that we can choose is c_{log n} · n. Since we also want the value
of N to be linear in n, we choose the ck ’s so that c_{log n} = O(1). Specifically, for k = 1, 2, · · · , log n,
we set
ck ≈ Σ_{j=1}^{k} 1/j^{1+ε}
for some small ε > 0. Note that ck ≥ 1 for each k, and the ck ’s are increasing with k. Moreover, we
take ε > 0 so that the sum Σ_{j=1}^{∞} 1/j^{1+ε} converges. Hence, all the ck ’s are bounded from
above by some constant γ. In particular, c_{log n} ≤ γ, and thus N = O(n). The fact that all ck ’s are
bounded, together with Equation 2, explains why we choose
xk ≈ 2^k / (d · k^{1+ε}) .
This choice for the resolution parameter xk implies that the number of level-k intervals is
O(Ak · Bk) = O(n d^2 k^{2(1+ε)} / 2^k) ,
yielding a total of O(n d^2) intervals in I, as desired. In fact, in order to reduce the label size even
further, by playing with the constant hidden in the big-O notation, we actually choose ε to be
sub-constant. Indeed, we will later pick
ck ≈ Σ_{j=1}^{k} 1/(j log^2 j)   and   xk ≈ 2^k / (d · k log^2 k) .
3.2 The ancestry scheme
We now turn to formally describe the desired ancestry-labeling scheme (M, D) for F(n, d). For
simplicity, assume without loss of generality that n is a power of 2.
3.2.1 The marker algorithm M
We begin by defining the set I = I(n, d) of intervals. For integers a, b and k, let
Ik,a,b = [a xk, (a + b) xk),
where
x1 = 1   and   xk = 2^{k−1} / ((d + 1) k log^2 k)   for k > 1.
For every integer 1 ≤ k ≤ log n, let ck be defined as follows. Let c1 = 1, and, for any k, 2 ≤ k ≤ log n, let
ck = ck−1 + 1/(k log^2 k) = 1 + Σ_{j=2}^{k} 1/(j log^2 j) .
Note that the sum Σ_{j≥2} 1/(j log^2 j) converges, and hence all the ck ’s are bounded from above by
some constant
γ = 1 + Σ_{j=2}^{∞} 1/(j log^2 j) .
Let us set:
N = γ n.
Then let A1 = N, B1 = 0, and, for k = 2, . . . , log n, let us set
Ak = 1 + N (d + 1) k log^2 k / 2^{k−1}   and   Bk = ⌈2 ck (d + 1) k log^2 k⌉ .
Next, define the set of level-k intervals:
Sk = {Ik,a,b | a ∈ [1, Ak], and b ∈ [1, Bk]}.
Finally, define the set of intervals of level at most k as
Ik = ∪_{i=1}^{k} Si ,
and let
I = I_{log n} .
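The following Python sketch (ours, for illustration only; it simply transcribes the definitions above numerically) computes xk, ck, Ak, Bk and N for given n and d, and materializes a level-k interval Ik,a,b. It may help in checking the orders of magnitude involved, e.g., that the total number of intervals is O(n d^2).

import math

def interval_parameters(n, d):
    # Numerical illustration of the quantities defined above (not part of the
    # scheme itself): x_k, c_k, A_k, B_k and N, for 2 <= k <= log n.
    # We assume, as in the text, that n is a power of 2.
    K = int(math.log2(n))
    x = {1: 1.0}
    c = {1: 1.0}
    for k in range(2, K + 1):
        x[k] = 2 ** (k - 1) / ((d + 1) * k * math.log2(k) ** 2)
        c[k] = c[k - 1] + 1 / (k * math.log2(k) ** 2)
    # gamma = 1 + sum_{j>=2} 1/(j log^2 j), truncated numerically.
    gamma = 1 + sum(1 / (j * math.log2(j) ** 2) for j in range(2, 10 ** 5))
    N = gamma * n
    A = {k: 1 + N * (d + 1) * k * math.log2(k) ** 2 / 2 ** (k - 1)
         for k in range(2, K + 1)}
    B = {k: math.ceil(2 * c[k] * (d + 1) * k * math.log2(k) ** 2)
         for k in range(2, K + 1)}
    return x, c, A, B, N

def level_k_interval(k, a, b, x):
    # The level-k interval I_{k,a,b} = [a*x_k, (a+b)*x_k).
    return (a * x[k], (a + b) * x[k])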
Definition 1 Let F ∈ F(n, d). We say that a one-to-one mapping I : F → I is a legal-containment
mapping if, for every two nodes u, v ∈ F , we have
u is an ancestor of v in F ⇐⇒ I(v) ⊆ I(u).
Note that since a legal-containment mapping is one-to-one, we get that if u is a strict ancestor
of v in F , then I(v) ⊂ I(u), and vice-versa.
We first wish to show that there exists a legal-containment mapping from every forest in F(n, d)
into I. For this purpose, we introduce the concept of a bin, which is simply an interval of integers.
For a bin J, and for any integer k, 1 ≤ k ≤ log n, we use the following notation:
Ik (J) = {Ii,a,b ∈ Ik | Ii,a,b ⊆ J} .
I.e., Ik (J) is the set of all intervals of level at most k which are contained in the bin J.
Claim 1 Let F be a forest, and let F1, F2, · · · , Ft be pairwise-disjoint forests such that ∪_{i=1}^{t} Fi = F.
Let J be a bin and let J1, J2, · · · , Jt be a partition of J into t pairwise-disjoint bins, i.e., J = ∪_{i=1}^{t} Ji
with Ji ∩ Jj = ∅ for any 1 ≤ i < j ≤ t. For any level k, 1 ≤ k ≤ log n, if there exists a legal-containment mapping from Fi to Ik (Ji) for every i, 1 ≤ i ≤ t, then there exists a legal-containment
mapping from F to Ik (J).
Proof. The proof follows directly from the definitions above. More specifically, for every integer i,
1 ≤ i ≤ t, we embed the forest Fi into Ik (Ji ) using a legal-containment mapping. For two nodes v
and u in the same forest Fi , the condition specified in Definition 1, namely, that u is an ancestor of v
in F if and only if I(v) ⊆ I(u), holds by the fact that each Fi is embedded using a legal-containment
mapping. On the other hand, if v and u are in two different forests Fi and Fj , then the condition
holds simply because Ji ∩ Jj = ∅.
We are now ready to state the main technical lemma of this section.
Lemma 1 For every k, 1 ≤ k ≤ log n, every forest F ∈ F(2^k, d), and every bin J ⊆ [1, N), such
that |J| = ⌊ck |F|⌋, there exists a legal-containment mapping from F into Ik (J). Moreover this
mapping can be computed in O(|F |) time.
Proof. We prove the lemma by induction on k. The case k = 1 is simple and can be verified easily.
Assume now that the claim holds for k with 1 ≤ k < log n, and let us show that it also holds for
k + 1. Let F be a forest of size |F| ≤ 2^{k+1}, and let J ⊆ [1, N) be a bin such that |J| = ⌊ck+1 |F|⌋.
Our goal is to show that there exists a legal-containment mapping of F into Ik+1 (J). We consider
two cases.
• The simpler case: when all the trees in F are of size at most 2^k. For this case, we show
that there exists a legal-containment mapping of F into Ik (J) for every bin J ⊆ [1, N) such that
|J| = ⌊ck |F|⌋. (Note that this claim is slightly stronger than what is stated in Lemma 1.)7
Let T1, T2, · · · , Tt be an arbitrary enumeration of the trees in F. We divide the given bin J of
size ⌊ck |F|⌋ into t + 1 disjoint sub-bins J = J1 ∪ J2 ∪ · · · ∪ Jt ∪ J′, where |Ji| = ⌊ck |Ti|⌋ for every i,
1 ≤ i ≤ t. This can be done because Σ_{i=1}^{t} ⌊ck |Ti|⌋ ≤ ⌊ck |F|⌋ = |J|. By the induction hypothesis,
we have a legal-containment mapping of Ti into Ik (Ji) for every i, 1 ≤ i ≤ t. The stronger claim
thus follows by Claim 1.
7 Indeed, we show that the size of the bin J can be only ⌊ck |F|⌋, which is smaller than ⌊ck+1 |F|⌋ (that is, the size
required to prove the lemma) by an additive term of |F|/((k + 1) log^2 (k + 1)).
Observe that, in the above, the enumeration of the trees T1 , T2 , · · · Tt in F was arbitrary. In the
context of our general scheme described in the next section, it is important to enumerate these trees
in a specific order. Once this order is fixed, we can implement the mapping of F by choosing the
disjoint sub-bins J1 , . . . , Jt of J, so that Ji is “to the left” of Ji+1 , i.e., Ji ≺ Ji+1 , for i = 1, . . . , t − 1.
This will guarantee that all the intervals associated with the nodes in Ti are “to the left” of all the
intervals associated with the nodes of Tj, for every 1 ≤ i < j ≤ t. We state this observation as a fact,
for further reference in the next section.
Fact 3 Let ℓ be a positive integer. Let T1, T2, · · · , Tt be an arbitrary enumeration of the trees in a
forest F, all of size at most 2^ℓ, and let J ⊆ [1, N) be a bin with |J| = ⌊cℓ |F|⌋. Then, our legal-containment mapping from F into Iℓ (J) guarantees that for every u ∈ Ti and v ∈ Tj where j > i,
we have I(u) ≺ I(v).
• The more involved case: when one of the subtrees in F, denoted by T̂, contains more than 2^k
nodes. Our goal now is to show that for every bin Ĵ ⊆ [1, N), where |Ĵ| = ⌊ck+1 |T̂|⌋, there exists
a legal-containment mapping of T̂ into Ik+1 (Ĵ). Indeed, once this is achieved we can complete the
proof as follows. Let F1 = F \ T̂, and F2 = T̂. Similarly to the simple case above, let J1 and J2 be
two consecutive intervals in J (starting at the leftmost point in J) such that |J1| = ⌊ck |F1|⌋ and
|J2| = |Ĵ|. Since we have a legal-containment mapping that maps F1 into Ik (J1), and one that
maps F2 into Ik+1 (J2), we get the desired legal-containment mapping of F into Ik+1 (J) by Claim 1.
(The legal-containment mapping of F1 into Ik (J1) can be done by the induction hypothesis, because
|F1| ≤ 2^k.)
For the rest of the proof, our goal is thus to prove the following claim:
Claim 2 For every tree T of size 2^k < |T| ≤ 2^{k+1}, and every bin J ⊆ [1, N), where |J| = ⌊ck+1 |T|⌋,
there exists a legal-containment mapping of T into Ik+1 (J).
In order to prove the claim, we use the spine decomposition described in Subsection 2.1. Recall
the spine S = (v1 , v2 , · · · , vs ), and the corresponding forests F1 , F2 , · · · , Fs . The given bin J can be
expressed as J = [α, α + ⌊ck+1 |T|⌋) for some integer α < N − ⌊ck+1 |T|⌋. We now describe how we
allocate the sub-bins J1 , J2 . . . , Js of J so that, later, we will map each Fi to Ik (Ji ).
The sub-bins J1 , J2 . . . , Js of J: For every i = 1, . . . , s, we now define a bin Ji associated with Fi .
Let us first define J1 . Let a1 be the smallest integer such that α ≤ a1 xk+1 . We let
J1 = [a1 xk+1, a1 xk+1 + ⌊ck |F1|⌋) .
Assume now that we have defined the interval Ji = [ai xk+1, ai xk+1 + ⌊ck |Fi|⌋) for 1 ≤ i < s. We
define the interval Ji+1 as follows. Let bi be the smallest integer such that ⌊ck |Fi|⌋ ≤ bi xk+1, that is,
(bi − 1) xk+1 < ⌊ck |Fi|⌋ ≤ bi xk+1 .     (3)
Then, let ai+1 = ai + bi, and define
Ji+1 = [ai+1 xk+1, ai+1 xk+1 + ⌊ck |Fi+1|⌋).
Hence, for i = 1, 2, · · · , s, we have |Ji| = ⌊ck |Fi|⌋. Moreover,
Ji ⊆ [ai xk+1, (ai + bi) xk+1) = Ik+1,ai,bi .     (4)
Also observe that, for every i, 1 ≤ i ≤ s − 1, we have:
Ji ≺ Ji+1 .     (5)
Since a1 xk+1 < α + xk+1, and since we have a “gap” of at most xk+1 − 1 between any consecutive
sub-bins Ji and Ji+1, we get that
∪_{i=1}^{s} Ik+1,ai,bi ⊆ [α, α + (s + 1)(xk+1 − 1) + ⌊ck |T|⌋) .
Now, since s ≤ d and 2^k < |T|, and since (d + 1)(xk+1 − 1) ≤ 2^k / ((k + 1) log^2 (k + 1)), it follows that
∪_{i=1}^{s} Ik+1,ai,bi ⊆ [α, α + ⌊|T|/((k + 1) log^2 (k + 1)) + ck |T|⌋) = [α, α + ⌊ck+1 |T|⌋) = J .     (6)
Since ∪_{i=1}^{s} Ji ⊆ ∪_{i=1}^{s} Ik+1,ai,bi , we finally get that
∪_{i=1}^{s} Ji ⊆ J.
On the other hand, since, for 1 ≤ i ≤ s, the length of Ji is ⌊ck |Fi|⌋, and since each tree in Fi
contains at most 2^k nodes, we get, by the induction hypothesis, that there exists a legal-containment
mapping of each Fi into Ik (Ji). We therefore get a legal-containment mapping from ∪_{i=1}^{s} Fi to Ik (J),
by Claim 1.
By Equation 5, we have Ji ≺ Ji+1 for every i, i = 1, 2, · · · , s − 1. This guarantees that all the
intervals associated with the nodes in Fi are “to the left” of all the intervals associated with the nodes
of Fj , for every 1 ≤ i < j ≤ s. We state this observation as a fact, for further reference in the next
section.
Fact 4 Let ℓ be a positive integer. Let F1, F2, · · · , Fs be the forests of the spine S = (v1, v2, . . . , vs)
of the tree T with 2^{ℓ−1} < |T| ≤ 2^ℓ, and let J ⊆ [1, N) be a bin satisfying |J| = ⌊cℓ |T|⌋. Our
legal-containment mapping from T into Iℓ (J) guarantees that, for every u ∈ Fi and v ∈ Fj where
j > i, we have I(u) ⊆ Ji and I(v) ⊆ Jj, and hence I(u) ≺ I(v).
It is now left to map the nodes in the spine S into Ik+1 (J), in a way that respects the ancestry
relation.
The mapping of spine nodes into level-(k + 1) intervals: For every i, 1 ≤ i ≤ s, let
b̂i = Σ_{j=i}^{s} bj, where the bj ’s are defined by Equation 3. We map the node vi of the spine to the
interval
I(vi) = Ik+1,ai,b̂i .
Observe that, by this definition,
I(vi) = ∪_{j=i}^{s} Ik+1,aj,bj .
By Equations 4 and 6, we get
∪_{j=i}^{s} Jj ⊆ I(vi) ⊆ J .
To show that I(vi) is indeed in Ik+1 (J), we still need to show that ai ∈ [1, Ak+1] and b̂i ∈ [1, Bk+1].
Before showing that, let us make the following observation resulting from the combination of
Equation 5 and Fact 4, to be used for further reference in the next section.
Fact 5 Let ℓ be a positive integer. Let F1, F2, · · · , Fs be the forests of the spine S = (v1, v2, . . . , vs)
of the tree T with 2^{ℓ−1} < |T| ≤ 2^ℓ, and let J ⊆ [1, N) be a bin satisfying |J| = ⌊cℓ |T|⌋. Our
legal-containment mapping from T into Iℓ (J) guarantees that, for every u ∈ Fi, i ∈ {1, . . . , s − 1},
and for every j > i, we have I(u) ≺ I(vj ).
Let us now show that I(vi) is indeed in Ik+1 (J). It remains to show that ai ∈ [1, Ak+1] and
that b̂i ∈ [1, Bk+1]. Recall that J is of the form J = [α, α + ⌊ck+1 |T|⌋) with 1 ≤ α < N − ⌊ck+1 |T|⌋.
Note that,
N ≤ (N (d + 1)(k + 1) log^2 (k + 1) / 2^k) · (2^k / ((d + 1)(k + 1) log^2 (k + 1))) = (Ak+1 − 1) xk+1 .
Therefore, we get that
α < (Ak+1 − 1) xk+1 − ⌊ck+1 |T|⌋ .     (7)
On the other hand, by definition of the ai ’s, we get that, for every i,
ai xk+1 ≤ a1 xk+1 + i · xk+1 + Σ_{j=1}^{i} ⌊ck |Fj|⌋ ≤ a1 xk+1 + d · xk+1 + ⌊ck |T|⌋ .
Moreover, by the minimality of a1, we have a1 xk+1 ≤ α + xk+1. Combining the latter two inequalities,
we get
ai xk+1 ≤ α + (d + 1) · xk+1 + ⌊ck |T|⌋ .
Combining this with Equation 7, we get
ai xk+1 ≤ Ak+1 xk+1 + (d · xk+1 + ⌊ck |T|⌋ − ⌊ck+1 |T|⌋) .
It follows directly from the definition of xk+1 and ck, and from the fact that |T| > 2^k, that
d · xk+1 + ⌊ck |T|⌋ − ⌊ck+1 |T|⌋ ≤ 0, implying that ai ≤ Ak+1, as desired.
Let us now show that b̂i ∈ [1, Bk+1]. Recall that, by definition, b̂i ≤ Σ_{j=1}^{s} bj for every i,
1 ≤ i ≤ s. So it is enough to show that Σ_{i=1}^{s} bi ≤ Bk+1. By construction,
xk+1 · Σ_{i=1}^{s} bi ≤ |J| = ⌊ck+1 |T|⌋ ≤ ck+1 2^{k+1} .
So, it suffices to show that ck+1 2^{k+1} /xk+1 ≤ Bk+1. Fortunately, this holds by definition of the
three parameters ck+1, xk+1, and Bk+1.
The above discussion shows that for all i = 1, . . . , s, we have ai ≤ Ak+1 and b̂i ≤ Bk+1, which
implies that I(vi) ∈ Ik+1 (J).
We now show that our mapping is indeed a legal-containment mapping. Observe first that, for
i and j such that 1 ≤ i < j ≤ s, we have
Ik+1,ai,b̂i ⊃ Ik+1,aj,b̂j .
Thus, I(vi) ⊃ I(vj), as desired.
In addition, recall that, for every j = 1, . . . , s, the forest Fj is mapped into Ik (Jj). Therefore,
if I(v) is the interval of some node v ∈ Fj, then we have I(v) ⊂ Jj. Since Jj ⊂ Ik+1,ai,b̂i for every i
such that 1 ≤ i ≤ j ≤ s, we obtain that I(v) is contained in I(vi), the interval associated with vi.
This establishes the fact that I : F → Ik+1 (J) is a legal-containment mapping. Since the recursive
spine decomposition of a forest F takes O(|F |) time, it follows that this legal-containment mapping
also takes O(|F |) time. This completes the proof of Lemma 1.
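To make the arithmetic of the induction step explicit, here is a short Python paraphrase (our own sketch, with invented names) of the allocation of the sub-bins J1, . . . , Js and of the spine intervals I(vi) described in the proof above.

import math

def allocate_spine(alpha, forest_sizes, x_next, c_k):
    # Given the left endpoint alpha of the bin J, the sizes |F_1|,...,|F_s| of
    # the spine forests, the resolution x_{k+1} and the constant c_k, compute
    # the sub-bins J_i and the parameters (a_i, b_i) as in the proof of Lemma 1.
    a, b, bins = [], [], []
    a_i = math.ceil(alpha / x_next)            # smallest a_1 with alpha <= a_1 * x_{k+1}
    for size in forest_sizes:
        length = math.floor(c_k * size)        # |J_i| = floor(c_k * |F_i|)
        bins.append((a_i * x_next, a_i * x_next + length))
        b_i = math.ceil(length / x_next)       # smallest b_i with |J_i| <= b_i * x_{k+1}
        a.append(a_i)
        b.append(b_i)
        a_i = a_i + b_i                        # a_{i+1} = a_i + b_i
    # Spine node v_i receives the level-(k+1) interval I_{k+1, a_i, b_hat_i},
    # where b_hat_i = b_i + b_{i+1} + ... + b_s; all these intervals end at the
    # same point, as observed in the proof.
    b_hat = [sum(b[i:]) for i in range(len(b))]
    spine_intervals = [(a[i] * x_next, (a[i] + b_hat[i]) * x_next)
                       for i in range(len(a))]
    return bins, spine_intervals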
Lemma 1 describes the interval assignments to the nodes. We now describe the labeling process.
The label assignment: By Lemma 1, we get that there exists a legal-containment mapping
I : F → I, for any F ∈ F(n, d). The marker M uses this mapping to label the nodes in F .
Specifically, for every node u ∈ F , the interval I(u) = Ik,a,b ∈ I is encoded in the label L(u) of u
as follows. The first ⌈log k⌉ bits in L(u) are set to 0 and the following bit is set to 1. The next
⌈log k⌉ bits are used to encode k. The next log d + log k + 2 log log k + O(1) bits are used to encode
the value b ∈ [1, Bk], and the next log n + log d + log k + 2 log log k − k + O(1) bits to encode a. Since
2(log k + 2 log log k) − k = O(1), we get the following.
Lemma 2 The marker algorithm M assigns the labels of nodes in some forest F ∈ F(n, d) in
O(n) time, and each such label is encoded using log n + 2 log d + O(1) bits.
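A possible bit-level realization of this layout is sketched below (our own conventions; in particular we store k − 1 rather than k so that the field always fits in ⌈log k⌉ bits, and the field widths width_b and width_a stand for whatever widths satisfy the bounds above).

def encode_label(k, a, b, width_b, width_a):
    # Sketch of the bit layout described above.
    lk = max(1, (k - 1).bit_length())          # ceil(log2 k), with lk = 1 for k = 1
    bits  = "0" * lk + "1"                     # lk zeros, then a separating one
    bits += format(k - 1, "b").zfill(lk)       # k (shifted by one) on lk bits
    bits += format(b, "b").zfill(width_b)      # b, on width_b bits
    bits += format(a, "b").zfill(width_a)      # a, on the remaining width_a bits
    return bits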
3.2.2 The decoder D
Given a label L(u) of some node u in some forest F ∈ F(n, d), the decoder D extracts the interval
I(u) = Ik,a,b as follows. First, since F ∈ F(n, d), the decoder D knows d, and thus knows log d. By
finding the first bit that equals 1 it can extract the value ⌈log k⌉. Then by inspecting the next ⌈log k⌉
bits it extracts the value k. Subsequently, the decoder inspects the next log d + log k + 2 log log k +
O(1) bits and extracts b. Finally, D inspects the remaining log n + log d + log k + 2 log log k − k + O(1)
bits in the label to extract a ∈ [1, Ak]. At this point the decoder has k, a, and b and it can recover
Ik,a,b using O(1) multiplication and division operations. Recall, in the context of this section, we
assume that such operations take constant time. We thus get the following.
Observation 1 Given a label L(u) of some node u in some forest F ∈ F(n, d), the decoder D can
extract I(u) = Ik,a,b in constant time.
Given the labels L(u) and L(v) of two nodes u and v in some rooted forest F ∈ F(n, d), the
decoder finds the ancestry relation between the nodes using a simple interval containment test
between the corresponding intervals, namely I(u) and I(v).
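A matching decoder sketch (again under our illustrative conventions, and assuming the same field widths as in the encoding sketch above) reads the fields back, recovers Ik,a,b, and answers the query by interval containment.

def decode_label(bits, width_b_of, width_a_of, x):
    # Inverse of encode_label above: recover k, a, b and the interval I_{k,a,b}.
    lk = bits.index("1")                       # number of leading zeros = ceil(log2 k)
    pos = lk + 1
    k = int(bits[pos:pos + lk], 2) + 1
    pos += lk
    b = int(bits[pos:pos + width_b_of(k)], 2)
    pos += width_b_of(k)
    a = int(bits[pos:pos + width_a_of(k)], 2)
    return a * x[k], (a + b) * x[k]            # I_{k,a,b} = [a*x_k, (a+b)*x_k)

def ancestor_query(bits_u, bits_v, width_b_of, width_a_of, x):
    # u is an ancestor of v iff I(v) is contained in I(u).
    lo_u, hi_u = decode_label(bits_u, width_b_of, width_a_of, x)
    lo_v, hi_v = decode_label(bits_v, width_b_of, width_a_of, x)
    return lo_u <= lo_v and hi_v <= hi_u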
The fact that the intervals are assigned by a legal-containment mapping ensures the following:
Lemma 3 (M, D) is a correct ancestry-labeling scheme for F(n, d).
Lemmas 2 and 3 imply that there exists an ancestry-labeling scheme for F(n, d) with label size
at most log n + 2 log d + O(1) bits. Recall that F(n, d) denotes the set of all forests with at most n
nodes, and spine-decomposition depth at most d, while F(n) denotes the set of all forests with at
most n nodes. In general, an ancestry scheme that is designed for a family F of forests may rely on
a decoder that takes advantage of the fact that the labels are the ones of nodes belonging to a
forest F ∈ F. For instance, in the case of F = F(n, d), the decoder can rely on the knowledge of n
and d. Although the ancestry-labeling scheme described above was designed for forests in F(n, d),
we show that a slight modification of it applies to all forests, at a very small cost in terms of label
size. This will establish our main theorem for this section.
3.2.3 Main theorem for forests with bounded spine-decomposition depth
Theorem 1
1. There exists an ancestry-labeling scheme for F(n) such that any node in a forest of spine
decomposition depth at most d is labeled using at most log n + 2 log d + O(1) bits.
2. There exists an ancestry-labeling scheme for the family of all forests such that any node in
a forest of size at most n and spine decomposition depth at most d is labeled using at most
log n + 2 log d + 2 log log d + O(1) bits.
In both schemes, the query time of the scheme is constant, and the time to construct the scheme
for a given forest is linear.
Proof. Lemma 3 together with Lemma 2 and Observation 1 establishes the fact that there exists
an ancestry-labeling scheme for F(n, d) with label size at most log n + 2 log d + O(1) bits. Moreover,
the query time of the scheme is constant, and the time to construct the scheme for a given forest
is linear.
To obtain a scheme for F(n), we can no longer assume that the decoder of the scheme knows d.
However, by applying a simple trick we show that the scheme for F(n, d) can be easily transformed
to a scheme for F(n) with the desired bound. We define a labeling scheme (M, D) for F(n). Given
a forest F ∈ F(n) of spine-decomposition depth d, let d̂ = 2^{⌈log d⌉}, i.e., d̂ is the smallest integer
power of 2 that is at least d. Obviously, F ∈ F(n, d̂). The first part of the proof tells us that
there exists an ancestry-labeling scheme (Md̂, Dd̂) for F(n, d̂) which uses labels each composed of
precisely L = log n + 2 log d̂ + O(1) bits. (By padding enough zeros to the left of the label, we can
assume that each label consists of precisely L bits.) Moreover, the labels are assigned in O(|F|)
time. The marker M uses this scheme to label the nodes in F. The decoder D operates as follows.
Given the labels of two nodes u and v in F, the decoder D first finds out what d̂ is (this can be
done, since n and L are known to D, and since d̂ is a power of 2), and then uses the (constant time)
decoder Dd̂ to interpret the relation between u and v. The bound on the size of a label follows as
L = log n + 2 log d + O(1).
Finally, we now show that, with a slight increase in the label size, one can have an ancestry-labeling
scheme for the family of all forests (i.e., in such a scheme, given the labels of two nodes in
a forest F, the decoder has bounds on neither the size of F nor its spine-decomposition
depth). Let F be a forest with n nodes and depth d. Let n̂ = 2^{⌈log n⌉}. We label the nodes of F
using the marker of the scheme (Mn̂, Dn̂) for F(n̂) mentioned above. By adding 2⌈log log d⌉ bits to
the label of each node in F, one can assume that given a label of a node in F, the decoder knows
the value of log d. Now given a label of a node in F, the decoder can extract log n̂ (using the size
of the label, in a method similar to the one described above). Since n̂ = 2^{log n̂}, we can assume that the
decoder knows n̂. Thus, to extract the ancestry relation between the two nodes in F, the decoder
uses Dn̂.
Note that the spine decomposition depth of a forest F is bounded from above by the depth of F.
Hence, all our aforementioned results for forests with bounded spine decomposition depth also hold
when restricted to bounded-depth forests. Hence, Theorem 1 translates to the following corollary.
Corollary 1
1. There exists an ancestry-labeling scheme for F(n) such that any node in a forest of depth at
most d is labeled using at most log n + 2 log d + O(1) bits.
2. There exists an ancestry-labeling scheme for the family of all forests such that any node in a
forest of size n and depth at most d is labeled using at most log n + 2 log d + 2 log log d + O(1)
bits.
In both schemes, the query time of the scheme is constant, and the time to construct the scheme
for a given forest is linear.
3.3 A parenthood-labeling scheme
The ancestry-labeling scheme described in Corollary 1 can be advantageously transformed into
a parenthood-labeling scheme which is very efficient for trees of small depth. Recall that a
parenthood-labeling scheme for the family of rooted forests F is a pair (M, D) of marker and
decoder, satisfying that if L(u) and L(v) are the labels given by the marker M to two nodes u and
v in some forest F ∈ F, then: D(L(u), L(v)) = 1 ⇐⇒ u is the parent of v in F .
Similarly to the ancestry case, we evaluate a parenthood-labeling scheme (M, D) by its label
size, namely the maximum number of bits in a label assigned by the marker algorithm M to any
node in any forest in F.
For two nodes u and v in a rooted forest F , u is a parent of v if and only if u is an ancestor
of v and depth(u) = depth(v) − 1. It follows that one can easily transform any ancestry-labeling
scheme for F(n) to a parenthood-labeling scheme for F(n) with an extra additive term of ⌈log d⌉
bits to the label size (these bits are simply used to encode the depth of a vertex). The following
theorem follows.
Theorem 2
1. There exists a parenthood-labeling scheme for F(n) such that any node in a forest of depth at
most d is labeled using log n + 3 log d + O(1) bits.
2. There exists a parenthood scheme for the family of all rooted forests such that any node in an
n-node forest of depth at most d is labeled using log n + 3 log d + 2 log log d + O(1) bits.
For both schemes, the query time is constant and the time to construct the scheme for a given forest
is linear.
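The reduction behind Theorem 2 amounts to a single extra comparison at query time; a schematic Python fragment (the helper names are ours and purely illustrative) is:

def is_parent(label_u, label_v, is_ancestor, depth):
    # Parenthood from ancestry plus depth: u is the parent of v iff u is an
    # ancestor of v and depth(u) = depth(v) - 1.  The ceil(log d) extra bits in
    # each label are used to store depth(.) explicitly.
    return is_ancestor(label_u, label_v) and depth(label_u) == depth(label_v) - 1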
4 The general ancestry-labeling scheme
This section is dedicated to the construction of an ancestry-labeling scheme for forests, which has
label size log n + O(log log n) bits for n-node forests. Moreover, given an n-node forest F , the
labels can be assigned to the nodes of F in O(n) time, and any ancestry query can be answered in
constant time.
In Section 3, we managed to construct an efficient ancestry-labeling scheme for forests with
bounded spine decomposition depth. Unfortunately, the situation becomes more difficult if the
input forests can have long spines. To handle this case, we introduce a new tree-decomposition,
called the folding decomposition. Applying this decomposition to a forest F results in a forest F ∗ ,
on the same set of vertices, whose spine decomposition is of depth at most 2. Moreover, this
transformation partially preserves the ancestry relation in F in the following sense: if v is an
ancestor of u in F ∗ then v is also an ancestor of u in F . Next, we apply the ancestry scheme
from Section 3 over F ∗ , resulting in labels of size log n + O(1) for the nodes of F ∗ . Giving the
corresponding labels to the nodes of F enables the decoder to detect all ancestry relations in F that
are preserved in F ∗ . At this point, it remains to deal with ancestry relations that are not preserved
by this transformation. For this purpose, we shall provide nodes with additional information which
can be encoded using O(log log n) bits per node. To explain how we do so, let us first describe the
folding decomposition.
4.1 The folding decomposition
We construct the folding decomposition recursively, according to the recursive construction of the
spine decomposition, as described in Subsection 2.1. In a given level of the recursion, we are given
a tree T with spine S = (v1 , v2 , · · · , vs ) (see Figure 5). In the folding decomposition T ∗ of T , the
apex node v1 remains the root, and all its children in T remain children of v1 in T ∗ . In addition,
all heavy nodes v2 , v3 , · · · , vs also become children of v1 in T ∗ . Furthermore, given a heavy node vi ,
2 ≤ i ≤ s, all of its non-heavy children in T remain children of vi also in T ∗ . The next level of the
recursion now applies separately to all trees in all forests F1 , F2 , · · · , Fs resulting from removing
the spine nodes from the tree (see Subsection 2.1). Note that all these trees have at most half the
number of nodes in T. Also note that the roots of these trees are going to be apex nodes in the
next level of the recursion.

Figure 5: Folding decomposition T∗ (b) from the spine decomposition of T (a)
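For concreteness, the construction can be phrased as the following recursive Python sketch (the tree representation and the helpers spine_of and forests_of, which return respectively the spine of the subtree rooted at a node and the apex children of a spine node, are our own abstractions).

def fold(root, spine_of, forests_of):
    # Builds the folding decomposition T* as a dictionary mapping each node to
    # its list of children in T*.
    children_star = {}
    v = spine_of(root)                          # v[0] is the apex, v[1:] the heavy nodes
    # All spine nodes v_2, ..., v_s, and all apex children of v_1, become children of v_1.
    children_star[v[0]] = list(v[1:]) + list(forests_of(v[0]))
    # Non-heavy (apex) children of each heavy node v_i stay attached to v_i.
    for vi in v[1:]:
        children_star[vi] = list(forests_of(vi))
    # Recurse on every tree hanging below the spine; its root becomes an apex.
    for vi in v:
        for r in forests_of(vi):
            children_star.update(fold(r, spine_of, forests_of))
    return children_star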
The following lemma provides a critical property of the folding decomposition.
Lemma 4 Let T be a tree, and let T ∗ be its folding decomposition. The spine decomposition depth
of T ∗ is at most 2. Moreover, if v is a strict ancestor of u in T ∗ , then v is an ancestor of apex(u)
in T ∗ .
Proof. Consider a level ℓ of the folding decomposition of T, dealing with some subtree T′ of T
(see Figure 5). Let S = (v1, . . . , vs) be the spine of T′, and recall that Fi′ is the forest consisting of
all subtrees rooted at the apex children of vi in T′, for i = 1, . . . , s. For every 1 ≤ i < s, we have
|Fi′| < |T′|/2, because vi has a heavy child vi+1 whose subtree contains at least |T′|/2 nodes. Thus,
the spine of T′∗ contains the root v1 of T′ and the last heavy node vs on the spine S (which can
be v1 itself). Thus the spine of T′∗ contains at most two nodes. This completes the proof of the
first part of the lemma. Assume now that v is a strict ancestor of u in T ∗ . If v is an apex, then
v is an ancestor of apex(u) in T ∗ . If v is heavy, then all its children are apexes, and therefore it
follows that v is an ancestor of apex(u) in T ∗ .
Observe that, as mentioned above, the ancestry relation in T is partially preserved in T ∗ . To
see why, notice first that no new ancestry relations are created in T ∗ , that is, if v is an ancestor of
u in T ∗ then v is also an ancestor of u in T . Second, consider a node v and one of its descendants u
in T . If v is an apex node in T ∗ , then v is also an ancestor of u in T ∗ . If v is heavy, namely, v = vi ,
for some 2 ≤ i ≤ s in a spine v1 , v2 , · · · , vs , then there are two cases. If u belongs to Fi then v is
also an ancestor of u in T ∗ . However, if u ∈ Fj for j > i, then the ancestry relation between v and
u is not preserved in T ∗ . (Note, in this case u is a descendant of vj in both T and T ∗ but vi and vj
are siblings in T ∗ – each being a child of the corresponding apex node v1 ). In other words, the
only case where the ancestry relation between a node v and its descendant u in T is not preserved
in T ∗ is when v is heavy, i.e., v = vi in some spine S = (v1 , v2 , · · · , vs ) where 1 < i < s, and u is a
descendant in T of v’s heavy child vi+1 (including the case where u = vi+1 ). In this case, v is an
ancestor of u in T but not in T ∗ .
For a node v, let apex(v) be the closest ancestor of v in T that is an apex. Note that if v is an
apex then apex(v) = v. Recall that every node is either heavy or apex. Consider a DFS traversal
over T that starts from the root and visits apex children first. For any node u, let dfs(u) be the
DFS number of u (where the DFS number of the root is 1).
The following lemma provides a characterization of the ancestry relations in T in terms of the
structure of T ∗ .
Lemma 5 For any two different nodes v and u, we have: v is an ancestor of u in T if and only if
at least one of the following two conditions hold
• C1. Node v is an ancestor of u in T ∗ ;
• C2. Node apex(v) is a strict ancestor of u in T ∗ and dfs(v) < dfs(u).
Proof. Consider first the case that v is an ancestor of u in T . If v is an apex node in T ∗ , then, by
construction, v is also an ancestor of u in T ∗ , and thus Condition C1 holds. If v is heavy, namely,
v = vi , for some 2 ≤ i ≤ s in some spine S = (v1 , v2 , · · · , vs ), then there are two cases. If u belongs
to Fi then v is also an ancestor of u in T ∗ , and thus Condition C1 holds. If u ∈ Fj or u = vj , for
j > i, then we show that Condition C2 holds. First, since v is an ancestor of u in T , we immediately
get that dfs(v) < dfs(u). Second, apex(v) is the apex v1 which is, by definition, an ancestor of all
nodes in the subtree of T rooted at v1 . Therefore, since apex(v) is an apex, apex(v) is an ancestor
of u in T ∗ . In fact, apex(v) is a strict ancestor of u in T ∗ since u = vj for j > i > 1, and thus
u 6= v1 = apex(v).
Conversely, consider the case that either Condition C1 or Condition C2 holds. If Condition C1
holds, i.e., v is an ancestor of u in T ∗ , then the fact that v is an ancestor of u in T is immediate since
no new ancestry relations are created in T∗. (Indeed, for any two nodes w and w′, if w is a parent
of w′ in T∗ then w is an ancestor of w′ in T.) Consider now the case that Condition C2 holds.
Assume, by way of contradiction, that v is not an ancestor of u in T . Since dfs(v) < dfs(u) it
follows that u is not an ancestor of v, and hence v and u have a least common ancestor w in T which
is neither v or u. Since the DFS traversal visits apex nodes first, and since v is visited before u, we
get that v is a descendant of one of w’s apex children z (because w has at most one heavy child).
Hence, apex(v) is either the apex z or a strict descendant of it in T . In either case, apex(v) is not
an ancestor of u in T . This contradicts Condition C2 since no new ancestry relations are created
in T ∗ .
4.2 The labeling scheme
4.2.1 Informal description
An important property of the folding decomposition is that its spine decomposition depth is at
most 2 (cf. Lemma 4). Essentially, our ancestry scheme is based on Lemma 5. We first apply
the scheme described in Section 3 on T ∗ , resulting in each node u having a label I(u) which is an
interval encoded using log n + O(1) bits. Given the intervals I(u) and I(v) of two nodes u and
v, we can then detect whether or not Condition C1 holds. Indeed, given two nodes u and v, if
I(u) ⊆ I(v) holds, then v is an ancestor of u in T ∗ , and hence also in T . This takes care of all
ancestry relations preserved under the folding decomposition, and in particular the case where v is
an apex. To handle unpreserved ancestry relations, it is sufficient, by Lemma 5, to check whether
or not Condition C2 holds. For this purpose, we would like our decoder to reconstruct not only
the interval I(v), but also the interval I(apex(v)). Indeed, having I(apex(v)) and I(u) already
enables to detect whether or not the first part of Condition C2 holds, namely, whether apex(v)
is a strict ancestor of u in T ∗ . To detect whether or not dfs(v) < dfs(u) we shall actually use
a specific implementation of our ancestry-labeling scheme from Section 3, which will relate the
interval relation ≺ (defined in Section 2) to the DFS numbering in T . This implementation will
guarantee that, for any heavy node v, dfs(v) < dfs(u) if and only if I(v) ≺ I(u).
It is now left to explain how to obtain, for every node u, the interval I(apex(u)), given the label
of u resulting from the scheme described in Section 3, and using few additional bits (apart from the
ones used to encode I(u)). First, we use one additional bit of information in the label of each node
u, for indicating whether or not u is an apex. In the case u is an apex, we already have u = apex(u),
and hence, no additional information is required to reconstruct I(apex(u)). For the case that u is
heavy, we use additional O(log log n) bits of information in the label of u. Specifically, in addition
to its own interval I(u), every node u stores the level of the interval I(apex(u)), consuming ⌈log k⌉
bits. Now, notice that I(u) ⊂ I(apex(u)). Moreover, we will later prove that, in the scheme
described in Section 3, for every level k, the number of intervals J used by the scheme on level k,
such that I(u) ⊂ J, is at most Bk^2 (where Bk is defined in Subsection 3.2.1). Thus, once we know
level k of I(apex(u)), and the interval I(u), additional 2 log Bk bits are sufficient to reconstruct
I(apex(u)). Note that 2 log Bk = 2 log k + O(log log k) because Bk = O(k log^2 k). Since k ≤ log n,
the total information per node (stored in its label) amounts to log n + 3 log log n + O(log log log n)
bits.
We are now ready to describe our ancestry labeling scheme formally.
4.2.2 The marker algorithm M
Given a forest F, we describe the labels assigned to the nodes of each tree T in F, so as to enable
detecting ancestry relations. First, we apply the spine decomposition to T . Next, we consider a
DFS traversal over T that starts from the root, numbered 1, and visits apex children first. For any
node u, let dfs(u) be the DFS number of u. (These DFS numbers will not be directly encoded
in the labels, since doing that would require the consumption of too many bits; however, these
numbers will be used by the marker to assign the labels). From the spine decomposition of T , we
construct the folding decomposition T ∗ of T , as described in Subsection 4.1. Recall from Lemma 4
that the spine decomposition depth of T ∗ is at most 2.
Next, we apply to T ∗ the ancestry scheme defined in Section 3. More specifically, we perform
a particular implementation of this ancestry scheme, as described now. Consider a level k of the
recursive procedure applied by the marker for assigning labels to the nodes in T ∗ . At this level,
the marker is dealing with some subtree T′∗ of T∗ with size at most 2^k. We revisit the assignment
of the intervals and bins considered by the marker at this level, for T′∗. Note that the nodes in T′∗
are the nodes of a subtree T′ of T. In fact T′∗ is the folding decomposition of T′. Thus, in order to
avoid cumbersome notation, in the remainder of this description, we replace T′ and T′∗ by T and
T ∗ , respectively. The reader should just keep in mind that we are dealing with the general case of
a given tree of a given level k in the recursion, and not necessarily with the whole tree in the first
level of this recursion. So, given T and T∗, both of size at most 2^k, we show how to implement
our ancestry scheme in a special manner on T ∗ .
Recall the DFS traversal over the whole tree, which assigns a DFS number dfs(u) to every
node u in the tree. This DFS traversal induces a DFS traversal on (the subtree) T that starts from
the root of T and visits apex children first.
Let S = (v1 , . . . , vs ) be the spine of T , and Fi be the forest consisting of all subtrees rooted at
the apex children of vi in T , for i = 1, . . . , s. Hence, T ∗ consists of a tree rooted at v1 , where v1
has s − 1 children v2 , . . . , vs , plus v1 ’s apex children u1 , . . . , ut in T (see Figure 5). We order these
nodes ui so that dfs(ui ) < dfs(ui+1 ). Therefore, we get the following important ordering implied
by the DFS traversal:
dfs(u1) < dfs(u2) < . . . < dfs(ut) < dfs(v2) < dfs(v3) < . . . < dfs(vs−1) < dfs(vs).     (8)
As mentioned in the proof of Lemma 4, in T ∗ , vs is the heavy child of v1 , and all the other children
of v1 are apexes. Therefore, the spine of T∗ is (v1∗, v2∗) = (v1, vs). (Recall, it may be the case that
v1 = vs , in which case the spine would consist of a single node, namely v1 ). Moreover, the forest F1∗
consists of all trees in T ∗ hanging down from u1 , . . . , ut , as well as all trees in T ∗ hanging down
from v2 , . . . , vs−1 . Similarly, the forest F2∗ (if it exists) consists of all trees in T ∗ hanging down from
the children of vs in T .
In this level k of the recursion of our ancestry scheme, we are given a bin J of size ⌊ck |T∗|⌋, and
we need to allocate intervals to the nodes in T ∗ contained in J. For this purpose, we assign intervals
to v1∗ and v2∗ , and bins to the forests F1∗ and F2∗ . These bins are J1 and J2 , and, by Equation 5, we
have J1 ≺ J2. Moreover, if v1 ≠ vs, then, by Facts 4 and 5, we get that:
for every v ∈ F1∗ and every u ∈ F2∗ ∪ {v2∗}, we have I(v) ≺ I(u).     (9)
Furthermore, observe that all trees in F1∗ are of size at most 2^{k−1}. Therefore, the assignment of
intervals to the nodes of F1∗ is performed according to the simpler case described in the proof of
Lemma 1. We order the trees Ti , i = 1, . . . , t + s − 2, in F1∗ according to the DFS numbers of their
roots. In particular, for i = 1, . . . , t, the root of Ti is ui , and, for i = 1, . . . , s − 2, the root of Tt+i
is vi+1 . By Fact 3, we get that the interval assignment satisfies the following.
For every v ∈ Ti and u ∈ Tj where j > i, we have I(v) ≺ I(u).     (10)
We are now ready to state the crucial relationship between the ancestry relations in the whole
tree and the assignment of intervals to nodes in its folding decomposition. Essentially, this lemma
replaces the dependencies on the DFS numbering mentioned in the statement of Lemma 5 by a
dependency on the relative positions of the intervals according to the partial order ≺. This is
crucial because the DFS numbers are not part of the labels.
Lemma 6 For any two different nodes v and u, we have: v is an ancestor of u in T if and only if
at least one of the following two conditions hold
• Node v is an ancestor of u in T ∗ ;
• Node apex(v) is a strict ancestor of u in T ∗ and I(v) ≺ I(u).
Proof. Given Lemma 5, we only need to prove that if v is not an ancestor of u in T ∗ , but apex(v)
is a strict ancestor of u in T ∗ , then
dfs(v) < dfs(u) ⇐⇒ I(v) ≺ I(u).
Let us consider the subtrees of T and T ∗ , rooted at apex(v). By slightly abusing notation, we
reuse the same notation as before. Recall the spine S = (v1 , . . . , vs ), whose apex is v1 = apex(v).
The children of apex(v) in T ∗ are the apex nodes u1 , . . . , ut , v2 , . . . , vs−1 , plus the heavy node vs .
Each apex child is the root of a tree Ti for some i ∈ {1, . . . , t + s − 2}. All these trees belong to F1∗ .
All the trees in F2∗ are rooted at children of vs .
Since v is not an ancestor of u in T ∗ , but apex(v) is an ancestor of u in T ∗ , we get that
v 6= apex(v). It follows that v must be one of the spine nodes v2 , . . . , vs , say v = vj with j ≥ 2.
Node u is a strict descendant of apex(v), but is not a descendant of v. Therefore, u belongs either
to one of the trees Ti , i = 1, . . . , t + s − 2, i 6= j, in F1∗ , or to F2∗ ∪ {vs } in case v 6= vs .
Assume first that dfs(v) < dfs(u). In that case, v cannot be vs . Indeed, if v = vs then u
cannot be in F2∗ ∪ {vs } because v is not an ancestor of u. Hence, u belongs to one of the trees Ti .
This contradicts the fact that dfs(v) < dfs(u), by Equation 8. So, v = vj , 2 ≤ j < s. Since
dfs(v) < dfs(u), we know that u belongs either to one of the trees Ti , i = t + j + 1, . . . , t + s − 2,
or to F2∗ ∪ {vs }. In the former case, we get I(v) ≺ I(u) by applying Equation 10; In the latter case,
we get I(v) ≺ I(u) by applying Equation 9.
Conversely, assume that I(v) ≺ I(u). In that case, again v cannot be vs . Indeed, if v = vs
then u cannot be in F2∗ ∪ {vs} because v is not an ancestor of u. Hence, u belongs to one of the
trees Ti . This contradicts the fact that I(v) ≺ I(u) by Equation 10. So v = vj , 2 ≤ j < s. In that
case, u belongs to one of the trees Ti , i = t + j + 1, . . . , t + s − 2, of F1∗ or to F2∗ ∪ {vs }, which
implies that dfs(v) < dfs(u) by Equation 8.
Lemma 6 provides a characterization for ancestry in T with respect to properties of the intervals
assigned to the nodes of T ∗ . More specifically, the fact that v is an ancestor of u can be detected
using the intervals I(v) and I(u), as well as the interval I(apex(v)) of apex(v). The interval I(v)
is encoded directly in the label of v using log n + O(1) bits. Directly encoding I(apex(v)) would
consume yet another log n + O(1) bits, which is obviously undesired. We now show how to encode I(apex(v)) in the label of v using few bits of information in addition to the ones used to
encode I(v). We use the following trick (see Figure 6). Let k′, a′, and b′ be such that
I(apex(v)) = Ik′,a′,b′ = [xk′ a′, xk′ (a′ + b′)).
Figure 6: Encoding I(apex(v)) using I(v) and few additional bits
Since 1 ≤ k′ ≤ log n and 1 ≤ b′ ≤ Bk′, we get that 2 log log n + O(log log log n) bits suffice to encode
both k′ and b′. To encode a′, the marker algorithm acts as follows. Let I(v) = [α, β], and let a″
be the largest integer such that xk′ a″ ≤ α. We have a″ − Bk′ ≤ a″ − b′ because b′ ≤ Bk′. Since
I(v) ⊆ I(apex(v)), we also have xk′ (a′ + b′) ≥ β ≥ α ≥ xk′ a″. Thus a″ − b′ ≤ a′. Finally, again
since I(v) ⊆ I(apex(v)), we have xk′ a′ ≤ α, and thus a″ ≥ a′. Combining the above inequalities,
we get that a′ ∈ [a″ − Bk′, a″]. The marker algorithm now encodes the integer t ∈ [0, Bk′] such that
a′ = a″ − t. This is done by consuming another log Bk′ = log log n + O(log log log n) bits. Hence,
we obtain:
Lemma 7 The marker algorithm M assigns labels to the nodes of n-node trees in linear time, and
each label is encoded using log n + 3 log log n + O(log log log n) bits.
Proof. Let v be a node of an n-node tree T . The label of v consists of the interval I(v) plus
the additional information that will later be used to extract I(apex(v)), namely, according to the
notations above, the values of t, k′, and b′. The leftmost part of the label L(v) will be dedicated
to encoding these latter values t, k′, and b′. Each of these values can be separately encoded using
log log n + O(log log log n) bits. By adding O(log log log n) bits per value, we can describe where the
binary encoding of each of these values starts and ends. Therefore, in total, encoding these three
values together with their positioning in the label requires 3 log log n + O(log log log n) bits. Now,
since I(v) is encoded using log n + O(1) bits, each label uses log n + 3 log log n + O(log log log n)
bits.
Note that the bound on the label size in Lemma 7 includes the encoding of the positioning of
the different fields in the label, each of which is encoded using O(log log log n) bits.
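Schematically, and ignoring the self-delimitation of the fields (which the lemma accounts for with O(log log log n) extra bits), the label of v and the reconstruction of I(apex(v)) can be sketched as follows; all names, and the use of a single fixed width w for the three auxiliary fields, are our own simplifications.

def general_label(interval_bits_v, t, k_prime, b_prime, w):
    # Sketch of the label layout of Lemma 7: three auxiliary fields (t, k', b'),
    # each assumed to fit in w bits, followed by the log n + O(1) bits of I(v).
    aux = "".join(format(val, "b").zfill(w) for val in (t, k_prime, b_prime))
    return aux + interval_bits_v

def recover_apex_interval(label, w, x, interval_of):
    # Rebuild I(apex(v)) = I_{k', a', b'} with a' = a'' - t, where a'' is the
    # largest integer such that x_{k'} * a'' <= left endpoint of I(v).
    t       = int(label[0:w], 2)
    k_prime = int(label[w:2 * w], 2)
    b_prime = int(label[2 * w:3 * w], 2)
    lo_v, hi_v = interval_of(label[3 * w:])    # decode I(v) from the remaining bits
    a_dbl = int(lo_v // x[k_prime])            # a'' = floor(lo_v / x_{k'})
    a_prime = a_dbl - t
    return a_prime * x[k_prime], (a_prime + b_prime) * x[k_prime]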
4.3 The decoder algorithm D
Now, we describe our decoder algorithm D. Given the labels L(v) and L(u) assigned by M to two
different nodes v and u in some tree T , the decoder algorithm D needs to find whether v is an
ancestor of u in T . (Observe that since each node receives a distinct label, the decoder algorithm
can easily find out if u and v are in fact the same node, and, in this trivial case, it simply outputs 1.)
The decoder algorithm first extracts the three values t, k', and b' by inspecting the first 3 log log n + O(log log log n) bits of L(v), and then the interval I(v) by inspecting the remaining log n + O(1) bits of L(v). Using this information, it computes I(apex(v)). Now, given the two
intervals I(v) and I(apex(v)), as well as the interval I(u) of a node u, the decoder acts according
to the characterization stated in Lemma 6, that is: D(L(v), L(u)) = 1 if and only if at least one of
the following two conditions holds:
• I(u) ⊆ I(v);
• I(u) ⊂ I(apex(v)) and I(v) ≺ I(u).
The fact that D(L(v), L(u)) = 1 if and only if v is an ancestor of u in T follows from Lemma 6 and
the correctness of the interval assignment to the nodes of T ∗ given by Lemma 3. This establishes
the following result.
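To make the decoding procedure concrete, here is a minimal Python sketch of D, assuming the labels have already been parsed into their fields (I(v), t, k', b'). The grid values x_k and the interval order ≺ are defined earlier in the construction, so they are supplied as inputs; all names below are illustrative rather than part of the scheme.

    from typing import Callable, Dict, Tuple

    Interval = Tuple[int, int]  # [left, right]

    def contained(a: Interval, b: Interval) -> bool:
        return b[0] <= a[0] and a[1] <= b[1]          # a ⊆ b

    def apex_interval(I_v: Interval, t: int, k: int, b: int,
                      x: Dict[int, int]) -> Interval:
        # Recover I(apex(v)) from I(v) = [alpha, beta] and the extra fields (t, k', b').
        alpha = I_v[0]
        a2 = alpha // x[k]            # a'': largest integer with x_{k'} * a'' <= alpha
        a1 = a2 - t                   # a'  = a'' - t
        return (x[k] * a1, x[k] * (a1 + b))

    def decode(label_v, label_u, x: Dict[int, int],
               precedes: Callable[[Interval, Interval], bool]) -> int:
        # label_* = (I, t, k', b') as produced by the marker M; `precedes` implements ≺.
        if label_v == label_u:        # distinct nodes receive distinct labels
            return 1
        I_v, t, k, b = label_v
        I_u = label_u[0]
        I_apex_v = apex_interval(I_v, t, k, b, x)
        if contained(I_u, I_v):
            return 1
        if contained(I_u, I_apex_v) and I_u != I_apex_v and precedes(I_v, I_u):
            return 1
        return 0

Since only comparisons, one integer division, and two multiplications are involved, the sketch also reflects why each query runs in constant time.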
Lemma 8 (M, D) is a correct ancestry-labeling scheme for the family of all forests.
By combining Lemmas 7 and 8, we obtain the main result of the paper.
Theorem 3 There exists an ancestry-labeling scheme for the family of all forests, using label size
log n + 3 log log n + O(log log log n) bits for n-node forests. Moreover, given an n-node forest F , the
labels can be assigned to the nodes of F in O(n) time, and any ancestry query can be answered in
constant time.
Remark: The ancestry-labeling scheme described in this section uses labels of optimal size log n + O(log log n) bits, at the price of a decoding mechanism based on an interval condition slightly more complex than the simple interval containment condition. Although this has no impact on the
decoding time (our decoder still works in constant time), the question of whether there exists an
ancestry-labeling scheme with labels of size log n + O(log log n) bits, but using solely the interval
containment condition, is intriguing. In Section 3, we have shown that, at least in the case of trees
with bounded spine-decomposition depth, extremely small labels can indeed be coupled with an
interval containment decoder.
5 Small universal posets
Recall that an ancestry-labeling scheme for F(n) is consistent if the decoder satisfies the antisymmetry and transitivity conditions (see Section 2.2). Our next lemma relates compact consistent
ancestry-labeling schemes with small universal posets.
Lemma 9 There exists a consistent ancestry-labeling scheme for F(n) with label size ℓ if and only if, for every integer k, there exists a universal poset of size 2^{kℓ} for P(n, k).
Proof. Assume first that for every integer k there exists a universal poset (U, ⪯) of size 2^{kℓ} for P(n, k). In particular, there exists a universal poset (U, ⪯) of size 2^ℓ for P(n, 1), i.e., for the family
of n-element posets with tree-dimension 1. We construct the following ancestry-labeling scheme
(M, D) for F(n). The marker algorithm M first considers some bijective mapping φ : U → [1, |U |].
Given a rooted forest F ∈ F(n), view F as a poset whose Hasse diagram is F . I.e., F is a poset
with tree-dimension 1. Now, let ρ : F → U be a mapping that preserves the partial order of F; such a mapping exists because (U, ⪯) is universal for P(n, 1).
The marker algorithm assigns the label
L(u) = φ(ρ(u))
to each node u ∈ F . Given the labels L(u) and L(v) of two nodes u and v in some F ∈ F(n), the
decoder algorithm acts as follows:
D(L(u), L(v)) = 1 ⇐⇒ φ⁻¹(L(u)) ⪯ φ⁻¹(L(v)) .
By construction, (M, D) is a consistent ancestry-labeling scheme for F(n) with label size log |U| = ℓ.
For the other direction, assume that there exists a consistent ancestry-labeling scheme (M, D) for F(n) with label size ℓ. Note that it may be the case that D(a, b) is not defined for some a, b ∈ [1, 2^ℓ], which may happen if the marker M does not use all values in [1, 2^ℓ]. In that case, we set D(a, b) = ⊥, that is, D now has three possible outputs {0, 1, ⊥}. Let U = [1, 2^ℓ]^k. We define a relation ⪯ between pairs of elements in U. For two elements u, v ∈ U, where u = (u1, u2, . . . , uk) and v = (v1, v2, . . . , vk), we set
u ⪯ v ⇐⇒ ∀i ∈ [1, k], D(ui, vi) = 1 .
Recall that every poset in P(n, k) is the intersection of k tree-posets, each of which has a Hasse diagram that is a forest. Since the ancestry relation in those forests precisely captures the ordering in each forest, it follows that (U, ⪯) is a universal poset for P(n, k). The lemma follows.
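To make the second direction of the proof concrete, the following sketch (ours, and only practical for tiny parameters) builds the product set U = [1, 2^ℓ]^k and its coordinate-wise order from a given decoder D; the decoder passed in is assumed to come from a consistent ancestry-labeling scheme with label size ℓ.

    import itertools

    def universal_poset_from_decoder(decoder, ell: int, k: int):
        # U = [1, 2^ell]^k; u ⪯ v iff D(u_i, v_i) = 1 in every coordinate i.
        U = list(itertools.product(range(1, 2 ** ell + 1), repeat=k))

        def leq(u, v):
            return all(decoder(a, b) == 1 for a, b in zip(u, v))

        return U, leq

The anti-symmetry and transitivity of the decoder (its consistency) carry over coordinate-wise to leq, which is exactly what the proof above exploits.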
For the purpose of constructing a small universal poset for P(n, k), let us revise the ancestry-labeling scheme (M, D) for F(n) given by Theorem 3, in order to make it consistent. We define a new decoder algorithm D′ as follows. Recall that the marker M assigns to each node u a label L(u) that encodes the pair (I(u), I(apex(u))). Given two labels L(v) and L(u), we set the modified decoder as follows: D′(L(v), L(u)) = 1 if and only if at least one of the following three conditions holds:
• [D0]: L(u) = L(v);
• [D1]: I(u) ⊂ I(v) and I(apex(u)) ⊆ I(v);
• [D2]: I(u) ⊂ I(apex(v)), I(v) ≺ I(u), and I(apex(u)) ⊆ I(apex(v)).
Lemma 10 (M, D′) is a consistent ancestry-labeling scheme for F(n) whose label size is log n + 3 log log n + O(log log log n) bits.
Proof. The required label size of (M, D′) follows directly from Theorem 3. The correctness of the ancestry-labeling scheme (M, D′) follows from the observation that the conditions added to the definition of the decoder D are redundant when v and u belong to the same forest. To see why this observation holds, we just need to consider the situation in which v is an ancestor of u in some tree, and check that D′(L(v), L(u)) = 1. By the correctness of D, since v is an ancestor of u, we get that either
• I(u) ⊆ I(v), or
• I(u) ⊂ I(apex(v)) and I(v) ≺ I(u).
– If I(u) ⊆ I(v), then either I(u) = I(v) or I(u) ⊂ I(v). In the former case, we actually have u = v because u and v are in the same tree, and thus D0 holds. In the latter case, u is a strict descendant of v in the folding decomposition T∗. By Lemma 4, apex(u) is a descendant of v in T∗, implying that D1 holds.
– If I(u) ⊂ I(apex(v)) and I(v) ≺ I(u), then the strict inclusion I(u) ⊂ I(apex(v)) already implies I(apex(u)) ⊆ I(apex(v)), and hence D2 holds.
It remains to show that (M, D′) is consistent. To establish the anti-symmetry property, let ℓ1 and ℓ2 be two different labels in the domain of M, and assume that D′(ℓ1, ℓ2) = 1. We show that D′(ℓ2, ℓ1) = 0. Let u and v be two nodes, not necessarily in the same forest, such that L(v) = ℓ1 and L(u) = ℓ2. Since ℓ1 ≠ ℓ2, we have that either D1 or D2 holds. Therefore, either I(u) ⊂ I(v) or I(v) ≺ I(u). This implies that D′(L(u), L(v)) = 0, by the anti-symmetry of both relations ⊂ and ≺.
To establish the transitivity property, let ℓ1, ℓ2 and ℓ3 be three pairwise different labels in the domain of M, and assume that D′(ℓ1, ℓ2) = 1 and D′(ℓ2, ℓ3) = 1. We show that D′(ℓ1, ℓ3) = 1. Let u, v, and w be three nodes, not necessarily in the same forest, such that L(w) = ℓ1, L(v) = ℓ2, and L(u) = ℓ3. Since the three labels are pairwise different, condition D0 does not hold, and thus we concentrate the discussion on D1 and D2. In other words, we must have the situation:
• [D1(v, u)]: I(u) ⊂ I(v) and I(apex(u)) ⊆ I(v); or
• [D2(v, u)]: I(u) ⊂ I(apex(v)), I(v) ≺ I(u), and I(apex(u)) ⊆ I(apex(v)).
and
• [D1(w, v)]: I(v) ⊂ I(w) and I(apex(v)) ⊆ I(w); or
• [D2(w, v)]: I(v) ⊂ I(apex(w)), I(w) ≺ I(v), and I(apex(v)) ⊆ I(apex(w)).
We examine each of the four combinations of the above conditions, and show that, for each of them,
at least one of the following two conditions holds:
• [D1(w, u)]: I(u) ⊂ I(w) and I(apex(u)) ⊆ I(w);
• [D2(w, u)]: I(u) ⊂ I(apex(w)), I(w) ≺ I(u), and I(apex(u)) ⊆ I(apex(w)).
– Case 1.1: [D1(v, u)] and [D1(w, v)] hold. We get immediately that I(u) ⊂ I(w), and I(apex(u)) ⊆
I(w) by transitivity of ⊆ and ⊂, and thus [D1(w, u)] holds.
– Case 1.2: [D1(v, u)] and [D2(w, v)] hold. We show that [D2(w, u)] holds. First, we have
I(u) ⊂ I(v) ⊂ I(apex(w)). Second, I(u) ⊂ I(v) and I(w) ≺ I(v), so I(w) ≺ I(u). Finally,
I(apex(u)) ⊆ I(v) ⊆ I(apex(v)) ⊆ I(apex(w)). Thus [D2(w, u)] holds.
– Case 2.1: [D2(v, u)] and [D1(w, v)] hold. We show that [D1(w, u)] holds. First, I(u) ⊂ I(apex(v)) ⊆ I(w). Second, I(apex(u)) ⊆ I(apex(v)) ⊆ I(w). Thus [D1(w, u)] holds.
– Case 2.2: [D2(v, u)] and [D2(w, v)] hold. We show that [D2(w, u)] holds. First, I(u) ⊂
I(apex(v)) ⊆ I(apex(w)). Second, I(w) ≺ I(v) ≺ I(u). Finally, I(apex(u)) ⊆ I(apex(v)) ⊆
I(apex(w)). Thus [D2(w, u)] holds.
This completes the proof of the lemma.
By combining Lemma 9 and Lemma 10, we obtain the following.
Theorem 4 For every integer k, there exists a universal poset of size O(n^k log^{3k+o(k)} n) for P(n, k).

6 Further work
In this paper, we solved the ancestry-labeling scheme problem. By now, the area of informative labeling schemes is quite well understood in the deterministic setting. In particular, for many of the classical problems, tight bounds on the label size have been established (see Section 1.2).
In contrast, the randomized framework, initiated in [21], remains widely open. We conclude this
paper by mentioning one open problem in the framework of randomized ancestry-labeling schemes
in order to illustrate the potential of randomization in this domain.
We describe a simple one-sided error ancestry-labeling scheme with label size ⌈log n⌉ bits. The
scheme guarantees that if u is an ancestor of v, then, given the corresponding labels Lu and Lv , the
decoder is correct, that is, D(Lu , Lv ) = 1, with probability 1; and if u is not an ancestor of v then
the decoder is correct, that is, D(Lu , Lv ) = 0, with probability at least 1/2. The scheme operates
as follows. Given a tree T , the marker first randomly chooses, for every node v, an ordering of v’s
children in a uniform manner, i.e., every ordering of the children of v has the same probability to
be selected by the marker. Then, according to the resulting orderings, the marker performs a DFS traversal that starts at the root r, and labels each node it visits by its DFS number. Given two labels i, j, the decoder outputs D(i, j) = 1 if i < j, and D(i, j) = 0 otherwise.
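A minimal Python sketch of this randomized marker and decoder, assuming the tree is given as a children adjacency map with a known root; the names are ours.

    import random

    def random_dfs_labels(children, root):
        # Marker: shuffle every node's children uniformly at random, then label
        # the nodes by their DFS (preorder) numbers with respect to that ordering.
        labels, counter, stack = {}, 0, [root]
        while stack:
            v = stack.pop()
            labels[v] = counter
            counter += 1
            kids = list(children.get(v, []))
            random.shuffle(kids)           # uniform random ordering of the children
            stack.extend(reversed(kids))   # so they are visited in the shuffled order
        return labels

    def decode(label_u, label_v):
        # One-sided error answer to "is u an ancestor of v?"
        return 1 if label_u < label_v else 0

If u is an ancestor of v, its preorder number is always smaller, so the decoder never errs in that case; for incomparable nodes the shuffling makes each relative visiting order equally likely, which is precisely the 1/2 guarantee analyzed next.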
Let u and v be two nodes in T , and let Lu = dfs(u) and Lv = dfs(v) denote their labels. If u is an
ancestor of v, then Lu < Lv , no matter which orderings were chosen by the marker. Thus, in this
case, D(Lu , Lv ) = 1 with probability 1. Now, if u is not an ancestor of v, we consider two cases.
First, if u is a descendant of v then Lu > Lv and therefore D(Lu , Lv ) = 0. If, however, u and v
are non-comparable, i.e., neither one is an ancestor of the other, then the probability that the DFS
tour visited u before it visited v is precisely 1/2, i.e., Pr[Lu < Lv ] = 1/2. Hence the guarantee for
correctness in this case is 1/2. It was also shown in [21] that a one-sided error ancestry-labeling
scheme with constant guarantee must use labels of size log n − O(1). An interesting open question
is whether for every constant p, where 1/2 ≤ p < 1, there exists a one-sided error ancestry-labeling
scheme with guarantee p that uses labels of size log n + O(1).
Acknowledgments. The authors are very thankful to William T. Trotter, Sundar Vishwanathan
and Jean-Sebastien Séreni for helpful discussions.
References
[1] S. Abiteboul, S. Alstrup, H. Kaplan, T. Milo and T. Rauhe. Compact labeling schemes for
ancestor queries. SIAM Journal on Computing 35, (2006), 1295–1309.
[2] S. Abiteboul, P. Buneman and D. Suciu. Data on the Web: From Relations to Semistructured
Data and XML. Morgan Kaufmann, (1999).
[3] S. Abiteboul, H. Kaplan, and T. Milo. Compact labeling schemes for ancestor queries. In
Proc. 12th ACM-SIAM Symp. on Discrete Algorithms (SODA), 2001.
[4] D. Adjiashvili and N. Rotbart. Labeling Schemes for Bounded Degree Graphs. ICALP, 375-386,
2014.
[5] S. Alstrup, P. Bille and T. Rauhe. Labeling Schemes for Small Distances in Trees. In Proc.
14th ACM-SIAM Symp. on Discrete Algorithms (SODA), 2003.
[6] S. Alstrup, C. Gavoille, H. Kaplan and T. Rauhe. Nearest Common Ancestors: A Survey and
a new Distributed Algorithm. Theory of Computing Systems 37, (2004), 441–456.
[7] S. Alstrup, S. Dahlgaard, M. Baek, and T. Knudsen. Optimal induced universal graphs and
adjacency labeling for trees. Arxiv report: CoRR abs/1504.02306 (2015).
[8] S. Alstrup, E. B. Halvorsen, and K. G. Larsen. Near-optimal labeling schemes for nearest
common ancestors. In Proc. 25th ACM-SIAM Symposium on Discrete Algorithms (SODA),
972-982, 2014.
[9] S. Alstrup, H. Kaplan, M. Thorup and U. Zwick. Adjacency labeling schemes and induced-universal graphs. In 47th Annual Symposium on the Theory of Computing (STOC), 2015.
[10] S. Alstrup and T. Rauhe. Small induced-universal graphs and compact implicit graph representations. In Proc. 43rd IEEE Symp. on Foundations of Computer Science (FOCS), 2002.
[11] S. Alstrup and T. Rauhe. Improved labeling scheme for ancestor queries. In Proc. 13th ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 947-953, 2002.
[12] N. Alon and E. R. Scheinerman. Degrees of Freedom Versus Dimension for Containment
Orders. In Order 5 (1988), 11–16.
[13] G. Behrendt. On trees and tree dimension of ordered sets. Order 10(2), (1993), 153–160.
[14] E. Cohen, H. Kaplan, and T. Milo. Labeling dynamic XML trees. In Proc. 21st ACM Symp.
on Principles of Database Systems (PODS), 2002.
[15] S. Dahlgaard, M. Baek, T. Knudsen, and N. Rotbart. Improved ancestry labeling scheme for
trees. Arxiv report: CoRR abs/1407.5011 (2014).
[16] L. Denoyer. Personal communication, 2009.
[17] L. Denoyer and P. Gallinari. The Wikipedia XML corpus. SIGIR Forum Newsletter 40(1):
64-69 (2006)
[18] A. Deutsch, M. Fernández, D. Florescu, A. Levy and D. Suciu. A Query Language for XML.
Computer Networks 31, (1999), 1155-1169.
[19] R. Fraïssé. Theory of relations. North-Holland, Amsterdam, 1953.
[20] P. Fraigniaud and C. Gavoille. Routing in trees. In Proc. 28th Int. Colloq. on Automata,
Languages, and Programing (ICALP), LNCS 2076, pages 757–772, Springer, 2001.
[21] P. Fraigniaud and A. Korman. On randomized representations of graphs using short labels. In
Proc. of the 21st Ann. ACM Symp. on Parallel Algorithms and Architectures (SPAA),
2009, pages 131–137.
[22] P. Fraigniaud and A. Korman. Compact Ancestry Labeling Schemes for XML Trees. In Proc.
21st ACM-SIAM Symp. on Discrete Algorithms (SODA), 2010, pages 458–466.
[23] P. Fraigniaud and A. Korman. An optimal ancestry-labeling scheme and small universal posets.
In Proc. of the 42nd ACM Symp. on Theory of Computing (STOC), 2010, pages 611–620.
[24] C. Gavoille and A. Labourel. Shorter implicit representation for planar graphs and bounded
treewidth graphs. In 15th Annual European Symposium on Algorithms (ESA), pages 582–593,
2007.
[25] C. Gavoille and C. Paul. Split decomposition and distance labelling: an optimal scheme for
distance hereditary graphs. In Proc. European Conf. on Combinatorics, Graph Theory and
Applications, Sept. 2001.
[26] C. Gavoille, D. Peleg, S. Pérennes and R. Raz. Distance labeling in graphs. In Proc. 12th
ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 210–219, 2001.
[27] T. Hsu and H. Lu. An Optimal Labeling for Node Connectivity. In Proc. 20th International
Symposium on Algorithms and Computation (ISAAC), 303–310, 2009.
[28] J. Hubička and J. Nešetřil. Universal partial order represented by means of oriented trees and
other simple graphs. Euro. J. of Combinatorics 26(5), (2005), 765 – 778.
[29] B. Jónsson. Universal relational systems. Math. Scand. 4, (1956), 193–208.
[30] J. B. Johnston. Universal infinite partially ordered sets. Proc. Amer. Math. Soc. 7, (1957),
507–514.
[31] M. Katz, N.A. Katz, A. Korman, and D. Peleg. Labeling schemes for flow and connectivity.
SIAM Journal on Computing 34 (2004),23–40.
[32] A. Korman. Labeling Schemes for Vertex Connectivity. ACM Transactions on Algorithms,
6(2) (2010).
[33] A. Korman and S. Kutten. Distributed Verification of Minimum Spanning Trees. Distributed
Computing 20(4): 253-266 (2007).
[34] H. Kaplan and T. Milo. Short and simple labels for small distances and other functions. In
Workshop on Algorithms and Data Structures, Aug. 2001.
[35] H. Kaplan, T. Milo and R. Shabo. A Comparison of Labeling Schemes for Ancestor Queries.
In Proc. 19th ACM-SIAM Symp. on Discrete Algorithms (SODA), 2002.
[36] S. Kannan, M. Naor, and S. Rudich. Implicit representation of graphs. SIAM J. on Discrete
Math 5, (1992), 596–603.
[37] A. Korman, D. Peleg, and Y. Rodeh. Constructing Labeling Schemes Through Universal
Matrices. Algorithmica 57(4): 641–652 (2010).
[38] L. Mignet, D. Barbosa and P. Veltri. Studying the XML Web: Gathering Statistics from an
XML Sample. World Wide Web 8(4), (2005), 413–438.
[39] I. Mlynkova, K. Toman and J. Pokorny. Statistical Analysis of Real XML Data Collections.
In Proc. 13th Int. Conf. on Management of Data, (2006), 20–31.
[40] D. Peleg. Informative labeling schemes for graphs. Theoretical Computer Science 340(3),
(2005), 577–593.
[41] D. D. Sleator and R. E. Tarjan. A data structure for dynamic trees. J. Comp. and Sys. Sci.,
26(3), 362–391, 1983.
[42] M. Thorup. Compact oracles for reachability and approximate distances in planar digraphs.
J. of the ACM 51, (2004), 993–1024.
[43] M. Thorup and U. Zwick. Compact routing schemes. In Proc. 13th ACM Symp. on Parallel
Algorithms and Architecture (SPAA), pages 1–10, 2001.
[44] W. T. Trotter. Combinatorics and partially ordered sets: Dimension theory. The Johns
Hopkins University Press, (1992).
[45] W.T. Trotter and J. I. Moore. The dimension of planar posets. J. Comb. Theory B 22, (1977),
54–67.
[46] E.S. Wolk. The comparability graph of a tree. Proc. Amer. Math. Soc. 3, (1962), 789–795.
[47] W3C. Extensible Markup Language (XML) 1.0. http://www.w3.org/TR/REC-xml.
[48] W3C. Extensible Stylesheet Language (XSL) 1.0. http://www.w3.org/Style/XSL/.
[49] W3C. XSL Transformations (XSLT) specification. http://www.w3.org/TR/WD-xslt.
| 8 |
Offline and Online Models of Budget Allocation for
Maximizing Influence Spread
arXiv:1508.01059v3 [cs.DS] 12 Mar 2018
Noa Avigdor-Elgrabli∗
Gideon Blocq†
Iftah Gamzu‡
Ariel Orda§
Abstract
The study of influence propagation in social networks via word-of-mouth processes has been given
considerable attention in recent years. Arguably, the most fundamental problem in this domain is the
influence maximization problem, where the goal is to identify a small seed set of individuals that can
trigger a large cascade of influence in the network. While there has been significant progress regarding this
problem and its variants, one basic shortcoming of the underlying models is that they lack the flexibility in
the way the overall budget is allocated to different individuals. Indeed, budget allocation is a critical issue
in advertising and viral marketing. Taking the other point of view, known models allowing flexible budget
allocation do not take into account the influence spread in social networks. We introduce a generalized
model that captures both budgets and influence propagation simultaneously.
For the offline setting, we identify a large family of natural budget-based propagation functions that
admits a tight approximation guarantee. This family extends most of the previously studied influence
models, including the well-known Triggering model. We establish that any function in this family implies
an instance of a monotone submodular function maximization over the integer lattice subject to a knapsack
constraint. This problem is known to admit an optimal 1 − 1/e ≈ 0.632-approximation. We also study the
price of anarchy of the multi-player game that extends the model and establish tight results.
For the online setting, in which an unknown subset of agents arrive in a random order and the algorithm
needs to make an irrevocable budget allocation in each step, we develop a 1/(15e) ≈ 0.025-competitive
algorithm. This setting extends the celebrated secretary problem, and its variant, the submodular knapsack
secretary problem. Notably, our algorithm improves over the best known approximation for the latter
problem, even though it applies to a more general setting.
∗ Yahoo Labs, Haifa, Israel. Email: [email protected].
† Technion, Haifa, Israel. Email: [email protected]. This work was done when the author was an intern at Yahoo Labs, Haifa, Israel.
‡ Amazon, Israel. Email: [email protected]. This work was done when the author was affiliated with Yahoo Labs, Haifa, Israel.
§ Technion, Haifa, Israel. Email: [email protected].
1 Introduction
The study of information and influence propagation in societies has received increasing attention for several
decades in various areas of research. Recently, the emergence of online social networks brought forward
many new questions and challenges regarding the dynamics by which information, ideas, and influence spread
among individuals. One central algorithmic problem in this domain is the influence maximization problem,
where the goal is to identify a small seed set of individuals that can trigger a large word-of-mouth cascade of
influence in the network. This problem has been posed by Domingos and Richardson [13, 36] in the context
of viral marketing. The premise of viral marketing is that by targeting a few influential individuals as initial
adopters of a new product, it is possible to trigger a cascade of influence in a social network. Specifically,
those individuals are assumed to recommend the product to their friends, who in turn recommend it to their
friends, and so on.
The influence maximization problem was formally defined by Kempe, Kleinberg and Tardos [26, 27].
In this setting, we are given a social network graph, which represents the individuals and the relationships
between them. We are also given an influence function that captures the expected number of individuals that
become influenced for any given subset of initial adopters. Given some budget b, the objective is to find a
seed set of b initial adopters that will maximize the expected number of influenced individuals. Kempe et
al. studied several operational models representing the step-by-step dynamics of propagation in the network,
and analyzed the influence functions that are derived from them. While there has been significant progress
regarding those models and related algorithmic issues, one shortcoming that essentially has not been treated
is the lack of flexibility in the way that the budget is allocated to the individuals. Indeed, budget allocation is
a critical factor in advertising and viral marketing. This raises some concerns regarding the applicability of
current techniques.
Consider the following scenario as a motivating example. A new daily deals website is interested in
increasing its exposure to new audience. Consequently, it decides to provide discounts to individuals who are
willing to announce their purchases in a social network. The company has several different levels of discounts
that it can provide to individuals to incentivize them to better communicate their purchases, e.g., making
more enthusiastic announcements on their social network. The company hopes that those announcements
will motivate the friends of the targeted individuals to visit the website, so a word-of-mouth process will
be created. The key algorithmic question for this company is to whom it should offer a discount, and what amount of discount should be offered to each individual.
Alon et al. [1] have recently identified the insufficiency of existing models to deal with budgets. They
introduced several new models that capture issues related to budget distribution among potential influencers
in a social network. One main caveat in their models is that they do not take into account the influence
propagation that happens in the network. The main aspect of our work targets this issue.
1.1 Our results
We introduce a generalized model that captures both budgets and influence propagation simultaneously. Our
model combines design decisions taken from both the budget models [1] and propagation models [26]. The
model interprets the budgeted influence propagation as a two-stage process consisting of: (1) influence related
directly to the budget allocation, in which the seed set of targeted individuals influence their friends based on
their budget, and (2) influence resulting from a secondary word-of-mouth process in the network, in which
no budgets are involved. Note that the two stages give rise to two influence functions whose combination
designates the overall influence function of the model. We study our model in both offline and online settings.
An offline setting. We identify a large family of natural budget-based propagation functions that admits a
tight approximation guarantee. Specifically, we establish sufficient properties for the two influence functions
1
mentioned above, which lead to a resulting influence function that is both monotone and submodular. It is important to emphasize that the submodularity of the combined function is not the classical set-submodularity,
but rather, a generalized version of submodularity over the integer lattice. Crucially, when our model is associated with such an influence function, it can be interpreted as a special instance of a monotone submodular
function maximization over the integer lattice subject to a knapsack constraint. This problem is known to
have an efficient algorithm whose approximation ratio is 1 − 1/e ≈ 0.632, which is best possible under the assumption that P ≠ NP [40]. We then focus on the social network scenario, and introduce a natural budget-based
influence propagation model that we name Budgeted Triggering. This model extends many of the previously
studied influence models in networks. Most notably, it extends the well-known Triggering model [26], which
in itself generalizes several models such as the Independent Cascade and the Linear Threshold models. We
analyze this model within the two-stage framework mentioned above, and demonstrate that its underlying
influence function is monotone and submodular. Consequently, we can approximate this model to within a
factor of 1 − 1/e. We also consider a multi-player game that extends our model. In this game, there are
multiple players, each of which is interested in spending her budget in a way that maximizes her own network
influence. We establish that the price of anarchy (PoA) of this game is equal to 2. This result is derived by
extending the definition of a monotone utility game on the integer lattice [32]. Specifically, we show that one
of the conditions of the utility game can be relaxed, while still maintaining a PoA of at most 2, and that the
refined definition captures our budgeted influence model.
An online setting. In the online setting, there is an unknown subset of individuals that arrive in a random order.
Whenever an individual arrives, the algorithm learns the marginal influence for each possible budget assignment, and needs to make an irrevocable decision regarding the allocation to that individual. This allocation
cannot be altered later on. Intuitively, this setting captures the case in which there is an unknown influence
function that is partially revealed with each arriving individual. Similarly to before, we focus on the case that
the influence function is monotone and submodular. Note that this setting is highly-motivated in practice. As
observed by Seeman and Singer [37], in many cases of interest, online merchants can only apply marketing
techniques on individuals who have engaged with them in some way, e.g., visited their online store. This
gives rise to a setting in which only a small unknown sample of individuals from a social network arrive in
an online fashion. We identify that this setting generalizes the submodular knapsack secretary problem [5],
which in turn, extends the well-known secretary problem [14]. We develop a 1/(15e) ≈ 0.025-competitive
algorithm for the problem. Importantly, our results not only apply to a more general setting, but also improve
the best known competitive bound for the former problem, which is 1/(20e) ≈ 0.018, due to Feldman, Naor
and Schwartz [17].
1.2 Related work
Models of influence spread in networks are well-studied in social science [22] and marketing literature [18].
Domingos and Richardson [13, 36] were the first to pose the question of finding influential individuals who
will maximize adoption through a word-of-mouth effect in a social network. Kempe, Kleinberg and Tardos [26, 27] formally modeled this question, and proved that several important models have submodular influence functions. Subsequent research have studied extended models and their characteristics [29, 34, 6, 24, 8,
39, 15, 37, 28, 12], and developed techniques for inferring influence models from observable data [20, 19, 31].
Influence maximization with multiple players has also been considered in the past [6, 21, 25]. Kempe et
al. [26], and very recently, Yang et al. [43], studied propagation models that have a similar flavor to our budgeted setting. We like to emphasize that there are several important distinctions between their models and
ours. Most importantly, their models assume a strong type of fractional diminishing returns property that our
integral model does not need to satisfy. Therefore, their models cannot capture the scenarios we describe.
The reader may refer to the cited papers and the references therein for a broader review of the literature.
Alon et al. [1] studied models of budget allocation in social networks. As already mentioned, our model
follows some of the design decisions in their approach. For example, their models support constraints on
the amount of budget that can be assigned to any individual. Such constraints are motivated by practical
marketing conditions set by policy makers and regulations. Furthermore, their models focus on a discrete
integral notion of a budget, which is consistent with common practices in many organizations (e.g., working
in multiplications of some fixed value) and related simulations [38]. We include those considerations in our
model as well. Alon et al. proved that one of their models, namely, the budget allocation over bipartite
influence model, admits an efficient (1 − 1/e)-approximation algorithm. This result was extended by Soma
et al. [40] to the problem of maximizing a monotone submodular function over the integer lattice subject to a
knapsack constraint. The algorithm for the above problems is reminiscent of the algorithm for maximizing a
monotone submodular set function subject to a knapsack constraint [41]. Note that none of those papers have
taken into consideration the secondary propagation process that occurs in social networks.
The classical secretary problem was introduced more than 50 years ago (e.g., [14]). Since its introduction,
many variants and extensions of that problem have been proposed and analyzed [30, 2, 3, 4]. The problem that
is closest to the problem implied from our online model is the submodular knapsack secretary problem [5,
23, 17]. An instance of this problem consists of a set of n secretaries that arrive in a random order, each of
which has some intrinsic cost. An additional ingredient of the input is a monotone submodular set function
that quantifies the value gained from any subset of secretaries. The objective is to select a set of secretaries
of maximum value under the constraint that their overall cost is no more than a given budget parameter. Note
that our model extends this setting by having a more general influence function that is submodular over the
integer lattice. Essentially, this adds another layer of complexity to the problem as we are not only required
to decide which secretaries to select, but we also need to assign them budgets.
2 Preliminaries
We begin by introducing a very general budgeted influence propagation model. This model will be specialized
later when we consider the offline and online settings. In our model, there is a set of n agents and an influence
function f : Nn → R+ . Furthermore, there is a capacity vector c ∈ Nn+ and a budget B ∈ N+ . Our objective
is to compute a budget assignment to the agents b ∈ Nn , which maximizes the influence f (b). The vector b must (1) respect the capacities, that is, 0 ≤ bi ≤ ci for every i ∈ [n], and (2) respect the total budget, namely, Σ_{i=1}^n bi ≤ B. In the following, we assume without loss of generality that each ci ≤ B.
We primarily focus on influence functions that maintain the properties of monotonicity and submodularity.
A function f : Nn → R+ is called monotone if f (x) ≤ f (y) whenever x ≤ y coordinate-wise, i.e., xi ≤ yi ,
for every i ∈ [n]. The definition of submodularity for functions over the integer lattice is a natural extension
of the classical definition of submodularity over sets (or boolean vectors):
Definition 2.1. A function f : Nn → R+ is said to be submodular over the integer lattice if f (x) + f (y) ≥
f (x ∨ y) + f (x ∧ y), for all integer vectors x and y, where x ∨ y and x ∧ y denote the coordinate-wise maxima
and minima, respectively. Specifically, (x ∨ y)i = max{xi , yi } and (x ∧ y)i = min{xi , yi }.
In the remainder of the paper, we abuse the term submodular to describe both set functions and functions
over the integer lattice. We also make the standard assumption of a value oracle access for the function f .
A value oracle for f allows us to query about f (x), for any vector x. The question of how to compute the
function f in an efficient (and approximate) way has spawned a large body of work in the context of social
networks (e.g., [26, 10, 9, 33, 11, 7]).
Notice that for the classical case of sets, the submodularity condition implies that f (S) + f (T ) ≥ f (S ∪
T ) + f (S ∩ T ), for every S, T ⊆ [n], and the monotonicity property implies that f (S) ≤ f (T ) if S ⊆ T .
An important distinction between the classical set setting and the integer lattice setting can be seen when we
consider the diminishing marginal returns property. This property is an equivalent definition of submodularity
of set functions, stating that f (S ∪ {i}) − f (S) ≥ f (T ∪ {i}) − f (T ), for every S ⊆ T and every i ∉ T.
However, this property, or more accurately, its natural extension, does not characterize submodularity over
the integer lattice, as observed by Soma et al. [40]. Indeed, there are simple examples of a submodular
function f for which
f (x + χi ) − f (x) ≥ f (x + 2χi ) − f (x + χi )
does not hold. Here, χi is the characteristic vector of the set {i}, so that x + kχi corresponds to an update of
x by adding an integral budget k to agent i. Note that a weaker variant of the diminishing marginal returns
does hold for submodular functions over the integer lattice.
Lemma 2.2 (Lem 2.2 [40]). Let f be a monotone submodular function over the integer lattice. For any
i ∈ [n], k ∈ N, and x ≤ y, it follows that
f (x ∨ kχi ) − f (x) ≥ f (y ∨ kχi ) − f (y) .
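To make these lattice notions concrete, here is a small Python sketch (ours) that brute-force checks Definition 2.1 over a capacity box, and illustrates, for n = 1, a monotone lattice-submodular function whose marginal gains increase, i.e., the naive diminishing-returns inequality above fails even though the weaker property of Lemma 2.2 still applies.

    import itertools

    def join(x, y):   # x ∨ y, coordinate-wise maximum
        return tuple(max(a, b) for a, b in zip(x, y))

    def meet(x, y):   # x ∧ y, coordinate-wise minimum
        return tuple(min(a, b) for a, b in zip(x, y))

    def is_lattice_submodular(f, caps):
        # Check f(x) + f(y) >= f(x ∨ y) + f(x ∧ y) over the box [0,c_1] x ... x [0,c_n].
        box = list(itertools.product(*[range(c + 1) for c in caps]))
        return all(f(x) + f(y) >= f(join(x, y)) + f(meet(x, y))
                   for x in box for y in box)

    # For n = 1 the condition always holds with equality (join/meet are just max/min),
    # yet marginal gains may increase: here f(1) - f(0) = 1 < f(2) - f(1) = 2.
    f = lambda x: {0: 0, 1: 1, 2: 3}[x[0]]
    assert is_lattice_submodular(f, caps=(2,))
    assert f((1,)) - f((0,)) < f((2,)) - f((1,))   # naive diminishing returns fails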
3 An Offline Model
In this section, we study the offline version of the budgeted influence propagation model. As already noted, we
consider budgeted influence propagation to be a two-stage process consisting of (1) direct influence related to
the budget assignment, followed by (2) influence related to a propagation process in the network. In particular,
in the first stage, the amount of budget allocated to an individual determines her level of effort and success
in influencing her direct friends. This natural assumption is consistent with previous work [1]. Then, in the
second stage, a word-of-mouth propagation process takes place in which additional individuals in the network
may become affected. Note that the allocated budgets do not play a role at this stage.
We identify a large family of budget-based propagation functions that admit an efficient solution. Specifically, we first identify sufficient properties of the influence functions of both stages, which give rise to a
resulting (combined) influence function that is monotone and submodular. Consequently, our model can be
interpreted as an instance of a monotone submodular function maximization over the integer lattice subject
to a knapsack constraint. This problem is known to have an efficient (1 − 1/e)-approximation [40], which
is best possible under the assumption that P ≠ NP. This NP-hardness bound of 1 − 1/e already holds for the
special case of maximum coverage [16, 1].
We subsequently develop a natural model of budgeted influence propagation in social networks that we
name Budgeted Triggering. This model generalizes many settings, including the well-known Triggering
model. Note that the Triggering model already extends several models used to capture the spread of influence
in networks, like the Independent Cascade, Linear Threshold, and Listen Once models [26]. We demonstrate
that the influence function defined by this model is monotone and submodular, and thus, admits an efficient
(1 − 1/e)-approximation. Technically, we achieve this result by demonstrating that the two-stage influence
functions that underlie this model satisfy the sufficient properties mentioned above.
Finally, we study an extension of the Budgeted Triggering model to a multi-player game. In this game,
there are multiple self-interested players (e.g., advertisers), each of which is interested in spending her budget in
a way that maximizes her own network influence. We establish that the price of anarchy (PoA) of the game is
exactly 2. In fact, we prove that this result holds for a much more general type of games. Maehara et al. [32]
recently defined the notion of a monotone utility game on the integer lattice, and demonstrated that its PoA
is at most 2. Their utility game definition does not capture our multi-player game. We show that one of the
conditions in their game definition can be relaxed while still maintaining the same PoA. Crucially, this relaxed
definition captures our model.
3.1 A two-stage influence composition
The two-stage process can be formally interpreted as a composition of two influence functions, f = h◦g. The
first function g : Nn → {0, 1}n captures the set of influenced agents for a given budget allocation, while the
second function h : {0, 1}n → R+ captures the overall number (or value) of influenced agents, given some
seed agent set for a propagation process. In particular, the influenced agents of the first stage are the seed set
for the second stage. We next describe sufficient conditions for the functions g and h which guarantee that
their composition is monotone and submodular over the integer lattice. Note that we henceforth use notation
related to sets and their binary vector representation interchangeably.
Definition 3.1. A function g : Nn → {0, 1}n is said to be coordinate independent if it satisfies g(x ∨ y) ≤
g(x) ∨ g(y), for any x, y ∈ Nn .
Definition 3.2. A function g : Nn → {0, 1}n is said to be monotone if g(x) ≤ g(y) coordinate-wise whenever
x ≤ y coordinate-wise.
Many natural influence functions are coordinate independent. One such example is the family of functions
in which the output vector is a coordinate-wise disjunction over a set of n vectors, each of which captures the
independent influence implied by some agent. Specifically, the ith vector in the disjunction is the result of
some function fi : N → {0, 1}n indicating the affected agents as a result of any budget allocation assigned
only to agent i. We are now ready to prove our composition lemma.
Lemma 3.3. Given a monotone coordinate independent function g : Nn → {0, 1}n and a monotone submodular function h : {0, 1}n → R+ , the composition f = h ◦ g : Nn → R+ is a monotone submodular function
over the integer lattice.
Proof. The coordinate independence property of g and the monotonicity of h imply that
h(g(x) ∨ g(y)) ≥ h(g(x ∨ y)).
In addition, from the monotonicity of g we know that g(x ∧ y) ≤ g(x) and g(x ∧ y) ≤ g(y). Thus, together
with the monotonicity of h, we get that
h(g(x) ∧ g(y)) ≥ h(g(x ∧ y)).
Utilizing the above results, we attain that f is submodular since
f (x) + f (y) = h(g(x)) + h(g(y))
≥ h(g(x) ∨ g(y)) + h(g(x) ∧ g(y))
≥ h(g(x ∨ y)) + h(g(x ∧ y))
= f (x ∨ y) + f (x ∧ y) ,
where the first inequality is by the submodularity of h.
We complete the proof by noting that f is monotone since both g and h are monotone. Formally, given
x ≤ y then it follows that f (x) = h(g(x)) ≤ h(g(y)) = f (y) by h’s monotonicity and since g(x) ≤ g(y) by
g’s monotonicity.
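As a toy illustration of Lemma 3.3 and the disjunction family described before the lemma, the sketch below builds g as a coordinate-wise union of per-agent influence sets and composes it with an additive (hence monotone and submodular) valuation h; the concrete maps are our own illustrative choices, not part of the model.

    def g(budgets, per_agent_influence):
        # per_agent_influence(i, k): set of agents influenced when agent i receives budget k.
        # Taking the union over agents is coordinate independent; if each per-agent map is
        # monotone in k, then g is monotone as well.
        influenced = set()
        for i, k in enumerate(budgets):
            influenced |= per_agent_influence(i, k)
        return influenced

    def h(seed, weights):
        # An additive (modular, hence monotone and submodular) valuation of the seed set.
        return sum(weights[v] for v in seed)

    def f(budgets, per_agent_influence, weights):
        # By Lemma 3.3, the composition f = h ∘ g is monotone and submodular
        # over the integer lattice.
        return h(g(budgets, per_agent_influence), weights)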
As a corollary of the lemma, we get the following theorem.
Theorem 3.4. Given a monotone coordinate independent function g : Nn → {0, 1}n and a monotone submodular function h : {0, 1}n → R+ , there is a (1 − 1/e)-approximation algorithm for maximizing the
influence function f = h ◦ g : Nn → R+ under capacity constraints c ∈ Nn+ and a budget constraint
B ∈ N+ , whose running time is polynomial in n, B, and the query time of the value oracle for f .
Proof. We know by Lemma 3.3 that f is monotone and submodular. Consequently, we attain an instance of
maximizing a monotone submodular function over the integer lattice subject to a knapsack constraint. Soma
et al. [40] recently studied this problem, and developed a (1 − 1/e)-approximation algorithm whose running
time is polynomial in n, B, and the query time for the value oracle of the submodular function.
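To illustrate the optimization interface behind Theorem 3.4, here is a simple unit-increment greedy sketch over the integer lattice, assuming a value oracle f together with the capacity vector and total budget; we stress that this is only an illustrative heuristic, not the algorithm of Soma et al. [40] that attains the (1 − 1/e) guarantee.

    def greedy_budget_allocation(f, caps, B):
        # f: value oracle on the integer lattice (takes a tuple of budgets),
        # caps: per-agent capacity vector, B: total budget; each increment costs one unit.
        n = len(caps)
        b = [0] * n
        for _ in range(B):
            current = f(tuple(b))
            best_i, best_gain = None, 0.0
            for i in range(n):
                if b[i] < caps[i]:
                    candidate = list(b)
                    candidate[i] += 1
                    gain = f(tuple(candidate)) - current
                    if gain > best_gain:
                        best_i, best_gain = i, gain
            if best_i is None:   # no remaining feasible increment yields positive gain
                break
            b[best_i] += 1
        return b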
3.2 The budgeted triggering model
We now focus on social networks, and introduce a natural budget-based influence model that we call the
Budgeted Triggering model. This model consists of a social network, represented by a directed graph G =
(V, E) with n nodes (agents) and a set E of directed edges (relationships between agents). In addition, there
is a function f : Nn → R+ that quantifies the influence of any budget allocation b ∈ Nn to the agents. The
concrete form of f strongly depends on the structure of the network, as described later. The objective is to
find a budget allocation b that maximizes the numberPof influenced nodes, while respecting the feasibility
constraints: (1) bi ≤ ci , for every node i ∈ V , and (2) i∈V bi ≤ B.
For ease of presentation, we begin by describing the classic Triggering model [26]. Let N (v) be the set
of neighbors of node v in the graph. The influence function implied by a Triggering model is defined by the
following simple process. Every node v ∈ V independently chooses a random triggering set T v ⊆ N (v)
among its neighbors according to some fixed distribution. Then, for any given seed set of nodes, its influence
value is defined as the result of a deterministic cascade process in the network which works in steps. In the
first step, only the selected seed set is affected. At step ℓ, each node v that is still not influenced becomes
influenced if any of its neighbors in T v became influenced at time ℓ − 1. This process terminates after at most
n rounds.
Our generalized model introduces the notion of budgets into this process. Specifically, the influence function in our case adheres to the following process. Every node v independently chooses a random triggering
vector tv ∈ N|N (v)| according to some fixed distribution. Given a budget allocation b ∈ Nn , the influence
value of that allocation is the result of the following deterministic cascade process. In the first step, every
node v that was allocated a budget bv > 0 becomes affected. At step ℓ, every node v that is still not influenced
becomes influenced if any of its neighbors u ∈ N (v) became influenced at time ℓ − 1 and bu ≥ tvu . One can
easily verify that the Triggering model is a special case of our Budgeted Triggering model, where the capacity
vector c = 1n , and each tvu = 0 if u ∈ T v , and tvu = B + 1, otherwise.
Intuitively, the triggering vectors in our model capture the amount of effort that is required from each
neighbor of some agent to affect her. Of course, the underlying assumption is that the effort of individuals
correlates with the budget they receive. As an example, consider the case that a node v selects a triggering
value tvu = 1 for some neighbor u. In this case, u can only influence v if it receives a budget of at least 1.
However, if v selects a value tvu = 0 then it is enough that u becomes affected in order to influence v. In
particular, it is possible that u does not get any budget but still influences v after it becomes affected in the
cascade process.
Given a budget allocation b, the value of the influence function f (b) is the expected number of nodes
influenced in the cascade process, where the expectation is taken over the random choices of the model.
Formally, let σ be some fixed choice of the triggering vectors of all nodes (according to the model distribution),
and let Pr(σ) be the probability of this outcome. Let fσ (b) be the (deterministic) number of nodes influenced when the triggering vectors are defined by σ and the budget allocation is b. Then, f (b) = Σ_σ Pr(σ) · fσ (b).
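For a fixed choice σ of the triggering vectors, fσ(b) can be computed by a direct simulation of the cascade. The sketch below uses our own input conventions (an out-neighbor map, thresholds t[v][u], and a budget dictionary b) and a BFS order, which yields the same influenced set as the round-by-round process since propagation is monotone.

    from collections import deque

    def cascade_size(out_neighbors, t, b):
        # out_neighbors[u]: nodes v with u ∈ N(v), i.e., nodes that u may influence;
        # t[v][u]: threshold that b[u] must meet for u to influence v; b[v]: budget of v.
        influenced = {v for v in b if b[v] > 0}      # first step: every funded node
        queue = deque(influenced)
        while queue:
            u = queue.popleft()
            for v in out_neighbors.get(u, []):
                if v not in influenced and b[u] >= t[v][u]:
                    influenced.add(v)
                    queue.append(v)
        return len(influenced)

Averaging this quantity over independently sampled triggering vectors gives a Monte Carlo estimate of f(b), which is one standard way to realize the value oracle in practice.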
Theorem 3.5. There is a (1 − 1/e)-approximation algorithm for influence maximization under the Budgeted
Triggering model whose running time is polynomial in n, B, and the query time of the value oracle for the
influence function.
Proof. Consider an influence function f : Nn → R+ resulting from the Budgeted Triggering model. We next
show that the function f is monotone and submodular over the integer lattice. As a result, our model can be
interpreted as an instance of maximizing a monotone submodular function over the integer lattice subject to
a knapsack constraint, which admits an efficient (1 − 1/e)-approximation. Notice that it is sufficient to prove
that each (deterministic) function fσ is monotone submodular function over the integer lattice. This follows as
f is a non-negative linear combination of all fσ . One can easily validate that submodularity and monotonicity
are closed under non-negative linear combinations.
Consider some function fσ : Nn → R+ . For the purpose of establishing that fσ is monotone and submodular, we show that fσ can be interpreted as a combination of a monotone coordinate independent function
gσ : Nn → {0, 1}n , and a monotone submodular function hσ : {0, 1}n → R+ . The theorem then follows
by utilizing Lemma 3.3. We divide the diffusion process into two stages. In the first stage, we consider the
function gσ , which given a budget allocation returns (the characteristic vector of) the set S of all the nodes
that were allocated a positive budget along with their immediate neighbors that were influenced according to
the Budgeted Triggering model. Formally,
gσ (b) = S := {v : bv > 0} ∪ {u : ∃v ∈ N (u), bv > 0, bv ≥ tuv } .
In the second stage, we consider the function hσ that receives (the characteristic vector of) S as its seed set,
and makes the (original) Triggering model interpretation of the vectors. Specifically, the triggering set of
each node v is considered to be T v = {u : tvu = 0}. Intuitively, the function gσ captures the initial budget
allocation step and the first step of the propagation process, while the function hσ captures all the remaining
steps of the propagation. Observe that fσ = hσ ◦ gσ by our construction. Also notice that hσ is trivially
monotone and submodular as it is the result of a Triggering model [26, Thm. 4.2]. Therefore, we are left to
analyze the function gσ , and prove that it is monotone and coordinate independent. The next claim establishes
these properties, and completes the proof of the theorem.
Claim 3.6. The function gσ is monotone and coordinate independent.
Proof. Let x, y ∈ Nn , and denote w = x ∨ y. We establish coordinate independence by considering each
influenced node in gσ (w) separately. Recall that gσ (w) consists of the union of two sets {v : wv > 0} and
{u : ∃v ∈ N (u), wv > 0, wv ≥ tuv }. Consider a node v for which wv > 0. Since wv = max{xv , yv }, we
know that at least one of {xv , yv } is equal to wv , say wv = xv . Hence, v ∈ gσ (x). Now, consider a node
u ∈ gσ (w) having wu = 0. It must be the case that u is influenced by one of its neighbors v. Clearly, wv > 0
and wv ≥ tuv . Again, we can assume without loss of generality that wv = xv , and get that u ∈ gσ (x). This
implies that for each v ∈ gσ (x ∨ y), either v ∈ gσ (x) or v ∈ gσ (y), proving coordinate independence, i.e.,
gσ (x ∨ y) ≤ gσ (x) ∨ gσ (y).
We prove monotonicity in a similar way. Let x ≤ y. Consider a node v ∈ gσ (x) for which xv > 0. Since
yv ≥ xv > 0, we know that v ∈ gσ (y). Now, consider a node u ∈ gσ (x) having xu = 0. There must be a
node v ∈ N (u) such that xv > 0, and xv ≥ tuv . Accordingly, we get that yv ≥ tuv , and hence, u ∈ gσ (y). This
implies that gσ (x) ≤ gσ (y), which completes the proof.
3.3 A multi-player budgeted influence game
We now focus on a multi-player budgeted influence game. In the general setting of the game, which is formally defined by the tuple (M, (A^i)_{i=1}^M, (f^i)_{i=1}^M), there are M self-interested players, each of which needs
7
to decide how to allocate its budget B i ∈ N+ among n agents. Each player has a capacity vector ci ∈ Nn+ that
bounds the amount of budget she may allocate to every agent. The budget assignment of player i is denoted
The strategy of player i is feasible if it respects the constraints:
by bi ∈ Nn+ , and is referred to as its strategy.
P
(1) bij ≤ cij , for every j ∈ [n], and (2) nj=1 bij ≤ B i . Let Ai be the set of all feasible strategies for player i.
Note that we allow mixed (randomized) strategies. Each player has an influence function f i : NM ×n → R+
that designates her own influence (payoff) in the game. Specifically, f i (b) is the payoff of player i for the
budget allocation b of all players. This can also be written as f i (bi , b−i ), where the strategy of i is bi and the
strategies of all the other players are marked as b−i . Note that the goal of each player is to maximize her own influence, given her feasibility constraints and the strategies of other players. Let F (b) = Σ_{i=1}^M f i (b) be the
social utility of all players in the game.
One of the most commonly used notions in game theory is Nash equilibrium (NE) [35]. This notion
translates to our game as follows: A budget allocation b is said to be in a NE if f i (bi , b−i ) ≥ f i (b̃i , b−i ), for
every i and b̃i ∈ Ai .
Monotone utility game on the integer lattice. We begin by studying a monotone utility game on the integer
lattice, and establish that its PoA is no more than 2. Later on, we demonstrate that our Budgeted Triggering
model can be captured by this game. Utility games were defined for submodular set functions by Vetta [42],
and later extended to submodular functions on the integer lattice by Maehara et al. [32]. We build on the latter
work, and demonstrate that one of the conditions in their utility game definition, namely, the requirement that
the submodular function satisfies component-wise concavity, can be dropped. Note that component-wise
concavity corresponds to the diminishing marginal returns property, which does not characterize submodularity over the integer lattice, as noted in Section 2. Therefore, removing this constraint is essential for proving
results for our model.
We refine the definition of a monotone utility game on the integer lattice [32], so it only satisfies the
following conditions:
(U1) F (b) is a monotone submodular function on the integer lattice.
(U2) F (b) ≥ Σ_{i=1}^M f i (b).
(U3) f i (b) ≥ F (bi , b−i ) − F (0, b−i ), for every i ∈ [M ].
Theorem 3.7. The price of anarchy of the monotone utility game designated by U1-U3 is at most 2.
Proof. Let b∗ = (b1∗ , . . . , bM∗ ) be the socially optimal budget allocation, and let b = (b1 , . . . , bM ) be a budget allocation in Nash equilibrium. Let b̃i = (b1∗ , . . . , bi∗ , 0, . . . , 0) be the optimal budget allocation restricted to the first i players. Notice that
F (b∗ ) − F (b) ≤ F (b∗ ∨ b) − F (b) = Σ_{i=1}^M [ F (b̃i ∨ b) − F (b̃i−1 ∨ b) ] ≤ Σ_{i=1}^M [ F (bi∗ ∨ bi , b−i ) − F (bi , b−i ) ] ,
where the first inequality is due to the monotonicity of F , the equality holds by a telescoping sum, and the last inequality is due to the submodularity of F . Specifically, submodularity implies that inequality since
F (bi∗ ∨ bi , b−i ) + F (b̃i−1 ∨ b) ≥ F (b̃i ∨ b) + F (bi , b−i ) .
Now, observe that
F (bi , b−i ) + F (bi∗ , b−i ) ≥ F (bi∗ ∨ bi , b−i ) + F (bi∗ ∧ bi , b−i ) ≥ F (bi∗ ∨ bi , b−i ) + F (0, b−i ) .
Here, the first inequality holds by the submodularity of F , while the last inequality follows from the monotonicity of F . Consequently, we derive that
F (b∗ ) − F (b) ≤ Σ_{i=1}^M [ F (bi∗ , b−i ) − F (0, b−i ) ] ≤ Σ_{i=1}^M f i (bi∗ , b−i ) ≤ Σ_{i=1}^M f i (bi , b−i ) ≤ F (b) ,
where the second inequality is by condition (U3) of the utility game, the third inequality holds since b is a
Nash equilibrium, and the last inequality is by condition (U2) of the utility game. This completes the proof as
F (b∗ ) ≤ 2F (b).
A multi-player Budgeted Triggering model. We extend the Budgeted Triggering model to a multi-player
setting. As before, we have a social network, represented by a directed graph G = (V, E), such that every
node v has an independent random triggering vector tv ∈ N|N (v)| . Each player i has a budget B i ∈ N+ , and
a function f i : NM ×n → R+ that quantifies her influence, given the budget allocation of all players. The
objective of each player i is to find a budget allocation bi , given the budget allocations of other players, that
maximizes the number of nodes that she influences, while respecting the feasibility constraints: (1) bij ≤ cij for every node j ∈ V, and (2) Σ_{j∈V} bij ≤ B i .
The process in which nodes become affected is very similar to that in Budgeted Triggering, but needs some
refinement for the multi-player setting. We follow most design decisions of Bharathi et al. [6]. Specifically,
whenever a player influences a node, this node is assigned the color of that player. Once a node become
influenced, its color cannot change anymore. If two or more players provide positive budgets to the same
node, then the node is given the color of the player that provided the highest budget. In case there are several
such players, the node is assigned a color uniformly at random among the set of players with the highest
budget assignment. If a node u becomes influenced at step ℓ, it attempts to influence each of its neighbors. If
the activation attempt from u to its neighbor v succeeds, which is based on the triggering vector of v, then v
becomes influenced with the same color as u at step ℓ + Tuv , assuming that it has not been influenced yet. All
Tuv ’s are independent positive continuous random variables. This essentially prevents simultaneous activation
attempts by multiple neighbors.
Lemma 3.8. The social function F (b) = Σ_{i=1}^M f i (b) is a monotone submodular function on the integer lattice.
Proof. We prove this lemma along similar lines to those in the proof of Theorem 3.5, which attains to the
single-player scenario. Let σ be some fixed choice of triggering vectors of all nodes and all the activation
times Tuv . We also assume that σ encodes other random decisions in the model, namely, all tie-breaking
choices related to equal (highest) budget assignments for nodes. Let Fσ (b) be the deterministic number of influenced nodes for the random choices σ and the budget allocation b, and note that F (b) = Σ_σ Pr(σ) · Fσ (b).
Similarly to Theorem 3.5, it is sufficient to prove that Fσ is a monotone submodular function on the integer lattice. Again, we view the social influence as a two-stage process. In the first stage, we consider a function Gσ : NM×n → {0, 1}n that, given the budget allocation of all players, returns a set S of immediately influenced nodes. Formally,
Gσ (b) = S := {v : ∃i, biv > 0} ∪ {u : ∃v ∈ N (u), ∃i, biv > 0, biv ≥ tuv } .
In the second stage, we consider the function Hσ that receives S as its seed set, and makes the original
Triggering model interpretation of the vectors, that is, it sets each T v = {u : tvu = 0}. Notice that the fact
that there are multiple players at this stage does not change the social outcome, i.e., the number of influenced
nodes, compared to a single-player scenario. The only difference relates to the identity of the player that
affects every node. This implies that Hσ is monotone and submodular as its result is identical to that of the
original (single-player) Triggering model [26]. Observe that Fσ = Hσ ◦ Gσ by our construction. Therefore,
by Lemma 3.3, we are left to establish that the function Gσ is monotone and coordinate independent. The
next claim proves that.
Claim 3.9. The function Gσ is monotone and coordinate independent.
We prove this claim using an almost identical line of argumentation to that in Claim 3.6. Let x, y ∈ NM ×n ,
and denote w = x ∨ y. We establish coordinate independence by considering every affected node in
Gσ (w) separately. Recall that Gσ (w) consists of the union of two sets {v : ∃i, wvi > 0} and {u : ∃v ∈
N (u), ∃i, wvi > 0, wvi ≥ tuv }. Consider a node v that has some player i with wvi > 0. Since wvi =
max{xiv , yvi }, we know that at least one of {xiv , yvi } is equal to wvi , say wvi = xiv . Hence, v ∈ Gσ (x), since
in particular, player i competes on influencing v. Now, consider a node u ∈ Gσ (w) with wui = 0, for all i. It
must be the case that u is influenced by one of its neighbors v. Clearly, there exists some player i such that
wvi > 0 and wvi ≥ tuv . Again, we can assume without loss of generality that wvi = xiv , and get that u ∈ Gσ (x),
since in particular, player i competes on influencing u via v. This implies that for each v ∈ Gσ (x ∨ y), either
v ∈ Gσ (x) or v ∈ Gσ (y), proving coordinate independence, i.e., Gσ (x ∨ y) ≤ Gσ (x) ∨ Gσ (y).
We prove monotonicity in a similar way. Let x ≤ y. Consider a node v ∈ G_σ(x) such that there is a player i for which x_v^i > 0. Since y_v^i ≥ x_v^i > 0, we know that player i competes on influencing v, and thus, v ∈ G_σ(y). Now, consider a node u ∈ G_σ(x) with x_u^i = 0 for all i. There must be a node v ∈ N(u) and a player i such that x_v^i > 0 and x_v^i ≥ t_u^v. Accordingly, we get that y_v^i ≥ t_u^v. Therefore, player i also competes on influencing u via v, and thus, u ∈ G_σ(y). This implies that G_σ(x) ≤ G_σ(y), which completes the proof.
Theorem 3.10. The Budgeted Triggering model with multiple players has a PoA of exactly 2.
Proof. We begin by demonstrating that the model satisfies conditions U1-U3 of the monotone utility game
on the integer lattice. As a result, we can apply Theorem 3.7 to attain an upper bound of 2 on the PoA of the
model. Notice that condition (U1) holds by Lemma 3.8. Also, condition (U2) trivially holds by the definition
of the social function F .
For the purpose of proving that the model satisfies condition (U3), let σ be some fixed choice of triggering
vectors of all nodes and all the activation times Tuv . We also assume that σ encodes other random decisions
in the model, namely, all tie-breaking choices related to equal (highest) budget assignments for nodes. Let
Fσ (b) be the deterministic number of influenced nodes for the random choices σ and the budget allocation
b. Finally, let fσi (b) be the deterministic number of nodes influenced by player i for the random choices σ
and the budget allocation b. We next argue that fσi (b) ≥ Fσ (bi , b−i ) − Fσ (0, b−i ), for any σ. Notice that this
implies condition (U3) since
f^i(b) = Σ_σ Pr(σ) f_σ^i(b) ≥ Σ_σ Pr(σ) [F_σ(b^i, b^{-i}) − F_σ(0, b^{-i})] = F(b^i, b^{-i}) − F(0, b^{-i}) .
We turn to prove the above argument. Notice that it is sufficient to focus only on cases where Fσ(bi, b−i) > Fσ(0, b−i), since otherwise, the argument is trivially true as fσi(b) ≥ 0. We concentrate on all nodes u that are not influenced by any player when the mutual strategy is (0, b−i), but become influenced under the strategy
(bi , b−i ). We claim that all those nodes must be assigned the color of player i. It is easy to verify that
increasing the budget assignment of a player to any node can only negatively affect other players, that is,
they may only influence a subset of the nodes. This follows as all the activation results are deterministically
encoded in the choices σ, so adding a competition can only make the outcome worse, i.e., players may not
affect a node that they previously did. This implies the claim. As a result, fσi (b) ≥ Fσ (bi , b−i ) − Fσ (0, b−i ).
This completes the proof that the model is an instance of the monotone utility game on the integer lattice, and
thus, has a PoA of at most 2.
We proceed by proving the tightness of the PoA result. We show there is an instance of the multi-player
Budgeted Triggering model whose PoA is 2N/(N + 1). Notice that as N → ∞, the lower bound on the
PoA tends to 2. This instance has been presented in a slightly different context by He and Kempe [25,
Proposition 1]. Concretely, the input graph is a union of a star with one center and N leaves, and N additional
(isolated) nodes. The triggering vectors are selected from a degenerate distribution that essentially implies
that each activated node also activates all of its neighbors. Every player has one unit of budget. One can
easily verify that the solution in which all players assign their unit budget to the center of the star is a NE.
This follows since the expected payoff for each player is (N + 1)/N , while unilaterally moving the budget
to any other node leads to a payoff of 1. However, the strategy that optimizes the social utility is to place one unit of budget at the center of the star graph, and the remaining budget units at different isolated nodes; this influences all N + 1 star nodes plus N − 1 isolated nodes, for a social value of 2N, whereas the NE attains only N + 1, which yields the ratio 2N/(N + 1).
4 An Online Model
We study the online version of the budgeted influence propagation model. This setting can capture scenarios
in which the social influences in a network are known in advance, but the (subset of) agents that will arrive and
their order is unknown. The input for this setting is identical to that of the offline variant with the exception
that the n agents arrive in an online fashion. This intuitively means that we do not know the monotone
submodular influence function f : Nn → R+ in advance, but rather, it is revealed to us gradually with time.
More specifically, upon the arrival of the ith agent, we can infer the (constrained) function fi , which quantifies
the influence of f for the set of the first i agents, while fixing the budget of all other agents to 0. Note that we
also learn the maximum budget ci that can be allocated to agent i whenever she arrives. For every arriving
agent i, the algorithm needs to make an irrevocable decision regarding the amount of budget bi allocated to that
agent without knowing the potential contribution of future arriving agents. As mentioned in the introduction,
this problem is a generalization of the classical secretary problem. This immediately implies that any online
algorithm performs very poorly under an unrestricted adversarial arrival of the agents. We therefore follow
the standard assumption that the agents and their influence are fixed in advance, but their order of arrival is
random. Note that the overall influence of some budget allocation to the agents is not affected by the arrival
order of the agents.
We analyze the performance of our algorithm, ON, using the competitive analysis paradigm. Note that
competitive analysis focuses on quantifying the cost that online algorithms suffer due to their complete lack
of knowledge regarding the future, and it does not take into account computational complexity. Let OPT be
an optimal algorithm for the offline setting. Given an input instance I for the problem, we let OPT(I) and
ON(I) be the influence values that OPT and ON attain for I, respectively. We say that ON is c-competitive
if inf_I E[ON(I)]/OPT(I) ≥ c, where E[ON(I)] is the expected value taken over the random choices of the
algorithm and the random arrival order of the agents. We would like to note that our algorithm and its analysis are
inspired by the results of Feldman et al. [17] for the submodular knapsack secretary problem. However, we
make several novel observations and identify some interesting structural properties that enable us to simultaneously generalize and improve their results. Also note that in the interests of expositional simplicity, we
have not tried to optimize the constants in our analysis.
Theorem 4.1. There is an online randomized algorithm that achieves 1/(15e) ≈ 0.025-competitive ratio for
the budgeted influence maximization problem.
Proof. Recall that an instance of the online budgeted influence maximization problem consists of a set of
n agents that arrive in a random order, a budget constraint B ∈ N+ , capacity constraints c ∈ Nn+ , and a
monotone submodular influence function over the integer lattice f : Nn → R+ . We begin by describing
the main component of our algorithm. This component is built to address the case that the contribution
of each agent is relatively small with respect to the optimal solution. That is, even when one assigns the
maximum feasible budget to any single agent, the contribution of that agent is still small compared to the
optimum. We refer to this component as light influence algorithm (abbreviated, LI). This component will
be later complemented with another component, derived from the classical secretary algorithm, to deal with
highly influential agents.
Let ⟨a1, a2, . . . , an⟩ be an arbitrary fixed ordering of the set of agents. This is not necessarily the arrival order of the agents. Algorithm light influence, formally described below, assumes that each agent ai
is assigned a uniform continuous random variable ti ∈ [0, 1) that determines its arrival time. Note that this
assumption does not add restrictions on the model since one can create a set of n samples of the uniform distribution from the range [0, 1) in advance, and assign them by a non-decreasing order to each arriving agent
(see, e.g., the discussion in [17]).
The algorithm begins by exploring the first part of the agent sequence, that is, the agents in L = {ai :
ti ≤ 1/2}. Note that it does not allocate any budget to those agents. Let bL be an optimal (offline) budget
allocation for the restricted instance that only consists of the agents in L, and let f (bL ) be its influence value.
Furthermore, let f (bL )/B be a lower bound on the average contribution of each unit of budget in that solution.
The algorithm continues by considering the remainder of the sequence. For each arriving agent, it allocates
a budget of k if the increase in the overall influence value is at least αkf (bL )/B, for some fixed α to be
determined later. That is, the average influence contribution of each unit of budget is (up to the α-factor)
at least as large as the average unit contribution in the optimal solution for the first part. If there are several
budget allocations that satisfy the above condition then the algorithm allocates the maximal amount of budget
that still satisfies the capacity and budget constraints.
Prior to formally describing our algorithm, we would like to remind the reader that χi corresponds to the characteristic vector of ai, i.e., (χi)i = 1 and (χi)j = 0 for every j ≠ i. Accordingly, if b ∈ N^n is a budget allocation
vector in which the ith coordinate represents the allocation to agent ai , and bi = 0, then the allocation b ∨ kχi
corresponds to an update of b by adding a budget k to agent ai . We say that the marginal value of assigning
k units of budget to ai is f (b ∨ kχi ) − f (b), and the marginal value per unit is (f (b ∨ kχi ) − f (b))/k.
Having described our main component, we are now ready to complete the description of our algorithm. As
already mentioned, we randomly combine algorithm LI with the classical algorithm for the secretary problem. Specifically, algorithm LI is employed with probability 5/8 and the classical secretary algorithm with probability
3/8. This latter algorithm assigns a maximal amount of budget to a single agent ai to attain an influence value
of f (ci χi ). The algorithm selects ai by disregarding the first n/e agents that arrive, and then picking the first
agent whose influence value is better than any of the values of the first n/e agents. This optimal algorithm is
known to succeed in finding the single agent with the best influence with probability of 1/e [14].
4.1 Analysis
We begin by analyzing the performance guarantee of algorithm LI, and later analyze the complete algorithm.
Let OPT* = [OPT*_1, . . . , OPT*_n] be the optimal budget allocation for a given instance, and let OPT^L be the budget allocation for the agents in L, that is, OPT^L_i = OPT*_i whenever i ∈ L and OPT^L_i = 0, otherwise. Similarly, OPT^R is the budget allocation for the agents in R = [n] \ L, i.e., OPT^R_i = OPT*_i for i ∈ R, and OPT^R_i = 0 for i ∉ R.
Algorithm 1: Light Influence (LI)
Input: an online sequence of n agents, a budget constraint B ∈ N+, capacity constraints c ∈ N^n_+, a monotone submodular function f : N^n → R+, a parameter α ∈ R+
Output: a budget allocation b
  b ← (0, 0, . . . , 0)
  f(b_L) ← value of the optimal budget allocation for the agents in L = {a_i : t_i ≤ 1/2}
  for every agent a_i such that t_i ∈ (1/2, 1] do
      K_i ← {k ≤ c_i : f(b ∨ kχ_i) − f(b) ≥ αk·f(b_L)/B and k + Σ_{j≠i} b_j ≤ B}
      if K_i ≠ ∅ then
          k ← max{K_i}
          b ← b ∨ kχ_i
      end
  end
  return b
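To make the LI rule concrete, here is a minimal Python sketch of Algorithm 1. It is an illustration under stated assumptions rather than a reference implementation: the influence function f is supplied by the caller, and the helper greedy_offline is only a greedy stand-in for the optimal offline allocation on the observed prefix L (computing the true offline optimum is not the point of the sketch).

import random

def bump(b, i, k=1):
    """Return a copy of b with coordinate i raised by k (b ∨ kχ_i when b[i] = 0)."""
    out = list(b)
    out[i] += k
    return out

def greedy_offline(idx, B, c, f, n):
    """Greedy stand-in for the optimal offline allocation restricted to agents in idx:
    repeatedly add one budget unit to the agent with the best marginal gain."""
    b = [0] * n
    for _ in range(B):
        gains = [(f(bump(b, i)) - f(b), i) for i in idx if b[i] < c[i]]
        if not gains or max(gains)[0] <= 0:
            break
        b[max(gains)[1]] += 1
    return f(b)

def light_influence(n, B, c, f, alpha, seed=None):
    """Sketch of Algorithm 1 (LI): agents with arrival time t_i <= 1/2 are only
    observed; a later agent receives the largest feasible budget k whose per-unit
    marginal gain is at least alpha * f(b_L) / B."""
    rng = random.Random(seed)
    t = [rng.random() for _ in range(n)]          # arrival times in [0, 1)
    b = [0] * n
    L = [i for i in range(n) if t[i] <= 0.5]      # observation phase
    f_bL = greedy_offline(L, B, c, f, n)
    for i in sorted((i for i in range(n) if t[i] > 0.5), key=lambda i: t[i]):
        spent = sum(b)
        ks = [k for k in range(1, c[i] + 1)
              if spent + k <= B and f(bump(b, i, k)) - f(b) >= alpha * k * f_bL / B]
        if ks:
            b[i] = max(ks)                        # allocate the maximal qualifying budget
    return b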
Algorithm 2: Online Influence Maximization
Input : an online sequence of n agents, a budget constraint B ∈ N+ , capacity constraints c ∈ Nn+ , a monotone
submodular function f : Nn → R+ , a parameter α ∈ R+
Output: A budget allocation b
r ← random number in [0, 1]
if r ∈ [0, 3/8] then
b ← run the classical secretary algorithm with (n, B, c, f )
else if r ∈ (3/8, 1] then
b ← run algorithm LI with (n, B, c, f, α)
end
return b
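A corresponding sketch of Algorithm 2 follows; it mixes the classical single-choice secretary rule with the light_influence function from the previous sketch. The secretary helper is a straightforward rendering of the rule described in the text (skip the first n/e arrivals, then take the first agent whose stand-alone value beats everything seen so far); it is illustrative only.

import math
import random

def classical_secretary(n, B, c, f, seed=None):
    """Skip the first n/e arrivals, then assign the full feasible budget to the
    first agent whose stand-alone value f(c_i · χ_i) beats all observed values."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)                                   # random arrival order
    solo = lambda i: f([min(c[i], B) if j == i else 0 for j in range(n)])
    cutoff = int(n / math.e)
    best_seen = max((solo(i) for i in order[:cutoff]), default=float("-inf"))
    b = [0] * n
    for i in order[cutoff:]:
        if solo(i) > best_seen:
            b[i] = min(c[i], B)
            break
    return b

def online_influence_maximization(n, B, c, f, alpha, seed=None):
    """Algorithm 2: run the secretary rule with probability 3/8, LI otherwise."""
    rng = random.Random(seed)
    if rng.random() <= 3.0 / 8.0:
        return classical_secretary(n, B, c, f, seed)
    return light_influence(n, B, c, f, alpha, seed)      # from the previous sketch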
Recall that algorithm LI attends to the case in which no single agent has a significant influence contribution compared to the optimal value. More formally, let β = max_i f(c_i χ_i)/f(OPT*) be the ratio between the maximal contribution of a single agent and the optimal value.
Lemma 4.2. If α ≥ 2β then f (b) ≥ min{αf (OPTL )/2, f (OPTR ) − αf (OPT∗ )}.
Proof. We prove this lemma by bounding the expected influence value of the algorithm in two cases and
taking the minimum of them:
Case I: Algorithm LI allocates a budget of more than B/2 units. We know that the algorithm attains a value
of at least αf(b_L)/B from each allocated budget unit by the selection rule f(b ∨ kχ_i) − f(b) ≥ αk·f(b_L)/B.
Hence, the total influence of this allocation is at least

f(b) > (B/2) · αf(b_L)/B = αf(b_L)/2 ≥ αf(OPT^L)/2 .
Case II: Algorithm LI allocates at most B/2 budget units. We utilize the following lemma proven in [40,
Lem 2.3].
Lemma 4.3. Let f be a monotone submodular function over the integer lattice. For arbitrary x, y,
f(x ∨ y) ≤ f(x) + Σ_{i∈[n]: y_i > x_i} [f(x ∨ y_i χ_i) − f(x)] .
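As a quick numerical sanity check (not part of the proof), the inequality of Lemma 4.3 can be verified on a simple monotone lattice-submodular test function; both the test function (a concave function of the total budget) and the random trial setup below are illustrative choices only.

import math
import random

def f(x):
    # Monotone, lattice-submodular test function: concave in the total budget.
    return math.sqrt(sum(x))

random.seed(0)
n = 4
for _ in range(1000):
    x = [random.randint(0, 5) for _ in range(n)]
    y = [random.randint(0, 5) for _ in range(n)]
    join = [max(a, b) for a, b in zip(x, y)]              # x ∨ y
    rhs = f(x) + sum(
        f([max(x[j], y[i]) if j == i else x[j] for j in range(n)]) - f(x)
        for i in range(n) if y[i] > x[i]
    )
    assert f(join) <= rhs + 1e-9                          # Lemma 4.3 holds on every trial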
This lemma applied to our case implies that
f(b ∨ OPT^R) ≤ f(b) + Σ_{i∈[n]: OPT^R_i > b_i} [f(b ∨ OPT^R_i χ_i) − f(b)] .    (1)
We consider two sub-cases:
Subcase A: There is ℓ ∈ [n] such that OPT^R_ℓ > B/2. Clearly, there can only be one agent ℓ having this property. One can easily validate that f(b ∨ OPT^R_ℓ χ_ℓ) − f(b) ≤ β · f(OPT*) by the definition of β and Lemma 2.2. Now, consider any agent i ≠ ℓ with OPT^R_i > b_i. The reason that the optimal solution allocated more budget to i than our algorithm cannot be the lack of budget, since Σ_i b_i < B/2 and OPT^R_i < B/2. Hence, it must be the case that

(f(b ∨ OPT^R_i χ_i) − f(b)) / OPT^R_i < α · f(b_L)/B ,    (2)
by the selection rule of the algorithm. Note that b in the above equation designates the budget allocation at
the time that the agent ai was considered and not the final allocation. However, due to the weak version of
marginal diminishing returns that was described in Lemma 2.2, the inequality also holds for the final allocation
vector. As a result,
f(OPT^R) ≤ f(b ∨ OPT^R)
         ≤ f(b) + [f(b ∨ OPT^R_ℓ χ_ℓ) − f(b)] + Σ_{i∈[n]\{ℓ}: OPT^R_i > b_i} [f(b ∨ OPT^R_i χ_i) − f(b)]
         ≤ f(b) + βf(OPT*) + α · (f(b_L)/B) · (B/2)
         ≤ f(b) + f(OPT*) · (β + α/2) ,

where the first inequality follows due to the monotonicity of f, and the third inequality uses the sub-case assumption that there is one agent that receives at least half of the overall budget in OPT^R, and thus, Σ_{i≠ℓ} OPT^R_i ≤ B/2. Recall that α ≥ 2β, and thus, f(b) ≥ f(OPT^R) − αf(OPT*).
Subcase B: OPT^R_i ≤ B/2 for every i ∈ [n]. The analysis of this sub-case follows the same argumentation as the previous sub-case. Notice that for every agent i ∈ [n] such that OPT^R_i > b_i, we can apply inequality (2). Consequently, we can utilize inequality (1), and get that

f(OPT^R) ≤ f(b ∨ OPT^R) ≤ f(b) + Σ_{i∈[n]: OPT^R_i > b_i} [f(b ∨ OPT^R_i χ_i) − f(b)] ≤ f(b) + α · (f(b_L)/B) · B ,

which implies that f(b) ≥ f(OPT^R) − αf(OPT*).
Recall that we considered some arbitrary fixed ordering of the agents ⟨a1, a2, . . . , an⟩ that is not necessarily their arrival order. Let w_i be the marginal contribution of agent a_i to the optimal value when calculated according to this order. Namely, let OPT*_{<i} = [OPT*_1, . . . , OPT*_{i−1}, 0, . . . , 0] be the allocation giving the same budget as OPT* for every agent a_j with j < i, and 0 for the rest, and define w_i = f(OPT*_{<i} ∨ OPT*_i χ_i) − f(OPT*_{<i}). This point of view allows us to associate fixed parts of the optimal value to the agents in a way that is not affected by their order of arrival. Let X_i be a random indicator for the event that a_i ∈ L, and let W = Σ_{i=1}^n w_i X_i. Let α ≥ 2β be a parameter to be determined later.
By the weak version of marginal diminishing returns specified in Lemma 2.2, it holds that f(OPT^L) ≥ W, and similarly, f(OPT^R) ≥ Σ_{i=1}^n w_i(1 − X_i) = f(OPT*) − W. Using this observation, in conjunction with Lemma 4.2, we get that f(b) ≥ min{αW/2, f(OPT*)·(1 − α − W/f(OPT*))}. Let Y = W/f(OPT*), and observe that

f(b) ≥ f(OPT*) · min{αY/2, 1 − α − Y} .    (3)
Note that Y ∈ [0, 1] captures the ratio between the expected optimum value associated with the agents in L
and the (overall) optimum value. We continue to bound the expected value of f (b) by proving the following
lemma.
Lemma 4.4. Let α = 2/5 and assume that β ≤ 1/5. Then,

E[f(b)] ≥ (f(OPT*)/20) · (1 − √β)² .
Proof. By assigning α = 2/5 to the bound in inequality (3), we obtain that

f(b) ≥ f(OPT*) · min{Y/5, 3/5 − Y} .

Notice that the expected value of f(b) is

E[f(b)] ≥ f(OPT*) ∫_0^{3/5} [Pr(Y ≤ γ)]′ · min{γ/5, 3/5 − γ} dγ ,

since [Pr(Y ≤ γ)]′ is the probability density function of Y. Now, observe that we can split the integral range into two parts:

E[f(b)] ≥ f(OPT*) ∫_0^{1/2} [Pr(Y ≤ γ)]′ (γ/5) dγ + f(OPT*) ∫_{1/2}^{3/5} [Pr(Y ≤ γ)]′ (3/5 − γ) dγ
        ≥ (f(OPT*)/5) ∫_0^{1/2} [Pr(Y ≤ γ)]′ γ dγ .    (4)
To bound Pr(Y ≤ γ), we use Chebyshev's inequality, while noting that

E[Y] = Σ_{i=1}^n w_i E[X_i]/f(OPT*) = Σ_{i=1}^n w_i/(2f(OPT*)) = 1/2 ,

since E[X_i] = 1/2 and Σ_{i=1}^n w_i = f(OPT*). Now,
Pr(|Y − 1/2| ≥ c) ≤ Var[Y]/c² ≤ β/(4c²) ,

where the last inequality follows from [17, Lem B.5]. For completeness, the proof of this lemma appears as Lemma A.1 in the Appendix. Now, observe that Y is symmetrically distributed around 1/2, and therefore, Pr(Y ≤ 1/2 − c) = Pr(Y ≥ 1/2 + c) ≤ β/(8c²). This implies that for every γ ≤ 1/2,

Pr(Y ≤ γ) ≤ β/(8(1/2 − γ)²) .
Note that we cannot simply plug this upper bound on the cumulative distribution function into inequality (4). The fact that Y is symmetrically distributed around 1/2 implies that ∫_0^{1/2} [Pr(Y ≤ γ)]′ dγ = 1/2, and this does not hold with this bound. To bypass this issue, and maintain the latter constraint, we decrease the integration range. One can easily verify that

∫_0^{(1−√β)/2} [β/(8(1/2 − γ)²)]′ dγ = 1/2 ,

and as a result, we can infer that

∫_0^{1/2} [Pr(Y ≤ γ)]′ γ dγ ≥ ∫_0^{(1−√β)/2} [β/(8(1/2 − γ)²)]′ γ dγ .
Specifically, this inequality holds since we essentially considered the worst distribution (from an algorithm-analysis point of view) by shifting probability from higher values of Y to smaller values (note the multiplication by γ). The proof of the lemma now follows since
E[f(b)] ≥ (f(OPT*)/5) ∫_0^{(1−√β)/2} [β/(8(1/2 − γ)²)]′ γ dγ
        = (β · f(OPT*)/20) ∫_0^{(1−√β)/2} γ/(1/2 − γ)³ dγ
        = (β · f(OPT*)/20) · [(4γ − 1)/(1 − 2γ)²]_0^{(1−√β)/2}
        = (f(OPT*)/20) · (1 − √β)² .
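The closing integral can also be checked symbolically. The short snippet below (assuming SymPy is available) confirms that ∫_0^{(1−√β)/2} [β/(8(1/2 − γ)²)]′ γ dγ = (1 − √β)²/4, which, multiplied by f(OPT*)/5, gives the stated bound f(OPT*)(1 − √β)²/20.

import sympy as sp

gamma = sp.symbols('gamma', positive=True)
beta = sp.symbols('beta', positive=True)

cdf_bound = beta / (8 * (sp.Rational(1, 2) - gamma) ** 2)   # Chebyshev-based CDF bound
integrand = sp.diff(cdf_bound, gamma) * gamma               # [Pr(Y <= gamma)]' * gamma
upper = (1 - sp.sqrt(beta)) / 2

inner = sp.integrate(integrand, (gamma, 0, upper))
print(sp.simplify(inner - (1 - sp.sqrt(beta)) ** 2 / 4))    # expected output: 0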
Recall that β = maxi f (ci χi )/f (OPT∗ ). We next consider two cases depending on the value of β. When
β > 1/5, our algorithm executes the classical secretary algorithm with probability 3/8. This algorithm places
a maximal amount of budget on the agent having maximum influence, maxi f (ci χi ), with probability 1/e.
Consequently,

E[f(b)] ≥ (3/8) · β · f(OPT*)/e > 3f(OPT*)/(40e) > f(OPT*)/(15e) .
When β ≤ 1/5, we know that our algorithm executes the classical secretary algorithm with probability 3/8,
and algorithm LI with probability 5/8. Utilizing Lemma 4.4 results in
E[f(b)] ≥ (3/8) · βf(OPT*)/e + (5/8) · (f(OPT*)/20) · (1 − √β)² = (3β/(8e) + 5(1 − √β)²/160) · f(OPT*) .
One can validate that this latter term is minimized for β = 1/(12/e + 1)² ≈ 0.034, which implies that

E[f(b)] ≥ (3/(96 + 8e)) · f(OPT*) > f(OPT*)/(15e) .
This completes the proof of the theorem.
References
[1] Noga Alon, Iftah Gamzu, and Moshe Tennenholtz. Optimizing budget allocation among channels and
influencers. In WWW, pages 381–388, 2012.
[2] Moshe Babaioff, Nicole Immorlica, David Kempe, and Robert Kleinberg. A knapsack secretary problem
with applications. In APPROX, pages 16–28, 2007.
[3] Moshe Babaioff, Nicole Immorlica, David Kempe, and Robert Kleinberg. Online auctions and generalized secretary problems. SIGecom Exchanges, 7(2), 2008.
[4] Siddharth Barman, Seeun Umboh, Shuchi Chawla, and David L. Malec. Secretary problems with convex
costs. In ICALP, pages 75–87, 2012.
[5] MohammadHossein Bateni, Mohammad Taghi Hajiaghayi, and Morteza Zadimoghaddam. Submodular
secretary problem and extensions. ACM Transactions on Algorithms, 9(4):32, 2013.
[6] Shishir Bharathi, David Kempe, and Mahyar Salek. Competitive influence maximization in social networks. In WINE, pages 306–311, 2007.
[7] Christian Borgs, Michael Brautbar, Jennifer T. Chayes, and Brendan Lucier. Maximizing social influence
in nearly optimal time. In SODA, pages 946–957, 2014.
[8] Ning Chen. On the approximability of influence in social networks. SIAM J. Discrete Math., 23(3),
2009.
[9] Wei Chen, Chi Wang, and Yajun Wang. Scalable influence maximization for prevalent viral marketing
in large-scale social networks. In KDD, pages 1029–1038, 2010.
[10] Wei Chen, Yajun Wang, and Siyu Yang. Efficient influence maximization in social networks. In KDD,
pages 199–208, 2009.
[11] Edith Cohen, Daniel Delling, Thomas Pajor, and Renato F. Werneck. Sketch-based influence maximization and computation: Scaling up with guarantees. In CIKM, pages 629–638, 2014.
[12] Erik D. Demaine, MohammadTaghi Hajiaghayi, Hamid Mahini, David L. Malec, S. Raghavan, Anshul
Sawant, and Morteza Zadimoghaddam. How to influence people with partial incentives. In WWW, pages
937–948, 2014.
[13] Pedro Domingos and Matthew Richardson. Mining the network value of customers. In KDD, pages
57–66, 2001.
[14] E. B. Dynkin. The optimum choice of the instant for stopping a Markov process. Sov. Math. Dokl.,
4:627–629, 1963.
[15] Milad Eftekhar, Yashar Ganjali, and Nick Koudas. Information cascade at group scale. In KDD, pages
401–409, 2013.
[16] Uriel Feige. A threshold of ln n for approximating set cover. J. ACM, pages 634–652, 1998.
[17] Moran Feldman, Joseph Naor, and Roy Schwartz. Improved competitive ratios for submodular secretary
problems. In APPROX, pages 218–229, 2011.
[18] Jacob Goldenberg, Barak Libai, and Eitan Muller. Talk of the network: A complex systems look at the
underlying process of word-of-mouth. Marketing Letters, 2001.
[19] Manuel Gomez-Rodriguez, Jure Leskovec, and Andreas Krause. Inferring networks of diffusion and
influence. TKDD, 5(4):21, 2012.
[20] Amit Goyal, Francesco Bonchi, and Laks V. S. Lakshmanan. Learning influence probabilities in social
networks. In WSDM, pages 241–250, 2010.
[21] Sanjeev Goyal and Michael Kearns. Competitive contagion in networks. In STOC, pages 759–774,
2012.
[22] M. Granovetter. Threshold models of collective behavior. Am. J. Sociol., 83(6):1420–1443, 1978.
[23] Anupam Gupta, Aaron Roth, Grant Schoenebeck, and Kunal Talwar. Constrained non-monotone submodular maximization: Offline and secretary algorithms. In WINE, pages 246–257, 2010.
[24] Jason D. Hartline, Vahab S. Mirrokni, and Mukund Sundararajan. Optimal marketing strategies over
social networks. In WWW, pages 189–198, 2008.
[25] Xinran He and David Kempe. Price of anarchy for the n-player competitive cascade game with submodular activation functions. In WINE, pages 232–248, 2013.
[26] David Kempe, Jon M. Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social
network. In KDD, pages 137–146, 2003.
[27] David Kempe, Jon M. Kleinberg, and Éva Tardos. Influential nodes in a diffusion model for social
networks. In ICALP, pages 1127–1138, 2005.
[28] Sanjeev Khanna and Brendan Lucier. Influence maximization in undirected networks. In SODA, pages
1482–1496, 2014.
[29] Jon Kleinberg. Cascading behavior in networks: Algorithmic and economic issues. Algorithmic Game
Theory, 2007.
[30] Robert D. Kleinberg. A multiple-choice secretary algorithm with applications to online auctions. In
SODA, pages 630–631, 2005.
[31] Siyu Lei, Silviu Maniu, Luyi Mo, Reynold Cheng, and Pierre Senellart. Online influence maximization.
In SIGKDD, pages 645–654, 2015.
[32] Takanori Maehara, Akihiro Yabe, and Ken-ichi Kawarabayashi. Budget allocation problem with multiple
advertisers: A game theoretic view. In ICML, pages 428–437, 2015.
[33] Michael Mathioudakis, Francesco Bonchi, Carlos Castillo, Aristides Gionis, and Antti Ukkonen. Sparsification of influence networks. In KDD, pages 529–537, 2011.
[34] Elchanan Mossel and Sebastien Roch. On the submodularity of influence in social networks. In STOC,
pages 128–134, 2007.
[35] John F. Nash. Equilibrium points in n-person games. Proc. Natl. Acad. Sci., 36:48–49, 1950.
[36] Matthew Richardson and Pedro Domingos. Mining knowledge-sharing sites for viral marketing. In
KDD, pages 61–70, 2002.
[37] Lior Seeman and Yaron Singer. Adaptive seeding in social networks. In FOCS, pages 459–468, 2013.
[38] Hsu-Shih Shih and E. Stanley Lee. Discrete Multi-Level Programming in a Dynamic Environment, pages
79–98. Physica-Verlag HD, 2001.
[39] Yaron Singer. How to win friends and influence people, truthfully: influence maximization mechanisms
for social networks. In WSDM, pages 733–742, 2012.
[40] Tasuku Soma, Naonori Kakimura, Kazuhiro Inaba, and Ken-ichi Kawarabayashi. Optimal budget allocation: Theoretical guarantee and efficient algorithm. In ICML, pages 351–359, 2014.
[41] Maxim Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint.
Oper. Res. Lett., 32(1):41–43, 2004.
[42] Adrian Vetta. Nash equilibria in competitive societies, with applications to facility location, traffic
routing and auctions. In FOCS, pages 416–425, 2002.
[43] Yu Yang, Xiangbo Mao, Jian Pei, and Xiaofei He. Continuous influence maximization: What discounts
should we offer to social network users? In SIGMOD, pages 727–741, 2016.
A Additional details
The following claim was established by [17, Lem B.5]. We include it here for completeness.
Lemma A.1. Consider the random variable Y = Σ_i w_i X_i / f(OPT*), defined in the proof of Theorem 4.1. Its variance is Var[Y] ≤ β/4.
Proof.

Var[Y] = Var[Σ_i w_i X_i]/f(OPT*)² = Σ_i w_i² Var[X_i]/f(OPT*)² = Σ_i w_i²/(4f(OPT*)²)
       ≤ max_i w_i · Σ_i w_i/(4f(OPT*)²) = max_i w_i · f(OPT*)/(4f(OPT*)²) ≤ β/4 .
arXiv:1801.06790v1 [cs.CV] 21 Jan 2018

Decoupled Learning for Conditional Adversarial Networks
Zhifei Zhang, Yang Song, and Hairong Qi
University of Tennessee
{zzhang61, ysong18, hqi}@utk.edu
Abstract
Incorporating encoding-decoding nets with adversarial
nets has been widely adopted in image generation tasks.
We observe that the state-of-the-art achievements were obtained by carefully balancing the reconstruction loss and
adversarial loss, and such balance shifts with different network structures, datasets, and training strategies. Empirical studies have demonstrated that an inappropriate weight
between the two losses may cause instability, and it is tricky
to search for the optimal setting, especially when lacking
prior knowledge on the data and network. This paper gives
the first attempt to relax the need of manual balancing
by proposing the concept of decoupled learning, where a
novel network structure is designed that explicitly disentangles the backpropagation paths of the two losses. In existing works, the encoding-decoding nets and GANs are integrated by sharing weights on the generator/decoder, thus
the two losses are backpropagated to the generator/decoder
simultaneously, where a weighting factor is needed to balance the interaction between the two losses. The decoupled
learning avoids the interaction and thus removes the requirement of the weighting factor, essentially improving the
generalization capacity of the designed model to different
applications. The decoupled learning framework could be
easily adapted to most existing encoding-decoding-based
generative networks and achieve competitive performance
without the need of weight adjustment. Experimental results
demonstrate the effectiveness, robustness, and generality of
the proposed method. The other contribution of the paper is
the design of a new evaluation metric to measure the image
quality of generative models. We propose the so-called normalized relative discriminative score (NRDS), which introduces the idea of relative comparison, rather than providing
absolute estimates like existing metrics. The demo code is
available1 .
1 https://github.com/ZZUTK/Decoupled-Learning-Conditional-GAN
1. Introduction
Generative adversarial networks (GANs) [3] is an adversarial framework that generates images from noise while
preserving high fidelity. However, generating random images from noise doesn’t meet the requirements in many real
applications, e.g., image inpainting [16], image transformation [4, 21], image manipulation [23, 25], etc. To overcome this problem, recent works like [19, 13] concatenate
additional features generated by an encoder or certain extractor to the random noise or directly replace the noise
by the features. In most recent practices, the encoding-decoding networks (ED), e.g., VAE [5], AAE [10], Autoencoder [8], etc., have been the popular structure to be
incorporated with GANs [3] for image-conditional modeling, where the encoder extracts features, which are then
fed to the decoder/generator to generate the target images.
The encoding-decoding network tends to yield blurry images. Incorporating a discriminator, as empirically demonstrated in many works [6, 4, 7, 23, 9, 26, 24], effectively
increases the quality (i.e., reality and resolution) of generated images from the encoding-decoding networks. In recent two years, the adversarial loss has become a common
regularizer for boosting image quality, especially in image
generation tasks.
In existing works that incorporate the encoding-decoding
networks (ED) with GANs, the decoder of ED and generator of GAN share the same network and parameters, thus the
reconstruction loss (from ED) and the adversarial loss (from
discriminator) are both imposed on the decoder/generator.
Although ED is known to be stable in training, and many
alternatives of GANs, e.g., DCGAN [17], WGAN [1], LSGAN [11], etc., have stabilized the training of GANs, coupling the reconstruction loss and the adversarial loss by
making them interact with each other may yield unstable
results, e.g., introducing artifacts as shown in Fig. 1. We observe the increased details of generated images as compared
to the image generated from ED only (the top row in Fig. 1
where the weight of adversarial loss is 0). However, we
also observe the obvious artifacts introduced by adding the
adversarial loss (e.g., the 1st, 2nd faces with weights 0.01
and 0.1). A higher weight on the adversarial loss preserves
Figure 1. Results generated from the coupled network of ED and
GAN. Please zoom in for more details. The weight of reconstruction loss is 1, and the weight of adversarial loss is on the left.
richer details in generated images but suffers a higher risk
of introducing significant artifacts or even causing instability, while a lower weight on the adversarial loss would not
effectively boost the image fidelity. Generally, the trade-off
between the two losses needs to be carefully tuned, otherwise, the generated images may present significant artifacts,
e.g., stripes, spots, or anything visually unrealistic.
Existing works generally arrive at an appropriate weight
between the two losses by conducting extensive empirical
study; and yet this weight may vary with different network
structures or different datasets used.
In this paper, we give the first attempt to relax the manual balancing between the two losses by proposing a novel
decoupled learning structure. Moving away from the traditional routine of incorporating the ED and GAN, decoupled
learning explicitly disentangles the two networks, avoiding interaction between them. To make the presentation
easy to follow, we denote the coupled structure used in existing works as ED+GAN2 , and the proposed method as
ED//GAN3 . The contributions of this paper could be summarized from the following three aspects:
• We propose the decoupled learning (ED//GAN) to
tackle the ubiquitous but often neglected problem in
the widely adopted ED+GAN structure that removes
the need for manual balancing between the reconstruction loss and adversarial loss. To the best of our knowledge, this is the first attempt to deal with this issue.
• Based on the proposed decoupled learning
(ED//GAN), we further observe its merit in visualizing the boosting effect of adversarial learning.
Although many empirical studies demonstrated the
effect of GAN in the visual perspective, few of them
could demonstrate how GAN sharpens the blurry
output from ED, e.g., what kinds of edges and textures
could be captured by GAN but missed by ED.
• Moving away from providing absolute performance
metrics like existing works, we design the normalized
relative discriminative score (NRDS) that provides relative estimates of the models in comparison. After all,
the purpose of model evaluation is mostly to rank their
performance; therefore, many times, absolute measurements are unnecessary. In essence, NRDS aims
to illustrate whether one model is better or worse than
another, which is more practical to arrive at a reliable
estimate.
2. Decoupled Learning
In the widely used network structure ED+GAN, ED appears to generate smooth and blurry results due to minimization of pixel-wise average of possible solutions in the
pixel space, while GAN drives results towards the natural image manifold producing perceptually more convincing solutions. Incorporating the two parts as in existing
works causes competition between the two networks, and
when the balance point is not appropriately chosen, bad solutions might result, causing artifacts in the generated images. Many empirical studies have demonstrated that adding a GAN on top of ED does not necessarily boost the image quality. We aim to avoid such competition by training ED
and GAN in a relatively independent manner – we preserve
the structures of ED and GAN without sharing parameters,
as compared to existing works where the parameters of decoder in ED and generator in GAN are shared. The independent network design explicitly decouples the interaction
between ED and GAN, but still follows the classic objective
functions — the reconstruction loss and minimax game for
ED and GAN, respectively. Thus, any existing work based
on ED+GAN can be easily adapted to the proposed structure
without significantly changing their objectives, meanwhile
gaining the benefit of not having to find a balance between
ED and GAN.
2.1. Difference between ED+GAN and ED//GAN
Compared to ED+GAN, the uniqueness of the proposed ED//GAN lies in the two decoupled backpropagation paths where the reconstruction and adversarial losses
are backpropagated to separate networks, instead of imposing both losses to generator/decoder as done in ED+GAN.
Fig. 2 illustrates the major difference between ED+GAN
and ED//GAN.
In ED+GAN, both reconstruction loss and adversarial
loss are backpropagated to Dec, and the general objective
could be written as
min_{Enc,Dec,D} Lconst + λLadv ,    (1)
2 The coupled structures used in existing works are denoted as ED+GAN because they add the effects of ED and
GAN together during training.
3 The proposed decoupled learning is denoted as ED//GAN, indicating that the effects from ED and GAN are
learned/propagated separately through the two networks.
where Lconst and Ladv denote the reconstruction and adversarial losses, respectively. The parameter λ is the weight
Figure 2. Comparison between ED+GAN and ED//GAN. Left: the
existing ED+GAN. Right: the proposed ED//GAN, i.e., decoupled
learning. Enc and Dec are the encoder and decoder networks, and
G and D are the generator and discriminator, respectively. Solid
black arrows denote the feedforward path, and dashed arrows in
red and blue indicate backpropagation of the reconstruction loss
and the adversarial loss, respectively.
to balance the two losses.
In ED//GAN, we are no longer in need of the weight λ because the two losses are backpropagated along different paths without interaction. Then, the general objective for ED//GAN becomes

min_{Enc,Dec,G,D} Lconst + Ladv    (2)
= min_{Enc,Dec} Lconst + min_{G,D} Ladv .    (3)
2.2. The General Framework
The general framework of the proposed decoupled learning (ED//GAN) is detailed in Fig. 3, incorporating the
encoding-decoding network (ED) with GAN (i.e., D and
G) in a decoupled manner, i.e., G and Dec are trained
separately corresponding to the adversarial loss and reconstruction loss, respectively. Assuming the input image I ∈ R^{H×W×C}, where H, W, and C denote the height, width, and the number of channels, respectively, ED (i.e., Enc and Dec) is trained independently from GAN (i.e., G and D), and the reconstructed image from ED is IED, which is a blurred version of the input image I. The generator G, together with the discriminator D, learns IG, which is added to IED to yield the final output image Î. Since I ≈ Î = IED + IG, the generated image from G is actually the residual map between IED and Î. Assuming Î is close
to the real image I, then IG would illustrate how adversarial learning increases the resolution and photo-realism of
a blurry image. Generally, IG contains details, e.g., edges
and texture, and specifically, wrinkles and edges of the eyes
and mouth in face images. Therefore, a byproduct of the
decoupled learning is that it provides a direct mechanism to
conveniently illustrate how the adversarial learning boosts
the performance of ED.
In the proposed ED//GAN framework, the gradients derived from the reconstruction loss and the adversarial loss are directed along separate paths without any interaction, avoiding the competition between reconstruction and adversarial
effects which may cause instability as discussed in the introduction. G serves as the subsequent processing block of
ED, recovering details missed by the output from ED. The G
and Dec share the latent variable because of the correspondence between the blurry image IED and the corresponding
recoverable details IG .
2.3. Training of the ED//GAN
The proposed decoupled learning can be divided into two
parts: 1) reconstruction learning of Enc and Dec and 2)
adversarial learning of G and D. Enc and Dec (i.e., ED)
are trained independently of G and D (i.e., GAN), updated
through the ℓ1-norm at the pixel level as shown by the red
dashed arrow in Fig. 3. G and D are trained by the original
objective of GAN, and G is only updated by the adversarial
loss as indicated by the blue dashed arrow. The final output
image is obtained by pixel-wise summation of the outputs
from G and Dec.
Figure 3. The flow of proposed decoupled learning, i.e., ED//GAN.
L1 indicates the pixel-level ℓ1-norm. Solid black arrows denote
the feedforward path, and dashed arrows in red and blue indicate
the backpropagation from reconstruction loss (L1 ) and adversarial loss (from D), respectively. The reconstructed image IED is
generated from the decoder (Dec), and the residual IG is generated from the generator (G). G and Dec share the latent variable z
derived from the encoder (Enc). The final output image Î is obtained through pixel-wise addition of the two generated images IG and IED, as indicated by the ⊕ marker.
2.3.1 Reconstruction Learning
The encoding-decoding network (ED) aims to minimize
the pixel-level error between the input image I and reconstructed image IED . The training of ED is well known to be
stable, and ED could be any structures specifically designed
for any applications, e.g., U-Net [4] or conditional network [23] with/without batch normalization. Most works
that adopted batch normalization to enhance stability of the
ED+GAN structure may bring a few unfortunate side effects [2] and hurt the diversity [23] of generated images.
With the proposed ED//GAN, however, batch normalization
becomes unnecessary because the training of ED is isolated
from that of GAN, and ED itself could be stable without
batch normalization. The reconstruction loss of the ED part
can be expressed by
Lconst(Enc, Dec) = ‖I − Dec(Enc(I))‖1    (4)
                 = ‖I − Dec(z)‖1    (5)
                 = ‖I − IED‖1 ,    (6)
where Enc and Dec indicate the functions of encoder and
decoder, respectively. The latent variable derived from Enc
is denoted by z. ‖ · ‖1 indicates the ℓ1-norm at the pixel level. More generally, the latent variable z could be constrained to a certain prior distribution (e.g., a Gaussian distribution or
uniform distribution) to achieve generative ability like in
VAE [5] and AAE [10].
2.3.2 Adversarial Learning
In the proposed ED//GAN, GAN works differently from the
vanilla GAN in two aspects: 1) The inputs of G are features
of the input image (sharing the latent variable z with Dec)
rather than the random noise. 2) The fake samples fed to
D are not directly generated by G. Instead, they are conditioned on the output from Dec. Therefore, the losses of
GAN can be expressed as
Ladv(D) = E[log(1 − D(I))] + E[log D(IED + IG)] ,    (7)
Ladv(G) = E[log(1 − D(IED + G(Enc(I))))] .    (8)
Finally, we obtain the objective of the proposed decoupled
learning (ED//GAN),
min_{Enc,Dec} Lconst(Enc, Dec) + min_G Ladv(G) + min_D Ladv(D) .    (9)
Note that there are no weighting parameters between the
losses in the objective function, which relaxes the manual tuning that may require an expert with strong domain
knowledge and rich experience. During training, each component could be updated alternately and separately because the three components do not overlap in backpropagation, i.e., the backpropagation paths are not intertwined.
In practice, however, ED could be trained first because it is
completely independent from GAN and GAN operates on
the output of ED.
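As an illustration of this training schedule, the following PyTorch-style sketch performs one decoupled iteration. It assumes Enc, Dec, G, and D modules with their own optimizers and a discriminator that outputs a probability (sigmoid output), and it substitutes the standard binary cross-entropy form for Eqs. (7)-(8); it is a sketch of the idea, not the authors' released implementation.

import torch
import torch.nn.functional as F

def train_step(x, enc, dec, gen, disc, opt_ed, opt_g, opt_d):
    """One decoupled update: the L1 loss reaches only Enc/Dec, while the
    adversarial losses reach only G and D, so no weighting factor is needed."""
    # Reconstruction learning (Enc and Dec only).
    x_ed = dec(enc(x))
    loss_rec = F.l1_loss(x_ed, x)
    opt_ed.zero_grad(); loss_rec.backward(); opt_ed.step()

    # Discriminator update: real images vs. IED + IG (ED and G held fixed).
    with torch.no_grad():
        z = enc(x)
        fake = dec(z) + gen(z)
    d_real, d_fake = disc(x), disc(fake)
    loss_d = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: only G receives the adversarial gradient.
    with torch.no_grad():
        z = enc(x)
        x_ed = dec(z)
    d_out = disc(x_ed + gen(z))
    loss_g = F.binary_cross_entropy(d_out, torch.ones_like(d_out))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_rec.item(), loss_d.item(), loss_g.item()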
2.4. Boosting Effect from Adversarial Learning
A side product of the proposed ED//GAN is that it helps
to investigate how the discriminator independently boosts
the quality of generated images. In ED+GAN, however,
the effect of discriminator is difficult to directly identify
because it is coupled with the effect of ED. The learned
residual in ED//GAN is considered the boosting factor from
the adversarial learning (discriminator). Generally, the images from ED tend to be blurry, while the residual from
GAN carries the details or important texture information
for photo-realistic image generation. Imposing the residual
onto the reconstructed images is supposed to yield higher-fidelity images as compared to the reconstructed images.
In Fig. 4 (middle picture in each triple), we can observe that the adversarial learning mainly enhances the edges at the eyebrows, eyes, mouth, and teeth for face images. For the bird and flower images, the residues further enhance the color. In some cases, the added details also create artifacts. In general, adding the residual to the blurry images from ED (Fig. 4 left), the output images present finer details.
Figure 4. Visualization of the boost from adversarial learning
trained on UTKFace [23], CUB-200 [22], and Oxford Flower [14]
datasets. From left to right in each triple: reconstruction, residual,
and output images from ED//GAN.
An argument on the visualization of adversarial effect
may be that subtracting the result of ED from that of
ED+GAN could also obtain the residual. Although this process can roughly visualize the boost from GAN, we emphasize that “ED+GAN” minus “ED” is not purely the effect
from GAN because the training of GAN is affected by ED in
ED+GAN and vice versa. In the proposed ED//GAN, however, ED is trained independently from GAN, thus GAN
only learns the residual between real images and those from
ED.
3. Normalized Relative Discriminative Score
In the evaluation of image quality (e.g., reality and resolution), how to design a reliable metric for generative models has been an open issue. Existing metrics (e.g., inception
score [20] and related methods [15]), although successful in
certain cases, have been demonstrated to be problematic in
others [12]. If a perfect metric exists, the training of generative models would be much easier because we could use
such a metric as the loss directly without training a discriminator. The rationale behind our design is that if it is difficult
to obtain the absolute score (perfect metric) of a model, we
could at least compare which model generates better images
than others. From this perspective, we propose to perform
relative comparison rather than providing evaluation based
on absolute score like existing works. More specifically, we
train a single discriminator/classifier to separate real samples from generated samples, and those generated samples
closer to real ones will be more difficult to be separated. For
example, given two generative models G1 and G2 , which
define the distributions of generated samples pg1 and pg2 ,
respectively. Suppose the distribution of real data is pdata ,
if JSD(pg1 |pdata ) < JSD(pg2 |pdata ) where JSD denotes the Jensen-Shannon divergence and assume pg1 and
pg2 intersect with pdata , a discriminator/classifier D trained
to classify real samples as 1 and 0 otherwise would show
the following inequality,
Ex∼pdata [D(x)] ≥ Ex∼pg1 [D(x)] ≥ Ex∼pg2 [D(x)]. (10)
The main idea is that if the generated samples are closer
to real ones, more epochs would be needed to distinguish
them from real samples. The discriminator is a binary classifier to separate the real samples from fake ones generated
by all the models in comparison. In each epoch, the discriminator output of each sample is recorded. The average discriminator output of real samples will increase with epoch
(approaching 1), while that of generated samples from each
model will decrease with epoch (approaching 0). However,
the decrement rate of each model varies based on how close
the generated samples to the real ones. Generally, the samples closer to real ones show slower decrement rate. Therefore, we compare the “decrement rate” of each model to
relatively evaluate their generated images. The decrement
rate is inversely related to the area under the curve of average
discriminator output versus epoch. Larger area indicates
slower decrement rate, implying that the generated samples
are closer to real ones. Fig. 5 illustrates the computation of
normalized relative discriminative score (NRDS).
Figure 5. Illustration of NRDS. Gn indicates the nth generative
model, and its corresponding fake samples are Fake n, which are
sampled randomly. The fake samples from the n models, as well as the real samples, are used to train the binary classifier D (bottom left). Testing only uses fake samples and is performed alternately with the training process. The bottom right shows an example of
averaged output of D from fake samples of each model.
There are three steps to compute the proposed normalized relative discriminative score (NRDS): 1) Obtain the
curve Ci (i = 1, 2, · · · , n) of discriminator output versus
epoch (or mini-batch) for each model (assuming n models
in comparison) during training; 2) Compute the area under
each curve A(Ci ); and 3) Compute NRDS of the ith model
by
NRDS_i = A(C_i) / Σ_{j=1}^n A(C_j) .    (11)
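A minimal sketch of this computation is given below; the per-model curves are assumed to come from the shared-discriminator training and testing procedure of Fig. 5, and the synthetic curves at the end are placeholder numbers used only to show the call.

import numpy as np

def nrds(curves):
    """Normalized relative discriminative scores (Eq. 11).  `curves` has shape
    (n_models, n_epochs); entry [i, e] is the mean discriminator output on the
    fake samples of model i after epoch e."""
    areas = np.array([np.trapz(c) for c in curves])   # area under each curve A(C_i)
    return areas / areas.sum()

# Synthetic curves: the model whose samples fool D longer receives a larger share.
curve_close = np.linspace(0.5, 0.2, 300)    # decays slowly (closer to the real data)
curve_far = np.linspace(0.5, 0.02, 300)     # decays quickly (easy to separate)
print(nrds([curve_close, curve_far]))       # approximately [0.57, 0.43]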
To illustrate the computation of NRDS, Fig. 6 shows a
toy example. Assume the samples named “fake-close” and
“fake-far” are generated from two different models to simulate the real samples. We train a discriminator on the real
and fake (i.e., fake-close and fake-far) samples. The structure of discriminator is a neural network with two hidden
layers, both of which have 32 nodes, and ReLU is adopted
as the activation function. After each epoch of training on
the real and fake samples, the discriminator is tested on the
same samples from real, fake-close, and fake-far, respectively. For example, all the real samples are fed to the discriminator, and then we compute the mean of the outputs
from the discriminator. By the same token, we can obtain the average outputs of fake-close and fake-far, respectively. With 300 epochs, we plot the curves shown in Fig. 6
(right). Intuitively, the curve of fake-close approaches zero
more slowly than that of fake-far because the samples in fake-close are closer (i.e., more similar) to the real samples. The area
Figure 6. A toy example of computing NRDS. Left: the real
and fake samples randomly sampled from 2-D normal distributions with different means but with the same (identity) covariance.
The real samples (blue circle) is with zero mean. The red “x”
and yellow “+” denote fake samples with the mean of [0.5, 0] and
[1.5, 0], respectively. The notation fake-close/far indicates that the
mean of correspondingly fake samples is close to or far from that
of the real samples. Right: the curves of epoch vs. averaged output
of discriminator on corresponding sets (colors) of samples.
under the curves of fake-close (C1 ) and fake-far (C2 ) are
A(C1 ) = 145.4955 and A(C2 ) = 71.1057, respectively.
From Eq. 11,
NRDS_1 = A(C_1) / Σ_{i=1}^2 A(C_i) = 0.6717 ,    (12)
NRDS_2 = A(C_2) / Σ_{i=1}^2 A(C_i) = 0.3283 .    (13)
Therefore, we can claim that the model generating fake-close is relatively better. Note that the actual value of
NRDS for certain single model is meaningless. We can
only conclude that the model with higher NRDS is better
than those with lower NRDS in the same comparison, but
a high NRDS does not necessarily indicate an absolutely
good model.
4. Experimental Evaluation
We evaluate the proposed decoupled learning mainly
from 1) its ability in relaxing the weight setting and 2) its
generality in adapting to existing works. First, we compare the proposed ED//GAN to the traditional ED+GAN
based on the UTKFace dataset [23] using a general (not
fine-tuned) network structure. Then, two existing works,
i.e., Pix2Pix [4] and CAAE [23], are adopted for adaptation,
where the corresponding datasets are used, i.e., UTKFace and
CMP Facade databases [18], respectively.
The UTKFace dataset consists of about 20,000 aligned
and cropped faces with large diversity in age and race. The
decoupled learning applied on the UTKFace dataset aims to
demonstrate the performance on image manipulation tasks.
The CMP Facade dataset is utilized to illustrate the performance of the decoupled learning on image transformation
tasks, without parameter tuning on any dataset.
4.1. Comparison between ED//GAN and ED+GAN
For fair comparison, we compare ED//GAN and
ED+GAN on the same network and dataset. This network is
neither specifically designed for any applications nor delicately fine-tuned to achieve the best result. The goal is
to illustrate the advantages of ED//GAN as compared to
ED+GAN. Table 1 details the network structure used in this
experiment.
Table 1. The network structure shown in Fig. 3. The size of each
layer is denoted by h × w × c, corresponding to height, width, and
number of channels, respectively.
Enc / D                     Size
Input                       128 × 128 × 3
Conv, BN, ReLU              64 × 64 × 64
Conv, BN, ReLU              32 × 32 × 128
Conv, BN, ReLU              16 × 16 × 256
Conv, BN, ReLU              8 × 8 × 512
Conv, BN, ReLU              4 × 4 × 1024
Reshape, FC, tanh           50 / 1

Dec / G                     Size
Input                       50
FC, ReLU, BN, Reshape       4 × 4 × 1024
Deconv, BN, ReLU            8 × 8 × 512
Deconv, BN, ReLU            16 × 16 × 256
Deconv, BN, ReLU            32 × 32 × 128
Deconv, BN, ReLU            64 × 64 × 64
Deconv, tanh                128 × 128 × 3
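For reference, a PyTorch sketch of the Enc and Dec stacks in Table 1 follows. The table does not specify kernel sizes or strides, so the 4×4 kernels with stride 2 and padding 1 used here are assumptions that merely reproduce the listed spatial sizes; D reuses the Enc stack with a 1-dimensional output, and G reuses the Dec stack.

import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1), nn.BatchNorm2d(cout), nn.ReLU(True))

def deconv_block(cin, cout):
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1), nn.BatchNorm2d(cout), nn.ReLU(True))

class Enc(nn.Module):                         # D is the same stack with out_dim=1
    def __init__(self, out_dim=50):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 64), conv_block(64, 128), conv_block(128, 256),
            conv_block(256, 512), conv_block(512, 1024))
        self.fc = nn.Linear(1024 * 4 * 4, out_dim)
    def forward(self, x):                     # 128x128x3 -> ... -> 4x4x1024 -> out_dim
        return torch.tanh(self.fc(self.features(x).flatten(1)))

class Dec(nn.Module):                         # G uses the same architecture
    def __init__(self, zdim=50):
        super().__init__()
        self.fc = nn.Linear(zdim, 1024 * 4 * 4)
        self.bn = nn.BatchNorm1d(1024 * 4 * 4)
        self.layers = nn.Sequential(
            deconv_block(1024, 512), deconv_block(512, 256), deconv_block(256, 128),
            deconv_block(128, 64), nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())
    def forward(self, z):                     # zdim -> 4x4x1024 -> ... -> 128x128x3
        h = self.bn(torch.relu(self.fc(z))).view(-1, 1024, 4, 4)
        return self.layers(h)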
To demonstrate the effectiveness of ED//GAN on relaxing the weight setting, we compare it to ED+GAN by
changing the weight between the reconstruction loss and
adversarial loss. In the objective function of ED//GAN
(Eq. 9), no weight is required. For comparison purpose,
we intentionally add a weighting parameter λ to the adversarial loss, as in the ED+GAN objective (Eq. 1). Iterating λ from 0.001 to 1 with a step of 10×, we obtain the results
as shown in Fig. 7 after 200 epochs with the batch size of
25. The output images from ED//GAN are relatively higher-
Figure 7. Comparison of ED//GAN (top) and ED+GAN (bottom)
on the UTKFace dataset. From left to right, the weights on the
adversarial loss are 0.001, 0.01, 0.1, and 1, respectively. Please
zoom in for better view.
fidelity and maintain almost the same quality regardless of
the weight change. However, the outputs of ED+GAN significantly vary with the weight. In addition, ED+GAN generates unstable results, e.g., model collapsing and significant artifacts. The corresponding NRDS is calculated in Table 2, where the number of models in comparison is i = 8
(Eq. 11). The discriminator adopted in NRDS is the same
as D in Table 1.
Table 2. NRDS for different weight settings (Fig. 7).
            0.001    0.01     0.1      1        std
ED+GAN      .1066    .1143    .1268    .1267    .0099
ED//GAN     .1320    .1300    .1300    .1336    .0017
Now, we remove the batch normalization in Enc and Dec
to see whether ED//GAN still yields stable results. Fig. 8
compares the results from ED//GAN and ED+GAN by removing the batch normalization in Enc and Dec. The corresponding NRDS is listed in Table 3.
Table 3. NRDS for different weight settings (Fig. 8).
            0.001    0.01     0.1      1        std
ED+GAN      .1172    .1143    .1163    .0731    .0215
ED//GAN     .1432    .1434    .1458    .1466    .0017
From the two experiments, ED//GAN vs. ED+GAN
with/without batch normalization on ED (i.e., Enc and Dec),
we observe that ED//GAN generally yields higher NRDS,
indicating better image quality. In addition, the NRDS values for ED//GAN vary much less than those of ED+GAN,
Figure 9. Left: the network structure of Pix2Pix (ED+GAN).
Right: the adaptation to the proposed ED//GAN. Solid black arrows denote the feedforward path, and dashed arrows in red and
blue indicate backpropagation from the reconstruction loss and the
adversarial loss, respectively.
Figure 8. Comparison between ED//GAN (top) and ED+GAN
(bottom) without batch normalization on the UTKFace dataset.
From left to right, the weights on the adversarial loss are 0.001,
0.01, 0.1, and 1, respectively.
as observed from the lower standard deviation (std), indicating robustness against different weights. These observations completely agree with our claim — ED//GAN stabilizes the training regardless of the trade-off issue in the
traditional ED+GAN structure.
We notice that for ED//GAN, the NRDS value slightly
changes with the change of weight. However, the change
is too small to be observable from visual inspection. We
also observe that NRDS achieves the peak value at certain
weight settings. For example, NRDS achieves the highest
value at λ = 1 in both Tables 2 and 3, which happens to be
the case of the proposed ED//GAN without weight setting.
4.2. Adaptation from Existing Works to ED//GAN
An essential merit of ED//GAN is its adaptability for existing ED+GAN works. Specifically, an existing work that
adopted the ED+GAN structure could be easily modified to
the ED//GAN structure without significantly reducing the
performance but with the benefit of not having to fine-tune
the weight. To demonstrate the adaptability of ED//GAN,
we modify two existing works: 1) Pix2Pix [4] for image
transformation and 2) CAAE [23] for image manipulation.
According to Fig. 2, the modification is simply to parallelize
a G (the copy of Dec) to the original network. The objective functions are modified from Eq. 1 to Eq. 3, which is
straightforward to implement.
4.2.1 Adaptation on Pix2Pix
We adapt the network in Pix2Pix [4], which is ED+GAN
structure, to the proposed ED//GAN structure as shown in
Fig. 9.
In Pix2Pix, ED is implemented by the U-Net, which directly passes feature maps from encoder to decoder, preserving more details. In order not to break the structure of
U-Net, we apply another U-Net as the generator G in the
corresponding ED//GAN version. Fig. 10 compares the results from Pix2Pix and its ED//GAN version. The reported
weight in Pix2Pix is 100:1, where the weight on reconstruction loss is 100, and 1 on the adversarial loss. We change
the weight setting to 1:1 and 1000:1 to illustrate its effect
on the generated images.
Figure 10. Comparison between Pix2Pix and its ED//GAN version. Pix2Pix generates images at different weight settings as denoted by λ:1, where λ and 1 indicate the weights of the reconstruction loss and adversarial loss, respectively. ED//GAN denotes the
generated images from the modified decoupled structure.
We observe that the images generated with the 1:1 weight exhibit significant artifacts (zoom in for a better view). With a higher weight on the reconstruction loss, e.g., 100:1 or 1000:1, more realistic images are generated, with quality similar to that of the ED//GAN version, which requires no weight setting.
4.2.2 Adaptation on CAAE
We next adapt CAAE [23], a conditional ED+GAN structure, to the proposed ED//GAN structure as shown in Fig. 12. CAAE generates aged faces by manipulating the age label concatenated to the latent variable z produced by Enc. The original CAAE network has an extra discriminator on z that forces z to be uniformly distributed; we omit this discriminator in Fig. 12 because it does not affect the adaptation. Fig. 11 shows random examples comparing the original and modified structures. The weights of the reconstruction loss and the adversarial loss are 1 and 10^-4 (i.e., 1:10^-4), as reported in the original work. We use two additional weight settings, 1:10^-2 and 1:1, for the original structure and compare the results with the corresponding ED//GAN version.
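The conditional case only changes where the label enters the networks; a possible forward pass, with placeholder modules and a one-hot age label y, is sketched below. As in the unconditional case, the residual generator G receives the same conditioned latent code as Dec, and the discriminator (not shown) would also be conditioned on y as in Fig. 12.

```python
import torch

def caae_decoupled_forward(enc, dec, gen, x, y):
    """Conditional ED//GAN-style forward pass (illustrative sketch).

    y is the age label (e.g., one-hot), concatenated to the latent code z,
    so both Dec and the parallel residual generator G are conditioned on it.
    """
    z = enc(x)                                # latent code from the encoder
    zy = torch.cat([z, y], dim=1)             # concatenate the label to z
    recon = dec(zy)                           # reconstruction branch (reconstruction loss)
    fake = recon.detach() + gen(zy.detach())  # residual branch (adversarial loss only)
    return recon, fake
```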
Figure 11. Comparison between CAAE [23] and its ED//GAN version. CAAE generates images at different weights, denoted 1:λ, where 1 and λ indicate the weights of the reconstruction loss and the adversarial loss, respectively; the panels are labeled 1:10^-4, 1:10^-2, 1:1, and ED//GAN. ED//GAN denotes the generated images from the modified decoupled structure.
Figure 12. Left: the ED+GAN structure used in CAAE [23] (the discriminator on z is omitted for simplicity because the modification does not affect that part). Right: the adaptation to the proposed ED//GAN. Solid black arrows denote the feedforward path, and dashed arrows in red and blue indicate backpropagation from the reconstruction loss and the adversarial loss, respectively. The label y is concatenated to z and to D to control the age.
The NRDS is provided for both adaptation experiments to statistically analyze the adaptability of ED//GAN, as shown in Tables 4 and 5, respectively.

Table 4. NRDS for Pix2Pix adaptation.
Method   ED+GAN (1:1)   ED+GAN (100:1)   ED+GAN (1000:1)   ED//GAN
NRDS     .2190          .2641            .2572             .2597

Table 5. NRDS for CAAE adaptation.
Method   ED+GAN (1:10^-4)   ED+GAN (1:10^-2)   ED+GAN (1:1)   ED//GAN
NRDS     .2527              .2496              .2430          .2547
We observe that the ED//GAN structure still, in general, yields NRDS values higher than or similar to those of the coupled counterpart. Although the proposed ED//GAN ranks second in Table 4, it achieves a competitive result without the need to tune the weight parameter. In Table 5, ED//GAN ranks first among all ED+GAN settings. Note that the alternative parameter settings shown here are chosen around the optimal settings already known from the original papers; when designing a new structure without such prior knowledge, however, it can be difficult to find the optimal weight with only a few trials.
It is worth emphasizing that the goal is not to beat the
best result from fine-tuned ED+GAN. Rather, ED//GAN
aims at achieving stable and competitive results without
having to fine-tune the weight.
5. Conclusion
This paper proposed a novel decoupled learning structure (ED//GAN) for image generation tasks with image-conditional models. Different from existing works, where the reconstruction loss (from ED) and the adversarial loss (from GAN) are backpropagated to a single decoder, referred to as the coupled structure (ED+GAN), in ED//GAN the two losses are backpropagated through separate networks, avoiding interference between them. The essential benefit of the decoupled structure is that the weighting factor, which has to be fine-tuned in ED+GAN, is no longer needed, improving training stability without searching for the best weight setting. This should make it easier to apply image-conditional generation to more specific tasks. The experimental results demonstrated the effectiveness of the decoupled learning. We also showed that existing ED+GAN works can be conveniently modified to ED//GAN by adding a generator that learns the residual.
References
[1] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
[2] I. Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.
[3] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pages 2672–2680, 2014.
[4] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[5] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[6] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
[7] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
[8] C.-Y. Liou, W.-C. Cheng, J.-W. Liou, and D.-R. Liou. Autoencoder for words. Neurocomputing, 139:84–96, 2014.
[9] M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. arXiv preprint arXiv:1703.00848, 2017.
[10] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
[11] X. Mao, Q. Li, H. Xie, R. Y. Lau, and Z. Wang. Least squares generative adversarial networks. arXiv preprint arXiv:1611.04076, 2016.
[12] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163, 2016.
[13] A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In Advances in Neural Information Processing Systems, pages 3387–3395, 2016.
[14] M.-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, Dec. 2008.
[15] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. arXiv preprint arXiv:1610.09585, 2016.
[16] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2536–2544, 2016.
[17] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[18] R. Tyleček and R. Šára. Spatial pattern templates for recognition of objects with regular structure. In Proc. GCPR, Saarbrücken, Germany, 2013.
[19] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In Proceedings of the 33rd International Conference on Machine Learning (ICML), volume 3, 2016.
[20] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pages 2226–2234, 2016.
[21] Y. Song, Z. Zhang, and H. Qi. Recursive cross-domain face/sketch generation from limited facial parts. arXiv preprint arXiv:1706.00556, 2017.
[22] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 dataset. 2011.
[23] Z. Zhang, Y. Song, and H. Qi. Age progression/regression by conditional adversarial autoencoder. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[24] Z. Zhang, Y. Song, and H. Qi. GANs powered by autoencoding: a theoretic reasoning. ICML Workshop on Implicit Models, 2017.
[25] J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros. Generative visual manipulation on the natural image manifold. In European Conference on Computer Vision (ECCV), pages 597–613. Springer, 2016.
[26] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.